A Comparative Analysis of Human Translation and Machine Translation in Diplomatic Languages under the Theory of Functional Equivalence: A Case Study of the US-China High-Level Strategic Dialogue in 2021 : This paper presents a comparative analysis of human and machine translations of Yang Jiechi's 16-minute speech during the 2021 China-U.S. High-Level Strategic Dialogue. Guided by functional equivalence theory, this study examines the lexical and discourse choices of the human translator, Zhang Jing, against three different machine translations. The analysis focuses on the differences in translation regarding political sensitivity, cultural and emotional undertones, and the irreplaceable role of human translators in political language translation. The paper aims to demonstrate how effective human translation contributes to improving China-U.S. relations and enhancing China's international influence. Introduction Diplomatic language is a special form of discourse in foreign affairs and an important part of a country's political language. It is also a language with a high cultural load and an important component of national language life and diplomatic work. Diplomatic language embodies the collective will of the national leadership: national leaders and diplomats express their positions abroad in carefully chosen terms, and their words and phrases represent the attitude of the state. The content of diplomatic language concerns core national interests in politics, security, the economy and the military, which is why, among all translation practices, diplomatic translation is the most political and the most policy-sensitive. Translators must therefore have a firm grasp and deep understanding of the cultural backgrounds and political positions of both countries, and must follow the four principles of political equivalence, uniformity of translation, professional expression and convention. The rapid development of translation technology in recent years has made the call for "machine translation (MT) to replace human translation (HT)" ever louder; some experts predict that machine translation will reach the level of human translation by 2029. In this project, the researchers take as a case study Zhang Jing's live interpretation of the 16-minute impromptu speech by State Councillor Yang Jiechi at the 2021 Sino-US High-Level Strategic Dialogue. Guided by the theory of functional equivalence, starting from the vocabulary and parts of speech of Zhang Jing's rendering, and taking into account the Chinese side's attitude and emotional tendency, they conduct an in-depth comparative analysis of the main differences between human translation and machine translation in the practice of translating politically sensitive language, and of the irreplaceable role of human translation in political language translation. At the same time, the project is committed to the mutual integration and mutual reinforcement of culture and politics, laying a solid foundation for easing current Sino-US relations and taking them to the next level, thereby enhancing China's influence and status in the international community and promoting global political, economic and cultural prosperity.
Current Status of Domestic and International Research There are a total of 149 studies on diplomatic language translation in China's full-text journal database (CNKI). Only a dozen or so studies compare human and machine translation, and none take diplomatic language as the material for such a comparative analysis. Among the 149 studies on diplomatic translation, scholars mainly focus on the characteristics and skills of diplomatic translation, and a smaller number combine political equivalence, functional equivalence and other related translation theories for evaluation and analysis. Among the studies on the characteristics and skills of diplomatic translation: Li Qin (2006), in her article on the characteristics of Chinese diplomatic language and its translation, summarizes the characteristics of diplomatic language in terms of clarity, ambiguity, sense of proportion and accuracy. Xu Yanan (2000), in "Characteristics of Diplomatic Translation and Requirements for Diplomatic Translators," emphasizes the political quality required of diplomatic translators and explicitly points out that diplomatic translation is characterized by strong politics and high textual sensitivity. Gao Bin (2014) points out that diplomatic language embodies a contradiction between accuracy and ambiguity and that, as a special form of political language, it poses certain difficulties for realizing "political equivalence"; for diplomatic translation practice he puts forward the principle of precision and unity, the combination of literal and free translation, and the equivalent conversion of cultural images. Among the studies that analyze diplomatic language with relevant evaluation theories: Yang Dongqing (2012) applies Nida's equivalence theory to diplomatic interpreting under the guidance of functional equivalence and points out that the core of the theory is that the response of the target-language receiver to the translated text should be basically the same as that of the source-language receiver to the original text; to realize functional equivalence, word choice must be combined with the linguistic environment, cultural differences must be resolved when interpreting meanings, the habits of the target language must be respected when simplifying meanings, and logical relations must be rationalized in paraphrasing. Hu Xiang et al. (2013), in their article on pragmatic function analysis in diplomatic translation, introduce Nord's theory of textual function and, taking Premier Wen Jiabao's answers to Chinese and foreign journalists' questions as the object of analysis, briefly describe the informative, evaluative, narrative, appellative, lyrical and persuasive functions in diplomatic language translation. Ren Dongsheng et al. (2021), in their article on metaphor translation strategies in diplomatic discourse based on "political equivalence," also draw on Nida's equivalence theory, but what is different is that Ren Dongsheng et al.
mention in this article an important theoretical branch of equivalence theory for political language: political equivalence. Political equivalence is characterized by politics, dynamics and balance. Politics means that, in translating political language, the translator must take into account the differences in cultural thinking and ideology between the speaker and the audience so as to convey the political stance and political connotation. Dynamics means that the translator must keep abreast of the latest developments in both the source language and the target language. Balance means that the translation should seek a balance between faithfulness to the source language and acceptability in the target language, seeking two-way recognition. Yang Yifan (2021), in an article on the principle of political equivalence in diplomatic language translation strategies, likewise carries out a case study based on the politics, dynamics and balance of the political equivalence principle. Among recent comparative studies of human and machine translation: Huang Junyan et al. (2021), in a comparative study of machine translation and human translation taking the evaluative meaning in Fortress Besieged as an example, conclude from the perspective of evaluative word meaning that human translation comes closer to the emotional colouring of the source language, that machine translation relies mainly on literal rendering and produces lower-quality in-depth translation, that machine output may match the literal meaning yet fall outside the cultural context, and that machine translation makes outright errors. Wang Xiaolu (2021), in "Machine Translation VS Popular Translation: My Opinion on the English Translation of Popular Words," makes a comparative analysis of the aptness of word selection, the ability to convert logical thinking, and the ability to adjust linguistic inertia, taking current buzzwords, internet buzzwords and slogan buzzwords as examples, and points out the deficiencies of machine translation.
In foreign journals, research and discussion on the fundamental translation theory of Nida's functional equivalence includes the following. Wu Runzhi (2018), in "Reflections on Application of Eugene Nida's 'Functional Equivalence' Theory to Translation," discusses Nida's functional equivalence theory in general terms: it is first clarified that translation is both a linguistic and a cultural transformation, and thus that equivalence is not sameness in the mathematical sense; the closeness between the source language and the target language, and thereby the realization of cultural correspondence, is the ideal definition of functional equivalence. The paper further analyzes the practical significance of functional equivalence theory in translation, pointing out that its realization is subject to the influence of the two cultures involved, which is illustrated by the different cultural imagery of "horse" and "cow" in Chinese and English history. Huang Jing (2021), in "Chinese-English Translation from the Perspective of Chinese-English Compression - A Review of Functional Equivalence Theory," analyzes a new paradigm in translation evaluation, functional equivalence theory, from the perspective of compression in Chinese-English translation practice: Nida's functional equivalence theory focuses on the evaluation of translation content and result, and the transmission of deeper meaning matters far more than the expression of surface structure, the aim being to find the optimal combination of linguistic structure and linguistic meaning; the differences between Chinese and English thinking habits are further analyzed and discussed in detail. Domestic and International Developments At present, there are few studies on the English translation of political texts that focus on specific major diplomatic occasions and evaluate them comprehensively. With the growth of China's national strength, ideological exchanges between China and the United States will become increasingly intense; how to clarify China's position, convey China's voice and tell China's story is a major mission of diplomatic translation. On this basis, even as machine translation technology develops and matures, human translation retains an irreplaceable role compared with machine translation in political language, a highly sensitive form of language. Research Findings and Analysis Comparative Analysis: 1.
Lexical Choices and Syntax Example 1: "我们抗疫取得了决定性的胜利, 取得了重要的战略成果。" - Human Translation: "We have achieved a decisive victory in our fight against the pandemic and gained significant strategic outcomes." - Machine Translation 1: "We have achieved decisive victories in epidemic prevention and control and have made important strategic achievements." - Machine Translation 2: "We have achieved decisive victories in fighting the epidemic and achieved important strategic results." - Machine Translation 3: "We have achieved a decisive victory in epidemic prevention and control and obtained important strategic results." Analysis: Zhang Jing's translation preserves the political tone and unity of the message, emphasizing the strategic outcomes in a cohesive manner. The machine translations, while accurate, lack the subtle emphasis on strategic victory that is politically significant. 2. Discourse Coherence and Paragraph Transitions Example 2: "中国现在正处于两个百年目标交汇期。" - Human Translation: "China is now at the intersection of two centenary goals." - Machine Translation 1: "China is now at the juncture of two centenary goals." - Machine Translation 2: "China is now at the intersection of two centennial goals." - Machine Translation 3: "China is now in the period of the convergence of two centenary goals." Analysis: Zhang Jing's translation effectively conveys the historical significance and forward-looking perspective of the statement, aligning with the broader narrative of China's strategic planning. The machine translations vary in word choice but do not consistently capture the forward-looking nuance of the original. 3. Political Sensitivity and Cultural Nuances Example 3: "美国的民主不仅由美国人来评价,而且要由世界人民来评价。" - Human Translation: "American democracy is not only evaluated by Americans but also by the people of the world." - Machine Translation 1: "American democracy is evaluated not only by Americans but also by people around the world." - Machine Translation 2: "American democracy is evaluated not only by Americans but also by the world's people." - Machine Translation 3: "American democracy is evaluated not only by Americans, but also by people around the world." Analysis: Zhang Jing's translation carefully maintains the political delicacy and the implied criticism, balancing diplomatic politeness with pointed critique. The machine translations, though similar, lack the nuanced delivery that ensures diplomatic appropriateness. Major Findings: 1. Vocabulary selection In terms of vocabulary selection, human translators pay more attention to the precision and cultural connotation of words. For example, Zhang Jing rendered the sentence about the outcome of confrontation as "The result did not serve the United States well," in which the word "serve" aptly conveys the attitude and emotion of the Chinese side, whereas DeepL produced "The result did not benefit the United States," which expresses a similar meaning but lacks the cultural connotation and emotional colouring.
2. Discourse articulation In terms of discourse articulation, human translators are better able to maintain the coherence and logic of speeches. For example, when translating Yang Jiechi's remarks on China-US cooperation, Zhang Jing used several connectives and transitional sentences, making the speech more coherent and fluent, whereas the machine translations fell short in these respects, so that their output was somewhat lacking in coherence and logic. 3. Chinese attitudes and emotional tendencies In terms of expressing Chinese attitudes and emotions, human translators convey the speaker's emotions and attitudes more accurately. For example, when Yang Jiechi mentioned the competition between China and the United States, Zhang Jing's translation used expressions such as "we need to enhance communication, properly manage our differences and expand our cooperation," reflecting the rational and cooperative attitude of the Chinese side. DeepL, by contrast, rendered it mechanically as "we need to strengthen mutual communication, properly manage our differences and strive to promote cooperation," which lacks the emotional colouring and subtle expression appropriate to the diplomatic context. Summary This paper takes the theory of "functional equivalence (dynamic equivalence)" as its main theoretical guide. The concept of dynamic equivalence (functional equivalence) was first proposed from a linguistic point of view, based on the nature of translation, by Eugene A. Nida (1964) in his monograph "Toward a Science of Translating". It covers four aspects of equivalence: (1) lexical equivalence, (2) syntactic equivalence, (3) textual (discourse) equivalence, and (4) stylistic equivalence. (1) Lexical equivalence: the meaning of a word lies in its usage in the language; the corresponding meaning must be found in the target language. (2) Syntactic equivalence: the translator should not only know whether the target language has a given structure, but also understand how frequently it is used. (3) Textual equivalence: when analysing a text, one should not only analyse the language itself but also consider how the language realizes meaning and function in a particular context; the linguistic, situational and cultural contexts must all be taken into account. (4) Stylistic equivalence: translations of different genres have their own distinctive linguistic characteristics; only when the translator masters the characteristics of both the source and target languages and can use both skilfully can he or she produce a translation that truly reflects the style of the source text.
Translation means reproducing in the target language the closest natural equivalent of the source-language message, from meaning to style. This study takes as its material the live interpretation, by Zhang Jing and by machine translation systems, of the 16-minute impromptu speech of State Councillor Yang Jiechi at the 2021 US-China High-Level Strategic Dialogue. Drawing on linguistics, diplomacy, communication and other disciplines, and adopting an interdisciplinary perspective that combines the principles of "political equivalence" and "functional equivalence" in diplomatic translation with the symbiotic relationship between diplomatic translation and diplomatic contexts, it follows the dual principle of "evaluative equivalence," namely equivalence of evaluation type and equivalence of diplomatic stance. From this multidisciplinary perspective, human translation and machine translation in diplomatic settings are examined and comparatively analysed. In order to make the analysis more objective, authentic and persuasive, this study selects a number of reports and publications closely related to its content. To enrich the theoretical foundation of the study and establish a comprehensive evaluation system for the research content, we reviewed and synthesized the literature elaborating the principles of political equivalence and evaluative equivalence. To accumulate experience in analysing related studies and to draw on the research methodology of similar topics, we reviewed and synthesized a large body of literature on political language centred on diplomatic language. In this paper: (1) The translatability of diplomatic language is analysed. Taking the four principles of political equivalence, uniformity of translation, professional expression and convention as the entry point, the Chinese-English translation of diplomatic language in the 2021 China-U.S. High-Level Strategic Dialogue is analysed, with the aim of highlighting the main differences between human and machine translation of diplomatic language in the political field, the irreplaceable role of human translation in the practice of political language translation, and the academic significance, application value and market prospects of the study in the political field.
(2) It argues that cultural emotions can be transmitted, and rebuts the proposition that "machine translation will replace human translation." By comparing the similarities and differences between human and machine translation on national political topics under the guidance of functional equivalence theory, it argues that human translators can express the actual situation and political will more closely in lexical choices and syntactic structure, which further demonstrates the decisive position of human translators in spreading cultural connotations and transmitting emotions, and strengthens people's understanding and recognition of human translation. It further shows that human translation is irreplaceable with respect to the translator's capacity for independent thinking, empathy and re-creation; machine translation should always remain a supplement to human translation and can never override human translators. (3) Building China's international discourse power. China should build its own discourse system effectively, master the ability of outbound translation, become proficient in international languages, break through barriers, and improve the practical effect of discourse expression, so as to enhance China's international discourse power and transmit China's positive energy to the arena of international public opinion. (4) Demonstrating the advantages of human translation. Through the comparative analysis of machine and human translation of political texts, the advantages of human translation are highlighted, guiding college students, especially English majors, to reduce their reliance on machine translation and improve their translation ability, and enhancing the "Four Confidences," especially cultural confidence and cultural identity. (5) Promoting the cultivation of students' values. The study actively responds to a series of important documents issued by the CPC Central Committee, the State Council and the Ministry of Education, provides students with a dialectical and objective way of obtaining information in the "Internet Plus" information age, helps students establish a correct worldview and values, encourages college students to participate actively in the great practice of socialism with Chinese characteristics, and fosters a correct understanding of the motherland and the world.
Non-invasive detection and localization of microplastic particles in a sandy sediment by complementary neutron and X-ray tomography Microplastics have become a ubiquitous pollutant in marine, terrestrial and freshwater systems that seriously affects aquatic and terrestrial ecosystems. Common methods for analysing microplastic abundance in soil or sediments are based on destructive sampling or involve destructive sample processing. Thus, substantial information about the local distribution of microplastics is inevitably lost. Tomographic methods have been explored in our study because they can help to overcome this limitation: they allow the sample structure to be analysed while maintaining its integrity. However, this capability has not yet been exploited for the detection of environmental microplastics. We present a bimodal 3D imaging approach capable of detecting microplastics in soil or sediment cores non-destructively. In a first pilot study, we demonstrate the unique potential of neutrons to sense and localize microplastic particles in sandy sediment. The complementary application of X-rays allows mineral grains to be discriminated from microplastic particles. Additionally, it yields detailed information on the 3D surroundings of each microplastic particle, which supports its size and shape determination. The procedure we developed is able to identify microplastic particles with diameters of approximately 1 mm in a sandy soil. It also allows characterisation of the shape of the microplastic particles as well as the microstructure of the soil and sediment sample as depositional background information. Transferring this approach to environmental samples presents the opportunity to gain insights into the exact distribution of microplastics as well as their past deposition, deterioration and translocation processes. Introduction Microplastics (MPs) are present not only in marine environments but also in lakes and rivers (Blair et al. 2017), the latter also acting as major sources of MPs to the oceans (Schmidt et al. 2017). Due to their ubiquitous presence in marine, terrestrial and freshwater systems, MPs are an environmental pollutant of substantial concern and represent an urgent challenge for research (Rochman 2018). In river water, MP concentrations are typically on the order of several particles per cubic metre (Horton et al. 2017), but much higher values can also be found (Koelmans et al. 2019), up to around 10,000 MP particles per cubic metre close to the surface in an urban watercourse (Schmidt et al. 2018). The density differences of MPs relative to water make them float or sink in the water column. The ones lighter than water tend to float and be transported away from their source, with the potential to be ultimately deposited downstream or downwind at river banks and lake shores. In contrast, particles denser than water tend to sink and be deposited in river or lake beds close to their source, at least initially. However, there are several additional processes influencing the net buoyancy, such as attachment of biofilms or gas bubbles or ageing. Thus, MPs lighter than water can be found in bottom sediments, e.g. up to about 9000 pieces of foamed polystyrene per square metre (Sagawa et al. 2018). While there is no generally accepted definition of the upper and lower size limit of MP, a common definition is that MP is smaller than 5 mm and larger than 1 μm (Frias and Nash 2019). The size range from 1 to 5 mm in diameter can be called large MPs.
The current lower size limit for identification is in the range between 20 and 100 μm (Frias and Nash 2019) and this implies that currently mainly medium to large MP particles can be detected. Sediment samples are usually taken by a grab sampler, spade or corer and then destructively processed, mainly including volume reduction via net collection or sieving and density separation or filtration before detection of MP (Prata et al. 2019). Common methods for identification of medium to large MP particles after extraction and processing are optical inspection, sometimes together with a needle test or similar (Masura et al. 2015;Willis et al. 2017;Silva et al. 2018), attenuated total reflection-Fourier-transformed infrared spectroscopy (ATR-FTIR) (Löder and Gerdts 2015;Renner et al. 2019), thermoanalytical methods such as pyrolysis with subsequent gas chromatography-mass spectrometry (GC-MS) (Fischer and Scholz-Böttcher 2017;Käppler et al. 2018) or thermal extraction-desorption gas chromatography mass spectrometry (TED-GC-MS) (Dümichen et al. 2017), or using near infrared imaging (Schmidt et al. 2018;Corradini et al. 2019). For detecting smaller MPs, a recent comparative study tested measurement results of different methods (Müller et al. 2020). Furthermore, optical analysis of destructively sampled soil material can provide information on presence of MPs, e.g. PET and LDPE (by Fourier-transformed infrared spectroscopy) or PE, PP, PS, PET and PVC (by near infrared spectroscopy in combination with chemometrics), however, requiring MP abundance to be at or above 1% by weight (Hahn et al. 2019;Paul et al. 2019). Recent studies have shown that MPs are found in significant concentrations at lake shores and in river banks and bed sediments. For example, in river banks, MPs have been found at an abundance of hundreds to several thousand pieces per square metre, and showing large scatter (Castañeda et al. 2014;Dris et al. 2015;Zhang et al. 2017). While the size of MP particles is an important property, for example by influencing deposition processes (Blair et al. 2019), it can only be retrieved by some of the detection methods. Also, the mass fraction has been found to strongly increase with MP size (Klein et al. 2015). In river bed sediments, MP particles in the medium to large MP size ranging from about 0.1 to 5.0 mm have been reported to be about 1000 particles per kilogramme dry weight of sediment (Frei et al. 2019). The methods used are destructive, laborious and struggle with differentiating MP from natural material, and thus provide only limited insights. Studies on quantitative identification of MPs in soil are still rare (Bläsing and Amelung 2018) although it can strongly affect soil properties, such as bulk density and soil structure, and biological processes, such as evapotranspiration and root biomass growth (de Souza Machado et al. 2019). There seems to be a large variability in MP contents in agricultural soils depending on management practices. In one study, MPs of size > 1 mm in diameter with 0.34 particles per kilogramme dry weight, mainly foils and fragments, were found in the top 5 cm of soil at an agricultural site, though on this ploughed field no agricultural plastic had been used and neither sewage sludge applied (Piehl et al. 2018). 
However, in another investigation, between 7100 and 42,960 MP particles per kilogramme dry weight of soil were reported on cropped vegetable fields in China, with the majority below 1 mm and mainly consisting of fibres, where irrigation with wastewater had been applied (Zhang and Liu 2018). This also demonstrates that large differences between sites and along soil profiles can be expected based on past management with varying MP inputs. Furthermore, the vertical distribution of MP in sediments has been investigated, though in all of these studies, to our knowledge, via destructive sampling and extraction of MPs. Typically, sections ranging from of a few to 10 m were extracted from different depths from marine, beach and river sediments and analysed as a whole for their total MP content (Turra et al. 2014;Willis et al. 2017;De Ruijter et al. 2019;Frei et al. 2019). MP abundance in beach sands, for example, was around a few hundred per kilogramme dry weight in the shallow depths investigated by Besley et al. (2017) or between 5 and about 60 MP particles per kilogramme dry weight down to 40 cm depth (Kreiss 2020). Studies obtaining vertical distributions of MPs in soils seem to be lacking so far. For river, lake and marine sediments as well as soils, investigations are needed that provide sizes and shapes of MP particles, and also there should be investigations that go beyond destructive analysis and that achieve a vertical resolution down to the scale of the size of MP particles, i.e. millimetres rather than centimetres or decimetre. So far, tomography methods have rarely been used to investigate the presence and fate of MPs in the environment. X-ray microtomography was applied to study the shapes of individual MP particles after having been extracted from samples and identified with other methods (Sagawa et al. 2018). Optical coherence tomography has been applied and tested to image internalized MPs accumulated in the intestines of living Daphnia magna (Barroso et al. 2019). However, tomography methods have unique capabilities and some are common investigation tools in analysis of sediments and soils. X-ray tomography (CT) is mainly used in soil physics to investigate soil structures, soil properties and root-soil interaction, or to study flow and transport processes in porous soil media (Helliwell et al. 2013;Schlüter et al. 2014). Similar applications can be found in sedimentology and earth sciences (Duliu 1999;Fouinat et al. 2017). CT may be applied at small scales as microtomography or synchrotron tomography (Lombi and Susini 2009;Mooney et al. 2012;Keyes et al. 2017). Imaging with neutrons is used for investigating water flow, root water uptake and rhizosphere properties as 2D transmission imaging (Oswald et al. 2008;Carminati et al. 2010) or as 3D tomography (Esser et al. 2010;Moradi et al. 2011). While long acquisition times seemed to limit the application of neutron tomography (NT) to quasi-stationary situations, recent developments could yield similar information over much shorter time scales, down to seconds per tomogram (Tötzke et al. 2017;Tötzke et al. 2019). A third common imaging method for soils and sediments is magnetic resonance imaging, which can provide water distribution, water movement, transport of paramagnetic tracers and differences in texture and water mobility (Chen et al. 2002;Moradi et al. 2010). These imaging methods can also be combined (Oswald et al. 2015;van Veelen et al. 
2018) and a recent study showed the combination of all three of them (Haber-Pohlmeier et al. 2019). Tomographic investigation can help to identify MP particles in soils and sediments and to obtain information on their shape and context, e.g. whether different fragments belong together as remnants of a larger mother particle or whether they are embedded in particular layers of specific texture resulting from particular events. Coring and non-invasive analysis for MP particles can even constitute a historical record of MP deposition in the past and its changes (Willis et al. 2017). That applies probably more to marine, lake and river bed sediments than to beaches, river banks and soils, where human or natural activities cause disturbances, e.g. translocation by ploughing or earthworms (Rillig et al. 2017). Our study is the first to test a combination of neutron and X-ray tomography for the detection of MP particles in sandy sediments. A particular advantage of this imaging approach over common detection methods is that no destructive sample preparation procedures are required. By maintaining the integrity of the sediment sample during analysis, the imaging approach offers the potential to go beyond simply quantifying the number of MP particles present. This includes advanced analysis options such as detecting the 3D shape and spatial distribution of the plastic particles as well as capturing the microstructure of the sediment surrounding the MP particles. Although X-ray tomography provides excellent contrast to analyse the microstructure of sediment and soil samples, the detection of MP particles requires a complementary method, as common plastic materials are quite transparent for X-rays (e.g. attenuation coefficient of polyethylene: μ(E = 100 keV) = 0.16 cm−1) (NIST 2020). On the other hand, neutrons are a sensitive probe for MPs as they are strongly attenuated by common plastic materials (e.g. neutron attenuation coefficient of polyethylene: μ(λ = 3 Å) = 6.6 cm−1) (NIST 2020). The basic concept is to use the different contrast behaviour of these imaging modalities to clearly identify MP particles and to gain additional information about the microstructure of the sediment surrounding them. This approach can achieve an unprecedented vertical resolution, and the information gained can crucially support the understanding of the depositional context of MP particles. For example, the identification of local cracks or macropores could explain the preferential deposition of plastic particles in respective regions of the sediment sample. The abundance of MPs found in soils and sediments in the environment, at least for substantially polluted sites, makes it likely that a few MP particles can be expected in cored sediment or soil samples. For this scenario, we have developed this non-destructive measurement approach to provide an option for reconstructing MP deposition in the past and investigating deposition and translocation processes. Sample preparation To test the feasibility of detecting MPs, a sand column containing a known number of MP particles was prepared in a boron-free glass cylinder to enable the use of neutron and X-ray tomography on the same sample. The dimensions of the container were diameter 20 mm and height 100 mm. The bottom half of the container was filled with quartz sand (type FH 31, Quartzwerke Frechen/Germany, well-sorted medium sand size fraction), which is considered a simple surrogate of a natural sandy soil or sediment in a surface water course (Fig. 1a).
Five small almost rectangular pieces about 1 mm in width were cut from the disposable security ring band of a polyethylene (PE) bottle screw cap and embedded into the sand. In the next step, a cardboard disc was used as separator covering the bottom sand compartment before the upper half was filled with thermally treated FH31 sand. The thermal treatment (3 h at 800°C) was supposed to eliminate potential organic matter present in the sand. Finally, six similar-shaped (PE) particles with a size of roughly 1 mm ( Fig. 1b and c) were embedded in the sand of the upper compartment. Afterwards, the container was closed at the top using aluminium tape. Dual-mode neutron and X-ray imaging Complementary imaging experiments were performed at the Helmholtz Centre Berlin for Energy and Materials (HZB) in Berlin, Germany. Neutron images were captured at the tomography station CONRAD II, which was supplied with cold neutrons by the research reactor BER II via a curved neutron guide (Kardjilov et al. 2016). The neutron detector system was equipped with a 100-μm-thick 6 LiZnS:Ag scintillator 16-bit sCMOS camera (Andor "Neo") in combination with a Nikon photo lens (focus 60 mm, aperture 1:2.8). The neutron beam collimation ratio L/D was set to 250. A total number of 500 radiographs with an exposure time of 19 s each and a resolution of 39 μm/pixel were taken while the sample was stepwise rotated between images over an angular range of 180°. The acquisition time for the entire scan was 3 h and 14 min. X-ray computed tomography was performed using a laboratory μCT scanner with a cone beam geometry. The major components of the scanner were a micro-focus X-ray source (type L8121-03, Hamamatsu Photonics, Hamamatsu, Japan), operated with an acceleration voltage and current set to 90 kV and 111 μA, respectively, and a flat panel detector (type C7942SK-05 Hamamatsu Photonics, Hamamatsu, Japan). The latter had 2316 × 2316 pixels with pixel size 50 μm × 50 μm. The source object distance of 216 mm and the source detector distance of 300 mm resulted in an image resolution of 35 μm/pixel and a corresponding field of view of 81 mm × 81 mm. Nine hundred radiographic projections were recorded via a sample manipulation stage over an angular range of 360°. Three frames with 0.6 s exposure time were taken at each angular step and a median image calculated to improve the statistics of the projection. The acquisition time for the entire scan was about 1 h. Neutron and X-ray radiographs (projection images) were corrected by flatfield and darkfield images. Tomograms were reconstructed using filtered back algorithms implemented in the software Octopus (Inside Matters, Gent/Belgium) and IDL (Harris Geospatial Solutions, Broomfield, USA). A 3D nonlocal mean filter efficiently reduced the noise of the image data. The neutron and X-ray tomograms were registered using the software ImageJ. Resolution, field of view and 3D orientation of the volume data sets were matched manually to keep full control during the registration procedure, similar to the procedure in Haber-Pohlmeier et al. (2019). 3D rendering and data analysis of 3D volumes were performed using the software VGSTUDIO MAX (Volume Graphics, Heidelberg, Germany). Identification of potential microplastics by neutron tomography Through an NT scan, the attenuation property for each point of a sample can be reconstructed using mathematical algorithms (Kardjilov et al. 2018). 
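The flat-field and dark-field correction of the projection images mentioned above is a standard normalization step preceding reconstruction. The sketch below is a minimal numpy version of it, assuming the radiographs are already loaded as 2D float arrays; the authors used Octopus and IDL for this processing, so the function and variable names here are illustrative only, not their actual code.

```python
import numpy as np

def correct_projection(raw, flat, dark, eps=1e-6):
    """Flat-/dark-field correction of a single radiograph.

    raw  : measured projection with the sample in the beam
    flat : open-beam image without the sample
    dark : detector image without the beam
    Returns the normalized transmission image; its negative logarithm is
    proportional to the line integral of the attenuation coefficient and
    serves as input for filtered back-projection.
    """
    transmission = (raw - dark) / np.maximum(flat - dark, eps)
    return np.clip(transmission, eps, None)

# attenuation_projection = -np.log(correct_projection(raw, flat, dark))
```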
Our measured sand column contained a known number of polyethylene particles to explore and demonstrate the feasibility of this non-invasive imaging approach. Figure 2 shows three different 3D representations of the sand sample. To start with, the rendering settings were adjusted such that only the outer shape of the sample became visible, i.e. the glass container sealed with aluminium tape (Fig. 2a). For rendering, we used a ramp function that varied the opacity between zero and one as indicated by the red line in the histogram (Fig. 2d). In the next step, the least attenuating components of the sample were rendered transparent in the 3D representation to reveal the more attenuating particles, including the cardboard divider disc (Fig. 2b). Subsequently, a segmentation threshold was introduced at μ = 2.8 cm−1 to select only the most attenuating sample components (Fig. 2f and Fig. 6). The selection contained the plastic particles (six pieces in the top, five in the bottom half of the sand column as described in the "Sample preparation" section) but also a number of additional particles in the sand matrix that attenuated neutrons in a similarly strong manner (Fig. 2c). This indicates that all MP particles present in the sediment were marked as potential MP by neutron tomography, which would not be possible by just using X-ray CT. However, the analysis solely based on neutron attenuation coefficients remains ambiguous to some extent. This problem can be solved by using complementary X-ray tomography, revealing further distinguishing features and gaining complementary information on the local structure and composition of a sample, as demonstrated in the next step.
Fig. 1 Preparation of the sand column loaded with a few microplastics. a Boron-free glass container filled with sand; the sand in the upper compartment had been heated to 800°C for 3 h, resulting in a slight colour difference. b Photograph of the microplastic particles that were embedded in the sand; a cardboard disc was used to separate the upper and lower compartments. c Light-microscopic image of the tabular microplastic particles used, shown here for the upper sand compartment.
Fig. 2 (caption, partially recovered) b Components with higher neutron attenuation are revealed by the modified opacity setting displayed in e. c Potential MP particles are selected by setting a segmentation threshold at the attenuation coefficient μ = 2.8 cm−1, as illustrated in the histogram f.
Discrimination of potential microplastics by X-ray computed tomography We performed an X-ray scan of the sand sample, reconstructed the 3D sample volume and registered the two modalities, which facilitated the evaluation of the individual attenuation properties for each point of the sample for both neutrons and X-rays. The complementary character of the registered image data facilitates the identification and segmentation of the individual sample components and helps to reveal the distribution and shape of potential MP particles in 3D and to study their embedding in the sand matrix (Fig. 3a). The 2D cross-sectional views presented for X-rays (Fig. 3b) and neutrons (Fig. 3c) illustrate well the complementary character of the two imaging modalities. MP particles and the cardboard material are clearly visible in the neutron images (bright pixels), while the contrast for the sand particles is rather low. On the other hand, the X-ray image provides excellent mineral contrast necessary to analyse the microstructural features of sand, but MP particles and the cardboard structure appear only as gaps in the sand matrix. Some particles, e.g. the one labelled with "2", strongly attenuate both neutrons and X-rays, i.e. appear bright in Fig. 3b and c, indicating that they are non-plastics. Figure 3d displays a bivariate histogram plotted for a sub-volume containing a plastic particle labelled with "1" and a mineral particle labelled with "2". It illustrates the benefit of combining neutron and X-ray tomography, as the registered information about the bimodal attenuation characteristics facilitates the identification and segmentation of different components in the sample (Kaestner et al. 2017). In addition to the particles "1" and "2", the bulk sand contains a third group of voxels visible in the lower right part of the histogram. These voxels seem to contain metallic components strongly attenuating X-rays but neutrons only weakly (neutron attenuation coefficients ranging from 0.1 cm−1 < μ < 0.5 cm−1). Now we can define a two-step procedure to identify and select just the MP particles. First, particles are identified by the neutron measurement as potential MP particles. The corresponding histogram of potential MP particles (Fig. 4a) confirms that these particles differ in their X-ray attenuation coefficients. Therefore, secondly, a threshold is set at μ = 0.65 cm−1 in order to discard the more attenuating non-plastic particles. Voxels above the threshold are excluded and only voxels with lower attenuation than this threshold are assigned to an MP particle. The resulting MP particles are rendered in green in the 3D representation (Fig. 4b). The number of identified MP particles matches exactly the number of MP particles added during sample preparation: six in the upper and five in the lower sand compartment (Fig. 1b). This procedure was equally successful for both the thermally treated sand and the non-treated sand with its natural content of organic matter. Furthermore, the size and shape of the particles are in good agreement with the light-microscopic measurement of the MPs (Fig. 4c). To further check the result, we tracked down one MP particle and one discarded particle in the stack of tomographic 2D slices, as illustrated in Fig. 5, as examples for detailed consideration. The magnified inset proves that the discarded one is a sand particle (see the red-coloured region of interest (ROI) in Fig. 5b, top row). However, this particle seemed to have a specific elementary composition, which led to its distinct attenuation characteristic.
Fig. 3 Combining neutron and X-ray tomography. a 3D-rendered image of co-registered X-ray (rendered in grey) and neutron data (red); virtual cuts reveal the interior structure of the sand column including the cardboard disc and some of the potential MP particles. The front cutting plane is also displayed in 2D as X-ray (b) and neutron image (c) to illustrate the complementary character of these imaging modalities. d Bivariate histogram of a sample sub-volume containing a plastic and a mineral particle labelled with "1" and "2", respectively; the histogram illustrates that the different components can be better identified by dual-mode imaging. The red-marked area is the target range fulfilling both thresholds and thus the voxels assigned to belong to MPs.
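The two-step selection described above (neutron threshold μ > 2.8 cm−1 to flag potential MPs, then X-ray threshold μ < 0.65 cm−1 to discard mineral grains) maps directly onto a voxel-wise mask operation. The following sketch is an illustrative numpy/scipy version, assuming the two tomograms have already been registered onto a common grid; the variable names, the use of scipy.ndimage, and the 39 μm voxel size (taken here from the neutron scan as an assumed common voxel size) are not the authors' actual workflow, which relied on VGSTUDIO MAX.

```python
import numpy as np
from scipy import ndimage

NEUTRON_MU_MIN = 2.8   # cm^-1, neutron threshold for potential MPs
XRAY_MU_MAX = 0.65     # cm^-1, X-ray threshold to discard mineral grains
VOXEL_MM = 0.039       # mm, assumed edge length of the registered voxels

def find_mp_particles(mu_neutron, mu_xray):
    """Return labelled MP particles and their volume-equivalent diameters (mm)."""
    potential = mu_neutron > NEUTRON_MU_MIN          # step 1: neutron data
    mp_mask = potential & (mu_xray < XRAY_MU_MAX)    # step 2: X-ray data
    labels, n = ndimage.label(mp_mask)               # connected components
    voxel_counts = ndimage.sum(mp_mask, labels, index=range(1, n + 1))
    volumes = voxel_counts * VOXEL_MM ** 3           # mm^3 per particle
    diameters = (6.0 * volumes / np.pi) ** (1.0 / 3.0)  # volume-equivalent sphere
    return labels, diameters
```

Connected-component labelling also yields particle centroids, so depth below the sample surface and inter-particle distances follow directly from the same segmentation.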
This sand grain attenuated both probes, X-rays and neutrons, while the majority of sand particles interacted only weakly with neutrons. The detected sand grain may have contained some boron, gadolinium or cadmium compounds. The cross-sectional view of the MP particle (Fig. 5c) reveals an apparent cavity in the sand matrix of about 1 mm, as proved by the red-marked ROI. Due to the low contrast between polyethylene and air, it is not possible to determine the outer contour of the MP particle directly in the X-ray image, but only the shape of the total void with respect to the sand matrix. Nevertheless, the size of this void is valuable information for the accurate determination of the MP particle size. Determination of MP particle size The detection of MP particles by NT relies on a threshold-based voxel-wise analysis of the attenuation properties of sediment samples. At the edges of the particles, partial volume effects impair the reconstruction of local attenuation coefficients. This causes a blurriness of the particle edges that depends on the resolution limit of the tomography. Figure 6a highlights the influence of the selected threshold value on the detected particle size, thus illustrating the challenge of correctly reproducing the true particle fringe in the tomographic image. An appropriate strategy for determining the accurate MP particle size is to adjust the segmentation threshold iteratively such that the MP particles fit into the corresponding pores of the sand matrix. This is achieved when the margins of the MP particle have at least one contact point with, but do not overlap, the surrounding sand particles. A well-adjusted segmentation threshold (μ = 2.8 cm−1) is indicated by the red-bordered ROI for particle "2" in the horizontal and vertical cross sections shown in Fig. 6b and c. Using this threshold, the MP particle volume was calculated, represented as the diameter of a volume-equivalent sphere, and the cumulative distribution plotted in Fig. 6d. Particle sizes range around 1 mm with Dmin = 0.91 mm and Dmax = 1.09 mm. The accuracy of the size determination depends on the size of the particles (with smaller ones having higher relative errors) and the physical spatial resolution of the method. In the present study, the relative error is estimated to be 5%.
Fig. 4 Identification of true MP particles by selection from potential MPs (in white) via analysis of CT data. a Histogram of the X-ray image containing all potential MP particles as extracted from the neutron data; as microplastic is a weakly attenuating compound for X-rays, only particles with μ < 0.65 cm−1 are selected and coloured in green. b 3D-rendered view of the potential MP particles; identified plastics are coloured in green using the rendering settings displayed in the histogram in a. The number of identified MP particles in the lower and upper compartments matches exactly with the preparation procedure (cf. Fig. 1b). Note that the structure of the cardboard divider disc was extracted from neutron data and superimposed on the X-ray data to indicate the border between the upper and lower sand compartments. c Left: 3D-rendered volume of the MP particle marked by an arrow in b; right: light-microscopic image of the same particle, showing the good agreement of particle shape and size.
Discussion The experimental results have shown that a non-destructive detection of MP particles in sandy sediment or soil cores is possible.
While neutron tomography was the key step in detecting MPs as hydrogen-rich particles, the complementary X-ray tomography analysis enabled their unambiguous identification as MP particles. This tomography approach goes beyond a mere numerical identification and provides further valuable information. The general shape of each particle could be correctly detected as well as its basic size (Fig. 6). Complementary tomographic information about the sand matrix was gained from the X-ray tomography to allow for a precise adjustment of the segmentation threshold in the neutron images, which decreased the uncertainty of the particle size determination to an approximate error of ±5%. The position and orientation of each MP particle can be identified, that is, not only its depth below the sample surface but also its distance to other MP particles and structures in the sediment or soil, here for example the cardboard layer. Moreover, the X-ray tomography provides detailed information on the 3D surroundings of each MP particle and could be used to determine the local grain size distribution and porosity (Naveed et al. 2013; Evans et al. 2015). The sample size is limited by the transmission capacity of the neutron and X-ray beam. The maximum diameter for a tomographic measurement with reasonable contrast depends on the elementary composition of the sediment or soil core, since this composition determines the total attenuation of the sample. Another important point is the spatial resolution needed to detect smaller MP particles. The principal detection limit for MP particles corresponds to the resolution capacity of the neutron tomographic measurements. Recent advances have improved the physical spatial resolution down to a few micrometres (Tengattini et al. 2020). At this high resolution, however, the size of the field of view and thus the sample size that can be examined shrinks down to a few millimetres. To find a reasonable compromise between sample size and spatial resolution, the actual size of the MP particles to be detected has to be taken into account. Provided the sediment sample contains only moderately attenuating components, sample diameters of up to 6 cm seem possible for the detection of larger MP particles (> 1 mm). For smaller MP particles (0.05 mm < D < 1 mm), realistic sample core diameters range rather between 1 and 5 cm. Note that the smaller the particles to be detected, the more important the precision of the registration procedure becomes. As the detection of plastic particles relies on the sensitivity of neutrons to hydrogen as a constituent of the plastic compounds, the method is able to detect most common plastic materials except for polytetrafluoroethylene (PTFE), which contains no hydrogen. However, it does not provide information to distinguish between types of plastics. In this pilot study, the combination of neutron and X-ray tomography was presented as a unique approach to study MP in soil and sediment samples.
Fig. 5 a Location and appearance of a selected MP particle (green) and a discarded particle (white) located in the lower sand compartment, shown in a 3D sub-volume and in the respective cross-sectional 2D view. b Inset showing the position and shape of the non-plastic particle as a red-bordered ROI, as identified by the two-step identification procedure; the high X-ray attenuation coefficients (bright pixels) within the ROI indicate its mineral character. c MP particle appearing as a void (at bottom).
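A rough way to see where the sample-diameter limits quoted above come from is the Beer-Lambert law for beam transmission; the figures below are only an order-of-magnitude illustration using the bulk-sand neutron attenuation range (0.1-0.5 cm−1) reported earlier for the registered data, and they ignore scattering and beam hardening.

```latex
T = e^{-\mu d}, \qquad
T(d = 6\,\mathrm{cm}) \approx e^{-0.6} \approx 0.55
\quad\text{to}\quad e^{-3.0} \approx 0.05 .
```

A transmission of only a few percent leaves little signal above the detector noise, which is why larger cores are only feasible for moderately attenuating, and ideally dry, material.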
Unlike most commonly used methods, it is not only suitable for determining the number of particles and classifying their size and shape, but also provides high-resolution information on the spatial distribution of the MP particles. The complementary application of neutrons and X-rays ensures the sensitivity and robustness to detect even small MP particles down to the spatial resolution of the two methods, which is less than 100 μm. Most importantly, the tomographic analysis of real environmental samples would allow for studying the detailed relative positioning of all detected MP particles as well as the microstructure of the intact sediment or soil core, promising new insights into the depositional context of the MP particles. This may promote a better understanding of how the deposition of MPs influences the microstructure of the soil or sediment and vice versa. The deposition of MP particles could lead to structural changes that have significant consequences for the hydraulic properties of the sampled soil. For example, preferential deposition of MP particles in soil macropores may result in clogging of efficient water pathways through soil layers. Furthermore, MP particles deposited in pores and surface interstices may significantly affect the soil-water contact angle and thus the wettability and water holding capacity of soil.
Fig. 6 Determination of the segmentation threshold for the MP particle size analysis. a Impact of the segmentation threshold on the particle size, illustrated for a selected MP particle. Particle shapes for an exemplary selection of segmentation thresholds (1-4) are displayed as ROIs in the cross-sectional X-ray images in b and c. Setting a segmentation threshold of μ = 2.8 cm−1, the cumulative MP particle size distribution was calculated from the neutron image and plotted in d.
Only through methods providing high spatial resolution, such as the tomography approach presented here, which enable analysis beyond bulk samples, will it be possible to better understand the deposition of MPs and the implications of their presence for sediment and soil properties and their hydroecology. Clearly, there is a need to test this tomography approach in the future with real environmental samples and subsequently refine it, which may also result in different procedures adapted to the measurement of soils, beach and river bank, or river bed, lake bed and marine sediments. One challenge is the analysis of soils or sediments containing natural organic matter. A potential approach could be to treat the sample, as is often the case in existing analyses of MPs, e.g. with hydrogen peroxide or enzyme cocktails, to degrade and flush out organic matter before drying and imaging. Another option could be an additional treatment for staining natural organic matter with an X-ray contrast agent, to discriminate it from MPs in the X-ray CTs. Finally, thresholds may also be adjusted, or the internal structure of larger particles visualized, to help discriminate natural organic matter from MPs. For future measurements, it is also promising to apply segmentation algorithms based on artificial intelligence. Since initially only a small number of data are available, classification procedures such as random forest (machine learning) are preferred. At a later stage, when a large amount of training data is available, neural networks (deep learning) can also be used to identify MP particles and to discriminate non-plastics such as organic matter.
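For the machine-learning segmentation suggested above, a voxel-wise classifier that takes the bimodal attenuation values as input features is the natural starting point. The sketch below is a hypothetical scikit-learn random-forest setup, not part of the published workflow; the feature choice, class labels and the availability of manually labelled training voxels are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_classifier(mu_neutron, mu_xray, labels):
    """Train a random forest on voxels with known class labels.

    mu_neutron, mu_xray : registered 3D attenuation volumes (cm^-1)
    labels              : 3D integer array, e.g. 0 = sand, 1 = MP,
                          2 = organic matter, -1 = unlabelled
    """
    mask = labels >= 0
    X = np.stack([mu_neutron[mask], mu_xray[mask]], axis=1)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels[mask])
    return clf

def classify_volume(clf, mu_neutron, mu_xray):
    """Apply the trained classifier to every voxel of the registered volumes."""
    X = np.stack([mu_neutron.ravel(), mu_xray.ravel()], axis=1)
    return clf.predict(X).reshape(mu_neutron.shape)
```

Adding neighbourhood or texture features per voxel would be the simplest way to let such a classifier also exploit shape information, as discussed next.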
These algorithms are particularly promising as they not only take the local attenuation properties of both imaging modalities into account but also recognize specific shapes of structures. This appears to be of great benefit for identifying specifically shaped MPs such as fragments of foils or fibres. Furthermore, shape recognition can certainly be of great assistance when it comes to discriminating specific organic matter such as remnants of plant roots, snails or shells.

Conclusions

The combined tomography method presented here is a first approach to identify and characterize some aspects of MP particles in undisturbed cores taken from sediments or soils. Our study has demonstrated the detection of MP particles in the millimetre size range. However, the method has the potential to identify MP particles down to at least 100 μm, as the detection limit depends mainly on the chosen spatial resolution of the tomography. The non-invasive character of the method offers a valuable opportunity to quantify not only the MP abundance, but also the spatial distribution of MP particles and the microstructure of the sediment or soil sample itself. As soon as this approach can be transferred to environmental samples, there is not only enormous potential to gain insights into the exact distribution of MPs deposited in past events (e.g. floods) or by direct human intervention (e.g. irrigation with waste water), but also into possible mechanical translocation or bioturbation processes.

Author contribution N.K., A.H. and C.T. conducted the neutron and X-ray experiments and performed the image processing. S.E.O. and C.T. contributed equally in developing this tomography approach, writing the manuscript and generating the figures. All authors analysed the results, contributed to the respective discussions and reviewed the manuscript.

Funding Open Access funding enabled and organized by Projekt DEAL. The research presented here was funded by the German Research Foundation (DFG) under grant numbers OS 351/8-1 and TO 949/2-1.

Data availability The datasets generated during the current study are available from the corresponding author on reasonable request.

Compliance with ethical standards

Competing interests The authors declare no competing interests.

Code availability Not applicable.
Research on the Measurement Technology of Rotational Inertia of Rigid Body Based on the Principles of Monocular Vision and Torsion Pendulum

Damping is an important factor contributing to errors in the measurement of rotational inertia using the torsion pendulum method. Identifying the system damping allows for minimizing the measurement errors of rotational inertia, and accurate continuous sampling of the torsional vibration angular displacement is the key to realizing system damping identification. To address this issue, this paper proposes a novel method for measuring the rotational inertia of rigid bodies based on monocular vision and the torsion pendulum method. In this study, a mathematical model of torsional oscillation under a linear damping condition is established, and an analytical relationship between the damping coefficient, torsional period, and measured rotational inertia is obtained. A high-speed industrial camera is used to continuously photograph the markers on a torsion vibration motion test bench. After several data processing steps, including image preprocessing, edge detection, and feature extraction, and with the aid of a geometric model of the imaging system, the angular displacement corresponding to the torsion vibration motion is calculated for each frame of the image. From the characteristic points on the angular displacement curve, the period and amplitude modulation parameters of the torsion vibration motion can be obtained, and finally the rotational inertia of the load can be derived. The experimental results demonstrate that the proposed method and system can achieve accurate measurements of the rotational inertia of objects. Within the range of 0–100 × 10⁻³ kg·m², the standard deviation of the measurements is better than 0.90 × 10⁻⁴ kg·m², and the absolute value of the measurement error is less than 2.00 × 10⁻⁴ kg·m². Compared to conventional torsion pendulum methods, the proposed method effectively identifies damping using machine vision, thereby significantly reducing measurement errors caused by damping. The system has a simple structure, low cost, and promising prospects for practical applications.

Introduction

Rotational inertia is a physical quantity that characterizes the magnitude of an object's inertia during rotational motion around an axis and is a measure of the rotational performance of self-propelled equipment. Like mass, center of mass, and product of inertia, it is a mass characteristic parameter of objects. Rotational inertia is an inherent property of any object with mass [1–4]. The role of rotational inertia in rotational motion is analogous to the role of mass in linear motion, and it is an essential parameter in the dynamic modeling and analysis of rotating rigid bodies [5–7]. For example, rotational inertia is crucial in various applications, such as gyroscopes [8], celestial bodies [9], and motor rotors [10]. Furthermore, a rotational inertia test is required for all equipment with rotational behavior, such as spacecraft [11], aircraft [12], automobiles [13], robots [14], specialized helmets [15], and tennis rackets [16]. The rotational inertia of an object is related to its mass, the position of its rotation axis, and the distribution of its mass. For rigid bodies with complex shapes and nonuniform mass distributions, experimental methods are usually required to determine the rotational inertia [17].
In an experiment, the object under test is generally set in motion in a certain way, and the rotational inertia is obtained from the mathematical relationship between the motion characteristics and the rotational inertia. The torsion vibration response method is a commonly used mechanical performance testing method in engineering [18], and different torsion vibration testing methods, including the multiple pendulum method [13], the compound pendulum method [19], and the torsion pendulum method [20], are often used for testing inertial parameters. Among them, the torsion pendulum method is currently the most accurate and reliable method for measuring rotational inertia and is widely used in the measurement of the rotational inertia of large-sized equipment [21–23]. Regardless of which torsion vibration testing method is used, the calculation of rotational inertia is based on the relationship between the measured rotational inertia and the torsion vibration motion period.

In recent years, scholars have focused on three fields of research on the measurement technology of rotational inertia. The first field is integrated test technology for measuring the mass characteristic parameters of large-scale objects. For example, Zhang et al. [21] designed and studied an integrated measurement system for the mass characteristic parameters of high-mass nonrotary aircraft, which can complete the measurement of mass, center of mass, rotational inertia, and product of inertia in a single hoisting cycle. Teng et al. [24] combined the multi-point weighing method and the torsion pendulum method to realize the integrated measurement of satellite mass characteristic parameters. Olmedo et al. [25] studied an experimental test method for the mass characteristic parameters of robots and developed a set of torsion pendulum test platforms. The second field is online or on-orbit identification technology for measuring inertial parameters. For example, Jin et al. [26] proposed a method based on a double unscented Kalman filter (DUKF) for the online identification of lightweight electric vehicle inertial parameters, to address the impact of a sharp reduction in vehicle mass and body size on the identification of the inertial parameters during operation. Manshadi et al. [27] studied a nonlinear filtering method for estimating aircraft mass properties during airdrop maneuvers: in the first step of the method, a single extended Kalman filter is used to estimate the total mass and moment of inertia of the aircraft before the start of an airdrop maneuver; in the second step, a joint extended Kalman filter is employed to estimate the dynamic state and mass parameters of the aircraft during the airdrop maneuver. The third field is research on the damping effect in the measurement process of the torsion pendulum method and corresponding error compensation techniques. For example, Gandino et al. [28] studied the structural damping effect in a time-varying-inertia compound pendulum torsion vibration system, gave an analytical model of the torsion vibration system, and defined the equivalent damping ratio from the energy point of view. Zhao et al. [29] studied the nonlinear damping effect in moment of inertia measurement by the torsion pendulum method and proposed a measurement error compensation model. With the gradual application of air bearings to torsion pendulum test benches, the frictional damping of torsion vibration has been reduced so far that it can be ignored.
However, for objects with complex aerodynamic shapes, air damping during torsion vibration will also affect the measurement of rotational inertia. In order to minimize the influence of damping on the measurement, sensors must be used to accurately record the torsion pendulum curve; the vibration damping is then estimated from the attenuation of the angular displacement amplitude, so as to eliminate or compensate for the measurement error. In engineering, grating displacement sensors or angle encoders are often used to accurately record the angular displacement curve of torsion vibration [23,30]. The use of these sensors greatly increases both the cost of the measurement system and the complexity of the measurement algorithms.

Vision measurement technology utilizes high-precision industrial cameras to capture images of a measured object and obtains the dimensional parameters of the object through the object-image relationship of the imaging system. When combined with artificial intelligence algorithms, it forms machine vision technology, which plays an irreplaceable role in an increasing number of dimensional measurement applications [31–34]. In this paper, monocular vision technology is introduced into the measurement of rigid bodies' rotational inertia and is used to realize the real-time recording of the torsion vibration angular displacement and to obtain the torsion pendulum curve. This approach can not only accurately determine the torsion vibration period of an object but also capture the changes in the relevant parameters of the object under the influence of damping, thus achieving an accurate measurement of rotational inertia.
Principle of the Torsion Pendulum Method for Measuring Rotational Inertia

As shown in Figure 1, the principle of measuring rotational inertia by the torsion pendulum method is described using a torsion pendulum measuring table as an example. The core component of the torsion pendulum measuring table is an elastic element (usually a torsion bar or torsion spring). When the load platform and the measured object rotate around the axis of the torsion pendulum by a certain angle θ, the elastic element continuously converts kinetic energy into potential energy, driving the measured object into reciprocating torsion vibration. When the damping effect is ignored, the measured rotational inertia is proportional to the square of the torsion vibration period. The torsion vibration period is often counted using a proximity switch, as shown in Figure 1.

The torsion vibration system is a typical single-degree-of-freedom system, and the torsion angle θ can be defined as its generalized coordinate. Therefore, according to the Lagrangian equation, we have the following:

$$\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L}{\partial \dot{\theta}}\right) - \frac{\partial L}{\partial \theta} = Q \tag{1}$$

where L represents the Lagrangian function, L = T − V; T is the kinetic energy of the system, V is the potential energy of the system, Q is the generalized external force, and t is time. In the torsion vibration system, the kinetic energy T can be expressed as

$$T = \frac{1}{2} I \dot{\theta}^2 \tag{2}$$

where I is the rotational inertia of the load. Neglecting the nonlinear damping of the torsion vibration system, we have the following:

$$V = \frac{1}{2} k \theta^2 \tag{3}$$

where k is the stiffness coefficient of the elastic element, which is related to the mechanical properties of the material and the length and diameter of the elastic element. The generalized force of the torsion vibration system is the damping torque, which is composed of the air damping caused by the shape of the object being tested, the bearing friction damping, and the internal damping of the elastic element. It often takes the form of linear damping proportional to the torsion vibration angular velocity and can be expressed as

$$Q = -c\,\dot{\theta} \tag{4}$$

where c is the damping coefficient. By combining Equations (1)–(4), the differential equation of torsion vibration motion can be obtained as follows:

$$I\ddot{\theta} + c\dot{\theta} + k\theta = 0 \tag{5}$$

For the torsion vibration system, assuming that the initial deflection angle is θ(0) = θ₀ and the initial angular velocity is θ̇(0) = 0, the solution to the differential equation can be obtained as follows:

$$\theta(t) \approx \theta_0\, e^{-\frac{c}{2I}t}\,\cos\!\left(\sqrt{\frac{k}{I} - \left(\frac{c}{2I}\right)^{2}}\; t\right) \tag{6}$$

As shown in Equation (6), when the damping factor is considered, the curve of the torsion pendulum angle displacement over time is a modulated, asymptotically decaying single-frequency curve with a frequency of √(k/I − (c/2I)²) and an initial phase of 0. Using displacement sensors, such as grating sensors, it is easy to sense the torsion pendulum angle displacement. When the damping is small, the torsion pendulum angle displacement signal is a narrowband signal, and its instantaneous characteristics can be identified using the Hilbert transform. Thus, it is possible to estimate the parameters of objects with time-varying inertial parameters.
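The equations above are reconstructed from the surrounding definitions (the equation bodies did not survive extraction). A quick numerical check of the reconstruction, with all parameter values (I, c, k, θ₀) chosen purely for illustration: integrating Eq. (5) directly and comparing against the modulated single-frequency form of Eq. (6) should agree closely for light damping.

```python
import numpy as np
from scipy.integrate import solve_ivp

I, c, k = 5e-3, 2e-4, 1.2     # assumed inertia [kg*m^2], damping, stiffness [N*m/rad]
theta0 = np.deg2rad(5.0)      # initial deflection; initial angular velocity is zero

# Integrate Eq. (5): I*theta'' + c*theta' + k*theta = 0
def rhs(t, y):
    theta, omega = y
    return [omega, -(c * omega + k * theta) / I]

t = np.linspace(0, 30, 5000)
num = solve_ivp(rhs, (0, 30), [theta0, 0.0], t_eval=t).y[0]

# Compare with the modulated single-frequency form of Eq. (6)
zeta = c / (2 * I)                # amplitude modulation parameter
wm = np.sqrt(k / I - zeta**2)     # main frequency
approx = theta0 * np.exp(-zeta * t) * np.cos(wm * t)
print(np.max(np.abs(num - approx)))   # small residual for light damping
```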
Let ω_m = √(k/I − (c/2I)²) be the main frequency of the torsion pendulum and ζ = c/(2I) be the amplitude modulation parameter that reflects the effect of damping. Let t_n be the time corresponding to the nth maximum (or minimum) value of the angular displacement curve, and let θ_n be the corresponding angular displacement. Then, we have the following:

$$|\theta_n| = \theta_0\, e^{-\zeta t_n}, \qquad t_{n+1} - t_n = \frac{\pi}{\omega_m} \tag{7}$$

Therefore, by extracting the extremum points of the torsion pendulum angular displacement curve, it is possible to calculate the main frequency ω_m and the amplitude modulation parameter ζ of the torsion pendulum and thus determine the rotational inertia I of the tested load. The calculation formula is as follows:

$$I = \frac{k}{\omega_m^2 + \zeta^2} \tag{8}$$

In addition, the torsion pendulum frequency ω_m can also be measured by extracting the zero-crossing points of the torsion pendulum angle displacement curve. Therefore, accurately recording the angle displacement curve of the torsion pendulum is the key to measuring the rotational inertia.
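A minimal sketch of this extremum-based identification, assuming a uniformly sampled angular displacement curve and a stiffness coefficient k known from calibration (Eqs. (7) and (8) are the reconstructions given above):

```python
import numpy as np
from scipy.signal import find_peaks

def inertia_from_curve(t, theta, k):
    """Estimate w_m and zeta from the maxima of a sampled torsion curve,
    then compute the rotational inertia via Eq. (8): I = k / (w_m^2 + zeta^2)."""
    peaks, _ = find_peaks(theta)                 # indices of the maxima
    t_pk, a_pk = t[peaks], theta[peaks]
    wm = 2 * np.pi / np.mean(np.diff(t_pk))      # successive maxima are one period apart
    # Slope of log-amplitude vs. time gives the exponential decay rate zeta.
    zeta = -np.polyfit(t_pk, np.log(a_pk), 1)[0]
    return k / (wm**2 + zeta**2), wm, zeta
```

Averaging the period over many maxima, as done here, is what makes the camera's slightly non-uniform sampling tolerable in practice.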
Principle of Recording Torsion Pendulum Curve Based on Monocular Vision

Monocular vision measurement technology can detect and record the size information of a measured object; this paper applies it to the recording of the torsion pendulum angle displacement curve. As shown in Figure 2, when monocular vision is used to record the torsion pendulum curve, a marker is first set at the edge of the load table of the torsion pendulum measurement platform. Then, the marker is continuously captured using a high-speed camera and an imaging lens.

Image and Coordinate System Conversion Relationship

Monocular vision measurement technology is theoretically built on a pinhole imaging model. To accurately describe the relationship between the object and the image, it is necessary to establish an accurate transformation relationship between the object plane coordinate system and the pixel coordinate system. The definitions of the relevant coordinate systems are shown in Figure 3. O-UV in the figure is called the pixel coordinate system, which reflects the arrangement of pixels in the camera's CCD/CMOS chip. Its origin is located at the upper-left corner of the image, and the U and V axes are parallel to the two sides of the image plane, with coordinate values as integers representing pixel numbers. O_s-X_sY_s is called the image plane coordinate system, which is also a two-dimensional Cartesian coordinate system, with its two axes parallel to the U and V axes of the pixel coordinate system, respectively. Its origin is located at the intersection of the optical axis and the image plane of the imaging system. O_t-X_tY_t is called the object plane coordinate system. For convenience, its two axes are set parallel to the two axes of the image plane coordinate system, respectively, and its origin is located at the intersection of the optical axis and the marker. According to the principle of pinhole imaging, the coordinates (x_t, y_t) of a point in the object plane can be transformed into the coordinates (u, v) in the pixel coordinate system of the image, as shown in Equation (9):

$$u = f_x \frac{x_t}{U_0} + u_0, \qquad v = f_y \frac{y_t}{U_0} + v_0 \tag{9}$$

namely,

$$x_t = \frac{(u - u_0)\, U_0}{f_x}, \qquad y_t = \frac{(v - v_0)\, U_0}{f_y} \tag{10}$$

In the equations, U_0 represents the object distance, and (u_0, v_0) represents the coordinate position of the origin of the image plane coordinate system in the pixel coordinate system. These parameters can be calibrated using Zhang's calibration method [35]. f_x is the normalized focal length in the U-axis direction, f_x = f/dx; f_y is the normalized focal length in the V-axis direction, f_y = f/dy, where f is the focal length of the lens, and dx and dy are the sizes of the image pixels in the two directions. Equation (10) establishes a one-to-one correspondence between the pixels in the image and the points on the object plane.
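The pixel-to-object mapping of Eq. (10) (reconstructed above) reduces to a two-line function; the calibration values in the example are purely illustrative:

```python
def pixel_to_object(u, v, U0, u0, v0, fx, fy):
    """Eq. (10): map pixel coordinates to object plane coordinates.
    U0: object distance; (u0, v0): principal point; fx, fy: normalized focal
    lengths -- all obtained from Zhang's calibration."""
    xt = (u - u0) * U0 / fx
    yt = (v - v0) * U0 / fy
    return xt, yt

# Example with assumed calibration values: fx = fy = 2000 px, U0 = 500 mm,
# principal point at (960, 540). A marker 140 px right of center maps to 35 mm.
print(pixel_to_object(1100, 540, 500.0, 960, 540, 2000.0, 2000.0))  # (35.0, 0.0)
```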
Method for Calculating the Angular Displacement of Torsion Motion

In this paper, black-and-white pattern boundaries are used as the marker to record the torsion vibration movements. When the camera system is set up, the marker is adjusted to align perfectly with the vertical axis of the image plane coordinate system. The center point of the marker is used as the measurement point to calculate the torsion angular displacement; during the torsion pendulum motion, only the changes of the horizontal coordinate component of this point need to be considered. Since the marker is pasted on a cylindrical surface, the center point of the marker deviates from the initial zero position when the torsion pendulum swings, resulting in a change in the object distance of the imaging system. As shown in Figure 4, it is assumed that the center point of the marker is A, its coordinates in the object plane coordinate system are (x₁, y₁), and the corresponding image point is A′. The pixel coordinates of A′ are (u₁, v₁), and the object plane coordinates of A can be obtained according to Equation (10).
Based on the geometric relationship shown in Figure 4, the formula for calculating the torsion angular displacement θ is as follows:

$$\theta = \arcsin\frac{x_1 - x_0}{R} \tag{11}$$

where x_0 represents the horizontal coordinate of the center point of the marker at the initial position, and R denotes the radius of the cylindrical surface on which the marker is pasted. The polarity of the angular displacement is determined by the polarity of the X-axis coordinate value of point A, and the initial torsion angle is defined as the positive polarity direction. As shown in the figure, the change in object distance ΔU_0 can be calculated using the following equation:

$$\Delta U_0 = R\,(1 - \cos\theta) \tag{12}$$

Further derivation can be obtained as follows:

$$R\sin\theta = \frac{(u_1 - u_0)\left[U_0 + R\,(1 - \cos\theta)\right]}{f_x} \tag{13}$$

As seen from Equation (13), there is only one unknown variable, θ, and the angular displacement at each time point can be accurately calculated by using the table lookup method. In engineering practice, to ensure safety during the measurement of the moment of inertia of large-mass objects, the pendulum angle is generally limited to a range of ±5°; cos θ over this range is approximately equal to 1, so the angular displacement can be approximately calculated using Equation (14):

$$\theta \approx \arcsin\frac{(u_1 - u_0)\, U_0}{f_x R} \tag{14}$$

where u_0 represents the horizontal pixel coordinate of the center of the marker at the initial position, and this value is 0 when the initial position is ideally aligned.
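Since Eqs. (11)–(14) are reconstructed from the surrounding text rather than recovered verbatim, the following table-lookup solver should be read under that caveat; the geometry values in the example (R, U0, fx) are assumed, not the paper's:

```python
import numpy as np

def theta_lookup(du, U0, fx, R, n=20001):
    """Solve the reconstructed Eq. (13) for theta by table lookup:
    g(theta) = fx * R * sin(theta) / (U0 + R * (1 - cos(theta))) = du = u1 - u0.
    R is the radius of the marker cylinder; g is monotonic on this small range."""
    grid = np.linspace(-np.deg2rad(6), np.deg2rad(6), n)   # +/-6 deg search range
    g = fx * R * np.sin(grid) / (U0 + R * (1 - np.cos(grid)))
    return np.interp(du, g, grid)

# Small-angle cross-check against Eq. (14): theta ~ du * U0 / (fx * R)
du, U0, fx, R = 20.0, 500.0, 2000.0, 100.0
print(np.rad2deg(theta_lookup(du, U0, fx, R)),
      np.rad2deg(du * U0 / (fx * R)))   # both close to 2.86 deg
```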
Image Processing Algorithm

The main purpose of the image processing algorithm is to extract the center point of the marker in each frame of the image. In order to make the marker features more distinct, preprocessing of the collected digital images is required, as shown in Figure 5. This mainly includes image distortion correction, ROI extraction, Gaussian filtering, binarization, and Canny edge detection. Image correction refers to the correction calculation of the image based on the distortion coefficients calibrated for the camera. The purpose of Gaussian filtering is to convolve the original image matrix with a weight matrix based on the Gaussian distribution, which helps reduce noise generated by the camera and the environment. Image binarization sets the grayscale of each pixel in the image to either 0 or 255 by comparison with a certain threshold, which highlights the features of the measured marker. Morphological operations are simple operations based on the shape of the image, such as dilation, erosion, and opening and closing operations. The Canny edge detection algorithm is used to extract the contour of the measured marker. In this paper, the powerful image processing capabilities of the OpenCV library are utilized to further process the preprocessed images. The algorithm for extracting the coordinates of the center point of the marker is shown in Figure 6. This paper mainly uses the HoughLines() function of the Hough transform to find the straight lines in the preprocessed image and determines whether they are marker lines based on the slopes of the lines. If a line is a marker line, the center point of the line is returned as the coordinates of the marker center.
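A sketch of this chain with standard OpenCV calls follows; it is not the paper's implementation, and the kernel size, Hough threshold, and slope tolerance are placeholders:

```python
import cv2
import numpy as np

def marker_center(frame, camera_matrix, dist_coeffs, roi):
    """Preprocess one frame and locate the (near-vertical) marker boundary.
    camera_matrix / dist_coeffs come from calibration; roi is (x, y, w, h)."""
    img = cv2.undistort(frame, camera_matrix, dist_coeffs)  # distortion correction
    x, y, w, h = roi
    img = img[y:y + h, x:x + w]                             # ROI extraction
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                # Gaussian filtering
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)                      # Canny edge detection
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
    if lines is None:
        return None
    for rho, theta in lines[:, 0]:
        if abs(theta) < np.deg2rad(5):      # near-vertical line = marker boundary
            # x cos(theta) + y sin(theta) = rho; evaluate at the ROI's mid-height
            u = rho / np.cos(theta) - (h / 2) * np.tan(theta)
            return x + u, y + h / 2         # marker center in full-image pixels
    return None
```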
Based on the analysis presented earlier, the overall algorithm of the proposed method for measuring rotational inertia based on monocular vision and torsion pendulum principles can be obtained. The measurement algorithm is briefly described as follows: (1) obtain the detection video and extract a frame of the image at a pertinent time point; (2) preprocess the image and call the algorithm for extracting the center point of the marker; (3) calculate the torsion angular displacement under the current image condition based on Formula (14); (4) repeat the above steps to complete the processing of all images, obtain the sequence of torsion angular displacement over time θ(t), draw the torsion pendulum curve, extract the zero-crossing and extremum points of the angular displacement curve, and calculate the measured rotational inertia based on the formula for calculating rotational inertia.

Experiment System

To verify the accuracy of the method proposed in this paper, an experimental system was built as shown in Figure 7. The system mainly consists of three parts: (1) a torsion pendulum measurement platform with an elastic element as its core, which uses a large-torque torsion spring as the elastic element; (2) a torsion motion recording and imaging system consisting of a high-speed industrial camera and an imaging lens, with the main parameters of the imaging system shown in Table 1; and (3) a computer and software system used to process the measured images and calculate the rotational inertia of the measured object.

After calibration, the optical magnification β of the constructed imaging system is about 0.1, and the size resolution of the object plane is about 0.035 mm. According to the radius of the cylinder that the marker is pasted on, the resolution of the angular displacement measurement can be calculated to be about 0.02°. In addition, other angular resolutions can be obtained by adjusting the distance between the camera and the detected object. Based on the camera frame rate, the time interval between adjacent detected frames is about 0.0167 s. After algorithm refinement, the measurement resolution of the torsion period is about 0.008 s, which fully meets the requirements for measuring the main frequency of torsion motion. Although the sampling time interval of the camera is not absolutely uniform, the period of the torsion vibration is calculated over multiple complete sinusoidal waveforms, so the inertia measurement error caused by non-uniform sampling can be ignored.
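As a rough consistency check on these numbers (the marker radius R is not stated in the extracted text, so this simply back-calculates it), the angular resolution is the object-plane resolution divided by the marker radius:

$$\Delta\theta \approx \frac{\Delta x}{R} \;\;\Rightarrow\;\; R \approx \frac{0.035\ \text{mm}}{0.02^{\circ} \times \pi/180^{\circ}} \approx 100\ \text{mm}$$

so the quoted 0.02° resolution is consistent with a marker cylinder radius on the order of 100 mm.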
To measure various rotational inertias, standard specimens with regular shapes (cylinders) and uniform mass distributions were prepared. The relative true values of the measured rotational inertia in the following experiments are all obtained by theoretical calculation.

Calibration of the Stiffness Coefficient and the No-Load Rotational Inertia of the Elastic Element

Based on Equation (8), further derivation can be obtained as follows:

$$k = I\left(\omega_m^2 + \zeta^2\right) \tag{15}$$

The torsion dominant frequency ω_m and the modulation parameter ζ in the equation can be obtained by analyzing the torsion angle displacement curve, so in order to obtain the stiffness coefficient k, only one known measured moment of inertia is needed. Suppose the unloaded moment of inertia of the measuring table is I_0, the torsion natural frequency is ω_{m0}, and the modulation parameter is ζ_0 under the unloaded condition. If the measured moment of inertia is I_1, the calculated torsion main frequency will be ω_{m1}, and the modulation parameter will be ζ_1. The unloaded moment of inertia and the stiffness coefficient of the elastic element can then be calculated by Equation (16):

$$I_0 = \frac{I_1\left(\omega_{m1}^2 + \zeta_1^2\right)}{\left(\omega_{m0}^2 + \zeta_0^2\right) - \left(\omega_{m1}^2 + \zeta_1^2\right)}, \qquad k = I_0\left(\omega_{m0}^2 + \zeta_0^2\right) \tag{16}$$

In this paper, a pair of uniformly dense, standard cylindrical metal bodies are used to calibrate the unloaded moment of inertia and the stiffness coefficient of the elastic element. The basic parameters of these two standard specimens are as follows: their masses are 0.5021 kg and 0.5006 kg, their diameters are 60.07 mm and 60.04 mm, and their moments of inertia about their own rotational axes are 2.265 × 10⁻⁴ kg·m² and 2.256 × 10⁻⁴ kg·m², respectively. To keep the bias within a controllable range, the two measured standard specimens are symmetrically placed on the load platform, and the distance between their center axes and the center of the torsion pendulum axis is denoted as L. The measurement system can obtain different values of the loaded moment of inertia when the center distance varies. Stiffness coefficient calibration experiments were performed in an unloaded state and in three different loaded states, and the calibration data of the stiffness coefficient are shown in Table 2. The torsion oscillation motion captured by the imaging system in the unloaded measurement is shown in Figure 8. The relative true values of the measured moment of inertia in the table are calculated based on the parallel axis theorem. The averages of the three calibration results are taken as the calibration values of the practical unloaded moment of inertia and the stiffness coefficient of the elastic element.
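Under the reconstruction of Eq. (16) above, the calibration step is a small closed-form computation; the numbers in the example are assumed, not taken from Table 2:

```python
def calibrate(wm0, zeta0, wm1, zeta1, I1):
    """Eq. (16) sketch: unloaded inertia I0 and stiffness k from an unloaded run
    (wm0, zeta0) and one run with a known added inertia I1 (wm1, zeta1)."""
    s0 = wm0**2 + zeta0**2
    s1 = wm1**2 + zeta1**2
    I0 = I1 * s1 / (s0 - s1)   # unloaded inertia of the measuring table
    k = I0 * s0                # stiffness coefficient of the elastic element
    return I0, k

# Example with assumed values: the loaded run oscillates more slowly.
print(calibrate(wm0=15.5, zeta0=0.02, wm1=11.0, zeta1=0.03, I1=1.0e-3))
```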
As shown in Figure 8, the machine vision method described in this paper captures the torsion angle displacement very well, and the characteristics of the measured torsion angle displacement curve are completely consistent with the theoretical analysis.

Correction of the Pose Deviation of the Imaging System

The method described in this paper has certain requirements for the pose of the imaging system. As shown in Figure 4, at the initial position it is necessary to ensure that the optical center of the imaging lens, the center point of the marker, and the center of the torsion pendulum are located on the same straight line, and the optical axis of the imaging system is required to be parallel to the line connecting the center point of the marker and the center of the torsion pendulum. Before the measurement, the pose of the imaging system is adjusted by turning on the camera, observing the position of the captured marker in the image, and adjusting the imaging system so that the centerline of the marker is exactly in the middle of the image and parallel to the U-axis of the image. After such adjustment, there may still exist an angle deviation of the optical axis of the imaging system relative to the ideal position (as shown in Figure 9a), which is the pose error (as shown in Figure 9b).
Figure 9 depicts the torsion swing displacement curves measured by the imaging system in the ideal state and with a pose error, respectively. Comparing these two cases shows that the main frequency of the torsion vibration does not change when the imaging system has a pose deviation, but significant errors occur in the calculation of the amplitude modulation parameter. For the case of large loads, such errors in the amplitude modulation parameter are intolerable, and it is necessary to adjust the pose of the imaging system. Figure 10 illustrates the torsion swing displacement curve when the imaging system's optical axis has a counterclockwise deviation angle relative to the line connecting the center of the marker and the center of the torsion pendulum. It can be seen that when there is a pose error in the imaging system, the amplitude and decay rate of the torsion swing displacement in the positive and negative directions are different, and the attenuation coefficients of the upper and lower envelope curves of the torsion swing displacement curve are also different. In order to estimate the deviation angle of the imaging system's optical axis, it is necessary to obtain the amplitudes of the torsion swing displacement in the clockwise and counterclockwise directions (i.e., the positive and negative polarities of the angular displacement) under a zero-damping condition. The amplitude of the torsion swing displacement in the clockwise direction is the initial displacement of the excitation position, as indicated by A₀ in Figure 10; when the damping effect is significant, the initial amplitude in the counterclockwise direction can be calculated from the lower envelope curve, as indicated by A₀′ in Figure 10.
The mathematical model for the lower envelope of the torsion pendulum curve is shown in Equation (17):

$$\theta_{\mathrm{low}}(t) = -A_0'\, e^{-\xi t} \tag{17}$$

where ξ represents the attenuation coefficient. By capturing the torsion pendulum displacement curve using the imaging system, extracting the minimum points, and substituting the two minimum points with the maximum time interval into Equation (17), the initial amplitude A₀′ and the attenuation coefficient ξ can be calculated. After obtaining the initial amplitudes in the clockwise and counterclockwise directions, the deflection angle of the imaging system's optical axis can be calculated from the asymmetry between them according to Equation (18). The sign of the angle φ indicates the direction of the deviation of the imaging system's optical axis relative to the line connecting the center of the marker and the center of the torsion pendulum: a positive angle represents a clockwise deviation, while a negative angle represents a counterclockwise deviation. Based on the estimated deviation angle, the pose of the imaging system can be corrected.
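The envelope fit of Eq. (17) reduces to two lines of algebra. Since Eq. (18) itself did not survive extraction, the sketch below stops at the amplitude asymmetry A₀ − A₀′ from which the deflection angle would then follow; the sample minima are assumed values:

```python
import numpy as np

def envelope_fit(t_min, th_min, A0):
    """Fit the lower envelope -A0' * exp(-xi * t) of Eq. (17) through two minima
    (chosen with the largest time separation); A0 is the known clockwise
    excitation amplitude."""
    (ta, tb), (tha, thb) = t_min, th_min           # theta values are negative minima
    xi = np.log(abs(tha) / abs(thb)) / (tb - ta)   # decay rate of the lower envelope
    A0p = abs(tha) * np.exp(xi * ta)               # extrapolate envelope back to t = 0
    return xi, A0p, A0 - A0p                       # asymmetry feeds the pose estimate

# Assumed sample minima at t = 0.5 s and 10.5 s with amplitudes -4.8 and -3.9
print(envelope_fit((0.5, 10.5), (-4.8, -3.9), A0=5.0))
```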
Rotational Inertia Measurement Experiments

To further verify the effectiveness of the proposed method, multiple measurement experiments were carried out under different load conditions. Figure 11 shows the torsion pendulum angular displacement curves captured by the imaging system under four different load conditions. The relative true values of the measured moment of inertia for these four load conditions are as follows: (a) 4.519 × 10⁻⁴ kg·m²; (b) 6.359 × 10⁻⁴ kg·m²; (c) 14.452 × 10⁻⁴ kg·m²; (d) 28.238 × 10⁻⁴ kg·m². It can be observed from the figure that as the measured moment of inertia increases, the main frequency of the torsion pendulum gradually decreases, and the amplitude attenuation becomes more significant.

By stacking the standard components or moving their positions, the system can obtain different measured rotational inertias. In order to analyze the measurement repeatability and correctness of the method and measurement system proposed in this paper, as well as the effect of damping on the measurement results, multiple sets of measurement experiments were conducted in 10 different load states, covering a range of rotational inertia from 0 to 100 × 10⁻³ kg·m² with an inertia difference of about 0.01 kg·m² between adjacent states. Ten identical measurement experiments were conducted in each load state. Figure 12 shows the standard deviation and the maximum measurement error of the 10 measurements for each load state. As shown in the figure, the standard deviation of the measurement increases as the measured rotational inertia increases. Within the range of 0–100 × 10⁻³ kg·m², the method proposed in this paper and the experimental system used achieve an accurate measurement of the measured rotational inertia: the absolute value of the maximum measurement error is less than 2.00 × 10⁻⁴ kg·m², and the standard deviation of the measurement is less than 0.90 × 10⁻⁴ kg·m².

To observe the influence of torsion damping on the measurement of the rotational inertia, the relative measurement errors of the average of 10 measurements when neglecting damping and when considering damping using the method proposed in this paper (i.e., calculating the measured rotational inertia using Formula (8)) were both calculated. The results are shown in Figure 13, where the average values of the amplitude modulation parameter ζ for each tested load state are also given.
As shown in Figure 13, within the experimental measurement range of 0–100 × 10⁻³ kg·m², the influence of damping on the measurement results is insignificant when the measured load's rotational inertia is relatively small. However, as the measured rotational inertia gradually increases, neglecting the effect of damping results in increasingly larger measurement errors: the relative error of the average value of 10 measurements can exceed 1%, and the relative error of a single measurement is even greater. Therefore, the impact of damping cannot be ignored when using a low-cost measurement system (without an air bearing). The proposed method and system significantly reduce the influence of damping on the measurements, and the relative error of the average value of 10 measurements is better than 0.1% within the experimental range. This measurement performance is comparable to that of a low-damping torsion pendulum measurement system equipped with an air bearing [21]. To further validate the effectiveness and applicability of the proposed method, rotational inertia measurement experiments were conducted on samples with different shapes and materials.
As shown in Figure 14, the rotational inertia of a standard sphere, the rotational inertia of a sleeve assembly consisting of a metal cylindrical sleeve and a cylindrical base, and the Z-axis rotational inertia of a quadrotor UAV were measured. Table 3 presents the rotational inertia measurement results for these three types of loads (Table 3, excerpt: standard deviation σ = 0.0151, 0.0679, 0.0303 for the three tested samples). From Table 3, it can be observed that the method described in this paper enables stable rotational inertia measurements for samples with different shapes, and the standard deviation of the measurements remains at a low level.

Among the three tested samples, the standard sphere and the sleeve assembly have regular shapes and uniform density, so their theoretically calculated rotational inertia values are taken as the relative true values for computing the measurement errors. For the quadcopter, the Z-axis rotational inertia quoted in the manufacturer's specifications is taken as the reference value for computing the measurement deviations. The errors or deviations for the 10 measurement experiments are shown in Figure 15 (Figure 15: inertia measurement error/deviation of the three samples). From the figure, the relative errors (or deviations) of the rotational inertia measurements for all three samples are better than 0.18% for a single measurement. These experiments demonstrate that the method and system described in this paper can adapt to the rotational inertia measurement requirements of various rigid bodies.

Conclusions

In contrast to previous studies, this paper innovatively applies machine vision technology to the measurement of the rotational inertia of rigid bodies.
By accurately recording the angular displacement of the torsional vibration using monocular vision, the identification of the torsional damping parameter (i.e., the amplitude modulation parameter) is achieved, thereby significantly reducing the impact of damping on rotational inertia measurements. An experimental system was constructed to validate the technical approach's feasibility and measurement performance. The following conclusions were obtained:

(1) A mathematical model for the measurement of rotational inertia using the torsion pendulum method was established. The solution to the torsional vibration differential equation under the assumption of linear damping was obtained, along with the relationship between the angular displacement, the damping parameter, and the rotational inertia. A calculation model for rotational inertia that accounts for damping was proposed.

(2) An experimental system based on monocular vision and a torsion pendulum platform was designed and built. The rotational inertias of different rigid bodies were measured experimentally. The imaging system successfully captured the torsional vibration of the pendulum platform and yielded the time-domain waveform of the torsional angular displacement. The measured rotational inertia was then determined from the calculation model, validating the feasibility of applying machine vision methods to rotational inertia measurement.

(3) Multiple measurement experiments on various types of rigid bodies demonstrated the good measurement accuracy and repeatability of the proposed method. Within the range of 0–100 × 10⁻³ kg·m², the standard deviation of the measurements was better than 0.90 × 10⁻⁴ kg·m², and the absolute value of the measurement error was less than 2.00 × 10⁻⁴ kg·m². The experimental results effectively demonstrate the effectiveness and applicability of the proposed method.

(4) The experimental results also revealed that neglecting damping can lead to measurement errors exceeding 1%. The proposed method identifies the system damping and corrects the rotational inertia calculation formula, significantly reducing the measurement errors. Moreover, the proposed system has a simple structure, low cost, and promising prospects for practical application.

Furthermore, the experimental results show that the absolute value of the damping factor increases with the measured load, i.e., the amplitude attenuation of the torsional motion becomes more significant. This may have two causes: (1) as the load increases, the internal damping of the elastic element during torsion increases (manifested as heating and torsional fatigue of the elastic element), so the effect of damping is more significant than for smaller loads; (2) the mathematical damping model used in this paper assumes linear damping, i.e., a damping torque proportional to the angular velocity of the torsional motion, whereas damping has various causes and different forms of damping may occur under different conditions, so actual mathematical models of the damping torque are more complex than the one used here.
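As a companion to conclusion (2), the following sketch shows one plausible way to identify ω_d and β from a sampled angular-displacement waveform by least-squares fitting of the linear-damping solution θ(t) = A·e^(−βt)cos(ω_d t + φ). The waveform here is synthetic stand-in data, the stiffness K is an assumed known constant, and the fitting approach is an assumption; the paper's own identification procedure is not reproduced in this excerpt.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for the vision-derived angular-displacement waveform;
# in the paper this signal comes from monocular tracking of the platform.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2000)
theta = 0.1 * np.exp(-0.05 * t) * np.cos((2 * np.pi / 1.8) * t)
theta += rng.normal(0.0, 1e-4, t.size)        # measurement noise

def damped_cosine(t, A, beta, omega_d, phi):
    """Underdamped solution of J*theta'' + c*theta' + K*theta = 0."""
    return A * np.exp(-beta * t) * np.cos(omega_d * t + phi)

# Identify beta and omega_d by nonlinear least squares.
p0 = [0.1, 0.02, 2 * np.pi / 1.8, 0.0]        # rough initial guesses
(A_fit, beta, omega_d, phi), _ = curve_fit(damped_cosine, t, theta, p0=p0)

K = 0.05                                      # torsional stiffness [N·m/rad], assumed known
J = K / (omega_d**2 + beta**2)                # damping-corrected inertia
J0 = K / omega_d**2                           # estimate neglecting damping
print(f"J = {J:.6f} kg·m², J0 = {J0:.6f} kg·m²")
```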
Parasites of Sardinella maderensis (Lowe, 1838) (Actinopterygii: Clupeidae) and Their Potential as Biological Tags for Stock Identification along the Coast of West Africa

Simple Summary

Sardinella maderensis, one of the most commercially important small pelagic fish species along the coast of West Africa, is suffering a drastic decline as a result of overfishing, the overcapacity of fishing fleets, the destruction of fish habitat, the use of inappropriate fishing gears and techniques, and environmental changes. The lack of reliable stock information for sustainable management is one of the problems facing small pelagic fish in general, and Sardinella maderensis in particular, along the coast of West Africa. The key goals of this study were to identify the parasites of Sardinella maderensis and to assess their potential use as biological tags for stock identification along the coast of West Africa (Benin and Ghana). The objectives were to determine the morphological parameters (total length and body weight) of S. maderensis, to identify their parasites in Benin and Ghana, and to select appropriate parasites with the potential to be used as biological tags. The results suggest that the nematode Anisakis sp(p). and the cestode Tentacularia coryphaenae may serve as potential biological tags for the stock identification of Sardinella maderensis.

Abstract

This study is the first to provide information on the parasite fauna of Sardinella maderensis along the coasts of Benin and Ghana, and the first to investigate the potential use of parasites as biological tags in fish population studies in the area. It may thus serve as a starting point for upcoming studies. From February to June 2021, a total of 200 S. maderensis were sampled from the fishing port of Cotonou (Benin) and the Elmina landing site (Ghana). The prevalence and abundance of each parasite were recorded. The outcomes of this study are as follows: the parasite species Parahemiurus merus, Mazocraeoides sp. and Hysterothylacium fortalezae were recorded along the coasts of both Benin and Ghana, while Anisakis sp(p). and Tentacularia coryphaenae were recorded only along the coast of Benin. Parahemiurus merus was the most prevalent and abundant of all the parasites recorded. Anisakis sp(p). and T. coryphaenae were selected as having potential for the stock identification of S. maderensis. Both parasites were recorded only along the coast of Benin, at a low prevalence. As a result, examinations of more S. maderensis from each location for these parasites may justify their use in stock identification studies.

Introduction

It has been recognized that the study of parasites in fish in sub-Saharan Africa needs greater attention, especially given the considerable aquaculture and wild-caught fishery industries found across the continent. Studies on marine fish parasitology have so far mainly been focused on parasite population surveys and new species identification. Research on the impact of parasites on economically harvested fish in this region, or on how parasite data can be used to enhance fisheries management, is limited [1]. Sardinella maderensis and S. aurita represent the most abundant and commercially important species of small marine pelagic fish along the coast of West Africa [2]. Together, they account for more than 40% and 16.2% of total landings in Ghana and Benin, respectively, with S. maderensis being the most abundant fish species in Benin [3,4].
The lack of reliable data on stock structure for the management of small pelagics in general, and of these two Sardinella sp(p). in particular, is a significant problem in this region. This problem can be addressed by providing fishery managers and scientists with reliable data using multiple methods of stock identification. The present study thus seeks to provide an alternative low-cost method for the stock identification of these marine pelagic species through the use of parasites as biological tags, to augment existing methods.

The basic principle underlying the use of parasites as tags in fish population studies is that fish can become infected with a parasite only when they are within the endemic area of that parasite. The endemic area is the geographic region in which conditions are suitable for the transmission of the parasite, including biotic factors, such as the presence of other hosts essential for the completion of the parasite's life cycle, and abiotic factors, such as temperature and salinity. If infected fish are found outside the endemic area of the parasite, we can infer that these fish were in the parasite's endemic area at some time in the past [5]. Fish can thus be said to carry a "parasitological fingerprint" by which their past movements can be traced. Various authors have listed criteria or guidelines for the selection of parasites suitable for use as biological tags in fish population studies [5]. The most important of these is that the parasite should have a lifespan in the target host appropriate to the nature of the study; for stock identification studies, this means a lifespan of more than one year. The fish parasites that best meet this criterion are the larval stages of helminths, such as trematode metacercariae and larval nematodes and cestodes. These are "resting" stages in the fish host, which may persist in this state for many years. The efficiency of the parasite tag approach thus relies on sufficient information on the biology and ecology of the parasite, particularly with regard to its life cycle and its lifespan in the fish. A lack of such information was earlier recognized as a limiting factor, but with the increase in studies of marine parasite biology in recent years, the resulting information has greatly increased the efficiency of the method. The use of parasites as biological tags in fish population studies has now become a widely accepted method of stock identification [6]. The use of parasites as biological tags has the following advantages over other methods of stock identification:

• It is more appropriate for studies of small, delicate species of fish, such as small clupeoids, for which artificial tags can be used only with difficulty, or not at all.
• Using parasites as tags is more cost-effective than artificial tagging because fish samples can be obtained from the routine sampling of commercial or research vessel catches, without the need for costly dedicated sampling programs.
• The use of parasite tags has an advantage over host genetics because it can often identify subpopulations of fish distinguished by behavioral differences but between which there is still considerable gene flow, which can render genetic studies inconclusive.

A recent stock identification study of the South African sardine Sardinops sagax is an excellent example of what can be achieved in the fishery management of a small pelagic species using parasites as biological tags [7,8].
Using parasites in this way proved to be a powerful tool for population structure studies of these sardines and provided more convincing support for a multiple-stock hypothesis than other methods of stock identification [8]. Using that study as an example, the aim of the present study is to present the results of a preliminary survey carried out to determine which parasites infect the target host in the study area. We also identify those parasites of S. maderensis with the potential to be used as biological tags for stock identification, the null hypothesis being that all fish populations in the study area belong to a single stock.

Sample Collection

A total of 200 specimens of S. maderensis, consisting of 100 specimens each from the Cotonou fishing port in Benin (6°21′4.212″ N, 2°25′58.296″ E) and the Elmina landing site, Elmina, Ghana (5°04′57.3″ N, 1°21′02.6″ W) (Figure 1), were obtained from artisanal catches from February to June 2021. The specimens were kept on ice and transported to the laboratory. The Benin samples were analyzed at the laboratory of Parasitology and Ecology of Parasites of the Department of Zoology, University of Abomey-Calavi, and the Ghana samples were analyzed at the laboratory of the Department of Fisheries and Aquatic Sciences, University of Cape Coast. These two study areas were chosen because of the existence of two nurseries of S. maderensis along the coast of Ghana: the first is located on the east coast of Ghana, a stock shared with Togo and Benin, and the second on the west coast of Ghana, a stock shared with Côte d'Ivoire [9].

Morphological Data

The total length (TL) of the fish was measured as the length from the snout to the most posterior part of the caudal fin. Total lengths were measured to the nearest 0.1 cm using a measuring board. The body weight (BW) of the fish was measured to the nearest 0.1 g by placing the fish on an ADAM electronic balance.
The samples were sexed by opening the fish and observing the characteristics of the gonads.

Parasite Collection

The protocol for parasite collection used in this study was that of the book Parasites of Marine Fish and Cephalopods [10]. Ectoparasites were examined macroscopically on the fish's body surface and apertures (eyes, skin, fins, gills, nostrils, anus and mouth cavity) using a hand lens and an AmScope dissecting microscope at 30× magnification. The mucus was scraped from the skin, fins, nasal pits, gills and the internal portion of the operculum and examined for ectoparasites under a Motic microscope at 10× and 40× magnification. The eyeballs were removed and then punctured with a syringe to extract the eye fluid, which was examined under the Motic microscope for digenean metacercariae. For endoparasites, the fish specimens were dissected by applying four incisions. The first incision was made vertically from the anus to the end of the lateral line; the second through the end of the lateral line to the beginning of the upper operculum bone; the third from the beginning of the upper operculum bone to the lower operculum bone; and the final incision horizontally through the ventral portion of the fish to the lower operculum bone. The dissected parts of the fish were removed, and the organs were exposed. The viscera were split into the stomach, pyloric caeca, intestine, gonad, gall bladder, liver, kidney and spleen. All the organs were removed, placed in labelled Petri dishes and covered with a 0.9% saline solution. The stomach, intestine and pyloric caeca were opened longitudinally, and the contents were scraped and examined for parasites. The gall bladder was punctured, and the bile was examined for parasites. Smears of the liver, pylorus, kidney and spleen were prepared by cutting a small piece of each organ, gently pressing it with the back of the forceps on a microscope slide, and examining it under the Motic microscope at 40× magnification.

Parasite Preparation and Preservation

All the parasites recorded in this study, except nematodes, were counted and fixed in 70% ethanol. They were stained with borax carmine and cleared with eugenol (clove oil), whereas the nematodes were cleared with glycerin. All the parasites were mounted in Canada balsam and viewed under the Motic microscope at different magnifications depending on the size of the specimen. The captured images were used for taxonomic identification.

Data Analysis

All the data collected in this study were skewed; hence, non-parametric tests were performed.

Morphological Data Analysis

For morphometric data, a Mann-Whitney U test was performed to determine whether fish total length and weight differed significantly between the sampling locations. Additionally, a Kruskal-Wallis test was conducted to determine whether fish total lengths differed significantly among sexes (male, female and indeterminate), followed by a pairwise post-hoc Dunn's test for multiple comparisons with Bonferroni adjustments.
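As a rough illustration of this analysis pipeline, the following Python sketch applies the same tests with scipy on synthetic data; the paper's raw measurements are not reproduced here, so all values and group sizes below are illustrative assumptions. scipy has no built-in Dunn's test, so Bonferroni-corrected pairwise Mann-Whitney comparisons are used as a stand-in for the post-hoc step.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical total lengths (cm); synthetic stand-ins for the measurements.
benin = rng.normal(25.1, 3.0, 100)
ghana = rng.normal(23.0, 4.0, 100)

# Between-location comparison (two-sided Mann-Whitney U test).
u, p = stats.mannwhitneyu(benin, ghana, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")

# Across-sex comparison (Kruskal-Wallis); group sizes follow the pooled sex
# counts reported in the results, the values themselves are synthetic.
groups = {
    "male": rng.normal(24.0, 3.0, 91),
    "female": rng.normal(25.5, 3.0, 68),
    "indeterminate": rng.normal(19.0, 2.0, 41),
}
h, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.4f}")

# Post-hoc step: Bonferroni-corrected pairwise Mann-Whitney tests as a
# scipy-only substitute for Dunn's test.
pairs = [("male", "female"), ("male", "indeterminate"), ("female", "indeterminate")]
for a, b in pairs:
    _, p_ab = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: adjusted p = {min(1.0, p_ab * len(pairs)):.4f}")
```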
Parasitological Data Analysis

For parasitological data, the prevalence P(%) and mean abundance (MA) of infection were calculated according to [11]:

P(%) = (nᵢ / N) × 100, where nᵢ is the number of hosts infected with parasite species i and N is the total number of hosts examined.

MA = Iᵢ / N, where Iᵢ is the total number of individuals of parasite species i recovered and N is the total number of hosts examined [10].

The prevalences of parasites were compared among locations using unconditional exact tests [12]. Abundances were compared between localities using the bootstrap t-test. The bias-corrected and accelerated bootstrap (BCa bootstrap) was used to provide the confidence interval of the mean abundance. The length classes of the fish were compared with parasite abundance using the Mann-Whitney U test. The comparison between the sex categories (males, females and indeterminate) and the abundance of parasites was performed using the Kruskal-Wallis test. Spearman's correlation was used to assess the relationship between parasite abundance and the length classes. All statistical analyses were performed using the Quantitative Parasitology Web portal (https://www2.univet.hu/qpweb/qp10/ (accessed on 28 July 2022)) [12] and the Statistical Package for the Social Sciences (SPSS) 2019, version 26. The significance level was set at p < 0.05.
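The two indices above are straightforward to compute from per-host counts. The sketch below does so on hypothetical data and compares prevalences between the two locations with Barnard's test (scipy.stats.barnard_exact), one example of an unconditional exact test; whether it matches the portal's exact implementation [12] is not established here, and the counts are invented for illustration.

```python
import numpy as np
from scipy import stats

def prevalence_and_mean_abundance(counts):
    """counts[j] = number of individuals of one parasite species in host j."""
    counts = np.asarray(counts)
    prevalence = 100.0 * np.count_nonzero(counts) / counts.size    # P(%)
    mean_abundance = counts.sum() / counts.size                    # MA
    return prevalence, mean_abundance

# Hypothetical per-host counts for one parasite species (100 hosts per site);
# these are not the paper's raw data.
benin_counts = np.array([0, 2, 0, 5, 1, 0, 3, 0, 0, 1] * 10)
ghana_counts = np.array([0, 0, 1, 0, 0, 4, 0, 0, 0, 0] * 10)
print(prevalence_and_mean_abundance(benin_counts))   # (50.0, 1.2)
print(prevalence_and_mean_abundance(ghana_counts))   # (20.0, 0.5)

# Prevalence comparison between sites via Barnard's unconditional exact test
# on the 2x2 table [[infected, uninfected], [infected, uninfected]].
table = [
    [int(np.count_nonzero(benin_counts)), int(np.count_nonzero(benin_counts == 0))],
    [int(np.count_nonzero(ghana_counts)), int(np.count_nonzero(ghana_counts == 0))],
]
res = stats.barnard_exact(table, alternative="two-sided")
print(f"Barnard's exact test: p = {res.pvalue:.4f}")
```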
Morphological Data

Specimens from Benin varied from 14.5 to 32.2 cm in total length, with a mean length of 17.70 ± 2.97 cm, whereas those from Ghana varied from 16.00 to 32.00 cm, with a mean length of 16.00 ± 3.98 cm. The body weight of the specimens from Benin varied from 28.00 to 287.00 g, with a mean of 144.95 ± 47.25 g, and that of the Ghana specimens varied from 38.46 to 258.92 g, with a mean of 120.53 ± 57.41 g (Table 1). In Benin, 50% of the sampled specimens were males, while females and indeterminates constituted 47% and 3%, respectively. In Ghana, males represented 41% of the sampled specimens, while indeterminates and females represented 38% and 21%, respectively (Table 1).

Comparison between Fish Total Length and Body Weight across Sample Locations (Benin and Ghana)

A Mann-Whitney U test demonstrated that the fish recorded in Benin were significantly longer (median Mdn = 25.10 cm, n = 100) than those recorded in Ghana (Mdn = 23.00 cm, n = 100) (p = 0.001), with a small effect size r = 0.29. The same test showed that the fish recorded in Benin were significantly heavier (Mdn = 140.00 g, n = 100) than those recorded in Ghana (Mdn = 119.14 g, n = 100) (p = 0.003), with a small effect size r = 0.21.

Comparison between Fish Total Lengths and Body Weight across Sex Categories (Male, Female and Indeterminate)

A Kruskal-Wallis test showed a significant difference in fish total lengths across sexes (p = 0.001), and likewise in fish body weight across sexes (p = 0.001). A pairwise post-hoc Dunn test with Bonferroni adjustments indicated that the indeterminate sex differed significantly from males (χ² = 80.08; p = 0.001) and females (χ² = 110.10; p = 0.001) in terms of total length. Additionally, there was a significant difference between males and females (χ² = 30.08; p = 0.004). Therefore, all the sexes differed significantly from each other in terms of total length (Figure 2). The same test showed that the indeterminate sex also differed significantly from males (χ² = 79.57; p = 0.001) and females (χ² = 110.66; p = 0.001) in terms of body weight, with a significant difference between males and females as well (χ² = 31.09; p = 0.002). Therefore, all the sexes differed significantly from each other in terms of body weight (Figure 3).

Larvae of Anisakis sp(p). and 7 cestodes (Tentacularia coryphaenae) were also recorded during this study. The digenean Parahemiurus merus was the most prevalent of all the parasites found, with a frequency of occurrence of 45% in Benin and 21% in Ghana and corresponding mean abundances of 1.63 ± 2.76 and 0.75 ± 2.08, respectively (Table 2).

Comparison of Parasite Prevalence and Mean Abundance of Infection across Sampling Locations

The prevalence and mean abundance of P. merus recorded in Benin were higher than those recorded in Ghana (Table 2). The unconditional exact test revealed that the prevalence of P. merus in Benin differed significantly from that in Ghana (p < 0.05). A bootstrap two-sample t-test based on 2000 bootstrap replications showed that the mean abundance of P. merus in Benin differed significantly from that in Ghana (p < 0.05) (Table 3, which lists the parasites displaying significant differences in prevalence (%) and/or mean abundance in S. maderensis).

The prevalence and mean abundance of H. fortalezae recorded in Ghana were higher than those recorded in Benin (Table 2). However, the unconditional exact test revealed no significant difference between the prevalence of H. fortalezae in Benin and that in Ghana (p > 0.05) (Table 3). The mean abundances of H. fortalezae were not compared due to low numbers.

The prevalence and mean abundance of Mazocraeoides sp. recorded in Ghana were higher than those recorded in Benin (Table 2). The unconditional exact test showed that the prevalence of Mazocraeoides sp. in Ghana differed significantly from that in Benin (p < 0.05). A bootstrap two-sample t-test based on 2000 bootstrap replications revealed no significant difference between the mean abundances of Mazocraeoides sp. in Ghana and Benin (p > 0.05) (Table 3).

Comparison of Parasite Prevalence and Mean Abundance of Infection across Fish Length Classes

Two length classes (≤25.00 cm and >25.00 cm) were selected from the total length data (14.5-25.0 cm and 25.01-32.2 cm).
Parahemiurus merus and T. coryphaenae were more prevalent in the length class >25.00 cm than in the length class ≤25.00 cm, whereas H. fortalezae, Anisakis sp(p). and Mazocraeoides sp. had a higher prevalence in fish of the length class ≤25.00 cm (Figure 4). Further analysis indicated that the abundance of P. merus differed significantly across the two length classes (p < 0.05), with a small effect size r = 0.21.

Comparison of Parasite Prevalence and Mean Abundance of Infection across Fish Sex Categories

Among the sexes, male S. maderensis had the highest prevalence values for P. merus and Anisakis sp(p). compared to the female and indeterminate sexes. Conversely, females had the highest prevalence of T. coryphaenae. Hysterothylacium fortalezae was most prevalent in the indeterminate sex category. The female and indeterminate sexes of S. maderensis had a higher prevalence of Mazocraeoides sp. than males (Figure 5). The Kruskal-Wallis test showed that the abundance of P. merus differed significantly between the sexes (Kruskal-Wallis test: N = 200; df = 2; χ² = 24.01; p < 0.05). A pairwise post-hoc Dunn test with Bonferroni adjustments indicated that the indeterminate sex differed significantly from males (χ² = 31.83; p < 0.05) and females (χ² = 44.62; p < 0.05) in terms of the abundance of P. merus, while there was no significant difference between males and females (χ² = 12.79; p > 0.05). Therefore, the indeterminate sex category differed significantly from females and males in terms of the abundance of P. merus, whilst males and females did not differ significantly (Figure 6).
Relationship between Abundance of Parasites and Fish Length

Spearman's correlation test showed a significant but weak positive linear relationship between fish length and the abundance of P. merus only (r = 0.21, p < 0.05) (Figure 7).

Fish Morphological Data

The results obtained from this study show that the total length of S. maderensis collected along the coast of Ghana ranged from 16.00 to 32.00 cm, while that of fish collected along the coast of Benin ranged from 14.50 to 32.20 cm. These total lengths were similar to those obtained for S. maderensis (14 to 32 cm) along the coast of Benin [4] and longer than those recorded for the same species (9.8 to 28.2 cm TL) along the coast of Ghana [13]. They were, however, smaller than those recorded from the Liberian coast (5.5 to 42 cm TL) [14]. The body weights of the fish recorded in this study were greater than those recorded along the Nigerian coast (9.73 to 39.55 g) [15] and in the south-west of Turkey (10.8 to 73 g) [16]. These variations in total length and body weight may be due to environmental factors, such as temperature, salinity and food availability, as well as genetic diversity.

Parasitological Data

In the present study, four parasite groups (Monogenea, Digenea, Cestoda and Nematoda) and five genera of parasites (Parahemiurus merus, Hysterothylacium fortalezae, Anisakis sp(p)., Tentacularia coryphaenae and Mazocraeoides sp.) were recorded along the coasts of Benin and Ghana. The digenetic trematode P. merus was the most predominant of the parasite species infecting S. maderensis in the two sampling areas. This parasite has been recorded in many marine fish species worldwide, including clupeids [17]. It was reported previously in Sardinella cameronensis (S. maderensis) along the coasts of Ghana [18] and Senegal [19], and in Sardinella aurita from the Gulf of Gabès, Tunisia [20,21] and from the Algerian coast [22]. The prevalence of P. merus in Benin (45%) was higher than that in Ghana (21%) during the study. The prevalence of this species in Benin and Ghana was lower than that recorded from S. aurita in Bizerte (84%), Kelibia (84.44%), Mahdia (48.05%) and Zarzis (86.84%) off the coast of Tunisia [21], but higher than that recorded from S. aurita from Gabès (11.57%) off the coast of Tunisia [21] and from the Algerian coast (5.31%) [22]. The high prevalence of P. merus recorded in this study may be due to the abundance of its intermediate hosts in the study area. The life cycle of P. merus remains unknown, but metacercariae have been reported from chaetognaths [23]. Gastropod molluscs and copepods are presumed to be the primary and secondary intermediate hosts of P. merus [24]. However, known invertebrate hosts were not examined in this study. Two nematode parasites were recorded in S. maderensis, namely the third larval stages of Hysterothylacium fortalezae, along the coasts of Benin and Ghana, and Anisakis sp(p)., in Benin.
Hysterothylacium fortalezae larvae were previously reported from some midwater and benthopelagic stomiiform fish in the northern Gulf of Mexico [25], and from Selene setapinnis in the state of Rio de Janeiro, Brazil [26]. They were also recorded from Percophis brasiliensis in the municipality of Niterói, Rio de Janeiro, Brazil [27]. However, this species has not been reported from any fish along the coast of West Africa. The only known definitive host of H. fortalezae is the serra Spanish mackerel Scomberomorus brasiliensis [28]; the West African Spanish mackerel Scomberomorus tritor may, however, be a likely host in the present study area. The prevalence of H. fortalezae recorded in this study in fish collected along the coasts of Benin (2%) and Ghana (4%) was lower than that recorded from Selene setapinnis (26.7%) [26] and from Percophis brasiliensis (21.87%) in the municipality of Niterói, Rio de Janeiro, Brazil [27]. However, the prevalence of H. fortalezae in specimens from Ghana (4%) was higher than those recorded from Pollichthys mauli and Polyipnus clarus (3% and 1%, respectively) in the northern Gulf of Mexico [25]. The third-stage larvae of Anisakis sp(p). were found only in Benin, at a low prevalence of 5%. They have also been reported in S. maderensis off the coast of Nigeria, with a prevalence of 2% [29]. The genus Anisakis currently comprises nine "cryptic" or "sibling" species [30], which are very similar in morphology and usually require molecular methods for specific identification. Cetaceans, mainly toothed whales, are the definitive hosts of Anisakis sp(p). Pelagic crustaceans are the first intermediate hosts, while larger crustaceans, particularly euphausiids, and smaller fish species are thought to be the important second intermediate hosts; larger fish and cephalopods serve as paratenic hosts [31,32]. A monogenean parasite of the genus Mazocraeoides was recorded from the gills of S. maderensis in both study areas. This monogenean genus is relatively diverse in species and infects many species of clupeid fish [33,34]. A species of Mazocraeoides was found infecting Sardinella longiceps from the Visakhapatnam coast, Bay of Bengal, India [34], but none appear to have been previously reported from S. maderensis, so this may be a new species. The genus is characterized by a broad body and clamps arranged along the lateral margins of the body, with the anterior pair anterior to the level of the ovary [35]. The plerocercoid of the trypanorhynch cestode Tentacularia coryphaenae was found only along the coast of Benin. The Trypanorhyncha is the most species-rich order of cestodes infecting elasmobranch fish as definitive hosts. Larval trypanorhynchs of as many as 14 genera have been reported from second intermediate hosts, mostly teleosts, but the identities of the elasmobranch hosts of many of these larvae have yet to be established. Various invertebrate groups, as well as possibly fish, apparently serve as the first intermediate hosts [36]. While most trypanorhynch species are fairly host-specific, at least as adults, T. coryphaenae has been reported from 11 different shark species, and its larvae have been reported from more than 60 teleost species [37]; this appears, however, to be the first record from S. maderensis.
The prevalence of T. coryphaenae (6%) recorded along the coast of Benin was lower than those recorded off the coast of South Africa in oilfish (Ruvettus pretiosus) (100%) and snoek (Thyrsites atun) (49.7%) [38,39], and in the black scabbardfish (Aphanopus carbo) (25.8%) from Portuguese waters [40]. However, it was higher than those recorded from Scomber japonicus (2%) and South African sardines (Sardinops sagax) (1%) off the coast of South Africa [7,41]. The absence of T. coryphaenae in the fish sampled from Ghana may be a result of the low number of fish sampled. It may also be related to the differential occurrence of suitable shark definitive hosts in the two study areas.

Relationship between Abundance of Parasites and Fish Size

In this study, a weak positive relationship was found between the abundance of P. merus and the length of S. maderensis, which implies that the abundance of P. merus increases with fish length. A positive relationship was also reported from Anchoa tricolor, Spagrus spagrus and Opisthonema oglinum along the coast of Brazil [42-44]. This may be because larger S. maderensis consume greater numbers of the intermediate prey organisms of this parasite [45,46]. Both juvenile and adult S. maderensis show a preference for crustaceans [47,48].

Parasites Selected as Potential Biological Tags

Two parasite taxa, Anisakis sp(p). and T. coryphaenae, were selected following established guidelines [5] as potentially useful biological tags for the stock identification of S. maderensis along the coast of West Africa. Both parasites have long-lived larval stages that survive as "resting" stages in their fish intermediate hosts for several years, possibly for as long as the infected host lives. Anisakis sp(p). larvae have proved to be amongst the best tag parasites for small pelagic fish [49], and T. coryphaenae was selected by [50] as potentially the most valuable biological tag for the stock identification of skipjack tuna, Katsuwonus pelamis. As the genus Anisakis comprises a group of nine species, each with its own cetacean host preferences, it is crucially important to identify the species using molecular methods. This identification can then be related to the known occurrence of the cetacean host(s) in the study area. This approach cannot be used for T. coryphaenae because of its wide specificity to both fish intermediate and definitive hosts, but statistically significant variations in prevalence and abundance between sampling areas may be indicative of different dietary compositions between fish populations [51]. However, the stomach contents of the hosts were not examined in this study. For example, the occurrence of T. coryphaenae in Benin but not in Ghana may be related to differences in the diet of S. maderensis between the two areas. Parahemiurus merus and Mazocraeoides sp. have short-lived adult stages and are therefore not considered useful as biological tags for fish stock identification, but they may be useful in seasonal migration studies. More information is needed on the life cycle of H. fortalezae, particularly regarding the identity of its definitive host(s) in the study area, before its potential as a biological tag can be assessed.

Conclusions

The present study set out to apply parasite data to the stock identification of Sardinella maderensis, one of the most valuable small pelagic fish species along the coast of West Africa. Tentacularia coryphaenae and Anisakis sp(p).
were found to have potential for the future stock identification of S. maderensis along the coasts of Benin and Ghana and in adjacent areas of West Africa. Even though the prevalence of these two parasites was low, the fact that they were found in fish sampled from only one of the two localities is promising. Examinations of more S. maderensis for Anisakis and T. coryphaenae from Ghana and Benin, and from adjacent areas, would clarify the distribution of each parasite. It is also important to take samples in different seasons to check for seasonal migratory patterns. The genetic identification of the species of Anisakis present in the samples is essential, as their distribution could then be related to the known distribution of their cetacean hosts. Finally, given the limited research on marine parasitology in West Africa, there is an urgent need to continue exploring this area of study.
Research on Online and Offline Mixed Teaching Practice Based on a College Film and Television Literature Course

With the rapid development of new media technology, the demand for applied professional film and television talent in China's film and television industry is growing, especially with regard to comprehensive practical ability. In the Internet age, information technology has been widely used. It is integrated into people's lives with its unique characteristics of interaction and communication among all participants, and it opens up a new mode of work, study and life with the Internet as the medium. Under the new situation, schools of all kinds and at all levels also make full and active use of the Internet for online teaching. In the past, quality monitoring focused on theoretical teaching. However, in the face of online and offline mixed practice teaching, many problems and deficiencies are bound to arise when the quality monitoring system designed for theoretical teaching is used. This paper mainly discusses the application of the online and offline mixed tutorial mode in the film and television literature course, from the aspects of task-oriented teaching content, the application of classroom teaching methods, and the diversification and dynamics of evaluation methods, hoping to provide a reference for college teachers who implement online and offline mixed teaching.

Introduction

Human society entered the era of the information society and the knowledge economy at the end of the twentieth century. The mass media is credited with ushering in this era. Film and television are examples of rapidly evolving mass media in the twentieth century [1]. With the rapid advancement of new media technology, the demand for application-oriented professional film and television talent in China's film and television industry is growing, particularly for those with a broad range of practical skills [2]. In the process of cultivating students, film and television majors must strengthen the cultivation of students' cognition and practical ability in addition to imparting basic theories [3]. As the most vigorous modern comprehensive art, film and television art has, from its birth onward, been based on the development of science and technology; it is a new artistic flower blooming on the tree of modern science and technology [4]. With the application of a series of high and new technologies, such as the Internet, digital technology, multimedia technology and interactive TV, the charm of film and television art is increasing day by day [5]. Compared with traditional poetry, novels, prose and drama, film and television literature should be considered a new literary style. People's understanding of film and television literature lags behind not only their understanding of traditional literary styles, but also their understanding of film and television art [6]. In the Internet age, information technology (IT) has been widely used. It integrates into people's lives with its unique characteristics of interaction and communication among all participants, and it opens a new mode of work, study and life with the Internet as the medium. Under the new situation, schools of all kinds and at all levels also make full and active use of the Internet for online teaching [7].
Different from most tutorial modes dominated by online network teaching, hybrid teaching is not confined to a single teaching method. Through the combination of online digital education and offline classroom teaching, it emphasizes a student-centered approach, gives full play to the enthusiasm, initiative and creativity of students as learning subjects, and advocates using online educational resources and IT to support curriculum teaching and improve the learning effect [8,9]. The core meaning of mixed teaching does not lie in the unique innovation of teaching means, but in whether the student-centered learning goal is realized. Therefore, any means that helps to improve the learning effect can be used, which greatly expands the inclusiveness of mixed teaching [10]. At present, online and offline mixed teaching is a new tutorial mode, and its practical teaching is still at the exploratory stage [11]. In the past, quality monitoring focused on theoretical teaching, and in the face of online and offline mixed practical teaching links, there are bound to be many problems and deficiencies when the quality monitoring system for theoretical teaching is used [12]. This paper discusses the application of the online and offline mixed tutorial mode in film and television literature courses, mainly from the aspects of task-based teaching content, the application of the divided classroom teaching method, and diversified and dynamic evaluation methods, hoping to provide a reference for university teachers who implement online and offline mixed teaching.

Film and television literature is a relatively broad concept. It is the collective name and abbreviation of film literature and television literature, and a new literary style that arose with the prosperity of film and television art [13]. Film and television art has a larger audience than other arts, unmatched by any other art category, such as poetry, novels or drama; a new era of film and television art has come [14]. Films, TV dramas and other film and television art works not only penetrate human lifestyles and change human concepts and consciousness, but have also given birth to a new cultural form, film and television literature [15]. The student-centered hybrid tutorial model can effectively integrate all kinds of teaching resources and teaching forms into the classroom [16]. This rich and diverse teaching form is exactly what is urgently needed in a writing classroom aimed at stimulating students' enthusiasm and creativity [17]. The organic combination of online information-based teaching means and traditional classroom teaching can effectively make up for the poor teacher-student interaction of online teaching alone, and meet the needs of writing courses in simulating situations and inspiring emotional resonance. Based on the film and television literature course, this study explores the online and offline mixed tutorial mode and how to flexibly combine online courses with the traditional classroom to enhance students' knowledge application and integration abilities, providing new ideas and experience for teaching reform in university classrooms.

Related work

Film and television literature is a relatively broad concept, the collective name and abbreviation of film literature and television literature, and a new literary style that appeared with the prosperity of film and television art [18].
Literature [19] mentions that film and television literature is also an auditory art: all descriptions of dialogue, monologue and narration should take into account the needs of the pictures, with the picture as the main body of expression, while music, language and sound all serve to expand and strengthen the expressive force of the pictures. According to the literature [20], because film and television art developed on the basis of photography, it must truly reproduce the object and its movement, so that it approaches life to the maximum extent in its form of expression, and the screen image is not only visible but also realistic. Film and television literature, according to Literature [21], is a television art form that vividly reflects life, shapes characters and expresses emotions through special screen modelling means, imparting literary aesthetic taste to audiences. From the standpoint of the birth process of film and television literature, literature, as an artistic form, has become an organic part of a new type of literature as it has been absorbed and integrated, according to Literature [22]. Teachers should strive to meet the needs of curriculum reform, continue to learn, update their ideas, and improve their own cultural literacy, according to the literature [23]. They should also study textbooks carefully and strengthen the inspiration and guidance provided to students through cooperation and interaction in an equal dialogue with them. In this paper, the mixed tutorial mode is introduced into the teaching of writing courses at universities, and it is sorted out and summarised in combination with the actual teaching effect.

Connotation of the online and offline mixed tutorial mode

As a kind of integrated tutorial mode, blended teaching is more flexible in its learning methods. Through the online and offline hybrid tutorial mode, students can search for relevant learning materials through the Internet anytime and anywhere, so as to learn in fragmented time [24], and they can prepare or review according to the materials pushed by the teacher, laying the foundation for the teacher's explanation of knowledge in class. Teachers and students strengthen the interaction and exchange of knowledge understanding in class, and relevant information can be prepared for students to strengthen review and expansion after class. The purpose of hybrid teaching is to combine the advantages of traditional teaching methods with the advantages of network learning, so as to realize their complementary advantages and obtain better teaching results. From the perspective of the teaching platform, the hybrid tutorial mode is mainly implemented based on MOOC (massive open online course) platforms and university network teaching platforms. MOOCs have the advantages of large scale, high efficiency, low cost, excellent teachers and flexible time, but there are still deficiencies in the depth of knowledge interaction. Although a MOOC can preach and teach, it cannot effectively resolve doubts or achieve efficient and in-depth knowledge interaction. Teachers cannot provide one-on-one personalised learning guidance to learners in the process of knowledge interaction in the MOOC context, and learners find it difficult to conduct in-depth knowledge exchange; knowledge interaction links such as discussion are difficult to match. As a result, it is critical to supplement online instruction with offline instruction to compensate for its shortcomings.
However, in order to achieve the effect of complementary advantages, online and offline teaching should be coordinated with each other, and their proportions should be appropriate and reasonable [25]. The mixed tutorial model's theoretical foundation is to create a relatively stable, systematic and theoretical tutorial model around a specific theme in teaching activities, guided by specific teaching ideas. Students' learning enthusiasm can be mobilised through rich online learning resources and communication modes. At the same time, incorporating various offline activities into online courses can help to improve the teaching effect while also increasing students' interest in learning [26]. The application of the online and offline hybrid tutorial mode in universities not only eliminates the limitations of the traditional tutorial mode in time and space, but also gives teachers access to more high-quality demonstration curriculum resources, thus promoting the integration of teaching materials and lesson preparation methods. The learning process should be student-centered, and students must actively participate in the whole learning process. The second point is that knowledge is a social construction agreed upon by individuals and others through consultation. Therefore, in the learning process, attention should be paid to interactive learning methods, changing the current situation in which students passively accept knowledge.

Improve students' cultural literacy. In the process of learning film and television literary works, students' listening, speaking, reading and writing abilities are improved, their aesthetic ability is constantly strengthened, and their artistic taste is gradually refined, which helps to cultivate students' practical ability and innovative spirit and to form good and sound personalities. Therefore, the teaching of film and television literary works should be given an important position in literature education. Students should be instructed to read rich and excellent literary works, acquire the necessary literary knowledge, and cultivate and improve their literary literacy; at the same time, ideological education should be incorporated to cultivate lofty ideals and ambitions. Education about the development and achievements of literature should rely on colorful literary styles, such as poetry, prose, novels, drama and film and television literature, entering students' vision and thoughts, with a considerable part of the content becoming their lifelong cultural wealth. The reading and appreciation of literary works is the focus and center of literary education; it includes the reading and appreciation of ancient, modern and contemporary Chinese poems, essays, novels, dramas and film and television literary works, as well as excellent foreign literary works, and is the main body of literary education. Students' understanding of the original novel and our explanation of the adapted film and television literature works can inform each other, and the similarities and differences identified in the process can help students form their initial feelings about, and accumulate experience of, film and television literature works, thus leading them to re-recognize the shaping and deepening functions of film and television literature.
Present situation of mixed teaching of film and television literature

At present, the way film and television literature works are taught is extremely monotonous and largely ignores the unique characteristics of film and television literature. Instead, film and television literature works are simply taught as general literary works, most often analyzed for writing and compositional techniques from the perspective of language, with the result that their unique appeal has not been fully brought into play, so they cannot really impress students or enhance their literary aesthetic ability.

Simplification of the content of film and television literature teaching. At present, the outstanding problem in Chinese teaching is not an incorrect understanding of the works, but the lack of full respect for students' subjectivity in the teaching process and the lack of independent space given to students. Teachers do not guide and encourage students to actively examine society and life with their own eyes, but always instill existing fixed answers into students in a mandatory and oppressive way. The film and television literature course in universities differs from other courses in content: it requires not only deep excavation of literary works based on the writing background, historical environment, characters' personalities, expression of emotions and so on, but also the reasonable study and application of the various theoretical knowledge that facilitates exploring profound meaning. Without this, online and offline mixed teaching for literature will find it difficult to make more rational use of teaching resources, making it difficult to display its value. Although students have a strong interest in film and television literature, most teachers, starting from the practical purpose of preparing for examinations, lead students through the classroom in a superficial way. Students only remember what needs to be memorized for the examination, but they have little knowledge of film and television art and film and television literature, let alone a deep appreciation of film and television literary works.

Poor teaching process design. The design of the teaching process determines not only the quality of the teaching effect, but also the quality of the organisation of the teaching process in the online and offline mixed tutorial mode. The works should have a three-dimensional sense and imagery and a broad thinking space, and should promote thinking activities, rather than being limited to the words themselves. Furthermore, film and television literary works use a variety of expressive techniques; fully digging into and explaining them can also help students gain a better understanding of fields beyond Chinese, inspire them, and provide them with a variety of writing opportunities. Students' self-study before class, teachers' deepening in class, and students' strengthening and consolidation after class are the three parts of the teaching process in the online and offline mixed tutorial mode. However, in online and offline mixed teaching, some teachers do not provide self-study materials prior to class and instead ask students to find relevant literature and author information on their own. Because of the big-data mode in which Internet platforms present information, students will encounter content deviations and misunderstandings in their interpretation of relevant literary materials or author information.
Improve students' participation in autonomous learning. There are relatively few theoretical knowledge components in film and television literature courses, and most of the methods of understanding and appreciation used are common forms. In this regard, teachers can concentrate on teaching the relevant theoretical knowledge according to its categories of application. After students understand and master the basic ways of appreciating literary works, the follow-up teaching of literary works will be more targeted. Before class, the teacher's main task is to carefully analyze the learning situation so as to choose reasonable teaching content and an online teaching platform, and to put forward clear requirements and effective suggestions for students' online learning. At the same time, the teacher should make the learning objectives, priorities and difficulties of the film and television literature course clear through micro-lectures or PPT, so that students can learn actively, effectively and independently and build a knowledge reserve for offline classroom teaching. Figure 1 shows the dimensions of an effective learning environment for film and television literature and the path-analysis model relating them to learning effect. Before class, teachers construct structured courses by designing curriculum guidance, unit knowledge trees and knowledge branches, setting pre-class exercises, discussion and inquiry topics, and establishing an evaluation system. Teachers can assign film-watching tasks in advance so that students have a preliminary understanding of the films to be studied, and set up simulation topics and examples related to the films using teaching platforms such as Rain Classroom or cloud class, so that students can first learn by themselves. Using modern IT to implement online and offline mixed teaching can not only use time effectively and rationally, but also remind and urge students to complete the tasks that need to be previewed before each class and provide students with a broader learning path, thus helping to improve the teaching effect of film and television literature courses. Teachers should deeply understand and take into account students' individual characteristics, and carefully select, design and prepare high-quality online teaching and learning resources of varying difficulty according to the syllabus and the teaching difficulties, so as to facilitate students' graded learning and help them gain a preliminary understanding and mastery of the knowledge content. Figure 2 shows the resource supply relationship of mixed teaching courses in universities. In class, students are encouraged to explore and learn independently using high-quality online curriculum resources or teacher-prepared video courses. Teachers use heuristic knowledge topics in class to guide students' independent thinking, discuss knowledge points, explain examples and exercises using the network platform, and analyse the feedback questions from students that were summarised before class. At the same time, teachers can combine different knowledge points, select application cases for teaching to enrich classroom content, and set up classroom tasks or group discussions to encourage interactive communication, thus enlivening the classroom atmosphere. Students can use the mutual evaluation system set up in class to score and evaluate each other, enhancing their sense of classroom integration. Optimizing teaching process design.
Optimizing the design of the teaching process can not only help students strengthen their understanding of the literary knowledge they have learned, but also improve the teaching effect on the basis of enhancing students' interest in literature. First of all, in the pre-class self-study stage, teachers should give priority to searching for and sorting out relevant literary knowledge and telling students from which aspects to interpret the information in the works. In class, teachers should conduct purposeful teacher-student dialogue according to the difficult points fed back by students before class, and guide students at different levels individually so as to teach them in accordance with their aptitude. For students with a good foundation and strong learning ability, teachers can assign more difficult learning tasks to expand their knowledge; for students with a weak foundation and weak learning ability, teachers can help them answer questions in class. In class, teachers can selectively determine the content of lectures according to the film-watching tasks assigned before class, the feedback from the teaching platform on students' preview tasks, and the questions and discussions that students have raised through the platform. The implementation method of mixed teaching is shown in Figure 3. One of the great advantages of the mixed tutorial mode is that it can provide students with a personalized learning space, so that learners can achieve fully independent, personalized learning. However, it is not easy to truly realize this advantage, which requires systematic design of the whole online learning environment. The components of the learning path are shown in Figure 4. Because students have a preliminary knowledge and understanding of the film before class, teachers should deepen and guide students' appreciation and understanding of the film in class. At the end of the course, teachers can use the last five minutes of the class to summarize and clarify the key points and difficulties of the class, so that the teaching effect is guaranteed to the greatest extent. After class, teachers can assign homework through the online learning platform, with content such as film reviews or film appreciation, so that students can extend and expand what they have learned about film appreciation. Students consolidate their knowledge, finish their homework, and then give feedback on it. After the first stage of listening and speaking teaching of film and television literature supported by the mixed teaching method is completed, a stage test is required; the test results are shown in Figure 5. Teachers can reflect after class on students' pre-class preview and the implementation of teaching in class, and both teachers and students can summarize from the perspectives of teaching and learning. The online-offline mixed tutorial mode enables teachers to adopt the most effective and direct teaching methods, carry out reform and exploration, and further improve students' learning status and teachers' teaching level. For example, Figures 6 and 7 show the survey results of students' satisfaction with the online and offline mixed teaching of the film and television literature course. It is clear from Figures 6 and 7 that most students are satisfied with the multi-mixed tutorial mode.
In addition to assigning homework on the classroom teaching content, teachers can also introduce research frontiers and provide practical application cases in combination with the teaching knowledge points and their own research directions, so as to help students broaden their horizons, stimulate their interest in learning and deepen their understanding of knowledge. Table 7 shows the statistics of students' evaluations of teachers' use of multimedia courseware in film and television literature teaching. The premise of the mixed tutorial mode is the Internet, which requires teachers to master the application of IT; this is the general trend in the teaching of film and television literature. Teachers should improve their IT level in actual teaching, fully and effectively integrate the teaching resources of film and television literature with IT, innovate the tutorial mode, change the teaching environment and optimize the classroom atmosphere. The data on the independent explanatory power of the three dimensions of the effective learning environment show that both learning behavior and situational support have strong explanatory power for the learning effect, as shown in Figure 8. Most students take part in examinations related to professional skills while studying their professional courses. The heavy learning load, together with teaching methods that are not well suited to it, means that they approach online learning with a coping mentality and cannot complete the online learning tasks assigned by teachers on time, with quality and in full. Therefore, schools should actively adjust teaching evaluation methods, emphasize the importance of process evaluation, and arouse students' attention, thereby improving students' learning consciousness and enthusiasm. Students, independently or in groups, carefully complete the exercises, discussions and feedback after class, and freely put forward their own opinions and suggestions on the teaching design, teaching content and teaching videos through the teaching evaluation system, thus forming a positive cycle between teachers' teaching and students' learning. Conclusions Film and television literature courses are different from general cultural courses. Today, with the emphasis on humanistic quality education, the methods and skills of appreciating film and television works can reflect students' mode of thinking and aesthetic ability. With the rapid development of the Internet, the film and television literature course is no longer a simple explanation of works and appreciation of their contents; through the expansion of network information, it can show students a more brilliant side of literary works. Strengthening the teaching of film and television literary works is a need of the development of the times and an important channel for improving students' cultural literacy. The appreciation of film and television literary works will provide a new growth point for the deepening of the reform of Chinese teaching. The online-offline mixed tutorial mode provides students with rich teaching resources and a broad learning space, which enables students to change from passive to active learning. Students can watch videos independently before class and complete the preview work, which fully stimulates their learning enthusiasm and thus contributes to improving classroom teaching quality and teaching effect. With the development of the times, a teaching method based solely on lectures can no longer meet students' needs.
The online and offline mixed tutorial mode can improve teaching quality, enhance students' learning efficiency and promote their all-round development. Data Availability The data used to support the findings of this study are included within the article.
5,721.8
2022-03-09T00:00:00.000
[ "Education", "Computer Science" ]
Proteomic analysis of bone marrow-derived mesenchymal stem cell extracellular vesicles from healthy donors: implications for proliferation, angiogenesis, Wnt signaling, and the basement membrane Background Bone marrow-derived mesenchymal stem cells (BM-MSCs) have shown therapeutic potential in various in vitro and in vivo studies in cutaneous wound healing. Furthermore, numerous studies highlight the pro-regenerative effects of BM-MSC extracellular vesicles (BM-MSC EVs). The similarities and differences in BM-MSC EV cargo among potential healthy donors are not well understood. Variation in EV protein cargo is important to understand, as it may be useful in identifying potential therapeutic applications in clinical trials. We hypothesized that the donors would share both important similarities and differences in cargo relating to cell proliferation, angiogenesis, Wnt signaling, and basement membrane formation—processes shown to be critical for effective cutaneous wound healing. Methods We harvested BM-MSC EVs from four healthy human donors who underwent strict screening for whole bone marrow donation and further Good Manufacturing Practices-grade cell culture expansion for candidate usage in clinical trials. BM-MSC EV protein cargo was determined via mass spectrometry and Proteome Discoverer software. Corresponding proteomic networks were analyzed via the UniProt Consortium and STRING consortium databases. Results More than 3000 proteins were identified in each of the donors, sharing > 600 proteins among all donors. Despite inter-donor variation in protein identities, there were striking similarities in numbers of proteins per biological functional category. In terms of biologic function, the proteins were most associated with transport of ions and proteins, transcription, and the cell cycle, relating to cell proliferation. The donors shared essential cargo relating to angiogenesis, Wnt signaling, and basement membrane formation—essential processes in modulating cutaneous wound repair. Conclusions Healthy donors of BM-MSC EVs contain important similarities and differences among protein cargo that may play important roles in their pro-regenerative functions. Further studies are needed to correlate proteomic signatures to functional outcomes in cutaneous repair. Background The relationship between the skin and other body tissues, such as the bone marrow, is complex and relies on the interaction and exchange of information and signals, including secreted proteins. The bone marrow appears to serve key roles in maintaining skin homeostasis. The relationship of the bone marrow to the skin is intricately connected via its secretome, the totality of proteins produced by the bone marrow that can serve functions in skin tissues. In patients who have dysfunctional bone marrow, the skin may be the first sign of an underlying pathology, through, for example, the development of chronic wounds [1], changes in pigmentation, and infection. In subjects with genetic mutations resulting in dermatologic phenotypes, such as forms of epidermolysis bullosa, bone marrow transplants have been shown to be effective in attenuating skin pathology [2].
Although bone marrow-derived mesenchymal cells (BM-MSCs) have been shown to be beneficial in a variety of diseases, including wound healing [3][4][5], engraftment and survival in other tissues after transplant are very low, and the exact mechanisms by which patients benefit from cellular therapy remain to be fully understood. We hypothesized that the secretome of the bone marrow cells contains proteins important in skin structure (e.g., basement membrane components) and function that may help explain, in part, the beneficial effects of bone marrow transplants and BM-MSC treatment in patients with cutaneous disease. In this study, using mass spectrometry, we analyzed the proteins in the secretome that co-purified with extracellular vesicles secreted by BM-MSCs from 4 healthy donors. Bone marrow donors Collection of primary human donor bone marrow was under the approval of the University of Miami Institutional Review Board (IRB) and in accordance with policies of the Interdisciplinary Stem Cell Institute. All experiments were performed in accordance with relevant guidelines and regulations and complied with the Declaration of Helsinki. Informed consent was obtained for all human subjects, and permission was given by all 4 human subjects to publish results derived from the tissues and cells and, if necessary, to publish any identifying information, including images. The human donors of the bone marrow were a 33-year-old male (donor 1), a 33-year-old female (donor 2), a 28-year-old female (donor 3), and a 28-year-old male (donor 4). As is standard for bone marrow donors at the Interdisciplinary Stem Cell Institute, all 4 donors tested negative for anti-human immunodeficiency virus (HIV)-1/HIV-2, anti-human T-lymphotropic virus (HTLV) I/II, anti-hepatitis C virus (HCV), the HIV-1 nucleic acid test, the HCV nucleic acid test, hepatitis B surface antigen (HBsAg), anti-HBc (core antigen) (IgG and IgM), anti-cytomegalovirus (CMV), West Nile virus (WNV) nucleic acid, T. cruzi ELISA (Chagas disease), and rapid plasma reagin (RPR) for syphilis, and had no clinical, historical, or laboratory evidence to suggest Creutzfeldt-Jakob disease. The bone marrow (approximately 80 mL) was aspirated from the posterior iliac crests as per standard practice of the University of Miami Bone Marrow (BM) Transplant Programs. The marrow was aspirated into heparinized syringes, and the labeled syringes were transported at room temperature to the Good Manufacturing Practices (GMP) facility at the Interdisciplinary Stem Cell Institute at the University of Miami. BM was processed using Lymphocyte Separation Medium (LSM; specific gravity 1.077) to prepare the density-enriched mononuclear cells (MNCs). Cells were diluted with Plasmalyte A or phosphate-buffered saline (PBS) buffer and layered onto LSM in conical tubes to isolate MNCs following established standard operating procedures. The MNCs were washed with Plasmalyte A or PBS buffer containing 1% human serum albumin (HSA). The washed cells were sampled to determine the total number of viable nucleated cells. MSCs were initially cultured in Alpha-MEM media (Corning Cat. No. 15-012-CV) supplemented with 2 mM L-glutamine, 20% fetal bovine serum (FBS), 100 units/ml penicillin, and 100 μg/ml streptomycin. The expansion was performed in T175 cm² flasks (Corning Cat. No. 431466) at 37°C in a 5% CO2 humidified incubator. MSCs were detached from the culture vessels using trypsin exposure, passaged, and cryopreserved at passage three prior to use in the following experiments.
MSCs were verified in the GMP facility as viable, CD105+, CD45− cells that were sterile, mycoplasma-free, and endotoxin-free. Our previous work with MSCs of this nature revealed expression of HLA-class 1, CD90, CD73, and CD105, while being negative for CD45, and demonstrated differentiation capacity into different lineages [6,7]. Isolation of EVs Passage three cells were taken from cryopreservation, recovered, and cultured in T75 cm² flasks (Corning Cat. No. 3276) until 80% confluency, at which time the MSCs were washed several times with PBS and switched to serum-free Alpha-MEM media for 24 h to allow EV collection into the serum-free media, which was then isolated and processed for downstream isolation using the ExoQuick-TC® ULTRA EV Isolation Kit for Tissue Culture Media (Cat # EQULTRA-20TC-1), according to the manufacturer's instructions. A dot blot was performed to verify that extracellular vesicles were isolated without cellular contaminants (Exo-Check Exosome Antibody Arrays, Cat # EXORAY200A-4, Cat # EXORAY210A-8) according to the manufacturer's instructions. Processing of EV samples prior to mass spectrometry analysis EVs were lysed as follows (all reagents from Sigma, unless otherwise stated). Isolated extracellular vesicles were centrifuged for 10 min at 2000×g at 4°C. Samples were speed-vacuumed until dry. Fifty microliters of 20 mM Tris with 2% sodium dodecyl sulfate (SDS) was added. The mixture was heated at 95°C for 30 s and chilled for 30 s; this was cycled for a total of 5 min. Samples were sonicated for 1 min. Proteins were precipitated with cold acetone. Samples were speed-vacuumed until dry and resuspended in 100 μL ammonium bicarbonate. Eight micrograms of protein was added, centrifuged for 10 min, and speed-vacuumed until dry. Eight microliters of 50 mM ammonium bicarbonate (pH 7.8) was added to the samples. Samples underwent denaturation with 15 μL of 10 M urea in 50 mM ammonium bicarbonate (pH 7.8). Samples were reduced using 2 μL of 125 mM dithiothreitol (DTT) in 50 mM ammonium bicarbonate (pH 7.8) and incubated for 1 h at room temperature. Samples underwent alkylation with 5 μL of 90 mM iodoacetamide in 50 mM ammonium bicarbonate (pH 7.8) and were incubated at room temperature for 30 min. Samples were quenched with 3.33 μL of 125 mM DTT in 50 mM ammonium bicarbonate (pH 7.8) and incubated at room temperature for 1 h in the dark. Ammonium bicarbonate (50 mM) was added to dilute the urea to a 1 M concentration. Samples were digested with trypsin at 1:30 w/w enzyme to protein and incubated overnight at 37°C for 18 h. Formic acid (50%) was added to stop the trypsin reaction (5:100 v/v formic acid to sample). Samples were desalted using Pierce C18 Spin Tips (Thermo Scientific). Trifluoroacetic acid (TFA) (2.5%) was added to adjust the TFA concentration of the sample to 0.05%; a pH of less than 4 was verified. C18 Spin Tips were placed into a spin adapter, and each tip was wetted with 0.1% TFA in 80% acetonitrile (ACN) and centrifuged for 1 min. After discarding the flow-through, the sample was added to the C18 Spin Tip and centrifuged at 1000×g for 1 min; this process was repeated until all of the sample had passed through the C18 Spin Tip. The Spin Tip was then transferred to a fresh microcentrifuge tube. The sample was eluted by adding 20 μL of 0.1% TFA in 80% ACN and centrifuging at 1000×g for 1 min; this step was repeated to further elute the sample. The sample was speed-vacuumed to dryness.
The samples were reconstituted in 50 μL of 2% acetonitrile in LC-MS grade water with 0.1% formic acid prior to LC-MS/MS analysis. High-performance liquid chromatography (HPLC) and mass spectrometry The following methods were performed as previously described [8]. In brief, reversed-phase chromatographic separation utilized an Easy-nLC 1000 system (Thermo) with an Acclaim PepMap RSLC 75 μm × 15 cm nanoViper column (Thermo). The solvents were LC-MS grade water and acetonitrile with 0.1% formic acid. Peptides were analyzed using a Q Exactive mass spectrometer (Thermo) with a heated electrospray ionization source (HESI) operating in positive ion mode. Protein identifications from MS/MS data utilized Proteome Discoverer 2.2 software (Thermo Fisher Scientific) with the Sequest HT search engine. The data were searched against the Homo sapiens entries in the UniProt protein sequence database. The search parameters included a precursor mass tolerance of 10 ppm and 0.02 Da for fragments, 2 missed trypsin cleavages, oxidation (Met) and acetylation (protein N-term) as variable modifications, and carbamidomethylation (Cys) as a static modification. Percolator PSM validation was used with the following parameters: a strict false discovery rate (FDR) of 0.01, a relaxed FDR of 0.1, a maximum ΔCn of 0.05, and validation based on q-value. We retained the high-confidence peptides and filtered out the low- and medium-confidence peptides. Results The four donors each contained more than 3000 unique proteins identified within their EV cargo (Fig. 1A). More than 600 of these proteins were in common among all four donors (Fig. 1A). In terms of biologic function, the proteins among all donors had similar numbers of unique proteins in each functional category (Fig. 1B). The most common functional categories were proteins involved in transport (especially transport of ions and other proteins), followed by transcription, cell cycle, ubiquitin conjugation pathways, cell adhesion, deoxyribonucleic acid (DNA) damage, immunity, lipid metabolism, sensory transduction, host-virus interaction, apoptosis, messenger ribonucleic acid (mRNA) processing, neurogenesis, cilium biogenesis/degradation, protein biosynthesis, endocytosis, ribosome biogenesis, Wnt signaling, DNA replication, inflammatory response, translation regulation, autophagy, angiogenesis, exocytosis, Notch signaling, and keratinization (Fig. 1B). In terms of the cellular component with which the proteins were associated, the most common were proteins associated with the cell membrane, followed by the nucleus, cytoplasm, cell projections, mitochondrion, endoplasmic reticulum, cell junctions, Golgi apparatus, microtubules, chromosomes, endosomes, cytoplasmic vesicles, lysosomes, dynein, peroxisomes, keratin, intermediate filaments, DNA-directed RNA polymerase, and lipid droplets (Fig. 1C). Using the STRING consortium database, we visualized the structural and functional networks among the common proteins involved in transport (Fig. 2A). Central to the network were calcium transport-related proteins, such as the voltage-dependent T-type calcium channel (Fig. 2A), which mediates the entry of calcium ions into cells and is involved in cell motility, cell division, and gene expression [9,10]. Closely related in this hub was the voltage-dependent L-type calcium channel subunit beta-2 (CACNB2) (Fig. 2A), which increases the peak calcium current across cell membranes [11].
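The donor-overlap counts and per-category tallies described above can be reproduced from per-donor protein lists with a few lines of code. The sketch below is illustrative only: the file names, the per-donor accession lists, and the category annotation mapping are hypothetical placeholders, not the authors' actual Proteome Discoverer or STRING outputs.

```python
# Minimal sketch of the donor-overlap analysis, assuming each donor's identified
# proteins have been exported to a plain-text file of UniProt accessions (one per
# line); the file names below are hypothetical.
from collections import Counter
from pathlib import Path

donor_files = ["donor1.txt", "donor2.txt", "donor3.txt", "donor4.txt"]
donor_sets = [set(Path(f).read_text().split()) for f in donor_files]

shared = set.intersection(*donor_sets)  # proteins found in all four donors
print("Proteins per donor:", [len(s) for s in donor_sets])
print("Shared by all donors:", len(shared))

# Counting proteins per functional category additionally needs an annotation
# mapping (e.g., from UniProt keyword downloads); `annotation` is a hypothetical
# dict from accession to a list of functional categories.
annotation = {}  # e.g., {"P12345": ["Transport", "Cell cycle"], ...}
category_counts = Counter(cat for acc in shared for cat in annotation.get(acc, []))
print(category_counts.most_common(10))
```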
Ryanodine receptor 1 (RYR1), another calcium channel that is also expressed in epidermal keratinocytes and associated with keratinocyte differentiation and epidermal permeability barrier homeostasis [12], was detected in all donor EVs (Fig. 2A). Calcium-transporting ATPase type 2C member 1 (ATP2C1), a magnesium-dependent enzyme that is critical in calcium homeostasis and keratinocyte adhesion, was functionally connected to the aforementioned proteins (Fig. 2A) [13]. Several sodium-related channels were discovered in EVs. The sodium channel proteins type 4 subunit alpha (SCN4A) and type 10 subunit alpha (SCN10A) were present (Fig. 2A) [14]. Transient receptor potential cation channel subfamily M member 2 (TRPM2), a voltage-independent cation channel mediating both sodium and calcium influx, was detected (Fig. 2A) [15]. The detected transport proteins also included endosomal trafficking-related proteins, such as DnaJ homolog subfamily C member 13 (DNAJC13) [16,17] (Fig. 2A), which is involved in membrane trafficking through early endosomes and implicated in recycling the epidermal growth factor receptor. Coatomer subunit beta (COPB1) [18,19] (Fig. 2A) is a cytosolic protein that associates with vesicles from the Golgi apparatus and mediates protein transport from the endoplasmic reticulum. BM-MSC EV cargo contained proteins involved in electron transport that have been shown to be co-expressed in independent experiments on human cells (Fig. 2B) (STRING database analytics). NADH-ubiquinone oxidoreductase chain 4 (MT-ND4) [20], a core subunit of the mitochondrial membrane respiratory chain NADH dehydrogenase that plays a critical role in the electron transport chain, was co-expressed with cytochrome b (MT-CYB) [21] (Fig. 2B), a component of the ubiquinol-cytochrome c reductase complex, also a critical component of the respiratory chain, ultimately contributing to the synthesis of ATP needed for cellular processes. Overall, the donors all shared BM-MSC EV cargo proteins essential to ion, protein, and electron transport. All donor BM-MSC EV cargo contained important transcriptional regulators. DNA-directed RNA polymerase II subunit RPB1 (POLR2A) [22] was central in the network hub (Fig. 3A). POLR2A is the largest component of RNA polymerase II and catalyzes the transcription of DNA into RNA. AF4/FMR2 family member 4 is a component of the super elongation complex (SEC), which increases the catalytic rate of RNA polymerase II transcription (Fig. 3A) [23]. Epigenetic modulators, such as histone-lysine N-methyltransferases 2A and 2B (KMT2A and KMT2B) (Fig. 3A), were present in all donor EVs [24]. Also present were chromodomain-helicase-DNA-binding proteins 1 and 3 (CHD1 and CHD3) (Fig. 3A) [25]. CHD1 is an ATP-dependent chromatin-remodeling protein associated with the histone acetylation (HAT) complex regulating RNA polymerase transcription; CHD3 is a component of the histone deacetylase NuRD complex involved in epigenetic regulation. The helicase SRCAP belongs to the SNF2/RAD54 helicase family and mediates ATP-dependent histone modification [26] (Fig. 3A). Jumonji (JARID2) (Fig. 3A) is a regulator of histone methyltransferases, promoting the recruitment of histone methyltransferase complexes to their target genes [27]. Bromodomain adjacent to zinc finger domain proteins 2A and 1B (BAZ2A and BAZ1B) (Fig. 3A) were detected. BAZ2A is an essential component of the nucleolar remodeling complex (NoRC) [28].
BAZ1B is an atypical tyrosine-protein kinase that plays a central role in chromatin remodeling as a component of the WICH complex, which mobilizes nucleosomes and reconfigures chromatin [29]. The enriched functions of the proteins detected in all donors were concentrated in histone H3-K4 trimethylation, epigenetic gene regulation, DNA duplex unwinding, and DNA methylation, among others (Fig. 3B). There were 19 cell cycle-related proteins that were detected in all four donors (Fig. 4A). Most of these proteins were associated with functions in the nucleus (Fig. 4B). MCM7 (Fig. 4C) is a DNA replication licensing factor and a replicative helicase essential for DNA replication [30]. Timeless (Fig. 4C) plays an important role in DNA replication via maintenance of replication fork and genome stability [31,32]. Protein DBF4 (Fig. 4C) plays a central role in DNA replication and cell proliferation. Serine-protein kinase ATM (Fig. 4C) activates checkpoint signaling upon DNA damage [33,34]. RIF1 (Fig. 4C) is a telomere-associated protein that plays a role in the response to double-strand DNA breaks and promotes nonhomologous end joining-mediated repair [35][36][37]. Specific cyclins were conserved among the donors' BM-MSC EVs. Cyclin-A2 (CCNA2) (Fig. 4C) controls the G1/S and G2/M transition phases of the cell cycle and complexes with the cyclin-dependent protein kinases CDK1 and CDK2 [38]. Cyclin-F (CCNF) (Fig. 4C) is a substrate recognition component of the SKP1-CUL-F-box protein E3 ubiquitin-protein ligase complex that mediates proteasomal degradation to inhibit centrosome duplication. Cell division cycle protein 23 homolog (CDC23) (Fig. 4C) is a component of the anaphase-promoting complex/cyclosome (APC/C), a cell cycle-regulated E3 ubiquitin ligase that controls cell cycle progression [39]. Centromere protein F (CENPF) is required for kinetochore functions and the segregation of chromosomes in mitosis [40]. Abnormal spindle-like microcephaly-associated protein (ASPM) (Fig. 4C) is involved in the regulation of the mitotic spindle [41]. ECT2 (Fig. 4C) is a guanine nucleotide exchange factor that acts on Rho family members and plays roles in signal transduction and cytokinesis [42]. Cytoskeleton-associated protein 5 (CKAP5) (Fig. 4C) binds to microtubules and regulates their organization [43]. Wnt signaling activity has been demonstrated to be important in cutaneous wound healing. We hypothesized that BM-MSC EVs would contain Wnt signaling modulators. All donors' BM-MSC EVs contained the tumor suppressor adenomatous polyposis coli (APC) protein (Fig. 6A), which promotes rapid degradation of beta-catenin and consequently regulates Wnt signaling activity [62]. Secreted frizzled-related proteins 1 and 5 (SFRP1 and SFRP5) (Fig. 6A) were also present in all donor BM-MSC EVs. SFRPs function as modulators of Wnt signaling via direct interactions with Wnt ligands in the extracellular environment [63]. Depending on the type of Wnt ligands they bind, SFRPs can induce or inhibit canonical Wnt signaling, which may have differing temporal effects on processes such as angiogenesis and fibrosis during cutaneous wound healing [64,65]. Various Wnt ligands were expressed in some, but not all, donors. For example, donor 2 and donor 3 EVs contained WNT8A, while donor 3 contained WNT11 and donor 4 contained WNT4 and WNT9A (Fig. 6B). We found that Wnt receptors were present in BM-MSC EVs. Frizzled (Fz) receptors were found in all donors (Fig. 6B).
Low-density lipoprotein receptor-related protein 6 (LRP6) was present in donor 1 EVs, while LRP4 was present in donors 1 and 3 (Fig. 6B). AXIN2, present in donors 1, 2, and 3 (Fig. 6B), is a component of Wnt signaling that is involved in beta-catenin degradation [66]. Overall, the BM-MSC EVs exhibit both conserved cargo and significant variation that may alter the balance of Wnt signaling. Given our previous findings, we hypothesized that BM-MSC EVs would contain important basement membrane proteins. All donors' EVs contained multiple subunits of collagen IV and VII (Fig. 7A, B) [67][68][69], which are critical in the formation of the skin basement membrane. Donors 2, 3, and 4 contained laminin subunits A1 and A3 (LAMA1 and LAMA3) (Fig. 7A, B), which are crucial in the formation of the basement membrane. Thus, BM-MSC EVs could carry cargo proteins to healing wounds both in damaged skin and in patients with genetic deficiencies. Discussion Our study finds that healthy donors of BM-MSC EVs contain important similarities and differences that should be considered in the development of EVs as therapeutics. BM-MSC EVs carry functional cargo important for a wide variety of biologic processes, including the transport of proteins and ions, transcription, the cell cycle, and epigenetic processes, although this list is not exhaustive. With relevance to cutaneous wound healing, BM-MSC EVs could play a key role in the promotion of repair and regeneration via their modulation of cell proliferation, angiogenesis, and critical signaling pathways such as Wnt signaling. Furthermore, replenishment of basement membrane proteins is critical to repair and regeneration. An important future avenue of investigation would involve comparing BM-MSC EVs from healthy donors and patients with various diseases (such as chronic wounds or diabetes); however, we recognize the ethical challenges in obtaining such bone marrow samples from patients at risk for potential complications related to invasive procedures. Additionally, it would be important for screening purposes to understand whether there are key circulating biomarkers in the blood that could predict the relevant cargo likely to be contained in a donor's BM-MSC EVs before isolating the bone marrow. Ideally, some of the key protein cargo from the BM-MSCs identified as useful in the promotion of cutaneous regeneration would be detectable in the circulation, allowing for a more optimal screening strategy. One limitation of our study is that we only assessed four healthy donors; further studies on a larger number of donors, across different age groups and at independent institutions, are needed to help validate cargo signatures in BM-MSC EVs. Furthermore, efforts to correlate proteomic (and genomic) signatures with functional outcomes (in in vitro potency assays and clinical trials) are warranted. Given the importance of stem cells in the development of therapeutics, BM-MSC EVs may play an important role in translational therapeutic development in cutaneous wound repair and regeneration. Conclusion BM-MSCs contain important protein cargo that makes them significant candidates, both as endogenous contributors and as therapeutic agents, for cutaneous wound repair and regeneration. Donor screening for clinical trials is warranted for ultimate application to examine the effects of BM-MSCs on recipient wound healing in a variety of disease conditions.
4,991
2021-06-05T00:00:00.000
[ "Biology", "Medicine" ]
Helix Matrix Transformation Combined With Convolutional Neural Network Algorithm for Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry-Based Bacterial Identification Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) analysis is a rapid and reliable method for bacterial identification. Classification algorithms, as a critical part of the MALDI-TOF MS analysis approach, have been developed using both traditional algorithms and machine learning algorithms. In this study, a method that combined helix matrix transformation with a convolutional neural network (CNN) algorithm was presented for bacterial identification. A total of 14 bacterial species including 58 strains were selected to create an in-house MALDI-TOF MS spectrum dataset. The 1D array-type MALDI-TOF MS spectrum data were transformed through a helix matrix transformation into matrix-type data, which was fitted during the CNN training. Through the parameter optimization, the threshold for binarization was set as 16 and the final size of a matrix-type data was set as 25 × 25 to obtain a clean dataset with a small size. A CNN model with three convolutional layers was well trained using the dataset to predict bacterial species. The filter sizes for the three convolutional layers were 4, 8, and 16. The kernel size was three and the activation function was the rectified linear unit (ReLU). A back propagation neural network (BPNN) model was created without helix matrix transformation and a convolution layer to demonstrate whether the helix matrix transformation combined with CNN algorithm works better. The areas under the receiver operating characteristic (ROC) curve of the CNN and BPNN models were 0.98 and 0.87, respectively. The accuracies of the CNN and BPNN models were 97.78 ± 0.08 and 86.50 ± 0.01, respectively, with a significant statistical difference (p < 0.001). The results suggested that helix matrix transformation combined with the CNN algorithm enabled the feature extraction of the bacterial MALDI-TOF MS spectrum, which might be a proposed solution to identify bacterial species.
INTRODUCTION Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) is a fast, inexpensive and reliable tool for the identification of bacteria, and it has become a gold standard for microbial identification in clinical microbiology laboratories within the last decades (Lasch et al., 2009; Bryson et al., 2019; Hou et al., 2019; Welker et al., 2019). As a spectrum-recognition-based method, the classification algorithm plays a critical role in the process (Fangous et al., 2014). The similarity evaluation system for the MALDI-TOF MS spectra of bacteria is commonly used in routine analysis. Standard spectra are acquired from multiple measurements of a single defined strain so that the biological variability of strains is captured and the impact of the random sampling of MALDI-TOF MS is removed. Sample spectra are compared with the standard spectrum library by calculating the similarity among multiple parameters, such as peak positions, intensities and frequencies, thus ensuring the highest possible levels of accuracy and reproducibility across a complete range of microorganisms (Wang et al., 2018; Rotcheewaphan et al., 2019). A matching score is then obtained. The potential species with matching scores above a set threshold are listed and sorted by score. The Biotyper software (Bruker Daltonik GmbH, Bremen, Germany), a typical example of a similarity evaluation system, is widely used in both routine analysis and scientific research. The standard spectrum library can be extended by users to identify more species of bacteria. However, only a small number of attributes of MALDI-TOF MS spectra, such as peak height and peak area, are analyzed and empirically linked to microbial species in a similarity evaluation system (Weis et al., 2020). Therefore, some challenging species with similar MS peaks, such as Shigella and E. coli, are difficult to identify with traditional algorithms (Ling et al., 2019). To fully exploit the MALDI-TOF MS spectrum features, machine learning algorithms have been used to refine species identification (Mather et al., 2016; Kim et al., 2019). Many types of machine learning algorithms, such as the support vector machine (SVM) and random forest (RF), have been applied to optimize bacterial identification. De Bruyne and colleagues used the SVM and RF to binarize the MALDI-TOF MS spectra of the genera Leuconostoc, Fructobacillus, and Lactococcus, and the method achieved excellent discriminatory performance (De Bruyne et al., 2011). The SVM algorithm was also used to discriminate methicillin-resistant (MRSA) from methicillin-sensitive S. aureus (MSSA) based on their MALDI-TOF MS spectra. An artificial neural network, a high-performance machine learning algorithm, was employed to conduct rapid and accurate identification of Bacillus fragilis and some of its subgroups (Zhang et al., 2004; Lasch et al., 2009). In a previous study, a short-term culture method was presented to induce overexpression of new proteins as biomarkers that can be detected using MALDI-TOF MS (Ling et al., 2019).
The dimensionalities of the full spectra were reduced using an isomap non-linear dimensionality reduction algorithm to fit the BPNN's input requirement. After that, a neural network algorithm was employed as a classifier for MS spectrum identification. The back propagation neural network (BPNN) model achieved great success in distinguishing Escherichia coli and Shigella species. The prediction accuracy of the BPNN model was 97.71% with the novel culture approach. However, multi-class classification of species using the BPNN model was not achieved because there was no spectral feature extraction process. Recently, convolutional neural networks (CNNs) have achieved great success in image classification, object recognition and natural language processing (Hsieh et al., 2020). Unlike other machine learning algorithms, the convolutional layers in CNNs extract image feature information from source images to form weight maps during the training process, which provides more feature detail than manual acquisition (Wang et al., 2020). Fully connected layers are an essential component of CNNs, which have proven very successful in image classification. The features extracted from images are fed into a fully connected neural network structure that drives the final classification decision. Seemingly, the MALDI-TOF MS spectrum is an image. In fact, the data form of the MALDI-TOF MS spectrum is a one-dimensional (1D) array of intensity values, which is drawn as a line chart. A 1D array data type is a structure that contains an ordered collection of data elements in which each element can be referenced by its ordinal position in the collection. The data elements and their ordinal positions serve as critical attributes of 1D array-type data, which are equivalent to peak intensity and peak location in the original MALDI-TOF MS spectrum. In this study, we present a novel helix matrix transformation combined with a CNN algorithm for the multi-class classification of species. A helix matrix is a kind of regular (spiral) matrix in mathematics. The helix matrix transformation was proposed in order to convert 1D array-type MALDI-TOF MS spectrum data into image-like matrix-type data for CNN model training for the first time. The spectrum was converted into an image (matrix-type data) with black and gray blocks after the helix matrix transformation. The correlation between peaks in the original spectrum was established when folding the 1D array-type data into two dimensions. The smaller parts of the image, the black and gray block groups in each view, were new spectrum features, characteristic of the MS peaks and peak correlations in the original MALDI-TOF MS spectrum. Then, the CNN algorithm was employed, which successfully classified 14 bacterial species based on their MALDI-TOF MS spectra. The convolution layer "scanned" the image with a convolution kernel to extract features that may be important for classification. Afterward, the features were downsampled, and then the same convolutional structure was repeated. The convolution successively identified features and sub-features from the original image and its sub-parts. Eventually, the process of convolution identified the essential features that help to classify the image. Culture Condition and Sample Preparation The strains were incubated on commercial tryptic soy agar (Huankai Microbial, Guangzhou, China) at 35°C for 24 h to obtain fresh colonies.
The fresh colony was extracted with 60 µL of 70% formic acid (Sigma-Aldrich, St. Louis, United States) and 60 µL of acetonitrile (Merck, Darmstadt, Germany) with vortexing for 30 s. After centrifugation of the extraction solution at 10,000 × g for 3 min, 1 µL of the supernatant was loaded onto a MALDI target plate spot and left to dry. Each sample spot was overlaid with 1 µL of α-cyano-4-hydroxycinnamic acid (CHCA) (5 mg/mL) (Sigma-Aldrich, St. Louis, United States) in a 50:48:2 acetonitrile:water:trifluoroacetic acid (Tedia, Fairfield, United States) matrix solution and dried at room temperature. MALDI-TOF MS Analysis The MS analyses were performed using a 4800 Plus MALDI-TOF/TOF™ instrument (Applied Biosystems, Framingham, MA, United States). The mass spectrometer was externally calibrated before use. The mass error parameter of calibration was set as 50 ppm. Each MS spectrum was obtained by summing 50 acceptable sub-spectra obtained in random sampling mode with a fixed laser intensity of 3500 for the MS analysis. The raw data were collected from 2000 to 12,000 m/z in the linear positive-ionization mode. The peak detection parameters were set as follows: signal/noise > 20, local noise window width = 250 m/z, and minimum peak width at full width half max = 2.9 m/z. Dataset Preparation Each MALDI-TOF MS spectrum was preprocessed with noise removal and baseline correction using the Data Explorer software (AB Sciex, Redwood City, United States) and then exported into an individual text file. The text file contained the numeric intensity value of every single point of the MS spectrum. To manage the bulk data, these intensity values in the text files were read and normalized to a range from 0 to 255 using Python v3.7.4, then compacted into 2,500 points and inserted into a MySQL v5.7.20 (MySQL AB, Sweden) data table with some basic information, such as species, strain, and date of analysis. Numeric labels from 0 to 13 were assigned to each species. Before modeling, all MS numeric value data were exported with labels, one record per line, into a text file to obtain high loading performance. Data Transformation Here, we present a helix matrix transformation for the array of an MS spectrum, which turns 1D array-type spectrum data into matrix-type data. Firstly, a square helix matrix was created using a formula in which k is the number of elements on a side of the matrix, n is the ordinal of the square counted from the outside of the helix matrix inward, and i and j are the row and column numbers, respectively. If k was an odd integer, the center of the helix matrix was set by a separate equation. The numeric values of the 2,500 points of the MS spectrum, forming the data array A, were then rolled clockwise into the square helix matrix of size 50 × 50. To remove the low-intensity noise and peaks, image binarization was carried out using a threshold value T. A bicubic interpolation over a 4 × 4 pixel neighborhood was selected as the resize method, with the factor a = -0.5 and image channels i = 0 and j = 0. The data visualization after each step was performed using the Matplotlib library. The data labels were converted into one-hot labels using the Keras library.
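Since the authors' exact equations for the spiral fill are not reproduced here, the following Python sketch shows one plausible reading of the described pipeline: normalize a 2,500-point spectrum to 0-255, roll it clockwise from the outside in into a 50 × 50 matrix, threshold out low-intensity values, and resize to 25 × 25 with bicubic interpolation. The spiral indexing convention and whether supra-threshold values are kept as grayscale (rather than set to 255) are assumptions; OpenCV is used only for the bicubic resize.

```python
import numpy as np
import cv2  # OpenCV, used here only for the bicubic resize


def helix_matrix(values, k=50):
    """Roll a 1D spectrum of k*k points clockwise, outside-in, into a k x k matrix.
    The precise fill order used by the authors is not published; this spiral is one
    plausible reading of their description."""
    assert len(values) == k * k
    m = np.zeros((k, k), dtype=np.float32)
    top, bottom, left, right = 0, k - 1, 0, k - 1
    idx = 0
    while top <= bottom and left <= right:
        for j in range(left, right + 1):              # top row, left to right
            m[top, j] = values[idx]; idx += 1
        for i in range(top + 1, bottom + 1):          # right column, downwards
            m[i, right] = values[idx]; idx += 1
        if top < bottom:
            for j in range(right - 1, left - 1, -1):  # bottom row, right to left
                m[bottom, j] = values[idx]; idx += 1
        if left < right:
            for i in range(bottom - 1, top, -1):      # left column, upwards
                m[i, left] = values[idx]; idx += 1
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return m


def transform_spectrum(intensities, threshold=16, out_size=25):
    """Normalize to 0-255, spiral-fold, threshold low-intensity noise, and resize."""
    x = np.asarray(intensities, dtype=np.float32)
    x = 255.0 * (x - x.min()) / (x.max() - x.min() + 1e-9)
    img = helix_matrix(x, k=50)
    img = np.where(img >= threshold, img, 0.0)        # assumed: keep grayscale above T
    return cv2.resize(img, (out_size, out_size), interpolation=cv2.INTER_CUBIC)
```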
The dataset containing all numeric values and labels was split randomly into a training dataset and a validation dataset with a split ratio of 0.8, which means that 80% of the data was used for model training and the other 20% was used for model validation. The test dataset was created from 1,000 additional MS spectra of each species, followed by the helix data transformation. These spectra had not been used before and served as an independent test set. Convolutional Neural Network Modeling All training and evaluations were carried out on a Dell T7820 workstation equipped with two Intel Xeon Gold 5118 CPUs, 64 GB of DDR4 RAM and two NVIDIA GTX 1080 Ti graphics cards. CUDA v10.0, a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs), was installed for the two GTX 1080 Ti graphics cards. The operating system was the 64-bit CentOS Linux system v7.5. The CNN models were constructed using TensorFlow v2.0.0, which is widely used for building and training artificial neural network models. The NVIDIA CUDA Deep Neural Network library (cuDNN) v7.4.2, a GPU-accelerated library of primitives for deep neural networks, was used for creating the model. cuDNN provides highly tuned implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers. As shown in Supplementary Table 2, the CNN model contains 3 convolutional layers, 2 batch normalization layers, 1 max pooling layer, 1 fully connected layer, and 1 Softmax layer to form the output prediction. The numbers of filters were set as 4, 8, and 16 for the three convolutional layers, respectively. The kernel size was set as 3. The numbers of nodes in the fully connected layer and output layer were 128 and 14, respectively. The activation functions of the convolutional layers and the output layer were the rectified linear unit (ReLU) function and the Softmax function, respectively. The Softmax function defined in Eq. (7), $f(s)_i = \frac{e^{s_i}}{\sum_{j=1}^{C} e^{s_j}}$, was applied in the last layer to produce the prediction probability over the 14 output classes (Hsieh et al., 2020), where $s_i$ are the scores inferred by the net for each class in C. The categorical cross-entropy, defined in Eq. (8) as $CE = -\log\left(\frac{e^{s_p}}{\sum_{j=1}^{C} e^{s_j}}\right)$, was selected as the loss function, where $s_p$ is the CNN score for the positive class; the goal of the network is to minimize CE. Adam was selected as the optimizer. The hyper-parameters β1 and β2 are 0.9 and 0.999, respectively. The learning rate was set as 0.001 and the number of epochs was set as 1. Back Propagation Neural Network Modeling To investigate the benefits of the data transformation and convolutional layers in our algorithm, a back propagation neural network (BPNN) was created by removing the data transformation step and the convolutional layers (see Supplementary Figure 1). The BPNN models were trained and evaluated using the same environment and libraries as the CNN. The input size was set as 2,500 to fit the data array of the original spectrum. The numbers of nodes in the fully connected layer and output layer, the loss function, the optimizer, the learning rate and the number of epochs were set the same as those of the CNN model. Model Evaluation The loss, precision, accuracy and recall were selected to evaluate the model training since they are commonly used in most cases for evaluations. The loss values were calculated using the categorical cross-entropy formula mentioned above.
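The architecture and training settings described above can be written in a few lines of Keras. The sketch below is an illustrative reconstruction rather than the authors' released code: the ordering of the batch-normalization and pooling layers, the use of "same" padding, the activation of the 128-node dense layer, and the batch size are assumptions not specified in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_cnn(input_shape=(25, 25, 1), n_classes=14):
    # Three conv layers with 4/8/16 filters, kernel size 3, ReLU; two batch-norm
    # layers, one max-pooling layer, a 128-node dense layer, 14-way softmax output.
    return models.Sequential([
        layers.Conv2D(4, 3, activation="relu", padding="same", input_shape=input_shape),
        layers.BatchNormalization(),
        layers.Conv2D(8, 3, activation="relu", padding="same"),
        layers.BatchNormalization(),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])


def build_bpnn(input_dim=2500, n_classes=14):
    # Baseline without spectral folding or convolution: the raw 2,500-point
    # spectrum is fed straight into the same dense/softmax head.
    return models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])


for model in (build_cnn(), build_bpnn()):
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
        loss="categorical_crossentropy",
        metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
    )
# Training for one epoch, as in the text (x_train etc. are placeholder arrays):
# model.fit(x_train, y_train_onehot, epochs=1, validation_data=(x_val, y_val_onehot))
```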
The precision, accuracy and recall were calculated as $\text{Precision} = \frac{tp}{tp + fp}$, $\text{Accuracy} = \frac{tp + tn}{tp + tn + fp + fn}$ (10), and $\text{Recall} = \frac{tp}{tp + fn}$, where tp is true positives, fp is false positives, tn is true negatives, and fn is false negatives. A confusion matrix was established to investigate the classification performance. Each row of the matrix stands for a predicted label, while each column represents a true label. The receiver operating characteristic (ROC) curve was drawn with the true positive rate and false positive rate. FIGURE 1 | Procedures of the data transformation combined with the CNN modeling. The one-dimensional MS spectra were converted into a two-dimensional matrix with a novel helix matrix transformation method. The two-dimensional matrix data were binarized and resized, and then compressed into a dataset for CNN training. Finally, a CNN model with convolutional, pooling, and dense layers was created and trained with the dataset. As shown in Figure 1, a data transformation and CNN modeling approach was established for the identification of bacteria using MALDI-TOF MS. Data Transformation The visualizations of the helix matrix transformation are shown in Figure 2, which provide insights into how the transformation works and how the features of the MS spectrum are revamped after transformation. As shown in Figure 2, the original MS spectrum was 1D array-type data. After the helix matrix transformation, the MS data of the strains were rolled, like a Swiss roll, into matrix-type data with a size of 50 × 50. The MS peaks were transformed into lines with various shades from gray to black depending on their intensity, which kept the profile of the spectrum. To remove the low-intensity noise and peaks, binarization was performed using threshold segmentation. The threshold T was set as the maximum value at which all peaks detected in the spectra by the Data Explorer software were still retained. Firstly, peak list I was obtained using the Data Explorer software, and peak list II was obtained from the helix-matrix-transformed image after filtering with the threshold value T. The threshold value T was decided by comparing peak list II with peak list I. After parameter optimization, the threshold for binarization was set as 16 and the final size of the matrix-type data was set as 25 × 25 to obtain a clean dataset with a small size, in order to greatly reduce the computational burden (data not shown). The bicubic interpolation method was used to prevent adjacent lines from being joined together. The features in the 2D image (matrix-type data) were clearly preserved after resizing, as shown in Figure 2. Model Evaluation The training dataset, including 67,200 MS spectra, was used for model training, while the validation dataset, including 16,800 MS spectra, was used for model validation. A total of 2,400 iterations were carried out in 1 epoch. The loss curve of the training is shown in Figure 3. The loss values were 2.9561, 0.0418, 0.0269, and 0.0187 at the beginning and after 500, 1000, and 1500 iterations, respectively. The loss value held steady after 1500 iterations. At the end of the training, the loss value, accuracy, precision, and recall were 0.0126, 0.9996, 0.9977, and 0.9962, respectively, which indicated the model was well trained (Figure 3). The test set, including a total of 14,000 MS spectra (1,000 MS spectra for each species) with labels, was used to test the prediction performance of the CNN and BPNN models.
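A minimal evaluation sketch along these lines is given below, using scikit-learn for the confusion matrix, the macro-averaged per-class metrics and the macro-averaged AUC. The variable names (the one-hot test labels and the probabilities from model.predict) are placeholders, and note that scikit-learn's confusion-matrix convention (rows are true labels) is the transpose of the row/column convention stated in the text.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                             recall_score, roc_auc_score)


def evaluate(y_test_onehot, y_prob):
    """y_test_onehot: (n_samples, 14) one-hot labels; y_prob: model.predict output."""
    y_true = np.argmax(y_test_onehot, axis=1)
    y_pred = np.argmax(y_prob, axis=1)
    cm = confusion_matrix(y_true, y_pred, normalize="true")  # rows = true labels here
    metrics = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall": recall_score(y_true, y_pred, average="macro"),
        "macro_auc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
    }
    return cm, metrics

# Example usage with the trained CNN (cnn_model and x_test are placeholders):
# cm, metrics = evaluate(y_test_onehot, cnn_model.predict(x_test))
```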
Figure 4A shows the confusion matrix and ROC curve of the prediction results based on the CNN model. In the confusion matrix, the diagonal shows the percentage of correctly predicted records for each species and the off-diagonal entries show the percentage of misclassifications for each species. The classification accuracy for 12 of the species was close to 100%, which suggested high classification performance for the CNN model. An overview of the ROC curves is shown in Figure 4A. The area under the curve (AUC) value was 0.98. The confusion matrix and ROC curve of the predicted results based on the BPNN model are shown in Figure 4B. The AUC value of the ROC curve was 0.87. The predicted accuracies of the CNN and BPNN models for each species are shown in Figure 4C. The accuracies of the CNN and BPNN models were 97.78 ± 0.08 and 86.50 ± 0.01, respectively, with a statistically significant difference (p < 0.001). These results suggested that the helix matrix transformation combined with the CNN algorithm achieves better classification performance in bacterial identification based on MALDI-TOF MS. DISCUSSION Matrix-assisted laser desorption ionization-time of flight mass spectrometry is a rapid, high-throughput identification method for bacterial identification, which has been successfully applied in clinical microbiology laboratories (Schubert and Kostrzewa, 2017; Cordovana et al., 2018). The classification algorithm used to classify a bacterial MS database plays a critical role in the identification approach (Fangous et al., 2014; Mesureur et al., 2018). Manufacturer-provided software, such as FlexAnalysis and ClinProTools from Bruker Daltonics, is widely used for classification (Epperson et al., 2018; Rahi and Vaishampayan, 2019). A large proportion of classification studies have been performed using FlexAnalysis and ClinProTools with preprogrammed machine learning algorithms, including the SVM, spiking neural network (SNN), and quantum clustering (QC) (Weis et al., 2020; Delavy et al., 2019). The preprogrammed algorithms are easy to use, but they restrict the development of new algorithms. Recently, CNNs have achieved great success in image classification in daily use and have also been applied in scientific studies (Hochuli et al., 2018; Hsieh et al., 2020; Zhou et al., 2018). A novel helix matrix transformation method was suggested to convert 1D array-type MS spectrum data into matrix-type data. Because the peaks stand in a row in the original spectrum, a very small distance between two adjacent peaks would reduce the recognizability of the spectrum, which may cause low bacterial identification accuracy. In addition, the MS peaks are treated as independent protein types in some traditional algorithms, and the correlation between peaks is ignored. After the helix matrix transformation, the distances between the peaks in the low m/z range at the periphery of the matrix were extended, which increased the recognizability of the spectrum. Meanwhile, the helix transformation also introduced spatial correlations between peaks in the low and high m/z ranges. These changes balanced the spatial distribution of peaks, which revamped the profile of the MS spectrum. The binarization process removed the low-intensity noise and peaks so that the classifier would focus on the major features of the data. The threshold value for binarization can be set lower to obtain more information for distinguishing species with similar spectra.
The proposed CNN structure extracts low-level features of an image with 2D convolutional filters in the earlier layers and more complex features in the deeper layers, which allows the model to learn complex image differences (Zhou et al., 2018). Meanwhile, the BPNN can only use fully connected layers for classification. Therefore, the CNN outperforms the BPNN in multi-class spectrum classification. In algorithm studies, public datasets are commonly used to test whether an algorithm works on a given type of data. For example, the MNIST and CIFAR datasets are well known in deep learning research for training and testing neural network models (Ferré et al., 2018). FIGURE 4 | Predicted results of the bacterial species based on the CNN and BPNN models. Confusion matrices and receiver operating characteristic curves of the CNN (A) and BPNN (B) models are plotted based on the extent of matching between the predicted labels and true labels. (C) The accuracies are calculated by the prediction models using the test samples. CNN, convolutional neural network; BPNN, back propagation neural network. Labels A to N correspond to the species listed in Supplementary Table 1. MNIST is a dataset of handwritten digits. It has 60,000 training samples and 10,000 test samples. CIFAR-10 is an established computer-vision dataset used for object recognition; it consists of 60,000 32 × 32 color images in 10 classes, with 6,000 images per class. Since there is no public dataset of bacterial MALDI-TOF MS spectra for deep learning research, we created an in-house dataset whose number of categories and data volume follow those of MNIST and CIFAR-10 (shown in Supplementary Table 1). Then, the CNN and BPNN models were created and evaluated using the in-house dataset with 14 classes of bacterial species. Ten of the fourteen species are closely related members of the genus Staphylococcus, which increases the difficulty of classification. When conducting classification using the BPNN model, the AUC value of the ROC curve was 0.87. The value increased markedly to 0.98 using the helix matrix transformation combined with the CNN algorithm. The predicted accuracies of the CNN and BPNN models for each species differed significantly (p < 0.001) according to a t-test. These results suggested that the helix data transformation combined with the CNN algorithm has better classification ability and can solve multi-class classification problems for MALDI-TOF MS-based identification of bacteria. In summary, we presented a novel method that combines a helix data transformation with a CNN algorithm for MALDI-TOF MS-based identification of bacteria. The code can be downloaded at https://github.com/ttelva/HMTCNN.git. The helix matrix transformation converted the 1D array-type MS spectrum of bacteria into matrix-type data while retaining the original spectrum profile. An in-house dataset with 84,000 MALDI-TOF MS spectra was built for training the neural network model. The algorithm was shown to be successfully applied to bacterial identification using an independent test dataset of 14,000 MS spectra. We also compared our algorithm with the BPNN, and the results indicate that the helix matrix transformation combined with convolution provides better classification performance. In future research, more species will be selected to train a model for the routine identification of bacteria in the laboratory.
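For readers who want a concrete picture of the kind of network described above (convolutional, pooling, and dense layers operating on the 25 × 25 binarized matrices with 14 output classes), the following is a minimal PyTorch sketch. The channel counts, layer sizes, and activation choices are illustrative assumptions, not the architecture published by the authors; their model is the one in the repository linked above.

```python
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    """Minimal CNN sketch for 25 x 25 binarized helix-matrix images, 14 classes.
    Channel counts and layer sizes are illustrative assumptions only."""
    def __init__(self, n_classes: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x25x25 -> 16x25x25
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x12x12
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x12x12
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x6x6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SpectrumCNN()
dummy = torch.zeros(8, 1, 25, 25)   # a batch of 8 binarized spectra
logits = model(dummy)               # shape: (8, 14)
```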
DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of Ningbo Medical Center Lihuili Hospital. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS GC, HS, YS, and JL contributed to the conception and design of the study. JL, GL, HW, and HY performed the MALDI-TOF MS analysis. JL organized the database and wrote the draft of the manuscript. GC and HZ contributed to the manuscript's revision. All authors contributed to the article and approved the submitted version. FUNDING This study was supported by the Pharmacopeia Committee Project (No. 2020S06) and the Zhejiang Province Public Welfare Technology Application Research Project (No. LGF19H030008).
6,214.6
2020-11-12T00:00:00.000
[ "Computer Science" ]
Decellularized Human Umbilical Artery Used as Nerve Conduit Treatment of peripheral nerve injuries with a segmental defect is one of the most challenging surgical problems. Despite advancements in microsurgical techniques, complete recovery of nerve function after repair has not been achieved. The purpose of this study was to evaluate the use of the decellularized human umbilical artery (hUA) as a nerve guidance conduit. A segmental peripheral nerve injury was created in 24 Sprague–Dawley rats. The animals were organized into two experimental groups with different forms of repair: decellularized hUA (n = 12) and autologous nerve graft (n = 12). Sciatic function index and gastrocnemius muscle weight values were calculated to evaluate functional recovery. Nerve morphometry was used to analyze nerve regeneration. Results showed that decellularized hUAs after implantation were rich in nerve fibers and characterized by improved Sciatic Functional Index (SFI) values. Decellularized hUA may support elongation and bridging of the 10 mm nerve gap. Introduction Peripheral nerve injuries (PNI) are a global clinical problem, since they significantly affect the quality of life of patients and cause an enormous socio-economic burden [1][2][3]. Indicatively, in the United States alone, more than 50,000 peripheral nerve repair surgeries are performed annually [4]. The use of an autologous nerve graft is considered the gold standard procedure for bridging peripheral nerve defects. However, this surgical approach is characterized by several drawbacks. For instance, a secondary surgical procedure, which is often associated with donor-site pain and morbidity, is needed in order to obtain the nerve graft. Moreover, the sources of neural tissue that can be used as nerve conduits are particularly limited [5][6][7][8]. Other approaches include the development of three-dimensional scaffold nerve conduits, which can be used for gap bridging between the proximal and distal stumps of the nerve tissue. In many cases, nerve conduits also act as carriers of cells or growth factors [9,10]. Nerve conduits have been fabricated using different types of materials: natural and synthetic, biodegradable and non-biodegradable [11,12]. Natural biological nerve conduits such as vessels (veins and arteries), decellularized nerve [13], and muscle tissue have been widely used to bridge peripheral nerve gaps in animal models [14] and also in clinical practice [15,16]. Tissue decellularization offers the possibility to obtain a cell-free, natural extracellular matrix (ECM), characterized by an adequate 3D organization with proper composition to repair different tissues or organs, including peripheral nerves [13]. The human umbilical cord contains two arteries, which can easily be isolated without invasive procedures. Previous reports showed that hUAs retain, after decellularization, components such as collagen type I, laminin, and fibronectin [17]. These ECM components are also represented in the ECM of the peripheral nerve [18][19][20]. The aim of this study was to evaluate the use of the decellularized hUA as a nerve guidance conduit in a rat sciatic nerve model. Collection and Isolation of Human Umbilical Arteries Human umbilical cords were collected after informed consent from healthy donors. The informed consent was in accordance with the Helsinki Declaration and approved by the ethics committee of the Biomedical Research Foundation Academy of Athens (BRFAA).
The cords were stored at 4 °C immediately after birth, and the overall storage time until processing did not exceed 24 h. Arteries were isolated from the cords using sterile surgical tools, followed by brief rinses in phosphate-buffered saline (PBS 1×). Contact Cytotoxicity Assay The decellularized hUAs (n = 10) were cut into 5 × 5 mm pieces and placed in a 24-well culture plate (Orange Scientific, Braine-l'Alleud, Belgium). Mesenchymal stem cells (MSCs) were isolated from Wharton's jelly tissue and seeded into each well at a density of 1 × 10⁴ cells. Then, the samples were incubated at 37 °C in 5% (v/v) CO₂ for 48 h. As the positive control group for this assay, SDS was added to MSCs (n = 10), and as the negative control group, MSCs (n = 10) were cultured under normal conditions. Morphological examination of the seeded cells was performed using a brightfield microscope (LEICA DM 1L, Wetzlar, Germany). Images were captured using IC Capture 2.2 software. ADP/ATP Ratio Assay Native (n = 20) and decellularized hUAs (n = 20) were digested using a lysis buffer consisting of 1 mL α-MEM with 1 mg/mL Proteinase K (Sigma-Aldrich, Darmstadt, Germany). The digestion was performed overnight at 56 °C, and the following day the Proteinase K was inactivated at 95 °C for 5 min. The lysates from native and decellularized hUAs were used as culture medium for the evaluation of metabolic activity in MSCs. Then, 1 × 10³ MSCs were allowed to adhere to each well of a 96-well plate, and the above lysates were added. Specifically, lysate derived from native hUAs was added to 10 wells with adhered MSCs. Lysates derived from decellularized hUAs were added to the next 10 wells with adhered MSCs. MSCs cultured with 1.2 mM SDS (Sigma-Aldrich, Darmstadt, Germany) were used as the positive control group. MSCs cultured with standard medium were used as the negative control. The culture medium consisted of α-MEM (Sigma-Aldrich, Darmstadt, Germany) supplemented with 15% v/v FBS, 1% v/v penicillin, and 1% v/v streptomycin (all Sigma-Aldrich, Darmstadt, Germany). The 96-well plate was incubated at 37 °C in 5% (v/v) CO₂ for 24 h. Subsequently, the ADP/ATP ratio assay (Sigma-Aldrich Ratio Assay Kit) was performed according to the manufacturer's instructions. Animals Twenty-four male Sprague-Dawley (SD) rats, weighing 250–300 g, were randomly divided into two groups (n = 12 in each group): the first group received decellularized hUAs and was compared with the second group, which received nerve autografts. The animals were provided by the Animal Center of BRFAA and were handled in compliance with the guidelines for the use and care of laboratory animals. Furthermore, all animals were kept in a temperature-controlled room with a 12/12-h light/dark cycle and provided with rodent diet and water ad libitum. The study protocol was approved by the general veterinary directorate and animal health directorate with reference number 2777/26-04-2016 and was accepted by the Bioethics Committee of BRFAA. Surgical Procedure The animals were anesthetized with isoflurane 3% in 1 L of oxygen. A dorsal gluteal-splitting approach was used to expose and mobilize the right sciatic nerve of each animal. The right sciatic nerve was exposed and a 1 cm gap was made in the mid portion of the nerve. In the nerve autograft group, the removed nerve segment was rotated 180° and grafted into the same nerve gap with 6 stitches of 8-0 Prolene sutures.
In the umbilical artery group, a 1.5 cm artery was grafted into the gap. Both the proximal and distal stumps were inserted about 2–3 mm into the ends of the artery graft, and four stitches were placed in each stump. The manipulations of the nerves were performed under an operating microscope. Sciatic Functional Index (SFI) The functional condition of the animals was assessed by estimating the SFI according to the formula of Bain et al. [21]. Walk track analysis was performed pre-operatively and at the 4th and 12th weeks after surgery [22]. The rats' hind feet were painted with ink and the animals were placed in a walking pathway to walk down the track, leaving their footprints. Footprints from the experimental (E) and contralateral normal (N) sides were analyzed by measuring the lengths from the third toe to the heel (PL), from the first toe to the fifth toe (TS), and from the second toe to the fourth toe (IT). Index values close to 0 indicated normal function and values close to −100 represented loss of function. Nerve Graft Harvested Tissue Twelve weeks postoperatively, the regenerated sciatic nerves were harvested. The midportions of the grafts (n = 6 from each group) were fixed with 10% neutral buffered formalin solution for immunohistochemical analysis. In addition, the other grafts (n = 6 from each group) were fixed with 2.5% glutaraldehyde in 0.1 M phosphate buffer for morphometric analysis. Transverse sections were cut for both immunohistochemistry and morphometry. The sections analyzed in this assay were 5 mm distal to the site of the proximal lesion. Nerve Immunohistochemistry Grafts were fixed in 10% v/v neutral buffered formalin solution (Sigma-Aldrich, Darmstadt, Germany), paraffin embedded, and sectioned. Then, the slides were deparaffinized, rehydrated, and blocked. The Dako Envision Flex kit was used for the immunohistochemistry assay according to the manufacturer's instructions (Dako, Agilent, Glostrup, Denmark). Briefly, nerve graft sections were incubated overnight at 4 °C with rabbit anti-neurofilament 200 (NF200) antibody (1:80, Sigma, St. Louis, MO, USA) to identify axons and with S100 antibody (1:100, Sigma, St. Louis, MO, USA) to identify Schwann cells. Washes were then performed, and a horseradish peroxidase (HRP)-conjugated goat secondary antibody against rabbit and mouse was added. The slides were incubated at room temperature (RT) for 45 min. Finally, 3,3'-diaminobenzidine (DAB) was added to the slides. Slides were visualized by light microscopy, and images were acquired with IC Capture 2.2 software and processed with ImageJ software version 1.52g. Morphometric Analysis of Nerve The midportions of the grafts (n = 6 from each group) were fixed with 2.5% v/v glutaraldehyde (Sigma-Aldrich, Darmstadt, Germany) in 0.1 M phosphate buffer (pH 7.4) for 48 h at room temperature and post-fixed with 1% osmium tetroxide (Sigma-Aldrich, Darmstadt, Germany). The nerve specimens were embedded in epoxy resin, cut into 1-µm semi-thin sections with an ultramicrotome, and stained with 1% toluidine blue (Sigma-Aldrich, Darmstadt, Germany) for light microscopy. Images were digitized with a charge-coupled device camera and analyzed with standard image processing at a magnification of ×1000. Ten random fields from each semi-thin section were analyzed with imaging software (IMARIS 8, Bitplane, Zurich, Switzerland). The sample area was chosen in a systematic, uniform, random manner, ensuring that all locations in the nerve cross-section were equally represented.
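The Sciatic Functional Index described above is computed from the PL, TS, and IT measurements via the formula of Bain et al. [21], which the extracted text does not reproduce. A commonly cited form of that formula, supplied here as an assumption to be checked against [21], with E and N denoting the experimental and contralateral normal sides, is:

```latex
\mathrm{SFI} = -38.3\,\frac{EPL - NPL}{NPL}
             + 109.5\,\frac{ETS - NTS}{NTS}
             + 13.3\,\frac{EIT - NIT}{NIT}
             - 8.8
```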
The number of nerve fibers was counted, and the mean fiber area and the density of myelinated nerve fibers (fibers/µm²) were then determined [7]. Gastrocnemius Muscle Histology and Muscle Weight Ratio The gastrocnemius muscle was weighed on an analytical balance immediately after removal from both sides of the animals, normal and experimental, and the muscle weight ratio was calculated. Then, the middle part of the muscles was cut and placed in a 10% neutral formalin solution overnight. The muscles were embedded in paraffin and cut on a microtome into 5-µm transverse sections, which were subjected to H&E staining followed by observation under a light microscope. Statistical Analysis Data were expressed as mean ± standard deviation (SD), and statistical analyses were performed using GraphPad Prism 6 software (GraphPad Software, San Diego, CA, USA). All data were analyzed with the non-parametric Student's test, except the ADP/ATP assay data, which were analyzed with the Kruskal–Wallis test; the statistical significance level was defined as p < 0.05. Histological Analysis Histological analysis was performed in order to evaluate the impact of the decellularization procedure on the hUAs. Specifically, H&E staining showed preservation of the ECM, while cellular populations were totally absent. Furthermore, Masson's trichrome staining revealed the presence of properly oriented collagen in the decellularized hUA (Figure 1). Cytotoxicity Tolerance of the Decellularized hUA Contact cytotoxicity assay results showed that MSCs expanded and attached successfully to the decellularized hUA segments in 96-well plates after 48 h of incubation (Figure 2). Moreover, the cells were characterized by the same morphology as the MSCs of the negative control group, indicating no cytotoxicity. These findings were further confirmed by determination of the ADP/ATP ratio. Specifically, ADP/ATP ratio values were similar between the native hUA samples, the decellularized hUA samples, and the negative control group. A statistically significant difference was observed only between the positive control group and the native (p < 0.001) and decellularized (p < 0.001) hUA samples (Figure 3).
Macroscopic Examination of the Experimental Sciatic Nerve In Situ All animals survived until the end of the experiment and were in good health, as indicated by visual inspection. No signs of self-injury were observed in the operated limb. After euthanasia, the implanted nerve conduits derived from either the autograft or the decellularized group were visually checked. Regenerated nerves passed through the nerve conduits in both experimental procedures and successfully bridged the 10 mm gap. No evidence of inflammation was observed in either group. Furthermore, the thickness of the regenerated nerves was similar in the animals of both experimental groups (Figure 4). Motor Function Assessment The recovery of motor function was assessed by calculating the SFI pre-operatively and after 4 and 12 weeks. The SFI in all rats prior to surgery was within the normal range. SFI values were −10.07 ± 3.38 in the autograft group and −8.26 ± 5.1 in the hUA group, without a statistically significant difference. The SFI values were reduced at the first post-operative evaluation at week 4 for both groups. The SFI values of the autograft group and the decellularized group were −87.36 ± 6.27 and −80.89 ± 9.22, respectively. No statistically significant difference was observed between the two groups (p > 0.05). At 12 weeks, the values of the autograft group showed better improvement than those of the hUA group (p = 0.0013). Nevertheless, neither of the two groups approached the normal SFI values (Table 1). Immunohistochemical Detection of Neurofilaments and Schwann Cells The transverse sections from each group of nerve grafts were stained with anti-NF200 and anti-S100 antibodies to evaluate axon regeneration. Positive expression of NF200 and S100 was detected in all sections (Figure 5A–D). Morphometric Analysis Morphometric analysis was performed in the middle portion of the grafts at week 12. Regenerated myelinated nerve fibers of different sizes were observed in each group (Figure 2A,B). The number of nerve fibers did not present a statistically significant difference between the two groups (Figure 6C), but the nerve fiber areas had a wide distribution range (9.87–21.25 µm²). The fiber area of the hUA group was significantly smaller than that of the autograft group (p = 0.0073, Figure 2D). This trend was also reflected in the density of the nerve fibers (p < 0.0001, Figure 6E).
Gastrocnemius Muscle Histology and Muscle Weight Ratio In both experimental groups, the gastrocnemius muscle showed intense atrophy, with the autograft group presenting less atrophy than the decellularized hUA group. The muscle mass retention ratios of the autologous and decellularized groups were 0.55 ± 0.10 and 0.33 ± 0.3, respectively (Figure 7D). The difference between the two groups was statistically significant (p < 0.001). These results were also supported by the histological evaluation. More specifically, in the autograft group, fibers appeared polygonal with sub-sarcolemmal localization of their nuclei and minimal growth of connective tissue. In the hUA group, the muscle fibers formed small groups of atrophic fibers with more fibrous connective tissue among the muscle bundles. In addition, the decellularized hUA group presented an increased number of cell nuclei when compared to the autograft group (Figure 7A–C). Figure 7. Gastrocnemius muscle histology and weight ratio. (A–C) Gastrocnemius muscle transverse sections stained with H&E from the normal, autograft, and hUA groups, respectively; original magnification 40×. (D) Muscle weight ratio, evaluated and compared by statistical analysis, * p < 0.05. Discussion Peripheral nerve injuries are very common worldwide, and there is no easily available treatment. Decellularized grafts could be used as an alternative source for nerve conduits. These grafts are characterized by reduced antigenicity and could be a promising therapeutic strategy when no autologous tissues are available [23].
Different types of decellularized tissues, such as nerves and arteries, have been used for the reconstruction of transected peripheral nerves and have shown promising results [24,25]. In this context, hUAs, which can be efficiently isolated from human umbilical cords, a material that is discarded after gestation, may be good candidates for peripheral nerve reconstruction. The hUA is composed of a complex ECM, which apparently includes collagen, fibronectin, laminin, and proteoglycans [17,20]. These proteins, especially laminin, promote neurite outgrowth and enhance nerve cell adhesion, proliferation, and differentiation, thus helping to direct growth cone extension [26,27]. Due to their importance during the development and regeneration of the sensory nervous system, laminin, fibronectin, and collagen have been successfully used as substrates on tissue culture plastic and poly-3-hydroxybutyrate mats to enhance the Schwann cell (SC) response [28]. Furthermore, after the decellularization procedure, the proteoglycans were significantly reduced, as has been confirmed by others [17]. In this context, chondroitin sulfate, which has a negative impact on nerve guidance and regeneration, can be removed efficiently by the decellularization approach. Histological analysis indicated the absence of cellular and nuclear residues. Additionally, the structural proteins of the ECM, such as collagen, were preserved when compared to native arteries. After the decellularization of the hUAs, the ATP assay was performed and MSCs were co-cultured with 5 × 5 mm patches of decellularized tissue. As expected, the tissue supported cell attachment, and the ADP/ATP assay confirmed that the cells maintained their proliferation capacity. MSCs were used for this assay because they are characterized by multilineage differentiation potential. Previous studies have shown that MSCs can differentiate efficiently into neuron-like cells. In addition, future experiments will involve the repopulation of the decellularized hUA with MSCs, implantation in the rat sciatic nerve model, and a final evaluation of function between decellularized and repopulated hUAs. For this purpose, we used MSCs rather than neural cells for the cytotoxicity assay. After implantation, the artery conduits supported the regeneration of the sciatic nerves and no inflammatory response was observed. Previous studies have shown that nerve conduits achieve outcomes similar to, or even better than, autograft results when they are loaded with different types of cells or factors such as adipose-derived stem cells, olfactory cells, Schwann cells, neurotrophic factors, or platelet-rich plasma [29][30][31][32][33][34]. In this study, the hUA was used alone as an initial step to find out whether it can support elongation of the nerve fibers. Walk track analysis was performed by estimating the SFI to evaluate motor function in the animals. Our results demonstrated that neither the autologous nor the decellularized hUA graft restored the SFI close to pre-operative values. Nevertheless, a better functional outcome was observed in the autograft group (−51.35). Yeong Kim et al. [32] reported similar results at the same week (12th) of SFI evaluation. On the contrary, in other studies, better outcomes were obtained after repairing nerve gaps with autologous grafts (−23.4) [35]. Morphometric and immunohistochemical analyses confirmed the elongation of the nerve fibers and also that the hUA was recellularized and remodeled successfully by the animals.
Immunohistochemical analysis showed positive expression of NF200 and S100 in both experimental groups. These findings indicated that the decellularized hUA allows the migration of Schwann cells and the elongation of fibers through the umbilical artery tissue. Morphometric analysis showed that the number of fibers did not present any statistically significant difference between the two experimental groups (p = 0.563). However, the area and density of the nerve fibers were higher in the autograft group compared to the decellularized hUA group. These findings may suggest that the nerve fibers in the hUA group were still at a premature stage of development [36]. Another parameter used to evaluate re-innervation in the sciatic nerve lesion model is the gastrocnemius muscle weight ratio. When a muscle is denervated, it undergoes degradation, which leads to weight loss [36]. In both experimental groups, the gastrocnemius muscle showed atrophy. The hUA group showed greater atrophy than the autograft group, together with a larger number of cell nuclei. Nevertheless, normal and smaller muscle fibers co-existed in the hUA group, as observed in the histological images. The larger number of cell nuclei observed in the hUA group can be explained partially by the fact that muscle atrophy was established in the decellularized hUA group. Further clarification could be obtained by immunohistochemistry for CD11b (a macrophage marker) and procollagen beta 1 (a fibroblast marker) [37]. Conclusions In conclusion, this study showed that the decellularized hUA could support nerve regeneration and could allow the re-innervation of the target organ. Further research on the decellularized hUA is needed before it can be used as a nerve conduit. Glycosaminoglycans (GAGs) such as chondroitin sulfate, which are important components of the hUA, must be properly identified. In future studies, the decellularized hUAs could be combined with different cell populations or neurotrophic factors in order to obtain better outcomes, thus bringing them one step closer to clinical application. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
6,358.8
2018-11-21T00:00:00.000
[ "Medicine", "Biology" ]
Some Remarks on Spaces of Morrey Type Introduction Let Ω be an unbounded open subset of R n , n ≥ 2. For p ∈ [1, +∞) and λ ∈ [0, n), we consider the space M p,λ (Ω) of the functions g in L p loc (Ω) whose Morrey-type norm, recalled in Section 2 below, is finite. This space of Morrey type, defined by Transirico et al.
in 1 , is a generalization of the classical Morrey space L p,λ and strictly contains L p,λ R n when Ω R n .Its introduction is related to the solvability of certain elliptic problems with discontinuous coefficients in the case of unbounded domains see e.g., 1-3 .In the first part of this work, we deepen the study of two subspaces of M p,λ Ω , denoted by M p,λ Ω and M p,λ o Ω , that can be seen, respectively, as the closure of L ∞ Ω and C ∞ o Ω in M p,λ Ω .We start proving some characterization lemmas that allow us to construct suitable decompositions of functions in M p,λ Ω and M p,λ o Ω .This is done in the spirit of the classical decomposition L 1 , L ∞ , proved in 4 by Calder ón and Zygmund for L 1 , where a given function in L 1 is decomposed, for any t > 0, in the sum of a part f t ∈ L ∞ whose norm can be controlled by f t L ∞ Ω < c n • t and a remaining one f − f t ∈ L 1 .Analogous decompositions can be found also for different functional spaces see e.g., 5, 6 for decompositions L 1 , L 1,λ , L p , Sobolev , and L p , BMO . The idea of our decomposition, both for a g in M p,λ Ω and M p,λ o Ω , is the following: for any h ∈ R , the function g can be written as the sum of a "good" part g h , which is more regular, and of a "bad" part g − g h , whose norm can be controlled by means of a continuity modulus of the function g itself. Decompositions are useful in different contexts as the proof of interpolation results, norm inequalities and a priori estimates for solutions of boundary value problems. For instance, in the study of several elliptic problems with solutions in Sobolev spaces, it is sometimes necessary to establish regularity results and a priori estimates for a fixed operator L. These results often rely on the boundedness and possibly on the compactness of the multiplication operator u ∈ W k,q Ω −→ gu ∈ L q Ω , 1.2 which entails the estimate where c ∈ R depends on the regularity properties of Ω and on the summability exponents, and g is a given function in a normed space V satisfying suitable conditions.In some particular cases, this cannot be done for the operator L itself, but there is the need to introduce a suitable class of operators L h , whose coefficients, more regular, approximate the ones of L. This "deviation" of the coefficients of L h from the ones of L needs to be done controlling the norms of the approximating coefficients with the norms of the given ones.Hence, it is necessary to obtain estimates where the dependence on the coefficients is expressed just in terms of their norms.Decomposition results play an important role in this approximation process, providing estimates where the constants involved depend just on the norm of the given coefficients and on their moduli of continuity and do not depend on the considered decomposition. In the framework of Morrey type spaces, in 1 , the authors studied, for k 1, the operator defined in 1.2 , generalizing a well-known result proved by Fefferman in 7 cf.also 8 .They established conditions for the boundedness and compactness of this operator.In 2 , the boundedness result and the straightforward estimates have been extended to any k ∈ N. 
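The displayed definitions of the Morrey norms do not survive intact in this extraction (they appear only in a degraded duplicate of the front matter). As a hedged reconstruction from that text and from the notation of Section 2 below, where the exact bracket conventions are assumptions, the classical Morrey norm and the norm of M p,λ (Ω, t) read:

```latex
\|g\|_{L^{p,\lambda}(\mathbb{R}^n)}
  = \sup_{\tau>0,\; x\in\mathbb{R}^n} \tau^{-\lambda/p}\,\|g\|_{L^p(B(x,\tau))},
\qquad
\|g\|_{M^{p,\lambda}(\Omega,t)}
  = \sup_{\tau\in(0,t],\; x\in\Omega} \tau^{-\lambda/p}\,\|g\|_{L^p(\Omega(x,\tau))},
```

with membership in each space requiring finiteness of the corresponding supremum; M p,λ (Ω) is then M p,λ (Ω, 1), as stated in (2.4).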
In view of the above considerations, the second part of this work is devoted to a further analysis of the multiplication operator defined in 1.2 , for functions g in M p,λ Ω .By means of our decomposition results, we are allowed to deduce a compactness result for the operator given in 1.2 .The obtained estimates can be used in the study of elliptic problems to prove that the considered operators have closed range or are semi-Fredholm. The deeper examination of the structure of M p,λ Ω and of its subspaces leads us to the definition of a new functional space, that is a weighted Morrey type space, denoted by M p,λ ρ Ω . In literature, several authors have considered different kinds of weighted spaces of Morrey type and their applications to the study of elliptic equations, both in the degenerate case and in the nondegenerate one see e.g., 9-11 .In this paper, given a weight ρ in a class of measurable functions G Ω see § 6 for its definition , we prove that the corresponding weighted space M p,λ ρ Ω is a space settled between M p,λ o Ω and M p,λ Ω .In particular, we provide some conditions on ρ that entail ρ Ω .Taking into account the results of this paper, we are now in position to approach the study of some classes of elliptic problems with discontinuous coefficients belonging to the weighted Morrey type space M p,λ ρ Ω . Notation and Preliminary Results Let G be a Lebesgue measurable subset of R n and Σ G be the σ-algebra of all Lebesgue measurable subsets of G. Given F ∈ Σ G , we denote by |F| its Lebesgue measure and by χ F its characteristic function.For every x ∈ F and every t ∈ R , we set F x, t F ∩ B x, t , where B x, t is the open ball with center x and radius t, and in particular, we put Let us recall the definition of the classical Morrey space equipped with the norm defined by 2.1 . If Ω is an unbounded open subset of R n and t is fixed in R , we can consider the space M p,λ Ω, t , which is larger than L p,λ R n when Ω R n .More precisely, M p,λ Ω, t is the set of all functions g in L p loc Ω such that endowed with the norm defined in 2.2 .We explicitly observe that a diadic decomposition gives for every t 1 , t 2 ∈ R the existence of c 1 , c 2 ∈ R , depending only on t 1 , t 2 , and n, such that 2.3 All the norms being equivalent, from now on, we consider the space M p,λ Ω M p,λ Ω, 1 . 2.4 For the reader's convenience, we briefly recall some properties of functions in L p,λ R n and M p,λ Ω needed in the sequel. The first lemma is a particular case of a more general result proved in 12, Proposition 3 . Lemma 2.1.Let J h h∈N be a sequence of mollifiers in R n .If g ∈ L p,λ R n and The second results concerns the zero extensions of functions in M p,λ Ω see also 1, Remark 2.4 . Remark 2.2.Let g ∈ M p,λ Ω .If we denote by g 0 the zero extension of g outside Ω, then g 0 ∈ M p,λ R n and for every τ in 0, 1 where c 1 ∈ R is a constant independent of g, Ω and τ.Furthermore, if diam Ω < ∞, then g 0 ∈ L p,λ R n and where c 2 ∈ R is a constant independent of g and Ω. For a general survey on Morrey and Morrey type spaces, we refer to 1, 2, 13, 14 . Ω This section is devoted to the study of two subspaces of M p,λ Ω , denoted by M p,λ Ω and M p,λ o Ω .Here, we point out the peculiar characteristics of functions belonging to these sets by means of two characterization lemmas. Let us put, for h ∈ R and g ∈ M p,λ Ω , ∞ , and g ∈ M p,λ Ω .The following properties are equivalent: 3.4 We denote by M p,λ Ω the subspace of M p,λ Ω made up of functions verifying one of the above properties. 
Proof of Lemma 3.1.The equivalence between 3.2 and 3.3 is proved in of 1, Lemma 1.3 .Let us show that 3.2 entails 3.4 and vice versa. Fix g in the closure of L ∞ Ω in M p,λ Ω , then for each ε > 0, there exists a function Fixed E ∈ Σ Ω , from 3.5 , it easily follows that On the other hand Therefore, if we set 3.9 Putting together 3.6 and 3.9 , we get 3.4 . Conversely, if we take a function g ∈ M p,λ Ω satisfying 3.4 , for any ε > 0, there exists For each k ∈ R , we set Observe that Therefore, if we put and then gχ E kε M p,λ Ω < ε. 3.14 To end the proof, we define the function g ε g − gχ E kε .Indeed, by construction g ε ∈ L ∞ Ω and by 3.14 , one gets that g − g ε M p,λ Ω < ε. Now, we introduce two classes of applications needed in the sequel. To define the second class, we first fix for more details on the existence of such an α, see for instance 15 .Hence, for h ∈ R , we put It is easy to prove that ψ h belongs to where ∞ , and g ∈ M p,λ Ω .The following properties are equivalent: 3.25 The subspace of M p,λ Ω of the functions satisfying one of the above properties will be denoted by M p,λ o Ω . Proof of Lemma 3.3.The equivalence between 3.21 and 3.22 is a consequence of 3.3 and of 1, Lemmas 2.1 and 2.5 .The one between 3.21 and 3.24 follows from of 1, Remark 2.2 .Always in 1 , see Lemma 2.1 and Remark 2.2, it is proved that 3.21 entails 3.25 and vice versa.Let us show that 3.21 and 3.23 are equivalent too. Let us firstly assume that g belongs to the closure of 8 Abstract and Applied Analysis To this aim, observe that fixed ε > 0, there exists On the other hand, if we consider the sets Ω h defined in 3.20 , one has Therefore, since g ε has a compact support, there exists The above considerations together with 3.28 give, for any Conversely, assume that g ∈ M p,λ Ω and that 3.23 holds. First of all, we observe that denoted by g o the zero extension of g to R n , by 2.7 of Remark 2.2, there exists a positive constant c 1 , independent of g, ψ h and of Ω, such that 3.32 Furthermore, by 3.23 , we get that fixed ε > 0, there exists h ε such that 3.40 We are now in the hypotheses of Lemma 2.1.Hence, denoted by J k k∈N a sequence of mollifiers in R n , we can find a positive integer k ε > h ε such that Furthermore, using 3.34 and 3.41 , we get this concludes the proof.Let g be a function in M p,λ Ω .A modulus of continuity of g in M p,λ Ω is a map σ p,λ g : R → R such that Decompositions of Functions in Let us show now the decomposition results. 4.7 In view of 4.6 , 4.10 Proof.To prove this second decomposition result, we exploit again the definition of the set E h introduced in 4.5 and inequality 4.6 . In this case, for any h ∈ R , we define 4.11 To obtain the first inequality in 4.10 , we observe that 4.6 gives 4.12 The second one is a consequence of 4.5 . A Compactness Result In this section, as application, we use the previous results to prove the compactness of a multiplication operator on Sobolev spaces.To this aim, let us recall an imbedding theorem proved in 2, Theorem 3.2 . 
Let us specify the assumptions: is an open subset of R n having the cone property with cone C, the parameters k, r, p, q, λ satisfy one of the following conditions: > 0, with r > q when p n/k > 1 and λ 0, and with λ > n 1 − rγ when rγ < 1, Theorem 5.1.Under hypothesis h 1 and if h 2 or h 3 holds, for any u ∈ W k,p Ω and for any g ∈ M r,λ Ω , one has gu ∈ L q Ω .Moreover, there exists a constant c ∈ R , depending on n, k, p, q, r, λ, and C, such that Putting together Lemma 4.1 and Theorem 5.1, we easily have the following result. Corollary 5.2.Under hypothesis h 1 and if h 2 or h 3 holds, for any g ∈ M r,λ Ω and for any h ∈ R , one has If g is in M r,λ o Ω , the previous estimate can be improved as showed in the corollary below. Corollary 5.3. Under hypothesis h 1 and if h 2 or h 3 holds, for any g ∈ M r,λ o Ω and for any h ∈ R , there exists an open set A h ⊂⊂ Ω with the cone property, such that for each u ∈ W k,p Ω , where c ∈ R is the constant of 5.1 . Proof.Fix g ∈ M r,λ o Ω and h ∈ R .In view of Lemma 4.2 and Theorem 5.1, for any u ∈ W k,p Ω , we have 5.4 Using again Lemma 4.2, we obtain We are now in position to prove the compactness result. Corollary 5.4.Suppose that condition h 1 is satisfied, that h 2 or h 3 holds, and fix g ∈ M r,λ o Ω .Then, the operator Proof.Observe that if Ω ⊂⊂ Ω is a bounded open set with the cone property, the operator is linear and bounded.Moreover, since Ω has the cone property, the Rellich-Kondrachov Theorem see e.g., 17 applies and gives that the operator w ∈ W k,p Ω −→ w ∈ L q Ω 5.9 is compact. Let us consider now a sequence u n n∈N bounded in W k,p Ω , and let M ∈ R be such that u n W k,p Ω ≤ M for all n ∈ N. According to the above considerations, fixed ε > 0, there exist a subsequence u n m m∈N and ν ∈ N such that 5.10 On the other hand, given g ∈ M r,λ o Ω and h ∈ R , in view of Corollary 5.3, there exists a constant c ∈ R and an open set A h ⊂⊂ Ω with the cone property, independent of u n , such that 5.11 From 5.11 and 5.10 written for ε c 12 By 5.12 and 4.2 , we conclude that gu n m m∈N is a Cauchy sequence in L q Ω , which gives the compactness of the operator defined in 5.6 . Ω In this section, we introduce some weighted spaces of Morrey type settled between M p,λ o Ω and M p,λ Ω .To this aim, given d ∈ R , we consider the set G Ω, d defined in 18 as the class of measurable weight functions ρ : It is easy to show that ρ ∈ G Ω, d if and only if there exists γ ∈ R , independent on x and y, such that We put For p ∈ 1, ∞ , s ∈ R, and ρ ∈ G Ω , we denote by L p s Ω the Banach space made up of measurable functions g : Ω → R such that ρ s g ∈ L p Ω equipped with the norm It can be proved that the space C ∞ o Ω is dense in L p s Ω see e.g., 18, 19 .From now on, we consider ρ ∈ G Ω ∩ L ∞ Ω , and we denote by d the positive real number such that ρ ∈ G Ω, d .Lemma 6.1.Let λ ∈ 0, n , p ∈ 1, ∞ and g ∈ M p,λ Ω .The following properties are equivalent: We denote by M p,λ ρ Ω the set of functions satisfying one of the above properties. 
Proof of Lemma 6.1.We start proving the equivalence between 6.6 and 6.7 .This proof is in the spirit of the one of Lemma 3.1.For the reader's convenience, we write down just few lines pointing out the main differences.If 6.6 holds, fixed ε > 0, there exists a function 6.9 From 6.9 , we get that for any E ∈ Σ Ω , Furthermore, in view of the equivalence of the spaces M p,λ Ω, d and M p,λ Ω given by 2.3 and taking into account 6.2 , where c 1 ∈ R depends only on n and d.Hence, set 6.12 from 6.11 we deduce that if sup τ∈ 0,d Putting together 6.10 and 6.13 , we obtain 6.7 .Now, assume that g is a function in M p,λ Ω and that 6.7 holds.Then, for any ε > 0, there exists 6.14 For each k ∈ R , we define the set Using again 2.3 , there exists c 2 ∈ R depending on the same parameters as c 1 such that and then gχ G kε M p,λ Ω < ε. Arguing similarly, we prove also that 6.6 entails 6.8 and vice versa.Indeed, if g ∈ M p,λ Ω and 6.6 holds, we can obtain as before 6.10 and 6.11 . On the other hand, there exists a constant c 3 c 3 n such that sup 6.20 Putting together 6.11 and 6.20 , we obtain where Therefore, if we put 6.32 where 6.33 The thesis followed by 6.2 and 2.3 arguing as in the proof of Lemma 4.1. Let us show the following inclusion. p Ω and then 6.6 holds.On the other hand, for α < 1/p, we can show that if g ∈ L ∞ −α Ω ∩ M p,λ Ω , then 6.7 holds.Indeed, observe that by 2.3 , there exists a constant c 1 c 1 n, d such that for any E ∈ Σ Ω 6.34 Moreover, there exists a constant c 2 c 2 n such that A straightforward consequence of the definitions 3.21 of Lemma 3.3, 6.6 of Lemma 6.1, and 3.2 of Lemma 3.1 is given by the following result.τ −λ/p g L p Ω x,τ . 6.42 We can treat the first term on the right-hand side of this last equality as done in 6.41 obtaining sup τ∈ 0,d τ −λ/p g L p Ω x,τ ≤ d n−λ /p cγ α g L ∞ −α Ω ρ α x , 6.43 the constant c c n being the one of 6.41 . Concerning the second one, observe that for any x ∈ Ω and τ ∈ d, 1 , we have the inclusion Ω x, τ ⊂ Q x, τ , where Q x, τ denotes an n-dimensional cube of center x and edge 2τ.Now, there exists a positive integer k such that we can decompose the cube Q x, 1 in k cubes of edge less than d/2 and center x i , with x i ∈ Ω for i 1, . . ., k.Therefore, Q x, 1 ⊂ k i 1 B x i , d/2 .Hence, for any x ∈ Ω and τ ∈ d, 1 , we have, arguing as before with opportune modifications, τ −λ/p g L p Ω x,τ ≤ d −λ/p k i 1 g L p Ω x i ,d/2 ≤ kd n−λ /p cγ α g L ∞ −α Ω ρ α x , 6.44 the constant c c n being the same of 6.41 .The thesis follows then from 6.41 , 6.42 , 6.43 , and 6.44 passing to the limit as |x| → ∞, as a consequence of hypothesis 6.40 . From the latter result, we easily obtain the following lemma. where B x, τ is the open ball with center x and radius τ. Ω M p,λ Ω and M p,λ o The characterizations of the spaces M p,λ Ω and M p,λ o Ω naturally lead us to the introduction of the following moduli of continuity.10 Abstract and Applied Analysis
5,688.2
2010-11-07T00:00:00.000
[ "Mathematics" ]
Cohomology Characterizations of Diagonal Non-Abelian Extensions of Regular Hom-Lie Algebras † In this paper, first we show that under the assumption of the center of h being zero, diagonal non-abelian extensions of a regular Hom-Lie algebra g by a regular Hom-Lie algebra h are in one-to-one correspondence with Hom-Lie algebra morphisms from g to Out(h). Then for a general Hom-Lie algebra morphism from g to Out(h), we construct a cohomology class as the obstruction of existence of a non-abelian extension that induces the given Hom-Lie algebra morphism. Introduction The notion of a Hom-Lie algebra was introduced by Hartwig, Larsson, and Silvestrov in [1] as part of a study of deformations of the Witt and the Virasoro algebras.In a Hom-Lie algebra, the Jacobi identity is twisted by a linear map, called the Hom-Jacobi identity.The set of (σ, σ)-derivations of an associative algebra and some q-deformations of the Witt and the Virasoro algebras have the structure of a Hom-Lie algebra [1][2][3].Because of the close relation to discrete and deformed vector fields and differential calculus [1,4,5], more people have started paying attention to this algebraic structure.In particular, representations and deformations of Hom-Lie algebras are studied in [6][7][8]; extensions of Hom-Lie algebras are studied in [4,9,10].Some split regular Hom-structures are studied in [11,12]. The notion of a Hom-Lie 2-algebra, which is the categorification of a Hom-Lie algebra, is given in [13].The category of Hom-Lie 2-algebras and the category of 2-term HL ∞ -algebras are equivalent.Skeletal Hom-Lie 2-algebras can be classified by the third cohomology group of a Hom-Lie algebra.Many known Hom-structures, such as Hom-pre-Lie algebras and symplectic Hom-Lie algebras, lead to skeletal or strict Hom-Lie 2-algebras.In [14], we give the notion of a derivation of a regular Hom-Lie algebra (g, [•, •] g , φ g ).The set of derivations Der(g) is a Hom-Lie subalgebra of the regular Hom-Lie algebra (gl(g), [•, •] φ g , Ad φ g ), which is given in [15].We constructed the derivation Hom-Lie 2-algebra DER(g), by which we characterize non-abelian extensions of regular Hom-Lie algebras as Hom-Lie 2-algebra morphisms.More precisely, we characterize a diagonal non-abelian extension of a regular Hom-Lie algebra g by a regular Hom-Lie algebra h using a Hom-Lie 2-algebra morphism from g to the derivation Hom-Lie 2-algebra DER(h).Associated to a non-abelian extension of a regular Hom-Lie algebra g by a regular Hom-Lie algebra h, there is a Hom-Lie algebra morphism from g to Out(h) naturally.However, given an arbitrary Hom-Lie algebra morphism from g to Out(h), whether there is a non-abelian extension of g by h that induces the given Hom-Lie algebra morphism and what the obstruction is are not known yet. The aim of this paper is to solve the above problem.It turns out that the result is not totally parallel to the case of Lie algebras [16][17][18][19][20][21].We need to add some conditions on the short exact sequence related to derivations of Hom-Lie algebras.Under these conditions, first we show that under the assumption of the center of h being zero, there is a one-to-one correspondence between diagonal non-abelian extensions of g by h and Hom-Lie algebra morphisms from g to Out(h).Then for the general case, we show that the obstruction of the existence of a non-abelian extension is given by an element in the third cohomology group. 
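Several displayed formulas in this entry appear to have been dropped in extraction. For orientation, the Hom-Jacobi identity that Definition 1 below refers to is reproduced here in its standard multiplicative form, as an assumption drawn from the general Hom-Lie algebra literature rather than from the garbled text:

```latex
[\phi_g(x),[y,z]_g]_g + [\phi_g(y),[z,x]_g]_g + [\phi_g(z),[x,y]_g]_g = 0
\qquad \text{for all } x, y, z \in g .
```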
The paper is organized as follows.In Section 2, we recall some basic notions of Hom-Lie algebras, representations of Hom-Lie algebras, their cohomologies and derivations of Hom-Lie algebras.In Section 3, we study non-abelian extensions of g by h in the case that the center of h is zero.We show that if the center of h is zero and the short exact sequence related to derivations of Hom-Lie algebras is also diagonal, then diagonal non-abelian extensions of g by h correspond bijectively to Hom-Lie algebra morphisms from g to Out(h) (Theorem 2).In Section 4, we give a cohomology characterization of the existence of general non-abelian extensions of g by h.We show that the obstruction of the existence of a diagonal non-abelian extension of g by h that induces a given Hom-Lie algebra morphism from g to Out(h) is given by a cohomology class in H 3 (g; Cen(h)) (Theorem 3).Moreover, isomorphism classes of diagonal non-abelian extensions of g by h are parameterized by H 2 (g; Cen(h)) (Theorem 4).In Section 5, we give a conclusion of the paper. Preliminaries In this paper, we work over an algebraically closed field K of characteristic 0, and all the vector spaces are over K.We only work on finite-dimensional vector spaces.(i) A (multiplicative) Hom-Lie algebra is a triple (g, [•, •] g , φ g ) consisting of a vector space g, a skew-symmetric bilinear map (bracket) [•, •] g : ∧ 2 g −→ g and a linear map φ g : g → g preserving the bracket, such that the following Hom-Jacobi identity with respect to φ g is satisfied: (ii) A Hom-Lie algebra is called a regular Hom-Lie algebra if φ g is an algebra automorphism. (iii) The center Cen(g) of a regular Hom-Lie algebra (g, [•, •] g , φ g ) is defined by Remark 1.The center of a Hom-Lie algebra (g, [•, •] g , φ g ) (not necessarily regular) is usually defined by Equation (2); see [9] (Definition 2.13).However, for x ∈ Cen(g), φ g (x) may not be in Cen(g).This is a conflict with the definition of a subalgebra of a Hom-Lie algebra, for which one requires that the subspace is closed with respect to both [•, •] g and φ g .We note that for a regular Hom-Lie algebra, if x ∈ g is such that [x, y] = 0 for all y ∈ g, then φ g (x) also satisfies this property.This follows from Thus, we suggest that for a general Hom-Lie algebra, one should define its center by In the sequel, we always assume that φ g is an algebra automorphism.That is, in this paper, all the Hom-Lie algebras are assumed to be regular Hom-Lie algebras despite that some results also hold for general Hom-Lie algebras. Definition 2. A morphism of Hom-Lie algebras f : Definition 3. A representation of a Hom-Lie algebra (g, [•, •] g , φ g ) on a vector space V with respect to β ∈ gl(V) is a linear map ρ : g → gl(V) such that for all x, y ∈ g, the following equalities are satisfied: We denote a representation by (ρ, V, β). For all x ∈ g, we define ad x : g → g by Then ad : g −→ gl(g) is a representation of the Hom-Lie algebra (g, [•, •] g , φ g ) on g with respect to φ g , which is called the adjoint representation. Let (ρ, V, β) be a representation.We define the set of k-Hom-cochains by where d ρ • d ρ = 0 is proved in [8].Denote by Z k (g; ρ) and B k (g; ρ) the sets of k-cocycles and k-coboundaries, respectively.We define the kth cohomology group H k (g; ρ) to be Z k (g; ρ)/B k (g; ρ).See also [6] for more details about such cochain and coboundary setups. Remark 2. 
Remark 2. The above definition of a derivation of a Hom-Lie algebra is more general than that given in [8]. Under the condition D ∘ φ_g = φ_g ∘ D, the above definition is the same as the α-derivation given in [8]. See Remark 3.2 in [14] for more details.

For all x ∈ g, ad_x is a derivation of the Hom-Lie algebra (g, [•,•]_g, φ_g), which we call an inner derivation. See [14] for more details. Denote by Inn(g) the set of inner derivations of the Hom-Lie algebra (g, [•,•]_g, φ_g), that is, Inn(g) = {ad_x | x ∈ g}.

Let (g, [•,•]_g, φ_g) be a Hom-Lie algebra. For all x ∈ g and D ∈ Der(g), the bracket [D, ad_x]_{φ_g} is again an inner derivation. Therefore, Inn(g) is an ideal of the Hom-Lie algebra (Der(g), [•,•]_{φ_g}, Ad_{φ_g}).

Denote by Out(g) the set of outer derivations of the Hom-Lie algebra (g, [•,•]_g, φ_g), that is, Out(g) = Der(g)/Inn(g). We use π to denote the quotient map from Der(g) to Out(g).

Non-Abelian Extensions of Hom-Lie Algebras

Definition 5. A non-abelian extension of a Hom-Lie algebra is a commutative diagram with rows being short exact sequences of Hom-Lie algebra morphisms, of the form

0 → h → ĝ → g → 0,

with the twist maps φ_h, φ_ĝ, φ_g commuting with the arrows. We can regard h as a subspace of ĝ with φ_ĝ|_h = φ_h. Thus, h is an invariant subspace of φ_ĝ. We say that an extension is diagonal if ĝ has an invariant subspace X of φ_ĝ such that h ⊕ X = ĝ. In general, ĝ does not always have such an invariant subspace X; for example, this fails when the matrix representation of φ_ĝ is a Jordan block. We only study diagonal non-abelian extensions in the sequel.

Definition 6. Two extensions of g by h, (ĝ_1, [•,•]_{ĝ1}, φ_{ĝ1}) and (ĝ_2, [•,•]_{ĝ2}, φ_{ĝ2}), are said to be isomorphic if there exists a Hom-Lie algebra morphism θ : ĝ_2 → ĝ_1 such that the corresponding diagram commutes.

Let (ĝ, [•,•]_ĝ, φ_ĝ) be a diagonal non-abelian extension of g by h and s : g → ĝ be a diagonal section. Define linear maps ω : g ∧ g → h and ρ : g → gl(h), respectively, by Equations (15) and (16). Clearly, ĝ is isomorphic to g ⊕ h as vector spaces. Transferring the Hom-Lie algebra structure on ĝ to g ⊕ h, we obtain a Hom-Lie algebra (g ⊕ h, [•,•]_{(ρ,ω)}, φ). The following proposition gives the conditions on ρ and ω under which (g ⊕ h, [•,•]_{(ρ,ω)}, φ) is a Hom-Lie algebra.

Proposition 1 ([14], Proposition 4.5). With the above notations, (g ⊕ h, [•,•]_{(ρ,ω)}, φ) is a Hom-Lie algebra if and only if ρ and ω satisfy the equalities (17)-(20), where c.p. denotes the cyclic permutation of x, y, z.

For any diagonal non-abelian extension, by choosing a diagonal section, it is isomorphic to some (g ⊕ h, [•,•]_{(ρ,ω)}, φ).

Proposition 2. Let (g ⊕ h, [•,•]_{(ρ,ω)}, φ) and (g ⊕ h, [•,•]_{(ρ′,ω′)}, φ) be two diagonal non-abelian extensions of g by h. The two extensions are equivalent if and only if there is a linear map ξ : g → h relating (ρ, ω) and (ρ′, ω′).

Classification of Diagonal Non-Abelian Extensions of Hom-Lie Algebras: Special Case

In this section, we classify diagonal non-abelian extensions of Hom-Lie algebras for the case that Cen(h) = 0.

Theorem 2. Let (g, [•,•]_g, φ_g) and (h, [•,•]_h, φ_h) be Hom-Lie algebras such that Cen(h) = 0. If the short exact sequence of Hom-Lie algebra morphisms

0 → Inn(h) → Der(h) → Out(h) → 0

is a diagonal non-abelian extension of Out(h) by Inn(h), then isomorphism classes of diagonal non-abelian extensions of g by h correspond bijectively to Hom-Lie algebra homomorphisms from g to Out(h).

Proof. Let (g ⊕ h, [•,•]_{(ρ,ω)}, φ) be a diagonal non-abelian extension of g by h given by Equations (15) and (16). By Equation (18), we have ρ_x ∈ Der(h). Let π : Der(h) → Out(h) be the quotient map. We denote the induced Hom-Lie algebra structure on Out(h) by [•,•]_{φ_h} and Ad_{φ_h}. Hence we can define ρ̄ = π ∘ ρ. By Equation (17), ρ̄ is compatible with the twist maps for all x ∈ g; by Equation (20), ρ̄ preserves the brackets. Thus, ρ̄ is a Hom-Lie algebra homomorphism from g to Out(h).
Let (g ⊕ h, [•,•]_{(ρ,ω)}, φ) and (g ⊕ h, [•,•]_{(ρ′,ω′)}, φ) be isomorphic diagonal non-abelian extensions of g by h. By Proposition 2, the two induced homomorphisms π ∘ ρ and π ∘ ρ′ coincide. Thus, we obtain that isomorphic diagonal non-abelian extensions of g by h correspond to the same Hom-Lie algebra homomorphism from g to Out(h).

Obstruction of Existence of Diagonal Non-Abelian Extensions of Hom-Lie Algebras

In this section, we always assume that the short exact sequences of Hom-Lie algebra morphisms under consideration are diagonal non-abelian extensions. Given a Hom-Lie algebra morphism ρ̄ : g → Out(h), where Cen(h) is not assumed to be zero, we consider the obstruction of existence of non-abelian extensions. By choosing a diagonal section s of π : Der(h) → Out(h), we can still define ρ by Equation (26) such that Equation (27) holds. Moreover, we can choose a linear map ω : g ∧ g → h such that Equations (28) and (29) hold. Thus, (g ⊕ h, [•,•]_{(ρ,ω)}, φ) is a diagonal non-abelian extension of g by h if and only if Equation (30) holds. Let d_ρ be the formal coboundary operator associated to ρ. Then (g ⊕ h, [•,•]_{(ρ,ω)}, φ) is a diagonal non-abelian extension of g by h if and only if d_ρω = 0.

Definition 7. Let ρ̄ : g → Out(h) be a Hom-Lie algebra morphism. We call ρ̄ an extensible homomorphism if there exist a diagonal section s of π : Der(h) → Out(h) and a linear map ω : g ∧ g → h such that Equations (27), (28) and (33) hold.

For all u ∈ Cen(h), it is clear that φ_h(u) ∈ Cen(h). For v ∈ h, a direct computation shows that [ρ_x(u), v]_h = 0. Thus, we have ρ_x(u) ∈ Cen(h). Therefore, we can define ρ̃ : g → gl(Cen(h)) by ρ̃_x = ρ_x|_{Cen(h)}. By Equations (27) and (28), we obtain that ρ̃ is a Hom-Lie morphism from g to gl(Cen(h)). By Theorem 1, ρ̃ is a representation of (g, [•,•]_g, φ_g) on Cen(h) with respect to φ_h|_{Cen(h)}. By Equation (31), we deduce that different diagonal sections of π give the same representation of g on Cen(h) with respect to φ_h|_{Cen(h)}. In the sequel, we always assume that ρ̃ is the representation of g on Cen(h) with respect to φ_h|_{Cen(h)} induced by ρ̄. By Equation (30), we have d_ρω ∈ C³(g; Cen(h)). Moreover, we have the following lemma.

Lemma 3. d_ρω is a 3-cocycle on g with coefficients in Cen(h), and the cohomology class [d_ρω] does not depend on the choices of the diagonal section s of π : Der(h) → Out(h) and of ω.

Proof. For all x, y, z, t ∈ g, by straightforward computations using the definition of d_ρω, there are 60 terms on the right-hand side of the resulting formula. Fortunately, these terms cancel pairwise. By Equations (27)-(29), and because φ_g is an algebra morphism, the formula reduces to a sum of brackets of the form [ω(φ_g(x), φ_g(y)), ω(φ_g(z), ·)]_h, which cancels. Thus, we obtain d_ρω ∈ Z³(g; ρ̃).

Now we check that the cohomology class [d_ρω] does not depend on the choices of the diagonal section s of π : Der(h) → Out(h) and of ω. Let s′ be another diagonal section of π; we have ρ′_x ∈ Der(h), and we choose ω′ such that Equations (27) and (28) hold for ρ′, ω′. Because s and s′ are diagonal sections of π, there is a linear map b : g → h measuring their difference, and we define ω* from ω and b. By straightforward computations, we obtain that Equations (28) and (29) hold for ρ′, ω*. For all x, y, z ∈ g, the difference of the two coboundaries decomposes as

A(x, y, z) + A(y, z, x) + A(z, x, y) + B(x, y, z) + B(y, z, x) + B(z, x, y) + C(x, y, z)

for certain terms A, B, C. By Equations (27) and (28), we have A(x, y, z) = 0. Because ρ′_{φ_g(x)} is a derivation, we obtain B(x, y, z) = 0. Because b ∘ φ_g = φ_h ∘ b and g, h are Hom-Lie algebras, we obtain C(x, y, z) = 0.
Thus, we have d_ρω = d_{ρ′}ω*. Because Equations (28) and (29) hold for ρ′, ω* and ρ′, ω′, respectively, we have ad_{(ω′−ω*)(x,y)} = 0. Moreover, we have (ω′ − ω*)(x, y) ∈ Cen(h). By Equation (29), we can define τ ∈ C²(g; Cen(h)) by τ(x, y) = (ω′ − ω*)(x, y); then d_{ρ′}ω′ and d_{ρ′}ω* differ by the coboundary d_ρ̃τ, so the cohomology class [d_ρω] is independent of the choices made.

Now we are ready to give the main result in this paper, namely, that the obstruction to a Hom-Lie algebra homomorphism ρ̄ : g → Out(h) being extensible is given by the cohomology class [d_ρω] ∈ H³(g; ρ̃).

Theorem 3. Let ρ̄ : g → Out(h) be a Hom-Lie algebra morphism. Then ρ̄ is an extensible homomorphism if and only if [d_ρω] = 0.

Proof. Let ρ̄ : g → Out(h) be an extensible Hom-Lie algebra morphism. Then we can choose a diagonal section s of π : Der(h) → Out(h) and define ρ by Equation (26). Moreover, we can choose a linear map ω : g ∧ g → h such that Equations (27) and (28) hold. Because ρ̄ is extensible, we have d_ρω = 0, which implies that [d_ρω] = 0. Conversely, if [d_ρω] = 0, then there is σ ∈ C²(g; Cen(h)) with d_ρω = d_ρ̃σ, and we also have d_ρ(ω − σ) = 0. By Proposition 1, we can construct a Hom-Lie algebra (g ⊕ h, [•,•]_{(ρ,ω−σ)}, φ). Therefore, ρ̄ is an extensible morphism. The proof is finished.

The following theorem classifies diagonal non-abelian extensions of g by h once they exist.

Theorem 4. Let ρ̄ : g → Out(h) be an extensible morphism. Then isomorphism classes of diagonal non-abelian extensions of g by h induced by ρ̄ are parameterized by H²(g; ρ̃).

Conclusions

In this paper, we use a cohomological approach to study diagonal non-abelian extensions of regular Hom-Lie algebras. First, for the case that Cen(h) = 0, we classify diagonal non-abelian extensions of a regular Hom-Lie algebra g by a regular Hom-Lie algebra h by Hom-Lie algebra morphisms from g to the Hom-Lie algebra Out(h) of outer derivations. More precisely, we show that under the condition Cen(h) = 0, isomorphism classes of diagonal non-abelian extensions of a regular Hom-Lie algebra g by a regular Hom-Lie algebra h correspond one-to-one to Hom-Lie algebra morphisms from g to Out(h). Then, for the general case, isomorphic diagonal non-abelian extensions of a regular Hom-Lie algebra g by h give rise to the same morphism from g to Out(h). However, given a morphism from g to Out(h), there is an obstruction to the existence of a diagonal non-abelian extension of regular Hom-Lie algebras that induces the given morphism. We show that the obstruction is given by a cohomology class in the third cohomology group. More precisely, if the cohomology class is trivial, then there is a diagonal non-abelian extension of regular Hom-Lie algebras inducing the given morphism; in this case, we say that the given morphism is extensible. In particular, if the third cohomology group is trivial, then every Hom-Lie algebra morphism from g to Out(h) is extensible.
4,264.4
2017-12-05T00:00:00.000
[ "Mathematics" ]
A hypoplastic constitutive model for debris materials Debris flow is a very common and destructive natural hazard in mountainous regions. Pore water pressure is the major triggering factor in the initiation of debris flow. Excessive pore water pressure is also observed during the runout and deposition of debris flow. Debris materials are normally treated as solid particle–viscous fluid mixture in the constitutive modeling. A suitable constitutive model which can capture the solid-like and fluid-like behavior of solid–fluid mixture should have the capability to describe the developing of pore water pressure (or effective stresses) in the initiation stage and determine the residual effective stresses exactly. In this paper, a constitutive model of debris materials is developed based on a framework where a static portion for the frictional behavior and a dynamic portion for the viscous behavior are combined. The frictional behavior is described by a hypoplastic model with critical state for granular materials. The model performance is demonstrated by simulating undrained simple shear tests of saturated sand, which are particularly relevant for the initiation of debris flows. The partial and full liquefaction of saturated granular material under undrained condition is reproduced by the hypoplastic model. The viscous behavior is described by the tensor form of a modified Bagnold’s theory for solid–fluid suspension, in which the drag force of the interstitial fluid and the particle collisions are considered. The complete model by combining the static and dynamic parts is used to simulate two annular shear tests. The predicted residual strength in the quasi-static stage combined with the stresses in the flowing stage agrees well with the experimental data. The non-quadratic dependence between the stresses and the shear rate in the slow shear stage for the relatively dense specimens is captured. Introduction Debris flow is a very common natural hazard in the mountainous areas of many countries. It represents the gravity-driven flow of a mixture of various sizes of sediment, water and air, down a steep slope, often initiated by heavy rainfall and landslides [17]. The highest velocity of debris flows can be more than 30 m/s; however, typical velocities are less than 10 m/s [24]. The fast debris flows may cause significant erosion, while increasing the sediment charge and destructive potential. Such mass flows cause serious casualties and property losses in many countries around the world. The initiation mechanisms of debris flow and the predicted possible velocity are essential information for the design of protective measures. Numerical analysis plays an important role to obtain this information, where a competent constitutive model for debris materials is required. The main factors influencing the initiation of debris flow are, among others, the topography, material parameters, water and the initial stress state in the affected slope [22]. Earth slopes with inclinations ranging from 26 to 45 have been generally identified as most prone to debris flow initiation [40]. The volume fraction of debris materials, defined as the ratio between the solid volume and the total volume of a representative volume element, varies between about 30 and 65 %. The water from heavy rainfall or snow melting makes the unconsolidated superficial deposit on a steep hillside saturated, thereby leading to a reduced shear strength due to the decreasing of matric suction, and further triggering a landslide. 
Such an upland landslide may develop into a hillside debris flow when the water in the sliding mass cannot be discharged quickly and therefore gives rise to excessive pore water pressure. In this case, based on the principles of soil mechanics, the effective stresses between solid particles will decrease to cause the reduction or complete loss of shear strength. Upon initiation of debris flow, debris material shows fluid-like behavior. As concluded by Iverson [19], debris flow can be mobilized by three processes: (i) widespread Coulomb failure along a rupture surface within a saturated soil or sediment mass, (ii) partial or complete liquefaction of a sliding mass due to high pore-fluid pressure and (iii) conversion of landslide translational energy to internal vibrational energy. In these processes, the development of high pore water pressure is likely the most significant triggering factor. In addition, experimental observation [18] shows that an almost constant excess pore water pressure persists during the runout and depositing of debris flows. Thus, a suitable constitutive model which can capture the solid-like behavior before failure and the fluid-like behavior after failure should has the capability to describe the developing of pore water pressure (or effective stresses) in the initiation stage and determine the residual effective stresses exactly. Some important material parameters such as solid volume fraction (or void ratio in soil mechanics) and the internal friction coefficient need to be taken into account. Actually, debris materials are normally simplified as solid spherical particle-viscous fluid mixture and treated as a fluid continuum with microstructural effect in the constitutive modeling [10,11]. In most conventional models, constitutive equations for the static and dynamic regimes are formulated and applied separately, such as the models for the solid-like behaviors of granular materials [8,27,41,43] and that for the fluid-like behaviors [1,6,21]. Although some models for granular-fluid flows have taken the stress state of the quasi-static stage into account, the employed theories for the static regime, such as Mohr-Coulomb criterion [34] and extended von Mises yield criterion [32], still fail to determine the changing of pore water pressure from the deformation directly. Hypoplasticity was proposed as an alternative to plasticity for the description of solid-like behavior of granular materials [41,43]. The distinctive features of hypoplasticity are its simple formulation and capacity to capture some salient features of granular materials, such as non-linearity, dilatancy and yielding [42]. It may be the suitable choice for the description of solid-like behavior of debris materials. In this paper, a framework which consists of a static portion for the frictional behavior and a dynamic portion for the viscous behavior is introduced at first. Bagnold's constitutive model for a gravity-free suspension [1] is chosen as the dynamic portion in the framework. Then, the applicability of a specific hypoplastic model in the description of granular-fluid flows is studied by using this model to simulate the undrained simple shear test of saturated granular materials as shown in Fig. 1, which is in analogy to the initiation of a debris flow. 
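In an undrained simple shear test, the sample height and volume are held fixed, so the motion is isochoric planar shearing. As a point of reference for the simulations below, the corresponding strain rate tensor can be written as follows (a standard kinematic statement, not taken verbatim from the original figure):

\mathbf{D} = \frac{\dot\gamma}{2}\begin{pmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}, \qquad \operatorname{tr}\mathbf{D} = 0,

where the shear rate is γ̇ = dU/dy; the vanishing trace expresses the constant-volume (undrained) constraint.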
The dynamic model, which is modified by fitting Bagnold's experimental data and taking a parameter termed the critical solid volume fraction into account [15], is combined with the hypoplastic portion to obtain a new complete constitutive model for debris flows. The performance of the proposed model is demonstrated by some element tests in which the new model is used to simulate two annular shear tests with different materials and apparatus.

The framework of constitutive modeling for debris materials

As stated in the preceding section, debris materials show solid-like behavior before failure and fluid-like behavior after failure. This particular phenomenon cannot be modeled only within the framework of statics or dynamics. An applicable model may need to combine a static and a dynamic portion and let the transition from solid-like to fluid-like behavior emerge as an outcome [42]. In our former work [15], based on the velocity analysis of dry sand flow [4,26] and the force balance of an inclined plane supporting a uniform layer of sand-water mixture beneath a uniform layer of pure water [34], a framework for the constitutive model of debris materials was developed in the following form:

P = P_0 + P_i,  T = T_0 + T_v + T_i,  (1)

(Fig. 1: Schematic of undrained simple shear tests.)

where P and T are the normal and shear stresses for the solid phase; P_0 and T_0 are the normal and shear stresses caused by prolonged contact between particles; and T_v, T_i and P_i are slightly modified Bagnold constitutive relations for a gravity-free dispersion of solid spheres sheared in Newtonian liquids. The stresses P_0 and T_0 are the static portion of the framework and satisfy a generalized Mohr-Coulomb type yield criterion [9,31],

T_0 = P_0 tan φ,

where φ denotes the residual friction angle after failure. They correspond to the residual stresses of debris materials in the quasi-static stage.

For a simple shearing, the shear stress in the so-called macro-viscous regime, T_v, is linear in the shear rate,

T_v = K_1 (dU/dy),

where the coefficient K_1 is related to the material properties; U is the shear velocity as shown in Fig. 1 and dU/dy denotes the shear rate varying along the depth direction; C is the mean solid volume fraction and C_c is the maximum solid volume fraction that still allows full shearing to occur; n is a fitting parameter and μ is the dynamic viscosity of the interstitial fluid; λ is a dimensionless parameter termed the linear concentration. For perfectly spherical particles, λ is defined as

λ = d/s = 1/((C_∞/C)^{1/3} − 1),

where s is the mean free distance between two particles and C_∞ is the asymptotic limit of the maximum measured solid volume fraction as the container dimensions approach infinity, which is also related to the size of the particles [16].

The shear stress in the 'grain-inertia' regime, T_i, is quadratic in the shear rate,

T_i = K_2 ρ_s (λd)² (dU/dy)²,

in which K_2 is also a coefficient related to the material properties and includes a correction factor based on the experimental results in [1]; ρ_s and d denote the material density and mean diameter of the grains, respectively. The tangent of the angle α_i corresponds to the ratio between the shear and normal stress in the 'grain-inertia' regime; therefore, the normal stress in the 'grain-inertia' regime is

P_i = T_i / tan α_i.

T_v, T_i and P_i are termed the dynamic portion of the framework (1). This framework implies that the contributions of contact friction, fluid viscosity and particle collisions coexist in the entire flow process.
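Since the displayed expressions for K_1 and K_2 were partly lost, the following Python sketch only illustrates the scalings described above: a linear (macro-viscous) term and a quadratic (grain-inertia) term in the shear rate. The prefactors k1 and k2 are placeholders standing in for the fitted coefficients K_1 and K_2; Bagnold's classical values are used as defaults, not the paper's calibrated ones.

import numpy as np

def linear_concentration(C, C_inf):
    """Bagnold's linear concentration: lambda = 1 / ((C_inf/C)**(1/3) - 1)."""
    return 1.0 / ((C_inf / C) ** (1.0 / 3.0) - 1.0)

def dynamic_stresses(gamma_dot, C, C_inf, mu, rho_s, d, alpha_i, k1=2.25, k2=0.013):
    """Macro-viscous (linear) and grain-inertia (quadratic) stresses.

    k1, k2 are placeholder prefactors (Bagnold's classical fits), not the
    modified coefficients of the paper.
    """
    lam = linear_concentration(C, C_inf)
    T_v = k1 * lam ** 1.5 * mu * gamma_dot            # linear in shear rate
    T_i = k2 * rho_s * (lam * d) ** 2 * gamma_dot ** 2  # quadratic in shear rate
    P_i = T_i / np.tan(alpha_i)                        # grain-inertia normal stress
    return T_v, T_i, P_i

# Example: water-saturated 1 mm grains at moderate concentration
T_v, T_i, P_i = dynamic_stresses(gamma_dot=50.0, C=0.52, C_inf=0.64,
                                 mu=1e-3, rho_s=2650.0, d=1e-3,
                                 alpha_i=np.radians(30.0))

For slow shearing the linear term dominates the quadratic one, and vice versa at high shear rates, which is exactly the smooth transition between the macro-viscous and grain-inertia regimes discussed in the text.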
Bagnold's tests [1] with two different interstitial fluids of different viscosities but the same density show significant differences in the slow shear stage and tend to the same stress-strain relation when the shear velocity is large enough. In the rapid shear stage, the particle collisions become very fierce; the bulk behavior and the dissipation of the flow kinetic energy are dominated by the inelastic and frictional particle collisions. An impact between two particles in a viscous liquid approximates a dry impact, since the fluid effect is insignificant in comparison with the collision force in this stage [45]. Therefore, it is the linear term T_v, rather than the quadratic terms T_i and P_i, that makes models based on the framework (1) capable of distinguishing granular-fluid mixtures with different interstitial fluids. Dry granular flow can be treated as a particular case where air is the interstitial fluid.

For a free-surface dry granular flow as shown in Fig. 2, the viscous term T_v is normally much less than the residual strength T_0 at the beginning of the flow, since the shear rate is very small in this stage. It is also negligible in the fast shearing stage, since the viscous effect of air is insignificant compared to the frictional and collisional effects of the particles. Thus, the framework (1) reduces to the following form in the case of free-surface dry granular flow:

P = P_0 + P_i,  T = T_0 + T_i.  (10)

As stated in the literature [26], the relation (6) predicts a steady uniform flow only when the slope angle θ is equal to the angle α_i. However, experimental results [3] show that such a steady flow can be obtained not only at a single slope but over a slope range. This experimental observation can be predicted by the reduced framework (10) [25,31]. From the force balance when a steady uniform flow is obtained in a free-surface dry granular flow, we have

T = ρ g h sin θ,  P = ρ g h cos θ,  (11)

where ρ is the bulk density, g is the gravity acceleration, and h is the depth along the y axis, which is normal to the flow bed. Then we get the stress ratio

T/P = tan θ.  (12)

Let us assume that α_i is greater than the residual friction angle φ, which is consistent with the experimental observations of dry granular flows [30]. The normal stress P_i is zero in the critical state of triggering the flow, since the flow velocity is almost null at that time point. Thus, from (11) and (12), we obtain

P_0 = ρ g h cos θ_1  (13)

and θ_1 = φ, where θ_1 is the critical inclination at which the granular material starts flowing. With increasing inclination, another critical state is reached: in this state, the component of gravity perpendicular to the flowing bed is totally supported by P_i, since the flow velocity is large enough at this inclination. From (12), we get θ_2 = α_i, where θ_2 is the maximum inclination for which Equation (12) holds. This indicates that the framework (10), in which the stresses are divided into a static portion generated by prolonged contact of particles and a dynamic portion produced by particle collisions, can predict steady uniform flows over a slope range θ ∈ [φ, α_i]. By taking the effect of the interstitial fluid into account, a constitutive model developed within the complete framework (1) can describe not only dry granular flows but also granular-fluid flows.

In the above analysis, the simple formula for the initial value of P_0, (13), is only applicable to free-surface dry granular flows. As pointed out in the preceding section, debris materials are saturated solid-fluid mixtures which will be partially or fully liquefied in the initiation of debris flows.
The normal stress P_0 is then the effective stress, obtained by subtracting the excess pore water pressure from the total normal stress. A proper theory is required to capture the partial or complete liquefaction, and further to determine the residual strengths P_0 and T_0. As introduced before, hypoplasticity may be a suitable choice for describing the solid-like behavior of debris materials. In the following section, we study the capability of a specific hypoplastic model to capture the main properties of debris materials in the quasi-static stage.

The applicability of hypoplastic models for debris materials

Hypoplastic constitutive equations are based on nonlinear tensorial functions with the major advantages of simple formulation and few parameters. Two hypoplastic models, the one developed by Wu et al. [41] and the one by Gudehus [13], are compared in the selection of the static portion for the framework (1). In the more recent model by Gudehus [13], mainly the stiffness is modified, by two factors f_b and f_e that take into account the influence of the stress state and the density, respectively. In modeling debris flow, however, the strength is very important while the stiffness is not. Moreover, his model makes use of exponential functions for the dependence of the critical void ratio and the minimum void ratio on pressure. For each function, the number of parameters reduces from 3 to 2; however, there are only few data in the literature for these exponential functions. Therefore, in this paper, we embark on the model proposed by Wu et al. [41], which is the first hypoplastic model with critical state, to verify that, by employing an appropriate hypoplastic model as the static portion, the combined model based on the framework (1) can furnish a complete and quantitative description of the stress state of debris materials from the quasi-static stage to the fast flow stage.

It is worth mentioning that the hypoplastic model with critical state is just one of the choices for describing the initiation of debris flows. Recently, some improved models have become available, e.g., [12,23,35], which are developed from some widely used versions of the hypoplastic model [28,37] and aim to improve the dependence of stiffness on pressure and density. However, the capability of these models to capture the phenomenon of liquefaction, as well as their stability in cases of large deformation or low confining pressure, still needs to be verified. A more concise hypoplastic model with the aforementioned capability and stability can be employed to determine the stress state in the quasi-static stage of debris materials.

The hypoplastic model with critical state is an improvement of a basic hypoplastic model for sand developed by Wu and Bauer [43]:

T̊_h = c_1 (tr T_h) D + c_2 (tr(T_h D)/tr T_h) T_h + c_3 (T_h²/tr T_h) ‖D‖ + c_4 (T_h*²/tr T_h) ‖D‖,  (16)

where c_i (i = 1, ..., 4) are dimensionless material parameters; T_h and D denote the stress tensor and the strain rate tensor, respectively; T_h* is the deviatoric stress tensor, T_h* = T_h − (1/3)(tr T_h) 1; ‖D‖ = √(tr(D²)) stands for the Euclidean norm; and 1 is the unit tensor. The Jaumann stress rate tensor T̊_h in (16) is defined by

T̊_h = Ṫ_h − W T_h + T_h W,

where Ṫ_h is the stress rate tensor (the material time derivative of T_h) and W denotes the spin tensor. The hypoplastic model (16) possesses a simple mathematical formulation and contains only four material parameters, c_1 ~ c_4. The specific determination process for c_1 ~ c_4 can be found in the literature [5,41,43].
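To make the structure of (16) concrete, the following Python sketch integrates the rate equation with an explicit Euler scheme in simple shear. It is an illustration under stated assumptions only: the spin (Jaumann) contribution is omitted for brevity, the constants are those listed later in Table 1, and no claim is made that this reproduces the published simulations.

import numpy as np

def hypoplastic_rate(T, D, c):
    """Wu-Bauer-type hypoplastic stress rate (objective part only).

    T : 3x3 Cauchy stress, D : 3x3 strain rate tensor,
    c : (c1, c2, c3, c4) dimensionless constants.
    """
    trT = np.trace(T)
    Ts = T - trT / 3.0 * np.eye(3)        # deviatoric stress
    nD = np.sqrt(np.trace(D @ D))         # Euclidean norm of D
    return (c[0] * trT * D
            + c[1] * np.trace(T @ D) / trT * T
            + c[2] * (T @ T) / trT * nD
            + c[3] * (Ts @ Ts) / trT * nD)

c = (-50.0, -629.6, -629.6, 1220.8)       # constants from Table 1
T = -100e3 * np.eye(3)                    # 100 kPa isotropic compression (Pa)
gamma_dot, dt = 1e-3, 1.0                 # slow shearing (quasi-static stage)
D = 0.5 * gamma_dot * np.array([[0.0, 1.0, 0.0],
                                [1.0, 0.0, 0.0],
                                [0.0, 0.0, 0.0]])
for _ in range(400):                      # shear strain up to about 0.4
    T = T + dt * hypoplastic_rate(T, D, c)

Because the norm ‖D‖ enters nonlinearly, loading and unloading produce different responses, which is the incremental nonlinearity that distinguishes hypoplasticity from hypoelasticity.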
Two stress states, the initial hydrostatic state and the state at failure, are chosen for the identification of c_1 ~ c_4 based on a triaxial test with constant confining pressure, i.e., Ṫ_h(2,2) = Ṫ_h(3,3) = 0. The following parameters are then introduced: the stress ratio R = T_h(1,1)/T_h(3,3); the initial tangent modulus E_i; the initial Poisson ratio ν_i = [D(3,3)/D(1,1)]_{R=1}; the failure stress ratio R_f; and the failure Poisson ratio ν_f = [D(3,3)/D(1,1)]_{R=R_f}. The failure stress ratio R_f and the failure Poisson ratio ν_f are related to the friction angle φ′ and the dilatancy angle ψ, respectively, through standard relations [43].

Taking the four material constants c_1 ~ c_4 as unknowns, a system of four linear equations is obtained by substituting the corresponding stresses and strain rates of the two stress states into the model (16). The material constants are therefore determined as functions of well-established parameters of soil mechanics: the initial tangent modulus E_i, the initial Poisson ratio ν_i, the friction angle φ′ and the dilatancy angle ψ. It should be pointed out that these parameters refer to a specific confining pressure; all sets of material constants used in this paper are obtained with a confining pressure T_h(3,3) = 100 kPa. In addition, the deviatoric loading in the initial hydrostatic state is considered to be zero, i.e., the initial Poisson ratio ν_i = 0.

By taking the effect of void ratio and stress level into account, the model (16) was slightly modified to the form (21) [41], in which the nonlinear part of (16) is scaled by a factor I_e called the density function. Here a is a material parameter related to the stress level, the density function involves the modified relative density, e is the void ratio, and e_min and e_crt are the minimum and the critical void ratio, respectively. The effect of void ratio and stress level on the behavior of granular materials is taken into account in the model (21) through the following expressions:

e_crt = p_1 + p_2 exp(p_3 |tr T_h|)  (24)

and

a = q_1 + q_2 exp(q_3 |tr T_h|),  (25)

where p_i (i = 1, ..., 3) and q_i (i = 1, ..., 3) are material parameters that can be determined by fitting the experimental data of drained triaxial tests under different confining pressures; |·| denotes the absolute value. It has been shown that the model (21) is applicable to both initially and fully developed plastic deformation of granular materials under drained or undrained conditions [41,43]. It reduces to the original model (16) when the void ratio e is equal to the critical value e_crt, from (22) and (23). This means that, for the same material, the same constants c_1 ~ c_4 are obtained for the original and the extended model in the case e = e_crt. Thus, the material constants in the model (21) can be determined in the same way as for (16). The dilatancy angle ψ is equal to zero, since there is no volume deformation in this case [44].

Regarding the material parameters p_i (i = 1, ..., 3) and q_i (i = 1, ..., 3), some theoretical and experimental analyses are presented in [41]. p_1 is the critical void ratio when the confining pressure approaches infinity, since p_3 is negative; the value of p_1 should be close to the minimum void ratio under a high confining pressure. For the case of zero confining pressure, the critical void ratio is equal to p_1 + p_2, which should be close to the maximum void ratio measured under very low confining pressure. q_1 is assumed to be always equal to 1, and q_3 is a negative value.
For |tr T_h| → ∞, the difference between dense and loose packing tends to disappear, since the parameter a → 1. Based on the numerical parametric study [41], q_2 is suggested to lie in the range (−0.3, 0.0). p_3 and q_3 for quartz sand are assumed to be −0.0001 kPa⁻¹. In the case of very low confining pressure, such as the state of liquefaction, relatively higher values of q_2, p_3 and q_3 may be needed to keep I_e sensitive to the stress level. The hypoplastic model (21) may thus be a proper choice for describing the shear softening (liquefaction) and the residual strength at the beginning of a debris flow.

During debris flow, the material is subjected to large shear deformation. For developing and evaluating constitutive models, the planar simple shear motion is particularly relevant [14]. Therefore, we verify the applicability of the hypoplastic model (21) to the simulation of debris flow initiation by using this model to reproduce typical experimental results of granular materials in undrained simple shear tests. As presented in the literature [7,46], saturated sand specimens with different initial void ratios demonstrate three types of stress-strain behavior in undrained simple shear tests, as indicated in Fig. 3: (i) dense specimens have a tendency to dilate and show shear hardening, finally reaching an ultimate steady state (USS); (ii) very loose specimens demonstrate shear softening and attain a constant residual strength or complete liquefaction in the critical steady state (CSS); (iii) specimens with medium void ratio first soften, then harden, and also reach an ultimate steady state [47]. Shear softening is considered to be the main mechanism in the mobilization of debris flows.

We now intend to reproduce these three types of stress-strain behavior in element tests. The experimental results are reproduced qualitatively rather than precisely, because some important material parameters are not presented in the literature [46]. In order to obtain the material constants c_1 ~ c_4 for sand in the critical state with I_e = 1, the initial tangent modulus E_i is determined approximately by an empirical relation [20,36] in which P_a is the atmospheric pressure (101.3 kPa) and σ_33 is the effective confining stress, given as 100 kPa in the experiments. Thus, the initial tangent modulus is approximately 15 MPa. The friction angle φ′ is assumed equal to a relatively small value, 25°, for saturated loose sand in the critical state with e = e_crt. Both the initial Poisson ratio ν_i and the dilatancy angle ψ are assumed to be 0, as stated before. The determined material constants for the model (21) are presented in Table 1.

The three types of stress-strain behavior are reproduced as shown in Fig. 4 when the values in Table 2 are employed for p_i and q_i in the relations (24) and (25). It is indicated that, when hardening arises, the increase of the stress level reduces the critical void ratio e_crt and increases the parameter a; both changes increase the density function I_e and thus limit the development of hardening. Conversely, when softening occurs, I_e decreases to restrict softening and liquefaction. Due to this regulatory function of I_e, the model (21) can describe the shear softening and the residual strength of very loose specimens. As shown in Fig. 5, the normal stresses σ_ii (i = 1, 2, 3) of the very loose specimen with e = 0.876 tend to become isotropic when the shear strain is large enough, regardless of the initial stress state.
The isotropic normal stress in the large deformation stage corresponds to the aforementioned pressure P_0.

A new constitutive model for debris materials

Based on the above analysis, the hypoplastic model (21) and the tensor form of the modified Bagnold's model are employed as the static and dynamic portions, respectively, of the new constitutive model. The structure of the new model is the sum of the two portions (Equation (27)). In our former work on the constitutive model of granular-fluid flows [15], the three-dimensional form of the dynamic portion was obtained based on a simple model structure for describing non-Newtonian fluids (see, for example, [38]). The general three-dimensional form of the dynamic portion is Equation (28), where

D* = D − (1/3)(tr D) 1

is the strain rate deviator tensor and II_{D*} is the second invariant of D*.

(Table 1: Material constants for the model (21) in the simulation of the experiments in [46]: c_1 = −50.0, c_2 = −629.6, c_3 = −629.6, c_4 = 1220.8.)
(Fig. 4: Simulation results of (21) for saturated sand with different initial void ratios in undrained simple shear tests: (a) shear strain versus shear stress; (b) mean principal stress versus shear stress.)
(Table 2: Parameters for e_crt and a in the simulation of the experiments in [46].)

It can easily be shown that, for a simple shear flow with strain rate tensor of the form

D = (1/2)(dU/dy) [[0, 1, 0], [1, 0, 0], [0, 0, 0]],

the dynamic stress (28) reduces to the dynamic portion of the framework (1). From (27), the concrete model for debris materials is determined as Equation (32), in which Ṫ_h is determined by (21).

The structure of the new model is demonstrated by simulating an undrained simple shearing flow. As shown in Fig. 6, a static portion obtained by the hypoplastic model (21) is combined with a dynamic portion to obtain the total effective stress. It needs to be mentioned that the static portion, T_h, is rate independent: it varies due to the accumulation of shear strain rather than due to changes of the shear rate. By merging it with the dynamic portion, the total effective stress (32) becomes rate dependent. As shown in Fig. 5, the normal stresses reach a residual constant when the shear strain is approximately 0.4. This process takes place at very small shear velocity in the so-called quasi-static stage. Thus, in the simulation, the shear rate must be kept small before the failure of the granular-fluid mixture, so that the static portion is the dominant part of the total effective stress T; the static portion should be much greater than the dynamic portion at point A in Fig. 6. One approach to meet this requirement in numerical calculations is to use a small shear strain acceleration and to increase the time steps for the stage before failure.

It is worth mentioning that Wu [42] developed a rate-form framework by combining a hypoplastic model and a rate-dependent dynamic model, T̊ = T̊_h + T̊_d (Equation (33)), where T̊ is the total Jaumann stress rate tensor and T̊_d is the dynamic part of the Jaumann stress rate tensor. Models developed within this framework may have the capability to account for the different behaviors under loading and unloading. However, the Jaumann strain acceleration tensor makes the implementation of these models in some numerical methods more difficult. It will be an interesting exploration to solve this problem in our future work.

Performance of the proposed model

In this section, the new model, (32), is used to predict the stress-strain relations of granular-fluid flows with different materials and experimental apparatus in some element tests.
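Before turning to the element tests, the assembly of Equation (32) can be sketched in a few lines of Python. This is an illustration under stated assumptions: the dynamic portion is written as a generalized-Newtonian function of the strain rate deviator, as the form of (28) suggests, and mu_eff and coll_coeff are placeholder coefficients standing in for the calibrated viscous and collisional terms, not the paper's parameters.

import numpy as np

def dynamic_portion(D, mu_eff, coll_coeff):
    """Generalized-Newtonian dynamic stress: a linear plus a quadratic
    contribution in the second invariant of the strain rate deviator
    (illustrative form only)."""
    Dstar = D - np.trace(D) / 3.0 * np.eye(3)      # strain rate deviator
    II = np.sqrt(0.5 * np.trace(Dstar @ Dstar))    # a second-invariant measure
    return 2.0 * (mu_eff + coll_coeff * II) * Dstar

def total_effective_stress(T_static, D, mu_eff, coll_coeff):
    """Equation (32) in outline: total = static (hypoplastic) + dynamic."""
    return T_static + dynamic_portion(D, mu_eff, coll_coeff)

At small shear rates the dynamic term is negligible and the hypoplastic (rate-independent) portion dominates, while at high shear rates the quadratic contribution takes over, mirroring the quasi-static-to-flowing transition described above.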
The experimental data of two annular shear tests, treated as undrained simple shear tests, are employed to verify the applicability of the new model. In our former work [15], these two experiments were also simulated by a constitutive model which cannot capture the shear softening of granular-fluid materials in the quasi-static stage. Those earlier simulation results can be used as a control group to highlight the function of the hypoplastic portion in the new model.

Dry granular materials

The experimental data of dry granular materials sheared in an annular shear cell were reported by Savage and Sayed [33]. The data for 1.0 mm spherical polystyrene beads are selected for the element tests. The loads applied by the upper disk, normal to the flow surface, range from 100 to 1500 N/m². By checking the measured normal stress for the 1.0 mm beads, we assume that the initial confining pressure of an element at the upper surface of the specimen is around 500 N/m². The exact value of C_∞ was not reported in the literature [33] and is assumed here equal to 0.64, a typical value for monosized spheres [2,16]. Thus, the corresponding minimum void ratio is determined to be 0.563. The critical volume fraction C_c is approximately 0.62 [34]. The internal friction angle φ′ of the 1.0 mm spherical beads is 23°, and the initial tangent modulus E_i is assumed equal to 15 MPa, a typical value for loose granular materials at a confining pressure of 100 kPa. Based on the parameter identification introduced in Sect. 3, the material constants c_1 ~ c_4 and the parameters for the density function I_e are determined and listed in Table 3. The parameters for the dynamic portion are listed in Table 4.

As shown in Fig. 7, the predicted results are in good agreement with the experimental data for different solid volume fractions when typical values are employed for the unstated parameters. The non-quadratic dependence between the stresses and the strain rate in the slow shear stage for the samples with C = 0.524 is captured by the new model. In the experiments [33], the shear velocity was adjusted to keep the height constant, thereby keeping the volume of the samples unchanged; this is equivalent to the undrained condition in tests on saturated granular materials. The mean effective stress decreases from the initial confining pressure to a residual value or to zero, to offset the tendency of volume compression in the quasi-static stage of a very loose granular-fluid mixture. The residual normal and shear stresses, corresponding to the stresses P_0 and T_0 in the framework (1), are determined by the hypoplastic portion and presented in Table 5. Only the test with C = 0.524 demonstrates residual strength. This is consistent with the experimental observation [46] that granular materials are fully liquefied when the initial void ratio exceeds a threshold value. For the looser specimens with C = 0.504, 0.483 and 0.461, the stress-strain rate curves in the rapid shear stage show a slope of about 2 in logarithmic coordinates. This means that the linear term T_v, which characterizes the effect of the interstitial fluid, is insignificant, while the quadratic law T_i is dominant in this case, as analyzed in Sect. 2. This demonstrates that the proposed model (32) can describe the shear softening of dry granular materials in the quasi-static stage and the stress-shear rate relation throughout the shear process, from the quasi-static stage to the fast shearing stage.
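The parameter conversions used here follow directly from the definitions: with C the solid volume fraction, the void ratio is e = (1 − C)/C, so C_∞ = 0.64 gives e_min ≈ 0.563 as stated above. A short check in Python:

def void_ratio(C):
    """Void ratio from solid volume fraction: e = (1 - C) / C."""
    return (1.0 - C) / C

print(void_ratio(0.64))   # 0.5625, i.e. ~0.563 as quoted in the text
print(void_ratio(0.61))   # ~0.639, cf. the 0.64 quoted for the 1.85 mm particles below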
Granular-water mixture

For the case of a granular-fluid mixture, we take Hanes and Inman's experiments [16] on spherical particles sheared in water as an example. The data for particles with a diameter of 1.85 mm, which were stated to be of good quality, are chosen to verify the new model. The maximum measured volume fraction for the 1.85 mm particles was reported to be 0.55; thus, the asymptotic limit C_∞ is presumed to be approximately 0.61 and the minimum void ratio is 0.64. The critical volume fraction is assumed to be 0.52, since partial shearing was observed in the test of the specimen with C = 0.53. The load from the upper disk is approximately 500 N/m². The internal friction angle φ′ is stated to be 28°, and the initial tangent modulus E_i is 15 MPa. The determined material constants c_1 ~ c_4 and the parameters for the density function I_e are presented in Table 6. The parameters of the dynamic portion are listed in Table 7.

The simulation results are shown in Fig. 8. The stress states of the two specimens are reproduced based on the prediction of the residual stresses; the specific values are presented in Table 8. The sample with C = 0.51 retains a residual strength after failure in the undrained simple shearing. The shear stress-shear rate curve for C = 0.49 has a slope of less than 2 in the stage with shear rates between 1 and 10 in logarithmic coordinates. Compared to dry granular flows, the effect of the interstitial fluid in a granular-fluid flow is non-negligible. The slight difference between the slopes of the predicted curves and the experimental data for C = 0.49 in the rapid shear stage implies a nonzero residual strength of this specimen; this may be attributed to the employed parameters of the hypoplastic portion being unsuitable for this case.
Such a test condition is particularly relevant to the initiation mechanism of debris flow. Three types of stress-strain behavior, among which 'liquefaction' is regarded as the main factor of debris flow mobilization, are reproduced by the hypoplastic model. It is shown that the hypoplastic model has the capability to describe the changes of pore water pressure and further to capture the shear softening and hardening behavior of granular-fluid mixtures. Therefore, it is employed as the static portion of the new model. This static part is then combined with the tensor form of the modified Bagnold's dynamic model to obtain a new complete model for the modeling of debris materials from the static to the dynamic state.

The new model is employed to simulate two annular shear tests with dry and water-saturated granular materials. In the case of dry granular flow with constant volume, the hypoplastic portion predicts that only the densest one of the four specimens has residual strength. This implies a non-quadratic dependence between the stresses and the shear rate in the slow flowing stage, which was observed in the experiments. A similar conclusion is also obtained in the case of water-saturated granular flow. Compared to the dry granular flow, the linear term T_v, which characterizes the effect of the interstitial fluid, is non-negligible in the granular-fluid flow. The element test results show that the new model is applicable to the modeling of granular materials with different interstitial fluids. The predicted stress-strain curves agree well with the experimental data.

Further verification is still needed for the new model. It is our intention to implement this model in numerical codes for large deformation, such as SPH and computational fluid dynamics (CFD) codes, to simulate granular-fluid flows in an inclined channel or a rotating drum. As mentioned before, a hypoplastic model developed by Wang and Wu [39] has been implemented in SPH for large deformation analysis [29]; therefore, SPH will be the preferred choice for further verification of the new model. As mentioned before, models in rate form may have the capability to account for the different behaviors under loading and unloading. It will be an interesting exploration to develop a rate-form expression for the dynamic portion in which the loading and unloading processes can be distinguished.
8,653.4
2016-09-19T00:00:00.000
[ "Engineering", "Materials Science" ]
Fast-Response Photodetector Based on Hybrid Bi2Te3/PbS Colloidal Quantum Dots Colloidal quantum dots (CQDs) as photodetector materials have attracted much attention in recent years due to their tunable energy bands, low cost, and solution processability. However, their intrinsically low carrier mobility and three-dimensional (3D) confinement of charges are unsuitable for use in fast-response and highly sensitive photodetectors, hence greatly restricting their application in many fields. Currently, 3D topological insulators, such as bismuth telluride (Bi2Te3), have been employed in high-speed broadband photodetectors due to their narrow bulk bandgap, high carrier mobility, and strong light absorption. In this work, the advantages of topological insulators and CQDs were realized by developing a hybrid Bi2Te3/PbS CQDs photodetector that exhibited a maximum responsivity and detectivity of 18 A/W and 2.1 × 1011 Jones, respectively, with a rise time of 128 μs at 660 nm light illumination. The results indicate that such a photodetector has potential application in the field of fast-response and large-scale integrated optoelectronic devices. Introduction At present, HgCdTe (MCT), InSb, and type-II superlattices (T2SLs) are some of the widely used materials for infrared photodetectors [1]. These materials are usually grown under high-vacuum and high-temperature conditions using complex processes (such as epitaxial growth), hence resulting in high manufacturing cost. Furthermore, many existing photodetectors are required to operate in a relatively low-temperature environment in order to reduce the noise of the detection system, improve the sensitivity of detection, and reduce the influence of the thermal background on the performance of the device. The need for a cooling system would increase the overall size of the detection system and increase power consumption, as well as cost significantly, which often limits its application in the civilian market. Therefore, semiconductor materials that can be manufactured at low cost and exhibit excellent photodetection performance are highly desirable for the development of state-of-the-art photodetector technology. Colloidal quantum dots (CQDs) have many significant advantages when they are used in photodetectors, for example, the optical and electrical properties can be adjusted by regulating the size and shape of the CQDs. Moreover, the nanomaterials can be solutionprocessed, and the device can be easily manufactured at low cost on almost any substrate materials [2]. To date, PbS CQDs are one of the most studied CQDs due to their wellestablished simple synthesis process. PbS CQDs have a large Bohr radius (18 nm) and a wide adjustable energy bandgap (0.6-1.6 eV). The first exciton peak of PbS can be adjusted from ultraviolet to short wavelength infrared. Therefore, PbS CQDs have become one of the most studied quantum dot materials for solar cells [3], ultraviolet and infrared photodetectors [4], and light-emitting diodes [5]. In addition, PbS CQDs demonstrated a high absorption coefficient (e.g., strong absorbability in visible and infrared regions) and good stability in air [6], hence the nanomaterial is ideal for the development of stable photoelectric devices and is a promising quantum dot material for optoelectronic applications [7]. 
However, the low carrier mobility (10 −5 -10 −2 cm 2 ·V −1 ·s −1 ) of PbS CQDs and numerous trap states in the nanomaterial will ease the recombination of the photogenerated carriers before they are being collected, which seriously affects the response speed and performance of the photodetector. Several methods have been reported to improve the performance of quantum dots. One of these methods was using ligand exchange to replace long insulating alkyl chains (e.g., oleic acid) during quantum dot synthesis to improve carrier mobility [8,9] and effectively reduce the defect density [10]. Another method was to combine quantum dots with other materials that exhibit high carrier mobility. This would result in a strong built-in potential, which can effectively improve the transport of carriers, response time and speed of the device [11]. Jeong et al. [12] reported a near-infrared photodetector based on a hybrid graphene/PbS CQD material, which exhibited a fivefold increase in the photocurrent, 22% increase in the rise rate, and 47% increase in the decay rate as compared with a PbS CQD device. In addition, a combination of transition metal disulfides (TMDs) and CQDs has also been reported. Kufer et al. [13] prepared a photoelectric transistor by combining MoS 2 with PbS CQDs using MoS 2 nanosheets as the electron transport layer. The responsivity of the device was higher by several orders of magnitude than photodetectors solely based on PbS CQDs and MoS 2 . Therefore, the combination of CQDs with two-dimensional (2D) materials that exhibit high carrier mobility can provide an effective solution to the slow response speed of CQD-based photodetectors. However, there are some limitations on hybrid 2D/CQD-based photodetectors due to the low light absorption of the 2D materials resulting in a low response rate over a broadband. The use of 3D topological materials can potentially provide a new solution to the low absorption of 2D materials. Since the discovery of the quantum Hall effect, 3D topological insulator materials, such as Bi 2 Te 3 , have attracted much attention due to their unique energy bandgap structure (e.g., an insulating energy gap in the bulk with gapless edge or surface states) [14]. Bi 2 Te 3 has been widely used in the study of broadband photoelectric detection due to its narrow bulk bandgap (0.17 eV) and high carrier mobility (e.g., surface carrier mobility of 5800 cm 2 ·V −1 ·s −1 ) without external influence [15]. In 2016, Wang et al. [16] reported a photovoltaic detector consisting of n-type topological insulator Bi 2 Te 3 thin films grown on p-type silicon substrates, and the device demonstrated good photovoltaic effect over a broadband range from ultraviolet (UV) to near-infrared (NIR). A short-circuit current of 19.2 µA and an open-circuit voltage of 235 mV were achieved under 1000 nm illumination. By taking advantage of the strong bulk bandgap optical absorption and high surface carrier mobility of 3D topological materials, the combination of 3D topological materials with CQDs can offer an excellent solution to the slow response speed of CQDs as well as the low response of 2D materials under a broad spectrum. Presently, there is limited report on photodetectors based on hybrid 3D topological insulating materials and CQDs. Most of the work on 3D topological materials is concerned with the quantum spin Hall effect, and little attention has been paid to its application in the field of photodetectors. 
In this paper, high-quality Bi 2 Te 3 thin films and PbS CQDs with a uniform size distribution were prepared. A heterojunction photodetector consisting of hybrid Bi 2 Te 3 /PbS CQDs in a device structure of indium tin oxide (ITO)/ [6,6]-phenyl-C61-butyric acid methyl ester (PCBM)/PbS/Bi 2 Te 3 /Al was developed and studied in which the advantages of topological insulators and CQDs were realized. The responsivity (R) and detectivity (D*) of the device were 18 A/W and 2.1 × 10 11 Jones, respectively, with a fast response time of 128 µs. Materials ITO grown on a quartz substrate was purchased from Beijing Jinji Aomeng Technology Co., Ltd., Beijing, China. PbS CQDs were synthesized by thermal injection method as reported by Hines Ma et al. [17]. The as-prepared PbS CQDs were dissolved in an noctane solvent with a concentration of 30 mg/mL (n-octane was purchased from Tianjin Zhiyuan Chemical Reagent Co., Ltd., Tianjin, China). PCBM was dissolved in chloroform with a concentration of 100 mg/mL (PCBM and chloroform were purchased from Jilin OLED Material Tech Co., Ltd., Changchun, China. and Chengdu Chron Chemicals Co., Ltd., Chengdu, China, respectively). The Bi 2 Te 3 film was deposited using a magnetronsputtering technique. Electrical contact pads consisting of Al electrodes were evaporated in a vacuum metal evaporator. Bi 2 Te 3 targets (99.99%) and Al slice (99.99%) were all purchased from Zhongnuo Advanced Material (Beijing) Technology Co., Ltd., Beijing, China. Device Fabrication After cleaning and drying the ITO substrate, a layer of PCBM was spin-coated onto the substrate at a rotational speed of 2500 rpm for a duration of 30 s. Subsequently, a solution of PbS CQDs was spin-coated on the PCBM film. Tetrabutyl-ammonium iodide (TBAI) was introduced on the PbS CQD layer and rested for 60 s before spin-coating. The duration of spin-coating was set at 30 s for each step with a rotational speed of 2500 rpm. The coated film was then rinsed using methanol. Ten layers of PbS CQDs films were spin-coated using the same method. This was followed by the deposition of the Bi 2 Te 3 film by magnetron-sputtering at an Ar flow of 60 standard cubic centimeters per minute (sccm). The sputtering was carried out at room temperature with a sputtering power of 200 W, sputtering pressure of 5 Pa, and sputtering duration of 1 s. Finally, Al electrodes were evaporated in a vacuum metal evaporator. The magnetron sputtering equipment and vacuum metal evaporator were all purchased from Shenyang Kecheng Vacuum Tech Co., Ltd., Shenyang, China. Results and Discussion PbS CQDs exhibited strong absorption from ultraviolet to near infrared (the UV-Vis absorption spectrum of PbS CQDs was shown in Figure S1), they were used as an absorption layer and photoelectric conversion layer in the device. The morphology and size distribution of the PbS CQDs were investigated by transmission electron microscopy (TEM). As presented in Figure 1a, the as-synthesized PbS CQDs showed excellent monodispersity with an average particle size of 4.01 nm and a full width at half maximum (FWHM) of 0.49 nm. High-resolution TEM (HRTEM) was performed to study the lattice fringes of PbS CQDs. As shown in Figure 1b Structural characterization and analysis on the 3D topological insulator material Bi 2 Te 3 were performed as it is an important functional layer in the device. Figure 1g shows a lowresolution TEM image of the Bi 2 Te 3 film. 
An area on the image in Figure 1g was selected to study the crystal microstructure at high resolution as shown in Figure 1h. Crystal lattice spacing of 0.314 nm was measured that corresponded to the (015) crystal plane of Bi 2 Te 3 . The selected area electron diffraction (SAED) pattern of the Bi 2 Te 3 film is shown in Figure 1i. The thickness of the Bi 2 Te 3 film has a significant effect on the performance of the photodetector as it can influence the optical absorbance of the film as well as the diffusion length of the carriers. Figure 2a shows a Bi 2 Te 3 film with a thickness of 7.5 nm deposited on a silicon dioxide substrate using the same process conditions as in the device fabrication. The internal molecular vibration state of the material was studied by Raman spectroscopy to determine the phase of the as-prepared film. The crystal structure of Bi 2 Te 3 is in the space group R3m having a layered structure in the order of -Bi-Te(1)]-[Te(1)-Bi-Te(2)-Bi-Te(1)]-[Te(1)-Bi- [18]. Bi 2 Te 3 has 15 vibrational modes, and the optical modes that can be detected by Raman spectroscopy are E g , A 1g , E u , and A 1u [19]. E g is generated by the in-plane vibration of the five-layer structure, and A g is generated by the out-of-plane vibration of the five-layer structure. Furthermore, A 1u is due to five-layer defects [20]. At T = 300 k, there are four Raman active lattice vibrations with wavenumbers at 36.5 cm −1 (E 1 g ), 62.0 cm −1 (A 1 1g ), 102.3 cm −1 (E 2 g ), and 134.0 cm −1 (A 2 1g ) [18]. As shown in Figure 2b, three vibrational peaks at 62.6 cm −1 (A 1 1g ), 101.1 cm −1 (E 2 g ), and 130.7 cm −1 (A 2 1g ) were observed, and they are in good agreement with previously reported work, therefore suggesting a successful preparation of the Bi 2 Te 3 film. Figure 2c depicts the vibrational mode of Bi 2 Te 3 [21]. In the A 1 1g vibration mode, Bi and Te(1) were vibrating in phase. However, in the A 2 1g mode, Bi and Te(1) were vibrating out of phase. The adjacent Te(1) atoms always vibrate out of phase [18]. shows a low-resolution TEM image of the Bi2Te3 film. An area on the image in Figure 1g was selected to study the crystal microstructure at high resolution as shown in Figure 1h. Crystal lattice spacing of 0.314 nm was measured that corresponded to the (015) crystal plane of Bi2Te3. The selected area electron diffraction (SAED) pattern of the Bi2Te3 film is shown in Figure 1i. The thickness of the Bi2Te3 film has a significant effect on the performance of the photodetector as it can influence the optical absorbance of the film as well as the diffusion length of the carriers. Figure 2a shows a Bi2Te3 film with a thickness of 7.5 nm deposited on a silicon dioxide substrate using the same process conditions as in the device fabrication. The internal molecular vibration state of the material was studied by Raman spectroscopy to determine the phase of the as-prepared film. The crystal structure of Bi2Te3 is in the space group R3m having a layered structure in the order of -Bi-Te(1)]-[Te(1)-Bi-Te(2)-Bi-Te(1)]-[Te(1)-Bi- [18]. Bi2Te3 has 15 vibrational modes, and the optical modes that can be detected by Raman spectroscopy are Eg, A1g, Eu, and A1u [19]. Eg is generated by the in-plane vibration of the five-layer structure, and Ag is work, therefore suggesting a successful preparation of the Bi2Te3 film. Figure 2c depic the vibrational mode of Bi2Te3 [21]. In the A 1 1g vibration mode, Bi and Te(1) wer vibrating in phase. 
X-ray diffraction peaks of the Bi2Te3 film were indexed to crystal planes including (1115), according to the standard card PDF#08-0027. This is in good agreement with previously reported work [22], which implied that the prepared Bi2Te3 film exhibited good crystalline quality. X-ray photoelectron spectroscopy (XPS) was used to analyze the elemental composition and surface oxidation state of the Bi2Te3 film. The Bi 4f core level (shown in Figure 2e) consisted of peaks at 164.3, 163.1, 159, and 157.8 eV, corresponding to Bi 4f5/2 (oxide), Bi 4f5/2 (metal), Bi 4f7/2 (oxide), and Bi 4f7/2 (metal), respectively [23]. Figure 2f shows the core-level peaks of Te 3d situated at 586.3, 582.6, 575.9, and 572.2 eV, which corresponded to Te 3d3/2 (oxide), Te 3d3/2 (metal), Te 3d5/2 (oxide), and Te 3d5/2 (metal), respectively, similar to previously reported work [24,25]. The above studies showed that the Bi2Te3 film was successfully deposited but exhibited some degree of surface oxidation due to its interaction with ambient air. Interestingly, Bi2Te3 (as a 3D topological insulating material) has robust surface states that are unlikely to be influenced by its oxidation state; hence, the surface oxidation will have little effect on the performance of the device. The fabrication process of the photovoltaic detector with the device structure of ITO/PCBM/PbS/Bi2Te3/Al is illustrated in Figure 3a, as described in the Device Fabrication section above. Figure 3b shows a cross-sectional scanning electron microscope (SEM) image of the device structure. The thicknesses of the ITO, PCBM, PbS, Bi2Te3, and Al layers were approximately 270, 77, 150, 7, and 95 nm, respectively. The mechanism and advantages of this device structure can be explained using the energy band diagram depicted in the inset of Figure 3c. Electrons and holes are generated in the PbS photosensitive layer. The electrons are transported to the ITO electrode through the PCBM layer, and the holes are transported to the Al electrode through the Bi2Te3 layer; the photogenerated carriers are thus separated and collected. Due to the higher conduction band minimum position of the PbS CQDs, electrons are prevented from entering the hole-transport layer and pass instead through the PCBM layer, effectively reducing carrier loss due to recombination. The valence band maximum of the PbS CQDs is lower than the highest occupied molecular orbital (HOMO) of Bi2Te3, which reduces the potential barrier for hole transport and thus facilitates hole transport from the PbS CQD layer to the Bi2Te3 layer. Therefore, the hybrid layer structure is beneficial to the transport of carriers and the collection of photogenerated carriers, which leads to a significant improvement in the performance of the device.
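The layer stack described above can also be summarized in structured form. The sketch below simply lists the layers in order with the thicknesses measured from the cross-sectional SEM; the role annotations follow the band-diagram discussion above, and the total is the sum of the reported values rather than a figure quoted in the text:

    # Device stack of the ITO/PCBM/PbS/Bi2Te3/Al photodetector (Figure 3b).
    # Thicknesses (nm) are the approximate values reported above.
    stack_nm = [
        ("ITO", 270),       # transparent bottom electrode
        ("PCBM", 77),       # electron-transport layer
        ("PbS CQDs", 150),  # absorption / photoelectric conversion layer
        ("Bi2Te3", 7),      # topological-insulator hole-transport layer
        ("Al", 95),         # top metal electrode
    ]
    total_nm = sum(thickness for _, thickness in stack_nm)
    print(f"total stack thickness ~ {total_nm} nm")  # ~599 nm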
The performance of the photodetector was characterized and analyzed. Since the devices were capable of responding to wavelengths from the ultraviolet to the near-infrared bands (I-V plots in the dark and under illumination at 365, 500, and 850 nm are shown in Figure S2), we chose an intermediate band (660 nm) for the detailed studies. I-V measurements were performed on the device using a Keithley 2400 source meter in the dark and under 660 nm illumination with a power density of 2380 µW·cm⁻². Figure 3c shows the I-V characteristic curves of the device. It is evident that the device produced a photogenerated current under light illumination. The responsivity (R) and detectivity (D*) of the device can be calculated using the following equations [26] (in standard form, R(λ) = Jph(λ)/Popt(λ) and D* = R(λ)/√(2q·Jdark)), where λ is the incident light wavelength, Jph(λ) is the photocurrent density, Popt(λ) is the optical power density at a specific wavelength, q is the elementary charge, and Jdark is the dark current density. All measurements were performed at room temperature. The responsivity and detectivity obtained from the above equations are shown in Figure 3f. The maximum responsivity and detectivity at 660 nm were 18 A/W and 2.1 × 10¹¹ Jones, respectively. The photoelectric response time of the device is shown in Figure 3d,e. The device was stored for up to 6 months and characterized after storage of 1, 3, and 6 months. The results showed that the device performance was stable and reproducible under 660 nm illumination after 6 months of storage. The rise and fall times were 128 µs and 3 ms, respectively.
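As a numeric sanity check of the figures of merit reported above, the relations R = Jph/Popt and D* = R/√(2q·Jdark) reproduce the reported responsivity and detectivity with the illustrative current densities below (the Jph and Jdark values are back-calculated for illustration and are not values quoted in the paper):

    # Sanity check of the reported R ~ 18 A/W and D* ~ 2.1e11 Jones at 660 nm.
    # The illumination power density is the reported 2380 uW/cm^2; the current
    # densities are illustrative, back-calculated values, not reported data.
    import math

    q = 1.602e-19     # C, elementary charge
    p_opt = 2.38e-3   # W/cm^2, illumination power density (reported)
    j_ph = 4.28e-2    # A/cm^2, illustrative photocurrent density
    j_dark = 2.3e-2   # A/cm^2, illustrative dark current density

    R = j_ph / p_opt                        # responsivity, A/W
    D_star = R / math.sqrt(2 * q * j_dark)  # detectivity, Jones
    print(f"R ~ {R:.1f} A/W, D* ~ {D_star:.1e} Jones")  # ~18.0 A/W, ~2.1e11 Jones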
Conclusions In this paper, a fast-response, high-performance photodetector based on hybrid PbS CQDs/Bi2Te3 was prepared. Combining the synergistic effects of the PbS CQDs and the Bi2Te3 film, the photodetector exhibited a fast response time of 128 µs with a responsivity and detectivity of 18 A/W and 2.1 × 10¹¹ Jones, respectively. The response time of the device was faster by several orders of magnitude than that of photodetectors consisting solely of PbS CQDs and of other PbS QD heterojunctions. The energy band structure of the device is beneficial to fast carrier transport and the collection of photogenerated carriers, thereby improving the response rate and performance of the device. With the advantages of excellent performance, low cost, and solution processability, the hybrid PbS CQDs/Bi2Te3 photodetector has promising applications in the field of fast-response, large-scale integrated QD-based optoelectronic devices. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12183212/s1; Figure S1: UV-Vis absorption spectrum of PbS CQDs; Figure S2: I-V plots of the device in the dark and under illumination at 365, 500, and 850 nm. Data Availability Statement: All data, models, and codes generated or used during the study appear in the submitted article.
4,793.6
2022-09-01T00:00:00.000
[ "Physics" ]
Care Ethics and the Feminist Personalism of Edith Stein : The personalist ethics of Edith Stein and her feminist thought are intrinsically interrelated. This unique connection constitutes perhaps the main novelty of Stein’s ethical thought that makes her a forerunner of some recent developments in feminist ethics, particularly ethics of care. A few scholars have noticed the resemblance between Stein’s feminist personalism and care ethics, yet none of them have properly explored it. This paper offers an in-depth discussion of the overlaps and differences between Stein’s ethical insights and the core ideas of care ethics. It argues that both Stein and care ethicists relocate a certain set of practices, values and attitudes from the periphery to the center of ethical reflection. This includes relationality, emotionality and care. The paper finally argues that it is plausible and fruitful to read Stein’s advocacy of ‘woman’s values and attitudes’ in a critical feminist way, rather than as an instance of essentialist difference feminism. Introduction Since its first formulation in the early 1980s, care ethics has developed into a burgeoning field of ethical inquiry that has spread worldwide as a viable alternative to the mainstream currents in moral and political philosophy. Care ethicists are rightly credited with refocusing ethical reflections on the relational nature of the human condition and revaluing care as a fundamental human practice that was historically marginalized and devalued as a matter of private life and family relationships. It has been widely acknowledged, though, that some seminal care ethical insights draw on the ideas of care ethics' predecessors or at least have striking parallels in the history of philosophy. A few scholars have noticed remarkable similarities between care ethics and the ideas of the German philosopher Edith Stein, who developed her phenomenological anthropology and social philosophy between the 1910s and the late 1930s [1][2][3][4][5]. Yet, most of the previous contributions to the question of the relationship between Stein's philosophy and care ethics are rather cursory or suffer from a disproportionate emphasis on one of the sides of the relationship, approaching it either from the perspective of Stein's thought or the one of care ethics. In this paper, I provide a more balanced and thorough account of this relationship. I start with a presentation of care ethics focusing mainly on its development in the work of Sara Ruddick, Carol Gilligan and Joan Tronto. In the next section, I explore the core ideas of Stein's phenomenological personalism and her feminist thought. Finally, I conclude by reflecting on the overlaps and differences between the two ethical approaches. This reflection can help us to arrive at a more adequate and relational understanding of the place of care ethics within the diverse landscape of traditional moral and political philosophy. It also provides an impulse to a more vivid dialogue between care ethicists and the current proponents of personalist ethics who often take Stein's philosophical work as a source of inspiration. Care Ethics In most general terms, care ethics can be described as an approach to ethics that foregrounds caring as a core human practice and conceives of the goals and values of this practice as fundamental for achieving good life at both individual and collective levels. 
The idea of rehabilitating care as a critical human practice and value arose from the historical, intellectual and socio-political context of the 1970s in North America and Europe-in particular, from second-wave feminism and its critique of the dominant 'hegemonic masculinity' manifested in the image of the human person as an autonomous independent individual. In the early 1980s, the shift to the description and revaluation of care and caring relationality was reinforced by numerous works across many academic disciplines, such as epistemology [6], sociology [7][8][9], social policy [10][11][12], political economy [13], philosophy of education [14], social philosophy [15] and developmental psychology [16]. It was the field of developmental psychology where Carol Gilligan coined the term 'an ethic of care'. Yet, two years before the term 'an ethic of care' would appear in Gilligan's widely known book In a Different Voice, the American philosopher Sara Ruddick published the essay "Maternal Thinking" (1980) [17], in which she put forth several key ideas that have proven of central importance for the subsequent forty years of the development of care ethics. I start this section with a discussion of Ruddick's 1980 paper, followed by a brief introduction to the formation of care ethics in the 1980s and the early 1990s. Revaluing Care as a Core Human Practice Ruddick builds her reflections on the observation that our "working and caring with others" and the corresponding way of thinking play a crucial role in human life. Her main point is that, although caring practices form the core of human existence, they have historically been marginalized, devalued and portrayed in a sentimental and romantic way. Ruddick seeks to provide an adequate philosophical description of this practice, point out its distinctiveness and explain its value as an important source of an alternative moral, social and political theory. Though Ruddick links her central concept of 'maternal practice' primarily to the activity of taking care of and raising a child, she concedes that maternal thinking expresses itself "in various kinds of working and caring with others" [17] (p. 346). Maternal practice that gives rise to maternal thinking, Ruddick argues, is a response to three basic interests or demands of a child, namely for preservation, growth and acceptability. Ruddick defines 'maternal thinking' as a distinctive style of reflecting, judging and feeling that is guided by distinctive goals and interests of 'maternal practice'. Ruddick distinguishes between degenerative and non-degenerative forms of maternal practice. The actor of the non-degenerative form of maternal practice would typically feature attention, love, humility, understanding, respect for the other, sense of complexity, the capacity to change (alongside or in response to the changing reality), explore, create and insist upon one's own values and the ability to see and name existing forms of oppression and domination. In contrast, the actor of the degenerative form of maternal practice is characterized by rigid and excessive control over the other, self-refusal and uncritical acceptance of the values of the dominant culture or obedience-a sense of wanting to 'be good' in the 'eyes' of the dominant culture and society [17] (pp. 354-55). For Ruddick "'maternal' is a social category" [17] (p.
346), which entails that her account focuses on the practice itself and by "concentrating on what mothers do" rather than on what they are suspends any question about the 'essence' of this practice. Ruddick rejects "the ideology of womanhood" and argues that it was invented by men and caused the oppression of women [17] (p. 345). Moreover, any identification of maternal practice with biological or adoptive motherhood is false, Ruddick argues, since it "obscures the many kinds of mothering performed by those who do not parent particular children in families" [17] (p. 363). Together with 'the ideology of womanhood', Ruddick rejects "all accounts of gender difference or maternal nature which would claim an essential and ineradicable difference between female and male parents" [17] (p. 346). 1 In sum, Ruddick describes maternal practice as a fundamental human practice that has been historically associated with women (and other marginalized groups), but in fact has no essential relation to any sex or gender identity. 2 Ruddick finally borrows the notion of 'feminist consciousness' from Sandra Bartky [20] and concludes her essay by envisioning 'maternal thought transformed by feminist consciousness'. It is a task of 'feminist consciousness' to critique the current economic, social and political structures that perpetuate the marginalization and devaluation of the practice of 'working and caring with others' and that foster the dominant association of this practice with women and other oppressed groups. When shaped by 'feminist consciousness', maternal thinking reveals "the damaging effects of the prevailing sexual arrangements and social hierarchies on maternal lives" [17] (p. 356) and raises a voice "affirming its own criteria of acceptability, insisting that the dominant values are unacceptable and need not to be accepted" [17] (p. 357). In order to create a society based on the values and rationality of this practice, Ruddick argues we must "work to bring transformed maternal thought into the public realm" and to make it "a work of public conscience and legislation" [17] (p. 361). This would require, on the practical level, a transformation of politics and "moral reforms of economic life" [17] (p. 360) and, on the theoretical level, "articulating a theory of justice shaped by and incorporating maternal thinking" [17] (p. 361). 
To summarize, the four key elements that are present in Ruddick's early essay and that prefigure what will constitute the core of the subsequent development of moral and political theory of care are: (1) The focus on caring as a human practice that, though fundamental to the human condition, was historically marginalized, devalued and kept outside the scope of the dominant Western moral, social and political thought; (2) the aim to provide an adequate analysis of this practice, which would replace the widespread sentimentalizing and romanticizing distortions that go often hand-in-hand with the sociocultural and political devaluating of the practice and the focus on the practice itself, which entails a rejection of its naturalistic and essentialist accounts; 3 (3) the emphasis on the transformative potential of such an analysis, which inspires a critique of the social, economic and political structures that hinder realization of the nondegenerative forms of the practice and (4) the insight that the relational values and ideals inherent in caring practice are connected with the values and ideals of justice and that promoting both requires a transformation of our social and political institutions. Identifying a Different Voice in Ethics As described above, the notion of an 'ethic of care' was coined by the American developmental psychologist Carol Gilligan in her widely acclaimed book In a Different Voice [16]. Gilligan famously characterizes an ethic of care as a distinctive style of moral judging and way of constructing moral problems which centers around the responsibility for human relationships, builds moral judgment on concrete knowledge of a particular situation and context, emphasizes the priority of connection and starts from the insight that there is no contradiction in acting responsibly towards oneself and others. As Gilligan puts it, "the ideal of care is thus an activity of relationship, of seeing and responding to need, taking care of the world by sustaining the web of connection that no one is left alone" [16] (p. 62). It is noteworthy that Gilligan, in contrast to Ruddick, formulates the idea of an ethic of care within a fundamentally dualistic framework. In her view, an ethic of care is a 'different voice', which differs from the voice of an ethic of justice (or rights). In contrast to an ethic of care, an ethic of justice emphasizes the priority of the individual, derives moral judgement from formal and abstract rules, foregrounds the ideal of equality and impartiality and considers the struggle for individual rights as the fundamental dynamics of social relations. Despite the numerous harsh contrasts in her exposition of an ethic of care and an ethic of justice, Gilligan ultimately contends that the "two views of morality . . . are complementary rather than sequential or opposed" [16] (p. 33) and that "to understand how the tension between responsibilities and rights sustains the dialectic of human development is to see the integrity of two disparate modes of experiences that are in the end connected" [16] (p. 174). Yet, perhaps due to the fact that she expresses this view with restraint, or due to her failure to provide an account of how the two views of morality should be connected in the real life of individuals and communities, many of the critics as well as admirers of Gilligan's work have one-sidedly focused on the opposition of the "two different constructions of the moral domain" [16] (p. 69). 
4 Another notable duality that marks Gilligan's initial presentation of an ethic of care is the duality of the female and male 'voices', the female and male ways of telling the story of what it means to be oneself, to be an adult human being. The author of In a Different Voice contends that the male voice typically speaks "of the role of separation as it defines and empowers the self", whereas the female voice typically speaks "of the ongoing process of attachment that creates and sustains the human community" [16] (p. 156). Gilligan conceives of the dual way of defining the self and its relationships to other selves and the world as rooted in the difference between the psychology of men and "the psychology of women that has constantly been described as distinctive" [16] (p. 22). Yet, on the opening pages of her book, Gilligan assures her reader that an ethic of care "is characterized not by gender but theme". She maintains that "the contrasts between male and female voices are presented here to highlight a distinction between two modes of thought and to focus a problem of interpretation rather than to represent a generalization about either sex" [16] (p. 2). I agree with Tronto's remark that "the equation of Gilligan's work with women's morality is a cultural phenomenon, and not of Gilligan's making" [21] (p. 646). 5 However, I think that the conceptual ambiguity of Gilligan's early work opened the way for the formation of this cultural phenomenon, as well as the related misunderstanding as regards the nature of care ethics. Care as a Political Concept The American moral and political philosopher Joan Tronto was among the first care theorists who clearly showed that confusing care ethics with private life-oriented women's morality not only leads to an easy dismissal of the feminist 'different voice' in the context of dominant moral and political theories but also jeopardizes care ethics' feminist goals and may result in harmful consequences for women, such as sidestepping structural problems of domination, exploitation, oppression and marginalization (cf. [21]). To address this issue, Fisher and Tronto [23] took up the task of constructing a full moral and political theory of care by offering a broader definition and analysis of caring that enables the inclusion of the whole range of human activities and allows for taking into account the political dimensions of power and conflict entailed in all caring activities. 6 Fisher and Tronto famously define caring as "a species activity that includes everything we do to maintain, continue, and repair our 'world' so that we can live in it as well as possible. That world includes our bodies, our selves, and our environment, all of which we seek to interweave in a complex, life-sustaining web" [23] (p. 40). This definition, which emphasizes the processual dimension of care and implies that the caring process may be directed not only toward people but also toward other living beings and things, has been widely influential in the further development of a moral and political theory of care and has served as a starting point for numerous applications of a care ethical perspective (which I discuss later on).
The same holds true for Fisher and Tronto's related distinction and analysis of four intertwining phases or components of the caring process: (1) caring about-paying attention to something with a focus on continuity, maintenance and repair; (2) taking care of-taking responsibility for activities responding to the facts noticed in caring about; (3) care giving-the concrete tasks and the hands-on care work; and (4) care-receiving-the responses of those toward whom caring is directed [23] (p. 40). In a way similar to Ruddick's reflection on degenerative forms of 'maternal practice', Fisher and Tronto describe ineffective and destructive patterns in caring activities. They think of them as characterized by fragmentation and alienation in the caring process, as opposed to the integrity of caring where the four phases of the care process fit together into a whole. Such ineffective patterns in caring occur, for example, when caregivers suffer a shortage of time and/or other resources necessary for caring or when care-receivers have little control over how their needs are defined in the caring process. Against the background of the insight that how we think about care is deeply affected by existing social and political structures of power and inequality, Fisher and Tronto conclude that the patterns of fragmentation and imbalance of the caring process are mainly created by deficient social and political arrangements. Hence, a full-fledged moral theory of care needs to be developed hand in hand with a political theory of care that scrutinizes the workings of our social and political institutions (e.g., the household, the market and the state) from a critical perspective inspired by the ideal of good caring. While an ethic of care envisions "a different world, one where the daily caring of people for each other is a valued premise of human existence, . . . an alternative vision of life, one centred on human care and interdependence" [27] (p. x), a political theory of care reveals that "what this vision requires is that individuals and groups be frankly assessed in terms of the extent to which they are permitted to be care demanders and required to be care providers" [27] (p. 168). In her path-breaking book Moral Boundaries [27], Tronto lays the ground for a full-fledged political theory of care that aims to explicate what "a just distribution of caring tasks and benefits" [27] (p. 169) entails and which social and political arrangements facilitate caring and contribute to creating "a more just world that embodies good caring" [27] (p. xii). A political theory of care sheds light on the close relationship between care and justice. On the one hand, to address the problems of care and to conceptualize the prerequisites of good caring requires concepts of justice, equality and democracy, since caring is always deeply affected by unequal power and access to material conditions and resources necessary for caring. Thus, Tronto argues, "only in a just, pluralistic, democratic society can care flourish" [27] (p. 162). On the other hand, "care as a practice can inform the practices of democratic citizenship" [27] (p. 177), since it describes "the qualities necessary for democratic citizens to live together well in a pluralistic society" [27] (pp. 161-62). Reflection on the mutually enabling relationship, foregrounded by Tronto [27], between good caring and democratic citizenship in a just society is a thread that connects most subsequent developments in a political theory of care.
The exploration of a close relationship between caring, democracy, citizenship and equality inspired Tronto's more recent reflection on the practice of 'caring with' as constitutive for a 'caring democracy' [28]. To be a citizen in a democracy means, Tronto argues, "to care for citizens and to care for democracy itself" [28] (p. x). This requires that citizens take seriously the collective responsibility for 'caring with' each other and that democratic politics recognizes the centrality of "assigning responsibilities for care, and for ensuring that democratic citizens are as capable as possible of participating in this assignment of responsibilities" [28] (p. 30). Tronto expands the original distinction of the four phases of caring [23] by adding 'caring with' as the final fifth phase of the care process and identifying plurality, communication, trust, respect and solidarity as the key moral qualities that 'caring with' requires [28] (pp. 35-36). Edith Stein's Feminist Personalism Edith Stein (1891-1942), a patron saint of Europe, was a German philosopher and religious thinker. She was a pupil and follower of the founder of phenomenological philosophy, Edmund Husserl. Stein was born into a German Jewish family, but she later converted to Catholicism and became a Carmelite nun. The Nazis murdered her in Auschwitz in 1942. Stein left an extensive philosophical and theological corpus of work. The following exposition of her thought focuses in particular on her personalist and feminist views in relation to ethics. Stein's Personalist Ethics Although Stein never wrote any systematic work on ethics per se, it is plausible to argue that her entire philosophical corpus "implicitly entails a consciously developing ethical vision entering into conversation with ethical philosophy's major representatives" [29] (pp. 73-74) and "ethical concern is deeply and thematically woven into the fabric of her studies in anthropology, community, and political existence" [29] (p. 86). In the phenomenological phase of her philosophical work, from the late 1910s to the early 1930s, Stein's ethical views draw heavily on the personalist ethics developed by Edmund Husserl and Max Scheler. 7 Yet, as we will see in a moment, Stein enriches the personalist perspective of her phenomenological companions with a unique feminist tweak. Stein's later thought shifts towards the Aristotelian-Thomist tradition, attempting to link the Christian perspective with the phenomenological position of her earlier works (see, e.g., [30]). For the purpose of this paper, I want to narrow my focus down to the phenomenological phase of Stein's ethical thought, which offers enough material for a comparison with care ethics. Following the methods of her teacher, Husserl, Stein devotes a great deal of her philosophical project to answering the questions of what it means to be a self and how the self relates to the world. The question of how the world can be given to us as an objective world appears for Stein, as well as for Husserl, as inseparably connected with the question of how we can know and understand others as subjects who relate to the same shared world as we do. Stein conceives of the act of understanding or knowing the other subject as an act of empathy (Einfühlung) and characterizes it as a unique type of perception that differs from all other forms of perceiving.
In empathy, Stein argues, I grasp the other person as a person who has her own perspective and experiences, whereas, in nonempathic perceptions, I perceive things and external objects in the world. Stein notes that there is a fundamental difference between the way in which I grasp my own inner life and the way in which I grasp the other's inner life: the content of empathy is never fully present for the empathizer, as long as it belongs to someone else, to a different person. Empathy is an other-oriented type of consciousness; the aspect of otherness is constitutive for empathy. However, this self-other distinction in empathy also entails that the empathizer is connected with her own experiences too. As Hamington puts it, for Stein, "empathy does not negate the self but actually strengthens self-concept" [31] (p. 80). Hence, in Stein's view, empathy plays an important double role in that it both constitutes the other self for me, and it constitutes my own self as different from the other. Stein's investigations in the constitution of the human person led her to her study of the human person's relations and intersubjective links, which resulted in a fundamentally relational view of the human person. As Fuentes stresses, "it is through empathy that the individual, human person (psychophysical individual), becomes constructed as such-and I cannot be formed without a you-the possibility of knowing oneself in the other and knowing the other is inseparable-one cannot be oneself, build oneself as a self, or form one's own identity, without the reference of the other" [32] (p. 206). Human persons are referred from themselves to the other in order to be what they are and to become what they can be. This deeply relational perspective on the human person becomes manifest in Stein's ethical reflections that start from the notion of the person. As she argues already in her dissertation, "one's own moral life and moral character is constituted alongside the moral encounter with the other and in one's own response to her moral character" [29] (p. 81). In the ethics derived from the relational perspective, "the other, the good of the other, is not only something tolerable or acceptable, but it is indispensable for the same comprehension and realization of one's own good-one's own good cannot be carried out without the other's own good and vice versa" [32] (p. 206). Stein shares and further develops the view of other early phenomenologists that it is through emotions that a person grasps "the meaning of another being in relation to its own being, and then the significance of the inherent value of exterior things, of other persons, and impersonal things" [33] (p. 96). Emotions are the "essential organ for comprehension of the existent in its totality and its peculiarity" [33] (p. 96), and through emotions, we open ourselves to the world of values that Stein takes to be present in the world of persons. It is important to stress that by 'emotions' Stein does not mean fluctuating states of sentiment, although emotions may include sentiment. Stein relates the primordial recognition of others to the emotions as a peculiar spiritual capacity, present both in self-knowledge and empathy (Haney 458). What we ought to be and do shows itself to us through the feelings we develop in encountering the experiences and actions of other persons [34] (p. 757). 
In line with the emotional value realism of Scheler, Stein claims that the structure of personal depth and periphery is mapped out in response to a range of values and that the person ought to be affected in the deepest way by the highest values [29] (p. 74). At the top of the hierarchy of values resides the absolute value of the human person: "the human person is more precious than all objective values" [33] (p. 256; cited in [35]). To be responsive to the highest value, to the absolute value of the person as person requires, in Stein's view, love. In love, the person opens herself to the value of other persons, as well as to the value of one's own person. Thus, love, Stein argues, is vital for the individual and community alike. By contrast, hate is a vital disvalue both for the individual and community: "love operates within the one who loves as an invigorating force that might even develop more powers within him than experiencing it costs him. And hate depletes his powers far more severely as a content than as an experiencing of hate. Thus, love and positive attitudes in general don't feed upon themselves; rather, they are a font from which I can nourish others without impoverishing myself" [36] (p. 212). Attitudes such as love, trust and gratitude have the effect of 'enlivening' the person who receives them, inasmuch as there is a real community among persons who are evaluatively 'affirmed' in and through them. The opposite acts of "distrust, aversion, hatred-in short, the whole set of 'rejecting' manner of behavior" [36] (p. 211) are devitalizing, because in them the person is evaluatively negated [29] (p. 78). However, even the 'personal attitude' and 'love', in Stein's view, can become devitalizing and destructive if they take on excessive forms, such as an excess of interest in the other person or the urge to lose oneself completely in the other. Stein states that, in a passion of wanting to confiscate the other [33] (p. 257), one does justice neither to one's self nor to the humanity of another [33] (p. 257): "The woman who hovers anxiously over her children as if they were her own possessions will try to bind them to her in every way . . . She will try to curtail their freedom of development; she will check their development and destroy their happiness" [33] (p. 75). In contrast, the true capacity to love is the capacity to 'go out to the other' without losing oneself. Along this trajectory, Stein arrives at a normative ideal of community (Gemeinschaft as opposed to Gesellschaft) as "the union of purely free persons who are united with their innermost 'personal' life, or the life of the soul, and each of whom feels for himself or herself and for the community" [36] (p. 273). This ideal of community-of love freely given and received-is oriented around the consciousness of collective and individual responsibility for one another [29] (pp. 81-82). The authentic community orders persons towards "not separated living but common living, fed from common sources and stirred by common motives" [36] (p. 215). In her Investigation Concerning the State, Stein conceives of political community as a major stage upon which social-ethical responsibility is born [29] (p. 82). Since real communities and polities deviate-to a greater or lesser extent-from the ideal patterns of the forms of freedom, love and co-responsibility, such deviations must be navigated ethically and addressed through a never-completed process of moral reform and renovation (Erneuerung).
Stein contends that, although the process of morality's reform must originate in the souls of those who are capable of intuiting the right order of values, the state can be utilized as the specific 'tool' of social reformation by transforming the prevailing morality through legal regulation, as well as through the development of institutions that facilitate desirable forms of moral and social life [29] (p. 84). Stein's Feminism As described above, Stein's ethical personalism has a unique character mainly due to a remarkable feminist element that is increasingly present in the development of her thought. 8 The 'question of woman' is one of the questions that occupied Stein throughout her life and writing. Already as a young university student, Stein was a "radical fighter for women's rights" [37] (p. 185); she advocated women's suffrage and engaged in vocational counselling for female students. In the 1930s, after she had given a series of public lectures and radio addresses on women's issues in Germany, Austria and Switzerland, Stein gained a reputation as an international spokesperson for the Catholic women's movement and a leading figure of educational reform. Stein's theoretical reflections on the 'question of woman' appeared in the volume Essays on Woman [33], which serves as the primary source for our present interpretation. 9 In her 1928 lecture "The significance of woman's intrinsic value in national life", Stein describes the situation of the European women's movement in the 1920s as follows: "We women have become aware once again of our peculiarity. [ . . . ] And this 'self-awareness' could also develop the conviction that an intrinsic value resides in the peculiarity" [33] (p. 254). Even if Stein approaches the idea of the revaluation of 'woman's peculiarity' with caution-indeed, she resists painting a shining ideal of feminine nature with the hope that a realization of this ideal will be the cure for all contemporary problems-she defends the view that "the purely developed feminine nature does include a sublime vital value" as well as "ethical value" [33] (p. 46). Thus, in her lectures on woman, Stein aims not only to provide an account of 'woman's distinctive personality' but also to reveal the quality and significance of the value that is, in her view, inherent in woman's peculiar style of being a person. The phenomenological method, which Stein uses to reveal the sense of 'woman's peculiarity' [33] (p. 255), requires her to focus on the form and structure of the intentional life as it is lived through by the person. It is on this experiential, phenomenal level that she finds the core differences between man and woman. Stein obviously does not think of 'woman's peculiarity' in terms of exclusive traits and faculties. The personal traits in question are primarily human ones, and all faculties that are present in woman's personality are also present in man's personality. Nonetheless, Stein argues, the human traits may generally appear in different degrees and relationships in man and woman [3] (p. 72). When it comes to the question of equality between the sexes, an attentive reading of Stein's lectures reveals that she insists on genuine equality between men and women. Thus, Stein consistently affirms her commitment to a distinctive feminine personality without thereby undermining the equality of the sexes [3] (p. 67f.). Let us take a closer look at Stein's views of the peculiarity of woman's intentional life.
With woman, Stein believes, there is a more intense and complete unity of the living body and soul, which means that women are more capable of being affected by that which they encounter as concrete persons living in and through the body. Stein also claims that "the strength of the woman lies in the emotional life" [33] (p. 96). 10 Due to the centrality of "understanding of the things of value" [33] (p. 73), a woman seems also more capable of feeling a "joy in creatures", which makes her "sensitive and attentive to all that lives, grows and strives for development" [33] (p. 73). Stein characterizes women's prevalent attitude as 'personal', which means several things: "in one instance she is happily involved with her total person in what she does; then, she has particular interest in the living, concrete person, and, indeed, as much for her own personal life as for other persons and their personal affairs" [33] (p. 255). Finally, Stein argues, "in woman, there lives a natural drive towards wholeness and completeness. And, again, this drive has a twofold direction: she herself would like to become a complete human being, one who is fully developed in every way; and she would like to help others to become so, and by all means, she would like to do justice to the complete human being whenever she has to deal with human beings" [33] (p. 255). Women's personal attitude and tendency to completeness go, in Stein's view, hand in hand with two major existential tasks: being a mother and being a companion. Stein claims that "the innermost formative principle of woman's soul is the love" [33] (p. 57) and "the deepest feminine yearning is to achieve a loving union which, in its development, validates her maturation and simultaneously stimulates and furthers the desire for perfection in others" [33] (p. 94). Stein sees an intimate link between woman's task of being a mother and her yearning to embrace that which is living, personal and whole, to cherish, guard, protect, nourish and advance its growth [33] (p. 45). Women's peculiar orientation toward the personal, the concrete and the living, and toward the full development of each being, comes to a special, intense expression in her motherhood. A similar set of values becomes manifest in a woman's task of being a companion: "where a human being is alone, especially one in bodily or psychological need, she stands lovingly participating and understanding, advising and helping; she is the companion of life who helps so that 'man is not alone'" [39] (p. 50). A critical feminist reader may object that this view of Stein "reads as if she is trying to rehabilitate the patriarchy" [40] (p. 214). Yet, it is Stein's firm contention that patriarchal society in its many destructive manifestations is abnormal and morally unacceptable and that "only subjective delusion could deny that women are capable of practicing vocations other than that of spouse and mother" [33] (p. 49). It needs to be stressed that what Stein means by motherhood and companionship is by no means mere physical motherhood and marital companionship. For Stein, to be a mother is to nourish and protect what is alive and bring it to development; to be a companion is to provide support and be a mainstay [33] (p. 256). Hence, any woman, regardless of her actual state in life, can take up the tasks of companionship and motherhood. Stein also emphasizes the possibility of spiritual companionship and motherhood that "extend to all people with whom woman comes into contact" [33] (p.
132), and stresses that the motherhood she has in mind "must be that which does not remain within the narrow circle of blood relations or of personal friends" [33] (p. 264) Stein's idea of motherhood also has a deeper ontological meaning that reflects her fundamentally relational view of the human person. In her mature work, Finite and Eternal Being, Stein meditates our existence as something that is constantly given to us moment to moment anew. She describes human persisting in being as 'ontological security'. As Calcagno rightly notes, "the image she employs to give resonance to this insight is the image of a child being held in the arms of her mother, certain and comfortable that no danger will come to him or her while sleeping". This image, Calcagno concludes, "also shows how Stein conceives of being not as a solitary enterprise of an ego or a Dasein, but as a communal enterprise, the living of one in the security of the other" [5] (p. 74). Drawing on her philosophy of the human person and authentic community, Stein contends that woman's peculiar attitudes and values can and should help us in transforming social and moral life of our communities [33] (p. 262). For example, Stein explains potential transformation of health care profession by stressing that in a still increasing medical specialization we should not forget that often it is not only the organ but the entire person who is sick along with the organ. Women, in Stein's view, have insight into diverse human situations and get to see clearly material and moral needs of others [33] (pp. 262-263). Counteracting abstract medical procedures, a woman's attitude is oriented towards the concrete and whole person. Stein recommends the healthcare professional to exercise courage in following her intuition and to liberate herself whenever necessary from methods learned and practiced according to formal rules. Yet, the intent must be to understand correctly the whole human situation, and to intervene helpfully not only by medical means but also as a mother or a sister [33] (pp. 111-112). Finally, Stein stresses the significance of woman's unique attitudes and values in political life. In legislation, she observes, there is always danger that a resolution will be based on elaboration of the most perfect paragraphs without consideration of actual needs in practical life. Women, Stein argues, are suited to act in accordance with the concrete human needs, and so they are able to serve as redress here [33] (pp. 263-264). Stein refers to a particular historical example when in the deliberation of youth laws there was the danger that the project would end in failure by party opposition. At that time, the women of the differing parties worked together and reached an agreement [33] (p. 264). Women's attitudes and values can also work beneficially in the application of the law, provided it does not lead to abstract validation of the letter of the law but to the accomplishment of justice for humanity. Stein eventually does not restrict her account to the level of individual states and nations but maintains that "there is a connection between success and adversity in both private and national life; just so are the individual nations and states connected one with the other. . . . woman's sphere of action has been extended from the home to the world" [33] (p. 154). Conclusions: Overlaps and Differences There are obvious overlaps, as well as differences, between care ethics and Stein's feminist personalism. 
Let us first focus on some overlaps between the two approaches. First, both care ethicists and Stein start from a fundamentally relational view of human beings. Human existence is inevitably marked by interdependence. Human persons are referred from themselves to the other in order to be what they are and to become what they can be. Thus, ethical reflections on what it means to live a good life and what we ought to do-at both the individual and collective levels-should refocus on the ways in which we relate to each other and examine the social and political structures that frame our relationships. Second, both approaches relocate certain sets of practices, values and attitudes from the periphery to the center of ethics. The dominant currents of Western ethical and political thought typically devalue care and love as matters of intimate relationships and biological ties that belong to the narrow sphere of private life and, thus, do not constitute a proper subject of ethics and politics. In contrast, Stein and care ethicists share the view that the historically marginalized practices and attitudes of care and love build the core of our human existence and are of ultimate importance for any normative moral and political theory that aims to adequately respond to the true nature and complexity of human life. Yet, in both approaches, a key normative task concerns distinguishing the 'empowering' and 'enlivening' patterns of care and love from the 'degenerative' ones. Third, the refocusing of ethics on the practices, values and attitudes of care and love goes hand-in-hand, in both care ethics and Stein's philosophy, with revaluing the experiences of the members of certain marginalized groups, typically women, who historically bore the burden of excessive caring responsibilities. Both approaches emphasize the need for a more just attribution of these responsibilities and see a transformative potential in the realization of the corresponding values and attitudes in the everyday life of our communities and polities. Finally, in contrast to the emphasis on abstract moral reasoning and rule following, both care ethics and Stein promote "concrete thinking" (in Sara Ruddick's phrase) based on the practical experience of situated persons. They offer a counterbalance to the perspective which focuses one-sidedly on the cognitive and rational dimension of what it is to be human, by stressing that we are essentially embodied and emotional beings to whom affects and emotions say important things about what is of value and how life can be made better. Let us turn to the differences between Stein's feminist personalism and care ethics. The most obvious difference seems to lie in Stein's embrace of emotional value realism and the idea of the absolute value of the human person. Scheler's idea that there is an objective hierarchy of values that can be grasped in correct or incorrect ways by human persons in specific acts of value-feeling (emotions) is part and parcel of Stein's ethical personalism. Most care ethicists, however, emphasize the context-related and situated nature of all moral knowledge, as well as the importance of particularity and singularity in the practice of caring. This is not to say that there is no place for particularity and singularity in Stein's ethical thought. Stein, as we have seen, conceives of empathy and personal attitude as inherently linked to the capacity of understanding the meaning and value of the concrete and particular.
Yet, her insistence on the existence of a universal hierarchy of values and the distinction between rightness and wrongness of value-intuitions clearly restricts her appreciation of the relevance of context and situation in moral knowledge. Stein's ethical personalism, as we have seen, revolves around the idea that the value of the human person is the most precious of all values. Only a few care ethicists would embrace this view. Although some (mainly early) formulations of care ethics start with the image of a person-to-person relationship which implies the ethical centrality of the human being and her relationships, most recent developments in care ethics show a broader understanding of caring as a process which "includes everything we do to maintain, continue, and repair our 'world' so that we can live in it as well as possible." That world includes not only human persons, but also our environment, "all of which we seek to interweave in a complex, life-sustaining web" [23] (p. 40). Drawing on this broader concept of caring, care ethicists have laid the ground for a non-anthropocentric environmental ethics, which is hard to incorporate into the personalist perspective of Stein. Finally, an important and tricky question concerns the difference between Stein's philosophy and care ethics as regards the feminist dimension of the two approaches. In the first section, I argued that the effort to dissociate care ethics from the idea of women's morality growing from 'woman's nature' was of great importance for the further development of care ethics towards a full-blown moral and political theory of care. Now, Stein's feminist personalism depends on an account of the sexual difference which seems to rely on some essentialist presuppositions. It is precisely an essentialist view of 'woman's capacities' that, for some scholars, provides the very grounds for calling Stein's ethics 'feminist': "Stein's ethics are correctly called feminist . . . because they include a capacity for which woman is especially well suited" [1] (p. 473). Other interpretations, on the contrary, take issue with Stein's apparently essentialist view of womanhood and consider it a weakness of her account. For example, Calcagno points out several ambiguities in Stein's description of 'the female essence' and argues: "one wonders whether the essence of woman as mother cannot also apply to men, especially to men who find themselves in situations where they are constrained to be both mother and father to a child. This brings to light the possibility that the female essence may be shared by both men and women, and need not be tied exclusively to the gender of the person" [5] (p. 73). It is beyond any doubt that Stein adopts what Ales Bello [41] aptly calls 'dual anthropology', namely a view that woman and man differ in their specific natures and capacities. There is plenty of textual evidence for this claim across Stein's philosophical work. In her lectures on woman Stein makes a crystal-clear statement: "I am convinced that the species 'human' is actualized as a double species-'man' and 'woman'; that the essence of human being, whose features cannot be lacking in either one, becomes expressed in a binate way; that the entire essential structure demonstrates the specific stamp" [33] (p. 187f.). Yet, I do believe that it is possible and even correct not to interpret Stein as an advocate of a feminism characterized by essentialist difference.
Elsewhere [42], I provided a detailed argument in favor of a phenomenological reading of Stein's 'dual anthropology' by stressing that Stein conceives of the sexual difference as a difference between two related styles of intentional life rather than a difference between two separate essences (regardless of if it is ontologically or biologically defined). From the phenomenological perspective it seems plausible to read Stein's descriptions of woman's specific capacities and attitudes as describing a particular life form that can be shared by women and men alike. Hence, the alternative options suggested by Calcagno in the quote above seem to me not only right but also compatible with Stein's own perspective. This brings us back to the initial question concerning the difference between Stein's philosophy and care ethics. It is challenging to come up with a clear-cut answer. On the one hand, Stein's account of woman's specific capacities, attitudes and values and their importance for a renewal of the moral life of individuals and communities has some resemblances to the currents in care ethics that aim to promote a 'feminine approach to ethics' and advocate an essentialist difference feminism (e.g., [14]). This entails that Stein's feminist personalism is vulnerable when faced with some of the forms of criticisms that many raised against the 'feminine approach' in care ethics. On the other hand, Stein's feminist personalism, when detached from its essentialist interpretation-which is something that can and perhaps should be done-shares some seminal feminist insights with those care ethicists who reject gender essentialism and adopt a critical feminist perspective on various social and political issues. The confrontation, or rather the encounter, between care ethics and Stein's philosophy that I explored in this article helps us better see and appreciate how several 'mainstream philosophers' anticipated some key care ethical insights. A deeper understanding of the alternative contexts of the birth of similar ethical insights can broaden the dominant selfconcept of care ethics. A more relational understanding of the place of care ethics within the diverse landscape of traditional moral and political philosophy would certainly fit well in care ethics' relational perspective. Moreover, this encounter provides an impulse to a more vivid dialogue between care ethicists and the current proponents of personalist ethics who often take Stein's philosophical work as a source of inspiration. The awareness of shared core ideas can help the personalist ethicists to better appreciate the way in which care ethics decenters the human and allows for the relationality of all things, which makes a non-anthropocentric relational environmental ethics possible. The critical feminist emphasis on the analysis and normative assessment of our social and political arrangements of caring can also enrich the perspective of personalist ethics which tends to underestimate the salience of wider social and political contexts for the lives of individual persons and communities. Finally, the non-essentialist and non-differentialist understanding of feminism in contemporary care ethics can foster further development of non-essentialist variants of feminist personalism that follow some of the paths foreshadowed in Edith Stein's thought. Data Availability Statement: Not applicable. 9 In this paper, I mostly offer a thorough modification of the available English translation. 
For an apt comment on the inaccuracy of Oben's translation of Die Frau, see [38] (p. 326; 335, fn. 16 and 17). Note 10: Cf. "her [woman's] strength lies in her intuitive grasp of the concrete and the living, especially of the personal. She has the gift of adapting herself to the inner life of others, to their goal orientation and working methods. Feelings are central to her as the faculty which grasps concrete being in its unique nature and specific value; and it is through feeling that she expresses her attitude. She desires to bring humanity in its specific and individual character in herself and in others to the most perfect development possible" [33] (p. 188).
11,492.6
2022-06-01T00:00:00.000
[ "Philosophy" ]
Statistical and functional convergence of common and rare genetic influences on autism at chromosome 16p The canonical paradigm for converting genetic association to mechanism involves iteratively mapping individual associations to the proximal genes through which they act. In contrast, in the present study we demonstrate the feasibility of extracting biological insights from a very large region of the genome and leverage this strategy to study the genetic influences on autism. Using a new statistical approach, we identified the 33-Mb p-arm of chromosome 16 (16p) as harboring the greatest excess of autism’s common polygenic influences. The region also includes the mechanistically cryptic and autism-associated 16p11.2 copy number variant. Analysis of RNA-sequencing data revealed that both the common polygenic influences within 16p and the 16p11.2 deletion were associated with decreased average gene expression across 16p. The transcriptional effects of the rare deletion and diffuse common variation were correlated at the level of individual genes and analysis of Hi-C data revealed patterns of chromatin contact that may explain this transcriptional convergence. These results reflect a new approach for extracting biological insight from genetic association data and suggest convergence of common and rare genetic influences on autism at 16p. Open Access This file is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. In the cases where the authors are anonymous, such as is the case for the reports of anonymous peer reviewers, author attribution should be to 'Anonymous Referee' followed by a clear attribution to the source work. The images or other third party material in this file are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Nature Genetics is committed to improving transparency in authorship. As part of our efforts in this direction, we are now requesting that all authors identified as 'corresponding author' on published papers create and link their Open Researcher and Contributor Identifier (ORCID) with their account on the Manuscript Tracking System (MTS), prior to acceptance. ORCID helps the scientific community achieve unambiguous attribution of all scholarly contributions. You can create and link your ORCID from the home page of the MTS by clicking on 'Modify my Springer Nature account'. For more information, please visit www.springernature.com/orcid. We look forward to seeing the revised manuscript and thank you for the opportunity to review your work.
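Before the reviews below, it may help to make the transmission logic concrete. The following is a minimal Python sketch of a pTDT-style test, assuming the published formulation in which each proband's PRS is compared with the mid-parent PRS; the function names and inputs are illustrative, not the authors' code:

```python
import numpy as np
from scipy import stats

def ptdt(child_prs, mother_prs, father_prs):
    """pTDT-style deviation: child PRS minus mid-parent PRS, scaled by
    the SD of the mid-parent PRS distribution, tested against zero."""
    child_prs = np.asarray(child_prs, dtype=float)
    mid_parent = (np.asarray(mother_prs, dtype=float)
                  + np.asarray(father_prs, dtype=float)) / 2.0
    deviation = (child_prs - mid_parent) / np.std(mid_parent, ddof=1)
    t_stat, p_value = stats.ttest_1samp(deviation, popmean=0.0)
    return deviation.mean(), t_stat, p_value

# A stratified variant (S-pTDT, as discussed in the reviews below) would
# apply the same test to a PRS built only from the SNPs in one genomic
# partition, then compare the resulting statistics across partitions.
```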
Remarks to the Author: The authors introduce a stratified pTDT (S-pTDT), which estimates transmission in parent-child trios of PRS constructed from regions/blocks of the genome. The authors tested whether S-pTDT could identify any regions of the genome with transmission of ASD polygenic risk significantly over or under the genome-wide expectation for large blocks of the genome. The authors estimated transmission in two large trio samples, and the transmission of regional polygenic risk for ASD is correlated between the two samples. Partitions with large S-pTDT z-scores cluster on 16p (0-33 Mb). Although 16p does not contain a genome-wide significant locus for ASD, the authors still sought to determine whether the S-pTDT signal at 16p could be explained by one or a small number of common variant associations; a single driving locus in the region was not found, and the authors verified that the over-transmission of ASD polygenic risk at 16p is not driven by CNV carriers in their data. 16p is gene rich and many of the genes are only expressed in the brain. Gene density, density of brain-specific genes, or density of constrained genes, based on the authors' inquiries, cannot explain the region's degree of polygenic over-transmission. Interestingly, the authors observe across independent cohorts that increased 16p ASD PRS is associated with an average decrease in expression of brain-expressed genes within the 16p region. Also, in vitro deletion of the 16p11.2 locus was associated with decreased average expression of 200 neuronally expressed genes on chromosome 16p. Furthermore, the authors suggest that the 16p region may have increased within-region chromatin contact, which could explain the apparent non-independence of genetic and expression variation at the megabase scale. The authors hypothesize that this diffusely elevated within-region contact at 16p could facilitate the influence of regional polygenic effects on gene expression across 16p, via complex distal regulatory interactions. Lastly, the authors conclude that the 16p11.2 CNV has increased physical interaction with the telomeric region and that the 3D conformation of 16p may mediate convergent ASD-related genetic effects on gene expression via regulatory interactions across megabases of separation. Based on these observations, the authors present the "Integrative model of ASD liability at 16p". Comments 1. The samples studied are large and impressive, the analyses are transparent, novel, and sound, and the model is interesting.
The main weakness is how modest the mean expression effects are for the 16p region, particularly when compared to the decrease in gene expression associated with heterozygous gene deletion (16p11.2 CNV). 2. The deletion described and studied in this manuscript is the 16p11.2 proximal deletion, explaining up to 1% of autistic cases. It is particularly interesting that distal to this locus is another recurrent deletion, the 16p11.2 distal deletion, not mentioned in the manuscript. That deletion, the 16p11.2 distal deletion, confers high risk of very similar phenotypes (including autism, cognitive impairments, and obesity). If that deletion, the 16p11.2 distal deletion, also affects gene expression on 16p in a similar manner to the 16p11.2 proximal deletion and the 16p ASD PRS, the story would be more convincing. Thus, my recommendation is to include data on the 16p11.2 distal deletion in the manuscript as well. In this study, Weiner et al. investigate the feasibility of extracting biological insight from a large genomic region and understanding how it is associated with risk for autism spectrum disorder. They identified the 33 Mb short arm of chromosome 16 as harboring the greatest excess of common polygenic risk for ASD. Analysis of bulk and single-cell RNA-sequencing data from post-mortem human brain samples revealed that common polygenic risk for ASD within 16p is associated with decreased average expression of genes throughout this 33-Mb region. They subsequently use isogenic neuronal cell lines with CRISPR/Cas9-mediated deletion of 16p11.2 to show that the deletion is also associated with depressed average gene expression across the short arm of chromosome 16. The effects of the rare deletion and diffuse common variation were correlated at the level of individual genes. Their results also suggest that very dense 3D chromatin contact within the short arm of chromosome 16 may coordinate genetic and transcriptional disease liability across this region.
This study focuses on a large genomic segment rather than on specific genes. It is original and advances the field. The claims are supported by the data. In particular, the authors provide a rather convincing link between the transcriptional effects of rare and common variants implicated in ASD. My major comment is that there is a lack of functional characterization of this large group of genes on the short arm of 16p. If these genes are co-regulated, this would imply that they are implicated in shared functional modules. The study would benefit from a functional characterization: i.e., are genes within this genomic block enriched in well-known functional modules? Or in modules identified by contrasting gene expression in the brains of individuals with ASD and controls? Specific comments: In the 2nd paragraph of the results: "... we constructed stratified PRS from adjacent blocks of SNPs, yielding 2,006 (often overlapping) partitions collectively covering the whole genome (median number of SNPs per block: 3,000, minimum length: 4.3Mb, maximum length: 52.9Mb, median length: 11.7Mb, Supplementary Figure 3, Methods)..." It is unclear how they defined these genomic blocks of very different sizes. Was it based on the number of LD blocks, the number of genes? Or completely random? Figure 1E. The number of blocks removed stops at n=25. Is that just because there was no more effect? One would expect that the SE of the pTDT would get larger as more blocks are removed, but that doesn't appear to be the case. It also seems like there is a trend that may become significantly protective at one point. What would happen beyond n = 25? In other words, once the authors remove the most over-transmitted blocks, are there protective blocks in the 16p11.2 short arm? As a sensitivity analysis, the authors performed an analogous analysis using a cohort of ADHD trios and an external ADHD GWAS, and they did not replicate the finding in ADHD. One could argue that ADHD may be the worst condition to perform such a sensitivity analysis since the PRS doesn't explain much variance. Schizophrenia would appear to be much more relevant. The PRS is more robust, cohorts are larger, and the 16p11.2 locus is associated with schizophrenia. The relationship between gene density and over-transmission is an important point and should be represented in a figure in the main text. The authors asked whether, on average, the 200 neuronally expressed genes on 16p were differentially
expressed in response to the 16p11.2 deletion. Genes on 16p had significantly lower expression in the deletion lines. The deletion's effect on 16p genes differed from the effect on all other 8,533 neuronally expressed genes in the genome (P = 0.02, somewhat of a trend), whose expression was not, on average, changed by the deletion (P = 0.43). Can the authors provide information on the non-neuronally expressed genes on the short arm? Would these represent a better "control group", providing a stronger contrast? Shouldn't the Y-axis of figure 2C represent "fold change" instead of t-stat? Increased ASD PRS within 16p was associated with decreased expression in glutamatergic neurons of genes throughout the 16p region. Do the authors observe an increase in the variance of gene expression? In other words, is this a simple shift in mean expression, or do results also suggest that there may also be some genes with an increase in expression? The authors show that the CNV-telomeric contacts (n = 291 100kb x 100kb contacts) are 2.9x more frequent than contacts between distance-matched control regions on 16p (n = 1,808 100kb x 100kb contacts, P < 1e-10). However, the 16p11.2 region is 30 Mb away from the telomeric region. I don't see, therefore, how they can test distance-matched regions on the short arm. Distance-matched regions would only be found downstream of the 16p11.2 locus on the long arm. The authors suggest that the entire short arm may represent a group of co-regulated genes involved in ASD. It would be necessary to demonstrate that this is the case and test the enrichment of these genes in known functional modules (in health and disease). For example, are these genes enriched in differentially expressed genes obtained by contrasting gene expression in the brains of individuals with ASD and controls? This data is available, and the analysis should be straightforward. Response to reviewers We thank the reviewers for their thoughtful comments and have responded point-by-point below. We have also highlighted any corresponding changes in the main text. Reviewer #1: Comment #1.1: "The deletion described and studied in this manuscript is the 16p11.2 proximal deletion explaining up to 1% of autistic cases. It is particularly interesting that distal to this locus is another recurrent deletion, 16p11.2 distal deletion, not mentioned in the manuscript. That deletion, the 16p11.2 distal deletion confers high risk of very similar phenotypes (including autism, cognitive impairments and obesity). If that deletion, the 16p11.2 distal deletion, also affects gene expression on 16p in a similar manner as the 16p11.2 proximal deletion and the 16p ASD PRS, the story would be more convincing. Thus, my recommendation is to include as well data on the 16p11.2 distal deletion in the manuscript."
We thank the reviewer for this thoughtful comment about the possibility that other ASD-associated CNVs - especially those located on 16p - may confer disease liability through similar mechanisms to the proximal 16p11.2 deletion studied here. The reviewer notes the potential relevance of the 16p11.2 distal deletion, and we agree this proximate and disease-associated CNV would be interesting to investigate. However, as far as we can tell, there are no published whole-genome RNA-sequencing datasets of the 16p11.2 distal deletion, including in the provided reference of Sønderby et al. It is also materially infeasible for us to generate additional isogenic distal deletions at this time. Without RNA-sequencing, we are unable to evaluate our model for this deletion. That said, we are very interested in whether other neuropsychiatric CNVs confer disease liability through similar mechanisms to those described for the proximal 16p11.2 CNV. We are actively testing this hypothesis in our research group, and we look forward to sharing our findings with the community in the future. Finally, we clarified in the manuscript that we analyzed the proximal and not the distal deletion (page 3). Reviewer #3: Comment #3.1: "My major comment is that there is a lack of functional characterization of this large group of genes on the short arm of 16p. If these genes are co-regulated, this would imply that they are implicated in shared functional modules. The study would benefit from a functional characterization: i.e., Are genes within this genomic block enriched in well-known functional modules? Or in modules identified by contrasting gene expression in the brains of individuals with ASD and controls?" We thank the reviewer for raising this important question - we are also extremely eager to understand the aggregated downstream functional consequence of genetic variation in the region. We first performed gene ontology (GO) analysis to evaluate enrichment of genes on 16p in annotated biological pathways (http://geneontology.org/). We used the same 17,909 genes from the gene density analysis as reference genes. We tested for enrichment of all genes on 16p (midpoint < 32,000,000 bp, n = 432 genes) across three classes of annotations: biological process, molecular function, and cellular component. The GO analysis for molecular function and cellular component returned multiple Bonferroni-significant enrichments: multiple lipid/fatty acid pathways (Fatty-acyl-CoA synthase activity, Butyrate-CoA ligase activity, Medium-chain fatty acid-CoA ligase activity, >20x enrichment for each), and hemoglobin complex (19x enrichment). The lipid/fatty acid pathways return an enrichment because there are 5 acyl-CoA-synthase genes located within 500kb of each other on 16p around Mb 20. Similarly, the hemoglobin complex pathway returns an enrichment because 4 hemoglobin subunits are clustered together within 100kb of each other at the start of chromosome 16. These examples raise a critical point: since functionally similar genes are often clustered together in the genome (Andrews et al. 2015 Genome Research), a gene set enrichment signal will be dominated by whichever functional cluster of genes happens to be located within the region of interest. Thus, we do not believe that canonical gene set enrichment approaches are suited to regional enrichment analysis.
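The enrichment logic at work here, both in the GO analysis above and the case-control comparison that follows, reduces to a 2x2 over-representation test. A minimal sketch, with counts that are illustrative rather than taken from the manuscript:

```python
import numpy as np
from scipy.stats import chi2_contingency

def region_overrepresentation(n_region_in_set, n_region, n_set, n_total):
    """2x2 chi-squared test: are genes from a region over-represented in a
    gene set (e.g., a differentially expressed gene list)?"""
    in_set_out_region = n_set - n_region_in_set
    out_set_in_region = n_region - n_region_in_set
    out_set_out_region = n_total - n_region - in_set_out_region
    table = np.array([[n_region_in_set, out_set_in_region],
                      [in_set_out_region, out_set_out_region]])
    chi2, p, dof, expected = chi2_contingency(table)
    return chi2, p

# Hypothetical counts for illustration: 383 region genes among 17,909
# reference genes, an 83-gene DEG set, and 2 overlapping genes.
print(region_overrepresentation(2, 383, 83, 17909))
```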
That said, it is also possible that decreased expression across 16p does not exert a direct phenotypic effect, but instead propagates to interact with gene/protein networks elsewhere in the cell or cellular network. As cell-type-specific interaction networks come online in coming years, we look forward to integrating them with our analyses. Next, we tested the hypothesis that genes on 16p are over-represented in analysis of differential expression in the brains of individuals with ASD vs. controls. We identified differentially expressed genes between ASD cases (n = 51) and controls (n = 936) from a recent publication and retained those significantly variable at a Bonferroni-significant level (n = 83 genes) (Gandal et al. 2018 Science). We used a chi-squared test for over-representation of genes on 16p (n = 383 in Gandal dataset) in this n = 83 differentially expressed gene set. We did not find over- or under-representation of 16p-related genes in the Gandal DEG set (p > 0.05). Given the genetic heterogeneity of ASD, among the other non-genetic factors contributing to expression variability between ASD cases and controls, we do not find it surprising that there is no overlap between these gene sets. In summary, we believe it is most likely that the genes on 16p - modulated by both common variants on 16p and the 16p11.2 deletion - are integrated in a complex network that is not ascertainable through canonical gene set enrichment approaches. We are engaging with members of the community to develop approaches to extract additional biological meaning out of regional variation in gene expression. We have added these analyses to the main text (page 6) and supplement (Supplementary Table 1). Comment #3.2: "It is unclear how they defined these genomic blocks of very different sizes. Was it based on the number of LD blocks, the number of genes? Or completely random?" We agree this important section of the methods deserves additional detail in the text; we have expanded this methods section in the manuscript (page 18). In brief, for creating genomic blocks of 2,000 SNPs, we identified the first PRS SNP on chromosome 1 (the SNP closest to the first base pair), counted 2,000 PRS SNPs, and called that the first partition. Then, we counted the next 2,000 PRS SNPs on chromosome 1, called that the next partition, etc., until we ran out of SNPs on chromosome 1. Then we started the same process on chromosome 2, etc. We repeated this for blocks of different sizes (3,000 SNPs, 4,000 SNPs, 5,000 SNPs, and 6,000 SNPs), as well as repeated the entire process starting at the ends of chromosomes and going backwards. The partitions were not based on LD blocks, nor on genes/gene density. Comment #3.3: "Figure 1E. The number of blocks removed stops at n=25. Is that just because there was no more effect?" In Figure 1E, the number of blocks removed stops at n = 25 because that is the number of these blocks located within 16p (median length of each block: 1.31 Mb). We have clarified this point in the text (page 19). Comment #3.4: "One would expect that the SE of the pTDT would get larger as more blocks are removed, but that doesn't appear to be the case." The SE of the S-pTDT decreases with larger sample size (right plot) but does not vary with the number of SNPs in the PRS partition (left plot). Comment #3.5: (Figure 1E) "It also seems like there is a trend that may become significantly protective at one point. What would happen beyond n = 25? In other words, once the authors remove the most over-transmitted blocks, are there protective blocks in the 16p11.2 short arm?"
Some of the blocks on 16p are (non-significantly) under-transmitted to ASD probands (Supplementary Figure 8), which indeed reflects a trend towards the common variants in that block being protective. This is not unique to 16p, but reflects a genome-wide pattern, where many regions of ASD common variation are under-transmitted in our three trio cohorts (Supplementary Figure 4). This reflects the genetic variability among our trio cohorts, where it is only with some probability at a given locus that an ASD proband inherited the liability-increasing haplotype. Comment #3.6: "As a sensitivity analysis, the authors performed an analogous analysis using a cohort of ADHD trios and an external ADHD GWAS and they did not replicate the finding in ADHD. One could argue that ADHD may be the worst condition to perform such a sensitivity analysis since the PRS doesn't explain much variance. Schizophrenia would appear to be much more relevant. The PRS is more robust, cohorts are larger, and the 16p11.2 locus is associated with schizophrenia." We agree that the SCZ PRS is a more predictive instrument for SCZ than is the ADHD PRS for ADHD. However, relative to the ASD PRS, the ADHD PRS performs well (ADHD Nagelkerke's R2 = 5.5%, Demontis et al. 2019 Nature Genetics, vs. ASD Nagelkerke's R2 = 2.5%, Grove et al. 2019 Nature Genetics). Regarding cohorts, the schizophrenia cohorts in the PGC are case-control design and not trio, which is required for the within-family transmission analysis of S-pTDT. In contrast, we were able to use ADHD trios from the PGC. Finally, the 16p11.2 locus is also associated with ADHD (Niarchou et al. 2019 Translational Psychiatry). Comment #3.7: "The relationship between gene density and over-transmission is an important point and should be represented in a figure in the main text." We agree this is an important point and have moved one of the panels relating gene density and over-transmission to the main text as an inset to Figure 1F. Comment #3.8: "The authors asked whether, on average, the 200 neuronally expressed genes on 16p were differentially expressed in response to the 16p11.2 deletion. Genes on 16p had significantly lower expression in the deletion lines. The deletion's effect on 16p genes differed from the effect on all other 8,533 neuronally expressed genes in the genome (P = 0.02, somewhat of a trend), whose expression was not, on average, changed by the deletion (P = 0.43). Can the authors provide information on the non-neuronally expressed genes on the short arm? Would these represent a better "control group", providing a stronger contrast? Shouldn't the Y-axis of figure 2C represent "fold change" instead of t-stat?" We thank the reviewer for this thoughtful comment, and agree that the low-expression condition should be included in the analysis for comparison. We have assessed the effect in low-expression genes in both the isogenic 16p11.2 deletion lines and using the regional PRS, and confirmed that the decrease in expression is attenuated in those lower-expressed genes in both sets of analyses. We have summarized the findings in a figure below in this response. In the main text and supplement, we have added text describing the analyses and results for both the isogenic deletion (main text page 9, supplementary figure 16) and for the regional PRS approach (main text page 11, supplementary figure 20).
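A minimal Python sketch of the block-construction procedure described in the response to comment #3.2 above; the input format and names are illustrative, and SNPs are assumed to arrive pre-sorted by position:

```python
def partition_snps(snps_by_chrom, block_size=2000):
    """Split PRS SNPs into consecutive blocks of `block_size` per chromosome.

    `snps_by_chrom` maps chromosome -> list of SNP IDs sorted by position.
    Trailing SNPs that do not fill a complete block are dropped here; the
    response above notes the procedure was also repeated from the ends of
    chromosomes going backwards, which recovers them.
    """
    partitions = []
    for chrom in sorted(snps_by_chrom):
        snps = snps_by_chrom[chrom]
        for start in range(0, len(snps) - block_size + 1, block_size):
            partitions.append((chrom, snps[start:start + block_size]))
    return partitions
```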
Regarding units in Figure 2C, the left panel is in log(fold-change) for intuitive interpretability, while the right panel is the statistical comparison of the two groups that incorporates uncertainty in the fold-change estimates, hence displaying the changes in uncertainty-normalized t-statistics. Comment #3.9: "Increased ASD PRS within 16p was associated with decreased expression in glutamatergic neurons of genes through the 16p region. Do the authors observe an increase in the variance of gene expression? In other words, is this a simple shift in mean expression, or do results also suggest that there may also be some genes with an increase in expression?" Thank you for this interesting question. To explore this further, for each 33Mb region of the genome ("partition"), we associated regional PRS with expression of each gene and extracted the association t-statistic across our 544 samples. There was no association between either the partition's mean(t-statistic) or |mean(t-statistic)| and variance(t-statistic) across partitions (see plots below, where each dot is a partition, with 16p in blue; P > 0.05 for each). For 16p specifically, we do not see a dramatic increase in expression variance given the decrease in expression averaged across all genes. Comment #3.10: "Authors show that the CNV-telomeric contacts (n = 291 100kb x 100kb contacts) are 2.9x more frequent than contacts between distance-matched control regions on 16p (n = 1,808 100kb x 100kb contacts, P < 1e-10). However, the 16p11.2 region is 30 Mb away from the telomeric region. I don't see, therefore, how they can test distance-matched regions on the short arm. Distance-matched regions would only be found downstream of the 16p11.2 locus on the long arm." We define the telomeric region from 0 Mb to 5.2 Mb on chromosome 16, while the 16p11.2 (proximal) CNV ranges from 29.5 Mb to 30.2 Mb. The contacts between these regions are denoted in Figure 4C in the blue shaded rectangle inside the larger triangular contact matrix. The range of distances encompassed between these contacts begins at 24.3 Mb in distance (contact between Mb 5.2 of the telomeric region and Mb 29.5 of the CNV: 29.5 - 5.2 = 24.3) and extends to 30.2 Mb in distance (contact between Mb 30.2 of the CNV and Mb 0 of the telomeric region). Therefore, the distance-matched control regions are contacts on 16p that span 24.3 Mb to 30.2 Mb. There are many such 100kb x 100kb contacts (n = 1,808), and such contacts are denoted in the red shaded trapezoid in Figure 4C (for example, contact between Mb 6 and Mb 31 = 25 Mb apart).
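The distance-matching logic in this response can be sketched as a simple filter over Hi-C contact bins. A hypothetical Python illustration, not the authors' pipeline; coordinates and structure are assumptions:

```python
def distance_matched_controls(contacts_mb, min_sep=24.3, max_sep=30.2,
                              arm_end=33.0, cnv=(29.5, 30.2), tel=(0.0, 5.2)):
    """Keep 16p contact bins whose genomic separation matches the
    CNV-telomere contact distances, excluding those contacts themselves.

    `contacts_mb` is an iterable of (pos_i, pos_j, frequency) tuples with
    both loci given in Mb on chromosome 16.
    """
    def in_range(pos, lo_hi):
        return lo_hi[0] <= pos <= lo_hi[1]

    controls = []
    for pos_i, pos_j, freq in contacts_mb:
        lo, hi = sorted((pos_i, pos_j))
        if hi > arm_end or not (min_sep <= hi - lo <= max_sep):
            continue  # off the p-arm, or separation not distance-matched
        if in_range(lo, tel) and in_range(hi, cnv):
            continue  # this is a CNV-telomere contact, not a control
        controls.append((pos_i, pos_j, freq))
    return controls
```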
Decision Letter, first revision: Our ref: NG-A59672R 4th August 2022 Dear Dan, Your revised manuscript "Statistical and functional convergence of common and rare genetic influences on autism at chromosome 16p" (NG-A59672R) has been seen by the original referees. As you will see from their comments below, they find that the paper has improved in revision, and therefore we will be happy in principle to publish it in Nature Genetics as an Article pending final revisions to address the referees' remaining points and to comply with our editorial and formatting guidelines. We are now performing detailed checks on your paper and we will send you a checklist detailing our editorial and formatting requirements soon. Please do not upload the final materials or make any revisions until you receive this additional information from us. Thank you again for your interest in Nature Genetics. Please do not hesitate to contact me if you have any questions. Reviewer #1 (Remarks to the Author): Comment #1.1: "The deletion described and studied in this manuscript is the 16p11.2 proximal deletion explaining up to 1% of autistic cases. It is particularly interesting that distal to this locus is another recurrent deletion, 16p11.2 distal deletion, not mentioned in the manuscript. That deletion, the 16p11.2 distal deletion confers high risk of very similar phenotypes (including autism, cognitive impairments and obesity). If that deletion, the 16p11.2 distal deletion, also affects gene expression on 16p in a similar manner as the 16p11.2 proximal deletion and the 16p ASD PRS, the story would be more convincing. Thus, my recommendation is to include as well data on the 16p11.2 distal deletion in the manuscript." Authors' reply: We thank the reviewer for this thoughtful comment about the possibility that other ASD-associated CNVs - especially those located on 16p - may confer disease liability through similar mechanisms to the proximal 16p11.2 studied here. The reviewer notes the potential relevance of the 16p11.2 distal deletion, and we agree this proximate and disease-associated CNV would be interesting to investigate. However, as far as we can tell, there are no published whole-genome RNA-sequencing datasets of the 16p11.2 distal deletion, including in the provided reference of Sønderby et al.
It is also materially infeasible for us to generate additional isogenic distal deletions at this time. Without RNA-sequencing, we are unable to evaluate our model for this deletion. That said, we are very interested in whether other neuropsychiatric CNVs confer disease liability through similar mechanisms to those described for the proximal 16p11.2 CNV. We are actively testing this hypothesis in our research group, and we look forward to sharing our findings with the community in the future. Finally, we clarified in the manuscript that we analyzed the proximal and not the distal deletion (page 3). Further comments from Reviewer #1: I find the results presented in this manuscript most interesting. However, they should be confirmed. The authors can identify RNA-sequenced samples suitable for confirming their findings, or they can analyze isogenic neuronal cell lines with CRISPR/Cas9-mediated "distal" deletion of 16p11.2. That may reveal that the deletion also associates with depressed average gene expression across 16p and, hence, confirm the findings. Reviewer #3 (Remarks to the Author): The authors responded to all comments and questions in a satisfactory way. The only response that remains unclear relates to comment 3.10. The authors describe the contacts between the 16p11.2 region and the telomeric region ranging from 24.3 to 30.2 Mb in distance. They give an example of the distance-matched control regions between Mb 6 and Mb 31. However, this contact is beyond the 16p11.2 region. Does this mean that the distance-matched control contacts do not include the 16p11.2 region? Author Rebuttal to Initial comments Dear Kyle, Thank you for sharing the reviewer comments. Please see our responses below: Response to reviewer #1 We are glad the reviewer finds our manuscript of great interest. We interpret the reviewer's specific request here as asking for analysis of additional isogenic CRISPR-generated lines of either the proximal or distal 16p11.2 deletion. While we've reviewed the literature and inquired broadly, such a resource does not seem to currently exist, unfortunately. We're happy to reflect more on the one result in question (the data in Figure 2).
We do find support for the observation from other analyses presented in the manuscript, including a) convergence with the expression effects of the 16p ASD PRS (Figure 4A), b) elevated chromatin contact between the 16p11.2 deletion region and the telomeric region of convergent effect (Figure 4C), and c) lack of a similar observation at the 15q locus, supporting the specificity of the 16p11.2 deletion effect on regional gene expression (Supplementary Figure 17). We will also add a note to the discussion that replication using further isogenic lines or very large patient-derived samples will be valuable once those resources are developed. Should the NG editors or the reviewer have other questions or suggestions, we're happy to discuss. Response to reviewer #3 We are glad the reviewer finds our responses satisfactory. With regard to comment 3.10: yes, that is correct that almost all of the control contacts do not include the 0.7Mb 16p11.2 region. Three-dimensional contact frequencies as assayed by Hi-C are strongly dependent on the distance between the contact loci (decaying with distance). Thus, we defined control regions based on their distance in such a way that the range of control contact distances (red trapezoid in Figure 4C) is the same as the range of contact distances between the 16p11.2 region and the telomeric contact region (blue rectangle in Figure 4C). Hopefully this is clarifying. Final Decision Letter: In reply please quote: NG-A59672R1 Weiner 15th September 2022 Dear Dan, I am delighted to say that your manuscript "Statistical and functional convergence of common and rare genetic influences on autism at chromosome 16p" has been accepted for publication in an upcoming issue of Nature Genetics. Over the next few weeks, your paper will be copyedited to ensure that it conforms to Nature Genetics style. Once your paper is typeset, you will receive an email with a link to choose the appropriate publishing options for your paper and our Author Services team will be in touch regarding any additional information that may be required. After the grant of rights is completed, you will receive a link to your electronic proof via email with a request to make any corrections within 48 hours. If, when you receive your proof, you cannot meet this deadline, please inform us at <EMAIL_ADDRESS> immediately. You will not receive your proofs until the publishing agreement has been received through our system.
Due to the importance of these deadlines, we ask that you please let us know now whether you will be difficult to contact over the next month. If this is the case, we ask that you provide us with the contact information (email, phone and fax) of someone who will be able to check the proofs on your behalf, and who will be available to address any last-minute problems. Your paper will be published online after we receive your corrections and will appear in print in the next available issue. You can find out your date of online publication by contacting the Nature Press Office <EMAIL_ADDRESS> after sending your e-proof corrections. Now is the time to inform your Public Relations or Press Office about your paper, as they might be interested in promoting its publication. This will allow them time to prepare an accurate and satisfactory press release. Include your manuscript tracking number (NG-A59672R1) and the name of the journal, which they will need when they contact our Press Office. Before your paper is published online, we will be distributing a press release to news organizations worldwide, which may very well include details of your work. We are happy for your institution or funding agency to prepare its own press release, but it must mention the embargo date and Nature Genetics. Our Press Office may contact you closer to the time of publication, but if you or your Press Office have any enquiries in the meantime, please contact <EMAIL_ADDRESS>. Acceptance is conditional on the data in the manuscript not being published elsewhere, or announced in the print or electronic media, until the embargo/publication date. These restrictions are not intended to deter you from presenting your data at academic meetings and conferences, but any enquiries from the media about papers not yet scheduled for publication should be referred to us. Please note that Nature Genetics is a Transformative Journal (TJ). Authors may publish their research with us through the traditional subscription access route or make their paper immediately open access through payment of an article-processing charge (APC). Authors will not be required to make a final decision about access to their article until it has been accepted.
Find out more about Transformative Journals: https://www.springernature.com/gp/open-research/transformative-journals. Authors may need to take specific actions to achieve compliance (https://www.springernature.com/gp/open-research/funding/policy-compliance-faqs) with funder and institutional open access mandates. If your research is supported by a funder that requires immediate open access (e.g. according to Plan S principles, https://www.springernature.com/gp/open-research/plan-s-compliance), then you should select the gold OA route, and we will direct you to the compliant route where possible. For authors selecting the subscription publication route, the journal's standard licensing terms will need
9,856
2022-10-24T00:00:00.000
[ "Biology", "Medicine" ]
Market volatility and spillover across 24 sectors in Vietnam Abstract While market volatility and volatility connectedness across different financial markets have been examined, the spillover effects across sectors have been under-examined. As such, this study aims to examine market volatility and the volatility patterns for 24 Vietnamese sectors. Our study uses the ARMA-GARCH estimation technique over the 2012–2021 period. The spillover effects between these sectors are then investigated using the vector autoregression (VAR) technique. Three key findings are as follows. First, the market volatility of Development Investment, Education, and Securities is most affected by the market volatility from the previous periods, whereas Construction is least affected. Second, the Vietnamese stock market exhibits a substantial inter-sector connectedness above 60 per cent from 2012 to 2021. However, the sectoral spillover effects increase to around 90 per cent during the Covid-19 pandemic. We found that Aquaculture, Building Materials, Food, and Plastic are the four primary risk transmitters at the sectoral level. Third, market volatility for Energy, Plastic, and Steel is unaffected by the pandemic. Meanwhile, Securities, Fertilizer, and Transportation exhibited a significant increase in market volatility during Covid-19. Based on these empirical results, policy implications have emerged for the Vietnamese government to support affected industries to recover and develop. Introduction Vietnam is classified as a developing nation. The Vietnamese economy is currently ranked 37th globally using the nominal gross domestic product (GDP) and 23rd using the purchasing power parity (PPP) adjusted GDP in 2020 (World Bank, 2021). Many investors consider Vietnam a market with significant growth potential and have decided to invest there. Therefore, understanding the Vietnamese stock market and the sectors' characteristics is important to estimate the risks and design a proper investment strategy. The Vietnamese stock market is considered one of the frontier markets. Therefore, the stock market is considered small, risky, and illiquid, and thus easier to manipulate than other stock markets. However, since the market is small and has huge growth potential as the Vietnamese economy develops, it provides substantial profitable investment opportunities for investors. Therefore, understanding the market risk, especially the stocks' volatility, can help investors mitigate risk and manage their portfolios when they invest in the Vietnamese stock market. As a frontier market, the Vietnamese stock market has exhibited high volatility for most of the time since the market was established. Along with economic growth and development, more and more investors participate in the Vietnamese stock market. However, the emergence of the Covid-19 pandemic has caused the global economy to suffer a significant economic downturn. The Vietnamese government has responded to the pandemic by enacting lockdowns, quarantines, and social distancing, forcing many Vietnamese people to stay home for several months. Also, the unemployment rate increased since many businesses struggled to operate after being severely hit by the pandemic. Therefore, more people sought investment opportunities to generate income and profit during the pandemic, and a significant number of new trading accounts joined the market during the Covid-19 period of 2020-2021.
The participation of substantial individual investors has lifted the stock market liquidity to more than 10 trillion VND, approximately 2 billion USD, in December 2021. Figure 1 presents Vietnam's total stock trading accounts from December 2015 to December 2021. The total accounts in 2020 increased by 17 per cent compared to 2019. In 2021, the increase reached 56 per cent, thus showing that investors are incredibly interested in investing in the stock market (The State Securities Commission of Vietnam, 2021). The Covid-19 pandemic has affected various Vietnamese stock sectors, and significant shocks such as the pandemic usually amplify market volatility across financial markets. Figure 2 illustrates the market price volatility in 24 Vietnamese stock sectors and compares them with the Vietnamese stock market index, the VN-Index (VNI), over the ten years of 2012-2021 (Figure 2a) and the Covid-19 pandemic period of 2020-2021 (Figure 2b). The significant fluctuation trend of the sectoral market indices can be observed among all sectors. However, from 2012 to 2021, Aviation exhibits the most significant price volatility over the entire period. Aviation reached the highest price increase of around 1,550 per cent in January 2018. The level then significantly dropped by approximately 50 per cent from the peak to 660 per cent in March 2020 when Covid-19 emerged. The pandemic has caused extreme chaos among Vietnamese people and severely damaged the aviation sector by forcing the government to restrict flights and enact lockdowns. Also, the VN-Index and other sectors experienced a plunge in March 2020 to hit bottom and then gradually recovered (The State Securities Commission of Vietnam, 2021). Besides, we argue that all stock sectors have inter-connectedness in which substantial volatility emerging in one sector may spread to other sectors (Zhang et al., 2020). Understanding sectoral volatility connectedness during the Covid-19 pandemic plays an essential role in the design of public policies by the government and policymakers and in the management of investment portfolios by practitioners and investors. However, the sectoral spillover effects during the Covid-19 pandemic appear to be ignored in the existing literature. Previous studies have examined sectoral volatility spillover within specific financial markets other than Vietnam during the pandemic period, such as in China (Shahzad et al., 2021; Su & Liu, 2021; Umar et al., 2021), the US (Laborda & Olmo, 2021; Yousaf et al., 2020a), Europe (Aslam et al., 2021; Mensi et al., 2022), Islamic markets (Yousaf & Yarovaya, 2022a), and the G7 countries (Zhang et al., 2021). As such, the primary research objective of this paper is to examine the spillovers among equity sectors of Vietnamese stock markets in the pre- and post-Covid-19 periods. We analyze the market volatility and its patterns across 24 sectors in Vietnam during the ten years of 2012-2021 and identify the most vulnerable and resilient sectors to previous shocks using the ARMA-GARCH estimations. This research objective is examined on the grounds of theoretical considerations of volatility spillover and empirical analyses discussed later in this study. The contributions of this study are twofold. First, market volatility spillover has been widely investigated for samples of equity markets. However, market volatility spillovers across sectors appear to be under-examined in the existing literature, especially for emerging markets.
Our literature review indicates that this is the first study to examine the market volatility spillovers across Vietnam sectors, especially focusing on the current Covid-19 pandemic. Second, our study identifies the risk-transmitting and risk-receiving sectors among these 24 Vietnamese sectors and examines their behaviours, particularly during events such as the current Covid-19 pandemic. Understanding these classifications and the changes in behaviour plays an important role in policy implications. The structure of this study is as follows. Section 2 discusses and synthesizes related academic papers on market volatility spillovers following this introduction. Section 3 presents the research methodology and data. Empirical results on market volatility, spillovers, and significant changes in market volatility pre- and post-breakpoints during the Covid-19 pandemic, including our robustness analysis, are presented and discussed in section 4 of the paper. Finally, section 5 offers concluding remarks. Literature review Volatility spillover has been widely examined in the existing literature. Volatility spillovers are generally understood as a phenomenon in which one market's volatility prompts volatility in other markets (Yarovaya et al., 2016). The market volatility spillovers of various markets are widely examined in the existing literature (for example, see Forbes and Rigobon, 2002; French and Poterba, 1991; Goetzmann et al., 2005; Pukthuanthong and Roll, 2015; Solnik and Watewai, 2016). Spillovers from one market to another across assets or asset classes have also been widely investigated (for example, see Engle et al., 1990; Eun and Shim, 1989; King and Wadhwani, 1990, for the earliest studies of spillovers). In particular, empirical studies on these market volatility spillovers have been conducted for the US and other markets, Scandinavian markets, and European markets. Market volatility spillovers have also been examined for spot and futures markets, the energy markets, credit markets, commodity markets, bond markets, and foreign exchange or cryptocurrency markets (see Hoang & Baur, 2021; Yousaf & Ali, 2020; Yousaf et al., 2020a, 2020b, 2020c; Yousaf & Hassan, 2019; Yousaf & Yarovaya, 2022b). Regarding sectoral volatility spillovers, Zhang et al. (2020) argue that sectors in an economy are closely connected and that significant volatility occurring in one sector could also trigger the other sectors' volatility. Such volatility spillovers are representatives of financial crises (Laborda & Olmo, 2021). As such, constructing appropriate methods to examine the volatility spillovers has largely drawn scholars' attention. Su and Liu (2021) have presented three main approaches to measuring volatility spillovers. The first approach includes using Granger causality to examine the mean spillover and returns volatility (Atukeren et al., 2021; Hong, 2001; Hong et al., 2009). The second approach focuses on the vector autoregression (VAR) family models to investigate the volatility spillovers across markets through the mechanism of network topology (Diebold & Yilmaz, 2009, 2012; Diebold & Yılmaz, 2014; Gabauer & Gupta, 2018; Laborda & Olmo, 2021; Su & Liu, 2021; Yousaf et al., 2022). The third approach uses the generalized autoregressive conditional heteroskedasticity (GARCH) models.
The GARCH family models appear to be comprehensive as they are widely adopted to examine volatility spillovers among markets, sectors, and institutions (Cheung & Ng, 1996; Gabauer, 2020; Hamao et al., 1990; Hassan & Malik, 2007; Yousaf et al., 2020b). Besides, scholars have recently combined these three conventional approaches with a novel method known as the wavelet transform to measure volatility spillovers. For example, Boubaker and Raza (2017) combine a multivariate ARMA-GARCH model and wavelet multiresolution analysis to investigate the volatility spillovers between oil prices and BRICS stock markets. Ghosh and Chaudhuri (2019) implement a comprehensive approach including the Granger causality test, the Diebold-Yilmaz VAR model, GARCH family models, and wavelet decomposition to examine the temporal correlation, causal relationships, and spillovers among eight financial time series/indices, including the CBOE Volatility Index (VIX), India VIX, Crude Oil, FX1, DJIA, Nifty IT, Metal, and BSE Sensex. The recent Covid-19 outbreak has been reported to have caused economic downturns in many countries, which may also trigger their stock markets' volatility and volatility spillovers. Notably, the stock markets have responded more strongly to the pandemic than the real economy (Hasan et al., 2021). In addition, findings from Aslam et al. (2021), Akinlaso et al. (2021), Bissoondoyal-Bheenick et al. (2021), and Mensi et al. (2022) have confirmed a significant increase in stock markets' volatility associated with the rapid Covid-19 outbreaks. As such, sectors in the stock markets might also have experienced a rise in volatility and volatility spillovers during the pandemic. Su and Liu (2021) adopt the VAR model developed by Diebold and Yilmaz (2012) to examine the spillover network of financial shocks among ten Chinese sectors before and during the Covid-19 period. Their findings confirm a stronger connectedness among Chinese sectors since the Covid-19 outbreak. Furthermore, these findings confirm the increased volatility spillovers among Chinese sectors, in which the consumer discretionary, industrials, and materials sectors become the primary risk transmitters. Also, Shahzad et al. (2021) examined the volatility spillovers across ten Chinese sectors. They observed an asymmetric effect of positive and negative volatility patterns, which is intense and time-varying during the Covid-19 period. Specifically, the negative volatility spillover patterns appear to outweigh the positive patterns. Laborda and Olmo (2021) have found that the energy, banking & insurance, and biotechnology sectors are the main volatility transmitters for the US sectoral volatility spillovers. Similarly, Choi (2022) has observed that the Covid-19 outbreak magnifies the volatility spillovers across eleven sectors in the US, in which the main volatility transmitters are the energy, consumer discretionary, and consumer staples sectors. The literature on volatility spillovers in the context of Vietnam remains limited. Previous studies only confirm the significant role of foreign ownership in stabilizing stock return volatility in Vietnam (Vo, 2015) and in improving corporate earnings quality (Vo et al., 2019) for listed firms in Vietnam. In particular, Vo (2015) examines the effects of foreign ownership on the firm-level volatility of stock returns in Vietnam using a sample of listed firms on the Ho Chi Minh City stock exchange.
The author concludes that a firm's ownership by foreign investors decreases the firm's stock price volatility in Vietnam's stock market. In another attempt, Vo et al. (2019) examine the nature of foreign stakeholders' role in enhancing the quality of financially reported earnings. Their findings indicate that firms with greater foreign shareholdings are aligned with a higher quality of financial disclosure. Our literature review indicates that no studies have examined the volatility spillovers across sectors in Vietnam, especially during the Covid-19 period. As the literature on volatility spillovers across sectors in Vietnam is still underdeveloped, we consider it essential to use a combined approach, including the Diebold-Yilmaz VAR model and the GARCH family models, to examine the volatility spillovers in Vietnam. The current lack of evidence on this crucial topic of volatility spillovers across sectors warrants our analysis in this study. Data and sampling This study uses the daily closing prices of 24 Vietnamese sectors' indices over ten years from 2012 to 2021. These sectors include Aquaculture, Aviation, Banking, Building Materials, Business, Construction, Construction Investment, Development Investment, Education, Energy, Fertilizer, Food, Mineral, Oil & Gas, Pharmaceutical, Plastic, Real Estate, Rubber, Securities, Services, Steel, Technology, Trade, and Transportation. Additionally, the sub-period covering the Covid-19 pandemic of 2020-2021 is also used to examine the effects of the pandemic on market volatility among the Vietnamese sectors. The ARMA-GARCH approach to estimating volatility We employ the ARMA-GARCH estimation to investigate the market volatility and the volatility patterns across the 24 Vietnamese sectors. Noticeably, the GARCH model captures volatility clustering and heteroskedasticity. Zivot (2009) argues that the GARCH model is more advantageous than the ARCH model in modelling the residuals of unstable magnitude obtained from the ARMA model. We first fit an ARMA model to each sector's return series. The market return for each sector is estimated as $X_{it} = \ln(D_{it}) - \ln(D_{i,t-1})$, where $D_{it}$ represents the closing price of index $i$ at time $t$ and $X_{it}$ denotes the change (log return) of index $i$ at time $t$. The optimal lag lengths of the AR($r$) and MA($s$) parts employed in the ARMA model are determined by minimizing the Bayesian Information Criterion (BIC). The ARMA($r,s$) model can be written in backshift form as $\left(1 - \sum_{i=1}^{r} \beta_i B^i\right) X_t = \left(1 + \sum_{j=1}^{s} \gamma_j B^j\right) \varepsilon_t$, where $B$ denotes the backshift operator, $\beta_i$ and $\gamma_j$ are the coefficients of the ARMA model's autoregressive (AR) and moving average (MA) parts, and $\varepsilon_t$ represents the residual, expected to be independently and identically distributed. Next, we conduct a heteroskedasticity test on the dataset to confirm the ARCH effect using the Ljung-Box portmanteau (Q) test (Ljung & Box, 1978): $Q = n(n+2) \sum_{j=1}^{m} \frac{\hat{\rho}^2(j)}{n-j}$, where $m$ represents the number of lags tested (the degrees of freedom), $\hat{\rho}^2(j)$ is the squared estimated autocorrelation at lag $j$, and $n$ stands for the number of observations. The test's null hypothesis (H0) is that the series follows a white noise process, implying that no ARCH effects occur. However, when we reject the null hypothesis, the return series have ARCH effects. The test result warrants our next step of employing the ARMA($r,s$)-GARCH($p,q$) model to extract the conditional variance (market volatility) of the 24 Vietnamese sectors. The GARCH($p,q$) equation is described as $\sigma_t^2 = \omega + \sum_{i=1}^{p} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{q} \beta_j \sigma_{t-j}^2$, where $\omega$ is the variance constant, the $\alpha_i$ capture the ARCH effects of past shocks, and the $\beta_j$ capture the GARCH effects of past volatility.
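To make this estimation pipeline concrete, the following is a minimal Python sketch of the same steps: log returns, BIC-based selection of the ARMA order, a Ljung-Box test on the squared residuals to detect ARCH effects, and a GARCH(1,1) fit on the mean-equation residuals. It is a two-step approximation written with the statsmodels and arch packages, not the authors' code; the order grid, the 12-lag choice, and the 5% cut-off are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model

def fit_arma_garch(prices: pd.Series):
    """Sector volatility: BIC-selected ARMA mean, then GARCH(1,1) variance."""
    # Log return X_it = ln(D_it) - ln(D_i,t-1), scaled to per cent
    x = 100 * np.log(prices).diff().dropna()

    # Select ARMA(r, s) by minimizing the BIC over a small grid
    best_fit, best_bic = None, np.inf
    for r in range(4):
        for s in range(4):
            try:
                fit = ARIMA(x, order=(r, 0, s)).fit()
            except Exception:
                continue  # skip orders that fail to converge
            if fit.bic < best_bic:
                best_fit, best_bic = fit, fit.bic

    # Ljung-Box Q test on squared residuals; rejecting the white-noise
    # null (p < 0.05) signals ARCH effects and warrants the GARCH stage
    lb = acorr_ljungbox(best_fit.resid ** 2, lags=[12], return_df=True)
    has_arch = lb["lb_pvalue"].iloc[0] < 0.05

    # GARCH(1,1) on the mean-equation residuals: alpha captures the ARCH
    # (shock) effect, beta the GARCH (lagged-volatility) effect
    garch = arch_model(best_fit.resid, mean="Zero", vol="GARCH", p=1, q=1)
    garch_fit = garch.fit(disp="off")
    return has_arch, garch_fit.params, garch_fit.conditional_volatility
```

Applied to each of the 24 sector indices, the returned conditional volatility series would be the inputs to the VAR spillover stage described next.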
Estimating the market volatility spillover using the forecast error variance decomposition in a vector autoregression (VAR) model After examining the market volatility of each sector, our next objective is to analyze the volatility transmission/spillover among the 24 sectors in the Vietnamese stock market. Therefore, we adopt the connectedness/spillover network analysis approach of Diebold and Yilmaz (2012, 2015) in our analysis of volatility spillover. Under this spillover network analysis approach, the association structure among all sectors can be identified in greater depth. Also, each sector's direction of transmission and node weight can be identified simultaneously (Diebold & Yılmaz, 2014). The simplicity and informativeness of this connectedness network analysis approach fit our study's objectives well. The spillover index is calculated based on the forecast error variance decomposition in the vector autoregression (VAR) model constructed by Diebold and Yilmaz (2012, 2015). The procedure for applying the model is as follows. First, the VAR model of order p is fitted to the time series of market volatility obtained from the ARMA-GARCH estimation. We conducted the augmented Dickey-Fuller (ADF) test to examine the stationarity of the market indices of the 24 Vietnamese sectors. Our findings confirm that employing the VAR model for these market indices is appropriate. Second, we forecast the market volatility for all sectors for h periods ahead. Using data up to time t, we obtain the error variance decomposition of each forecast corresponding to the shocks arising from the same or other network components at time t. Third, based on the forecast error variance decomposition obtained from the previous step, we estimate the volatility connectedness index of each market index and the total connectedness/spillover index of the Vietnamese stock market. We then calculate the total spillover/connectedness index for the entire Vietnamese stock market, expressed as a percentage of the total variation in the spillover network. The total spillover index (TSI) can be measured as $TSI = \frac{\sum_{i,j=1,\, i \neq j}^{N} d_{ij}}{\sum_{i,j=1}^{N} d_{ij}} \times 100$, where $N$ denotes the number of time series and $d_{ij}$ (for $i \neq j$) denotes the pairwise directional connectedness/spillover from series $j$ to series $i$. This paper examines the volatility spillover effects by employing the VAR model with a lag length of three, a 12-day-ahead forecast error variance horizon, and 200-day rolling-sample windows. These parameters for the VAR estimation follow those used in the Diebold and Yilmaz (2015) study. The lag length of three in our model is selected based on the final prediction error (FPE) and Akaike's information criterion (AIC). Moreover, we also conduct a robustness test for our model using several VAR lags (from lag 1 to lag 5), forecast horizons (5, 10, and 15 days), and rolling-sample window lengths (250, 500, and 750 days). Estimating the market volatility using the ARMA-GARCH model The market volatility patterns for the 24 stock sectors in Vietnam are presented and discussed. Table 1 presents the ARMA-GARCH estimation results for the entire period of 2012-2021. Results show that all 24 sectors have both ARCH and GARCH effects, which means that the market volatilities in the current period are affected by both the shocks and the volatilities from the previous periods. Additionally, the variance constants are not significantly different from 0, implying that the market volatilities across all sectors are driven mainly by the shocks and volatility from the previous periods.
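As a concrete reference for the spillover construction just described, the sketch below computes the generalized forecast error variance decomposition of a fitted VAR and the resulting TSI. It is a plausible reading of the Diebold-Yilmaz procedure written with statsmodels, not the authors' code; `vol` is assumed to be a DataFrame holding the 24 ARMA-GARCH volatility series, and the lag and horizon defaults mirror the values stated above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def spillover_table(vol: pd.DataFrame, p: int = 3, h: int = 12) -> pd.DataFrame:
    """Row-normalized generalized variance decomposition, in per cent."""
    res = VAR(vol).fit(p)
    Sigma = np.asarray(res.sigma_u)      # residual covariance matrix
    A = res.ma_rep(maxn=h - 1)           # MA coefficient matrices A_0..A_{h-1}
    k = Sigma.shape[0]

    theta = np.zeros((k, k))
    for i in range(k):
        den = sum(A[n][i] @ Sigma @ A[n][i] for n in range(h))
        for j in range(k):
            num = sum((A[n][i] @ Sigma[:, j]) ** 2 for n in range(h))
            theta[i, j] = (num / Sigma[j, j]) / den
    theta /= theta.sum(axis=1, keepdims=True)   # each row sums to one
    return pd.DataFrame(100 * theta, index=vol.columns, columns=vol.columns)

def total_spillover_index(table: pd.DataFrame) -> float:
    """TSI: off-diagonal share of total forecast error variance, in %."""
    vals = table.values
    return (vals.sum() - np.trace(vals)) / vals.sum() * 100
```

A rolling version would simply re-run spillover_table on successive 200-day windows, which is how a time-varying TSI like the one reported below can be produced.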
The GARCH effects (β) across the 24 sectors in Vietnam indicate each sector's susceptibility to the previous periods' volatility. Development Investment is the sector most severely affected by its volatility from prior periods, as recorded with the highest β of 0.966, followed by Education (β of 0.928) and Securities (β of 0.914). In contrast, the volatility of Construction is least affected by the previous periods' volatility (the lowest β of 0.719). Furthermore, the absolute value of (α + β) shown in the last row of Table 1 and Table 2 indicates the speed of each sector's mean-reversion process. We note that sectors with an absolute value of (α + β) smaller than 1 exhibit mean reversion. Additionally, the lower the absolute value of (α + β) is for a sector, the faster that sector's mean reversion. As such, our findings from Table 1 indicate that 23 Vietnamese sectors, all except Development Investment, exhibit a slow mean-reversion process. Meanwhile, the Development Investment sector exhibits no mean-reversion process because its absolute value of (α + β) is greater than 1 (α + β = 1.0013). Securities experienced the slowest mean reversion (with the highest sum (α + β) of 0.9989). In contrast, Pharmaceutical has the fastest mean reversion, with the lowest absolute value of (α + β) of 0.925. Besides, all 24 sectors recorded persistent volatility because the absolute value of (α + β) is approximately equal to 1. We then employ the same ARMA-GARCH estimation for the sub-period of 2020-2021 to examine the Vietnamese stock market volatility patterns during the Covid-19 pandemic. As presented in Table 3, our results confirm that the ARCH and GARCH effects across the 24 Vietnamese sectors remain significant and persistent (at a 1 per cent significance level). Moreover, the market volatility is also confirmed to be substantially affected by the shocks from the previous periods during Covid-19. Figure 3 presents the market volatility patterns of the 24 Vietnamese sectors from 2012 to 2021. We concentrate on each sector's pattern over the period and the volatility changes across the 24 sectors during the Covid-19 pandemic. The volatility patterns indicate that Aviation, Construction, Mineral, Oil & Gas, Securities, and Services are the most volatile sectors in Vietnam over the entire period. In contrast, the market volatility patterns of the 18 remaining sectors are relatively stable. Interestingly, the Covid-19 pandemic did not severely impact the 24 Vietnamese sectors at the beginning of 2020, because no significant volatility spikes are found during this period. The first wave of the Covid-19 pandemic occurs around observation 2,700 on the time axis of Figure 3, which corresponds to the beginning of 2020. In general, several sectors recorded an increase in market volatility. However, this increase is insignificant and even lower than the previous period's volatility spikes. Specifically, Figure 3 shows that Securities and Services exhibit the highest volatility of approximately 0.25 per cent per day during the Covid-19 pandemic. Meanwhile, other sectors show only a slight increase in their volatility during the pandemic. The reason behind this abnormal market behaviour can be explained by the nature of the Vietnamese stock market. Vietnam's stock market is classified as a frontier market, which implies that the market is small, hard to access, and risky.
As such, the whole stock market constantly witnesses high and persistent volatility, making investors familiar with shocks and accustomed to investing during highly volatile periods. Therefore, the impact of the Covid-19 pandemic shock on the 24 Vietnamese sectors is less severe, resulting in only a slight increase in market volatility compared to other markets such as the US or Australian stock markets. The spillover effects across 24 Vietnamese sectors during the Covid-19 pandemic The spillover effects across the 24 sectors in Vietnam over the entire ten-year research period of 2012-2021, estimated using the vector autoregression (VAR) model, are examined and presented in this section. Table 3 presents the connectedness/spillover network among the 24 Vietnamese sectors. The total spillover index (TSI) indicates the average spillover effects across all Vietnamese sectors. The TSI of all 24 Vietnamese sectors is 76.4 per cent over the entire period, implying that the spillover effects across Vietnamese sectors are extreme and the inter-sector connectedness is substantial. This finding indicates that market risk, in terms of volatility, is likely to spread rapidly among all Vietnamese sectors. Furthermore, the results for the net spillover effect of each sector show that Food and Building Materials play an essential role as the most significant risk transmitters in the Vietnamese stock market because of their largest "NET" values of around 100 per cent in the volatility transmission network. Therefore, these two sectors are considered significant sources of risk transmission, implying that whenever significant market volatility emerges in these sectors, the volatility will spread to other sectors quickly. In contrast, Real Estate and Securities are the two most significant risk absorbers (with the lowest "NET" values of −32.58 per cent and −30.47 per cent, respectively). Knowing each sector's characteristics can help policymakers avoid market failure or mitigate the negative impacts on the financial market created by major shocks. Figure 4 shows how the total spillover index (TSI) trends over 2012-2021. The total connectedness index among the 24 sectors in Vietnam is consistently high, always above 60 per cent. However, the TSI increased to around 90 per cent after the first case of Covid-19 was recorded in Vietnam on 23 January 2020, indicating extremely high market volatility spillover effects across the 24 sectors during the pandemic period. Figure 5 presents the changes in the spillover effects of each Vietnamese sector over the entire research period. Additionally, we use a vertical red line to mark the first recorded Covid-19 case in Vietnam on 23 January 2020 in order to compare the spillover effects pre-pandemic and during the pandemic. Based on the patterns illustrated in Figure 5, Aviation, Construction Investment, Pharmaceutical, Real Estate, Rubber, Securities, and Technology are the significant risk absorbers over the entire period. This finding is best supported by their persistently negative spillover trends over the whole period. In contrast, Aquaculture, Building Materials, Food, and Plastic are the main risk transmitters, with a consistently positive spillover trend over the 2012-2021 research period. We now analyze the impacts of the Covid-19 pandemic on the spillover effects across the 24 sectors in Vietnam. The trends pre-pandemic and during the pandemic are compared.
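The "NET" values discussed above follow mechanically from the spillover table: a sector's "TO" is the off-diagonal column sum (variance it transmits to others), its "FROM" is the off-diagonal row sum (variance it receives), and NET = TO − FROM. A short sketch, building on the hypothetical spillover_table helper from the earlier sketch:

```python
import numpy as np
import pandas as pd

def directional_spillovers(table: pd.DataFrame) -> pd.DataFrame:
    """TO, FROM and NET directional spillovers from a row-normalized table.

    Positive NET marks risk transmitters (e.g. Food, Building Materials);
    negative NET marks risk absorbers (e.g. Real Estate, Securities).
    """
    vals = table.values
    from_others = vals.sum(axis=1) - np.diag(vals)   # received from others
    to_others = vals.sum(axis=0) - np.diag(vals)     # transmitted to others
    return pd.DataFrame(
        {"TO": to_others, "FROM": from_others, "NET": to_others - from_others},
        index=table.index,
    )
```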
The pandemic is found to create a pronounced increase in the spillover effects across the 24 Vietnamese sectors. This finding is supported by the rise of the TSI to approximately 90 per cent, as presented in Figure 4. Furthermore, the pandemic has caused changes in the spillover network of many sectors. Figure 6 illustrates the role of the 24 Vietnamese sectors in the spillover network and compares the differences between the pre-pandemic and pandemic periods. Building Materials, Food, Aquaculture, and Plastic are always the four most significant risk transmitters both pre-pandemic and during the pandemic, consistent with the findings in the spillover patterns presented in Figure 5. On the other hand, Oil & Gas switched from being a risk transmitter to being a risk absorber after the emergence of the pandemic. Interestingly, all 19 remaining sectors are consistently risk absorbers from 2012 to 2020. As such, we can conclude that across the 24 Vietnamese sectors, the four most significant risk transmitters consistently spread market risk, in terms of volatility, to the other twenty sectors, especially during the Covid-19 period when the spillover effects are amplified. A robustness analysis This section conducts a robustness analysis concerning the market volatility spillovers across the 24 Vietnamese sectors using various combinations of rolling windows, forecast horizons and lag lengths. Our analysis uses a rolling window of 200 days, a forecast horizon of 12 days, and the optimal lag of 3 days as inputs to the VAR model to examine the sectoral spillover effects. However, different combinations of these parameters/inputs may result in different patterns. As such, we conduct the robustness test using different combinations of rolling windows (250, 500, and 750 days), forecast horizons (5, 10, and 15 days), and lag lengths (1 to 5 days). Figure 7 illustrates the robustness analysis results; the red line marks 23 January 2020, when the first Covid-19 cases were recorded in Vietnam. Our robustness analysis indicates that the trends and magnitudes of the market volatility spillover across the 24 sectors in Vietnam, represented by the total spillover index (TSI), largely remain unchanged regardless of the combinations of different rolling windows, forecast horizons, and lag lengths. Concluding remarks The market volatility patterns, characteristics, and spillover effects among stock markets have been extensively investigated. However, the volatility spillovers among various sectors of a country have largely been ignored in the existing literature, in particular for an emerging market such as Vietnam. As such, our study examines market volatility across Vietnamese sectors during the 2012-2021 period, including the Covid-19 pandemic. Our analysis uses the ARMA-GARCH estimation technique to estimate market volatility and the vector autoregression (VAR) technique to examine the spillover effects between these sectors. Our empirical results indicate that all 24 Vietnamese sectors are substantially affected by the shocks and volatilities from previous periods. Development Investment, followed by Education and Securities, are the sectors most severely affected by market volatility from prior periods. In contrast, Construction is the sector least affected by market volatility from previous periods. Also, Development Investment is found to exhibit no mean reversion.
Pharmaceutical exhibits the fastest mean reversion, whereas Securities shows the most prolonged mean-reversion process. Furthermore, we found that Aviation, Construction, Mineral, Oil & Gas, Securities, and Services are the most volatile sectors in Vietnam over the entire research period. Noticeably, the Vietnamese sectors were not severely impacted by the Covid-19 pandemic, since the volatility spikes are not significant compared to previous periods. In addition, our findings indicate that the inter-sector connectedness across the 24 sectors in Vietnam is consistently high, remaining above 60 per cent over 2012-2021. This finding means that Vietnamese stock market risks will likely spread rapidly across all sectors whenever a major shock emerges. Substantial spillover effects among all sectors were found after the first case of Covid-19 was recorded in Vietnam on 23 January 2020. Additionally, we found that Aviation, Construction Investment, and Pharmaceutical are the most significant risk absorbers. Meanwhile, Aquaculture and Building Materials are the main risk transmitters during the research period. Our study has limitations. Future studies may need to identify the mechanism by which volatility spillovers occur across the 24 Vietnamese sectors. Understanding this mechanism is very important for policy implications, enabling the Vietnamese government and the governments of other emerging markets to formulate and implement appropriate responses to market shocks from future events. In addition,
Effective Robotics Education: Surveying Experiences of Master Program Students in Introduction to Robotics Course The technology-driven world poses new challenges for the modern education system. To prepare skilled specialists for academic and industrial needs, it is important to create a competitive educational ground. Our team works on developing and implementing a world-class master program in Intelligent Robotics. To pave the way for a high-quality educational program, we invest efforts into studying students' attitudes and motivation for connecting their professional life with robotics. In this paper we describe the curriculum of the master program that was designed and implemented at the Higher Institute of Information Technology and Information Systems at Kazan Federal University, and present the results of our continuous research: a comparative analysis of surveys among students of the Introduction to Robotics course. Introduction The current state of higher education in robotics in Russia is at a stage of rapid development, and thus both the scientific and the industrial communities are experiencing a skilled labor shortage. There is a long and growing list of industries that continue to automate their manufacturing processes and introduce robots to make their production more effective, yet companies still perceive an imbalance between the qualifications of recent graduates and the requirements of industry. This problem raises new challenges for the modern educational system. In 2017, before opening a new master program track in Intelligent Robotics at the Higher Institute of Information Technology and Information Systems (ITIS) at Kazan Federal University (KFU), we started our research on robotics engineering education by conducting surveys among bachelor and master students. The surveys targeted bachelor students of the Applied Informatics program, who had selected robotics courses as electives, and master students of the Software Engineering program, who had a compulsory course in robotics research [1]. We collected and analysed data from bachelor and master students of different robotics courses through a set of surveys [2], and the analysis of student responses helped determine methodological directions for implementing new teaching methods throughout all courses of the Intelligent Robotics program. We explored existing robotics and mechatronics educational programs in Russia, and unfortunately the majority of them currently lag behind similar programs in developed countries [1,3,4]. Realizing the consequences of such marginalization, the Russian government has recently been focusing on the innovation-driven growth of the country. The Russian Federation Government enacted Decree № 2227-r on the Strategy for Innovative Development of the Russian Federation until 2020 [5]. According to this document, besides improving the quality of engineering education, it is necessary to focus on developing students' personal attitude to life and to facilitate such qualities as flexibility, mobility, lifelong learning capabilities, propensity for entrepreneurship, and risk acceptance. The key priority for our team is to create and implement an internationally acceptable top-level master degree program, which will fill the gaps in engineering education in the field of intelligent robotics. The proposed program is intended to provide students with both knowledge in robotics research and the ability to perform applied projects independently and in teams with other students.
While creating a new educational program, we were aware of the fact that robotics is a relatively new and rapidly evolving field of science and education, which could give us more freedom and flexibility in designing the curriculum, unlike traditional fields such as applied mathematics or physics [6]. On the other hand, we had to consider students' expectations of the program and their attitude to the robotics field. This required conducting a series of detailed surveys among students who intend to study robotics, so that we could outline their motivation and level of interest in relating their future career to the robotics field. In our previous work we published the results of surveys among Spring-2017 bachelor students of the Introduction to Robotics course, conducted before opening the master degree Intelligent Robotics program, MSc-IR [2]. Continuing our research, this paper introduces the results of the survey analysis of the first-year students of the master degree program, which was conducted during the first semester of Fall-2017. 2 Engineering education at ITIS KFU 2.1 Robotics curriculum at ITIS KFU The current robotics curriculum for master students at ITIS is a pilot educational track within the framework of the existing Software Engineering master program, which is expected to be extended into a full master program according to the Federal State Education Standard, with the assignment of the qualification of Master in Robotics and Mechatronics. During the summer admission campaign we focused on accepting primarily prospective students holding bachelor's or specialist's degrees in engineering, physics, computer science and IT. Students were supposed to have a background in programming, or they were expected to learn programming basics independently within the first month of their classes. The curriculum covers two years of study and consists of twelve special subjects that are taught in English. Within Fall-2017 (the first semester of the program) we delivered three core robotics MSc-IR courses: Introduction to Robotics, Robot Operating System and Computer Vision. Teaching courses in English allows students to become acquainted with the international academic terminology of the robotics field and to keep pace with their foreign peers. The program is premised on the involvement of mathematicians, physicists and computer science specialists from KFU, as well as invited Russian and foreign professors. Features of the proposed program are: -Full-time education; -Robotics courses are taught in English; -Most of the courses include intensive teamwork; -Home assignments require using not only various simulators but also robotic hardware, ranging from LEGO robotic kits in the first semester to Robotis OP-3 humanoids in the third semester, for practical evaluation of the gained theoretical knowledge; -Opportunity to carry out research with real robots in hands-on laboratories under tutor supervision. The key priority of the track is creating and implementing globally acceptable robotics education. It is expected that, besides the lectures and practical courses, during the program students will carry out their projects within research laboratories.
2.2 Research projects integration into the educational process The shortage of engineering staff in Russia is a significant barrier to broadening the variety of educational programs and particular engineering and computer science courses in Russian academia. One of the reasons for the lack of human resources in robotics is the conservatism and inflexibility of the higher education community, and its inability to meet the challenges posed by a rapidly changing world. In 2016 the Institute for Statistical Studies and Economics of Knowledge (ISSEK) of the National Research University Higher School of Economics (HSE) carried out a survey among robotics companies. The study revealed that for all engineering staff categories (researchers, engineers, and technicians) there is an imbalance between the qualifications of recent graduates and the requirements of industry. Besides the problem of qualification, the HSE ISSEK study emphasized that the robotics industry experiences a lack of specialists: 61% of the participating companies indicated a need for professional human resources, and none of the companies had an excess of engineering staff [7]. To some extent this problem is a consequence of deficiencies in higher education. Nevertheless, the overall picture of engineering education in Russia, and particularly in robotics, demonstrates some improvements, including: -In the last decade, the activity of Russian schoolchildren in robotic creativity has significantly increased [8]; -Some Russian universities have started to drive growth in the robotics field and collaborate with national and foreign colleagues [1,9]; -The government has started supporting interest in technical innovation and entrepreneurship among school, college and university students; -Emerging robotics laboratories provide students with the opportunity to carry out their research projects. Research, when integrated into the educational process, provides the learning process with the opportunity to engage students in carrying out self-guided study and practicing their theoretical knowledge. Considering the fact that robotics is a truly multidisciplinary field, such an approach allows students to choose their particular direction of interest within the broad and fascinating robotics fields [10,11]. Currently, in the Laboratory of Intelligent Robotic Systems, students of the robotics master program undertake robotics research projects in the fields of aerial swarms, mobile robotics, urban search and rescue robotics, path planning, simultaneous localization and mapping, humanoid robot locomotion, manipulation, robotic surgery and others. Matlab and Gazebo environments are used for algorithm prototyping and simulation, while experimental work is performed with crawler and wheeled mobile robots (e.g., Servosila Engineer [12], Unior [13], PAL Robotics PMB-2 [14]), DJI Phantom drones, humanoids (e.g., AR-601 [15], Robotis OP-2 and OP-3 [16]), KUKA manipulators and other robots. In addition to experimental work, we encourage our students to present their results through scientific publications and provide them with supervision in preparing their research papers. We continue to implement interactive teaching and learning strategies in the classes and to integrate robotics research into the educational process, because such an approach prepares students for carrying out serious applied robotics projects and gives them a chance to be well prepared for either an academic or an industrial career as researchers, developers and team leads.
Curriculum of the Introduction to Robotics course Introduction to Robotics is a core course of the MSc-IR program, taught in the first semester. This course is intended to provide students with an understanding of the basic robotics concepts and principles and to introduce the recent applications and prospects of this field; the main idea behind this course curriculum is to spark student interest in robotics and to help them select a research topic for their master diploma. The course includes the following topics: -Robotics in Russia and abroad introduces the history of robotics, explains the role of robotics in modern society and demonstrates various attractive examples of practical applications of advanced robotic technologies around the world. Special attention is paid to robotics R&D in Russia, so that for each foreign application example a local equivalent is demonstrated, whether or not it succeeds in outperforming the foreign application. -Introduction to industrial robotics familiarizes students with types and particular industrial applications of robotic manipulators. -Linear algebra and coordinate systems is a brief review of linear algebra topics concentrating mainly on matrix operations, while coordinate systems include different representations of translation and rotation operations, homogeneous coordinates, Euler angles, quaternions and the gimbal lock problem. -Forward and inverse kinematics of manipulators covers the kinematic principles of robotic arms and teaches students to solve forward and inverse kinematics problems with analytic and numeric approaches. Students familiarize themselves with degrees of freedom, workspace, properties of robots, Jacobians, velocities and static forces, manipulator singularities, etc. -Trajectory generation introduces path description and generation, joint-space and Cartesian-space schemes, geometric problems with Cartesian paths and path generation at run time. -Manipulator-mechanism design discusses how manipulator design depends on task requirements, kinematic configuration and actuation schemes, and position and force sensing. In addition, a set of introductory lectures covers a number of selected topics such as military and defense robotics, roboethics [17], search and rescue robotics [18], space robotics and path planning [19]. The main course book for theoretical approaches and problem solving is the classic book by J. J. Craig [20], while for simulations in Matlab with the Robotics Toolbox we use Corke's book [21]. In addition to pen-and-paper home assignments and coding in Matlab, students give a number of presentations, including interactive demonstrations of their Robotics Toolbox manipulators [21], and complete a final project. LEGO Mindstorms EV3 robotic kits and the LeJOS programming environment are utilized in order to practice theoretical knowledge on hardware, and for the majority of participating students this is the first experience in their life of facing engineering problems and gaining insight into the drastic difference between pure software coding and dealing with the hardware issues of a robot. In order to encourage active learning, students are asked to discuss particular questions in small groups in the class in real time and then present their joint solutions to all course participants.
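As a small worked example of the kinematics material listed above, the sketch below chains homogeneous transformation matrices to solve the forward kinematics of a planar two-link arm. It is an illustrative Python sketch rather than the course's Matlab/Robotics Toolbox material; the link lengths and joint angles are arbitrary.

```python
import numpy as np

def joint_link_transform(theta: float, length: float) -> np.ndarray:
    """Planar homogeneous transform: rotate by the joint angle theta,
    then translate by the link length along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, length * c],
                     [s,  c, length * s],
                     [0,  0, 1.0]])

def forward_kinematics(thetas, lengths):
    """End-effector pose of a planar serial arm: chain one transform per joint."""
    T = np.eye(3)
    for theta, length in zip(thetas, lengths):
        T = T @ joint_link_transform(theta, length)
    x, y = T[0, 2], T[1, 2]
    phi = np.arctan2(T[1, 0], T[0, 0])   # accumulated orientation
    return x, y, phi

# Two-link arm, both joints at 45 degrees, link lengths 1.0 and 0.8:
# x = cos(45) + 0.8*cos(90) ~= 0.707, y = sin(45) + 0.8*sin(90) ~= 1.507
print(forward_kinematics([np.pi / 4, np.pi / 4], [1.0, 0.8]))
```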
Research method This section presents the analysis of the Introduction to Robotics course surveys. Students studied the course for 3 hours per week during 18 weeks in the 1st semester. Two surveys were conducted: an initial survey took place at the beginning of the classes, just after the first lecture, and a final survey was run at the end of the course, just before the final test. The target group consisted of 11 students who were successfully enrolled in the master program and participated in the course; however, unfortunately, not all students volunteered to respond to each of the surveys. 9 students responded to the initial survey and 10 students responded to the final survey. To consistently observe the dynamics of the students' progress, we selected the responses of the 9 students who participated in both surveys. However, as we target the evaluation of students with only a technical background, we excluded a student with a BA in Public Policy, thus decreasing the number of respondents to 8. Among the selected respondents, 5 had a BSc in Applied Informatics, and the others had BScs in Physics, Information Security and Gas-Turbine Engineering (Fig. 1). We applied the same research methodology that we had utilized previously [1,2] and provided students with questions related to English language comprehension, self-efficiency, active learning strategies, the significance of studying robotics, and the stimulation of the learning environment. This time, two questionnaires were developed for the two surveys, and each one consisted of two parts. Both surveys were conducted in the Russian language to guarantee that all respondents fully understood each and every question and their subtle differences. The initial survey contained 48 questions, where the first part contained questions basically related to the students' background, knowledge received before starting the class, etc. The second part contained questions related to the students' expectations of the course. In the final survey we posed 42 questions, where the first part consisted of the same questions presented in the initial survey (i.e., the identical questions) and the second part was related to the experience gained by the end of the course. The survey targeted students' background, English language, and motivation to study robotics. The questionnaires were provided online via Google Forms in the following way: each question appeared on a separate page, a new question became available only after submission of the previous question's reply, and, moreover, there was no opportunity to return to previously answered questions. The questions were divided into statements, open-ended questions and multiple-choice questions. Each statement presented a 5-point scale with the optional answers -(1) SD, Strongly Disagree; (2) D, Disagree; (3) NO, No opinion; (4) A, Agree; (5) SA, Strongly Agree -which appear along the X-axis in Fig. 2-6. The Y-axis of Fig. 2-6 indicates the percentage of the respondents who selected the corresponding options. Analysis In this section we compare the students' expectations at the beginning of the Introduction to Robotics course against their experience after they participated in classes for one semester (and just before taking the final test of the course). We analyzed English language comprehension, self-efficiency, active learning strategies, motivation to study robotics, the stimulating learning environment, and opinions about the course.
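The percentage distributions reported in the following subsections (the Y-axis values of Fig. 2-6) come from a straightforward tabulation of the 8 respondents' answers on the 5-point scale. A minimal Python sketch, with hypothetical answers rather than the actual survey data:

```python
import pandas as pd

SCALE = ["SD", "D", "NO", "A", "SA"]   # the 5-point scale defined above

# Hypothetical answers of the 8 selected respondents to one statement
answers = pd.Series(["SA", "A", "A", "NO", "NO", "A", "SA", "D"])

# Share of respondents per option, in per cent
dist = answers.value_counts(normalize=True).reindex(SCALE, fill_value=0.0) * 100
print(dist)   # SD 0.0, D 12.5, NO 25.0, A 37.5, SA 25.0
```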
English language comprehension Overall, the English language comprehension dynamics were positive: by the end of the course 62.5% (SA-25%, A-37.5%) of the respondents confirmed their confidence in speaking English in the class (Fig. 2), while 12.5% of students did not feel confident (a decrease of 12.5%) and 25% had no opinion (an increase of 12.5%). Such results demonstrate that the majority of respondents felt confident speaking English during the course by the end of the semester, which shows that the teaching methods and class activities did not discourage students from using a foreign language in the class. Additionally, there was a decrease in students' worries about speaking English with Russian native speakers during in-class discussions (Fig. 3). However, even though 62.5% of students had taken classes in English before starting the course, there were significant negative dynamics on the item "I am not scared when I do not understand what the teacher says in English, because I can ask to explain it in Russian". This might be caused by the complexity of the robotics material or by a language barrier in English listening comprehension. The fact that they had taken classes in English before does not necessarily mean that those were science-related classes. We assume that the tendency could become positive in the next semester, after the students adapt to listening in English (Fig. 4). Even though the students worried about English language comprehension of the material, their motivation to study English increased by 12.5% (Fig. 5) by the end of the course, and 87.5% in total were motivated to continue studying English. Self-efficiency In general, the students' expectations towards the difficulty level of the robotics field were met. Moreover, by the end of the course a small number of students realized that robotics is an even more difficult subject than they had expected at the start (Fig. 6). This might provide an additional explanation of the students' tendency to worry about their understanding of the material in English in the class (demonstrated by Fig. 4). Nonetheless, by the end of the course 75% of the respondent group were sure that they could understand difficult robotics terms from the lectures (Fig. 7). These responses hint that the complexity of the robotics material was not the main reason for the negative effect on material comprehension during the class (Fig. 4). Thus, this strengthens the assumption about the language barrier (i.e., listening to the course in a foreign language) as the reason for this negative effect. While 87.5% of respondents were sure that they could successfully pass the final test (Fig. 8), by the end of the course 100% of the students responded that they could learn robotics if they put in enough effort (Fig. 9), even though 87.5% of them had never taken any robotics-related classes before. Strong self-efficiency was observed in the final survey: 87.5% of the respondents were ready to apply effort in order to learn the course material even if the content was difficult (Fig. 10). Active learning strategies This subsection presents observations of the active learning strategies used by the students. By the end of the course 100% of the students made efforts to understand the material and tried to connect new material with the knowledge they had possessed before taking the course (Fig.
11), thus demonstrating their motivation towards the study of robotics. Further, we noticed a positive tendency among students to look for additional sources in order to improve material comprehension, as by the end of the course only 12.5% of the respondents had no opinion on this item (Fig. 12), and 12.5% had no opinion about discussing unclear material with classmates or asking the teacher for additional explanations, while 87.5% (SA-62.5%, A-25%) desired to use these additional resources; unfortunately, the number of positive responses for this item decreased, whereas it had been 100% at the beginning of the course. The passive no-opinion position on both items could also be related to students' self-confidence in having enough knowledge and/or abilities to process the material independently, which may be implicitly connected to the students' self-confidence in learning complicated material (Fig. 9). Motivation to study robotics The survey demonstrated that 87.5% of the students had never participated in robotics-related classes prior to enrolment in the MSc-IR program. The question of whether a student believes he/she will use the knowledge gained in the course for their future job scored 100% positive responses both before and after the course, while by the end of the course there was a 12.5% increase in SA responses. Moreover, the same tendency was obtained for the question of whether studying robotics is important because it helps to stimulate the student's thinking (with a slight difference in SA and A responses, Fig. 13). In comparison with the initial survey, the final survey showed that more students realized the importance of solving problems while implementing robotics projects (Fig. 14). Stimulating learning environment Almost all students explained their participation in the course by their interest in the course content. By the end of the course 62.5% of the respondents strongly agreed that the mental effort involved in studying and preparing for classes served as an additional motivation to participate in the classes, while 12.5% changed their opinion to a disagree option (Fig. 15). To double-check whether teaching in English succeeds in stimulating the learning environment, the survey included a number of similar questions. One such question verified whether a student wanted to participate in the class because the course is conducted in English, and the respondents' opinions were positively shifted (SA and A in total summed up to 75%, Fig. 16); this correlates well with the previously demonstrated tendency of motivation to study English (Fig. 5). There were also minor positive changes in the students' evaluation of the novelty of the applied teaching methods with regard to their prior expectations (Fig. 17), as well as in the respondents' evaluation of the interesting content of the robotics classes and their level of complexity (Fig. 18, 87.5% of SA and A responses). Moreover, the question of whether the students would still select the course if it were an elective (the course is a core obligatory one within the MSc-IR program) scored 100% strongly agree responses, which was indirectly confirmed by the students' high rate of attendance at lectures, even though attendance was optional. Open-ended and multiple-choice questions In addition to the Likert-scale statements, the students were asked to respond to open-ended and multiple-choice questions. One of the multiple-choice questions demonstrated that, surprisingly, the students preferred theoretical content and practical tasks to interactive tasks and presenting their own ideas (Fig.
19). As teachers, we are interested in facilitating the students to produce, present and discuss their own novel ideas, as well as to actively interact with their team members. Therefore, we plan to emphasize the acquisition of these skills through multiple practical tasks within this course, as well as in other courses of the MSc-IR program in the future. With the help of the open-ended questions, we learned that the students liked such methods as the individual approach, practical tasks with Lego EV3 robots, the opportunity to listen to the course material in both English and Russian, practicing presentation skills in front of the class, and receiving feedback from the teacher. Conclusion and Future work To prepare skilled specialists for academia and industry, it is important to create an educational ground that is competitive with other leading universities on a world scale. Our team is developing a world-class master program in Intelligent Robotics at Kazan Federal University, taught in English. To guarantee high-quality education, we invest essential effort into monitoring students' attitudes and motivation to relate their professional life to the robotics field, and into collecting feedback about the program courses from students in order to continuously improve the program. In this paper we presented the results of our continuous comparative analysis of surveys among master students of the Intelligent Robotics program. The survey of the Introduction to Robotics course was run twice: the initial survey took place after the first lecture and the final survey took place before the final test. The survey demonstrated that teaching in English encouraged the students to improve their language skills and, while this definitely demanded significant effort to overcome the language barrier and to master complicated robotics material, the students enjoyed the course and could understand difficult robotics topics in the class. By the end of the course the students' self-efficiency and self-confidence improved, while they also realized the complexity of the robotics field and of the master program curriculum. Furthermore, the students expressed increased motivation for mastering robotics and applied active learning strategies in studying the course. Yet, surprisingly, the students preferred the theoretical content and practical tasks of the course to interactive tasks and presenting their own ideas. As part of our ongoing work, we are analysing the data collected through surveys in other courses. It will be interesting to further explore the changes in students' responses and motivation after all of them gain enough experience of working with sensors and complicated robots within their personal research topics for the graduation thesis.
Figure 2. I do not worry about making mistakes while speaking English.
Figure 3. I do not feel nervous when speaking English with Russian native speakers.
Figure 4. I am not scared if I do not understand English because I can ask to explain in Russian.
Figure 5. After participating in the course I have more motivation to study English.
Figure 6. I am sure I can understand the content of the class despite the complexity of the material.
Figure 7. I am not sure if I can understand difficult robotics terms during the class.
Figure 8. I am sure I can successfully pass the final test.
Figure 9. I cannot study robotics no matter how much effort I put in.
Figure 10. If a lecture is too difficult, I prefer not to learn this material.
Figure 11. When I study new (robotics) material, I try to connect it with my previous experience.
Figure 12. When I do not understand new (robotics) material, I try to find additional information in order to understand it.
Figure 13. I think that studying robotics is important because it stimulates my thinking.
Figure 14. I think that (in robotics) the most important thing is to solve the problems which I face during the projects.
Figure 15. I want to participate in the classes because studying and preparing for classes require mental effort.
Figure 16. I want to participate in the class because lectures are conducted in English.
Figure 17. Teaching methods are new for me.
Figure 18. I believe that the robotics course (content) is interesting and has an appropriate level of complexity.
Figure 19. What students found interesting during classes.
Non-systemic fungal endophytes in Carex brevicollis may influence the toxicity of the sedge to livestock The sedge Carex brevicollis is a common component of semi-natural grasslands and forests in the temperate mountains of Central and Southern Europe. The consumption of this species causes severe toxicity to livestock, associated with high plant concentrations of the β-carbolic alkaloid brevicolline. This research was started to ascertain the origin of this toxicity. An exploratory survey of alkaloid content in plants growing in contrasting habitats (grasslands/forests) did not reveal a pattern in the variable contents of brevicolline in plants, and led us to address other possibilities, such as a potential role of fungal endophytism. Systemic, vertically transmitted endophytes that produce herbivore-deterrent alkaloids are known to infect many forage grasses. We did not detect systemic endophytes in C. brevicollis, but the sedge harboured a rich community of non-systemic fungi. To test experimentally whether non-systemic endophytes influenced the synthesis of the alkaloid, 24 plants were submitted to a fungicide treatment to remove the fungal assemblage, and the offspring ramets were analysed for alkaloid content. Brevicolline was the major β-carbolic alkaloid detected, and the contents were at least five times lower in the new ramets that developed from fungicide-treated plants than in the untreated plants. This result, although not conclusive about the primary source of the alkaloid (a plant or a fungal product), indicates that fungal endophytes may affect the contents of the toxic brevicolline in this sedge. Additional key words: livestock toxicity; alkaloid; brevicolline; fungal endophyte; plant-endophyte interaction. Abbreviations used: EMBL (European Molecular Biology Laboratory); GC-MS (Gas Chromatography Mass Spectrometry); ITS (Internal Transcribed Spacer); PAR (Photosynthetically Active Radiation); PDA (Potato Dextrose Agar).
Introduction This research was initiated to elucidate the origin of a livestock toxicity caused by the sedge Carex brevicollis (DC) (Fam. Cyperaceae). This perennial plant is a common component of environmentally valuable grasslands and forests in the temperate mountains of Central and Southern Europe. C. brevicollis is highly toxic to mammals, causing abortions in pregnant cows, ewes, and mares that ingest it, which results in severe economic damage to farmers (Ruiz de los Mozos et al., 2008).
Chemical analyses have reported a high content of β-carbolic alkaloids (up to 2% of dry matter), mainly brevicolline and, to a lesser extent, brevicarine, in the stems, leaves and inflorescences of this species (Sharipov et al., 1975; Lazurjevski & Terentjeva, 1976; Busqué et al., 2010). Brevicolline is known to enhance uterine contractions in pregnant mammals, producing an intense oxytocic effect (Yasnetso & Sizov, 1972). In addition, it has shown strong antimicrobial activity against some bacteria and fungi in laboratory studies (Towers & Abramowski, 1983; Cao et al., 2007). The genus Carex encompasses about 1,800 species, many of them forage plants. Unlike other plant genera containing species that exhibit highly poisonous substances (e.g. Ranunculus, Euphorbia, Lilium, Solanum), the existence of toxic alkaloids has not been documented in any other species of Carex, although some secondary chemicals such as proteinase inhibitors and stilbene derivatives have been described in some sedges as an induced response to grazing (Brathen et al., 2004). These previous results and the nature of this particular toxicity, unique to this species, led us to suspect that the synthesis of alkaloids in C. brevicollis might respond to a mechanism of induction activated by a particular exogenous factor. In the area of study this plant occurs in two well-differentiated habitats, forests and grasslands, which differ dramatically in light intensity and in the mammalian grazing pressure exerted on the sedge. These two factors are common inducers of chemical defences in angiosperms (Downum, 1992; Chen, 2008). Light has been shown to mediate the synthesis and activation of β-carbolines, a set of phytochemicals derived from tryptophan, which includes brevicolline (Downum, 1992). Regarding herbivory, the induction of toxic compounds has been studied much more in plant-insect systems (Kessler & Baldwin, 2002; Castells et al., 2005; Chen, 2008; Kaplan et al., 2008) than in plant-mammal systems (Huntzinger et al., 2004; Zinn et al., 2007), although the occurrence of mechanisms of cross-resistance (defences induced by a particular herbivore being effective against other herbivores that consume the same plant) is assumed (Kessler & Baldwin, 2004). In plant-mammal systems, research has particularly addressed the loss of plant digestibility caused by the synthesis of defences against grazing/browsing (such as phenolic or silica-based compounds, Massey et al., 2007), or it has focused on animals rather than plants, analysing the mechanisms developed by mammals to avoid or tolerate toxicity (Torregrossa & Dearing, 2010). Plant-associated fungi are another factor linked to the synthesis of antiherbivore compounds in plants. Some of the best known examples of livestock poisoning by alkaloids involve endophytes, fungi that can asymptomatically infect plants. The symbiosis between systemic endophytes of the genera Epichloë and Neotyphodium and their grass hosts has been well described in the past decades. Grasses infected by these endophytes defend themselves from herbivory through the toxic alkaloids produced by the fungi (Clay & Schardl, 2002; Rodriguez et al., 2009). Similarly, different species of the sedge Cyperus spp. have shown antiherbivore activity and increased growth and survival when infected by systemic Balansia endophytes (Clay et al., 1985; Stovall & Clay, 1988).
Unlike the well-known Epichloë/Neotyphodium species, most other fungal endophytes are not capable of systemic colonization of plant organs, or of seed transmission (Sánchez Márquez et al., 2012). These non-systemic endophytes are extremely diverse taxonomically, and have been found in all plant taxa analysed, including sedges (Ruotsalainen et al., 2002; Rodriguez et al., 2009; Loro et al., 2012). The ecological functions attributed to non-systemic endophytes are very diverse, and unknown for most species (Saikkonen et al., 1998; Rodriguez & Redman, 2008). To date, few studies have specifically addressed the role of non-systemic endophytes in plant defensive mechanisms. The endophytic fungus Undifilum oxytropis, which infects several Astragalus and Oxytropis species, has been shown to produce the toxic alkaloid swainsonine (Cook et al., 2009; Yang et al., 2012), and some symbiotic epiphytic fungi living on plant surfaces have also been linked to plant toxicity caused by ergoline alkaloids in Ipomoea species (Markert et al., 2008). Because of the above background, we first investigated a potential influence of habitat on the production of the alkaloid brevicolline in C. brevicollis plants. Contents of brevicolline in leaves were high, variable among individuals, and did not exhibit the patterns expected between habitats. Therefore, we addressed the study of a potential involvement of fungal endophytes in the synthesis of this powerful alkaloid. Study site and plant sampling This research was done in a mountainous rangeland area, Urbasa (Navarra, Spain), located south of the Western Pyrenees (950 m a.s.l., Fig. 1). The site, included in the Urbasa-Andia Natural Park, receives a temperate climatic influence (mean temperature = 8.4°C; rainfall = 1,275 mm yr⁻¹) and is characterized by a karstic landscape covered by 11,400 ha of grasslands, heathlands, and beech (Fagus sylvatica) forests enclosed in the European Natura 2000 network. This area has been grazed extensively by livestock since the Neolithic period, and nowadays supports the pressure of more than 13,500 sheep, 2,500 cows and 750 horses from May to October each year. Intoxications caused by the consumption of C. brevicollis occur every year in Urbasa, although several farmers have implemented livestock management measures to reduce the risk (Ruiz de los Mozos et al., 2008). C. brevicollis grows almost everywhere in the area, both in closed and open habitats. The former, constituted by beech forests, occupy about 73% of the surface and hold a poor understory due to high tree density and canopy development. The open habitats are constituted by a mosaic of grasslands and heathlands that are intensively grazed during the plant growth season. Compared to open areas, light within the forest understory is dramatically attenuated: in a mid-day measurement before the autumn leaf fall, we observed photosynthetically active radiation (PAR) photon flux attenuations 20 times greater in the forest understory. Regarding grazing, it is almost absent in the forest, whereas open areas support a high livestock pressure during the grazing period, with an average of 2 cows ha⁻¹. By studying plants from such contrasting ecosystems, we intended to gain insight into the nature of the chemical defence of C. brevicollis. Samplings were done at two different locations (Fig. 1), Udau (42°50'N 2°8'W) and Bardoitza (42°48'N 2°4'W), where mosaics of grassland and beech forest habitats occurred.
In autumn, at the end of the grazing period, 40 plants were collected in total, 20 per location, of which 10 grew in grasslands and 10 in the understory of adjacent beech forests. At each location, we selected grassland and forest habitats less than 30 m apart that shared similar physiographical traits (aspect, topography, slope and substrate), but that differed radically in vegetation type, intercepted light and grazing intensity. Most C. brevicollis plants in grasslands were partially defoliated by grazing, whereas forest plants displayed no defoliation signs. Isolation and identification of fungal endophytes A survey of fungal endophytes associated with leaves of C. brevicollis was made with the 40 plants collected. To estimate the amount of endophytic colonisation of the plants, we diagnosed the presence of endophytes in samples of 26 leaf fragments per plant. Samples were obtained by cutting several asymptomatic leaves from each plant transversally into fragments about 5 mm long. The fragments were surface-disinfected by immersion in a solution of 20% domestic bleach (1% active chlorine) for 10 minutes, rinsed in sterile water, and placed in two Petri plates containing potato dextrose agar (PDA) with 200 mg L⁻¹ chloramphenicol. The effectiveness of the surface disinfection method was tested with a sample of several leaf fragments using the tissue print method described by Schulz et al. (1998). The plates containing the leaf samples were incubated in the dark at room temperature (22-26°C) and checked daily for the presence of fungal mycelium emerging from leaf fragments. When this was observed, a sample of the mycelium was transferred to another PDA plate to obtain a culture for later identification, and the infected leaf fragment was withdrawn from the Petri plate, excising the agar around it to avoid fungal colony growth. Three weeks after plating the leaf fragments, we recorded the total number of endophyte-infected leaf fragments from each plant. These data were analysed to compare the amount of endophytic colonisation per plant at both locations and types of habitats. For the identification of endophytes, the fungal isolates obtained from leaf fragments were grouped into morphotypes according to macroscopic characteristics such as colony appearance and colour (Sánchez Márquez et al., 2007). Afterwards, one or more isolates representative of each morphotype were identified using microscopic and molecular characters. Only the morphotypes consisting of more than one isolate were identified this way. The molecular character used was the nucleotide sequence of the ITS1-5.8S rRNA-ITS2 region, which was obtained after amplifying this region by PCR (Sánchez Márquez et al., 2007). Nucleotide sequences were used to find similar matches in the European Molecular Biology Laboratory (EMBL) nucleotide database. To assign taxa to the sequences, the genus and species of the closest database match were accepted when the sequence similarity between the Carex endophyte and the database match was greater than 98%; only the genus was accepted when the similarity was between 95 and 97.9%. When similarities were lower than 95% the isolates were considered unidentified. Such criteria for ITS-based identification of fungi have been found appropriate in other endophyte surveys (Sánchez Márquez et al., 2007).
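The similarity thresholds above amount to a simple decision rule. The following Python sketch illustrates it; the function name and the plain-string match format are hypothetical conveniences, not part of the study's pipeline, which worked with EMBL database matches.

```python
def assign_taxon(closest_match: str, similarity: float) -> str:
    """Apply the ITS-based identification rule described in the text.

    closest_match -- 'Genus species' of the best database hit
                     (hypothetical plain-string format)
    similarity    -- sequence similarity as a fraction, e.g. 0.971
    """
    genus = closest_match.split()[0]
    if similarity > 0.98:
        # >98%: accept genus and species of the closest match
        return closest_match
    elif similarity >= 0.95:
        # 95-97.9%: accept the genus only
        return f"{genus} sp."
    else:
        # <95%: isolate remains unidentified
        return "unidentified"

# Hypothetical isolates and their best-hit similarities
for match, sim in [("Biscogniauxia nummularia", 0.991),
                   ("Epichloe typhina", 0.962),
                   ("Fagus sylvatica", 0.912)]:
    print(match, sim, "->", assign_taxon(match, sim))
```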
Brevicolline alkaloid determination In the laboratory we separated several ramets from each of the 40 plants sampled, and used their leaves for alkaloid extraction. Several extraction procedures were tested before choosing the one proposed by Zayed & Wink (2005). One gram of ground plant tissue was treated with 25 mL 1 M HCl overnight at room temperature. Then, the solution was filtered and alkalinised to pH 12 with 6 M NaOH. The alkaloids were extracted by washing three times with 30 mL dichloromethane, and filtered through MgSO₄ to eliminate water traces. Collected samples were vacuum evaporated and kept at 4°C. Prior to GC-MS, alkaloids were separated in a Factor Four column (VF-5ms, 30 m × 0.25 mm, DF = 0.25 μm). C. brevicollis alkaloids are unusual and particular to this plant, so commercial standards are not available. For this reason, nicotine was used for the GC-MS analysis, since it is a common alkaloid from which brevicolline and similar β-carbolic structures can be obtained (Wagner & Comins, 2006). Brevicolline was identified and estimated quantitatively using two different complementary GC-MS techniques: electron impact ionization, which gives the standard fragmentation of the molecules, and chemical ionization, which determines its molecular weight and estimates the number of molecules present. Experimental fungicide treatment of Carex brevicollis To check whether the presence of fungal endophytes affected the alkaloid content in host plants, we designed an experiment consisting of eliminating the endophytic mycobiota of field-sampled plants using a systemic fungicide, and then comparing the alkaloid content of the new ramets produced by these plants with that of the corresponding ramets of untreated plants and of the mother plants sampled in the field. Twelve apparently healthy plants of C. brevicollis collected in three different grasslands in Urbasa (Tximista, Udau and Bardoitza) were planted in pots containing an organic soil mixture, and maintained for several weeks in the greenhouse with a constant watering regime. After this period, we separated three ramets from each mother plant. The mother plants were harvested and stored at -20°C for further analyses of alkaloid contents, while the new ramets were transplanted to new pots in the greenhouse. One ramet was kept as a control and the remaining two were treated with the systemic fungicide propiconazole (Oid-Zol®, Tragusa), which inhibits ergosterol synthesis. Ergosterol is critical for the formation of fungal cell membranes, and its absence prevents fungal growth and further invasion of host tissues. The treatment consisted of three doses of 800 μg of propiconazole per plant, spaced 10 days apart. The first and third doses were applied to the soil, due to the upward systemic movement of propiconazole from the roots to the foliage, and the second to the leaves (Zabalgogeazcoa et al., 2006). Treated and untreated plants were then allowed to grow for 50 days in the greenhouse until new ramets developed. The new ramets were collected and stored at -20°C until the alkaloid content was determined with the same analytical protocol previously described. The reason for analysing new ramets produced by these plants was to avoid possible fungicide effects upon alkaloid synthesis. As a whole, 48 samples were collected for alkaloid analyses (12 mother plants, 24 ramets developed from fungicide-treated plants, and 12 ramets developed from untreated control plants). Statistical analyses Prior to the use of parametric statistics, data were checked for normality and homogeneity of covariances, and transformed when necessary into logarithmic variables.
We performed an ANOVA with habitat (grassland/forest understory) as a fixed factor and location (Udau, Bardoitza) as a blocking factor, in order to discern whether contents of brevicolline differed significantly between habitats. For the study of endophytic colonisation, we performed multifactorial ANOVAs where habitat was a fixed factor, location a blocking factor, and the response variables were the number of endophyte-infected fragments per plant and the number of Biscogniauxia nummularia isolates. Data from the fungicide experiment were analysed using a linear mixed model with the following factors: treatment (with three levels: mother, fumigated and unfumigated plants), origin of the plants (12 original plants) and grassland (three different grasslands, Bardoitza, Tximista and Udau). Treatment was considered a fixed factor and origin of the plant, nested within grassland, a random factor. In the case of the fumigated ramets (24), the data used were the means obtained from the two fumigated ramets with the same ancestor (12). In order to choose the covariance structure, likelihood ratio tests were performed to compare different models. The most parsimonious model was fitted with a scaled identity covariance structure, in which the elements are not correlated and have a constant variance. All statistical procedures were carried out using the IBM SPSS statistics package.
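As a rough illustration of the habitat-by-location ANOVA described above, the sketch below runs the same design with statsmodels on simulated stand-in data; the column names and simulated values are hypothetical, not the study's measurements, and the original analysis was done in SPSS.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: 40 plants, brevicolline in g/kg dry matter,
# log-transformed as in the text to meet ANOVA assumptions.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "habitat":  ["grassland"] * 20 + ["forest"] * 20,
    "location": (["Udau"] * 10 + ["Bardoitza"] * 10) * 2,
    "brevicolline": rng.uniform(0.224, 2.863, size=40),
})
df["log_brev"] = np.log(df["brevicolline"])

# Habitat as fixed factor, location as blocking factor (no interaction)
model = smf.ols("log_brev ~ C(habitat) + C(location)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```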
Alkaloid contents in natural populations of Carex brevicollis The combined techniques used in the GC-MS analysis allowed the identification of a group of similar β-carbolic organic structures, whose major component corresponded to the alkaloid brevicolline. All plants analysed contained brevicolline in highly variable amounts, ranging from 0.224 to 2.863 g kg⁻¹ dry matter. Brevicolline concentrations tended to be lower, although the difference was not significant, in plants growing in grasslands than in those from the forest understory (F1,36 = 3.409; p = 0.073) (Fig. 2). Endophytic colonisation of leaves of Carex brevicollis The amount of endophytic colonisation of leaves was estimated as the percentage of infected leaf fragments per plant. In Udau an average of 36% of the fragments of each plant were colonised by endophytes, and in Bardoitza 51.2% of the fragments. At both locations the amount of endophytic colonisation was greater in plants from grasslands (51.6%) than in those from nearby forests (35.3%) (Fig. 3), and this habitat effect was statistically significant (F1,36 = 7.837; p = 0.008). Endophyte identification The leaves of C. brevicollis supported a rich and abundant fungal assemblage. From the 40 plants analysed, 347 fungal isolates were obtained and grouped into 103 different morphotypes. Nineteen morphotypes contained more than one isolate, and 263 isolates were classified into these plural morphotypes. The remaining 84 morphotypes were unique, each consisting of a single isolate. Only the plural morphotypes were identified using nucleotide sequences, and in this group morphological characters were also used for identification of cultures that sporulated. With this information the 19 plural morphotypes could be regrouped into 14 taxa, indicating that calculations based on morphotypes overestimated the actual number of taxa (Table 1). All identified filamentous fungi, including unknown taxa, were ascomycetes, as deduced from their placement in a phylogenetic tree of ITS sequences. The most abundant endophytic taxon was Biscogniauxia nummularia: 140 isolates of this species were obtained, and 80% of the plants were infected by it. Table 1. Fungal endophytes identified in the leaves of Carex brevicollis. Ten plants were sampled and analysed at each habitat and location. The number of isolates obtained and the proportion (in parentheses) of these ten plants that were infected by each fungus are shown for grasslands (G) and beech forests (F) in the two sampling locations, Udau (U) and Bardoitza (B). Only endophytic species represented by more than one isolate are listed. The number of isolates of B. nummularia was very similar in grassland and forest habitats (Fig. 4a), but levels of infection were much higher in Bardoitza than in Udau (Table 1). An unknown ascomycete was the second most abundant taxon. This species and five others could not be identified because their cultures were sterile in PDA, and their nucleotide sequences were less than 95% similar to any identified accession from the EMBL nucleotide sequence database (Table 1). Contrary to B. nummularia, which displayed a similar number of isolates in closed and open habitats (Fig. 4a), the number of isolates of the other fungal species identified was significantly greater in grasslands than in nearby forests (Fig. 4b; F1,36 = 28.580; p < 0.0001). After the above results were obtained, a second set of six plants of C. brevicollis was sampled in Bardoitza grassland and processed for endophyte isolation, with the purpose of obtaining more isolates of B. nummularia. This fungus is easy to distinguish from other endophytes because of the brown coloration of its colonies. After the field sampling, B. nummularia was readily isolated from two of the six plants, but six months later neither B. nummularia nor any other fungus was isolated from the new ramets that had developed in the greenhouse. Alkaloid contents in fungicide-treated ramets of Carex brevicollis β-carbolic alkaloids were detected in extracts of all the new ramets produced by C. brevicollis plants, in those produced by fungicide-treated plants as well as in those derived from the untreated control ramets. The toxic brevicolline was the major alkaloid detected by GC-MS analysis; it was present in all the individuals and represented 96.2 ± 2.2% (mean ± standard error) of the β-carbolic structures present. The concentrations of β-carbolic alkaloids were more than five times lower in ramets developed from fungicide-treated plants than in those that grew from non-treated plants (Table 2; Fig. 5). Among non-treated plants, concentrations were significantly higher in mother plants sampled in the field than in their offspring ramets produced in the greenhouse. Discussion C. brevicollis is the only toxic species of a genus of forage plants, and is able to grow successfully in plant communities with an extended grazing history. In this survey, brevicolline contents displayed high variability among plants and did not differ significantly between open and closed habitats. The alkaloid content, contrary to our expectations, tended to be higher in plants growing in the forest understory, where grazing is unusual and light scant. Although our work was not designed to specifically test the effect of grazing on brevicolline synthesis, these observations suggest that grazing might not be a first-order factor affecting brevicolline content in plants. Two main abiotic parameters, growing degree days and altitude, were not found to be related to the brevicolline content of leaves of C.
brevicollis in a survey where variable concentrations of brevicolline also occurred among plant individuals (Busqué et al., 2010). The endophyte survey revealed the existence of a rich and diverse fungal community in the leaves of C. brevicollis, dominated by 14 taxa that comprised 75.7% of all isolates obtained. The amount of endophytic colonisation was higher in plants from grasslands. This could be due to the greater amount of tissue wounds caused by grazing, which might facilitate the entry of horizontally transmitted endophytes, or perhaps to the tendency towards higher contents of brevicolline in the forest sedges, which may exert a control on microbial plant populations. The dominant endophytic taxon was Biscogniauxia nummularia, found at both locations and habitats, and in 80% of the plants analysed. Despite the high prevalence of this fungus as an endophyte in wild populations of C. brevicollis, we did not recover it from new ramets produced in the greenhouse by infected mother plants. This led us to suspect that, in spite of its abundance, the colonisation of C. brevicollis by B. nummularia was of a non-systemic type (Sánchez-Márquez et al., 2012). It is interesting that B. nummularia is known for being an endophyte and a pathogen in Fagus sylvatica (Hendry et al., 2002), which is the dominant tree species in the area of study. In the last decades, several studies have shown that non-systemic endophytes are ubiquitous in plant species and compose an extremely diverse phylogenetic group. The roles and functions of these fungi in the host plants are mostly unknown and are the focus of ongoing studies (Rodriguez & Redman, 2008; Zabalgogeazcoa, 2008). From an evolutionary perspective, horizontal transmission and high fungal diversity within the host are more consistent with antagonistic, rather than mutualistic, interactions between the host and its symbiont. Accordingly, the theory of balanced antagonism states that in plant-endophyte interactions there is a degree of virulence by the fungal partner, the host plant having to defend itself to keep fungal invaders below a given threshold (Schulz & Boyle, 2005). However, other authors believe that the long evolutionary interaction and the pervasive occurrence of endophytism indicate that fungi might compensate for the cost of heterotrophism by playing some positive functions in the plant, such as improved adaptation to biotic and abiotic stresses (Arnold & Lewis, 2005; Rodriguez & Redman, 2008; Zabalgogeazcoa, 2008; Saikkonen et al., 2010; Yuan et al., 2010). The synthesis of toxic alkaloids by non-systemic endophytic fungi has been discovered in some plant species in recent years (Cook et al., 2009; Yang et al., 2012). Table 2. Estimates of the fixed and random effects from the experiment of the fungicide treatment. The dependent variable is the number of molecules of β-carbolic alkaloid. Fixed factor: treatment with three levels, mother, non-fumigated and fumigated plants. Figure 5. β-carbolic alkaloid contents in mother plants and in the new ramets developed from treated and non-treated individuals of Carex brevicollis. Compared to fumigated plants, alkaloid contents in mother plants and in untreated ramets increase ×9 and ×5, respectively. In these cases, a particular fungus with an endophytic lifestyle is able to produce a toxic alkaloid in planta and in vitro. In our experiment ramets derived from fungicide-treated plants produced alkaloids, but in much lower quantities than ramets from non-treated plants.
At first sight, these results suggest that the alkaloid is a plant product whose production can be induced by fungal endophytism. The synthesis of indole alkaloids, such as β-carbolines, elicited by fungal cell walls has been shown in plant cell cultures (Shanks et al., 1998; Facchini, 2001; Zhao et al., 2001; Bais et al., 2003; Pauw et al., 2004). Although plant synthesis under fungal regulation is plausible for brevicolline, more research is still needed since other explanations are possible. Different endophytic agents inhabiting plants, such as bacteria, may play a role in the synthesis of toxins (Zhang et al., 2006). Besides, in some cases endophytes might be incompletely removed by fungicides (Cheplick, 1997; Faeth & Sullivan, 2003), which opens up the possibility of a fungal synthesis of the toxin. Presumably, the fungicide treatment removes non-systemic endophytes better than systemic ones but, on the other hand, re-infection is more likely to occur in the former than in the latter. Further research is needed to test these ideas, and to analyse whether B. nummularia, the most abundant fungal endophyte, plays a particular role in toxin regulation.
6,797
2014-07-03T00:00:00.000
[ "Agricultural and Food Sciences", "Biology", "Environmental Science" ]
Charmonium production in pNe collisions at √sNN = 68.5 GeV The measurement of charmonium states produced in proton-neon (pNe) collisions by the LHCb experiment in its fixed-target configuration is presented. The production of J/ψ and ψ(2S) mesons is studied with a beam of 2.5 TeV protons colliding on gaseous neon targets at rest, corresponding to a nucleon-nucleon centre-of-mass energy √sNN = 68.5 GeV. The data sample corresponds to an integrated luminosity of 21.7 ± 1.4 nb⁻¹. The J/ψ and ψ(2S) hadrons are reconstructed in µ⁺µ⁻ final states. The J/ψ production cross-section per target nucleon in the centre-of-mass rapidity range y⋆ ∈ [−2.29, 0] is found to be 506 ± 8 ± 46 nb/nucleon. The ratio of J/ψ and D⁰ cross-sections is evaluated to be (1.06 ± 0.02 ± 0.09)%. The ψ(2S) to J/ψ relative production rate is found to be (1.67 ± 0.27 ± 0.10)%, in good agreement with other measurements involving beam and target nuclei of similar sizes. The production of charmonia, cc̄ bound states, is interesting to study in proton-proton, proton-nucleus and nucleus-nucleus collisions. This process involves two scales: that of the cc̄ pair production, which can be studied in proton-proton collisions; and that of hadronization, for which proton-nucleus collisions can bring decisive insights. Several initial- and final-state effects occur in proton-nucleus collisions that can modify charmonium production with respect to proton-proton collisions. Charmonium production can be suppressed by nuclear absorption [1] and can be affected by multiple scattering [2] and by energy loss through radiation [3] in the proton-nucleus overlapping region. Charmonium states can also be dissociated by comovers [4] or affected by the modification, namely shadowing or anti-shadowing, of the parton flux inside the nucleus [5,6]. These so-called cold nuclear-matter (CNM) effects depend on the collision energy, the transverse momentum and rapidity of the produced charmonium state, as well as on the size of the target nucleus. It is therefore essential to carry out charmonium measurements over a wide range of experimental conditions. Moreover, the understanding of charmonium production and hadronization mechanisms can be significantly improved by comparison with measurements of the overall charm quark production, for which D⁰ mesons are a good proxy, as their production dominates over that of other charm hadrons.
In this paper, a measurement of charmonium production in the LHCb fixed-target configuration is presented. The production of J/ψ mesons is studied in collisions of protons with energies of 2.5 TeV incident on neon nuclei at rest, resulting in a centre-of-mass energy of √sNN = 68.5 GeV. It is also compared with the production of D⁰ mesons measured in the same conditions [7]. In addition, the first measurement of the relative production rate of ψ(2S) and J/ψ mesons in this fixed-target configuration is reported. The LHCb detector [8,9] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5. It was designed primarily for the study of particles containing c or b quarks. The main detector elements are: the silicon-strip vertex locator (VELO) surrounding the interaction region, which allows the decay vertices of c and b hadrons to be reconstructed precisely; a tracking system with a warm magnet and tracking stations that provide a measurement of the momentum of charged particles; two ring-imaging Cherenkov detectors that provide discrimination between different species of charged hadrons; a calorimeter system consisting of scintillating-pad and preshower detectors in front of the electromagnetic and hadronic calorimeters; and a muon detector composed of alternating layers of iron and multiwire proportional chambers. The system for measuring the overlap with gas (SMOG) [10,11] is used to measure LHC beam profiles. It enables the injection of gases with a pressure of O(10⁻⁷) mbar in the beam-pipe section inside the VELO, allowing LHCb to operate as a fixed-target experiment. SMOG allows the injection of noble gases and therefore gives the unique opportunity to study nucleus-nucleus and proton-nucleus collisions on various targets. Due to the boost induced by the high-energy proton beam, the LHCb acceptance covers the backward rapidity hemisphere in the nucleon-nucleon centre-of-mass system of the reaction, −2.29 < y⋆ < 0. Events are selected by the two-stage trigger system [12]. The first level is implemented in hardware and uses information provided by the calorimeters and the muon detectors, while the second is a software trigger. The hardware trigger requires at least one identified muon for the reconstruction of the J/ψ → µ⁺µ⁻ and ψ(2S) → µ⁺µ⁻ decays. The software trigger requires two well-reconstructed muons having an invariant mass, m(µ⁺µ⁻), greater than 2700 MeV/c². The data samples correspond to a collider configuration in which proton bunches moving towards the detector do not cross any bunch moving in the opposite direction. Unlike in proton-proton (pp) collisions, no nominal interaction point exists in the fixed-target case. Therefore, events are required to have a reconstructed primary vertex (PV) with its coordinate along the beam axis (z) within the fiducial region zPV ∈ [−200, −100] ∪ [100, 150] mm (where zPV = 0 mm is the nominal position of the pp interaction point), within which high reconstruction efficiencies are achieved and calibration samples are available. Residual pp collision events are suppressed by vetoing events with activity in the backward direction with respect to the beam direction, based on the number of hits in VELO stations upstream of the interaction region. The region −100 < zPV < 100 mm, where most of the residual pp collisions occur, is also vetoed. The offline selections of J/ψ and ψ(2S) candidates are similar to those used in Ref.
[13]. Events must contain a primary vertex with at least four tracks reconstructed in the VELO detector. The J/ψ and ψ(2S) candidates are constructed from two oppositely-charged muons forming a good-quality vertex. The well-identified muons have a transverse momentum, pT, larger than 500 MeV/c and are required to be consistent with originating from the PV, which suppresses J/ψ and ψ(2S) mesons coming from b-hadron decays. The measurements are performed in the ranges of transverse momentum pT < 8 GeV/c and rapidity 2.0 < y < 4.29 of the J/ψ and ψ(2S) mesons. Corrections for the acceptance and reconstruction efficiencies are determined using samples of simulated proton-neon (pNe) collisions. In the simulation, J/ψ and ψ(2S) mesons are generated using Pythia 8 [14] with a specific LHCb configuration [15] and with a colliding-proton beam momentum equal to the momentum per nucleon of the beam and target in the centre-of-mass frame. The decays are described by EvtGen [16], in which final-state radiation is generated using Photos [17]. The generated J/ψ and ψ(2S) meson decay products are embedded into pNe minimum-bias events that are generated with the Epos event generator [18] using beam parameters obtained from data. Decays of hadrons generated with Epos are also described by EvtGen. The interaction of the generated particles with the detector and its response are implemented using the Geant4 toolkit [19,20] as described in Ref. [21]. After reconstruction, the simulated events are assigned weights based on the VELO cluster multiplicity. This ensures that the event multiplicity and the PV position follow the same distributions as in the data. Figure 1 shows the invariant-mass distributions for the J/ψ and ψ(2S) candidates, from which the corresponding signal yields are obtained with extended maximum-likelihood fits, after all selection criteria are applied to the entire pNe data set. The signals are described by Crystal Ball functions [22] and the background shapes are modelled by exponential functions. The total J/ψ and ψ(2S) signal yields are 4542 ± 71 and 76 ± 12, respectively. The signal yields are determined independently in intervals of pT and y⋆. These yields are corrected for the total efficiencies, evaluated to be 36.6% and 38.8% for the J/ψ and ψ(2S) respectively, which account for the geometrical acceptance of the detector and the efficiencies of the trigger, event selection, PV and track reconstruction, and particle identification. Particle identification [23] and tracking efficiencies are obtained from control samples in pp collision data. All other efficiencies are determined using samples of simulated data.
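For intuition, a minimal sketch of such an extended maximum-likelihood fit (Crystal Ball signal plus exponential background) is given below, using SciPy on self-generated pseudo-data. The mass window, parameter values and fixed tail parameters are assumptions for illustration, the Crystal Ball normalization over the finite window is neglected, and this is not the analysis code.

```python
import numpy as np
from scipy import stats, optimize

LO, HI = 2.9, 3.3  # hypothetical dimuon mass window, GeV/c^2

# Pseudo-data: Crystal Ball signal around the J/psi mass plus an
# exponential background (all parameter values are illustrative).
rng = np.random.default_rng(0)
sig = stats.crystalball.rvs(beta=1.5, m=3.0, loc=3.097, scale=0.015,
                            size=4500, random_state=rng)
sig = sig[(sig > LO) & (sig < HI)]
u = rng.uniform(size=2500)
k_true = 1.5  # inverse-CDF sampling of a truncated exponential
bkg = LO - np.log(1.0 - u * (1.0 - np.exp(-k_true * (HI - LO)))) / k_true
data = np.concatenate([sig, bkg])

def bkg_pdf(m, k):
    """Exponential background pdf normalized on [LO, HI]."""
    return k * np.exp(-k * (m - LO)) / (1.0 - np.exp(-k * (HI - LO)))

def nll(p):
    """Extended negative log-likelihood: N_s * CB + N_b * exponential.
    The CB tail parameters (beta, m) are kept fixed for simplicity."""
    n_s, n_b, mu, sigma, k = p
    f_s = stats.crystalball.pdf(data, beta=1.5, m=3.0, loc=mu, scale=sigma)
    dens = n_s * f_s + n_b * bkg_pdf(data, k)
    return (n_s + n_b) - np.sum(np.log(dens))

res = optimize.minimize(nll, x0=[4000.0, 3000.0, 3.10, 0.02, 1.0],
                        bounds=[(1.0, None), (1.0, None), (3.0, 3.2),
                                (1e-3, 0.1), (0.1, 10.0)],
                        method="L-BFGS-B")
print("fitted J/psi yield: %.0f (generated: %d)" % (res.x[0], len(sig)))
```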
Several sources of systematic uncertainty are considered, affecting either the determination of the signal yields or the total efficiencies. They are summarised in Table 1, separately for contributions that are correlated and uncorrelated between different intervals of pT and y⋆. The systematic uncertainty on the signal determination includes several contributions. A significant systematic uncertainty arises from the finite size of the simulation samples. The systematic uncertainty associated with the determination of the signal yields is related to the mass fit. This uncertainty is evaluated using alternative models for the signal and background shapes, Gaussian and polynomial functions respectively, that reproduce the mass distributions equally well. The effect of the small (below 0.1%) residual contribution of signal from b hadrons is investigated and found to be negligible. Other contributions are obtained by determining the maximum contamination from residual pp collisions with samples of pure pNe collisions and pure pp collisions. The neon purity systematic uncertainty corresponds to the contamination from collisions between the beam and elements different from neon, coming from standard outgassing. It is quantified using data samples recorded with no neon injection. Since the tracking and particle identification efficiencies are determined using pp control samples, the differences between the track multiplicity in pNe and pp collisions are considered as systematic uncertainties. The tracking and particle identification systematic uncertainties also take into account the size of the pp control samples. The PV reconstruction systematic uncertainty corresponds to the variation of the efficiency over the whole zPV range, and to the difference between the PV reconstruction efficiency evaluated using the simulation and a data-driven approach exploiting the well-reconstructed ϕ → K⁺K⁻ decay. The integrated luminosity is determined to be 21.7 ± 1.4 nb⁻¹ from the yield of electrons elastically scattering off the target Ne atoms, as presented in Ref. [25]. The measured J/ψ production cross-section per target nucleon within y⋆ ∈ [−2.29, 0], using the world-average branching fraction of J/ψ → µ⁺µ⁻ decays [26], is 506 ± 8 ± 46 nb/nucleon, where the first uncertainty is statistical and the second systematic. To compare with previous experimental results at different energies, the J/ψ cross-section is extrapolated to the full phase space using Pythia 8 with the CT09MCS PDF set [27], with no additional uncertainty related to the extrapolation, assuming forward-backward symmetry in the rapidity distribution.
After extrapolation, the total J/ψ cross-section is obtained, where the first uncertainty is statistical and the second systematic. An overview of J/ψ cross-section measurements performed at different centre-of-mass energies by different experiments [24], including this measurement and the previous LHCb measurement in pHe collisions at √sNN = 86.6 GeV [13], is shown in Fig. 2. The J/ψ differential cross-sections per target nucleon, as functions of y⋆ and pT, are shown in Fig. 3. These results are compared with predictions of the HELAC-Onia (HO) generator [28][29][30], using QCD leading-order (LO) calculations within the Color Singlet Model (CSM), with the proton CT14NLO and nuclear nCTEQ15 PDF sets [31]. The error band is obtained by varying the renormalization and factorization scales from 0.5 to 2. These predictions underestimate the measured total cross-sections. The data are better described by alternative predictions (Vogt), using calculations in the Color Evaporation Model carried out at next-to-leading order (NLO) in the heavy-flavour cross-section, with or without a 1% intrinsic charm (IC) contribution [32]. The J/ψ production cross-section is also compared to the D⁰ production cross-section extracted from the same dataset, in the same kinematical conditions [7]. Several systematic uncertainties cancel in the J/ψ/D⁰ cross-section ratio, related to the PV and track reconstruction efficiencies, the contamination from residual pp collisions, the neon purity and the luminosity determination. The ratio of J/ψ and D⁰ cross-sections is (1.06 ± 0.02 ± 0.09)%, where the first uncertainty is statistical and the second systematic. The ratio takes into account the branching fractions [26] of J/ψ → µ⁺µ⁻ and D⁰ → K⁻π⁺. The J/ψ-to-D⁰ cross-section ratio as a function of y⋆ and pT is shown in Fig. 4. Although this ratio shows a strong dependence on pT, the data show no significant rapidity dependence. The ψ(2S) production cross-section is also measured. Due to the limited size of the ψ(2S) sample, only the relative production rate of ψ(2S) and J/ψ mesons is presented, in which most of the efficiencies and systematic uncertainties cancel out. The remaining systematic uncertainties are evaluated to be 0.01% for the finite size of the simulation sample, 0.09% for the total efficiency differences between J/ψ and ψ(2S), and 0.05% for the signal extraction. The relative production rate of ψ(2S) and J/ψ mesons, corrected for the branching fractions Bψ(2S)→µ⁺µ⁻ and BJ/ψ→µ⁺µ⁻ of ψ(2S) → µ⁺µ⁻ and J/ψ → µ⁺µ⁻ decays respectively, is found to be (1.67 ± 0.27 ± 0.10)%, where the first uncertainty is statistical and the second systematic. Figure 5 compares this result to measurements performed at various centre-of-mass energies by other experiments, as a function of the target atomic mass number A [33][34][35][36][37]. The measurement is in agreement with other proton-nucleus measurements at similar values of A.
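Assuming the three systematic components just listed are independent, adding them in quadrature reproduces the ±0.10% total quoted for the ψ(2S)-to-J/ψ rate; the short check below makes this explicit.

```python
import math

# Components from the text, in absolute percent: simulation sample size,
# efficiency differences between J/psi and psi(2S), signal extraction.
components = [0.01, 0.09, 0.05]
total = math.sqrt(sum(c * c for c in components))
print(f"total systematic: {total:.2f}%")  # -> 0.10%
```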
In summary, the study of charmonium production in pNe collisions at √sNN = 68.5 GeV recorded by the LHCb experiment is presented. The J/ψ production cross-section is measured in the centre-of-mass rapidity range y⋆ ∈ [−2.29, 0]. The comparison of this new measurement with earlier data supports a power-law dependence of the J/ψ production cross-section on the centre-of-mass energy. The J/ψ-to-D⁰ cross-section ratio is found to be independent of rapidity, and the ψ(2S)-to-J/ψ cross-section ratio is found to be (1.67 ± 0.27 ± 0.10)%. This result is in good agreement with other measurements involving beam and target nuclei of similar sizes and performed at different centre-of-mass energies. Figure 2: Total J/ψ cross-section per target nucleon as a function of centre-of-mass energy. Experimental data, represented by black points, are taken from Ref. [24]. The red point corresponds to the pNe result from the present analysis. The green point corresponds to a measurement performed by LHCb with pHe collisions [13]. Figure 3: Differential J/ψ cross-section as a function of (left) y⋆ and (right) pT. The quadratic sums of statistical and uncorrelated systematic uncertainties are given by the error bars, while the grey boxes represent the correlated systematic uncertainties. Blue boxes (LO CSM, HO) correspond to predictions using the CT14NLO and nCTEQ15 PDF sets [28][29][30][31]. Green and red boxes correspond to predictions (Vogt) from [32] with and without a 1% intrinsic charm (IC) contribution, respectively (green and red lines indicate the central values). Figure 4: Ratio of J/ψ and D⁰ cross-sections as a function of (left) y⋆ and (right) pT. The quadratic sums of the statistical and uncorrelated systematic uncertainties are given by the error bars, while the grey boxes represent the correlated systematic uncertainties. Figure 5: The ψ(2S)-to-J/ψ production ratio as a function of the target atomic mass number A. The red point corresponds to the √sNN = 68.5 GeV pNe result from the present analysis; the vertical error bar corresponds to the statistical uncertainty and the box to the systematic uncertainty. The other points show previous fixed-target experimental data at various centre-of-mass energies [33-37]. Table 1: Systematic and statistical uncertainties on the J/ψ meson yield. Systematic uncertainties correlated between bins affect all measurements by the same relative amount. Ranges denote the minimum and the maximum values among the y⋆ or pT intervals, while the latter value is the uncertainty integrated over y⋆ or pT.
3,726.8
2022-11-21T00:00:00.000
[ "Materials Science" ]
Hierarchical Species Sampling Models This paper introduces a general class of hierarchical nonparametric prior distributions. The random probability measures are constructed by a hierarchy of generalized species sampling processes with possibly non-diffuse base measures. The proposed framework provides a general probabilistic foundation for hierarchical random measures with either atomic or mixed base measures and allows for studying their properties, such as the distribution of the marginal and total number of clusters. We show that hierarchical species sampling models have a Chinese Restaurant Franchise representation and can be used as prior distributions to undertake Bayesian nonparametric inference. We provide a method to sample from the posterior distribution together with some numerical illustrations. Our class of priors includes some new hierarchical mixture priors such as the hierarchical Gnedin measures, and other well-known prior distributions such as the hierarchical Pitman-Yor and the hierarchical normalized random measures. Introduction Cluster structures in multiple groups of observations can be modelled by means of hierarchical random probability measures or hierarchical processes that allow for heterogeneous clustering effects across groups and for sharing clusters among groups. As an effect of the heterogeneity, in these models the number of clusters in each group (marginal number of clusters) can differ, and due to cluster sharing, the number of clusters in the entire sample (total number of clusters) can be smaller than the sum of the marginal numbers of clusters. An important example of a hierarchical random measure is the Hierarchical Dirichlet Process (HDP), introduced in the seminal paper of Teh et al. (2006). The HDP involves a simple Bayesian hierarchy where the common base measure for a set of Dirichlet processes is itself distributed according to a Dirichlet process. This means that the joint law of the random probability measures (p_1, ..., p_I) is given by p_0 ∼ DP(θ_0, H_0) and p_i | p_0 iid∼ DP(θ, p_0), i = 1, ..., I, where DP(θ, p) denotes the Dirichlet process with base measure p and concentration parameter θ > 0. Once the joint law of (p_1, ..., p_I) has been specified, observations [ξ_{i,j}]_{i=1,...,I; j≥1} are assumed to be conditionally independent given (p_1, ..., p_I) with ξ_{i,j} | (p_1, ..., p_I) ind∼ p_i, i = 1, ..., I and j ≥ 1. Hierarchical processes are widely used as prior distributions in Bayesian nonparametric inference (see Teh and Jordan (2010) and references therein), by assuming that the ξ_{i,j} are latent variables describing the clustering structure of the data and that the observations in the i-th group, Y_{i,j}, are conditionally independent given ξ_{i,j} with Y_{i,j} | ξ_{i,j} ind∼ f(· | ξ_{i,j}), where f is a suitable kernel density. In this paper, we introduce a new class of hierarchical random probability measures, called Hierarchical Species Sampling Models (HSSM), based on a hierarchy of species sampling models. A Species Sampling random probability (SSrp) is defined as p = Σ_{j≥1} q_j δ_{Z_j}, (1.2) where (Z_j)_{j≥1} and (q_j)_{j≥1} are stochastically independent sequences, the atoms Z_j are i.i.d.
with common distribution H_0 (base measure), and the non-negative weights q_j ≥ 0 sum to one almost surely. By Kingman's theory of exchangeable partitions, any random sequence of positive weights such that Σ_{j≥1} q_j ≤ 1 can be associated with an exchangeable random partition of the integers (Π_n)_{n≥1}. Moreover, the law of an exchangeable random partition (Π_n)_{n≥1} is completely described by an exchangeable partition probability function (EPPF) q_0. Hence the law of the measure p defined in (1.2) is parametrized by q_0 and H_0, and it will be denoted by SSrp(q_0, H_0). The proposed framework provides a general probabilistic foundation for both existing and novel hierarchical random measures, and relies on a convenient parametrization of the hierarchical process in terms of two EPPFs and a base measure. Our HSSM class includes the HDP, its generalizations given by the Hierarchical Pitman-Yor process (HPYP), see Teh (2006); Du et al. (2010); Lim et al. (2016); Camerlenghi et al. (2017), and the hierarchical normalized random measures with independent increments (HNRMI), first studied in Camerlenghi et al. (2018), Camerlenghi et al. (2019) and more recently in Argiento et al. (2019). Among the novel measures, we study hierarchical generalizations of Gnedin (Gnedin (2010)) and of finite mixture (e.g., Miller and Harrison (2018)) processes, and asymmetric hierarchical constructions with p_0 and p_i of different types (Du et al. (2010)). Another motivation for studying HSSMs relies on the introduction of non-diffuse base measures (e.g., the spike-and-slab prior of George and McCulloch (1993)), now widely used in Bayesian parametric (e.g., Castillo et al. (2015) and Rockova and George (2018)) and nonparametric (e.g., Kim et al. (2009), Canale et al. (2017)) inference. We show that arrays of observations from HSSMs have a Chinese Restaurant Franchise representation, which is appealing for applications to Bayesian nonparametrics, since it sheds light on the clustering mechanism of the observations and suggests a simple and general sampling algorithm for posterior computations. The sampler can be used under both assumptions of diffuse and non-diffuse (e.g. spike-and-slab) base measures, whenever the EPPFs q_0 and q are known explicitly. By exploiting the properties of species sampling sequences, we are able to provide the finite-sample distribution of the number of clusters for each group of observations and of the total number of clusters for the hierarchy. We provide some new asymptotic results when the number of observations goes to infinity, thus extending to our general class of processes the asymptotic approximations given in Pitman (2006) and Camerlenghi et al. (2019) for species sampling and hierarchical normalized random measures, respectively. The paper is organized as follows. Section 2 introduces exchangeable random partitions, generalized species sampling sequences and species sampling random probability measures. Section 3 defines hierarchical species sampling models and shows some useful properties for applications to Bayesian nonparametric inference. Section 4 gives finite-sample and asymptotic distributions of the number of clusters under both assumptions of diffuse and non-diffuse base measure. A general Gibbs sampler for hierarchical species sampling mixtures is established in Section 5. Section 6 presents some simulation studies and a real data application.
Background Material Our Hierarchical Species Sampling Models build on exchangeable random partitions and related processes, such as species sampling sequences and species sampling random probability measures. We review some of their definitions and properties, which will be used in the rest of the paper. Supplementary material (Bassetti et al., 2019a) provides further details, examples and some new results under the assumption of a non-diffuse base measure. A partition π_n = {π_{1,n}, ..., π_{k,n}} of [n] := {1, ..., n} is an unordered collection of disjoint non-empty blocks whose union is [n], and we denote by |π_{c,n}| the number of elements of the block c = 1, ..., k. We denote with P_n the collection of all partitions of [n] and, given a partition, we list its blocks in ascending order of their smallest element. In other words, a partition π_n ∈ P_n is coded with elements in order of appearance. A random partition of N is a sequence of random partitions, Π = (Π_n)_n, such that each element Π_n takes values in P_n and the restriction of Π_n to [m], m < n, is Π_m (consistency property). A random partition of N is said to be exchangeable if for every n the distribution of Π_n is invariant under the action of all permutations (acting on Π_n in the natural way). Exchangeable random partitions are characterized by the fact that their distribution depends on Π_n only through its block sizes. A random partition of N is exchangeable if and only if its distribution can be written in terms of an exchangeable partition probability function (EPPF). An EPPF is a symmetric function q defined on the integers (n_1, ..., n_k), with Σ_{i=1}^k n_i = n, that satisfies the addition rule q(n_1, ..., n_k) = Σ_{j=1}^k q(n_1, ..., n_j + 1, ..., n_k) + q(n_1, ..., n_k, 1) (see Pitman (2006)). If (Π_n)_n is an exchangeable random partition of N, there exists an EPPF q such that, for every n and every π_n ∈ P_n with k = |π_n| blocks, P(Π_n = π_n) = q(|π_{1,n}|, ..., |π_{k,n}|). In other words, q(n_1, ..., n_k) corresponds to the probability that Π_n is equal to any of the partitions of [n] with k distinct blocks and block frequencies (n_1, ..., n_k). Given an EPPF q, one deduces the corresponding sequence of predictive distributions. Starting with Π_1 = {1}, given Π_n = π_n (with |π_n| = k), the conditional probability of adding a new block (containing n + 1) to Π_n is ν_n = q(n_1, ..., n_k, 1) / q(n_1, ..., n_k), while the conditional probability of adding n + 1 to the c-th block of Π_n (for c = 1, ..., k) is ω_{n,c} = q(n_1, ..., n_c + 1, ..., n_k) / q(n_1, ..., n_k). An important class of exchangeable random partitions is the Gibbs-type partitions, introduced in Gnedin and Pitman (2005) and characterized by the EPPF q(n_1, ..., n_k) = V_{n,k} ∏_{j=1}^k (1 − σ)_{n_j − 1}, where (x)_n = x(x+1)···(x+n−1) is the rising factorial (or Pochhammer's polynomial), σ < 1, and the V_{n,k} are positive real numbers such that V_{1,1} = 1 and V_{n,k} = (n − σk) V_{n+1,k} + V_{n+1,k+1}. (2.4)
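Since ν_n and ω_{n,c} are ratios of EPPF values, any explicitly known EPPF can be turned into a sequential sampler through the addition rule. The sketch below does this for the two-parameter Pitman-Yor EPPF, a Gibbs-type example; parameter values are illustrative and the code is a schematic, not code from the paper.

```python
import math
import random

def py_eppf(counts, sigma=0.25, theta=1.0):
    """Two-parameter Pitman-Yor EPPF q(n_1, ..., n_k)."""
    n, k = sum(counts), len(counts)
    num = math.prod(theta + j * sigma for j in range(1, k)) if k > 1 else 1.0
    den = math.prod(theta + i for i in range(1, n))  # (theta+1)_{n-1}
    tilt = math.prod(math.prod(1.0 - sigma + r for r in range(nj - 1))
                     for nj in counts)               # (1-sigma)_{n_j-1}
    return num * tilt / den

def predictive_step(counts, eppf=py_eppf):
    """Add element n+1 using the weights nu_n and omega_{n,c},
    obtained as ratios of EPPF values via the addition rule."""
    base = eppf(counts)
    w = [eppf(counts[:c] + [counts[c] + 1] + counts[c + 1:]) / base
         for c in range(len(counts))]          # omega_{n,c}: existing blocks
    w.append(eppf(counts + [1]) / base)        # nu_n: open a new block
    c = random.choices(range(len(w)), weights=w)[0]
    if c == len(counts):
        return counts + [1]
    return counts[:c] + [counts[c] + 1] + counts[c + 1:]

counts = [1]
for _ in range(49):
    counts = predictive_step(counts)
print("block sizes after n=50:", sorted(counts, reverse=True))
```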
Species Sampling Models with General Base Measure Kingman's theory of random partitions sets up a one-one correspondence (Kingman's correspondence) between EPPFs and distributions for decreasing sequences of random variables (q↓_k)_k with q↓_i ≥ 0 and Σ_i q↓_i ≤ 1 almost surely, by using the notion of the random partition induced by a sequence of random variables. Let us recall that a sequence of random variables (ζ_n)_n induces a random partition on N by the equivalence classes i ∼ j if and only if ζ_i = ζ_j. If Σ_i q↓_i = 1 a.s., then Kingman's correspondence between the EPPF and (q↓_j)_j can be defined as follows. Let (U_j)_j be an i.i.d. sequence of uniform random variables on (0, 1), independent from (q↓_j)_j, and let Π be the random partition induced by a sequence (θ_n)_n of conditionally i.i.d. random variables from Σ_{j≥1} q_j δ_{U_j}, where (q_j)_j is any (possibly random) permutation of (q↓_j)_j. Then the EPPF in Kingman's correspondence is the EPPF of Π. In point of fact, one can prove that q(n_1, ..., n_k) = Σ_{(j_1, ..., j_k)} E[∏_{i=1}^k q_{j_i}^{n_i}], (2.5) where (j_1, ..., j_k) ranges over all ordered k-tuples of distinct positive integers. See Equation (2.14) in Pitman (2006). A Species Sampling random probability of parameters q and H, in symbols p ∼ SSrp(q, H), is a random distribution p = Σ_{j≥1} q_j δ_{Z_j}, (2.6) where (Z_j)_j are i.i.d. random variables on a Polish space X with possibly non-diffuse common distribution H and EPPF q given in (2.5). Such random probability measures are sometimes called species sampling models. In this parametrization, q takes into account only the law of (q↓_j)_j, while H describes the law of the Z_j's. If H is diffuse, a sequence (ξ_n)_n sampled from p in (2.6), i.e. with ξ_n conditionally i.i.d. (given p) with law p ∼ SSrp(q, H), is a Species Sampling Sequence as defined by Pitman (1996) (Proposition 13 in Pitman (1996)) and the EPPF of the partition induced by (ξ_n)_n is exactly q. On the contrary, when H is not diffuse, then (ξ_n)_n is not a Species Sampling Sequence in the sense of Pitman (1996) and the EPPF of the induced partition is not q. Nevertheless, as shown in the next proposition, there exists an augmented space X × (0, 1) and a latent partition related to (ξ_n)_n with EPPF q. Hereafter, for a general base measure H, we refer to (ξ_n)_n as a generalized species sampling sequence, gSSS(q, H). Proposition 1. Let (U_j)_j be an i.i.d. sequence of uniform random variables on (0, 1), (Z_j)_j an i.i.d. sequence with possibly non-diffuse common distribution H, and (q_j)_j a sequence of positive numbers with Σ_j q_j = 1 a.s. Assume that all the previous elements are independent, and let (ζ_n)_n := (ξ_n, θ_n)_n be a sequence of random variables, with values in X × (0, 1), conditionally i.i.d. from p given p = Σ_{j≥1} q_j δ_{(Z_j, U_j)}. (2.7) Then, the EPPF of the partition induced by (ζ_n)_n is q given in (2.5) and (ξ_n)_n is a gSSS(q, H).
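The gap between the latent partition of Proposition 1 and the coarser partition induced by the observed values can be seen in a small simulation. Below, a Pitman-Yor SSrp is approximated by stick-breaking truncation with a spike-and-slab base measure (an atom at zero plus a Gaussian slab); the truncation level and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
J, n = 500, 30          # truncation level, sample size (assumptions)
sigma, theta = 0.25, 1.0

# Pitman-Yor stick-breaking: V_j ~ Beta(1 - sigma, theta + j*sigma)
V = rng.beta(1.0 - sigma, theta + sigma * np.arange(1, J + 1))
q = V * np.concatenate(([1.0], np.cumprod(1.0 - V)[:-1]))

# Spike-and-slab base measure H: atom at 0 with mass 0.3, else N(0, 1)
Z = np.where(rng.uniform(size=J) < 0.3, 0.0, rng.normal(size=J))

# Sample n observations, keeping the latent component index as well
idx = rng.choice(J, size=n, p=q / q.sum())
xi = Z[idx]

latent_blocks = len(np.unique(idx))   # partition of the (Z_j, U_j) labels
induced_blocks = len(np.unique(xi))   # coarser: draws hitting the atom merge
print(latent_blocks, ">=", induced_blocks)
```

With a diffuse base measure the two counts coincide almost surely; with the atom present, the induced partition is typically strictly coarser, which is exactly why the EPPF of the induced partition is no longer q.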
From the previous proposition, it follows that the partition induced by (ζ_n)_n is in general finer than the partition induced by (ξ_n)_n, with equality if H is diffuse. This result is essential in order to properly define and study hierarchical models of type (1.3), since the random measure p_0 in (1.3) is almost surely discrete and hence not diffuse. Further properties of the gSSS are proved in the supplementary material (Bassetti et al., 2019a), whereas further results are available in Sangalli (2006) for normalized random measures with independent increments. These properties are relevant to the comprehension of the implications of mixed base measures for Bayesian nonparametrics, especially for hierarchical prior constructions. Hierarchical Species Sampling Models We introduce hierarchical species sampling models (HSSMs), provide some examples and derive relevant properties. HSSM Definition and Examples In the following definition a hierarchy of species sampling random probabilities is used to build hierarchical species sampling models. Definition 1. Let q and q_0 be two EPPFs and H_0 a probability distribution on the Polish space X. A Hierarchical Species Sampling Model, HSSM(q, q_0, H_0), of parameters (q, q_0, H_0) is a vector of random probability measures (p_0, p_1, ..., p_I) such that p_0 ∼ SSrp(q_0, H_0) and p_i | p_0 iid∼ SSrp(q, p_0), i = 1, ..., I. An array [ξ_{i,j}]_{i=1,...,I, j≥1} is sampled from HSSM(q, q_0, H_0) if its elements are conditionally independent random variables given (p_1, ..., p_I) with ξ_{i,j} | (p_1, ..., p_I) ind∼ p_i, where i = 1, ..., I and j ≥ 1. By de Finetti's representation theorem it follows that the array [ξ_{i,j}]_{i=1,...,I, j≥1} is partially exchangeable (in the sense of de Finetti). Definition 1 is general and provides a probabilistic foundation for a wide class of hierarchical random models. The properties of the SSrp and of the gSSS guarantee that the hierarchical random measures in Definition 1 are well defined also for non-diffuse (e.g., atomic or mixed) probability measures H_0. The HSSM class in Definition 1 includes well-known (e.g., Teh et al. (2006), Teh (2006), Bacallado et al. (2017)) and new hierarchical processes, as shown in the following examples. We assume that the reader is familiar with basic nonparametric prior processes. A brief account of these topics is included in the supplementary material (Bassetti et al., 2019a). Example 2 (Hierarchical homogeneous normalized random measures). Hierarchical homogeneous Normalized Random Measures (HNRMI), introduced in Camerlenghi et al. (2019), are defined by the analogous hierarchy p_0 ∼ NRMI(θ_0, η_0, H_0) and p_i | p_0 iid∼ NRMI(θ, η, p_0), where NRMI(θ, η, H) denotes a normalized homogeneous random measure with parameters (θ, η, H), with θ > 0, η a Lévy measure on R⁺ (absolutely continuous with respect to the Lebesgue measure) and H a measure on X. An NRMI is an SSrp, and hence HNRMIs are HSSMs. Our class of HSSMs includes new hierarchical processes such as hierarchical mixtures of finite mixture processes and combinations of finite mixture processes and PYPs. A Hierarchical MFMP is defined by parameters σ_i and ρ^(i), i = 0, 1, .... As a special case, when |σ_i| = 1 and for a suitable ρ^(i) (i = 0, 1, ...), one obtains the Hierarchical Gnedin Process, which is a hierarchical extension of the Gnedin Process. For further details see Examples S.2 and S.3 in the supplementary material (Bassetti et al., 2019a). HSSM and Chinese Restaurant Franchising Representation The next proposition gives the marginal law of an array sampled from a HSSM. When π_n is a partition of [n] with block sizes (|π_{1,n}|, ..., |π_{k,n}|) and q is an EPPF, we will write q(π_n) for q(|π_{1,n}|, ..., |π_{k,n}|). Proposition 2.
Let [ξ_{i,j}]_{i=1,...,I, j≥1} be sampled from HSSM(q, q_0, H_0); then, for every vector of integer numbers (n_1, ..., n_I) and every collection of Borel sets {A_{i,j}}, the marginal law of the array admits an explicit expression in terms of q, q_0 and H_0. Starting from Proposition 2, we show that an array sampled from a HSSM has a Chinese Restaurant Franchise representation. Such a representation is very useful because it leads to a generative interpretation of the nonparametric priors in the HSSM class, and naturally allows for posterior simulation procedures (see Section 5). In the Chinese Restaurant Franchise metaphor, observations are attributed to "customers", identified by the indices (i, j), and groups are described as "restaurants" (i = 1, ..., I). In each "restaurant", "customers" are clustered according to "tables", which are then clustered at the second hierarchy level by means of "dishes". Observations are clustered across restaurants at the second level of the clustering process, when dishes are associated to tables. One can think that the first customer sitting at each table chooses a dish from a common menu, and this dish is shared by all other customers who join the same table afterwards. The first level of the clustering process, acting within each group, is driven by independent random partitions Π^(1), ..., Π^(I) with EPPF q. The second level, acting between groups, is driven by a random partition Π^(0) with EPPF q_0. Given integer numbers n_1, ..., n_I, we introduce the set of observations O := {ξ_{i,j} : j = 1, ..., n_i; i = 1, ..., I}, and denote with C_j(Π) the random index of the block of the random partition Π that contains j. Theorem 1. If [ξ_{i,j}]_{i=1,...,I, j≥1} is a sample from a HSSM(q, q_0, H_0), then O and {φ_{d*_{i,j}} : j = 1, ..., n_i; i = 1, ..., I} have the same law, where (φ_n)_n is an i.i.d. sequence with common distribution H_0, Π^(1), ..., Π^(I) are i.i.d. exchangeable partitions with EPPF q, and Π^(0) is an exchangeable partition with EPPF q_0. All the previous random variables are independent. The construction in Theorem 1 can be summarized by a hierarchical structure where, following the Chinese Restaurant Franchise metaphor (see Figure 1), c_{i,j} is the table at which the j-th "customer" of "restaurant" i sits, d_{i,c} is the index of the "dish" served at table c in restaurant i, and d*_{i,j} is the index of the "dish" served to the j-th customer of the i-th restaurant. Figure 1: Illustration of the HSSM(q, q_0, H_0) clustering process given in Theorem 1. We assume two groups (restaurants), I = 2, with n_1 = 6 and n_2 = 4 observations (customers) each. Top-left: samples (dishes) φ_n from the non-diffuse base measure. Dishes have the same colour and line type if they take the same values. Mid-left: indexes D(i, c) (from 1 to 7 in lexicographical order) of the tables which share the same dish. Boxes represent the blocks of the random partition at the top of the hierarchy. Bottom-left: observations (customers) allocated by c_{i,j} to each table (circles) in the group-specific random partitions. Top-right: table lexicographical ordering and dishes assigned to the tables by the top-level partition. Bottom-right: observation clustering implied by the joint table and dish allocation d*_{i,j}. A special case of Theorem 1 has been independently proved in Proposition 2 of Argiento et al. (2019) for HNRMI. Theorem 1 can also be used to describe the array O in a recursive way.
Having in mind the Chinese Restaurant Franchise, we shall denote with n icd the number of customers in restaurant i seated at table c and being served dish d and with m id the number of tables in the restaurant i serving dish d.We denote with dots the marginal counts.Thus, n i•d is the number of customers in restaurant i being served dish d, m i• is the number of tables in restaurant i, n i•• is the number of customers in restaurant i (i.e. the n i observations), and m •• is the number of tables. Finally, let ω n,k and ν n be the weights of the predictive distribution of the random partitions Π (i) (i = 1, . . ., I) with EPPF q (see Section 2.1).Also, let ωn,k and νn be the weights of the predictive distribution of the random partitions Π (0) with EPPF q 0 defined analogously by using q 0 in place of q.We can sample {ξ i,j 1,1 = φ 1 ∼ H 0 and then iterating, for i = 1, . . ., I, the following steps: and let c it = c for the chosen c, we leave m i• the same and set Remark 1.The Chinese Restaurant Franchise representation and the Pólya Urn sampler in (S1)-( S3) are deduced directly from the latent partition representation given in Theorem 1, with no additional assumptions on H 0 and without resorting to the expression of the distribution of the partition induced by the observations.This expression can be derived for HSSM as a side result of our combinatorial framework and includes Theorem 3 and 4 of Camerlenghi et al. (2019) as special cases when the HSSM is a HNRMI.Since the derivation of this law is not a central result of the paper, it is given in the supplementary material (Bassetti et al., 2019a). Cluster Sizes Distributions We study the distribution of the number of clusters in each group of observations (i.e., the number of distinct dishes served in the restaurant i), as well as the global number of clusters (i.e. the total number of distinct dishes in the restaurant franchise). Let us introduce a time index t to describe the customers arrival process.At time t = 1, 2, . . .and for each group i, O it is the observation set and n i (t) is the number of elements in O it , i.e. the number of observations in the group i at time t.The collection of all the n(t) := , each group has one new observation between t − 1 and t and hence the total number of observations at time t is n(t) = It.Different sampling rates can be assumed within our framework.For example n i (t) = tb i for suitable integers b i describes an asymmetric sampling scheme in which groups have different arrival rates, b i . We find the exact finite sample distribution of the number of clusters for given n(t) and n i (t) when t < ∞.Some properties, such as the prior mean and variance, are discussed in order to provide some guidelines to setting HSSM parameters in the applications.We present some new asymptotic results when the number of observations goes to infinity, such that n(t) diverges to +∞ as t goes to +∞.The results extend existing asymptotic approximations for species sampling (Pitman (2006)) and for hierarchical normalized random measures (Camerlenghi et al. (2019)) to the general class of HSSMs.Finally, we provide a numerical study of the approximation accuracy. Distribution of the Cluster Size Under the Prior For every i = 1, . . 
., I, we define By Theorem 1, for every fixed t, the laws of K i,t and K t are the same as the ones of the number of "active tables" in "restaurant" i and of the total number of "active tables" in the whole franchise, respectively.Analogously, the laws of D t and D i,t are the same as the laws of the number of dishes served in the restaurant i and in the whole franchise, respectively.If H 0 is diffuse, then D t and the number of distinct clusters in O t have the same law and also D i,t and the number of clusters in the group i follow the same law. The distributions of D t and D i,t are derived in the following Proposition 3.For every n ≥ 1 and k = 1, . . ., n, we define q n (k) := P |Π and One of the advantages of our framework is that the gSSS properties allow us to easily derive the distribution of the number of clusters when H 0 is not diffuse.Indeed, it can be deduced by considering possible coalescences of latent clusters (due to ties in the i.i.d.sequence (φ n ) n of Theorem 1) forming a true cluster.Let us denote with Dt and Di,t the number of distinct clusters in O t and O it , respectively.The assumption of atomic base measures behind HDP and HPYP has been used in many studies, and some of its theoretical and computational implications have been investigated (e.g., see Nguyen (2016) and Sohn and Xing (2009)), whereas the implications of the use of mixed base measures are not yet well studied, especially in hierarchical constructions.In the following we state some new results for the case of a spike-and-slab base measure. The probability of Dt has the same expression as above with D t in place of D i,t and n t in place of n i,t .Moreover, and E[ Dt ] has an analogous expression with D i,t replaced by D t . For a Gibbs-type EPPF with σ > 0, using results in Gnedin and Pitman (2005), we get where V n,k satisfies the partial difference equation in (2.4) and S σ (n, k) is a generalized Stirling number of the first kind, defined as for σ = 0 and S 0 (n, k) = |s(n, k)| for σ = 0, where |s(n, k)| is the unsigned Stirling number of the first kind, see Pitman (2006).See De Blasi et al. ( 2015) for an up-to-date review of Gibbs-type prior processes. For the hierarchical PY process the distribution q n (k) has closed-form expression For the Gnedin model (Gnedin, 2010) the distribution q n (k) is In the supplementary material (Bassetti et al., 2019b), we provide a graphical illustration of the prior distributions presented here above and a sensitivity analysis with respect to the prior parameters. Asymptotic Distribution of the Cluster Size An exchangeable random partition (Π n ) n≥1 has asymptotic diversity S if for a positive random variable S and a suitable normalizing sequence (c n ) n≥1 .Asymptotic diversity generalizes the notion of σ-diversity, see Definition 3.10 in Pitman (2006). In the following propositions, we use the (marginal) limiting behaviour (4.2) of the random partitions Π ∞ , respectively) for suitable diverging sequences a n and b n .Moreover assume that a n = n σ0 L 0 (n) and b n = n σ1 L 1 (n), with σ i ≥ 0 and L i is a slowly varying function, i = 0, 1, and set Remark 2. Part (ii) extends to HSSM with different group sizes, n i (t), the results in Theorem 7 of Camerlenghi et al. (2019) for HNRMI with groups of equal size.Both part (i) and (ii) provide deterministic scaling of diversities, in the spirit of Pitman (2006), and differently from Camerlenghi et al. (2019) where a random scaling is obtained. Remark 3. 
Combining Propositions 4 and 6 one can obtain similar asymptotic results also for Di,t and Dt .For instance, one can prove that, under the same assumptions of Proposition 4, if and H C diffuse (as in the spike-and-slab case), for t → +∞ one has Di,t and Dt The second general result describes the asymptotic behaviour of D i,t and D t in presence of random partitions for which c n = 1 for every n. n | converges a.s. to a positive random variable K i as n → +∞, then for every k ≥ 1 lim and Starting from Propositions 6 and 7, analytic expressions for the asymptotic distributions of D i,t and D t can be deduced for some special HSSMs. As an example, consider the HGP and the HPYGP in Examples 3 and 4. If (Π n ) n is a Gnedin's partition, then |Π n | converges almost surely to a random variable K (see Gnedin (2010) and Example S.3 in the supplementary material (Bassetti et al., 2019a)) and the asymptotic behaviour of the number of clusters can be derived from Proposition 7 as stated here below. Hierarchical Species Sampling Models (ii) for HP Y DP (θ 0 , σ 0 ; θ 1 ) with σ 0 > 0: (iv) for HDP (θ 0 , θ 1 ): In Figure 2, we compare exact and asymptotic values (see Proposition 3 and Corollary 2, respectively) of the expected marginal number of clusters for the HSSMs in the PY family: HDP (θ 0 ; θ 1 ), HDP Y P (θ 0 ; σ 1 , θ 1 ), HP Y P (σ 0 , θ 0 ; σ 1 , θ 1 ) and HP Y DP (θ 0 , σ 0 ; θ 1 ) (different rows of Figure 2).For each HSSM we consider n i (t) increasing from 1 to 500 and different parameter settings (different columns and lines).For the HDP the exact value (dashed lines) is well approximated by the asymptotic one (solid line) for all sample sizes n i (t), and different values of θ i (gray and blacks lines in the left and right plots of panel (i)).For the HPYP, the results in panel (ii) show that there are larger differences when θ i , i = 0, 1 are large and σ 0 and σ 1 are close to zero (left plot).The approximation is good for small θ i (right plot) and improves slowly with increasing n i (t) for smaller σ i (gray lines in the right plot).In the panels (iii) and (iv) for HDPYP and HPYDP, there exist parameter settings where the asymptotic approximation is not satisfactory and is not improving when n i (t) increases. Our numerical results point out that the asymptotic approximation for both PY and HPY lacks of accuracy for some parameters settings.Thus, the exact formula for the number of clusters should be used in the applications when calibrating the parameters of the process. Chinese Restaurant Franchise Sampler Random measures and hierarchical random measures are widely used in Bayesian nonparametric inference (see Hjort et al. (2010) for an introduction) as prior distributions for the parameters of a given density function.In this context a further stage is added to the hierarchical structure of Equation (3.7) involving an observation model where f is a suitable kernel density. The resulting model is an infinite mixture, which is the object of the Bayesian inference.In this framework, the posterior distribution is usually not tractable and Gibbs sampling is used to approximate the posterior quantities of interest.There are two main classes of samplers for posterior approximation in Bayesian nonparametrics: marginal (see Escobar (1994) and Escobar and West (1995)) and conditional (Walker (2007), Papaspiliopoulos and Roberts (2008), Kalli et al. 
(2011)) samplers.See also Figure 2: Exact (dashed lines) and asymptotic (solid lines) expected marginal number of clusters E(D i,t ) when n i (t) = 1, . . ., 500 for different HSSMs.Favaro and Teh (2013) for an up-to-date review.In this section, we extend the marginal sampler for HDP mixture (see Teh et al. (2006), Teh (2006) and Teh and Jordan (2010)), to our general class of HSSMs.We present the sampler for the case kernel and base measure are conjugate.When this assumption is not satisfied our sampling method can be easily modified following the auxiliary variable sampler of Neal (2000) and Favaro and Teh (2013). Following the notation in Section 3.2, we consider the data structure Y i,j , c i,j : i ∈ J , and j = 1, . . ., n Denote with the superscript ¬ij the counts and sets in which the customer j in the restaurant i is removed and, analogously, with ic the counts and sets in which all the customers in the table c of the restaurant i are removed.We denote with p(X) the density of the random variable X. The proposed Gibbs sampler simulates iteratively the elements of c and d from their full conditional distributions, where the latent variables φ d are integrated out analytically.In sampling the latent variable c, we need to sample jointly [c, d * ] and, since d is a function of [c, d * ], this also gives a sample for d.In order to improve the mixing we re-sample d given c in a second step.In summary, the sampler iterates for i = 1, . . ., I according to the following steps: Equation (S.32) in the supplementary material (Bassetti et al., 2019a)), for j = 1, . . ., n i•• ; (ii) (re)-sample d i,c from p(d i,c |Y , c, d ic ) (see Equation (S.34) in the supplementary material (Bassetti et al., 2019a)), for c = 1, . . ., m i• . A detailed description of the Gibbs sampler is given in the supplementary material (Bassetti et al., 2019a). Simulation Experiments We compare some of the HSSMs described in Section 3 on synthetic data generated under different assumptions on the true model.In the first experimental setting, we consider three groups of observations sampled from three-component normal mixtures with common mixture components, but different mixture probabilities: iid ∼ 0.3N (−5, 1) + 0.3N (0, 1) + 0.4N (5, 1), j = 1, . . ., 100, The parameters of the different prior processes are chosen such that the marginal expected number of clusters is E(D i,t ) = 5 and its variance is between 1.97 and 3.53 assuming n i (t) = n i = 50 with t = 1 for i = 1, . . ., 3. In the second and third experimental settings, we consider ten groups of observations from two-and three-component normal mixtures respectively with one common component across groups.In the second experiment, we assume with n i (t) increasing from 5 to 100 with t = 1 and for i = 1, . . ., 10.In the third setting, we assume a smaller weight for the common component and larger number of group specific components: The parameters of the prior processes are chosen such that the marginal expected value is E(D i,t ) = 10 and the variance is between 4.37 and 6.53 assuming n i (t) = 20 with t = 1 for i = 1, . . ., 10. For each setting we generate 50 independent datasets and run the marginal sampler described in Section 5 with 6, 000 iterations to approximate the posterior predictive distribution and the posterior distribution of the clustering variables c and d.We discard the first 1, 000 iterations of each run.All inferences are averaged over the 50 independent runs. 
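As an illustration of the first experimental setting, the snippet below generates grouped observations from three-component normal mixtures with common components. Only the weights of the first group are visible in the text above, so the weights used for groups 2 and 3 here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Common mixture components N(-5,1), N(0,1), N(5,1).  The weights of group 1 follow
# the first experimental setting; those of groups 2 and 3 are illustrative only.
means, sds = np.array([-5.0, 0.0, 5.0]), np.ones(3)
weights = [np.array([0.3, 0.3, 0.4]),   # group 1, as stated above
           np.array([0.4, 0.4, 0.2]),   # group 2 (placeholder weights)
           np.array([0.2, 0.3, 0.5])]   # group 3 (placeholder weights)
n_obs = [100, 100, 100]

groups = []
for w, n in zip(weights, n_obs):
    comp = rng.choice(3, size=n, p=w)           # latent component labels
    groups.append(rng.normal(means[comp], sds[comp]))

print([round(float(g.mean()), 2) for g in groups])
```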
We compare the models by evaluating their co-clustering errors and predictive abilities (see Favaro and Teh (2013) and Dahl (2006)).We denote with d(m) = (d ), the vector of allocation variables for all the observations, sampled at the Gibbs iteration m = 1, . . ., M, where M is the number of Gibbs iterations.The co-clustering matrix of posterior pairwise probabilities of joint classification is estimated by: Let d0 be the true value of the allocation vector d.The co-clustering error can be measured as the average L 1 distance between the true pairwise co-clustering matrix, δ {d 0l } (d 0k ) and the estimated co-clustering probability matrix, P lk , i.e.: The following alternative measure can be defined by using the Hamming norm and the estimated co-clustering matrix, I(P lk > 0.5): Both accuracy measures CN and CN * attain 0 in absence of co-clustering error and 1 when co-clustering is mispredicted. The L 1 distance between the true group-specific densities, f (Y i,ni+1 ) and the corresponding posterior predictive densities, p(Y i,ni+1 |Y), can be used to define the predictive score: Finally, we consider the posterior median ( q 0.5 (D)) and variance ( V (D)) of the total number of clusters D. The results in Table 1 point out similar co-clustering accuracy across HSSMs and experiments.In the first and second experimental settings, HPYP and HDPYP have significantly small co-clustering errors, CN and CN * .As regard the predictive score SC, the seven HSSMs behave similarly in the three restaurants experiment (panel a), whereas in the two-components experiment the HDPYP performs slightly better with respect to the other HSSMs.In presence of large heterogeneity across restaurants (third setting), the HGPYP is performing best following the co-clustering norm and the predictive score measures.A comparison between HPYP and HGPYP shows that these results do not depend on the number of observations and can be explained by a better fitting of tails and dispersion of the group-specific densities provided by the HGPYP.For illustrative purposes, we provide in Figure 3 a comparison of the log-predictive scores of the two models for an increasing number of observations.In the first setting, the posterior number of clusters, q 0.5 (D), for all the HSSMs (panel (a) in Table 1) is significantly close to the true value, that corresponds to 3 mixture components.Increasing the number of restaurants (second and third settings), the HPYP tends to have extra clusters causing larger posterior median and variance of the number of clusters ( q 0.5 (D) and V (D) in Table 1).Conversely, the HGPYP have a smaller dispersion of the number of clusters with respect to the HPYP. The results for the third experiment suggest that HGPYP performs better when groups of observations are heterogeneous.Also increasing the number of observations, HGPYP provides a consistent estimate of the true number of components (Figure 3).In conclusion, our experiments indicate that using the Pitman-Yor process at some stage of the hierarchy may lead to a better accuracy.The HDPYP did reasonably well in all our experiments in line with previous findings on hierarchical Dirichlet and Pitman-Yor processes for topic models (see Du et al. 
(2010)).Also, using Gnedin process at the top of the hierarchy might lead to a better accuracy when groups of observations are heterogeneous.Moreover, when the researcher is interested in a consistent estimate of the number of components, HGPYP should be preferred.Further details and results are in the supplementary material (Bassetti et al., 2019b). Real Data Application Bayesian nonparametrics is used in economic time series modelling to capture observation clustering effects (e.g., see Hirano, 2002;Griffin and Steel, 2011;Bassetti et al., 2014;Kalli and Griffin, 2018;Billio et al., 2019).In this paper, we consider the industrial production index, an important indicator of macroeconomic activity used in business cycle analysis (see Stock and Watson (2002)).One of the most relevant issues in this field concerns the classification of observations by allowing for different parameter values in periods (called regimes) of recession and expansion. The data has been previously analysed by Bassetti et al. (2014) and contains the seasonally and working day adjusted industrial production indexes (IPI) at a monthly frequency from April 1971 to January 2011 for both United States (US) and European Union (EU).We generate autoregressive-filtered IPI quarterly growth rates by calculating the residuals of a vector autoregressive model of order 4. We follow a Bayesian nonparametric approach based on HSSM prior for the estimation of the number of regimes or structural breaks.Based on the simulation results, we focus on the HPYP, with hyperparameters (θ 0 , σ 0 ) = (1.2,0.2) and (θ 1 , σ 1 ) = (2, 0.2), and on the HGPYP, with hyperparameters (γ 0 , ζ 0 ) = (14.7,130) and (θ 1 , σ 1 ) = (2, 0.23), such that the prior mean of the number of clusters is 5.The main results of the nonparametric inference can be summarized through the implied data clustering (panel (a) of Figure 4) and the marginal, total and common posterior number of clusters (panel (b)). One of the most striking feature of the co-clustering is that in the first and second block of the minor diagonal there are vertical and horizontal black lines.They correspond to observations of a country, which belong to the same cluster that is the same phase of the business cycle. Another feature that motivates the use of HSSMs is given by the black horizontal and vertical lines in the two main diagonal blocks.They correspond to observations of the two countries allocated to common clusters.The appearance of the posterior total number of clusters (see panel b.1) suggests that at least three clusters should be used in a joint modelling of the US and EU business cycle.The larger dispersion of the marginal number of cluster for EU (b.3) with respect to US (b.2) confirms the evidence in Bassetti et al. (2014) of a larger heterogeneity in the EU cycle.Finally, we found evidence (panel b.4) of common clusters of observations between EU and US business cycles. Supplementary Material Supplementary material A to Hierarchical Species Sampling Models (DOI: 10.1214/19-BA1168SUPPA; .pdf).This document contains the derivations of the results of the paper and a detailed analysis of the generalized species sampling (with a general base measure).It also describes the Chinese Restaurant Franchise Sampler for Hierarchical Species Sampling Mixtures. Proposition 4 . Let H0 (d|k) (for 1 ≤ d ≤ k) be the probability of observing exactly d distinct values in the vector (φ 1 , . . 
., φ k ) where the φ n s are i.i.d.H 0 .Then, P{ Di,t = d} = ni(t) k=d H0 (d|k)P{D i,t = k} for d = 1, . . ., n i (t).The probability of Dt has the same expression as above with D t in place of D i,t and n(t) in place of n i (t).If H 0 is diffuse, then P{ Di,t = d} = P{D i,t = d} and P{ Dt = d} = P{D t = d}, for every d ≥ 1. 0, . . ., I), to obtain the asymptotic distribution of D i,t and D t assuming c n = n σ L(n), with L slowly-varying.The first general result deals with HSSM where Π n = Π (i) n satisfies (4.2) for every i = 1, . . ., I and c n → +∞ and hence the cluster size |Π (i) n | diverges to +∞.Proposition 6. Assume that Π (0) and Π (i) (for i = 1, . . ., I) are independent exchangeable random partitions such that |Π (0) n |/a n (|Π (i) n |/b n for i = 1, . . ., I, respectively) converges almost surely to a strictly positive random variable D (0) where c = [c i : i ∈ J ], with c i = [c i,j : j = 1, . . ., n i•• ], d = [d i,c : i ∈ J , c = 1, . . ., m i• ], φ = [φ d : d ∈ D],and, with a slight abuse of notation, we write [c, d] ∼ HSSM in order to denote the distribution of the labels [c, d] obtained from a HSSM as in (3.7).If we defined * i,j = d i,ci,j and d * = [d * i,j : i ∈ J , j = 1, . . ., n i•• ],then [c, d] and [c, d * ] contain the same amount of information, indeed d * is a function of d and c, while d is a function of d * and c.From now on, we denote with Y = [Y i,j : i ∈ J , j = 1, . . ., n i•• ] the set of observations.If f and H are conjugate, the Chinese Restaurant Franchise Sampler of Teh et al. (2006) can be generalized and a new sampler can be obtained for our class of models. Figure 3 : Figure3: Top-left: Log-posterior predictive score for the right tail (above the 97.5% quantile of the true distribution).Top-right: posterior mean when the number of customers increases for HGPYP (solid) and HPYP (dashed).Bottom: posterior number of clusters for the HPYP (left) and HGPYP (right).In this setting the true number of clusters is 11. Figure 4 : Figure 4: (a) Co-clustering matrix for the US (bottom left block) and EU (top right block) business cycles and cross-co-clustering (main diagonal blocks) between US and EU for the HPYP.(b) Posterior number of clusters.Total (b.1), marginal for US (b.2) and EU (b.3) and common (b.4) for the HPYP (solid line) and for the HGPYP (dashed line). t is sampled from Git , we set ξ * i,c = φ d , let d ic = d for chosen d and increment m •c by one.If ξ i,t is sampled from H 0 , then we increment D by one and set φ D = ξ it , ξ * i,c = ξ i,t and d ic = D.In both cases, we increment m •• by one.(S3) Having sampled ξ i,t with t = n i•• in the previous Step, set i : b n converges a.s. to a strictly positive random variable D i•• , d i,c : i ∈ J , and c = 1, ..., m i• , φ d : d ∈ D,where Y i,j is the j-th observation in the i-th group, n i•• = n i is the total number of observations in the i-th group, and J = {1, . .., I} is the set of group indexes.The latent variable c i,j denotes the table at which the j-th "customer" of "restaurant" i sits and d i,c the index of the "dish" served at table c in restaurant i.The random variables φ d are the "dishes" and D = {d : d = d i,c for some i ∈ J and c ∈ {1, . .., m i• }} is the set of indexes of the served dishes.Let us assume that the distribution H of the atoms φ d s has density h and the observations Y i,j have a kernel density f (•|•), then our hierarchical infinite mixture model is Table 1 : Model accuracy for seven HSSMs
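As a complement to the accuracy measures reported in Table 1, the following minimal sketch computes the estimated co-clustering matrix and the CN and CN* errors, assuming the Gibbs allocation draws d^(m) are stored as rows of an integer array. Including or excluding the diagonal pairs is immaterial, since both the true and the estimated matrices equal one there.

```python
import numpy as np

def coclustering_error(draws, d_true):
    """Estimated co-clustering matrix P_lk and the CN / CN* accuracy measures.

    draws : (M, n) integer array; row m is the allocation vector d^(m) at Gibbs sweep m.
    d_true: length-n integer array with the true allocations d_0.
    """
    draws = np.asarray(draws)
    P = (draws[:, :, None] == draws[:, None, :]).mean(axis=0)    # pairwise P_lk
    T = (np.asarray(d_true)[:, None] == np.asarray(d_true)[None, :]).astype(float)
    off = ~np.eye(T.shape[0], dtype=bool)          # drop the trivial diagonal
    cn = np.abs(T - P)[off].mean()                 # average L1 distance
    cn_star = np.abs(T - (P > 0.5))[off].mean()    # Hamming-norm version
    return P, cn, cn_star

# toy usage: 3 Gibbs draws for 4 observations whose true clustering is (0,0,1,1)
P, cn, cn_star = coclustering_error([[0, 0, 1, 1], [0, 0, 0, 1], [2, 2, 1, 1]],
                                    [0, 0, 1, 1])
print(cn, cn_star)
```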
GOOGLE TRANSLATE PERFORMANCE IN TRANSLATING ENGLISH PASSIVE VOICE INTO INDONESIAN : A scant number of Google Translate users and researchers continue to be skeptical of the current Google Translate 's performance as a machine translation tool. As English passive voice translation often brings problems, especially when translated into Indonesian which rich of affixes, this study works to analyze the way Google Translate (MT) translates English passive voice into Indonesian and to investigate whether Google Translate (MT) can do modulation. The data in this research were in the form of clauses and sentences with passive voice taken from corpus data. It included 497 news articles from the online news platform ‘GlobalVoices,' which were processed with AntConc 3.5.8 software. The data in this research were analyzed quantitatively and qualitatively to achieve broad objectives, depth of understanding, and the corroboration. Meanwhile, the comparative methods were used to analyze both source and target texts. Through the cautious process of collecting and analyzing the data, the results showed that (1) GT (via NMT) was able to translate the English passive voice by distinguishing morphological changes in Indonesian passive voice (2) GT was able to modulate English passive voice into Indonesian base verbs and Indonesian active voice. INTRODUCTION As translation, both commercial and literary, is one of several activities that have been expanding in today's globalized world (Hatim & Munday, 2004), recently, humans are not the only ones who can be trusted to translate texts. Just as technology is constantly being developed, altered, and improved; machine translation (MT) arose and became one of the options for translating text. One of the most popular machine translations is Google Translate which was developed by Google Inc. By using Google Statistical Machine Translation (GSMT) in 2006, it is possible for everyone to translate a huge amount of data by just a single click away (Garcia, 2009). Unfortunately, SMT raised various problems in translation. Specific errors on translating Source Text (ST) to Target Text (TT) are hard to predict and fix by users. Consequently, machine translation was judged to be less acceptable and inaccurate in its early days (Komeili et al., 2011). Nevertheless, In late 2016, Google Translate then adopted Neural Machine Translation (NMT) which is called as Google Neural Machine Translation (GNMT). Compared with SMT, GNMT is capable of fixing translation difficulties and threats by providing a more fluent and legible translation by handling morphology and syntax five times better than SMT systems (Ramesh et al., 2021). Thus, GNMT translations were claimed to be more precise and fluent compared to translations of SMT systems. In addition, Bahdanau et al., (2014) stated that: "Unlike the traditional phrase-based translation system which consists of many small sub-components that are tuned separately, neural machine translation attempts to build and train a single, large neural network that reads a sentence and outputs a correct translation." Google Translate, then, has a number of flaws by supporting approximately 109 languages at various levels as of April 2021. Because of its development, Google Translate has been used by over 500 million people around the world with 100 billion words translated every day in 103 languages, when human translators were judged to be more expensive and took a lot of time (Aiken, 2019). 
Since most people begin to use Google Translate frequently, a scant number of scholars then became skeptical and conducted additional research focusing on Google Translate (Sun, 2014) to test its performance and accuracy. Amar (2017), for example, investigated the accuracy level of Google Translate especially in translating English text into Indonesian based on language error analysis and the use of equivalence strategy. He concluded that Google Translate can only translate English source text into Indonesian correctly if the appropriate equivalence translation strategy is just literal or transposition. In the same manner, Sutrisno (2020) examines the accuracy as well as the shortcomings of Google Translate in the context of English to Indonesian translations in order to critically engage the complaints made by Google users. Both the original sentences and their translated versions were analyzed using a sentence pair matrix to determine the machine's failings and areas for improvement. Through his research, he found that Google Translate has the capability to translate English to Indonesian sentences with an accuracy level reaching 60.37. Whereas Sianipar & Sajarwa (2021), by comparing the translation of passive voice in Indonesian research abstracts into English conducted by human translation vs. machine translation (Google Translate), they concluded that human translation is better than machine translation in translating English passive voice into Indonesian. All three studies show that Google Translate still has some drawbacks when it comes to translating certain texts. However, as Google Translate continues to develop, we continue to verify its performance by analyzing the way Google Translate translates English passive voice into Indonesian which were tested by using a news corpus data set. Meanwhile, since humans are very intensive to do modulation, this research also tends to investigate whether Google Translate can do the same thing to create natural translation. Thus, passive voice was used as the variable of this research since it was positioned as the most common structure used in the written discourse, especially in the news and scientific writings construction (Keenan, 1985). Moreover, every language has a unique and different characteristic. In contrast to English active voice which is easy to translate, English passive voice is often difficult to translate into Indonesian due to Indonesian having some different affixes to use in passive construction. Besides, the roles between actor and agent which are called subject positions in the generative grammar, also need to be considered in the sentence construction. By using this approach, this research will be beneficial in looking at technological developments from a translation point of view. Google Translate: How Does It Work? Google Translate is a well-known free online translation engine that can translate not only numerous words, but also phrases, text fragments, and entire web pages (Karami, 2014). Along with Google Translate prominent heights, it is expanding to over 100 languages today and is used by most internet users around the world for translating texts (Koehn, 2020). In 2016, Google Inc. expanded their quality and released a Neural Machine Translation (NMT) system, which has the potential to address many of the shortcomings of traditional SMT. End-to-end Neural Machine Translation (NMT) has become the new standard method in actual machine translation systems in recent years (Tan et al., 2020). 
Google NMT can also solve the notoriously difficult language pair translation problem by taking the context of a word into account rather than simply translating each individual word. Its system can reduce translation errors compared to Google SMT's phrase-based translation. However, Google NMT can still make an amount of errors in languages with productive word creation, such as compounding and agglutination (Sennrich et al., 2015). This problem was used as the basis of our research and investigation on the translation of English into Indonesian using present Google NMT. English Passive Voice The use of passive voice is very common in English sentences and texts as one of the most fundamental elements of the English language. When the doer of an action is unknown or insignificant, or when the focus is "on the experiment or process being described", the passive voice is utilized (Hacker, 2003). In line with the definition, according to Apandi & Islami (2018), passive voice is used when the focus of the sentence is the outcome or the person affected by the action and it is not important or known who or what is performing the action. Furthermore, the passive voice is a grammatical form in which a head noun serving as the subject of a phrase, clause, or verb is impacted or acted upon by the verb's action (Scholastica, 2018). In passive voice, there are three markers: be, -ed, and by, each with its own meaning and significance. Passive with agent and passive without agent, or agentive passive and non-agentive passive, are the two most common types of passive. The agent will not appear in the agentive passive, but will be implied in the context. The rules and usage of the passive voice differ between languages. In English, the passive voice can be constructed in many different forms. The short dynamic be-passive pattern with '[be-verb+Past Participles (Verb 3)]' construction is the most fundamental passive pattern in English grammatical structure (Biber et al., 1999), e.g. "is stolen, was caught, were written", etc.). Nevertheless, sometimes English only used past participles to mark passive voices, e.g."The book written by the lecturer is now in the well-known publisher". In this case, the passive voice used in the sentence does not use the "be-verb" formula, but simply by using past participle verbs only used the . Another feature of the passive in English is the use of "by phrases" at the end of the clause (for example: "The book is written by my father"). Indonesian Passive Voice In addition, Indonesians regularly utilize passive voice as well. According to Alwi et al. (2003), there are several ways to construct passive voice in Indonesia, those are: 1) by adding prefix di-into the base verb; 2) by adding prefix ter-into the base verb; and 3) by using the verb base itself. The first and most common method of forming passive constructions in Indonesian is to use a base verb combined with the prefix di-with . This construction is commonly used if the subject/agent is a noun or noun phrase. Furthermore, if the action is unintended, the prefix ter-is used instead of di-. The construction is [prefix ter-+verb base], e.g termakan, tertabrak, etc, and [prefix ter-+ base verb + suffix -i, nya, kan], e.g tertuliskan (is written), terbawanya (is bought), terwakili (is represented), such as in the sentence "The girl was hit by a car"; the translation became " Gadis itu tertabrak mobil". Based on the example, it means that the car accidentally hits someone. 
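A rough illustration of how the short dynamic be-passive could be matched automatically is sketched below. This regex is only an approximation, and not the procedure used in the study: it relies on a small, extendable list of irregular participles, it may flag adjectival -ed forms, and it misses the bare past-participle passives mentioned above.

```python
import re

# A form of "be" followed by a past participle (regular -ed forms plus a small,
# extendable list of irregular participles), optionally followed by a "by" agent.
IRREGULAR = r"written|stolen|caught|taken|given|made|found|seen|told|known"
PASSIVE = re.compile(
    rf"\b(am|is|are|was|were|be|been|being)\s+(\w+ed|{IRREGULAR})\b(\s+by\b)?",
    re.IGNORECASE,
)

def find_passives(sentence):
    """Return (be-verb, participle, has_by_agent) triples found in a sentence."""
    return [(m.group(1), m.group(2), m.group(3) is not None)
            for m in PASSIVE.finditer(sentence)]

print(find_passives("The book is written by my father."))
print(find_passives("It was just ten days ago that part of the country "
                    "was submerged by waters."))
```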
Modulation: An Overview Translating a text is not only a matter of finding the relevant words in the target language and applying the correct target language grammar when translating a text (Putranti, 2018), it is also a matter of generating the most natural translation of the source language message into the target language (Pinchuck in Machali, 2009;Nida & Taber, 1982). Nevertheless, creating the closest natural equivalent was not easy to handle. One of translation techniques which can be applied by translators is known as modulation (Catford, 1965;Newmark, 1988;Vinay & Darbelnet, 1955). In our research, we focused our investigation on passive voice which caused modulation. METHOD In this research, mixed methods were used to obtain breadth and depth of understanding, as well as corroboration. According to Nassaji (2015), qualitative data can also be analyzed quantitatively. This occurs when the researcher examines qualitative data to identify relevant themes and ideas before converting them to numerical data for further comparison and evaluation. The quantitative method by using descriptive statistics in this research was used to reveal (1) the frequency as well as the number of passive voice in the news corpus; (2) the number of affixations in the target texts (Indonesian); and (3) the number of modulations. Meanwhile, descriptive deals with qualitative method was used to describe the patterns of the translation of English passive voice into Indonesian, as well as modulation techniques conducted by Google Translate. The data in this research were in the form of clauses and sentences containing passive voice structure taken from news corpus data which were downloaded from Parallel Global Voices (http://nlp.ilsp.gr/pgv/). The corpus data for this research was accessed in April 12 th , 2021 containing 497 news articles with a total 17,069 sentences, 665,664-word tokens; and 36,763-word types. All the data, then, were manually entered into Google Translate to serve output data in Indonesian language. Figure 1. The translation of the sentence "It was just ten days ago that part of the country was submerged by waters" using Google Translate A content analysis method was used to select the data from the corpus since the data were in the form of texts. Meanwhile, purposive sampling was applied focusing on the emergence of passive voice in the SL's data sets. Then, the data were input into Google Translate from April to May 2021 and were classified based on their morphological changes and the probability of modulation. To collect the data, AntConc 3.5.8 was used as the data instrument which was downloaded from https://www.laurenceanthony.net/software/antconc/. It was frequently utilized by researchers all around the world for corpus-based research tools since it was freely accessible to scholars. The overall distribution of the research items was displayed under "Concordance Plot" during the inputting process, and the particular contexts of each retrieved word were shown via "File Views". However, we discovered that AntConc 3.5.8 processed and counted some words or phrases that have the same formula as the Indonesian passive structure (such as possessive, i.g, dirinya (himself). To facilitate the analysis, the corpus data were reduced and eliminated by selecting and focusing only on sentences containing English passive voices, as well as data that were indicated to contain modulation. Using AntConc 3.5.8, the data were reduced into 1,098 data of passive sentences with 1,550 passive verbs. 
As the basis in conducting this research, we stand on the theory that Indonesian is similar to English in terms of structure (S-V-O). Nevertheless, as stated by Sutrisno (2020), there are certain rules in both languages that may cause interference, such as: (1) Indonesian does not have tenses; (2) During the analyzing process, the entire data set was analyzed and evaluated using Sudaryanto's (1993) comparative methods. Any differences and similarity in the source texts and the target texts were fully observed. As the results, two strands of research (both quantitative and qualitative) were served at the interpretation stage or discussion. Both quantitative and qualitative tend to complement each other and receive equal emphasis in the findings. Investigator triangulation by repeatedly checking the data, theoretical triangulation by linking back to some relevant theories, and methodological triangulation by using appropriate methods were employed in order to achieve credibility, dependability, transferability and conformability (Moleong, 2001). Triangulation in this research was used since subjectivity becomes one of problems during collecting and analyzing the data in the form of language, social and humanities approaches. Findings Through the cautious process of collecting and analyzing the data, we focused our results and discussions on the morphological changes as the impact of translation Discussion We segregated the discussion into four sections which were dealing with (1) The Translation of English Passive Voice into Indonesian Passive Voice Through the investigation, there are roughly two basic forms of passive voice used in Indonesia presented by Google Translate. The first is distinguished by the prefix di-, while the second is distinguished by the prefix ter-. The results of inputting and translating English passive voice into Indonesian passive voice are described below. The Translation of English Passive voice into Indonesian Passive Voice, marked by Prefix di-. From the analyses of 1,297 data of passive verbs, we found that overall data were effectively translated from the English passive voice into the Indonesian passive voice marked by prefix di-. As our consideration, we found that each verb construction is most fully transferred and there is no specific change in terms of meanings. Nevertheless, we neglected the terms of accuracy in this research along with our research limitations. Furthermore, we also identified other Indonesian passive voices formed with the prefix [di-+root verb+suffix (-i,-kan)]. This following table showed the tendency of the translation of English passive voice into Indonesian passive voice, basically marked by prefix di-. the prefix di-is more appropriate. Therefore, it would be better to translate it as "dituduh" Table 1. English Passive Voice (SL) and Their Translation into Indonesia Passive Voice (TL) Using [prefix di-+root verb], and [prefix di-+root verb+suffix -i, -kan] Affix(es) Source Language(s) Target Language(s) (Google instead of "tertuduh [ter-+ root verb]". Relating to aspect, the modal word "have" in the source language is directly translated into the adverb "telah" in the target language. In line with what Alwi et al. (2003) said, adverbs in Indonesian can be used as markers of aspect, modality, quantity, and quality of the categories of verbs, adjectives, numerals, and other adverbs. 
Meanwhile, the adverb "telah" in excerpt (1) is a perfective aspect marker which indicates that the event has already started in the past and continues in the present. This translation process also proves that Google Translate has been using literal translation while translating passive voice with aspects. Then, from the excerpt (2), the phrase "was not asked [to be (past)+not+V3]", (negative passive) is also found to be translated into Indonesian negative passive construction " tidak ditanyai [negation+di-+root verb+-i]". We detected that Google Translate has a tendency to translate "was not asked" into ditanyai instead of ditanya because Indonesian suffix -i can be used to change the form of a verb from intransitive to its transitive meaning. Then, in excerpt (3), the English passive verb "are reported [to be+V3]" is translated as "dilaporkan [di-+root verb+-kan]" using confix-affixes that function to form passive verbs. It has functioned to state the causative meaning of causing something to happen, and stating the meaning of an act done for someone else. So, in this case, "dilaporkan" means that the event is reported by other people to someone else. At a glance, our findings show that Google Translate's translation of passive voice has improved significantly since its inception. Because Google Translate is educated on hundreds of millions of pre-translated words, phrases, and even material from the internet, it will operate and give the more generic translation if one version of passive voice exists several times. As a result, Google Translate's meaning is solely determined by the program's internal logic. The Translation of English Passive Voice into Indonesian Passive Voice, Marked by Prefix ter- When comparing Indonesian passive voice with prefix di-and prefix ter-, it is clear that the use of prefix ter-implies such unintended factors. The use of prefix terimplies that the action is done unintentionally. The passive voice that is translated into Indonesian with the prefix ter-are shown in Table 2. Table 2. English passive voice (SL) and their translation into Indonesia passive voice (TL) using [prefix ter-+root verb], and [prefix ter-+root verb+suffix -i, -kan, -nya] Affix(es) Source Language(s) Target Language(s) (Google Translate) ter-4) That way, object both persons will be spared from having to go through renewing or not renewing the expirable marriage license. Passive Construction: Modal Auxilary (will) + be + past participle (spared) The overall data showed that when the meaning of a passive voice verb in Indonesian includes an unintentional action, the prefix ter-is used instead of di-. As seen in the excerpt (4), GT has translated the phrase "will be spared [will+be+V3]" into "akan terhindar [modal (akan)+ter-+root verb]". It showed that since the action represented in the verb "spared" is done unintentionally, the use of the prefix ter-is more appropriate. In this case, the word "will" in the source language is translated into "akan" in the target language. The word "akan" in Indonesian is an adverb that can function as both an aspect and a modality marker. In Indonesian, the adverb "akan" was used as an aspect marker to indicate that the event would take place in the future. Meanwhile, in excerpts (5) and (6) (6) is also one of the various confix-affixes functioning to form passive verbs. 
This Indonesian passive translation "terwakili and terpinggirkan" possessed as passive stative verb which expresses the stative condition that something or someone is involved in a certain situation. Relying on the findings, the suffixes -i and -kan serve distinct functions based on the context of the sentences. Suffix -kan serves as a causal function, whilst suffix -i serves as a repetitious function. The Translation of English Passive voice into Indonesian Base Verbs Some English passive voice also were translated into Indonesian root verbs (without prefix di-, and ter-). Based on the analysis, we found that there is a tendency of omitting affixes in both Indonesian passive constructions. These phenomena are in line with what Alwi et al. (2003) has said that the sentences which use the root based in Indonesian are essentially as one way in expressing the passive voice in Indonesian. The examples of the data were presented in Table 3. Passive construction using root verb is commonly used in Indonesia when the verb (Prefix me-/meN-+root verb+suffix) in active voice is changed to passive by omitting its prefix and suffix, e.g Active sentence such as Saya sudah mencuci mobil [I have washed the car] change in to Passive sentence such as Mobil sudah saya cuci [The car has been washed by me]. There is a shift in perspective in this example because it is unusual to say "mobil sudah dicuci oleh saya" in Indonesia. As a result, to make the sentence construction sound natural, Indonesian use the passive construction without the prefix diand ter-. We discovered that Google Translate was able to recognize passive constructions as well by shifting point of view from ST to TT (Vinay & Darbelnet, 1955). Took a look back into the examples, excerpts (8) and (9) show that the phrase "is published [to be (present) +V3]" is translated into "terbit [root verb]" and "was leaked [to be (present) +V3]" is translated into "bocor [root verb]". This kind of shifting was called a modulation technique. The Translation of English Passive Voice into Indonesian Active Voice According to the data, Google Translate intends to translate the English passive voice into Indonesian active voice. Newmark (1988) and Vinay & Darbelnet (1955) called the changes from passive form into active form as another kind of modulation. Here, Vinay & Darbelnet (1955) added modulation in order to produce the natural translation. Hence, the modulation technique is considered to be the best option to hold the original meaning in the source language. Because Indonesian has a specific word order, this issue frequently occurs in English to Indonesian translations which were shown in Table 4. Based on our analysis, we found that the active voice is characterized with the verb preceded with the prefix ber-, me-and meN-. Here, the passive phrase "has been motivated [has+been+V3]" is translated into "bermotif [ber-+root verb]" which belongs to an active verb in Indonesian. The verb "bermotif" sounds more natural rather than "dimotifkan". The other example, the phrase "is eaten [to be (present)+V3]" also translated by Google Translate into "memakannya" [me-+root verb (active)+-nya]. This case showed that the role of context also influenced the choice of translation. It is more acceptable or natural to say "bagaimana memakannya'' instead of "bagaimana itu dimakan". Furthermore, the phrase "have been displaced [have+been+V3]" is translated into "telah mengungsi [meN-/meng-+ ungsi]" which belongs to the Indonesian conditional allomorph of MeN-. 
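To summarize the translation outcomes discussed so far, the sketch below assigns a rough affix-based label to each Indonesian verb produced by the machine translation. It is a purely surface-level heuristic: a form such as "terbit", whose ter- belongs to the root, is deliberately kept in the examples to show where a real morphological analyser would be required.

```python
def classify_affix(verb):
    """Rough affix-based label for an Indonesian verb produced by MT.

    Only inspects surface prefixes/suffixes, so roots that happen to begin with
    'ter', 'di', 'ber' or 'me' will be mislabelled; reliable counts would need a
    morphological analyser.
    """
    v = verb.lower()
    for prefix in ("di", "ter"):
        if v.startswith(prefix):
            for suffix in ("kan", "i", "nya"):
                if v.endswith(suffix):
                    return f"{prefix}- ... -{suffix} (passive)"
            return f"{prefix}- (passive)"
    if v.startswith("ber") or v.startswith("me"):
        return "ber-/meN- (active)"
    return "base verb"

for v in ["dituduh", "ditanyai", "dilaporkan", "terhindar", "terpinggirkan",
          "terbit", "bocor", "bermotif", "memakannya", "mengungsi"]:
    print(v, "->", classify_affix(v))
```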
Overall, Google Translate tends to convert active translation into passive formulations because the action is carried out by the sentence's agent. The current study showed that Google Translate intends to use modulation to resolve some translation issues, particularly when producing natural translation. Conclusios From the depth analysis, we conclude that Google Translate using Neural Machine Translation (NMT) was able to translate English passive voice into Indonesian by distinguishing morphological changes in Indonesian passive voice through the use of affixes (such as the use di-, di-kan, di-i, ter-, ter-i, ter-kan, ter-nya). Furthermore, we also evaluate that today, Google Translate by the performance of NMT was able to modulate or change English passive voice into Indonesian active voices appropriately given their context by using some kinds of affixes, such as ber-, and me-, meN-or by the change the point of view using root (base) verbs in Indonesia passive constructions. Thus, our findings then were used as a follow up from the parliamentary findings which concluded that a statistical machine translation (SMT) does not yet have the capability of modulation. Suggestions Along with the rapid development and improvement of Google Translate, this means that the scope of this field of study is overly vague. Hence, this study provides an opportunity for future researchers to expand further research on the performance of Google Translate over time, for instance; seeing the accuracy level of passive voice translation conducted by Google Translate or by testing using other types of sentences. It is deemed essential to test Google Translate's accuracy in translating English passive voice into Indonesian using accuracy evaluation methods such as manual or automatic evaluation (e.g., BLEU (Bilingual Evaluation Understudy) scores (Aiken, 2019;Ramesh et al., 2021), CompareMT (Neubig et al., 2019), MTComparEval, Memsource criteria (see www.memsource.com), translation closeness metric, and etc.).
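As a pointer for the accuracy evaluation suggested above, a sentence-level BLEU score can be computed, for example, with NLTK. The tokenisation below is naive and the reference/hypothesis pair is only illustrative, so this is a sketch rather than a recommended evaluation pipeline; corpus-level tools such as sacreBLEU would normally be preferred.

```python
# Requires: pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["gadis", "itu", "tertabrak", "mobil"]]          # human reference(s)
hypothesis = ["gadis", "itu", "ditabrak", "oleh", "mobil"]    # tokenised MT output

smooth = SmoothingFunction().method1   # avoid zero scores on short sentences
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(f"sentence-level BLEU: {score:.3f}")
```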
AMFlow: a Mathematica package for Feynman integrals computation via Auxiliary Mass Flow AMFlow is a Mathematica package to numerically compute dimensionally regularized Feynman integrals via the recently proposed auxiliary mass flow method. In this framework, integrals are treated as functions of an auxiliary mass parameter and their results can be obtained by constructing and solving differential systems with respect to this parameter, in an automatic way. The usage of this package is described in detail through an explicit example of double-box family involved in two-loop $t\bar{t}$ hadroproduction. There are many methods on the market to compute master integrals, such as: sector decomposition [22,23,24,25,26,27,28]; Mellin-Barnes representation [29,30,31,32,33,34]; difference equations [2,35]; traditional differential equations [36,37,38,39,40,41,42,43,44,45], by setting up and solving differential equations satisfied by master integrals with respect to kinematic variables s; and others [46,47,48,49,50,51,52,53,54,55]. The sector decomposition method and Mellin-Barnes representation method can be applied in principle to any integral. However, it is well known that these methods, which need to calculate multidimensional integrations directly, are very inefficient to obtain high-precision results. Difference equations and differential equations can be very efficient, but they depend on integrals reduction to set up relevant equations, which may become very nontrivial for multiloop multiscale problems. Besides, usually there is no systematic way to obtain boundary conditions for these two methods. The auxiliary mass flow method [56,57] is also a kind of differential equations method, which calculates Feynman integrals by setting up and solving differential equations with respect to an auxiliary mass term η. This method has many advantages. First, it is systematic, because boundary conditions at η → ∞ can be obtained iteratively [57,58]. Second, as only ordinary differential equations are involved, high-precision results can be efficiently obtained [59]. Third, integrals containing linear propagators and phase-space integrations can all be calculated [60,61]. Finally, integrals reduction to set up differential equations with respective to η is usually easier than to set up differential equations with respective to s [57]. Therefore, as long as reduction tools are powerful enough to set up differential equations with respect to η, auxiliary mass flow can always provide high-precision result efficiently. in Ref. [57]). It is thus valuable for high-precision phenomenological studies. This paper aims to provide a public implementation of this method, including the automation of the fully iterative strategy and a high-performance numerical solver for ordinary differential equations, so that it can be more widely used for phenomenological studies. Auxiliary mass flow In this section we give a review of the auxiliary mass flow method, concentrating on the computation of normal loop integrals [56,57]. The extensions to compute integrals containing linear propagators or phase-space integrations can be found in Refs. [60,61]. The plain method Let us consider a dimensionally regularized Feynman integral family defined by where s is the list of all kinematic variables including Mandelstam variables and nonzero masses of particles, D = 4 − 2 is the spacetime dimension, L is the number of loops, i are loop momenta, D 1 , . . . , D K are inverse propagators, D K+1 , . . . 
, D N are irreducible scalar products introduced for completeness, ν 1 , . . . , ν K can be any integers, and ν K+1 , . . . , ν N can only be nonpositive integers. We next introduce an auxiliary integral family by inserting an auxiliary parameter η to each propagator of (1) Then physical results can be recovered by taking the following limit This auxiliary family, although seems to be more complicated than the original one, becomes rather simple as η approaches the infinity. This can be understood through region analysis [77,78]. More specifically, when |η| is very large, only the integration region with µ i ∼ O( √ η) can contribute, and thus every propagator can be expanded like where (ν) i ≡ Γ(ν + i)/Γ(ν) is the Pochhammer symbol. After all such kinds of expansion, what we get are combinations of equal-mass vacuum integrals, which have been intensively studied in literature [79,80,81,82,83,84]. As a result, auxiliary integrals I aux ( ν, s, , η) in the neighborhood of η = ∞ can be easily obtained and what remains is to perform analytic continuation (auxiliary mass flow ) of them to recover physical results. As auxiliary integrals can be expressed as linear combination of master integrals using integrals reduction, we only need to perform analytic continuation for master integrals, denoted by the vector I aux ( s, , η). Integrals reduction can also setup differential equations for master integrals, which look like For any fixed generic kinematic configuration s = s 0 1 , the above differential equations can be numerically solved by using series expansions, similar to numerically solving differential equations with respective to kinematic variables [85,86], which can realize the flow of η from the boundary at ∞ to physical value at i0 − . Before describing how to solve the above differential equations, it is helpful to know some basic features of these auxiliary integrals as analytic functions of η. According to Cutkosky rules [87], integrals can be only real-valued on the real axis when η > η th , where η th is the largest threshold for the corresponding process. Thus the branch cut of the auxiliary integral can be defined as the straight line connecting η = −∞ and η = η th along the real axis, such that the Schwarz reflection principle holds everywhere except the branch cut (for real s and ). Now we can describe our strategy for analytic continuations, or solving differential equations. We first need to define a path for the analytic continuations connecting η = ∞ and η = i0 − , characterized by a list of regular points {η 0 , η 1 , . . . , η l } on which we will perform series expansions in order. A typical choice is shown in Fig. 1, where the larger (smaller) circle is defined as smallest (largest) circle centered at η = 0 that contains all singularities (no singularity) except η = ∞(η = 0). The choice of the regular points should satisfy the following rules: i) η 0 is outside of the larger circle; ii) η l is inside the smaller circle; iii) the distance between η i+1 and η i is smaller than the convergence radius of the series expansions centered at η i . Then the flow of auxiliary mass can be divided into three main stages: i) expanding the integrals around η = ∞ and estimating at η = η 0 ; ii) expanding at η = η i and estimating at η = η i+1 for i = 0, . . . , l − 1; iii) expanding formally at η = 0 and matching at η = η l to determine the unknown coefficients in the formal asymptotic series. 
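As an aside, the series transport of stage ii can be illustrated on a toy scalar problem. The ODE, its singularities at {0, 1} and the path chosen below are arbitrary illustrative choices and not one of the AMFlow systems, whose differential equations are matrix-valued and whose boundary values at η → ∞ come from vacuum integrals; the toy case has a closed-form solution, so the stepping can be checked directly.

```python
import numpy as np

# Toy illustration of stage ii: transport the solution of y'(eta) = a(eta) * y(eta)
# between regular points eta_0 -> eta_1 -> eta_2 by Taylor expansion.  Here
# a(eta) = 1/eta - 1/(eta - 1), so the exact solution y(eta) = eta/(eta - 1) is known.
POLES = [(0.0, 1.0), (1.0, -1.0)]             # (location s, residue r) of a(eta)

def a_taylor(eta_i, order):
    """Taylor coefficients of a(eta) = sum_poles r/(eta - s) around eta = eta_i."""
    n = np.arange(order + 1)
    return sum(r * (-1.0) ** n / (eta_i - s) ** (n + 1) for s, r in POLES)

def taylor_step(y_i, eta_i, eta_next, order=50):
    """Use y_{n+1} = (sum_{k<=n} a_k y_{n-k}) / (n+1), then evaluate the series."""
    a = a_taylor(eta_i, order)
    y = np.zeros(order + 1, dtype=complex)
    y[0] = y_i
    for n in range(order):
        y[n + 1] = np.dot(a[: n + 1], y[n::-1]) / (n + 1)
    return np.polyval(y[::-1], eta_next - eta_i)

exact = lambda eta: eta / (eta - 1.0)         # normalisation chosen as C = 1
path = [-2.0j, -0.8j, -0.3j]                  # regular points along the imaginary axis
y = exact(path[0])                            # plays the role of the boundary value
for eta_i, eta_next in zip(path, path[1:]):
    y = taylor_step(y, eta_i, eta_next)
print(abs(y - exact(path[-1])))               # small: the transport reproduces y
```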
After these steps, we are able to take the limit η → i0⁻ of the expansion at η = 0 to obtain the physical results. A simple example is helpful to explain the basic ideas of performing the expansions. For more technical details, see e.g. Refs. [56,59]. Let us consider a massless one-loop two-point integral family. There is one master integral, I(1, 1, ε), whose analytic result, Eq. (8), is known in closed form. In the aforementioned auxiliary mass flow method, we first introduce the auxiliary mass parameter to obtain the auxiliary family. Now there are two master integrals. The boundary condition for the first master integral (a single-mass vacuum integral) can be computed fully analytically, and the second one can be expanded near η = ∞, giving its boundary behaviour, Eq. (13). Next we define the list of regular points at which to perform expansions. We can read off directly from the differential equations (11) that the singularities are 0, 1/4 and ∞. As a result, the list of regular points can be chosen as η_0 = −i/2, η_1 = −i/4 and η_2 = −i/8. As the first master integral in this example has been solved completely, we consider only the second one. Near η = ∞, this integral can be expanded as a power series with coefficients a_n(ε), Eq. (14), which is a natural generalization of its boundary condition (13). We then substitute the expansions (12) and (14) into the differential equations (11), and what comes out is a system of recurrence relations which can be used to express a_n(ε) in terms of a_0(ε), the boundary input determined by Eq. (13). Some of the resulting coefficients are given in the following, all of which are real-valued if ε is real. The expansion (14) then enables us to evaluate I^aux(1, 1, ε, η_0) by truncating the series at sufficiently high order. The expansion near the regular point η = η_0 is a Taylor expansion, with coefficients b_n(ε). We again substitute this expansion, along with the value of the first master integral, into the differential equations (11) and obtain a system of recurrence relations, which can be used to reduce b_n(ε) to b_0(ε), the value of I^aux(1, 1, ε, η_0) just obtained. Then we can evaluate I^aux(1, 1, ε, η_1) using the expansion (18) near η = η_0. Similarly, we can expand near η = η_1 and obtain the value at η = η_2. At the last step, we need to consider the expansion near η = 0 and match at η = η_2. The general form of this expansion, Eq. (22), contains two parts: the part with coefficients c_n(ε) comes from the homogeneous equation, and the part with coefficients d_n(ε) comes from the inhomogeneous equation (the sub-topology). By substituting the expansions (12) and (22) into the differential equations (11), we can obtain two sets of recurrence relations, which can be used to reduce all c_n(ε) to c_0(ε) and to determine all d_n(ε), respectively; examples are given in Eqs. (23) and (24). We find there is actually only one unknown parameter, c_0(ε), which can be determined by matching at η = η_2. By substituting the value at η = η_2, Eq. (21), and the coefficients (23) and (24) into the series expansion (22), we can solve the resulting linear equation to obtain c_0(ε). After computing these expansions, we can finally take the physical limit η → i0⁻ in the expansion (22) near η = 0. Note that in dimensional regularization we have η^(bε) → 0 as η → i0⁻ for any nonzero b. So what remains in this limit is just the leading term of the Taylor part, c_0(ε), which agrees with the analytic result (8). Iterative strategy One interesting phenomenon in the previous example is that the number of master integrals increases after introducing η. As a result, it can be expected that, for much more complicated problems, the introduction of η may greatly increase the number of master integrals, such that the differential equations (5) cannot be set up in reasonable time with current reduction techniques.
To overcome this difficulty, in Ref. [57] we proposed applying the auxiliary mass flow method iteratively, to reduce the number of master integrals, and thus the computational cost, to a reasonable level. The key observation is that the number of master integrals can be reduced if η is introduced into fewer propagators. For example, for the two-loop five-point massless double-pentagon integral family with 108 master integrals shown in Fig. 2, we obtain 476 master integrals if η is introduced into all propagators ("all" mode), 319 master integrals for propagators 1-6 ("loop" mode), 233 master integrals for propagators 4-6 ("branch" mode), and, in the best case, 176 master integrals for propagator 5 ("propagator" mode). For topologies where independent internal masses exist, we can do even better: we can simply treat these masses as η and thus not introduce any extra mass scale ("mass" mode). Because the "mass" and "propagator" modes introduce fewer extra master integrals than the other modes in general, they usually perform better. However, as a trade-off, the boundary analysis is in general more complicated, due to more contributing integration regions as η → ∞. Following the general rules of region analysis [77,78], to obtain the boundary conditions we need to expand the integrands of the master integrals in each region. Specifically, in the all-large region (L...L), each propagator should be expanded around its vacuum-integral form, where κ = 1 or 0 depending on whether η is introduced into this propagator or not. We thus obtain vacuum integrals in this region. In the all-small region (S...S), only the propagators containing η should be expanded. In this case, we get integrals in a subfamily, with the propagators containing η contracted. In mixed regions, we need to decompose the loop momentum of each propagator as the sum of a large part ℓ_L and a small part ℓ_S. Then, if ℓ_L = 0 or κ = 0, we can expand the propagator; otherwise, no expansion is needed. After the expansion, the part containing large loop momenta and the part containing small loop momenta are decoupled, and we obtain factorized integrals. It turns out that the boundary integrals are usually still too complicated to evaluate directly. But this is fine, because they are already simpler than the original integrals, which means we can keep applying the above procedure to simplify the boundary integrals until they are all known to us. For example, the double-pentagon topology can be simplified iteratively with the "propagator" mode as shown in Fig. 3. In practice, we profit from a systematic definition of the terminal topologies. For example, we can always identify single-mass vacuum integrals as our terminals. In Ref. [58], single-mass vacuum integrals are further simplified in an iterative manner; in that way, we can simply identify the 0-loop integral (whose result is 1) as the terminal, which has proved to be even more convenient. Numerical fit A very useful trick implemented in AMFlow is the numerical fit. Consider a function f(x) which can be expanded near x = 0 as $f(x)=\sum_{n=0}^{\infty}f_{n}x^{n}$, and suppose our goal is to compute its estimate up to the k-th order with relative accuracy $E_{n}\le E$, where $\hat{f}_{n}$ is the estimate of $f_{n}$ and $E_{n}\equiv|\hat{f}_{n}/f_{n}-1|$. We propose to realize this by evaluating f(x) numerically at some sample points $x_0, x_1, \ldots, x_N$ (N ≥ k) near x = 0 and solving the system of linear equations $f(x_{j})=\sum_{n=0}^{k}\hat{f}_{n}x_{j}^{n}$, $j=0,\ldots,N$. In practice, we find two useful ways to choose these sample points: 1. $x_0,\ldots,x_N$ are chosen with $|x_0|\sim\cdots\sim|x_N|\sim r < R$; 2.
$x_0,\ldots,x_N$ are distributed uniformly on a circle centered at x = 0 with radius r < R, where R is the convergence radius of the expansion (31). If one of these choices is made and the precision p of the samples $f(x_0), f(x_1), \ldots, f(x_N)$ is sufficiently high, then the relative accuracy of $\hat{f}_{n}$ can be roughly estimated as in Eq. (35). It can be seen that the relative accuracy decreases (i.e., $E_n$ grows) as n increases. Thus, to achieve our precision goal we can set $E_k \sim E$, or, equivalently, fix the sampling precision accordingly. This also gives a constraint on the precision p of the samples, because we cannot expect a correct result if the precision of the samples is too low. The total time needed to obtain the estimate (32) is therefore $T\sim(N+1)\,t(p)$, where t(p) is the average time needed to compute one sample point with precision p, depending on both the nature of the problem and the numerical algorithm. Typically, in the framework of the power-series expansion method for solving differential equations of Feynman integrals, the dominant part of t(p) grows like a polynomial in the number of correct digits, i.e., $t(p)\sim p^{\alpha}$, where α is a positive number. Therefore, the total cost is an explicit function of N and p, which can be minimized by choosing them according to Eqs. (41) and (42). Next we discuss how to apply this trick to the computation of Feynman integrals. To obtain numerical results of master integrals as expansions in ε, we can solve the differential equations (5) for several numerical values of ε and then solve a system of linear equations like Eq. (34) for each master integral. We find the first way of choosing sample points, stated after Eq. (34), is better suited to this case, because we can always choose real values of ε to avoid potential complexities. Note that (41) and (42) only serve as a reference, and in practice one may need to make some adjustments to get satisfactory results. This trick brings several benefits. First, a much simpler code structure is made possible, because in this framework all integrals are simply pure numbers rather than expansions in ε, which is much easier to handle. Second, the problem of ε-order cancellations is completely resolved, because we never use truncated series in ε to express any integral throughout the calculation, which means we can always include more ε-orders simply by increasing the precision of the integrals. Finally, the computations at different sample points are completely independent and can thus be massively parallelized to reduce the waiting time. This trick can also be applied to obtain asymptotic expansions of Feynman integrals at a given phase-space point or a given value of η. Sometimes this becomes crucial, given that there are usually many removable singularities in differential equations: with the second way of choosing sample points, on a circle, removable singularities inside the circle can be ignored entirely. Usage Users can follow the guidance outlined in README.md to install this package properly on their devices. After that, the package can be loaded by the command Get["/path/to/AMFlow.m"]; AMFlow depends on external programs for integral reduction. To use different reducers, one can set the following option SetReductionOptions["IBPReducer" -> reducer]; where reducer can be any reducer whose interface with AMFlow has been built. Currently, three reducers based on Laporta's algorithm are available, namely "FiniteFlow+LiteRed" [6,14], "FIRE+LiteRed" [6,17] and "Kira" [18]. Other reducers can also be used once users have built their interfaces with AMFlow properly.
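Before turning to the input specification, the numerical fit described above can be illustrated by a short toy sketch; the function f and all numbers here are invented, and this is not AMFlow code:

f[x_] = Exp[x]/(1 - x);                           (* stand-in for an integral as a function of eps *)
k = 4; n = 8; r = 1/10;                           (* fit order, number of samples minus one, radius r < R = 1 *)
xs = Table[r Exp[2 Pi I j/(n + 1)], {j, 0, n}];   (* second choice: uniform samples on a circle *)
mat = Table[x^m, {x, xs}, {m, 0, k}];             (* linear system of the type of Eq. (34) *)
fit = LeastSquares[N[mat, 50], N[f /@ xs, 50]];   (* estimates of f_0, ..., f_k *)
fit - Table[SeriesCoefficient[f[x], {x, 0, m}], {m, 0, k}]   (* residual errors of the fit *)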
Input First, we should use the function AMFlowInfo to define globally used objects for the computation, as in AMFlowInfo[key] = obj; where key should be a string pre-defined in AMFlow and obj the corresponding object. The most frequently used pre-defined strings, and the meaning of their corresponding objects, are listed below: "Family" - the name of the integral family; "Loop" - a list of all loop momenta; "Leg" - a list of all external momenta; "Conservation" - a list of replacement rules for momentum conservation; "Replacement" - a list of complete replacement rules for scalar products among external legs; "Propagator" - a list of complete inverse propagators; "Numeric" - a list of replacement rules giving the numerical kinematic point at which to perform the computation. Automatic computation AMFlow provides a function named SolveIntegrals to perform automatic computations of Feynman integrals. The general usage of this function is auto = SolveIntegrals[target, goal, epsorder]; where target is a list of target integrals, goal represents the precision goal, and epsorder sets the length of the final expansions in eps, which start at order eps^(-2L) and end at order eps^(-2L+epsorder), with L the number of loops. This function will first reduce the target integrals to master integrals and then compute the master integrals using auxiliary mass flow. The output auto is a list of replacement rules from integrals to their values. Manual computation Although SolveIntegrals is designed for the most general purposes, there can be extreme cases where this function is unable to produce satisfactory results, so we introduce a more involved way to compute integrals in this section. We first use the function GenerateNumericalConfig to regenerate the parameters for numerical evaluation suggested by SolveIntegrals: {epslist, workingpre, xorder} = GenerateNumericalConfig[goal, epsorder]; where goal and epsorder have been defined in the previous section. The output is a triple: epslist is a list of suggested sample points of eps, workingpre is the suggested working precision, and xorder is the suggested truncation order of the power-series expansions. In principle, if these suggested parameters are used, we obtain exactly the same results as with SolveIntegrals. So, when SolveIntegrals fails to generate satisfactory results, users can define their own epslist, workingpre and xorder. We then tell the program our preferred parameters through the corresponding options; the relevant parameter can be set to −4 in this example. The output exp is just the list of expansions in eps for the target integrals. Other functions There are also other useful functions in AMFlow; here we give only a brief summary, and for more details users can investigate the corresponding examples provided in the folder examples. 1. Computation of phase-space integrations; see automatic_phasespace. 2. Computation of integrals with a specified Feynman prescription; see feynman_prescription. 3. Computation of asymptotic expansions using the differential-equations solver provided in AMFlow, either by traditional matching or by the numerical fit introduced in section 3; see differential_equation_solver. 4. Computation of integrals with complex kinematic parameters; see complex_kinematics. 5. Computation of integrals in arbitrary spacetime dimension; see spacetime_dimension. Summary of options AMFlow allows users to set global options through SetAMFOptions, SetReductionOptions and SetReducerOptions. Here we list and describe the most frequently used options; for the remaining options we refer users to the file options_summary.
The options are listed as option (default): description.

SetAMFOptions:
"D0" (default 4): a rational number D_0 such that the integrals will be computed with D = D_0 − 2ε.
"WorkingPre" (default 100): the working precision used in numerical computations, including solving the differential equations and fitting.
"XOrder" (default 100): the truncation order of the expansions when solving the differential equations.

SetReductionOptions:
"IBPReducer" (default "FiniteFlow+LiteRed"): the integration-by-parts reducer. Available reducers include "FiniteFlow+LiteRed", "Kira" and "FIRE+LiteRed".
"BlackBoxRank" (default 3): the suggested maximal rank of seed integrals when constructing IBP systems. If the maximal rank s of the target integrals is larger than the value of this option, the maximal rank of the seed integrals is adjusted to s internally.
"BlackBoxDot" (default 0): the suggested maximal dot of seed integrals when constructing IBP systems. If the maximal dot r of the target integrals is larger than the value of this option, the maximal dot of the seed integrals is adjusted to r internally.

SetReducerOptions (when "FiniteFlow+LiteRed" or "FIRE+LiteRed" is used):
"EMSymmetry" (default False): whether symmetries among external legs should be exploited when preparing the topology with LiteRed.

SetReducerOptions (when "Kira" is used):
"IntegralOrder" (default 5): a positive integer from 1 to 8 specifying the integral ordering for Kira; for details, see Ref. [88].
"ReductionMode" (default "Kira"): the reduction mode for Kira. Available modes include "Kira", "FireFly", "Mixed" and "NoFactorScan"; see Ref. [18] for details.

Summary and outlook In this paper, the Mathematica package AMFlow has been presented together with some explicit examples. We have highlighted the numerical-fit strategy, which overcomes many difficulties in numerically solving differential equations. The differential-equations solver provided in AMFlow is of high performance and very suitable for high-precision computations; in a later version of AMFlow, we will provide functions that give users more convenient access to this solver. With the auxiliary mass flow method, integral reduction becomes the only input needed for calculating Feynman integrals [58]. In the near future, a public implementation of the reduction method developed in Refs. [11,12] will be made available, which can typically reduce the time consumption by two orders of magnitude compared with other methods on the market. With this powerful reduction package and AMFlow, complicated integrals, such as those in Ref. [57], can be computed automatically.
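To close, the pieces above can be put together into a compact input sketch for the one-loop massless bubble used as the example in section 2. The AMFlowInfo keys and the SolveIntegrals signature are the documented ones; the j[family, indices] notation for target integrals and all numerical choices are assumptions made for illustration rather than verbatim package output.

Get["/path/to/AMFlow.m"];
SetReductionOptions["IBPReducer" -> "FiniteFlow+LiteRed"];

AMFlowInfo["Family"] = bubble;                  (* hypothetical family name *)
AMFlowInfo["Loop"] = {l};
AMFlowInfo["Leg"] = {p};
AMFlowInfo["Conservation"] = {};
AMFlowInfo["Replacement"] = {p^2 -> s};
AMFlowInfo["Propagator"] = {l^2, (l + p)^2};    (* massless bubble propagators *)
AMFlowInfo["Numeric"] = {s -> 1};               (* a generic kinematic point *)

(* precision goal of 30 digits; for L = 1 the expansion runs from eps^-2 to eps^2 *)
auto = SolveIntegrals[{j[bubble, 1, 1]}, 30, 4];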
The Southern Ocean marine ice record of the early historical, circum-Antarctic voyages of Cook and Bellingshausen. The circum-navigations of Cook's Second Voyage (1772-1775) and Bellingshausen (1819-1821) were attempts to find any great southern land mass poleward of ~50°S and consequently involved sailing for three or two summers respectively in polar latitudes around Antarctica. Extensive sea ice eventually blocked each voyage's southern probes, although Bellingshausen, unknowingly at the time, saw the Antarctic continent. These attempts nevertheless meant that sea-ice and iceberg records from the early historical period were collected near-simultaneously from around much of Antarctica. Here, these records are extracted from journals, analysed, and compared to each other and to the modern satellite record of both forms of marine ice. They generally show an early historical period with a more northerly record of both forms of marine ice than is normal today, but to a geographically varying degree. The early historical period in the Pacific sector of the Southern Ocean saw marine ice generally within the range of modern observations for the same time of year, but the Weddell Sea and Indian Ocean marine ice, particularly on Cook's voyage, then extended several degrees further north than in today's extreme ice years. Introduction Marine ice, whether sea ice or icebergs, as well as extensive land, had long been realised to be an effective barrier to northward maritime travel in the North Atlantic (Goodwin, 2019) and, by the mid-eighteenth century, in the northern seas of the Pacific (McCannon, 2012). However, while isolated reports of icebergs had occurred from at least 1687 in the southwest Atlantic and Drake Passage (Martin et al., 2022; Headland et al., 2023), far less was known about the Southern Ocean by the 1770s. Ships rounded Cape Horn to travel between the Atlantic and Pacific Oceans, but while such voyages were frequently stormy, all but a handful of unlucky voyagers stayed close enough to South America to miss encounters with icebergs. The search for Terra Australis, while leading to European discovery of South Pacific islands, as well as New Zealand and Australia, had not extended south of temperate latitudes away from South America.
The British Admiralty and the Royal Society jointly sponsored James Cook's second expedition, starting in 1772, to search for land in southern latitudes. This followed his first global circum-navigation expedition (1768-1771), in which he had charted both islands of New Zealand and travelled up much of the east coast of Australia, reducing the possible existence of a southern land mass to, at best, sub-polar southern latitudes. Interestingly, in rounding Cape Horn during that voyage on the way to observing the transit of Venus from Tahiti, his journal records no encounter with marine ice even though the Endeavour reached 60°S (Cook, 1771). The Admiralty also sent an expedition towards the North Pole the following year, under the command of Constantine Phipps, famous for the presence of Midshipman Horatio Nelson (Goodwin, 2019). However, there does not appear to have been any overt link between these two contemporary British polar expeditions. Cook, in the Resolution, together with Tobias Furneaux, in the Adventure, headed south from Cape Town in November 1772, and over three austral summers they made several attempts to reach as far south as possible before sea ice became extensive enough, or icebergs became too frequent, to allow further southing (Fig. 1). The expedition first managed to cross the Antarctic Circle in January 1773, near 40°E, before being turned back by extensive pack ice. The two vessels became separated in poor weather in early February 1773, near 50°S, 64°E in the southern Indian Ocean, and then made their separate ways to a meeting point in New Zealand for over-wintering. Furneaux, with the Adventure, headed roughly due east, for Tasmania, to confirm that Australia did not extend deeply south. In contrast, Cook in the Resolution turned south, but while spending some weeks in sub-polar latitudes he only reached 61°S in this stretch because of recurring dense iceberg fields. After rendezvousing in New Zealand, both ships spent the austral winter exploring the South Pacific island belt. They were separated during a storm in October 1773 on the return voyage to New Zealand, from where they had planned to start a second polar leg across the Pacific. Cook reached the rendezvous point first and, after waiting in vain for the Adventure to join him, eventually decided to set sail on 26 November 1773 to begin this second leg with a full summer sailing season ahead of him. Furneaux arrived at the rendezvous four days later. Cook had left a message that he would return to Queen Charlotte Sound in New Zealand after the 1773/4 summer, but Furneaux, leaving a message of his intentions, decided to continue on a southerly course across the Pacific and South Atlantic, heading for Cape Town, and then Britain, in 1774. The Adventure reached 61°S around 90°W and then 60°S in the Drake Passage during this return journey, sighting a number of icebergs on approaching the South Atlantic (see the separate track in Fig. 1, and Table 1 for a summary of the various ships and sections covered in this paper). Cook's austral summer of 1773/4 was spent searching for southing. Initially Cook headed southeast, dodging icebergs and sea ice for some time beyond 60°S in the South Pacific, eventually being turned back by extensive pack ice just beyond the Antarctic Circle, near 140°W (Fig. 1). After returning to more temperate climes, Cook again headed south, reaching beyond 71°S near 107°W, before once more being turned back by extensive iceberg fields and pack ice. With the decline of summer at the beginning of February 1774, Cook headed north, around 100°W, to begin his return to over-winter in New Zealand. For his final austral summer of southern exploration (Table 1), Cook first crossed the Pacific to spend and continued to Sydney separately (Table 1 and Fig. 1). Next spring they returned to the Southern Ocean (Table 1), reaching ~65°S by late November 1820 near 160°E, where southward sailing was prevented by pack ice and icebergs (Fig. 1).
Heading east, they again attempted to head south around 170°W, managing to cross the Antarctic Circle to ~67°S, but were again prevented from journeying further southward by more pack ice. However, they continued along the edge of the pack ice for ~20° of longitude, before being driven northwards by an extensive iceberg field. Continuing eastward at high latitudes, they attempted another southward excursion at the approach of the new year of 1821, again crossing the Antarctic Circle to exceed 67°S near 120°W, before again being blocked by pack ice. A week later another southward foray reached in excess of 69°S, where they remained below the Antarctic Circle for a few days, travelling eastward from ~95°W to 75°W (Fig. 1). Pack ice pushed them north again, after which they sailed through the Drake Passage at a high latitude of ~60°S, seeing only an occasional iceberg, before turning for Cape Town and home at 50°W at the end of January 1821. A number of other expeditions explored the Weddell Sea over the next few decades (see Love and Bigg (2023) for a summary), and others visited parts of the Antarctic coastline during the nineteenth century, such as Ross and Crozier in the Ross Sea during the 1840s (Palin, 2018) and the approach of the Challenger expedition to the Indian Ocean sector in the 1870s (Jones, 2022), but there were no other near-synchronous, geographically extensive surveys of the far Southern Ocean until the Heroic Age of Antarctic Exploration at the turn of the twentieth century (Edinburgh and Day, 2016). Even then, the Indian Ocean and central Pacific sectors of the Southern Ocean were not visited. The Cook and Bellingshausen expeditions therefore give us two unique snapshot views of Southern Ocean marine ice cover over a hundred and fifty years prior to regular comprehensive satellite coverage. The purpose of this paper is to inter-compare these and to examine their records relative to the extensive post-1978 satellite coverage of sea ice and icebergs. There are some existing long-term iceberg and sea ice data or reconstructions against which this work can be set in context. The pioneering work of Parkinson (1990) first revealed some of Cook's record of more extensive sea ice in the Weddell Sea, while also noting evidence of Cook's and Bellingshausen's other sea ice records being within the normal range. A recent study by Martin et al. (2022) has also extracted the iceberg and sea-ice records from Cook's 1772-1775 expedition, showing that, apart from a much wider eastward expansion of the Weddell Sea ice tongue, their data fit within the envelope of modern observations; here we not only compare our independent reconstruction with their dataset, but also extend it with sea ice records, and with extracts of both variables from the separate journeys of Furneaux in the Adventure. Headland et al. (2023) produced a dataset of Southern Ocean iceberg records from 1687-1933, although none of Cook's or Bellingshausen's iceberg data are included within this dataset. With regard to Southern Ocean sea ice, there are two proxy reconstructions using different approaches, one back to 1900 by Fogt et al. (2022) and another by Dalaiden et al. (2023) back to 1700. These previous works will be considered in the Discussion, but it is worth beginning by noting our principal hypothesis: the two circum-Antarctic expeditions considered here occurred at the height of the Little Ice Age, so it is expected that sea ice and iceberg records will generally extend further north than those of today. The validity, and geographical consistency, of this hypothesis will be examined below. Documentary sources The key data sources underlying this study are the daily journals and logbooks from the voyages of Cook and Bellingshausen.
For Cook's expedition (1772-1775), the sources used include the post-voyage journals of Cook (Cook, 1775) and Johann Reinhold Forster (1981), both of whom were on board the Resolution, and the logbook of Tobias Furneaux (1774), captain of the Adventure. From Forster (1981), it is Volumes II-IV that are relevant to the Southern Ocean part of the expedition. It is worth noting that, for Cook's voyage, these sources are different from those used by Martin et al. (2022), thus providing a new dataset for comparison. For Bellingshausen's voyage (1819-1821), they include a translation of the journal of Bellingshausen (Bellingshausen, 2016), where Volume II contains the Southern Ocean component. Note that the separate voyage of the Mirny in the Indian Ocean sector is included within this journal. All journals and logs were read, and wherever sea ice or icebergs were mentioned a set of data was recorded in a spreadsheet (Bigg, 2024). These entries also include days at high latitude before and after the last ice encounters. The positional data recorded were the day, month and year of the record, and the latitude and (where recorded) the longitude at noon on that day. Where, very occasionally, either the latitude or longitude is not given for a day with marine ice observations, a value is found by averaging the neighbouring days' positions. Very occasionally, the position was given at a different time of the day, presumably through lunar rather than solar observations, but for the purposes of this study this time difference was ignored. Bellingshausen used the Russian "Old Style" calendar, so his observations are dated 12 days earlier than in the contemporary calendar; in the spreadsheet all his data have therefore been adjusted forward by 12 days for consistency. Cook and Furneaux's voyage took place early in the age of using chronometers to determine longitude (Sobel, 1996). Both captains had a copy of Harrison's K2 chronometer on board their respective ships for time-keeping relative to known meridians. These chronometers gradually lost time, so during sections of their voyages with no sight of known land, longitude values derived purely from the chronometer accumulated error. Cook and Forster had corrected for this on their return to Britain, to standardize the daily longitude measurements in their journals. However, Furneaux's log records longitude as given by the time difference between the chronometer and observed noon. The chronometer was re-set at the known positions of Cape Town, before any southern excursions began, and again at Queen Charlotte Sound in New Zealand, where the Resolution and Adventure rendezvoused. Any difference between real and calculated longitude during the time Furneaux and Cook were separated in the Indian Ocean appeared small when positions were calculated, and so this drift was ignored for this segment. However, over the Adventure's final journey in late 1773-early 1774, from New Zealand across the Pacific and Atlantic to reach Cape Town, when the chronometer had aged by almost two years and no land was seen for some three months, the timepiece's reading had drifted so that the log's recorded longitude was ~17° out by the time the Adventure reached Cape Town on 3 March 1774. Presumably, lunar observations had helped Furneaux identify his real longitude roughly, as some ten days earlier he had changed course from tracking near the 50th parallel of latitude to head essentially due northwards towards Cape Town (Fig. 1).
The data used here for this part of the Adventure's voyage have therefore had the longitude corrected assuming the chronometer slowed uniformly over the 81 days it took to sail from New Zealand to Cape Town. This only affects iceberg observations, as no sea ice was observed by the Adventure whenever it was separated from the Resolution. For each day, the iceberg density noted in the journals and logs was recorded, with '0' denoting no "islands of ice", '1' if one "island of ice" was noted, '2' if a few icebergs were seen, and '3' if an iceberg field was noted. For some observations an idea of the size of an iceberg is given, in terms of circumference or height; the presence of such a record is flagged in the spreadsheet (Bigg, 2024), although it is not used in this analysis. A flag is also noted in the dataset if any of the ships stopped to harvest iceberg fragments to supplement their drinking water. Sea ice is also noted in the journals, as 'loose ice', 'field ice', 'drift ice' or 'pack ice', clearly different from 'ice islands'. If no sea ice is mentioned a value of '0' is noted, but if 'loose ice' or 'drift ice' is present a value of '1' is given, with '2' when the sea ice observed is clearly more extensive. Modern sea ice data To compare the sea ice fields provided by the past documentary evidence with current observations, daily high-resolution fields are required. Microwave brightness temperatures from five satellite instruments are available to give daily coverage of sea ice concentration over northern and southern polar latitudes at a resolution of 25 km × 25 km since August 1987, with bi-daily data back to November 1978 (Parkinson et al., 1999). There has been intercalibration between changing sensors over the years, and infill of errors (Cavalieri et al., 1999), meaning that this dataset is robust and well able to look at climate trends and anomalies (Parkinson, 2019). Southern Hemisphere daily fields were extracted from the National Snow and Ice Data Center (https://nsidc.org/data/nsidc-0051/versions/2), for available years, for the days on which sea ice was found by Cook or Bellingshausen. This gave between 38 and 40 years of daily values, which were used to provide a statistical measure of the mean sea ice extent and its latitudinal extremes for the specific longitudes of the eighteenth and nineteenth century observations. These extracted daily values of the modern northern sea ice edge for each date of Cook and Bellingshausen's sea ice observations are provided in a Supplementary Spreadsheet (seaiceobs_moderndailyextremes.xlsx). Note that a 25 km square was determined to contain sea ice if the measured concentration was ≥ 15%. Modern iceberg data There are several sources of modern iceberg data which can be used to provide a climatological view of iceberg density and prevalence across the Southern Ocean. The main source used here is the long-term small (< 3 km) iceberg distribution data provided by Tournadre et al. (2016)
and its updates in the altiberg database (https://cersat.ifremer.fr/fr/Data/Latest-products/Altiberg-a-database-for-small-icebergs). This was produced from data across 8 altimeters on board satellites over 1991-2019; here the merged product from across the altimeters is used (Tournadre et al., 2016). Iceberg position, size and volume are calculated from the Doppler return altimeter data; the summary variable giving the monthly probability of an iceberg occurring in a given 100 km square is used here (https://sextant.ifremer.fr/geonetwork/srv/api/records/695647ad-5af3-427f-afed-1485d3458b93). The data are available via ftp://ftp.ifremer.fr/ifremer/cersat/projects/altiberg/v2/ and the data manual is also available through ftp at ftp://ftp.ifremer.fr/ifremer/cersat/projects/altiberg/v2/documentation/ALTIBERG-rep_v2_1.pdf. The smaller number of larger icebergs (> 5 km) have been tracked using scatterometer data since 1992 (Budge and Long, 2018). This gives a measure of the full iceberg presence envelope, although Cook's largest observed iceberg was probably ~3 km in diameter (Martin et al., 2022), meaning the smaller iceberg distribution is the more appropriate comparator. There are also databases of iceberg observations compiled from historical data (Headland et al., 2023), shipboard observations from Russia (Romanov et al., 2017) and a combined Norwegian and Australian shipboard database (Orheim et al., 2023), which can be used to modify the view produced from the base altimeter iceberg database. All of the latter have records of location, date and, in some cases, numbers and sizes of icebergs; here only location is used. Sea ice Despite the circum-Antarctic nature of the voyages of both Cook and Bellingshausen, their sea ice records are largely confined to the Atlantic and Pacific sections of their expeditions (Fig. 2). The sea ice records are also spread across three (Cook) or two (Bellingshausen) summers, as well as across at least two of the summer months. This makes comparison between and within them, as well as with modern-day sea ice extent, non-trivial. The data will therefore be cross-compared according to the sea ice record for the specific days of the year on which the voyages recorded sea ice. Before this is examined, however, it is worth noting two points from Fig. 2. The first is that in almost all areas pack ice and loose ice occur close together spatially, and between voyages. The second is that there is a clear exception to this around 10-25°E, where Cook's 1772/3 austral summer loose sea ice record is almost 10° further north than Bellingshausen's 1820 pack ice record. Indeed, the latter is essentially noting the ice adjoining the Antarctic continent itself off Queen Maud Land. However, modern sea ice extent is highly variable from year to year. For example, on 31 December, when Cook observed sea ice at 13.5°E, 60.33°S in 1772, the modern latitudinal variability over 1978-2022 of the edge of the 15% concentration band of sea ice at the same longitude of 13.5°E has ranged over 60.19°-69.91°S (Fig. 3). In early summer there is often a tongue of less concentrated sea ice extending eastward at ~60°S from the Antarctic Peninsula across the northern Weddell Sea. Note that polynyas can also occur within the Weddell Sea, as suggested in Fig. 3,
meaning it should be remembered that the early explorers were being stopped by the first impenetrable ice barrier, rather than by a continuous ice pack reaching to Antarctica. The statistical background to the sea ice records from Cook and Bellingshausen is shown in Fig. 4a, where all their records of sea ice, whether loose or pack ice, are shown by longitude, with a superimposed measure of the variability of the modern-day northern sea ice limit for the same longitude (±0.15°). The latter is shown as a bar whose centre is at the mean latitude of sea ice observed for that day over the 38-40 values of the available daily microwave data from 1978-2022, with standard error bars denoting the variability. However, the extreme interannual variability of Southern Ocean sea ice extent means that it is also necessary to show in Fig. 4 the extreme northern sea ice edge over 1978-2022 for the given day and longitude as well, to capture the full potential variability in which to set the 18th-19th century data in proper context. It is worth noting here that Worby and Comiso (2004) found that satellite-derived ice edges tend to be 0.75° ± 0.61° south of those observed in situ; this additional uncertainty in the modern data does not change the basic arguments of the discussion to follow. It is also worth noting that in general Cook's sea ice observations came from earlier in the summer (Fig. 4b), when sea ice extent is intrinsically more variable as it moves towards the summer minimum at interannually variable rates (Parkinson et al., 1999). This increases the range of variability seen in Fig. 4a for many of Cook's observations compared to Bellingshausen's later summer data. Fig. 4a shows that almost everywhere the Cook and Bellingshausen sea ice limits are well to the north of the modern standard-deviation variability of the microwave observations for the same day and longitude. However, at many longitudes these 18th-19th century values are still within the extreme envelope of modern observations over 1978-2022. Both Cook and Bellingshausen's exploration years were therefore extreme from an ice perspective, but usually not unprecedented in terms of the modern record. Icebergs Icebergs were extensively recorded by Cook, Furneaux, Bellingshausen and Lazarev around the polar latitudes of the Southern Ocean (Fig. 5). Apart from 130-160°E, where none of the expeditions sailed in sub-polar latitudes as they headed for temperate climes to over-winter, only the area of the sub-polar Atlantic to the west of the South Orkney Islands (~45°W) has a minimal number of iceberg entries. Icebergs were encountered on these 18th-19th century voyages largely in parts of the Southern Ocean where there are altimeter records of icebergs in recent decades (Fig. 6a). The exception to this is in the Indian Ocean between 15-60°E, where in both 1773 (Cook) and 1774 (Furneaux) icebergs were encountered further north than current limits. This expansion of the Weddell iceberg tongue (see the high-probability tongue in Fig. 6a) is consistent with other records from the late 18th century in this area (Martin et al., 2022). It is also notable that where Cook encountered extensive iceberg fields in the 1770s in the Atlantic is today mostly a region of only occasional iceberg encounters (Fig. 6b). Thus, both the iceberg and sea ice records are consistent with more icy South Atlantic and Indian Ocean sections of the Southern Ocean in the late 18th century. In contrast, the Indian Ocean zone influenced by the outlet from the Amery Ice Shelf today was largely in the same position in the 18th and 19th centuries (Fig. 6a), and the presence of Pacific Ocean icebergs during the presently studied voyages matches today's records. Nevertheless, Bellingshausen in the early 19th century tended to encounter more icebergs north of the Ross Sea and West Antarctica than are found today (Fig. 6b), but still within the bounds of today's observations.
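The ice-edge statistics underlying Fig. 4 reduce to a simple operation on the gridded concentration fields. As a minimal sketch (the array names concByYear and lats are hypothetical, and this is not the processing code used for the paper), the northern edge at one longitude band and its 1978-2022 statistics could be computed as:

edgeLat[conc_, lats_] := Max[Pick[lats, # >= 0.15 & /@ conc]];   (* northernmost 25 km cell at or above the 15% threshold; southern latitudes are negative, so "northernmost" is the maximum *)
edges = edgeLat[#, lats] & /@ concByYear;                        (* one edge latitude per available year *)
{Mean[edges], StandardDeviation[edges], Max[edges]}              (* mean edge, its variability, and the extreme northern edge *)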
Discussion The analysis section above compares the marine ice records from the Cook and Bellingshausen expeditions with modern satellite-derived datasets, placing both the sea ice and the iceberg records in the context of the most representative datasets available for the last few decades. However, there are other reconstructions of historical sea ice extent and recent iceberg distributions that are worth comparing with the 18th and 19th century records presented here. There is also the recent independent analysis of Cook's iceberg record by Martin et al. (2022), against which it is possible to verify the current reconstruction. Here, these comparisons will be examined before concluding with a summary of the key findings of this analysis. The first discussion of a subset of Cook and Bellingshausen's sea ice records was given by Parkinson (1990). She noted the greater sea ice concentrations seen by Cook in the Weddell Sea in 1772, consistent with the current data, and the similarity of both Cook and Bellingshausen's sea ice data in the Pacific sector, again consistent with the present data. Dalaiden et al. (2023) used a data assimilation approach, drawing on proxy sea ice records from land-based ice cores and tree rings around the Southern Ocean, to reconstruct regional sea ice anomalies back to 1700. These regional reconstructions are similar in general trends, although not in detail, to the satellite record and to Fogt et al.'s (2022) proxy reconstructions of 20th century regional sea ice anomalies. This comparison is least successful in the Bellingshausen/Amundsen Sea sector, where Dalaiden et al.'s reconstruction tends to overestimate the sea ice extent compared to Fogt et al. (2022) or modern observations. Most regions, however, suggest sea ice during the period 1750-1850 was somewhat more extensive than today (see Fig. 1 of Dalaiden et al., 2023), although the error bars tend to overlap with modern observations. The Weddell Sea is the region where the trend towards reduced sea ice over the last century was most pronounced. These results are consistent with Cook and Bellingshausen's records of generally more extensive sea ice, particularly in the Weddell Sea sector, as shown in Fig. 4. This is also consistent with the finding of Love and Bigg (2023) that there was more extensive sea ice, with an eastward extension, in the main summer months in the Weddell Sea in the 1820s-1840s. For our iceberg comparison of today's record with Cook and Bellingshausen, shown in Fig. 6, the satellite altimetric record of Tournadre et al. (2016), updated to 2019 (https://cersat.ifremer.fr/fr/Data/Latest-products/Altiberg-a-database-for-small-icebergs), was employed. This is a dataset of all icebergs of area between ~0.1-8 km², which matches the range of iceberg sizes reported by all the ships examined in this paper. However, there are also large databases of modern large icebergs > 5 km in diameter, from scatterometer data (Budge and Long, 2018), and modern ship observations (Romanov et al., 2017; Orheim et al., 2023). The larger iceberg dataset largely falls within that of Tournadre et al. (2016), shown in Fig. 6a. However, in the case of both of the ship observation datasets, which come largely from independent sources (Russian in the case of Romanov et al. (2017), and Norwegian Polar Institute and Australian sources for Orheim et al.
(2023)), their northern bounds of iceberg presence extend rather further north in many areas than the satellite-derived datasets. Note that the shipboard observations cover a significantly longer time period than the altimetric dataset of Tournadre et al. (2016). These shipboard observation limits are schematically overlain on the Cook and Bellingshausen voyages' distribution in Fig. 7. The vast majority of the 18th and 19th century iceberg records lie within the 1950-2010 ship record; however, note that Cook's late January 1773 iceberg records from ~50°E still lie outside the modern ship record. A few years later, in December 1789, Edward Riou encountered icebergs at ~44°S, 45°E (Martin, 2023), even further north than modern or Cook's records, if further west than the latter's extreme record. There were clearly extensive and unusual iceberg numbers in the South Atlantic and southwestern Indian Ocean during the 1770s and 1780s, perhaps indicating a recent period of calving of very large giant icebergs from Antarctica. The dataset of icebergs from Cook's voyage provided by Martin et al. (2022), readily visible in Fig. 2 of their paper, corresponds very strongly with that given here from different sources for the same voyage. One does need to examine several sources for the same voyage where available: for example, there were a few occasions where Forster had noted icebergs for a particular day (Forster, 1981) but Cook's 1775 journal had not, and vice versa. The present data are also enhanced by iceberg records from the Adventure's period of independent sailing (Fig. 5), which was responsible for the increase in data in the current work for the South Atlantic and the southeastern Indian Ocean. Conclusion The Introduction of this paper ended by noting our principal hypothesis that sea ice and iceberg records would be expected generally to extend further north than those of today in both of the circum-Antarctic expeditions considered here, as they occurred at the end of the Little Ice Age. Our analysis has confirmed this hypothesis for both sea ice and iceberg records, with both showing more northerly limits than is typical for the current day (Fig. 4 and Fig. 6). Nevertheless, the Southern Ocean climate and ice record is very variable from year to year (Fig. 4), so in most areas the 18th and 19th century data fall within the most northerly limits of extreme years today. The exception to this lies in the South Atlantic, where especially Cook's expedition experienced marine ice of both forms further north and east than is likely today, even in extreme years. While the iceberg anomalies experienced by Cook and Furneaux may have been due to extraordinary giant iceberg calving events from Antarctic ice shelves in previous years to decades, such events would have been extreme compared to the modern record. It is much more likely that the marine ice found by Cook and Bellingshausen reflects a colder-than-average climate of the South Atlantic in particular, and the Southern Ocean more generally (Dalaiden et al., 2023), during 1770-1820 compared with today. Figure 1: Southern Ocean sections of the voyages considered in this paper; green line: Resolution, blue line: Adventure, black line: Vostok, red line: Mirny. Note that Cook's outward South Atlantic section is not shown until he encountered ice (~50.7°S, 20.3°E), while his return is shown all the way to Cape Town. Figure 2: Sea ice records from the voyages of Cook (shown by '+' in blue) and Bellingshausen (shown by 'x' in red). Pack ice (or landfast ice) is denoted by the voyage symbol enclosed in a square; loose ice is given by the voyage symbol alone.
Figure 3: An example of summer sea ice concentration in the Southern Ocean, from 31 December 2007, showing areas of open water poleward of the outer sea ice edge. This paper uses a 15% concentration boundary to denote the ice edge, here given by the light blue contour. Note polynyas (regions of lower ice concentration) at the edge of the Filchner and Ross Ice Shelves. Figure 4: Sea ice records of Cook and Bellingshausen by longitude, compared with the mean and extreme northern sea ice edge from the 1978-2022 microwave record for the same day and longitude. Note that in general Cook's sea ice observations were from earlier in the summer than Bellingshausen's, when sea ice is intrinsically more variable interannually. Fig. 5: Iceberg observations recorded by the voyages of Cook ('+') and Bellingshausen ('x'). These include any iceberg sightings, and so cover all records of category '1' to '3'. Note that additional observations from the Adventure, for Cook's expedition, and the Mirny, for Bellingshausen's, are shown in red. Figure 6: Comparison of the modern iceberg distribution from Tournadre et al. (2016) with the iceberg observations of Cook ('+') and Bellingshausen ('x'). a) The upper panel shows all iceberg sightings; b) the lower panel shows only those iceberg sightings of category '3', namely iceberg fields. Units of the modern distribution are the mean probability of an iceberg being present in a 100 km square (see discussion in section 2.3). Figure 7: Iceberg observations of the Cook ('+') and Bellingshausen ('x') explorations in the context of modern ship-board iceberg observations. These include records of all iceberg categories from '1' to '3'. The red line is the northern limit of iceberg observations from Romanov et al. (2017), while the blue line is the northern limit of iceberg observations from Orheim et al. (2023). Table 1.
Details of the historical records used in this study for each separate ship's polar sections. A '+' sign in the "ship" column means both vessels of the particular expedition were together. The sector column shows the approximate longitude range covered in the period given in the Dates column.
Titania-Silica Composites: A Review on the Photocatalytic Activity and Synthesis Methods The photocatalytic activity of titania is a very promising mechanism with many possible applications, such as the purification of air and water [1]-[4]. To make it even more attractive, titania can be combined with silica to increase the photocatalytic efficiency and durability of the photocatalytic material, while lowering the production costs [1]. In this article, the relevant literature is reviewed to obtain an overview of the chemistry and physics behind some of the different parameters that lead to cost-effective photocatalytic titania-silica composites. The first part of this review deals with the mechanisms involved in the photocatalytic activity, then the chemistry behind certain methods for the synthesis of titania-silica composites is discussed, and in the third and last part the influence of silica supports on titania is discussed. These three sections represent three different fields of research that are combined in this review to obtain better insight into photocatalytic titania-silica composites. While many research subjects in these fields have been well established for some time, some have been resolved only recently, and some are still under discussion (e.g. the cause of the increased hydrophilicity of the titania surface after illumination). This article aims to review the most important literature to give an overview of the current understanding of the fundamentals of photocatalysis and of the synthesis of cost-effective photocatalytic composites. It is found that the most cost-effective photocatalytic titania-silica composites are those that have a thin anatase layer coated on silica with a large specific surface area, and that are prepared with precipitation or sol-gel methods. Introduction Composites made of silica and titania can combine the photocatalytic properties of titania, the high stability of silica, and extra properties arising from chemical bonds between the two materials [1]. Titania is photocatalytic because it is able to absorb energy from light and then use that energy to catalyze the degradation of organic molecules and the oxidation of some inorganic pollutants like nitrogen oxides (NOx) [1]-[12]. As the photocatalytic activity takes place only on the exposed surface area of titania, the amount of titania needed for the same photocatalytic efficiency can be reduced enormously by coating a thin layer of titania on silica [5] [13]-[42]. As the production of silica can be cheaper than that of titania, the costs of the photocatalytic material can then be significantly reduced. In addition to lowering production costs, the durability increases, as silica has a higher mechanical and thermal stability than titania. So when the composites are used instead of pure titania, the photocatalytic material can be used for a longer time with high photocatalytic efficiency, and, because of the enhanced thermal stability, it can also be used in applications that require higher preparation temperatures. Finally, the photocatalytic efficiency of the material can be increased by the addition of silica, because silica can have a large specific surface area and is able to adsorb some pollutants and intermediates for a longer time than pure titania.
One promising application of photocatalytic materials is the degradation of pollutants. The main reasons why using photocatalytic materials for air purification is promising include: the lower material and energy costs compared with other current purification methods; the ability of many photocatalytic materials to oxidize pollutants even when they are present in low concentrations; and the fact that the pollutants do not have to be stored but are converted into less harmful side-products (e.g. CO2 from organic molecules and NO3− from NOx after complete photocatalytic oxidation (PCO)). Photocatalytic titania has been, and is being, used in many other applications as well, including: the photoelectrolysis of water, medical applications (where titania works as a disinfectant by destroying bacteria and viruses), municipal and industrial wastewater treatment, self-cleaning glass with anti-fogging abilities, and even self-cleaning textiles. A good route to air purification is the incorporation of photocatalytic material in building materials (including concrete, wallpaper, gypsum and paint [2]-[4] [8]-[10] [43]-[51]), due to the large illuminated surface areas that many building materials have. Investigations into photocatalytic building materials have shown that the concentration of pollutants close to such materials indeed decreases significantly. Since large areas of building materials are often illuminated anyway, with sunlight or indoor light, and because these building materials become self-cleaning, the maintenance costs of these materials can be very low. Because of the large illuminated area and low maintenance cost, the potential of photocatalytic building materials for air purification is very promising. However, in most of the research field on the applications of photocatalytic materials, only pure and doped titania are mentioned, and not the titania-silica composites, despite the large benefits these composites can have (e.g. lower costs, higher durability). An important reason for this absence can be the complexity of the research field of titania-silica composites. For the synthesis of the composites alone, there are many different methods, each with their own parameters that can be changed in multiple ways. As many studies on titania-silica composites have been done with different goals in mind, many different kinds of composites have been produced [1] [5] [13]-[42] [52]-[84], some of which are either not suitable for photocatalysis or have a very expensive production method. Since the photocatalytic activity of titania alone is already a complex system [3] [4] [6] [11] [85]-[89], it is understandable that adding more complexity to the system (for example, with silica) is not always desirable. This review is written in order to provide insight into the low-cost synthesis of titania-silica composites, and into how each parameter can be tuned to produce highly efficient photocatalytic material, to show how the composites can be an attractive alternative to titania for photocatalytic applications.
Mechanism of Photocatalysis The process of photocatalysis in titania starts when a photon is absorbed by an electron in the valence band of titania [3] [4] [6] [11] [85]-[89]. This electron is then excited to the conduction band and, by doing so, leaves a hole behind in the valence band (reaction 1). The valence and conduction bands of titania have the right energy levels for many important redox reactions. After the excitation of electrons, holes in the valence band have a redox potential of +2.53 V, which is enough for the oxidation of hydroxyl ions into OH• (see reaction 2) or the oxidation of adsorbed organic molecules. The largest source of hydroxyl ions is the dissociation of water (see reaction 3). The redox potential of electrons in the conduction band is −0.52 V, which is strong enough to reduce oxygen to superoxide (see reaction 4). It is also possible that the excited electrons and holes will react with different adsorbed species, depending on the environment. For example, if there is a large amount of adsorbed water, it is possible that more hydroxyl radicals will form through the reaction of hydrogen peroxide, as shown in reactions 5 and 6. These steps can be summarized as follows (the exact form of reactions 5 and 6 is reconstructed here from the descriptions above):

(1) TiO2 + hν → e− + h+
(2) h+ + OH− → OH•
(3) H2O ⇌ H+ + OH−
(4) e− + O2 → O2−
(5) O2− + 2H+ + e− → H2O2
(6) H2O2 + e− → OH• + OH−

where e− is an excited electron in the conduction band, h+ is a hole in the valence band, OH• is a hydroxyl radical and O2− is a superoxide. Hydroxyl radicals and superoxides are strong oxidants that can react with certain inorganic pollutants like NOx and with many organic molecules. In Figure 1, a schematic view of the photocatalytic mechanism is given. An important property of titania, which influences the photocatalytic efficiency, is the amount of hydroxyl groups in its environment. In turn, the amount of hydroxyl groups is determined by the humidity in air, or by the amount of water and its pH in liquids. This amount determines how many hydroxyl groups are chemically bonded to the surface of the titania. Bonded hydroxyl groups can either react with holes themselves and form radicals, or adsorb other hydroxyl groups and water molecules, which can subsequently react with the holes and excited electrons [90] [91]. The photocatalytic activity in air can thus be higher at higher humidity. However, it is also possible that a very high humidity will lower the photocatalytic activity by taking up more adsorption sites on the surface. For example, the photocatalytic oxidation of NOx depends on adsorption to titania and is thus lower at very high humidity. Oxygen Vacancies, Hydrophilicity and Self-Cleaning Surfaces Through the conversion of Ti4+ and O2− into Ti3+ and O−, excited electrons and holes can remain at the surface longer if there are no adsorbed species they can react with directly [3]. Because the difference in charge between the titanium and oxygen atoms is then reduced, the oxygen atoms are much less stable and can, with relatively little energy, leave the crystal, forming oxygen vacancies. These oxygen vacancies are important in titania for different mechanisms. For example, around an oxygen vacancy there is an excess of electrons, making titania an n-type semiconductor, which has a higher conductivity than titania as an intrinsic semiconductor.
Another reason why oxygen vacancies are important is that the surface of titania becomes more hydrophilic when water molecules occupy these oxygen vacancies. After a water molecule occupies the vacancy, one hydrogen atom of the water molecule can react with a neighboring oxygen atom, forming two hydroxyl groups [3]. The increase in hydroxyl groups can lead to an increase in photocatalytic efficiency and to an increase in hydrophilicity of the surface. This increase in hydrophilicity was first reported by Wang et al. [92] in 1997 with a titania coating on glass. By illuminating the coated glass with UV-light, the glass became transparent, since the water fog that was present on the glass defogged as the contact angle between the water droplets and the glass decreased to zero. They also showed that after keeping the hydrophilic surface away from any light source for some days, the glass became more hydrophobic, which means that the formation of a hydrophilic surface is a reversible process. However, it has been reported that oxygen vacancies are not solely responsible for the hydrophilicity, as some studies showed that the hydrophilicity was in some cases independent of the number of oxygen vacancies [93] [94]. While it is possible that the degradation of organic materials on the surface can also play a role in the hydrophilicity increase, it has nevertheless been shown not to be a determining factor [95].

The hydrophilicity and the degradation of organic materials on the surface of titania are two reasons why titania can be used for self-cleaning applications [3]. The degradation of organic materials through photocatalytic oxidation prevents organic substances from accumulating on the surface and can prevent the growth of bacteria and fungi. The hydrophilicity of the surface increases the water adsorption so that water can replace other adsorbed species, and it lowers the energy required for water to slide over the surface so that contaminants can be washed off more easily.

Effect of Different Crystal Forms of Titania on the Photocatalytic Efficiency Titania has several forms, but the two main crystal structures that most researchers focus on are rutile and anatase [3]. These two crystal forms are both tetragonal structures in which titanium atoms are 6-coordinated in an octahedral formation. The band gap of rutile is 3.0 eV and the band gap of anatase is 3.2 eV. For rutile and anatase to become photocatalytically active, they need to absorb electromagnetic radiation with wavelengths smaller than 413 nm and 387 nm, respectively. While rutile is thus able to absorb more light in the visible range, anatase is more photocatalytically active. Luttrell et al. [96] showed this higher photocatalytic efficiency by studying the difference in PCO efficiencies of anatase and rutile thin films of different thicknesses for the PCO of methyl orange. They showed that for films thinner than 2.5 nm the difference between the two forms was not significant, but for thicker films, the anatase thin film had a higher efficiency. They measured that the maximum thickness up to which the photocatalytic efficiency increases with increasing film thickness is 2.5 nm for rutile and 5 nm for anatase. Thus, from this study it can be concluded that excited electrons and holes in anatase can travel farther than in rutile, so that more electrons and holes can reach the surface. The ability of excited electrons and holes to travel longer distances in anatase has been attributed to a longer lifetime and higher conductivity [96]- [98].
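The cutoff wavelengths quoted above follow directly from λ = hc/E_g. The short sketch below reproduces the 413 nm and 387 nm figures for rutile and anatase; it is a generic unit conversion, not code from the reviewed studies.

```python
# Convert a band gap (eV) into the longest wavelength (nm) that can excite an
# electron across it: lambda = h*c / E_g, with h*c ≈ 1239.84 eV*nm.

HC_EV_NM = 1239.84

def cutoff_wavelength_nm(band_gap_ev: float) -> float:
    """Longest absorbable wavelength in nm for a given band gap in eV."""
    return HC_EV_NM / band_gap_ev

for phase, gap in [("rutile", 3.0), ("anatase", 3.2)]:
    print(f"{phase}: E_g = {gap} eV -> cutoff ≈ {cutoff_wavelength_nm(gap):.0f} nm")
# rutile: ~413 nm, anatase: ~387 nm, matching the values quoted in the text.
```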
An important reason why excited electrons and holes have a longer lifetime and higher conductivity in anatase is because of the differences between the oxygen vacancies that form in anatase and rutile.Oxygen vacancies cause extra energy levels within the band gap.Calculations done by Mattioli et al. [97] showed that oxygen vacancies in an anatase crystal can cause both shallow delocalized energy levels and deep localized energy levels in the band gap, while in rutile only deep localized levels can form.Since anatase has also shallow delocalized energy levels in its band gap, it has a higher conductivity and the excited electrons and holes have longer lifetimes than in rutile as they are less trapped in the deep localized energy levels where the chance of recombination is higher. While anatase has a high photocatalytic efficiency, amorphous titania has the lowest efficiency [99].The main reason for this lower efficiency is because, in amorphous titania, there are many spots where recombination of the electron-hole pair can happen.The recombination through defects is the most common way electrons and holes are lost.Thus, amorphous titania has a much higher recombination capacity.In addition, conductivity in amorphous materials is very low since energy levels in amorphous materials are much more localized.The high recombination rate and low conductivity means that only electrons which are excited directly at the surface play a part in the photocatalytic activity in amorphous titania. Some researchers have measured higher PCO efficiencies in titania that contains both rutile and anatase than in titania with only anatase.Degussa P25 nanoparticles, which are commercial titania nanoparticles made out of around 80% anatase and 20% rutile, are well known for their high photocatalytic activity [100] and are often used as a reference material.The conduction band of rutile has been measured to start at a higher energy level than that of anatase even though its band gap is smaller, which is why titania with both crystal forms can have a higher photocatalytic efficiency [101].Since electrons always go to a lower energy state if possible, excited electrons in rutile will go to the conduction band of anatase.As electron holes can be viewed as opposite electrons, electron holes in anatase will flow to the valance band of rutile because its top lies above the energy level of the valance band from anatase.Because holes move from anatase to rutile and excited electrons from rutile to anatase, the recombination chance is reduced and a difference in electron density is produced at the interface between the two forms, causing an increase in conductivity and lifetimes for the electrons and holes, resulting in a higher photocatalytic efficiency. 
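The mixed-phase argument above is essentially a band-alignment bookkeeping exercise: electrons drift toward the lower conduction-band edge and holes toward the higher valence-band edge. The sketch below encodes that logic with illustrative, assumed band-edge values; only the relative alignment (the rutile conduction band starting above that of anatase) is taken from the text.

```python
# Toy band-alignment model for a two-phase contact: electrons move to the phase
# with the lower conduction-band edge, holes to the phase with the higher
# valence-band edge.  The numerical offsets are assumed for illustration only.

from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    valence_edge_ev: float     # energy of the valence-band top
    conduction_edge_ev: float  # energy of the conduction-band bottom

anatase = Phase("anatase", valence_edge_ev=0.0, conduction_edge_ev=3.2)
rutile = Phase("rutile", valence_edge_ev=0.4, conduction_edge_ev=3.4)  # assumed offsets

def transfer_directions(a: Phase, b: Phase) -> None:
    e_sink = min((a, b), key=lambda p: p.conduction_edge_ev)
    h_sink = max((a, b), key=lambda p: p.valence_edge_ev)
    print(f"electrons accumulate in {e_sink.name}, holes accumulate in {h_sink.name}")

transfer_directions(anatase, rutile)
# -> electrons accumulate in anatase, holes accumulate in rutile, which is the
#    charge separation invoked above for mixed-phase materials such as Degussa P25.
```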
Silica Sources Many different types of silica from different sources can be applied as a support for titania including: fumed silica, precipitation silica from alkali silicates, silica produced with the Stöber method, zeolites, clays, glass, silica from the dissolution of silica minerals and more [1] [5] [13]- [42] [52]- [84] [102]- [107].Fumed silica is formed at high temperatures where silica compounds, like chlorosilanes, are transformed into silica [108] [109].Silica from alkali silicates is formed by neutralization of the alkali solutions so that the silicate polymerizes to silica and precipitates from the solution.In both fumed silica and precipitated silica, amorphous aggregates are formed.These aggregates can have very large specific surface areas but also have complex undefined structures.Another silica which is often used because of the more defined structure, is silica made with a sol-gel method.The Stöber method [110] is the best known example of a sol-gel method for producing silica.During this method, Tetraethyl orthosilicate (TEOS) is slowly added to a solution of ethanol, water and ammonia.Depending on the composition of the solution, silica colloids of varies sizes and shapes can be formed.The advantage of this silica is that the resulting shape and size of the silica can be well controlled.However, the disadvantage is that this silica has a smaller specific surface area than fumed and precipitated silica.For both well-defined shapes and high specific surface areas, researchers have also used zeolites as support.However compared to the other mentioned supports, the zeolites are more expensive.For very low production costs and a high specific surface area, silica made during the dissolution of olivine has a great potential [111]- [114] but is still in its developing stage. Titania-Silica Chemistry The reaction of titania precursors with silica happens either directly with silanols or indirectly through hydrolysis into titania monomers (Ti(OH) 4 ) first and subsequently by condensation with silanols [1] [5] [13]- [42] [52]- [84] [109] [115].Either way, the titania will form bonds with the silica through reaction 7. 
Si-OH + Ti-R → Si-O-Ti + R-H (7)

where R is a side group of a titania precursor or a hydroxyl group of a titania monomer. The Si-O-Ti bond can be measured by using techniques like infrared/Raman spectrometry and X-ray photoelectron spectroscopy [14]- [16] [32]- [34] [52] [56]- [60]. This condensation reaction between the titania precursor and the silica surface depends mostly on the hydroxyl groups of the silica [16] [56] [59] [63] [66] [67], since the rest of the silica is very inert. In turn, the amount of hydroxyl groups on the silica is dependent on the temperature during the pretreatment, the method used, and the amount of water and its pH [67] [109]. For example, if the silica undergoes pre-heating at temperatures higher than 800˚C and no water is used during the synthesis, there will be only a low amount of hydroxyl groups left on the silica surface, so that only a few titanium atoms can be found on the silica after the reaction [67]. On the other hand, if lower temperatures are used, the density of hydroxyl groups on the silica surface will be high enough that hydrogen bonds between silanols can form. Titania precursors react more with these hydrogen-bonded silanols than with isolated silanols [66] [67]. These silanols are close enough to each other that a titania precursor can react with multiple hydroxyl groups, making the reaction with hydrogen-bonded silanols favorable over the reaction with isolated silanols. When water is used during the synthesis method of the titania-silica composites, the titania precursor undergoes hydrolysis first. During the hydrolysis, the side groups of the precursor are replaced by hydroxyl groups [116]- [119]. After a full hydrolysis at a neutral pH, Ti(OH)4 is the most common product, as titanium has four valence electrons. Below a pH of 4, positively charged titanium hydroxide ions can also be formed [116] [117]. It is also possible that a double-bonded oxygen atom, which stays bonded during the hydrolysis, forms during the reaction of the precursor, so that only two hydroxyl groups can bond to the titanium. Titanium hydroxides are titania monomers that can form larger titania molecules by polymerization through condensation with other monomers if their concentration is high enough.
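Because the grafting reaction above is limited by the surface silanol population, a rough capacity estimate is often useful. The sketch below estimates the maximum titania loading for (sub)monolayer coverage from an assumed silanol density (~4.6 OH/nm², a commonly cited value for fully hydroxylated silica) and an assumed specific surface area; both numbers, and the one-Ti-per-silanol assumption, are illustrative and not taken from this review.

```python
# Rough estimate of the Ti loading a silica support can anchor through
# reaction 7, assuming (for illustration only) one grafted Ti per surface silanol.

AVOGADRO = 6.022e23
M_TIO2 = 79.87          # g/mol
SILANOL_PER_NM2 = 4.6   # assumed density for fully hydroxylated silica
SSA_M2_PER_G = 200.0    # assumed specific surface area of the support

def max_tio2_loading_wt_percent(ssa_m2_per_g=SSA_M2_PER_G,
                                silanols_per_nm2=SILANOL_PER_NM2) -> float:
    sites_per_gram = ssa_m2_per_g * 1e18 * silanols_per_nm2  # 1 m^2 = 1e18 nm^2
    grams_tio2 = sites_per_gram / AVOGADRO * M_TIO2          # g TiO2 per g SiO2
    return 100.0 * grams_tio2 / (1.0 + grams_tio2)

print(f"~{max_tio2_loading_wt_percent():.1f} wt% TiO2 for one Ti per silanol")
# ~11 wt% for the assumed 200 m^2/g support; pre-heating the silica above 800 C
# would lower the silanol density and hence this capacity considerably.
```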
Different Synthesis Methods There are many different methods to synthesize the titania-silica composites. An indirect way to prepare them is by adding premade titania nanoparticles to a silica support [103] [120] [121] at a pH of around 3 - 4. At that pH, the titania and silica have opposite charges, so that the titania and silica will be electrically attracted to each other. However, for more stability and a more homogeneous coating, direct methods are often more favorable. The vapor-deposition methods (chemical vapor deposition (CVD) and physical vapor deposition (PVD)), for example, are such methods [5] [23]- [25] [66]- [70]. During the CVD method, the titania precursor is heated to the gas phase to react with dry silica in an inert environment, and during the PVD method the titania is sputtered against a support surface to form thin films. The impregnation [17]- [22] [52] [59]- [63] and the grafting [52] [64] [65] methods are also direct methods. During both of these methods, the titania precursor is dissolved in an organic solvent like toluene or hexane. This solvent is then added to the silica support so that the precursor reacts with the silanols. During the grafting method, the solvent is removed through evaporation, and during the impregnation method, the solvent is removed in some other way (e.g. filtration). During the vapor-deposition, the impregnation and the grafting methods, no water is used, which means that these methods do not have the option to form more than one layer of titania in one step, because no new hydroxyl groups can form on the coated titania during the reactions for further condensation. In addition, these methods are not optimal for low cost production since either very high temperatures or expensive organic solvents are required.

Methods that are more promising for low cost photocatalytic materials are the precipitation methods [13]- [16] [52]- [58] and the sol-gel methods [14] [32]- [42] [72]- [84]. These methods are capable of forming more than one monolayer of titania on silica, and do not require expensive solvents. During the precipitation method, the titania precursor is dissolved in an aqueous solution at a low pH and low temperature, where titania does not form. After mixing the aqueous solution containing the precursor with the silica, the solution is either neutralized with an alkaline solution and/or heated up to a specific temperature. This specific temperature depends on the pH and solvent used. By increasing the pH and/or temperature, titania slowly forms, which can happen on a silica support as a coating if the hydrolysis is slow enough that the concentration of titania monomers does not reach the critical supersaturation. Titanium chlorides (TiCl3, TiCl4) and titanium oxysulfate (TiOSO4) are the precursors that are often used in the precipitation methods. During the sol-gel methods, titanium alkoxides (e.g. titanium isopropoxide, titanium n-butoxide) are often used. To form a titania coating, the precursor is slowly added to a silica dispersion in an organic solvent (ethanol, n-propanol) which contains a low amount of water, or to which a low amount of water is added after the precursor is added.

An important parameter in the methods that involve hydrolysis is the pH. Below pH 6, part of the Ti(OH)4 is replaced by Ti(OH)3+, and below pH 4 also by more highly charged titanium hydroxide ions with fewer hydroxyl groups.
With decreasing pH, more Ti(OH) 4 is replaced by the ions which lead to a higher solubility [116].A higher solubility means that the equilibrium between monomers and condensed titania is then more to the side of the monomers.Thus, when a low pH is used, more precursor is needed for the same amount of the condensated titania, as some of the titania monomers stay dissolved [116].Since it is mostly the removal of OH − groups that lead to the formation of ions, hydrated, amorphous and small sized titania particles are more dissolvable than crystalline titania and titania bonded to larger particles like the silica [122] [123].The peptizing method, which is a different kind of sol-gel method, uses this constant equilibrium between titania monomers and condensated titania in an aqueous solution and the difference in dissolvability.During this method, hydrated precipitates are first formed in an aqueous solution and then slowly dissolved by reducing the pH to around 2 -4.Using the Ostwald ripening process, crystalline titania nanoparticles are then formed [122] [123] or coated on a silica support [15]. Another way to use the sol-gel method for low cost photocatalytic material is by coating a support, like a glass plate, with a thin film using the dip-coating method [124]- [133].During the dip-coating method, the support is dipped into a stable sol-gel mixture, and is then slowly pulled out of the mixture so that a thin layer of the mixture is adsorbed to the surface.During the drying, a thin titania film is then formed.Polymers (e.g.Poly (ethylene glycol)) can be used to obtain a higher porosity.By adding these large molecules in the sol-gel mixture, large pores are formed during the calcination step, when these molecules are removed. Another method which is often used for the synthesis of titania-silica composites is the hydrothermal treatment [26]- [31] [71] [72].The advantage of this method is that it can be used for both the coating step and the crystallization step (which will be discussed in 3.5).This method is done by adding a precursor, a solution containing some water and the silica (or silica source) to an autoclave.The solution is then heated up (e.g. to 200˚C) for both the reaction and crystallization step. Controlling the Hydrolysis Rate For a homogenous coating with the seeded-growth process, the concentration of titanium monomers should not exceed the critical supersaturation, and thus the hydrolysis rate needs to be controlled.In aqueous solutions with a neutral pH, the hydrolysis of titania precursors happens so fast that the concentration of the monomers reaches the critical supersaturation point almost instantly, causing the titania to precipitate randomly in the solution instead of slowly forming on the silica surface. 
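The seeded-growth condition stated above (keep the monomer concentration below the critical supersaturation) can be illustrated with a very small mass-balance model: monomers are produced by hydrolysis at a rate set by precursor addition and consumed by condensation onto the silica. Everything in the sketch, including the rate constants and the critical concentration, is a hypothetical placeholder used only to show the trade-off.

```python
# Toy mass balance for seeded growth: monomer is generated by hydrolysis of the
# added precursor and consumed by condensation onto silica seeds.  If the peak
# monomer concentration crosses an assumed critical supersaturation, homogeneous
# nucleation (random precipitation) would win.  All numbers are hypothetical.

def monomer_trace(addition_rate, k_condensation, c_crit, dt=1.0, steps=600):
    c = 0.0
    peak = 0.0
    for _ in range(steps):
        dc = addition_rate - k_condensation * c   # generation minus consumption
        c = max(c + dc * dt, 0.0)
        peak = max(peak, c)
    regime = "homogeneous nucleation likely" if peak > c_crit else "surface growth favored"
    return peak, regime

for rate in (0.5e-3, 2.0e-3, 8.0e-3):             # hypothetical addition rates (a.u./s)
    peak, regime = monomer_trace(addition_rate=rate, k_condensation=5e-3, c_crit=1.0)
    print(f"addition rate {rate:.1e}: peak monomer {peak:.2f} a.u. -> {regime}")
```

The slower two addition rates stay below the assumed critical level, while the fastest one overshoots it, which is the qualitative behavior the hydrolysis-control discussion above describes.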
The most important parameters on which the hydrolysis rate of titania precursors is dependent are: the pH, temperature, concentration of the precursor and of water, and type of precursor used.For example, in an aqueous solution with a pH below 1 and a temperature below 20˚C, no titania will form [13]- [16] [52]- [58].Having organic liquids (like n-propanol) [134] in the solvent increases the temperature and pH at which the titania is still soluble, because the dielectric constant of the solvent is then decreased.Having a low water content also prevents fast hydrolysis even when no acid is used [34] [81] [134].However, as each hydrolysis-condensation reaction consumes a water molecule, enough water should be present, to add new hydroxyl groups on the surface of the forming silica-titania composites.Another way to slow down hydrolysis is by reacting the precursors first with molecules, like glycols, which are larger than the side-groups of the precursor.These molecules can replace the side-groups of the precursors [124] [135] if added in excess, so that new, less reactive titania precursors are formed.Depending on the exact method, another important variable is the speed at which a parameter is changed, for example, the change of pH during the neutralization method, the addition speed of a precursor during a sol-gel method and the speed at which the temperature increases during a hydrothermal treatment. Transformation to Crystalline Titania When the hydrolysis rate is very slow during the reaction, thermodynamics plays a more important role than kinetics.Since crystalline titania is more energetically favorable than amorphous titania, crystallization of the titania can then directly happen, especially at a low pH, where the solubility difference between amorphous titania and crystalline titania is larger [122] [134] [136]- [143].However, the direct formation of crystalline titania is hard to control.If the hydrolysis is too slow, it can result in large rutile crystals with a low specific surface area, which is undesirable for the photocatalysis.In any other case, it is likely that most of the titania is amorphous titania after the reaction.Because amorphous titania has a much lower photocatalytic activity [96] [99], it can be beneficial to either use calcination or hydrothermal treatment to transform it into anatase. During the calcination of pure titania, the transformation of amorphous titania to anatase happens at a temperature of about 400˚C and at temperatures above 600˚C the transformation to rutile occurs [143].At these high temperatures, chemically bonded hydroxyl groups condensate with each other so that more bonds are formed between the titanium and oxygen atoms.Through rearrangements, the crystal structures are then slowly formed.Once a crystal is large enough to be stable, it will further increase in size by taking up more titania atoms, either through more rearrangements, or by merging with other crystals. 
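As a compact summary of the calcination behavior described above for pure (unsupported) titania, the sketch below maps a calcination temperature to the phase expected from the ~400 °C and ~600 °C thresholds quoted in the text. Real outcomes also depend on particle size, pH history, additives and on whether the titania is anchored to a support, so this is a coarse rule of thumb rather than a predictive model.

```python
# Coarse rule of thumb for pure, unsupported titania based on the thresholds
# quoted in the text: amorphous -> anatase near ~400 C, anatase -> rutile above
# ~600 C.  Silica-supported titania shifts both thresholds upward, so this
# mapping should not be applied to the composites directly.

def expected_phase(calcination_temp_c: float) -> str:
    if calcination_temp_c < 400:
        return "mostly amorphous titania"
    if calcination_temp_c <= 600:
        return "anatase (amorphous fraction decreasing with temperature)"
    return "increasing rutile content (growing crystals, lower specific surface area)"

for t in (300, 450, 650, 900):
    print(f"{t} C -> {expected_phase(t)}")
```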
Besides the calcination in dry air, it is also possible to use hydrothermal treatment for the formation of crystalline titania [25]- [31] [71] [72] [145]- [149].Since the formation of crystalline titania takes place in an aqueous environment during a hydrothermal treatment hydroxyl groups are incorporated into the formed structure which can be helpful for the photocatalytic activity.Hydrothermal treatment works at lower temperatures than calcination because the water increases the mobility of the atoms, reduces surface tension of the titania and catalyzes nucleation of crystals [145]- [149].Wang and Ying [147] showed that using a hydrothermal treatment on amorphous titania, smaller and more stable titania nanoparticles were produced than with calcination. The exact temperature at which the transformation to either anatase or rutile happens during both calcination and hydrothermal treatment depends on the size of the particles (according to Banfield et al. [144] below a size of 14 nm, anatase is more thermodynamically stable than rutile), the pH and other chemicals (e.g.adsorbed polymers, salts) that can influence the mobility of the atoms [117] [134] [136] [140] [142].The formation of anatase or rutile from amorphous titania does not start at a single point where all amorphous material crystallizes into anatase or into rutile.By using higher temperatures, more amorphous titania will transform into anatase.However, higher temperatures will also transform anatase into rutile and increase the growth rate of the crystals, which leads to a smaller specific surface area [72] [134] [136]- [143]. When titania is chemically bonded to a substrate like silica, the substrate stabilizes the different structures of titania, and suppresses the transformation of amorphous titania to anatase and the transformation of anatase to rutile by decreasing the mobility of the titania atoms like an anchor [32] [33] [54] [72] [76] [80] [129].Thus, higher temperatures are required to form anatase and rutile when titania is coated on silica.While more energy is needed for the formation of anatase from amorphous titania on a support, the anatase that is then formed has a higher thermal stability.It has even been reported that the anatase-rutile transformation only happens in some composites with a high temperature of 1000˚C [54] [129].The increase in temperature required for the crystalline transformations depends on the thickness of the titania, since a thicker layer is less influenced by the support [54].For the titania-silica composites, the crystal growth by calcination can cause shrinkage stress when the titania structure shrinks due to the density increase and removal of chemically and physically adsorbed water.As the silica works like an anchor against the shrinkage, stress is produced on the structure which can lead to the breakage of some Ti-O-Si bonds [150]. The Influence of Silica on Photocatalytic Titania in Low Titania Content Composites Titania-silica composites have more different properties than pure titania than simply a higher stability and a higher specific surface area, especially when the titania content is very low.Many researchers have studied the low titania composites because of these different properties.Anpo et al. 
[5] were one of the first who studied them.Using the CVD method on a porous silica glass, they found some interesting results which include: 1) below three layers, no anatase could be measured with X-ray diffraction, while it could still be present; 2) the band-gap became larger (4.1 eV) for just a monolayer titania; 3) the titanium was 4-coordinated in a tetrahedral structure instead of the 6-coordinated octahedral structure in pure anatase or rutile; 4) the tetrahedral titania with a large band gap catalyzed different reactions like the decomposition of N 2 O as will be explained in Section 4.1; and 5) the photocatalytic efficiency per amount of catalyst was much higher for low titania content composites, which will be explained in Section 4.3. The Larger Band Gap and Its Influence on the Photocatalytic Activity The band gap of titania increases when going from bulk anatase to the tetrahedral titania.The normal band gap for crystalline titania is around 3.0 -3.2 eV, but the measureable band gap from a very low amount of titania on the surface of silica can be much larger [5] [21] [68] [69] [151] [152].There are two effects responsible for this increase.The first is the quantum size effect, which increases the band gap with decreasing crystal size, when the size is below 2 nm [68].The second effect is caused by the difference in energy levels of the energy bands from silica and the energy bands from titania, close to the titania-silica interface [1] [21] [68].Band gaps up to 4.1 eV [5] [69] could be measured due to these two effects.When the band gap becomes larger, electrons require more energy to be excited to the conduction band.For the applications that use sun-light or normal indoor light as the light source, this larger band gap is a disadvantage, since even less of the light spectra can then be absorbed. On the other hand, the energy that is absorbed is used more efficiently because of the larger band gap.A larger band gap lowers the chance for recombination and increases the redox potentials of the excited electrons and holes.This higher redox potential increases the efficiency of the formation of the radical hydroxyl and superoxides molecules and enables the titania to catalyze different reactions [5] [17]- [20].For example, Yamashita et al. [18] measured that pure titania transformed NO mostly into oxidized species, while NO decomposed to N 2 and O 2 in the presence of composites prepared with an ion-exchange method, in which titanium ions replaced silicon ions.In the same system [19] and similar systems with other zeolites [20], the same observation was made with the reaction of CO 2 and H 2 O.With the ion-exchange composites, methanol was mostly produced while methane was produced by the titania samples.Another example of the difference in catalytic reactions taking place is from a study by Gao et al. [17] who observed that in the presence of tetrahedral titania, methanol reacted to methyl formate (C 2 H 4 O 2 ) and formaldehyde (CH 2 O) while in the presence of octahedral titania, methanol reacted to dimethyl ether (C 2 H 6 O).While photocatalytic titania has some potential in reducing the amount of greenhouse gasses in air [153], these reactions show that the composites have an even greater potential to be useful against climate change. 
Higher Density of Hydroxyl Groups Binary metal oxides often have better catalytic properties than single metal oxides because they have extra acid sites on their surface in the form of hydroxyl groups [1] [154].The titania-silica composites are one of those binary systems, and an increase in acid sites has been measured in several different studies [1] [18] [37] [56] [74] [80] [83].The increase in hydroxyl groups is important for the photocatalytic activity and hydrophilicity as these depend on the amount of hydroxyl groups on the surface.Tanabe et al. [154] made the hypothesis that this increase in hydroxyl groups is caused by the difference in coordination numbers.The coordination number for silicon atoms in silica is 4 and for titanium atoms in crystalline titania it is 6.So when titanium atoms are introduced in, or on silica in low amounts and form the tetrahedral structure, an excess of negative −2 charge per titanium atom is created.This excess charge causes Brönsted acidity on the surface after absorbing enough protons to compensate the charge.Walter et al. [82] showed, using neutron diffraction, that the number of hydroxyl groups is indeed affected when titanium atoms are introduced into the silica structure by the difference in coordination number.They showed an increase in hydroxyl groups mainly caused by the increase in strain in the structure.Liu et al. [74] and Doolin et al. [80] both used the sol-gel method to make titania-rich and titania-poor composites and compared them to pure titania and silica.They measured indeed an increase in Brönsted acidity in the composites especially where there were Ti-O-Si bonds, which was in agreement with Tanabe.However, for the titania-poor composites, the increase in acidity was lower than for titania-rich composites, which is in disagreement with the model of Tanabe.So far, no model has been proposed yet, that explains the extra hydroxyl groups better than the model of Tanebe et al., but these studies about the extra hydroxyl groups do show that the mechanism is related with the Ti-O-Si bond [1]. Higher Photocatalytic Efficiency of Low Titania Content Composites Other researchers [17] [35] [62] [67] [81] [83] found similar results as Anpo et al. [5] on different low titania content composites and these researchers often observed an increase in photocatalytic efficiency per amount of catalyst compared to pure titania.The high efficiencies of these low titania content composites, which do not even have enough titania for a full monolayer, are caused by: 1) the high specific surface area of the silica supports used; 2) the ability of the silica to adsorb many molecules for longer times than titania, especially with the extra hydroxyl groups; 3) the fact that the titania is used more efficiently since all the titania is at the surface; 4) the higher redox potentials of the electrons and holes; and 5) the fact that silica can scatter the light to the titania without being able to absorb its energy.In addition, during the photocatalytic measurements in these studies, UV-light was used.The measurements using UV-light might not represent the applications which use sun-light as the light source, since the decrease of possible light absorption caused by the increase in band gap is less with UV-light. 
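The charge-imbalance argument of Tanabe et al. summarized above can be written down as a two-line calculation: the guest cation keeps its own coordination number while every oxygen keeps the coordination number it has in the host oxide, and the excess charge is the sum of the resulting bond-charge differences. The sketch below reproduces the −2 value per titanium atom quoted in the text; it is a schematic rendering of the model as summarized in this review, not of the original paper's notation.

```python
# Tanabe-style bond-charge estimate for a dilute guest oxide in a host oxide.
# The guest cation keeps its own coordination number; oxygen keeps the
# coordination number it has in the host.  Excess negative charge is compensated
# by protons, i.e. Bronsted acid sites.

def tanabe_excess_charge(cation_valence, cation_coordination,
                         oxygen_valence, oxygen_coordination_host):
    charge_per_bond_cation = cation_valence / cation_coordination
    charge_per_bond_oxygen = -oxygen_valence / oxygen_coordination_host
    return cation_coordination * (charge_per_bond_cation + charge_per_bond_oxygen)

# TiO2 (Ti4+, 6-coordinated in crystalline titania) dispersed in SiO2
# (where O2- is 2-coordinated):
print(tanabe_excess_charge(4, 6, 2, 2))   # -> -2.0 excess charge per Ti
```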
Conclusions The titania-silica composites are interesting materials because they have the potential to make photocatalytic materials more cost-effective.For the same level of photocatalytic activity, fewer resources have to be invested with the titania-silica composites than with pure titania.The titania-silica composites can, with less and cheaper material, have the same photocatalytic efficiency as pure titania for a longer time since the composites can have a higher photocatalytic efficiency, lower production costs and increased durability.The applications of the photocatalytic material including the applications that degrade pollutants, become then more attractive for companies to produce on a larger scale which can eventually lead to an overall improvement of the quality of air and water. To obtain this cost-effective photocatalytic material, the titania-silica composites need to be synthesized with a method that has low production costs but still produces composites which have a high photocatalytic efficiency. • For a high efficiency, the method needs to deposit an anatase layer with thickness of maximum 5 nm on a large specific surface area.Any layer larger than 5 nm will have titania, which does not contribute to the photocatalysis, since it is too far away from the surface.• If some of the crystal structure is rutile instead of anatase, it can have some increase in photocatalytic efficiency because of the separation of holes and electrons at the interface of the two forms.However, the amount of rutile should not be too high, as the lifetime and conductivity of excited electrons and holes in rutile are less favorable than in anatase.• When the crystal size is too small, the titania may have an increase in band gap.While it has been reported that such an increase in band gap can cause higher photocatalytic efficiencies, it is important to note that it will make the titania absorb less visible light.• The titania should be chemically bonded to a silica substrate which has a large specific surface area, high mechanical and thermal stability as well as low production costs.• The most promising methods for low cost photocatalytic composites are the ones that involve hydrolysis (precipitation and sol-gel methods), as these methods ensure that more than one layer of titania can form without the need of expensive materials.However, the hydrolysis of titania precursors can be hard to control.The most important parameters on which the hydrolysis rate is dependent are the pH, temperature, concentration of water and precursor, the speed at which these parameters are changed during the reaction (e.g. by addition of water) and the type of precursor used.How much influence each parameter has on the hydrolysis rate depends on the method used.It is important that the reaction speed of the hydrolysis should be slow enough so that the condensation of titania monomers on the substrate's surface is more likely to happen than polymerization between monomers.• For the transformation of amorphous titania to anatase, calcination or hydrothermal treatment can be applied. 
The temperature and time required to obtain anatase crystals from amorphous titania depend on the mobility of the titania molecules, which can be influenced by: the chemical bonds to the silica, any nucleated crystals already present, and other chemicals present (e.g.adsorbed polymers, salts).While having more crystalline anatase is beneficial for the photocatalytic activity, crystallization does not always produce materials with a higher photocatalytic efficiency, since during the growth of the crystals, the specific surface area is reduced and anatase can transform into rutile at high temperatures.When all these points are fulfilled, the resulting titania-silica composites will have the required properties to be a cost-effective material which can compete with pure titania in photocatalytic applications.Even with the increased complexity, the composites are an excellent alternative to pure titania nanoparticles. Figure 1 . Figure 1.Schematic drawing of the photocatalytic activity of titania.1: The absorption of a photon; 2: The excitation of an electron to the conduction band; 3: The transport of the electron and hole from the initial point to reach the surface of titania where the electron and hole can react with an adsorbed molecule.
9,499
2015-11-13T00:00:00.000
[ "Chemistry", "Materials Science" ]
Exploring the Influence of Nanocrystalline Structure and Aluminum Content on High-Temperature Oxidation Behavior of Fe-Cr-Al Alloys The present study examines the high-temperature (500–800 °C) oxidation behavior of Fe-10Cr-(3,5) Al alloys and studies the effect of nanocrystalline structure and Al content on their resistance to oxidation. The nanocrystalline (NC) alloy powder was synthesized via planetary ball milling. The prepared NC alloy powder was consolidated using spark plasma sintering to form NC alloys. Subsequently, an annealing of the NC alloys was performed to transform them into microcrystalline (MC) alloys. It was observed that the NC alloys exhibit superior resistance to oxidation compared to their MC counterparts at high temperatures. The superior resistance to oxidation of the NC alloys is attributed to their considerably finer grain size, which enhances the diffusion of those elements to the metal–oxide interface that forms the protective oxide layer. Conversely, the coarser grain size in MC alloys limits the diffusion of the oxide-forming components. Furthermore, the Fe-10Cr-5Al alloy showed greater resistance to oxidation than the Fe-10Cr-3Al alloy. Introduction The materials used in high-temperature applications must exhibit the required resistance to oxidation at such temperatures.Fe-Cr-Al alloys are extensively used in hightemperature applications such as boilers/steam generators and as heating elements [1] for various applications such as furnaces, gas burners, furnace rollers, and ignitors.These materials are also commonly used in solar power systems as construction materials [2], automobiles as catalyst support [2], and nuclear power plants as fuel cladding materials against fuel accidents [2][3][4] because of their prominent resistance to high-temperature oxidation and neutron irradiation.Researchers have also investigated the oxidation characteristics of various Fe-Cr-Al alloys to identify conditions under which a protective α-Al 2 O 3 layer can fully develop at high temperatures [1,2,[5][6][7][8][9][10][11][12].As α-Al 2 O 3 is a much more defect-free oxide (than Cr 2 O 3 ), an alumina layer provides superior protection at temperatures as high as 1350 • C [6,13].During the oxidation of alumina-forming Fe alloys such as Fe-Al and Fe-Cr-Al alloys, less protective transients of Al 2 O 3 (such as γ, θ, and δ-Al 2 O 3 ) can form at temperatures below 900 • C, but it converts into the most stable form of Al 2 O 3 (i.e., α-Al 2 O 3 ) at higher temperatures.The formation of a robust layer of Al 2 O 3 requires ~10-15 at% Al in Fe-Al alloys [14].However, such high Al contents have a deleterious influence on the mechanical properties (particularly ductility), which restricts the use of such alloys in load-bearing structural components [11,15].In this respect, it is highly attractive to find means for fully developing an alumina layer on Fe alloys, without using excessively high Al contents.One such approach is to use Fe-Cr-Al alloys, where the addition of Cr enables the development of an alumina layer at much lower Al contents.The chromium addition to Fe-Al alloys for remarkably lowering the critical content of aluminum required for the development of Al 2 O 3 is known as the "Third Element Effect" of Cr [9,16,17], where the addition of chromium to Fe-Al alloys enables the full development of a protective layer of α-Al 2 O 3 [5,9,17]. 
Studies [18] have established that a grain size reduction in Fe-Cr alloys to a nano-size regime reduces the critical concentration of oxide-forming element required for the development of a protective oxide layer.Similarly, investigations of the oxidation characteristics of Fe-Cr-Al alloys have also demonstrated that a reduction in grain size to a nano-regime reduces the critical concentration of aluminum required for the development of a robust Al 2 O 3 layer [19,20].The grain size reduction to nano levels enhances the diffusion through the grain boundary in the alloy by three to five orders of magnitude, which enables development of the protective oxide layer, thereby enhancing resistance to oxidation [21,22].Another aspect is the spallation characteristics since Fe-Cr-Al alloys suffer oxide scale spallation under cyclic oxidation.In addition, Nanocrystalline (NC) structure can help resist oxide scale spallation (since the grain boundaries act as sites for oxide anchoring), which is beneficial for accommodating thermal stresses in the oxide scale, thereby resisting spallation, such as during thermal cycling [23,24].These beneficial effects of nanocrystalline structure motivated us to investigate the influence of a nanocrystalline structure on the oxidation of Fe-Cr-Al alloys with different Al contents over a temperature range of 500-800 • C. Although the beneficial effects of a nanocrystalline structure on oxidation are widely recognized, this aspect has been minimally explored for Fe-Cr-Al alloys, as most studies on the oxidation of Fe-Cr-Al alloys [1,5,9,10,16,[25][26][27][28] have focused on microcrystalline (MC) alloys.However, we recently reported the influence of a nanocrystalline structure on Fe-Cr-Al alloys at various temperatures [19,20].The oxidation behaviors of a Fe-20Cr-3Al alloy [20] and Fe-20Cr-5Al alloy [19] showed chromium to promote the formation of Al 2 O 3 .However, the combined influence of aluminum content and the nanocrystalline structure on the oxidation of Fe-Cr-Al alloys remains a relatively unexplored territory.Therefore, the present study primarily focused on examining the effects of aluminum content and a nanocrystalline structure on the oxidation behaviors of Fe-10Cr-(3,5) Al (wt%) alloys at 500, 700, and 800 • C. The resulting oxide scales were characterized using various characterization techniques to gain insights into their composition and structure. Synthesis of Nanocrystalline (NC) and Microcrystalline (MC) Fe-10Cr-(3,5) Al Alloys Fe-10Cr-(3,5)Al alloys (NC and MC) were synthesized through a powder metallurgy route.Fe, Cr, and Al powders were ball milled to synthesize the NC Fe-10Cr-(3,5)Al alloy powders, following the reported procedure for the ball milling of Fe-10Cr-3Al alloys [29].The consolidation of the milled Fe-10Cr-(3,5)Al alloy powder into a pellet with a 20 mm diameter was performed using spark plasma sintering (Dr.Sinter SPS-5000 Machine, Sumitomo Metals, Tokyo, Japan).The consolidation process utilized specifically optimized parameters: a temperature of 900 • C, pressure of 90 MPa, heating rate of 100 • C/min, holding time of 2 min, and a vacuum level of 0.1 Pa.The consolidated pellets are called NC Fe-10Cr-(3,5)Al alloys in the present study.Subsequently, the NC pellets were annealed in a tubular furnace at 900 • C for 20 h under a forming gas composed of 95% argon and 5% hydrogen.This annealing process transformed the NC structure into an MC structure, and the latter is called an MC Fe-10Cr-(3,5)Al alloy in the present study. 
For an oxidation test, the surfaces of the consolidated alloy discs were polished using SiC papers up to a grit size of 2000, followed by a final polishing with a 0.1 µm diamond paste before oxidation.Isothermal oxidations of both the NC and MC Fe-10Cr-(3,5)Al alloys were conducted at 500, 700, and 800 • C in a tubular furnace for 60 h.The weights of the samples after oxidation at predetermined intervals of time were measured using an electronic balance (Sartorius CP225D, Göttingen, Germany).Subsequently, the oxidation kinetic plots were generated based on the collected weight gain data.The oxidation test runs were performed in triplicate for each test condition in order to examine the reproducibility. 2.2.Characterization of NC and MC Pellets before and after Oxidation X-ray diffraction (XRD) patterns of the spark plasma sintered (SPSed) and annealed pellets of Fe-10Cr-(3,5) Al alloys were generated using a diffractometer (X'Pert pro, PANalytical, Almelo, The Netherlands) with CuK α radiation (λ = 0.154056 nm), utilizing a step size of 0.02 • and a duration of 20 s per step.The crystallite sizes of the alloys were calculated using the modified Williamson-Hall (MWH) technique [30] after considering peak broadening due to the XRD instrument, following a procedure described elsewhere [31]. The oxide scales formed upon oxidation at high temperatures were investigated using XRD.Field emission gun-scanning electron microscopy, FEG-SEM, (JEOL JSM-7600F, Tokyo, Japan) was used to characterize the surface morphology of the oxide layer.Elemental depth profiles of Fe, Cr and Al in the oxide scales developed in 60 h at different oxidation temperatures on both the NC and MC variants of the alloy were generated using timeof-flight secondary ion mass spectroscopy, TOF-SIMS (Physical Electronics/PHI TRIFT V NANO TOF, Chanhassen, MN, USA).TOF-SIMS depth profiles were obtained using a Cs + ion primary sputter beam (energy: 2 keV, raster size: 50 µm, sputter time: 2 s), Ga ion analysis beam (energy: 30 keV, raster size: 800 µm), and current of 1.7 mA. Oxidation Kinetics The grain sizes of NC Fe-10Cr-3Al, NC Fe-10Cr-5Al, MC Fe-10Cr-3Al, MC Fe-10Cr-5Al were determined to be 91 ± 6 nm, 92 ± 8 nm, 0.9 ± 0.05 µm, and 0.7 ± 0.04 µm, respectively, following the procedure described in Section 2.2.The oxidation kinetics for the Fe-10Cr-5Al (Figure 1a) and Fe-10Cr-3Al (Figure 1b) alloys showed similar weight gains (w) after 60 h of oxidation at 500 • C, indicating that the NC structure has no discernible impact on the oxidation.However, the NC structure considerably influences the oxidation resistances of both the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 700 • C and 800 • C. The MC Fe-10Cr-5Al alloy exhibited about four and five times greater weight gains compared to the NC Fe-10Cr-5Al alloy after 60 h of oxidation at 700 • C (Figure 1c) and 800 • C (Figure 1e), respectively.Similarly, the MC Fe-10Cr-3Al alloy showed ~3 and ~14 times higher weight gains than the NC Fe-10Cr-3Al alloy after 60 h of oxidation at 700 • C (Figure 1d) and 800 • C (Figure 1f), respectively.The remarkable effect of a nanocrystalline structure on the oxidation resistance of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at the higher temperatures was investigated through a post-oxidation examination of the oxide scales developed on these alloys at 700 • C and 800 • C. 
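The grain sizes quoted above were extracted from the XRD patterns with the modified Williamson-Hall technique [30]. As a simplified illustration of the idea, the sketch below implements the classical, unmodified Williamson-Hall analysis (which omits the dislocation-contrast weighting of the modified variant): β·cosθ is fitted against 4·sinθ to separate a size term from a strain term. The peak list in the example is made up.

```python
# Classical Williamson-Hall analysis: beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
# A straight-line fit over several reflections gives the crystallite size D from
# the intercept and the microstrain eps from the slope.  (The paper uses the
# modified W-H method, which additionally weights each reflection by a contrast
# factor; this sketch shows only the basic construction.)

import math

WAVELENGTH_NM = 0.154056  # Cu K-alpha, as used in the paper
K = 0.9                   # shape factor (assumed)

def williamson_hall(peaks):
    """peaks: list of (two_theta_deg, fwhm_deg), already corrected for instrument broadening."""
    xs, ys = [], []
    for two_theta, fwhm in peaks:
        theta = math.radians(two_theta / 2.0)
        beta = math.radians(fwhm)
        xs.append(4.0 * math.sin(theta))
        ys.append(beta * math.cos(theta))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return K * WAVELENGTH_NM / intercept, slope   # (size in nm, microstrain)

# Hypothetical, made-up peak list: (2theta in degrees, FWHM in degrees)
demo_peaks = [(44.7, 0.42), (65.0, 0.55), (82.3, 0.68)]
size, strain = williamson_hall(demo_peaks)
print(f"crystallite size ≈ {size:.0f} nm, microstrain ≈ {strain:.4f}")
```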
The oxidation rate (rate of weight gain) generally increases with an increase in the oxidation temperature. Consistently, the Fe-10Cr-5Al and Fe-10Cr-3Al alloys showed notably higher weight gains at 700 °C than at 500 °C (Figure 2). However, upon a further increase in temperature to 800 °C, the NC Fe-10Cr-5Al alloy exhibited only a slight increase in weight gain relative to that observed at 700 °C after 60 h. On the other hand, the MC Fe-10Cr-5Al alloy showed a slightly higher weight gain at 800 °C after 60 h of oxidation. In the case of the Fe-10Cr-3Al alloy, the NC alloy exhibited a considerably lower weight gain at 800 °C compared to that at 700 °C, whereas the MC alloy exhibited a slightly higher weight gain at 800 °C compared to 700 °C. The increase in temperature (from 700 to 800 °C) has a relatively insignificant role in the oxidation resistances of the Fe-10Cr-5Al (NC and MC) and Fe-10Cr-3Al (MC) alloys, and their oxidation resistances are similar to that of the MC Fe-20Cr-3Al alloy at those temperatures [20]. This observation suggests that a more protective oxide scale forms on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 800 °C compared to that at 700 °C. The oxidation kinetic constants (Kp for parabolic law and Kc for cubic law) for the Fe-10Cr-5Al and Fe-10Cr-3Al alloys were calculated from their weight gain plots (Figure 2) at 500, 700, and 800 °C, and they are listed in Table 1.

Table 1. Oxidation kinetic laws and kinetic constants of Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 500, 700, and 800 °C.
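The kinetic constants in Table 1 follow from fitting the weight-gain curves to (Δw)² = Kp·t (parabolic law) or (Δw)³ = Kc·t (cubic law). The sketch below shows that fitting procedure on made-up weight-gain data; it is a generic least-squares illustration, not the authors' script, and the choice between laws is made here simply by comparing residuals of the two transformed fits.

```python
# Fit weight-gain data to parabolic ((dw)^2 = Kp*t) and cubic ((dw)^3 = Kc*t)
# oxidation laws through the origin and report which law fits better.
# The data points below are made up for illustration.

def fit_through_origin(x, y):
    k = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    ss_res = sum((yi - k * xi) ** 2 for xi, yi in zip(x, y))
    return k, ss_res

def oxidation_law(times_h, weight_gain_mg_cm2):
    kp, res_p = fit_through_origin(times_h, [w ** 2 for w in weight_gain_mg_cm2])
    kc, res_c = fit_through_origin(times_h, [w ** 3 for w in weight_gain_mg_cm2])
    best = "parabolic" if res_p <= res_c else "cubic"
    return {"Kp (mg^2 cm^-4 h^-1)": kp, "Kc (mg^3 cm^-6 h^-1)": kc, "better fit": best}

t = [5, 10, 20, 40, 60]                 # oxidation time, h
dw = [0.21, 0.30, 0.42, 0.60, 0.73]     # weight gain, mg/cm^2 (hypothetical)
print(oxidation_law(t, dw))
```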
Oxide Morphology The oxide scales developed on both the Fe-10Cr-5Al (Figure 3a,b) and Fe-10Cr-3Al (Figure 4a,b) alloys at 500 °C for 60 h exhibited similar whisker/flake-like Fe oxides scattered over the entire surface. These flakes make the surface of the oxide scales porous and considerably reduce their protective properties. The broad similarity of the oxide scale morphologies of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys, consisting primarily of whiskers (Figure 3a,b), is consistent with the similar weight gains exhibited by the two after oxidation at 500 °C for 60 h (Figure 1a,b). However, the scale developed on the NC Fe-10Cr-5Al alloy at 700 °C appeared to be closely packed, faceted crystals of submicron size (Figure 3c), whereas a considerably porous, faceted crystal oxide formed on the MC Fe-10Cr-5Al alloy at 700 °C (Figure 3d). On the other hand, the MC Fe-10Cr-3Al alloy exhibited a porous oxide morphology at 700 °C, which was similar to that formed on its NC counterpart. In contrast, the oxide scales that developed on the Fe-10Cr-5Al (Figure 3e,f) and Fe-10Cr-3Al (Figure 4e,f) alloys at 800 °C were considerably more compact than those formed at 700 °C. The oxide layers developed on the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys were considerably more compact, with faceted crystals, than those formed on their MC counterparts, indicating a positive effect of the NC structure on the oxidation resistances of these alloys.
Phase Composition for Oxide Scale Figure 5 shows the XRD spectra of oxides developed on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys after 60 h at 500, 700, and 800 °C. The oxide scales developed on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys after 60 h at 500 °C were predominantly composed of Fe2O3. In contrast, the oxide scales developed on the NC Fe-10Cr-(3,5)Al alloys at 700 and 800 °C after the same duration were rich in Cr2O3, while those on their MC counterparts at the same temperatures after the same duration were rich in Fe2O3. The occurrence of Fe2O3-rich scales on the MC alloys is consistent with the higher weight gains exhibited by the MC alloys than those of the NC alloys at 700 and 800 °C (Figure 1).

TOF-SIMS Depth Profile of Oxide Scale TOF-SIMS profiles of the oxide scales developed on the Fe-10Cr-5Al (Figure 6a,b) and Fe-10Cr-3Al (Figure 7a,b) alloys after 60 h of oxidation at 500 °C are similar, indicating the formation of chemically similar (Fe2O3-rich) oxides on both alloys. The broad Fe peak (TOF-SIMS profiles) indicates the development of a thick oxide on both the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 500 °C. On the basis that the fast-growing iron oxide wustite (FeO) is thermodynamically stable only at temperatures above 570 °C, the broad Fe peak is attributed to the Fe2O3 that is stable at lower temperatures. However, the Fe2O3-rich oxide layer grows thick because the combined effect of diffusivity and the alloy microstructure is not able to facilitate a sufficiently rapid enrichment of Cr (or Al) that could facilitate the development of a contiguous layer of Cr (or Al) oxide. In this respect, it may be relevant to note that the oxidation of a Fe-10Cr alloy at temperatures such as 300 and 350 °C (at which the growth of the Fe2O3-rich oxide layer is sluggish), when combined with a nanocrystalline structure, was found to facilitate sufficient Cr enrichment to enable the development of a contiguous layer of Cr oxide [18]. As a result, the NC Fe-10Cr alloy showed a remarkably superior oxidation resistance compared to its MC counterpart [18]. In contrast, as described earlier, the combination of 500 °C and an NC structure does not enable the development of a contiguous layer of Cr oxide in the cases of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys, and hence, both the NC and MC alloys oxidized at similar rates at 500 °C (Figure 1a,b), which is duly corroborated by the similarity of the intensities and breadths of the TOF-SIMS profiles for Fe. However, the NC structure did facilitate greater Cr and Al accumulation in the oxide layer, as reflected in the minor Cr and Al peaks for the NC alloy in Figures 6a and 7a.
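The qualitative reading of the TOF-SIMS profiles above (a broad Fe signal means a thick Fe-rich scale; stronger Cr and Al signals mean more protective enrichment) can be made semi-quantitative by extracting a peak width and intensity ratios from each depth profile. The sketch below does this for made-up profile data; sputter time is used as a proxy for depth, and none of the numbers come from the paper.

```python
# Reduce a TOF-SIMS depth profile (intensity vs. sputter time) to two crude
# descriptors: the full width at half maximum of the Fe signal (a proxy for
# oxide-scale thickness) and the peak Cr/Fe intensity ratio (a proxy for Cr
# enrichment).  All profile values below are invented for illustration.

def fwhm(times, signal):
    half = max(signal) / 2.0
    above = [t for t, s in zip(times, signal) if s >= half]
    return max(above) - min(above) if above else 0.0

def describe_profile(times, fe, cr):
    return {"Fe FWHM (sputter s)": fwhm(times, fe),
            "peak Cr/Fe ratio": max(cr) / max(fe)}

times = list(range(0, 200, 10))   # sputter time, s
fe_nc = [40, 120, 300, 380, 300, 180, 90, 40, 20, 10] + [5] * 10
cr_nc = [20, 90, 260, 340, 240, 120, 50, 20, 10, 5] + [3] * 10
fe_mc = [30, 80, 150, 260, 340, 390, 380, 330, 260, 190,
         130, 90, 60, 40, 25, 15, 10, 8, 6, 5]
cr_mc = [5, 10, 20, 30, 40, 45, 40, 35, 28, 20,
         15, 10, 8, 6, 5, 4, 3, 3, 2, 2]

print("NC:", describe_profile(times, fe_nc, cr_nc))
print("MC:", describe_profile(times, fe_mc, cr_mc))
# A narrower Fe FWHM and a higher Cr/Fe ratio for the NC profile mirror the
# thinner, Cr-enriched scales reported for the NC alloys.
```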
The TOF-SIMS profiles for the oxide scales developed on the Fe-10Cr-5Al (Figure 6c,d) and Fe-10Cr-3Al (Figure 7c,d) alloys after 60 h of oxidation at 700 °C show the peaks of Fe along with Cr. The Fe peak for the MC Fe-10Cr-5Al alloy (at 700 °C) is considerably broader than that for the NC Fe-10Cr-5Al alloy. In addition, the intensities of the Fe and Cr peaks of the NC Fe-10Cr-5Al alloy are ~3 and ~10 times greater than those of the MC Fe-10Cr-5Al alloy. However, the intensity of the Cr peak in the case of the NC Fe-10Cr-3Al alloy is about six times higher than that of the MC Fe-10Cr-3Al alloy. These observations suggest that enhanced diffusivity in the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys facilitated the full development of an oxide on the NC alloys, aligning with their lower oxidation rate compared to the MC alloys at 700 °C.

At 800 °C, the intensities of the Cr and Al peaks of the NC Fe-10Cr-3Al alloy exceed those of the MC Fe-10Cr-3Al alloy by ~5 and ~6 times, respectively. The broad and low-intensity peaks of Cr and Al in the case of the MC alloys at 800 °C suggest a wider distribution of Cr and Al over a larger scale thickness, indicating a considerably less protective oxide formation on the MC alloys compared to the NC alloys. These observations are consistent with the lower weight gains of the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys compared to their MC counterparts at 800 °C (Figure 1e,f).

Discussion Fe2O3-rich mixed oxide scales formed on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 500 °C in the early periods of oxidation, which are similar to those formed on the Fe-20Cr-3Al alloy [20]. Upon further oxidation, the Fe2O3 scale grows rapidly in comparison to the Cr2O3 and Al2O3 due to the unavailability of sufficient Cr and Al for the formation of a protective oxide layer at the surfaces of the alloys. The formation of Fe2O3 also dominates over both the influence of Al addition and the impact of the NC structure on the alloys. Consequently, both the Fe-10Cr-5Al and Fe-10Cr-3Al alloys exhibit the formation of an Fe2O3-rich oxide and a similar oxidation behavior (parabolic oxidation kinetics) at 500 °C, indicating that the 10 wt% Cr is not sufficient for the development of a continuous layer of Cr2O3 at that temperature. However, the NC structure efficiently influences the oxidation behaviors of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 700 °C.
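The enhanced diffusivity invoked above for the NC alloys is often expressed with a Hart-type mixing rule, D_eff = f·D_gb + (1 − f)·D_l with f ≈ q·δ/d, so that a finer grain size d raises the grain-boundary contribution. The sketch below applies that rule with assumed, order-of-magnitude diffusivities and a typical boundary width; none of the numbers are taken from this paper, and the point is only the grain-size trend.

```python
# Hart-type estimate of the effective diffusivity in a polycrystal:
#   D_eff = f * D_gb + (1 - f) * D_lattice,  with  f ~ q * delta / d,
# where d is the grain size, delta the grain-boundary width and q a geometric
# factor.  All diffusivity values below are assumed, order-of-magnitude numbers.

DELTA_NM = 0.5        # grain-boundary width (assumed)
Q = 3.0               # geometric factor (assumed)
D_LATTICE = 1e-19     # m^2/s, assumed lattice diffusivity at the test temperature
D_GB = 1e-14          # m^2/s, assumed grain-boundary diffusivity

def effective_diffusivity(grain_size_nm: float) -> float:
    f = min(Q * DELTA_NM / grain_size_nm, 1.0)   # volume fraction of boundaries
    return f * D_GB + (1.0 - f) * D_LATTICE

for label, d in [("NC (~90 nm)", 90.0), ("MC (~900 nm)", 900.0)]:
    print(f"{label}: D_eff ≈ {effective_diffusivity(d):.2e} m^2/s")
# The ~90 nm grains of the NC alloys give roughly an order of magnitude higher
# effective diffusivity than the ~0.9 um MC grains in this toy estimate.
```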
The NC structure of the alloys effectively enhances the diffusivities of the alloy constituents owing to the high density of grain boundaries. The higher diffusivities promote the formation of an oxide containing substantially more Cr2O3 on the NC alloy than on the MC alloy (due to the limited diffusivity in the latter). Consequently, the oxide scales developed on the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys consisted of a higher proportion of Cr2O3 than those on their MC counterparts. The formation of a higher-Cr2O3-containing oxide with faceted crystals in the scale morphology suggests the development of a more protective oxide layer on the NC alloys than on the MC alloys. Consequently, the NC Fe-10Cr-5Al alloy exhibits superior oxidation resistance to the MC Fe-10Cr-5Al alloy at 700 °C. Although the morphologies of the oxides developed on the NC Fe-10Cr-3Al and MC Fe-10Cr-3Al alloys at 700 °C are essentially the same, the higher concentration of Cr2O3 in the oxide layer of the NC alloy enhances its resistance to oxidation compared to that of the MC alloy.

The impact of a nanocrystalline structure on the oxidation resistance of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at 800 °C is clearly exhibited by the oxide scales developed on these alloys. The compact and faceted crystals of the Cr2O3-rich oxide scales formed on the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys manifest in a noticeably lower weight gain than their MC counterparts. The oxide scales developed on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys in the initial period of oxidation developed very rapidly at 800 °C compared with those at 500 and 700 °C, especially for the NC alloy (Figure 8a,e). In addition, the diffusion coefficients of all constituents of the alloys at 800 °C are the same [32-34]. Rapid oxidation and similar diffusivities do not promote the formation of distinct oxide layers for the individual constituents of the alloys. As a result, an Fe2O3-rich mixed oxide scale forms in the initial period of oxidation (Figure 8b,f). Upon increasing the oxidation time, the oxide scale becomes enriched with Cr2O3, as Cr and Al have greater affinities for oxygen than Fe, and the Cr content of the alloys is higher than that of Al (Figure 8c,g). The formation of a continuous layer of a Cr2O3-rich oxide on the NC Fe-10Cr-5Al alloy impedes the diffusion of both metal and oxygen ions (Figure 8h), and hence, the NC Fe-10Cr-5Al alloy follows cubic oxidation kinetics. Conversely, the limited diffusivity of the MC Fe-10Cr-5Al alloy restricts the development of a continuous layer of Cr2O3 on the MC alloy (Figure 8d). Hence, the MC Fe-10Cr-5Al alloy exhibited a higher oxidation rate than the NC Fe-10Cr-5Al alloy and follows parabolic oxidation kinetics. Further, the scale formed on the Fe-10Cr-3Al alloys follows a similar mechanism to that of the Fe-10Cr-5Al alloy. The post-oxidation characterizations of the oxides formed on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys demonstrate the formation of an insignificant amount of Al2O3 on the alloys.
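The two kinetic regimes invoked in this discussion correspond to the standard integrated rate laws for the weight gain per unit area, Δm; a schematic summary (k_p and k_c are generic rate constants, not fitted values from this work):

    \Delta m^{2} = k_{p}\,t \quad \text{(parabolic kinetics: scale growth limited by diffusion through the oxide)}
    \Delta m^{3} = k_{c}\,t \quad \text{(cubic kinetics: further-impeded growth, consistent with a continuous Cr$_2$O$_3$-rich barrier)}

On logarithmic axes these laws appear as Δm-versus-t slopes of 1/2 and 1/3, respectively, which is how the two regimes are typically distinguished.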
The characterization of an oxide scale after 60 h of oxidation using XRD and TOF-SIMS suggests that although there may be isolated instances of Al2O3 formation on the oxide scale, no continuous layer of protective Al2O3 formed on the oxide scale. Neither the Fe-10Cr-3Al nor the Fe-10Cr-5Al alloy exhibited the formation of an Al2O3 layer after oxidation for 60 h at 500, 700, and 800 °C. The absence of a continuous Al2O3 layer indicates that the addition of 3 and 5 wt% Al with 10 wt% Cr in Fe is not sufficient for the full formation of a layer of Al2O3. However, our previous studies [19,20] suggest that 3 wt% Al and 5 wt% Al with 20 wt% Cr in Fe are sufficient for the full development of an Al2O3 layer.

Conclusions

The nanocrystalline (NC) structure of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys does not enhance the oxidation resistance of the alloys at 500 °C. However, the NC structure plays a remarkable role in the oxidation resistances of the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at higher temperatures. Fe2O3-rich oxide scales form at 500 °C on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys. In contrast, Cr2O3-rich scales form on the NC Fe-10Cr-5Al and NC Fe-10Cr-3Al alloys at 700 and 800 °C, whereas Fe2O3-rich scales form on the MC Fe-10Cr-5Al and MC Fe-10Cr-3Al alloys at these higher temperatures. A continuous layer of Al2O3 does not form on the Fe-10Cr-5Al and Fe-10Cr-3Al alloys at the oxidation temperatures (500-800 °C), suggesting that the added Al and Cr are not sufficient for the full formation of a layer of Al2O3.

Figure 3. Morphologies of the oxide scales formed on the Fe-10Cr-5Al alloy in 60 h of the oxidation of the NC alloy at (a) 500 °C, (c) 700 °C, and (e) 800 °C and the MC alloy at (b) 500 °C, (d) 700 °C, and (f) 800 °C.

Figure 4. Morphologies of the oxide scales formed on the Fe-10Cr-3Al alloy in 60 h of the oxidation of the NC alloy at (a) 500 °C, (c) 700 °C, and (e) 800 °C and the MC alloy at (b) 500 °C, (d) 700 °C, and (f) 800 °C.
Evidence for phonon hardening in laser-excited gold using x-ray diffraction at a hard x-ray free electron laser

Studies of laser-heated materials on femtosecond timescales have shown that the interatomic potential can be perturbed at sufficiently high laser intensities. For gold, it has been postulated to undergo a strong stiffening leading to an increase of the phonon energies, known as phonon hardening. Despite efforts to investigate this behavior, only measurements at low absorbed energy density have been performed, for which the interpretation of the experimental data remains ambiguous. By using in situ single-shot x-ray diffraction at a hard x-ray free-electron laser, the evolution of diffraction line intensities of laser-excited Au at a higher energy density provides evidence for phonon hardening.

INTRODUCTION

Over the past few decades, femtosecond optical-pump optical-probe measurements have enabled the investigation of ultrafast phenomena taking place in semiconductors (1, 2) and metals (3-6) driven far from equilibrium. In these experiments, because of the small momentum of optical photons and the mass ratio between the electrons and the nuclei, the optical laser pulse transfers its energy primarily to the electronic subsystem, leaving the lattice initially unperturbed. This process can lead to exotic phenomena. For instance, simulations have predicted that under strong excitation, the interatomic potential of silicon softens to the point where transverse acoustic phonon modes become unstable and initiate a rapid disordering (7, 8). Similarly, a softening of optical phonon modes caused by a solid-solid phase transition was observed in photoexcited bismuth using ultrafast x-ray diffraction measurements (9). In contrast, it has been postulated that the lattice response of metals upon strong optical excitation is fundamentally different. Density functional theory simulations performed on laser-excited gold (Au) predict that, when electrons near the Fermi surface are heated to a few electronvolts while the lattice remains cold and at solid ambient density, a near-instantaneous stiffening of the interatomic potential occurs, caused by an increase of the strength of the metallic bonding (8, 10). This hardening causes an increase of the phonon frequencies across the Brillouin zone and is referred to as phonon hardening. This behavior is expected to have an appreciable impact on the thermodynamic properties of Au under ultrafast intense irradiation, as phonons contribute to intrinsic thermodynamic quantities such as the constant-volume specific heat, entropy, and internal energy, and hence would affect the melting behavior. In addition, phonon hardening is not a unique property of laser-excited Au but is also expected to occur in other face-centered cubic (fcc) metals such as Al, Cu, and Pt (8, 10). Here, we focus on Au as a model system, allowing us to draw comparisons between previous experimental and theoretical studies.

The investigation of these phenomena requires measurements at the atomic scale with a time resolution on the order of phonon frequencies, which have recently become possible with the development of hard x-ray free electron lasers (XFELs) (11-13) and ultrafast electron diffraction facilities with megaelectronvolt energies (14-16).
Ernstorfer et al. (17) have attempted to show evidence of phonon hardening by performing ultrafast electron diffraction measurements from tens-of-nanometers-thick Au foils excited using a femtosecond optical laser pulse. They used a two-temperature model (TTM) (18) to describe heating of the electronic and lattice subsystems after laser irradiation, combined with analysis of the intensity decay of the (2 2 0) diffraction line using the Debye-Waller theory. These observations were compared with simulations, which assumed an increase in the Debye temperature, Θ_D, in laser-excited Au, an expected consequence of phonon hardening (8). However, recent modeling work using the ambient value of Θ_D (i.e., without assuming phonon hardening) (10) was able to reproduce the experimental data described in (17). As a result, the experimental observation of phonon hardening in laser-excited Au is still questioned, and further investigations are required to provide evidence of this exotic behavior. More recently, also using ultrafast electron diffraction, Mo et al. (16) investigated the response of ultrafast-heated, nanometer-thick Au foils, but the electron temperature achieved was not sufficient to investigate phonon hardening (10).

Here, we describe the use of x-ray diffraction at a hard XFEL to measure the temporal evolution of the (1 1 1), (2 0 0), and (2 2 0) diffraction lines of laser-excited Au at an absorbed energy density of 6.4 ± 0.8 MJ/kg, an energy density more than two times larger than previously reported values (17) and for which the increase of Θ_D is expected to be larger. The use of the ultra-bright x-ray pulses generated by the XFEL enables the collection of diffraction patterns on a single-shot basis with a temporal resolution of ~20 fs, an improvement in resolution by a factor of ~10 compared to previous measurements. In addition, the reciprocal-space resolution achieved in this measurement is 1.6 × 10⁻³ Å⁻¹, approximately two times better than electron diffraction measurements (15). As a result, we are sensitive to subtle changes of the diffraction peak positions, which allows us to only consider ambient solid-density measurements, reducing the impact of complex hydrodynamic effects, and excluding density-change effects, on the data interpretation. We find that the measured decay of the diffraction peak intensities is best explained by an increase of Θ_D and hence that our data provide evidence of the existence of phonon hardening in strongly excited Au.

Experimental method

Experiments were conducted at the Matter in Extreme Conditions (MEC) endstation (19) of the Linac Coherent Light Source (LCLS) at SLAC National Accelerator Laboratory. A schematic of the experimental configuration is shown in Fig. 1. Free-standing 59-nm-thick Au foils (SciTech Ltd.) were irradiated using the MEC short-pulse laser system (20) frequency-doubled to 400 nm, providing ~222 ± 14 μJ in a 50-fs laser pulse. The spatially Gaussian optical laser pulse was focused to a spot size of ~100 μm by 100 μm full width at half maximum (FWHM) at the target plane position. A transmission image of the laser spot at the target plane is shown in the bottom left inset of Fig. 1.
Our films, excited to 6.4 ± 0.8 MJ/kg, were probed by the x-ray pulses at different time delays ranging from −2 to 3 ps with respect to the optical laser pulse on a single-shot basis. The timing between the optical laser pulse and the x-ray pulse was measured on a shot-to-shot basis using the time tool system available at the MEC endstation and was found to have an accuracy of 17 fs (see Materials and Methods).

Two-dimensional (2D) x-ray diffraction patterns were collected in transmission through the sample in a Debye-Scherrer geometry using a photon energy of 10.896 keV (λ = 1.138 Å). Examples of azimuthally integrated 1D diffraction patterns from laser-excited Au are shown in the top right inset of Fig. 1 for different time delays. The (1 1 1), (2 0 0), and (2 2 0) diffraction lines of Au are indicated and are indexed to give a lattice parameter of 4.071 ± 0.003 Å, in good agreement with the literature value (21). Weak diffraction lines from nickel (Ni), originating from the Ni mesh grid supporting the Au foils, can also be observed due to a low-intensity halo surrounding the focused x-ray beam at the target plane. This halo originates from the unfocused x-ray beam overfilling the beryllium compound refractive lenses (CRLs) used for focusing of the x-ray beam [the geometric aperture of the lenses is typically ~1 mm, and the unfocused beam is ~2 mm in diameter at the lens position (22)]. The scattering signal from the halo contributes to the entire diffraction pattern in the same manner, is estimated to be ~20× smaller than the unheated Au signal, and is hence negligible in this analysis (see Materials and Methods). The contribution from unexcited Au is found to be similarly small. As a result, the diffraction intensity from both unexcited Au and Ni is neglected in the analysis.
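For reference, the indexing of the fcc lines at this wavelength can be reproduced in a few lines (the numerical inputs are taken from the text; the script itself is ours):

    import numpy as np

    wavelength = 1.138  # Angstrom (10.896 keV)
    a = 4.071           # fcc lattice parameter of Au, Angstrom

    for hkl in [(1, 1, 1), (2, 0, 0), (2, 2, 0)]:
        d = a / np.sqrt(sum(m**2 for m in hkl))   # interplanar spacing
        q = 2.0 * np.pi / d                       # peak position in reciprocal space
        two_theta = 2.0 * np.degrees(np.arcsin(wavelength / (2.0 * d)))
        print(hkl, f"d = {d:.3f} A, Q = {q:.3f} 1/A, 2theta = {two_theta:.1f} deg")

For example, the (2 0 0) line lands at Q ≈ 3.09 Å⁻¹, consistent with the value quoted later in Materials and Methods.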
Extraction of the Debye temperature

In the case of phonon hardening, the increase of the phonon energies upon excitation is driven by the excitation of the electronic system and is fundamentally different from the effect of density changes on the phonon dispersion relation. As a result, it is essential to ensure that the density of the system remains constant and equal to the ambient-conditions value when investigating this exotic phenomenon. At our excitation conditions, the energy flow between the hot electrons and the lattice is fast enough to initiate a density change after 1 ps and drive a solid-liquid phase transition within 3 ps (see fig. S4). As a result, the measurements presented here were limited to a maximum time delay of 1 ps to satisfy the conditions for the investigation of phonon hardening. At later time delays, the diffraction line positions shift, indicating the onset of density changes (cf. Materials and Methods). Given this short temporal window, our improved time resolution of ~20 fs and the brightness of the x-ray pulse were essential to measure the intensity decay over the entire subpicosecond temporal window available on a single-shot basis. This is in contrast with the work of Ernstorfer et al. (17), which achieved a time resolution of ~400 fs and required data averaging. In addition, data for which solid-liquid coexistence was observed were included in their analysis, which violates the assumption of phonon hardening.

Having established the temporal window for the measurement of phonon hardening, we normalize the diffraction lines of Au by the total number of counts recorded on the detector to account for fluctuations in the x-ray pulse energy. Each diffraction line is then integrated to give the intensity for each diffraction line, I_hkl, with h, k, and l corresponding to Miller indices. These intensities are finally normalized by the value without laser excitation, I⁰_hkl, and the result is shown by the open symbols in Fig. 2A for the (1 1 1), (2 0 0), and (2 2 0) diffraction lines of Au. More information on the extraction of the normalized intensity decay can be found in Materials and Methods.

The intensity decay of the (1 1 1), (2 0 0), and (2 2 0) diffraction lines over time exhibits the Q² behavior expected from the Debye-Waller theory (23) and indicates an increase of the lattice temperature (see Materials and Methods). As a result, the decay of the diffraction lines of Au can be quantified by introducing the Debye-Waller factor, which requires knowledge of both the lattice temperature and Θ_D. The first is simulated using a TTM, described in detail in Materials and Methods. At an absorbed energy density of 6.4 ± 0.8 MJ/kg, the maximum electron temperature reaches 3.5 ± 0.3 eV (41.1 ± 3 kK), and the lattice temperature is found to be 3.4 ± 0.3 kK (0.3 ± 0.03 eV) at the longest time delay of 1 ps. These values were obtained using parameters calculated with density functional theory simulations by Smirnov (10). However, the value of Θ_D at our excitation conditions is unknown. If its value is assumed to remain constant at the ambient value of 170 K (24), the simulated intensity decay (dashed curves in Fig. 2A) shows a clear deviation from our data. This suggests that the Debye temperature is changing at our excitation conditions. Here, unlike previous studies (17), no a priori assumption on the value of the Debye temperature is made. Instead, it is treated as a free parameter and is estimated by matching the simulated intensity decay to the experimentally measured value for each diffraction line and each time delay. More information on this procedure is found in Materials and Methods.

The extracted time-dependent Debye temperature is shown in Fig. 2B. The data show that the experimental diffraction intensities collected at an absorbed energy density of 6.4 ± 0.8 MJ/kg are consistent with an increase of the Debye temperature. Given the uncertainty on the deduced Debye temperature, the positive-time-delay data were found to be best described using a constant value of 265 ± 18 K, as shown by the dashed-dotted horizontal line in Fig. 2B. This value is used to produce the solid lines in Fig. 2A. The reported uncertainty considers the uncertainty on the timing between the optical laser pulse and the x-ray pulse, the uncertainty on the absorbed energy density, and the uncertainty on the measured diffraction line intensity (see Materials and Methods). The uncertainty at the 1σ level is shown by the shaded areas in Fig. 2.

DISCUSSION

To simulate the evolution of the lattice temperature, T_l, using a TTM, we need to know the electron-phonon coupling rate, g_ep. However, there is no consensus on its value at our excitation conditions (25). To account for the influence of this parameter, we extract the Debye temperature using different electron-phonon coupling rates found in the literature: Smirnov (10), Holst et al. (26), Lin et al. (27), and Migdal et al. (28).
The values obtained from (26, 27) used in this analysis correspond to upper bounds on the temperature-dependent electron-phonon coupling rate, while the value obtained from Migdal et al. (28) corresponds to a lower bound. While the calculations from Smirnov (10), Holst et al. (26), and Lin et al. (27) are all based on the work by Allen (29), Smirnov uses the full spectral function, whereas Holst et al. (26) use its representation by the mass enhancement factor. Migdal et al. (28) use a slightly different theory, but more importantly, the density of states for Au is described using parabolic functions with the position of the electronic d-band kept fixed at an experimental value for all the electronic temperatures explored in their work. Note that for each model, the electron-phonon coupling rate and the electron heat capacity are calculated with the same electronic density of states.

The results for the different electron-phonon coupling rates are shown with different symbols in Fig. 3A. The Debye temperature calculated for each model considered in this work shows that an increase is necessary to explain our experimental data. Our measurements are compared with predictions from density functional theory (open squares and dashed line in Fig. 3A). The predicted Debye temperatures are found by matching the lattice heat capacity calculated within the Debye model with the one calculated from the phonon density of states obtained from first-principles calculations. The increase of the Debye temperature is a direct consequence of the increase of the phonon mode energies with increasing electron temperature and thus does not require the use of the TTM. To show this, we reproduce the work of Recoules et al. (8) and calculate the phonon dispersion of Au at 3- and 4-eV electron temperatures (we achieved 3.5 ± 0.3 eV in this experiment) using the projector augmented wave method (30) as implemented in the ABINIT package (31-33). We use the Jollet-Torrent-Holzwarth (34) atomic dataset for Au within the local density approximation (35) and a plane-wave expansion up to a 30-Ha cutoff. The Brillouin zone is sampled with a 32 × 32 × 32 k-point grid. Phonons are calculated on an 8 × 8 × 8 q-point grid. When increasing the electronic temperature, we include excited occupied states. We find that the energy of the phonon modes increases across the entire Brillouin zone, as shown in Fig. 3B. This leads to a shift of the phonon density of states to higher energies and hence an increase of the Debye temperature. In Fig. 3A, the Debye temperatures shown with the open squares at 3 and 4 eV are calculated from the lattice heat capacity using the phonon density of states corresponding to the phonon dispersions in Fig. 3B.

From Fig. 3A, we observe that our measurements show an increase of Θ_D for all the values representative of the uncertainty on the electron-phonon coupling rate and thus provide further evidence for phonon hardening in laser-excited Au.
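The heat-capacity matching used to assign a Debye temperature to a first-principles phonon density of states can be sketched as follows (the Debye-like stand-in DOS below is ours; in practice g(ω) would come from the ABINIT calculation):

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    kB, hbar = 1.380649e-23, 1.054571817e-34

    def c_debye(T, theta_d):
        # Debye-model lattice heat capacity per atom, in units of kB
        x_d = theta_d / T
        val, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x)**2, 0.0, x_d)
        return 9.0 * (T / theta_d)**3 * val

    def c_from_dos(T, omega, g):
        # Heat capacity per atom (units of kB) from a normalized phonon DOS g(omega)
        x = hbar * omega / (kB * T)
        return 3.0 * np.trapz(g * x**2 * np.exp(x) / np.expm1(x)**2, omega)

    # Stand-in Debye-like DOS; replace with the first-principles DOS in practice
    theta_ref = 265.0
    w_d = kB * theta_ref / hbar
    omega = np.linspace(w_d / 1e4, w_d, 2000)
    g = 3.0 * omega**2 / w_d**3

    T_grid = np.linspace(50.0, 300.0, 20)
    cost = lambda th: sum((c_from_dos(T, omega, g) - c_debye(T, th))**2 for T in T_grid)
    best = minimize_scalar(cost, bounds=(100.0, 500.0), method="bounded")
    print(f"matched Debye temperature: {best.x:.1f} K")  # recovers ~265 K for this stand-in

The same matching with the DOS computed at 3- and 4-eV electron temperatures would yield the elevated Debye temperatures shown as open squares in Fig. 3A.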
We note that only the value from Lin et al. (27) was considered in the analysis from Ernstorfer et al. (17). Because of its high electron-phonon coupling rate, it predicts an upper bound on Θ_D, as seen from Fig. 3A. However, the experimental data from Ernstorfer et al. (17) were also found to be in good agreement when using the values of the electron-phonon coupling rate and the electron heat capacity from Smirnov (10). For completeness, we also considered the opposite behavior, corresponding to a sudden disappearance of bonding as observed in cubic diamond-structured semiconductors (1, 2, 8), and showed that this scenario does not reproduce the experimentally measured intensity decay of the diffraction peaks. The results of this analysis can be found in the Supplementary Materials.

In this study, we used in situ x-ray diffraction measurements at a hard XFEL to investigate phonon hardening in ultrafast laser-excited Au. With the much-improved time resolution of our measurement and the high brightness of the x-ray pulse, we provided additional evidence for phonon hardening at higher absorbed energy densities compared to previous studies (17). This work extends previous work by considering different values for the temperature-dependent electron-phonon coupling rate when calculating the lattice temperature and by showing that an increase of the Debye temperature is required to explain our experimental observations for all coupling rates considered.

Here, we used a TTM to characterize the energy transfer between the electron subsystem and the lattice subsystem. While being widely used in the literature, this model assumes that all phonon branches in the system equilibrate instantaneously. To address this limitation, more sophisticated models such as the nonlinear model (NLM) proposed by Waldecker et al. (36) or the out-of-equilibrium dynamical model by Maldonado et al. (37) have been introduced. For this reason, we also considered the NLM, in addition to the TTM, to describe heating of the various phonon branches by the electron subsystem. However, the NLM requires knowledge of several additional coupling parameters compared to the TTM (phonon-phonon coupling rates for each phonon branch). Following the methodology outlined by Waldecker et al. (36), we performed an approximate calculation of these parameters. The simulation results indicate that the mean square atomic displacement aligns with the predictions of the TTM. Additional details can be found in the Supplementary Materials. Last, the model introduced by Maldonado et al. (37) surpasses the NLM by incorporating wave-vector-dependent coupling parameters. While the calculation of these parameters at various electronic temperatures is in principle feasible with ab initio calculations, this falls beyond the scope of this manuscript. For these reasons, we conclude that the TTM is the most suitable model available to describe heating of a Au lattice using ultrashort laser irradiation.
For the specific case of phonon hardening, recent developments at hard XFELs could provide an avenue to directly observe this behavior by measuring the increase of the phonon energies following laser excitation. This could be achieved using inelastic x-ray scattering with millielectronvolt resolution. This technique has been extensively used at synchrotron light sources (38) and has recently been fielded at XFELs (39-41) to take advantage of the exquisite temporal resolution required for ultrafast dynamics such as phonon hardening. We have demonstrated an energy resolution of 22 meV, sufficient to resolve phonon modes near the edge of the Brillouin zone in ambient Au (42). In the presence of phonon hardening, the shift of the phonon energies to higher values is expected to be measurable by a shift of the inelastic components to larger energy transfers.

Experimental details

The LCLS was operated in the hard x-ray self-seeding beam mode (43) at an incident x-ray photon energy of 10.896 keV with a nominal pulse duration of 50 fs and a bandwidth of 1 eV (ΔE/E). Using beryllium CRLs (Be CRLs) located 4 m upstream of the MEC vacuum chamber, the x-rays were focused on target to a spot size of ~20 μm by 20 μm FWHM.

The target consisted of 59 ± 2-nm-thick (measured along the x-ray propagation direction) polycrystalline Au foils grown by Scitech Precision Ltd. and deposited on top of a nickel (Ni) mesh grid from Goodfellow Cambridge Ltd. with a wire diameter of 41 μm, resulting in open squares of 340 μm in width, over which free-standing Au foils were suspended. The free-standing Au samples were irradiated at 0° incidence using a 750-mm focal length concave mirror operated at an angle of ~11° with respect to the incident x-ray beam. The absorption ratio of solid-density Au at a wavelength of 400 nm is taken to be 0.47 ± 0.05 based on ex situ reflection and transmission measurements of our thin films and corresponds to the fraction of the incident optical laser energy deposited inside the film. Considering the area probed by the x-ray pulse, with these parameters, our samples were excited to an absorbed energy density of 6.4 ± 0.8 MJ/kg, corresponding to an absorbed laser intensity of 1.5 ± 0.2 × 10¹³ W/cm² and an absorbed laser fluence of 0.7 ± 0.1 J/cm².
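As a rough cross-check of the quoted energy density, one can assume a uniform 20 μm × 20 μm spot rather than the Gaussian profile actually used; this crude geometry is ours, which is why the estimate comes out somewhat below the reported 6.4 MJ/kg:

    rho_au = 19300.0      # density of Au, kg/m^3
    thickness = 59e-9     # foil thickness, m
    spot = 20e-6          # x-ray spot size (FWHM), treated here as a uniform square, m
    e_spot = 4.9e-6       # optical energy within the x-ray spot (see below), J
    absorption = 0.47     # measured absorption ratio at 400 nm

    mass = rho_au * spot * spot * thickness
    print(f"~{absorption * e_spot / mass / 1e6:.1f} MJ/kg")  # ~5 MJ/kg with this crude geometry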
On-shot timing between the x-ray pulse and the optical laser pulse

Description of the time tool

Given the ultrafast nature of phonon hardening, the relative time of arrival between the optical laser pulse and the x-ray pulse needs to be known with high accuracy. For this, the MEC endstation uses the ultrafast excitation of a yttrium-aluminum-garnet (YAG) window after irradiation by an x-ray pulse to monitor the relative timing between the optical laser pulse and the x-rays; this diagnostic is referred to as the time tool. Here, a 100-μm-thick YAG window is positioned at normal incidence to the x-ray beam, 4 m upstream of the vacuum chamber. Upon irradiation by the x-ray pulse, the excitation causes the window to become opaque to optical radiation as the carrier density increases. A leakage of the optical laser pulse then impinges on the YAG window at a 45° incidence angle, such that the temporal information is encoded on one of the spatial axes. Typical images of the time tool are shown in Fig. 4A. The transmission decreases when the x-ray pulse impinges on the YAG window first (blue area in Fig. 4A). By monitoring the position of the intensity recovery edge along the horizontal direction on a shot-to-shot basis, the time delay between the x-ray pulse and the optical laser pulse can be measured with ~20-fs accuracy. Here, the intensity edge is found from the first derivative of the transient time tool signal (orange solid line), and the edge position is defined as the pixel number corresponding to the maximum value, as indicated by the vertical dashed line in Fig. 4 (A and B). When computing the first derivative, the raw signal is first smoothed using a Gaussian filter with a standard deviation of 40 pixels to remove high-frequency noise (blue curve in Fig. 4B).

However, the time tool only provides the relative timing between the two pulses. For the analysis shown here, an absolute timing is necessary. This is obtained by correlating the timing measurement from the time tool with a secondary measurement performed on a 100-μm-thick YAG window positioned at the target plane inside the vacuum chamber. The latter can be used to determine the order of arrival between the x-ray pulse and the optical pulse on a shot-to-shot basis. We define τ_X-ray and τ_Optical as the times of arrival of the x-ray pulse and the optical pulse at the target plane, respectively. By taking advantage of the inherent temporal jitter between the two pulses, one samples both τ_X-ray ≤ τ_Optical and τ_X-ray ≥ τ_Optical, as shown in Fig. 5 (A and B). This allows the determination of the pixel position on the time tool images corresponding to τ_X-ray = τ_Optical. This procedure is shown in Fig. 5C. Red circles correspond to the pixel positions on the time tool images for which τ_X-ray ≤ τ_Optical. Blue squares correspond to the pixel positions on the time tool images for which τ_X-ray ≥ τ_Optical. The pixel position corresponding to τ_X-ray = τ_Optical is found by finding the position of the line that best separates the two datasets. This is achieved using a support vector machine with a linear kernel. This line defines zero time delay between the two pulses and corresponds to Δt = 0 ps. The pixel position corresponding to the edge on the time tool is then converted to absolute time, Δt, using the time tool calibration (2.5 fs/pixel) and the pixel position corresponding to Δt = 0 ps.

Estimation of the timing uncertainty between the optical laser pulse and the x-ray pulse

The uncertainty on Δt is the combination of the relative timing uncertainty measured at the time tool and the uncertainty on the determination of the pixel position corresponding to Δt = 0 ps. The relative timing uncertainty is estimated from the uncertainty on the determination of the edge position on the time tool images and is found to be 11 fs.

The uncertainty on the pixel position corresponding to Δt = 0 ps is estimated from the classification boundary shown in Fig. 6. We notice that the two datasets are not perfectly separable. The uncertainty is taken to be the maximum distance between the outliers and the boundary and translates into a timing uncertainty of 13 fs. The uncertainties from both sources are lastly combined to give a timing uncertainty of 17 fs.
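Both steps — edge finding and zero-delay classification — reduce to a few lines; a sketch assuming per-shot 1D time-tool traces (the function names are ours):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from sklearn.svm import SVC

    def edge_pixel(trace, sigma=40):
        # Smooth with a 40-pixel Gaussian, then take the first-derivative maximum
        smooth = gaussian_filter1d(np.asarray(trace, float), sigma)
        return int(np.argmax(np.gradient(smooth)))

    def zero_delay_pixel(edge_pixels, xray_first):
        # Linear SVM boundary separating shots where the x-rays arrived first
        # from the rest; in 1D the decision boundary sits at -intercept/coefficient
        clf = SVC(kernel="linear")
        clf.fit(np.asarray(edge_pixels, float).reshape(-1, 1), xray_first)
        return -clf.intercept_[0] / clf.coef_[0][0]

    # absolute delay in fs, using the 2.5 fs/pixel calibration quoted above:
    # delay_fs = (edge_pixel(trace) - zero_delay_pixel(edges, labels)) * 2.5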
Estimation of the intensity from the (1 1 1) diffraction line of Ni

Because of the lattice parameter of fcc Ni at ambient conditions, the (1 1 1) diffraction line of Ni coincidentally overlaps with the (2 0 0) diffraction line of Au. Its contribution is estimated from x-ray diffraction data collected long after melting. The azimuthally integrated 1D x-ray diffraction pattern measured at 11 ps is shown in Fig. 6A. It shows a broad feature characteristic of a disordered state (liquid state) along with Au and Ni solid diffraction peaks. The solid peaks originate from the low-intensity halo surrounding the focused x-ray beam and, hence, far from the area irradiated by the optical laser. The diffraction lines are fitted using a pseudo-Voigt lineshape, allowing the extraction of the integrated intensity for each diffraction peak, shown in Fig. 6B. One observes that the intensities of the (1 1 1) (red squares) and (2 0 0) (green circles) diffraction lines of Au and the intensity of the (2 0 0) diffraction line of Ni (black triangles) reach a plateau above a time delay of 6 ps. These values are associated with the intensity scattered from the halo surrounding the focused x-ray beam. From the intensity of the (2 0 0) diffraction line of Ni, one concludes that this intensity is independent of the time delay. By comparing the intensity between early time delays (within the gray area in Fig. 6B) and long time delays (above 6 ps), we find that the summed contribution of the (2 0 0) diffraction line of ambient Au and the (1 1 1) diffraction line of ambient Ni contributes 20 times less to the intensity measured at ~3.08 Å⁻¹ (green circles) and is thus neglected in the Debye-Waller analysis. For this reason, the green circles are labeled as "(2 0 0) Au." Furthermore, the contribution from ambient Au to the diffraction intensity within the first picosecond is also neglected, as the intensity of the (1 1 1) diffraction line above 6 ps is almost two orders of magnitude weaker. For this reason, the analysis assumes that the measured diffraction pattern is free of scattering from ambient materials.

Extraction of the normalized intensity decay of the diffraction lines

Each diffraction line of Au is fitted using a pseudo-Voigt lineshape, from which the integrated intensity I_hkl is calculated. To account for shot-to-shot fluctuations in the x-ray pulse energy, the integrated intensities are normalized by the total intensity recorded on the ePix10k x-ray detector. The values of the integrated intensity measured at ambient conditions without optical excitation, I⁰_hkl, are found to be linearly correlated with the total intensity recorded on the detector, as shown by the black dashed lines in Fig. 7 for each diffraction line. Here, the x-ray pulse energy alone did not provide a good normalization, as it is only measured at the exit of the undulator, hundreds of meters upstream of the MEC endstation, and does not account for fluctuations in the beamline transmission.
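In code, this calibration amounts to a one-dimensional linear fit over the x-ray-only shots (a sketch with hypothetical variable names):

    import numpy as np

    def expected_i0(total_counts, cal_totals, cal_i0):
        # Linear fit of x-ray-only shots: I0_hkl versus total detector counts
        slope, offset = np.polyfit(cal_totals, cal_i0, 1)
        return slope * np.asarray(total_counts) + offset

    # normalized decay for one diffraction line on a laser-on shot:
    # i_norm = i_hkl / expected_i0(shot_total, cal_totals, cal_i0)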
For all x-ray diffraction patterns recorded on laser-excited Au, the integrated diffraction intensity for each diffraction line is first calculated following the procedure described above and then normalized by the value without laser excitation. To calculate the latter, the total intensity recorded on the x-ray detector corresponding to the laser-excited diffraction pattern and the linear fits shown in Fig. 7 are used. The quality of the normalization can be appreciated from the negative time delays in Fig. 2A, as one expects these values to be unity.

Determination of the temporal window

Because phonon hardening is predicted to happen for a Au lattice at ambient solid density, only data corresponding to these conditions are considered for the Debye-Waller analysis. From the positions of the diffraction lines of Au shown in Fig. 8, we observe that the peak positions start shifting after a 1-ps time delay. For this reason, only data collected between −2- and 1-ps time delays are used for the Debye temperature measurement.

Justification for the Debye-Waller behavior of the diffraction line intensity decay

Within the Debye-Waller theory, the intensity decay of the diffraction peaks is given by (23)

    I_{hkl}(Q) / I^{0}_{hkl}(Q) = \exp\left[ -\tfrac{1}{3} Q^{2} \left( \langle u^{2} \rangle - \langle u_{0}^{2} \rangle \right) \right]    (1)

where Q is the momentum transfer and (1/3)⟨u²⟩ is the average mean square atomic displacement along each Cartesian direction. ⟨u₀²⟩ is the mean square displacement at ambient conditions. The use of the average mean square atomic displacement is justified here as the crystallographic orientation of the samples is lost due to their polycrystalline nature. If the diffraction peak intensity decays observed in Fig. 2A are a consequence of the increase in the motion of atoms about their equilibrium positions caused by an increase in temperature, then the decays are expected to follow a Q² dependence. Because the mean square atomic displacement is independent of the momentum transfer, one can compensate for the difference in the momentum transfer between each diffraction line, such that the intensities of the (1 1 1) and (2 0 0) diffraction lines can be compared with the (2 2 0) diffraction line intensity. The result of this procedure is shown in Fig. 9. Here, the (2 2 0) diffraction line is used as the reference for the comparison because no diffraction line from Ni is expected at this momentum transfer. For each time delay, the data compensated for the difference in the momentum transfer are shown with open squares for the (1 1 1) diffraction line (Fig. 9A) and the (2 0 0) diffraction line (Fig. 9B). The black dashed line corresponds to the Q² scaling expected from the Debye-Waller theory. We observe that the intensity decay of the (1 1 1) and (2 0 0) diffraction lines is consistent with the expected scaling up to a time delay of 1 ps, thus justifying the use of the Debye-Waller theory to interpret our data.
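The compensation step follows directly from Eq. 1; a sketch (Q values computed from the quoted lattice parameter; the function name is ours):

    import numpy as np

    # Bragg-peak momentum transfers (1/Angstrom) from a = 4.071 Angstrom
    Q = {(1, 1, 1): 2.673, (2, 0, 0): 3.087, (2, 2, 0): 4.365}

    def rescale_to_220(i_norm, hkl):
        # ln(I/I0) scales as -Q^2, so any line maps onto the (2 2 0) decay
        return np.exp(np.log(i_norm) * (Q[(2, 2, 0)] / Q[hkl])**2)

    # Debye-Waller behavior holds if rescale_to_220(i_111, (1, 1, 1)) falls on
    # the measured (2 2 0) decay (the unity-slope dashed line of Fig. 9)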
Debye temperature analysis methods

The evolution of the normalized diffraction line intensity is analyzed using the Debye-Waller theory, such that the Debye-Waller exponent and the intensity decay are given by

    2W(Q, T_{l}) = \frac{\hbar Q^{2}}{2M} \int_{0}^{\omega_{D}} \frac{g(\omega)}{\omega} \coth\left( \frac{\hbar \omega}{2 k_{B} T_{l}} \right) d\omega, \qquad g(\omega) = \frac{3\omega^{2}}{\omega_{D}^{3}}, \quad \omega_{D} = \frac{k_{B} \Theta_{D}}{\hbar}    (2)

    I_{hkl} / I^{0}_{hkl} = \exp\left\{ -\left[ 2W(Q, T_{l}) - 2W(Q, T_{l}^{0}) \right] \right\}    (3)

where ω is the phonon frequency, M is the atomic mass, k_B is the Boltzmann constant, and T_l is the lattice temperature. Θ_D is the Debye temperature. The ambient-temperature Debye-Waller factor, 2W(Q, T_l⁰), is calculated using the literature value of Θ_D = 170 K (24).

Evaluating Eqs. 2 and 3 requires both the Debye temperature and the lattice temperature. Here, the first is treated as a free parameter that is calculated by matching the simulated and the experimental intensity decay of each diffraction peak. The lattice temperature is obtained from simulations of the energy flow between the hot electron population generated after laser irradiation and the lattice. In the ultrafast excitation of metals using optical laser pulses, a TTM is commonly used and is given by the coupled partial differential equations (Eq. 4) (18, 29)

    C_{e}(T_{e}) \frac{\partial T_{e}}{\partial t} = -g_{ep}(T_{e})(T_{e} - T_{l}) + S(t), \qquad C_{l} \frac{\partial T_{l}}{\partial t} = +g_{ep}(T_{e})(T_{e} - T_{l})    (4)

where T_e is the temperature of the electron population; C_l is the lattice heat capacity, taken to be equal to the Dulong-Petit limit (46); C_e(T_e) is the electron-temperature-dependent electron heat capacity; g_ep is the electron-temperature-dependent electron-phonon coupling rate describing the energy transfer between the electronic subsystem and the lattice; and S(t) is a Gaussian source term accounting for the heating of the electron subsystem by the optical laser pulse. In Eq. 4, both electronic and lattice heat conduction have been neglected, as the associated time scales are much longer than the time scale of this measurement. The characteristic time of electronic heat conduction is estimated to be τ_e = C_e L²/κ_e ~ tens of picoseconds using the value for the electron heat diffusivity coefficient κ_e (18, 47) and L = 50 nm. The lattice heat diffusion is also neglected since κ_l ≪ κ_e. At an absorbed energy density of 6.4 ± 0.8 MJ/kg, the electron temperature, simulated using the values from Smirnov (10), reaches a maximum of 3.5 ± 0.3 eV (41.1 ± 3 kK), and the simulated lattice temperature is found to be 3.4 ± 0.3 kK (0.3 ± 0.03 eV) at the longest time delay of 1 ps.

Given the laser parameters, the nonthermal electron population is expected to rapidly equilibrate through electron-electron collisions and is assumed to have fully thermalized on time scales shorter than electron-phonon interactions (48, 49), thus justifying the use of a TTM in this work. It is further assumed that no temperature gradient is present in our target. The laser pulse energy is deposited uniformly throughout the sample thickness by energetic ballistic electrons, as their effective absorption depth is ~50 nm at our excitation conditions (48, 49). At our excitation conditions, these electrons travel at the Fermi velocity [~10⁶ m/s for Au (50)], thus reaching the back side of our target ~50 fs after laser irradiation.
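A minimal numerical sketch of Eq. 4 (the closures ce, gep, and cl below are rough placeholders of ours, not the tabulated values from refs. (10, 26-28) actually used in the analysis):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Placeholder closures; replace with tabulated Ce(Te) and gep(Te) in practice
    ce = lambda te: 70.0 * te        # electron heat capacity, J m^-3 K^-1 (Sommerfeld-like)
    gep = lambda te: 2.5e16          # electron-phonon coupling, W m^-3 K^-1 (constant stand-in)
    cl = 2.5e6                       # lattice heat capacity, J m^-3 K^-1 (Dulong-Petit estimate)

    e_dep = 6.4e6 * 19300.0          # absorbed energy density, J/m^3 (6.4 MJ/kg x Au density)
    sigma = 50e-15 / 2.355           # 50-fs FWHM Gaussian source term S(t)
    source = lambda t: e_dep * np.exp(-0.5 * (t / sigma)**2) / (sigma * np.sqrt(2.0 * np.pi))

    def ttm(t, y):
        te, tl = y                   # Eq. 4, heat conduction neglected
        return [(-gep(te) * (te - tl) + source(t)) / ce(te),
                (+gep(te) * (te - tl)) / cl]

    sol = solve_ivp(ttm, (-0.2e-12, 1.0e-12), [300.0, 300.0], max_step=2e-15)
    print(f"Te(1 ps) ~ {sol.y[0, -1]:.0f} K, Tl(1 ps) ~ {sol.y[1, -1]:.0f} K")

With realistic, temperature-dependent coupling rates, this integration reproduces the qualitative behavior described above: a sharp electron-temperature spike during the pulse followed by lattice heating over the first picosecond.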
To extract the Debye temperature, the temporal evolutions of the electron and the lattice temperature are first simulated using a TTM with the source term corresponding to our laser excitation. It is taken to be a Gaussian profile with a duration of 50-fs FWHM and normalized to match the incident optical laser pulse energy. Last, the values for the electron-temperature-dependent electron heat capacity and the electron-temperature-dependent electron-phonon coupling rate are taken from the values found in the literature for Au (10, 26-28) and are shown in fig. S1. The temporal evolution of the electron and lattice temperatures calculated for each model is shown in fig. S2. From the temporal evolution of the lattice temperature, we invert Eqs. 2 and 3 to find the Debye temperature corresponding to each data point in Fig. 2A. This is numerically achieved using a least-squares procedure that minimizes the distance between the simulated intensity decay and the measured intensity decay for a given diffraction line at a given time delay.

We note that, in Fig. 2B, the Debye temperature at negative time delays reflects the experimental uncertainty on the measured intensity decay of the diffraction lines. For these points, the right-hand side of Eq. 3 is unity, and the Debye temperature should be identical to the literature value used to calculate the ambient-temperature Debye-Waller factor. However, the left-hand side of Eq. 3 is not exactly unity due to experimental uncertainties, and we express this as a varying Debye temperature to determine the precision of our measurement and to confirm that the increase measured at positive time delays is statistically relevant.
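The inversion step can be sketched as follows (two_w implements the Debye-model exponent of Eq. 2; theta_from_decay is our illustrative name, and the numerical inputs are only examples):

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    kB, hbar = 1.380649e-23, 1.054571817e-34
    M_AU = 3.27e-25  # mass of one Au atom, kg

    def two_w(q, T, theta_d):
        # Debye-model Debye-Waller exponent 2W(Q, T) of Eq. 2; q in 1/m
        x_d = theta_d / T
        phi = quad(lambda x: x / np.expm1(x), 0.0, x_d)[0] / x_d
        return 3.0 * hbar**2 * q**2 * T / (M_AU * kB * theta_d**2) * (phi + x_d / 4.0)

    def theta_from_decay(i_ratio, q, t_lattice, t_ambient=300.0, theta_ambient=170.0):
        # Least-squares inversion of Eq. 3 for one line at one time delay,
        # given the TTM lattice temperature t_lattice
        resid = lambda th: (np.exp(-(two_w(q, t_lattice, th)
                                     - two_w(q, t_ambient, theta_ambient))) - i_ratio)**2
        return minimize_scalar(resid, bounds=(100.0, 600.0), method="bounded").x

    # illustrative call only: a (2 2 0) decay of 0.6 at Tl = 3.4 kK
    print(f"{theta_from_decay(0.6, 4.365e10, 3400.0):.0f} K")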
Estimation of the uncertainty on the extracted Debye temperature

The uncertainty on the extracted Debye temperature is primarily due to the uncertainty on the measured intensity decay of the diffraction lines and the uncertainty on the lattice temperature. The latter is caused by timing uncertainties and uncertainties on the energy density absorbed by the sample.

Uncertainty on the absorbed energy density

The absorbed energy density is calculated from the fraction of the optical laser energy absorbed within the x-ray spot. The first source of uncertainty is the uncertainty on the incident optical laser pulse energy. The energy of the optical laser pulse corresponds to the maximum energy that could be used during the experiment. Because of the presence of spherical apertures along the beam path, upstream of the target, the laser spot at the target plane exhibits an Airy pattern for which the second Airy lobe overfills a single target aperture and contains ~14% of the laser pulse energy. This fraction can be sufficient to damage neighboring windows as the laser pulse energy is increased. For this reason, the maximum optical pulse energy that could be used was 222 ± 14 μJ. During the experiment, the incident laser pulse was imaged in transmission through the sample (bottom left inset in Fig. 1). The integrated number of counts on the transmission diagnostic was then calibrated using a powermeter. The reported uncertainty corresponds to the fluctuation in the incident laser energy extrapolated from the fluctuation in the integrated number of counts in the transmission diagnostic.

The second source of uncertainty is due to the misalignment between the x-ray pulse and the optical laser pulse, as well as the spatial jitter of the two beams. These are estimated from transmission images obtained using a 100-μm-thick YAG window at the target plane and corresponding to τ_X-ray ≤ τ_Optical, as shown in Fig. 5A. Each pulse is fitted using a 2D Gaussian profile, from which the position of its center of mass is calculated. We found that the center of mass of the optical laser beam was slightly misaligned on the vertical axis by 12 μm. The energy contained within the 20-μm FWHM x-ray spot is then calculated to be 4.9 ± 0.3 μJ. The absorbed energy density is lastly calculated to be 6.4 ± 0.8 MJ/kg, where the uncertainty considers the uncertainty on the incident optical pulse energy, the uncertainty on the absorption ratio of our thin films, and the spatial misalignment between the two pulses.

Uncertainty on the extracted Debye temperature

To quantify the contribution from each source of uncertainty, we extract the Debye temperature by considering only one source of uncertainty at a time. The results obtained with the model from Smirnov are summarized in Fig. 10A. The uncertainties on the timing, the absorbed energy density, and the measured intensity decay of the diffraction lines altogether are estimated using a Monte Carlo error propagation procedure. The results of this procedure are shown in Fig. 10B. The same analysis was performed for the other models to extract the mean Debye temperature and the corresponding uncertainty.
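The combined-uncertainty estimate lends itself to a short sketch (the helper extract_theta is a hypothetical stand-in for the full TTM + Debye-Waller inversion above; the draw widths are the 1σ values quoted in the text):

    import numpy as np

    rng = np.random.default_rng(0)

    def monte_carlo_theta(extract_theta, i_meas, di, n_draws=10000):
        # extract_theta(i, dt, e) stands in for the full inversion pipeline
        thetas = [extract_theta(rng.normal(i_meas, di),      # intensity decay
                                rng.normal(0.0, 17e-15),     # timing offset, s
                                rng.normal(6.4e6, 0.8e6))    # energy density, J/kg
                  for _ in range(n_draws)]
        return np.mean(thetas), np.std(thetas)  # center and width of a Fig. 10B-style histogram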
Fig. 1. Schematic of the experimental setup used to measure the temporal evolution of the diffraction pattern from laser-heated, free-standing Au foils at the LCLS. A transmission image of the nearly Gaussian transform-limited optical laser pulse is shown in the bottom left inset along with a set of contours corresponding to the best 2D Gaussian fit to the data. Azimuthally integrated diffraction patterns at different time delays are shown in the top right inset.

Fig. 2. Extraction of the Debye temperature from the measured diffraction line intensity decays. (A) Temporal evolution of I_hkl/I⁰_hkl for the (1 1 1), (2 0 0), and (2 2 0) diffraction lines of Au shown with open squares, open circles, and open inverted triangles, respectively. For clarity, the data corresponding to the (2 0 0) and (1 1 1) diffraction peaks have been offset vertically by 0.3 and 0.6, respectively. The dashed curves correspond to the evolution simulated assuming that the Debye temperature remains constant at the ambient value. The solid curves were obtained using the Debye temperature extracted from the experimental data. The shaded areas correspond to the 1σ uncertainty. The uncertainty at positive delays is larger as it also considers the uncertainty on the deduced Debye temperature. (B) Temporal evolution of the Debye temperature measured using the (1 1 1), (2 0 0), and (2 2 0) diffraction line intensities. The horizontal dashed line indicates the literature value of the Debye temperature at ambient conditions (24). The dashed-dotted line corresponds to the mean Debye temperature measured at positive time delays and was found to be Θ_D = 265 ± 18 K. The shaded area corresponds to the uncertainty of the Debye temperature measurement at the 1σ level. For negative time delays, the uncertainty corresponds to the deviation of the measured intensities from unity and is found to be 21 K.

Fig. 3. Comparison between the extracted Debye temperature and simulations in the event of phonon hardening. (A) Deduced Debye temperature for each electron-phonon coupling rate as a function of the electron temperature, along with the theoretical predictions of phonon hardening from Recoules et al. (8) (open black squares) and Smirnov (10) (dashed line). The values obtained using the models from Smirnov (10), Holst et al. (26), Lin et al. (27), and Migdal et al. (28) are shown by the purple square, blue circle, brown triangle, and green diamond, respectively. The gray inverted triangle corresponds to the Debye temperature extracted from x-ray-only measurements. For the experimental results, the electron temperature corresponds to the value obtained from the TTM simulations. The vertical error bars correspond to the 1σ-level uncertainty. (B) Calculated phonon dispersion curves of Au along high-symmetry paths in the first Brillouin zone showing the increase of the phonon energies characteristic of phonon hardening for a lattice at 0 K and electron temperatures comparable with our measurements. The blue curves correspond to the phonon dispersion of Au at zero electron temperature.

Fig. 4. Shot-to-shot measurement of the time delay between the optical laser pulse and the x-ray pulse. (A) Background-normalized image of the YAG window upstream of the vacuum chamber illuminated using the optical laser pulse. The window is angled such that time is encoded in the horizontal axis. The vertical dashed line corresponds to the time at which the two pulses arrive simultaneously on the YAG window and is determined by the position of the maximum of the first derivative of the transient time tool signal, as shown by the orange curve in (B).

Fig. 5. Determination of the zero time delay between the optical laser pulse and the x-ray pulse. Image of a 100-μm-thick YAG window at the target plane inside the vacuum chamber obtained after illumination by the optical laser when the x-ray pulse is impinging on the window before the optical laser, τ_X-ray ≤ τ_Optical (A), and after the optical laser, τ_X-ray ≥ τ_Optical (B). The optical laser spot size is highlighted by the white dashed ellipse and the x-ray spot by the red dashed ellipse. The loss of intensity within the red dashed ellipse is a consequence of the transient change of the carrier density in the YAG window caused by the x-ray pulse, which, in turn, increases the absorption of optical light. (C) Pixel position found using the analysis shown in Fig. 4 for each x-ray pulse (corresponding to each event number). The relative time of arrival between the x-ray pulse and the optical pulse is determined using the transmitted intensity through a 100-μm-thick YAG window positioned at the target plane for the same x-ray shots. Red corresponds to the optical pulse impinging the YAG window after the x-ray pulse, and blue corresponds to the optical laser arriving before the x-ray pulse. Here, τ_X-ray = τ_Optical corresponds to pixel number 971. We define this pixel position as zero time delay between the two pulses and refer to it as Δt = 0 ps.

Fig. 7. Correlation between the integrated diffraction peak intensity and the total intensity recorded on the detector. Intensity of the diffraction lines of Au as a function of the total intensity recorded on the 2D x-ray detector: (1 1 1) (red squares), (2 0 0) (green circles), and (2 2 0) (purple inverted triangles). All symbols correspond to x-ray-only measurements. The intensity for each diffraction line, I⁰_hkl, was obtained by integrating the intensity found by fitting a pseudo-Voigt lineshape to each diffraction line. The dashed black lines correspond to the best linear fits to each dataset.

Fig. 8. Temporal evolution of the diffraction peak position. Position, Q_Peak, of the (1 1 1) (red squares), (2 0 0) (green circles), and (2 2 0) (purple inverted triangles) diffraction lines of Au as a function of the time delay. (A to C) The peak position between −2- and 12-ps time delays. (D to F) A zoom-in at early time delays between −2 and 2 ps, corresponding to the gray shaded area in (A) to (C). The black horizontal dashed lines in the bottom row correspond to the positions of the diffraction lines of Au calculated using the lattice parameter found in the literature. The horizontal solid line in (E) corresponds to the position of the (1 1 1) diffraction line of Ni.

Fig. 9. Q²-compensated intensity of the diffraction peak intensities. Data compensated for the difference in the momentum transfer between the (2 2 0) and the (1 1 1) (A), as well as the (2 0 0) (B), diffraction lines. The black dashed line corresponds to the Q² scaling expected from the Debye-Waller theory and is a line with unity slope. The different colors indicate different time delays between the x-ray pulse and the optical laser pulse.

Fig. 10. Estimation of the uncertainty on the extracted Debye temperature. (A) Distribution of the Debye temperature extracted from the experimental data after propagation of the timing uncertainty only (red), the uncertainty on the absorbed energy density only (black), and the uncertainty on the intensity decay of the diffraction lines measured experimentally only (orange). The distributions are centered around the mean value of the extracted Debye temperature. (B) Distribution of the Debye temperature obtained after propagating the uncertainties from the three sources shown in (A). The distribution is fitted to a normal distribution, and the result is shown with the black dashed line. The results are obtained using the TTM parameters provided by Smirnov.
Nicorandil Attenuates Monocrotaline-Induced Vascular Endothelial Damage and Pulmonary Arterial Hypertension

Background: An antianginal KATP channel opener, nicorandil, has various beneficial effects on cardiovascular systems; however, its effects on the pulmonary vasculature under pulmonary arterial hypertension (PAH) have not yet been elucidated. Therefore, we attempted to determine whether nicorandil can attenuate monocrotaline (MCT)-induced PAH in rats. Materials and Methods: Sprague-Dawley rats injected intraperitoneally with 60 mg/kg MCT were randomized to receive either vehicle; nicorandil (5.0 mg·kg⁻¹·day⁻¹) alone; or nicorandil as well as either a KATP channel blocker, glibenclamide, or a nitric oxide synthase (NOS) inhibitor, Nω-nitro-L-arginine methyl ester (L-NAME), from immediately or 21 days after the MCT injection. Four or five weeks later, right ventricular systolic pressure (RVSP) was measured, and lung tissue was harvested. Also, we evaluated the nicorandil-induced anti-apoptotic effects and the activation status of several molecules in the cell survival signaling pathway in vitro using human umbilical vein endothelial cells (HUVECs). Results: Four weeks after MCT injection, RVSP was significantly increased in the vehicle-treated group (51.0±4.7 mm Hg), whereas it was attenuated by nicorandil treatment (33.2±3.9 mm Hg; P<0.01). Nicorandil protected the pulmonary endothelium from MCT-induced thromboemboli formation and induction of apoptosis, accompanied by both upregulation of endothelial NOS (eNOS) expression and downregulation of cleaved caspase-3 expression. Late treatment with nicorandil for established PAH was also effective in suppressing the additional progression of PAH. These beneficial effects of nicorandil were blocked similarly by glibenclamide and L-NAME. Next, HUVECs were incubated in serum-free medium and then exhibited apoptotic morphology, while these changes were significantly attenuated by nicorandil administration. Nicorandil activated the phosphatidylinositol 3-kinase (PI3K)/Akt and extracellular signal-regulated kinase (ERK) pathways in HUVECs, accompanied by the upregulation of both eNOS and Bcl-2 expression. Conclusions: Nicorandil attenuated MCT-induced vascular endothelial damage and PAH through production of eNOS and anti-apoptotic factors, suggesting that nicorandil might have a promising therapeutic potential for PAH.

Introduction

Pulmonary arterial hypertension (PAH) is a progressive fatal disorder with a poor prognosis [1,2]. The changes in the pulmonary vasculature in PAH involve persistent vasoconstriction, vascular smooth muscle cell proliferation, and thrombosis [1-3]. Although the exact pathogenesis of PAH is still uncertain, it is thought that vascular endothelial damage and dysfunction play a crucial role in triggering pathological vascular remodeling [4]. In addition, experimental studies suggest that endothelial cell apoptosis in the pulmonary microvasculature causes arteriolar occlusion and increases pulmonary vascular resistance [4-7], suggesting that vascular endothelial cell apoptosis is closely associated with the pathogenesis of PAH.

Nicorandil is a unique hybrid vasodilator that exerts two vasodilator actions: adenosine triphosphate (ATP)-sensitive potassium (KATP) channel opening and nitric oxide (NO) release [8]. Nicorandil is not only an antianginal drug but also exerts cardioprotective effects on the ischemic myocardium due to its KATP channel-opening action, thereby mimicking the phenomenon of ischemic preconditioning [9].
Cumulative evidence suggests that nicorandil has several beneficial effects on the cardiovascular system; however, its effects on the pulmonary vasculature under PAH remain undetermined. On this basis, we evaluated the efficacy of nicorandil in an experimental rat PAH model induced by monocrotaline (MCT). Also, the mechanisms of action of nicorandil were investigated in vivo and in vitro. Here, we show that nicorandil attenuates MCT-induced endothelial damage and apoptosis in the pulmonary vasculature under PAH through production of endothelial NO synthase (eNOS) and anti-apoptotic factors; the production is mediated by the cell survival signaling cascades, the phosphatidylinositol 3-kinase (PI3K)/Akt and extracellular signal-regulated kinase (ERK) pathways, which are mainly activated via the opening of KATP channels. The results suggest that nicorandil may have a therapeutic potential for PAH.

Animals

Wild-type Sprague-Dawley rats were purchased from Japan SLC. All experimental procedures and protocols were approved by the Institutional Committee for Animal Research at the University of Tokyo (#1621T 132) and complied with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health (NIH) publication no. 86-23; revised 1985).

The animal model of PAH and experimental protocols

Eight-week-old male Sprague-Dawley rats were injected intraperitoneally with saline (control) or 60 mg/kg MCT (Wako). In the prevention protocol, the rats injected with MCT were randomized to receive either a vehicle, nicorandil (2.5-7.5 mg·kg⁻¹·day⁻¹) alone, or nicorandil (5.0 mg·kg⁻¹·day⁻¹) with either 5.0 mg·kg⁻¹·day⁻¹ of the KATP channel blocker glibenclamide or the NO synthase (NOS) inhibitor Nω-nitro-L-arginine methyl ester (L-NAME; 1 mg/mL in drinking water) from immediately after the MCT injection. The vehicle, nicorandil, and glibenclamide were administered continuously by an implanted subcutaneous osmotic pump (Alzet; Durect Corporation). Separately, one group of the MCT-injected rats was administered a pan-caspase inhibitor, Z-Val-Ala-Asp(OMe)-CH2F (ZVAD-fmk; R&D Systems), as a bolus into the tail vein 4 times (first immediately after the MCT injection and 1, 3, and 7 days later; total dose, 3.3 mg/kg). In addition, the other group of the MCT-injected rats was treated with an NO donor, sustained-release isosorbide dinitrate (sr-ISDN), by oral administration at a dose rate of either 10, 50, or 100 mg·kg⁻¹ once a day. Each group comprised 8-10 rats. The right ventricular systolic pressure (RVSP) of the rats was measured by inserting polyethylene catheters into the right ventricle 28 days after the MCT injection. The rats were then euthanized, and the hearts and lungs were harvested. The weight ratio of the right ventricle (RV) to the left ventricle (LV) including the septum (RV/LV ratio) was determined. The right lungs were fixed in methanol or 4% paraformaldehyde and embedded in paraffin for histological analysis. The left lung segments were snap-frozen in liquid nitrogen for western blotting.

In the reversal protocol, the rats were injected with saline (control) or 60 mg/kg MCT, and 21 days later, the rats were randomized to receive either the vehicle, nicorandil (5.0 mg·kg⁻¹·day⁻¹) alone, or nicorandil with glibenclamide or L-NAME. Two weeks after the initiation of treatment, the RVSP was measured, and the hearts and lungs were harvested for analyses.
Survival Analysis In a separate experiment, we examined the effects of nicorandil on the survival of the MCT-injected rats in both the prevention and reversal protocols. The day of the MCT injection was defined as day 0, and the survival analysis observation continued up to day 42. The groups in both protocols comprised 12-15 rats. Histological analysis The paraffin-embedded sections were processed for hematoxylin and eosin, elastic Van Gieson, and immunohistochemical staining for examination under light microscopy. The medial wall thickness of the pulmonary arterioles (PAs) was calculated and expressed as follows: %medial wall thickness = ([medial thickness × 2]/external diameter) × 100. Immunohistochemical analyses involved the incubation of the sections with the primary antibodies (anti-α-smooth muscle actin [αSMA], Sigma; anti-CD68, Serotec; and anti-eNOS, BD Biosciences), followed by incubation with a biotinylated secondary antibody (Dako) using the avidin-biotin complex technique with Vector Red substrate (Vector Laboratories). The nuclei were counterstained with hematoxylin. Immunohistochemical staining for Ki67 expression To assess proliferation of smooth muscle cells (SMCs) in the media of PAs, immunofluorescent double staining of lung frozen sections for Ki67 and αSMA was performed. After blocking with 1% bovine serum albumin and 5% goat serum in PBS, the sections were incubated overnight with anti-Ki67 rabbit antibody (Abcam) and anti-αSMA mouse antibody (Sigma), followed by incubation with Alexa Fluor 488-conjugated anti-rabbit IgG and Alexa Fluor 594-conjugated anti-mouse secondary antibodies (Molecular Probes) for 1 h. After washing, the nuclei were counterstained with DAPI (Sigma) before mounting and imaging by a confocal microscope. The number of proliferating PA-SMCs with Ki67-positive nuclei was expressed as the percentage of Ki67-positive cells over the total number of αSMA-positive SMCs in the media of 30-40 PAs (external diameter, 20-100 μm) per rat. Analyses of cell morphology, viability, and apoptosis HUVECs were incubated in a serum-free medium with or without nicorandil for 48 h. The cells were then observed under a phase contrast microscope (Olympus). Cell viability was determined using the 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2H-tetrazolium inner salt assay (MTS; Promega Corporation), and the percent cell death was calculated as follows: 100 × {1 − [viability of treated (serum-starved) endothelial cells/viability of untreated endothelial cells]}. In a separate series, HUVECs that were incubated in serum-free medium for 12 h were subjected to TUNEL staining to detect apoptotic cell death, according to the manufacturer's instructions. The nuclei were counterstained with Hoechst 33258 (Sigma). TUNEL-positive nuclei were counted in 10 randomly selected fields using a confocal microscope (Olympus) and were expressed as a percentage of the total number of nuclei. Western blotting Proteins were extracted from the lung tissues or HUVECs after homogenization in a lysis buffer containing a protease inhibitor cocktail (Sigma). The protein samples (5-10 μg) were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a polyvinylidene fluoride membrane (Hybond-P; GE Healthcare).
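The two morphometric readouts above reduce to simple formulas. A minimal sketch of how they could be computed is given below (Python; the function names, variable names, and example values are illustrative, not taken from the study):

```python
def percent_medial_wall_thickness(medial_thickness_um, external_diameter_um):
    """%medial wall thickness = ([medial thickness x 2] / external diameter) x 100."""
    return (medial_thickness_um * 2.0 / external_diameter_um) * 100.0

def percent_cell_death(viability_treated, viability_untreated):
    """Percent cell death = 100 x {1 - [viability(serum-starved) / viability(untreated)]}."""
    return 100.0 * (1.0 - viability_treated / viability_untreated)

# Illustrative numbers only (not data from the paper):
print(percent_medial_wall_thickness(6.0, 40.0))  # 30.0 (%)
print(percent_cell_death(0.55, 1.0))             # 45.0 (%)
```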
The membranes were incubated with primary antibodies to eNOS, cleaved caspase-3, Akt, Ser473-phospho-Akt, ERK, Thr202/Thr204-phospho-ERK1/2, Bad, Ser112-phospho-Bad (Cell Signaling Technology), and Bcl-2 (BD Biosciences), followed by incubation with a horseradish peroxidase-conjugated secondary antibody. Next, an enhanced chemiluminescence system (ECL Plus; GE Healthcare) was used to detect immunoblotting, and bands were visualized and quantified with a lumino-analyzer (LAS-1000; Fujifilm). The signal intensity was normalized to β-actin expression. Akt kinase assay Akt kinase activity was determined in the protein (20 μg) extracted from the HUVEC lysates by detecting the phosphorylated glycogen synthase kinase (GSK)-3 fusion protein with the Akt kinase assay kit (Cell Signaling Technology), according to the manufacturer's instructions. Statistical analysis Data are presented as the mean ± standard deviation (SD). The comparison of the means was performed by a one-way analysis of variance (ANOVA) followed by Scheffé's post hoc test. The survival curves were analyzed by the Kaplan-Meier method and compared using the Wilcoxon rank sum test. Statistical significance was defined as P<0.05. Nicorandil prevents the progression of MCT-induced PAH The RVSP in the vehicle-treated group in the prevention protocol was significantly higher than that of the normal controls at 28 days after the MCT injection (51.0±4.7 mm Hg vs. 20.2±2.8 mm Hg; P<0.01) (Figure 1A). Nicorandil attenuated the MCT-induced increase in RVSP in a dose-dependent manner, and the RVSP was 33.2±3.9 mm Hg in rats treated with 5.0 mg·kg−1·day−1 nicorandil (P<0.01 vs. the vehicle). The effect of nicorandil was markedly inhibited by glibenclamide (44.8±3.4 mm Hg) and L-NAME (45.6±4.8 mm Hg; P<0.05 vs. nicorandil alone). The pan-caspase inhibitor ZVAD-fmk also attenuated the MCT-induced increase in RVSP (33.0±3.1 mm Hg; P<0.01 vs. the vehicle). Further, the RV/LV ratio, which was increased in the vehicle group, was also significantly attenuated in the nicorandil- and ZVAD-fmk-treated groups (Figure 1B). The systemic blood pressure and heart rate of the rats did not vary among the groups (data not shown). Histological analysis revealed that the percent medial wall thickness of the PAs in the vehicle-treated group was significantly greater than that of the normal controls (62.7±7.0% vs. 20.5±2.9%; P<0.01) (Figure 1C and 1D). Treatment with 5.0 mg·kg−1·day−1 nicorandil (32.0±3.4%) and ZVAD-fmk (30.6±3.6%) attenuated the MCT-induced medial wall thickening (P<0.01 vs. the vehicle, respectively); however, these effects of nicorandil were inhibited by glibenclamide and L-NAME. The survival analysis revealed that nicorandil significantly improved the survival rate in the MCT-injected rats in the prevention protocol (Figure 1E). The survival rates at 28 and 42 days after the MCT injection were 46% and 8% in the vehicle group, and those were increased to 77% and 54% in the nicorandil-treated group (5.0 mg·kg−1·day−1; P<0.05), respectively, whereas the coadministration of glibenclamide or L-NAME diminished the effect of nicorandil (Figure 1E). In a separate series, the rats with MCT-induced PAH were treated with a slow NO-releasing drug, sr-ISDN. Low and middle doses of sr-ISDN (10 or 50 mg·kg−1·day−1) attenuated the MCT-induced increase in RVSP (41.8±4.3 or 38.6±3.4 mm Hg; P<0.05 vs.
the vehicle) without changing systemic blood pressure and heart rate, although the decrease in RVSP was modest compared with 5.0-7.5 mg·kg−1·day−1 nicorandil. A high dose of sr-ISDN (100 mg·kg−1·day−1) attenuated the MCT-induced increase in RVSP to a greater degree (35.8±4.0 mm Hg; P<0.01 vs. the vehicle); however, the high dose of sr-ISDN also significantly decreased systemic blood pressure in the rats with MCT-induced PAH (74.4±8.8 mm Hg vs. 91.8±8.6 mm Hg in the vehicle; P<0.01) and, probably due to the induction of severe hypotension, did not improve the survival rate in the MCT-injected rats (data not shown). Nicorandil improves the histopathological findings in MCT-injured lungs Immunohistochemical findings revealed that nicorandil and ZVAD-fmk attenuated both the thickening of the PA's media that was composed of αSMA-positive cells and the recruitment of macrophages into the perivascular areas in MCT-injured lungs (Figure 2A). MCT markedly impaired eNOS expression in the endothelium of the pulmonary vasculature, and the expression was restored by treatment with nicorandil and ZVAD-fmk (Figure 2A). As shown in Figure 2B and 2D, MCT readily induced thromboemboli formation and endothelial cell apoptosis in small PAs (external diameter, 20-100 μm), and nicorandil and ZVAD-fmk attenuated these changes (Figure 2B-2E). Treatment with nicorandil also significantly reduced the number of proliferating SMCs with Ki67-positive nuclei in the media of remodeled PAs (Figure 3). In contrast, glibenclamide and L-NAME blocked these effects of nicorandil, respectively. [Figure 1 caption fragment: Treatment with nicorandil and ZVAD-fmk attenuated the MCT-induced increase in both these parameters, while these effects of nicorandil were blocked by glibenclamide and L-NAME. Each group comprised 8-10 rats. # P<0.01 vs. normal control; * P<0.05 and ** P<0.01 vs. vehicle; † P<0.05 and ‡ P<0.01 vs. nicorandil (5.0 mg·kg−1·day−1). (C) Histological findings of the PAs (arrows). Top, hematoxylin and eosin (HE) staining; bottom, elastic Van Gieson (EVG) staining. Scale bar, 50 μm. (D) MCT markedly increased the percent medial wall thickness of the PAs (#), and nicorandil and ZVAD-fmk attenuated MCT-induced medial wall thickening (**). In contrast, the effects of nicorandil were inhibited by glibenclamide and L-NAME (†). (E) Survival analysis in the prevention protocol. Each group comprised 12-13 rats. * P<0.05 vs. vehicle. doi:10.1371/journal.pone.0033367.g001] Nicorandil upregulates eNOS expression and downregulates cleaved caspase-3 expression in MCT-injured lungs Western blot analysis revealed that MCT significantly downregulated eNOS expression and inversely upregulated cleaved caspase-3 expression in the lungs (Figure 4A). Notably, nicorandil as well as ZVAD-fmk increased the expression of eNOS and attenuated the expression of cleaved caspase-3 in MCT-injured lungs in a dose-dependent manner, while these effects of nicorandil were also blocked by glibenclamide and L-NAME (Figure 4B and 4C). Nicorandil prevents the progression of established PAH In the reversal protocol, we evaluated whether nicorandil was also effective against established PAH. The RVSP and the RV/LV ratio in the vehicle group increased to 38.7±4.7 mm Hg and 0.33±0.06 at 21 days after the MCT injection (P<0.05 vs. the normal control, respectively), and moreover, these parameters increased to 55.0±4.6 mm Hg and 0.47±0.04 at 35 days (P<0.01 vs. the normal control, respectively) (Figure 5A and 5B).
Late treatment with nicorandil on days 21-35 prevented the additional increases in the RVSP and RV/LV ratio, as the values in the nicorandil-treated group were 37.3±2.9 mm Hg and 0.33±0.03 at day 35 (P<0.05 vs. the vehicle, respectively). Histological analysis revealed that nicorandil prevented additional medial wall thickening of the PAs in MCT-injured lungs (Figure 5C and 5D). The survival analysis revealed that nicorandil improved the survival rate in rats with established PAH in the reversal protocol (Figure 5E). The survival rate at 42 days after the MCT injection was 13% in the vehicle group and 40% in the nicorandil-treated group (P<0.05). In contrast, these beneficial effects of nicorandil were blocked by glibenclamide and L-NAME, respectively. Restoration of eNOS expression by nicorandil in established PAH In the reversal protocol, MCT downregulated the expression of eNOS in the endothelium of the pulmonary vasculature (Figure 6A) and lung homogenates (Figure 6B and 6C) in a time-dependent manner. Notably, late treatment with nicorandil restored eNOS expression at least partially (Figure 6A-C), while this effect was inhibited by glibenclamide and L-NAME. Anti-apoptotic effects of nicorandil on in vitro vascular endothelial cells Next, we examined the in vitro anti-apoptotic effects of nicorandil for vascular endothelial cells. HUVECs were cultured in serum-free medium and then exhibited apoptotic morphology that was characterized by cell shrinkage (Figure 7A), and there was a decrease in the viability of these cells as determined by the MTS assay (Figure 7B). In addition, there was an increase in the number of serum-starved HUVECs that exhibited apoptotic morphology as identified by TUNEL staining (Figure 7C and 7D). Stimulation with nicorandil and the K ATP channel opener diazoxide partially inhibited the serum starvation-induced endothelial cell apoptosis in a concentration-dependent manner, while these effects of nicorandil were also inhibited by glibenclamide and L-NAME (Figure 7A-D). Nicorandil activates the PI3K/Akt and ERK pathways in vascular endothelial cells Both the PI3K/Akt and ERK signaling pathways function as cell survival signaling cascades [11,12]. To investigate the signaling pathways associated with the actions of nicorandil on vascular endothelial cells, we determined the phosphorylation status of Akt and ERK1/2 in the serum-starved HUVECs. Nicorandil (100 μmol/L) induced Akt serine-473 phosphorylation in HUVECs in a time-dependent manner, with a maximum 3.0-fold increase (Figure 8A and 8B). Diazoxide (100 μmol/L) also induced a maximum 2.1-fold increase in Akt phosphorylation. The Akt phosphorylation induced by nicorandil was blocked by glibenclamide (10 μmol/L) and a PI3K inhibitor, LY294002 (10 μmol/L). To determine the Akt kinase activity, we measured the phosphorylation of GSK-3, which is a downstream target of Akt. Nicorandil induced a 2.7-fold increase in GSK-3 phosphorylation (P<0.05 vs. baseline), which was blocked by glibenclamide and LY294002 (Figure 8A; bottom). Similarly, nicorandil and diazoxide induced ERK1/2 threonine-202/204 phosphorylation in HUVECs, with a maximum 2.8- and 2.1-fold increase, respectively (Figure 8C and 8D). In addition, serine-112 phosphorylation of Bad, which is a downstream target of ERK1/2, was induced by nicorandil and diazoxide (Figure 8C; bottom), whereas the phosphorylation of ERK1/2 and Bad by nicorandil was blocked by glibenclamide and a MEK inhibitor, PD98059 (10 μmol/L).
Finally, we measured the expressions of eNOS and the apoptosis inhibitor Bcl-2, which is a downstream target of the PI3K/Akt and ERK1/2 pathways, in the serum-starved HUVECs stimulated with nicorandil for 24 h. Notably, nicorandil upregulated the expression of both eNOS and Bcl-2 in a concentration-dependent manner ( Figure 8E and 8F), whereas these effects of nicorandil were blocked by glibenclamide, LY294002, and PD98059. Diazoxide also increased the expression of eNOS and Bcl-2, although the degree of the increase induced by diazoxide was lesser than that by nicorandil. Discussion Our experiments revealed that nicorandil attenuated the progression of MCT-induced PAH and improved the survival rate in rats with MCT-induced PAH both in the prevention and reversal protocols. These effects were accompanied by an improvement in the severity of pulmonary vascular remodeling that includes medial wall thickening with the increase of proliferating SMCs, recruitment of macrophages into the perivascular areas, thromboemboli formation in the pulmonary microcirculation, and vascular endothelial cell apoptosis. These effects of nicorandil were closely associated with the enhanced expression of eNOS and anti-apoptotic factors in the vascular endothelium of the lungs, while those were blocked by glibenclamide and L-NAME, suggesting that the beneficial effects of nicorandil on pulmonary vasculature are mediated by the opening of K ATP channels and NOS. In addition, nicorandil induced the activation of the cell survival signaling pathways, PI3K/Akt and ERK1/2, in the vascular endothelial cells, resulting in the production of eNOS and anti-apoptotic factors. Nicorandil is a nicotinamide ester with 2 vasodilator actions-K ATP channel opening and NO release [8]. The Impact Of Nicorandil in Angina (IONA) study has demonstrated that nicorandil improves the clinical outcome of patients with stable angina [13]. The mechanisms underlying the reduction in major coronary events in the trial are thought to be related to the cardioprotective effects of nicorandil that mimic ischemic preconditioning through K ATP channels. Although the effects of nicorandil in PAH have remained undetermined, a few reports based on animal studies [14] and clinical experience [15] have suggested its potential efficacy for PAH treatment. In contrast to our results, Hongo et al. [14] reported that late treatment with nicorandil could not prevent further development of PAH nor prolong survival in rats with established PAH. The reason for this discrepancy remains unclear, but it may be attributable to the difference in the injection method of nicorandil (drinking water or subcutaneous osmotic pumps). MCT, a pyrrolizidine alkaloid toxin, induces selective pulmonary endothelial injury and apoptosis, followed by severe inflammatory responses and medial hypertrophy [6,16]. In our study, nicorandil protected the endothelium of the pulmonary vasculature from MCT injury by both restoring eNOS expression and inhibiting the induction of endothelial cell apoptosis in vivo and in vitro. Cumulative evidence suggests that endothelial cell apoptosis in pulmonary vasculature might trigger pathological vascular remodeling, leading to the progression of PAH [4][5][6][7]10]. Consistent with the findings of a previous report [5], the broad caspase inhibitor ZVAD-fmk also attenuated the development of MCT-induced PAH in the present study, accompanied with a decrease in endothelial cell apoptosis in the pulmonary microvasculature. 
It is plausible that the anti-apoptotic effects of nicorandil and ZVAD-fmk on the pulmonary vascular endothelium might contribute to blocking the development of pathological pulmonary vascular remodeling induced by MCT and thereby lead to the improvements in pulmonary hemodynamics and survival in rats with MCT-induced PAH. [Figure 5 caption fragment: The effects of nicorandil in the reversal protocol. The RVSP (A) and the RV/LV ratio (B) in the vehicle group increased at 21 days after the MCT injection (MCT-21) as compared to the baseline, and additionally increased in the next 2 weeks (MCT-35). Late treatment with nicorandil on days 21-35 prevented the additional increase in these parameters, while these effects were blocked by glibenclamide and L-NAME. # P<0.05 and ## P<0.01 vs. normal control; * P<0.05 and ** P<0.01.] On the other hand, medial hypertrophy caused by proliferation of vascular SMCs is also a major histopathological finding in pulmonary vasculature under PAH [1][2][3], and nicorandil seems to have direct and/or indirect effects on SMCs in the media of PAs, as nicorandil markedly reduced the number of proliferating SMCs (Figure 3). Our results are consistent with the report demonstrating that nicorandil has anti-proliferation effects on rat aortic SMCs [17]. Intriguingly, a specific anti-apoptotic reagent, ZVAD-fmk, also attenuated proliferation of medial SMCs in the remodeled PAs in a similar manner to nicorandil. Since we found no TUNEL-positive apoptotic cells among proliferating medial SMCs in PAs of the MCT-injured lungs (Figure 2D-E), it would appear that ZVAD-fmk exerted anti-apoptotic effects on pulmonary vascular endothelial cells, not on medial SMCs, in this setting, resulting in indirect anti-proliferation effects on medial SMCs. Taken together, nicorandil might attenuate medial hypertrophy of PAs in this model not only through a direct effect on PA-SMCs but also through an indirect effect of blocking the induction of apoptosis in the pulmonary vascular endothelium. This notion is supported by a recent report revealing that the absence of normal immune regulation results in an inappropriately exuberant inflammatory response and accelerated endothelial cell apoptosis, leading to smooth muscle hypertrophy and increased pulmonary vascular resistance [18]. Although the anti-apoptotic effects of nicorandil on vascular endothelial cells have not been reported so far, nicorandil is known to exert anti-apoptotic effects on cardiomyocytes via the activation of K ATP channels [19]. The protective effects of nicorandil on the pulmonary endothelium in our study involved eNOS production, and this finding is consistent with a previous study [20] that revealed that nicorandil enhanced eNOS expression in the myocardium via the opening of K ATP channels. In our study, the K ATP channel blocker glibenclamide inhibited nicorandil's protective effects, including the enhanced expression of eNOS on the pulmonary endothelium, suggesting that eNOS production induced by nicorandil is a downstream event after the opening of K ATP channels. The notion is supported by the findings of a previous report [21] that showed that a K ATP channel opener protected the myocardium against lethal ischemia via NOS production. The production of anti-apoptotic factors and eNOS by nicorandil was associated with the activation of both the PI3K/Akt and ERK1/2 signaling pathways.
Indeed, the PI3K/Akt and ERK1/2 signaling pathways play a crucial role in cell survival and regulation of apoptosis [11,12]. Many downstream effectors in the PI3K/Akt and ERK1/2 signaling pathways, including Bcl-2, Bad, and the caspase family members, function as inhibitors of apoptosis. Intriguingly, the ERK1/2 pathway has been reported to undergo activation in response to reactive oxygen species (ROS) [22], and the opening of mitochondrial K ATP channels has been shown to generate mitochondria-derived ROS in cardiomyocytes [23]. Thus, owing to its K ATP channel-opening property, nicorandil may activate the ERK1/2 pathway via the generation of ROS in the mitochondria in vascular endothelial cells, although not addressed in this study; the notion is supported by evidence from the literature that has shown that another K ATP channel opener diazoxide also triggers ERK activation through mitochondria-derived ROS in cardiomyocytes [24]. On the contrary, the opening of mitochondrial K ATP channels has been reported to activate the PI3K/Akt pathway in the cardiomyocytes of rodents as well [21,25]. In addition, NO has been shown to induce the activation of both the PI3K/Akt and ERK1/2 signaling pathways through a cyclic guanosine monophosphate-dependent pathway [24,26]. Taking into consideration that the degree of the in vitro effects induced by diazoxide in our study was lesser than that induced by nicorandil, not only the K ATP channel-opening effect but also the NO-releasing property itself of nicorandil may contribute to the activation of the PI3K/Akt and ERK1/2 pathways, resulting in the beneficial protective effects against serum starvation in vitro and MCT injury in vivo. NO is known to dilate the pulmonary vessels and used in the short-term treatment of patients with severe PAH derived from a variety of origins, although there is limited experience with the long-term use of inhaled NO as a treatment of PAH [1,2,27]. In the present study, a slow NO-releasing drug, sr-ISDN, also attenuated MCT-induced PAH in rats to some degree; however, the effects without changing systemic blood pressure and heart rate were modest compared with nicorandil. Given that a K ATP channel closer glibenclamide inhibited approximately 60-70% of the nicorandil's protective effects for deteriorated pulmonary vasculature and hemodynamics in rats with MCT-induced PAH, it seems that the major beneficial effects of nicorandil are attributed to its K ATP channel opening property, while its NO donor property functions adjunctively in this setting. Alternatively, the feature as being a unique hybrid drug with the 2 vasodilator actions may be a crucial advantage of nicorandil, at least in the context of treatment of PAH. In conclusion, the present study has revealed that nicorandil attenuates MCT-induced vascular endothelial damage and apoptosis and PAH through the production of eNOS and antiapoptotic factors, which is mediated by the PI3K/Akt and ERK1/ 2 signaling pathways. These results strongly supports the notion that nicorandil has a promising therapeutic potential for PAH.
6,293
2012-03-30T00:00:00.000
[ "Biology", "Medicine" ]
Event-Triggered Path Following Robust Control of Underactuated Unmanned Surface Vehicles with Unknown Model Nonlinearity and Disturbances : An effective path-following controller is a guarantee for stable sailing of underactuated unmanned surface vehicles (USVs). This paper proposes an event-triggered robust control approach considering an unknown model nonlinearity, external disturbance, and event-triggered mechanism. The proposed method consists of guidance and dynamic control subsystems. Based on the tracking error dynamics equations, the guidance subsystem is designed to achieve the guidance law. For the dynamic control subsystem, the radial basis function neural networks (RBFNNs) are designed to approximate the unknown model nonlinearity and external disturbances to improve the robustness of the proposed method. In addition, an event-triggered mechanism is constructed to reduce the triggering times. The closed-loop system is proven to be stable, and the effectiveness of the proposed method is illustrated through simulation results. Introduction An unmanned surface vehicle (USV) is a kind of autonomous waterborne platform that can autonomously complete tasks, such as environmental perception and target detection, and has autonomous identification, autonomous planning, and autonomous navigation capabilities [1][2][3].It has the advantages of small size, low cost, good maneuverability, and no casualties [4].The USV can independently perform tasks in areas where manned ships are not suitable for dispatch, thereby expanding the scope of water operations.Therefore, it has become an important tool in carrying out civilian and military tasks such as marine environmental monitoring, water search and rescue, ship escort, firepower strike, and anti-submarine tasks [5]. In complex marine environments, ensuring the safety, stability, and accuracy of autonomous navigation for USVs is a major challenge for USV control systems.The overall issues of motion control for USVs include set-point regulation, trajectory tracking, and path following [6]. In the motion control system of USVs, the existing propulsion system usually consists of the main thruster and rudder or the double thrusters at the stern of the ship, without side thrusters.This power configuration means that the USVs have only two control inputs.However, there are three degrees of freedom (DoF) for USVs, including surge, sway, and yaw, which means that the number of control inputs is less than the DoF of the USVs.This type of USV has underactuated characteristics.Because of the low maneuverability, it is more suitable to study the path following control of underactuated USVs. 
Path following refers to USVs tracking a predetermined path.The USV does not need to reach a certain position on the path at the specified time, and the reference path is independent of time.In other words, the spatial constraints of path following problem take precedence over time constraints.As shown in Figure 1, the path following control of USVs divides the control system into two parts: the guidance subsystem and the control subsystem.Based on the path information and environmental information, the guidance subsystem generates the expected reference signals.Then, the control subsystem will track the reference signal generated by the guidance subsystem to achieve path tracking.Path tracking control is similar to the actual behavior of crew maneuvering ships, and the modular design concept allows it to directly apply mature guidance technology and heading maintenance control theory, which has strong practical application value and has become a commonly used solution for USV path following.Over the last several years, promising results on the path following control of USVs have been proposed. In [7], a LOS-based guidance law was designed for target enclosing control of an USV, and the effectiveness of the proposed method was verified by simulations and experiments.However, when the tracking error is large, the speed of LOS method converging to the desired path is relatively slow.In addition, when the USV is influenced by the environment disturbance, the sideslip angle will occur, which limits the application of the LOS method.In order to deal with the above-mentioned problem, Fossen et al. proposed an integral LOS in [8] using additional integral terms to offset the sideslip angle.Moreover, there are many improvement methods based on LOS.In [9], an adaptive LOS guidance law was proposed for the finite-time path following control of USVs, which can keep the tracking error within the constraint range.In [10], the fuzzy rules were used to determine the forward looking distance of the LOS guidance to increase the convergence speed.In [11], based on sliding mode theory, a robust LOS guidance law was designed for the underactuated ships.Except for LOS-based approaches, the vector field guidance is also widely used in USV control [12][13][14]. 
As a most widely used algorithm applied to USV path following control, PID controller has the advantages of simple structure, good economy, and high control accuracy.However, when external disturbances exist, such as wind, waves, and currents, the adaptability of the PID controller is insufficient and its control stability will decrease.Currently, researches on PID control method have mainly focused on its improvements.For instance, in [15] the authors proposed an improved PID control method by using optimization theory, and this method can obtain the optimal control parameters.In [16], the fuzzy rules were used to realize the self adjustment of PID parameters to improve its robustness.In [17], a modified incremental PID was proposed to deal with the influence of the marine currents.Trajectory linearization can simplify the problem of path following control.However, linearization processing will lead to system errors and reduce control accuracy.Similar to other algorithms, we can combine it with robust control approaches improve control performance.In [18], to improve the robust performance of the TLC approaches, the neural network (NN) is used to estimate the model uncertainties.In [19], the linear extended state observer was designed to approximate the unknown disturbances, and by combining with the TLC approach, a robust controller was proposed for USVs.In [20], a finite-time disturbance observer was designed to observe disturbance and uncertainties to improve the robustness of TLC method.SMC has robustness to parameter changes and external disturbances; however, it has the disadvantage of chattering.In [21], by using hyperbolic tangent function, a SMC-based path following controller was proposed for the USV, which can deal with the chattering problem.In [22], the SMC was used to structure an observer.Then, it was combined with the adaptive law, and a nonlinear surge controller was proposed.In [23], to achieve fast converge speed, a nonsingular terminal SMC was designed for the USV control in the present of model uncertainties.The backstepping method is greatly influenced by the motion model, and in order to achieve good control performance and robustness, it is necessary to establish an accurate modelwhich is difficult to obtain.Therefore, for the backstepping approach, combinations with other techniques (for instance, tracking error compensation [9], SMC [26]) to improve its robustness performance have been a research hotspot.Intelligence control methods have unique advantages in dealing with nonlinear and complex system problems.Fuzzy logic control converts expert knowledge into fuzzy rules, which can effectively deal with the impact of model uncertainty and interference in the path following control of USVs [27].In addition, NN can be used to approximate the uncertainty and external interference terms of the USV model, so as to improve the anti-interference ability and robustness of the controller [28,29].In recent years, machine learning theory has developed rapidly, and reinforcement learning has been widely applied in the field of USV control [30,31].Reinforcement learning theory does not require the establishment of accurate mathematical models, and has a self-learning ability in unknown environments.Therefore, it has great research value for solving model uncertainty and unknown interference problems in USV control. 
Although fruitful research results have been reported, we need to note that limitations and challenges still exist: • The control methods in most existing works on path following control of USVs are time triggered (e.g., [15,26]), which means that the control signals should update at every sampling instance, and it is unnecessary from the perspective of resource allocation; • The USV model is highly nonlinear and coupled, which poses great difficulties in the design of path following controllers.Currently, although there are many papers studying the nonlinear controller design of USVs, most methods still require knowledge of partial or complete model information (e.g., [9,25]). Inspired by the existing literature discussed above, this paper proposes a eventtriggered robust path following controller subject to unknown model nonlinearity and disturbances.Specifically, based on the relative position between the USV and the expected path, a dynamic equation for its path tracking error is established in the Serret-Frenet coordinate system.According to the backstepping technique and Lyapunov stability theory, the guidance law and control signals are achieved.Then, to deal with the unknown model nonlinearity and disturbances, radial basis function neural networks (RBFNNs) are designed.Finally, on the basis of the above mentioned control signals, an event-triggered mechanism is structured to obtain the final control inputs.The contributions of this paper are summarized as follows: • An event-triggered based path following controller is proposed for the underactuated USVs.Because of the event-triggered mechanism, there is no need to update the control inputs at every sample instance.Therefore, this can decrease the computational burden; • The RBFNNs are designed to approximate the model nonlinearity and disturbances, which makes the proposed controller not rely on the USV mathematical model and improves the robustness performance of the controller. The organization of the rest is as follows.In Section 2, several useful lemmata are provided.In Section 3, the USV model and the control objectives are given.The guidance subsystem is presented in Section 4, and the design process of an event-triggered robust controller is proposed in Section 5.Then, the closed-loop system is proved to be stable in Appendix A. The effectiveness is verified in Section 6 by simulations.Finally, the conclusion and potential future studies are given in Section 7. Preliminary For the following nonlinear system, where x is the system state, and the equilibrium point is x = 0. Lemma 2. If the Lyapunov function H(x) about state x of system (1) meets with Ḣ(x) ≤ −κ 1 H(x) + κ 2 , where κ 1 , κ 2 > 0, then, we can say that the system is globally uniformly ultimately bounded (GUUB).Lemma 3. The nonlinear term F (x) of system (1) can be approximated by the RBFNN with arbitrary accuracy: where ω is the m × 1 weight vector of the RBFNN, h(x) is the m × 1 vector consists of Gaussian .., m, and the approximation error δ and weight ω are all bounded. Problem Formulation In this section, the mathematical model of the USV and the control objectives are given. 
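To make Lemma 3 concrete, the following sketch shows one way a Gaussian RBFNN approximator F(x) ≈ ω^T h(x) could be implemented (Python; the class, the choice of centers and widths, and the neuron count are illustrative assumptions, not code from the paper):

```python
import numpy as np

class RBFNN:
    """Gaussian RBF network as in Lemma 3: F(x) ~= w^T h(x), with
    h_j(x) = exp(-||x - c_j||^2 / b_j^2). Centers c_j, widths b_j and the
    number of neurons m are design choices (illustrative here)."""
    def __init__(self, centers, widths):
        self.centers = np.atleast_2d(centers)         # shape (m, n_inputs)
        self.widths = np.asarray(widths, float)       # shape (m,)
        self.W_hat = np.zeros(self.centers.shape[0])  # estimated weight vector

    def basis(self, x):
        # Vector of Gaussian activations h(x)
        d2 = np.sum((self.centers - np.asarray(x)) ** 2, axis=1)
        return np.exp(-d2 / self.widths ** 2)

    def estimate(self, x):
        # Current approximation W_hat^T h(x) of the unknown term
        return self.W_hat @ self.basis(x)
```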
USV Model The kinematics model of the USV is where Q = [x, y] T is its position, ψ is the yaw angle, u is its surge velocity, v is its sway velocity, and r is its yaw angular velocity.The dynamics model of the underactuated USV is [4,6] where m i and d i , i = 1, 2, 3 are the model parameters of the USV, τ u is the force in the surge channel, τ r is the yaw torque in the yaw channel, and τ u w , τ v w , and τ r w are the disturbances in each DoF. Control Objectives The purpose of this paper is to propose a path following control approach for underactuated USV considering unknown model nonlinearity, disturbances, and an event-triggered mechanism.The following conditions should be satisfied: 1. The underactuated USV can converge to a desired path P, which means lim t→∞ x e = 0; lim t→∞ y e = 0, where x e and y e are the position tracking errors defined in the following content; 2. The underactuated USV can sail along the desired path at a predefined surge velocity u d , which means lim The controller can guarantee the USV moves stably under the influence of unknown model nonlinearity and disturbances; 4. A suitable event-triggered mechanism should be designed. Guidance Subsystem Design for Path Following Control of Underactuated USV In this section, the tracking error dynamic equations are given, and the guidance law is derived. As shown in Figure 2, Q = [x, y] T is the position vector of the USV in the inertial coordinate system {I}, and the velocity vector can be Q = [ ẋ, ẏ] T .The course angle can be calculated by From Figure 2, we have where β is the sideslip angle of the USV. Therefore, the kinematics model of the USV can be re-expressed in a Serret-Frenet coordinate system {S} as where It is assumed that P is a point moving along the path P = [x d (θ), y d (θ)] T at a designed velocity v p , where θ is the path parameter to be designed. Then, the course error is where χ d is the course angle of point P. The displacement vector between Q and P is d = [x e , y e , 0] T ; therefore, the relative velocity can be calculated by ḋ = [ ẋe , ẏe , 0] T .The angular velocity of P can be expressed as ω p = [0, 0, c(ρ) ρ] T , where c(ρ) is the path curvature and ρ is the parameter to be designed. In the {S} frame, we have where R(χ e ) =   cosχ e −sinχ e 0 sinχ e cosχ e 0 0 0 1 The derivation of Equation ( 8) is Therefore, the tracking error dynamic equations can be To achieve objective 1 in Section 3.2, we define the following Lyapunov function as The speed v p can be designed as where k 1 > 0. Finally, the desired yaw angular velocity can be designed as where If r = r d , by substituting Equation ( 17) into ( 16), we can obtain Based on Lemma 1, the system is globally asymptotically stable. To sum up, if the yaw angular velocity of the USV r changes according to Equation ( 17), and the path parameter ρ and the velocity of guidance point P change according Equation (15), then the tracking errors x e , y e and χ e will converge to zero, which means that objective 1 in Section 3.2 will be achieved. Control Subsystem Design for Path Following Control of Underactuated USV In this section, the dynamic controller is designed. Backstepping-Based Dynamic Controller Design Define the tracking error of the surge velocity as where u d is the desired surge velocity. 
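The explicit kinematic equations are not reproduced in this excerpt; the sketch below assumes the standard 3-DOF USV kinematics and one common Serret-Frenet convention for the tracking errors (Python; function names and conventions are illustrative, not the paper's code):

```python
import numpy as np

def usv_kinematics(eta, nu):
    """Standard 3-DOF kinematics assumed for the USV: eta = [x, y, psi], nu = [u, v, r]."""
    x, y, psi = eta
    u, v, r = nu
    return np.array([u * np.cos(psi) - v * np.sin(psi),
                     u * np.sin(psi) + v * np.cos(psi),
                     r])

def course_angle(psi, u, v):
    """Course chi = psi + beta, with sideslip angle beta = atan2(v, u)."""
    return psi + np.arctan2(v, u)

def tracking_errors(Q, chi, P, chi_d):
    """Express the position error Q - P in the path frame of the guidance
    point P, and wrap the course error chi_e to (-pi, pi]."""
    dx, dy = Q[0] - P[0], Q[1] - P[1]
    c, s = np.cos(chi_d), np.sin(chi_d)
    x_e, y_e = c * dx + s * dy, -s * dx + c * dy
    chi_e = np.arctan2(np.sin(chi - chi_d), np.cos(chi - chi_d))
    return x_e, y_e, chi_e
```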
To achieve objective 2 in Section 3.2, we define the following Lyapunov function as Differentiating Equation ( 20), we can obtain where , which contains the nonlinear term and the disturbance.The control signal in the surge channel can be where k 3 > 1 2 .For yaw channel, to track the desired yaw angular velocity r d , we define the following Lyapunov function where r e = r − r d . Differentiating Equation ( 23), we can obtain where In the same way, the control signal in yaw channel can be where k 4 > 1 2 .Therefore, if the control inputs are given by the following equations, the underactuated USV can track the desired path P. The objectives 1 and 2 in Section 3.2 can be achieved under control input given by Equation (26). However, it should be noted that the control signals contains nonlinearities and disturbances where it is very difficult to obtain their accurate expressions.Therefore, to achieve objective 3 in Section 3.2, the RBFNNs are designed to approximate the nonlinear terms and disturbances in Equation (26). Remark 1.By using a backstepping approach, we can decompose the USV system into two subsystems, with one handling the position variable and the other handling the velocity variable.Virtual control laws are designed for each subsystem to achieve stability, and Lyapunov stability analysis is performed to ensure the overall system stability.Specifically, in Section 4, the position tracking of the USV is achieved by designing the desired velocity variables u d and r d .The control laws τ u and τ r are then designed in Section 5 to ensure the USV can navigate with the desired velocities. Radial Basis Function Neural Networks Design To estimate the term G u , we define the following RBFNN: where W u is the ideal weight vector of the NN, H u (i u ) is the vector consists of Gaussian functions, i u is the NN input, and δ u is the error. Because the ideal weights are very difficult to obtain, then we define the estimated value of where Ŵu is the estimated value of W u , and H u is short for H u (i u ). Then the control input Equation ( 22) can be Define the following Lyapunov function as where Wu is the estimation error of the weights vector and L u is a positive defined matrix. Differentiating Equation (30), we can obtain Then, the updating law of Ŵu can be where k 5 > 0. In the same way, to estimate the term G r , we define the following RBFNN: where W r is the ideal weight vector of the NN, H r (i r ) is the vector consists of Gaussian functions, i r is the NN input, and δ r is the error. Because the ideal weights are very difficult to obtain, then we define the estimated value of G r as Ĝr = ŴT r H r (34 where Ŵr is the estimated value of W r , and H r is short for H r (i r ). Then the control input Equation ( 25) can be Define the following Lyapunov function as where Wr is the estimation error of the weights vector and L r is a positive defined matrix.Differentiating Equation (36), we can obtain Then, the updating law of Ŵr can be where k 6 > 0. Therefore, considering the unknown nonlinearity and disturbances, the control inputs below can guarantee the USV to track along the desired path P, which means that the control objectives 1 to 3 in Section 3.2 can be achieved under the control input given by Equation (39). However, the controller is time triggered, which means the control input should update at every sampling instance.To deal with this problem, an event-triggered mechanism is designed in the following subsection. 
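The RBFNN estimate Ĝ = Ŵ^T H and its weight-update law can be sketched as follows (Python; this follows the generic adaptive-law pattern Ŵ̇ = L(He − kŴ) suggested by the text, with illustrative names — the paper's exact control-input expressions are not reproduced here):

```python
import numpy as np

def nonlinearity_estimate(W_hat, h):
    """RBFNN estimate of the lumped unknown term, G_hat = W_hat^T h(i)."""
    return W_hat @ h

def weight_update(W_hat, h, e, L, k, dt):
    """One Euler step of a generic adaptive law of the form
    W_hat_dot = L (h * e - k * W_hat), where e is the tracking error of the
    corresponding channel (u_e or r_e), L is positive definite and k > 0
    (cf. the gains k5 and k6 in the text)."""
    W_dot = L @ (h * e - k * W_hat)
    return W_hat + dt * W_dot
```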
Recalling Equation (36), we have By substituting Equation ( 46) into (47), we have Then, the virtual control signal η r (t) can be designed as where β r > β r and µ r > 0. Therefore, the proposed event-triggered RBFNN-based controller for the surge channel is as follows The controller for the yaw channel is as follows At this point, all the control objectives 1 to 4 in Section 3.2 are achieved.Based on the above content, we can draw the following theorem. Theorem 1.For the under-actuated USV whose kinematics model and dynamics model given by Equations ( 3) and ( 4), it can track the desired path P under the proposed event-triggered robust path following the controller consisting of Equations ( 50) and ( 51), with velocities given by u d and Equation (17). The proof of Theorem 1 can be found in Appendix A. Simulation Results To verify the effectiveness of the proposed controller, simulations are carried out.The model parameters are listed in Table 1, which can also be found in [4,6].Two cases are included in this section.In case 1, the control performances with different controller parameters are evaluated, which helps us choose appropriate controller parameters.In case 2, comparisons with other methods are made to reflect the superiority of the proposed method. The simulation platform used in this paper is MATLAB and the equation solver is a fourth-fifth-order Runge-Kutta algorithm (ODE45). Case 1: Performance with Different Controller Parameters In this case, the desired path is given by The initial position of the USV is Q 0 = [50m, 0m] T , its initial yaw angle ψ 0 = π 2 rad, its initial surge velocity u 0 = 0.01 m/s, its initial lateral velocity v 0 = 0 m/s, its initial yaw angular velocity is r 0 = 0 rad/s, the simulation time is 120 s, the simulation step ∆t = 0.01 s, and no external disturbances are considered in this case. The states u ∈ [0, The simulation results are shown in Figures 3-6 (take k 1 , k c , neuron number, and β u for example).Please note that in these figures, R is set as 40 m. 
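The event-triggered mechanism can be read as a simple hold-and-update rule: the actuator keeps the last transmitted value until the deviation of the continuously computed virtual control from it reaches a threshold. A minimal sketch follows (Python; the variable names are illustrative and the rule assumes the form α = η − τ with trigger level β, as described above):

```python
class EventTrigger:
    """Hold-and-update rule assumed for each control channel: the applied input
    tau keeps the last transmitted value eta(t_i) and is refreshed only when
    the deviation alpha(t) = eta(t) - tau(t) reaches the threshold beta."""
    def __init__(self, beta, tau0=0.0):
        self.beta = beta
        self.tau = tau0     # last transmitted control value
        self.events = 0     # number of triggering instants

    def update(self, eta):
        if abs(eta - self.tau) >= self.beta:
            self.tau = eta          # event: transmit the new control value
            self.events += 1
        return self.tau             # otherwise hold the previous value
```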
As shown in Figure 3a, we can see that even though the value of k 1 is different, the USV can still track the desired path with high accuracy.However, the smaller the value of k 1 , the slower the USV converges to the desired path, as illustrated in Figure 3b.It can be found that the event-triggered mechanism plays a role in the path-following of the USV, where the control inputs are only updated when the event is triggered as shown in Figure 3c.By comparison, the value of k 1 is ultimately chosen as 0.1.From Figure 4a, it can be seen that the USV is still able to accurately tack the desired path even when different values of k c are selected.Different from k 1 , the value of k c not only affects the convergence speed to the desired path, but also affects the tracking accuracy of the surge velocity, which is illustrated in Figure 4b.The smaller the value of k c is, the smaller the surge velocity tracking error u e will be.However, the smaller the value of k c is, the slower the USV converges to the desired path.In addition, It can be found that the value of k c will also affect the number of event triggers.The larger the value of k c , the more times the event will be triggered as shown in Figure 4c.By comparison, the value of k c is ultimately chosen as 0.1.The node number of the RBFNN is determined by comparing the control performance with different node numbers.As shown in Figure 5, we can find that when the number of neurons is five, although the USV can move along the desired path, it exhibits a significant forward velocity tracking error.However, when the number of neurons is 21 50, their control performances are similar.Since a number of neurons requires more computational time, the final number of neurons is chosen to be 21. A illustrated in Figure 6a, it can be observed that the value of β u has little impact on the accuracy of path tracking for the USV.The value of β u primarily affects the event-triggered times in surge channel, which in turn affects the tracking precision of surge velocity.It is clear that the smaller the β u , the more times the event will be triggered.By comparison, the value of β u is ultimately chosen as 15.Finally, all the controller parameters are listed in Table 2. The simulation is carried out to verify the vehicle behavior for a smaller R = 10 m.In this condition, four different initial states are selected: [x 1 , y 1 , rad], and [x 4 , y 4 , ψ 4 ] = [0, −15 m, 0].The controller parameters are listed in Table 2.The results are shown in Figure 7. As shown in Figure 7a, for a smaller R = 10 m, the proposed controller can still guarantee that the USV will track the desired path well, the tracking errors are bounded as illustrated in Figure 7b, and the event-triggered mechanism works, as shown in Figure 7c. Case 2: Comparison With Other Approaches In this case, other two approaches including backstepping and time triggered RBFNNbased backstepping are involved.The control laws of these two controllers are given by Equations ( 26) and (39). The simulation results are shown in Figures 8 and 9, and the control performance comparisons are listed in Table 3. 
Integrating the errors in IAE captures the cumulative effect of position tacking errors over the entire interval, providing a quantitative measure of the overall error.RMSE evaluates the root mean square of the errors, which measures the dispersion between the desired position and the true position of the USV, providing an overall understanding of the error distribution.Therefore, the selection of IAE and RMSE as evaluation metrics is aimed at comprehensively considering the cumulative effect and distribution characteristics of position tracking errors.They provide a thorough assessment of the differences between the desired position and the actual position of the USV and help compare the performance of the algorithms. From Figures 8a,b, it can be observed that the control accuracy of the backstepping method tends to decrease significantly due to the external disturbances.However, for RBFNN-backstepping and the proposed method, due to the robustness of the NN, the USV can still track the desired path with high accuracy.As shown in Figure 8c, the control inputs only update when the event is triggered. As shown in Figure 9a, each weight of the RBFNN in the surge and yaw channels is bounded.The triggering time intervals in each channel are illustrated in Figure 9b.It can be found that the maximum time intervals can reach up to 5.16 s and 5.59 s.Considering the unknown model nonlinearity and external disturbances, G r = m 1 −m 2 m 3 uv − d 3 m 3 r + τ r w m 3 .Submitting the model parameters listed in Table 1, we can obtain G r = −0.1326uv− 1.1414r + 0.0076sin(0.05t)cos(0.01t)+ 0.0228.After the system stabilizes, the lateral velocity and yaw angular velocity of the USV are relatively small.Hence, the actual value of G r will also be small (taking the example at t = 60 s, the actual value of G r is only −0.0348).Therefore, the weights of the RBFNN in the yaw channel are very small as shown in Figure 9c. From Table 3, we can find that, for the backstepping method, the IAE and RMSE of position are much bigger than the ones of RBFNN-based backstepping approach or the proposed method (backstepping: 350.00 >, proposed method: 256.58 >, and RBFNN-based backstepping: 252.88).The fundamental reason for this situation is that the traditional backstepping technique is a model-based method and its robustness is poor, while the RBFNN can improve the robustness of the other two approaches.Comparing the proposed method with RBFNN-based backstepping, the difference between the IAE and RSME of the proposed method and the ones of the RBFNN-based backstepping are very small.However, the triggering times of the proposed method in both the surge and yaw channels are much less than the ones of the RBFNN-based backstepping method (proposed method: 77 times in the surge channel and 293 times in the yaw channel; RBFNN-based backstepping: 12,000 times in both channels). Above all, it is clear that the proposed method can guarantee the USV to track the desired path accurately even if the unknown model nonlinearity and disturbances exist.In addition, profiting from the designed event-triggered mechanism, which is different from the time-triggered control approaches, there is no need for the proposed method to update the control inputs at every sampling instant. 
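For reference, both metrics can be computed directly from the logged position errors. The sketch below assumes the Euclidean position-error norm is integrated and averaged (the paper may instead report the metrics per axis; the names are illustrative):

```python
import numpy as np

def iae(x_err, y_err, dt):
    """IAE: integral of the position-error norm, approximated by a Riemann sum."""
    return float(np.sum(np.hypot(x_err, y_err)) * dt)

def rmse(x_err, y_err):
    """Root mean square of the position-error norm over the simulation horizon."""
    return float(np.sqrt(np.mean(np.hypot(x_err, y_err) ** 2)))
```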
Although the control method proposed in this paper ensures stable navigation of the USV under the influence of unknown nonlinearity and external disturbances, there is still room for further improvement. For example, the method does not consider control input constraints, the neural network structure parameters are manually set without optimization, and optimal performance criteria are not taken into account. Remark 2. In Appendix A, the closed-loop system is proven to be GUUB, and we obtain an inequality of the form Ḣ ≤ −λ 1 H + λ 2 (see Remark A1). By solving this inequality, we can obtain H(t) ≤ (H(0) − λ 2 /λ 1 )e^(−λ 1 t) + λ 2 /λ 1 , where H(0) denotes the initial value of H. Therefore, the error vector E = [x e , y e , χ e − χ, u e , r e , W̃ u , W̃ r ] satisfies a bound determined by H(0), λ 1 , and λ 2 . Based on the above analysis, it can be concluded that the tracking error remains bounded as time progresses, although some errors may not converge to zero. Remark A1. Based on Lemma 2, to guarantee that the closed-loop system is GUUB, the Lyapunov function H should satisfy Ḣ ≤ −λ 1 H + λ 2 , where λ 1 and λ 2 are positive constants. In this paper, λ 1 is the minimum value among k 1 , k U , k c , k 3 − 1/2, k 4 − 1/2, (1/2)k 5 , and (1/2)k 6 . To ensure that λ 1 is always a positive constant, the values of k 3 − 1/2 and k 4 − 1/2 should be positive. Therefore, both k 3 and k 4 should be greater than 1/2. In addition, the event-triggered controller should avoid Zeno behavior. For the surge channel, recalling α u (t) = η u (t) − τ u (t), the error α u is reset to zero at each triggering instant and must grow to the threshold β u before the next one; since its rate of change is bounded, the inter-event time ∆t u = t i+1 − t i has a strictly positive lower bound, which means the Zeno behavior is avoided. In the same way, it is easy to prove that the Zeno behavior in the yaw channel can also be avoided. At this point, the proof of Theorem 1 is completed. Figure 1. Schematic diagram of path following control framework. Figure 2. Path following diagram of the underactuated USV. Table 1. Parameters of the USV.
6,857.2
2023-12-11T00:00:00.000
[ "Engineering", "Computer Science" ]
Norm retrieval and phase retrieval by projections We make a detailed study of norm retrieval. We give several classification theorems for norm retrieval and give a large number of examples to go with the theory. One consequence is a new result about Parseval frames: If a Parseval frame is divided into two subsets with spans $W_1,W_2$ and $W_1 \cap W_2=\{0\}$, then $W_1 \perp W_2$. Introduction Signal reconstruction is an important problem in engineering and has a wide variety of applications. Recovering signals when there is partial loss of information is a significant challenge. Partial loss of phase information occurs in application areas such as speech recognition [4,17,18], and optics applications such as X-ray crystallography [3,13,14], and there is a need to do phase retrieval efficiently. The concept of phase retrieval for Hilbert space frames was introduced in 2006 by Balan, Casazza, and Edidin [2], and since then it has become an active area of research in signal processing and harmonic analysis. Phase retrieval has been defined for vectors as well as for projections and in general deals with recovering the phase of a signal given its intensity measurements from a redundant linear system. Phase retrieval by projections, where the signal is projected onto some higher dimensional subspaces and has to be recovered from the norms of the projections of the vectors onto the subspaces, appears in real life problems such as crystal twinning [12]. We refer the reader to [8] for a detailed study of phase retrieval by projections. Another related problem is that of phaseless reconstruction, where the unknown signal is reconstructed from the intensity measurements. Recently, the two terms phase retrieval and phaseless reconstruction were used interchangeably. However, it is not clear from their respective definitions how these two are equivalent. Recently, in [5] the authors proved the equivalence of phase retrieval and phaseless reconstruction in real as well as in complex case. Due to The first, second and fourth authors were supported by NSF DMS 1609760; NSF ATD 1321779; and ARO W911NF-16-1-0008. Part of this research was carried out while the first and fourth authors were visiting the Hong Kong University of Science and Technology on a grant from (ICERM) Institute for computational and experimental research in Mathematics. this equivalence, in this paper, we restrict ourselves to proving results regarding phase retrieval. Further, a weaker notion of phase retrieval and phaseless reconstruction was introduced in [6]. In this work, we consider the notion of norm retrieval which was recently introduced by Bahmanpour et.al. in [1], and is the problem of retrieving the norm of a vector given the absolute value of its intensity measurements. Norm retrieval arises naturally from phase retrieval when one utilizes both a collection of subspaces and their orthogonal complements. Here we study norm retrieval and certain classifications of it. We use projections to do norm retrieval and to extend certain results from [16] for frames. We provide a complete classification of subspaces of R N which do norm retrieval. Various examples for phase and norm retrieval by projections are given. Further, a classification of norm retrieval using Naimark's theorem is also obtained. We organize the rest of the paper as follows. In Section 2, we include basic definitions and results of phase retrieval. Section 3 introduces the norm retrieval and properties. 
Section 4 provides the relationship between phase and norm retrieval and related results. Detailed classifications of vectors and subspaces which do norm retrieval are provided in Section 5. Preliminaries We denote by H N a N dimensional real or complex Hilbert space, and we write R N or C N when it is necessary to differentiate between the two explicitly. Below, we give the definition of a frame in H N . The following definitions and terms are useful in the sequel. • The constants A and B are called the lower and upper frame bounds of the frame, respectively. • If A = B, the frame is called an A-tight frame (or a tight frame). In particular, if A = B = 1, the frame is called a Parseval frame. • Φ is an equal norm frame if φ i = φ j for all i, j and is called a unit norm frame if φ i = 1 for all i = 1, 2, · · · n. • If, only the right hand side inequality holds in (1), the frame is called a B-Bessel family with Bessel bound B. Note that in a finite dimensional setting, a frame is a spanning set of vectors in the Hilbert space. We refer to [10] for an introduction to Hilbert space frame theory and applications. The analysis operator associated with Φ is defined as the operator T : Here, {e i } M i=1 is understood to be the natural orthonormal basis for ℓ M 2 . The adjoint T * of the analysis operator T is called the synthesis operator of the frame Φ. It can be shown that T * (e i ) = φ i . The frame operator for the frame Φ is defined as S : Note that the frame operator S is a positive, self-adjoint and invertible operator satisfying the operator inequality AI ≤ S ≤ BI, where A and B are the frame bounds and I denotes the identity on H N . Frame operators play an important role since they are used to reconstruct the vectors in the space. To be precise, any x ∈ H N can be written as The frame operator of a Parseval frame is the identity operator. Thus, if We concentrate on norm retrieval and its classifications in this paper. We now see the basic definitions of phase retrieval formally, starting with phase retrieval by projections. Throughout the paper, the term projection is used to describe orthogonal projection (orthogonal idempotent operator) onto subspaces. be the projections onto each of these subspaces. We say that ) yields phase retrieval if for all x, y ∈ H N satisfying P i x = P i y for all i = 1, 2, · · · , M then x = cy for some scalar c such that |c| = 1 Phase retrieval by vectors is a particular case of the above. Φ yields phase retrieval with respect to an orthonormal basis Orthonormal bases fail to do phase retrieval, since in any given orthonormal basis, the corresponding coefficients of a vector are unique. One of the fundamental properties to identify the minimum number of vectors required to do phase retrieval is the complement property. It is proved in [2] that phase retrieval is equivalent to the complement property in R N . Further, it is proven that a generic family of (2N − 1)-vectors in R N does phase retrieval, however no set of (2N − 2)-vectors can. Here, generic refers to an open dense set in the set of (2N − 1)-element frames in H N . Full spark is another important notion of vectors in frame theory. A formal definition is given below: in H N , the spark of Φ is defined as the cardinality of the smallest linearly dependent subset of Φ. When spark(Φ) = N + 1, every subset of size N is linearly independent, and in that case, Φ is said to be full spark. 
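In finite dimensions, the frame inequality and the frame operator can be checked numerically. A brief sketch follows (Python/NumPy; the convention of storing the frame vectors as rows and the example frame are illustrative):

```python
import numpy as np

def frame_bounds(Phi):
    """Rows of Phi are the frame vectors phi_i in R^N. The frame operator is
    S = Phi^T Phi; the optimal frame bounds A, B in
    A ||x||^2 <= sum_i |<x, phi_i>|^2 <= B ||x||^2
    are its extreme eigenvalues, and S = I characterizes Parseval frames."""
    S = Phi.T @ Phi
    eig = np.linalg.eigvalsh(S)
    return eig[0], eig[-1], S

# Example: any finite spanning set of R^2 is a frame; here A < B, so it is not tight.
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 2.0]])
A, B, S = frame_bounds(Phi)
```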
Note from the definitions that full spark frames with M ≥ 2N − 1 vectors have the complement property and hence do phase retrieval. Moreover, if M = 2N − 1 then the complement property clearly implies full spark. The next result, known as Naimark's theorem, characterizes Parseval frames in a finite dimensional Hilbert space. This theorem facilitates a way to construct Parseval frames, and crucially it is essentially the only way to obtain Parseval frames. Later, we use this to obtain a classification of frames which do norm retrieval. The notation [M] = {1, 2, · · · , M} is used throughout the paper. Theorem 2.6 (Naimark's Theorem). A family Φ = {φ_i}_{i=1}^M is a Parseval frame for H^N if and only if H^N can be regarded as a subspace of ℓ_2^M with an orthonormal basis {e_i}_{i=1}^M so that the orthogonal projection P of ℓ_2^M onto H^N satisfies P e_i = φ_i for all i ∈ [M]. Beginnings of Norm Retrieval In this section, we provide the definition of norm retrieval along with certain related results and pertinent examples. Definition 3.1. Let {W_i}_{i=1}^M be subspaces of H^N and let {P_i}_{i=1}^M be the orthogonal projections onto each of these subspaces. We say that {W_i}_{i=1}^M (or {P_i}_{i=1}^M) yields norm retrieval if for all x, y ∈ H^N satisfying ‖P_i x‖ = ‖P_i y‖ for all i = 1, 2, · · · , M, we have ‖x‖ = ‖y‖. In particular, a set of vectors {φ_i}_{i=1}^M does norm retrieval if whenever |⟨x, φ_i⟩| = |⟨y, φ_i⟩| for all i, then ‖x‖ = ‖y‖. Remark 3.2. It is immediate that a family of vectors doing phase retrieval does norm retrieval. An obvious choice of vectors which do norm retrieval are orthonormal bases. The following theorem provides a sufficient condition under which the subspaces spanned by the canonical basis vectors do norm retrieval. It is easy to see that tight frames do norm retrieval. Theorem 3.4. Tight frames do norm retrieval. Proof. If Φ = {φ_i}_{i=1}^M is an A-tight frame, then Σ_{i=1}^M |⟨x, φ_i⟩|² = A‖x‖² for every x ∈ H^N, so the measurements determine the norm. This is generalized in the following proposition. Proposition. If {φ_i}_{i=1}^M contains an orthonormal basis, then it does norm retrieval. Moreover, in this case, ‖x‖² = Σ_{j=1}^N |⟨x, e_j⟩|², where {e_j}_{j=1}^N is an orthonormal basis contained in the family. Next, let {e_i}_{i=1}^N be an orthonormal basis for H^N and let P_i be the projections onto the hyperplanes W_i = e_i^⊥; since Σ_{i=1}^N ‖P_i x‖² = (N − 1)‖x‖² for every x, these N hyperplanes do norm retrieval. The above does not hold if the number of hyperplanes is strictly less than N. This is proved in the next theorem. Now, we strengthen the above result by not requiring the vectors to be orthogonal. To prove this, we need the following lemma. Proof. We do this by induction on N, with the case N = 2 obvious. So assume the claim holds for N − 1. As λ varies from −∞ to +∞, the right hand side varies from −∞ to +∞, and for some λ the required equality holds. Proof. Let P_i be the projection onto W_i and choose x and y with ‖P_i x‖ = ‖P_i y‖ for all i. But ‖x‖² = 1 while ‖y‖² ≠ 1, and so norm retrieval fails. However, in the following theorem, we show that three proper subspaces of codimension one can do norm retrieval in R^N. Theorem 3.9. In R^N three proper subspaces of codimension one can do norm retrieval. It follows that in R^3, two 2-dimensional subspaces cannot do norm retrieval but three 2-dimensional subspaces can. The following proposition shows a relationship between subspaces doing norm retrieval and the sum of the dimensions of the subspaces. The importance of this proposition is that we are looking for conditions on subspaces to do norm retrieval, and the dimension of the subspaces is one of the tools we have. Proposition. If {W_i}_{i=1}^M does norm retrieval in H^N, then Σ_{i=1}^M dim W_i ≥ N. Moreover, for any positive integers k_i ≤ N with Σ_{i=1}^M k_i = LN for some positive integer L, there exist subspaces {W_i}_{i=1}^M with dim W_i = k_i which do norm retrieval. Proof. If Σ_{i=1}^M dim W_i < N then we may pick a non-zero x ⊥ W_i for each i so that P_i x = 0 for all i, and therefore {W_i}_{i=1}^M fails norm retrieval. For the moreover part, let {g_i}_{i=1}^N be an orthonormal basis. We represent this basis L times as a multiset {g_1, · · · , g_N, g_1, · · · , g_N, · · · , g_1, · · · , g_N} and index it over [LN]. We may pick a partition of [LN] into consecutive blocks of sizes k_1, · · · , k_M and let W_i be the span of the basis vectors indexed by the i-th block; then Σ_{i=1}^M ‖P_i x‖² = L‖x‖² for every x, so these subspaces do norm retrieval. Hence the result. As we have seen, the above proposition may fail if Σ_{i=1}^M k_i ≠ LN. Phase retrieval and Norm Retrieval In this section, we provide results relating phase retrieval and norm retrieval.
The following theorem of Edidin [11] is significant in phase retrieval as it gives a necessary and sufficient condition for subspaces to do phase retrieval. Theorem 4.1 ([11]). A family of subspaces {W_i}_{i=1}^M of R^N, with projections {P_i}_{i=1}^M, does phase retrieval if and only if for every 0 ≠ x ∈ R^N the vectors {P_i x}_{i=1}^M span R^N. As a consequence, if {W_i}_{i=1}^N does phase retrieval in R^N, then for every I ⊂ [N] whose complement I^c contains at least two indices, the family {W_i^⊥}_{i∈I^c} spans R^N. Proof. If not, pick a non-zero x ⊥ W_i^⊥ for all i ∈ I^c. This implies x ∈ ∩_{i∈I^c} W_i, and therefore {P_i(x)}_{i=1}^N contains at most N − 1 distinct vectors and cannot span R^N. This contradicts Theorem 4.1. Also, if {W_i}_{i=1}^M does phase retrieval, then {W_i^⊥}_{i=1}^M spans H^N. Proof. If {W_i^⊥} does not span, then there exists 0 ≠ x ∈ ∩W_i. So P_i x = x for all i = 1, 2, · · · , M, and so {P_i(x)} does not span. Thus, by Theorem 4.1, {W_i} does not do phase retrieval. The following example shows that it is possible for subspaces to do norm retrieval even if {W_i^⊥} do not span the space, which we see as one of the main differences between phase retrieval and norm retrieval. Let {e_i}_{i=1}^3 be an orthonormal basis for R^3; for instance, one may take W_1 = span{e_1, e_3}, W_2 = span{e_2, e_3} and W_3 = span{e_3}, so that P_1 + P_2 − P_3 = I and the family does norm retrieval, while the complements all lie in span{e_1, e_2} and hence do not span R^3. Any collection of subspaces which does phase retrieval yields norm retrieval, which follows from the definitions. However, the converse need not always hold. For instance, any orthonormal basis does norm retrieval in R^N, but it has too few vectors to do phase retrieval, since at least 2N − 1 vectors are required to do phase retrieval in R^N. Given subspaces {W_i}_{i=1}^M of H^N which yield phase retrieval, it is not necessarily true that {W_i^⊥}_{i=1}^M do phase retrieval. The following result proves that norm retrieval is the condition needed to pass phase retrieval to orthogonal complements. Though the result is already proved in [1], we include it here for completeness. Lemma 4.5. Suppose {W_i}_{i=1}^M, with projections {P_i}_{i=1}^M, does phase retrieval. Then {W_i^⊥}_{i=1}^M does phase retrieval if and only if {W_i^⊥}_{i=1}^M does norm retrieval. Proof. Assume that ‖(I − P_i)x‖ = ‖(I − P_i)y‖ for all i = 1, 2, · · · , M and that {I − P_i}_{i=1}^M does norm retrieval, i.e. ‖x‖ = ‖y‖. Then ‖P_i x‖² = ‖x‖² − ‖(I − P_i)x‖² = ‖y‖² − ‖(I − P_i)y‖² = ‖P_i y‖² for all i = 1, 2, · · · , M. Since {P_i}_{i=1}^M does phase retrieval, it follows that x = cy for some |c| = 1. The other direction of the theorem is clear. Next is an example of a family of subspaces {W_i}_{i=1}^M which does phase retrieval but whose complements fail phase retrieval and hence fail norm retrieval [8]. Conversely, whenever the complement family does norm retrieval, we can conclude that it does phase retrieval as well, which follows from Lemma 4.5. The next result gives us a sufficient condition for the subspaces to do norm retrieval: it is enough to check whether the identity is in the linear span of the projections. A similar result in the case of phase retrieval is proved in [7]. Proposition. If {W_i}_{i=1}^M are subspaces of R^N with projections {P_i}_{i=1}^M for which there are scalars {a_i}_{i=1}^M with Σ_{i=1}^M a_i P_i = I, then {W_i}_{i=1}^M does norm retrieval. Proof. Given x ∈ R^N, ‖x‖² = ⟨x, x⟩ = ⟨Σ_{i=1}^M a_i P_i x, x⟩ = Σ_{i=1}^M a_i ‖P_i x‖². Since for each i the coefficients a_i and the norms ‖P_i x‖ are known, the collection does norm retrieval. A counterexample for the converse of the above proposition is given in [1], where the authors construct a collection of projections P_i which do phase retrieval but I ∉ span{P_i}. Here, we provide another example of the same kind. We give a set of five vectors in R^3 which does phase retrieval; however, the identity operator is not in the span of the corresponding projections. We need the following theorem, which provides a necessary and sufficient condition for a frame to be not scalable in R^3. Recall that a frame is scalable if its vectors can be rescaled so that the resulting family is a Parseval frame [15]. Later, in the next section, we prove that scalable frames always do norm retrieval. Choose five full spark vectors in the cone referred to in the previous Theorem 4.9. These vectors do phase retrieval and hence norm retrieval in R^3. Now, given these vectors, a direct computation shows that the identity is not in the span of the associated projections. The next proposition gives a sufficient condition for the complements to do norm retrieval when the subspaces do. Proposition. If {P_i}_{i=1}^L satisfy Σ_{i=1}^L a_i P_i = I with Σ_{i=1}^L a_i ≠ 1, then {I − P_i}_{i=1}^L does norm retrieval. Proof. Observe that Σ_{i=1}^L a_i/(Σ_{j=1}^L a_j − 1) (I − P_i) = I. By the previous proposition this shows {I − P_i}_{i=1}^L does norm retrieval.
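As a quick numerical illustration of the preceding proposition (if the identity lies in the span of the projections, the subspaces do norm retrieval), the following sketch, which is not taken from the paper, uses the three coordinate hyperplanes of R^3; here Σ_i (1/2) P_i = I, so ‖x‖² = Σ_i (1/2)‖P_i x‖² can be read off from the measured norms alone.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

# Three coordinate hyperplanes W_i = e_i^perp with projections P_i = I - e_i e_i^T.
I = np.eye(N)
projections = [I - np.outer(I[i], I[i]) for i in range(N)]

# Here sum_i a_i P_i = I with a_i = 1/2, so the proposition applies.
a = [0.5, 0.5, 0.5]
assert np.allclose(sum(ai * Pi for ai, Pi in zip(a, projections)), I)

# Norm retrieval: ||x||^2 = sum_i a_i ||P_i x||^2, using only the measured norms.
x = rng.standard_normal(N)
measured = [np.linalg.norm(Pi @ x) for Pi in projections]
recovered_norm = np.sqrt(sum(ai * m**2 for ai, m in zip(a, measured)))
print(recovered_norm, np.linalg.norm(x))   # the two values agree
```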
It is possible that a i P i = I = b i P i with a i = 1 but b i = 1, as we will see in the following example. be an orthonormal basis for R 3 . Now let Classification of Norm Retrieval In this section, we give classifications of norm retrieval by projections. The following theorem in [16] uses the span of the frame elements to classify norm retrievable frames in R N . Next, we prove one of the main results of this paper. This is an extension of the previous Theorem 5.1 and it fully classifies the subspaces of R N which do norm retrieval. Then the following are equivalent: Given any orthonormal bases {φ i,j } I i j=1 of W i and any subcollection and so x, y = 0. By (2), we must have that x + y, x − y = 0, which implies that x and y have the same norm. The third equivalence is immediate from the result in Theorem (5.1). , c i = 0 does norm retrieval. Hence all scalable frames do norm retrieval. Proof. This is an immediate result of Theorem 5.2. Observe the conditions in Theorem 5.2 do not depend on the norm of each vector φ i . For the complex case we have: Proof. Given x, y as above, | x + y, φ ij | = | x − y, φij |, for all (i,j). We use Theorem 5.2 to give a simple proof of a result in [7] which has a very complicated proof in that paper. do norm retrieval in R N , then the vectors are orthogonal. Proof. Assume φ i = 1 and that φ j is not orthogonal so span {φ i } i =j . Choose x ⊥ a i for all i = j. Let y = x − x, a j a j . Now, a j , y = a j , x − x, a j a j , a j = 0. Let I = {i : i = j}. Then x ⊥ span {a i } i∈I and y ⊥ a j , but x, y = x, x − x, a j x, a j = 1 − | x, a j | 2 = 0, contradicting the theorem. (2) For i = 1, 2, · · · , M if W 1 = span{φ i } i∈I and W 2 = span{φ i } i∈I c then, Both phase retrieval and norm retrieval are preserved when applying projections to the vectors. Also, phase retrieval is preserved under the application of any invertible operator (refer to [1] for details). This is not the case with norm retrieval, in general. We prove this in the next corollary. Corollary 5.7. Norm retrieval is not preserved under the application of an invertible operator, in general. be linearly independent vectors in R N which are not orthogonal. Then by Corollary 5.5, Φ cannot do norm retrieval. But there exists an invertible operator T on R N so that {T φ i } N i=1 is an orthonormal basis and so does norm retrieval. However, we note that unitary operators, which are invertible, do preserve norm retrieval. The following corollary about Parseval frames also holds in the infinite dimensional case with the same proof. Corollary 5.8. If Φ is a Parseval frame, it does norm retrieval. Hence, if we partition Φ into two disjoint sets, and choose a vector orthogonal to each set, then these vectors are orthogonal. Proof. Let Φ = {φ i } i∈I be a Parseval frame and let J ⊆ I. Let T be its analysis operator. If x ⊥ {φ i } i∈J and y ⊥ {φ i } i∈J c . Then T x = ( x, φ i ) and T y = ( y, φ i ) do not have any nonzero coordinates in common. So T x ⊥ T y. Since, the analysis operator of a Parseval frame is an isometry, we have x ⊥ y. A classic result in frame theory is that a Parseval frame {φ i } i∈I has the property that if φ j / ∈ W = span i =j {φ i } then φ j ⊥ W. It turns out that a much more general result holds. Corollary 5.10. If Φ = {φ i } N i=1 is a frame for R M with frame operator S which does norm retrieval, then for every I ⊂ {1, 2, · · · , N}, if x ⊥ span {φ i } i∈I then x ∈ span {S −1 φ i } i∈I c . In particular, if Φ is a Parseval frame then x ∈ span {φ i } i∈I c . Proof. 
Given x as in the corollary, We next provide a classification of norm retrieval using Naimark's theorem. It turns out that every frame can be scaled to look similar to Naimark's theorem. Proof. Let {g i } N i=1 be the eigenbasis for the frame with respective eigenvalues 1 = λ i ≥ λ 2 ≥ · · · ≥ λ N . For M + 1 ≤ M + i ≤ 2M − 1 let Then is a Parseval frame. So R N ⊂ ℓ 2 (2M − 1) with orthonormal basis {e i } 2M −1 i=1 and the projection down to R N satisfies P e i = φ i for all i ∈ [2M − 1]. Theorem 5.12. Let Φ = {φ i } M i=1 be a frame for R N . The following are equivalent: (1) Φ does norm retrieval. be an orthonormal basis for ℓ 2M −1 2 and the projection onto R N satisfies P e i = φ i for i = 1, 2, · · · , M. Now, Φ does norm retrieval if and only if for any x ∈ R N , knowing | x, φ i | gives us x . But x, φ i = x, P e i = P x, e i = x, e i . Now, knowing | x, e i | for i = 1, 2, · · · , M means knowing x . But: x, e i e i 2 .
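The Parseval-frame facts used above (the analysis operator of a Parseval frame is an isometry, hence Parseval frames do norm retrieval, and vectors orthogonal to complementary subsets of a Parseval frame are orthogonal to each other) are easy to verify numerically. The sketch below is only illustrative: the random frame, the chosen dimensions, and the helper function are assumptions, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 6, 4

# Rows of Q form a Parseval frame for R^N whenever Q has orthonormal columns.
Q, _ = np.linalg.qr(rng.standard_normal((M, N)))
Phi = Q                                    # shape (M, N); frame operator Phi^T Phi = I
assert np.allclose(Phi.T @ Phi, np.eye(N))

# Norm retrieval: ||x||^2 equals the sum of the squared frame coefficients.
x = rng.standard_normal(N)
assert np.isclose(np.sum((Phi @ x) ** 2), np.linalg.norm(x) ** 2)

def orthogonal_to(rows):
    """Return a unit vector orthogonal to the given frame vectors (a null-space vector)."""
    _, s, Vt = np.linalg.svd(rows)
    return Vt[-1]                          # right singular vector for the smallest singular value

# Split the Parseval frame into two disjoint subsets; vectors orthogonal to
# each subset turn out to be orthogonal to each other.
J = [0, 1, 2]
Jc = [3, 4, 5]
u = orthogonal_to(Phi[J])
v = orthogonal_to(Phi[Jc])
print("inner product:", u @ v)             # numerically zero
```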
5,129.6
2017-01-27T00:00:00.000
[ "Mathematics" ]
Research on eight machine learning algorithms applicability on different characteristics data sets in medical classification tasks With the vigorous development of the data mining field, more and more algorithms have been proposed or improved. How to quickly select a data mining algorithm that is suitable for data sets in the medical field is a challenge for some medical workers. The purpose of this paper is to study the comparative characteristics of general medical data sets and general data sets in other fields, and to find the applicability rules of data mining algorithms suited to the characteristics of the current research data set. The study quantified the characteristics of the research data sets with 26 indicators, including simple indicators, statistical indicators and information theory indicators. Eight machine learning algorithms with high maturity, low user involvement and strong family representation were selected as the base algorithms. The algorithm performances were evaluated in three aspects: prediction accuracy, running speed and memory consumption. By constructing decision tree and stepwise regression models to learn the above metadata, the algorithm applicability knowledge of medical data sets is obtained. Through cross-validation, the accuracy of all the algorithm applicability prediction models is above 75%, which demonstrates the validity and feasibility of the applicability knowledge. Introduction 1. Background With the development of data mining technology and interdisciplinary fields, more and more algorithms have been proposed and applied. With the development of science and the innovation of technology, hospital information systems have been established and gradually popularized. The acquisition, storage and rapid transmission of large amounts of data have gradually been realized, thus accumulating huge medical data resources. In the biomedical field, it is critical to translate the growing volume of biomedical data into meaningful and valuable information for practicing physicians. Traditional data analysis methods are mainly based on statistics. However, with the growth of data sets and the wide application of multimedia storage media and object-oriented technology, traditional statistical analysis methods are no longer enough to support current data analysis needs. As a result, a series of new data analysis methods came into being, and data mining methods have received more and more attention and have been applied in the biomedical field. How to choose an algorithm that is more suitable for the current task from a large number of algorithms is a problem to be solved in various research fields. In this context, a novel and prospective research field, hybrid methods between metaheuristics and machine learning, has arisen. This research field successfully combines machine learning and swarm intelligence approaches and has proved able to obtain outstanding results in different areas (Malakar et al., 2019; Bacanin et al., 2021, 2022; Zivkovic et al., 2022).
For medical workers without science and engineering background, it has become an urgent need to quickly choose a method suitable for current research data among many data mining algorithms.In view of the above problems, this paper adopts 8 data mining algorithms to construct models and evaluate results on different data sets according to research questions, obtain the applicability knowledge of algorithms, and provide empirical guidance for the selection of data mining algorithms.Aiming at the inconsistency of multiple evaluation indicators, this paper studied mapping knowledge from three aspects: prediction accuracy, modeling running time and memory occupancy requirements, which provided the possibility for users to choose according to the priority of research problems. Related works In 1976, Rice formally defined the conceptual model of algorithm selection, which consists of four parts: problem space, feature space, algorithm space and performance space (Rice, 1976).In order to make algorithm selection more targeted, Berrer introduced the concept of user preference into the algorithm evaluation system, enabling users to assign different weights to each evaluation index according to business characteristics, which is an important way for users to participate in the model selection process (Guoxun, 2013).Some early studies laid the foundation for meta-learning (Rendell and Cho, 1990;Aha, 1992;Schaffer, 2010;Jianshuang et al., 2017).Meta-learning, in simple terms, is learning about learning, that is, relearning on the basis of learning results (Brodley, 1995).Meta-learning studies how to learn from experience to enhance learning performance (Makmal et al., 2017).At present, researches on algorithm selection based on metalearning ideas mainly focus on the description of dataset characteristics, the determination of meta-algorithms (Vilata and Drissi, 2002;Finn and Choi, 2017;Finn and Levine, 2017;Lee and Levine, 2018a,b) and the expansion and application of meta-learning to a specific problem (Doan, 2016;Li et al., 2017). 
High-quality description of dataset characteristics can provide a reasonable explanation for the difference in algorithm performance, while few dataset characteristics were taken into account in early studies, which were expanded by two subsequent ESPRIT projects.(1) Comparative testing of statistical and logical learning(STATLOG) project (King et al., 1995): From 1991 to 1994, a large-scale project was carried out in Europe to compare classification algorithms.By applying different types of classification algorithms on different datasets from different fields, and comparing the performance of each algorithm, the relationship between algorithm performance and dataset characteristics was obtained, so as to provide empirical knowledge for algorithm selection.The STATLOG project selects 22 classification task datasets in the UCI database, 23 algorithms based on machine learning methods, such as statistics, rules, tree structure and neural network, and 16 dataset characteristics description indicators, such as mean, variance and information entropy.The accuracy of prediction is taken as the evaluation criterion.The C4.5 decision tree algorithm is used to generate rules applicable to data characteristics for each algorithm.The results of the STATLOG project show that no algorithm can perform optimally on all datasets, that confirms the No free lunch (NFL) theorem (Wolpert, 1996).The STATLOG project provides extremely valuable metadata that has been widely used in the field of meta-learning over the years.(2) A metalearning assistant for providing user support in machine learning and data mining (METAL) project (Smith, 2008): From 1998 to 2001, based on the research results of the STATLOG project and the research progress of meta-learning, another algorithm selection research project was carried out in Europe, which mainly focused on algorithm selection in classification and regression problems.The METAL project selects a total of 53 classification task datasets from UCI database and other sources, 10 algorithms such as based on rules, decision trees, neural networks, instances and linear discrimination.The METAL project continues to use the 16 characteristic description indicators of datasets in STATLOG, and takes prediction accuracy and time performance as evaluation criteria.The computing performance of each algorithm is evaluated and sorted by 10-fold cross-validation. After the two European ESPRIT projects, there is limited research on algorithm selection for general datasets without significant macro features.In 2000, Lim et al. 
selected 22 kinds of decision tree algorithms, 9 statistical algorithms and 2 neural network algorithms to run on 32 datasets respectively, and evaluated each algorithm in terms of classification accuracy, training time and the number of leaf nodes in the decision tree (Lim et al., 2000). In 2006, Ali and Smith conducted a large-scale algorithm selection study for classification problems. They selected 112 classification task datasets in the UCI database and 8 algorithms based on statistics, rules and neural networks. On the basis of STATLOG, they introduced statistical features from the Matlab toolbox and other sources, such as the dispersion index and the maximum and minimum eigenvalues of the covariance matrix, and expanded the characteristic description indicators of the dataset to 31. F-measure was added as an evaluation criterion, and the C4.5 decision tree algorithm was used to learn mapping rules to predict the optimal algorithm (Ali and Smith, 2006). For the first time, the support vector machine (SVM) was included in the research scope, and the indicators of dataset characteristic description and algorithm evaluation were extended. Since 2014, some researchers have focused on the integration of several basic classifiers (Cruz et al., 2015) or the overall workflow of some software (Nguyen and Kalousis, 2014; Soares, 2014). These studies only show the final result, which is equivalent to a black box for users, and the specific judgment process is unknown. For the specific field of supervised machine learning problems, Luo (2016) reviewed the literature on machine learning algorithms and automatic selection of hyperparameter values, and found that these methods have limitations in the context of biomedical data. Because the performance of machine learning algorithms is shown to be problem dependent (Heremans and Orshoven, 2015), it is recommended to compare different candidate algorithms in specific application environments. Some studies have been conducted in the fields of time series (Adhikari, 2015) and bioinformatics (Ding et al., 2014), in which the data have significant temporal variation or high dimensionality. Algorithm selection should compare the performance of algorithms from multiple aspects. On the basis of existing research, the following three principles have been widely recognized. (1) NFL theorem: Wolpert and Macready proposed the NFL theorem for comparing two optimization algorithms to determine which one is better. However, the performance of the optimization algorithms is equivalent due to the mutual compensation over all possible functions. Specifically, it can be described as follows: for all optimization problems in a specific field, after m steps of iteration, the cumulative sum over all possibilities of algorithm A and algorithm B reaching a given value of the objective function is equal (David and Wolpert, 1997). The NFL theorem shows that the algorithm is selected by the data, that is, by the background of the problem. If we do not make any assumptions about the background of the problem, there is no universally optimal algorithm, so it is meaningless to study the universally optimal algorithm. (2) Occam's razor principle (Warmuth, 1987): The principle states that "if it is not necessary, do not add entities," that is, the "simple and effective principle."
The principle holds that for a given domain, the simplest explanation of a phenomenon is most likely to be correct, that is, for a given number of models with approximate goodness-of-fit, the more concise model should be chosen (Domingos, 2010).However, due to the simplicity and necessity of this principle is difficult to quantify in practice, this algorithm selection principle has not been widely promoted.(3) Minimum description length (MDL) principle (Rissanen, 1978): This principle was proposed by Rissanen (1978) from the perspective of information theory, and its basic idea is that for a given data set, the optimal compression of the data is the best hypothesis for the dataset.The MDL principle holds that the complexity of a model is the sum of the description length of the model itself and the encoding length of the data represented by the model (Barron et al., 1998).The principle is the formalization of Occam's razor principle and one of the most practical branches of Kolmogorov complexity (Nannen, 2010).A highly complex hypothesis may accurately describe all the data, but lose generality at the same time.However, too simple description will miss a lot of data features, MDL principle is the compromise of the above two cases, avoids overfitting or underfitting of the model. Ideally, we want to identify or design an algorithm that works best for all situations.However, both experimental results (Michie et al., 1994) and theoretical work (David, 1995) suggest that this is not possible.The choice of which algorithms to use depends on the dataset at hand, so a system that can provide recommendations for such choices would be very useful (Mitchell, 2003).By trying all the algorithms for this problem, we can narrow the algorithm recommendation problem down to a performance comparison problem.In practice, however, this is usually not feasible because there are too many algorithms to try, and some of them run slowly.This problem is exacerbated especially when dealing with large amounts of data, which often occurs in knowledge discovery in databases. Many algorithm selection methods are limited to selecting a single algorithm or a small group of algorithms (Abdulrahman, 2017), that are expected to perform well on a given problem (Kalousis and Theoharis, 1999;Pfahringer and Bensusan, 2000;Todorovski, 2003).Brazdil et al. believe that the algorithm recommendation problem is more similar to the ranking task in nature, which is similar to the common ranking task in information retrieval and recommendation systems (Brazdil and Costa, 2003).In these tasks, it is not known in advance how many alternatives the user will actually consider.If the user's preferred algorithm performs slightly less well than the one at the top of the ranking, the user can decide to stick with his favorite algorithm.If you have enough time and hardware conditions, you can try more algorithms.Since we do not know how many algorithms a user might actually want to choose, consider providing a ranking of all the algorithms.In 1994, Brazdil, Gama and Henery first used metalearning algorithm recommendation to deal with sorting tasks (Brazdil, 1994).Later Nakhaeizadeh and Schnabl (1997), and later Keller et al. (2000), and Brazdil and Soares (2000) also adopted similar methods.In 2011, RBC Prudencio, MCPD Souto and TB Ludermir applied the ordering meta-learning method to the time series and gene expression data clustering field (Prudêncio et al., 2011).In 2017, Finn et al. 
introduced the theory of meta-learning in the fast adaptation study of deep networks (Finn and Levine, 2017). The study of algorithm recommendation is the further improvement of the study of algorithm selection, and it is also the theoretical basis of the study of algorithm applicability in this paper. Medical data has different characteristics from other data.The theoretical framework for the applicability study of medical data mining algorithm proposed and constructed in this paper can provide more targeted empirical knowledge on algorithm selection for medical research compared with previous studies.The algorithm applicability knowledge base constructed in this paper solves the problem of lack of empirical knowledge of data mining algorithms in medical research, and provides theoretical guidance for users to choose suitable algorithms. Base dataset In the selection of datasets, this paper follows the principles of universality, openness and less intervention, and uses the machine learning database of University of California Irvine (UCI) as the source of the base dataset.The UCI database is a database used by the machine learning community for empirical analysis of machine learning algorithms, and it is a collection of data that covers domain theory data as well as data generated by data generators.Since inception in 1987 by David Aha and others, the UCI database has been used by students, teachers, and researchers around the world as the primary source of machine learning datasets.At present, the UCI database has reached more than 1,000 citations, making it one of the top 100 most cited in computer science.According to the dataset range studied in this paper, that is, open data sets aiming at classification that can be converted into structured data through simple or slightly complex operations, open datasets included in UCI database are selected.One hundred and thirty-eight independent datasets from 335 UCI datasets were included in the study. 
Data preprocessing The datasets in the UCI database come from various industries, and a considerable part of them are shared raw data.The data collection and storage software used by the sharers are not the same, so there are some differences in data formats.The quality of data is the basic guarantee of data analysis, and only high-quality data can obtain high-quality analysis results.Therefore, this paper conducted data preprocessing on 138 selected datasets in order to carry out characteristic quantization and subsequent algorithm applicability research.Since the purpose of this paper is to study the characteristics of universal medical datasets compared with general datasets in other fields, the principle of "only necessary preprocessing without affecting the basic characteristics of data" is adhered to in the data preprocessing stage.Specifically, that is to simulate the preliminary data preprocessing carried out by the researchers after obtaining the original data for the current research scheme.Data preprocessing in the study mainly includes the following aspects: Deficient data In the process of data acquisition, many reasons may cause the incompleteness of collected data.For datasets that lack a column name, define the column name to clarify the meaning of the attribute.Since medical data involves different individuals, and individual differences exist among patients, it is easy to introduce greater errors if the missing values are filled by mean, median, chain equation and other methods hastily.Therefore, data samples containing missing values are removed in this paper to ensure the integrity of each analysis sample.At the same time, in order to avoid a large reduction in the sample size of the dataset after excessive removal of missing values of a variable, this paper with a limit of 30%, removes attributes with missing values exceeding 30%.Because some attributes in the dataset have more missing values, if the samples with missing values are directly removed, the sample size of the dataset will be greatly reduced.Therefore, the threshold of 30% is set in this study.When the missing value ratio of an attribute is greater than this threshold, the attribute will be removed. Inconsistent data In the process of data recording and collection, there may be inconsistent presentation, spelling errors and other problems resulting in inconsistent data.In this paper, by comparing with the description of the dataset, the inconsistent data that can be clearly judged are normalized, the uncertain differences are retained and multi-party verification is carried out, and the sample data is removed if there is no confirmed information to reduce noise. Data integration Different data collection scenarios and storage media will cause the collected data to be dispersed in different data files, showing the characteristics of phased and distributed storage.In this case, the data of different data sources need to be associated and integrated through data integration operations, and stored in a unified data set. After data preprocessing, a total of 293 sub datasets of 138 independent datasets were included in this study. 
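The missing-value rules described above can be summarized in a short pandas sketch. It is illustrative only: the 30% threshold and listwise deletion follow the description in the text, while the column names and example values are invented.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, max_missing_ratio: float = 0.30) -> pd.DataFrame:
    """Apply the two missing-value rules described above:
    (1) drop attributes whose missing-value ratio exceeds 30%;
    (2) drop the remaining samples that still contain missing values."""
    kept = df.loc[:, df.isna().mean() <= max_missing_ratio]   # rule (1)
    return kept.dropna(axis=0).reset_index(drop=True)          # rule (2)

# Invented example with one heavily missing attribute.
raw = pd.DataFrame({
    "age":     [63, 54, None, 71, 48],
    "glucose": [5.2, None, 6.1, 7.0, 5.8],
    "biopsy":  [None, None, None, 1.0, None],   # 80% missing, so the column is dropped
    "label":   [1, 0, 0, 1, 0],
})
print(preprocess(raw))
```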
Dataset characteristic metadata By focusing on the analysis and comparison of the calculation indicators adopted by the two European Spirit projects -STATLOG and METAL, and combining the research purpose and needs of this study, this paper adopts 26 indicators to quantify the characteristics of the research datasets.These 26 quantitative indicators can be divided into three categories: simple indicators, statistical indicators and information theory indicators. Base algorithm selection Classification, as one of the most important techniques in data mining, has a wide applicable range, and many classification algorithms have been proposed so far.According to the learning characteristics of each algorithm, data mining classification algorithms can be divided into the following four categories: classification algorithm based on tree, classification algorithm based on neural network, classification algorithm based on Bayes, and classification algorithm based on statistics.In recent years, on the basis of statistical learning theory, support vector machine (SVM) have developed vigorously, showed unique advantages in solving small sample, nonlinear and high-dimensional pattern recognition problems, and received attention and promotion from scholars in multiple fields.In addition, rough set theory, fuzzy set theory, genetic algorithm and ensemble learning methods are introduced into the classification task. In the study, the following three selection criteria for alternative base algorithms are formulated: 1 High maturity in theory and practice; 2 Less user involvement in the design stage; 3 Strong family representation.According to the above three criteria, this paper filters many data mining algorithms for aiming at classification task.This paper selects five classification algorithms among the ten classic algorithms: k nearest neighbor (kNN) algorithm, decision tree C4.5 (C4.5) algorithm, support vector machine (SVM) algorithm, naive bayes (NB) algorithm, AdaBoost (AB) algorithm, and the increasingly popular -random forest (RF) algorithm, the representative of neural network algorithm -backpropagation network (BP), and logistic regression (LR), which is commonly used in medical research.The above 8 algorithms are used as the alternative base algorithm in this paper. Algorithm performance metadata In the process of algorithm applicability research, algorithm performance evaluation is an essential component.In the field of machine learning, the commonly used algorithms performance evaluation indexes include: accuracy rate, true positive rate, true negative rate, recall rate, average absolute error, Area under the ROC curve (AUC), Akaike information criterion (AIC), running time, interpretability, etc.For different data mining methods, there are specific evaluation indexes. 
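To illustrate the three kinds of dataset-characteristic indicators just described (simple, statistical, and information-theoretic), here is a small sketch that computes a few representative ones before turning to the evaluation criteria; the exact 26 indicators used in the study are not reproduced, and all names and the toy data are illustrative.

```python
import numpy as np
import pandas as pd
from scipy import stats

def dataset_characteristics(df: pd.DataFrame, target: str) -> dict:
    """Compute a few indicators of the three kinds described in the text
    (simple, statistical, information-theoretic); these are representative
    examples, not the study's full indicator set."""
    X = df.drop(columns=[target])
    num = X.select_dtypes(include="number")
    y = df[target]
    class_probs = y.value_counts(normalize=True)
    return {
        # simple indicators
        "n_samples": len(df),
        "n_attributes": X.shape[1],
        "n_classes": y.nunique(),
        # statistical indicators
        "mean_abs_skewness": num.skew().abs().mean(),
        "mean_kurtosis": num.kurtosis().mean(),
        # information-theoretic indicators
        "class_entropy": stats.entropy(class_probs, base=2),
    }

# Hypothetical usage on a toy dataset.
toy = pd.DataFrame({"a": np.random.randn(100), "b": np.random.rand(100),
                    "label": np.random.randint(0, 2, 100)})
print(dataset_characteristics(toy, target="label"))
```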
The evaluation of the classification methods is mainly based on the following five items: (1) Accuracy of prediction: the proportion of correct classifications in the sample data; (2) Running speed: the time of model construction and of classification using the model. Since the time required to generate the model accounts for most of the total time, the model construction time is mainly used as the measurement standard of the speed of the classification method in the experiment; (3) Robustness: the ability of the model to accurately predict data with noise or missing values; (4) Processable data volume: the ability to effectively construct a model in the face of a large amount of data, mainly referring to the ability to handle disk-resident data; (5) Interpretability: the level at which a model can be understood. In the field of medical research, sensitivity, specificity and accuracy are often used to evaluate predictive models constructed in a particular study. Sensitivity is the proportion of individuals with actual disease who are accurately judged to be true positive, that is, the true positive rate (recall rate) described above. Specificity is the proportion of individuals who are not actually sick who are accurately judged to be true negative, i.e., the true negative rate described above. By focusing on the analysis and comparison of the calculation indicators adopted by the two European ESPRIT projects, STATLOG and METAL, and combining the research purpose and needs of this study, this paper mainly evaluates each alternative base algorithm in three aspects: prediction accuracy, running speed and memory consumption. Prediction accuracy The accuracy (Acc) on the training set and the test set, as well as analogues of sensitivity and specificity, are used as the evaluation indexes for the prediction accuracy of each alternative base algorithm. In this paper, the analogues of sensitivity and specificity can be briefly described as the correct prediction rates of the classes with the most and the fewest samples in the target variable (the latter is referred to below as S_least). Running speed The modeling time of the 8 alternative base algorithms on each base dataset is monitored and collected as an evaluation indicator. Since each algorithm produces order-of-magnitude differences across datasets with different characteristics, a logarithmic transformation of the modeling time of each algorithm is carried out in order to allow comparative analysis. Memory consumption The memory occupation of the prediction model built by the 8 alternative base algorithms on each base dataset is monitored and collected as an evaluation indicator. Considering that each algorithm produces order-of-magnitude differences across datasets with different characteristics, a logarithmic transformation of the memory usage of each algorithm is carried out for comparative analysis.
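A minimal sketch of this measurement loop is shown below. The scikit-learn estimators stand in for the eight base algorithms (DecisionTreeClassifier with the entropy criterion is only an approximation of C4.5), and the use of tracemalloc for memory, a single train/test split, and the example dataset are assumptions made for illustration, not the study's actual instrumentation.

```python
import time, tracemalloc
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

# Stand-ins for the eight base algorithms (kNN, C4.5, SVM, NB, AB, RF, BP, LR).
models = {
    "kNN": KNeighborsClassifier(),
    "C4.5": DecisionTreeClassifier(criterion="entropy"),
    "SVM": SVC(),
    "NB": GaussianNB(),
    "AB": AdaBoostClassifier(),
    "RF": RandomForestClassifier(),
    "BP": MLPClassifier(max_iter=1000),
    "LR": LogisticRegression(max_iter=1000),
}

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in models.items():
    tracemalloc.start()
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)                      # modeling time is the timed part
    log_time = np.log10(time.perf_counter() - t0)
    _, peak = tracemalloc.get_traced_memory()  # peak memory during fitting (bytes)
    tracemalloc.stop()
    acc = model.score(X_te, y_te)
    print(f"{name:4s} acc={acc:.3f} log10(time)={log_time:+.2f} log10(mem)={np.log10(peak):.2f}")
```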
For different research objectives and programs, the focus of researchers may differ. For the diagnosis of a rare disease, researchers are more concerned about the identification and screening rate of the minority group of people with the disease, that is, the S_least value described above needs to meet an acceptable threshold. For the diagnosis or prediction of the development of some emergency conditions, such as judging whether a patient with chest pain is having an acute myocardial infarction or is a patient in need of timely intervention in the emergency room, the prediction model to be used has high requirements on prediction accuracy and time, that is, the algorithm performance evaluation indicators mentioned above need to be considered comprehensively. Algorithm applicability evaluation Because several algorithms reach the optimal level on some datasets at the same time, the optimal algorithm result is a combination of several algorithms. The number of these combinations can be reduced by combining the prediction accuracy evaluation with the runtime and memory usage, respectively. However, because differences in dataset characteristics affect the running time and memory usage, this method has some defects. Considering the ratio between the number of datasets included in this paper and the number of combined results, and in order to ensure the accuracy and generalization of the algorithm applicability knowledge, we decided to discretize the ranking of the prediction accuracy of each algorithm on each dataset; that is, the top three algorithms are labeled as recommended (Y), the fourth and fifth algorithms are labeled as medium (M), and algorithms ranking sixth through eighth, as well as modeling failures, are marked as not recommended (No). Because only 34 discrete variable datasets were included in the study, the amount of data is too limited for learning applicability models on them. Therefore, this paper only conducts modeling and learning on mixed variable datasets and continuous variable datasets to evaluate the algorithm applicability on datasets with different characteristics. Preliminary statistical results Among the 293 UCI data subsets included in the study, modeling failures occurred for all eight algorithms. The main reason for LR algorithm modeling failure is that the dimension is too high or the number of weight coefficients produced by the discrete variables exceeds the maximum threshold allowed by the algorithm. The main reason for AB algorithm modeling failure is memory overflow, that is, the memory required for modeling exceeds the upper limit allocated by the system. The main reasons for RF algorithm modeling failure are that discrete variables include too many categories (exceeding the upper limit) and memory overflow. The main reason for BP algorithm modeling failure is basically the same as for LR. The modeling success rate of the eight algorithms is shown in Table 1. As can be seen from Table 1, the BP algorithm modeling failure rate is relatively high, at 22.87%. Preliminary analysis suggests that the number of weight coefficients exceeded the maximum threshold allowed by the algorithm because the discrete variables contained too many categories. Further analysis and discussion will be conducted in accordance with the specific characteristics of the datasets.
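The Y/M/No labelling scheme just described, and the downstream learning of applicability knowledge from dataset characteristics, can be sketched as follows; the accuracy table, the meta-features, and the use of scikit-learn's DecisionTreeClassifier in place of C4.5 are invented for illustration.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def label_ranking(acc_row: pd.Series) -> pd.Series:
    """Discretize per-dataset accuracy ranks: top 3 -> 'Y', ranks 4-5 -> 'M',
    ranks 6-8 and modeling failures (missing accuracy) -> 'No'."""
    ranks = acc_row.rank(ascending=False, method="first")
    labels = pd.Series("No", index=acc_row.index)
    labels[ranks <= 5] = "M"
    labels[ranks <= 3] = "Y"
    labels[acc_row.isna()] = "No"
    return labels

# Hypothetical accuracy table: rows are datasets, columns are the 8 algorithms.
acc = pd.DataFrame(
    [[0.91, 0.88, 0.90, 0.84, 0.86, 0.93, None, 0.87],
     [0.78, 0.82, 0.75, 0.80, 0.73, 0.81, 0.70, 0.79]],
    columns=["LR", "C4.5", "SVM", "AB", "kNN", "NB", "RF", "BP"])
labels = acc.apply(label_ranking, axis=1)

# Meta-model for one algorithm: dataset characteristics -> recommend or not.
meta_X = pd.DataFrame({"n_samples": [569, 150], "n_attributes": [30, 4],
                       "class_entropy": [0.95, 1.58]})      # toy meta-features
meta_y = (labels["C4.5"] == "Y").astype(int)
meta_model = DecisionTreeClassifier(max_depth=3).fit(meta_X, meta_y)
```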
Since the learning and modeling times of the eight algorithms on different datasets differ by orders of magnitude, the learning and modeling time values after logarithmic transformation are compared in this paper, and the scatter diagram is shown in Figure 1. The number on the X axis corresponds to the serial number of the research dataset. As can be seen from Figure 1, the same algorithm has different learning and modeling times on datasets with different characteristics. The overall trend shows that the modeling time of the NB algorithm is the shortest on most datasets, while the modeling time of the ensemble method AB is several orders of magnitude higher. Dataset characteristics that affect modeling time will be further discussed and analyzed later. In view of the fact that the memory usage of the eight algorithms during learning and modeling on different datasets also differs by orders of magnitude, this paper compares the memory occupation values after logarithmic transformation, as shown in Figure 2. As can be seen from Figure 2, the memory occupied by the same algorithm differs to some extent when learning and modeling on datasets with different characteristics. The overall trend shows that on most datasets, the NB algorithm requires the smallest amount of memory for modeling, followed by the C4.5 algorithm, while the two ensemble methods, RF and AB, have memory consumption that is several orders of magnitude higher. Dataset characteristics that affect memory usage will be further analyzed and summarized in subsequent studies. Because the number of discrete variable datasets is small, modeling analysis is not used for them; instead, a chi-square test of the R×C contingency table is carried out. The recommendations of the 8 algorithms on the datasets in different fields were sorted into contingency tables, respectively. Taking the LR algorithm as an example, as shown in Table 2, the differences between groups were compared by the χ² values calculated according to formula (3):

χ² = Σ_i Σ_j (A_ij − n_Ri · n_Cj / n)² / (n_Ri · n_Cj / n),    (3)

where A_ij is the actual frequency of each cell in the contingency table, n_Ri and n_Cj are the combined counts of row i and column j corresponding to A_ij, and n is the total number of observations. Similarly, contingency table analysis was performed on the other 7 base algorithms to explore the applicability of each algorithm on the datasets in the biomedical field. The contingency table analysis results on whether there are differences across data domains for the LR, C4.5, SVM, AB, kNN, NB, RF and BP base algorithms are shown in Table 3. As can be seen from Table 3, there are differences in the recommendation of the NB algorithm on datasets in the medical, biological and general fields. By referring to the contingency table of the NB algorithm, it can be found that the recommendation rate of the NB algorithm on datasets in the medical and biological fields is relatively high, at 60.0% and 50.0%, respectively, while the recommendation rate on datasets in the general field is only 4.8%. Predictive accuracy modeling analysis Through the above exploratory analysis, we have gained a preliminary understanding of the algorithm applicability. In order to further discover the hidden feature knowledge in the algorithm applicability, this paper uses stepwise regression and the decision tree C4.5 algorithm to build models, so as to find the features and rules that need to be further analyzed and discussed following the previous exploratory statistical analysis.
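For the discrete-variable datasets, the R×C chi-square analysis described above is easy to reproduce; the sketch below uses an invented contingency table and checks that scipy's chi2_contingency agrees with the Pearson statistic written out in formula (3).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table for one algorithm:
# rows = recommendation label (Y, M, No), columns = dataset domain
# (medical, biological, general). Counts are invented for illustration.
table = np.array([
    [12,  5,  2],   # Y
    [ 6,  4,  9],   # M
    [ 2,  1, 10],   # No
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# The same statistic computed directly from formula (3).
n = table.sum()
row_totals = table.sum(axis=1, keepdims=True)
col_totals = table.sum(axis=0, keepdims=True)
expected_manual = row_totals @ col_totals / n
chi2_manual = ((table - expected_manual) ** 2 / expected_manual).sum()
print(f"manual chi2 = {chi2_manual:.2f}")   # matches chi2_contingency for R*C > 2x2
```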
Mixed variable datasets With "whether to recommend" as the target variable and 26 quantization characteristics of datasets as attribute variables, a stepwise regression model was constructed to obtain dataset characteristics related to the applicability of LR, C4.5, SVM, AB, kNN, NB, RF and BP 8 algorithms, as shown in Table 4. In Table 4, "√" indicates that there is a statistically significant correlation between an algorithm and a dataset characteristic after stepwise regression screening. With "whether to recommend" as the target variable and the 26 quantization characteristics of datasets as attribute variables, a decision tree model was constructed using the C4.5 algorithm.The applicability judgment decision trees of 8 algorithms on the obtained mixed variable data set are built.According to the decision trees, the applicability of 8 algorithms on mixed variable datasets can be judged and predicted. Continuous variable datasets In Table 5, after stepwise regression screening, there is a statistically significant correlation between an algorithm and the characteristics of a dataset, which is represented by "√". With "recommend or not" as the target variable and 26 data sets quantization characteristics as attribute variables, a decision tree model is constructed using C4.5 algorithm.The applicability judgment decision tree of 8 algorithms obtained on continuous variable datasets are built.According to these decision trees, the applicability of 8 algorithms on continuous variable datasets can be judged and predicted.Through the validation on the training set and test set of the algorithm applicability metadata, both the decision tree judgment model and the stepwise regression judgment model reached the accuracy of more than 75%. Running time modeling analysis Since the learning and modeling time of the eight algorithms on different datasets presents an order of magnitude difference, this paper calculates the running time logarithmic value, and then construct a model to perform magnitude prediction. Associated the running time of the algorithm with the dataset characteristics, and analyzed the integrated metadata set.Taking "log10(Time)" as the target variable and 26 quantized dataset characteristics as the attribute variables.Firstly, the correlation between the target variable and the attribute variable is calculated, and the attribute variable with the absolute value of the correlation coefficient greater than 0.3 is taken as the correlation variable and included in the next modeling analysis.The model was constructed by stepwise regression, and the running time order prediction formulas of LR, C4.5, SVM, AB, kNN, NB, RF and BP 8 algorithms on the three categories datasets were obtained respectively, as shown in formulas ( 4)-( 27). Mixed variable datasets The running time magnitude prediction formula of LR algorithm on mixed variable datasets is shown in formula (4). The running time magnitude prediction formula of C4.5 algorithm on mixed variable datasets is shown in formula (5).The running time magnitude prediction formula of SVM algorithm on mixed variable datasets is shown in formula (6). The running time magnitude prediction formula of AB algorithm on mixed variable datasets is shown in formula (7). The running time magnitude prediction formula of kNN algorithm on mixed variable datasets is shown in formula (8). The running time magnitude prediction formula of NB algorithm on mixed variable datasets is shown in formula (9). 
The running time magnitude prediction formula of RF algorithm on mixed variable datasets is shown in formula (10). Discrete variable datasets The running time magnitude prediction formula of LR algorithm on discrete variable datasets is shown in formula (12). The running time magnitude prediction formula of C4.5 algorithm on discrete variable datasets is shown in formula (13). The running time magnitude prediction formula of SVM algorithm on discrete variable datasets is shown in formula ( 14). The running time magnitude prediction formula of AB algorithm on discrete variable datasets is shown in formula (15). The running time magnitude prediction formula of kNN algorithm on discrete variable datasets is shown in formula (16).The running time magnitude prediction formula of RF algorithm on discrete variable datasets is shown in formula (18).The running time magnitude prediction formula of BP algorithm on discrete variable datasets is shown in formula (19). Continuous variable datasets The running time magnitude prediction formula of LR algorithm on continuous variable datasets is shown in formula (20). The running time magnitude prediction formula of C4.5 algorithm on continuous variable datasets is shown in formula (21).The running time magnitude prediction formula of SVM algorithm on continuous variable datasets is shown in formula (22).The running time magnitude prediction formula of AB algorithm on continuous variable datasets is shown in formula (23). The running time magnitude prediction formula of NB algorithm on continuous variable datasets is shown in formula (25). The running time magnitude prediction formula of RF algorithm on continuous variable datasets is shown in formula (26). Memory requirement modeling analysis The memory usage of algorithms during running is related to the inherent characteristics of the dataset, and there will be differences of orders of magnitude among each algorithm on the same dataset.Therefore, logarithmic operation is carried out on the memory occupation of the learning and modeling process of each algorithm for comparative analysis and prediction. The memory usage of the algorithm is associated with the dataset characteristics, and the integrated metadata set is learned and analyzed, with "log10(Memory)" as the target variable and 26 quantization characteristics of the dataset as the attribute variable.Firstly, the correlation between the target variable and the attribute variable is calculated, and the attribute variable with the absolute value of the correlation coefficient greater than 0.3 is taken as the correlation variable and included in the next modeling analysis.The model was constructed by stepwise regression, and the prediction formulas of the memory usage level of LR, C4.5, SVM, AB, kNN, NB, RF and BP 8 algorithms on the three categories datasets were obtained, as shown in formulas ( 28)-(51). Mixed variable datasets The memory usage level prediction formula of LR algorithm on mixed variable datasets is shown in formula ( 28 The memory usage level prediction formula of AB algorithm on discrete variable datasets is shown in formula (39). The memory usage level prediction formula of kNN algorithm on discrete variable datasets is shown in formula (40). The memory usage level prediction formula of NB algorithm on discrete variable datasets is shown in formula (41). 
The memory usage level prediction formula of the RF algorithm on discrete variable datasets is shown in formula (42). The memory usage level prediction formula of the BP algorithm on discrete variable datasets is shown in formula (43). Continuous variable datasets The memory usage level prediction formula of the LR algorithm on continuous variable datasets is shown in formula (44). The memory usage level prediction formula of the C4.5 algorithm on continuous variable datasets is shown in formula (45). The memory usage level prediction formula of the SVM algorithm on continuous variable datasets is shown in formula (46). The memory usage level prediction formula of the AB algorithm on continuous variable datasets is shown in formula (47). The memory usage level prediction formula of the kNN algorithm on continuous variable datasets is shown in formula (48). The memory usage level prediction formula of the NB algorithm on continuous variable datasets is shown in formula (49). The memory usage level prediction formula of the RF algorithm on continuous variable datasets is shown in formula (50). The memory usage level prediction formula of the BP algorithm on continuous variable datasets is shown in formula (51). From formulas (28)-(51), it can be found that the sample size N is an important factor affecting the modeling memory of each base algorithm on the three types of datasets. In addition, the memory usage of each base algorithm on the mixed variable datasets is also related to R_least and R_largest. The memory usage of each base algorithm on discrete variable datasets is mainly related to N_class. The memory usage of each base algorithm on continuous variable datasets is related to R_least and Geomean. Conclusion The validity and feasibility of the algorithm applicability knowledge base constructed in this paper have been verified theoretically, thus realizing the construction of an algorithm applicability knowledge base for classification-oriented datasets. Compared with other studies, this paper focuses the problem space of algorithm applicability on the medical field for the first time, and it is found that the C4.5 algorithm has outstanding performance on most medical datasets, ranking at the forefront in prediction accuracy, comparable to the ensemble methods, while its modeling running time and memory occupation are of a relatively smaller order of magnitude. As for the applicability of data mining algorithms, this paper has carried out a relatively in-depth analysis by introducing the algorithm selection concept, algorithm recommendation and meta-learning theory, and is expected to provide rule knowledge with guiding value for medical data mining practice. However, due to the limitations of theory and practice, this paper still has some shortcomings and needs further research. All kinds of specific problems in the biomedical field can be abstracted into classification, numerical prediction, clustering, association rules and time series analysis in data mining, and 70% of problems in real life can be transformed into classification problems. In this paper, the applicability of the algorithms is studied only for classification tasks, and subsequent studies can expand the breadth of mining tasks, such as continuing to study the applicability of algorithms for numerical prediction tasks, the applicability of various deep neural networks in medical image analysis, the influence of data preprocessing methods on modeling results, etc.
Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
FIGURE: Decision tree model of C4.5 algorithm applicability (continuous variable datasets).
FIGURE 1: Modeling time of 8 algorithms on different datasets.
FIGURE 2: Modeling memory usage of 8 algorithms on different datasets.
TABLE 1: Summary of modeling completed by 8 algorithms.
TABLE 4: Summary of dataset characteristics related to the applicability of 8 algorithms (mixed variable datasets).
TABLE 5: Summary of dataset characteristics related to the applicability of 8 algorithms (continuous variable datasets).
9,062.2
2024-01-31T00:00:00.000
[ "Computer Science", "Medicine" ]
Nonisothermal Crystallization of Surface-Treated Alumina and Aluminum Nitride-Filled Polylactic Acid Hybrid Composites This work investigates the nonisothermal crystallization and melting behavior of polylactic acid (PLA) filled with treated and untreated alumina and nano-aluminum nitride hybrid fillers. Analysis by attenuated total reflectance Fourier transform infrared spectroscopy revealed that the treated fillers and the PLA matrix developed a good interaction. The crystallization and melting behaviors of the PLA hybrid composites were investigated using differential scanning calorimetry; the results showed that the degree of crystallinity increased with the addition of the hybrid fillers. Unlike the untreated PLA composites, the complete crystallization of the treated PLA hybrid composites hindered cold crystallization during the second heating cycle. The crystallization kinetics studied using the Avrami model indicated that the crystallization rate of PLA was affected by the inclusion of filler particles. X-ray diffraction analysis confirmed crystal formation with the incorporation of filler particles. The inclusion of nano-aluminum nitride (AlN) and the increase in crystallinity led to an improvement of the storage modulus. Introduction The recent increase in the production of biodegradable plastic materials has been driven by environmental concerns about the large-scale production of nonbiodegradable and nonrecyclable petroleum-based polymers. Polylactic acid (PLA) is a widely utilized biodegradable polymer because of its good mechanical properties, high stiffness, high transparency, excellent printability, and good processability [1,2]. Currently, PLA is mainly produced through the polymerization of lactides from renewable sources such as potato, corn, and bagasse. PLA can be used industrially and academically in biomedical, food packaging, and electronics research fields [3]. However, the application of PLA-based materials is limited to the aforementioned specific sectors because of its unfavorable properties, which include slow crystallization, high gas diffusion, and brittleness [4]. The crystallization of PLA has been improved through the fabrication of composites incorporating carbon-based materials [2,5], clays [6,7], and ceramic fillers [8][9][10]. PLA is a semicrystalline polymer, the morphological, mechanical, and physical properties of which are controlled by its crystallization behavior [11]. Because of the chiral nature of lactic acid, the lactides, the building blocks of PLA, exist in l-lactide and d-lactide forms. Depending on the amount of l-lactide and d,l-lactide components, PLA can crystallize in three crystal forms: the α, β, and γ forms [4,12]. Among these crystal forms, the α-phase, which occurs during melt or cold crystallization, is the most stable. The brittle nature of PLA is a consequence of its low glass-transition temperature, which is another factor that limits its application [13]. Hence, many researchers have suggested that increasing the extent and rate of crystallization would change the microstructure of the matrix and expand its practical application prospects. Park et al. [2] fabricated a PLA/carbon nanotube (CNT) composite with enhanced mechanical properties stemming from a substantial improvement in the crystallization kinetics as a result of the addition of CNTs. Li et al.
[14] confirmed that improvements in the dynamic mechanical properties of a PLA composite filled with microcrystalline cellulose (MCC) at a very low MCC content are attributable to an increase in the crystallinity and crystallization rate. In our previous work [15], we suggested that alumina affects the crystallization behavior of polybutylene succinate (PBS). However, the effect of alumina on the crystallization of the PBS matrix has not been discussed in depth. Numerous studies have confirmed the effect of alumina on the melting and crystallization properties of polymer matrices. For example, Kuo et al. [16] concluded that the inclusion of nanosized alumina restricted the chain mobility of poly(ether ether ketone) (PEEK), which played a dominant role in increasing its crystallization time. In addition, Mosavian et al. [17] used the Avrami model to study the nonisothermal crystallization kinetics of high-density polyethylene (HDPE)/alumina composites at different cooling rates. Their results revealed that the crystallization peak broadened and shifted to lower temperatures with increasing cooling rate. In the present study, we synthesized a PLA composite reinforced with alumina and nanosized aluminum nitride (AlN) hybrid fillers via a solution casting process. In our previous study, the inclusion of nanosized AlN improved the storage modulus of the PBS nanocomposite [18]. Consequently, we expected the combined effect of alumina and AlN to improve both the crystallization behavior and the stiffness of the resultant PLA hybrid composite. The surface properties of the hydrophobic PLA and the relatively hydrophilic alumina/AlN hybrid fillers are dissimilar. Consequently, the fillers were surface treated before being mixed with the PLA to improve their interaction with the matrix [19]. The composite was then characterized using differential scanning calorimetry (DSC), X-ray diffraction (XRD), and dynamic mechanical analysis (DMA). Filler Surface Treatment Prior to the fabrication of the PLA hybrid composite, the filler materials were surface treated. The alumina particles were surface functionalized with poly(maleic acid) using a modified version of a procedure reported elsewhere [20]. First, 0.04 M maleic acid (MA) solution was prepared; alumina (Sigma-Aldrich, Seoul, South Korea) particles were subsequently added to this solution, and the resultant mixture was stirred at room temperature for 4 days. The alumina particles were then separated from the solution and air dried for 1 day. The MA molecules that adsorbed onto the alumina particles (Al-MA) were allowed to polymerize at 80 • C in the presence of 1-octadecene monomer as a solvent in a three-necked flask equipped with a nitrogen gas inlet, condenser, and thermometer. After the polymerization temperature was reached, azobisisobutyronitrile (AIBN) initiator was added to the mixture and the system was maintained at 80 • C for 3 h under a nitrogen atmosphere. The surface-functionalized alumina particles (Al-poly(MA)) were separated from the mixture and air dried for 24 h. In addition, AlN (Sigma-Aldrich, Seoul, South Korea) nanoparticles were also stirred and sonicated with dimethylformamide (DMF). The AlN solution was centrifuged for 30 min at 10,000 rpm to separate the lighter particles with a relatively similar size. 
PLA Hybrid Composite Fabrication The PLA (PLLA homopolymer, Jae Youn Chemical Co., Ltd., Gangwon-do, South Korea) pellets were oven dried at 50 °C for 24 h to eliminate surface moisture, which can otherwise lead to void formation. The PLA pellets were placed in chloroform solvent and stirred for 4 h at room temperature. In a separate flask, the proper amounts of alumina (18 wt %, 28 wt %, 38 wt %, or 48 wt %) and AlN (2 wt %) were mixed in chloroform and sonicated for 30 min. Subsequently, the hybrid filler mixture was transferred into the PLA solution and the resultant mixture was allowed to mix for 90 min. The solution was then poured into a Petri dish when the solution became viscous, indicating the formation of the PLA hybrid composite. The Petri dish was maintained at room temperature while covered with a lid (with a small opening) so as to decrease the solvent evaporation rate. The dried PLA hybrid composite films (Table 1) were subsequently collected for further analysis. Neat PLA was also synthesized in the same manner and used as a control. Characterization The raw and treated alumina particles were characterized using Fourier transform infrared spectroscopy (FT-IR, Nicolet iS5, Thermo Fisher Scientific, Seoul, Korea) to confirm whether the poly(maleic acid-1-octadecene) was effectively grafted on the alumina surface. Attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR, Nicolet 6700, Thermo Scientific, Seoul, Korea) analysis was conducted to study the surface of the PLA composites. The analysis was conducted in a wide frequency range from 4000 to 400 cm−1 at a resolution of 4.0 cm−1. The melting and crystallization behavior of the PLA hybrid composites was investigated by DSC (KEP Tech., Mougins, France). The investigation was carried out in a heating-cooling-heating cycle under a nitrogen atmosphere. The specimens were heated from 30 to 220 °C at a rate of 10 °C/min and then maintained at 220 °C for 1 min to remove the thermal history. The samples were subsequently cooled to 30 °C at various cooling rates (5, 10, and 20 °C/min). Finally, the samples were reheated to 220 °C at the heating rate corresponding to the previous cooling rate. The percentage crystallinity (Xc) of the PLA was calculated from the second heating cycle using the following equation: Xc (%) = (ΔHm − ΔHcc) / (wPLA · ΔH0m) × 100, where Xc is the degree of crystallinity of PLA; ΔH0m is the enthalpy of fusion of 100% crystalline PLA (93 J/g) [21]; ΔHcc and ΔHm are the enthalpies of cold crystallization and melting, respectively; and wPLA is the weight fraction of PLA in the hybrid composites. The XRD patterns of the alumina particles and PLA composites were collected using an X-ray diffractometer (XRD, New D8 Advance, Bruker AXS) equipped with a Cu-Kα radiation source. The XRD patterns were collected over the scanning-angle range 5° ≤ 2θ ≤ 80°. The storage modulus of the PLA hybrid composite films was measured using a dynamic mechanical analyzer (Triton Tech., UK). The analysis was performed in the temperature range from −50 to 165 °C at a frequency of 1 or 10 Hz. Surface Characterization The surfaces of the alumina particles, the neat PLA, and the PLA composites were analyzed by FTIR. The corresponding FTIR spectral curves are shown in Figure 1. Some peaks disappeared and new ones appeared after the particles were surface treated. 
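As a brief aside before the surface characterization results, the crystallinity equation given in the Characterization subsection can be evaluated with a few lines of Python; the numerical input values below are purely illustrative and are not taken from Table 2.

```python
def degree_of_crystallinity(dH_m, dH_cc, w_pla, dH_m0=93.0):
    """Degree of crystallinity (%) of the PLA fraction from DSC data.

    dH_m  : melting enthalpy from the second heating cycle (J/g)
    dH_cc : cold-crystallization enthalpy in the same cycle (J/g)
    w_pla : weight fraction of PLA in the hybrid composite (0-1)
    dH_m0 : enthalpy of fusion of 100% crystalline PLA, 93 J/g [21]
    """
    return (dH_m - dH_cc) / (w_pla * dH_m0) * 100.0

# Illustrative example: a composite with 50 wt % PLA,
# dH_m = 20 J/g and dH_cc = 2 J/g gives Xc of roughly 39 %.
print(f"Xc = {degree_of_crystallinity(20.0, 2.0, 0.50):.1f} %")
```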
The sharp peaks at 2825 and 2927 cm −1 in the FTIR spectrum of the treated alumina are assigned to the stretching vibrations of -CH 3 and -CH 2 functional groups, respectively [22]. In addition, a C-O peak that appeared because of the interaction of surface oxygen and atmospheric carbon disappeared after the particles were treated. In general, the surface of the alumina was well functionalized. ATR-FTIR analysis is sensitive toward organic functional groups and is helpful in eliminating the effects of surface moisture, which leads to the appearance of unnecessary peaks in the spectra [11]. In the case of the PLA spectrum (Figure 1b), main peaks at 1750, 1180, and 1084 cm −1 are attributed to C=O, C-O-C, and C-O stretching vibrations, respectively [23]. The incorporation of alumina leads to the disappearance of two bending modes at 669 and 750 cm −1 , revealing the effect of fillers on the chemical structure of the matrix, which is attributed to the amorphous phase of PLA [24]. This result could mean that the PLA hybrid composites have higher crystallinity when compared to the neat PLA. Crystallization Behavior The melting and crystallization behavior of the neat and hybrid composites were analyzed using DSC at three different heating and cooling rates. The crystallization behavior of the hybrid composite samples was studied using the first cooling cycle recorded at various cooling rates after the thermal history had been removed via the first heating. The corresponding crystallization curves at cooling rates of 5, 10, and 20°C/min are shown in Figure 2. From Figure 2a-c, the crystallization curve is broad and has a higher peak height at a lower cooling rate, indicating a decrease in the enthalpy of crystallization (ΔHc) with increasing cooling rate. This result confirms that, at lower cooling rates, the samples have sufficient time to form crystals, whereas increasing the cooling rate forces the crystallization process to occur faster, without complete crystal formation. However, the crystallization peak (Tc) of the composites synthesized with treated filler exhibited a sharper peak and a higher enthalpy of crystallization compared with their equivalent composites with untreated filler loadings (Figure 2d-f). This observation confirms that the incorporation of treated filler led to the easier formation of crystals in T-PLA20 and T-PLA40 at the early stage of crystallization. 
To further explain the aforementioned crystallization mechanism, the relative crystallinity (Xt) as a function of temperature (T) was determined using the integral method (Equation (2)): Xt = [∫ from T0 to T of (dHc/dT) dT] / [∫ from T0 to T∞ of (dHc/dT) dT], where T0 and T∞ are the onset and end temperatures of crystallization and dHc/dT is the heat flow released during crystallization. The relative crystallinity curve of the neat PLA and T-PLA50 plotted as a function of the crystallization temperature is shown in Figure 3 and Figure S1. All of the curves exhibit a sigmoidal shape with higher early crystallization and slower final crystallization. At a cooling rate of 5 °C/min, the crystallization started (onset) at a higher temperature and completed (offset) at a lower temperature compared with the crystallization processes at the other cooling rates. These results clarify the appearance of a broader crystallization curve (Figure 2a-c) at lower cooling rates. However, the crystal formation temperature range in T-PLA50 was similar irrespective of the cooling rate, confirming that the interaction of the treated hybrid fillers and the PLA matrix influenced the crystallization rate and mechanism. In general, we concluded that the inclusion of treated hybrid fillers affected the crystallization process of the PLA composites at different cooling rates. Melting and Cold-Crystallization Properties The melting and cold-crystallization points were recorded from the second DSC heating cycle. The first heating cycle was performed at the same 10 °C/min heating rate for all of the samples to control the effect of the heating cycle on the subsequent steps. The cooling and second heating were then carried out at the selected rates. The second heating cycle thermograms for the neat PLA and its hybrid composites are shown in Figure 4. 
The melting temperature did not substantially change for any of the specimen samples with the inclusion of different filler loadings of hybrid fillers (Figure 4a-c), filler treatments (Figure 4d-f), or heating rates. For PLA composites filled with untreated hybrid fillers, the increase in the heating rate led to the appearance of a cold-crystallization curve accompanied by an increase in ΔHcc. The disappearance of cold-crystallization peaks (Tcc) at lower heating rates was due to the slow and complete crystallization process prior to the heating cycle. At lower cooling rates, the samples have sufficient time to crystallize (Figure 2a-c), which inhibits the formation of a cold-crystallization peak. On the contrary, at a heating rate of 20 °C/min, the cold-crystallization peak becomes broader with increasing ΔHcc. In particular, the Tcc of the neat PLA heated at 20 °C/min shifts to higher temperatures and becomes broad, with the offset cold-crystallization temperature continuing until the onset melting temperature. This scenario occurred mainly due to the previous incomplete crystallization (Figure 2c) related to the fast cooling rate. The increase in the enthalpy of cold crystallization with increasing heating rate leads to a decrease in the degree of crystallinity (Xc) for each sample with the same filler content. In addition, the hybrid composites filled with untreated fillers show a decrease in the degree of crystallinity when the heating rate is increased from 5 to 20 °C/min (Table 2). Table 2. Melting and crystallization properties of the neat PLA and its hybrid composites. 
Table 2 columns: Sample, ψ (°C/min), Tc (°C), Tcc (°C), ΔHc (J/g), ΔHm (J/g), Xc (%); entries begin with the neat PLA. The effect of filler treatment on the cold crystallization and melting behavior of the PLA composites was similarly investigated (Figure 4d-f). Because of the slower crystallization rate of the composite with treated filler particles, the cold-crystallization temperature decreased compared with that of the PLA composites with untreated fillers. The increase in the heating/cooling rate does not induce a clear change in the cold-crystallization temperature of the treated samples. At 20 °C/min, the cold-crystallization temperature for both T-PLA20 and T-PLA40 was lower than that of the corresponding untreated PLA composite samples. Moreover, the ΔHcc decreased and the ΔHm increased, which is associated with the restriction of polymer chain mobility due to the interaction of the treated alumina and the PLA matrix [16]. As a result, the degree of crystallinity for the given composites filled with treated filler materials did not substantially change with increasing heating rate. In general, samples that crystallize at a lower cooling rate have sufficient time to undergo complete crystallization. Hence, either no cold-crystallization peak or only a small cold-crystallization peak would be observed when the sample is heated. By contrast, if the sample is cooled at a high cooling rate, it would either remain uncrystallized or undergo partial crystallization, leading to the appearance of a cold-crystallization peak when the sample is heated. Nonisothermal Crystallization Kinetics The nonisothermal crystallization kinetics of the PLA hybrid composites were estimated using the Avrami model (Equation (3a)) [25,26], 1 − Xt = exp(−k·t^n) (3a), where n is the Avrami exponent and k is the rate constant. Although this model has usually been applied to isothermal crystallization processes, it can also be used to characterize nonisothermal kinetics. To analyze the crystallization kinetics, the crystallization temperature was converted to time (Equation (3b)), t = (T0 − T)/ψ (3b), where T0 is the onset crystallization temperature and ψ is the cooling rate. The relative crystallinity was also plotted as a function of time to investigate the crystal formation at different stages (Figure S2). The results show that the crystallization process occurred much faster with an increase in the cooling rate. This result further reiterates that the slow crystallization processes that occurred at lower cooling rates played a substantial role in the completion of crystal formation. The slope and the intercept of the linearized curve of log[−ln(1 − Xt)] vs. log(t) provide the values of n and k, respectively. The Avrami plots at different cooling rates are shown in Figure 5 and Figure S3, and the slope and intercept of the linearized curve are tabulated in Table 3. The value of n varies with the crystallization mechanism and growth geometry, whereas the value of k is associated with the crystallization rate [27]. Irrespective of the filler loading or filler treatment, the PLA hybrid composites did not exhibit a sufficiently large value of n and did not show a uniform change. However, the composites with higher filler loadings had relatively higher n values compared with the neat PLA, revealing that the interfacial interaction of the alumina with the PLA matrix complicated the crystallization mechanism and growth geometry. 
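As a computational companion to this kinetics analysis, the short Python sketch below obtains the relative crystallinity by cumulative integration of a DSC cooling exotherm, converts temperature to time with the cooling rate, and extracts n and k from the linearized Avrami plot; the onset-temperature time conversion and the Jeziorny-type correction of k by the cooling rate are standard choices assumed here, and all numerical details (sampling, integration limits) are illustrative rather than taken from the paper.

```python
import numpy as np

def avrami_fit(T, heat_flow, cooling_rate):
    """Avrami analysis of a nonisothermal DSC crystallization exotherm.

    T            : temperatures sampled during cooling (degC), decreasing order
    heat_flow    : baseline-corrected exothermic heat flow at each T (a.u.)
    cooling_rate : psi, cooling rate (degC/min)
    Returns (n, k, k_c): Avrami exponent, rate constant, corrected rate constant.
    """
    T = np.asarray(T, float)
    heat_flow = np.asarray(heat_flow, float)

    # Relative crystallinity X_t: cumulative fraction of the released heat
    dT = np.abs(np.diff(T, prepend=T[0]))
    cum_heat = np.cumsum(heat_flow * dT)
    X_t = cum_heat / cum_heat[-1]

    # Convert temperature to time: t = (T_onset - T) / psi
    t = (T[0] - T) / cooling_rate

    # Linearized Avrami plot: log(-ln(1 - X_t)) vs log(t), fitted over 3-97 %
    mask = (X_t > 0.03) & (X_t < 0.97) & (t > 0)
    y = np.log10(-np.log(1.0 - X_t[mask]))
    x = np.log10(t[mask])
    n, log_k = np.polyfit(x, y, 1)          # slope -> n, intercept -> log10(k)

    k = 10.0 ** log_k
    k_c = 10.0 ** (log_k / cooling_rate)    # assumed Jeziorny-type correction
    return n, k, k_c
```

In practice the baseline handling and the chosen fitting range strongly affect the fitted n and k, which is one reason reported Avrami exponents vary between studies.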
Because the Avrami model is used for isothermal crystallization processes, the rate constant (k) was corrected using Equation (4), log kc = (log k)/ψ (4), where kc is the corrected crystallization rate constant. This equation considers the effect of the cooling rate under nonisothermal conditions. Table 3 shows that the value of kc increased with increasing cooling rate. Thus, the crystallization occurred faster at higher cooling rates. This agrees completely with the DSC crystallization behavior results analyzed at different cooling rates. XRD Analysis The crystalline nature and crystallization behaviors of the neat PLA and its hybrid composites were investigated by XRD analysis. The XRD patterns of the corresponding samples are presented in Figure 6. The XRD peaks of alumina reveal that the powder is highly crystalline and corresponds to single-phase α-Al2O3 [28]. The XRD pattern of the neat PLA shows high-intensity peaks at 16.4° and 18.87° and low-intensity peaks at 14.4° and 22.1°. The high-intensity peaks are associated with the reflections of the (110)/(200) and (203) planes, respectively, whereas the low-intensity peaks are assigned to the reflections of the (010) and (015) planes, respectively [6]. 
The highly crystalline plane at (110)/(200) mainly corresponds to the α-form crystals in the PLA matrix [29]. After the composite was fabricated with the incorporation of alumina in the PLA, the XRD peaks of the alumina increased in intensity with increasing filler loading, whereas the intensity of the diffraction peaks assigned to PLA became weaker. These results indicate that interaction between the PLA matrix and the hybrid fillers strongly affected the crystallinity of the hybrid PLA, consistent with the DSC results. Consequently, the incorporation of highly crystalline alumina increased the crystallinity of the PLA hybrid composites. Dynamic Mechanical Properties The storage modulus of the PLA composites as a function of temperature was investigated at 1 and 10 Hz. The corresponding modulus curves are shown in Figure 7. The modulus showed a drastic increase at the starting temperature for the composites with higher filler loadings. At both 1 and 10 Hz, T-PLA50 had a storage modulus 2.25 times that of the neat PLA (i.e., 125% higher). However, the plot clearly shows that the change in frequency did not strongly affect the modulus of the composites throughout the whole investigated temperature range. The T-PLA40 has a higher storage modulus than the PLA40, indicating that the treated fillers incorporated into the PLA matrix increased its rigidity because of their ability to restrict the mobility of the PLA chains. The increase in the storage modulus of the PLA composite system fabricated with treated alumina indicated an improvement in the interaction between the PLA matrix and the treated alumina. This improved interaction, which restricted the molecular mobility, was the main effect that hindered the cold-crystallization process during the heating cycle [13]. The improvement of the storage modulus upon incorporation of treated crystalline alumina into the PLA matrix is believed to stem from the increase in crystallinity. This result agrees with the results of the DSC experiments (Table 2). At temperatures greater than the glass-transition temperature, the storage moduli of all of the samples become almost equal irrespective of the frequency change. This observation is related to the chain relaxation of the PLA matrix at elevated temperatures. In general, DMA analysis confirmed that the incorporation of treated fillers hinders the mobility of the PLA matrix and makes the system more rigid, which is reflected in the decrease in intensity or disappearance of the cold-crystallization peak. Conclusions PLA is a widely utilized biodegradable and renewable polymer and a potential candidate to replace petrochemical polymers. 
In this work, PLA filled with treated and untreated alumina and nano-AlN hybrid fillers was synthesized via solution casting. The interaction of the hybrid fillers and the PLA matrix was analyzed using ATR-FTIR analysis. The crystallization and melting behaviors of the PLA hybrid composites were studied using DSC. The DSC results revealed that the hybrid composites exhibited a higher degree of crystallinity than the neat PLA. The complete crystallization of the treated PLA hybrid composites hindered cold crystallization during the second heating process. The crystallization kinetics were also studied using the Avrami model. The Avrami model parameters showed that the crystallization rate of PLA was affected by the inclusion of filler particles. The XRD results confirmed crystal formation upon the incorporation of fillers. The inclusion of nano-AlN and the increase in the crystallinity led to an improvement of the storage modulus. In general, this study confirmed that the filler loading, the filler treatment, and the change in heating and cooling rate have a significant effect on the nonisothermal crystallization and degree of crystallinity of the PLA hybrid composites. Supplementary Materials: The following are available online at www.mdpi.com/link. 
Conflicts of Interest: The authors declare no conflict of interest.
7,490
2019-06-01T00:00:00.000
[ "Materials Science" ]
Satellite Retrieval of Surface Evapotranspiration with a Nonparametric Approach: Accuracy Assessment over a Semiarid Region Surface evapotranspiration (ET) is one of the key surface processes. Reliable estimation of regional ET solely from satellite data remains a challenge. This study applies the recently proposed nonparametric (NP) approach to retrieve surface ET, in terms of latent heat flux (LE), over a semiarid region. The involved input parameters are surface net radiation, land surface temperature, near-surface air temperature, and soil heat flux, all of which are retrievals or products of the Moderate-Resolution Imaging Spectroradiometer (MODIS). Field observations are used as ground references, which were obtained from six eddy covariance (EC) sites with different land covers including desert, Gobi, village, orchard, vegetable field, and wetland. Our results show that the accuracy of LE retrievals varies with EC sites, with a coefficient of determination from 0.02 to 0.76, a bias from −221.56 W/m² to 143.77 W/m², a relative error from 8.82% to 48.35%, and a root mean square error from 67.97 W/m² to 239.55 W/m². The error mainly resulted from the uncertainties in the MODIS products or in the retrieval of net radiation and soil heat flux in the nonvegetated region. This highlights the importance of accurate retrieval of the input parameters from satellite data, which is an ongoing task of the remote sensing community. Introduction Evapotranspiration (ET) includes evaporation from various land surfaces and transpiration from vegetation [1]. ET is interchangeable with the associated latent energy (LE) [2]. It is a key land surface process in regulating regional hydrological and climatic characteristics. As the only way back to the atmosphere, global land ET returns about 60% of annual land precipitation to the atmosphere [3]. Accurate estimation of ET is important for regional water resources management, especially in arid regions. ET is intrinsically difficult to measure and predict, especially at a large spatial scale. Various approaches have been proposed in succession to estimate ET since the 18th century, including the Penman evaporation equation [4], the Penman-Monteith (P-M) combination equation [5], and the Priestley and Taylor approach [6]. Traditional measurement or estimation of LE is mostly applicable at a point scale [7][8][9], including eddy covariance (EC) techniques [10]. However, the representativeness of point-scale measurements for a large area is generally problematic, and dense coverage of point measurements is not feasible [11]. Alternatively, remote sensing technology can efficiently solve the representation limitation of point measurements. However, it cannot observe LE directly; rather, it provides retrievals of geophysical parameters to estimate LE at a regional scale [12]. Several retrieval algorithms appeared in the last two decades [13,14], including the triangle approach [15][16][17], the simplified surface energy balance index (S-SEBI) [18], the surface energy balance system (SEBS) [19], the three-temperature model [20][21][22], the MOD16 algorithm [23,24], and other P-M remote sensing methods [25,26]. They are widely applied to estimate regional or global ET from remotely sensed data [25][26][27][28][29][30][31]. The existing algorithms have a relative error from 10% to 40% [32,33] at their validation sites. For example, Gillies et al. 
[34] used aircraft scanner data and the tringle approach to retrieve LE in an area covered with grasslands, steppe-shrub, and tall-grass prairie.The LE retrievals had a root mean square error (RMSE) value of 22∼55 W/m 2 and a relative error (RE) of 10% ∼30%, in reference to field measures by EC techniques and Bowenradio approaches.Verstraeten et al. [28] used the advanced very high resolution radiometer (AVHRR) and the S-SEBI approach to retrieve LE in forestland.This achieved a RMSE (RE) of 35 W/m 2 (24%), compared to EC measures.Su [19] used the Moderate-Resolution Imaging Spectroradiometer (MODIS) data and the SEBS approach to estimate LE in wheat, corn, and rainforest areas.The LE retrievals had a RE of 25%, compared to EC measures.Xiong and Qiu [22] used the three-temperature approach and Landsat Thematic Mapper (TM) data to retrieve instantaneous LE in grassland and hills.Relative to Bowen ratio systems, LE had a RE of 4.65% ∼100% or 0.02∼0.20 mm h −1 during the satellite overpassing time.Mu et al. [23] evaluated the MOD16 products and reported an average bias (RE) of 0.31 mm day −1 (24.1%) for daily ET at FLUXNET-EC sites.Zhang et al. [25] applied normalized difference vegetation index-(NDVI-) based ET algorithm to assess global terrestrial LE using AVHRR GIMMS NDVI data.The daily results had a favorable accuracy with RMSE of about 10∼40 W/m 2 at 34 flux tower sites.Similarly, Leuning et al. [26] introduced a remotely sensed leaf area index-(LAI-) based P-M algorithm to calculate regional daily average evaporation using MODIS LAI products.At 15 flux sites globally, the systematic RMSE in daytime mean evaporation was in the range of 0.09∼ 0.50 mm day −1 , whereas the unsystematic component was in the range of 0.28∼0.71mm day −1 . A nonparametric approach (NP) has been recently proposed for estimating surface evapotranspiration [35].It uses net radiation, surface air temperature, land surface temperature, and soil heat flux as the inputs, without the need of parameterizing surface resistance.All the necessary inputs are measurable, offering a novel but simple approach for practical use.The approach has been validated at 24 EC sites, yet it was not tested with remote sensing application.This paper applies the NP approach to estimate regional LE covering different surfaces, evaluates the accuracy of LE retrievals from MODIS data only, and identifies the error sources which are useful for improving retrieval accuracy. Methodology 2.1.The Nonparametric Evapotranspiration Approach.Surface net radiation ( n ) is the net amount of radiation entering and leaving the Earth's surface.A part of n is transformed into surface soil heat flux ( s ), and another part controls LE and sensible heat flux ( s ).In NP approach, a homogeneous terrestrial ground surface layer is assumed for a macrostate system, and Hamiltonian (potential energy plus kinetic energy) is the total energy of this system. n serves as the potential energy, whereas s , s , and LE serve as kinetic energy.The land surface temperature (LST) serves as a generalized coordinate in this system.The approach calculates the partial differential equations of Hamiltonian with LST ( s ).The final forms are [35] where is land surface emissivity (LSE), a is near-surface air temperature (AT), Δ is the slope of saturated vapor pressure at temperature a , is the psychometric constant, and is the Stefan-Boltzmann's constant (5.67 × 10 −8 Wm −2 K −4 ). can be estimated by the near-surface pressure (). 
Retrieval Algorithms for Rn and Gs. In the retrieval of Rn (Equation (2)), S0 is the solar constant at the top of the atmosphere (about 1367 W/m²), α is the surface albedo, θ is the solar zenith angle (SZA), e0 is the water vapor pressure, εa is the air emissivity, and ε31 and ε32 denote the emissivity in bands 31 and 32 of MODIS, respectively. Rsd is the downwelling shortwave radiation. Lv is the latent heat of vaporization (2.5 × 10⁶ J/kg), Rv is the gas constant for water vapor (461 J kg⁻¹ K⁻¹), and Td is the dew point temperature at screen level. The surface albedo is derived by the following equation [37], in which α1, α2, α3, α4, α5, and α7 are the nadir BRDF-adjusted albedos in bands 1, 2, 3, 4, 5, and 7 of MODIS, respectively. At long time scales, Gs is commonly assumed to be negligible, but at the subdaily scale Gs varies with the time of day and its values are not always negligible. It can be parameterized with Equation (4) [38], in which Gs is regarded as a function of the normalized difference vegetation index (NDVI) and Rn. Correction of EC Measures Used as Reference for Accuracy Assessment. Although EC is the most accurate technique to measure the turbulent fluxes of sensible and latent heat, the energy balance cannot be closed with EC data at the Earth surface, with a closure of approximately 80% [39,40]. In addition, relative to the fluctuations in LE accuracy, the sensible heat flux is measured with more reliable accuracy [41][42][43]. So it is necessary to correct the LE directly measured by EC before it is used for validation. A preferred method can be derived from the energy balance [44,45]. Under large-scale homogeneous surface and steady-state conditions, the corrected LE is ERLE = Rn − Gs − H_EC (Equation (5)), where H_EC is the sensible heat flux measured by EC. The ERLE is the reference for validation of the remote-sensing-retrieved LE (RSLE). Metrics for Accuracy Assessment. The linear regression approach is used to describe the accordance between RSLE and ERLE. The coefficient of determination (R²), slope, and intercept of the linear fit between RSLE and ERLE are subsequently obtained. The accordance is more satisfactory when the regression line is nearer to the 1:1 line and R² is higher. Their definitions are described in [46], where the retrieved values are compared against the reference values, using the average of the reference values and the number, N, of data pairs for comparison. In addition, bias, relative error (RE), and RMSE are used to quantify the errors of RSLE. Bias quantifies the average absolute difference between the retrieved values and the reference values. RE is the absolute value of the bias divided by the magnitude of the reference values. RMSE is the standard deviation of the retrieved values around the reference values. Basically, the RMSE represents a combination of standard deviation and bias. Their definitions are described as follows [47]. The locations of the sites imply similar climatic conditions. The underlying surfaces are homogeneous at all sites except for the orchard and village sites. Based on the field visit, the fruit trees grow with bean seedlings at the orchard site. At the village site, the underlying surface is composed of bare soil, houses, roads, and trees. All instruments were intercompared over the Gobi between 14 and 24 May 2012 [43]. The intercompared and well-agreed instruments, accompanied by uniform data processing steps and standards, ensured data consistency, which guaranteed the reliability of the validation [50]. All selected sites were covered by different land cover types. Thus, for convenience, the site names were replaced by the types of underlying surfaces here. 
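For the accuracy-assessment metrics just defined, a minimal Python sketch is given below; it forms the reference LE from the energy-balance residual and then computes bias, relative error, RMSE, and the coefficient of determination. The array values are placeholders, not HiWATER observations, and the exact metric formulations in [46,47] may differ in detail from the common definitions used here.

```python
import numpy as np

def residual_corrected_le(rn, gs, h_ec):
    """Reference (ER) LE from the energy-balance residual, all in W/m^2."""
    return rn - gs - h_ec

def accuracy_metrics(retrieved, reference):
    """Bias, relative error (%), RMSE and R^2 of retrieved vs. reference LE."""
    r = np.asarray(retrieved, float)
    f = np.asarray(reference, float)
    bias = np.mean(r - f)
    re = abs(bias) / np.mean(np.abs(f)) * 100.0      # one common RE definition
    rmse = np.sqrt(np.mean((r - f) ** 2))
    r2 = np.corrcoef(r, f)[0, 1] ** 2                # R^2 of the linear fit
    return bias, re, rmse, r2

# Placeholder midday values for one site (W/m^2), not actual HiWATER data
rs_le = np.array([310.0, 355.0, 402.0, 288.0])                           # retrieved LE
er_le = residual_corrected_le(np.array([620.0, 640.0, 700.0, 600.0]),    # Rn (AWS)
                              np.array([60.0, 70.0, 80.0, 65.0]),        # Gs (AWS)
                              np.array([210.0, 230.0, 200.0, 240.0]))    # H (EC)
print(accuracy_metrics(rs_le, er_le))
```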
Meteorological and Surface Flux Data. Acquired from the HiWATER, the EC and AWS data span from June 25 to September 15 in 2012, the overlapping time span of EC data at all sites (Table 2).All AWS and EC data were produced, archived, and made available to the scientific community by the Cold and Arid Regions Science Data Center at Lanzhou [43,51].They were used for validation and error source analysis of RSLE.All LE validation references were obtained by s (directly measured by EC), n , and s (measured by AWS) (see (5)). The parameters obtained by AWS were averaged every 10 minutes, whereas the temporal resolution of EC was 30 where lu ( ld ) is the surface upwelling (downwell) longwave radiation.Atmosphere Archive and Distribution System (LAADS), the National Aeronautics and Space Administration (NASA).In our study, both MYD and MCD (MYD07, MYD11, MYD13, and MCD43) were selected to retrieve LE.MYD07 provided the profile of air temperature and moisture to obtain the near-surface atmospheric temperature and dew temperature [53].MYD11 provided the land surface temperature and emissivity [54].MYD13 provided the 16-day NDVI [55,56].MCD43 provided 8-day nadir BRDF-adjusted albedos [57,58], including the albedo in band 1 to band 7. ASTER radiometer has 5 thermal infrared (TIR) bands to provide TIR spectral emissivity variations at 90 m spatial resolution [59].LSE product is produced by the Temperature and Emissivity Separation (TES) algorithm [60].In our study, ASTER product was also provided by the Cold and Arid Regions Science Data Center at Lanzhou [61,62].ASTER product was used to estimate LSE.LSE can be represented by the ASTER narrowband emissivities using the following linear equation [63]: Remote Sensing where 10 ∼ 14 are the five ASTER narrowband emissivities.It was regarded as the real value of LSE because of the high accuracy and spatial resolution. Data Processing. 
Aiming to ascertain the applicability of LE retrieve algorithm, the validation and error source analysis of RSLE were made at the different sites (Figure 2).Firstly, the LE obtained by EC was corrected by n , s (obtained by AWS), and s (obtained by EC) in the way of energy residual correction (see (5)).Secondly, n and s were retrieved by MODIS product in Bisht's and Moran's algorithm (( 2) and ( 4)), respectively.Then, RSLE can be estimated in NP approach (see (1)).The errors of RSLE were derived from the discrepancies between RSLE and ERLE.Thirdly, to qualify the error contribution due to the input error of one parameter, that parameter (derived from MODIS) and the other actual parameters (measured by AWS/ASTER) were brought in NP approach to get LE estimation (see (1)).Similarly, all actual input parameters (measured by AWS/ASTER) were bought in NP approach to get LE estimation, too.The difference among these two estimations was regarded as the error contribution of the parameter at the retrieval moment [64].Fourthly, according to the error analysis, we searched for the probable ways to improve the retrieve accuracy.3 showed energy fluxes and environmental parameters measured by surface observations at the six sites.In the order of decreasing surface moistures, these sites were listed as wetland, vegetable, orchard, village, Gobi, and desert sites.The decreasing order matched pretty well with the vegetation abundances at these sites.Generally, there were higher LE in vegetated region with higher n /LSE and lower s /LST.In detail, n was higher at vegetated than nonvegetated sites.Mean amount of n was 625∼735 W/m 2 for vegetated sites, and it decreased to Mean LSE was in the range of 0.978∼0.981and 0.932∼0.975for vegetated and nonvegetated sites.On the contrary, LST was considerably lower at vegetated sites with typical values of 300∼305 K, compared to 315∼320 K at nonvegetated sites.The near-surface pressure was higher at vegetated sites than at nonvegetated sites (up to 89.11 kpa for wetland, and down to 83.28 kpa for desert).The difference of AT was little, and there was about 299 K of AT at all sites.These environmental parameters were the background of LE retrieve. MODIS Retrievals of n , s , and LE. 
n , s , and LE were all derived from satellite retrieve.Table 4 showed input parameters obtained by satellite retrieve at the six sites.The dew point temperature, pressure, and LSE were similar among all sites, with values of about 278 K, 78 kpa, and 0.96, respectively.Other parameters were different among sites.In detail, AT were 296∼299 K at all sites except for orchard (292 K) and desert (289 K) sites.Low LST appeared at vegetated sites (about 301 K), whereas high LST occurred at nonvegetated sites (300∼309 K).Similarly, albedo was slightly lower at vegetated sites with value of about 0.17, compared to about 0.19 at nonvegetated sites.Thus, generally, the retrieved n was higher in vegetated region (650∼685 W/m 2 ) than in the nonvegetated region (570∼685 W/m 2 ).Considering the lower NDVI at nonvegetated sites (less than 0.2), the higher retrieved s (more than 250 W/m 2 ) appeared there.On the basis of the retrieved n and s , RSLE was higher at vegetated sites (350∼450 W/m 2 ) than at the nonvegetated sites (120∼ 440 W/m 2 ).The instantaneous retrieve results of n , s , and LE in a part of Zhangye region at 05:55 (UTC) in August 20, 2012, were shown in Figure 3.The distribution of LE, n , and s was in good accordance with the oasis-desert ecosystem.The desert (the east and south of the region) had lower n because of the high albedo here.The desert also had higher s because of bare surface.In addition, the oasis (the middle of the region) and wetland (the north of the region) had more evaporation than desert because of the irrigation.In view of retrieve values, the LE was up to 300∼400 W/m 2 in the region of oasis, whereas the LE decreased to 150∼250 W/m 2 in the region of desert.In general, the distribution of retrieve results was deemed to be reliable. Accuracy Assessment of MODIS-Retrieved LE. Figure 4 revealed the relationship between ERLE (donated by 𝑥-axis) and RSLE (donated by -axis) at the retrieval moment.In general, relative to ERLE in 30 minutes, the RSLE was generally accurate and underestimated with bias, RMSE, RE, To validate the retrieved LE further, the LE directly observed by EC (ECLE) was also compared with RSLE in Figure 6.Similarly, the RSLE was in relatively good agreement with ECLE with 2 of 0.11∼0.36 at vegetated sites.Nevertheless, at the nonvegetated sites, the accordance disappeared with 2 of 0.05∼0.23.Relative to the ECLE in Figure 6, the ERLE matched better with retrieved surface LE in Figure 5, especially at nonvegetated sites.Thus, at least, the RSLE had a satisfactory accuracy at vegetated sites, and it also probably had a relatively good accuracy at nonvegetated sites. Error Sources and Their Contributions to MODIS-Retrieved LE. To reveal the error contributions of input error, the input error of each parameter was showed firstly (Figure 7). n was retrieved with low accuracy at village, orchard, and Gobi sites with bias value of more than 80 W/m 2 .There was a low accuracy of retrieved s at almost all sites, especially at Gobi and desert sites (bias values of about 170 W/m 2 ).The large difference (about 10 kpa) of surface pressure appeared at Gobi and desert sites.At most sites, the biases of AT were 2∼5 K, except for orchard (−7 K) and desert (−10 K) sites.Similarly, the biases of LST were also 0∼4 K at vegetated sites, whereas they were more than 9 K at nonvegetated sites.The LSE difference between MODIS and ASTER products was less than 0.01 at all sites except for Gobi (−0.036) and desert sites (0.031). 
On the basis of input errors, the error contribution can be revealed.Figure 8 showed the error contributions of each factor (shown as the line) and the error of RSLE (shown as the columns) at 6 sites.Except for orchard and village sites, the RSLE were in relatively satisfactory accuracy, and the biases were within −90∼50 W/m 2 at the other 4 sites.Based on the analysis of error sources, it was clear that the major error sources (inducing more than 40 W/m 2 RSLE error) were n , s , LST, and AT at nonvegetated sites, with error contributions of 40∼110 W/m 2 , −120∼10 W/m 2 , >60 W/m 2 , and −100∼10 W/m 2 , respectively.At vegetated sites, input errors were not the dominant error sources of RSLE. In detail, the large s error contribution (causing more than 100 W/m 2 RSLE error) appeared at Gobi and desert sites.About 100 W/m 2 RSLE error caused by the n error occurred at village and Gobi sites.The influence of AT and LST error on LE error was below 40 W/m 2 at most of sites, except for the LST at village and Gobi sites, the AT and LST at desert site, and AT at orchard site.The error contribution of LSE accounted for RSLE error as a small part (leading to less than 10 W/m 2 LE error).Similarly, the input errors of pressure affected RSLE error quite little (mostly less than 1 W/m 2 ).In addition, according to validation, the accuracy of RSLE was unsatisfactory at orchard and village with a bias of −221.56W/m 2 and 143.77W/m 2 , respectively.At these two sites, the main errors sources (inducing more than 100 W/m 2 RSLE error) were AT and n . Discussion The improvement of input data can possibly benefit LE retrieve in the future. n and s are not the direct input parameters of algorithms but the indirect parameters retrieved by Bisht's and Moran's algorithm [36,38].The improvement of n and s retrieval algorithms is helpful to the improvement of RSLE.In our study, the Bisht's algorithm under clear sky [36] is selected as n retrieval algorithm.Considering the widespread cloud, Bisht's algorithm under all sky [65] can be chosen to broaden applications.Besides, the accuracy of retrieved n is especially unsatisfactory in arid nonvegetated region [36].It is suitable to replace the retrieved n by the observation directly measured by the Clouds and the Earth's Radiant Energy System (CERES) project [66].CERES's accuracy meets the demand of n input in the region of desert [67,68].For s , because of the subtle temporal variation of NDVI in the arid region, s retrieved by Moran's algorithm varies slightly.Thus, some more sensitive algorithm can be chosen for s , such as the s retrieval algorithm in SEBS model or in the way of thermal inertia [19,27]. AT and the dew point temperature are derived from the atmospheric profile data of MYD07 in our study.They are not the values at the near-surface but the values of the atmospheric profile nearest to the surface.It is evident that the more accurate atmospheric information contributes to the retrieve of n and LE.So the other Earth observations with the high accuracy (e.g., the Goddard Earth Observing System Model, Version 5 (GEOS-5)) are optional substitutions [69].For LST and LSE, they are derived from the MYD11 in the split-windows algorithm [54].Wan et al. 
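The one-parameter-at-a-time substitution used above to attribute RSLE error to individual inputs (as described in the Data Processing subsection) can be written compactly as follows; here `np_le` stands in for the NP-approach LE computation, which is not reproduced in this text, and all keys and names are illustrative assumptions.

```python
def error_contributions(np_le, modis_inputs, ground_inputs):
    """Error contribution of each MODIS-derived input to the retrieved LE (W/m^2).

    np_le         : callable implementing the NP approach, used as np_le(**inputs)
    modis_inputs  : dict of satellite-derived inputs, e.g.
                    {"rn": ..., "gs": ..., "lst": ..., "ta": ..., "lse": ..., "p": ...}
    ground_inputs : dict with the same keys measured by AWS/ASTER
    """
    le_all_ground = np_le(**ground_inputs)        # baseline: all actual inputs
    contributions = {}
    for name, modis_value in modis_inputs.items():
        swapped = dict(ground_inputs)
        swapped[name] = modis_value               # replace this single input only
        contributions[name] = np_le(**swapped) - le_all_ground
    return contributions
```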
[70] have reported that using just the average of the band 31 and 32 emissivities could lead to an overestimation of LSE, especially in arid and semiarid regions. Accordingly, LST is underestimated in these regions. The MYD21 C6 product is estimated to be published in 2016, and it can supply more accurate LST and LSE based on the TES algorithm [71], especially in arid and semiarid regions. That means a remarkable improvement of the Rn and LE retrievals if we replace MYD11 by MYD21. Study Area and Ground Sites. As a typical inland river basin in the northwest of China, the Heihe River Basin is located between 97°24′~102°10′E and 37°41′~42°42′N and covers an area of approximately 130 000 km². The selected 6 ground observation sites are parts of the multiscale EC observation matrices belonging to the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) [48,49], and they are acquired over the Zhangye region (100°25′E, 38°51′N, 1519 m) in the middle reaches of the Heihe River Basin (Figure 1). A total of 6 EC sites measuring LE and the sensible heat flux are used for data analysis and accuracy assessment (Table 1), accompanied by 6 automatic weather stations (AWS) which are used to measure the near-surface meteorological parameters. In view of spatial homogeneity and underlying representativeness, observations focus on six different areas with landscapes ranging from moist vegetated surfaces (vegetable, orchard, and wetland) to arid nonvegetated surfaces (village, desert, and Gobi). Table 1: Descriptive information of the EC sites (orchard, vegetable, wetland, desert, Gobi, and village). Figure 1: Geolocation of the EC observation sites and illustration of the land surface. Table 2: Datasets for analysis. Remote sensing data were obtained from the MODIS and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) products. The pixels located at the sites were obtained to retrieve LE. All chosen images were acquired under clear sky during 13:00 to 15:00 (local time) from June 25 to September 15, 2012. The temporal and spatial resolutions of these products are listed in Table 2. MODIS is onboard NASA's Earth Observation System TERRA and AQUA satellites [52]. As mentioned above, various products (MOD, MYD, and MCD) are provided by MODIS. They have been produced, archived, and made available to the scientific community by the Level 1 and Atmosphere Archive and Distribution System (LAADS). Figure 2: Schematic representation of the procedure for analysis in our study. Table 3: Average of fluxes and environmental parameters derived from AWS/ASTER at the six sites in the EC matrices, including net radiation (Rn), soil heat flux (Gs), land surface temperature (Ts), near-surface air temperature (Ta), land surface emissivity (LSE), near-surface pressure (P), latent heat (LE), and sensible heat (Hs); the energy-residual-corrected (ER) LE is also given. Table 4: Average of the input parameters derived from MODIS products at the six sites, including land surface temperature (Ts), near-surface air temperature (Ta), land surface emissivity (LSE), near-surface pressure (P), dew point temperature (Td), albedo, and NDVI; the retrieved net radiation (Rn), soil heat flux (Gs), and latent heat flux (LE) are also shown.
5,514.6
2016-05-11T00:00:00.000
[ "Environmental Science", "Mathematics" ]
A Novel Highly Dynamic Choice Routing Scheme for Mobile Adhoc Network This study aims to improve the performance of traditional routing protocols for MANET, such as DSR and AODV, in terms of delay and overhead. The proposed routing scheme, called Highly Dynamic Choice Routing (HDCR), adapts to the highly dynamic environment of a MANET. The link residual life is estimated before forwarding data through a node in order to reduce link failures. The velocity of the moving node is considered while choosing the next forwarder node. This enables HDCR to decrease the delay in the network. The proposed routing scheme reduces the routing overhead and the delay, and it also reduces link failures. The performance is evaluated using simulation results obtained with the NS2 simulator. INTRODUCTION A MANET is used to exchange information between nodes that are in motion. A network of mobile nodes connected by wireless links is called a mobile ad hoc network. The nodes in a MANET are autonomous, independent nodes. The mobile nodes transfer information without the help of any external devices such as routers. Each node acts as a relay node to support the transmissions of other nodes; a node itself acts as transmitter, receiver, and router. For this reason, the nodes in a MANET are called autonomous nodes. As the topology of the MANET changes dynamically, the MANET is called an infrastructure-less network. The links between the nodes also change dynamically, so it is hard to transmit data to a node in a highly dynamic environment. Several routing protocols have traditionally been developed specifically for MANET, such as DSR, AODV, and DSDV. Among these, Dynamic Source Routing (DSR) outperforms the Ad hoc On-demand Distance Vector (AODV) routing protocol in terms of throughput, but it is not suitable for highly dynamic environments. In Thakare and Joshi (2010), the authors compared and evaluated the performance of DSR and AODV using the random waypoint mobility model. In that analysis, they found that DSR provides better performance than AODV in terms of throughput and delay at low mobility speeds and light loads, but AODV outperforms DSR when the nodes move at high speed and under heavier load. The delay in DSR is due to its aggressive use of caches and stale routes. The behavior of AODV and DSR was studied in Khattak et al. (2008). In that study, the authors analyzed the performance over the TCP (Transmission Control Protocol) communication protocol and the Constant Bit Rate (CBR) traffic model. The obtained results showed that the packet delivery ratio is higher when using TCP and CBR, while the delay is high for TCP and low for CBR. At high speed, the PDR of AODV is lower than the PDR of DSR. The authors concluded that AODV and DSR each outperform the other under different traffic patterns. The performance of AODV and DSR in a highly dynamic environment such as VANET was analyzed in Som and Singh (2012). The authors observed that AODV provides better throughput than DSR, but the packet loss is higher for AODV; however, they found that the lost packets are mostly control packets. The authors concluded that AODV provides better performance than DSR in a highly dynamic environment. 
In Sapna and Desmukh (2009), the authors analyzed the performance of AODV and DSR using the network simulator NS2 with the random waypoint mobility model. AODV provides higher throughput than DSR, but AODV suffers from packet loss, delay, and overhead because it maintains only one route per destination. The AODV, DSR, and DSDV routing protocols are analyzed and compared across different parameters in Taksande and Kulat (2011). The authors report that all the routing protocols perform better under TCP connections than under UDP because retransmission is not available in UDP. They conclude that DSDV performs poorly in a mobile environment due to its low coverage time. From the survey, DSR performs better than AODV in a less dynamic environment, but it is not suitable for a highly dynamic environment. Therefore, in this study, a newly proposed routing scheme is used to improve the performance of DSR and adapt it to the highly dynamic environment. The proposed routing scheme HDCR is well suited for the highly dynamic environment because it constructs the route dynamically, and it reduces the overhead and delay in the network. PROPOSED METHODOLOGY A MANET consists of autonomous mobile nodes connected by wireless links to exchange information. As the topology of the network changes dynamically, the links between the nodes also change frequently. A node transmits information to the intended destination directly if the destination is within the transmission range of the source node. If the destination is outside the transmission range, the source node transmits via intermediate relay nodes; the mobile nodes themselves act as relay nodes. Many routing protocols are available for MANET. All the traditional routing protocols build the route before transmitting data to the destination, so there is a chance of a link failure in the MANET. Due to a link failure in the network, the data may never reach the destination; the source node must then reconstruct the route to transmit the data, which causes delay and routing overhead. To overcome this, this study proposes a novel routing scheme called the Highly Dynamic Choice Routing scheme (HDCR). The HDCR scheme additionally uses the link residual life to construct the route. HDCR selects the next forwarder node based on the link residual life and the velocity of the node. In HDCR, the source node finds the list of neighbor nodes and then chooses the next forwarder node based on the link residual life and the distance to the destination. The source node itself does not know the entire route to the destination. In the proposed scheme, the intermediate relay nodes are also responsible for recovering from route failures. Moreover, link failures are largely avoided in the proposed scheme because the link residual life is considered while constructing the route. The delay is reduced by considering the distance between the current node and the destination node. The proposed routing scheme provides a choice of next forwarder node, and reliability is ensured by reducing link failures in the network. The following block diagram explains the proposed scheme. Figure 1 illustrates the concept of the proposed routing scheme, that is, the intermediate process between the source node and the destination node used to transmit the data. In Fig.
1, the source node intends to transmit data to the destination node. Initially, the source node finds the nodes that are within its transmission range to form the neighbor list. First, it checks whether the destination node is present in the neighbor list. If it is present, it forwards the data to the destination directly. Otherwise, it searches for the best forwarder node in the neighbor list in the following way: Step 1: Find the link residual life of the link between the source node and each node in the neighbor list. Step 2: Calculate the distance between each node in the neighbor list and the destination. Step 3: Choose the node with high Link Residual Life (LRL) and minimum distance as the next forwarder node. Step 4: Forward the data to the chosen next forwarder node. Step 5: Repeat from Step 1 until the data packet reaches the destination. The link residual life is defined as the duration for which the link between two nodes is expected to exist. Using the quantities defined below, the LRL is calculated by the following formula: LRL = Distance / Relative velocity (1), where Distance indicates how far the neighbor (relay) node needs to move to get out of range of the source node, and the relative velocity captures the motion of the moving node. The relative velocity is calculated by: Relative velocity = Displacement / Time (2). The distance between a node and the destination should decrease at every hop for that node to become the next forwarder node. The proposed routing scheme reduces the routing overhead by reducing link failures, and the routing delay is also reduced in the network. This has been analyzed using the simulation results obtained with the network simulator NS2. SIMULATION RESULTS The simulation is carried out using the simulator NS2. Network Simulator 2 is a discrete-event, time-driven simulator. NS2 is open-source software that uses C++ and the Tool Command Language (TCL) for simulation: C++ is used for packet processing and is fast to run, whereas TCL is used for the simulation description and to manipulate existing C++ objects, and it is quick to run and change. NS2 is widely used to simulate networking concepts. The simulation parameters are tabulated in Table 1. Table 1 specifies that 21 nodes are distributed over a simulation area of 1070×746 m. The mobile nodes move within the simulation area according to the random waypoint mobility model with a speed of 5 m/sec. Each node has a direct link with the nodes within a range of 250 m. The Constant Bit Rate (CBR) traffic model is used to control the traffic flow in the network. The performance of the proposed scheme is analyzed using the metrics throughput, link duration, and delay, and it is evaluated under different mobility models, namely the random waypoint and city section mobility models. The end-to-end delay is the average time taken by a data packet to reach the destination and is calculated using the formula (Fig. 2): Delay = Packet arrival time − Packet send time (3). Figure 3 shows the delay at the destination versus simulation time; a lower delay indicates higher performance of the proposed scheme. The throughput indicates the amount of work done per unit time; in the proposed scheme, it is the amount of data delivered per unit time and is calculated using the following formula: Throughput = Amount of data delivered / Time (4). Figure 3 shows that the proposed HDCR scheme provides better performance when the city section mobility model is used.
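To make the forwarder-selection steps and the delay/throughput metrics above concrete, the sketch below implements them in Python; the node/velocity representation, the combined ranking score, and the use of the 250 m range constant are illustrative assumptions rather than the paper's NS2/TCL implementation.

```python
import math

TX_RANGE = 250.0  # transmission range in metres (value from Table 1)

def link_residual_life(src, neighbor):
    """Estimated time until 'neighbor' drifts out of src's transmission range.
    Assumes straight-line motion: remaining distance to the range edge divided
    by the relative speed of the two nodes (Eqs. 1 and 2)."""
    d = math.dist(src["pos"], neighbor["pos"])
    rel_speed = math.dist(src["vel"], neighbor["vel"]) or 1e-9
    return max(TX_RANGE - d, 0.0) / rel_speed

def choose_next_forwarder(src, neighbors, dest):
    """Steps 1-3 of HDCR: prefer a long residual life and a small distance to
    the destination. The combined score below is an illustrative tie-break choice."""
    candidates = [n for n in neighbors
                  if math.dist(n["pos"], dest) < math.dist(src["pos"], dest)]
    if not candidates:
        return None
    return max(candidates,
               key=lambda n: link_residual_life(src, n) / (1.0 + math.dist(n["pos"], dest)))

def end_to_end_delay(send_time, arrival_time):
    return arrival_time - send_time              # Eq. (3)

def throughput(bits_delivered, elapsed_time):
    return bits_delivered / elapsed_time         # Eq. (4)
```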
Figure 4 shows that, as the number of nodes increases, the links between the nodes exist for a longer duration. The proposed scheme performs better under the city section mobility model than under the random waypoint mobility model. CONCLUSION In this study, a novel routing scheme is proposed to adapt the routing protocol to a highly dynamic MANET. The link residual life and the velocity of the moving node play a very important role while constructing the path to the destination. The proposed scheme outperforms the existing schemes in terms of routing overhead, delay, reliability, and link failure.
2,443
2014-08-25T00:00:00.000
[ "Computer Science", "Engineering" ]
Memory and synaptic plasticity are impaired by dysregulated hippocampal O-GlcNAcylation O-GlcNAcylated proteins are abundant in the brain and are associated with neuronal functions and neurodegenerative diseases. Although several studies have reported the effects of aberrant regulation of O-GlcNAcylation on brain function, the roles of O-GlcNAcylation in synaptic function remain unclear. To understand the effect of aberrant O-GlcNAcylation on the brain, we used Oga+/− mice which have an increased level of O-GlcNAcylation, and found that Oga+/− mice exhibited impaired spatial learning and memory. Consistent with this result, Oga+/− mice showed a defect in hippocampal synaptic plasticity. Oga heterozygosity causes impairment of both long-term potentiation and long-term depression due to dysregulation of AMPA receptor phosphorylation. These results demonstrate a role for hyper-O-GlcNAcylation in learning and memory. influences synaptic plasticity in the hippocampus [21][22][23] . Although it is clear that O-GlcNAcylation is abundant in synapses and that O-GlcNAcylation affects synaptic plasticity and learning and memory in the hippocampus, past studies have used different methods to modulate O-GlcNAcylation levels, resulting in conflicting results. Decreased O-GlcNAc levels by alloxan treatment (OGT inhibitor) impairs high-frequency stimulation (HFS)-induced long-term potentiation (LTP) in the Schaffer Collateral (SC)-CA1 Pathway 24 . In contrast, the elevation of O-GlcNAcylation induced by Thiamet-G (OGA inhibitor) inhibits HFS-LTP and impairs hippocampal learning 18 . Here we assessed how chronic elevations of O-GlcNAcylation in the hippocampus affect synaptic function, behavioral traits, and spatial learning and memory, using Oga +/− mice with constitutively increased O-GlcNAc levels. Results Oga +/− brains have normal morphology and dendritic spine density. Consistent with previous studies showing enriched expression of O-GlcNAc cycling enzymes, OGT and OGA, we detected high levels of O-GlcNAcase expression in the hippocampus, which was visualized by beta galactosidase (LacZ) staining of an Oga +/− brain section (Fig. 1a). To assess the effect of increased O-GlcNAcylation on hippocampus-dependent function, we used Oga +/− mice with chronically elevated O-GlcNAcylation. The hippocampal lysates prepared from Oga +/− mice showed an increase in the overall O-GlcNAcylation levels (Fig. 1b). To verify the elevation of O-GlcNAcylation in the hippocampus, we used immunohistochemistry with an anti-O-GlcNAc antibody. As expected, increased immunoreactivity was observed throughout all regions of the hippocampus in Oga +/− mice compared to WT. (Fig. 1c). Next, we tested whether Oga heterozygosity leads to morphological changes in the brain. Morphological analysis of neurons in the hippocampus by Nissl staining revealed that the Oga +/− hippocampus shows no morphological changes in hippocampal CA1, CA3, or dentate gyrus (DG) (Fig. 1d). In addition, we found that there were no differences in the numbers of cells immunostained for the neuronal marker neuronal nuclei (NeuN) and for the astrocyte marker glial fibrillary acidic protein (GFAP) in the hippocampal CA1, CA3, or DG (Fig. 1e). The average brain weight also was not affected by Oga heterozygosity (Fig. 1f and g). Lastly, dendritic spine density was not altered in the Oga +/− hippocampal CA1 pyramidal neurons ( Fig. 1h and i). 
These results together suggest that synaptic development and hippocampal structure are not affected by the elevation of O-GlcNAcylation. Oga +/− mice display impaired spatial learning and memory. Various neuronal proteins involved in synaptic function and learning and memory are known to be O-GlcNAcylated 8,9,15 . To assess whether hyper-O-GlcNAcylation affects hippocampal-dependent spatial learning and memory, we employed the Barnes circular maze test. In this test, mice were trained to escape a brightly lighted circular field by discovering the escape hole at its periphery. Compared with wild-type (WT) mice, Oga +/− mice showed impaired learning performance during four days of training ( Fig. 2a-c). To assess memory formation, we performed probe trials on days 5 and 12. WT and Oga +/− mice performed similarly during the probe trials when total distance was measured ( Fig. 2d and g). Oga +/− mice exhibited increased latency to the target region during probe trials ( Fig. 2f and i). However, no significant difference were observed in the time spent in the target region between WT and Oga +/− mice ( Fig. 2e and h). To further verify the impairment in spatial learning and memory of Oga +/− mice, we performed a context fear conditioning test. Compared with WT mice, Oga +/− mice failed to retain fear memory 24 h after fear conditioning ( Fig. 2j and k). Our data indicate that proper removal of O-GlcNAc modification by OGA is required for hippocampal-dependent spatial learning and memory. To examine motor coordination in these mice, we tested motor performance using the rotarod task. Oga +/− mice did not display a defect in motor function during the rotarod test (Fig. S1A). During the open field test, Oga +/− mice showed normal locomotor activities ( Fig. S1B and C). Anxiety-related behaviors were also tested using the elevated plus maze. Compared with WT mice, Oga +/− mice exhibited no significant differences in the number of entries to the open arms and amount of time spent in the open arms ( Fig. S1D-G). Considered collectively, these data indicate that Oga +/− mice show normal locomotor activity and anxiety levels. Glutamatergic and GABAergic synaptic transmission in the hippocampus is normal in Oga +/− mice. We next explored the effect of heterozygous loss of Oga on intrinsic neuronal excitability and excitatory synaptic transmission in hippocampal CA1 pyramidal neurons. Excitability was tested by injecting step depolarizing currents, and we found that intrinsic excitability of hippocampal CA1 pyramidal neurons remains unchanged in Oga +/− mice ( Fig. 3a and b). The frequency and amplitude of miniature excitatory and inhibitory postsynaptic currents (mEPSCs and mIPSCs, respectively) in hippocampal CA1 pyramidal neurons were also comparable in WT and Oga +/− pyramidal neurons ( Fig. 3c and d), indicating that basal synaptic responses are not affected in Oga +/− mice. In addition, we measured the ratio of AMPA to N-methyl-D-aspartate (NMDA) receptor-mediated synaptic currents in SC− CA1 synapses, and the AMPA/NMDA ratio was similar between WT and Oga +/− synapses (Fig. 3e). These results suggest that Oga heterozygosity does not affect basal SC− CA1 synaptic transmission or short-term plasticity. Impaired NMDA receptor (NMDAR)-dependent synaptic plasticity in Oga +/− mice. Previous studies investigating the effects of increased O-GlcNAcylation on synaptic plasticity have generated conflicting results 18,24 . 
Therefore, we next assessed whether increased O-GlcNAcylation resulting from Oga haploinsufficiency alters synaptic plasticity. The slope of the field excitatory postsynaptic potential (fEPSP) to fiber volley amplitudes (input-output curves) was not changed in Oga +/− mice (Fig. 4a). Presynaptic release probability, as measured by paired-pulse facilitation (PPF), also remained unaffected in Oga +/− mice (Fig. 4b). When we measured NMDAR-mediated LTP in Oga +/− mice, the magnitude of LTP induced by high-frequency stimulation Scientific RepoRts | 7:44921 | DOI: 10.1038/srep44921 (HFS) at the SC− CA1 pathway was reduced compared to WT mice (Fig. 4c). Moreover, in the same hippocampal pathway, low-frequency stimulation (LFS)-induced long-term depression (LTD) was impaired in Oga +/− mice in comparison to WT mice (Fig. 4d). These results suggest that the removal of O-GlcNAcylation mediated by OGA is required for NMDAR-dependent LTP and LTD at SC− CA1 synapses. Impaired modulation of AMPA receptor during LTP/LTD in Oga +/− mice. Regulation of AMPA receptor trafficking is crucial for controlling the strength of synaptic transmission during LTP/LTD. In particular, phosphorylation of GluA1 AMPAR subunit at S845 and S831 play key roles in AMPA receptor trafficking and synaptic plasticity 25 . We thus decided to examine whether GluA1 phosphorylation is altered in response to chemically induced LTP and LTD in Oga +/− hippocampus. We briefly stimulated hippocampal slices from WT and Oga +/− mice with glycine for LTP or NMDA for LTD 26,27 . WT hippocampal slices showed elevated phosphorylation of the S845 and S831 GluA1 in chemical LTP, and the phosphorylation of the S845 GluA1 was decreased in chemical LTD. However, in Oga +/− hippocampal slices, phosphorylation of the S845 and S831 GluA1 were not properly regulated following chemical LTP or LTD (Fig. 5a-c). These results indicate that Oga heterozygosity impairs the proper regulation of AMPA receptor phosphorylation during synaptic plasticity. Discussion Although increasing evidence has been generated by various studies regarding the significance of O-GlcNAcylation in regulating synaptic functions, the different experimental designs used have resulted in conflicting conclusions on the impact of changing the levels of O-GlcNAcylation. Here we used mice with a heterozygous loss-of-function mutation in OGA which have elevated O-GlcNAc levels. We found that OGA is highly expressed in the hippocampus, suggesting that O-GlcNAc modification of neuronal proteins is closely related to hippocampus-dependent functions. Oga +/− mice exhibited impaired synaptic plasticity in the hippocampus at SC-CA1 synapses, dysregulated phosphorylation of AMPA receptor subunit GluA1 in chemically induced LTP and LTD, and deficits in hippocampus-dependent learning and memory. This result together demonstrates that increased levels of O-GlcNAcylation lead to altered synaptic plasticity in the hippocampus, which may underlie the impairment of learning and memory observed in Oga +/− mice. Several studies have shown that synaptic plasticity is variably affected by O-GlcNAcylation. Tallent et al. showed that the elevation of O-GlcNAcylation induced by OGA inhibitor (9d) decreases PPF and increases LTP induction, and suggested that the elevation of O-GlcNAcylation facilitated LTP by modulating the interplay between phosphorylation and O-GlcNAcylation of signaling molecules, such as synapsin I/II, ERK, and CaMKII 24 . 
In the same study, the authors also reported that reduced O-GlcNAcylation with OGT inhibitor (Alloxan) prevents LTP induction 24 . However, contrary to this result, Kanno et al. found that Alloxan enhances hippocampal SC-CA1 LTP by regulating AMPA receptor trafficking 30 . Taylor et al. also showed that acutely elevated O-GlcNAcylation by OGA inhibitor (Thiamet-G) or glucosamine induces LTD, but impairs LTP at CA3-CA1 synapses, which also led to a deficit in novel object recognition 18 . Each study mentioned above used different methods to change the levels of O-GlcNAcylation. Alloxan is known as a weak OGT inhibitor and thus likely to have off-target effects 31,32 . Furthermore, as we previously reported, the OGA inhibitor (Thiamet-G) increases the levels of OGA expression 33,34 . Glucosamine also affects various intracellular signaling pathways [35][36][37] . Each experiment was performed in acutely elevated O-GlcNAcylation by pretreatment of OGA inhibitors or glucosamine. The different experimental designs might have resulted in conflicting results. Previously, the discrepancy in the effect of dysregulated O-GlcNAcylation was also observed in other intracellular signaling pathways and physiological functions 33,34,38 . Despite this discrepancy, earlier studies suggest that dysregulated O-GlcNAcylation can affect synaptic plasticity. Here, we used OGA heterozygous mice that have elevated O-GlcNAcylation levels. Oga +/− hippocampus displayed impaired regulation of AMPAR GluA1 phosphorylation which plays an important role in mediating AMPAR trafficking during synaptic plasticity. Although Taylor et al. showed that GluA1 is not O-GlcNAcylated 18 , the phosphorylation of GluA1 can be indirectly regulated by activation of upstream signaling molecules, including protein kinase C (PKC), CaMKII, and protein kinase A (PKA) [39][40][41] . Importantly, both PKC and CaMKII are modified by O-GlcNAcylation 21,42 , and the dynamic interplay between O-GlcNAcylation and phosphorylation in neurons was shown to be involved in hippocampal synaptic plasyticity 24 . Activation of PKC or PKA reduces global O-GlcNAc levels in cytoskeletal fraction of cultured cerebellar neurons 43 . We speculate that GluA1 phosphorylation might be affected by altered O-GlcNAcylation levels. However, we cannot rule out the possibility that Oga heterozygosity affects multiple signaling pathways involved in hippocampal LTP and LTD. ROS play an important role in synaptic plasticity 44 by regulating synaptic plasticity-related signaling molecules, receptors, and channels [45][46][47] . Importantly, O-GlcNAcylation have been shown to affect ROS generation 48 . In addition, forkhed box O1 (FoxO1), a regulator of the transcription of the oxidative stress responsive enzymes catalase and MnSOD (SOD2), is O-GlcNAcylated 49 . We therefore examined whether ROS levels are affected in Oga +/− hippocampus compared to WT hippocampus. Despite Oga heterozygosity, the ROS levels were not altered in Oga +/− hippocampus (Fig. S3). Aging is associated with impairments in cognitive and synaptic function 50 . Dysfunction of the aging brain is not caused by neuronal loss 51 but by specific alterations in neuronal morphology, cell-cell interactions, and gene expression 50 . The hippocampus appears to be particularly vulnerable to the effects of aging on cognitive function and synaptic plasticity. O-GlcNAcylation and its regulatory enzymes are highly detected in the hippocampus 52 . 
O-GlcNAcylation modulates neuronal cell signaling processes and gene expression, which is critical for proper neuronal function 1,53 . Interestingly, we previously reported that the brains of older mice show significantly increased levels of O-GlcNAcylation compared with those of younger mice 3 . However, the mechanism underlying the effect of chronic elevations in O-GlcNAcylation on brain aging remains unknown. Based on our observations, we speculate that, in the normally aged brain, chronically elevated O-GlcNAcylation contributes to the impairment of synaptic plasticity and learning and memory. Methods Mice. Oga +/− mice (C57BL/6J) were generated as described previously 3 . All mice were housed under a 12-hour light/dark cycle and given ad libitum access to food and water. All experimental protocols were approved by the Institutional Animal Care and Use Committee of the Ulsan National Institute of Science and Technology (UNISTIACUC-14-018), and all methods were performed in accordance with the relevant guidelines and regulations. Barnes maze. The Barnes circular maze consists of a white circular platform (92 cm diameter) with 20 evenly spaced holes (5 cm diameter) located 7.5 cm from the perimeter; the platform is elevated 100 cm above the floor. Several spatial cues with distinct shapes were placed near the walls of the testing room. A black target box (20 × 10 × 10 cm) was placed under one hole. The mice were encouraged to find this box by an aversive noise (85 dB) on the platform. Barnes maze trials were run for 4 consecutive days, with 3 trials carried out each day and 20 min inter-trial intervals. The mouse was allowed to search for the target box for 3 min. Distance, latency, and the number of errors to reach the target hole were recorded during training trials by video tracking software. On day 5, a probe test was performed without the escape box. Mice were allowed to freely search for the target hole for 3 min. Time spent around each hole, total distance travelled, and the latency to find the target hole were recorded. Golgi staining. Brains from 8-week-old mice were processed with the FD Rapid GolgiStain™ Kit (NeuroTechnologies) according to the manufacturer's instructions. Images of dendritic spines (apical dendrites of CA1 pyramidal neurons) were acquired using an Olympus Cell^TIRF Xcellence microscope in the UNIST-Olympus Biomed Imaging Center (UOBC). (Figure 5 legend, in part: Chemical LTP and LTD significantly increase and decrease the levels of GluA1 S845 phosphorylation, respectively, in WT (n = 4, normalized to control). However, acute hippocampal slices from Oga +/− mice failed to exhibit a significant change in GluA1 S845 phosphorylation following chemical LTP and LTD induction (n = 3, normalized to control). One-way ANOVA followed by Tukey's test was used. (c) Chemical LTP significantly increases the levels of GluA1 S831 phosphorylation in WT (n = 3, normalized to control), but not in Oga +/− mice (n = 3, normalized to control). Error bars represent ± standard error of the mean (SEM). NS: not significant, *p < 0.05, **p < 0.01, ***p < 0.001; one-way ANOVA followed by Tukey's test. Full-length blots/gels are presented in Supplementary Figure S4.) Chemical LTP and LTD induction. Acute hippocampal slices (300-μm thick) from WT or Oga +/− mice (8-10 weeks) were prepared in a sucrose-cutting buffer containing (in mM) 234 sucrose, 2.5 KCl, 1.25 NaH2PO4, 24 NaHCO3, 11 glucose, 10 MgSO4, 0.5 CaCl2, bubbled with 95% O2 and 5% CO2.
The slices were recovered at 35 °C for one hour in a recovery buffer containing (in mM) 124 NaCl, 3 KCl, 1.25 NaH 2 PO 4 , 26 NaHCO 3 , 10 glucose, 6.5 MgSO 4 , 1 CaCl 2 bubbled with 95% O 2 and 5% CO 2 . Following the recovery, the slices were further incubated at 37 °C for one hour in an extracellular fluid containing (in mM) 125 NaCl, 2.5 KCl, 1 MgCl 2 , 2 CaCl 2 , 33 glucose, 25 HEPES, and then treated with 20 μ M D-AP5 and 0.5 μ M TTX for 20 min. The slices were subsequently treated with 3 μ M strychnine, 20 μ M bicuculline and 200 μ M glycine for 10 min to induce chemical LTP, or with 20 μ M NMDA for 3 min to induce chemical LTD in a Mg-free extracellular fluid, and transferred back to a regular extracellular fluid for 30 min prior to sample collection. Statistical analysis. The Student's unpaired T-test or non-parametric Mann-Whitney U-test was used to compare two independent groups. For multiple comparisons, a one-way repeated measures ANOVA with Tukey's post hoc test was utilized, as specified in the Figure legends. All data are expressed as the mean ± SEM and significance indicated by *P < 0.05, **P < 0.01, and ***P < 0.001.
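As a small illustration of the statistical comparisons described above (unpaired t-test or Mann-Whitney U-test for two groups, one-way ANOVA with Tukey's post hoc test for multiple groups), the sketch below uses SciPy and statsmodels on placeholder data; the numbers are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical normalized GluA1 phosphorylation values (placeholder data only)
wt_ctrl, wt_cltp = rng.normal(1.0, 0.1, 4), rng.normal(1.6, 0.2, 4)

# Two independent groups: unpaired t-test or non-parametric Mann-Whitney U-test
t_stat, t_p = stats.ttest_ind(wt_ctrl, wt_cltp)
u_stat, u_p = stats.mannwhitneyu(wt_ctrl, wt_cltp)

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
groups = {"ctrl": wt_ctrl, "cLTP": wt_cltp, "cLTD": rng.normal(0.6, 0.1, 4)}
f_stat, anova_p = stats.f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```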
3,955
2017-04-03T00:00:00.000
[ "Biology", "Medicine" ]
Synthesis and biological evaluation of a new series of ortho-carboranyl biphenyloxime derivatives The (Z,Z′)-1,1′-(4-ortho-carboranyldimethyl)-bis(2-methoxyphenylethan-1-oxime) intermediate 3 was synthesized by a three-step reaction with a final treatment with base to give a new series of ortho-carboranyl biphenyloxime derivatives (4–8). Compounds 7 and 8 showed high solubility, and the in vitro study results revealed high levels of accumulation in HeLa cells with higher cytotoxicity and boron uptake compared to l-boronophenylalanine. Electronic supplementary material The online version of this article (10.1186/s13065-018-0444-z) contains supplementary material, which is available to authorized users. Introduction Carborane (C2B10H12, Fig. 1) is a spherical polyhedral boron cluster in which two vertices of the cage are occupied by carbon atoms; its volume is similar to that of a benzene ring [1–5]. It is a bulky steric skeleton with a very strongly hydrophobic structure. Therefore, modification of the chemical structure can alter the stability, water solubility, biocompatibility, and biological activity, and allow wider application of carborane as a BNCT agent [6–9]. Boron neutron capture therapy (BNCT) was first proposed as a potential cancer therapy in 1936; it is based on the capture of a thermal neutron by a 10B atom, which then produces a 4He nucleus (α-particle) and a 7Li ion [10,11]. However, its successful application in the treatment of cancer patients still presents a challenge in medical research [12]. A major challenge in designing boron-containing drugs for BNCT of cancer is the selective delivery of 10B to the tumor as well as water solubility [13]. Our synthetic strategy was to use heterocyclic alkyl chains as a boron delivery system, the target molecules being heterocyclic alkyl oxime chains in which the boron functionality was present as an ortho-carborane. The large number of boron atoms is a clear advantage for BNCT [14]. This paper reports carboranylbenzyloxime derivatives in which hydrophilic moieties, such as alkylmorpholine, alkylpiperidine, phenoxyalkyl, and pyridine groups, are attached through carbon–oxygen chemical bonds. These compounds have higher solubility in polar solvents and increased boron uptake in tumor cells, highlighting the potential use of carborane as a hydrophilic carrier that can pass the blood–brain barrier (BBB) and reach the cells within the tissue, for drug evaluation. Experimental All manipulations were performed under a dry nitrogen atmosphere using standard Schlenk techniques. Tetrahydrofuran (THF) was purchased from Aladdin Pure Chemical Company and dried by distillation over sodium metal prior to use. The reactions were monitored on Merck F-254 pre-coated TLC plastic sheets using hexane as the mobile phase. All yields refer to the isolated yields of the products after column chromatography on silica gel (200–230 mesh). All glassware, syringes, magnetic stirring bars, and needles were dried overnight in a convection oven. Ortho-carborane (C2H2B10H10) was purchased from HENAN WANXIANG Fine Chemical Company and used after sublimation. The NMR spectra were recorded on a Bruker 300 spectrometer, and the chemical shifts were measured relative to the internal residual peaks of the lock solvent (99.9% CDCl3 and CD3COCD3) and then referenced to Si(CH3)4 (0.00 ppm).
The Fourier transform infrared (FTIR) spectra of the samples were recorded on an Agilent Cary 600 Series FT-IR spectrometer using KBr disks. Elemental analyses were performed using a Carlo Erba Instruments CHNS-O EA1108 analyzer (Additional file 1). Synthesis of 1,1′-(4-carboranyldimethyl)-bis(2-methoxy-4,1-phenylene-ethan-1-one) (2). Acetyl chloride (1.4 mL, 20 mmol) was added via a syringe to a solution of aluminum chloride (2.6 g, 20 mmol) in 50 mL of methylene chloride at 0 °C and stirred for 30 min. A solution of compound 1 (3.5 g, 10 mmol) in methylene chloride (10 mL) was added slowly to the reaction flask at 0 °C, and the reaction temperature was maintained at 0 °C for 30 min. The reaction mixture was then warmed slowly to room temperature, stirred for an additional 3 h, and quenched with a saturated NaHCO3 solution (30 mL). The crude product was then extracted, and the organic layer was washed with H2O, dried over anhydrous Na2SO4, filtered, and concentrated. The residue was purified by flash column chromatography (ethyl acetate/hexane 1:8) to give compound 2 as a colorless oil; yield: 4.1 g (97%). IR (KBr pellet), cm−1, ν: (B–H, o-carborane) 2602. Cell viability assay (MTT assay). HeLa cells were seeded into 96-well plates by adding 100 μL per well of a 3 × 10⁴ cells/mL suspension and cultured for 24 h; the original culture medium was then removed, followed by the addition of 200 μL of the prepared compounds 4, 5, 6, 7, 8 and BPA (l-boronophenylalanine). Each concentration was tested in 4 wells, and the wells around the edge of the 96-well plates were sealed with PBS as the negative control; the blank control group lacked the compounds. After 24 h, 20 μL of MTT solution was added to each well and cultured for 4 h. Subsequently, the medium was removed by suction, 150 μL of DMSO was added to each well, and the plate was shaken for 10 min. The OD of each well was determined at 490 nm, and the inhibition rate at each concentration was calculated as: inhibition rate = (control OD value − treated OD value)/control OD value × 100%. Finally, the IC50 value of each sample was calculated using the related software. Results and discussion This paper reports the hydrophilic functionalization of the ortho-carboranylbenzyloxime moiety with groups such as alkylmorpholine, alkylpiperidine, phenoxyalkyl, and pyridine attached through carbon–oxygen chemical bonds. These compounds have higher solubility in polar solvents and increased boron uptake in tumor cells, which is relevant for drug evaluation. The major requirements for a BNCT agent are high water solubility, high boron uptake, and low cytotoxicity. The HeLa cervical carcinoma cells were treated with the candidate compounds 4–8 for 2 days, and the cell viability was determined by an MTT assay (Scheme 1: preparation of (Z,Z′)-1,1′-(4-carboranyldimethyl)-bis(2-methoxyphenylethan-1-oxime); Scheme 2: preparation of (Z,Z′)-1,1′-(4-carboranyldimethyl)-bis(hydrophilic functional) derivatives (4–8)). Compounds 4–8 exhibited boron uptake in the range of 0.106–0.520 ppm (Table 1), and the cytotoxicity was in the range of 1.134–2.516 µM, as shown in Fig. 2. In particular, compounds 7 and 8 showed high boron uptake in HeLa cells, and both compounds had higher cytotoxicity than BPA (l-boronophenylalanine). Morpholine and piperidine are simple six-membered nitrogen- and oxygen-containing heterocycles whose incorporation improves water solubility and bioactivity.
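As an illustration of the MTT analysis described above, the sketch below computes the inhibition rate and estimates an IC50 by fitting a four-parameter logistic curve; the OD readings and concentrations are invented placeholders, and the logistic fit is a common choice rather than the unspecified "related software" used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def inhibition_rate(od_control, od_treated):
    """Inhibition rate (%) = (control OD - treated OD) / control OD * 100."""
    return (od_control - od_treated) / od_control * 100.0

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical concentrations (µM) and mean OD readings at 490 nm
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
od_treated = np.array([0.82, 0.74, 0.60, 0.45, 0.33, 0.26])
od_control = 0.90

inhib = inhibition_rate(od_control, od_treated)
params, _ = curve_fit(four_pl, conc, inhib, p0=[0.0, 100.0, 1.5, 1.0], maxfev=10000)
print(f"estimated IC50 ~ {params[2]:.2f} µM (illustrative)")
```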
Morpholine and piperidine are used in the preparation of pharmaceutical drugs for their anti-inflammatory, anticancer, and antiviral activities [24–28]. Conclusion In conclusion, a series of ortho-carborane-substituted bipolar-function derivatives, bearing alkyl pyridine, alkyl phenoxide, alkyl morpholine, and alkyl piperidine groups, was synthesized. The coupling of the aryl-oxime with the chain functional groups proceeded successfully, allowing the introduction of an ortho-carborane moiety into the molecules, which could then be further substituted in four steps to give the final compounds in high yield. The biological activity of the synthesized compounds was assayed in HeLa cells. The cyclic alkyl derivatives of ortho-carborane containing oxime groups, compounds 7 and 8, exhibited high boron uptake and higher cytotoxicity than BPA (l-boronophenylalanine). This resulted in carborane compounds with improved water solubility as BNCT agents. The knowledge gained from the modified bipolar groups could facilitate both drug selection and evaluation.
1,710.8
2018-06-29T00:00:00.000
[ "Chemistry" ]
Sensitivity of Anomalous Quartic Gauge Couplings via $Z\gamma\gamma$ Production at Future hadron-hadron Colliders Triple gauge boson production provides a promising opportunity to probe the anomalous quartic gauge couplings in understanding the details of electroweak symmetry breaking at future hadron-hadron collider facilities with increasing center of mass energy and luminosity. In this paper, we investigate the sensitivities of dimension-8 anomalous couplings related to the $ZZ\gamma\gamma$ and $Z\gamma\gamma\gamma$ quartic vertices, defined in the effective field theory framework, via $pp\to Z\gamma\gamma$ signal process with Z-boson decaying to charged leptons at the high luminosity phase of LHC (HL-LHC) and future facilities, namely the High Energy LHC (HE-LHC) and Future Circular hadron-hadron collider (FCC-hh). We analyzed the signal and relevant backgrounds via a cut based method with Monte Carlo event sampling where the detector responses of three hadron collider facilities, the center-of-mass energies of 14, 27 and 100 TeV with an integrated luminosities of 3, 15 and 30 ab$^{-1}$ are considered for the HL-LHC, HE-LHC and FCC-hh, respectively. The reconstructed 4-body invariant mass of $l^+l^-\gamma\gamma$ system is used to constrain the anomalous quartic gauge coupling parameters under the hypothesis of absence of anomalies in triple gauge couplings. Our results indicate that the sensitivity on anomalous quartic couplings $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ ($f_{T0}/\Lambda^{4}$, $f_{T1}/\Lambda^{4}$ and $f_{T2}/\Lambda^{4}$) at 95$\%$ C.L. for FCC-hh with $L_{int}$ = 30 ab$^{-1}$ without systematic errors are two (one) order better than the current experimental limits. Considering a realistic systematic uncertainty such as 10$\%$ from possible experimental sources, the sensitivity of all anomalous quartic couplings gets worsen by about 1.2$\%$, 1.7$\%$ and 1.5$\%$ compared to those without systematic uncertainty for HL-LHC, HE-LHC and FCC-hh, respectively. I. INTRODUCTION The Standard Model (SM) puzzle was completed with the simultaneous discovery of the scalar Higgs boson, predicted theoretically in the SM, at the CERN Large Hadron Collider (LHC) by both ATLAS and CMS collaborations [1,2]. With the discovery of this particle, the mechanism of electroweak symmetry breaking (EWSB) has become more important and still continues to be investigated. The self-interaction of the triple and quartic vector boson couplings is defined by the non-Abelian structure of the ElectroWeak (EW) sector within the framework of the SM. Any deviation in the couplings predicted by the EW sector of the SM is not observed yet with the precision measurements. While the experimental results are consistent with the couplings of W ± to Z boson, there is no experimental evidence of Z bosons coupling to photons. Therefore, studying of triple and quartic couplings can either confirm the SM and the spontaneous symmetry breaking mechanism or provide clues for the new physics Beyond Standard Model (BSM). Anomalous triple and quartic gauge boson couplings are parametrized by higher-dimensional operators in the Effective Field Theory (EFT) that can be explained in a model independent way of contribution of the new physics in the BSM. The anomalous triple gauge couplings are modified by integrating out heavy fields whereas the anomalous Quartic Gauge Couplings (aQGC) can be related to low energy limits of heavy state exchange. 
In this scenario, the SU (2) L U (1) Y is realized linearly and the lowest order Quartic Gauge Couplings are given by the dimension-eight operators [3,4]. These operators are so-called genuine QGC operators which generate the QGC without having TGC associated with them. Many experimental and phenomenological studies have been carried out and revealed constraints about aQGC. Both vector-boson scattering processes (i.e. ZZjj and Zγjj process ) and triboson (i.e, Zγγ production) production are directly sensitive to to the quartic ZZγγ and Zγγγ vertices. The new era starting with the novel machine configuration of the LHC and beyond aims to decrease the statistical error by increasing center of mass energy and luminosity in the measurements of the Higgs boson properties as well as finding clues to explain the physics beyond SM. With the configurations of beam parameters and hardware, the upgrade project HL-LHC will achieve an approximately 250 fb −1 per year to reach a target integrated luminosity of 3000 fb −1 at 7.0 TeV nominal beam energy of the LHC in a total of 12 years [67]. The other considered post-LHC hadron collider which will be installed in existing LHC tunnel is HE-LHC that is designed to operate at √ s= 27 TeV center-of-mass energy with an integrated luminosity of at least a factor of 5 larger than the HL-LHC [68]. As stated in the Update of the European Strategy for Particle Physics by the European Strategy Group, it is recommended to investigate the technical and financial feasibility of a future hadron collider at CERN with a centre-of-mass energy of at least 100 TeV. The future project currently under consideration by CERN which comes to fore with infrastructure and technology as well as the physics opportunities is the Future Circular Collider (FCC) Study [69]. The goal of our study is to investigate the effects of anomalous quartic gauge couplings on ZZγγ and Zγγγ vertices via pp → Zγγ process where Z boson subsequently decays to e or µ pairs at HL-LHC, HE-LHC and FCC-hh. The rest of the paper is organized as follows. A brief review of theoretical framework that discusses the operators in EFT Lagrangian is introduced in Section II. The event generation tools as well as the detail of the analysis to find the optimum cuts for separating signal events from different source of backgrounds is discussed in Section III. In section IV, we give the detail of method to obtain sensitivity bounds on anomalous quartic gauge couplings, and then determine them with an integrated luminosity L int = 3 ab −1 , 15 ab −1 , 30 ab −1 for HL-LHC, HE-LHC and FCC-hh, respectively. Finally, we summarize our result and compare obtained limits to the current experimental results in Section V. COUPLINGS Although there is no contribution of the quartic gauge-boson couplings of the ZZγγ and Zγγγ vertices to the Zγγ production in the SM, new physics effects in the cross section of Zγγ production can be searched with high-dimensional effective operators which describe the anomalous quarticgauge boson couplings without triple gauge-boson couplings. These neutral aQGCs couplings are modeled by either linear or non-linear representations using an EFT [70][71][72]. In the non-linear representation, the electroweak symmetry breaking is due to no fundamental Higgs scalar whereas in the linear representation, it can be broken by the conventional SM Higgs mechanism. With the discovery of the Higgs boson at the LHC, it becomes important to study the anomalous quartic gauge couplings based on linear representation. 
In this representation, the parity-conserving and charge-conjugation-invariant effective Lagrangian includes the dimension-eight effective operators, assuming the SU(2)×U(1) symmetry of the EW gauge field with a Higgs boson belonging to an SU(2)$_L$ doublet. In this approach, the lowest dimension of the operators that lead to quartic interactions but do not induce two- or three-weak-gauge-boson vertices is eight. Therefore, the effective Lagrangian containing the three classes of dimension-eight operators can be written as $\mathcal{L}_{eff} = \mathcal{L}_{SM} + \sum_{j}\frac{f_{S,j}}{\Lambda^{4}}\mathcal{O}_{S,j} + \sum_{j}\frac{f_{M,j}}{\Lambda^{4}}\mathcal{O}_{M,j} + \sum_{j}\frac{f_{T,j}}{\Lambda^{4}}\mathcal{O}_{T,j}$, where $\Lambda$ is the scale of new physics, and $f_{S,j}$, $f_{M,j}$ and $f_{T,j}$ represent the coefficients of the relevant effective operators. These coefficients are zero in the SM prediction. The expanded form of these operators and a complete list of the quartic vertices modified by these operators are given in Appendix A. Among the $f_{M,x}$ and $f_{T,x}$ operators that affect the ZZγγ and Zγγγ vertices, the $f_{M,x}$ operators in the production of Zγγ at future hadron-hadron colliders with high center-of-mass energies and luminosities were examined and their limit values were predicted in Ref. [29]. Therefore, in this study we focus on the five coefficients $f_{T0}/\Lambda^{4}$, $f_{T1}/\Lambda^{4}$, $f_{T2}/\Lambda^{4}$, $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ of the operators containing four field strength tensors. In particular, $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ give rise only to neutral anomalous quartic gauge vertices. The effective field theory is only valid below the new physics scale, where unitarity violation does not occur. However, high-dimensional operators with nonzero aQGC can lead to a scattering amplitude that violates unitarity at sufficiently high energy values, called the unitarity bound. The unitarity bound for the dimension-8 operators is determined by using a dipole form factor that ensures unitarity at high energies, $FF = \left(1 + \hat{s}/\Lambda_{FF}^{2}\right)^{-p}$, where $\hat{s}$ is the maximum center-of-mass energy, $\Lambda_{FF}$ is the energy scale of the form factor and $p$ is the form-factor exponent. The maximal form factor scale $\Lambda_{FF}$ is calculated with the form factor tool of VBFNLO 2.7.1 [73] for a given input of anomalous quartic gauge boson coupling parameters. The VBFNLO utility determines the form factor using the amplitudes of on-shell VV scattering processes and computes the zeroth partial wave of the amplitude. The real part of the zeroth partial wave must be below 0.5, which is called the unitarity criterion. All channels with the same electrical charge Q in V V → V V scattering (V = W/Z/γ) are combined, in addition to an individual check on each channel of the V V system. The Unitarity Violation (UV) bounds calculated with the VBFNLO form factor tool as a function of the higher-dimensional operators considered in our study are given in Fig. 1; unitarity is safe in the region below the line for each coefficient. The limit values with no unitarization restriction ($\Lambda_{FF}$ = ∞) on the dimension-8 aQGC obtained by the ATLAS and CMS collaborations, setting all other anomalous couplings to zero, are summarized in Table I (Table I lists the limits on $f_{T9}/\Lambda^{4}$ and related couplings obtained from the analysis of Zγγ production as well as different production channels by the ATLAS and CMS collaborations). The current best limits obtained by the CMS collaboration for different production channels on these couplings are also presented in the last column of Table I. The best limits were obtained from production in association with two jets [23] at a center-of-mass energy of 13 TeV with an integrated luminosity of 35.9 fb$^{-1}$.
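As a numerical illustration of the dipole form-factor suppression discussed above, the sketch below applies a generic form factor to event weights; the exponent p = 2, the $\Lambda_{FF}$ value, and the restriction to the purely anomalous (squared) contribution are illustrative assumptions, not the VBFNLO implementation.

```python
import numpy as np

def dipole_form_factor(s_hat, lambda_ff, p=2):
    """Generic dipole form factor FF = (1 + s_hat / Lambda_FF^2)^(-p).
    s_hat and lambda_ff^2 must be expressed in the same energy-squared units."""
    return (1.0 + s_hat / lambda_ff**2) ** (-p)

def reweight_aqgc_events(weights, s_hat_values, lambda_ff):
    """Suppress the purely anomalous (coupling-squared) contribution of each
    event by FF(s_hat)^2, since the form factor multiplies the coupling in the amplitude."""
    ff = dipole_form_factor(np.asarray(s_hat_values), lambda_ff)
    return np.asarray(weights) * ff**2

# Placeholder example: three events with l+l-gamma gamma masses of 0.5, 1.5 and 3 TeV
m_llaa = np.array([500.0, 1500.0, 3000.0])          # GeV
print(reweight_aqgc_events(np.ones(3), m_llaa**2, lambda_ff=1500.0))
```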
The CMS collaboration also reported limits on $f_{T8}/\Lambda^{4}$ from the production of two jets in association with two Z bosons [24] and on $f_{T9}/\Lambda^{4}$ from the electroweak production of a Z boson, a photon and two forward jets [28] at a center-of-mass energy of 13 TeV with an integrated luminosity of 137 fb$^{-1}$. III. EVENT SELECTION AND DETAILS OF ANALYSIS In this section we give the details of the analysis of the effects of the dimension-8 operators on the anomalous quartic gauge couplings; the cross section is examined with respect to one coupling at a time while the others are kept at zero. The sensitivity of the cross sections to the $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ anomalous quartic couplings is, for each collider option, more significant than for the other couplings, as seen in the second row of Fig. 3. Therefore, we expect to obtain better limits on the $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ couplings with $Z\gamma\gamma$ production. The general expression for the squared amplitude in the EFT regime for the process considered can be written as $|M|^{2} = |M_{SM}|^{2} + 2\,\mathrm{Re}(M_{SM}M^{*}_{dim8}) + |M_{dim8}|^{2}$, where $|M_{SM}|^{2}$, $(M_{SM}M^{*}_{dim8})$ and $|M_{dim8}|^{2}$ are the SM contribution, the interference of the SM amplitude with the higher-dimensional operators, and the square of the new physics contribution, respectively. In order to show the effectiveness of the form factor, the cross sections at LO without and with $\Lambda_{FF}$ = 1.5 and 2 TeV are presented for all three collider options in Fig. 4. It can be clearly seen in Fig. 4 that the squared contributions of the new physics amplitudes dominate over the interference contributions of the SM amplitude with the high-dimensional operators when the UV limit is not applied. However, if the new physics energy scale is heavy (i.e., $\Lambda_{FF}$ = 1.5 and 2 TeV or higher), the largest new physics contribution to the $pp \to Z\gamma\gamma$ process is expected from the interference between the SM and the dimension-eight operators, as seen from Fig. 4. For further analysis including the detector response, we generate 600k events for all background and signal processes, scanning each coupling; jets are reconstructed [80] with a cone radius set as $\Delta R$ = 0.4 (0.2) and $p^{j}_{T}$ > 15 (25) GeV for the HL-LHC and HE-LHC (FCC-hh) colliders. Our main focus is to see the effects of the anomalous quartic gauge boson couplings via the $pp \to Z\gamma\gamma$ signal process where the Z boson subsequently decays to e or µ pairs. Therefore, events with two isolated photons and one pair of same-flavor, oppositely charged leptons (electrons or muons) are selected for further analysis (Cut-0). The electron and muon channels are combined to increase the sensitivity even further. The signal includes the nonzero effective couplings and the SM contribution as well as their interference. "sm" stands for the SM background process with the same final state as the signal process in our analysis. The main background processes to the selected $l^{+}l^{-}\gamma\gamma$ sample of events may originate from $Z\gamma j$ and $Zjj$ production with a hadronic jet misidentified as a photon. Such misidentifications generally arise from jets hadronizing into a neutral meson, which carries away most of the jet energy. Photons that carry a large fraction of the jet energy can pass the reconstructed-photon selection; we therefore apply kinematic requirements on the final-state charged leptons and photons, as given by Cut-1 and Cut-2 in Table II for each collider option. Since the signal event contains two photons, we can safely suppress the event contamination from the other background processes and avoid infrared divergences by applying minimum transverse momentum and pseudo-rapidity cuts to the leading and sub-leading photons. Furthermore, the normalized distributions of the separation in the pseudorapidity-azimuthal angle plane between the leading and sub-leading photons ($\Delta R(\gamma_{1},\gamma_{2})$), between the leading photon and the leading charged lepton ($\Delta R(\gamma_{1},l_{1})$), and between the leading and sub-leading charged leptons ($\Delta R(l_{1},l_{2})$), as well as the invariant mass of the oppositely charged lepton pair, are given in Fig. 7 (Fig. 10) for HL-LHC (FCC-hh). (Figure: normalized distributions of the leading leptons and the sub-leading photon ($\gamma_{2}$) after the event selection (Cut-0) for the signals and all relevant background processes at FCC-hh with $L_{int}$ = 30 ab$^{-1}$.) To have photons and charged leptons that are well separated in phase space and can be identified as separate objects in the detector, we require the separations $\Delta R(\gamma_{1},\gamma_{2})$ > 0.4, $\Delta R(\gamma_{1},l_{1})$ > 0.4 and $\Delta R(l_{1},l_{2})$ < 1.4 (Cut-3). We also impose an invariant mass window cut around the Z boson mass peak, 81 GeV < $M_{l^{+}l^{-}}$ < 101 GeV (Cut-4), to suppress the virtual photon contribution to the di-lepton system. Since requiring a high transverse momentum photon eliminates the fake backgrounds, we plot the transverse momentum of the leading photon for HL-LHC, HE-LHC and FCC-hh in Fig. 11 (left to right) to define a region which is sensitive to the aQGC. From these normalized plots we apply a cut on $p^{\gamma_{1}}_{T}$ of 160 GeV, 250 GeV and 300 GeV for each collider option, respectively (Cut-5). The flow of cuts is summarized in Table II for each of the hadron-hadron colliders that we analyzed. The normalized numbers of events after the applied cuts are presented in Table III for HL-LHC, HE-LHC and FCC-hh, respectively. The distributions of the reconstructed 4-body invariant mass of the $l^{+}l^{-}\gamma\gamma$ system for the HL-LHC, HE-LHC and FCC-hh options are given in Fig. 12 (top to bottom, respectively). As seen from Fig. 12, the applied UV bounds impose an upper cut on the invariant mass of the $l^{+}l^{-}\gamma\gamma$ system which guarantees that the unitarity constraints are always satisfied. The numbers of events after applying the UV bounds for the signals are given in parentheses in Table III for comparison (TABLE III: the cumulative number of events after each cut). In order to obtain a continuous prediction for the anomalous quartic gauge couplings after Cut-5, a quadratic fit is performed to the number of events for each coupling ($\sum_{i}^{n_{bins}} N^{NP}_{i}$) obtained by integrating the invariant mass distribution of the $l^{+}l^{-}\gamma\gamma$ system in Fig. 12. The 95% Confidence Level (C.L.) limit on a one-dimensional aQGC parameter is obtained from a $\chi^{2}$ test, requiring $\chi^{2} \geq 3.84$, with $\chi^{2} = \sum_{i}^{n_{bins}}\left(\frac{N^{NP}_{i}-N^{B}_{i}}{N^{B}_{i}\,\Delta_{i}}\right)^{2}$, where $N^{NP}_{i}$ is the total number of events in the existence of aQGC, $N^{B}_{i}$ is the total number of events of the corresponding SM backgrounds in the ith bin, and $\Delta_{i} = \sqrt{\delta^{2}_{sys} + \frac{1}{N^{B}_{i}}}$ combines the systematic and statistical uncertainties.
We point out that, compared with earlier phenomenological studies of the same production channel used in our analysis [7,9], our obtained limits for all aQGC are one to three orders of magnitude better, as can be seen from the comparison of Table I and Table IV. Possible sources of systematic uncertainty for the pp → Zγγ process include the leading-order (LO) or next-to-leading-order (NLO) predictions [85] and higher-order EW corrections, the uncertainty in the integrated luminosity, as well as electrons and jets misidentified as photons. In our study, we focus on LO predictions and do not investigate the impact and validity of these higher-order corrections on the signal and SM background processes. Since the main purpose of this study is not to discuss the sources of systematic uncertainty in detail but to investigate the overall effect of the systematic uncertainty on the limit values of the aQGC, we consider three different scenarios of systematic uncertainty. The 95% C.L. limit values without systematic uncertainty and with three different scenarios of systematic uncertainty, $\delta_{sys}$ = 3%, 5% and 10%, for the three collider options are quoted in Table IV. The limits on the aQGC considered in this study become slightly weaker when a realistic systematic error is taken into account; e.g., comparing a 10% systematic error with no systematic error, the sensitivity of $f_{T9}/\Lambda^{4}$ worsens by about 1.2%, 1.7% and 1.5% for HL-LHC, HE-LHC and FCC-hh, respectively. We expect two (one) orders of magnitude better limits on $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ ($f_{T0}/\Lambda^{4}$, $f_{T1}/\Lambda^{4}$ and $f_{T2}/\Lambda^{4}$) related to the ZZγγ and Zγγγ vertices at 95% C.L.; the analysis has been performed with the three systematic uncertainty scenarios $\delta_{sys}$ = 3%, 5% and 10% using the $l^{+}l^{-}\gamma\gamma$ invariant mass distributions. Since $O_{T8}$ and $O_{T9}$ are the only anomalous quartic operators that give rise to aQGC containing exclusively the neutral electroweak gauge bosons, we reach remarkable sensitivity, especially on the $f_{T8}/\Lambda^{4}$ and $f_{T9}/\Lambda^{4}$ couplings, for the HE-LHC and FCC-hh options in comparison with the current experimental results, as seen from Table IV. iii) The eight operators containing field strength tensors only are as follows. A complete list of the corresponding quartic gauge boson vertices modified by the dimension-8 operators is given in Table V.
4,555.6
2021-09-26T00:00:00.000
[ "Physics" ]
A CT-based transfer learning approach to predict NSCLC recurrence: The added-value of peritumoral region Non-small cell lung cancer (NSCLC) represents 85% of all new lung cancer diagnoses and presents a high recurrence rate after surgery. Thus, an accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to assign high-risk patients to more aggressive medical treatments. In this manuscript, we apply a transfer learning approach to predict recurrence in NSCLC patients, exploiting only data acquired during the screening phase. Particularly, we used a public radiogenomic dataset of NSCLC patients having a primary tumor CT image and clinical information. Starting from the CT slice containing the tumor with maximum area, we considered three different dilatation sizes to identify three Regions of Interest (ROIs): CROP (without dilation), CROP 10 and CROP 20. Then, from each ROI, we extracted radiomic features by means of different pre-trained CNNs. The latter were combined with clinical information; thus, we trained a Support Vector Machine classifier to predict NSCLC recurrence. The classification performances of the devised models were finally evaluated on both the hold-out training and hold-out test sets, into which the original sample had previously been divided. The experimental results showed that the model obtained by analyzing CROP 20 images, which are the ROIs containing more peritumoral area, achieved the best performances on both the hold-out training set, with an AUC of 0.73, an Accuracy of 0.61, a Sensitivity of 0.63, and a Specificity of 0.60, and on the hold-out test set, with an AUC value of 0.83, an Accuracy value of 0.79, a Sensitivity value of 0.80, and a Specificity value of 0.78. The proposed model represents a promising procedure for the early prediction of recurrence risk in NSCLC patients. Introduction Lung cancer is one of the most aggressive cancer types, with a 5-year relative survival rate of only 19%. Non-small cell lung cancer (NSCLC) accounts for 85% of lung cancer cases and is one of the most fatal cancers worldwide [1]. Treatment approaches for NSCLC patients differ depending on stage, histology, genetic alterations, and the patient's condition. Locally advanced NSCLC patients are non-surgical candidates and are currently treated with chemoradiotherapy, possibly followed by immunotherapy. On the other hand, for early stages of NSCLC, surgical resection and subsequent adjuvant chemotherapy are recommended. Though surgical resection remains the only potentially curative treatment for early-stage NSCLC, 30-55% of these patients develop a post-resection tumor recurrence within the first 5 years. Several studies have demonstrated that patients' outcomes after surgical resection are often affected by an underestimation of the tumor stage, due to the presence of occult micro-metastatic cancer cells undetectable by standard staging methods, such as modern diagnostic imaging. Also, in some cases, surgery itself could lead to the dissemination of cancer cells [2]. Thus, an early identification of which patients are more prone to develop NSCLC recurrence is crucial for defining personalized treatment approaches and improving patients' prognosis. Indeed, the application of artificial intelligence techniques could be fundamental in developing tools able to support clinicians in defining personalized therapeutic surveillance plans, after identifying patients at high risk of relapse.
To this end, herein we propose a radiomic-based model for predicting NSCLC recurrence exploiting features extracted from pre-treatment CT images through pre-trained Convolutional Neural Networks (CNNs). Pre-trained CNNs implement a transfer learning approach, which allows radiomic features to be extracted from images according to what the networks previously learned during training on a very large set (millions) of images of a different nature. Thus, the knowledge acquired by the network during this training phase, such as dots and edges, as well as high-level features like shapes and objects from raw images, was then transferred and applied to the CT images of our sample patients [10,[42][43][44][45][46]. For our purpose, we used a public database containing both CT images and clinical data of NSCLC patients, and we analyzed them jointly to develop a suitable supervised machine learning model [47]. Specifically, we compared the results obtained using multiple state-of-the-art pre-trained CNNs for radiomic feature extraction, and we evaluated the performances achieved by examining different regions of interest (ROIs) at different dilatations, to investigate the predictive power of the peritumoral region, namely, the tissue connecting the tumor and the normal tissue. This manuscript is organized as follows: in Section 2, Materials and Methods, we introduce the dataset used, the feature extraction procedure based on a transfer learning approach, and the designed learning model; in Sections 3 and 4, Results and Discussion, we present and discuss the computed performances, comparing our study with the state of the art on NSCLC recurrence prediction. Experimental dataset In this work, we used a public radiogenomics dataset of NSCLC available in the Cancer Imaging Archive (TCIA) [47]. Both imaging and clinical data had been de-identified by TCIA and approved by the Institutional Review Board of the TCIA hosting institution. Ethical approval was reviewed and approved under Washington University Institutional Review Board protocols. Written informed consent was obtained from all individual participants involved. The whole database consisted of 211 subjects divided into two cohorts. Since only the R01 cohort included the segmentations of the axial CT images, for this preliminary study we focused on that cohort. Besides, since the tumor segmentation masks were not available for 18 patients belonging to the R01 cohort, the final number of patients involved in this study was equal to 144, of which 40 (27.78%) had a recurrence event within 8 years from the first tumor diagnosis. For each patient, a CT image in DICOM format, as well as clinical data, were provided. Concerning CT images, these were acquired by preoperative CT scans with a slice thickness of 0.625-3 mm and an X-ray tube current of 124-699 mA at 80-140 kVp. The related segmentations were defined on the axial CT image series by thoracic radiologists with more than 5 years of experience and adjusted using the ePAD software [47]. Feature extraction by transfer learning approach For each patient, the first step consisted in automatically identifying, among all segmentation masks, the mask with the largest tumor area, that is, the segmentation mask characterized by the greatest number of pixels having an intensity value equal to 255, i.e., white pixels.
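The slice-selection step just described reduces to a pixel count over the binary masks. A minimal Python/NumPy sketch of this step is given below; the variable `masks`, holding one 2-D uint8 mask per axial slice, is a hypothetical input, and the original analysis was reportedly performed in MATLAB, so this is purely illustrative.

```python
import numpy as np

def largest_tumor_slice(masks):
    """Return the index of the segmentation mask with the largest tumor area,
    i.e. the slice whose mask contains the most white (255) pixels."""
    areas = [int(np.sum(m == 255)) for m in masks]
    return int(np.argmax(areas))

# masks: list of 2-D uint8 arrays, one per axial CT slice (hypothetical input)
# best_slice = largest_tumor_slice(masks)
```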
Segmentation masks, which were generated by the authors of the public database, were obtained using an unpublished automatic segmentation algorithm based on semantic annotations ascribed by an expert radiologist, and then reviewed by two thoracic radiologists with more than 5 years of experience, who edited them as necessary [47]. After identifying the corresponding CT slice, we defined a bounding box around the extremal points of the tumour in the four planar x-y dimensions. We then cropped the corresponding CT slice considering three different dilatation sizes: 0 (no dilatation), 10 and 20 additional pixels along the four extremal points. In this way, for each patient, we identified the following Regions of Interest (ROIs): CROP (with no dilation), CROP 10 (obtained by adding 10 pixels) and CROP 20 (obtained by adding 20 pixels). The whole ROI extraction procedure is depicted in Fig 1. Next, as depicted in Fig 2A, from each ROI we extracted radiomic features using three pre-trained convolutional neural networks (CNNs), namely, AlexNET, ResNet152V2 and InceptionV3, after resizing all ROIs to the specific dimension required by each network. The pre-trained CNNs had been trained on more than a million images belonging to a subset of the ImageNet database [50], and can classify images into 1000 object categories. Pre-trained networks are mainly characterized by their accuracy and their relative running time; therefore, choosing the pre-trained CNN to be implemented means finding a well-balanced compromise between these characteristics. Accordingly, the pre-trained CNNs we selected represent three different well-balanced compromises between accuracy and relative running time [51]. Concerning AlexNET [44], which requires input images resized to 227×227 pixels, we extracted features from the pool2 layer of the network architecture, which corresponds to the second pooling layer after the second convolutional layer of the network. The pool2 layer has an output with dimensions of 13×13×256, which is flattened into a single 43264-length vector. As a consequence, the number of extracted features is 43264 in total for each ROI of every patient. Concerning ResNet152V2 [52], which requires input images resized to 224×224 pixels, we extracted features using the max_pooling2d layer, which also in this case corresponds to the second pooling layer after the second convolutional layer and has an output with dimensions of 28×28×256, flattened into a single 200704-length vector. Thus, for each ROI of every patient the number of extracted features is equal to 200704. Finally, we extracted features from the max_pooling2d layer, the second one after the second convolutional layer of the InceptionV3 network architecture [53], after resizing images to 299×299 pixels. This max_pooling2d layer has an output with dimensions of 35×35×192, which is flattened into a single 235200-length vector. As a consequence, the number of extracted features is 235200 in total for each ROI of every patient. So, for each pre-trained network, we exploited the second pooling layer for feature extraction. This is because it is one of the initial layers of the network and returns low-level features, i.e., representations of local details of an image, such as edges, dots, and curves. These details would otherwise be obscured considering only the global information extracted from later layers of the network.
Additionally, we extracted features from a pooling layer rather than a convolutional layer to preserve the invariance to truncation, occlusion, and translation [54]. All the analysis steps were performed using MATLAB R2022a (Mathworks, Inc., Natick, MA, USA). Learning model Using both the clinical data and the radiomic features extracted in the previous step, our aim was to devise a model for predicting the recurrence event in NSCLC. The flowchart of the implemented method is shown in Fig 2. After the feature extraction procedure previously described, we performed a stratified random sampling of the overall dataset, in order to split the 144 NSCLC patients into a hold-out training set, containing 80% of the sample, and a hold-out test set, containing 20% of the sample. As a consequence, the hold-out training set consisted of 116 patients, of which 81 were control cases and 35 recurrence cases, while the hold-out test set consisted of 28 patients, of which 23 were control cases and 5 recurrences. We then developed nine learning models that discriminate between recurrence and non-recurrence patients, exploiting normalized features extracted by means of the three different pre-trained CNNs from the CROP, the CROP 10 and the CROP 20 images, in turn. For each devised model, we first retained only the features whose variance was not equal to zero, and then we performed a feature selection procedure on the hold-out training set (Fig 2B). Thus, we recorded the features with an Area Under the Curve (AUC) value greater than 0.7 over 5 rounds of a fine-tuning procedure. Specifically, for each round, the hold-out training set was partitioned into 10 smaller sets, and each of these sets was removed in turn for evaluating the features' predictive power. At the end of this iterative procedure, we selected the subset of radiomic features that showed an AUC above this threshold in at least 40% of the evaluations for AlexNET, 60% for ResNet152V2 and 100% for InceptionV3. These thresholds were found to be the optimal ones after evaluating the classification performances achieved by our model for all possible frequencies. Though these frequencies differ from each other due to the different architectures of the employed networks, they represent the best trade-off between high performances and low-dimensional datasets. Interim results are not reported so as not to burden the discussion. According to this feature reduction step, we obtained a subset of significant features for each applied CNN. Then, after estimating the missing clinical data of the database by means of the MissForest imputation technique [55], we combined each radiomic feature subset with the clinical data, in order to train an SVM classifier on the hold-out training set within a 10-fold cross-validation scheme over 5 rounds, as depicted in Fig 2C. SVM is a supervised machine learning model which finds the hyperplane that maximizes the margin between the data points of the two classes, through a specific kernel function. For our study, the linear kernel was adopted. Finally, we evaluated all the developed classification models on the hold-out test set using the optimal feature subset identified on the hold-out training set (external validation in Fig 2). For both the hold-out training and the hold-out test set, we evaluated the performances of all models in terms of AUC, as well as Accuracy (Acc), Sensitivity (Sens), and Specificity (Spe), which are metrics calculated by identifying the optimal threshold by means of Youden's index [56].
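As a rough illustration of the feature selection and classification steps just described, the following Python sketch counts how often each feature's single-feature AUC exceeds 0.7 across repeated stratified folds and then trains a linear SVM on the retained features. It is a simplified reading of the procedure, not the authors' MATLAB implementation; `X`, `y` and the frequency threshold are placeholders.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def select_features_by_auc(X, y, auc_thr=0.7, freq_thr=0.4, rounds=5, folds=10, seed=0):
    """Count, for every feature, how often its single-feature AUC exceeds
    `auc_thr` across repeated stratified folds, and keep the features that
    pass in at least `freq_thr` of the evaluations (simplified selection)."""
    n_features = X.shape[1]
    counts = np.zeros(n_features)
    total = 0
    for r in range(rounds):
        skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed + r)
        for fold_idx, _ in skf.split(X, y):
            total += 1
            for j in range(n_features):
                auc = roc_auc_score(y[fold_idx], X[fold_idx, j])
                counts[j] += max(auc, 1.0 - auc) > auc_thr
    return np.where(counts / total >= freq_thr)[0]

# Hypothetical usage: X (patients x features) and y (recurrence labels) as NumPy arrays.
# nz = VarianceThreshold(0.0).fit(X).get_support(indices=True)   # drop zero-variance features
# selected = nz[select_features_by_auc(X[:, nz], y)]
# clf = SVC(kernel="linear").fit(X[:, selected], y)               # linear kernel, as in the paper
```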
Table 1 summarizes the characteristics of the analyzed sample. For Age at Histological Diagnosis, Weight, and Pack Years, the median, first quartile (q1) and third quartile (q3) are reported. For the other clinical features, the absolute and relative frequencies are reported. Results The classification performances achieved by all models on CROP, CROP 10 and CROP 20 images are summarized in Tables 2-4, respectively. Specifically, each table includes the performances obtained on both the hold-out training and the hold-out test sets, along with the number of radiomic features selected within the feature selection procedure and exploited for training the related model. Concerning CROP images, Table 2 shows that the best performances on the hold-out training set were reached with 8 residual radiomic features extracted by AlexNET: AUC = 0.73, Acc = 0.61, Sens = 0.63, and Spe = 0.60. On the other hand, the best performances on the hold-out test set were obtained involving 27 residual features extracted by InceptionV3: AUC = 0.68, Acc = 0.68, Sens = 0.80, and Spe = 0.65. Considering CROP 10 images, Table 3 reveals that the best performances on the hold-out training set were reached exploiting 11 residual radiomic features extracted by ResNet152V2: AUC = 0.80, Acc = 0.78, Sens = 0.66, and Spe = 0.84. However, on the hold-out test set, the best performances were obtained by analyzing 4 residual radiomic features extracted via AlexNET: AUC = 0.79, Acc = 0.82, Sens = 0.80, and Spe = 0.83. Referring to InceptionV3, its performances were stable on both the hold-out training and hold-out test sets. Finally, as far as CROP 20 images are concerned, Table 4 shows that the best performances on the hold-out training set were achieved involving 17 residual radiomic features extracted by ResNet152V2: AUC = 0.78, Acc = 0.72, Sens = 0.83, and Spe = 0.68. These performances decreased on the hold-out test set in terms of Sensitivity (0.60). In fact, the best performances on the hold-out test set were reached with 7 residual radiomic features extracted by AlexNET: AUC = 0.83, Acc = 0.79, Sens = 0.80, and Spe = 0.78. Comparing the results obtained on the hold-out test set for the three different CROPs, the performances achieved on CROP 20 images were the best ones. For each patient, further ROIs were also identified by exploring other dilatation sizes, such as 30, 40, 50 and 60 additional pixels along the four extremal points (S1 Fig). However, the classification performances achieved by our models on all these images decreased significantly, probably because the considered zone of peritumoral tissue was too large and could also include surrounding regions, such as the backbone, which could act as confounding elements for model learning. Discussion An early and accurate prediction of recurrence risk in NSCLC patients at diagnosis could be essential to promptly designate at-risk patients to more aggressive medical therapies and, on the other hand, to spare patients not at risk from unnecessary invasive treatments [1]. For this purpose, it could be important to design a model able to assess the recurrence risk of NSCLC patients at diagnosis. Nowadays, in clinical practice, CT imaging represents the gold standard for NSCLC diagnosis. Therefore, the goal of this study is to define a model able to predict the NSCLC recurrence risk exploiting both clinical data and a CT image of the primary tumor, which are both acquired during the screening phase.
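The metrics reported throughout the Results above (AUC plus Accuracy, Sensitivity and Specificity at the Youden-optimal operating point) can be computed as in the following illustrative sketch; `y_true` and `scores` are hypothetical arrays of binary labels and classifier scores, and the sketch is not tied to the authors' implementation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def youden_metrics(y_true, scores):
    """Compute AUC plus Accuracy/Sensitivity/Specificity at the threshold
    maximizing Youden's J = sensitivity + specificity - 1."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    best = np.argmax(tpr - fpr)                 # index of the Youden-optimal point
    pred = (scores >= thresholds[best]).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    return {
        "AUC": roc_auc_score(y_true, scores),
        "Accuracy": (tp + tn) / len(y_true),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "threshold": float(thresholds[best]),
    }
```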
We analyzed a public radiogenomic database, from which a sub-cohort of 144 patients with available CT images, tumor segmentation masks and clinical data was selected [47]. In order to evaluate the information contained both in the tumor region and in the peritumoral area, once the image with the largest tumor was identified, we cropped the image with dilatation sizes 0, 10 and 20 and extracted radiomic features via CNNs. The entire sub-cohort was divided into a hold-out training dataset and a hold-out test dataset corresponding to 80% and 20% of the entire sample, respectively. Then, after reducing the radiomic features and combining them with clinical information, a linear SVM classifier was trained and the performances on the hold-out training set and the hold-out test set were computed. We explored various CNNs, namely, AlexNET, ResNET152V2, and InceptionV3, and then we compared the related performances after suitably reducing the extracted features. Our best results were obtained by investigating the predictive power of CROP 20 images, which are the images containing more peritumoral area. In particular, on the hold-out training set our model achieved an AUC equal to 0.73, an Accuracy equal to 0.61, a Sensitivity equal to 0.63, and a Specificity equal to 0.60. Even more promising performances were achieved on the hold-out test set, with an AUC of 0.83, an Accuracy of 0.79, a Sensitivity of 0.80, and a Specificity of 0.78. These results represent the best performances in terms of balance between the hold-out training and hold-out test sets. While ResNET152V2 and InceptionV3 seem to perform better on the hold-out training set, AlexNET appeared to give better performances on the independent test set. Hence, the classification performances proved partially sensitive to the pre-trained CNN choice, due to the different accuracy characterizing pre-trained networks. Indeed, choosing a pre-trained CNN to be implemented means finding a well-balanced compromise between accuracy and relative running time. Moreover, comparing these results with the ones obtained by analyzing both images without dilatation (CROP) and images containing a smaller dilatation (CROP 10), it is evident that the peritumoral region allowed us to retrieve more discriminant information for NSCLC recurrence prediction. Similarly, in a previous study by our group assessing the sentinel lymph-node status in breast cancer patients by ultrasound images of the primary tumor, we concluded that the peritumoral region was essential for accurately predicting the outcome [6]. Other dilatation sizes, such as 30, 40, 50 and 60 additional pixels, as well as intermediate dilatation sizes, were also investigated. On the one hand, the classification performances achieved on CROP 30, CROP 40, CROP 50 and CROP 60 images decreased significantly, probably because the considered zone of peritumoral tissue was too large and could also include surrounding regions, such as the backbone, which could act as confounding elements for model learning. On the other hand, intermediate dilatation sizes did not appreciably improve the classification performances. Consequently, the criterion we adopted proved the most appropriate. Our results are comparable with those obtained by Wang et al., who analyzed CT images from a cohort of 157 NSCLC patients using only handcrafted radiomic features, which are however operator dependent; in their study, they reached an Accuracy equal to 0.85 [37]. On the other hand, S. Hindocha et al.
developed a model able to predict recurrence, recurrence-free survival, and overall survival of NSCLC patients, employing only clinical features collected from a cohort of 657 patients. Considering the recurrence prediction, the authors reached AUC values equal to 0.69 and 0.72 for the validation and external datasets, respectively [38]. With respect to NSCLC recurrence studies involving features extracted by means of convolutional neural networks, P. Aonpong et al. used the same radiogenomic database analyzed in the present study to predict NSCLC recurrence by devising a genotype-guided radiomic model [33]. For their specific goal, a sub-cohort of 88 patients was considered. Their model predicted NSCLC recurrence via gene expression data extracted from CT images through CNNs and achieved an AUC of 0.77, an Accuracy of 0.83, a Sensitivity of 0.95, and a Specificity of 0.59. Besides, G. Kim et al. recently proposed an ensemble-based prediction model for NSCLC recurrence involving 326 patients, also including our dataset. They developed three neural network models trained by combining clinical data, such as tumor node stage, handcrafted radiomic features, and deep learning radiomic features [35]. The final performances of clinical, handcrafted and deep-learned features together were an AUC equal to 0.77, a Sensitivity equal to 0.80, and a Specificity equal to 0.73. The best performances obtained in our study have been compared with those available in the literature, to the best of our knowledge (Table 5). Accordingly, compared with the main state of the art, our proposal shows better results, except with reference to models using genomic information. In this regard, in our study we aimed to devise a model to predict NSCLC recurrence while purposely neglecting the genomic information provided by the clinical features EGFR and KRAS, which are clinically expensive and time-consuming to obtain. Furthermore, even though studies for predicting NSCLC recurrence involving both deep and clinical features already exist [33,35], the original aspect of our study is the analysis of CT images with different dilatation (crop) levels and different CNNs. Using a different CNN, as well as analyzing a different dilatation level, can affect the final performances of the model. In fact, our results were strongly influenced by the thickness of the peritumoral region considered, and our best performances were obtained by investigating the predictive power of CROP 20 images. Likewise, though we exploited three pre-trained CNNs characterized by a well-balanced compromise between accuracy and relative running time, performances were also influenced by network accuracy. Thus, in our future work, we will also investigate the predictive power of other pre-trained networks, such as DenseNET and Vision Transformer, as well as end-to-end models developed by training CNNs on a larger data sample. Besides, other limitations of our study concern its retrospective design and the limited size of the dataset. With a larger dataset, it could be possible to achieve higher performances and improve the model. For this purpose, in our future work we will collect a private database of NSCLC patients, also including more histopathological features of the primary tumor, along with CT images acquired during the screening phase.
Conclusion The current study proposes an artificial intelligence-based model for early prediction of recurrence risk in patients affected by NSCLC, exploiting only data acquired at diagnosis, namely, clinical variables and a primary tumor CT image. Specifically, in this study we investigated the discriminant power of different CNNs employed to automatically extract radiomic features from three different regions of interest, identified by considering different thicknesses of the peritumoral region. Despite the promising results achieved by our model when analyzing the ROI containing the maximum peritumoral area, for our future work we aim to collect a private database of NSCLC patients, including both histopathological features and a CT image of the primary tumor. Moreover, it could be interesting to include the use of Explainable Artificial Intelligence, which over the years has gained a lot of attention as a way to overcome the "black-box" nature of artificial intelligence algorithms by better understanding and explaining the choices made by these models [57].
5,370.4
2023-05-02T00:00:00.000
[ "Medicine", "Computer Science" ]
Wireless Networking-Driven Healthcare Approaches in Combating COVID-19 Since its outbreak, the coronavirus (COVID-19) pandemic has caused havoc on people's lives. All activities were paused due to the virus's spread across the continents. Researchers have been working hard to find new medication treatments for the COVID-19 pandemic. The World Health Organization (WHO) recommends that safety and self-measures play a major role in preventing the virus from spreading from one person to another. Wireless technology is playing a critical role in avoiding viral propagation. This technology mainly comprises portable devices that assist self-isolated patients in adhering to safe precautionary measures. Government officials are currently using wireless technologies to identify infected people at large gatherings. In this research, we give an overview of wireless technologies that assisted the general public and healthcare professionals in maintaining effective healthcare services during COVID-19. We also discuss the possible challenges they face for effective implementation in day-to-day life. In conclusion, wireless technologies are among the best techniques in today's age to effectively combat the pandemic. Introduction Coronavirus disease (COVID-19) is a respiratory infection that first appeared in Wuhan, China, in 2019. The World Health Organization (WHO) has labeled it a pandemic since its emergence due to widespread transmission across continents [1,2]. It affects persons of all ages, with the elderly, especially those with comorbid diseases, having a higher mortality rate. People above the age of 80 had a 12 times higher mortality risk than those between the ages of 40 and 59 [3]. On average, women were reported to be about 0.73 times as likely as men to be infected with COVID-19. Hence, age and gender are considered socioeconomic inequalities of the COVID-19 pandemic [3]. Overcrowding, race, and ethnicity are some of the other socioeconomic inequalities reported for COVID-19 [3,4]. Researchers around the world from different fields such as artificial intelligence (AI), biomedicine, pathology, and virology are contributing their work on COVID-19 to combat the virus by providing detailed information on virus morphology and its virulence [5]. Technology plays a crucial role in combating COVID-19 during the pandemic. AI, the Internet of hospital things (IoHT), deep learning techniques, 5G, and other technologies, such as wireless communication networks, are increasingly being used to combat the pandemic [6]. Wireless technologies such as mobile phones, Bluetooth, and Wi-Fi help us communicate with each other without the use of cables. During this pandemic, many countries applied wireless technologies to combat the virus effectively. Countries like the USA, China, and Korea implemented tracing systems integrated with their netizens' mobile devices to find their locations during the lockdown [7]. Governments, institutions, and industries depend on communication platforms like Zoom, Skype, and Team Link to communicate with each other in pandemic times. Sensors, drones, and smart helmets are being used in airports, bus stands, and other areas of social gatherings. All these constitute part of wireless technology [7]. Wireless technologies permit access to continuous patient care while maintaining the patient's safety and privacy in a health crisis; this was facilitated by the relaxation of government limitations during the pandemic. Vidal-Alaball et al.
explain the role of telehealth services during the COVID-19 pandemic [8]. According to them, wireless technologies can be used as a platform for online consultations, help monitor patients through smart devices, and aid in avoiding dangerous places with high viral loads. Bajowala et al. concluded that wireless technologies in the form of telemedicine can be safe and effective for hospital payment transactions and can be used in the billing and coding areas of hospitals [9]. Blue et al. discussed the role of wireless technologies in the neurology department during the COVID-19 pandemic [10]. They concluded that accurate and effective neurological examination can be performed through the implementation of wireless technologies as telemedicine services. Boehm et al. discussed the importance of wireless technologies in urology wards during COVID-19 [11]. They reported that many patients are willing to take their hospital appointments through the wireless platform of telemedicine. Bokolo emphasized adopting virtual software platforms for outpatients visiting hospitals during the pandemic timeframe [12]. Their findings concluded that wireless technologies could minimize emergency room visits, safeguard healthcare resources, and decrease the spread of COVID-19 by remotely treating patients during and after the pandemic. Zhou et al. constructed a 5G network integrated with wireless technologies in a new cabin hospital model [13]. This model solves different problems faced by hospitals, such as Internet access, data filling, and file sharing and storage. The special architecture within this model helps in updating patient records, nurses' activities, and radiologists' scans and reports. Janjua et al. described the use of wireless technologies during COVID-19 in hospital areas [14]. According to them, wireless technologies improve data capacity by using high-frequency bands and improve hospital coverage using various ad hoc networks and device-to-device connections. Saeed et al. in their study discussed the possible advantages of wireless communications during the COVID-19 period in improving a country's economy [15]. They can boost online activity for a smooth flow of e-commerce, protect high-risk individuals from virus spread with touchless solutions, and reduce viral growth by flattening the curve during lockdowns. Al-Humairi and Kamal discussed the prospective use of wireless technologies in building monitoring systems [16]. They reviewed the possible uses of thermal scanning technologies in buildings, the use of Swann security cameras, and the integration of infrared thermometer devices enabled by wireless technologies to monitor the public's vitals and body temperature in buildings and public places. Cervino and Oteri discussed the importance and usage of telephone triage for COVID-19 patients in medical settings [17]. They emphasized that telephone triage has the capability to identify COVID-19 patients with disease symptoms, help examine the patient's general health condition, and identify the risks associated with COVID-19 patients. Hence, telephone triage is considered an operative filter to prevent the spread of COVID-19 infections. Therefore, wireless technologies are one of the common approaches implemented in the contemporary COVID-19 pandemic to control the virus's spread and improve public safety.
In this review article, we mainly focus on applications of various wireless technologies for the public and patients during the COVID-19 pandemic, which can further help physicians, the public, and other healthcare professionals gain awareness and ideas regarding the importance of informative technologies in preventing the spread of pandemics. Section 3 deals with the role of wireless technologies in the pandemic for the public, healthcare professionals, and remote applications. In Section 4, we discuss the possible challenges faced by wireless technologies, along with solutions for effective implementation. Finally, in Sections 5-7, we discuss the limitations of this study and provide the overall conclusion on wireless technologies and their impact during the COVID-19 crisis. Methodology The information was gathered from published literature by searching scientific databases for specific topics and key terms (Table 1). To find relevant scientific data, we used advanced PubMed searching with MeSH keywords. Wireless communications, COVID-19, pandemics, applications, and challenges were used in a search strategy for publications published between 2019 and 2021. The screening of titles and abstracts was done manually. The full texts of these articles were then reviewed against the inclusion and exclusion criteria. Case studies and unauthored proofs were excluded. Wireless Technology Applications In the current pandemic situation, the main challenge for healthcare organizations is to prevent the virus's spread and help the public maintain safe health by following adequate preventive and control measures. In this section, we discuss the applications of wireless technologies to prevent the spread of COVID-19. As per the guidelines issued by WHO, measures such as avoiding group gatherings, maintaining social distance, and tracing of netizens are important to prevent rapid viral spread. Here we discuss wireless technologies used to prevent the viral spread by tracing the indoor and outdoor activities of the public. Outdoor Tracking. A variety of wireless technologies, such as drones, mobile phones, and global positioning systems, is used to monitor the viral spread in outdoor areas [18]. In cities with denser populations, networked drones are used to monitor crowds for social distance maintenance. These drones also help to raise awareness among the public regarding social distance measures [6]. Pandemic drones are also used to monitor variations in body temperature, changes in normal physiological body functions, and the presence of flu, coughs, and sneezes in public places [19,20]. Once the information is collected from such suspected cases, it is transferred to higher authorities for appropriate action. Drones with 5G connectivity can facilitate this process even faster because of faster Internet connectivity and low latency. Satellite communications are another huge advantage in monitoring and modeling the spread of COVID-19. COVID-19 may spread rapidly in areas with large populations; hence, this can be monitored using geospatial data and satellite images to identify the populations at risk of getting COVID-19 [21]. Another aspect of fighting the COVID-19 pandemic is identifying individuals with COVID-19 infection. To make the process easier, platforms such as Google and Facebook have initiated ventures based on GPS user data. This helps to track infected people and their current locations [22,23].
Google and Apple developed a contact tracing system that uses Bluetooth signals to identify nearby smartphone users and sends alert signals if they come within a certain distance of COVID-19 patients; this also reduces the transmission of the virus [24]. When an infected person goes out, there is a high risk of viral spread from that patient. So, to prevent this and effectively monitor patients, wearable bands are used. These are cost-effective and provide accurate results. They are connected to the patient's smartphone application via Bluetooth, and their sensors can help track the identity of the wearer [25]. Indoor Monitoring. In the outdoor environment, technologies such as GPS and Bluetooth are being used to maintain social distance and prevent the spread of the virus. However, it is challenging to maintain the social distancing guidelines within the house and indoors due to the unavailability of such technologies. Hence, novel technologies are needed to prevent indoor viral spread. These include Wi-Fi, visible light, Bluetooth, and radio frequency identification, which have proved to be promising solutions for self-isolated people [26]. For example, a new technology called proximity tracing is being used during the COVID-19 pandemic to identify an individual's presence. This helps monitor and maintain distance in indoor work environments [27]. This proximity-tracing system uses a tag that should be worn by the workers. It can work in both an active and a passive manner. The active mode can alert workers within the working environment when they come close to each other, violating the social distancing guidelines. The passive mode can provide information to tracing authorities when a staff member is infected with the COVID-19 virus [27]. An app called "Social Monitoring" was introduced by the Russian government and made mandatory for its netizens to install on mobile devices. Once the app is installed, patients are asked to scan a quick response code whenever they leave the quarantine place [28]. Another wireless technology is the Easy Band, a wearable device. It emits an alert sound when there is a violation of social distancing norms between two people [29]. Apart from these indoor applications, network graphs can be prepared using sensing measurements from proximity users to support contact tracing and to estimate the working distance between people [30]. 3.3. Healthcare Applications. The outbreak of COVID-19 increased the trend of utilizing wireless communications and 5G networks in healthcare (Figure 1). Many hospitals extensively use Wi-Fi networking services to permit better connections and allow better response times for the public and local communication. Medical robots are used to deliver drugs, check the patient's vitals such as body temperature and blood pressure, and disinfect hospital rooms to prevent the spread of viral infections [31]. These robots, connected with wireless technology integrated with 5G services, can collect patient data and share it with remote data centers to improve the efficacy of healthcare systems [32]. The information exchange from these robots requires the accurate and low-latency communication provided by these wireless technologies. To improve patient care, China has developed an advanced hospital system that is integrated with wireless and 5G connectivity [32].
Table 2 summarizes the wireless technologies that were utilized to deliver better healthcare services to patients during the COVID-19 pandemic crisis (Study | Technology type | Application):
Lu et al. [35] | Wireless programming system | Delivers safe and effective programming operations remotely for patients with implantable spinal cord stimulation devices.
Silva and Tavakoli [36] | Wearable biomonitoring patches | Help continuous remote monitoring of patients, ultimately reducing the burden on hospitals.
Ni et al. [37] | Wireless mechanoacoustics | Records coughing frequency and intensity in COVID-19 patients during the disease course.
Zhang et al. [38] | Wireless stethoscope | Auscultation characteristics of COVID-19 patients are analyzed in hospitals and indoor settings.
Dini et al. [39] | Wireless lung ultrasound | Diagnoses lung injury in COVID-19 patients and helps nursing residents monitor the patients.
Kancharla and Estes [40] | Mobile cardiac monitoring device | Detects abnormal fluctuations in the echocardiography of COVID-19 patients.
Yilmaz et al. [41] | Wireless wearable acoustic transducer | Monitors the long-term health of respiratory-ill patients with COVID-19.
Fiorillo et al. emphasized the need for a protocol for the prevention of COVID-19 spread in medical settings, especially in dental offices. Dental wards are commonly reported as a potential source of various microorganisms due to the increased likelihood of the formation of microbial films [33]. Therefore, dental units and medical instruments used in these areas are to be sterilized with 0.1% sodium hypochlorite or 0.5% hydrogen peroxide. In addition, the installation of air purifiers and aspirators in dental clinics can reduce the load of microorganisms present in the air [33]. D'Amico et al. further discussed the need for the management of COVID-19 patients in dental wards. Their work emphasized the importance of telephone triage, the maintenance of social distance among patients in waiting rooms, and the role of personal protective equipment (PPE) in the prevention of COVID-19 spread in dental wards [34]. Lu et al. described the use of new wireless technologies to manage patients' pain during the COVID-19 crisis. As most nonemergency procedures were halted during this time, it became difficult for patients implanted with spinal cord devices to tolerate the pain [35]. They developed a remote programming system that helps healthcare professionals monitor such patients through video programming and deliver safe palliative medicine to patients implanted with spinal cord devices [35]. Silva and Tavakoli discussed the importance of wearable patches to monitor the vitals of COVID-19 patients. They are suited to long-term use because they can improve patient safety and treatment outcomes [36]. This type of wearable patch also reduces the pressure on healthcare professionals, as the patients are monitored remotely. Besides this, they also help to gather a large amount of patient data in a timely manner so that treatment outcomes are enhanced [36]. Ni et al. discussed the association of body vital signs with a cloud data infrastructure to monitor COVID-19 patients. This infrastructure primarily examines measurements such as coughs and vocal cord changes that occur as the disease progresses [37]. It can also relate the frequency of coughing to droplet production. Kancharla and Estes evaluated a mobile patch-based cardiac monitoring device in hospitalized COVID-19 patients [40].
Their study included 82 inpatients monitored with mobile patch-based technology. Kancharla and Estes concluded that the implementation of wireless technologies to monitor cardiac vitals reduced hospital staff viral exposure by 595 minutes. It also increased staff availability in emergency departments. Hence, patch-based mobile technology reduces infection risk, as there is no direct contact between patient and physician, and can be beneficial to COVID-19 patients with existing heart abnormalities [40]. Yilmaz et al. developed a sound acquisition module that integrates within the patient's garments and helps minimize stethoscope use. This technology provides an option for respiratory-ill patients to benefit from long-term vitals monitoring during the COVID-19 crisis [41]. Remote Healthcare. Due to restrictions imposed on travel, remote health settings are globally common in areas that lack healthcare facilities. Hence, healthcare for the people in those areas can be provided in two ways: one is through telemedicine, and the other is through remote health monitoring. In platforms like telemedicine services, doctors make use of smartphone teleconferences or scrutinize the electronic health records of patients for appropriate diagnosis and evaluation of treatment outcomes [14]. Such a type of healthcare facility is available in houses or basic healthcare centers in the presence of paramedical staff. In the present world, many physicians are using teleconferences to connect virtually with their patients without physical contact. However, as digital technologies evolve continuously, other options such as holograms and holographic presentations will be in use soon [14]. Telemedicine and Internet of medical things technologies such as wearable devices and wireless body networks have begun to be used extensively to provide healthcare facilities to people in rural areas. IoHT is used as an active tracker of a patient's condition. Biomedical sensors that capture physiological activity and record vital parameters are used in IoHT-based healthcare devices that collect data, analyze the recorded information, and regularly monitor the patient's condition [14,36]. They can minimize stress levels and record the amount of physical activity done by the person each day. The best examples of this are blood pressure monitoring devices, pulse recording devices, pacemakers, hearing aids, and smartwatches. One such application is the Biostrap, a wearable device that monitors heartbeats. These devices can help the patient administer the appropriate drug or medication for an existing disease and act as an alarm tool [14], so that the patient can take their medication without the need for a caregiver. In a similar manner, the implementation of real-time health tracking systems helps to improve elderly patients' health by detecting emergencies through integrated sensors, thus providing the patient with medical care. Despite these benefits, the adoption of the technology is a major limitation, especially for the elderly, as they are often unfamiliar with information technologies. In addition, for regular dose adjustments in chronic disease patients, such as diabetic and hypertensive patients, teleconferencing or virtual modes are the easiest way to provide facilities for regular care.
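As a toy illustration of the alarm-tool behaviour attributed to IoHT wearables above, the sketch below flags vital-sign readings that fall outside assumed reference ranges. The ranges and field names are illustrative assumptions only, not clinical guidance and not taken from any cited device.

```python
# Illustrative reference ranges only; not clinical guidance.
NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (94, 100),
    "temperature_c": (35.5, 37.8),
    "respiratory_rate_bpm": (10, 24),
}

def check_vitals(sample):
    """Return a list of alert messages for readings outside the assumed ranges."""
    alerts = []
    for key, (low, high) in NORMAL_RANGES.items():
        value = sample.get(key)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{key}={value} outside [{low}, {high}]")
    return alerts

# A remote monitoring service could forward such alerts to a caregiver or clinician.
print(check_vitals({"heart_rate_bpm": 128, "spo2_percent": 91, "temperature_c": 38.4}))
```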
Despite these many advantages, informative technologies and wireless communications need high security, with privacy guidelines to store patient data [6]. Since the increase in coronavirus cases, rural healthcare has been given prime importance to reduce the infection risk associated with the virus. Rural healthcare is completely dependent upon network facilities and infrastructure to provide better health outcomes. Due to this, many information technologies, such as massive connections, ultra-low-power connections, and low-latency tactile-Internet-dependent remote facilities, have not yet been approved [31]. Technologies Used in Digital Tracing. One of the most important precautionary guidelines to prevent the spread of COVID-19 is maintaining social distance, and several informative technologies are used to achieve the goal of effective social distance maintenance. Such technologies are Wireless Fidelity (Wi-Fi), Quick Response (QR) codes, the Global Positioning System (GPS), and ZigBee. 3.5.1. Wi-Fi. This informative technology is extremely useful for tracing coronavirus-infected patients, particularly during the self-isolation period. It is used to monitor patients in buildings, hospitals, and other congested areas. It provides extremely high accuracy in indoor environments in contrast to other existing devices. In this informative technology, wireless connections associated with sensors share signals with government and healthcare authorities to maintain updated information records on coronavirus cases [42]. These are very useful in public places, especially in railways and airports. The services are also cheap to deploy and maintain. Bluetooth. Another wireless technology used in the control of the coronavirus pandemic is Bluetooth. It is present in almost every smartphone. There are several types of Bluetooth devices; among them, Bluetooth low-energy protocol devices are more popular because of their low energy expenditure. As a contact tracing option, these remain switched on at all times to trace contact information. One of the big advantages of Bluetooth devices is that they can be connected to many devices without the need for an access point. Government authorities in Singapore developed a contact-tracing app based on the BlueTrace protocol. The protocol implemented in this app is so simple and clear that when the app comes into close connection with another app, it saves the encounter data in its local database [43]. Later, this information is shared with the government authorities to maintain the records [23]. GPS. The GPS uses satellite systems to trace an individual's location. The GPS option is provided in all smartphones, where it must be enabled to track patient information. In pandemic conditions, such options are enabled on the public's smartphones for contact tracing. Another advantage of GPS tracking is that it can minimize direct physical contact between people. For example, when customers purchase products online, those products can be delivered to their houses through unmanned aerial vehicles. Many big stores incorporate this GPS technology to deliver products. Therefore, social distance among the public is effectively maintained through GPS tracers. Many effective solutions are deployed via GPS to geolocate the public in self-isolation. The use of smartphones enables GPS trackers.
It can record information on individuals' movements and locations and then share the respective data with government officials [23]. QR Codes. Another method to trace individuals is QR code scanning. Here, a person can check in virtually at multiple locations by scanning a code. The scan is analyzed against the mobile databases and provides a geolocation record. For example, if a person tests positive for the coronavirus, then the information of the respective person can be traced easily, as the system provides the geographic location; apart from this, the places visited by infected people can also be traced. Therefore, informative technologies like QR codes are being used worldwide to restrict the coronavirus pandemic. 3.5.5. ZigBee. This is yet another effective technology that is primarily used in maintaining social distance during pandemics. It is a low-cost and low-energy network information technology. These devices are able to communicate with other devices within a 20-meter range. The device contains a hub internally, which can be used to identify the user's location. As a result, this technology can be used to maintain effective social distance guidelines in crowd-pulling areas [44]. Recent technologies such as 5G also play a crucial role in the control of the COVID-19 pandemic. This technology is typically used for the construction of cabin hospitals during pandemic times. A cabin or hospital area usually needs a clinical data network; a leased line is provided by an Internet service provider to connect with the hospital network. After this setup is completed, patients visiting the clinic will find their applications connected to this network and all their records saved as electronic records [13]. Blue et al. discussed the importance of wireless technologies, especially telehealth services, during the pandemic. According to them, these information technologies are useful for providing general patient examinations such as vital sign evaluations and physical examinations through webcams without direct physical contact [10]. Boehm et al. used wireless technology in the urology departments of medicine. They concluded that nearly half of the patients visiting the clinic were interested in opting for telemedicine during the pandemic. They also reported a decline in viral spread through the implementation of wireless technology resources in hospital settings [11]. Bokolo discussed the utilization of wireless approaches in providing outpatient care in hospitals during and after the COVID-19 pandemic. They concluded that informative technologies could decrease the diagnosis time and improve patient care; hence, they can act as a proactive measure, especially in pandemic times [12]. In their study, Contreras et al. reported the importance of informative technologies and the change brought by them with the pandemic. They concluded that wireless approaches such as telehealth services and 5G data connectivity will soon play a key role in better patient healthcare [45]. Chamola et al. concluded that wireless technology services are used in the identification of coronavirus cases, community spread, and diagnosis, especially in molecular tests with the use of sensors for rapid results. They also concluded that wearable devices are used to monitor body vitals, respiration rate, and saturation rates, which are considered important parameters in coronavirus-infected patients [6].
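The Bluetooth low-energy tracing idea discussed in this section can be illustrated with a minimal sketch that converts received signal strength (RSSI) into a rough distance estimate and flags prolonged close contacts. The path-loss constants, the 2-metre threshold and the 15-minute exposure rule are assumptions chosen for illustration, not the parameters of any deployed contact-tracing app.

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Rough log-distance path-loss estimate; the constants are assumptions
    and vary with device, orientation and environment."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def flag_close_contacts(observations, threshold_m=2.0, min_duration_s=900):
    """observations: {device_id: list of (timestamp_s, rssi_dbm)} collected by
    a hypothetical tracing app. Flags devices estimated to be closer than
    `threshold_m` for at least `min_duration_s` in total (simplified rule)."""
    flagged = []
    for device_id, samples in observations.items():
        samples = sorted(samples)
        close_time = 0.0
        for (t0, rssi0), (t1, _) in zip(samples, samples[1:]):
            if estimate_distance_m(rssi0) <= threshold_m:
                close_time += t1 - t0
        if close_time >= min_duration_s:
            flagged.append(device_id)
    return flagged

# Example: a device observed at strong signal levels for 20 minutes gets flagged.
obs = {"device-A": [(t, -55) for t in range(0, 1260, 60)]}
print(flag_close_contacts(obs))
```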
With the advent of wireless technologies and the development of sophisticated hospital resources, telerobots have come into use in hospital settings in pandemic times. A human operator controls such a robotic device remotely through a user interface installed on it. These are used especially to disinfect patient and public areas in hospital settings with sanitizers. They are also used to perform minimally invasive surgeries, especially where a physician cannot operate on the patient due to infection risks in pandemic times [31]. The best example of this is the da Vinci robotic system. Another advantage of the use of robotic services as a part of wireless technologies is that, during pandemic times, healthcare professionals usually need personal protective equipment to reduce the risk of infection. Hence, the implementation of such robotic devices has decreased the need for protective equipment, which can then be dispensed to the public whenever needed. Apart from these healthcare needs, wireless technologies help the public leverage both virtual and augmented realities in pandemic times. As these can be utilized for virtual interfaces, they can help to reduce feelings of social isolation in patients [31]. Wireless Technology-Related Challenges In the fight against the COVID-19 pandemic, we cannot ignore the beneficial role of wireless technologies in public safety and healthcare. However, apart from the positive outcomes, there are also challenges associated with wireless technologies, such as privacy, security, and misinformation. Therefore, this section deals with possible challenges and their solutions. Privacy. Despite the use of contact tracing technologies to prevent viral spread, they also invade public privacy. The user's location is easily accessed by these applications and is used for government record purposes. Human rights activists warn that the use of these applications could manipulate surveillance guidelines in the coming future. Hence, a few questions have to be addressed by government authorities before the implementation of such applications: (1) Are users aware of the information that has been collected, and can they delete it once the pandemic has ended? (2) For how long, and by whom, can this information be accessed? (3) What are the guidelines for sharing the information? Apart from this, drones used for aerial surveillance purposes monitor the social distance between people in mass gatherings. This raises general questions and breaches the security concerns of the public, because it can be an infringement on individual liberties [46]. Mobile phone data collected from the public, such as personal location, can help prevent the spread of the virus. However, it also poses a risk to individual privacy, as the data are collected by government authorities for surveillance. To overcome this problem, Bluetooth low-energy technology was used in some countries [47]. With this technology, when two people come close to each other, contact-tracing apps record their identities. They also record the individual location and time of proximity between the users. This information is stored on the device or shared with government authorities as a part of the COVID-19 surveillance program. Later, if a user is found to be infected with COVID-19, this information will be shared among all the users as a precautionary measure to protect them from infection.
If user identities are not recognizable and only anonymized proximity information is shared, the process remains viable; if not, it leads to a violation of public privacy [47]. 4.2. Security. The unparalleled utilization of mobile phones during the pandemic outbreak increases cybersecurity risks for the public. According to Akamai's report of March 2020, overall Internet traffic increased by 30%. This increased use of the Internet during the global pandemic lockdown increased cybersecurity risks, malicious emails, phishing related to COVID-19, and the circulation of fake information during the pandemic [48]. Apart from this, many business institutions have started working remotely, which makes the authentication process a challenging factor. Organizations' accounts were moved online during the pandemic to promote their goods and services among the public, which also increased cyber-attacks. In addition, most present-day applications are completely automated, which can likewise lead to an increase in cybersecurity attacks. Novel digital technologies that arose during the pandemic are at an increased risk of cybersecurity attacks. Hence, necessary actions must be taken to prevent them. Since the start of the COVID-19 pandemic, working from home has become a common phenomenon in all organizations to keep services running without failure; this working culture has become a boon for cybercriminals. Therefore, measures must be taken to prevent malicious cybersecurity attacks during pandemics, and cybersecurity staff and public services can join hands to reduce this fraudulent risk. Apart from those, other limitations, such as limited reach, lack of smartphone applications, improper user applications, scalability problems, and the transparency of wireless technologies, still exist. Limitations We analyzed the published data from the last two years and may have missed some information that was available earlier. Secondly, only PubMed and Google Scholar were used as sources of information. Third, the lack of statistical analysis prevents us from determining the study's significance. Finally, the data reported in this publication came solely from healthcare areas of interest that use wireless technologies. Conclusion The coronavirus pandemic's rapidity, risk, and severity ushered in a slew of new developments. This event demonstrated the value of healthcare workers and personnel. This eruption sparked debate about the employment of innovative informing approaches and their paradigms in healthcare settings, with the goal of improving patient care while also reducing viral spread. As a result, international and national organizations, as well as countries, have embraced information technology. Wireless technologies are critical in the fight against the COVID-19 outbreak and in restoring normalcy to the situation. They can be used in a variety of settings, including surveillance, hospital care, business administration, pharmaceutical chain management, dental clinics, and so on. When such technologies were used in the early phases of the pandemic, better results were seen in terms of limiting virus propagation. However, there are issues with this application, such as privacy and security concerns, that must be handled depending on how the technology is used in a specific industry. Prospects This work paves the way for future researchers to identify the flaws in the wireless technologies discussed above and improve their applications.
Apart from this, there is an opportunity to deploy wireless technologies and telemedicine services in hospitals to provide continuous health facilities for the public even in pandemic times and to minimize the risk of viral infection. Data Availability The data used to support the findings of this study are included within the article.
7,336
2021-12-30T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
Breast cancer histopathological images recognition based on two-stage nuclei segmentation strategy Pathological examination is the gold standard for breast cancer diagnosis. The recognition of histopathological images of breast cancer has attracted a lot of attention in the field of medical image processing. In this paper, on the basis of the Bioimaging 2015 dataset, a two-stage nuclei segmentation strategy, that is, a method of watershed segmentation applied to histopathological images after stain separation, is proposed to perform carcinoma versus non-carcinoma recognition on the dataset. Firstly, stain separation is performed on the breast cancer histopathological images. Then the marker-based watershed segmentation method is applied to the images obtained from stain separation to achieve the nuclei segmentation target. Next, the completed local binary pattern is used to extract texture features from the nuclei regions (images after nuclei segmentation), and color features are extracted by using the color auto-correlogram method on the stain-separated images. Finally, the two kinds of features are fused and a support vector machine is used for carcinoma and non-carcinoma recognition. The experimental results show that the two-stage nuclei segmentation strategy proposed in this paper has significant advantages in the recognition of carcinoma and non-carcinoma on breast cancer histopathological images, and the recognition accuracy reaches 91.67%. The proposed method is also applied to the ICIAR 2018 dataset to realize the automatic recognition of carcinoma and non-carcinoma, and the recognition accuracy reaches 92.50%. Introduction In recent years, the incidence and mortality of cancer worldwide have been rising continuously, which seriously threatens human life and health. Breast cancer is one of the cancers with the highest mortality for females in the world [1]. One of the most notable changes in the latest global cancer data in 2020 is the rapid increase in the number of new cases of breast cancer, which has replaced lung cancer as the world's leading cancer [2]. Breast cancer pathological examination is considered to be the gold standard for breast cancer diagnosis. The recognition of histopathological images of breast cancer has attracted a lot of attention in the field of medical image processing. Nowadays breast cancer diagnosis mainly depends on the prior knowledge and diagnostic experience of pathologists. During the diagnosis process, the nature of abnormal tissues sometimes cannot be recognized, and false detections and missed detections may occur. Therefore, researchers assist doctors in processing and analyzing medical images through imaging, medical image processing technology, and computer analysis and calculation, that is, computer-aided diagnosis (CAD) systems. With the advancement of CAD technology, machine learning has been widely used in the diagnosis of breast cancer [3][4][5][6]. Effective feature extraction is the key to histopathological image recognition, but the realization of the automatic recognition of breast cancer histopathological images is a challenging task due to the characteristics of histopathological images. At present, the methods used for breast cancer histopathological image recognition mainly consist of artificial feature extraction methods and deep learning methods [7][8][9][10].
The traditional artificial feature extraction methods require manually designing the regions of interest in the images; features are then extracted from these regions and the extracted features subsequently need to be selected. In [11], a breast cancer histopathological images dataset called BreaKHis was proposed by Spanhol et al. for performing the benign and malignant classification of tumors with six different extracted features: completed local binary pattern (CLBP), gray level co-occurrence matrix (GLCM), local binary pattern (LBP), local phase quantization (LPQ), parameter-free threshold adjacency statistics (PFTAS) and one keypoint descriptor named Oriented FAST and Rotated BRIEF (ORB), and four different classifiers: 1-nearest neighbor (1-NN), quadratic discriminant analysis (QDA), random forests (RF) and support vector machine (SVM). In [12], Belsare et al. first used the spatial color texture image segmentation method to segment the images, then extracted GLCM, graph run length matrix and Euler number features, and used linear discriminant analysis (LDA) to perform the classification of the breast cancer histopathological images. Reis et al. combined multi-scale basic image features and LBP features with random decision trees to classify the maturity of the stroma in breast tissue [13]. Chan et al. applied fractal dimension features to breast cancer detection [14]. Hao et al. extracted three-channel features of 10 feature descriptors on the BreaKHis dataset to classify breast cancer histopathological images [15]. Deep learning methods have also been widely used in breast cancer histopathological images recognition. Araújo et al. used a Convolutional Neural Network (CNN) and a CNN combined with SVM for the binary classification based on the Bioimaging 2015 dataset [16]. Wang et al. classified the ICIAR 2018 dataset into four categories through the VGG16 network and transfer learning [17]. Spanhol et al. also adopted AlexNet for breast cancer classification based on BreaKHis and achieved better results than the machine learning model trained with hand-extracted texture descriptors [18]. Saini et al. first used a deep convolutional generative adversarial network to augment the data of benign samples, then used the improved VGG16 to extract the features of different pooling layers, and SVM was used to classify breast cancer histopathological images [19]. Roy et al. also applied a deep learning approach to breast cancer histopathological image classification [24]. Besides the commonly used artificial feature extraction methods and deep learning methods, many scholars have also applied multi-instance learning and sparse representation methods to recognize breast cancer histopathological images. Sudharshan et al. used a multi-instance learning method to classify the BreaKHis dataset into benign and malignant categories [25]. A new multi-channel histopathological image simultaneous sparse model was proposed by Srinivas et al. and was applied to solve a new optimization problem based on simultaneous sparseness for performing breast cancer histopathological images classification [26]. Li et al. proposed the combination of discriminative feature learning and multichannel joint sparse representation based on mutual information for classifying benign and malignant tumors at 40× magnification on the BreaKHis dataset [27]. In addition, the distribution, size and morphology, and aggregation density of cell nuclei constitute important information in breast cancer histopathological images.
Therefore, research on cell nuclei segmentation and cell morphology is of significant importance for breast cancer histopathological image recognition. Kumar et al. proposed a framework for automatic detection and classification of cancer from microscopic biopsy images, which includes cell segmentation, feature extraction, and classification [28]. Kowal et al. used four different clustering methods and adaptive gray thresholds to segment cell nuclei, and then extracted 42 morphological, topological and texture features for breast cancer benign and malignant classification [29]. Zheng et al. used the blob detection method to detect the nucleus, whose location was determined by use of the local maximum, and used sparse autoencoding to extract features of the nucleus slice for the recognition of benign and malignant breast tumors [30]. Anuranjeeta et al. extracted the shape and morphological features of cells for breast cancer classification and recognition [31]. Pang et al. trained a CNN using gradient descent to solve the problem of cell nuclei segmentation for histopathological images [32]. To address the problems of under-segmentation and over-segmentation in the process of histopathological image segmentation, a two-stage nuclei segmentation strategy, that is, a method of watershed segmentation applied to histopathological images after stain separation, is proposed in this paper on the basis of the Bioimaging 2015 dataset to perform carcinoma versus non-carcinoma recognition. Firstly, stain separation is performed on the breast cancer histopathological images. Then the marker-based watershed segmentation method is applied to the images obtained from stain separation to achieve the nuclei segmentation target. Next, the completed local binary pattern is used to extract texture features from the nuclei regions (images after nuclei segmentation), and color features are extracted by using the color auto-correlogram method on the stain-separated images. Finally, the two kinds of features are fused and a support vector machine is used for carcinoma and non-carcinoma recognition. The experimental results show that the two-stage nuclei segmentation strategy proposed in this paper has significant advantages in the recognition of carcinoma and non-carcinoma on breast cancer histopathological images, and the recognition accuracy reaches 91.67%. The proposed method is also applied to the ICIAR 2018 dataset to realize the automatic recognition of carcinoma and non-carcinoma, and the recognition accuracy reaches 92.50%. Fig 1 shows the framework of breast cancer histopathological images recognition based on the two-stage nuclei segmentation strategy proposed in this paper. In this paper, an effective automatic computer-aided diagnosis technique is proposed for the segmentation and recognition of breast cancer histopathological images. This work makes significant contributions to the realization of an interactive system for nuclei segmentation and cancer recognition, as follows: 1. A two-stage nuclei segmentation strategy is proposed for nuclei segmentation of histopathology images. It is a challenging task to achieve nuclei segmentation in histopathology images with similar foreground and complex background. The proposed method not only effectively avoids the under-segmentation and over-segmentation problems, but also provides good cancer detection performance with lower algorithmic complexity and faster running speed. 2.
Based on the two-stage nuclei segmentation strategy, a breast cancer histopathology image recognition model for cancer detection is proposed. This model operates in two different modes: patch-wise and image-wise. Cancer can be effectively identified by extracting low-dimensional features based on nuclei segmentation, and the model has good cancer recognition performance on two different datasets, which gives it wide applicability and allows it to replace deep learning methods to some extent. The method can provide a diagnostic review technique to reduce human error for pathologists. The rest of the paper is organized as follows: in Section 2, the two-stage nuclei segmentation strategy is proposed. In Section 3, the feature extraction methods are introduced in detail. Section 4 presents the experimental results and Section 5 gives the discussion and conclusion. The proposed two-stage nuclei segmentation strategy Due to the characteristics of histopathological images, it is a challenging task to perform the automatic classification of the histopathological images of breast cancer. The overlapping of cells, uneven color distribution and subtle differences between images bring great difficulties to the classification of breast cancer histopathological images [33]. The effective and sufficient nuclei segmentation of histopathological images can improve the classification performance. However, in histopathological images, the diversity, density and overlap of nuclei pose great challenges for the nuclei segmentation task [34]. In order to fully segment the nuclei, obtain more effective features, and prevent under-segmentation and over-segmentation, a two-stage nuclei segmentation strategy is proposed in this paper: stain separation is first conducted on the breast cancer histopathological images to obtain the foreground images, then the nuclei are segmented by the watershed segmentation method on the images after stain separation, so that the obtained images have a better degree of segmentation and contain more effective information. Stain separation The stain separation of histopathological images is helpful for pathologists and CAD systems. Separation techniques used for natural images may cause changes in the structural characteristics of stained tissues in histopathological images and produce undesirable color distortions. The method commonly used for Hematoxylin and Eosin (H&E) image stain separation is realized by converting the RGB space to optical density. Since stain separation is an estimation of the density map of each stain, the relationship between the RGB color and the stain density of each pixel needs to be considered: the stained tissue will weaken the light in a certain spectrum according to the type and the amount of the absorbed stain. In this paper, the stain separation method based on the Sparse Non-negative Matrix Factorization (SNMF) framework proposed in [35] is used for breast cancer histopathological image stain separation. Let I ∈ R^(m×n) be the matrix of RGB intensities, where m = 3 is the number of RGB channels and n is the total number of image pixels, and let I_0 be the illuminating light intensity on the sample (usually 255 for 8-bit images).
Then the relative optical density V can be expressed as follows [36]: V = log(I_0 / I) (element-wise). (1) Let V = WH, where W ∈ R^(m×r) is the stain color appearance matrix whose columns represent the color basis of each stain, with r the number of stains, and H ∈ R^(r×n) is the stain density map matrix whose rows represent the concentration of each stain. Therefore, for a given observation matrix V, the stain color appearance matrix W and the stain density map matrix H need to be obtained by solving the following problem: min_{W,H} (1/2)||V − WH||_F^2, s.t. W ≥ 0, H ≥ 0. (2) Since problem (2) is a non-convex optimization problem in which a local optimum is obtained instead of the global optimum, an undesirable coloring vector may be obtained. Therefore, Vahadane et al. [35] proposed a sparse non-negative matrix factorization (SNMF) framework in which a sparseness constraint is added to Eq (2), so that Eq (2) becomes: min_{W,H} (1/2)||V − WH||_F^2 + λ Σ_{j=1}^{r} ||H(j, :)||_1, s.t. W ≥ 0, H ≥ 0, ||W(:, j)||_2^2 = 1, (3) where ||·||_F denotes the Frobenius norm of a matrix, λ = 0.2 is the sparsity and regularization parameter, and j indicates the type of stain (j = 1, 2, . . ., r). For H&E images, r = 2. The LARS-LASSO algorithm [37] can be applied to solve Eq (3); W and H are then obtained, and the stain separation of the H&E images is performed. Fig 2 shows the stain separation results of the images on the Bioimaging 2015 dataset using the above method. Nuclei segmentation Nuclei segmentation is a basic but challenging task in histopathological image analysis. Compared with the segmentation of independent nuclei, the segmentation of overlapping and adherent nuclei has been a key problem of histopathological image segmentation in recent years. The morphological changes of the nuclei are considered to be important information for many diseases. The distribution, size and density of nuclei reflect the pathological changes of breast cancer, which are an important basis for judging carcinoma and non-carcinoma. The common segmentation methods include threshold segmentation, edge detection, active contours, k-means clustering segmentation and watershed segmentation. In this paper, watershed segmentation is used to segment the nuclei of breast cancer histopathological images obtained from stain separation. The watershed algorithm is an image segmentation algorithm based on mathematical morphology. The image is regarded as a topological landform, where each pixel value represents the altitude of that point, each local minimum and its affected area are called a catchment basin, and the boundaries between basins form the watershed. The watershed segmentation algorithm extracts the pixels based on the similarity between the pixels. For the extraction and segmentation of cell nuclei, each pixel value in the histopathological image is regarded as the altitude of a pixel in the watershed algorithm. Commonly used watershed algorithms include watershed segmentation based on distance transformation, gradient-based watershed segmentation, and marker-based watershed segmentation. Since the watershed algorithm is prone to over-segmentation, noise or other interference factors in the images will also affect the watershed segmentation of histopathological images. In order to solve the over-segmentation problem, the marker-based watershed segmentation algorithm is selected in this paper.
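To make this stage concrete, the following Python sketch illustrates the two ingredients described above: conversion of RGB intensities to relative optical density (Eq (1)) and factorization of V into a stain color appearance matrix W and stain density maps H. It is an illustrative sketch only and is not the authors' implementation: the paper uses the SNMF formulation of Vahadane et al. solved with LARS-LASSO, whereas this sketch substitutes scikit-learn's plain NMF without the sparsity constraint, and all function and variable names here are assumptions of ours:

import numpy as np
from sklearn.decomposition import NMF

def stain_density_maps(rgb_image, I0=255.0, n_stains=2, eps=1e-6):
    # Convert RGB intensities to relative optical density, V = log(I0 / I) (Eq (1)).
    h, w, _ = rgb_image.shape
    I = rgb_image.reshape(-1, 3).T.astype(np.float64)      # 3 x n matrix of pixel intensities
    V = np.clip(np.log((I0 + eps) / (I + eps)), 0.0, None)
    # Factorize V ~ W H with W (3 x r) the stain color appearance matrix and
    # H (r x n) the stain density maps; plain NMF stands in for SNMF here.
    model = NMF(n_components=n_stains, init="random", max_iter=500, random_state=0)
    W = model.fit_transform(V)
    H = model.components_
    return W, H.reshape(n_stains, h, w)

# Example usage on a random image (a real H&E tile would be used in practice):
rgb = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
W, density_maps = stain_density_maps(rgb)
print(W.shape, density_maps.shape)     # (3, 2) and (2, 64, 64)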
The marker-based watershed segmentation algorithm performs the watershed segmentation on the gradient image of the original image rather than directly on the original image, which ensures the integrity of the edge information of the target objects as far as possible and avoids over-segmentation of histopathological images. Therefore, in order to reduce the influence of noise and other interference factors on nuclei segmentation in the breast cancer histopathological images, the marker-based watershed segmentation is applied to the breast cancer histopathological images obtained from stain separation in this paper. Two-stage nuclei segmentation strategy based on stain separation and watershed algorithm The detection of visually salient image regions [38] is very useful for image segmentation. Therefore, the frequency-tuned salient region detection method is incorporated into the original marker-based watershed segmentation algorithm for the sake of segmentation performance improvement. The method exploits features of color and luminance and outputs full-resolution saliency maps with well-defined boundaries of salient objects. Before segmentation, the noise in the corners of the image is removed. The steps of the two-stage segmentation strategy based on the stain separation and the watershed algorithm proposed in this paper are shown in the corresponding figure. The proposed two-stage segmentation strategy based on stain separation and watershed algorithm is compared with four different segmentation methods: k-means clustering segmentation, Otsu threshold segmentation (maximum between-class variance method), minimum error threshold segmentation, and iterative threshold segmentation. In addition, the watershed segmentation applied directly to the original image is compared with the proposed segmentation method. The comparison results on breast cancer histopathological images are shown in Fig 6. Fig 6a is the original image, where the red marked area contains nuclei with adhesion and overlapping, and Fig 6b is the foreground image obtained from stain separation. By comparison and observation of Fig 6, the Otsu threshold segmentation and the iterative threshold segmentation have the worst performance and fail to accurately segment the nuclei, as shown in Fig 6d and 6e, respectively; the k-means clustering segmentation and the minimum error threshold segmentation method can segment most nuclei accurately, but for some nuclei with overlapping and adhesion in histopathological images, the edges cannot be accurately segmented, and there is still adhesion and overlapping in the segmented image, as marked by the red circles in Fig 6c and 6f, respectively; the proposed two-stage segmentation strategy can not only completely and fully segment the nuclei, but also performs well on the nuclei that are adhered and overlapped, as marked by the red circles in Fig 6. Computational complexity The complexity of the two-stage nuclei segmentation strategy mainly depends on the implementation of the stain separation and the marker-based watershed segmentation algorithm. The algorithmic complexities of the stain separation and the segmentation process are analyzed respectively. 2.4.1 The complexity of stain separation. As introduced in Section 2.1, the SNMF framework is used in the process of stain separation, and a sparseness constraint is added to obtain a LASSO problem, which is solved by the LARS-LASSO algorithm.
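As a rough illustration of the marker-based watershed step described above, the sketch below uses scikit-image to segment a single stain density map (e.g. the hematoxylin channel obtained from stain separation). The marker choice (local maxima of the distance transform), the Sobel gradient, and parameter values such as min_peak_distance are our own assumptions for illustration and are not taken from the paper:

import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

def marker_watershed_nuclei(stain_density, min_peak_distance=7):
    # Rough foreground mask of nuclei from the stain density map.
    nuclei_mask = stain_density > threshold_otsu(stain_density)
    # Markers: local maxima of the distance transform inside the mask.
    distance = ndi.distance_transform_edt(nuclei_mask)
    peaks = peak_local_max(distance, min_distance=min_peak_distance,
                           labels=nuclei_mask.astype(int))
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the gradient (Sobel) image from the markers, restricted to the mask,
    # i.e. the marker-based variant of watershed described in the text.
    return watershed(sobel(stain_density), markers, mask=nuclei_mask)

density = np.random.default_rng(1).random((128, 128))     # placeholder density map
print(np.max(marker_watershed_nuclei(density)))           # number of segmented regions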
Therefore, the complexity of the stain separation process mainly depends on the computation of the LARS-LASSO algorithm. LASSO is a constrained version of Ordinary Least Squares (OLS). Let x_1, x_2, . . ., x_m be n-dimensional vectors forming the columns of A ∈ R^(n×m), and let y be an n-dimensional vector. Then the LASSO model is as follows: min_β ||y − Aβ||_2^2, s.t. ||β||_1 ≤ t. (4) For this problem, the LARS algorithm proposed by Efron [37] is a more prudent method of single-variable selection, whose complexity is equivalent to that of OLS. The entire sequence of steps in the LARS algorithm with m < n variables requires O(m^3 + nm^2) computations. For the LASSO modification, each downdate costs at most O(m^2) operations. Therefore, the complexity of stain separation is O(m^3 + (n + 1)m^2). 2.4.2 The complexity of the segmentation process. The frequency-tuned salient region detection method is incorporated into the original marker-based watershed segmentation algorithm for the sake of detecting salient image regions [38]. The computational complexity of this method is O(N), where N is the scale of the algorithm. In the segmentation process, with the corner denoising operation performed, the computational complexity of the overall segmentation process proposed in this paper is O(N^2). In addition, in order to show the time complexity more clearly, we measured the running time of 10 breast cancer histopathological images (of size 512×512) in the processes of stain separation and segmentation, respectively. Ten experiments were completed to obtain the average time, giving the processing time of each image for stain separation and for segmentation. The results show that the stain separation and segmentation of each image take about 10.99 s and 0.89 s, respectively. Therefore, the method proposed in this paper is a simple and feasible method that does not depend on specialized hardware. Feature extraction In image recognition, a lot of redundant information exists in the original image, which seriously affects the classification accuracy. It is crucial for image recognition to choose an appropriate feature extraction method, so that the effective information is extracted while the feature dimension is reduced, which avoids the curse of dimensionality. Common methods for extracting texture features include the gray-level co-occurrence matrix, Tamura features, the wavelet transform, Gabor features, the Completed Local Binary Pattern (CLBP), etc. [39][40][41][42]. Common methods for extracting color features include color histograms, color moments, and the color auto-correlogram. In this paper, the CLBP method is used to extract the texture features of the breast cancer histopathological images obtained from nuclei segmentation, and the color auto-correlogram is used to extract the color features of the foreground images of the breast cancer histopathological images obtained from stain separation. Completed Local Binary Pattern (CLBP) CLBP is a variant of the Local Binary Pattern (LBP). The local area of the CLBP operator is represented by its center pixel and the sign-magnitude transform of the local differences. After global thresholding, the center pixel is encoded as a binary code, and this component is called the center gray level of the completed local binary pattern (CLBP_C). Meanwhile, the sign-magnitude transform of the local differences is decomposed into two complementary structural components: the difference sign CLBP-Sign (CLBP_S) and the difference magnitude CLBP-Magnitude (CLBP_M).
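For illustration, the LASSO model written above can be solved with a LARS-based solver such as scikit-learn's LassoLars; the toy data sizes, the coefficient values and the alpha setting below are arbitrary choices of ours, not values from the paper:

import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))                  # n = 100 samples, m = 20 variables
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 0.7]                   # only three non-zero coefficients
y = A @ x_true + 0.05 * rng.normal(size=100)

model = LassoLars(alpha=0.01)                   # alpha plays the role of the L1 penalty weight
model.fit(A, y)
print(np.flatnonzero(model.coef_))              # the sparse solution keeps only a few variables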
For a pixel (x_c, y_c) in the image, the components CLBP_C, CLBP_S and CLBP_M are defined as follows: CLBP_C_{P,R}(x_c, y_c) = s(g_c − ḡ_N), CLBP_S_{P,R}(x_c, y_c) = Σ_{p=0}^{P−1} s(g_p − g_c)·2^p, CLBP_M_{P,R}(x_c, y_c) = Σ_{p=0}^{P−1} s(D_p − D̄)·2^p, with s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise, (5) where P is the number of sampling points in the neighborhood of the center pixel, R is the radius of the neighborhood, g_c is the gray value of the center pixel, ḡ_N = (1/N) Σ_{n=0}^{N−1} g_n represents the mean gray value around g_c as the center point moves, N is the number of windows, g_p is the gray value of the p-th pixel adjacent to the center pixel, D_p = |g_p − g_c|, and D̄ represents the mean magnitude of D_p. In Eq (5), CLBP_S_{P,R}(x_c, y_c) is equivalent to the traditional LBP operator, which describes the difference sign feature of the local window; CLBP_M_{P,R}(x_c, y_c) describes the difference magnitude characteristics of the local window; and CLBP_C_{P,R}(x_c, y_c) reflects the gray level information of the center pixel. Color auto-correlogram Color features are basic visual features of color images. Compared with other visual features, they are less dependent on the direction, size, and viewing angle of the image, and are related to the objects or scenes contained in the image. The color histogram describes the proportion of different colors in the entire image, but cannot describe the objects in the image. The color moment generally has only 9 components (3 color components, 3 low-order moments on each component), and the feature dimension is small, which makes it difficult to completely describe the color information of the image. The color auto-correlogram is obtained from the color correlogram. The color correlogram can not only reflect the proportion of the number of pixels of a certain color in the entire image, but also reflect the spatial correlation between different color pairs [43]. For an image I, let I_{c(i)} be the set of all pixels of color c(i); then the color correlogram can be written as: γ^(k)_{c(i),c(j)} = Pr[ |p_1 − p_2| = k ], with p_1 ∈ I_{c(i)} and p_2 ∈ I_{c(j)}, where |p_1 − p_2| represents the distance between p_1 and p_2 and Pr denotes probability. That is, the color correlogram can be regarded as a table indexed by a color pair <i, j>, where the k-th component of entry <i, j> represents the probability that the distance between a pixel with color c(i) and a pixel with color c(j) is equal to k. If the correlation between all pairs of colors in the image is considered, the color correlogram of the image will be very complicated and huge. If only the spatial relationship between pixels with the same color is considered, the color correlogram becomes the color auto-correlogram. Due to the limitations of color histograms and color moments, the color auto-correlogram is used to describe the color features of breast cancer histopathological images in this paper. In this paper, CLBP is applied to extract the texture features of the images obtained from nuclei segmentation. With P = 8 and R = 1, a 118-dimensional feature vector is obtained. The color auto-correlogram method is used to extract a 128-dimensional feature vector as the color feature of the breast cancer histopathological image obtained from stain separation. The above two features are cascaded and input into the SVM for breast cancer histopathological image recognition. Dataset The breast cancer histopathological image data used in this paper is the Bioimaging Challenge 2015 Breast Histology Dataset [16].
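As a simplified illustration of the feature pipeline described above, the sketch below computes a uniform-LBP histogram as a stand-in for the 118-dimensional CLBP descriptor, pairs it with a placeholder 128-dimensional color feature, and cascades the two. A full CLBP and color auto-correlogram implementation is longer than is useful here, so the function names, the LBP substitution and the placeholder color feature are assumptions of ours rather than the authors' code:

import numpy as np
from skimage.feature import local_binary_pattern

def texture_histogram(gray_patch, P=8, R=1):
    # Uniform-LBP histogram; a simplified stand-in for the CLBP descriptor.
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def fused_feature(gray_patch, color_feature):
    # Cascade (concatenate) the texture feature and the color feature, as described above.
    return np.concatenate([texture_histogram(gray_patch), np.asarray(color_feature)])

patch = np.random.default_rng(0).integers(0, 256, size=(512, 512)).astype(np.uint8)
color_feat = np.zeros(128)                      # placeholder for the 128-dim color auto-correlogram
print(fused_feature(patch, color_feat).shape)   # (10 + 128,) in this simplified sketch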
All images in this dataset are digitized under the same acquisition conditions, with a magnification of 200× and a pixel size of 0.42 μm × 0.42 μm (2048 × 1536 pixels). The images are stained with Hematoxylin and Eosin (H&E). Due to the characteristics of hematoxylin and eosin, the protein in the histopathological images is stained pink by eosin, and hematoxylin stains the cell nuclei blue-purple. All images are divided into four categories: normal, benign, in situ and invasive. Normal and benign tissues can be categorized as non-carcinoma, and in situ carcinoma and invasive carcinoma can be categorized as carcinoma, as shown in Fig 7. The images were labeled by two experienced pathologists, and the images with disagreements between the pathologists were discarded. The dataset consists of a training set of 249 images and a test set of 36 images (where 16 images have increased ambiguity, called the extended test data). Table 1 shows the distribution of the dataset. Fig 8 shows the segmentation results of the proposed segmentation method for the complete image. Experimental setup In this paper, all the algorithms were implemented in Matlab R2019a on a computer with a Windows 10 64-bit Professional platform and 8 GB RAM. A series of pre-processing steps is applied to the breast cancer histopathological images in the Bioimaging 2015 dataset. The original images are scaled by 0.5 times to obtain images with a size of 1024 × 768. Then, 20 image patches with a size of 512 × 512 are randomly cropped from each scaled image. If the number of cropped image patches is too small, it is difficult to ensure that the patches contain complete image information, and if the number of cropped image patches is too large, they may contain redundant information, so we choose to crop 20 image patches, which ensures that the patches contain enough information while avoiding redundant information. These two steps not only preserve the effective information of the original images, but also augment the dataset reasonably, and randomly cropping the images reduces the contingency of the experimental results. An SVM with a radial basis kernel function is used as the classifier to classify the tumors into non-carcinoma and carcinoma, where the penalty parameter c is 2 and the kernel function parameter g is 1. The image patches and the whole images are studied separately in the experiments. The image labels are obtained by majority voting, that is, for each test image, if more than 10 image patches are classified as non-carcinoma, the image is classified as non-carcinoma, otherwise it is classified as carcinoma. In addition to the classification accuracy, the sensitivity, specificity, precision and F1_score are also used as metrics for evaluating the classification performance at the patch-wise and image-wise levels. The sensitivity represents the probability that carcinoma samples are correctly diagnosed among all carcinoma samples, the specificity represents the probability that non-carcinoma samples are correctly diagnosed among all non-carcinoma samples, the precision represents the probability of correctly diagnosed carcinoma samples among samples that are diagnosed as carcinoma, and the F1_score is the harmonic mean of the sensitivity and the precision, which is used to measure the balance between the two metrics. The formulas of the evaluation metrics are as follows [44].
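A minimal Python sketch of the classification step described above, using the reported SVM settings (RBF kernel, penalty parameter c = 2, kernel parameter g = 1) and the 20-patch majority vote; the synthetic feature matrices are placeholders for the fused 246-dimensional features (118 CLBP + 128 color auto-correlogram), and this is not the authors' Matlab implementation:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(400, 246))           # placeholder fused features (118 CLBP + 128 color)
y_train = rng.integers(0, 2, size=400)          # 0 = non-carcinoma, 1 = carcinoma (synthetic labels)

# RBF-kernel SVM with the reported settings: penalty parameter c = 2, kernel parameter g = 1.
clf = SVC(kernel="rbf", C=2.0, gamma=1.0).fit(X_train, y_train)

def image_label(patch_features, threshold=10):
    # Majority vote over the 20 patches of one test image: if more than `threshold`
    # patches are predicted non-carcinoma (0), the image is non-carcinoma, else carcinoma.
    patch_preds = clf.predict(patch_features)
    return 0 if np.sum(patch_preds == 0) > threshold else 1

test_patches = rng.normal(size=(20, 246))
print(image_label(test_patches))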
Se = TP / (TP + FN), Sp = TN / (TN + FP), Pr = TP / (TP + FP), Acc = (TP + TN) / (TP + TN + FP + FN), F1_score = 2 × Pr × Se / (Pr + Se), where true positive (TP) represents the number of carcinoma samples classified as carcinoma, true negative (TN) represents the number of non-carcinoma samples classified as non-carcinoma, false positive (FP) represents the number of non-carcinoma samples incorrectly classified as carcinoma, and false negative (FN) represents the number of carcinoma samples misclassified as non-carcinoma. Comparison of different color feature methods. To obtain the best color features of breast cancer histopathological images for classification, the color histogram, the color moments and the color auto-correlogram are used to extract the corresponding color features before and after stain separation, and the classification performances of the different color features are compared. For convenience, the color histogram is abbreviated as Color-Hist, the color moment is abbreviated as Color-Mome, and the color auto-correlogram is abbreviated as Color-Auto-Corr; the color features and their abbreviations are shown in Table 2. The comparison results at the patch-wise and image-wise levels are shown in Tables 3 and 4. The experimental results from Tables 3 and 4 show that the color histogram features perform the best for breast cancer images without stain separation. However, the color auto-correlogram features obtain the best performance after stain separation. From Tables 3 and 4, it is also observed that when the color auto-correlogram method is used to extract the color features of the breast cancer images obtained from stain separation, the classification accuracy, sensitivity, specificity, precision and F1_score at the patch-wise level are 75.97%, 68.33%, 83.61%, 80.66% and 73.99%, respectively, and those at the image-wise level are 88.89%, 77.78%, 100%, 100% and 87.50%, respectively. Therefore, the color auto-correlogram features after stain separation are chosen to be fused with the CLBP texture features after nuclei segmentation, and the fused features are used as the input of the SVM for the final classification of breast cancer histopathological images. It should be noted that the original images mentioned in this section all refer to image patches with a size of 512 × 512 obtained by random cropping, as opposed to the stain-separated images and the nuclei segmentation images. Comparison of image segmentation results under different conditions. To verify the effectiveness of the two-stage nuclei segmentation strategy proposed in this paper for the classification of breast cancer histopathological images, the CLBP texture features are extracted directly from the original images, from the images obtained by watershed segmentation on the original images, and from the nuclei segmentation images obtained by the two-stage nuclei segmentation strategy, respectively. The fused features indicate the fusion of the CLBP texture features and the color auto-correlogram features. The compared results of the CLBP features and the fused features are shown in Tables 5 and 6 at the patch-wise and image-wise levels, respectively, where the watershed segmentation on the original images is abbreviated as watershed segmentation. From Tables 5 and 6, the experimental results show that the classification accuracy of the two-stage nuclei segmentation strategy proposed in this paper is better at both the patch-wise and image-wise levels.
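The metric formulas above can be computed directly from the confusion-matrix counts. The sketch below does so and, as a consistency check, reproduces the reported image-wise values (accuracy 91.67%, sensitivity 83.33%, specificity 100%, precision 100%, F1_score 90.91%) under the assumption that the 36 test images split evenly into 18 carcinoma and 18 non-carcinoma; that split is our assumption, not a figure stated in this section:

def classification_metrics(tp, tn, fp, fn):
    se = tp / (tp + fn)                          # sensitivity
    sp = tn / (tn + fp)                          # specificity
    pr = tp / (tp + fp)                          # precision
    acc = (tp + tn) / (tp + tn + fp + fn)        # accuracy
    f1 = 2 * pr * se / (pr + se)                 # F1_score
    return acc, se, sp, pr, f1

# With an assumed 18/18 split of the 36 test images, tp = 15, tn = 18, fp = 0, fn = 3
# reproduces the reported image-wise values 91.67%, 83.33%, 100%, 100% and 90.91%.
print(classification_metrics(15, 18, 0, 3))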
The fused features of the CLBP features extracted from the nuclei segmentation images obtained by the two-stage nuclei segmentation strategy and the color auto-correlogram features after stain separation perform better than those of the other image types. From Tables 5 and 6, we also observe that the classification accuracy, sensitivity, specificity, precision and F1_score at the patch-wise level are 82.22%, 72.22%, 92.22%, 90.28% and 80.25%, respectively, and those at the image-wise level are 91.67%, 83.33%, 100%, 100% and 90.91%, respectively. Comparison of different segmentation methods. To verify the validity of the two-stage nuclei segmentation strategy proposed for breast cancer histopathological images in this paper, k-means clustering segmentation, Otsu threshold segmentation, minimum error threshold segmentation and iterative threshold segmentation are compared on the Bioimaging 2015 dataset for classifying breast tumors as non-carcinoma or carcinoma. For convenience, k-means clustering segmentation is abbreviated as k-means, Otsu threshold segmentation is abbreviated as Otsu, minimum error threshold segmentation is abbreviated as Min-Error, and iterative threshold segmentation is abbreviated as Iter; the segmentation methods and their abbreviations are shown in Table 7. All the compared methods have the same experimental conditions. For every segmentation method, two different kinds of feature extraction are adopted to perform the classification of the breast histopathological images, corresponding to two classification experiments: classification on the CLBP features extracted after nuclei segmentation, and classification on the fused features of the CLBP features and the color auto-correlogram features. The experimental results are shown in Tables 8 and 9. From Tables 8 and 9, it is observed that the proposed two-stage nuclei segmentation strategy has obvious advantages over the other four compared segmentation methods at both the patch-wise and image-wise levels, and k-means clustering segmentation has better performance than the other three segmentation methods. It is worth noting that these segmentation methods have better classification results on the fused features than on the CLBP features extracted from the nuclei segmentation images. The corresponding classification accuracy, sensitivity, specificity, precision and F1_score values are reported in Tables 8 and 9. Fig 9 shows the comparison of the classification performances at the patch-wise and image-wise levels with the fused features. From Fig 9 we can see the advantages of the proposed method over the other segmentation methods more clearly and intuitively. Therefore, the two-stage nuclei segmentation strategy proposed in this paper is superior to the other compared segmentation methods. In order to compare the recognition performance of the proposed method with the other segmentation methods more intuitively, the ROC curves and AUC values of the different methods are compared, as shown in Fig 10. From Fig 10, it can be seen that the proposed method significantly outperforms the other methods in recognition performance, whether patch-wise or image-wise. Results on the ICIAR 2018 challenge dataset. We tested the proposed method on the ICIAR 2018 dataset, which is an extended version of the Bioimaging 2015 dataset with the same image size and magnification [7].
The ICIAR 2018 dataset consists of 400 breast histology images for training purposes and a separate hidden test set consisting of 100 images. We tested our method on this dataset by dividing its training set, using 70% as the training set, 20% as the validation set and 10% as the test set. The classification accuracy, sensitivity, specificity, precision and F1_score at the patch-wise level are 84.38%, 81.50%, 87.25%, 86.47% and 83.91%, respectively, and those at the image-wise level are 92.50%, 90.00%, 95.00%, 94.74% and 92.31%, respectively. The results are shown in Table 10. This result is competitive with existing methods. The ROC curves and AUC values of the results are shown in Fig 11. 4.3.5 Comparison of the current methods and the proposed method. To further verify the effectiveness of the two-stage nuclei segmentation strategy proposed in this paper, the classification accuracy of the proposed method and of the current methods for breast cancer histopathological image classification at the image-wise level are compared. Table 11 shows the comparison of the classification performance of the proposed method and the existing methods on the Bioimaging 2015 dataset. It is observed from Table 11 that the proposed two-stage nuclei segmentation strategy is significantly better than the methods in [16,21,23] on the same dataset, but does not perform as well as the method in [24]. However, these related works all use deep learning algorithms; the advantage of deep learning is that it can achieve higher recognition accuracy, but the disadvantage is that a large number of labeled breast cancer histopathological images are required, and optimizing a large number of parameters also consumes considerable experimental time. The method in this paper performs well in recognizing carcinoma and non-carcinoma breast cancer histopathological images, is competitive in carcinoma and non-carcinoma recognition, and can, to a certain extent, effectively replace deep learning algorithms in breast cancer histopathology image recognition. Evaluation metrics of segmentation In this paper, the Dice coefficient and the Hausdorff distance are used as evaluation metrics to measure the quality of the segmentation results. The Dice coefficient reflects more regional information and the Hausdorff distance reflects more edge information. The evaluation metrics are calculated as shown in formulas (12) and (13): D = 2|X ∩ Y| / (|X| + |Y|), (12) H(X, Y) = max{ max_{x∈X} min_{y∈Y} ||x − y||, max_{y∈Y} min_{x∈X} ||x − y|| }, (13) where D is the Dice coefficient, H(X, Y) is the Hausdorff distance, X is the prediction result and Y is the ground truth. The Bioimaging 2015 dataset mainly involves classification research and is not a dataset dedicated to segmentation, so ground truth segmentation masks are not included in the dataset. Therefore, we perform binarization under the same parameters for all images through threshold segmentation, take the obtained binary images as approximate ground truth, and calculate the Dice coefficient and the Hausdorff distance to evaluate the performance of the proposed segmentation method. When calculating the Dice coefficient, we average the Dice coefficients of all images, and take the maximum value among the Dice coefficients of each category. As described in Section 4, k-means and our proposed method outperform the other compared methods. Therefore, in this section, we compare k-means and our proposed method using the Dice coefficient and the Hausdorff distance. The results are shown in Table 12.
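For reference, formulas (12) and (13) can be evaluated on binary masks as in the sketch below; the use of scipy's directed_hausdorff on foreground pixel coordinates and the toy masks are our illustrative choices, not the authors' evaluation code:

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    # Dice = 2|X ∩ Y| / (|X| + |Y|) for binary masks (Eq (12)).
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hausdorff_distance(pred, gt):
    # Symmetric Hausdorff distance between the two foreground pixel sets (Eq (13)).
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

a = np.zeros((64, 64), bool); a[10:30, 10:30] = True     # toy prediction mask
b = np.zeros((64, 64), bool); b[12:32, 12:32] = True     # toy (approximate) ground truth
print(dice_coefficient(a, b), hausdorff_distance(a, b))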
The results show that the Dice coefficient of the proposed method is greater than that of the k-means clustering segmentation method, and the Hausdorff distance is smaller than that of the k-means clustering segmentation method, which shows that the method proposed in this paper is superior to the k-means clustering segmentation method in terms of segmentation performance. However, the value of the Dice coefficient is not very high, which may be caused by the fact that we do not have the real ground truth but replace it with the binary image obtained under the same parameters; this approximate way of replacing the ground truth can only be used as a reference to a certain extent and cannot fully evaluate the segmentation performance. Discussion and conclusion The nuclei segmentation of histopathological images is of great significance for cancer diagnosis, grading and prognosis. The application of morphological standards in visual classification improves the accuracy of CAD systems and reduces human diagnosis errors. In this paper, a two-stage nuclei segmentation strategy, that is, a method of watershed segmentation applied to histopathological images after stain separation, is proposed to perform carcinoma versus non-carcinoma recognition on the Bioimaging 2015 dataset. Compared with k-means clustering segmentation, Otsu threshold segmentation, minimum error threshold segmentation and iterative threshold segmentation, the proposed two-stage nuclei segmentation strategy performed the best, with a classification accuracy of 91.67%, a sensitivity of 83.33%, a specificity of 100%, a precision of 100% and an F1_score of 90.91%. In addition, compared with the current classification methods for breast cancer histopathological images, the proposed two-stage nuclei segmentation strategy is also competitive and shows good classification performance. It is worth noting that images with darker color and clearer imaging have better stain separation effects and better image segmentation results. Therefore, the method proposed in this paper is affected by the image itself to a certain extent, such as the color depth and the clarity of the image. In future work, we will explore better nuclei detection and localization methods to improve the effect of nuclei segmentation for histopathological images, and we will explore better feature extraction and fusion methods to further improve the classification performance of breast cancer histopathological images.
9,282
2022-04-28T00:00:00.000
[ "Computer Science" ]
Thyroid Hormone Receptor α Controls the Hind Limb Metamorphosis by Regulating Cell Proliferation and Wnt Signaling Pathways in Xenopus tropicalis Thyroid hormone (T3) receptors (TRs) mediate T3 effects on vertebrate development. We have studied Xenopus tropicalis metamorphosis as a model for postembryonic human development and demonstrated that TRα knockout induces precocious hind limb development. To reveal the molecular pathways regulated by TRα during limb development, we performed chromatin immunoprecipitation- and RNA-sequencing on the hind limb of premetamorphic wild type and TRα knockout tadpoles, and identified over 700 TR-bound genes upregulated by T3 treatment in wild type but not TRα knockout tadpoles. Interestingly, most of these genes were expressed at higher levels in the hind limb of premetamorphic TRα knockout tadpoles than stage-matched wild-type tadpoles, suggesting their derepression upon TRα knockout. Bioinformatic analyses revealed that these genes were highly enriched with cell cycle and Wingless/Integrated (Wnt) signaling-related genes. Furthermore, cell cycle and Wnt signaling pathways were also highly enriched among genes bound by TR in wild type but not TRα knockout hind limb. These findings suggest that direct binding of TRα to target genes related to cell cycle and Wnt pathways is important for limb development: first preventing precocious hind limb formation by repressing these pathways as unliganded TR before metamorphosis and later promoting hind limb development during metamorphosis by mediating T3 activation of these pathways. Introduction Thyroid hormone (T3) is essential for organ metabolism and animal development in all vertebrates, especially during postembryonic development, a period around birth in mammals when plasma T3 level reaches the peak [1][2][3][4]. During this period, many organs, including the intestine and brain, are drastically remodeled to the adult form with distinct morphology compared to the fetal organs [5]. Thus, low T3 availability in humans causes cretinism characterized by profound mental retardation, short stature, and impaired development of the neuromotor and auditory systems [6]. T3 receptors (TRs) are members of the nuclear hormone receptor superfamily. TR and 9-cis retinoic acid receptors (RXRs) form complexes and bind to T3 response elements (TREs). In the absence of T3, these complexes recruit corepressors such as nuclear receptor corepressor (N-CoR) and silencing mediator of retinoid and thyroid hormone receptor (SMRT) and reduce histone acetylation, while in the presence of T3, TR-RXR complexes recruit coactivators such as P300 and steroid receptor coactivators (SRCs) [1,2,4,[7][8][9][10][11][12][13][14][15][16] and activate target gene expression, likely in part via histone acetylation. In mammals, the two TR genes have several alternative mRNA splicing products with distinct tissue distributions, including TRβ1, TRβ2, and TRα1, which can bind to T3, as well as TRα2, which is incapable of binding to T3 [17]. TRβ1 is expressed mainly in the inner ear, retina, and liver, while TRβ2 is predominantly expressed in the hypothalamus and pituitary [18,19]. TRα1 is predominantly expressed in the intestine, bone, muscle, heart, and the central nervous system, and its expression is activated earlier than T3 synthesis during vertebrate development [20,21]. However, the roles of TRs during postembryonic development in vertebrates are largely unknown.
The main reason is the difficulty of studying mammalian embryos and neonates that depend on maternal supply for survival. We studied T3 functions during Xenopus metamorphosis as a model for human postembryonic development. The changes during this process, including intestinal and brain remodeling, are regulated by T3 and resemble those occurring during mammalian postembryonic development [3,4,22]. Unlike any mammalian models, Xenopus develops in a biphasic process (embryogenesis and subsequent metamorphosis), making it easy to manipulate without maternal influence. By using knockout technology, several research groups, including ours, have generated TRα, TRβ, and TRα and TRβ double knockout Xenopus tropicalis animals to analyze the function of TR during metamorphosis [22][23][24][25][26]. These studies have demonstrated various TR subtype- and organ-dependent effects of TR knockout [27]. In particular, hind limb formation appears to occur regardless of TR knockout but with distinct developmental timing and rates. The expression of TRα in the hind limb peaks around stage 52 when the hind limb begins to form, and there is little plasma T3 [28,29]. Knocking out TRα, but not TRβ, induces precocious hind limb development, indicating that TRα plays a critical role in regulating the timing and rate of limb development [22][23][24][25][26]. To understand the molecular pathways regulated by TR, particularly TRα, during limb development, it is critical to identify TR target genes and reveal their expression changes during metamorphosis. Therefore, we carried out global RNA-seq and ChIP-seq analyses on wild type and TRα knockout hind limb and uncovered TRα-regulated biological pathways controlling limb development in Xenopus tropicalis. Here, we report the identification of over 700 TR-bound genes upregulated by T3 treatment in wild type but not TRα knockout tadpoles in the hind limb and evidence for the involvement of cell cycle and Wnt pathways during hind limb formation in response to T3. As Wnt signaling is also a well-known key pathway for limb organogenesis in mammals, including humans [30], our findings suggest that T3 regulates the timing of hind limb formation by regulating conserved pathways during limb development. Direct Target Genes of T3 in Wild Type Hind Limb at the Onset of the Metamorphosis It has been reported that hind limb development can occur even in the absence of any of the two TR genes [22], although hind limb developmental timing and rates are altered in the absence of TRα or both TR genes [23,25,31]. To reveal how limb development is regulated by TR, particularly TRα, we carried out ChIP-seq to identify direct target genes of T3 in the hind limb at stage 54, the onset of natural metamorphosis ( Figure S1, Tables S1 and S2). As a result, we identified 3425 and 2495 genes for wild-type hind limb from tadpoles without and with T3 treatment, respectively, or a total of 3714 TR-bound genes ( Figure 1A, Table S3). When Gene Ontology (GO) analysis was performed on these 3714 TR-bound genes, we observed that the GO terms related to the development and cellular processes such as cell cycle were among the most significantly enriched GO terms (Figure 1B, Supplemental Table S4). Similarly, pathway analysis also identified that pathways related to the development and cellular processes such as Wnt and Hedgehog signaling were among the most significantly enriched pathways ( Figure 1C and Supplemental Table S5).
These findings suggest that T3 directly targets genes involved in developmental and cell cycle-related processes, such as Wnt and Hedgehog signaling, in the hind limb at the onset of metamorphosis. Direct Target Genes of TRα in Hind Limb at the Onset of the Metamorphosis Given that knockout of TRα but not TRβ has a significant effect on limb development, we were interested in identifying TRα-regulated genes in the hind limb at the onset of metamorphosis. Using ChIP-seq, we identified 1130 and 2339 genes that remained bound by TR, presumably TRβ encoded by the remaining TRβ gene, in the TRα knockout hind limb in the absence or presence of T3, respectively (Figure 2 and Supplemental Tables S6-S8). In total, there were 2499 genes bound by TR in the TRα (-/-) hind limb, and most of these genes were common between TRα (-/-) hind limb with and without T3 treatment (Figure 2A), suggesting that most genes were bound by TR constitutively in both wild type and TRα (-/-) hind limb. Comparing TR-bound genes in the wild type and TRα (-/-) hind limb revealed 1407 genes bound by TR only in wild type, i.e., representing 37.9% (1407/3714) of all TR-bound genes in wild-type hind limb ( Figure 2B), suggesting that TRα plays a critical role in binding TR target genes in the limb at the onset of metamorphosis. To determine the biological processes and signaling pathways most likely affected by TRα knockout, we carried out GO and pathway analyses on the 1407 genes bound by TR only in the wild-type hind limb. We found that GO terms related to developmental processes and cell cycle were among the most significantly enriched ( Figure 2C and Supplemental Table S9). Pathway analysis also revealed the enrichment of developmental processes and cell cycle networks, including the Wnt signaling pathway, similar to the findings based on all TR-bound genes in wild-type hind limb ( Figure 2D and Supplemental Table S10), suggesting that TRα is important for regulating the developmental processes and pathways by T3 to control cell proliferation and limb growth. Gene Regulation by T3 in Wild Type and TRα (-/-) Hind limb To determine the effect of TRα knockout at the gene expression level, we carried out RNA-seq analysis (Supplemental Figure S2) and compared the expression of all TR-bound genes as before [32]. The heatmaps of T3-induced gene expression changes revealed that much higher fractions of the TR-bound genes were upregulated or downregulated by T3 for the wild type-specific or common TR-bound genes than for TRα (-/-)-specific TR-bound genes (Figure 3), similar to the observations in the intestine [33]. In addition, among the genes regulated by T3, TRα knockout reduced T3-regulation, i.e., lower folds of upregulation or downregulation ( Figure 3). TRα Knockout Leads to Derepression of T3 Response Genes and Precocious Activation of Hind Limb Development Program To determine the molecular mechanisms underlying the precocious hind limb formation observed in TRα (-/-) tadpoles, we compared the global gene expression profiles between wild type and TRα (-/-) hind limb at stage 54 without any T3 treatment. We found that 1938 genes had higher expression in TRα (-/-) hind limb compared with wild-type hind limb ( Figure 4A and Supplemental Table S11). GO analysis of these 1938 upregulated, or derepressed, genes revealed significant enrichment of genes in GO terms related to cell cycle and developmental processes ( Figure 4B and Supplemental Table S12). Similarly, pathway analysis showed significant enrichment of genes in biological pathways associated with cell cycle and development ( Figure 4C, Table S13).
These findings suggest that unliganded TRα functions to repress GO terms or pathways associated with cell proliferation and development in the hind limb of premetamorphic tadpoles to prevent precocious development before stage 54. Figure 2. (A) Venn diagram comparison of the genes bound by TR in TRα (-/-) hind limb with and without T3 treatment. As in wild type ( Figure 1A), most genes were bound by TR constitutively. (B) Venn diagram comparison of all genes detected by ChIP-seq in wild type (WT) and TRα (-/-) hind limb. Of the 3714 TR target genes in wild-type hind limb, nearly 62% or 2307 genes were bound by TR in TRα (-/-) hind limb, presumably by TRβ. (C) GO analyses were performed using MetaCore software on the 1407 genes bound by TR in wild type but not TRα (-/-) hind limb. The top 10 most significant GO terms related to cell cycle and development were plotted here. (D) The pathways enriched among the 1407 genes bound by TR in wild type but not TRα (-/-) hind limb included those related to the development and Wnt signaling. The top 10 most significant pathways were plotted here. Figure 3. RNA samples were isolated from wild type and TRα (-/-) hind limb with and without 18 h T3 treatment and subjected to RNA-seq analyses. Note that a much higher fraction of the genes in each of the three classes of TR-bound genes were upregulated (red) or downregulated (blue) by T3 in the wild-type animal hind limb. In addition, TRα knockout reduced the magnitudes of T3-regulation, i.e., leading to lighter red or blue, for individual genes in the knockout hind limb, suggesting that TRα is important for gene regulation by T3. Note that the blank regions between the red and blue areas were genes whose expression has no or little change after T3 treatment of wild type or TRα knockout animals. The color range shows fold changes, with the darkest red or blue colors showing 4-fold changes or more for the individual genes.
Figure 4. [Legend opening truncated] … genes up- or downregulated by 2-fold or more due to TRα knockout. Genes whose expression levels in the hind limbs of stage 54 tadpoles differed by 2-fold or more between wild type and TRα knockout are shown as wild type- or TRα (-/-)-specific. Note that 1938 genes were upregulated (derepressed) and 1114 genes were downregulated when comparing expression in the TRα knockout hind limb with that in the wild-type hind limb at stage 54. (B,C) Many GO terms and biological pathways related to cell cycle and development are enriched among genes upregulated (derepressed) by knocking out TRα. GO and pathway analyses were performed on the 1938 genes upregulated (derepressed) in TRα knockout hind limb compared to the wild-type hind limb. The enriched GO terms or pathways were sorted by FDR value, and the ten most significant GO terms related to cell cycle and development (B) or the ten most enriched cell cycle pathways (C) are plotted.

We next compared the expression of genes in the hind limb of wild-type tadpoles with or without T3 treatment and found that a total of 3552 genes were upregulated and 3733 downregulated by two-fold or more after 18 h of T3 treatment (Figure 5A and Supplemental Table S14). A similar analysis of the TRα (-/-) tadpoles identified only 1090 upregulated and 1400 downregulated genes in the hind limb after T3 treatment (Figure 5A and Supplemental Table S15), indicating that TRα knockout had a broad effect not only on T3-upregulated genes but also on downregulated ones. When we compared the T3-regulated genes in the wild-type hind limb to those in the TRα knockout hind limb, we identified 2638 and 2782 genes that were up- and downregulated, respectively, in the wild type but not the TRα knockout hind limb (Figure 5B and Supplemental Table S16). GO analysis of the 2638 genes upregulated by T3 only in the wild-type hind limb demonstrated enrichment of GO terms involved in cell proliferation and development (Figure 5C and Supplemental Table S17). Likewise, pathway analysis showed enrichment of cell cycle-related canonical pathways (Figure 5D and Supplemental Table S18). As TRα knockout slows down limb development during metamorphosis when T3 is present (stages 54-58), these findings suggest an important role of TRα in mediating the T3 signal to activate cell cycle genes and promote cell cycle progression in hind limb development between stage 54 and stage 58, when limb development is essentially complete.
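The classification of T3-regulated genes above (two-fold change or more; the Figure 5 legend below adds padj < 0.05) is a simple filter on differential-expression output. A sketch in Python/pandas, assuming a DESeq2-style results table exported to CSV; the file and column names (log2FoldChange, padj) are assumptions based on DESeq2 defaults, not the study's actual files:

```python
# Classify genes as T3-upregulated or -downregulated using the cutoffs
# stated in the text: two-fold change or more and padj < 0.05.
import pandas as pd

res = pd.read_csv("deseq2_results.csv", index_col=0)  # hypothetical export

significant = res["padj"] < 0.05                       # NaN padj evaluates False
up = res[significant & (res["log2FoldChange"] >= 1)]   # >= 2-fold up
down = res[significant & (res["log2FoldChange"] <= -1)]  # >= 2-fold down

print(f"upregulated: {len(up)}, downregulated: {len(down)}")
up.index.to_series().to_csv("wt_t3_upregulated.txt", index=False, header=False)
down.index.to_series().to_csv("wt_t3_downregulated.txt", index=False, header=False)
```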
Figure 5. GO terms and biological pathways related to the cell cycle are highly enriched among genes whose regulation by T3 in the hind limb is abolished by TRα knockout. (A) Venn diagram analysis for genes upregulated or downregulated by T3 in wild type (WT, left) and TRα (-/-) (right) hind limb (two-fold or more and padj < 0.05). Genes whose expression levels in the T3-treated vs. control hind limb differed by 2-fold or more for either wild type (WT) or TRα knockout (TRα (-/-)) tadpoles are shown as up- or downregulated genes, respectively, for each genotype, while the rest of the genes are shown as common. Note that many more genes were up- or downregulated by T3 in the WT compared to TRα knockout tadpoles.
(B) Venn diagram comparison of T3-upregulated (left) or downregulated (right) genes in WT hind limb to those in TRα (-/-) hind limb reveals 2638 and 2782 genes that are up- or downregulated by T3 only in WT, respectively (i.e., TRα-dependent T3 target genes). Note that most genes regulated by T3 in the TRα (-/-) hind limb were also regulated by T3 in the WT hind limb, while most genes regulated by T3 in the WT hind limb were not regulated by T3 in TRα (-/-) hind limb, indicating a major role of TRα in gene regulation by T3 in the hind limb. (C,D) Cell cycle-related GO terms (C) and biological pathways (D) are highly enriched among genes upregulated by T3 only in the WT hind limb. GO and pathway analyses were performed on the 2638 genes upregulated by T3 only in the WT hind limb. The enriched GO terms and biological pathways were sorted by FDR value. The top ten enriched cell cycle-related GO terms (C) and pathways (D) are shown.

TRα Knockout Reduces the Number of TR-Bound Genes Regulated by T3 in the Hind Limb

We next compared the TR-bound genes with genes upregulated in the wild-type hind limb after T3 treatment. We found that 899 genes were both upregulated by T3 and bound by TR, representing 24% of TR-bound genes and 25% of T3-upregulated genes (Figure 6A). In addition, 692 genes were both downregulated by T3 and bound by TR, representing 19% of TR-bound genes and 19% of T3-downregulated genes (Figure 6B). Thus, overall, about 43% of TR-bound genes were either up- or downregulated by T3. Considering that not all T3-regulated genes are direct T3 response genes and that not all TR-bound genes are regulated by T3 at a single time point of T3 treatment, the 43% overlap is highly significant, suggesting that most, if not all, TR-bound genes are direct T3 response genes in the wild-type hind limb. On the other hand, for the TRα (-/-) hind limb, only 182 genes were both upregulated by T3 and bound by TR, representing only 7% of TR-bound genes and 17% of T3-upregulated genes (Figure 6C). Additionally, 162 genes were both downregulated by T3 and bound by TR in TRα (-/-) hind limb, representing 6% of TR-bound genes and 12% of T3-downregulated genes (Figure 6D). In total, only 14% of TR-bound genes were either up- or downregulated by T3 in TRα (-/-) hind limb. Thus, in TRα (-/-) hind limb, the fraction of TR-bound genes (presumed to be bound by TRβ) regulated by T3 was much lower than in the wild-type hind limb. When we compared the 1407 genes bound by TR only in wild type, or the 2307 genes bound by TR in both wild type and TRα (-/-) hind limb, with the 899 genes bound by TR and upregulated by T3 in the wild-type hind limb, we found that about 24% of the TR-bound genes in either case were upregulated by T3 (Figure 6E,F). This is much higher than the 7% of genes bound by TR only in TRα (-/-) hind limb that were upregulated by T3 treatment of TRα knockout tadpoles. These findings suggest that TRα plays an important role in gene regulation by T3 during limb metamorphosis.
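The overlap statistics above are intersections of the TR-bound gene set with the T3-regulated sets. A minimal sketch of that computation follows; the gene-list files are hypothetical, so the printed fractions would match the paper's (e.g., 899 genes = 24% of TR-bound, 25% of T3-upregulated) only when run on the actual lists:

```python
# Intersect TR-bound genes with T3-regulated genes and report the
# fractions quoted in the text. File names are hypothetical.

def load_genes(path):
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

tr_bound = load_genes("wt_tr_bound_genes.txt")
t3_up = load_genes("wt_t3_upregulated.txt")
t3_down = load_genes("wt_t3_downregulated.txt")

for label, regulated in [("up", t3_up), ("down", t3_down)]:
    both = tr_bound & regulated
    print(
        f"TR-bound and T3-{label}regulated: {len(both)} genes "
        f"({len(both) / len(tr_bound):.0%} of TR-bound, "
        f"{len(both) / len(regulated):.0%} of T3-{label}regulated)"
    )
```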
Coordinate TR-Binding and T3-Regulation of Genes in Cell Cycle and Wnt/β-Catenin Signaling Pathways by TRα during Hind Limb Development

The above analyses revealed that TRα knockout affected many signaling pathways. Most significant among them were the cell cycle and Wnt/β-catenin signaling pathways, whose genes were highly enriched among those derepressed (Figure 4), or those that lost TR binding (Figure 2) or regulation by T3 (Figure 5), in TRα knockout hind limb at premetamorphic stage 54. These findings suggest that the changes in these pathways caused by TRα knockout may underlie its developmental effects on hind limb development. To investigate how the genes in these pathways are affected by TRα knockout, we examined TR binding and T3 regulation of individual genes in these pathways. We found that for both the cell cycle pathway (Figure 7) and the Wnt/β-catenin signaling process (Figure 8), many genes were coordinately bound by TR and upregulated by T3 in a TRα-dependent manner. For example, most of the TR-bound genes in the cell cycle pathway were bound by TR either only in wild type hind limb, e.g., cyclin B, or in both wild type and TRα (-/-) hind limb, e.g., cyclin D and CDK4, but their expression was upregulated by T3 only in wild type hind limb (Figure 7). Likewise, many genes in the Wnt/β-catenin pathway were bound by TR either only in wild type hind limb, e.g., casein kinase II, or in both wild type and TRα (-/-) hind limb, e.g., FOXM1 and GSK3β, while their expression was upregulated by T3 only in wild type hind limb (Figure 8). Interestingly, no gene in the cell cycle pathway (Figure 7) and only a single gene in the Wnt/β-catenin pathway (Figure 8) were bound by TR and downregulated by T3 during the 18 h treatment in wild type hind limb. Thus, TRα appears critical for direct binding and coordinated upregulation of T3 response genes to activate these pathways and promote cell proliferation and limb growth during metamorphosis.

Figure 8. TR binding and regulation of genes in the pathway for positive regulation of Wnt/β-catenin signaling in the nucleus based on ChIP-seq and RNA-seq data. The pathway for positive regulation of Wnt/β-catenin signaling was visualized with regard to genes regulated by T3 based on RNA-seq or bound by TR based on ChIP-seq. The arrows show functional interaction: green for activation. The red histograms labeled ① and ② show the genes upregulated by T3 in wild-type and TRα (-/-) hind limb, respectively. The blue histograms labeled ① and ② show the genes downregulated by T3 in wild-type and TRα (-/-) hind limb, respectively. Green circles indicate genes bound by TR uniquely in wild type hind limb without or with T3 treatment. Red circles indicate genes bound by TR uniquely in TRα (-/-) hind limb without or with T3 treatment. The black circle indicates genes bound by TR in both wild type and TRα (-/-) hind limb.
Note that, like the cell cycle pathway in Figure 7, most of the genes in the Wnt signaling pathway that were upregulated by T3 and bound by TR in wild type hind limb were not regulated by T3 in TRα (-/-) hind limb, suggesting that TRα plays an important role in mediating T3 regulation of this pathway during hind limb metamorphosis.

Discussion

Because of its total dependence on T3 and its easy manipulability without maternal influence, Xenopus metamorphosis has long served as a model to study postembryonic organ development, including tissue remodeling [4,34]. For T3-inducible genes such as TRβ, TR/RXR heterodimers function as repressors in the absence of T3 and as activators in the presence of T3. This property enables TR to play a dual role during Xenopus development [2,9-11,35-39]. Interestingly, recent TR knockout studies have revealed that neither TRα nor TRβ is essential for Xenopus metamorphosis, including de novo limb formation. On the other hand, knocking out TRα affects the timing and rate of hind limb development. TRα knockout enables the limb to develop earlier in premetamorphic tadpoles, suggesting that TRα inhibits hind limb formation by repressing T3-response gene expression as unliganded TR during premetamorphosis [23]. Conversely, TRα knockout also delays hind limb formation once T3 becomes available after stage 54 during metamorphosis [40]. Our global analyses of TR binding and gene expression here have revealed the likely molecular basis underlying the effect of TRα on hind limb development during Xenopus tropicalis development.

TRα Is Critical for Both the Binding of Many Target Genes by TR and Ensuring Sufficient Levels of TR Binding at Target Genes for Their Regulation by T3 during Limb Development

The first step in gene regulation by TR is the binding of TR to target genes in chromatin. TRα is highly expressed in the hind limb by stage 54, the onset of metamorphosis, while TRβ expression in the hind limb is very low but can be activated as a direct TR target gene upon T3 treatment of premetamorphic tadpoles. Our ChIP-seq analysis showed that TRα knockout drastically reduced the number of detectable TR-bound genes in the hind limb of stage 54 tadpoles, from 3714 in wild type to 2499 in TRα knockout tadpoles, indicating that TRα is important for TR binding to its target genes, consistent with the expression profiles of TRα and TRβ during limb development. While our ChIP-seq analysis does not allow quantitative comparison of the levels of TR binding at individual target genes between the wild type and knockout animals, it is likely that the levels of TR binding at the 2499 genes bound by TRβ, the only TR expressed in the TRα knockout tadpoles, were also lower at individual genes in TRα knockout hind limb compared with those in wild type hind limb, as also suggested by the qPCR analysis of independent ChIP studies (Supplemental Figure S1). This is because the total level of TR in the hind limb would be lower in the knockout tadpoles than in wild-type tadpoles. Thus, TRα can affect TR target genes both in the number of genes bound by TR and in the amount of TR bound to individual genes. Consistent with the ability of TR to function as a repressor of T3-inducible genes in the absence of T3, we found that many genes were upregulated, or derepressed, by TRα knockout in the premetamorphic hind limb at stage 54, when there is little T3.
Furthermore, T3 regulation of TR-bound genes in the hind limb was drastically reduced in TRα knockout tadpoles, in terms of both the fraction of genes regulated by T3 and the magnitude of regulation for individual genes (Figure 3). This was true for all three groups of TR-bound genes: those bound by TR only in wild-type tadpoles, only in TRα knockout tadpoles, or in both. Interestingly, among genes bound by TR only in the wild-type animals, a small fraction were still regulated by T3 in TRα knockout tadpoles, although at reduced magnitudes. This suggests that these genes were still bound by TRβ in TRα knockout tadpoles, at binding levels not detectable by ChIP-seq but sufficient for some regulation by T3. On the other hand, for genes bound by TR in both wild type and TRα knockout tadpoles, a much smaller fraction were regulated by T3, and at reduced magnitudes, in TRα knockout tadpoles. This was likely due to lower levels of TR binding to these genes in TRα knockout tadpoles, since only TRβ remained. The same appeared to be true for genes bound by TR only in TRα knockout tadpoles. Thus, TRα affects gene regulation at three levels: repressing T3 response genes in premetamorphic tadpoles; enabling more genes to be regulated by T3 by increasing the number of genes bound by TR, in part through increasing overall TR levels in the hind limb; and enhancing gene regulation by T3 through increased TR binding at individual genes.

TRα Regulates Pathways Such as Cell Cycle and Wnt/β-Catenin Signaling to Control the Timing and Rate of Limb Development

A significant change at the early stages of limb development is rapid cell proliferation. Thus, one would expect cell cycle pathways to be essential for limb development. In addition, early studies in different animal models have shown that the Hedgehog and Wnt/β-catenin signaling pathways are required for limb development [41,42]. Interestingly, our global analyses revealed that these pathways are controlled by TRα to function at different stages of limb development. First, knocking out TRα derepressed, or upregulated, T3-response genes in the premetamorphic limb, when little T3 is present. These genes were highly enriched for pathways and GO terms related to cell proliferation, the cell cycle, and Hedgehog and Wnt/β-catenin signaling. Interestingly, such pathways and GO terms were also enriched among genes whose TR binding became undetectable by ChIP-seq in the TRα knockout hind limb, supporting a derepression mechanism upon TRα knockout in the premetamorphic hind limb. Since TRα knockout causes precocious limb development, our findings suggest that TRα functions to repress these pathways in the premetamorphic wild-type hind limb, when there is little T3, to prevent premature limb development. Second, the pathways and GO terms related to cell proliferation, the cell cycle, and Hedgehog and Wnt/β-catenin signaling were enriched not only among genes that lost TR binding in TRα knockout hind limb but also among genes whose regulation by T3 was abolished in TRα knockout hind limb, providing a direct link between target gene binding and regulation by TRα. Furthermore, T3 is critical for limb development after metamorphosis begins at stage 54. Thus, once metamorphosis begins, TRα appears to control these pathways by increasing the number of genes bound by TR in these pathways and enhancing their regulation by T3. This, in turn, enhances the rate of limb development.
In summary, our study is the first report to identify molecular processes regulated by TRα during hind limb development. By analyzing the expression of T3-responsive genes via RNA-seq and direct TR binding to target genes via ChIP-seq, we have provided a comprehensive set of data on global gene regulation by TR, particularly TRα, and revealed the molecular processes involved in hind limb development. Of particular importance is the finding that TRα plays a central role in regulating the same groups of biological pathways, particularly those related to cell proliferation, the cell cycle, and Hedgehog and Wnt/β-catenin signaling, both to prevent precocious limb development in premetamorphic tadpoles and to promote limb development when T3 levels rise after the onset of metamorphosis at stage 54. Given the conservation of vertebrate development, including a critical role of T3 during postembryonic development in all vertebrates [1-4] and a key involvement of Wnt signaling in limb organogenesis in mammals [30], our findings suggest that further studies on anuran limb metamorphosis will not only enhance our understanding of the molecular mechanisms of limb development but also help reveal potential genes and pathways as possible targets for regenerative medicine, particularly to improve tissue repair and regeneration.

Animals

All Xenopus tropicalis experiments were approved by the Animal Use and Care Committee of the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), U.S. National Institutes of Health (NIH). Wild-type Xenopus tropicalis were purchased from NASCO (Fort Atkinson, WI, USA). TRα (-/-) Xenopus tropicalis were generated by crossing TRα (+/-) male and female frogs [23]. Embryos were reared in 0.1× Marc's Modified Ringer (MMR) in a 10 cm Petri dish for one day at 25 °C and then transferred to an 800 mL beaker for three days. Four days after fertilization, embryos were transferred into a large-volume (9 L) container and housed under a 15 h light/9 h dark cycle. Tadpoles were staged according to the description for Xenopus laevis [43].

Chromatin Immunoprecipitation-Sequencing (ChIP-Seq) and ChIP-PCR Analysis

Tadpoles treated with 10 nM T3 for 18 h, or without T3 treatment as the control, were sacrificed, and chromatin was isolated from the hind limbs of at least five tadpoles per sample as described [33,44]. The hind limbs were placed in 1 mL of nuclei extraction buffer (0.5% Triton X-100, 10 mM Tris-HCl, pH 7.5, 3 mM CaCl2, 0.25 M sucrose, 0.1 mM dithiothreitol, with a protease inhibitor tablet (Roche Applied Science, Complete, Mini, EDTA-free)) in Dounce homogenizers on ice and crushed with 20-25 strokes using pestle A (DWK Life Sciences (Kimble)). The homogenate was fixed in 1% formaldehyde with rotation at room temperature for 20 min before stopping the fixation with 0.1 M Tris-HCl, pH 9.5. The homogenate was then centrifuged at 2000× g at 4 °C for 2 min, and the pellet was resuspended in 1 mL of nuclei extraction buffer and re-homogenized in Dounce homogenizers with 10-15 strokes using pestle B. The homogenate was filtered through a Falcon 70 µm cell strainer and centrifuged at 2000× g at 4 °C for 2 min. The resulting pellet was resuspended in 200 µL of SDS lysis buffer (Merck Millipore Bioscience, Billerica, MA, USA) on ice and sonicated for 1 h using a Bioruptor UCD-200 (Diagenode, Sparta, Greece) with the output selector switch set on High (H). The samples were then centrifuged at 16,000× g for 10 min at 4 °C.
The chromatin in the supernatant was quantitated, adjusted to 100 ng DNA/µL with SDS lysis buffer, and frozen in aliquots at −80 °C. Before analysis, chromatin DNA was diluted to 10 ng/µL with ChIP dilution buffer (Merck Millipore Bioscience). After preclearing with salmon sperm DNA/protein A-agarose (Merck Millipore Bioscience), input samples were taken, and 500 µL of each chromatin sample was added to a 1.5 mL tube with anti-TR antibody or anti-ID14 control antibody [45] and salmon sperm DNA/protein A-agarose beads. The mixture was incubated with rotation for 4 h at 4 °C. After incubation, the chromatin immunoprecipitation assay was performed using a ChIP Assay Kit (Merck Millipore Bioscience) according to the manufacturer's instructions. The ChIP DNA was purified using the NucleoSpin DNA extraction kit (Macherey-Nagel, Düren, Germany) and eluted with 40 µL of TE buffer. The ChIP DNA was then analyzed by qPCR with a TaqMan probe against the TRβ TRE region to confirm sample quality. For high-throughput ChIP-seq, libraries were prepared from the immunoprecipitated DNA samples with the DNA SMART ChIP-seq kit (Clontech/Takara Bio Co., Palo Alto, CA, USA). The constructed ChIP-seq libraries were sequenced on the Illumina HiSeq 2500 platform in the Molecular Genomics Core, NICHD, and three technical replicates from each sample were analyzed.

ChIP-Seq Data Processing

Raw sequencing data in FASTQ format were aligned to the X. tropicalis genome assembly (Xenbase v9.1) with Bowtie2 (version 2.3.4.1), and redundant reads were removed from the final bam files with Samtools (version 1.9). Peak enrichment was detected with MACS2 (version 2.1.1.20160309) using a q value cutoff of 0.05 for each bam file without control, and the resulting peaks were mapped to genes in the Xenopus_tropicalis.JGI_4.2.90.gff3 annotation with custom R scripts. The raw read datasets for all ChIP-seq samples are available under Gene Expression Omnibus (GEO) accession number GSE193363.

Quantitative Real-Time PCR

Total RNA was extracted using the RNeasy Plus Mini Kit (Qiagen, Valencia, CA, USA) from the hind limb of wild type and TRα (-/-) tadpoles treated with or without T3 for 18 h. Reverse transcription was carried out as described before [32]. Real-time quantitative RT-PCR (qRT-PCR) was performed in triplicate with SYBR Green PCR MasterMix (Applied Biosystems, Foster City, CA, USA) on the StepOnePlus Real-Time PCR System (Applied Biosystems) with gene-specific primers as reported [32]. The ribosomal protein L8 gene (rpl8) was analyzed as a control for normalization [46], and the gene expression analysis was performed at least twice, with similar results.

RNA-Sequencing (RNA-Seq) Analysis

Total RNA was extracted from the hind limb of wild type and TRα (-/-) tadpoles with or without T3 treatment for 18 h as described above. After mRNA purification using poly-T oligo-attached magnetic beads and chemical fragmentation, three cDNA libraries were generated from the same sample using the TruSeq RNA Sample Preparation Kit (Illumina, San Diego, CA, USA) as described [32]. The libraries were then sequenced on the Illumina HiSeq 2000 platform to obtain 100 nt paired-end reads in the Molecular Genomics Core, NICHD.
The demultiplexed and adapter-trimmed short reads were mapped to the Ensembl Xenopus tropicalis genome (JGI 4.2) with STAR (version 2.6.1c), and read counts for each gene/exon were obtained with the featureCounts tool of the Subread package (version 1.6.3). The R Bioconductor DESeq2 package [29] was used for differential gene expression analysis. The raw read datasets for all RNA-seq samples are available under GEO accession number GSE193364.

GO and Pathway Analysis

To study the potential biological significance of the changes observed in the RNA-seq and ChIP-seq data, we performed pathway and gene ontology (GO) analyses with MetaCore software (GeneGo Inc., Encinitas, CA, USA). The lists of detected genes from RNA-seq and ChIP-seq were uploaded to MetaCore as human gene symbols. GO and pathway analyses were then performed using "Pathway Maps", "Map Folders", and "GO Processes" in the One-click Analysis tab or "Compare Experiments" in the Workflows & Reports tab.

Statistical Analysis

The results were analyzed using the 4-Step Excel Statistics software (OMS Publishing Inc., Tokorozawa, Saitama, Japan) and Prism 8 (GraphPad Software Inc., San Diego, CA, USA).
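As an illustration of the read-processing steps described in the ChIP-Seq Data Processing and RNA-Seq Analysis subsections, here is a minimal Python driver for the same command-line tools. All file paths, index names, library layouts (single-end ChIP reads are assumed), thread counts, and the genome-size value are placeholders or assumptions, not the study's exact settings:

```python
# Sketch of the ChIP-seq (Bowtie2/Samtools/MACS2) and RNA-seq
# (STAR/featureCounts) steps described in the Methods. Requires the tools
# to be on PATH; all names below are hypothetical.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# --- ChIP-seq: align, remove redundant reads, call peaks (q < 0.05, no control) ---
run(["bowtie2", "-p", "8", "-x", "xtrop_v9.1", "-U", "chip_wt_t3.fq.gz",
     "-S", "chip_wt_t3.sam"])
run(["samtools", "sort", "-o", "chip_wt_t3.sorted.bam", "chip_wt_t3.sam"])
run(["samtools", "rmdup", "-s", "chip_wt_t3.sorted.bam",
     "chip_wt_t3.dedup.bam"])          # legacy duplicate-removal command
run(["macs2", "callpeak", "-t", "chip_wt_t3.dedup.bam", "-f", "BAM",
     "-g", "1.4e9",                    # approximate X. tropicalis genome size
     "-q", "0.05", "-n", "chip_wt_t3", "--outdir", "peaks"])

# --- RNA-seq: align 100 nt paired-end reads and count reads per gene ---
run(["STAR", "--runThreadN", "8", "--genomeDir", "star_index_jgi42",
     "--readFilesIn", "rna_R1.fq.gz", "rna_R2.fq.gz",
     "--readFilesCommand", "zcat",
     "--outSAMtype", "BAM", "SortedByCoordinate",
     "--outFileNamePrefix", "rna_wt_t3_"])
run(["featureCounts", "-p", "-T", "8",
     "-a", "Xenopus_tropicalis.JGI_4.2.90.gff3",
     "-t", "exon", "-g", "gene_id",    # attribute key depends on the GFF3
     "-o", "counts.txt", "rna_wt_t3_Aligned.sortedByCoord.out.bam"])
```

The counts table produced by featureCounts would then feed DESeq2 for the differential-expression step described above.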
9,637.6
2022-01-22T00:00:00.000
[ "Biology", "Medicine" ]
Evaluation of Pelvic Floor Dysfunction by Pelvic Floor Ultrasonography after Total Hysterectomy for Cervical Cancer

Objective. To study the value of pelvic floor ultrasonography in evaluating pelvic floor dysfunction (PFD) after total hysterectomy for cervical cancer. Methods. All enrolled patients underwent 4D pelvic floor ultrasound examination before and after surgery. The results of the ultrasonic examination and the parameters of the four-dimensional ultrasonic examination before and after surgery were analyzed, and the quality of life of the patients before and after surgery was evaluated. Results. Postoperatively, the posterior angle of the bladder and urethra, the rotation angle of the urethra, the descent of the bladder neck, and the distance between the bladder neck and the pubic symphysis were (122.60 ± 9.53)°, (136.47 ± 14.67)°, (58.90 ± 18.19)°, (18.14 ± 7.32) mm, and (2.76 ± 0.46) cm, significantly greater than the preoperative (89.90 ± 9.59)°, (107.30 ± 9.96)°, (27.59 ± 10.96)°, (13.27 ± 5.69) mm, and (2.24 ± 0.21) cm (P < 0.05). Postoperative detrusor muscle thickness, bladder neck movement, residual urine volume, and bladder rotation angle, (4.48 ± 0.82) mm, (0.64 ± 0.17) cm, (12.82 ± 2.69) mL, and (12.11 ± 2.43)°, were significantly higher than the preoperative (3.70 ± 0.64) mm, (0.43 ± 0.18) cm, (4.83 ± 1.07) mL, and (4.30 ± 1.19)° (P < 0.05). The postoperative scores of emotional function, psychological function, social function, and physiological function were (2.35 ± 0.75), (2.45 ± 0.66), (2.30 ± 0.77), and (2.19 ± 0.71) points, significantly higher than the preoperative (1.01 ± 0.50), (1.25 ± 0.54), (1.00 ± 0.57), and (1.05 ± 0.46) points (P < 0.05). Conclusions. The application of pelvic floor ultrasonography to detect pelvic floor dysfunction after total hysterectomy can clearly display the anatomical structure of the pelvic floor, which is conducive to disease prevention and treatment. Four-dimensional pelvic floor ultrasound can clearly show postoperative pelvic floor function and is worthy of clinical promotion and reference.
Introduction

Cervical cancer (CC) is one of the most common gynecological malignant tumors; its incidence is second only to breast cancer and currently ranks second among gynecological malignant tumors globally, seriously threatening women's life and health. According to statistics, one-third of global cases each year occur in China [1]. At the same time, with the increase in HPV infection rates, the incidence of cervical cancer has increased significantly and has gradually shifted toward younger patients. The current treatment of cervical cancer is based on International Federation of Gynecology and Obstetrics (FIGO) staging, with the option of surgery in the early stage, such as extensive hysterectomy plus pelvic lymph node dissection [2,3]. As the number of cancer survivors continues to increase, the quality of life (QOL) of these survivors, including pelvic floor dysfunction, is an important consideration for healthcare providers [4]. In recent years, with the development of 3D multisection and 4D dynamic image acquisition techniques and their powerful data postprocessing capacity, pelvic floor ultrasound has begun to be applied in clinical practice. Off-machine analysis of the 4D view can be exploited to reconstruct 3D plane images, measured in the corresponding physiological action state, with easy operation, reliable inspection data, and low cost [5]. However, there are few studies on the effect of 4D pelvic floor ultrasound in the diagnosis of pelvic floor function after cervical cancer surgery. We hypothesized that four-dimensional pelvic floor ultrasound could be a complementary or alternative route to improve the comprehensive diagnosis of the anatomical structure of the pelvic floor. Exclusion criteria included: (2) patients with a history of pelvic surgery, urinary surgery, nervous system disease, or infectious diseases; and (3) patients who did not agree to participate in this study.
Methods

Ultrasonic images were collected using a GE Voluson E8 color Doppler ultrasound machine equipped with an RIC 5-9-D probe, with a working frequency of 5 to 10 MHz. 4D View 10.0 analysis software was used to reconstruct the data, and the instrument was set to pelvic floor ultrasound examination. Prior to image collection, the patient's bowel had to be emptied, and the residual urine in the bladder had to be about 50 mL. The volume probe was covered with a sterile protective sleeve coated with disinfectant and coupling agent on both the inside and outside. The probe was placed firmly on the patient's perineum to store images including the midsagittal plane of the urethra, vagina, bladder neck, rectal junction, and pubic symphysis. The urethral rotation angle, the vertical distance between the bladder neck and the lower margin of the pubic symphysis (BSD), the degree of bladder neck descent (BND), and the posterior angle of the bladder and urethra were measured in both the resting and maximum Valsalva states. The reference value of the posterior angle of the bladder and urethra is 90° to 120°, and the reference value of the urethral rotation angle is 30° to 45°. All measurements were repeated 3 times.

Diagnostic criteria for stress urinary incontinence: (1) BND ≥ 2.0 cm in the maximal Valsalva state, with the posterior angle of the bladder and urethra ≥ 120° and the bladder neck rotation angle ≥ 20° in the maximal Valsalva state; (2) urine leakage caused by sneezing, coughing, or laughing when abdominal pressure increases. The diagnostic criteria for pelvic organ prolapse are: the vertical distance between the lowest point of the pelvic organ and the lower margin of the pubic symphysis detected by transperineal pelvic floor ultrasound is greater than 1 cm; the posterior angle of the bladder and urethra is greater than or equal to 140°; and the rotation angle of the urethra is less than 45°. Results were obtained by combining the changes in the anatomical structure of the pelvic floor, the clinical manifestations, and the characteristics of transperineal four-dimensional pelvic floor ultrasound imaging. Four-dimensional pelvic floor ultrasound findings of stress urinary incontinence, bladder prolapse, uterine prolapse, and rectal prolapse were used as diagnostic indicators. Quality of life was evaluated with the SF-36 [6], which covers four aspects: emotional function, mental function, social function, and physiological function. According to the severity of the impact of symptoms on daily life, patients were scored as no effect (0 points), mildly affected (1 point), moderately affected (2 points), or severely affected (3 points); the higher the score, the worse the quality of life.

Statistical Analysis

Statistical analysis was performed using SPSS 23.0. Continuous data are expressed as mean ± standard deviation, and the group t-test was used for comparisons between two groups. The chi-square test was used for comparisons of binary data. Multiple logistic regression analyses were conducted to identify potential risk factors. In the present study, P < 0.05 was considered statistically significant.
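A minimal sketch of the pre/post comparison described above, using SciPy. The arrays below are illustrative values, not the study's data, and the paired t-test is an assumption: the paper's "group t-test" could instead be unpaired, in which case scipy.stats.ttest_ind applies:

```python
# Compare one pelvic floor parameter before and after surgery, mirroring
# the mean ± SD and P < 0.05 reporting used in the paper. Data illustrative.
import numpy as np
from scipy import stats

pre = np.array([88.1, 91.4, 90.2, 89.5, 92.0, 87.8])    # e.g., posterior angle (deg)
post = np.array([120.9, 124.3, 121.8, 123.5, 125.1, 119.7])

t_stat, p_value = stats.ttest_rel(pre, post)              # paired t-test
print(f"pre:  {pre.mean():.2f} ± {pre.std(ddof=1):.2f}")
print(f"post: {post.mean():.2f} ± {post.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```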
Comparison of Parameter Values of Four-Dimensional Ultrasonography before and after Surgery

After surgery, the urinary tract rotation angle, the vertical distance between the bladder neck and the pubic symphysis (BSD), and the bladder neck descent (BND) in the resting state, and the posterior angle of the bladder and urethra in the maximum Valsalva state, were significantly higher than before surgery (P < 0.05), as shown in Table 1.

Changes of the Lower Urinary Tract in Patients before and after Surgery

As shown in Table 2, the degree of bladder neck motion, bladder residual urine volume, bladder rotation angle, and bladder detrusor thickness changed significantly after surgery (P < 0.05). As shown in Table 3, the scores of the patients after the operation were higher than before the operation (P < 0.05), meaning that the quality of life after the operation was worse.

Discussion

From the perspective of human anatomy, the female pelvic cavity involves the cervix, uterus, and supporting tissues, as well as part of the pelvic lymph nodes and vagina, which play an important role in supporting the pelvic organs and ensuring their normal state [7]. Pelvic floor dysfunction can lead to diseases such as stress urinary incontinence and pelvic organ prolapse. According to research reports, childbirth and surgery are the main causes of pelvic floor dysfunction. Hysterectomy not only requires cutting off the cardinal and uterosacral ligaments in the center of the pelvic floor but also requires pushing down the bladder and rectum, which causes about 20% of patients to have symptoms of urinary and stool disorders and incontinence [8].

At present, the clinical methods for evaluating the structural and functional changes of the female pelvic floor mainly include clinical staging, urodynamic examination, and the acupressure test, while the imaging methods used for examining female pelvic floor dysfunction mainly include CT, ultrasound, and magnetic resonance imaging [9]. Although MRI has high contrast resolution and good spatial imaging, repeated observation is difficult due to its high cost and long examination time, and it cannot dynamically observe pelvic function. CT is difficult for patients to accept because of its radiation. Ultrasound, being minimally invasive, radiation-free, and capable of dynamic observation, is widely used in clinical examination and diagnosis [10,11]. Four-dimensional pelvic floor ultrasound is a newer examination method offering real-time dynamic imaging based on three-dimensional ultrasound [12]. Clinical studies have shown that four-dimensional pelvic floor ultrasound can dynamically observe the three-dimensional image of the pelvic floor and pelvic floor muscles in real time through three-dimensional stereo imaging, making up for the insufficiency of two-dimensional plane ultrasound and making clinical diagnosis more intuitive and the results more accurate [13]. The results of this study showed that the postoperative urinary tract rotation angle, bladder neck descent (BND), vertical distance between the bladder neck and the pubic symphysis (BSD), and posterior bladder-urethra angle in the resting state and the maximum Valsalva state were significantly higher than the preoperative values, and the postoperative degree of bladder neck motion, bladder rotation angle, bladder residual urine volume, and bladder detrusor thickness were significantly greater than before surgery. This indicates that after hysterectomy, the uterine ligament and
uterosacral ligament are cut off and the bladder and rectum are pushed down, which affects the innervation of the bladder and rectum, resulting in changes in the anatomical and physiological structure of the pelvic floor and thus pelvic floor dysfunction [14]. Although female pelvic floor dysfunction does not endanger the patient's life, it seriously affects the patient's quality of life and has a negative impact on society, psychology, and daily life [15,16]. Researchers comparing the quality of life of patients after surgery for gynecological malignant tumors and for benign uterine disease found that symptoms of postoperative pelvic floor dysfunction were significantly more common in patients with malignant tumors than in the control group. Our data also showed that the quality of life of patients after surgery was worse.

In conclusion, cervical cancer patients are more likely to develop pelvic floor dysfunction after hysterectomy. Clinical medical workers should adopt four-dimensional pelvic floor ultrasonography to reduce the occurrence of pelvic floor dysfunction and improve the quality of life of patients.

Table 1: Comparison of parameter values of four-dimensional ultrasonography before and after surgery (x̄ ± s).
Table 2: Changes of the lower urinary tract in patients before and after surgery (x̄ ± s).
Table 3: Comparison of the quality of life of patients before and after surgery (x̄ ± s).
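The numeric diagnostic thresholds quoted in the Methods translate directly into a rule-based check. A sketch follows; the field names are hypothetical, the clinical symptom criterion is reduced to a boolean flag, and how the individual criteria combine (all vs. any) is not explicit in the recovered text, so the sketch assumes all must hold:

```python
# Rule-based screen for stress urinary incontinence (SUI) using the
# ultrasound thresholds stated in the Methods: BND >= 2.0 cm, posterior
# bladder-urethra angle >= 120 deg, and bladder neck rotation >= 20 deg
# at maximal Valsalva, plus leakage on coughing/sneezing/laughing.
from dataclasses import dataclass

@dataclass
class ValsalvaMeasures:
    bnd_cm: float               # bladder neck descent at maximal Valsalva
    posterior_angle_deg: float  # posterior bladder-urethra angle
    neck_rotation_deg: float    # bladder neck rotation angle
    leaks_on_strain: bool       # leakage with sneezing/coughing/laughing

def meets_sui_criteria(m: ValsalvaMeasures) -> bool:
    ultrasound = (m.bnd_cm >= 2.0
                  and m.posterior_angle_deg >= 120
                  and m.neck_rotation_deg >= 20)
    return ultrasound and m.leaks_on_strain

print(meets_sui_criteria(ValsalvaMeasures(2.3, 131.0, 24.0, True)))   # True
print(meets_sui_criteria(ValsalvaMeasures(1.4, 118.0, 15.0, False)))  # False
```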
2,921.8
2022-09-28T00:00:00.000
[ "Mathematics" ]
METHODOLOGY OF ENSURING THE INTERACTION OF ECONOMIC AND CYBER SECURITY

Implementation of computer communication technologies in social and economic processes has led to increased cyberattacks aimed at providing third parties with economic benefits or causing enterprises economic damage. The paper substantiates the impact of cyber risks on the economic security of enterprises, including the influence on the cybersecurity of accounting data as its important component. The aim of the article is to assert accounting as an innovative multilevel mechanism for ensuring the interaction of economic and cyber security. Theoretical and methodological aspects of positing accounting as a set of multi-option methods of implementing the interaction of economic and cyber security were investigated using institutional and innovational methods of scientific research. Economic and mathematical methods of analysis were used to substantiate the interdependence of global indices of state development. It is proven that the extent of digital competitiveness has the greatest influence on the frequency of cyber threats, while the development of information and communication technologies, the innovativeness of the economy, connectivity, and Internet accessibility affect it to a lesser degree. Five levels of information interaction between the economic and cyber security of enterprises are identified: the methodological level (the impact of cyber threats on the principles and functions of accounting); the quality level (the impact on the quality of accounting information); the methodical level (the impact on accounting items and accounting types); the communication level (the impact on accounting communication with stakeholders); and the reputation level (the impact on the business image and enterprise goodwill). When cyber threats are realized at these levels, the enterprise's economic losses mount. The paper argues for implementing a feedback mechanism between economic and cyber security through accounting, whose task is to credibly identify and evaluate the economic losses arising due to cyber risks. It is argued that the methodology of identifying and evaluating the economic losses arising in the enterprise due to cyber threats through accounting requires further scientific investigation.

Accounting information, above all, requires cybersecurity: most cyber risks inherent to the activities of economic entities are associated with the theft of accounting information or the reduction of its quality parameters. The economic security of the enterprise is ensured by complying with the qualitative requirements for accounting information. The quality of information depends on its compliance with the expectations of stakeholders. Violation of any of the quality parameters of the accounting system could lead to the loss of its usefulness and, consequently, of its economic significance for internal and external users. In most cases, the enterprise suffers economic losses when incorrect accounting information is in use. Management decisions based on false (distorted or corrupted) accounting information damage the economic security of the enterprise. The actions of internal users operating on accounting information relate to the enterprise's own economic activities, while those of external users affect the functioning of other economic entities.
Thus, non-compliance with the qualitative parameters of the accounting system causes economic damage twice: first, through direct losses due to the actions or inaction of managers (owners and founders) and, second, through indirect losses, or lost economic benefits that could have been extracted from cooperation with external stakeholders. As a result, there is a direct link between the economic and cyber security of enterprises. In practice, the relationship between economic and security activities involves the study of accounting mechanisms to identify the impact of cyber threats on the economic security of the enterprise.

Literature Review. At the enterprise level, economic security characterizes the current level of protection of the enterprise's most important interests from unfair competition, excessive pressure from regulatory authorities, incompetent decisions, and an imperfect regulatory framework, as well as the ability of the enterprise to withstand information threats (Horbachenko, 2020). The impact of cyber risks on enterprise economic security is the subject of scientific research by many scholars. In particular, Rodrigues et al. (2019) argued that the need to ensure cybersecurity is a side effect of the digitalization of the economy. According to these scientists, it is important to develop effective measures to prevent and eliminate cyber risks by predicting the economic consequences of cybersecurity breaches. B. Rajput (2020) considered the phenomenon of «cybercrime in the economic sector», which has arisen in recent years due to the connection between cyber risks and the economic consequences of their manifestation. The scientist concluded that such crimes will continue to grow due to the increasing integration of economic space and cyberspace. Exploring the economic consequences of various cyber risks, Shitova and Shitov (2019) pointed out that all modern cybercrime focuses on obtaining certain economic benefits, such as global espionage, financial attacks, card fraud, information theft and phishing, network attacks and traffic interception to steal intellectual property, ransomware and extortion, cryptojacking, etc. Researchers have also investigated ways to ensure enterprise cybersecurity so as to minimize the enterprise's economic losses. For example, Horbachenko (2020) substantiated the expediency of creating a single national cybersecurity system, which would unite the information space of enterprises into a single integrated system serving as a full-fledged component of national security at the state level. Marasigan (2019) highlighted the importance of instigating institutional changes in the economy at the micro and macro levels to overcome cyber barriers and threats to the operation of enterprises. Wilson (2014) identified organizational, methodological, software, and hardware support as critically important for a cybersecurity system, which is crucial for ensuring the sustainable economic security of the enterprise. Rue and Pfleeger (2009) proposed different models of the economic assessment of cyber risks, explaining the various mechanisms of cybersecurity's impact on the economic condition of the enterprise in terms of determining the economic losses resulting from the manifestation of cyber risks. Similarly, Patterson and Gergely (2020) developed a method for determining the economic efficiency of enterprise cybersecurity and its setup by analyzing the impact of cyber risks on the enterprise's economic losses or costs (capital and current).
Thus, most scientists associate the need for cybersecurity with the growing pervasiveness of information and communication technologies in information processes. However, the assumption that the intensification of cybersecurity follows directly from the increasing implementation of information processing technologies in social and economic processes is challenged by an analysis of global rating data (Global Cybersecurity Index, 2018) (Fig. 1).

Figure 1. The relationship between the level of cybersecurity and ICT development of countries. Source: developed by the authors based on (Global Cybersecurity Index, 2018).

The approximated and smoothed trend line built using data on the relationship between the ICT development index and the cybersecurity index makes it possible to identify an imbalance between these indicators for many countries. The significant positional deviation of values from the average trend line shown in Fig. 1 demonstrates the lack of a direct relationship between ICT development and the level of cybersecurity in most countries. A more homogeneous result was obtained when comparing the cybersecurity index with the innovation index (Fig. 2) and the connectivity index (Fig. 3). Thus, the increased attention to measures for ensuring cybersecurity is precipitated by innovation and connectivity at the micro and macro levels. The more innovations are introduced, and the more network infrastructure develops in national socio-economic processes, the greater the need for an effective cybersecurity system. The level of innovation and the development of network infrastructure determine the digital competitiveness of a country. Figure 4 shows a direct relationship between the level of cybersecurity and countries' digital competitiveness, as evidenced by only slight deviations of the analytical data from the average trend line. It should be noted that some countries with low indicators of innovation and digitalization of socio-economic processes occupy high positions in the cybersecurity ranking. For example, Ukraine's Innovation Index is 38.52, its Connectivity Index 43, and its Digital Competitiveness Index 51.29, alongside a fairly high Cybersecurity Index of 0.661. This is explained by the need to combat ongoing cyber threats due to hybrid foreign influence. The digital competitiveness of countries is the basis for the development of a digital economy. When most socio-economic processes are digitized, their cybersecurity must be ensured. Accounting is the information basis for the digital economy, and accounting data becomes an important cybersecurity target in terms of its relationship with national economic security, various industries, and individual economic entities. One study (2017) defined cybersecurity in terms of the accounting policy of the enterprise that ensures its economic security: protecting the enterprise's vital interests and accounting information from internal and external threats, i.e., protection of the enterprise, its human and intellectual potential, technologies, profits, and added and market value. Remarkably, the above is provided by a system of special legal, economic, organizational, informational, technical, and social measures. V. A. Nekhai and V. V. Nekhai (2017) considered information security an important component of economic security, which requires accounting information to be cyber-secured so that its quality parameters are met.
Yevdokymov (2011) overviewed reliability, a qualitative characteristic of accounting information, in relation to ensuring the economic security of the enterprise. In his opinion, reliability is a characteristic of information that provides confidence in the appropriateness of its assumptions about errors and trends and in the truth of the intention to present all data in a veritable form; it meets the principles of verifiability, credibility, and neutrality. In addition, it should be noted that ensuring the reliability of accounting information in the digital economy involves the ability of accounting systems to avoid and resist cyber threats. However, such studies are partial and fragmentary and do not establish the multifaceted nature of the connection between the economic and cyber security of enterprises. Therefore, the accounting mechanism for ensuring a multifaceted interaction between economic and cyber security has not been appropriately investigated scientifically, which determines this study's purpose. The article aims to assert accounting as an innovative multilevel mechanism for ensuring the interaction of economic and cybersecurity.

Methodology and research methods. The institutional approach was used to fulfill the established purpose of this paper in general, while the concept of institutional changes was used to evaluate the modernization of accounting through the introduction of information and communication technologies, particularly in detecting and eliminating cyber threats. Besides, this approach was used to reveal the essence of accounting as a type of socio-economic activity and to identify its multifaceted connections to the economic and cyber security of the enterprise in the institutional system of society. Emphasis is placed on the use of economic and mathematical modeling. The polynomial trend line, built using approximated and smoothed data, reveals interdependencies in countries' rankings of ICT development, innovation, connectivity, digital capability, and cybersecurity (a numerical sketch of this trend fitting appears below). The identification of positional deviations of these indicators from the average trend line makes it possible to draw conclusions about the relationship between the level of cybersecurity of countries and the development of the digital space and the country's economy. As the rating of national cybersecurity is published biennially, all other statistical data use the indicators of 2018 to ensure comparability. The idea of accounting's important socio-economic role and significance is at the core of the innovative approach to the theoretical and methodological principles of accounting. The hypothesis of the innovative nature of accounting was posited as a set of methods to ensure the interaction of enterprises' economic and cyber security.
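A minimal sketch of the trend-fitting step referenced above, using a least-squares polynomial fit. The index values are illustrative placeholders (only Ukraine's 51.29/0.661 pair is taken from the text); the paper's actual country data come from the 2018 ratings and are not reproduced here:

```python
# Fit a polynomial trend line between the Digital Competitiveness Index and
# the Cybersecurity Index and measure each country's deviation from it,
# mirroring the smoothed-trend approach of Figures 1-4. Data illustrative.
import numpy as np

digital_competitiveness = np.array([40.2, 51.29, 62.8, 70.1, 81.5, 90.3])
cybersecurity_index = np.array([0.42, 0.661, 0.58, 0.72, 0.83, 0.91])

coeffs = np.polyfit(digital_competitiveness, cybersecurity_index, deg=2)
trend = np.poly1d(coeffs)

deviations = cybersecurity_index - trend(digital_competitiveness)
print("fitted trend:\n", trend)
print("positional deviations from trend:", np.round(deviations, 3))
# Small deviations indicate a close relationship between the two indices;
# large ones (cf. Fig. 1) indicate the absence of a direct relationship.
```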
Results. Accounting principles are its fundamental basis. The study of accounting principles allows identifying empirical relationships, patterns of development, and accounting features influenced by current trends in the development of information and communication technologies. It is possible to substantiate the interaction between enterprises' economic and cyber security by analyzing accounting principles. The increasing number and complexity of cyber risks require continuous adaptation and transformation of accounting principles to the internal and external conditions of an enterprise's operation. Thus, the ways of practically implementing (adhering to) them are optimized to ensure the enterprise's economic security. As a result, cyber risks exert direct and reverse influence on economic entities' financial and economic performance through accounting principles. Table 1 presents the peculiarities of adhering to the fundamental accounting principles while simultaneously ensuring economic and cyber security, which constitutes the first, fundamental, methodological level of their interaction.

Adherence to the fundamental principles of accounting allows the enterprise to implement its functions to ensure the proper quality of accounting information. The quality of information produced by accounting depends on its ability to meet the requirements and expectations of internal and external users. Cyber risks aim to reduce or negate the usefulness of accounting information through non-compliance with its quality parameters. The proper quality of accounting information determines the quality of the interaction between the economic and cyber security of enterprises. The main qualitative parameters of accounting data targeted by cyber risks are credibility, timeliness, availability, feasibility, reliability, comparability, and others. Regardless of the informational subordination of the qualitative parameters of accounting data or their grouping by various classification criteria, the economic security of the enterprise depends on the frequency of cyber threats. In particular, cyber risks focus on reducing the quality of accounting information as follows:
− credibility (making incorrect (erroneous) management decisions);
− timeliness (making belated management decisions);
− accessibility (inability to obtain or perceive information in the process of making management decisions);
− feasibility (blocking the necessary management decisions);
− reliability (inability to make management decisions due to lack of trust in information);
− comparability (making unreasonable management decisions due to the inability to assess and analyze accounting indicators);
− other qualitative parameters of accounting data (damage to enterprise management).

Thus, the manifestation of cyber risks reduces the efficiency of the management system, which leads to the enterprise suffering economic damage. All qualitative parameters of accounting data are ultimately related to its confidentiality in ensuring the enterprise's economic and cyber security. Accounting data is nominally divided into public and confidential along the distinction between financial and managerial accounting. The identification of accounting items and their division into types determines the methodical level of the interaction between the economic and cyber security of the enterprise. The confidentiality of managerial accounting data stems from its exclusively internal use and the need to ensure that unauthorized persons do not access it. A lack of proper cybersecurity for managerial accounting data could allow third parties to use it to gain a competitive advantage in the market, attract buyers and suppliers on more favorable commercial terms, optimize the technological side of operations, or revise personnel, pricing, and sales policy. Violation of confidentiality ultimately leads to economic losses for the enterprise.
Economic damage caused by the manifestation of cyber risks is associated with the loss of operating profits due to the loss of markets, suspension of operation, disruption of logistics cycles, disruptions to the rhythm of production, and loss of intellectual property. Additionally, the use of false internal accounting information may lead to erroneous management decisions. The higher the level of management, the larger the potential economic losses from incorrect management decisions. The greatest threat to the economic security of the enterprise may be posed by ineffective strategic management caused by the use of accounting data altered by cyberattacks. The intensity and frequency of cyber threats also depend on the type of accounting item. In particular, most cyberattacks are aimed at stealing money and its equivalents. Cyber threats are equally likely to target production and the related calculations, manufacturing technologies (performance of works, provision of services), and the fixed assets of the enterprise, in order to damage critical infrastructure, suspend the enterprise's operations, etc. However, accounting items such as inventories and low-value current assets are rarely cyber-threatened. Although financial accounting information is not a trade secret, it also requires effective cybersecurity. As the financial statements are officially disclosed, there is a risk of distortion or substitution of data. Based on reporting information, stakeholders make management decisions concerning their financial interests and the operations of the economic entity. To discredit the company, its financial statements may be modified as a result of a cyberattack. Malicious actions of third parties may cause economic damage to the company at the moment accounting data is transferred or at its storage location. Disclosure of false information about the entity's activities may result in the loss of the economic interest of stakeholders. In particular, investors may suspend further investment in the issuer's financial instruments; financial institutions may refuse to lend; other creditors may demand early repayment of accounts payable; contractors may refuse to cooperate, etc. As a result, an enterprise with distorted financial statements may suffer indirect financial damage that threatens its economic security. The communicative level of interaction between the enterprise's economic and cyber security is likewise connected with communication with stakeholders. Cyberattacks at this level are aimed at blocking communications and transmitting false or incomplete accounting data to users. Stakeholders may consider such actions to be breaches of communication regulations or mistake the data altered by a cyber threat for authentic information. For example, cyber threats targeting the company's communications with fiscal institutions may lead to false accounting information reaching the recipient. If the accounting data recorded in the enterprise's books as the tax base for accrued taxes (fees) differ from the data sent to the tax authority, this may result in financial sanctions. The tax agent then suffers economic losses from fines for failing to report, or for sending late or incomplete information about its financial and economic activities to the fiscal authority. In the absence of effective cybersecurity for communications with regulatory institutions, threats to the economic security of the enterprise increase due to repeated penalties for violating fiscal regulations. Direct cyber risks threaten banking communications.
If attackers gain access to the electronic transaction accounting system, they can steal funds from bank and electronic accounts. The extent of economic damage from the manifestation of such cyber risks can be reliably determined. Unauthorized access to the accounting system for non-cash payments threatens the economic security of the enterprise because all such funds could be lost. If the theft concerns electronic money or cryptocurrencies, it is practically impossible to trace the attackers and recover the lost funds. Effective preventive cybersecurity is crucial given the confidentiality and impersonality of electronic transactions, as combating already active cyber threats is difficult. Economic entities that use accounting and management outsourcing services are also vulnerable to significant economic losses. Active cyber threats can significantly modify accounting data in the process of communication from the sender to the outsourcer (Balaziuk et al., 2020). The outsourcing firm carries out further information processing based on the distorted data it receives, unwittingly creating false reports. The increased number of stages of accounting data processing associated with the transfer to the outsourcer makes it very difficult to establish the reliability of accounting. Meanwhile, re-processing the restored, credible accounting data after cyber threats have been eliminated requires additional cost and time. Cyber threats related to outsourcing also increase the likelihood of losing confidential information due to the need for continuous electronic communications, which could cause economic damage to the company. Thus, the number of delegated accounting functions simultaneously affects the enterprise's economic and cyber security. Communication with audit firms may be subject to similar cyber threats. If the auditor receives incomplete information about the financial and economic activities of the enterprise, they may issue a negative audit report or refuse to provide one at all. The use of such audit information may damage the enterprise's business reputation in the eyes of the auditor or other recipients of audit reports. A blow to business reputation has a negative impact on the economic security of the enterprise. Should all cyber risks become a reality, the business image of the enterprise is ultimately ruined, which defines the reputational level of the interaction between economic and cyber security. Hackers may damage an enterprise's reputation directly, to cause economic damage and suspend the operation of the economic entity, or indirectly, while pursuing personal, in most cases financial, goals. In any case, a company subjected to cyberattacks loses the confidence of employees, contractors, investors, creditors, and public and state institutions. PR losses inevitably make a dent in the economic security of the enterprise. Oversight and fiscal institutions, social and environmental organizations may impose financial sanctions, block the enterprise's assets, or deem its activities illegal. Information interaction with such a business entity comes to be considered "toxic", which automatically hinders its financial and economic activities.
4,715.2
2021-01-01T00:00:00.000
[ "Computer Science", "Economics", "Business" ]
Characterization of 4H- and 6H-Like Stacking Faults in Cross Section of 3C-SiC Epitaxial Layer by Room-Temperature μ-Photoluminescence and μ-Raman Analysis. We report a comprehensive investigation of stacking faults (SFs) in the cross-section of a 3C-SiC epilayer. 3C-SiC growth was performed in a horizontal hot-wall chemical vapour deposition (CVD) reactor. After the growth (85 microns thick), the silicon substrate was completely melted inside the CVD chamber, obtaining free-standing 4 inch wafers. Structural characterization and the distribution of SFs were determined by μ-Raman spectroscopy and room-temperature μ-photoluminescence. Two kinds of SFs, 4H-like and 6H-like, were identified near the removed silicon interface. Each kind of SF shows a characteristic photoluminescence emission of 4H-SiC and 6H-SiC, located at 393 and 425 nm, respectively. The 4H-like and 6H-like SFs show different distributions along the film thickness. The results are discussed in relation to the experimental data and theoretical models present in the literature. Introduction Cubic silicon carbide (3C-SiC) is a very interesting material for high-frequency and high-power devices (sustainable energies, hybrid vehicles, low power loss inverters), owing to its wide band gap and the high speed of electron transport within the crystal [1,2]. The market between 200 V and 1200 V is very price sensitive [3]; consequently, 4H-SiC technology cannot easily find applications there. Instead, 3C-SiC technology could be a good candidate for developing power devices in the region below a breakdown voltage of 800-1000 V [4,5]. Today, the main limitation for device fabrication on 3C-SiC is the quality of the material. Hetero-epitaxial growth of 3C-SiC on silicon (Si) substrates was developed because of the advantages of low cost and large size. However, it is difficult to obtain high-quality 3C-SiC films [6] owing to the crystal lattice mismatch (20%) and the difference in thermal expansion coefficients (~23% at deposition temperatures and 8% at room temperature (RT)). These result in a large residual strain and a poor crystallographic structure. Consequently, the interface between 3C-SiC and Si is the origin of a high density of planar and volume defects, such as misfit dislocations, micro twins (MTs), anti-phase boundaries (APBs), and stacking faults (SFs) in the epilayer, and voids in the Si underneath the hetero-interface. The SF concentration in 3C-SiC is highly dependent on the grown layer thickness. A commonly exploited strategy is to increase the grown thickness to allow a natural reduction of the SF density towards a saturation value. Indeed, at the Si/SiC interface, where a very dense defect network is observed, SFs are annihilated at a high rate and the mutual closure mechanism is stimulated. When the number of SFs decreases, the SF extinction rate falls and their density tends to a saturation value. However, despite the strategies used so far, the concentration of stacking faults is still not compatible with the development of VLSI technology, owing to their high electrical activity [7,8]. In order to minimize the density of SFs, their formation and development inside the material must be understood. The stacking faults in 3C-SiC can be treated as random mixing of α-type unit structures, such as 6H and 4H. Theoretical calculations and experimental analysis methods are essential for the spatial profiling of SFs in SiC wafers.
Different techniques can be utilized, such as transmission electron microscopy (TEM), cathodoluminescence, KOH etching, X-ray topography, and micro-photoluminescence (micro-PL) mapping [9]. However, a technique such as TEM does not allow large areas to be analysed, and the others are generally applied for plan-view characterization of the surface. In particular, for 3C-SiC, the SF density on the surface is investigated by etching the epilayer in KOH and then observing the sample by optical microscope. The etching of the material is heterogeneous, proceeding preferentially through localized defects rather than uniformly over the whole surface. In this way, it is possible to observe the total number and the shape of the SFs, but it is not possible to discriminate their type. Raman spectroscopy is commonly used for SiC analysis, as it is non-destructive and does not require any sample preparation. This technique provides a spatial resolution suitable for studying defects in thin films. Hence, it can be considered a complementary method to photoluminescence (PL) analysis for SiC characterization, one of the most used methods to detect and study crystallographic defects in the material [9,10]. In this work, we report the study of the stacking faults (SFs) in the 3C-SiC cross-section epilayer. Structural characterization as well as the SF distribution was performed by µ-Raman spectroscopy and room-temperature µ-photoluminescence. Two kinds of SFs, 4H-like and 6H-like, were identified near the removed interface with silicon. Each kind of SF introduces a characteristic photoluminescence emission of 4H-SiC and 6H-SiC, located at 393 and 425 nm, respectively. The 4H-like and 6H-like SFs show a different distribution along the thickness of the film. The results are discussed in relation to the experimental data and theoretical models present in the literature. Materials and Methods 3C-SiC growth was performed in a horizontal hot-wall chemical vapour deposition (CVD) reactor (ACIS M10 supplied by LPE) using (100)-oriented Si substrates. The reaction system used tri-chloro-silane (TCS), ethylene (C₂H₄), and hydrogen (H₂) as the silicon precursor, carbon precursor, and carrier gas, respectively. After the initial thermal ramp from room temperature to the carbonization temperature of 1200 °C, the temperature was increased to 1400 °C. At this temperature, the growth of the 3C-SiC takes place [11]. During the growth, a constant 1600 sccm nitrogen flux was introduced into the CVD chamber. After the growth of an almost 85 µm thick layer, the temperature was increased to 1650 °C and the silicon substrate was completely melted inside the CVD chamber [12]. Finally, the temperature was decreased to room temperature, obtaining free-standing 4 inch wafers. A scheme of the synthesis process is shown in Figure 1. Micro-Raman and micro-photoluminescence maps were acquired at room temperature using an HR800 integrated system by Horiba Jobin Yvon in a backscattering configuration. For the Raman analysis, the excitation wavelength was supplied by a continuous He-Ne laser (632.8 nm), which was focused on the sample by a 40× objective with a numerical aperture (NA) of 0.5. The scattered light was dispersed by an 1800 grooves/mm kinematic grating. For the PL analysis, the excitation wavelength was supplied by a continuous He-Cd laser (325 nm), which was focused on the sample by a 40× objective with a numerical aperture (NA) of 0.5. The emitted light was dispersed by a 300 grooves/mm kinematic grating.
Results and Discussion The 3C-SiC film was synthesised by CVD with a constant nitrogen flux of 1600 sccm added inside the chamber. As a result, the 3C-SiC film shows a nitrogen concentration of 1 × 10¹⁹ at/cm³ (the nitrogen concentration in the layer was measured by van der Pauw structures). Figure 2 shows some free-standing 4 inch wafers of 3C-SiC synthesised in accordance with the process scheme reported in Figure 1. We chose to dope the material with a high nitrogen concentration because N-doped 3C-SiC has a direct band gap character that exhibits good emission properties. Moreover, it has more electrons near the Fermi level that contribute to increasing band-to-band transitions [13], and thus to increasing the intensity of the detected signals. We can observe that the region next to the removed silicon substrate (point 0 on the Y axis) appears darker (Figure 3a), owing to a lower intensity of the band-edge peak, about 5000 counts/s (Figure 3b, black spectrum). As we approach the surface, the map becomes brighter, revealing a greater band-edge peak emission of about 65,000 counts/s (Figure 3b, red spectrum). It is known that, for 4H-SiC, the intensity of the band-edge emission is decreased by non-radiative recombination via defect levels and surface/interface recombination [9]. However, SFs in 3C-SiC do not produce a PL peak, usually in the range between 450 and 900 nm, because they do not introduce levels inside the bandgap [10,14]. Nevertheless, we think that the variation of the band-edge peak intensity in the map provides a distribution of crystalline quality, and thus of defects, along the sample section. This difference suggests a high concentration of defects in the first 20 µm of 3C-SiC. These defects decrease moving towards the surface. To better understand this result and the possibility of detecting the presence of SFs in the 3C-SiC film by µ-PL and/or µ-Raman analysis, we focused our attention on the first 30 microns of the 3C-SiC cross-section. In Figure 4, we report (a) the µ-PL map at 540 nm and the µ-Raman maps at (b) 778 cm⁻¹ and (c) 784 cm⁻¹ acquired in the same area of the cross-section. We can observe that the µ-PL map (Figure 4a) shows the same intensity distribution reported in Figure 3. The band-edge peak intensity increases moving from the silicon-removed interface to the surface (from 0 µm to 85 µm). Three kinds of SFs can occur: intrinsic, extrinsic, and double-extrinsic stacking faults. SFs are wrong sequences of the double layers, and they can be seen as inclusions of a few layers of one SiC polytype in the perfect layer stacking of another polytype [15]. It is known that the TO mode for the 4H-SiC lattice is located at 778 cm⁻¹ [16]. Thus, we think that the area of the µ-Raman map (reported in Figure 4b) characterized by the presence of the peak at 778.3 cm⁻¹ is an area with a high density of defects. In particular, these extrinsic stacking faults recall the structure of 4H-SiC [15]. In the same way, the TO mode for the 6H-SiC lattice shows two components, at 764.4 cm⁻¹ and 789.4 cm⁻¹ [17]. Thus, it is possible that the area of the µ-Raman map (reported in Figure 4c) characterized by the presence of the peak at 784 cm⁻¹ is an area with a high density of double-extrinsic stacking faults, which recall the structure of 6H-SiC [15]. The component at 764.4 cm⁻¹ cannot be discriminated in our spectra because it is too close to signals from the laser.
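The peak assignments above (a 4H-like signature near 778.3 cm⁻¹ and a 6H-like one near 784 cm⁻¹) lend themselves to a simple automated labeling step. The sketch below detects peaks in a synthetic TO-region spectrum and assigns them by proximity to the reference positions; the synthetic spectrum, the ±2 cm⁻¹ tolerance, and all names are assumptions for illustration.

```python
# Sketch of the peak-assignment logic described above: detect TO-region
# Raman peaks and label them 4H-like (~778 cm^-1) or 6H-like (~784 cm^-1).
# The synthetic spectrum and the +/-2 cm^-1 tolerance are assumptions.
import numpy as np
from scipy.signal import find_peaks

REFERENCE_PEAKS = {"4H-like SF": 778.3, "6H-like SF": 784.0}
TOLERANCE = 2.0  # cm^-1, assumed matching window

# Synthetic spectrum: two Lorentzian-like peaks on a flat background.
shift = np.linspace(760, 800, 2000)          # Raman shift axis (cm^-1)
spectrum = (1.0 / (1 + ((shift - 778.3) / 0.8) ** 2)
            + 0.6 / (1 + ((shift - 784.0) / 0.8) ** 2))

idx, _ = find_peaks(spectrum, prominence=0.1)
for i in idx:
    pos = shift[i]
    label = next((name for name, ref in REFERENCE_PEAKS.items()
                  if abs(pos - ref) <= TOLERANCE), "unassigned")
    print(f"peak at {pos:.1f} cm^-1 -> {label}")
```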
To confirm this hypothesis, we acquired (in the same area) some linear µ-PL maps along the entire thickness of the cross-section, from the removed interface with Si (0 µm) to the surface (85 µm). Figure 5 shows the linear µ-PL map acquired crossing area (3) in Figure 4c and centred at 390 nm (Figure 5a). Linear µ-PL maps were acquired in the range between 350 and 450 nm. In particular, the map profile centred at 390 nm shows the absence of the signal for thicknesses greater than 25 µm (Figure 5a). Instead, for thicknesses lower than 25 µm, the signal increases up to a value of 450 counts/s in the range between 5 and 15 µm. The spectra extracted from the map profile at various thicknesses are shown in Figure 5b. For thicknesses greater than 25 µm, in the range between 350 and 450 nm, the PL spectra do not show any particular peak (black and red spectra in Figure 5b). For a thickness of 20 µm, the PL intensity increases in the same range. In particular, we observe the presence of a new peak at 393 nm (blue spectrum in Figure 5b), which is very close to the band-edge emission of the 4H polytype, reported at 390 nm [18]. For a thickness of 10 µm, the intensity of the peak at 393 nm increases and another peak appears at 425 nm (green spectrum in Figure 5b), which is very close to the band-edge emission of the 6H polytype, reported at 423 nm [18]. Thus, by combining the results obtained from the µ-PL and µ-Raman maps, the presence of 4H-like and 6H-like stacking faults was ascertained. In particular, these maps allowed us to detect the distribution of defects along the cross-section of the sample. Comparing these results with those present in the literature, obtained with different experimental techniques and theoretical simulations, two aspects are most striking. First, to our knowledge, there are no reports in the literature that highlight a distribution and discrimination of SFs over large areas in the cross-section. Even if TEM analysis allows the typology of defects to be discriminated, it is often carried out on very small areas (a few microns) [19,20]. Meanwhile, the other techniques are used to characterize surfaces. In particular, the most common technique to study and highlight the presence of SFs on the surface of 3C-SiC is to etch the sample in KOH. The etching takes place selectively, mainly affecting areas with defects. For an etched sample [21], the TO mode becomes asymmetric towards the low-frequency side. At the same time, the intensity of the TO band increases. Nevertheless, it is not possible to discriminate the type of SFs revealed by the etch itself. Our approach is not destructive, so we observe the formation of new distinct peaks in the Raman spectra (see Figure 4e,f) at a lower frequency with respect to the TO mode of SF-free 3C-SiC (see Figure 4d), but we do not observe a greater intensity of the TO peak. This difference is related to the different crystallographic planes exposed during the Raman acquisitions. In the results reported in the literature [21], the KOH etch and the related Raman analyses are conducted along the (001). As the TO mode is forbidden for a perfect 3C-SiC crystal in a backscattering geometry for {001}, the increase of the TO intensity indicates that stacking disorder breaks the k-selection rule [21]. In our case, the sample is viewed in cross-section and the Raman spectra were acquired along the (110).
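A linear µ-PL map like the one described above can be reduced to depth profiles by integrating the intensity in narrow windows around the 393 nm (4H-like) and 425 nm (6H-like) emissions at each position. The following sketch shows one way to do this on synthetic data; the ±3 nm integration window and the cutoff used to define the signal extent are assumptions.

```python
# Sketch of building SF depth profiles from a linear micro-PL map:
# integrate PL intensity in narrow windows around the 393 nm (4H-like)
# and 425 nm (6H-like) emissions at each depth. Data and the +/-3 nm
# integration windows are illustrative assumptions.
import numpy as np

wavelengths = np.linspace(350, 450, 500)      # nm
depths = np.linspace(0, 85, 86)               # um from removed Si interface

def band_intensity(spectra, center, half_width=3.0):
    """Integrated intensity in [center - hw, center + hw] per spectrum."""
    mask = np.abs(wavelengths - center) <= half_width
    return spectra[:, mask].sum(axis=1)

# spectra: one PL spectrum per depth (rows), e.g. loaded from the map.
# Here, synthetic: 4H-like signal below ~25 um, 6H-like below ~15 um.
spectra = np.zeros((depths.size, wavelengths.size))
spectra += np.exp(-((wavelengths - 393) / 2) ** 2) * (depths < 25)[:, None]
spectra += np.exp(-((wavelengths - 425) / 2) ** 2) * (depths < 15)[:, None] * 0.5

profile_4h = band_intensity(spectra, 393.0)
profile_6h = band_intensity(spectra, 425.0)
print("4H-like signal extends to", depths[profile_4h > 0.1].max(), "um")
print("6H-like signal extends to", depths[profile_6h > 0.1].max(), "um")
```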
In accordance with the above configuration, we observe a more intense TO peak outside the defective zone (Figure 4d) and a less intense TO peak in the presence of the signal associated with the defects (Figure 4e,f). The average FWHM of the TO peak shows a constant value between the regions with (Figure 4e,f) and without the defects (Figure 4d). The second point concerns the type of defects and their distribution along the thickness. The literature on the thermodynamic stability and polytypes of SiC is rich. Many theoretical works were carried out to explain the band structure and total energies of the various polytypes of silicon carbide [22][23][24], showing that, at high temperature, the 6H- and 4H-SiC polytypes become thermodynamically more stable than 3C-SiC [15]. In particular, the formation energies of SFs in 3C-SiC decrease with temperature (particularly for ESFs). Even though the 6H-like SF shows a lower formation energy than the 4H-like one [15], and is considered the most common inclusion of other polytypes in 3C-SiC [25], we clearly observe the peak related to the presence of 6H-like SFs only in the first 15 µm of the film (from the removed Si interface towards the surface, Figure 4c). Meanwhile, it was possible to detect the peak attributable to 4H-like SFs in the first 20-25 µm of the film (Figure 4b). Another interesting aspect is that the 6H-like signal appears coupled to the 4H-like one, while it is possible to observe large areas where only the 4H-like signal is visible (Figure 4b,c). Furthermore, moving along the cross section of the samples, for thicknesses greater than 25 µm, we observed the 4H-like signal in small areas, but not the 6H-like signal (spectra not shown here). It is important to underline that, to detect the signals related to SFs along a cross-section, the defects must have a high density and be sufficiently superficial with respect to the exposed section, particularly for PL measurements. In fact, the penetration length in 3C-SiC of the laser source used to acquire the PL spectra is about 3 µm. As the film was grown at a constant temperature (1400 °C), the distribution of SFs along the thickness of the film cannot depend on the formation energy alone. The cause could be ascribed to the stress profiles, which vary moving from the interface to the surface [11]. For example, at the interface, where there is a large residual strain, both types of SFs may form. However, factors such as the crystallographic orientation of the substrate and/or the carbonization process of the silicon can influence the formation of defects at the interface. Therefore, one type of SF can be privileged over another. Another possibility could be the high concentration of nitrogen, which could facilitate the closure of one type of SF rather than another. The first results (not shown here) on 3C-SiC samples etched in KOH showed that the concentration of surface defects depends on the nitrogen concentration. In particular, the density of SFs decreases with increasing nitrogen concentration. Experimental tests of these hypotheses are in progress. Conclusions The 3C-SiC hetero-epitaxial layers, doped with nitrogen, were grown in a horizontal hot-wall chemical vapour deposition (CVD) reactor using (100)-oriented Si substrates. The melting of the silicon substrate allowed us to obtain high-quality free-standing 4 inch 3C-SiC films.
We showed that, by µ-Raman spectroscopy and room-temperature µ-photoluminescence, it was possible to detect the distribution of stacking faults in the 3C-SiC cross-section. In particular, two kinds of SFs, 4H-like and 6H-like, were identified. Each kind of SF shows a characteristic PL emission of 4H-SiC and 6H-SiC, located at 393 and 425 nm, respectively. Even though 6H-like SFs show a lower formation energy than 4H-like ones, and are considered the most common inclusion of other polytypes in 3C-SiC, we observe the presence of 6H-like SFs only near the original interface with silicon, in particular in the first 15 µm. Meanwhile, it was possible to detect the 4H-like SFs over a thickness of 20-25 µm. Conflicts of Interest: The authors declare no conflict of interest.
4,053
2020-04-01T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
The Expression of the Chemokine CXCL14 Correlates with Several Aggressive Aspects of Glioblastoma and Promotes Key Properties of Glioblastoma Cells Glioblastoma (GBM) is a primary brain tumor whose prognosis is inevitably dismal, leading patients to death in about 15 months from diagnosis. Tumor cells in the mass of the neoplasm are in continuous exchange with cells of the stromal microenvironment through the production of soluble molecules, among which chemokines play prominent roles. CXCL14 is a chemokine with a pro-tumor role in breast and prostate carcinoma, where it is secreted by cancer-associated fibroblasts and contributes to tumor growth and invasion. We previously observed that CXCL14 expression is higher in GBM tissues than in healthy white matter. Here, we study the effects of exogenously supplemented CXCL14 on key tumorigenic properties of human GBM cell lines. We show that CXCL14 enhances the migration ability and the proliferation of the U87MG and LN229 GBM cell lines. None of these effects was affected by AMD3100, an inhibitor of the CXCR4 receptor, suggesting that the observed CXCL14 effects are not mediated by this receptor. We also provide evidence that CXCL14 enhances the sphere-forming ability of glioblastoma stem cells, considered the tumor-initiating cells responsible for tumor onset, growth and recurrence. In support of our in vitro results, we present data from several GBM expression datasets, demonstrating that CXCL14 expression is inversely correlated with overall survival, that it is enriched at the leading edge of the tumors and in infiltrating tumor areas, and that it characterizes mesenchymal and NON G-CIMP tumors, known to have a particularly bad prognosis. Overall, our results point to CXCL14 as a protumorigenic chemokine in GBM. Introduction Glioblastoma is the most common and deadliest type of brain tumor that, despite multimodal and aggressive therapy, leads to death within 15 months from diagnosis [1,2]. These tumors, highly infiltrative in the brain parenchyma, are composed not only of purely cancer cells, but also of a variety of stromal cells, among which reactive astrocytes and microglia/macrophages play prominent roles in sustaining tumor growth and progression [3][4][5]. A continuous crosstalk, carried by several types of soluble molecules, connects tumor cells and the surrounding stromal cells in the tumor microenvironment. Chemokines, a family of secreted proteins with established roles in the stimulation of cell migration and growth, are important mediators of tumor-stroma connections in solid tumors in general. Results In order to set up our in vitro model, we started by measuring CXCL14 expression in a number of human glioblastoma cell lines and compared them with cultured human astrocytes. In the cell extracts of three human glioblastoma cell lines, namely A172, LN229 and U87MG, we could detect, by ELISA, CXCL14 levels in the range of 90-400 pg/mL. In support of our previous findings about the enriched expression of CXCL14 in astrocytes in the bulk of the tumor, we found that the cell extracts of cultured human astrocytes contain CXCL14 at a concentration which is at least one order of magnitude higher than that measured in glioblastoma cells (Figure 1). Our working hypothesis is that CXCL14 in GBM samples is produced mainly by "stromal" cells, such as reactive astrocytes, thus affecting the tumoral properties of GBM cells.
In order to establish a source of secreted CXCL14 to be used in in vitro experiments with glioblastoma cells, we took advantage of a cell line of NIH-3T3 fibroblasts stably expressing human CXCL14, NIH-CXCL14 [15]. As expected, these cells not only produced very large amounts of human CXCL14, comparable to those endogenously found in human astrocytes, but they also secreted CXCL14 (138.5 pg/mL) into their supernatant (Figure 1). We chose to employ NIH-CXCL14 conditioned medium as a source of exogenous CXCL14 to study its effects on the proliferation of two human glioblastoma cell lines, LN229 and U87MG, both expressing CXCL14, though at low levels (Figure 1), but unable to produce detectable levels of the secreted chemokine in their supernatants (not shown). The incubation with NIH-CXCL14 conditioned medium sensibly and reproducibly enhanced U87MG cell growth, with an effect that increased over time (Figure 2A). However, NIH-CXCL14 conditioned medium barely affected the cell growth of LN229 cells, and only at late time points, as seen in Figure 2B. In Figure 2A, the results of the supplementation of 10 µM AMD3100 to U87MG cells incubated with either NIH-ctr or NIH-CXCL14 conditioned media are also shown. Results are shown as the mean ± S.D. and represent the average of three experiments performed independently. Data were analyzed by a two-tailed unpaired Student's t-test. * p < 0.05; ** p < 0.01. CXCL14 is considered an "orphan" chemokine, as its receptor has never been unequivocally defined, even if some papers showed that it can bind to CXCR4 [18], formally the receptor of CXCL12. This receptor is expressed on glioblastoma cells and is required for tumor growth, and its stimulation is involved in VEGF production by glioblastoma cells and in the interaction with endothelial cells in the tumor [19][20][21].
With the aim of understanding whether the CXCL14 functional effects we observed on glioblastoma cell lines may be mediated by CXCR4, we employed the specific CXCR4 inhibitor AMD3100 [22] in proliferation assays of U87MG cells incubated with NIH-CXCL14 conditioned medium. However, in the presence of AMD3100, the increase in cell proliferation due to the NIH-CXCL14 supernatant was maintained (Figure 2A), suggesting that the CXCL14 effect on proliferation is not mediated by CXCR4. In addition, we did not observe any variation in CXCR4 expression levels in U87MG cells grown in NIH-CXCL14 conditioned medium, compared to cells grown in NIH-ctr conditioned medium (Supplementary Figure S1), indicating that CXCL14 exogenous supplementation does not affect CXCR4 basal expression. CXCL14 has a demonstrated role as a pro-tumoral chemokine produced in the tumor microenvironment of breast carcinoma by cancer-associated fibroblasts (CAFs) [15,16]. In that context, CXCL14 was shown to play its function by stimulating ERK1/2 phosphorylation. In line with this, when we treated U87MG cells with recombinant CXCL14, we detected an increase in ERK1/2 phosphorylated forms (Figure 3). As the migratory ability of glioblastoma cells is tightly connected with their lethal features, we also assayed whether NIH-CXCL14 conditioned medium could modify the migration propensity of GBM cells. Scratch tests performed on LN229 cells demonstrated that the NIH-CXCL14 supernatant significantly increased the number of migrated cells compared to those incubated with the conditioned medium of NIH-ctr negative control cells (Figure 4A). Further assays performed using Boyden chambers confirmed and refined these results in LN229 cells (Figure 4B), and in U87MG cells too (Figure 4C).
In both cell types, CXCL14 supplementation by incubating the cells with NIH-CXCL14 conditioned medium increased the number of migrated cells by about twofold. However, as previously noted in the proliferation assays, the inhibition of the CXCR4 receptor by AMD3100 did not affect the CXCL14 pro-migratory function (Figure 4C). In Figure 3, the panel showing basal ERK1/2 phosphorylation was produced after a longer exposure, in order to reveal the faint bands present in untreated cells. Figure 4. (A) Representative picture (left) and relative graphical visualization (right) of a scratch test assay measuring the migration ability of LN229 cells previously incubated with either NIH-ctr or NIH-CXCL14 conditioned medium. Pictures were taken at time 0, when the scratch was performed, and 18 h after scratching. In the graph, the distance migrated by cells in control conditions is set to 1. (B) Migration transwell assays performed with LN229 cells after incubation with conditioned media of either negative control NIH-ctr or CXCL14-secreting NIH-CXCL14 cells. (C) Migration transwell assays performed with U87MG cells after incubation with conditioned media of either negative control NIH-ctr or CXCL14-secreting NIH-CXCL14 cells, with or without the supplementation of 10 µM AMD3100. The graph shows the number of migrating cells compared to the negative control, set to 1. Results are presented as mean ± S.D. with significant differences from controls (*) shown (p < 0.05). Two-tailed unpaired t-tests were used to determine significance between groups. The experiments were performed three times (biological replicates). With the aim of strengthening these results, obtained in transient conditions, we also produced U87MG and LN229 cells stably overexpressing human CXCL14 (Figure 5A,B). Both stable cell lines clearly showed an increased proliferation compared to cells transduced with a negative control vector (Figure 5C). Moreover, the stable U87MG cells overexpressing CXCL14 showed a significantly increased migration ability (Figure 5D), in agreement with what we have shown for LN229 cells.
Figure 5. In the immunoblots shown on the left, ectopic CXCL14 was revealed by using an anti-V5 antibody, while on the right a CXCL14-specific antibody was used. β-actin detection was used as a loading control. (C) Proliferation (MTS assay) of U87MG (left) or LN229 (right) cells stably transfected with either the empty vector pcDNA3.1/V5-His C or the CXCL14-expressing vector pCXCL14-V5. Results are shown as the mean ± S.D. and represent the average of two experiments performed in triplicate. Data were analyzed by a two-tailed unpaired Student's t-test. * p < 0.05; ** p < 0.01. (D) Boyden chamber migration assay performed with U87MG cells stably transfected with either the empty vector pcDNA3.1/V5-HisC (pcDNA) or the CXCL14-expressing vector pCXCL14-V5. The graph depicts the relative migration of CXCL14-expressing cells compared to empty-vector-transfected ones, whose migration ability was set to 1. Results are presented as mean ± S.D. with significant differences from controls (*) shown (p < 0.05). The origin of glioblastoma, though controversial, is believed to reside in "stem-like" cells, also dubbed tumor-initiating cells, characterized by the ability to self-renew in vitro and in vivo, and considered to be the main source of glioblastoma resistance to therapy and, consequently, of tumor relapse and patient death [23,24]. These cells, isolated from fresh tumor samples, can be propagated in vitro and grown as "spheres" in defined culture conditions. In order to extend the frame of our functional observations about CXCL14's possible roles in glioblastoma, we measured the ability of three distinct glioblastoma stem cell lines to form spheres in the presence of NIH-CXCL14 conditioned medium. Figure 6 shows that the incubation with CXCL14-containing medium increased this ability in all cell lines. However, the average size (diameter) of the spheres produced in the two conditions was not significantly different (not shown), suggesting that exogenously supplemented CXCL14 mostly works on the self-renewal ability of GSCs, rather than on their proliferation (reflected by the size of the spheres).
In the case of BT517, which showed the highest fold-change difference in the number of neurospheres produced, we also assayed whether AMD3100 could affect the result. Again, as for the proliferation and migration of U87MG cells, AMD3100 supplementation did not modify the effect of the CXCL14-containing medium on the spherogenic ability of glioblastoma stem cells. To complement our experimental results with a view of clinical data depicting CXCL14 expression in glioblastoma samples, we analyzed several datasets of glioblastoma patients by mining GlioVis, a web application for data visualization and analysis of brain tumor expression datasets (http://gliovis.bioinfo.cnio.es/) [25]. In the Rembrandt dataset, including 315 gliomas of different grades, the highest CXCL14 expression is found in grade IV tumors, the most aggressive and lethal ones (Figure 7A). In addition, mining of the IVY GAP dataset, containing RNA-seq results from a total of 122 RNA samples of 5 anatomic structures generated from 10 tumors, revealed that CXCL14 expression is stronger at the leading edge of the tumors and also in infiltrating tumor areas, compared to other parts of the tumors (Figure 7B). We also found strong evidence of a subtype-specific enrichment of CXCL14 expression in glioblastoma. In fact, as shown in Figure 7C, CXCL14 RNA is clearly overexpressed in mesenchymal tumors, compared to classical (p = 3.8 × 10⁻⁹) and even more so to proneural ones (p = 2.7 × 10⁻⁴²). In the same dataset (TCGA, including 528 patients), CXCL14 expression neatly characterizes NON G-CIMP samples (NON G-CIMP vs G-CIMP, p = 2.8 × 10⁻¹⁴) (Figure 7D). Figure 6. Exogenously supplemented CXCL14 enhances the self-renewal ability of glioblastoma stem cells. Glioblastoma stem cell self-renewal assay. The histogram shows the fold change (FC) in the numbers of neurospheres formed by three different glioblastoma stem cell lines, BT275, BT168, and BT517. The results were obtained by counting the number of neurospheres in four random fields per sample, and comparing cells grown in the presence of NIH-ctr conditioned medium to cells grown in the presence of NIH-CXCL14 conditioned medium. For BT517 cells, the assay was performed in either the presence or the absence of AMD3100 (n = 3; mean ± S.D.). ** p-value < 0.01; *** p-value < 0.001.
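The subtype comparisons reported above rely on pairwise tests between group levels with Bonferroni correction for multiple testing. A minimal sketch of such a procedure is given below; the group values are random placeholders rather than TCGA data, and the use of Student's t-tests here is an assumption for illustration.

```python
# Sketch of the pairwise-comparison procedure behind Figure 7: t-tests
# between every pair of expression groups with Bonferroni correction.
# Group data are random placeholders, not TCGA values.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "CL":  rng.normal(5.5, 1.0, 150),   # hypothetical CXCL14 log-expression
    "MES": rng.normal(6.8, 1.0, 160),
    "PN":  rng.normal(4.6, 1.0, 140),
}

pairs = list(combinations(groups, 2))
n_tests = len(pairs)
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(p * n_tests, 1.0)       # Bonferroni correction
    print(f"{a} vs {b}: p = {p:.2e}, Bonferroni-adjusted p = {p_adj:.2e}")
```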
Figure 7. (B) GlioVis (http://gliovis.bioinfo.cnio.es/) [25] analysis of CXCL14 expression in the IVY GAP dataset, containing RNA-seq results from a total of 122 RNA samples of five anatomic structures generated from 10 glioblastomas (as in the table below, which shows the pairwise comparisons between group levels with corrections for multiple testing (p-values with Bonferroni correction)). (C,D) Box-plot graphs representing the GlioVis [25] analysis of CXCL14 expression in the TCGA dataset, including glioblastoma samples from 528 patients, whose tumors were classified as "classical" (CL), "mesenchymal" (MES) or "proneural" (PN) (in C) (MES vs CL p = 3.8 × 10⁻⁹; MES vs PN p = 2.7 × 10⁻⁴²; CL vs PN p = 1.5 × 10⁻¹⁹), or as NON G-CIMP and G-CIMP (in D) (NON G-CIMP vs G-CIMP p = 2.8 × 10⁻¹⁴). Pairwise comparisons between group levels with corrections for multiple testing (p-values with Bonferroni correction). In all graphs, for each group of samples, the number in brackets represents the number of samples analyzed. Notably, patients carrying NON G-CIMP tumors are known to have a worse prognosis than G-CIMP ones [26]. Interestingly, CXCL14 expression shows an inverse correlation with overall survival in this set of patients (Figure 8). However, when patients are divided by the subtype of their tumors, it is clear that the statistically different survival is observed specifically only in the proneural subtype (Figure 8), where the overall levels of CXCL14 are the lowest compared to all other subtypes. This suggests that CXCL14 expression may discriminate, among proneural glioblastomas, those with the worst prognosis.
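The survival analysis summarized above (Figure 8) can be sketched as follows: patients are split at the median CXCL14 expression and the two groups' overall survival is compared with Kaplan-Meier estimates and a log-rank test. The snippet uses the lifelines package on synthetic data; the median split, censoring rate, and effect size are all assumptions.

```python
# Sketch of a survival analysis like the one behind Figure 8: split
# patients at the median CXCL14 expression and compare overall survival
# with Kaplan-Meier curves and a log-rank test. Synthetic data only.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 200
expression = rng.normal(5.0, 1.5, n)           # hypothetical CXCL14 levels
high = expression > np.median(expression)
# Assumed effect: shorter survival in the high-expression group.
months = np.where(high, rng.exponential(12, n), rng.exponential(18, n))
observed = rng.random(n) < 0.8                 # ~20% censored patients

kmf_high = KaplanMeierFitter().fit(months[high], observed[high],
                                   label="CXCL14 high")
kmf_low = KaplanMeierFitter().fit(months[~high], observed[~high],
                                  label="CXCL14 low")
print("median OS:", kmf_high.median_survival_time_,
      "vs", kmf_low.median_survival_time_)

result = logrank_test(months[high], months[~high],
                      observed[high], observed[~high])
print(f"log-rank p-value: {result.p_value:.3g}")
```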
Figure 8 (caption, partially recovered): GlioVis analysis [25]; on the right, three distinct graphs were generated by separately analyzing GBM patients based on the subtypes of their tumors. Altogether, these observations point to CXCL14 expression correlating with the most aggressive types of glioblastomas and with the most aggressive regions in the tumor mass, those that drive the growth and dissemination of the tumor in the affected brain. Discussion The role of CXCL14 in cancer is controversial, as it has been shown to play either tumor-suppressive or tumor-supportive roles, depending on the specific tumor type [13,[27][28][29][30]. In glioblastoma, however, we and others have shown that CXCL14 mRNA expression is enriched in tumor samples [17,31], where this chemokine is likely produced and secreted in the tumor microenvironment by reactive astrocytes [17] or other types of stromal cells [32]. In this regard, it is interesting to notice that, in the samples we tested in the present work, the most abundant expression of CXCL14 was found in cultured astrocytes. Notwithstanding the differences obviously existing between reactive astrocytes in vivo and cultured astrocytes, this may suggest that astrocytes possess the intrinsic ability to efficiently produce CXCL14. These findings indicate that CXCL14 may contribute to the tumor-supportive function of the microenvironment, ultimately strengthening the aggressive features of glioblastoma cells.
Here, we show that indeed CXCL14, both exogenously supplemented and endogenously overexpressed, enhances the proliferation and the migratory ability of two different human glioblastoma cell lines. In these cells, the exogenous supplementation of CXCL14 induces ERK1/2 phosphorylation, as it does in breast carcinoma, where CXCL14 is secreted in the microenvironment by cancer-associated fibroblasts [15]. This result, too, converges with those collected in other solid tumors, where CXCL14 plays a tumor-supportive role. Moreover, our observation that CXCL14 enhances the ability of glioblastoma stem cells to form neurospheres is completely new, and indicates that this chemokine may contribute to key functions of the cells at the origin of glioblastoma. However, what remains obscure is the receptor through which this chemokine signals and produces its downstream effects. In our model, it is unlikely that CXCR4 is involved, as the specific inhibition of this receptor did not modify the CXCL14 effects on glioblastoma cells. We also think we can reasonably exclude that GPR85, an orphan G protein-coupled receptor recently found to bind CXCL14 on mammary fibroblasts in the tumor microenvironment of breast carcinoma [9], works as the receptor for CXCL14 in glioblastoma, as our SAGE data on glioblastoma samples and an RNA-seq analysis of GBM stem-like cells we recently performed did not show any evidence of GPR85 expression in tumor samples ([17] and data not shown). During the preparation of our manuscript, a paper was published that describes the role of fibroblast-derived CXCL14 in EMT and metastasis of breast cancer [33], and identifies the GPCR ACKR2 as a mediator of CXCL14 action. In the glioblastoma cells we are studying here, though, we do not deem it likely that CXCL14 functions are mediated by ACKR2, as the expression of its mRNA, which we measured by qRT-PCR, was undetectable in both U87MG and LN229 cells (not shown). Moreover, as for GPR85, our SAGE data on glioblastoma samples and the RNA-seq analysis of GBM stem-like cells showed only a barely detectable level of ACKR2 mRNA expression in tumor samples ([17] and data not shown). What Sjöberg and co-authors show, however, may still be highly relevant to glioblastoma, as the exogenous source of CXCL14 they use is the same as ours, i.e., murine fibroblasts ectopically overexpressing human CXCL14. In their paper, they identify soluble factors that are induced in the secretome of NIH-CXCL14 fibroblasts compared to negative control ones, and propose that some of the molecules specifically enriched in those cells, among which are many pro-angiogenic factors, inducers of EMT and of extracellular matrix remodeling, and several other chemokines, may be the direct effectors of CXCL14 action on tumor cells. Notably, several of these molecules are known to positively affect the tumorigenic properties of glioblastoma cells. Thus, we cannot exclude that in glioblastoma, too, CXCL14 exerts its pro-tumoral role at least in part indirectly, by inducing, in the producer cells of the microenvironment, the secretion of other molecules. In our work, however, we also show that CXCL14 produced directly by stably overexpressing glioblastoma cell lines enhances their proliferation and migration ability. Thus, CXCL14 produced in a context completely different from the fibroblast-derived one still induces the same results.
We think it is quite unlikely that different cell types (murine fibroblasts versus human glioblastoma cell lines) respond to ectopic CXCL14 overexpression by secreting the same set of factors, even though we cannot exclude that some of them may overlap and be at least in part responsible for CXCL14 action. In fact, in the absence of commercially available bona fide anti-CXCL14 blocking antibodies, we cannot discriminate between a purely direct and a partly indirect mechanism of action for CXCL14 on glioblastoma cells. Regarding the clinical implications of our results, our findings about CXCL14 expression in glioma datasets support a view of CXCL14 tagging the most aggressive tumors (grade IV versus other grades), subtypes (mesenchymal and classical versus proneural), regions (the leading edge of the tumors and infiltrating tumor areas), and G-CIMP status (NON G-CIMP vs G-CIMP). This last observation in particular may overlap with the one by Zeng and co-authors [31] about a trend of CXCL14 overexpression in IDH wt gliomas versus IDH mutant ones: G-CIMP tumors are in fact closely related to IDH mutation, and a better prognosis characterizes patients carrying them [26,34]. Moreover, G-CIMP subtypes are highly enriched among the proneural subtypes, compared with NON G-CIMP tumors [34]. However, a detailed analysis showed that, among IDH mutant gliomas, the extent of global DNA methylation can be used to further distinguish patients: tumors with a low degree of DNA methylation (G-CIMP-low) presented a poorer outcome, while those with higher DNA methylation (G-CIMP-high) had a better overall survival [35]. It may be interesting to verify whether, among the proneural tumors, those we have found with higher CXCL14 expression and shorter survival correspond to the G-CIMP-low subset. In this view, our observation that CXCL14, much less expressed in the proneural subtype of tumors than in the others, shows an inverse correlation with overall survival specifically in that subtype may be of particular interest. Moreover, proneural tumors, even if showing a trend toward a slightly longer survival than other subtypes, do not respond to aggressive therapy [36], defined as concurrent chemo- and radiotherapy or more than three subsequent cycles of chemotherapy. It may be worth investigating whether CXCL14 expression in this subtype affects patients' response to this therapeutic regimen. Altogether, our results highlight the involvement of CXCL14 in the glioblastoma environment, where it likely affects not only established tumor cells, but also the stem cells. This observation, together with the subtype-specific enrichment of CXCL14, deserves further studies to unravel the significance of CXCL14 expression in the origin of glioblastoma and in the differential response of these tumors to therapy. To this aim, in vivo studies are needed to unequivocally demonstrate to what extent CXCL14 contributes to GBM growth and, consequently, whether it could be considered a therapeutic target. Further important issues that need to be addressed are the identification of its receptor in glioblastoma and the production of blocking molecules for both CXCL14 and its receptor, in order to selectively prevent its function. Transfections were performed with Lipofectamine 3000 reagent (Invitrogen, Monza, Italy) in Opti-MEM I (Invitrogen, Monza, Italy), following the manufacturer's recommendations.
Stably transfected U87MG and LN229 cell lines were generated by growing cells in medium containing the selective agent G418 (1 mg/mL) for three weeks, starting 48 h after transfection. The construct encoding human CXCL14 was prepared by cloning the human CXCL14 cDNA into pcDNA3.1/V5-His C (Invitrogen) digested with BamHI and XhoI. The primers used for cloning the CXCL14 cDNA were: 5'-AAAGGATCCATGTCCCTGCTCCCACGC-3' and 5'-AAACTCGAGCTTCTTCGTAGACCCTGCG-3'. For treatments with recombinant human CXCL14 (PeproTech), cells were incubated for 7 min with 400 ng/mL recombinant CXCL14 and then collected. Human CXCL14 ELISA The CXCL14 protein level in protein extracts and conditioned media was determined using the Human CXCL14 PicoKine™ ELISA Kit (BOSTER, Pleasanton, CA, USA) following the manufacturer's protocol. The absorbance at 450 nm was evaluated using a BP800 Microplate Absorbance Reader (Biohit Oyj Healthcare, Milan, Italy). The CXCL14 concentration in the samples was interpolated from the standard curve. Migration Assays To measure the migration of LN229 cells by a "scratch" test, 150 × 10³ cells per well were plated in triplicate in a 24-well plate. After 24 h, the medium of each well was replaced with either NIH-CXCL14 or NIH-ctr conditioned medium, and cells were incubated for an additional 30 h in a humidified atmosphere containing 5% CO₂ at 37 °C. After 30 h, a scratch was produced in each well with a pipette tip, and the medium was changed to serum-free DMEM. Pictures were taken at increasing times after the scratch, up to 18 h. For the transwell migration assays, U87MG cells stably expressing CXCL14 were plated in DMEM culture medium without serum on BD BioCoat Control Cell Culture Insert Systems (BD Biosciences, Milan, Italy) at 25 × 10³ cells/chamber. The chemoattractant (DMEM supplemented with 10% FBS) was added to the bottom wells of the plate. The cells were incubated at 37 °C, 5% CO₂, for 6 h. After incubation, the non-migrated cells were removed from the upper surface of the membrane by scrubbing with a cotton-tipped swab, while the cells that had migrated and adhered to the bottom surface of the membrane were fixed with 100% methanol and stained with DAPI. The number of migrated cells was evaluated in three different fields of three different wells for each condition. When the transwell assays were performed with non-transfected LN229 (or U87MG) cells, the cells were pre-incubated for 24 h with the conditioned media from either NIH-ctr or NIH-CXCL14 fibroblasts. Then, after washing with PBS, the assay was performed as described above for the stable cell lines. Where indicated, 10 µM AMD3100 (Sigma-Aldrich, #A5602) was added to the medium. MTS Cell Proliferation Analysis U87MG and LN229 cells were trypsinized, harvested and seeded onto 96-well flat-bottomed plates at a density of 3000 cells/well, then incubated at 37 °C for 24 h in DMEM supplemented with 10% FBS. Then, cells were washed with PBS and the medium was replaced with the conditioned media from either NIH-ctr or NIH-CXCL14 fibroblasts. Subsequently, the cells were subjected to
8,141
2019-05-01T00:00:00.000
[ "Medicine", "Biology" ]
Scale-Aware Multispectral Fusion of RGB and NIR Images Based on Alternating Guidance In low-light conditions, color (RGB) images captured by increasing the camera ISO contain much noise and detail loss. However, near infrared (NIR) images are robust to noise and have clear textures without color. In this paper, we propose scale-aware multispectral fusion of RGB and NIR images based on alternating guidance. Low-light RGB images provide large-scale image structure and color information, while NIR images have fine details lost in RGB images. Since they are complementary, we adopt alternating guidance for their fusion using weighted least squares (WLS). First, we perform the first guidance to denoise the RGB image and obtain the base layer. Then, we conduct the second guidance for scale-aware detail transfer of the NIR image and yield the detail layer. Finally, we combine the base and detail layers to generate a fusion image. We maximize the multispectral advantage of RGB and NIR images for fusion based on alternating guidance. Experimental results show that the proposed method achieves good performance in noise reduction, detail transfer and color reproduction, and is superior to state-of-the-art methods in terms of quantitative measurement and computational efficiency. I. INTRODUCTION With advances in sensor technology, image types have become highly diversified. In addition to the widely used visible RGB cameras, there exist depth cameras to record depth information, infrared (IR) and NIR cameras for invisible wavelength band imaging, and X-ray cameras for medical imaging. Due to the increasing demands of computer vision applications, the requirements for imaging quality keep rising. To maximize the advantages of various sensors, image fusion uses multiple sensors to improve imaging quality and the accuracy of vision applications [20], [28]. Recent image fusion methods include fusion of flash/no-flash images [19], infrared/RGB images [17], and near infrared (NIR)/RGB images [13], [21], [34]. In this paper, we focus on the multispectral fusion of RGB and NIR images in low-light conditions. A. RELATED WORK In low-light conditions, it is hard to capture high-quality RGB images without flash due to the low signal-to-noise ratio (SNR). A commonly used method is to increase the camera ISO setting in low-light conditions. However, the quality of RGB images is seriously degraded by noise at a higher ISO value, and thus edges and details are severely damaged. IR band imaging acquires images stably in adverse environments such as low light. It is widely used in object recognition [25], object detection [5], video surveillance [9], and remote sensing [24]. Although IR images have the advantage of resisting unfavorable environments, they generally have low resolution with poor textures. These defects limit IR images in certain applications requiring high-quality images, such as night vision systems. However, in the NIR band (750-1400 nm), which is close to the visible band [2], NIR images have high resolution, clean textures, and robustness to noise. Therefore, compared with IR images, NIR images are more suitable for night vision systems that require higher image quality, such as video surveillance [22]. Although NIR images do not have color information, it is easy to simultaneously acquire RGB
and NIR images, which are complementary, through hybrid camera systems. (Fig. 1: fusion results of Yan et al. [30] and the proposed method on image pairs captured by a JAI AD-130 GE camera, which simultaneously captures RGB and NIR images through the same optical path with two CCDs.) Therefore, acquiring a fusion image with color information and clean details through the fusion of RGB and NIR images provides a solution to high-quality imaging in low-light conditions. Prior to introducing RGB and NIR image fusion, we first review RGB and IR image fusion. The research on RGB and IR image fusion has a long history, and it can be divided into seven categories: multiscale transform [10], [18], [35], sparse representation [11], [28], neural networks [8], [29], subspace methods [1], [7], saliency-based methods [33], [36], hybrid models [12], [15], and other methods [14], [37]. NIR images provide higher resolution and better details than IR ones in low-light conditions. Therefore, the fusion of RGB and NIR images is more suitable for producing high-quality fusion results in low-light conditions. However, due to the difference in contrast and structure between the luminance channel of the RGB image and the NIR image, the luminance channel of the RGB image cannot simply be replaced by the NIR image. The direct replacement of the luminance channel by the NIR image leads to color distortion and structural destruction. To solve the contrast difference, Son [26] proposed a method for low-light color image denoising based on contrast conversion between NIR images and luminance channels. Son et al. [27] further proposed an NIR coloring method using a contrast-preserving mapping model. To successfully preserve structural information of RGB and NIR images, Shibata et al. [23] proposed a fusion method based on high-visibility area selection. Yan et al. [30] explicitly modelled derivative-level confidence and proposed cross-field joint image fusion by optimizing a scale map. B. CONTRIBUTIONS In this paper, we propose scale-aware fusion of RGB and NIR images in low-light conditions based on alternating guidance. Noisy RGB images contain color information and large-scale image structures, while NIR images include small-scale fine textures. We adopt alternating guidance for the fusion of RGB and NIR images based on WLS to make full use of their respective advantages. First, we perform the first guidance for denoising on the noisy RGB image and obtain the base layer. For the first guidance, the joint guidance of the NIR image and the denoised luminance of the RGB image is employed to remove noise while retaining edges. Since NIR imaging highly depends on the NIR light strength, NIR images have no or very small values beyond the range that the NIR light reaches. This usually happens in the outdoor environment, especially for the background at night time, which makes the noisy RGB information also useful for fusion (see Fig. 1). Thus, we perform sigmoid-based NIR weighting for base layer generation (BLG) to achieve different smoothing degrees according to the NIR intensity. Then, we conduct the second guidance for scale-aware detail transfer on the NIR image and obtain the detail layer. Compared with direct smoothing on the NIR image, the second guidance is able to generate more complete small-scale textures lost in the noisy RGB image. Finally, we combine the base and detail layers to produce a fusion result. As shown in Fig. 1, the proposed method successfully transfers details from the NIR image to the fusion result with noise removal and color reproduction. We capture the image pairs in low-light conditions with a JAI AD-130 GE camera. 
This camera is able to simultaneously capture RGB and NIR images through the same optical path with two CCDs (https://www.jai.com/products/ad-130-ge). As shown in the figure, RGB images captured in the low-illumination condition have color but suffer from severe noise and detail loss. However, NIR images captured in the same condition have fine details with little noise. Thus, they are complementary, and their fusion is able to take advantage of both. The preliminary result of this paper was presented in [38]. In this paper, we extend our previous work in the following three respects. First, we perform sigmoid-based NIR weighting to selectively take both NIR textures (foreground) and RGB information (background) in fusion. Second, we remove NIR highlights in fusion by the joint guidance of the NIR image and the Y channel (RGB image). Third, we capture real image pairs with a JAI AD-130 GE camera, and verify the effectiveness of the proposed method on them. Compared with existing methods, the main contributions of the proposed method are as follows: • We propose scale-aware fusion of RGB and NIR images based on alternating guidance to make full use of the multispectral advantage. Low-light RGB images contain large-scale image structures with much noise, while NIR images include small-scale fine textures without color. Since they are complementary, we adopt alternating guidance to achieve scale-aware fusion of the paired images. • We adopt WLS to alternately use RGB and NIR images as guidance for noise removal, detail transfer and NIR highlight removal. WLS is an edge-aware smoothing filter based on global optimization, which is used in the first guidance (RGB image denoising) to get the base layer and in the second guidance (NIR texture transfer) to obtain the detail layer. We combine the base and detail layers to generate a fusion image. • We achieve high computational efficiency with O(N) time complexity by WLS-based fast global smoothing. Fig. 2 shows the whole framework of the proposed scale-aware RGB/NIR image fusion based on alternating guidance. We adopt weighted least squares (WLS) for fast global smoothing, which is a global edge-preserving filter. We perform scale-aware alternating guidance for the fusion of RGB and NIR images based on WLS. We use the combination of the NIR image and the denoised RGB luminance channel as joint guidance to get the base layer in the first guidance. The first guidance effectively removes noise while protecting the structure of the RGB image. We utilize an over-smoothed RGB image as guidance to obtain the detail layer in the second guidance. Since this over-smoothed RGB image only provides the large-scale structure of the RGB image, the details of different scales and contrasts lost in the RGB image are completely taken from the NIR image by guiding the NIR image smoothing. The second guidance successfully takes clear texture from the NIR image. Finally, we combine the base and detail layers to generate the fusion image. A. SIGMOID-BASED NIR WEIGHTING To effectively aggregate the multispectral information, we perform independent processing of the foreground and background segmented by the NIR intensity. Since NIR imaging highly depends on the NIR intensity, the NIR image contains rich information within the range that the NIR light can reach. However, the NIR image has little or no information beyond that range. Thus, we mainly use the NIR information within the range, while we mainly utilize the RGB information beyond the range. 
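One plausible realization of this selective weighting, which the next paragraph formalizes, is sketched below in Python. The logistic form, the function name, and the parameter values are assumptions for illustration, not the authors' exact equation.

```python
import numpy as np

def sigmoid_nir_weight(nir, eps=10.0, tau=0.15):
    """Per-pixel weight in [0, 1] from the NIR intensity.

    Assumed logistic form: pixels well inside the NIR-illuminated range
    (nir >> tau) get weights near 1 (trust the NIR textures), while pixels
    beyond the NIR range (nir << tau) get weights near 0 (fall back to the
    RGB information). `eps` controls the steepness of the transition and
    `tau` the threshold; both values here are illustrative.
    """
    nir = np.asarray(nir, dtype=np.float64)  # expected in [0, 1]
    return 1.0 / (1.0 + np.exp(-eps * (nir - tau)))

# Example use: blend an NIR-guided result and an RGB-only result per pixel,
# fused = w * nir_guided + (1 - w) * rgb_only, with w = sigmoid_nir_weight(nir).
```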
To achieve different smoothing degrees according to the NIR intensity, we perform sigmoid-based NIR weighting as follows: where i represents the NIR intensity, ε is a parameter that adjusts the steepness of the curve, and τ is a parameter that controls the threshold. B. WEIGHTED LEAST SQUARES WLS is an edge-aware smoothing filter based on a global optimization formulation [16], which consists of a data term and a prior term. The prior term is represented for smoothing by a weighted L2 norm. Given an input image f and a guidance image g, an output image u is obtained by minimizing the following WLS energy function: where N(p) represents the set of four adjacent pixels of p; λ controls the balance between the data and smoothing terms, and increasing λ results in a smoother output; and ω_{p,q}(g) is the weight calculated from the guidance image g and measures the similarity between pixels p and q. ω_{p,q}(g) is defined as follows: where σ is a range parameter. The energy function in Eq. (1) is transformed into a vector form as follows: where u and f denote S × 1 column vectors containing the values of u and f, respectively, and S is the total number of pixels; T denotes transposition; and A_g is an S × S Laplacian matrix defined as follows [4]: Based on this large sparse matrix, the energy function can be minimized by solving a linear system as follows: However, solving it by matrix inversion is of high computational complexity. By the fast global smoothing of WLS, the time complexity can reach O(N). First, we consider the one-dimensional (1D) case, assuming that the WLS energy function works on a 1D horizontal input signal f^h and a 1D guiding signal g^h along the x dimension (x = 0, . . . , W − 1). The energy function of the 1D signal is as follows: where N_h(x) represents the two neighbors of x. This energy function is minimized by the following linear equation: where I_h is an identity matrix with a size of W × W; u^h and f^h represent the vector notations of u^h and f^h, respectively; and A_h is a three-point Laplacian matrix with a size of W × W. The linear system in Eq. (7) is written as follows: where u^h_x and f^h_x are the x-th elements of u^h and f^h, respectively; a_x, b_x, and c_x represent the three nonzero elements in the x-th row of (I_h + λ_t A_h). At the boundaries, a_0 = 0 and c_{W−1} = 0. a_x, b_x and c_x are written as: The matrix (I_h + λ_t A_h) is tridiagonal, with nonzero elements only on the main diagonal and the two diagonals adjacent to it. By Gaussian elimination, it can be solved with O(N) complexity. In the Gaussian elimination, the intermediate values c̃_x and f̃^h_x are computed as follows: To process a two-dimensional (2D) image signal using the 1D solver, we perform 1D global smoothing operations along each dimension of the 2D signal. To prevent the streaking artifact which commonly appears in separable algorithms [3], we perform 2D smoothing by applying sequential 1D global smoothing over multiple iterations [16]. In this scheme, λ_t in each iteration is computed as follows: where T represents the total number of iterations along each dimension. In each iteration, we apply the 1D solver with parameter λ_t along the x dimension and the y dimension of the 2D image in sequence. C. FIRST GUIDANCE FOR NOISY RGB DENOISING We perform the first guidance for RGB denoising and obtain the base layer. For the first guidance, we utilize both the NIR image and the denoised luminance channel of the RGB image. NIR images contain image structure with clean textures and little noise. Thus, using the NIR image as guidance achieves good denoising performance on the noisy RGB image. 
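Since the equations referenced in Section B above are not reproduced in this text, the following Python sketch only illustrates how such a 1D WLS linear system can be solved in O(N) with a tridiagonal Gaussian-elimination sweep, and how sequential 1D passes give the 2D smoothing. The exponential form of the guidance weights and the per-iteration λ_t schedule are assumptions modeled on the fast global smoothing literature [16], not a transcription of the paper's equations, and the helper names are hypothetical.

```python
import numpy as np

def wls_1d(f, g, lam, sigma):
    """Solve (I + lam * A_g) u = f for one 1D signal (Thomas algorithm, O(W)).

    f, g: 1D arrays (input and guidance). The guidance weight between
    neighboring samples is assumed to be w = exp(-|g_x - g_{x+1}| / sigma).
    """
    W = f.shape[0]
    w = np.exp(-np.abs(np.diff(g)) / sigma)   # weights between x and x+1
    a = np.zeros(W)
    c = np.zeros(W)
    a[1:] = -lam * w                          # sub-diagonal, a_0 = 0
    c[:-1] = -lam * w                         # super-diagonal, c_{W-1} = 0
    b = 1.0 - a - c                           # main diagonal of (I + lam * A_g)
    # Forward sweep: compute the intermediate values (the c~ and f~ of the text).
    c_t = np.zeros(W)
    f_t = np.zeros(W)
    c_t[0] = c[0] / b[0]
    f_t[0] = f[0] / b[0]
    for x in range(1, W):
        m = b[x] - a[x] * c_t[x - 1]
        c_t[x] = c[x] / m
        f_t[x] = (f[x] - a[x] * f_t[x - 1]) / m
    # Backward substitution.
    u = np.zeros(W)
    u[-1] = f_t[-1]
    for x in range(W - 2, -1, -1):
        u[x] = f_t[x] - c_t[x] * u[x + 1]
    return u

def fgs_2d(f, g, lam, sigma, T=3):
    """Separable 2D smoothing: T iterations of 1D passes along x, then y.

    The per-iteration lambda schedule below follows the fast global smoothing
    paper and is stated here as an assumption.
    """
    u = f.astype(np.float64)
    g = g.astype(np.float64)
    for t in range(1, T + 1):
        lam_t = 1.5 * lam * 4 ** (T - t) / (4 ** T - 1)
        u = np.stack([wls_1d(u[r], g[r], lam_t, sigma) for r in range(u.shape[0])])
        u = np.stack([wls_1d(u[:, c], g[:, c], lam_t, sigma)
                      for c in range(u.shape[1])], axis=1)
    return u
```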
However, due to the contrast difference between the NIR image and the RGB luminance channel in Fig. 3, the direct adoption of the NIR image as guidance would ruin some edges of the RGB image when the contrast of the NIR image does not match that of the RGB luminance channel. Moreover, using the RGB image itself as the guidance (self-guidance) cannot exploit the clean NIR textures for denoising. Thus, we adopt both the NIR image and the denoised luminance channel as guidance to remove noise while keeping the image structure [32]. To remove salt-and-pepper noise in Y_I, we use a fast weighted median filter (FWMF) and get Y_n. Then, we perform element-wise addition of Y_n and the NIR image N_I to obtain the guidance for denoising the noisy RGB image C_I. The base layer C_B is obtained by minimizing the energy function in Eq. (14), where C_I represents the column vector containing the values of C_I; A_{G_n} denotes the Laplacian matrix defined by Y_n + N_I; and the range parameter σ is σ_2. Fig. 4 shows the RGB denoising results of the first guidance. Guided by C_I, i.e., self-guidance, the first guidance cannot achieve satisfactory denoising performance (see Fig. 4(b)). Guided by N_I, i.e., the NIR image, the first guidance makes the picture much blurrier (see Fig. 4(c)). When only the NIR image is used as guidance, the contrast difference between the RGB and NIR images, such as in the red boxes in Fig. 3, would cause serious blurring of edges. Guided by Y_n + N_I, the first guidance successfully removes noise while preserving the structure of the RGB image (see Fig. 4(d)). D. SECOND GUIDANCE FOR SCALE-AWARE DETAIL TRANSFER In low-light conditions, RGB images often lose details due to severe noise. Thus, we use the input NIR image to recover the lost details in the noisy RGB image. To transfer the multiscale details of the NIR image, we use the RGB image with the large-scale structure to guide the NIR detail transfer, called scale-aware detail transfer. We obtain the second guidance image by over-smoothing the base layer of the input RGB image so that it contains only the large-scale structure. Therefore, based on the second guidance, the details of multiple scales and contrasts in the NIR image are successfully transferred. As shown in Fig. 5(a), the black text on the tea box in the noisy RGB image is mostly destroyed and only its color can be identified. In contrast, it is very clear in the NIR image. However, the lost details in the noisy RGB image have various scales and textures, as shown in Fig. 5(a) (see the red box in the NIR image). In practice, details in the NIR image have various scales and textures from small to large, and thus it is inappropriate to apply a single uniform smoothing to the NIR image as the basis of the second guidance for detail transfer. Directly applying smoothing to the NIR image, i.e., self-guidance, causes loss of some textures and color distortion. If we utilize small-scale smoothing on the NIR image as the second guidance, large-scale textures would not be transferred (see Fig. 5(b)). On the contrary, if we use large-scale smoothing on the NIR image as the second guidance, most details are smoothed out so that some unwanted details are transferred (see Fig. 5(c)). The unwanted details would cause serious color distortion in the fusion results compared with the input color image. Thus, we use the over-smoothed RGB image, C_S, as the second guidance for detail transfer. We obtain C_S from the base layer C_B, i.e., the output of the first guidance. Guided by C_S, we successfully generate the detail layer N_D from the input NIR image N_I. 
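As a concrete illustration of the first guidance described in Section C above (the second guidance and the layer fusion are sketched after the next paragraph), the snippet below builds the joint guidance Y_n + N_I and smooths each RGB channel with it. It reuses the hypothetical fgs_2d helper from the previous sketch; the median filter stands in for the FWMF of [32], and the luminance weights and parameter values are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter  # stand-in for the FWMF of [32]

def first_guidance_base_layer(rgb, nir, lam=30.0, sigma2=0.05):
    """Denoise a noisy RGB image guided by (denoised luminance + NIR).

    rgb: HxWx3 float image in [0, 1]; nir: HxW float image in [0, 1].
    Returns the base layer C_B. Relies on the fgs_2d sketch defined earlier.
    """
    # Luminance of the noisy RGB image (BT.601 weights, an assumption here).
    y_i = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_n = median_filter(y_i, size=3)   # remove salt-and-pepper noise from Y_I
    guide = y_n + nir                  # joint guidance Y_n + N_I
    # Smooth each color channel under the joint guidance to obtain C_B.
    base = np.stack([fgs_2d(rgb[..., k], guide, lam, sigma2)
                     for k in range(3)], axis=-1)
    return base
```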
Since C_S maintains the large-scale structure of the RGB image, the second guidance acquires details at various scales from the NIR image that are lost in the noisy RGB image (see Fig. 5(d)). For the second guidance, we first obtain C_S from C_B using WLS, by minimizing the energy function in Eq. (15), where C_B represents the vector form of C_B; A_{C_B} denotes the Laplacian matrix defined by C_B; and the range parameter σ is σ_3. By minimizing the energy function in Eq. (15), we obtain C_S. Then, we obtain the smoothed NIR image N_S under the guidance of C_S by minimizing the following energy function: where N_I represents the vector form of N_I; A_{C_S} denotes the Laplacian matrix defined by C_S; and the range parameter σ is σ_4. We acquire N_D through pixel-wise subtraction of N_S from N_I. E. REMOVAL OF NIR HIGHLIGHTS In NIR images, highlights often appear in the human eyes. They usually happen at night because the pupils dilate to allow in more light. Much of the NIR light passes into the eye through the pupils, and it is then reflected back through them. The NIR camera records this reflected light. This is very similar to the red-eye effect in flash photography. Fig. 6 shows the NIR highlights that appear in the human eyes. To suppress them, we perform the second guidance using the image smoothed under the joint guidance of the NIR image and the Y channel of C_S, instead of the smoothed base layer C_S. For comparison, we provide the fusion results obtained with the two smoothed images. As shown in Fig. 7, the second guidance with the joint guidance of the NIR image and the Y channel of C_S successfully removes the NIR highlights of the human eyes in the fusion. F. LAYER FUSION We reconstruct the fusion image by combining the base layer C_B and the detail layer N_D. First, we convert C_B to the YUV color space as follows: where R, G, B represent the red, green and blue channels of C_B, respectively. We use the YUV color space to transfer the NIR details to the fusion result without color shift. Then, we combine Y_B and N_D through a pixel-wise addition to generate the fused luminance channel Y as follows: Finally, we convert Y, U_B, V_B back to the RGB color space to generate the fusion RGB image C_O as follows: III. EXPERIMENTAL RESULTS For the experiments, we use synthetic image pairs in Figs. 8-10 and real image pairs in Figs. 11-13. The synthetic image pairs are indoor scenes, while the real image pairs are outdoor scenes. We synthesize the low-light RGB images of the synthetic pairs (Figs. 8-10) by adding Gaussian noise and salt-and-pepper noise to the clean RGB images of a publicly available dataset [2]. Moreover, we capture the real image pairs (Figs. 11-13) using a JAI AD-080GE camera at night. This camera can capture RGB and NIR images simultaneously using the same optical path with two CCDs. In the real image pairs, the RGB images are heavily corrupted by noise with severe loss of details. Thus, they are much more degraded and challenging than the synthetic ones for the fusion. We perform our experiments on a PC with an Intel i5-6500 CPU (3.2 GHz) and 16 GB RAM using MATLAB 2015b and C++. A. PARAMETER SETTING In the proposed method, we use WLS four times for the alternating guidance, i.e., image denoising and detail transfer. Moreover, we utilize fast weighted median filtering (FWMF) [32] to remove salt-and-pepper noise. We set the number of WLS iterations to T = 3 to balance efficiency and quality. 
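Under the same caveats as the previous sketches, the second guidance and the layer fusion can be written as follows: over-smooth the base layer to get C_S, smooth N_I under the guidance of C_S to get N_S, take N_D = N_I - N_S, and add N_D to the luminance of C_B in YUV space. The BT.601 color-conversion coefficients and all parameter values are assumptions, since Eqs. (15)-(19) are not reproduced above, and fgs_2d is the hypothetical helper from the earlier WLS sketch.

```python
import numpy as np

# BT.601 RGB<->YUV matrices (assumed; the paper's exact Eqs. (17)-(19) may differ).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def second_guidance_and_fusion(base, nir, lam_s=200.0, sigma3=0.1,
                               lam_d=5.0, sigma4=0.05):
    """Scale-aware detail transfer and layer fusion (sketch).

    base: base layer C_B (HxWx3); nir: NIR image N_I (HxW), both in [0, 1].
    """
    y_b = base @ RGB2YUV[0]              # luminance Y_B of the base layer
    # Over-smooth the base-layer luminance so only large-scale structure remains (C_S).
    c_s = fgs_2d(y_b, y_b, lam_s, sigma3)
    # Smooth the NIR image under the guidance of C_S; the residual is the detail layer.
    n_s = fgs_2d(nir, c_s, lam_d, sigma4)
    n_d = nir - n_s                      # detail layer N_D
    # Fuse in YUV space: add the NIR details to the base luminance, keep U_B and V_B.
    yuv = base @ RGB2YUV.T
    yuv[..., 0] = y_b + n_d
    fused = np.clip(yuv @ YUV2RGB.T, 0.0, 1.0)
    return fused
```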
B. PERFORMANCE EVALUATION To verify the effectiveness and efficiency of the proposed method, we compare it with existing fusion methods. The proposed method preserves the color information of RGB images by preventing color shift in the fusion. For quantitative measurement, we evaluate performance on the fusion results in terms of blind image quality assessment (BIQA) [31]. We adopt a no-reference quality metric, BIQA, because neither the RGB nor the NIR images can serve as reference images. Table 1 shows the BIQA scores of the different methods on the test image pairs. Smaller scores represent better performance. Bold and underlined numbers indicate the best and second-best performance, respectively. The proposed method achieves the minimum BIQA scores and outperforms the others in average performance. Furthermore, the proposed method achieves higher computational efficiency than the others. The time complexity of our method reaches O(N) through the WLS-based fast global smoothing [16]. We measure the runtimes on our testing dataset (29 pairs of RGB and NIR images in total) with a resolution of 1920 × 1080. Table 2 lists the average runtime (in seconds per pair). Among them, the method of Yan et al. [30] achieves fusion performance comparable to ours; however, the proposed method is more than 30 times faster. This is because we use fast global smoothing based on WLS for the fusion of RGB and NIR images. IV. CONCLUSION We have proposed scale-aware multispectral fusion of RGB and NIR images based on alternating guidance. Low-light RGB images contain color with coarse image structure, while NIR images include clean textures without color. We have adopted scale-aware multispectral fusion to retain the multiscale structures of RGB and NIR images in the fusion results. Moreover, we have used alternating guidance based on WLS to maximize the multispectral advantage. We have utilized the joint guidance of the NIR image and the denoised RGB luminance in the first guidance to remove noise while keeping edges. We have used an over-smoothed RGB image as guidance in the second guidance to perform scale-aware detail transfer of the NIR image to the fusion result. Experimental results show that the proposed fusion method achieves good performance in noise removal and detail transfer with high computational efficiency.
5,485.2
2020-01-01T00:00:00.000
[ "Computer Science", "Environmental Science", "Engineering" ]
Almost Surely Exponential Convergence Analysis of Time Delayed Uncertain Cellular Neural Networks Driven by Liu Process via Lyapunov–Krasovskii Functional Approach As with probability theory, uncertainty theory has been developed in recent years to portray indeterminacy phenomena in various application scenarios. We are concerned, in this paper, with the convergence of state trajectories to equilibrium states (or fixed points) of time-delayed uncertain cellular neural networks driven by the Liu process. By applying the classical Banach fixed-point theorem, we prove, under certain conditions, that the delayed uncertain cellular neural networks concerned in this paper have unique equilibrium states (or fixed points). By carefully designing a certain Lyapunov–Krasovskii functional, we provide a convergence criterion for the state trajectories of our concerned uncertain cellular neural networks based on the developed Lyapunov–Krasovskii functional. We demonstrate under our proposed convergence criterion that the existing equilibrium states (or fixed points) are exponentially stable almost surely, or equivalently that the state trajectories converge exponentially to the equilibrium states (or fixed points) almost surely. We also provide an example to illustrate graphically and numerically that our theoretical results are all valid. There seem to be few results concerning the stability of equilibrium states (or fixed points) of neural networks driven by uncertain processes, and our study in this paper would provide some new research clues in this direction. The conservatism of the main criterion obtained in this paper is reduced by introducing quite general positive definite matrices in our designed Lyapunov–Krasovskii functional. Introduction By simulating the function of the biological brain, neural networks (NNs), a family of mathematics-based computational models, deliver outstanding performance in recognition and/or classification of patterns, signal processing, engineering optimization, associative memory and so forth. Therefore, various types of NNs have been created for specific purposes; see [1][2][3][4][5][6][7][8][9][10][11] and the vast references cited therein. For example, in the 1980s, Chua and Yang invented a class of NNs, called cellular NNs (CNNs); see [1,2] for the details. Different from the classical Hopfield NNs (HNNs), the (dynamical) behavior of the nodes (cells) of CNNs can only be influenced by their nearby neighboring nodes (cells); see [1,2,[12][13][14]. In the last three decades, CNNs have been extensively applied in diverse areas such as image processing, associative memory and parallel computing. As a consequence, CNNs have been extensively and intensively investigated among the scientific and engineering communities in recent years from the point of view of mathematics (dynamical system theory and/or mathematical control theory, say); see References [3,6,[10][11][12][13][14], for example. 
The convergence property of state trajectories to equilibrium states (or fixed points) of NNs is usually referenced as the stability of equilibrium states (or fixed points), and is termed occasionally as the stability of the NNs.Convergence is one of the key properties to guarantee the NNs be successfully applied in the engineering community.And therefore, CNNs have been widely investigated for the existence and the stability of equilibrium states (or fixed points); see the pioneering works [1,12,13] and the references mentioned therein.For example, Chua and Yang [1] briefly discussed the importance of studying the existence and stability of equilibrium states (or fixed points) of CNNs via using dynamical system theory, and obtained some interesting existence and convergence criteria. As is well known to all, probability theory has been widely used to model the indeterminacy phenomena brought about by randomness of the environment; see References [4,6,11,15].In recent years, Liu has developed, based on the understanding an individual's belief degree, uncertainty theory to portray the indeterminacy in individual's subjective cognition; see Reference [16] (see also [17,18] and the vast references therein).Since its birth date, uncertainty theory has attracted extensive attentions in both the research and application communities.Up to now, uncertainty theory nearly has all theoretical results parallel to probability theory; see References [16][17][18][19][20][21][22][23][24][25][26][27][28][29].In particular, the theory of uncertain processes and the uncertain calculus have been already developed well; see Reference [16] (see also [17,18,28], say).The well-established uncertain calculus has paved the area of studying behaviors of dynamical systems subject to uncertain perturbations (for example, the concerned dynamics is driven by the so-called canonical Liu process).In this paper, we shall consider a class of NNs whose dynamics is driven by the canonical Liu process. It would certainly cost time for nodes (cells) themselves to process information and in the procedure of transmission of information between every pair of nodes (cells), therefore, time delay exists unavoidably in real world NNs; see [3][4][5][6]9,10,12,14,15,23,30,31], among the vast existing references.Generally speaking, time delay in NNs would certainly bring about more challenges in proving the convergence of state trajectories of NNs. By reviewing the aforementioned references, we are inspired to study time-delayed uncertain CNNs (DUCNNs) for the convergence of their state trajectories.One of the main aims in this respect is to provide a criterion ensuring the existence of equilibrium states (or fixed points); and another main aim is to put forward a criterion to guarantee that state trajectories of the concerned DUCNNs converge to equilibrium states (or fixed points).In this direction, some interesting results have already been published in the literature.As alluded to previously, the existence and stability of equilibrium states (or fixed points) of deterministic CNNs (whose dynamics is not influenced by stochastic or uncertain environment) were discussed in References [1,12,13].(Almost) periodic state trajectories of NNs play similar roles as equilibrium states (or fixed points).Kong, Zhu et al. 
[5] studied a class of discontinuous bidirectional associative memory NNs (briefly referenced as BAMNNs, and can be viewed as a specific class of CNNs) with hybrid time-varying delays and D operator, and obtained a criterion ensuring the existence of almost periodic state trajectories of their concerned BAMNNs, and provided another criterion guaranteeing the stability of the almost periodic state trajectories.We shall discuss briefly (almost) periodic state trajectories of NNs in Section 5 again.When the dynamics of CNNs is influenced by some stochastic environment, the problem concerning the almost surely exponential stability almost stochastic was investigated in [14,15,[30][31][32][33][34] and some related references therein.For example, Cong [33] obtained an interesting robust almost sure stability result for continuous-time linear systems subject to exogenous disturbance. As with stochastic dynamical systems driven by Brownian motions (Lévy processes, semimartingales with/without jumps, and so on), the problem concerning the existence of state trajectories for uncertain dynamical systems driven by canonical processes has aroused extensive research interest in recent years.In this respect, a large number of meaningful results were published; see References [16][17][18]21,29,35], for example.Chen and Liu [18] considered general uncertain differential equations and obtained some important (unique) existence results.Shu and Li [35] studied a class of switched nonlinear uncertain systems and proved, via the Contraction Mapping Principle, the existence and uniqueness of state trajectories for their concerned systems.The results concerning the convergence property of state trajectories or the stability of equilibrium states (or fixed points) can be seen in References [21][22][23][24][25][26][27][28][29].Jia and Liu [29] studied, besides the (unique) solvability, the convergence property of age-dependent uncertain population equations subject to stochastic perturbations.Lu and Zhu [23] investigated a class of uncertain dynamical systems, and came up with several criteria ensuring the convergence of state trajectories of their concerned time delayed uncertain dynamical systems in the sense of moments.Jia and Li [21] obtained a criterion to ensure almost sure exponential stability of uncertain HNNs (UHNNs) under stochastic perturbations.For (deterministic, stochastic, uncertain) dynamical systems, the continuous dependence of state trajectories on initial states is also very important.In the context of uncertain dynamical systems, the continuous dependence of state trajectories on initial states is frequently referenced as the stability (of the concerned dynamical systems).Yao, Gao et al. [36] obtained some general stability (continuous dependence) theorems of uncertain differential equations.In References [19,20], some interesting stability (continuous dependence) results in the mean sense for uncertain differential equation were obtained.Some other interesting stability (continuous dependence) results can also be seen in References [22,28] and the references therein. Our principal contributions in this paper are as follows. 
• We investigate in-depth in this paper a class of DUCNNs driven by a canonical Liu process for the stability of their equilibrium states (or fixed points).As alluded to previously, the dynamics of NNs is inevitably influenced by a random (stochastic) environment.And analogously, humans' subjective cognition based on intuition or inspection (in terms of belief degree) may have some influence on the structure of NNs when the workers are designing the NNs, and therefore have a certain influence on the dynamics of the constructed NNs.Therefore, the research results concerning uncertain NNs may be more suitable and reliable for the decision makers.Uncertainty theory laid the foundation (the notion of belief degree on measurable spaces) of the mathematical theory that are capable of portraying quantitatively humans' subjective cognition.Therefore, it is of great importance to study uncertain NNs for the large time behavior of their state trajectories.By reviewing the existing references, we conclude that our research results in this paper seem to be new.For example, in comparison with Reference [23], in which stability criteria were provided in terms of moments, our aim in this paper is to provide a stability criterion in the sample sense for DUCNNs. For another example, in contrast with Reference [21], our concerned model DUCNNs include discrete time and finitely distributed time delay. • Via applying the classical Contraction Mapping Principle, we establish a criterion (see Theorem 1 in Section 3), and prove that this criterion can guarantee that our concerned model DUCNNs have unique equilibrium states (or fixed points). • We design meticulously a class of Lyapunov-Krasovskii functionals, which take into account the after-effect (or time delay) in our concerned model DUCNNs, analyze in detail our concerned model DUCNNs with these coined Lyapunov-Krasovskii functionals as the key tools, and establish a criterion to ensure that equilibrium states (or fixed points) of our concerned model DUCNNs be almost surely exponentially stable; see Theorem 2 in Section 3. We also come up with a specific example DUCNN to validate our theoretical results; see Section 4. Notational Conventions.We write R for the totality of real numbers, and R + for the interval [0, +∞) of non-negative real numbers.We write N for a positive integer throughout this paper.We denote by R N the N-dimensional Euclidean space, and by R N×N the algebra of N-th order real square matrices.Following the common convention, we designate by (R, L, dt) the usual Lebesgue measure space.We denote by (Γ, L , L, M) (or (Γ, L , L, dM) ) a complete filtered uncertainty space (whose definition would be explained in detail in Section 2; see Definition 1), in which, the filtration L = {L t ; t ∈ R + } is assumed to satisfy the usual conditions; that is, the σ-algebra L 0 contains all M-null sets in the σ-algebra L , and the filtration L is right-continuous in the sense that s>t "L almost surely" is abbreviated as M-a.s.Let X be an arbitrarily given uncertain variable on Γ, denote by E X (see Definition 5) the expected value of X, and by ξ (x) (see Definition 4) the uncertainty distribution ξ (x) of ξ. 
(Γ × R, L ⊗ L, dM × dt) denotes the product σ-subadditive measure space of (R, L, dt) and (Γ, L , M); {C(t); t ∈ R + }, an L-adapted uncertain process, denotes a one dimensional canonical Liu process defined on the uncertainty space (Γ, L , L, M).Let A be a positive definite matrix, we write λ min (A) and λ max (A), respectively, for the smallest and largest eigenvalues of A. Let A be a square matrix, which we denote by tr A (or tr(A), occasionally) the trace of A, and by sym(A) the symmetric matrix A + A with A designating the transpose of A here and hereafter.For any positive definite matrix A ∈ R N×N , we designate its Cholesky decomposition by A A with A an upper triangular matrix ( A is actually nonsingular and unique).For any pair of symmetric matrices A and B ∈ R N×N , if A − B is positive definite, then we write A B. In particular, if the matrix A ∈ R N×N is positive definite, then we write A 0. The rest of this paper is organized as follows.In Section 2, we recall some preliminaries necessary for our later presentation and formulate our concerned existence and convergence problems for DUCNNs.In Section 3, we state the principal results in this paper and present their proofs in detail.In Section 4, we justify, in both numeric and visual ways, the effectiveness of our theoretical results, via bringing forward a specific example DUCNN of which state trajectories converge to the unique equilibrium state (or fixed point).In Section 5, we conclude our discussion in this paper by presenting several remarks. Some Preliminaries Let (Γ, L ) be a measurable space with Γ a nonempty set and L a σ-algebra over Γ.We equip (Γ, L ) throughout this paper the filtration L = {L t ; t ∈ R + } satisfying the usual conditions.In other words, L is a collection of sub-σ-algebra of L and satisfies (i) The σ-algebra L 0 contains all M-null sets in the σ-algebra L ; and (ii) L is rightcontinuous in the sense of (1).Here and hereafter, we shall write (Γ, L , L) for the measurable space (Γ, L ) equipped with the filtration L = {L t ; t ∈ R + } satisfying the usual conditions.Definition 1.Given a measurable space (Γ, L , L), equipped with a filtration L = {L t ; t ∈ R + } that satisfies the usual conditions, and a given function M mapping L into [0, 1].The given function M is called a uncertainty measure on the filtered measurable space (Γ, L , L) provided that the following three axioms are fulfilled: The quadruple (Γ, L , L, M), obtained by equipping the filtered measurable (Γ, L , L) with the uncertainty measure M, is called a uncertainty space. From now on, we abide by the convention that (Γ, L , L, M) is a complete filtered uncertainty space in which the filtration L satisfies the usual conditions.Definition 2. The measurable function ξ : Γ → R is called a uncertain variable.In more detail, if for any Borel subset B of R, then the set belongs to the σ-algebra L , then ξ is said to be a uncertain variable. Definition 3. Let ξ be a uncertain variable on the uncertainty space (Γ, L , L, M).The following associated real-valued function is called the uncertainty distribution of ξ. Definition 4. 
Let ξ be a uncertain variable on the uncertainty space (Γ, L , L, M).If the uncertainty distribution ξ (x) of ξ is exactly with ξ 0 a given constant in R and σ a given positive constant, then we call ξ a normal uncertain variable with expected value ξ 0 and variance σ 2 .If ξ 0 = 0 and σ = 1, we call ξ a standard normal uncertain variable, and write its uncertainty distribution as It is obvious that the function Φ(x) given by ( 4) is strictly increasing in R. We can conclude therefore that the function Φ(x) has inverse function Φ −1 (x).Actually, by some routine calculations, we have immediately We shall call the function Φ −1 (x) (the inverse function of Φ(x) given by ( 4)), given as in (5), the inverse standard normal uncertainty distribution throughout this paper. Definition 5. Suppose that ξ is an uncertain variable on the uncertainty space (Γ, L , L, M).If at least one of the following two integrals: are finite, then we call the expected value of the uncertain variable ξ. Based on the definitions of E ξ and ξ (x), it is straightforward to verify that This identity facilitates the calculations of expected values of uncertain variables.To provide some intuitions for our later theoretical development in this paper, we would like to share the next two examples on the computations of expected values of uncertain variables. Example 1.Let ξ be a normal uncertain variable, with expected value ξ 0 and variance σ 2 , on the uncertainty space (Γ, L , L, M).The expected value E ξ of ξ is equal to ξ 0 .Indeed, based on (6), we deduce from (4) that Example 2. Let ξ be a normal uncertain variable, with expected value ξ 0 and variance σ 2 , on the uncertainty space (Γ, L , L, M).Following the steps to derive (7) in Example 1, we have where B( * , ) is Euler's Beta function. Definition 6. Let T be a nonempty subset of R + .The function X : Γ × T → R is said to be an uncertain process provided that it is progressively measurable. Definition 7. Let {C(t)} t∈R + be an uncertain process.The given process {C(t)} t∈R + is called a canonical Liu process provided that the following three assertions hold: • {C(t)} t∈R + has stationary and independent increments; • For every t ∈ R + and every s ∈ (0, +∞), the increment C(t + s) − C(s) is a normal uncertain variable with expected value 0 and variance t 2 . Definition 8. Let {C(t)} t∈R + be the aforementioned canonical Liu process.We denote Some remarks concerning the uncertain variable k, given by ( 8) in Definition 8, are in order here.It was proved by Yao, Gao et al. [36] that where the function Φ(x), given as in ( 4), is the uncertainty distribution of a standard normal uncertain variable.By the definition of limit superior of a sequence of sets, we have This, together with ( 9), implies immediately This, alongside with the definition of the uncertainty measure M (see Definition 7), implies This implies, in particular, that possibly there exists a M-null set in Γ such that, for every sample γ in the sample space Γ, it holds that k(γ) ∈ [0, +∞) (see (8) for the definition of k). 
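As a numerical companion to Definitions 4 and 5 and Example 1 above, the following Python sketch evaluates the normal uncertainty distribution and its inverse, and recovers the expected value by integrating E[ξ] = ∫₀^∞ (1 − Φ(x)) dx − ∫_{−∞}^0 Φ(x) dx. The closed-form expressions used are the standard ones from uncertainty theory and are stated here as assumptions, since the equations themselves are not reproduced in this text.

```python
import numpy as np
from scipy.integrate import quad

def normal_uncertainty_cdf(x, e=0.0, sigma=1.0):
    """Normal uncertainty distribution Phi(x) = (1 + exp(pi*(e - x)/(sqrt(3)*sigma)))^(-1).

    Standard form from uncertainty theory (an assumption here, since the
    paper's equation is not reproduced above).
    """
    return 1.0 / (1.0 + np.exp(np.pi * (e - x) / (np.sqrt(3.0) * sigma)))

def inverse_normal_uncertainty_cdf(alpha, e=0.0, sigma=1.0):
    """Inverse distribution Phi^(-1)(alpha) = e + (sigma*sqrt(3)/pi) * ln(alpha/(1-alpha))."""
    return e + sigma * np.sqrt(3.0) / np.pi * np.log(alpha / (1.0 - alpha))

def expected_value(e=2.0, sigma=1.5):
    """Numerically check Example 1: E[xi] equals e for a normal uncertain variable."""
    pos, _ = quad(lambda x: 1.0 - normal_uncertainty_cdf(x, e, sigma), 0.0, np.inf)
    neg, _ = quad(lambda x: normal_uncertainty_cdf(x, e, sigma), -np.inf, 0.0)
    return pos - neg

print(round(expected_value(), 6))  # approximately 2.0, as Example 1 states
```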
where ∆: and with respect to the canonical Liu process {C(s)} s∈R + , and moreover it holds that in which, µ(t)dt and σ(t)dC(t) are called the drift and diffusion terms, respectively.Lemma 1 (see References [16,17]).Let f (x, t) be a C 1 (by C 1 , we mean the totality of continuous functions whose first order derivative is continuous) function on R 2 , and X(t) a Liu process with µ(t) and σ(t) as its drift and diffusion coefficients, respectively, or equivalently Then, f (X(t), t) is a Liu process with µ(t) f x (X(t), t) + f t (X(t), t) and σ(t) f x (X(t), t) as its drift term and diffusion term coefficients, respectively, in other words, Lemma 2 (see Reference [18]).Let {C(t)} t∈R + be the aforementioned canonical Liu process, and k the uncertain variable given as in Definition 8 (see (8) for the details).For any two constants a and b ∈ R + with a < b, and any integrable L-adapted uncertain process x N (t)dC(t)) , where x(t) = (x 1 (t), x 2 (t), . . ., x N (t)) .By virtue of this definition, as with (10), we can define the following R N -valued uncertain process And as with the one-dimensional case, we can define R N -valued Liu process.And it is not difficult to imagine that we can establish a counterpart of Lemma 1 for the R N -valued Liu process {x(t)} t∈ [a,b] given by dx(t) = µ(t)dt + σ(t)dC(t). In comparison with this, it seems to be unapparent and therefore much more laborious to establish a counterpart of Lemma 2 for uncertain integrals of R N -valued uncertain processes with respect to one dimensional canonical Liu processes.Lemma 3. Let {C(t)} t∈R + be the aforementioned canonical Liu process, and k the uncertain variable given as in Definition 8 (see (8) for the details).For any two constants a and b ∈ R + with a < b, and any integrable L-adapted uncertain process {x(t Proof.Thanks to Lemma 2, we have By virtue of some careful calculations, we have further that in which, the ' ' follows from the well-known Cauchy-Schwarz inequality.On the other hand, we can deduce from (16) immediately that Plug ( 17) into (15), and conduct some easy calculations, to end the proof of Lemma 3. It is worth pointing that, as can be seen from ( 8) in Definition 8, the uncertain variable k depends merely on the aforementioned canonical Liu process {C(t)} t∈R + , in particular, k is independent of a, b and the uncertain process Let A ∈ R N×N be a positive definite matrix.Then, by the well-known theorem of Linear Algebra, A admits a unique Cholesky decomposition, that is, there exists a unique a upper triangular matrix with positive diagonal entries such that A = . Hereafter, we shall write A for the unique aforementioned upper triangular matrix in the Cholesky decomposition of any positive definite matrix A; It is straightforward to see that the matrix A is nonsingular.With the help of the notion of Cholesky decomposition, we can prepare the following lemma which is extremely useful in our later presentation.Lemma 4. Let A ∈ R N×N be a positive definite matrix, and B ∈ R N×N a positive semi-definite matrix.Then, the following two identities hold true: and Proof.It is obvious that the matrix A is positive definite (a fortiori, symmetric).By Jordan's decomposition theorem, there exists an orthogonal matrix Q such that Pre-and post-multiply both sides of this equation by ( A ) and A , respectively, to obtain where λ k 0 (k = 1, 2, . . ., N).This, together with the definition of A , implies Aided by ( 19), we can complete the proof of Lemma 4, via some routine calculations. 
Formulation of the Problems and Main Assumptions In this paper, we consider the model DUCNNs in which: x(t) is a state trajectory and can be re-written in component form as the matrix D = diag(d 1 , d 2 , . . ., d N ) in the leakage term −Dx(t) is positive definite; A k and B k are the connection weight coefficient matrices (real square matrices) in the transmission terms, k = 0, 1, 2; the activation functions F k and G k can be written in component form as and respectively, k = 0, 1, 2; the positive constants τ 1 , τ 2 , η 1 and η 2 are the time delay; as stated previously, {C(t)} t∈R + is a canonical Liu process on the uncertainty space (Γ, L , L, M); the initial state Definition 10.A L 0 -measurable N dimensional uncertain variable x * is said to be a equilibrium state (or fixed point) of the model DUCNNs (23), provided that and that Definition 11.Suppose that the uncertain variable x * : Γ → R N , required to be L 0 -measurable, is a equilibrium state (or fixed point) of the model DUCNNs (23).x * is said to be M-a.s.exponentially stable provided that there exists a positive definite matrix P ∈ R N×N , as well as two uncertain variables ı : Γ → [0, +∞) and  : Γ → (0, +∞) such that for any state trajectory x(t) of DUCNNs (23), it holds that (x(t) − x * ) P(x(t) − x * ) ıe −t , t ∈ R + , M-a.s. From the perspectives of the mathematical complexity and application, it seems to be more interesting to require the decaying exponent  in (28) (see Definition 11) be essentially bounded, or equivalently, to require  be an absolute positive constant. By some routine but seemingly tedious calculations, we can conclude that the decay estimate (28) in Definition 11 holds true if and only if the assertion holds true: either (i) there exists a positive time instant T * such that which is equivalent to or (ii) x(t) = x * for every t ∈ R + and lim sup Based on the analysis conducted in this paragraph, we conclude that proving that the equilibrium state (or fixed point) of DUCNNs ( 23) boils down to proving the inequality (31) holds true under the assumption that x(t) = x * for any t ∈ R + .The positive valued uncertain variable  in (28) and ( 31) is called a (exponential) decay rate. Assumption 2. The constants τ k and η k , independent of sample and time, (occurred in the model DUCNNs (23)) are all non-negative, k = 1, 2.Moreover, it holds always that Assumption 3. The activation function F k is Lipschitz continuous and satisfies the linear growth condition at infinity, k = 0, 1, 2. More precisely, it holds that where L F k is a diagonal matrix defined by with the diagonal entry l j F k , a non-negative constant, given by The activation function G k satisfies Carathéodory's condition, is Lipschitz continuous and satisfies linear growth condition at infinity, k = 0, 1, 2. In addition, it holds that where the diagonal matrix L G k (s), as with the matrix L F k , assumes the form with the function l j G k (s), defined in the interval R + , being Lebesgue integrable in R + , being essentially bounded in R + and defined explicitly by Main Results and the Proofs Theorem 1. Suppose that Assumptions 1 and 3 hold true.DUCNNs (23) admit unique equilibrium states (or fixed points), provided ς < 1 with the non-negative constant ς defined by Proof.Let us recall that R N , equipped with the mapping is indeed a Banach space, and the natural induced metric space is complete. 
Let us write, in this proof, U :− U(0) (it is worth reminding that U(0) ≡ U(t) for every t ∈ R + , M-a.s.).Since D is positive definite, it is non-singular.This implies, in particular, that for any x, there exists a unique Λ(x) such that Thus, we obtain a mapping Λ of R N into itself.For any x 1 and x 2 , we have We have therefore which implies further that Recalling the notation ς defined by (38), we conclude immediately By Banach's fixed-point theorem, this, together with the assumption that ς < 1, implies that Λ admits a unique fixed point x * .Recalling (39), we conclude that Λ(x * ) = x * implies that x * satisfies (26), and furthermore satisfies automatically (27).Since D is non-singular (positive definite, actually), the activation function F k is globally Lipschitz continuous (k = 0, 1, 2), and U is L 0 -measurable, x * is L 0 -measurable.By Definition 10, x * is indeed a a equilibrium state (or fixed point) of the model DUCNNs (23).Assume that x * 1 and x * 2 are equilibrium states (or fixed points) of DUCNNs (23).By the above analysis, x * 1 and x * 2 are fixed points of Λ.In view of (40), we have Noting that ς < 1, we conclude that x 2 − x 1 = 0, or equivalently, In conclusion, the proof of Theorem 1 is complete. Theorem 2. Suppose that Assumptions 1, 2 and 3 hold true.If the non-negative constant ς given by (38) is strictly less than 1, there exists a positive definite matrix Φ ∈ R N×N , four positive definite matrices Ψ 1 ∈ R N×N , Ψ 2 ∈ R N×N , Ψ 3 ∈ R N×N as well as Ω ∈ R N×N , and three positive constants δ 1 , δ 2 alongside with δ 3 such that and where the symmetric matrix (can be proved to be positive definite) is given by then DUCNNs (23) have unique equilibrium states (or fixed points), and the equilibrium states (or fixed points) are almost surely exponentially stable at a decay rate κ given by where the matrix Φ ∈ R (3N)×(3N) is given by Proof.In view of the assumption that ς < 1 (see (38)), by Theorem 1, we conclude that DUCNNs (23) have unique equilibrium states (or fixed points).It remains to prove the almost surely exponential stability part of Theorem 2. In the rest of this proof, we write x * for an equilibrium state (or fixed point) of DUCNNs (23).As remarked previously, to prove Theorem 2, it suffices to establish the inequality (31) for every state trajectory x(t) of DUCNNs (31) fulfilling x(t) = x * (t ∈ R + ). 
For the sake of convenience of our later presentation, we introduce and consider the new DUCNNs where the initial datum w 0 (t) is given by and Ǧk • (w(s), t) is given by Ǧk The stability of the equilibrium state (or fixed point) x * of DUCNNs ( 31) is equivalent to that of the equilibrium state (or fixed point) 0 of DUCNNs (46).The time delay in DUCNNs (31) (or equivalently, in DUCNNs ( 46)) brings about extreme difficulty in the stability analysis procedure.To overcome the aforementioned difficulty, our basic idea is to make full use of a certain Lyapunov-Krasovskii functional, associated to DUCNNs (46), to take in the after-effect in DUCNNs (31) (or equivalently, in DUCNNs (46)).Let us introduce the positive definite functional for DUCNNs (46) in which the positive parameter ε will be chosen appropriately (actually, the parameter ε will be specified deliberately to be equal to κ with the constant κ given 'implicitly' by ( 43)), and V(w, t) is a Lyapunov-Krasovskii functional candidate and can be expressed as where the functionals V 1 (t), V 2 (t) and V 3 (t) are given, respectively, by and By the chain rule of differentiation for Liu processes (see Lemma 1), we have Taking into account of (49), we have immediately Thanks to that the uncertain process (Liu process, more precisely) w(t) is a state trajectory of DUCNNs (46), again we apply Lemma 1 (the chain rule of differentiation for Liu processes) to the uncertain process V 1 (t), given explicitly by (50), to obtain With the help of the experience of deriving the differential identity (55), illuminated by the definition (51) of the uncertain process V 2 (t), we have, by Lemma 1, that Enlightened by the experience gathered in the procedure of deducing the differential dV 1 (t) and dV 2 (t) (see ( 55) and ( 56) for the details) of the uncertain processes V 1 (t) and V 2 (t), by Lemma 1, we can deduce from the definition (52) of the uncertain process V 3 (t) that Plug the differential identities (55), ( 56) and (57) into the differential identity (54), and perform some routine calculations, to eventually arrive at By the fundamental theorem of uncertain calculus, we can deduce from (53) that Substitute ( 58) into (59), and conduct some simple computations, to yield To continue, our idea is to treat (60) part by part.By Lemmas 2 and 3, we have where the uncertain variable k is given exactly by (8) in Definition 8, the positive constant δ 1 can be chosen as small as desired, and is therefore imagined to be very close to zero in the calculations here and hereafter.Mimic the steps in (61), to obtain in which Lemmas 2 and 3 played a key role, the positive constant δ 2 , as with the positive constant δ 1 in (61), can be picked to be very close to zero (when necessary), and the uncertain variable k is defined as in Definition 8 (see (8) for the details).By performing calculations analogous to those taken in the procedure of deriving (61) as well as (62) and apply Lemmas 2 and 3, we can show finally that where k, as in ( 61) and (62), is an uncertain variable whose definition lies in (8) of Definition 8, and the real constant δ 3 , required to be positive, can be chosen to be as close to zero as desired.Now let us plug (61), ( 62) and ( 63) into (60), to arrive at in which, the occurred diagonal matrix L G k (s) is defined as in (36) alongside with (37) in Assumption 3, k = 0, 1, 2. 
By recalling that the nontrivial entries of the diagonal matrix L G k (s) are Lebesgue integrable and essentially bounded, k = 0, 1, 2, and in view of we can conclude immediately that the terms (occurred in (64), ( 61), (62) as well as (63)) ds, and are well-defined as uncertain processes (Liu process, more precisely).Based on (33) along with (34) in Assumption 3, and by the famous Cauchy-Schwarz inequality, we have By Lemmas 5 (especially ( 22)) and 4 (( 18), in particular), we have directly Based on the idea used in (66), with the Cauchy-Schwarz inequality as the main tool, we apply Lemmas 5 and 4 (( 22) and ( 18), in particular), to obtain As with L F 0 in (65), the diagonal matrices L F 1 and L F 2 , occurred in (66) and (67), are defined as in (33) alongside with (34) in Assumption 3. Based on (32) in Assumption 3, the right hand sides of (65), ( 66) and ( 67) are all well-defined.With (48), (49) as well as (50) at our disposal, we perform some routine but seemingly tedious calculations, to arrive at This implies automatically Borrowing the idea 'to establish first the inequality (68) and based on this new established inequality (68), to prove our desired (69)', based on (48), ( 49) and (50), we have analogously Enlightened by the experience of deducing (68), based on (48), ( 49) and (51), we conduct some careful computations, to yield λ min (Ψ 3 ) for a.e.s ∈ R + , M-a.s. As can be seen already in (69), this implies directly By recalling (35) in Assumption 3, we conclude that the terms in (69), ( 70) and ( 71) are all well-defined as non-negative constants. Plug (69), ( 70) and ( 71) into (64), to directly obtain ln V ε (w, t) ln V ε (w, 0) which can be written compactly into in which the uncertain process w(s) is defined by and the symmetric block matrix Θ is defined as in (42).Since the block matrix Θ is positive definite, it follows immediately from Lemma 4 that where w(s) is given as in (73) and the symmetric block matrix Φ is given by (44).Since the matrices Ψ 1 , Ψ 2 and Ψ 3 are all positive definite, τ 2 τ 1 , τ 2 η 1 and τ 2 η 2 (see Assumption 2), it follows from the Cauchy-Schwarz inequality, Lemmas 5 and 4 that It is not difficult to find that Plug ( 74), ( 75) and ( 76) into (49), to obtain Fix ε = κ, and pass to the limit as t → +∞ to finally obtain that lim sup t→+∞ ln V ε (w, t) t −κ, M-a.s. For every state trajectory x(t) of DUCNNs (23), if x(t) = x * (recall that x * is an equilibrium state or a fixed point of DUCNNs ( 23)) for every t ∈ R + , then it holds that lim sup The proof of Theorem 2 is complete. Numerical Validation of the Theoretical Observations In Section 3, we provided a criterion ensuring the (unique) existence of the equilibrium state (or fixed point) of DUCNNs (23) and proved a criterion guaranteeing the convergence of state trajectories of our concerned NNs.In this section, we are focused in coming up with an example to illustrate that the aforementioned theoretical results are indeed effective. We consider a DUCNN having the form (23) with N = 3, x = (x 1 , x 2 , x 3 ) .We assume that the delay τ 1 , τ 2 , η 1 and η 2 are given by τ 1 = 1, τ 2 = 4, η 1 = 2 and η 2 = 3, respectively.We assume in our concerned example that the matrix in the leakage term is For the sake of convenience of our later computations, we assume in this example that the exogenous disturbance U(t) and V(t) are zero for all t ∈ R + . 
We assume in our concerned example DUCNN that the activation functions F 0 , F 1 , F 2 , G 0 , G 1 and G 2 are given explicitly. With the above given F 0 , F 1 , F 2 , G 0 , G 1 and G 2 , we can prove easily that our concerned example DUCNN admits x = (0, 0, 0) as its equilibrium state (or fixed point). Next, we check numerically and graphically that x = (0, 0, 0) is actually the unique equilibrium state (or fixed point) of our concerned example DUCNN and, moreover, that it is almost surely exponentially stable. Some routine but seemingly tedious calculations yield the quantities needed below. To reduce the computational burden, we choose to fix Φ and Ψ 1 , and determine Ψ 2 , Ψ 3 and Ω by solving the linear matrix inequalities (LMIs) (41) and (42) (with merely Ψ 2 , Ψ 3 and Ω as the decision variables) via MATLAB (R2015b). With the Ψ 2 , Ψ 3 and Ω so obtained, we again perform some numerical computations via MATLAB (R2015b) and obtain ς = 0.8261 and κ = 0.0947; see (38) and (43) for the detailed definitions of ς and κ, respectively. In view of ς < 1, we conclude by Theorem 1 that our concerned example DUCNN has a unique equilibrium state (or fixed point), namely x = (0, 0, 0). In addition, since the LMIs (41) and (42) (with merely Ψ 2 , Ψ 3 and Ω as the decision variables) are both feasible, it follows from Theorem 2 that x = (0, 0, 0) is almost surely exponentially stable; more precisely, this follows by combining (78) and (79). By viewing Figure 1, we find readily that the state trajectory x(t) of our concerned example DUCNN supplemented by the initial condition (80) tends to 0, the equilibrium state (or fixed point) of the concerned example DUCNN, as time t escapes to infinity. To summarize, all the observations in this paragraph validate our theoretical results.

Figure 1. The criterion (see Theorem 1) ensuring the unique existence of the equilibrium states (or fixed points) of DUCNNs (23), and the criterion (see Theorem 2) guaranteeing the almost surely exponential stability of the equilibrium states (or fixed points) of DUCNNs (23). x(t) = (x 1 (t), x 2 (t), x 3 (t)), t ∈ [0, 50], is the state trajectory of our concerned example DUCNN in this section (i.e., Section 4) fulfilling the initial condition (80).
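The feasibility step above relies on MATLAB's LMI machinery; an equivalent problem can be posed with open-source tools. The snippet below is only a sketch of the mechanics in CVXPY: the matrices and the block structure of the constraint are placeholders standing in for the paper's (41) and (42), which are not reproduced here.

```python
import numpy as np
import cvxpy as cp

N = 3
Z = np.zeros((N, N))
# Placeholder data standing in for the matrices fixed by hand in the example
# (these are NOT the paper's Phi and Psi_1).
Phi = np.eye(N)
Psi1 = np.eye(N)

# Decision variables, as in the paper: symmetric Psi_2, Psi_3 and Omega.
Psi2 = cp.Variable((N, N), symmetric=True)
Psi3 = cp.Variable((N, N), symmetric=True)
Omega = cp.Variable((N, N), symmetric=True)

eps = 1e-6  # small margin enforcing strict positive definiteness

# Stand-in block matrix playing the role of the symmetric matrix Theta in (42);
# the real Theta is assembled from D, the connection matrices, the delta_i, etc.
Theta = cp.bmat([
    [Psi2 + Phi + Psi1, Z, Z],
    [Z, Psi3, Z],
    [Z, Z, Omega],
])

constraints = [
    Psi2 >> eps * np.eye(N),
    Psi3 >> eps * np.eye(N),
    Omega >> eps * np.eye(N),
    Theta >> eps * np.eye(3 * N),
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print("status:", prob.status)
print("Psi2 =\n", Psi2.value)
```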
Based on some rudimentary analysis, we come up with a criterion (see (38)) under which our concerned model DUCNNs (23) were demonstrated, via a standard contraction mapping argument, to admit unique equilibrium states (or fixed points); see Theorem 1 and its proof for the details.By designing meticulously a class of Lyapunov-Krasovskii functionals, we brought forward, based on the analysis of our designed Lyapunov-Krasovskii functionals, a criterion (see (41) as well as (42)) to guarantee that the equilibrium states (or fixed points) of our concerned model DUCNNs (23) be almost surely exponentially stable; see Theorem 2 and its proof for the details.The aforementioned theoretical analysis and the corresponding results are collected in Section 3, and our theoretical results are 'demonstrated', numerically and graphically, to be actually effective. Dynamical systems governed by CNNs of nonlinear differential equations driven by uncertain processes can be chaotic, in the sense some of the time series generated by (i.e., state trajectories of) the dynamical systems are of great complexity (for example, they are flexible and/or exhibit high entropy values).By exploiting machine learning, we can establish model to predict accurately flexible time series based on NNs.NNs whose state trajectories converging to their equilibrium states (or fixed points) perform better than those having divergent state trajectories.And therefore, our convergence criterion (see Theorem 2) helps us to design accurate CNN models to predict complex time series. As pointed out in Section 1, to take sufficiently use of the after-effect in our concerned model DUCNNs (23), a class of Lyapunov-Krasovskii functionals, the main ingredients of this paper, were carefully created.Among the merits, general positive definite matrices are included in our designed Lyapunov-Krasovskii functionals to reduce the conservatism of our stability results.An interesting notion that is closely related to the main theme of our research in this paper is stabilization.By stabilization, we mean that extra control is added in uncertain NNs to guarantee that state trajectories of the controlled uncertain NNs converge to the equilibrium states (or fixed points).In the literature, various stabilization problems have been extensively studied for deterministic and stochastic NNs.Inspired by these observations, we shall work in the direction of designing suitable (impulsive control, intermittent control, quantized control, adaptive control, pinning control, sliding mode control, event-triggered control, and so forth) feedback control to stabilize DUCNNs. As pointed out above, and by inspecting DUCNNs (23), it is not difficult to find that the model DUCNNs considered in this paper are driven by merely one dimensional canonical Liu processes.By reviewing all our mathematical derivations throughout this paper, it is not difficult to conclude that our methods can be adapted to treat similar problems associated to UCNNs (with or without time delay) driven by multi-dimensional Liu processes.Recently, the multi-dimensional Liu processes situation was considered in References [21,29].Inspired by the results presented in these references, we plan to consider, in the near future, the problems concerning the existence and stability of equilibrium states (or fixed points) of DUCNNs driven by multi-dimensional Liu processes. 
As can be seen above, we are merely focused, in this paper, on the existence and stability of equilibrium states (or fixed points).For NNs, equilibrium states (or fixed points) are special cases of periodic trajectories, and equilibrium states (or fixed points) as well as periodic trajectories latter are special cases of almost periodic trajectories.As mentioned in Section 1, in Reference [5], the problem concerning the stability of almost periodic trajectories of a certain class of NNs was considered.From the presentation of this reference, we can find that it is actually important to generalize the notion of equilibrium states (or fixed points) to that of (almost) periodic trajectories.In quite a few situations, NNs have no equilibrium state (or fixed point), but have (almost) periodic trajectories.In the procedure of investigating large time behavior of state trajectories of NNs, (almost) periodic trajectories act in nearly the same role as equilibrium states (or fixed points).We are therefore tempted to study the existence and stability of (almost) periodic trajectories of DUCNNs. The notion of synchronizability is very close to that of stability.By synchronizability, we mean the phenomenon: Every difference trajectory of two NNs (the two NNs may have different structure) (i) tends to zero as time escapes to infinity or (ii) tends to zero as time approaches a finite instant (the so-called settling time), and remains to be zero constantly Definition 9 . Let a, b ∈ R + with a < b, {C(t)} t∈R + a canonical Liu process, and {X(t)} t∈[a,b] a given L-adapted uncertain process.If there exists an uncertain variable ξ such that then the uncertain process {X(t)} t∈[a,b] is said to be integrable, and the limit uncertain variable ξ is said to be the uncertain integral of {X(t)} t∈[a,b] in the interval [a, b] with respect to the canonical Liu process {C(t)} t∈R + .In this situation, we denote ξ = b a X(t)dC(t).Suppose that the uncertain process {X(s)} s∈[a,b] is uncertain integrable in [a, b] with respect to the canonical Liu process {C(s)} s∈R + .By virtue of Definition 9, we can conclude that for every t ∈ [a, b], the uncertain process {X(s)} s∈[a,b] is uncertain integrable in the compact subinterval [a, t] with respect to the canonical Liu process {C(s)} s∈R + , and that {Y(t)} t∈[a,b] is also an uncertain process with {Y(t)} t∈[a,b] given by then we call {X(t)} t∈[a,b] a Liu process, and write equivalently ) Let a, b ∈ R + with a < b and {C(t)} t∈R + a canonical Liu process.If for every k = 1, 2, . . ., N, the uncertain process {x k (t)} t∈[a,b] is uncertain integrable in the interval [a, b] with respect to the canonical Liu process {C(s)} s∈R + , then we write b a x(t)dC(t) = ( Lemma 5 . Let P ∈ R N×N be a positive definite matrix.(Jensen's inequality).Let a, b ∈ R be any two constants with a < b.For any square integrable vector-valued function [a, b] t → y(t) ∈ R N in Lebesgue's sense, it holds that dt (b − a) b a y (t)Py(t)dt, connection weight coefficient matrices A 0 , A 1 , A 2 , B 0 , B 1 and B 2 of the transmission terms are given, respectively, by
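For reference, Lemma 5 is the matrix form of Jensen's integral inequality; written out in display form for a positive definite matrix P ∈ R N×N and a square-integrable vector-valued function y on [a, b], the standard statement it invokes reads:

```latex
\left(\int_a^b y(t)\,\mathrm{d}t\right)^{\top} P \left(\int_a^b y(t)\,\mathrm{d}t\right)
\;\le\; (b-a)\int_a^b y^{\top}(t)\,P\,y(t)\,\mathrm{d}t .
```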
10,628.8
2023-10-26T00:00:00.000
[ "Computer Science", "Mathematics" ]
Effect of radio frequency magnetron sputtering power on structural and optical properties of Ti6Al4V thin films In this research, the effects of target sputtering power on the structural and optical properties of radio frequency (RF) sputtered Ti6Al4V films were investigated. Different RF sputtering powers were used to produce Ti6Al4V thin films of different thicknesses. X-ray diffraction showed that the Ti6Al4V films had polycrystalline cubic and hexagonal structures, and that film crystallinity and crystallite size increased with increasing sputtering power. Atomic force microscopy (AFM) revealed the nanometric character, homogeneity, and surface roughness of the films; a higher degree of roughness and a larger average grain size were observed with increasing RF power. The band gap and refractive index of the Ti6Al4V thin films varied with the RF sputtering power. Introduction Owing to their interesting chemical and mechanical properties, titanium alloys are widely used in many applications [1,2]. One particular alloy, Ti6Al4V, offers the best performance among the different grades of titanium. Ti6Al4V thin films have attracted large scientific and practical interest since their specific properties enable various applications as microstructure materials in surgical appliances [3,4]. Pure titanium is a monophasic, physiologically inert, and non-toxic metal. Ternary titanium alloys containing Al and V exhibit an α and β two-phase structure that has attractive mechanical properties: high wear resistance, hardness, tenacity, resistance to fatigue, and high corrosion resistance [5]. Besides having a low density, the alloy has excellent biocompatibility, permitting its use in the fabrication of medical implants [6]. Many deposition methods are used to prepare Ti6Al4V, including chemical vapor deposition (CVD) [7], electrochemical methods [8], selective laser melting [9], and physical vapor deposition (PVD) such as RF-DC sputtering [10]. In this work, Ti6Al4V thin films have been deposited on glass substrates by the radio frequency (RF) magnetron sputtering method under five different deposition power conditions. The crystallographic properties and surface morphology of the films were studied by X-ray diffraction (XRD) and atomic force microscopy (AFM). The optical properties of the Ti6Al4V thin films were measured using a UV (ultraviolet)-visible recording spectrometer. Experiment In our study, Ti6Al4V thin films were prepared using the RF magnetron sputtering technique (CRC600, USA-made). The thin films were deposited on glass substrates at different powers. The glass slides were sequentially cleaned in an ultrasonic bath with acetone and ethanol, then rinsed with distilled water and dried. The sputtering chamber was evacuated to a base pressure of 5 × 10⁻⁵ mbar using a diffusion and mechanical booster pump combination prior to deposition. Before the deposition of the Ti6Al4V films, a Ti6Al4V target (99.99% pure, 5 cm diameter) was pre-sputtered in a pure argon atmosphere for 10 min in order to remove oxide from the surface of the target. The Ti6Al4V films were deposited by the RF sputtering system in pure argon gas (99.9%) at a pressure of 5 × 10⁻² mbar. The X-ray diffraction measurements of the thin films were performed using a diffractometer (SHIMADZU-6000). AFM in contact mode (Angstrom Inc., AA3000) was used to analyze the morphological features.
The optical properties measurements for Ti6Al4V thin films were obtained by using the UV-Visible recording spectrometer (UV-2601 PC Shimadzu software 1700 1650). The thickness of the films has been calculated by using the Device FT-650 Film Thickness (FT) Probe System. X-ray diffraction (XRD) It can be expected that low sputtering power exhibits a low deposition rate which is due to less energetic argon over the target species and less ejected atoms from the target material. However, when the sputtering power was increased, the sputtering yield of the Ti6Al4V films markedly was increased. Figure 1 shows the XRD analysis for Ti6A14V thin films deposited on glass substrates with different powers (50 W, 75 W, 100 W, 125 W, and 150 W, respectively). The XRD pattern illustrates that the Ti6A14V films had a polycrystalline structure with peaks attributed to (110) diffractions for cubic structure or (002) diffractions for hexagonal structure and (102) diffractions for cubic structure, identified with standard peaks (card No. 96-900-8555 and 96-900-8518). Also, note that an increase in the RF power led to an increase in the peak intensity (i.e. an increase in films crystallinity). The mobility improvement of adatoms sputtered on the surface, which was required to form highly crystalline films. Because it is believed that high DC sputtering power in the magnetron sputtering system energizes inert argon gas to provide sufficient kinetic energy to adatoms, the surface diffusion of these adatoms was then expected to enhance with the momentum transfer to the nucleation and growth of the Ti6Al4V films. Increasing the RF power will make an increase in the grain size, as shown in Table 1. This may be due to the enhancement of crystallinity in the films. The films of crystalline was improved which led to a decrease in the number of grain boundaries. A significant line was broadened which is a characteristic of nanoparticles [1,11]. Atomic force microscope The surface morphology of Ti6Al4V films deposited on the glass substrate was studied by AFM to monitor the growth of nanostructure under the influence of different deposition powers. Figure 2 shows 2D and 3D AFM images of Ti6Al4V thin film deposited at different working sputtering powers. The images have light and dark regions. From the colors, brightness is used to specify the vertical profile of the thin film surface, where light regions represent the highest points and the dark points are the depressions. This figure confirms that the films are uniform, and the substrate surface is well covered with grains that are nearly uniformly distributed. From these images, it is observed that the surfaces of the films exhibit more degree of roughness with increasing the RF power. In addition, an increase in the average grain size leads to an increase in the root mean square roughness (RMS), as shown in Table 2. The Ti6Al4V film deposited at higher sputtering power exhibits profound large grains with orientations. These morphologies are due to the fact that sputtering power helps increase the surface mobility of adatoms, which is required to form continuous films. The surface diffusion of these adatoms is then enhanced by the higher sputtering power, which results in a provision of the momentum transfer to the growing surface. Optical measurements The optical properties of the Ti6Al4V thin films deposited by RF magnetron sputtering were analyzed by UV-visible spectroscopy in the wavelength range of 400 nm-1100 nm as shown in Fig. 3. 
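Returning to the XRD results above: the grain (crystallite) sizes reported in Table 1 are obtained from the diffraction peak broadening. The paper does not spell out the formula, but such estimates are commonly made with the Scherrer relation; the sketch below assumes that relation and uses made-up peak parameters purely for illustration (Cu Kα wavelength and shape factor K = 0.9 are assumptions).

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Estimate crystallite size (nm) from XRD line broadening via the Scherrer
    relation D = K*lambda / (beta*cos(theta)); beta is the FWHM in radians.
    Instrumental broadening is ignored in this sketch."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# Hypothetical (110)/(002) peak readings for films grown at different RF powers:
# power (W) -> (2theta in deg, FWHM in deg). Narrower peaks at higher power.
peaks = {50: (38.4, 0.60), 75: (38.4, 0.52), 100: (38.4, 0.45),
         125: (38.4, 0.40), 150: (38.4, 0.35)}

for power, (tt, fwhm) in peaks.items():
    print(f"{power:3d} W  ->  D ~ {scherrer_size(tt, fwhm):.1f} nm")
```

With these invented inputs the estimated size grows as the FWHM shrinks, which is the qualitative trend reported in Table 1 for increasing RF power.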
The transmission spectra of the Ti6Al4V thin films at different RF powers (50 W, 75 W, 100 W, 125 W, and 150 W) decrease with increasing RF power as the film thickness increases (244.89 nm, 435.26 nm, 598.98 nm, 866.23 nm, and 910.46 nm, respectively). The transmittance of all thin films deposited on glass increases with increasing wavelength (λ). The decrease in the transmittance spectra is caused by increased light-scattering losses as the grain size increases [12]. Absorbance spectra of the Ti6Al4V thin films at different RF powers are shown in Fig. 4. The increase in absorbance, due to the decrease in transmission, is associated with the change in thickness. The figure shows that the optical absorption in the UV region is high. The absorption of all thin films deposited on glass decreases with increasing wavelength. Figure 6 illustrates the variation of the extinction coefficient with wavelength in the range of 400 nm-1100 nm for Ti6Al4V films deposited on the glass substrate at different RF powers. The extinction coefficient depends mainly on the absorption coefficient, and we notice that the extinction coefficient decreases with increasing wavelength because of the decrease in the absorption coefficient [13]. The variation of the refractive index versus wavelength in the range 400 nm-1100 nm is shown in Fig. 9; it is clear from this figure that the refractive index in general increases with an increase in the thickness, due to the different deposited thicknesses. Fig. 9 Refractive index as a function of wavelength for Ti6Al4V thin films at different working RF powers. Table 3 shows the variation of the optical parameters at a wavelength of 500 nm for the Ti6Al4V films at different RF powers. The table illustrates that T, n, ε_r, ε_i, and E_g decrease with an increase in the RF power, while α and k increase. Conclusions We investigated the effects of target sputtering power on the structural and optical properties of Ti6Al4V films deposited on glass substrates by the RF magnetron sputtering technique. The results showed that the structure of the Ti6Al4V films deposited at all target sputtering powers was polycrystalline with dual phases, cubic and hexagonal. Film crystallinity and crystallite size increased with an increase in the RF power. AFM data indicated that film roughness was lower for samples deposited on the glass substrate at lower sputtering power. UV-visible measurements showed that the absorbance and the extinction coefficient (k) of the deposited thin films increased with an increase in the RF power, while other parameters such as the dielectric constants and refractive index decreased. The present work can serve as a guideline for obtaining good-quality Ti6Al4V thin films for biomedical applications.
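The optical analysis above derives the absorption coefficient, extinction coefficient and band gap from the measured transmittance and film thickness. The paper does not list its exact formulae, so the sketch below assumes the standard relations (α = −ln T / t, k = αλ/4π, and a direct-transition Tauc extrapolation) and runs them on a synthetic spectrum rather than on the measured data; only the 244.89 nm thickness of the lowest-power film is taken from the text, everything else is invented.

```python
import numpy as np

h, c = 4.135667e-15, 2.998e8        # Planck constant (eV*s), speed of light (m/s)

def absorption_from_transmittance(T, thickness_nm):
    """alpha = -ln(T)/t, the usual thin-film absorption coefficient (1/cm)."""
    return -np.log(T) / (thickness_nm * 1e-7)

def tauc_band_gap(E_eV, alpha, fit_window):
    """Direct-transition Tauc plot: fit (alpha*E)^2 = B*(E - Eg) over a chosen
    photon-energy window and return the extrapolated gap Eg = -intercept/slope."""
    y = (alpha * E_eV) ** 2
    sel = (E_eV > fit_window[0]) & (E_eV < fit_window[1])
    slope, intercept = np.polyfit(E_eV[sel], y[sel], 1)
    return -intercept / slope

# Synthetic spectrum (NOT measured data): a direct-gap absorber with Eg = 2.0 eV
# seen through a 244.89 nm thick film.
lam = np.linspace(400, 1100, 400)                    # wavelength (nm)
E = h * c / (lam * 1e-9)                             # photon energy (eV)
alpha_true = np.sqrt(1e10 * np.clip(E - 2.0, 0, None)) / E
T = np.exp(-alpha_true * 244.89e-7)                  # synthetic transmittance

alpha = absorption_from_transmittance(T, 244.89)
k = alpha * (lam * 1e-7) / (4 * np.pi)               # extinction coefficient
print("k at 500 nm: %.3f" % np.interp(500, lam, k))
print("estimated band gap: %.2f eV" % tauc_band_gap(E, alpha, (2.2, 3.0)))
```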
2,151.8
2017-03-14T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
The intestinal immunoendocrine axis: novel cross-talk between enteroendocrine cells and the immune system during infection and inflammatory disease The intestinal epithelium plays a crucial role in maintaining barrier function and immune homeostasis, a failure of which results in disease. This review focuses on the epithelial enteroendocrine cells and the crosstalk that exists with immune cells during inflammation. Introduction Dispersed throughout the intestinal epithelium are the enteroendocrine cells which, despite only comprising 1 % of the epithelium, collectively form the largest endocrine system in humans. Enteroendocrine cells respond to luminal nutrients by secreting >20 peptide hormones, including cholecystokinin (CCK), glucagon-like peptide 1 and 2 (GLP-1, GLP-2), glucose-dependent insulinotropic peptide (GIP), peptide YY (PYY), somatostatin and ghrelin; as well as bioactive amines such as serotonin . The historical dogma of differentiated enteroendocrine cellular sub-types secreting distinct hormone peptides has been superseded via the use of transgenic reporter mice, to the recognition that enteroendocrine cells can secrete a comprehensive array of peptide hormones altering based on their location within the gut [1]. These secreted peptide hormones act on distant organs such as pancreatic islets or locally on neighbouring cells such as enterocytes and vagal nerve endings. Enteroendocrine cells have classically been studied for their roles in enabling efficient postprandial assimilation of Enteroendocrine cells make up 1 % of the intestinal epithelium and, beyond their classical role of detecting luminal nutrients, they also detect and respond to (1) pathogens via the expression of TLRs and (2) the intestinal microbiome via the expression of specific receptors for the metabolites commensal bacteria produce. (3) In response to pathogens and microbial metabolites, enteroendocrine cells secrete peptide hormones and classical cytokines to the surrounding immune cell rich milieu. In addition to classical cytokine receptors, immune cells express a vast array of receptors for peptide hormones which have direct immunomodulatory effects. (4) Enteroendocrine-secreted hormone peptides also signal to vagal afferents triggering an anti-inflammatory vagal reflex. The resulting acetylcholine released from vagal efferents inhibits inflammatory responses from the surrounding immune cells. (5) Vagal afferent signalling also modulates classical feeding pathways resulting in altered fat deposits. This, in turn, modifies the levels of fat secreted adipokines, such as leptin, influencing immune cell function. (6) CD4 + T-cells directly influence the function of peptide hormones via increased secretion and hyperplasia of enteroendocrine cells via direct enteroendocrine and indirect stem cell signalling. nutrients via endocrine and paracrine induced alterations in gastrointestinal secretion, motility, pancreatic insulin release and satiety [2]. The key feature of enteroendocrine cells is to sense luminal nutrients and bring about the ideal absorption conditions for the particular nutrients detected. Classical examples of this fine tuning in nutrient detection are the enteroendocrine I-cells of the duodenum. In response to sensing long-chain fatty acids via activation of G proteincoupled receptors (GPRs), I-cells undergo Ca 2 + flux and membrane depolarization, culminating in secretion of the hormone CCK. 
CCK acts through the CCK receptor to cause gall bladder contraction and pancreatic enzyme secretion allowing efficient assimilation of the long-chain fatty acids detected [3]. Further to mediating digestion and metabolism, secreted hormones can also terminate meal size by vagally triggering satiation in feeding centres of the brain [2]. Therefore, clinical trials are focusing on the use of enteroendocrine peptide receptor agonists for the therapeutic treatment of obesity and metabolic diseases [2]. Of particular note is the use of GLP-1 receptor agonists for the treatment of diabetes, following the key observation that the incretin GLP-1 is anti-apoptotic for pancreatic β-cells [4]. Intriguingly, both murine and human studies have demonstrated alterations in enteroendocrine cell number and secretion during inflammation [5,6] and the vast immune system that serves the gut has been shown to express an array of enteroendocrine cell peptide receptors [7]. Furthermore, in vitro/in vivo studies have demonstrated that enteroendocrine cells possess functional toll-like receptors (TLRs) [8] and can directly respond to metabolites produced from commensal bacteria [9]. These observations indicate that enteroendocrine cells may have direct and critical roles in orchestrating intestinal immune responses to both pathogens and commensal bacteria ( Figure 1) and despite the ongoing therapeutic trials and use of enteroendocrine cell peptide receptor agonists, few studies have examined the potential importance of this immunoendocrine axis. This review will focus on alterations in enteroendocrine number and peptide secretion during inflammation and disease, highlighting in-depth mechanistic mouse model studies. Furthermore, the emerging potential of enteroendocrine cells acting as innate sensors of pathogens and perturbations in the intestinal microbiome will be discussed, identifying enteroendocrine cells as key orchestrators of intestinal immunity. Enteroendocrine cells and inflammatory bowel disease Given that reduced feeding, anorexia and altered intestinal motility often accompany intestinal inflammation, it is surprising that enteroendocrine cells, as key instigators of these changes during homoeostasis, have been neglected as possible orchestrators of these pathologies during disease. However, genome-wide association studies for Crohn's disease (CD) have identified a single nuclear polymorphism in the enteroendocrine associated homeodomain transcription factor paired-like homeobox 2B (Phox2B) [10]. This, coupled with the detection of autoantibodies for the ubiquitination factor E4A, specifically in enteroendocrine cells during Crohn's [11], has brought some focus upon the possible role of enteroendocrine cells in the pathogenesis of inflammatory bowel disease (IBD). Indeed, alterations in enteroendocrine cell numbers and secretion have been noted during IBD with increased PYY and 5-HT cells in lymphatic colitis, reduced colonic PYY cells in both CD and ulcerative colitis (UC), increases in GLP-1 and PYY cell number in terminal ileal CD and increases in GLP-2 in both CD and UC [5]. GLP-2 is a well-known epithelial growth factor with additional antiinflammatory properties, including aiding secretion of antibacterial peptides from Paneth cells [12] and is therefore the most simplistic example of enteroendocrine function influencing intestinal disease pathology. 
Indeed, GLP-2 has been shown to be protective in animal models of IBD [13] and long acting analogues of GLP-2 are currently on trial for the treatment of CD [14]. Despite this beneficial change in enteroendocrine function during IBD, the reduced appetite, anorexia and nausea associated with IBD is also likely to be driven by altered enteroendocrine function. Although, increases in GLP-1 in UC are not thought to be responsible for any changes in feeding patterns, due to unaltered gastric emptying; small bowel Crohn's-associated feeding decreases and nausea do correlate with increased PYY levels [5]. Furthermore, increased enteroendocrine numbers in long-standing UC have been suggested to act as promoters for the neoplasia associated with IBD [15], whereas recent data has demonstrated enteroendocrine cells as being key producers of the pro-inflammatory cytokine interleukin (IL)-17C during CD and UC, possibly playing a key role in disease progression [16]. Taken together, this suggests that enteroendocrine cells play an essential and varied role in the pathology of IBD and are strong candidates for therapeutic intervention. Enteroendocrine cells in mouse models of IBD Further mechanistic study of the pathways involved in enteroendocrine cell pathology during IBD has been made possible via the use of animal models of intestinal inflammation. Colitis can be induced chemically via the administration of dextran sulfate sodium (DSS) or 2,4,6trinitrobenzenesulfonic acid (TNBS) and both models are well associated with reduced feeding and weight loss. Interestingly, it has been reported that the feeding alterations seen in TNBS-induced colitis are probably due to alterations in enteroendocrine satiety as opposed to simple malaise, due to changes in gastric emptying [17]. Guinea pigs with TNBSinduced colitis have been shown to have hyperplasia of 5-HT and GLP-2 enteroendocrine cells. Through the use of Bromodeoxyuridine (BrdU) labelling of proliferative cells it has been demonstrated that, although a small capacity of 5-HT producing enterochromaffin cells retain proliferative capacity, the majority of hyperplasia is due to alterations in the stem cell niche [18]. As all epithelial cells arise from the same pluripotent stem cell [6], this is suggestive that alterations in enteroendocrine number occur at the stem cell level and due to the high turnover of intestinal epithelial cells can quickly influence the inflammatory state. These chemical-induced colitis models have been particularly useful in establishing the role of enteroendocrine cells in the pathogenesis of mouse models of disease. Further elucidations have been made utilizing infection-based models of intestinal inflammation, which have demonstrated a key role for enteroendocrine cells during infection, as well as offering translational lessons for IBD. Enteroendocrine cells as mediators of intestinal infection There are numerous reports of alterations in enteroendocrine cell number and secretion during a variety of infectious agents in a diverse range of animals. For example decreased somatostatin-positive cells are seen during schistomiasis in mice [19], whereas increases in CCK-positive cells occur in giardia-infected humans [6] and myxozoa-infected fish [20]. Many studies within the livestock industry have associated changes in enteroendocrine function with weight loss during intestinal infection. 
Infection with the intestinal parasites Ascaris suum in pigs and Trichostrongylus colubriformis in lambs results in hypophagia that is coupled with an increase in CCK [6], whereas increased 5-HT and CCK enteroendocrine cells significantly correlate with the cachexia seen in Enteromyxum scophthalmi infected Turbot [20]. Animal models have been particularly useful for dissecting the mechanisms responsible for the hyperplasia of enteroendocrine cells during inflammation with studies suggesting an immunedriven alteration. There is a close physical association of immune cells with enteroendocrine cells [21] and the 5-HT hyperplasia observed during Citrobacter rodentium infection is absent from severe combined immunodeficiency (SCID) mice [22] which lack adaptive immunity. We have carried out in-depth studies with the helminth Trichinella spiralis which causes a well-characterized transient enteritis and weight loss in mice, with parasite expulsion dependent on T-helper (Th) 2 cytokines and mastocytosis [23]. Utilizing a variety of transgenic mice, we have dissected the molecular mechanisms and actual function of the hypophagia seen during this parasitic infection. Intriguingly both CCK + cell hyperplasia [23] and CCK hypersecretion [24] are observed during T. spiralis infection and this correlates with the period of hypophagia seen during enteritis. Furthermore, the absence of CD4 + T-cells or the CCK signalling pathway results in a complete lack of hypophagia during enteritis [23,24], whereas the adoptive transfer of CD4 + T-cells to infected SCID mice restores the otherwise absent hypophagia [23]. Collectively, this indicates that the adaptive immune system hijacks classical feeding pathways to reduce food intake during infection. We further pursued the possible benefit of such a mechanism, beyond a simple innate device to prevent continued feeding at an infected site, by examining if reduced feeding was in any way beneficial to the host in coping with the parasitic burden. The period of immune-mediated CCK-induced hypophagia during infection resulted in a significant reduction in weight and visible reduction in visceral fat pads, a rich source of immune manipulating adipokines, most notably leptin [25]. We therefore postulated that the immune driven reductions in leptin, a strong Th1-inducing adipokine [25], could be beneficial in allowing the helminth expelling Th2 immune response to develop, allowing parasite expulsion. To investigate such an effect, we restored basal leptin levels throughout infection-induced hypophagia via the injection of recombinant leptin and saw a significant reduction in CD4 + T-cell Th2 cytokine production and mastocytosis, culminating in a significant reduction in parasite expulsion. Hence, we have identified immune-driven alterations in enteroendocrine feeding pathways as a novel mechanism in helminth expulsion [23]. Parallel studies have demonstrated CD4 + T-cell control of 5-HT producing enterochromaffin cells during a large intestinal helminth infection, which is thought to be driven at the enterochromaffin cell level via the expression of IL-13Rα1 expression [26]. Indeed, CD4 + Th2 cytokines are essential for these alterations, as a chronic dose of the same helminth, resulting in a Th1 immune response does not drive the enterochromaffin hyperplasia [27]. Although, the precise function of these changes has not been defined, 5-HT has many possible immune-modulating abilities [28] and could therefore again be an adaptively driven mechanism of parasite expulsion. 
The possibility IL-13 is responsible for the alterations in CCK seen during T. spiralis infection is less likely, given the ample natural killer (NK) cell-derived, IL-13-induced goblet cell hyperplasia observed in infected SCID mice, but lack of accompanying I-cell hyperplasia [23]. This uncoupling of enteroendocrine differentiation during inflammation holds promising therapeutic potential, given the diverse potential functional roles of individual enteroendocrine peptide hormones. Direct immunomodulatory roles of enteroendocrine cells Intriguingly, immune cells express a vast array of receptors for enteroendocrine secreted hormone peptides [7], suggesting an exciting potential of bi-directional signalling in the immunoendocrine axis. The production of the amine 5-HT from enterochromaffin endocrine cells is well established as a direct immunomodulatory factor, with the seven receptor isoforms expressed on mast cells, monocytes, dendritic cells (DCs), eosinophils, T-and B-cells and neutrophils [28]. Immune cells can also produce 5-HT independently of endocrine cells and the effect on immune cells is varied from cellular recruitment, activation, phagocytosis, antigen presentation and cytokine secretion [28]. Recent and ongoing studies are dissecting the potential for peptide hormones to influence immunity in a similar manner to the well-studied actions of 5-HT. Indeed, carboxypeptidase E-null mice, an enteroendocrine-associated exopeptidase essential for processing and packaging endocrine peptides, demonstrate increased IL-6 and chemokine (C-X-C motif) ligand (CXCL) 1 and exacerbated DSS-induced colitis [29]. CCK octapeptide has been shown to inhibit TLR9 stimulation of plasmacytoid DCs via tumour necrosis factor receptor-associated factor 6 signalling [30], whereas it can promote IL-12 production from DCs and reduce IL-6 and IL-23 production offering protection during collageninduced arthritis [31]. CCK octapeptide can also directly affect T-and B-cells and has been shown to promote a Th2 and regulatory T-cell phenotype in vitro [32], promote IL-2 production in the Jurkat T-cell line [33], stimulate B-cells to produce acetylcholine [34] and reduce B-cell lipopolysaccharide (LPS)-induced activation [35]. Strikingly, the huge atrophy in lymphoid tissue, including Peyer's patches, IgA production and total cellularity, seen during parenteral feeding can be rescued via the infusion of CCK alone [7], functionally rescuing immune responses to infectious bacteria [36]. Other promising immunomodulatory enteroendocrine hormone peptides include the orexigenic peptide ghrelin. Ghrelin increases T-cell proliferation via phosphoinositide 3-kinase, extracellular-signal-regulated kinases and protein kinase C [37] and has been shown to have an antiinflammatory effect in DSS-induced colitis [38]. Interestingly ghrelin actually has direct anti-parasitic [39] and anti-bacterial effects [40]. Similarly to 5-HT, T-cells themselves can produce ghrelin that is involved in anti-inflammatory responses in terms of reducing Th1 and Th17 responses [41]. Moreover, somatostatin is inhibitory to T-cell proliferation [42], GLP-1 also has anti-inflammatory effects on T-cells via decreased mitogen-activated protein kinase (MAPK) activation [43] and may modulate regulatory T-cells [44]. Indeed, intraepithelial lymphocytes respond to GLP-1 to influence the response to DSS-induced colitis [45]. 
Enteroendocrine cells are also direct sources of cytokines, being key producers of the pro-inflammatory cytokine IL-17C during CD and UC [16]. Enteroendocrine cells have been shown to express functional TLRs, in vitro and in vivo studies have shown that CCK-secreting cells express TLR 1, 2 and 4, with stimulation resulting in increased nuclear factor kappa light chain enhancer of activated B cells (NFκβ), MAPK signalling, as well as Ca flux culminating in tumour necrosis factor-α, transforming growth factor-β and macrophage inhibitory protein 2, as well as CCK release [8]. Indeed, enteroendocrine cells appear to be able to modulate their response between pathogenic and nutrient sensing, secreting CXCL1/3 and IL-32 in response to flagellin and LPS, but not to fatty acids in vitro [46]. Taken together, this indicates that enteroendocrine cells can act as front-line pathogen detectors releasing either classical cytokines or peptide hormones that can directly orchestrate adaptive and innate immunity. Vagally-mediated immunomodulatory roles of enteroendocrine released peptide hormones As well as being able to directly influence immune cells, enteroendocrine-secreted products can indirectly influence immune responses via the triggering of vagal afferents. This anti-inflammatory pathway was first examined during haemorrhagic shock. Nutritional stimulation of CCK via a highfat diet protected via a vagal reflex releasing acetylcholine which inhibited pro-inflammatory cytokine secretion from macrophages [47]. Others have demonstrated similar vagalmacrophage regulation in a variety of inflammatory settings [48], with the pathway also regulating other innate immune cells [48]. However, these results should be considered in parallel with other data demonstrating direct effects of CCK on macrophages, CCK inhibits inducible nitric oxide synthase (iNOS) production by macrophages [49], as well as studies demonstrating direct alteration of acetylcholine production by B-cells in response to CCK [34]. This anti-inflammatory role of the vagus nerve and, therefore, enteroendocrine peptide hormone stimulation is an exciting and growing area of research. Enteroendocrine cells as sensors of the intestinal microbiome Finally, the current explosion in studies into the intestinal microbiome has not failed in linking both enteroendocrine cells and vagal signalling to the billions of bacteria which inhabit our intestines. Historic studies have demonstrated germ-free mice have drastically altered enteroendocrine cell numbers [50]; whereas, recently it has been shown that enteroendocrine cells have specific receptors which can respond to bacterial products. In particular GLP-1-secreting cells have receptors for many microbiome metabolites such as GPR41 and 43 for short-chain fatty acids, read GPR 131 for bile acids and GPR119 for N-oleoylethanolamide and 2-oleoylglycerol and can secrete GLP-1, GLP-2 and PYY in response to stimulation [9]. It is therefore highly likely that our intestinal microbiome is able to influence not only obesity, but also our entire immune system via regulating the production of immunomodulatory enteroendocrine hormone peptides. Summary In summary, emerging data have begun to demonstrate a huge interaction between enteroendocrine cells and the immune system. Enteroendocrine cells can secrete classical cytokines as well as hormonal peptides that have the ability to directly and indirectly influence the entire breadth of our intestinal immune system. 
Due to the scarcity of these cells and lack of specific markers for purification, this immunoendocrine axis has until recently remained neglected. The transgenic reporter models now available have led to a huge potential to fully investigate this exciting cross-talk between our intestinal endocrine and immune systems, opening up new therapeutic targets and the possibility to utilize current drugs used for metabolic syndromes in wider immune inflammatory settings.
4,148.2
2015-08-01T00:00:00.000
[ "Biology" ]
Measurement of isolated photon production in deep inelastic ep scattering Isolated photon production in deep inelastic ep scattering has been measured with the ZEUS detector at HERA using an integrated luminosity of 320 pb. Measurements were made in the isolated-photon transverse-energy and pseudorapidity ranges 4 < E T < 15 GeV and −0.7 < η < 0.9 for exchanged photon virtualities, Q, in the range 10 < Q < 350 GeV and for invariant masses of the hadronic system WX > 5 GeV. Differential cross sections are presented for inclusive isolated photon production as functions of Q, x, E T and η . Leadinglogarithm parton-shower Monte Carlo simulations and perturbative QCD predictions give a reasonable description of the data over most of the kinematic range. The ZEUS Collaboration S. Chekanov, M. Derrick, S. Magill, B. Musgrave, D. Nicholass, J. Repond, R. Yoshida Argonne National Laboratory, Argonne, Illinois 60439-4815, USA n M.C.K. Mattingly Andrews University, Berrien Springs, Michigan 49104-0380, USA P. Antonioli, G. Bari, L. Bellagamba, D. Boscherini, A. Bruni, G. Bruni, F. Cindolo, M. Corradi, G. Iacobucci, A. Margotti, R. Nania, A. Polini INFN Bologna, Bologna, Italy e S. Antonelli, M. Basile, M. Bindi, L. Cifarelli, A. Contin, S. De Pasquale, G. Sartorelli, A. Zichichi University and INFN Bologna, Bologna, Italy e D. Bartsch, I. Brock, H. Hartmann, E. Hilger, H.-P. Jakob, M. Jüngst, A.E. Nuncio-Quiroz, E. Paul, U. Samson, V. Schönberg, R. Shehzadi, M. Wlasenko Physikalisches Institut der Universität Bonn, Bonn, Germany b J.D. Morris H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom m M. Kaur, P. Kaur, I. Singh Panjab University, Department of Physics, Chandigarh, India M. Capua, S. Fazio, A. Mastroberardino, M. Schioppa, G. Susinno, E. Tassi Calabria University, Physics Department and INFN, Cosenza, Italy e J.Y. Kim Chonnam National University, Kwangju, South Korea Z.A. Ibrahim, F. Mohamad Idris, B. Kamaluddin, W.A.T. Wan Abdullah Jabatan Fizik, Universiti Malaya, 50603 Kuala Lumpur, Malaysia r Y. Ning, Z. Ren, F. Sciulli Nevis Laboratories, Columbia University, Irvington on Hudson, New York 10027, USA o J. Chwastowski, A. Eskreys, J. Figiel, A. Galas, K. Olkiewicz, B. Pawlik, P. Stopa, L. Zawiejski The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland i L. Adamczyk, T. Bo ld, I. Grabowska-Bo ld, D. Kisielewska, J. Lukasik, M. Przybycień, L. Suszycki Faculty of Physics and Applied Computer Science, AGH-University of Science and Technology, Cracow, Poland p DIS Photon emission Quark radiation Isolated photon Isolated photon production in deep inelastic ep scattering has been measured with the ZEUS detector at HERA using an integrated luminosity of 320 pb −1 . Measurements were made in the isolated-photon transverse-energy and pseudorapidity ranges 4 < E γ T < 15 GeV and −0.7 < η γ < 0.9 for exchanged photon virtualities, Q 2 , in the range 10 < Q 2 < 350 GeV 2 and for invariant masses of the hadronic system W X > 5 GeV. Differential cross sections are presented for inclusive isolated photon production as functions of Q 2 , x, E γ T and η γ . Leading-logarithm parton-shower Monte Carlo simulations and perturbative QCD predictions give a reasonable description of the data over most of the kinematic range. 
Introduction In the study of high-energy collisions involving hadrons, events in which an isolated high-energy photon is observed provide a direct probe of the underlying partonic process, since the emission of these photons is unaffected by parton hadronisation. Isolated high-energy photon production has been studied in a number of fixed-target and hadron-collider experiments [1]. Previous ZEUS and H1 publications have also reported the production of isolated photons in photoproduction [2][3][4][5][6], in which the exchanged photon is quasi-real (Q 2 ≈ 0), and in deep inelastic scattering (DIS) [7,8], in which Q 2 is of order GeV 2 or larger. Isolated photons are produced in DIS at lowest order in QCD as shown in Fig. 1. Photons produced by radiation from an incoming or outgoing quark are called "prompt"; an additional class of high-energy photons comprises those radiated from the incoming or outgoing lepton. In this Letter, results are presented from a new inclusive measurement of isolated photon production in neutral current DIS. The data provide a test of perturbative QCD in a kinematic region with two hard scales: Q 2 , the exchanged photon virtuality, and E γ T , the transverse energy of the emitted photon. Compared to the previous ZEUS publication [7], the kinematic reach extends to lower values of Q 2 and to higher values of E γ T . The statistical precision is also improved. Leading-logarithm parton-shower Monte Carlo (MC) and perturbative QCD predictions are compared to the measurements. The cross sections for isolated photon production in DIS have been calculated in perturbative QCD; the calculations compared to the data are described in the Theory section below. Experimental set-up The measurements are based on a data sample corresponding to an integrated luminosity of 320 pb −1 , comprising 131 ± 3 pb −1 of e + p data and 189 ± 5 pb −1 of e − p data, 57 taken at a centre-of-mass energy √ s = 318 GeV. A detailed description of the ZEUS detector can be found elsewhere [13]. Charged particles were tracked in the central tracking detector (CTD) [14] and a silicon micro vertex detector (MVD) [15], which operated in a magnetic field of 1.43 T provided by a thin superconducting solenoid. The high-resolution uranium-scintillator calorimeter (CAL) [16] consisted of three parts: the forward (FCAL), the barrel (BCAL) and the rear (RCAL) calorimeters. The BCAL covers the pseudorapidity range −0.74 to 1.01 as seen from the nominal interaction point. The FCAL and RCAL extend the range to −3.5 to 4.0. The smallest subdivision of the CAL was called a cell.
The barrel electromagnetic calorimeter (BEMC) cells had a pointing geometry aimed at the nominal interaction point, with a cross section approximately 5 × 20 cm 2 , with the finer granularity in the Z -direction. 58 This fine granularity allows the use of shower-shape distributions to distinguish isolated photons from the products of neutral meson decays such as π 0 → γ γ . A three-level trigger system was used to select events online [17] by requiring well isolated electromagnetic deposits in the CAL. The luminosity was measured using the Bethe-Heitler reaction ep → eγ p by a luminosity detector which consisted of two independent systems: a lead-scintillator calorimeter [18] and a magnetic spectrometer [19]. Event selection and reconstruction Events were selected offline by requiring a scattered-electron candidate, identified using a neural network [20]. The candidates were required to have a polar angle in the range 139.8 • < θ e < 171.9 • in order to ensure that they were well measured in the RCAL. The impact point ( X, Y ) of the candidate on the surface of the RCAL was required to lie outside the region (±15 cm, ±15 cm) centred on (0, 0) to ensure well understood acceptance. The energy of the candidate, E e , was required to be larger than 10 GeV. The kinematic quantities Q 2 and x were reconstructed from the scattered electron by means of the relationships Q 2 = −(k − k ) 2 and is the four-momentum of the incoming (outgoing) lepton and P is the four-momentum of the incoming proton. The kinematic region 10 < Q 2 < 350 GeV 2 was selected. To reduce backgrounds from non-ep collisions, events were required to have a reconstructed vertex position, Z vtx , within the range |Z vtx | < 40 cm and to have 35 is the energy of the ith CAL cell, θ i is its polar angle and the sum runs over all cells [21]. At least one reconstructed track, well separated from the electron, was required, ensuring some hadronic activity which suppressed deeply virtual Compton scattering (DVCS) [22] to a negligible level. Photon candidates were identified as CAL energy-flow objects (EFOs) [23] for which at least 90% of the reconstructed energy was measured in the BEMC. EFOs with wider electromagnetic showers than are typical of a single photon were accepted to allow evaluation of backgrounds. The reconstructed transverse energy of the EFO, E γ T , was required to lie within the range 4 < E γ T < 15 GeV and the pseudorapidity, η γ , had to satisfy −0.7 < η γ < 0.9. The upper limit on the reconstructed transverse energy was selected to en-57 Hereafter 'electron' refers to both electrons and positrons unless otherwise specified. 58 The ZEUS coordinate system is a right-handed Cartesian system, with the Z axis pointing in the proton beam direction, referred to as the "forward direction", and the X axis pointing towards the centre of HERA. The coordinate origin is at the nominal interaction point. sure that the shower shapes from background and signal remained distinguishable. To reduce the background from photons and neutral mesons within jets, the EFO was required to be isolated from reconstructed tracks and hadronic activity. Isolation from tracks was initially is the distance to the nearest reconstructed track with momentum greater than 250 MeV in the η-φ plane, where φ is the azimuthal angle. Jet reconstruction was performed on all EFOs in the event, including the electron and photon candidates, using the k T cluster algorithm [24] in the longitudinally invariant inclusive mode [25] with R parameter set to 1.0. 
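The kinematic variables Q 2 and x above are reconstructed from the scattered electron. As an illustration of the electron method implied by Q 2 = −(k − k') 2 , the sketch below computes Q 2 , y and x from an assumed scattered-electron energy and polar angle (measured from the proton, i.e. Z, direction, as in the ZEUS convention quoted above). The HERA beam energies used reproduce √s = 318 GeV, but the electron-candidate values are invented, not ZEUS data.

```python
import math

def dis_kinematics_electron_method(E_e, E_p, E_scat, theta_scat_deg):
    """Electron-method reconstruction of DIS kinematics.
    E_e, E_p: lepton and proton beam energies (GeV); E_scat, theta_scat_deg:
    scattered-electron energy (GeV) and polar angle from the proton (Z) axis."""
    theta = math.radians(theta_scat_deg)
    Q2 = 2.0 * E_e * E_scat * (1.0 + math.cos(theta))     # virtuality (GeV^2)
    y = 1.0 - (E_scat / (2.0 * E_e)) * (1.0 - math.cos(theta))
    s = 4.0 * E_e * E_p                                   # squared cm energy (GeV^2)
    x = Q2 / (s * y)                                      # Bjorken x
    return Q2, x, y

# Illustrative values only: HERA beam energies and an invented RCAL electron.
Q2, x, y = dis_kinematics_electron_method(E_e=27.5, E_p=920.0,
                                           E_scat=12.0, theta_scat_deg=160.0)
print(f"Q2 = {Q2:.1f} GeV^2, x = {x:.2e}, y = {y:.3f}")
```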
Further isolation was imposed by requiring that the photon-candidate EFO possessed at least 90% of the total energy of the jet of which it formed a part. Each event was required to contain both an electron and a photon candidate. The invariant mass of the hadronic system, W X , is then defined by W 2 where p γ is the four-vector of the outgoing photon. A total of 15 699 events were selected; at this stage the sample was dominated by background events. The largest source of background was neutral current (NC) DIS events where a genuine electron candidate was found in the RCAL and neutral mesons, such as π 0 and η, decaying to photons, produced a photon-candidate EFO in the BEMC. Theory Two theoretical predictions are compared to the measurements presented in this Letter. In the approach of GGP [10], the contributions to the scattering cross section for ep → eγ X are calculated at order α 3 in the electromagnetic coupling. One of these contributions comes from the radiation of a photon from the quark line (called QQ photons; Fig. 1a, b) and a second from the radiation from the lepton line (called LL photons; Fig. 1c, d). In addition to QQ and LL photons, the interference term between photon emission from the lepton and quark lines, called LQ photons by GGP, is evaluated. For the kinematic region considered here, where the outgoing photon is well separated from both outgoing electron and quark, the interference term gives only a 3% effect on the cross section. This effect is further reduced to ≈ 1% when e + p and e − p data are combined as the LQ term changes sign when e − is replaced by e + . The QQ contribution includes both wide-angle photon emission and the leading q → qγ fragmentation term. GGP have chosen to use CTEQ6L leading-order parton distribution functions [26]. The factorisation scales used are Q for QQ events and max(Q , μ F ,min ) for LL events where μ F ,min = 1 GeV. Parton-tohadron corrections were not made, in view of technical issues in relating 2 → 2 and 2 → 3 topologies, following the advice of the GGP authors. We note that others have taken a different view [8]. A naïve study indicated the likely effect to be a reduction after hadronisation in predicted inclusive cross-sections of order 15%. In the approach of MRST [12,27], a partonic photon component of the proton, γ p , is introduced as a consequence of including QED corrections in the parton distribution functions. This leads to ep interactions taking place via QED Compton scattering, γ p e → γ e. A measurement of the isolated high-energy photon production cross section therefore provides a constraint on the photon density in the proton. The model includes the collinearly divergent LL contribution, which is enhanced relative to that of GGP by the DGLAP resummation due to the inclusion of QED Compton scattering. The QQ component is not included in the MRST model, in which the transverse momentum of the scattered electron is expected to balance approximately that of the isolated photon. In the analysis presented here, such a constraint was not imposed. The theoretical uncertainties in the models have been estimated by varying the factorisation scales by a factor two. Since the MRST cross sections include the LL contribution of GGP to a good approximation, but exclude the QQ, an improved prediction can be constructed by summing the MRST cross section and the QQ cross section from GGP [27,28]. The theory uncertainties are of the same order as those of the individual QQ and LL components. 
Monte Carlo event simulation

The MC program Pythia 6.416 [29] was used to simulate prompt-photon emission for the study of the event-reconstruction efficiency. In Pythia, this process is simulated as a DIS process with additional photon radiation from the quark line to account for QQ photons; radiation from the lepton is not simulated in this Pythia sample. The LL photons radiated at large angles from the incoming or outgoing electron were simulated using the generator Djangoh 6 [30], an interface to the MC program Heracles 4.6.6 [31]; higher-order QCD effects were included using the colour-dipole model of Ariadne 4.12 [32]. Hadronisation of the partonic final state was performed by Jetset 7.4 [33]. The small LQ contribution was neglected. The NC DIS background was simulated using Djangoh 6, within the same framework as the LL events. This provided a realistic spectrum of mesons and overlapping clusters with well modelled kinematic distributions and was hence preferred to the single-particle MC samples for backgrounds that were used in the previous ZEUS publication [7]. The MC samples described above contained only events in which W_X was larger than 5 GeV. Isolated photons can also be produced at values of W_X less than 5 GeV in 'elastic' and 'quasi-elastic' processes (ep → epγ) such as DVCS and Bethe-Heitler photon production. Such events were simulated using the GenDVCS [34] and Grape-Compton [35] generators; the contribution of these elastic processes was negligible after the selections described in Section 3. The generated MC events were passed through the ZEUS detector and trigger simulation programs based on Geant 3.21 [36], and were reconstructed and analysed by the same programs as the data. In addition to the full-event simulations, MC samples of single particles (photons and neutral mesons) were generated and used to study the MC description of electromagnetic showering in the BEMC.

Extraction of the photon signal

The event sample selected according to the criteria in Section 3 was dominated by background; the photon signal was thus extracted statistically, following the approach used in previous ZEUS analyses [2-4,7]. The photon signal was separated from the background using BEMC energy-cluster shapes. Two shape variables were considered:
• the energy-weighted width δZ = Σᵢ Eᵢ|Zᵢ − Z_cluster| / (w_cell Σᵢ Eᵢ), where Zᵢ is the Z position of the centre of the ith cell, Z_cluster is the centroid of the EFO cluster, w_cell is the width of the cell in the Z direction, Eᵢ is the energy recorded in the cell and the sum runs over all BEMC cells in the EFO;
• the ratio f_max of the highest energy deposited in any one BEMC cell in the EFO to the total EFO BEMC energy.
The distributions of δZ and f_max (after the requirement δZ < 0.8) in the data and the MC are shown in Fig. 2. The MC LL and QQ distributions have been corrected in each two-dimensional (η, E_T) bin using factors derived from the difference between simulated and real DIS electron data. The δZ distribution exhibits a double-peaked structure, with the first peak at ≈ 0.1 associated with the signal and a second peak at ≈ 0.5 dominated by the π⁰ → γγ background. The f_max distribution shows a single peak at ≈ 0.9, corresponding to the photon signal, and has a shoulder extending down to ≈ 0.5, which is dominated by the hadronic background. The number of isolated-photon events contributing to Fig. 2 and to each cross-section bin was determined by a χ² fit to the δZ distribution in the range 0 < δZ < 0.8, using the LL and QQ signal and background MC distributions described in Section 5.
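As an illustration of the two shower-shape variables just defined, the following sketch (our own, with assumed array-style inputs) evaluates δZ and f_max for a single EFO; a narrow, photon-like cluster yields a small δZ and an f_max close to 1, matching the peaks described above.

```python
# Illustrative sketch (variable names assumed): BEMC shower-shape variables for one EFO,
# given per-cell energies and cell-centre Z positions.
def delta_z(cell_E, cell_Z, z_cluster, w_cell):
    """delta_Z = sum_i E_i |Z_i - Z_cluster| / (w_cell * sum_i E_i)."""
    total = sum(cell_E)
    return sum(E * abs(Z - z_cluster) for E, Z in zip(cell_E, cell_Z)) / (w_cell * total)

def f_max(cell_E):
    """Ratio of the highest single-cell energy to the total EFO BEMC energy."""
    return max(cell_E) / sum(cell_E)

# A narrow (photon-like) cluster gives small delta_Z and f_max near 1:
E = [0.3, 7.5, 0.5]            # cell energies [GeV], toy values
Z = [-4.5, 0.5, 5.5]           # cell-centre Z positions [cm], toy values
print(delta_z(E, Z, z_cluster=0.5, w_cell=5.0), f_max(E))
```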
By treating the LL and QQ photons separately, one automatically takes into account their differing hadronic activity (resulting in significantly different acceptances) and their differing (η, E_T) distributions (resulting in different bin migrations due to the finite measuring precision). In performing the fit, the LL contribution was kept constant at its MC-predicted value and the other components were varied. Of the 15 699 events selected, 4164 ± 168 correspond to the extracted signal (LL and QQ). The scale factor resulting from the global fit for the QQ photons in Fig. 2 was 1.6; this factor was used for all the plots comparing MC to data. The fitted global scale factor for the hadronic background was 1.0. The signal fraction in the cross-section bins varied from 21% to 62%. In all cross-section bins, the χ²/n.d.f. of the fits was 2.1 or smaller. For a given observable Y, the production cross section was determined using
dσ/dY = N(γ_QQ) · A_QQ / (L · ΔY) + dσ^MC_LL/dY,
where N(γ_QQ) is the number of QQ photons extracted from the fit, ΔY is the bin width, L is the total integrated luminosity, dσ^MC_LL/dY is the predicted cross section for LL photons from Djangoh, and A_QQ is the acceptance correction for QQ photons. The value of A_QQ was calculated from MC as the ratio of the number of events generated to the number reconstructed in a given bin; it varied between 1.2 and 1.7 from bin to bin. The fits employed in this analysis were performed using δZ because of the larger difference in shape between signal and background for this quantity. Fits in terms of the f_max distributions were performed as a cross-check and gave similar results. As a further cross-check, an algorithm from the previous ZEUS publication [7], which selects wider electromagnetic clusters as photon candidates, was used. This proved to be more sensitive to the modelling of calorimeter backgrounds; in every case where a satisfactory fit was obtained, good agreement with the principal method was found. The corrections to the MC photon-signal energy-cluster shapes gave changes to the results within the statistical uncertainties and were not considered further [37].

Systematic uncertainties

The following sources of systematic uncertainty were investigated [37]:
• the energy scale of the electromagnetic calorimeter (EMC) was varied by its known scale uncertainty of ±2%, causing variations in the measured cross sections of typically less than ±2%;
• the dependence on the modelling of the hadronic background by Ariadne was investigated by varying the upper limit of the δZ fit in the range 0.6-1.0, giving variations that were typically ±5% but reached +12% and −14% in the most forward η^γ and highest-x bins, respectively.
The following sources of systematic uncertainty were also investigated and found to be negligible compared to the statistical uncertainty [37]:
• variation of the EMC energy-fraction cut for the photon-candidate EFO by ±5%;
• variation of the Z_vtx cut by ±5 cm;
• variation of the upper and lower cuts on δ = E − p_Z by ±3 GeV;
• variation of the ΔR cut used for track isolation by ±0.1;
• variation of the track-momentum cut used in calculating the track isolation by ±100 MeV;
• variation of the LL-signal component by ±5%.
All the uncertainties listed above were added in quadrature to give separate positive and negative systematic uncertainties in each bin. The uncertainty of 2.6% on the luminosity measurement was not included in the differential cross sections but was included in the integrated cross sections.
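The bin-wise extraction can be summarized in a few lines. The sketch below is our own rendering of the relation quoted above, with hypothetical numbers; it also shows the separate positive/negative quadrature sum used for the systematic uncertainties.

```python
# Minimal sketch, assuming the bin-wise relation quoted above:
#   dsigma/dY = N(QQ) * A_QQ / (L * dY) + dsigma_LL^MC/dY   (names are ours).
import math

def dsigma_dY(n_qq, a_qq, lumi, bin_width, dsigma_ll_mc):
    return n_qq * a_qq / (lumi * bin_width) + dsigma_ll_mc

def combine_systematics(deviations):
    """Add signed relative deviations in quadrature, separately for + and -."""
    up = math.sqrt(sum(d*d for d in deviations if d > 0))
    down = math.sqrt(sum(d*d for d in deviations if d < 0))
    return up, down

# Toy inputs: 250 fitted QQ photons, acceptance 1.4, 320 pb^-1, 2 GeV bin:
print(dsigma_dY(n_qq=250.0, a_qq=1.4, lumi=320.0, bin_width=2.0, dsigma_ll_mc=0.15))
print(combine_systematics([+0.02, -0.02, +0.05, -0.05]))
```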
Results

The cross section for inclusive isolated-photon production, ep → eγX, was measured in the kinematic region defined by: 10 < Q² < 350 GeV², W_X > 5 GeV, E_e > 10 GeV, 139.8° < θ_e < 171.8°, −0.7 < η^γ < 0.9 and 4 < E_T^γ < 15 GeV, with isolation such that at least 90% of the energy of the jet containing the photon belongs to the photon, where jets were formed according to the k_T algorithm with the R parameter set to 1.0. The measured integrated and differential cross sections are shown in Fig. 3 and given in Tables 1-4. It can be seen that the cross section decreases with increasing E_T^γ, η^γ, Q² and x. The predictions for the sum of the expected LL contribution from Djangoh and a factor of approximately 1.6 times the expected QQ contribution from Pythia agree well with the measurements, except for some differences at the lowest Q² (and correspondingly lowest x). The theoretical predictions described in Section 4 are compared to the measurements in Fig. 4. The predictions from GGP describe the shape of the E_T^γ and η^γ distributions well, but their central value typically lies 20% below the measured cross sections. The calculations fail to reproduce the shape in Q²; a similar observation was made by H1 [8]. As with the MC comparison, the measured cross section is larger than the theoretical prediction; this is also reflected in an excess of data over theory at low x. The MRST predictions mostly fall below the measured differential cross sections. However, they lie close to the measurements at large values of Q² and x, for backward η^γ and for high values of E_T^γ, where the LL cross section is expected to be a substantial fraction of the total. Also included in Fig. 4 is the sum of MRST and the QQ contribution of GGP; it gives an improved description of the data over much of the range of the kinematic variables. Fig. 5 shows the measured dσ/dη^γ compared to previous measurements from ZEUS [7] and H1 [8] for the restricted range Q² > 35 GeV² and 5 < E_T^γ < 10 GeV. The results are consistent, but the uncertainty in the present measurement is smaller; the symbols are mutually displaced for clarity.

Conclusions

Inclusive isolated-photon production has been measured in deep inelastic scattering with the ZEUS detector at HERA, using an integrated luminosity of 320 pb⁻¹. Differential cross sections as functions of several kinematic variables are presented for 10 < Q² < 350 GeV² and W_X > 5 GeV, in the pseudorapidity range −0.7 < η^γ < 0.9 and for photon transverse energies in the range 4 < E_T^γ < 15 GeV. The order-α³ predictions of Gehrmann-de Ridder et al. reproduce the shapes of the experimental results as functions of transverse energy and pseudorapidity, but are lower than the measurements at low Q² and low x. The predictions of Martin et al. mostly fall below the measured cross sections but are close in the kinematic regions where lepton emission is expected to be dominant. An improved description of the data is obtained by appropriately combining the two predictions, suggesting a need for further calculations to exploit the full potential of the measurements.
In this study, the effect of thermal radiation on micro-polar fluid flow over a wavy surface is studied. The optically thick limit approximation for the radiation flux is assumed. Prandtl's transposition theorem is used to stretch the ordinary coordinate system in certain directions, so that the wavy surface can be transferred into a calculable plane coordinate system. The governing equations of micro-polar fluid flow along a wavy surface are derived from the complete Navier-Stokes equations. A simple transformation is proposed to transform the governing equations into boundary-layer equations, so that they can be solved numerically by the cubic spline collocation method. A modified form of the entropy generation equation is derived: the effects of thermal radiation on the temperature, of the vortex viscosity parameter, and of the wavy surface on the velocity are all included in the modified entropy generation equation.

Introduction

In recent years, heat convection over wavy surfaces has been studied extensively because of its wide practical applications. Yao [1,2] proposed a simple transformation to study the natural-convection heat transfer of isothermal vertical wavy surfaces, e.g., sinusoidal surfaces. Using this transformation, the boundary-layer equations of natural convection in Newtonian fluids can be solved by a numerical finite-difference method. Results show that the local heat transfer rate varies periodically along the wavy surface, with a frequency equal to twice the frequency of the surface. Studies of natural convection and of mixed convection along a vertical wavy surface were provided by Moulic and Yao [3], who showed that the total mixed-convection heat flux along a wavy surface is smaller than that of a flat surface. Yao [4] showed that the enhanced total heat-transfer rate appears to depend on the ratio of the amplitude to the wavelength of the surface. Wang and Chen [5] studied the rates of heat transfer for flow through a sinusoidally curved converging-diverging channel; their results showed that flow through a periodic array of wavy-wall channels forms a highly complex pattern composed of a strong forward flow and an oppositely directed recirculating flow within each wave. Micro-polar fluids possess certain microscopic effects arising from the local structure and micro-motion of the fluid [6-9], and can be used to study the behavior of fluid media such as polymeric fluids, liquid crystals and animal blood. Wang and Chen [10-12] studied micro-polar fluids and heat convection, showing that the harmonic curves for the local skin-friction coefficient and the local Nusselt number have the same frequency as the wavy surface; moreover, the vortex viscosity parameter tends to decrease the heat transfer rate and to increase the skin-friction coefficient. Additionally, literature on non-Newtonian fluids, such as Lien et al. [13,14], Yang et al. [15], Chen et al. [16] and Wang [17], is available for different thermal conditions and field effects. The above studies show that the heat transfer of an irregular surface is a topic of fundamental importance, encountered in heat-transfer systems such as flat-plate solar collectors, condensers in refrigerators and fins used to enhance the rate of heat transfer in electronic-equipment cooling systems. Lien et al.
[18] studied heat transfer in a plate fin, showing that the modified local heat transfer coefficient is determined by a highly coupled interaction among the fin conduction, radiation and convective fluid flow, assuming the optically thick limit approximation for the radiation flux. The optically thick approximation for the radiation flux was derived by Rosseland [19]. Many studies have used this approximation: Novotny and Yang [20] discussed the role of the optically thick approximation in convection-radiation interaction situations, Chen and Ozisik [21] studied radiation with free convection in absorbing, emitting and scattering media, and Hossain and Takhar [22] studied radiation effects on mixed flow along a heated vertical flat plate with the Rosseland diffusion approximation. Elsayed [23] studied thermal buoyancy and thermal radiation effects on the development of a boundary-layer flow past a horizontal plate, with emphasis placed on energy conservation and the efficient use of energy. Analysis of thermodynamic irreversibility appears to be increasingly important [24-28]. From an engineering viewpoint, thermodynamic irreversibility is particularly applicable in the analysis of complex thermal systems [29]: it enables us to identify the irreversibility associated with various components and to avoid losses of available power [30]. This information can be employed to design thermal systems, guide efforts to reduce sources of irreversibility in engineering systems, estimate the cost of engineering systems and optimize complex systems [31-35]. In view of the above, wavy-surface analyses in the literature tend to focus on the first law of thermodynamics, while thermodynamic irreversibility is also a topic of importance. The present work therefore focuses on enhanced modeling of the entropy generation due to micro-polar fluid flow along a wavy surface, including the radiation effect.

Mathematical Formulation

Consider a two-dimensional semi-infinite wavy surface placed in a fluid field of ambient temperature T∞. The wavy surface and the outside free stream are parallel, and the surface temperature is maintained at T_w. The wavy surface can be described by a profile ȳ = S(x̄), where a is the amplitude of the wavy surface and L is the length of a surface wave. Figure 1 illustrates the physical model and coordinate system.
The governing equations of the micro-polar fluid under consideration are the continuity, momentum, angular-momentum and energy equations, Equations (2)-(5). The optically thick limit approximation for the radiation flux is q_r = −(4σ/3α_r) ∂T⁴/∂ȳ, where σ is the Stefan-Boltzmann constant, α_r is the extinction coefficient and q_r is the radiation heat flux; v₃ is the micro-rotation component and C_P is the specific heat of the fluid at constant pressure. Dimensionless variables and transformed variables are then introduced. By substituting the dimensionless and transformed variables into Equations (2)-(5), transforming the wavy surface into a flat surface by Prandtl's transposition theorem [2], and then letting Re → ∞ (the boundary-layer approximation), the transformed equations are obtained. The pressure term enters only at higher order in inverse powers of Re, which implies that the lowest-order pressure gradient along the x-direction can be determined from the inviscid flow solution, Equation (11). Eliminating ∂p/∂y between Equations (8) and (9), using Equation (11), and introducing rescaled x, y, u, v and N variables, Equations (7), (12), (10a) and (10b) can be transformed, respectively, into the boundary-layer system (14a)-(14d) with the corresponding boundary conditions. Next, the inviscid flow along the wavy surface is obtained. The inviscid solution is valid only for small values of the amplitude-wavelength ratio. The potential-flow solution U_w(x) for small values of α (≪ 1) was reported by Moulic et al. [36] as the integral expression (15a); removing the singular point in the integral by the residue theorem, Equation (15a) can be written as Equation (15b). For a two-dimensional Cartesian system, the local entropy generation rate is given by Equation (16a). According to Bejan [37,38], the dimensionless entropy generation equation is the entropy generation number, defined as the ratio of the volumetric entropy generation rate to the characteristic entropy generation rate. Thus, from Equation (16a), the entropy generation number can be expressed as N_S = N_H + N_F (Equation (16b)), where the characteristic entropy generation rate is k_f (ΔT/(T∞L))², N_H is the heat-transfer irreversibility, N_F is the fluid-friction irreversibility, and N_F/N_H is the irreversibility distribution ratio (φ) introduced by Bejan. The Bejan number can thus be written as [28,39] Be = N_H/(N_H + N_F) = 1/(1 + φ). The Bejan number has a value between 0 and 1: if it equals 0, irreversibility is dominated by fluid friction; if it equals 1, irreversibility is dominated by heat transfer; friction and heat-transfer irreversibilities are equal when the Bejan number equals 0.5.
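The irreversibility bookkeeping of this section reduces to simple arithmetic; the following sketch (illustrative values only) evaluates N_S, the distribution ratio φ and the Bejan number, reproducing the Be = 0.5 equipartition case mentioned above.

```python
# Sketch of the entropy-generation bookkeeping described above (toy values):
#   N_S = N_H + N_F, phi = N_F / N_H, Be = N_H / (N_H + N_F).
def entropy_numbers(n_h, n_f):
    n_s = n_h + n_f                 # entropy generation number
    phi = n_f / n_h                 # irreversibility distribution ratio
    be = n_h / (n_h + n_f)          # Bejan number, 0 <= Be <= 1
    return n_s, phi, be

print(entropy_numbers(0.6, 0.6))    # Be = 0.5: friction and heat transfer equal
print(entropy_numbers(0.9, 0.1))    # Be -> 1: heat-transfer dominated
```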
Numerical Method

An improved version of the cubic spline collocation method [40] is used to perform the numerical computations in this study [41]. Using cubic spline collocation, Equations (14a)-(14d) can be written in the discretized form (17a), where F, G and S are known coefficients evaluated at previous time steps, i and j are computational nodes, n refers to the time step, and Φ represents u and θ as shown in Table 1. When combined with the cubic spline relations, Equation (17a) can be written as the tridiagonal system (17b). The Thomas algorithm is then used to solve Equation (17b).

Results and Discussion

An accuracy test of grid fineness was made for grids of 100 × 20, 100 × 50, 100 × 150, 25 × 50, 50 × 50 and 100 × 50; the results are listed in Table 2. In this study we use a 100 × 50 nonuniform grid with smaller spacing of the mesh points in the neighborhood of the fluid-solid boundary in the y direction. In order to verify the accuracy of the solution, numerical results were obtained for Newtonian fluid flow over a flat plate (α = 0). The N_H and N_F results from Equation (16b) are found to be in good agreement with the results of previous studies [42,43]. Although Figure 2 does not include any variation, it is included in order to verify the accuracy of the solution, since it demonstrates good agreement with the results of Chen et al. [43]. In addition, Figure 2 shows that the Bejan number for a flat plate is 0.5, which means that heat transfer and flow friction contribute to entropy generation at the same level. In order to solve Equations (14a)-(14d), Simpson's rule is applied to calculate the value of U_w(x) in Equation (15b). As shown in Figure 3, while ascending along the wavy surface (slope S′ positive, from trough to crest) the flow accelerates and the pressure gradient is negative; while descending along the wavy surface (slope S′ negative, from crest to trough) the flow decelerates and the pressure gradient is positive. Thus U_w(x) varies periodically along the surface with a frequency equal to that of the wavy surface and increases with increasing amplitude-wavelength ratio. The pressure distribution has a frequency equal to that of the wavy surface. The maximum and minimum values of the pressure gradient occur at the points of inflection of the wavy surface, and the pressure gradient tends to increase as the amplitude-wavelength ratio increases. Referring to Figure 4, it is observed that the amplitude of N_H increases with increasing x. Increasing the value of N_R shifts the N_H curve upwards: increasing N_R leads to an increased temperature distribution and hence to enhanced entropy generation due to heat transfer in the flow field. Figure 5 shows that N_F is unaffected by increasing N_R. This is because the values of R and N_R are not coupled and the value of Br/Ω is constant. In other words, when N_R increases, the thermal-radiation heat flux is absorbed into the fluid; although the fluid temperature increases with increasing N_R, no phase change takes place in the micro-polar fluid, so the total fluid viscosity and the entropy generation due to fluid friction remain constant. Figure 5 also shows that the value of N_F in a micro-polar fluid tends to decrease rapidly near the leading edge as the fluid moves downstream, a behavior different from that of Newtonian fluids but the same as that observed by Wang and Chen [10] in their study of micro-polar fluids.
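Since the discretized system (17b) is tridiagonal, it can be solved in O(n) operations. The sketch below is a generic Thomas algorithm (our own minimal version, not the authors' code) applied to a small test system.

```python
# Thomas algorithm for a tridiagonal system with sub-, main- and super-diagonals
# a, b, c and right-hand side d (a[0] and c[-1] are unused placeholders).
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0], dp[0] = c[0]/b[0], d[0]/b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m if i < n-1 else 0.0
        dp[i] = (d[i] - a[i]*dp[i-1]) / m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):                # back substitution
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

# 4x4 test: main diagonal 2, off-diagonals -1; the exact solution is all ones.
print(thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1]))
```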
Figures 4 and 5 show that as the vortex viscosity parameter R increases, N_H decreases and N_F increases. This is because an increase in the vortex viscosity results in an increase in the total viscosity of the fluid flow, inducing increased N_F due to fluid friction and decreased N_H due to heat transfer. It is also observed that the amplitudes of N_H and N_F increase with increasing x. Figure 6 shows that when N_R equals zero, the crests of the N_S harmonics for R = 5 are larger than for R = 1, and that the crests of the N_S harmonics increase with increasing values of N_R. It is also seen that when N_R equals 1, the crests of the N_S harmonics for R = 5 are smaller than for R = 1. The reason for this behavior is that the N_H harmonics tend to increase while the N_F harmonics remain unchanged when N_R increases. The amplitude of N_S increases with increasing x, corresponding to the behavior of the harmonics of N_H and N_F; this reflects the fact that the entropy of the system tends to increase with increasing x. In addition, the harmonics of N_H, N_F and N_S show a periodic variation with a frequency equal to the frequency of the wavy surface, but their maximum and minimum values do not occur exactly at the crests and troughs of the wavy surface. Note that for a complete cycle (1 < x < 2), the maximum value occurs at x = 1.35, not at the crest of the wavy surface (x = 1.5), and the minimum value occurs at x = 1.85, not at the trough of the wavy surface (x = 2). From Figure 7, apart from the first crest of the Be harmonic, the crests of the Be harmonic are larger than 0.5 when R = 1 and N_R = 0 (no thermal radiation), which means that heat transfer dominates entropy generation at the crests of the wavy surface when R = 1. However, the crests of the Be harmonic are smaller than 0.5 when R = 5 and N_R = 0 (no thermal radiation), indicating that friction dominates entropy generation at the crests of the wavy surface when R = 5. In addition, the Be harmonics are larger than 0.5 when N_R exceeds 0.5, which means that as N_R increases, the thermal-radiation heat flux absorbed into the fluid enhances entropy generation due to heat transfer, so that heat transfer dominates entropy generation over the wavy surface. For a complete wave (0.5 < x < 1.5), as seen in Figure 8, the temperature of the fluid in a trough is higher than at a crest, and the temperature in both troughs and crests increases with increasing N_R. Figure 9 shows that increasing N_R has no effect on the velocity at either crests or troughs, although the fluid velocity in troughs is lower than at crests. This is because, when N_R increases, the thermal-radiation heat flux is absorbed into the fluid; although the fluid temperature increases, no phase change takes place in the micro-polar fluid, hence the total fluid viscosity remains constant. The y-axis distribution of the micro-rotation N is shown in Figure 10. The micro-rotation gradient of N along the y direction is positive when y > 2.
However, close to the wavy surface, the micro-rotation gradient of N along the y direction is negative. In other words, a negative micro-rotation gradient tends to retard the fluid velocity near the plate, while a positive micro-rotation gradient accelerates the fluid away from the wavy surface; hence the fluid velocity in a trough is lower than at a crest.

Conclusions

In this paper, the effect of thermal radiation on micro-polar fluid flow over a wavy surface has been studied. A modified form of the entropy generation equation has been derived; the effects of thermal radiation on the temperature, of the vortex viscosity parameter, and of the wavy surface on the velocity are all included in the modified entropy generation equation. The general results of this investigation are summarized as follows. When the value of R is held constant, N_R has no effect on the relation between R and N_F, because the values of N_R and R are not coupled and the value of Br/Ω is constant: with increasing N_R, the thermal-radiation heat flux is absorbed into the fluid, and although the fluid temperature increases, no phase change takes place in the micro-polar fluid, so the total fluid viscosity and the entropy generation due to fluid friction remain constant. Moreover, when N_R increases, the absorbed radiative heat flux enhances entropy generation due to heat transfer, so that heat transfer dominates entropy generation for the wavy surface. Finally, a negative micro-rotation gradient tends to retard the fluid velocity near the plate, while a positive micro-rotation gradient accelerates the fluid away from the wavy surface; hence the fluid velocity in the troughs is lower than at the crests.

Nomenclature (excerpt): U_w, velocity of the inviscid flow evaluated at the surface; u, v, dimensionless x and y velocity components; T, temperature; x, y, axial and transverse (Cartesian) coordinates.
Figure 1. Physical model and coordinate system.
Figure 3. Inviscid surface velocity distribution and axial distribution of the pressure gradient.
Table 1. The values of F_{i,j}, G_{i,j} and S_{i,j}.
Table 2. Local heat transfer rate and local skin-friction coefficient for different grids (uniform grid in the y direction); the forms of both quantities follow Wang and Chen [10].
Some Existence Results for Impulsive Nonlinear Fractional Differential Equations with Closed Boundary Conditions

Introduction

This paper considers the existence and uniqueness of solutions to the closed boundary value problem (BVP) for the impulsive fractional differential equation (1.1), consisting of the equation ᶜD^α x(t) = f(t, x(t)) for t ∈ J = [0, T], t ≠ t_k, 1 < α ≤ 2, supplemented by the impulse conditions Δx(t_k) = I_k(x(t_k)) and Δx′(t_k) = I*_k(x(t_k)) at the points t_k, and the closed boundary conditions
x(T) = a x(0) + b T x′(0), T x′(T) = c x(0) + d T x′(0),
where Δx(t_k) = x(t_k⁺) − x(t_k⁻), Δx′(t_k) has a similar meaning for x′(t), and a, b, c and d are real constants subject to a non-degeneracy condition Δ ≠ 0, Δ being a fixed combination of a, b, c and d. Boundary value problems for nonlinear fractional differential equations have been addressed by several researchers during the last decades, since fractional derivatives serve as an excellent tool for the description of hereditary properties of various materials and processes. Indeed, fractional differential equations arise in many engineering and scientific disciplines such as physics, chemistry, biology, electrochemistry, electromagnetics, control theory, economics, signal and image processing, aerodynamics and porous media (see [1-7]). For some recent developments see, for example, [8-14].
On the other hand, the theory of impulsive differential equations of integer order has become important in recent years and has found extensive applications in the mathematical modeling of phenomena and practical situations in both the physical and social sciences; one can see a noticeable development in the impulsive theory. For the general theory and applications of impulsive differential equations we refer the reader to [15-17]. Moreover, boundary value problems for impulsive fractional differential equations have been studied by some authors (see [18-20] and the references therein). However, to the best of our knowledge, there is no study considering closed boundary value problems for impulsive fractional differential equations. Here, we notice that the closed boundary conditions in (1.1) include quasi-periodic boundary conditions (b = c = 0) and interpolate between periodic (a = d = 1, b = c = 0) and antiperiodic (a = d = −1, b = c = 0) boundary conditions.

Motivated by the recent work mentioned above, in this study we investigate the existence and uniqueness of solutions to the closed boundary value problem for the impulsive fractional differential equation (1.1). In Section 2 we present some notation and preliminary results on fractional calculus and differential equations to be used in the following sections. In Section 3 we discuss existence and uniqueness results for solutions of BVP (1.1): the first is based on Banach's fixed point theorem, the second on the Burton-Kirk fixed point theorem. At the end, we give an illustrative example for our results.

Preliminaries

Let us set J₀ = [0, t₁] (and similarly for the subintervals J_k). The following definitions and lemmas were given in [4].

Definition 2.1. The fractional integral of order α ∈ ℝ₊ of a function h ∈ L¹(J, ℝ) is defined by
I^α₀ h(t) = (1/Γ(α)) ∫₀ᵗ (t − s)^{α−1} h(s) ds,
where Γ(·) is the Euler gamma function.

Definition 2.2. For a function h given on the interval J, the Caputo fractional derivative of order α > 0 is defined by
ᶜD^α₀ h(t) = (1/Γ(n − α)) ∫₀ᵗ (t − s)^{n−α−1} h⁽ⁿ⁾(s) ds, n = [α] + 1,
where the function h(t) has absolutely continuous derivatives up to order n − 1.

Lemma 2.3. Let α > 0; then the differential equation ᶜD^α h(t) = 0 has solutions h(t) = c₀ + c₁t + c₂t² + ⋯ + c_{n−1}t^{n−1}, with cᵢ ∈ ℝ, i = 0, 1, 2, …, n − 1, n = [α] + 1.

Lemma 2.4. Let α > 0; then
I^α ᶜD^α h(t) = h(t) + c₀ + c₁t + c₂t² + ⋯ + c_{n−1}t^{n−1}
for some cᵢ ∈ ℝ, i = 0, 1, 2, …, n − 1, n = [α] + 1.

The following theorem, known as the Burton-Kirk fixed point theorem, was proved in [21].

Theorem 2.5. Let X be a Banach space and A, D : X → X two operators satisfying: (a) A is a contraction, and (b) D is completely continuous. Then either (i) the operator equation x = A(x) + D(x) has a solution, or (ii) the set ε = {x ∈ X : x = λA(x/λ) + λD(x)} is unbounded for λ ∈ (0, 1).

Theorem 2.6 (see [22], Banach's fixed point theorem). Let S be a nonempty closed subset of a Banach space X; then any contraction mapping T of S into itself has a unique fixed point.

Next we prove the following lemma.

Lemma 2.7. Let 1 < α ≤ 2 and let h : J → ℝ be continuous. A function x(t) is a solution of the impulsive fractional integral equation (2.6), whose piecewise form involves the terms ∫ₜₖᵗ ((t − s)^{α−1}/Γ(α)) h(s) ds together with the functions Ω₁(t), Ω₂(t) and the impulse sums, if and only if x(t) is a solution of the fractional BVP (2.7).

Proof. Let x be a solution of (2.7). If t ∈ J₀, then Lemma 2.4 implies (2.10) for some d₀, d₁ ∈ ℝ, and thus we obtain (2.11); observing (2.12), we then have (2.14). If t ∈ J₂, then Lemma 2.4 implies (2.15) for some e₀, e₁ ∈ ℝ, and thus we obtain (2.16); similarly, observing (2.17), we have (2.19). By a similar process, if t ∈ J_k, then again from Lemma 2.4 we get (2.20). Now applying the boundary conditions x(T) = a x(0) + b T x′(0) and T x′(T) = c x(0) + d T x′(0), we arrive at (2.22). In view of the relations (2.8), when the values of −c₀ and −c₁ are substituted into (2.9) and (2.20), the integral equation (2.6) is obtained. Conversely, assume that x satisfies the impulsive fractional integral equation (2.6); then, by direct computation, it can be seen that the solution given by (2.6) satisfies (2.7). The proof is complete.
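For readers who wish to experiment with Definition 2.1 numerically, the following sketch (our own illustration) approximates the fractional integral with a simple left-endpoint rule; the two test cases reduce to known closed forms (I¹[s](t) = t²/2 and I^{1/2}[1](t) = 2√(t/π)).

```python
# Numerical illustration of Definition 2.1 (our own sketch):
#   I^alpha h(t) = (1/Gamma(alpha)) * integral_0^t (t-s)^(alpha-1) h(s) ds.
import math

def frac_integral(h, alpha, t, n=200000):
    ds = t / n
    acc = 0.0
    for i in range(n):
        s = i * ds
        acc += (t - s)**(alpha - 1.0) * h(s) * ds   # integrable singularity at s = t
    return acc / math.gamma(alpha)

print(frac_integral(lambda s: s, alpha=1.0, t=2.0))    # ~ 2.0  (= t^2/2)
print(frac_integral(lambda s: 1.0, alpha=0.5, t=1.0))  # ~ 2/sqrt(pi) ≈ 1.128
```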
Main Results

Definition 3.1. A function x ∈ PC¹(J, ℝ) whose α-derivative exists on J is said to be a solution of (1.1) if x satisfies the equation ᶜD^α x(t) = f(t, x(t)) on J together with the impulse relations and the conditions
x(T) = a x(0) + b T x′(0), T x′(T) = c x(0) + d T x′(0). (3.1)

For the sake of convenience, we define a number of auxiliary constants. The following are the main results of this paper.

Theorem 3.2. Assume that:
(A1) the function f : J × ℝ → ℝ is continuous and there exists a constant L₁ > 0 such that |f(t, u) − f(t, v)| ≤ L₁|u − v| for all t ∈ J and u, v ∈ ℝ;
(A2) I_k, I*_k : ℝ → ℝ are continuous, and there exist constants L₂ > 0 and L₃ > 0 bounding their Lipschitz behavior in the analogous way.
Moreover, suppose the smallness condition (3.3) holds. Then BVP (1.1) has a unique solution on J.

Proof. Define an operator F : PC(J, ℝ) → PC(J, ℝ) by (3.4). For x, y ∈ PC(J, ℝ) and each t ∈ J, we obtain the estimate (3.5), which bounds |Fx(t) − Fy(t)| by a constant, built from T, p, L₁, L₂ and L₃, times ‖x − y‖. Therefore, by (3.3), the operator F is a contraction mapping, and as a consequence of Banach's fixed point theorem, BVP (1.1) has a unique solution.

Our second result, Theorem 3.3, relies on the Burton-Kirk fixed point theorem and replaces the Lipschitz assumptions by the growth condition (A3).

Proof. We define the operators A, D : PC(J, ℝ) → PC(J, ℝ) by (3.6). It is obvious that A is a contraction mapping under the corresponding condition. To check that D is completely continuous, we proceed in the following steps.

Step 1 (D is continuous). Let {x_n} be a sequence such that x_n → x in PC(J, ℝ). Then for t ∈ J we have the estimate (3.8); since f is a continuous function, |Dx_n(t) − Dx(t)| → 0 as n → ∞.

Step 2 (D maps bounded sets into bounded sets in PC(J, ℝ)). It is enough to show that for any r > 0 there exists a positive constant l such that, for each x ∈ B_r = {x ∈ PC(J, ℝ) : ‖x‖ ≤ r}, we have ‖Dx‖ ≤ l. By (A3), this follows for each t ∈ J from (3.10).

Step 3 (D maps bounded sets into equicontinuous sets of PC¹(J, ℝ)). Let τ₁, τ₂ ∈ J_k, 0 ≤ k ≤ p, with τ₁ < τ₂, let B_r be a bounded set of PC¹(J, ℝ) as in Step 2, and let x ∈ B_r; then the bound (3.12) implies that D is equicontinuous on all the subintervals J_k, k = 0, 1, 2, …, p. Therefore, by the Arzelà-Ascoli theorem, the operator D : PC¹(J, ℝ) → PC¹(J, ℝ) is completely continuous.

To conclude the existence of a fixed point of the operator A + D, it remains to show that the set ε = {x ∈ X : x = λA(x/λ) + λD(x) for some λ ∈ (0, 1)} is bounded, which follows from the estimate (3.15) involving T, p and M₃. Consequently, we conclude the result of our theorem from the Burton-Kirk fixed point theorem.

An Example

Consider the impulsive fractional boundary value problem (4.1) with the data specified in (4.2). Since the assumptions of Theorem 3.2 are satisfied, the closed boundary value problem (4.1) has a unique solution on [0, 1]. Moreover, it is easy to check the conclusion of Theorem 3.3.
Gluino meets flavored naturalness

We study constraints from LHC run I on squark and gluino masses in the presence of squark flavor violation. Inspired by the concept of 'flavored naturalness', we focus on the impact of a non-zero stop-scharm mixing and mass splitting in the right-handed sector. To this end, we recast four searches of the ATLAS and CMS collaborations, dedicated either to third-generation squarks, to the gluino and squarks of the first two generations, or to charm squarks. In the absence of extra structure, the mass of the gluino provides an additional source of fine-tuning and is therefore important to consider within models of flavored naturalness that allow for relatively light squark states. When the searches are combined, the resulting constraints in the plane of the lightest squark and gluino masses are rather stable with respect to the presence of flavor violation, and do not allow for gluino masses of less than 1.2 TeV and squarks lighter than about 550 GeV. While these constraints are stringent, interesting models with sizable stop-scharm mixing and a relatively light squark state are still viable and could be observed in the near future.

Introduction

While the Large Hadron Collider (LHC) has just begun its second period of data taking, its first run has been an experimental success, with many new measurements in an energy regime unexplored beforehand. In particular, the search for new phenomena has been given a boost by the discovery of a new particle, the celebrated Higgs boson [1,2]. Theoretically, however, we are still in the dark, as most searches performed at the LHC seem to be consistent with the Standard Model (SM) predictions. This includes the measurements of the Higgs couplings [3], the searches for new physics at the energy frontier with the ATLAS and CMS experiments, and those at the luminosity frontier with the LHCb experiment. In the absence of new-physics signals, the naturalness argument that motivates the possible observation of new dynamics at the TeV scale seems slightly less appealing as a guiding principle (see ref. [8] for a general status review and ref. [9] for a focus on supersymmetry). One of the main tasks of the next high-energy LHC runs at 13 TeV and 14 TeV will hence be to shed more light on the electroweak symmetry breaking mechanism and to estimate to what extent the Higgs-boson mass is fine-tuned. One of the robust features of all natural extensions of the SM is the presence of top partners. These act to screen the quadratic sensitivity of the Higgs-boson mass to the ultraviolet (UV) scales due mostly to the large top Yukawa coupling. Naively, one might expect flavor physics and naturalness to be two decoupled concepts. However, even within a minimal top-partner sector, the definition of the flavor structure of the model can be non-trivial. The mass eigenstates of the theory could be non-pure top partners and still yield a sufficient cancellation of the UV-sensitive quantum contributions to the Higgs-boson mass. In this way, even a model exhibiting a single top partner might incorporate large flavor- and CP-violating effects. This possibility, however, is typically ignored, owing to prejudices and a possibly too simplistic interpretation of the bounds stemming from low-energy flavor-changing neutral-current processes. Indeed, most studies on naturalness have assumed either flavor universality among the partners or an approximate U(2) symmetry which acts on the partners of the first two generations.
Nonetheless, a thorough analysis of the constraints arising from D-D̄ and K-K̄ mixing has shown that degeneracy of the partners is not required in models of down alignment [10], and frameworks in which new-physics couplings are non-diagonal in flavor space have been considered both in the context of supersymmetry [11,12] and of Higgs compositeness [13,14]. Taking supersymmetry as an illustrative example, the non-degeneracy of the partners is even more appealing as the direct experimental bounds on second-generation squarks are rather weak, their masses being constrained only to be larger than 500 GeV. This is a consequence of the underlying ingredients of all supersymmetry searches, which are mainly sensitive either to 'valence' squarks or to third-generation squarks [15]. If the supersymmetric top partners are not flavor eigenstates but rather admixtures of stops and scharms, the signatures of supersymmetric events could change dramatically. In particular, the typically sought signatures, such as top-quark pairs and missing transverse energy (/E_T), could be exchanged for charm-jet pairs and top-charm pairs plus /E_T. This has led to the concept of supersymmetric 'flavored naturalness'. Despite the non-trivial flavor structure of the top sector, the level of fine-tuning of these setups is similar to that of more conventional supersymmetric scenarios with pure-stop mass eigenstates, and is sometimes even improved [16]. In addition, it has been shown that low-energy electroweak and flavor physics still allow for large deviations from a minimal flavor structure in the squark sector [17-28]. Complementarily, the gluino state included in any supersymmetric extension of the SM leads to an independent source of fine-tuning [29]. This result is a combination of the rather strong bounds on the gluino mass and the genuine gluino loop-diagram contributions to all scalar masses (and in particular to the squark masses m_q̃). Indeed, the gluino mass m_g̃, which is constrained to be larger than about 1 TeV, and even more in some specific setups, implies a naturalness relation between the squark and gluino masses,
m_g̃ ≲ 2 m_q̃. (1.1)
In this paper we focus on gluino phenomenology and study how the presence of a not-that-heavy gluino in the theory can give rise to bounds on flavored naturalness and on non-minimal flavor mixing in the squark sector. In the case of squark flavor violation, several new supersymmetric decay channels become relevant and new signatures can be expected. Although there is no experimental search dedicated to this non-minimally flavor-violating supersymmetric setup, the panel of signatures that can arise is large enough that we can expect to derive constraints from standard analyses designed for flavor-conserving supersymmetric scenarios, as already depicted in previous prospective studies [16, 19, 21-23, 28, 30-37]. We therefore investigate how standard searches for squarks and gluinos are sensitive to a non-trivial flavor structure in the squark sector. In particular, we focus on four searches of the ATLAS and CMS collaborations, dedicated either to third-generation squarks [38], to the gluino and squarks of the first two generations [39,40], or to charm squarks [41]. This is only a representative subset of the vast experimental wealth of searches, but it is sufficient to derive meaningful results that constrain flavored naturalness, since it covers all topologies that can arise in our non-minimally flavor-violating setup. This work is structured as follows: the simplified model description used throughout this analysis is introduced in section 2. Section 3 describes the reinterpretation procedure, including the simulation setup and a concise summary of the experimental searches under consideration. The results are discussed in section 4, while our conclusions are presented in section 5. Details regarding the implementation of the ATLAS-SUSY-2013-04 search in the reinterpretation framework that we have used are provided in appendix A, while those related to the other considered searches can be found in ref. [42].

2 Theoretical framework: a simplified model for studying gluino flavor violation

Following a simplified-model approach such as those traditionally employed by the ATLAS and CMS collaborations [43,44], we consider a supersymmetric extension of the SM in which only a subset of the superpartners feature masses accessible at the LHC. Effectively, we supplement the SM by a neutralino state χ̃⁰, which is the lightest supersymmetric particle (LSP), one gluino state g̃ and two up-type squark states ũ₁ and ũ₂. The latter are mass eigenstates obtained as linear combinations of the right-handed stop and scharm flavor eigenstates through a rotation of angle θ^R_23 (a sketch of the assumed convention is given below). Note that the squark mixing angle lies in the [0, π/4] range, so that the ũ₁ (ũ₂) state always contains a dominant scharm (stop) component. Throughout our analysis, we consider an extremely light neutralino with a mass fixed at m_χ̃⁰ = 1 GeV. This assumption maximizes the amount of missing energy produced in signal events and thus represents a favored scenario for any analysis relying on large missing-energy signatures. Consequently, the results of this work represent a conservative estimate of the LHC sensitivity to the studied setup. We define our model parameter space by the three remaining masses, namely the squark and gluino masses m_ũ₁, m_ũ₂ and m_g̃, and the squark mixing angle θ^R_23. Although strong bounds on the parameter space could be derived from flavor-physics observables even with just two 'active' squark states, all flavor constraints are, in practice, only sensitive to a single combination of the mass splitting and the mixing angle in the mass-insertion approximation [45,46], which is suitable for relatively small mass splittings and mixings. Since flavor bounds on the squark masses and mixings in the right-right sector are mild and arise mostly from D physics [47,48], flavor violation involving the right-handed stop and scharm is essentially unconstrained by flavor data, as long as the mixing between the stop and the up squark is assumed to be small. Therefore the squark mass difference ∆m = m_ũ₂ − m_ũ₁ and the sine of the mixing angle sin θ^R_23 can both be large, with related implications for squark pair-production cross sections [19]. Furthermore, LHC searches are potentially sensitive to the ∆m and sin θ^R_23 parameters separately, in contrast to flavor constraints which cannot disentangle the two [49].² The parameter space can be divided into two domains depending on whether ũ₁ or ũ₂ is the lightest squark. For the sake of a clearer discussion, and in order to put these two setups on an equal footing, we define two series of scenarios, both parameterized by a (m_g̃, m_ũ, ∆m, θ^R_23) tuple and distinguished by the sign of ∆m.
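As announced above, the following sketch fixes a concrete convention for the rotation between flavor and mass eigenstates; the explicit form is our own assumption for illustration, chosen so that ũ₁ is scharm-dominated for θ^R_23 ∈ [0, π/4].

```python
# Hedged sketch of the squark mixing (assumed convention, not from the paper):
#   u1 =  cos(theta) * scharm_R + sin(theta) * stop_R
#   u2 = -sin(theta) * scharm_R + cos(theta) * stop_R
import math

def squark_content(theta_r23):
    """Return (scharm fraction, stop fraction) of the lighter-flavor state u1."""
    c, s = math.cos(theta_r23), math.sin(theta_r23)
    return c*c, s*s

for theta in (0.0, math.pi/8, math.pi/4):
    print(theta, squark_content(theta))   # u1 scharm fraction >= 1/2 throughout
```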
For the first series of scenarios (denoted S.I), the m_ũ parameter is identified with the ũ₁ mass, while for the second series of scenarios (denoted S.II), the two squark masses are interchanged and the stop-dominated ũ₂ squark is now the lightest squark state, with a mass given by m_ũ. For given values of θ^R_23 and m_g̃, this can be summarized as: in S.I (∆m > 0), m_ũ₁ = m_ũ and m_ũ₂ = m_ũ + |∆m|, whereas in S.II (∆m < 0), m_ũ₂ = m_ũ and m_ũ₁ = m_ũ + |∆m|. In order to study the gluino effects on the constraints that can be imposed on flavored-naturalness models, we perform a scan of the above parameter space; the range in which each physical parameter of the model description is allowed to vary is given in table 1. The ∆m = 0 case deserves a clarification. If the two squarks were entirely degenerate, the mixing could be rotated away as a consequence of the U(2) symmetry of the two-squark mass-squared matrix; in this case, the mixing would have no physical meaning. By imposing ∆m = 0 we in fact mean that the splitting between the squark states does not manifest itself in LHC processes, while still being larger than the width of the squarks, so that oscillation and interference effects are unimportant.

² This was pointed out in ref. [49] in the context of simplified models featuring a flavorful slepton sector. In practice, due to the highly restrictive lepton-flavor bounds, their analysis included scenarios where only one parameter (∆m or sin θ^R_23) was large enough to induce an observable effect in the slepton searches of the first LHC run.

3 Monte Carlo simulations and LHC analysis reinterpretation details

Technical setup and general considerations

To determine the LHC sensitivity to the class of models introduced in section 2, we reinterpret the results of several ATLAS and CMS searches for supersymmetry for each point of the parameter-space scan defined in table 1. Technically, we have implemented the simplified model described above into FeynRules [50], exported the model information in the UFO format [51] and then made use of the MadGraph5_aMC@NLO [52] framework for event generation. The description of the QCD environment (parton showering and hadronization) has been achieved with the Pythia 6 [53] package. Next, we apply a detector-response emulation to the simulation results by means of the Delphes 3 program [54], which internally relies on the anti-k_T jet algorithm [55], as implemented in the FastJet software [56], for object reconstruction. For each of the recast analyses, the Delphes configuration has been consistently tuned to match the setup described in the experimental documentation [42]. Finally, we have used the MadAnalysis 5 framework [57,58] to calculate the signal efficiencies for the different search strategies and to derive 95% confidence level (CL) exclusions with the CLs method [59]. As most of the LHC analyses under consideration rely on a proper description of the jet properties, we have merged event samples containing up to one extra jet compared to the Born process, and have accounted for the possible double counting of radiation that can be described both at the level of the matrix element and at the level of the parton showering by means of the Mangano (MLM) scheme [60,61]. Moreover, we have normalized the cross sections of the signal samples to next-to-leading-order (NLO) accuracy in QCD.
Nevertheless, as flavor effects on supersymmetric production cross sections at NLO have yet to be calculated, we have taken a very conservative approach and applied a global K-factor of 1.25 to all leading-order results obtained with MadGraph5_aMC@NLO. In the above simulation chain, we employ a detector simulation that is much simpler than those of the CMS and ATLAS experiments. It cannot therefore genuinely account for the full complexity of the real detectors: for instance, event-cleaning requirements or basic object-quality criteria cannot be implemented in Delphes. While these are expected to have only a small impact on the derived limits, it is important to bear in mind that related uncertainties exist. Furthermore, the searches we focus on are multichannel searches, and the experimental limits are often derived after combining all channels. The statistical models used in the official exclusions are, however, not publicly available, so we have made the approximation of considering each search channel independently and computed our limits by restricting ourselves to the channel yielding the strongest exclusion; in this way, we have omitted all correlations that could improve the bounds. Finally, in some of the considered searches, the background estimation in the various signal regions depends on an extrapolation from designated control regions. Consequently, the possibility of control-region contamination by signal events should also be taken into account. This contamination, however, depends on the signal model being explored, and the information needed to quantify it is not public; it has therefore not been pursued in our work. The combined effect of all these features leads to results that are compatible with the experimental ones within 10%-20% [42]. We stress, however, that the relative uncertainty is much smaller: many of the errors arising from our simplified recast should lead to an overall mis-estimation of a given bound, yet we do not expect these recast errors to be sensitive to the size of the flavor mixing. In other words, when taking ratios of bounds (which we effectively do when considering how bounds change), this systematic uncertainty should largely drop out. We have verified the consistency of our methodology for the analyses under consideration and have validated in this way our reimplementation procedure. More details are given in the rest of this section, in appendix A, and in ref. [42].

ATLAS: multijets + /E_T + lepton veto

The ATLAS-SUSY-2013-04 search [39] is a supersymmetry-oriented search that focuses on a multijet signature accompanied by large missing transverse energy and no isolated hard leptons (electrons or muons). In the context of our model description of section 2, it targets the production of gluino pairs, squark pairs or gluino-squark associated pairs that subsequently decay into missing transverse energy and jets. The selection strategy relies on dedicated multijet triggers with a minimal requirement of five (six) very energetic central jets with a transverse energy E_T > 55 GeV (E_T > 45 GeV) and a pseudorapidity satisfying |η| < 3.2. Events are then collected into two types of signal regions, the so-called 'multijet + flavor stream' and 'multijet + M_J^Σ stream' categories, which yields a total of 19 overlapping signal regions.
In the 'multijet + flavor stream' signal-region category, events are classified according to requirements on the number of jets exhibiting specific properties. In a first set of regions, the events are required to feature exactly 8 (8j50), 9 (9j50) or at least 10 (≥10j50) jets with a transverse momentum p_T > 50 GeV and a pseudorapidity |η| < 2. In a second set of regions, they are required to contain exactly 7 (7j80) or at least 8 (≥8j80) jets with a transverse momentum p_T > 80 GeV and a pseudorapidity |η| < 2. Except for the ≥10j50 region, a further subdivision is made according to the number of b-tagged jets (0, 1 or at least 2) with a pseudorapidity |η| < 2.5 and a transverse momentum p_T > 40 GeV. The signal selection strategy finally relies on the missing transverse energy significance /E_T/√H_T, where H_T is defined as the sum of the hadronic transverse energies of all jets with E_T larger than 50 GeV. For SM processes this variable is expected to be small and almost insensitive to the jet multiplicity; a further background reduction is thus obtained by requiring the same, relatively high missing-transverse-energy significance in all signal regions. The 'multijet + M_J^Σ stream' signal-region category relies on an extra variable, M_J^Σ, defined as the invariant mass obtained after combining the momenta of all fat jets (of radius R = 1) with a transverse momentum larger than 100 GeV and a pseudorapidity smaller than 1.5 in absolute value. Unfortunately, the MadAnalysis 5 framework is currently unable to handle fat jets; consequently, we refrain from implementing this type of signal-region flow in our recasting procedure. This work features the first use of a reimplementation of the ATLAS-SUSY-2013-04 search in the MadAnalysis 5 framework; details on its validation are therefore given in appendix A.

CMS: single lepton + at least four jets (including at least one b-tagged jet) + /E_T

The CMS-SUS-13-011 search [38] is a stop search that targets stop pair production and two possible decay modes of the stop, t̃ → tχ̃⁰ → W⁺bχ̃⁰ and t̃ → bχ̃⁺ → bW⁺χ̃⁰, which lead to similar final-state topologies. In the model description of section 2 we have assumed that the charginos are decoupled, which is a fair assumption if the µ-term is large. Nevertheless, we still include this search in our analysis, as its signal regions cover signatures related to the top-neutralino decay of the stop. The CMS-SUS-13-011 search targets events in which one of the W-bosons decays hadronically, while the other one decays leptonically into an electron or a muon (the τ channel is ignored). It contains two analysis flows, a first one using a predefined, 'cut-based' selection strategy and a second one relying on a boosted decision tree (BDT) technique. Although the BDT analysis provides a sensitivity that is 40% better, the absence of related public information prevents the community from making use of it for phenomenological purposes; we therefore focus only on the cut-based analysis strategy. The object definition and event preselection criteria require the presence of a single isolated lepton with a transverse momentum p_T > 30 GeV (25 GeV) and a pseudorapidity |η| < 1.4 (2.1) in the case of an electron (a muon). Moreover, no jet can be found in a cone of R = 0.4 centered on the lepton. A veto is further enforced on events featuring an additional (loosely) isolated lepton or a track with an electric charge opposite to that of the primary lepton, as well as on events containing hadronic taus. Furthermore, at least four jets with a transverse momentum p_T > 30 GeV and a pseudorapidity |η| < 2.4 are required, with at least one of them being b-tagged. The preselection finally imposes that /E_T > 100 GeV, that the azimuthal angle between the missing momentum and each of the first two leading jets is above 0.8, and that the transverse mass constructed from the lepton and the missing momentum is larger than 120 GeV. Various signal regions are then defined from several considerations. First, one designs categories dedicated to probing each of the two considered stop decay modes, t̃ → tχ̃⁰ and t̃ → bχ̃⁺. To this aim, one imposes constraints on the hadronic-top reconstruction quality
A veto is further enforced on events featuring an additional (loosely) isolated lepton or a track with an electric charge opposite to the one of the primary lepton, as well as on events containing hadronic taus. Furthermore, at least four jets with a transverse momentum p_T > 30 GeV and a pseudorapidity |η| < 2.4 are required, with at least one of them being b-tagged. The preselection finally imposes that $E_T^{miss} > 100$ GeV, that the azimuthal angle between the missing momentum and the first two leading jets is above 0.8, and that the transverse mass constructed from the lepton and the missing momentum is larger than 120 GeV. Various signal regions are then defined from several considerations. First, one designs categories dedicated to probing each of the two considered stop decay modes, $\tilde{t} \to t\tilde\chi^0$ and $\tilde{t} \to b\tilde\chi^+$. To this aim, one imposes constraints on the hadronic top reconstruction quality in the case of the region category related to the top-neutralino stop decay, as well as on the amount of missing energy. In the case of the stop decay into a neutralino, four overlapping signal regions are defined after requiring $E_T^{miss}$ to be larger than 150, 200, 250 and 300 GeV, respectively. In the case of the stop decay into a chargino, four regions are again defined, but using missing energy thresholds of 100, 150, 200 and 250 GeV. Next, the categories are further subdivided into regions whose goal is to probe large or small mass differences ∆M between the stop and the LSP. Large ∆M regions are defined by enforcing the transverse variable $M^W_{T2}$ [62] to be larger than 200 GeV and the leading b-jet to have a p_T greater than 100 GeV, this last criterion being only relevant for the case of a stop decay into a chargino. Information on the implementation and validation of this analysis in the MadAnalysis 5 framework can be found in ref. [42] and on Inspire [63]. CMS: at least 3 jets + E_T^miss + lepton veto The CMS-SUS-13-012 analysis [40] is a search for supersymmetry, which focuses on the pair-production of gluinos and squarks. The main final state sought in this search comprises a multijet system and missing transverse energy, without any isolated leptons. It is hence directly sensitive to our simplified models, in which such signatures would be copiously produced. The event selection requires at least three jets with a transverse momentum p_T > 50 GeV and a pseudorapidity satisfying |η| < 2.5. The total hadronic activity of the events is then estimated by means of the H_T variable, defined as the scalar sum of the transverse momenta of all jets satisfying the above requirements. The amount of missing transverse energy in the events is computed via the $\vec{H}_T^{miss}$ vector, obtained from the vector sum of the transverse momenta of all jets with p_T > 30 GeV and a pseudorapidity smaller than 5 in absolute value. The analysis requires that H_T > 500 GeV and $H_T^{miss} > 200$ GeV (where $H_T^{miss} = |\vec{H}_T^{miss}|$), and events in which one of the three hardest jets is aligned with $\vec{H}_T^{miss}$ are vetoed by requiring $|\Delta\phi(p_T^j, \vec{H}_T^{miss})| > 0.5$ for the two hardest jets and $|\Delta\phi(p_T^j, \vec{H}_T^{miss})| > 0.3$ for the third hardest one. The selection strategy finally includes a veto on any isolated lepton whose transverse momentum is larger than 10 GeV. The events that pass these selection criteria are then categorized into 36 non-overlapping signal regions defined by the number of jets and the values of the H_T and $H_T^{miss}$ variables. Information on the implementation and validation of this analysis in the MadAnalysis 5 framework can be found in ref.
[42] and on Inspire [64]. ATLAS: scharm pair-production using charm-tagging + lepton veto The ATLAS-SUSY-2014-03 search [41] is a supersymmetry search that looks for scharm pair production followed by the $\tilde{c} \to c\tilde\chi^0$ decay. The final state is thus comprised of two charm-jets and missing energy. The experimental analysis therefore targets events that are required to present a large amount of missing transverse energy, $E_T^{miss} > 150$ GeV, at least two very hard jets with transverse momenta greater than 130 and 100 GeV, respectively, and no isolated leptons. The two jets are then demanded to be c-tagged. This constitutes the main novel feature of this search, which involves dedicated charm-tagging techniques based on algorithms optimized with neural networks. Additional requirements are finally applied by constraining the so-called contransverse mass [65]. Mimicking charm-tagging algorithms is beyond the ability of our simplified detector emulation that relies on Delphes. We therefore recast this search following a different strategy, not based on the MadAnalysis 5 framework. A very conservative (over)estimate of the bounds is instead derived from the cross-section limits presented in the experimental publication [41], which we compare to theoretical predictions for the production cross section of a system made of two charm quarks and two neutralinos that originate from the decay of two superpartners. More precisely, the theoretical cross section is calculated from the sum of the production cross sections $\sigma_{xy}$ of any pair of superpartners x and y, the individual channels being reweighted by the corresponding branching ratios (BR) so that a final state made of two charm quarks and two neutralinos is ensured,

$\sigma_{\mathrm{th}} = \sum_{x,y} \sigma_{xy}\,\mathrm{BR}(x \to c\,\tilde\chi^0)\,\mathrm{BR}(y \to c\,\tilde\chi^0).$

We then implicitly (and incorrectly) assume that all the other requirements of the ATLAS analysis described above are fulfilled. In particular, the fact that the neutralino is almost massless in our parameterization enforces a large amount of missing energy. Results In this section, we present and discuss the main results of our reinterpretation study. As a preliminary, we introduce two concepts that we call 'signal migration' and 'signal depletion'. Each of the recasted experimental analyses targets several event topologies which are assigned to one (exclusive) or more (inclusive) signal regions. These signal regions then serve as exclusion channels for the new physics scenarios that are probed. In our case, the typical effect of the flavor mixing and the squark mass splitting will be to modify the branching fractions of the particles, and perhaps also the rate of some production processes. Consequently, signal regions that are usually largely populated by signal events in the case where there is no squark mixing can turn out to be depleted of events once flavor violation in the squark sector is allowed, and conversely, signal regions that are not sensitive to any supersymmetric signal in the flavor-conserving case can become populated. One thus expects a migration of the signal across the different regions with squark flavor violation. As a concrete example, we compare the event topologies resulting from the decay $\tilde{t} \to q\tilde\chi^0$ in a flavor-conserving model to the case of a model with stop-scharm mixing. If the stop decays solely into tops, one expects signatures which would include a b-tagged jet, in addition to either two other jets or a lepton-neutrino pair that originates from the decay of the W-boson.
Alternatively, if the stop can also decay into charm-jets, the jet multiplicity distribution of the stop decay products peaks to a smaller value. Furthermore, a decay into a charm jet, rather than into a top quark, is not bounded by the top mass and can result in larger neutralino energies which manifest as a signature with a larger amount of missing energy. The direct interpretation of this example in the context of the searches under consideration is simple. In comparing a flavor-conserving to a flavor-violating scenario, one expects that signal regions which are defined according to requirements on the missing energy, the number of isolated leptons, the number of jets and the number of b-tagged jets, to redistribute signal events between one another. This migration of signal events could cause a signal region depletion or population. In the rest of this section, we present and discuss our results in the mũ − mg plane for the model parameterization described in section 2. ATLAS: multijets + / E T + lepton veto The ATLAS-SUSY-2013-04 search [39] has been designed to target supersymmetric signals with large hadronic activity in conjunction with missing energy. The requirement for a minimum of seven hard jets implies that this search should be most sensitive togg andgq production with subsequent decays into top quarks. In this subsection we discuss how the exclusion limits shown in the (mũ, mg) plane depend on the two flavor-violating parameters: the squark mass splitting ∆m, and the squark mixing angle θ R 23 . We take as benchmark the case of a light stop and a decoupled scharm that are not mixed, with ∆m = −500 GeV and sin θ R 23 = 0. In the upper panel of figure 1 we show the reach of this ATLAS search for this scenario. In the other sub-figures of figure 1, we collect a representative set of results with various ∆m and θ R 23 values, which depict the interesting changes in the analysis reach due to flavor effects. For cases in which the lightest squark is stop-like (scenarios of class S.II with ∆m < 0), it is evident that the sensitivity of this search to gluino masses of about 1.4 TeV is reduced by a non-zero stop-scharm mixing. Due to flavor mixing, final states with charm quarks instead of top quarks can arise, changing in this way the possible event topology and depleting some of the multijet signal regions that require a large jet multiplicity. A similar effect happens to the constraint on the mass of the lightest squark of about 500 GeV when the gluino is heavy, with a mass ranging up to 2 TeV. This constraint originates mostly from direct squark-gluino associated production, and the global jet multiplicity of the event is again reduced when decays into charm quarks are allowed. Interestingly, close to the degeneracy line mq = mg, the opposite effect occurs and the sensitivity increases when the mixing is turned on. For sin θ R 23 = 0, the two-body decaỹ g →qt is kinematically forbidden when mg − mt < m t so that the dominant gluino decay mode proceeds through a three-body channel,g → ttχ 0 1 . When the stop and the scharm JHEP04(2016)044 Figure 1. Sensitivity of the ATLAS multijet + / E T + lepton veto search of ref. [39] for different values of the ∆m and θ R 23 parameters (the exact values being indicated in the top bar of each subfigure). The excluded regions are shown in red in the (mũ, mg) plane, where mũ is the mass of the lightest squark (a stop-like squark here since ∆m < 0). 
The upper panel describes the reference scenario in which the lightest squark is a pure stop state and the scharm is almost decoupled. mix, the two-body decay modeg →qc is open and dominates, even for relatively small mixing angles. In this case, the top quarks from the subsequent squark decay are more energetic than in the three-body decay case. The decay products of these tops are therefore experimentally easier to detect so that the search sensitivity is enhanced. In figure 2, we focus on scenarios of type S.I where the lightest squark is scharm-like and ∆m > 0. In the upper panel of the figure, we consider a reference scenario in which the stop is decoupled and both squarks do not mix. In comparison with the S.II benchmark JHEP04(2016)044 scenario, the search has a more limited reach. The reason is twofold. On the one hand, the lighter scharm-like state is produced more copiously than the the stop-like one viagq production, and on the other hand, the gluino branching fraction to charm-jets is increased, leading to a depletion of the signal regions with a large jet multiplicity. A non-vanishing stop-scharm mixing allows the gluino and the scharm-like state to decay into top quarks, thus increasing the number of jets in the event and repopulating the search signal regions JHEP04(2016)044 by signal migration. The impact of such a change is, however, rather modest, and becomes significant only for large mixings. This result is due to the small branching fractions of the gluino and the scharm-like state into top quarks which are suppressed by the top mass, and could therefore become substantial only for large values of the mixing angle. 4.2 CMS: single lepton + at least four jets (including at least one b-jet) + / E T The CMS-SUS-13-011 search of ref. [38] has been designed to look for the semileptonic decay of a stop pair. Consequently, we expect this search to be rather insensitive to spectra where the lightest squark is scharm-like (scenarios of type S.I). The results in figure 3 and figure 4 show that this is indeed the case with the exception of scenarios in which 0 < ∆m < 200 GeV and/or in which the mixing angle θ R 23 is large. In these cases, constraints on the gluino masses of about 1 TeV find their origin in gluino-squark associated production followed by their decay to two neutralinos and three quarks, at least two of which are tops. In all the other cases with a light scharm-like squark, the top mass severely suppresses any branching fraction to a final state containing a top quark and therefore reduces the sensitivity of the search. For a stop-like lightest squark (scenarios of class S.II), the reach of this search is much more significant. In addition to gluino pair-production, direct squark pair-production can also yield a significant number of top-pairs. As a result, light squark masses around 500 GeV are excluded independently of the gluino mass, unless the mixing angle is very large. In the latter case (right column of figure 3), the signal regions are depleted when the squarks can decay not only to top quarks, but also to charm-jets which have the advantage of a larger phase-space. Similarly to the ATLAS multijet search, the sensitivity to nearly degenerate squark and gluino states increases with a non-zero stop-scharm mixing angle. CMS: at least 3 jets + / E T + lepton veto Next, we consider the CMS-SUS-13-012 search of ref. [40] that is complementary to the previous CMS search. 
This search targets the purely hadronic decays of pair-produced superpartners, and therefore vetoes final-states which include leptons. The dependence of the sensitivity on the squark mass splitting and the mixing angle is depicted in figure 5 and figure 6. Taking as a reference the non-mixing scenario of class S.II with a decoupled charm-squark (upper panel of figure 5), we observe that the sensitivity is enhanced in all the other cases. This is a direct consequence of the relatively small jet-multiplicity requirement of this CMS analysis alongside the lepton veto. Those imply that events featuring charmjets which come fromgg andgq production (whose respective cross-sections are large and mildly depend on the squark mass) are more likely to pass all the selection steps than those featuring top quarks. The latter have indeed a non-negligible branching fraction into leptons, and the top mass tends to limit the amount of missing energy in the events (the neutralino p T ). Consequently, one expects that for the case in which the lightest squark is stop-like, a non zero mixing will increase the reach of the search, which is indeed the case as seen through figure 5. In an analogous way, one can explain the pattern of exclusion limits for scenarios of class S.I with a scharm-like lightest squark illustrated in figure 6. The gluino mass bound JHEP04(2016)044 in the reference scenario is somewhat stronger here than in the stop-like case, but it is reduced for increasing mixing angles. ATLAS: scharm pair-production using charm-tagging + lepton veto The ATLAS-SUSY-2014-03 charm-squark search of ref. [41] targets the production of a pair of charm squarks via a signature comprised of two charm-tagged jets and missing transverse energy / E T . We recall that our implementation of this search is very conservative and is JHEP04(2016)044 only based on cross sections and branching ratios. The sensitivity of this search to the model studied in this work is presented in figure 7 and figure 8. In case where the lightest squark is scharm-like (scenarios of class S.I), the search is sensitive to two specific regions of the parameter-space. The first, with lower squark masses of about 500 GeV, is independent of the gluino mass and extends up to mg ∼ 2 TeV. In this domain, squark pair production is sufficient to yield an exclusion of the model regardless of whether gluinos can be produced. In contrast, in the second domain, the squarks are JHEP04(2016)044 Figure 5. Sensitivity of the CMS supersymmetry search in the multijet plus missing energy channel of ref. [40] for different values of the ∆m and θ R 23 parameters (the exact values being indicated in the top bar of each subfigure). The excluded regions are shown in yellow in the (mũ, mg) plane, where mũ is the mass of the lightest squark (a stop-like squark here since ∆m < 0). The upper panel describes the reference scenario in which the lightest squark is a pure stop state and the scharm is almost decoupled. heavier, so that an exclusion of the model must rely on the production of gluinos followed by their subsequent decays into charm-jets. In this case, the reach depends on the gluino mass that limits the production cross section. This feature also explains the effect of the different mass splittings and mixings. For large enough positive mass splitting, ∆m > 200 GeV, there is a light scharm-like state whose mixing dependent branching fractions to charm-jets leads to an exclusion roughly independent of the gluino mass. 
Although the exact value JHEP04(2016)044 Figure 6. Sensitivity of the CMS supersymmetry search in the multijet plus missing energy channel of ref. [40] for different values of the ∆m and θ R 23 parameters (the exact values being indicated in the top bar of each subfigure). The excluded regions are shown in yellow in the (mũ, mg) plane, where mũ is the mass of the lightest squark (a scharm-like squark here since ∆m > 0). The upper panel describes the reference scenario in which the lightest squark is a pure scharm state and the stop is almost decoupled. of the excluded mass limit changes with the mixing angle, the general shape persists. For smaller values of the squark mass splitting, both squark states become accessible at the LHC, and with sufficiently large mixing, they contribute significantly to the production of events with several charm-jets. The search loses sensitivity if the lightest squark is a pure stop state (scenarios of class S.II). Consequently, the exclusion reach for negative mass splittings strongly relies on the JHEP04(2016)044 value of the mixing angle. A large mixing indeed opens the possible supersymmetric decays into charm-jets. Alternatively, lowering the mass splitting also improves the reach as it makes the heavier state, that is scharm-like, accessible. Combined reach It is interesting to discuss the combined reach of the four previously pursued searches for a few benchmarks points. To this end, we overlay all four exclusion contours on top of each JHEP04(2016)044 other, keeping the same color-coding as for the individual results. We present the results in figure 9 and figure 10. Neglecting the effect of the LSP mass (taken here to be at its preferred value for increasing the missing energy), the main result of the combined reach figures is that the CMS search for three or more jets plus missing energy and zero leptons, CMS-SUS-13-012 [40], leads to a robust lower bound on the gluino mass of about 1. JHEP04(2016)044 an improvement in the reach is implied by the ATLAS search in the multijet plus missing energy final state, ATLAS-SUSY-2013-04 [39], which leads to even stronger constraints on the squark and gluino masses. Another noteworthy point is that the bound on the lightest squark mass is always greater than about 400 GeV, reaching even roughly 600 GeV when this squark is primarily a scharm. Finally, the CMS stop search in the single lepton final state, CMS-SUS-13-011 [38], is only relevant for stop-like lightest squark while for the inverse scenario with a scharm-like lightest squark, the same reach in the parameter space is this time covered roughly by the ATLAS scharm-pair search, ATLAS-SUSY-2014-03 [41]. Conclusions In this work we have studied the LHC constraints on the gluino and squark masses in the presence of squark flavor violation. The violation of the flavor symmetry has manifested itself in two interesting and distinct manners. First, we have not assumed squark degeneracy, and in particular, we have considered the case where the squarks of the first and second generations have different masses. Effectively, the first generation squarks, which are subject to the strongest LHC constraints have been taken decoupled, while the would-be (right-handed) scharm eigenstate has been allowed to be significantly lighter. Second, we have allowed the squarks of the second and third generations, in particular the (right-handed) scharm and stop, to mix. 
Such a scenario has been studied in the past, however, either under the assumption that the gluino is decoupled and/or when the gluino flavor violating couplings are vanishing. In the absence of non-MSSM structures, the former assumption is unnatural, and the latter is inconsistent with the basic flavor structure described above. We can distinguish between two different sources of tuning. The first type is associated with the contribution of the would-be stop flavor eigenstate to the Higgs-boson mass. In a previous work, this has been studied in models featuring a decoupled gluino [16], with the conclusion that the mixing could slightly improve the level of required tuning. On this front, we have nothing qualitatively new to add beyond the fact that the bounds on the stopand scharm-like states have been slightly improved by the LHC experiments. The second type of tuning is related to the contributions of the gluino to the squark masses, where naturalness requires the gluino mass to be smaller than about twice of that of the squarks. This is particularly relevant to the above study due to the fact that the second generation squark masses are less constrained by the LHC searches. Thus, the main purpose of our study has been to examine what are the bounds on the gluino-squark system within the above framework. As the combined reach of the four LHC analyses that we have considered has shown, the fine-tuning requirement still allows for sizable stop-scharm mixing. Equivalently, we have found that for various values of the mixing, there is a wide range of unexcluded and relatively light squark and gluino masses which satisfy the gluino-mass naturalness criterion. In fact, for some mixings and mass splitting cases, the 'natural' region in the squark-gluino mass plane could even become larger. JHEP04(2016)044 In the foreseen future, the experiments are expected to publish new results with an improved reach. An improvement in the sensitivity to the above framework is obviously expected due to the increase in the center of mass energy. Furthermore, as the ATLAS collaboration has now effectively installed a new inner layer of pixel detector (IBL), its charm-tagging capabilities are expected to be upgraded, resulting in more efficient ways to look for charm squarks. Moreover, we note that during the finalizing of this work, CMS has released several analyses which target stop pair production and which are likely sensitive to gluinos as well. The search with highest reach is presented in ref. [66], with stop masses excluded up to roughly 750 GeV (for an LSP defined as in our model). Such an improved reach would cover additional parameter-space in our gluino-squark mass plane, however, it is expected to exhibit similar characteristics once the mixing and mass-splittings are turned on. As a result, the new CMS results stress the importance of flavor even beyond the reach shown in this work. Finally, we point out that we have not discussed here the impact of flavor violation on low energy observables. We have also ignored the bounds coming from the Higgsino sector even though naturalness suggests that those should be at the bottom of the supersymmetric spectrum modulo, possibly, the dark matter candidate itself. These are beyond the scope of our study. Table 2. Summary of yields for a gluino-stop off-shell scenario in which the gluino and neutralino masses have been fixed to 1100 and 400 GeV, respectively. 
The results obtained with MadAnalysis 5 are compared to the official ATLAS results, both in terms of event counts and efficiencies computed from the number of events before and after each of the selection steps. JHEP04(2016)044 For validation purposes, we generate events following the procedure of the ATLAS collaboration, using the Herwig++ program [67] for the simulation of the hard process, the parton showering and the hadronization. The supersymmetric spectrum file has been provided by the ATLAS collaboration via HepData and the Herwig++ configuration that we have used can be obtained from the MadAnalysis 5 webpage. In table 2, we compare the ATLAS results for the cut-flow counts to those obtained with our reimplementation of the ATLAS-SUSY-2013-04 analysis in MadAnalysis 5. We present the surviving number of events after each step of the selection strategy for the 13 signal regions under consideration and for a scenario in which the gluino mass is set to 1100 GeV and the neutralino mass to 400 GeV. We have found that all selection steps are properly described by the MadAnalysis 5 implementation, the agreement reaching the level of about 10%. In figure 11, we move away from the chosen benchmark scenario and vary the gluino and neutralino masses freely, enforcing however that the gluino decay channel into a top-antitop pair and a neutralino stays open. We observe that our machinery allows us to reproduce the ATLAS official bounds (obtained from HepData) at the 50 GeV level, which is acceptable on the basis of the limitations of our procedure mentioned in section 3.1. The MadAnalysis 5 implementation of the analysis can be obtained from Inspire [68]. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
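As a rough illustration of the kind of cut-flow comparison described in the validation appendix above, the snippet below tabulates per-cut efficiencies and the relative difference between official and recast yields. The event counts are made up for the example and are not the numbers from Table 2 or the ATLAS publication.

```python
def cutflow_report(official, recast):
    """Compare two cut-flows (lists of surviving event counts, same cut order).
    Prints per-cut efficiencies and the relative difference of the recast."""
    for i in range(1, len(official)):
        eff_off = official[i] / official[i - 1]
        eff_rec = recast[i] / recast[i - 1]
        rel = abs(recast[i] - official[i]) / official[i]
        print(f"cut {i}: eff(official)={eff_off:.3f}  eff(recast)={eff_rec:.3f}  "
              f"relative difference={rel:.1%}")

# Made-up numbers for illustration only:
cutflow_report(official=[10000, 4200, 1850, 610, 95],
               recast=[10000, 4100, 1790, 640, 104])
```

An agreement of the final yields at the 10% level, as quoted in the text, corresponds to the last entry of such a report staying close to 0.1.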
How Different EEG References Influence Sensor Level Functional Connectivity Graphs Highlights: Hamming distance is applied to distinguish differences between functional connectivity networks; source orientations are shown to significantly influence the scalp functional connectivity graph (FCG) obtained with different references; and REST, the reference electrode standardization technique, is shown to have an overall stable and excellent performance in variable situations. The choice of an electroencephalograph (EEG) reference is a practical issue for the study of brain functional connectivity. To study how the EEG reference influences functional connectivity estimation (FCE), this study compares the differences in FCE resulting from different references such as REST (the reference electrode standardization technique), the average reference (AR), linked mastoids (LM), and the left mastoid reference (LR). Simulations involve two parts. One is based on 300 dipolar pairs, which are located on the superficial cortex with a radial source direction. The other part is based on 20 dipolar pairs; in each pair, the dipoles have various orientation combinations. The relative error (RE) and Hamming distance (HD) between the functional connectivity matrices of ideal recordings and those of recordings obtained with different references are the metrics used to compare the differences between the scalp functional connectivity graphs (FCGs) derived from those two kinds of recordings. Lower RE and HD values imply more similarity between the two FCGs. Using the ideal recording (IR) as a standard, the results show that AR, LM and LR perform well only in specific conditions: AR performs stably when there is no upward component in the sources' orientation; LR achieves desirable results when the sources' locations are away from the left ear; and LM achieves an indistinct difference from IR when the distribution of source locations is symmetric about the line linking the two ears. However, REST not only achieves excellent performance for superficial and radial dipolar sources, but also achieves a stable and robust performance with variable source locations and orientations. Benefitting from the stable and robust performance of REST vs. other reference methods, REST might best recover the real FCG of EEG. Thus, a REST-based FCG may be a good candidate for comparing the FCGs of EEG based on different references from different labs. INTRODUCTION Electroencephalography (EEG) has excellent temporal resolution and is a valuable and cost-effective tool for the study of brain functional interactions across a wide range of clinical and research applications (Friston and Frith, 1995; Courchesne and Pierce, 2005; Stam and Reijneveld, 2007; Fogelson et al., 2013; Frantzidis et al., 2014; Van Schependom et al., 2014). It offers a window into the spatiotemporal structure of phase-coupled cortical oscillations that underlie neuronal communication (Tallon-Baudry et al., 1996; Gross et al., 2006; Womelsdorf and Fries, 2006; Fries, 2009; Miller et al., 2009). However, the EEG scalp recording can only provide the potential difference between two points, meaning that the use of an appropriate reference is vital (Geselowitz, 1998). This is a problem because no neutral locations exist on the human body (Nunez et al., 1997), and any choice for the reference location inevitably affects the EEG measurements. To minimize this effect, a number of different reference schemes have been proposed including the vertex (Lehmann et al., 1998; Hesse et al., 2004), nose (Andrew and Pfurtscheller, 1996; Essl and Rappelsberger, 1998), unimastoid or ear (Basar et al., 1998; Thatcher et al., 2001), linked mastoids or ears (Gevins and Smith, 2000; Croft et al., 2002), and average reference (i.e., the average potential over all EEG electrodes) (Offner, 1950; Nunez et al., 2001). These can provide a relatively neutral reference, at least with respect to the signal of interest. Specific laboratories, research fields, or clinical practices have various preferences, and the least biased reference site remains controversial (Nunez and Srinivasan, 2006; Kayser and Tenke, 2010). The lack of a universally accepted reference scheme also represents a major obstacle for cross-study comparability (Kayser and Tenke, 2010). A neutral potential is required to resolve the problems inherent to using body surface points as a reference. Theoretically, a point at infinity is far from brain sources and has an ideal neutral potential. Therefore, a point at infinity constitutes an ideal reference (infinity reference, IR). Unlike channel-based methods such as AR, LR, and LM, Yao (Yao, 2001; Yao et al., 2007) proposed a "reference electrode standardization technique (REST)" to approximately transform EEG data recorded with a scalp point reference to recordings using an infinity reference (IR). REST has recently been quantitatively validated via simulation studies with assumed neural sources in both a concentric three-sphere head model (Yao, 2001) and a realistic head model (Zhai and Yao, 2004). These studies have shown that data referenced with REST are more consistent with physiology than data referenced using traditional scalp references.
This has been shown with a variety of techniques including EEG spectral imaging (Yao, 2017), EEG coherence (Marzetti et al., 2007; Qin et al., 2010), and brain evoked potentials (EP) and spatiotemporal analysis (Yao and He, 2003). Previous studies on EEG electrode reference effects have predominantly focused on power spectra or spatiotemporal analysis; however, there are few reports focusing on EEG reference effects from the perspective of graph theory, which is a significant method for evaluating functional connectivity (FC) networks (Singer and Gray, 1995; De Vico Fallani et al., 2014; Garces et al., 2016). In the realm of FC, Qin (Qin et al., 2010) and Chella (Chella et al., 2016) reported relatively comprehensive changes in network patterns with different reference schemes. The relative error (RE) (Pereda et al., 2005; Nunez, 2010; Qin et al., 2010) is a metric to evaluate the difference between the coherence matrices of each reference scheme and IR. Strictly speaking, instead of describing the FCG similarity (Garces et al., 2016) intuitively, the RE can only detect the global difference between the two matrices. To further evaluate the quantified similarity between FCGs, this study exploited HD as another metric, which differentiates two graphs via the number of edge changes required to transform one into the other (Makram Talih, 2005; Medkour et al., 2010; van Wijk et al., 2010; Garces et al., 2016). One aim of this paper is to gain deeper insight into the reference effects on FCGs of EEG with simulated data. Another goal is to determine how the source orientations and locations influence the FCGs obtained from different EEG references. All simulations use an ideal three-shell spherical head model (Yao, 2001, 2017). Four regular references are involved for performance comparison, including the average reference (AR), the digitally linked mastoids (LM), the left mastoid reference (LR), and the REST transformation. A coherence matrix (Pereda et al., 2005; Srinivasan et al., 2007; Nunez, 2010) can nicely represent the relationship among EEG channels, and it is utilized to construct an FCG. The reference effects are then evaluated at the matrix level and at the graph level. At the matrix level, RE detects the global difference between different references. At the intuitive graph level, HD assesses the difference between connectivity networks (Makram Talih, 2005; Medkour et al., 2010; van Wijk et al., 2010). Referencing Techniques of EEG Here, we summarize the most commonly used reference schemes. Reference Electrode Standardization Technique There are two key points exploited in REST (Yao, 2001, 2017): one is the fact that an approximately neutral reference can be achieved at an infinity point that is far from the brain sources, and the other is that the activated neuronal sources in the brain are always the same no matter which reference scheme is utilized (Pascual-Marqui and Lehmann, 1993). Therefore, if we denote S as the unknown matrix of the source activities and G_REST as the transfer matrix from these sources to the sensors under an infinity reference, we have

V_REST = G_REST S,   (1)

where V_REST is the scalp EEG recording with a reference at infinity generated by S. Similarly, with the same source activities, the scalp EEG recordings measured with any original reference can be expressed as

V_REF = G_REF S,   (2)

where G_REF denotes the corresponding transfer matrix of any original reference.
Thereby, by combining the above equations, a linear transformation T_REST can be derived that directly estimates V_REST from V_REF as follows:

V_REST = T_REST V_REF,   (3)

where

T_REST = G_REST G_REF^+,   (4)

and G_REF^+ denotes the Moore-Penrose generalized inverse of G_REF. From Equation (4), one significant advantage of REST is that the EEG inverse problem does not need to be solved explicitly; that is, the transformation matrix T_REST can be computed without the need to know the actual sources S. In fact, only the transfer matrices G_REST and G_REF are required to construct T_REST. We can calculate G_REST and G_REF based on an equivalent source distribution (ESD) rather than on the actual sources, because the potential generated by any source can be equivalently produced by a source distribution enclosing the actual sources (Yao, 2003; Yao et al., 2005), and an ESD on the cortical surface encloses all the possible neural sources. The other main advantage of REST is that, rather than depending on the actual EEG data, it relies only on the characteristics of the assumed ESD, including the head model, electrode montage, original reference, and spatial geometry. In this study, the ESD is assumed to be a discrete layer of current dipoles forming a closed surface, analogous to previous studies (Yao, 2001, 2017; Marzetti et al., 2007; Zappasodi et al., 2014). AR Reference, LM Reference and LR Reference The reference electrodes should ideally be placed on a presumed "inactive" zone to ensure an arbitrary "zero level." The choice of reference depends on the goal of the recording. Frequently, the AR, LM, and LR references are adopted. LR uses the left earlobe (mastoid) as a reference, and LM uses the average of both earlobes as a reference. AR, as the name implies, takes the mean of all electrodes as the reference, similar in spirit to the single-channel Cz (vertex) reference (Lehmann et al., 1998; Hesse et al., 2004). Transforming data to recordings with the AR, LR, and LM references is straightforward. A perfect example can be seen in the simulated data derived from an original IR: the result for each reference can be obtained by subtracting the respective reference channel signal from the other channels (Yao, 2017). Coherence and Network Construction Coherence Coherence is a frequently utilized measure in the analysis of co-operative, synchrony-defined, cortical neuronal assemblies (Pereda et al., 2005; Nunez, 2010). Coherence represents the linear relationship at a specific frequency between two signals x(t) and y(t), which can be expressed as

Coh_xy(f) = |C_xy(f)|^2 / (C_xx(f) C_yy(f)),   (5)

where C_xy(f) denotes the cross-spectral density between x(t) and y(t), and C_xx(f) and C_yy(f) denote the auto-spectral densities of x(t) and y(t), respectively. Construct the Functional Connectivity Topography FCG plays an increasingly important role in offering a plausible mechanism for information transfer among neurons (Singer and Gray, 1995; Thatcher et al., 2001; Garces et al., 2016). According to its definition, an FCG describes how different brain regions interact with each other, as reflected in the simultaneous interactions of the recorded signals (Stephan et al., 2000). A reliable FCG can reproduce the synchronous changes and the interactions between brain areas. In this study, a scalp FCG based on EEG is constructed with a coherence matrix, i.e., the coherence between channels is taken as the weight of connectivity.
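As a minimal numerical sketch of the re-referencing operations just described, the following Python fragment applies the REST transformation of Equations (3)-(4) and the simple channel-based re-referencing used for AR, LM, and LR. The function names, the availability of the transfer matrices, and the data layout (channels x samples) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rest_transform(V_ref, G_rest, G_ref):
    """Estimate infinity-referenced data from data recorded with any original
    reference, following V_REST = G_REST G_REF^+ V_REF (Equations 3-4)."""
    T_rest = G_rest @ np.linalg.pinv(G_ref)   # Moore-Penrose generalized inverse
    return T_rest @ V_ref

def rereference(V_ir, scheme, left=None, right=None):
    """Re-reference ideal (infinity-referenced) simulated data V_ir
    (channels x samples) to AR, LR, or LM by subtracting the reference signal."""
    if scheme == "AR":                           # average of all electrodes
        ref = V_ir.mean(axis=0)
    elif scheme == "LR":                         # single mastoid/earlobe channel
        ref = V_ir[left]
    elif scheme == "LM":                         # linked (averaged) mastoids
        ref = 0.5 * (V_ir[left] + V_ir[right])
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return V_ir - ref
```

For example, with a 19 x N data matrix V and hypothetical mastoid channel indices, rereference(V, "LM", left=0, right=18) subtracts the mean of the two mastoid channels from every channel.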
To give an efficient representation of network connectivity topography, a connectivity threshold is set to remove weak links between nodes by gradually increasing the connectivity threshold until the degree of each network corresponding to different references reaches four. Therefore, we produce a binary-weighted network. Affected by the effect of volume conduction (van den Broek et al., 1998), a dense intensity of electrodes may introduce unnecessary or fake links while analyzing the interactions between brain areas. Therefore, 19 nodes are selected from the 129 channels in the EGI montage. These nodes were labeled Ch9, Ch14, Ch20, Ch27, Ch34, Ch36, Ch42, Ch44, Ch62, Ch65, Ch68, Ch73, Ch88, Ch94, Ch96, Ch103, Ch110, Ch116, and Ch121 to approximate the 20 standard electrode locations (Fp1, Fp2, Fz, F3, F4, F7, F8, C3, C4, Cz, T3, T4, T5, T6, Pz, P3, P4, O1, and O2) in the 10-20 system. Simulated Source Signals To investigate the robustness and stability of each reference scheme, an EEG connectivity network for each reference was reconstructed by conducting a simulation study. To avoid the effect of volume conduction as much as possible-as well as to better visualize the data-a low-density EEG montage consisting of 19 electrodes from the EGI (Electrical Geodesics, Inc.) 129 system approximating the standard 10-20 system locations was selected. EEG is mainly used to detect the neuronal activity on the cortex; therefore, rather than deep-level source activity, EEG accurately records the active cortex active from the radial oriented and superficial located dipolar pairs. To clearly confirm the difference between each reference overall, 300 simulated dipole-pair configurations [each consisting of two unit radial dipoles randomly positioned within the upper hemisphere (radius 0.87)] were analyzed. To further determine the feasibility of each EEG reference scheme, 20 dipole pair configurations (each containing two unit radial dipoles with a specific position and 12 different orientations) were analyzed. Figure 1 shows that two coherent dipolar neural source are generated using a damped Gaussian function, which can be expressed as Where, t 0 = 100 * dt, f = 30Hz, γ = 5, α = π 4 for one dipole in the pair, and t 0 = 200 * dt, f = 30Hz, γ = 10, α = π 2 for the other. Evaluation Metrics Relative error for coherence RE calculates the overall difference between the two matrices, which can be utilized as a holistic approach to evaluate the effectiveness of each reference. Smaller RE values are closer to the reference with IR. Here, RE is calculated as: where denotes the coefficient matrix of coherence (19 * 19) between channel pairs in specific frequency referenced at infinity, and denotes the coherence coefficient matrix C AR ; C LM , C L , C REST and are calculated with an alternative reference scheme. The matrix norm * is the Frobenius norm defined as where N denotes to the total electrode number, and C ij refers to the coherence between channel i and channel j. Hamming distance for similarity Although RE can measure the entire relationship between two coherence matrices from two methods, the accurate relationship of the two elements, which share the same location in two matrices, cannot be measured sometimes due to the effect of square operator. Therefore, another more efficient metric should be considered to measure FCG. HD is usually used to measure the distance between graphs (Makram Talih, 2005;Medkour et al., 2010;van Wijk et al., 2010). 
In recent studies on FCG (Singer and Gray, 1995; De Vico Fallani et al., 2014; Garces et al., 2016), HD is introduced to measure the percentage of entries that differ between two graphs. Compared to the RE of coherence, HD can recognize the similarity between two graphs in a more direct way. Given two graphs G_1 and G_2 with adjacency matrices N^(1) and N^(2), HD is defined formally as the fraction of entries that disagree,

HD(G_1, G_2) = (1 / N^2) Σ_{i,j} [ N^(1)_ij ≠ N^(2)_ij ],   (9)

where N is the number of nodes and the square bracket notation reflects an indicator function that is equal to one if its argument is true and zero otherwise. The Hamming distance may also be viewed as the number of edge addition/deletion operations required to turn the edge set of G_1 into that of G_2. Smaller HD values indicate more similar FCGs. Comparison between HD and RE HD is an excellent complement to RE. Assuming that there are three nodes, the 3 × 3 coherence matrices from three different methods are listed in Figure 2. For each node, the coherence with itself is equal to 1. Here, we take matrix A as a standard reference and use RE and HD to evaluate the differences of B and C. According to Equation (7), matrix B and matrix C share the same RE (both are equal to 0.1172). However, B and C are not the same. In particular, from the perspective of the connectivity graph, the two matrices are indeed different from each other. The topographies from matrices B and C, which are built from the connections exceeding the threshold, are quite different when the threshold is set to 0.55 (Figures 2B,C). This difference can then be detected by HD: the HD values for matrices B and C are 0.4444 and 0, respectively. Therefore, despite sharing the same RE, matrix C has a smaller HD than matrix B, and thus matrix C is more similar to matrix A than matrix B is. The signal is inevitably mixed with noise in each recording channel. Thus, a good metric should be insensitive to noise. We used 10 groups of data, each group consisting of three 5 × 5 matrices. In each group, matrix A represents the reference and the other two matrices, B and C, are used for comparison. B and C obtain their RE and HD separately by comparison with A. To better investigate the influence of noise on HD and RE, we suppose that B and C in each group have the same overall difference from matrix A but different inner connectivity; that is, they share the same RE but different HD. The results of HD and RE are analyzed statistically for SNR values ranging from 1 to 9, and the HD and RE from the 10 groups are recorded under each specific signal-to-noise ratio (SNR). FIGURE 2 | Illustrations of the differences between RE and HD. Here, a coherence matrix of a three-node connection is used as an example. To compare the differences in the matrices from different methods, we set a threshold of 0.55; all connections larger than the threshold are colored blue, and the connectivity graphs are then calculated from the original coherence matrices. According to Equation (7), the RE between matrices A and B and the RE between matrices A and C share the same value (0.1172). According to Equation (9), the HD between matrices A and B is 0.4444, while the HD between matrices A and C is zero. The topographies of each matrix are shown in the bottom subfigures of (A), (B), and (C); the blue circles denote the nodes, and the yellow lines denote the binary connections between two nodes.
Firstly, normal distribution test is exposed on HD and RE to determine whether the two vectors come from normal distribution, Then, the Bartlett test is utilized to determine whether the two vectors own the homogeneity of variance. Finally, if two vectors have the same variance, then a paired test is then exploited to conduct a test decision whether two vectors share the same equal mean. The Bartlett test results of HD and RE illustrate that by adding noise with specific SNR, the intragroup HD and RE can maintain the normal distributions with the same variance (p-value > 0.05). Paired-test results show that; intragroup HD can hold the stability in distinguishing matrices with various SNRs (p-value < 0.05), while intragroup RE cannot recognize the difference between matrices even in high SNRs (p-value > 0.05). The Appendix discusses in more detail the effects of HD and RE on evaluating the similarities between two FCGs (Supplementary Material). Simulation 1: Reference Effects on Two Fixed Dipoles A general case is shown in two fixed dipoles, and the configuration of the corresponding sources are set as follows: one dipolar is set in with orientation vector, and the other is set inwith the orientation vector. Both the simulated source signal is in the form of a damped Gaussian without any noise (see Figure 1). Simulation 2: Reference Effects on Superficial and Radial Dipoles To explore the influence of orientation on difference references, various orientation combinations were used for the simulations. Source orientations in the human brain are dynamic, and thus a good reference scheme should be insensitive to changes in source orientations. To investigate the stability and robustness of each reference scheme, different orientations that contain almost all of the possible combinations of basic orientation components of sources should be applied to each simulated dipolar pair. Inspired by Qin et al. (2010), the performance of each reference scheme with 300 random distributed dipolar pairs was investigated. However, in their work, the factor of source orientations was discussed only in passing. Their results from deep sources have not yet been clearly detailed. Therefore, we further explored the source direction in this study. Twenty dipolar pairs were considered, and each pair contained 12 orientations. In this simulation, we used 20 dipolar pairs with a large scale of variations on orientations and locations. While the variation between each dipolar pair is distinct, the distributions cover almost the entire possible active area in the cortex. These are primarily located in four situations including bottom-up, left, right, central, and left-right (Table 1). To evaluate the stability of the different reference schemes in all possible directions, 12 orientation combinations were applied to each dipolar pair, respectively. The vector of each orientation is represented in the three unit components, i.e., the unit along the X-axis, Y-axis, and Z-axis. The combinations are listed in Table 2. All the electrodes and the simulated dipolar pairs were projected into the central transverse section in simulations. This better reveals the relative temporal relationship of each electrode in one plane. To give a better representation of the network connectivity topography, a connectivity threshold was used to remove weak links between nodes. The threshold was increased by decreasing the network degree (mean number of links per node across the network) until the degree of each network reached two. 
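The evaluation pipeline described above (coherence matrix, degree-controlled thresholding, RE, and HD) can be sketched in a few lines of Python. The threshold search and the HD normalization below follow the definitions given in the text, but the function names and the grid of candidate thresholds are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def binarize_to_degree(C, target_degree):
    """Raise the connectivity threshold until the mean degree of the binary
    graph falls to the target value (self-connections are excluded)."""
    W = C.copy()
    np.fill_diagonal(W, 0.0)
    n = W.shape[0]
    for thr in np.linspace(0.0, 1.0, 1001):
        A = (W > thr).astype(int)
        if A.sum() / n <= target_degree:       # mean degree = total links / nodes
            return A
    return np.zeros_like(W, dtype=int)

def relative_error(C_ref, C_ir):
    """RE (Equation 7): Frobenius-norm difference relative to the IR matrix."""
    return np.linalg.norm(C_ref - C_ir) / np.linalg.norm(C_ir)

def hamming_distance(A1, A2):
    """HD (Equation 9): fraction of adjacency-matrix entries that disagree."""
    return float(np.mean(A1 != A2))
```

With coherence matrices C_IR and C_AR in hand, relative_error(C_AR, C_IR) and hamming_distance(binarize_to_degree(C_AR, 2), binarize_to_degree(C_IR, 2)) give the two metrics used in the comparisons below.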
Simulation 1: Reference Effects on Two Fixed Dipoles To illustrate the source locations vividly, a standard three-view MRI structure was used from the anatomy template ICBM512 in Brainstorm. Source locations in Simulation 1 are shown in Figure 3A, and the corresponding FCGs are shown in Figure 3B. The RE and HD statistics are shown in Figure 3C. Taking the FCG of IR as a standard, REST clearly has the most similarity with IR at first sight, and the AR FCG is the most disordered (Figure 3B). Here, HD is used to evaluate the graph similarity, and the quantified performance of each method is HD_REST = 7.2%, HD_AR = 16.96%, HD_LM = 9.94%, HD_LR = 14.62%. This agrees with the exhibited connectivity topographies. In the noise-free simulation, RE is an efficient metric to illustrate the accuracy of the different schemes quantitatively. However, the persuasiveness of RE for the FCG is not that intuitive. HD is a complementary metric, and it can measure the distance between each reference scheme and IR with respect to graph similarity. Theoretically, for each reference, smaller HD and RE values indicate results that are more similar to the IR. For the fixed locations, the results for the different orientation combinations are shown in Figure 3C, and REST is closer to zero than the other three reference schemes in terms of average HD and RE. While the standard deviation of REST is higher than that of LM, the entire range of REST is closer to zero than that of LM. Simulation 2: Reference Effects on Superficial and Radial Dipoles Theoretically, if the active source is located on the superficial cortex and the source direction is radial, then EEG can detect and recover the active signals very well. Therefore, a good EEG reference must reflect the source activation well, especially for superficial and radial cortical sources. The RE and HD metrics are utilized to evaluate the difference for each reference from the perspectives of the coherence matrix and the similarity of the FCG. The histograms reflect the distribution of the results at different levels. Figure 4A shows the RE histograms of each reference in a noisy situation (SNR of 5). Of the 300 dipolar pairs, for the REs between REST and IR, about 200 are nearly zero and almost 75 are around 0.1. However, for the REs between AR and IR, only ∼125 dipoles are nearly zero; a comparable number are around 0.1, and the remaining dipoles are distributed across a relatively large scale of variation. The LR situation shows a worse result: fewer than 100 dipoles have a nearly zero RE, and fewer than 200 dipoles have RE values around 0.1. As for LM, the distribution scale is larger than that of REST, and the RE values of less than 0.1 occupy only half of the total. During EEG measurements, the electronic disturbance from noise must be considered. A good reference should have a stable performance at different noise levels. Figures 4B,C show that when the noise is difficult to distinguish from the signal (SNR equal to 1), the EEG measurements with all references lose efficacy. However, when the SNR is greater than 1, REST is much better. Clearly, the averages of the REST RE at the different SNRs (≥2) over the 300 dipoles are all around 0.1, and the REST HD values are all below 0.025. The RE and HD of the other references are almost twice as high in terms of average and variation. The REST RE and REST HD have relatively smaller values and vary on a smaller and more stable scale.
AR in particular varies more sharply than the other references across the different SNRs. Figure 5 shows the overall statistical results of HD and RE. These are consistent, i.e., RE tends to behave similarly to HD for each reference. While both are affected by the distributed form of the sources, REST again shows a better performance than the other methods. Statistically, REST has the smallest average RE and HD as well as the smallest fluctuation (Table 3). The HD and RE variations of REST are both about 5%; those of the other references are much greater. Thus, REST seems to be the better reference choice. Figure 5 shows that LR is obviously the worst choice: it has a high average and variance, while the performance of AR and LM is moderate. Simulation 3: Reference Effects on 20 Dipoles with Various Orientation Combinations The results in Figure 5 do not consider noise. However, scalp electrodes always record real EEG mixed with noise. Thus, to verify the robustness of the different methods in a realistic situation, we simulated the signals with different SNRs by adding random Gaussian noise, considering both poor and good situations. Once the location of each dipolar pair is determined, random Gaussian noise is added to the ideal source signal, and this is repeated 100 times. Figure 6 shows the results for both a high SNR (SNR = 5) and a low SNR (SNR = 1). The average and standard deviation of the HD and RE from REST are the smallest; thus, in a noisy situation, REST achieves relatively higher robustness. In ideal (noise-free) situations, the orientation of the dipolar pair significantly affects the performance, in addition to the positional influence on each method. [Figure caption: (B) The RE results between each reference and IR on 300 dipolar pairs, shown for different SNR conditions; the blue bars denote the RE results for different SNRs, and the orange, gray, and yellow bars represent the RE results of AR, LR, and LM, respectively. (C) The HD results between each reference and IR based on 300 dipolar pairs, shown for different SNR conditions with the same color coding.] To investigate the stability of each reference scheme under these inevitably variable factors, the results for each orientation are considered separately by exploiting HD as a direct metric. In fact, the real orientation of a dipolar pair is usually complicated; therefore, a good zero-reference scheme should show a stable tolerance over the many possible orientations. Figure 7 shows that even though REST may not always have the best performance, it is the most stable. AR, LM, and LR have good performance only in limited situations. According to Figure 3, REST should achieve excellent performance when the active source is superficial and radial, but it is affected by the deeper simulated sources (Figure 7). REST has undesirable performance in ORI6 (where the orientations of the two sources are both radial). Although REST performs poorly in ORI6, it is still better than AR there. AR operates better than REST under certain orientations, but it performs worse in many orientations, such as ORI3, ORI5, ORI6, ORI10, and ORI12, that contain an upward component in the dipolar pair. Since AR fluctuates strongly with the change of orientation, AR may not be a good choice for a zero reference. LR and LM are limited by their own strategies and are largely affected by the source positions. In the simulated 20 dipolar pairs, symmetric distributions outnumber asymmetric ones.
Therefore, LM performs better than LR. DISCUSSION EEG results obtained with different references sometimes vary widely. They are influenced by the unavoidable reference issue and are limited by the principle of EEG. Here, we studied EEG reference effects on the FCG with AR, LR, LM, and REST, each of which approximates the zero reference with its own specific scheme. The LR reference systematically decreases the EEG amplitude at the electrodes that are closer to the reference side. The LM reference makes use of "linked" earlobes, which avoids the asymmetry of the LR reference, but it distorts the EEG mapping because electric current flows inside the linking wire; this affects the intracranial currents that form the EEG potentials. AR avoids the asymmetry of LR or LM. However, compared with REST, the AR reference needs several strict conditions to justify the zero-integral assumption: (1) sufficiently dense electrodes, (2) complete electrode coverage (sampling both the upper and lower parts of the head), and (3) a spherical head (Nunez and Srinivasan, 2006; Yao et al., 2007). Such ideal conditions are rarely realized. In contrast to REST, the AR, LM, and LR references are all theoretically based on channel transformation. Unexpected activity is largely introduced into the referenced recordings because the chosen reference channels are not electrically silent. Therefore, channel-based references are not recommended (Yao and He, 2003). It must be acknowledged that RE (Pereda et al., 2005; Nunez, 2010; Qin et al., 2010) reflects the overall difference between two matrices well and has an irreplaceable advantage in measuring the difference between graphs; thus, RE has been widely adopted to evaluate the difference between coherence matrices obtained with different EEG references. However, evaluations of EEG references that depend only on RE are not sufficient. For example, if two graphs share the same overall difference but their inner network structures change, RE cannot detect the difference between them. To complement RE, HD (Makram Talih, 2005; Medkour et al., 2010; van Wijk et al., 2010) is introduced as a new metric that evaluates the difference in topographies well. Derived from graph theory, HD can effectively detect edge changes in networks. Although, unlike RE, HD cannot measure the overall difference in weights, it is a relatively intuitive and objective way to detect alterations in the FCG. Thus, as a complement, HD helps complete the assessment made by RE by measuring alterations in the networks. For example, in Simulation 1, the difference in RE between LM and REST is too subtle to detect, but by combining the two metrics, we can evaluate the similarity of graphs more precisely and thus better study reference effects on the FCG. The two metrics have their own strengths, and their respective advantages are complementary. Therefore, the appropriate evaluation metrics should be chosen according to the practical problem. The results of RE and HD validate that REST performs well in terms of both stability and robustness. REST works because it grasps the essence of the zero reference. AR averages the signal and noise from each electrode; thus, it achieves good performance when the orientation of the source lies along the axial plane or in noisy situations. However, once there is an upward component in the source orientation, the baseline of AR is abnormally high. Thus, the performance of AR is unsatisfactory.
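To make the channel-based schemes just discussed concrete, the snippet below re-references a channels × samples recording with AR, LR, and LM as simple linear transformations. REST itself requires a head-model-based source reconstruction (e.g., the original REST toolbox) and is not reproduced here; the earlobe channel indices and the random data are hypothetical.

```python
import numpy as np

def average_reference(eeg):
    """AR: subtract the instantaneous mean over all channels."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def left_ear_reference(eeg, left_idx):
    """LR: subtract the left-earlobe channel from every channel."""
    return eeg - eeg[left_idx:left_idx + 1, :]

def linked_mastoids_reference(eeg, left_idx, right_idx):
    """LM: subtract the mean of the two earlobe channels."""
    ref = (eeg[left_idx, :] + eeg[right_idx, :]) / 2.0
    return eeg - ref

# eeg: (n_channels, n_samples); channels 19 and 20 are assumed earlobe electrodes
eeg = np.random.default_rng(0).standard_normal((21, 1000))
ar_data = average_reference(eeg)
lr_data = left_ear_reference(eeg, left_idx=19)
lm_data = linked_mastoids_reference(eeg, left_idx=19, right_idx=20)
```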
Although LM and LR are insensitive to the orientation of the sources, their results depend significantly on the distribution of the sources. LM achieves stable performance especially for bilaterally symmetric sources. LR requires rigorous conditions to achieve good results, i.e., LR is close to IR only when the source is located far from the left ear. We conclude that REST achieves stable performance under diverse situations, while AR, LM, and LR achieve satisfactory results only in a few situations. CONCLUSIONS In this study, we investigated how different reference choices influence the FCG using simulated EEG data with various SNR values generated from different source combinations. The simulations show that the reference choice has a significant effect on coherence, a measure that indicates synchronization and interaction. As a result, the FCGs also differ across reference schemes. The RE and HD between REST and IR were the smallest compared with those between the AR, LM, or LR references and IR. This means that REST reconstructs the FCG that is closest to the IR result. Moreover, the results revealed that REST performs stably, compared with the other reference schemes, even when the source orientations vary. These findings indicate that the choice of reference plays a crucial role in studies of functional networks in the brain, and it is critical to consider it thoughtfully. REST is the recommended reference technique for objective comparisons as well as for cross-laboratory studies and clinical practice. AUTHOR CONTRIBUTIONS YH: simulated the designed experiments, evaluated the results, and wrote the manuscript. JZ and QL: designed the experiments and revised the entire framework of the manuscript. YC, LH, GaY, and GuY took part in analyzing the logic and checking the grammar of the manuscript.
8,155
2017-07-05T00:00:00.000
[ "Computer Science" ]
Detection of Respiratory Sounds Based on Wavelet Coefficients and Machine Learning Respiratory sounds reveal important information of the lungs of patients. However, the analysis of lung sounds depends significantly on the medical skills and diagnostic experience of the physicians and is a time-consuming process. The development of an automatic respiratory sound classification system based on machine learning would, therefore, be beneficial. In this study, 705 respiratory sound signals (240 crackles, 260 rhonchi, and 205 normal respiratory sounds) were acquired from 130 patients. We found that similarities between the original and wavelet decomposed signals reflected the frequency of the signals. The Gaussian kernel function was used to evaluate the wavelet signal similarity. We combined the wavelet signal similarity with the relative wavelet energy and wavelet entropy as the feature vector. A 5-fold cross-validation was applied to assess the performance of the system. The artificial neural network model, which was applied, achieved the classification accuracy and classified the respiratory sound signals with an accuracy of 85.43%. I. INTRODUCTION Because respiratory sounds convey important lung information of patients, the auscultation of lung sounds is a fundamental component of a pediatric lung disease diagnosis, similar to the diagnosis of pneumonia, bronchitis, and sleep apnea [1]- [3]. Crackles and rhonchi are the most common adventitious lung sounds. Crackles are explosive sounds caused by fluid bubbles in the tracheal or bronchial tubes, and rhonchi are caused by the obstructed pulmonary airways when air flows through these tubes. Estimation of crackles and rhonchi is vital in lung diagnosis. However, the auscultation depends greatly on the medical skills and diagnostic experience of the physician, which are difficult to acquire. With the development of computer-based respiratory sounds, automatic lung sound recognition based on machine learning has an important clinical significance for the diagnosis of lung abnormalities [4]. There are three main methods used in the feature extraction of respiratory sounds, i.e., statistics in the time-frequency domain, wavelet coefficients, and cepstrum coefficients. The associate editor coordinating the review of this manuscript and approving it for publication was Shahzad Mumtaz . Statistics in the time-frequency domain are intuitive features of lung sounds. Naves et al. use higher order statistics to extract features. Naves et al. [5] employ temporal-spectral dominance-based features. Auto-regressive (AR) models have also been widely used in the classification of lung sounds [6], [7]. However, certain individual statistics in the time and frequency domains cannot reveal the time-frequency properties of such sounds. Statistics in the wavelet domain have also been widely used in respiratory sound classification. Chang and Lai [8] chose the mean, average energy, and standard deviation of the wavelet coefficients in every wavelet layer and the ratio of the absolute mean values of adjacent sub-bands as the feature vectors. A new type of feature extraction method based on the fast wavelet transform is presented in [9]. The frequency distribution of lung sounds cannot be characterized by the simple statistics of the wavelet coefficients. Cepstrum coefficients, particularly Mel frequency cepstrum coefficients (MFCCs), which are used to evaluate the formants in the spectrum, have been widely applied to research on speech recognition. 
Numerous researchers have applied MFCC to lung sound recognition in recent years [10]- [13]. However, the formants of respiratory sounds are not obvious, and the vector dimensions of MFCCs are extremely high, which means a large dataset is needed during the training process. In conclusion, a low-dimensional feature vector that evaluates the frequency distribution of the respiratory sounds is needed. Therefore, the feature vector for respiratory sound classification must be studied. Clinical respiratory sounds are difficult to acquire in practice, and the sample set of lung sounds is often small. Therefore, researchers have generally chosen traditional machine learning methods, such as an artificial neural network (ANN) [9], [14], hidden Markov model (HMM) [15], [16], support vector machine (SVM) [17], [18], or k-Nearest Neighbor [4], instead of a deep learning method for the classification of lung sounds. Chamberlain et al. [19] attempted to recognize wheezes and crackles through deep learning conducted on 11,627 sounds recorded from 11 different auscultation locations on 284 patients. However, the signal samples have strong correlations, and the classification model is not generalized. Because of the small sample dataset, the feature vectors play a more important role than the selection of a classification model for lung sound classification. Renard et al. [20] evaluated the discriminatory ability of different types of features, including WT and MFCC used in former studies, based on the evaluation index MCC, ROC, AUC, and F1 score. They found that certain individual features show good results in wheeze recognition, whereas combinations of features increase the accuracy. Haider et al. [21] investigated the chronic obstructive pulmonary disease (COPD) using lung sounds and obtained excellent results. They improved the classification results to 100% by combining lung sound parameters with spirometry parameters. Ashok et al. [22] proposed a new method for classifying normal and abnormal lung sounds using an ELM network. The proposed method achieves a classification accuracy of 92.86%. Jaber et al. [23] proposed a telemedicine framework for lung sound based on the telemedicine framework. Messner et al. [24] proposed a multi-channel lung sound classification method. They selected the convolutional recurrent neural network and obtained the F1 score approximates to 92%. Rizal et al. [25] use multi-scale Hjorth descriptors for lung signal classification and achieves a high accuracy. In this study, it was found that if the frequency spectrum of the original signal concentrates on the same frequency range of a certain wavelet sub-signal, the origin signal is similar to the wavelet sub-signal. Therefore, in this research, the frequency properties of the signals can be characterized based on the signal similarity between the wavelet sub-signals and the original signals. The Gaussian kernel function is selected to evaluate the signal similarity as a part of the feature vector. The Gaussian kernel function measures the signal similarity and scales the signal similarity into a range of zero to one. In addition, the relative wavelet energy (RWE) and wavelet entropy (WE) are used. RWE and WE were first proposed by Rosso et al. [30] in research into brain electrical signal processing. RWE and WE are widely used in EEG signal processing [31], [32] and have been introduced into ECG signal processing [33]. However, a few researchers have used RWE and WE in audio signal processing. 
In this research, RWE is used to measure the energy distribution in different wavelet bands, and WE is applied to evaluate the RWE distribution. In this paper, 705 lung sound signals with 240 crackle signals, 260 rhonchus signals, and 205 normal respiratory sound signals are acquired from 130 patients. All signals are divided into 5 groups, with 48 crackle signals, 52 rhonchus signals, and 41 normal respiratory sound signals in each group. A 5-fold cross-validation is applied to assess the performance of the system. In every step of the training process, four groups were chosen as the training dataset, and the remaining group is chosen as the test set. A 15-D feature vector is obtained with seven dimensions of the relative wavelet energy, one dimension of the wavelet entropy, and seven dimensions of the Gaussian kernel functions. Three classification methods, i.e., an SVM, a KNN, and an ANN, were tested. The results show that an ANN has the highest classification accuracy of 85.43%. The rest of this paper is organized as follows: Section 2 briefly introduces the lung sound signal acquisition scheme. In Section 3, a wavelet transform and multi-resolution wavelet decomposition are introduced. In Section 4, the feature extraction method is described. The feature vector comprises the wavelet energy, wavelet entropy, and Gaussian kernel function. In Section 5, an SVM, an ANN, and a KNN are used to design the classifier. The results are presented in Section 6. Finally, Section 7 summarizes the procedure followed by the algorithm and proves the validity of the characteristic vector choice. II. LUNG SOUND SIGNAL ACQUISITION All breath signals were acquired from the pediatric department in the China-Japan Friendship Hospital in China. This research methodology was approved by the Institutional Ethical Committee of the China-Japan Friendship Hospital and informed consent was obtained from the participants. For the participants who are minors, the research method was approved by their parents. All abnormal lung sound signals were collected from patients having pneumonia or bronchitis. All normal signals were acquired from patients with other diseases, such as heart or stomach diseases. The 705 signals were collected using a 3M Littmann electronic stethoscope on 44 patients having lung crackles, 50 patients VOLUME 8, 2020 with rhonchi, and 36 patients having normal respiratory sounds. Five or six respiratory cycles are selected from every patient. The respiratory cycles from one patient were collected on different days. The sampling frequency of the signals was 4 000 Hz. The signal collection equipment is shown in Figure 1. The signals were pre-processed to reduce environmental noises. All signals were filtered using the algorithm mentioned in [34]. The algorithm is an integrated serial filter consisting of a Chebyshev band-pass filter, a wavelet de-noising filter, and an adaptive filter. The signal-to-noise ratio (SNR) of the filtered signals was greater than five. The duration of the signals ranged from 1 to 3 s. The signals were segmented according to the respiratory cycle. All signals have periodic inspiratory and expiratory phases. The signals are segmented based on their amplitude. First, a signal amplitude threshold was set. Signals higher than the threshold were respiratory sounds, and signals lower than the threshold were noises. Second, the signals were segmented using an alternation of noises and respiratory sound components. III. WAVELET ANALYSIS A. 
WAVELET TRANSFORM A Fourier transform (FT) is a traditional method used to study the frequency of a signal. However, FT provides the frequency information of the signal, not the frequency information within the time location. Therefore, the FT does not provide sufficient frequency information of non-stationary signals. A short time Fourier transform (STFT) utilizes window functions to segment the non-stationary signals into short-term sub-signals. The short-time sub-signals are considered stationary signals. The FT of every short-time sub-signal determines the frequency components of the sub-signal time location. In addition, the signals are mapped into two dimensions, i.e., the frequency and time domain. However, the sinusoidal frequency of an STFT has a constant time-frequency window, as shown in Figure 3 (a), limiting the promotion of STFT in the time-frequency analysis. To solve this problem, a wavelet transform (WT) is proposed. The WT determines the frequency components within the wavelet domain. The time-frequency windows of the wavelet domain are variable, as shown in Figure 3 (b). This guarantees the time domain resolution at low-frequency scales and the frequency domain resolution at high-frequency scales. The WT is as follows where a is the dyadic dilation, b is the dyadic position, and the wavelet mother function ϕ(x) is defined as: The discrete wavelet transform (DWT) is as follows: The MALLAT algorithm is a simple algorithm of WT. Wavelet coefficients are obtained through the multi-resolution decomposition of the signals in the MALLAT algorithm [35], [36]. The process used by the algorithm is shown in Figure 4. The signal is decomposed into a high-frequency part by a high-pass filter h[n] and a low-frequency part by a low-pass filter g [n]. The relationship between the two filters is as follows: After the down-sampling process, the wavelet coefficients in layer j are obtained. The high-frequency part of the signal is transformed into the detail coefficients Dj,k, and the low-frequency part is transformed into the approximation coefficients Aj,k, where j and k are the dyadic dilation and dyadic position, respectively. The same decomposition process is repeated on the approximation coefficients Aj,k to obtain the detailed coefficients Dj+1,k and the approximation coefficients Aj+1,k at a higher resolution. The relationship between the wavelet coefficients at different resolutions is After n iterations, n groups of detail coefficients Dj,k, j = 1, 2,. . . ,n and one group of approximation coefficients An,k are obtained. The frequency range of the wavelet coefficients in layer j is as follows: where f s is the sampling rate of the original signal. IV. FEATURE EXTRACTION Rhonchi, crackles, and normal respiratory sounds have different frequency distributions, as shown in Figure 5. Therefore, the statistics of the wavelet coefficients are selected to evaluate the frequency components in the wavelet domain. Authors use permutation entropy as the wavelet selection criteria like [37]. We compare the Daubechies (db) series (db1 ∼ db7), coif series (coif1 ∼ coif2) and sym series(sym2 ∼ sym7). The coif2 has the least permutation entropy. The signal is decomposed into six wavelet layers using the coif2 wavelet base. The sampling frequency of the signals collected is 4000 Hz. The frequency ranges of the wavelet coefficients in different layers are presented in Table 1. 
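A minimal sketch of the six-level multi-resolution decomposition with the coif2 wavelet follows, assuming the PyWavelets library and a random stand-in signal. It prints the nominal band of each detail layer using the standard dyadic relation (approximately f_s/2^(j+1) to f_s/2^j for detail layer j, the usual form of the frequency-range formula referenced above), reconstructs single-layer sub-signals, and computes the relative wavelet energy and wavelet entropy introduced earlier; the natural logarithm for the entropy is an assumption, since the base is not stated.

```python
import numpy as np
import pywt

fs = 4000                                     # sampling rate of the recordings
rng = np.random.default_rng(0)
x = rng.standard_normal(2 * fs)               # stand-in for one filtered lung-sound segment

# Six-level multi-resolution decomposition with the coif2 wavelet (MALLAT algorithm).
coeffs = pywt.wavedec(x, "coif2", level=6)    # [A6, D6, D5, D4, D3, D2, D1]

# Nominal band of detail layer j under the usual dyadic relation: [fs/2**(j+1), fs/2**j].
for j in range(1, 7):
    print(f"D{j}: {fs / 2 ** (j + 1):6.1f} - {fs / 2 ** j:6.1f} Hz")

def wavelet_subsignal(coeffs, keep_idx, wavelet="coif2"):
    """Reconstruct the sub-signal of one layer by zeroing all other coefficients."""
    kept = [c if i == keep_idx else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, wavelet)

a6 = wavelet_subsignal(coeffs, 0)             # approximation-layer-6 sub-signal
d6 = wavelet_subsignal(coeffs, 1)             # detail-layer-6 sub-signal

# Relative wavelet energy and wavelet entropy over the seven coefficient groups
energies = np.array([np.sum(c ** 2) for c in coeffs])
rwe = energies / energies.sum()
we = -np.sum(rwe * np.log(rwe + 1e-12))       # Shannon entropy of the RWE distribution
print(rwe, we)
```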
In this research, the relative wavelet energy, wavelet entropy, and similarity between the original and sub-signals are chosen as the elements of the feature vector. the same frequency component of the original signal, the sub-signal of that wavelet layer would be similar to the original signal. As shown in Figures 5 and 6, the frequency of the crackle signal concentrates on the lower frequency, and the original crackle signal is similar to the wavelet sub-signals a6 and d6. As shown in Figures 5 and 8, the normal lung signal frequency concentrates within the higher frequency, and the original signal is similar to d4, d5, and d6. As shown in Figure 5, it was found that the Rhonchus signal had a wide frequency distribution. In addition, as shown in Figure 7, the wavelet sub-signals d3, d4, and d5 are similar to the original signal in the inspiratory of the original signal, whereas the sub-signal d6 is similar to the expiratory of the original signal. Because the frequency distributions of the crackles, rhonchi, and normal lung sounds are different, the similarities between the sub-signals and the original signals were chosen as the elements of the feature vector. In this study, the kernel function was used to measure the similarity of the original andsub-signals. A kernel function is a symmetric function that measures the correlation of different signals, and is defined as follows: where x and x' are different signals, and ϕ(x) is the function of signal x. Kernel functions are often used with the SVM method, which are proportional to the signal similarity. In this, the Gaussian kernel function, a type of homogeneous kernel, is used through the following: which is proportional to the Euclidean distance of the two signals. In addition, the standard deviation σ is chosen as one. The wavelet correlation coefficients are not used because the Gaussian kernel function scales the similarity values into the range of zero to one. The normalized values are helpful for the training step. Because the kernel function is influenced by the amplitude of the signals, the sub-signals need to first be normalized. The normalization method is based on the power of the signals. The power of a discrete signal is given by the following: where x[n] is the sampling point of the signal, and N is the length of the signal acquired. The original signal is normalized by the following: where P 1 is the power of a standard signal, and s and P 2 are the signal and power of the signal to be processed, respectively. In this way, the amplitudes of the signals are normalized to a similar scale. The similarity between the original signal and the sub-signal in the wavelet layer i is calculated as follows: where s i is the sub-signal of wavelet layer i, and s is the original signal. In addition, SIMR i s are chosen as the first seven elements of the feature vector. B. RELATIVE WAVELET ENERGY The energy of the wavelet coefficients is an intuitive statistic describing the frequency distribution of the wavelet coefficients in different wavelet layers. where W ki is the kth wavelet coefficient in layer i. The energy is defined as the wavelet energy. However, the wavelet energy is proportional to the energy of the original signals. Therefore, the relative wavelet energy (RWE) is introduced as follows: where WEN i is the wavelet energy of layer i, and WEN total is the total energy calculated as where RWEs are chosen as the next seven elements of the feature vector. C. 
WAVELET ENTROPY The Shannon entropy [38] measures the disorder of the probability distribution of a random process. The entropy is defined as H = −Σ_i p_i log p_i, where p_i is the probability of a random process. If the probability distribution is more concentrated, the Shannon entropy is lower; by contrast, if the probability distribution is more dispersed, the Shannon entropy is higher. The idea of entropy is introduced here to measure the disorder of the RWE distribution [20]. If the probability p_i is replaced by RWE_i, the Shannon entropy is defined as the wavelet entropy (WE). As shown in Figure 9, the RWE of the rhonchus concentrates on the sixth approximation coefficients, where it is higher than 60%. The RWE of the crackle concentrates on the sixth approximation coefficients and the sixth detail coefficients. However, the RWE distribution of the normal lung sound signal is more dispersed, and therefore the WE values of the crackle, rhonchus, and normal respiratory sound signals differ. The WE is chosen as the last element of the feature vector. D. STATISTICAL SIGNIFICANCE ANALYSIS Not all the features extracted in this research are equally important for lung sound recognition. If a feature has similar statistical distributions among the crackles, rhonchi, and normal lung sounds, the feature should not be included in the feature vector. Therefore, a statistical significance analysis should be conducted to reduce the computation time and increase the classification accuracy. The authors first conduct a normality test using the Kolmogorov-Smirnov test. The result is shown in Table 2: none of the features follow a normal distribution. Therefore, a non-parametric test should be used to check the significance level of the features. The authors use the Mann-Whitney U test at the 95% confidence level to check the statistical significance of the extracted features. Because there are three classes, the non-parametric tests are carried out in pairs. The results are shown in the following table. The RWE in detail layer 6 and the SIMR in detail layers 2, 3, 4, and 5 are found to be not statistically significant between crackles and wheezes (p > 0.05). However, these features are statistically significant between abnormal and normal signals; therefore, they are retained for recognizing normal lung sounds. The RWE in detail layer 4 is retained for the same reason. Therefore, all the features are retained for the classification. V. CLASSIFICATION DESIGN A. SUPPORT VECTOR MACHINE An SVM is a non-linear classifier that maximizes the margin of the samples in the hyperplane. The margin is the distance between a separating line and the two points closest to the line, as shown in Figure 10. The maximal margin in the hyperplane is called the max-margin hyperplane (MMH). The sample points lying on the MMH are called support vectors, and a classifier based on support vectors is called an SVM. The hyperplane is defined as y = ω·x + b, where x is the feature vector, ω is the weight vector, b is the bias, and y is the output. The hyperplane classifies the samples according to the sign of this expression, and the outputs of the SVM are +1 and −1. The distance from a sample point to the hyperplane is |ω·x + b| / ‖ω‖, so to maximize this distance, ‖ω‖ is minimized.
The training target of SVM is shown as A Lagrangian is selected for optimization: By setting the derivatives of L to zero with respect to ω and b, ω is obtained as follows: The training target is reformulated as follows: To solve the non-separable case, the regularization factors C are introduced and reformulated Eq. (23): To reduce the operational complexity of the inner products, the kernel functions are used to replace the inner product: The regular method used to obtain the coefficients is the sequential minimal optimization (SMO) algorithm [39]. For the SVM parameters, a context-aware support vector machine (C-SVM) is selected as the SVM type and a radial basis function (RBF) as the kernel function. The coefficient γ in the RBF function is 0.6667. The cost C is set to five and the minimum step size is 0.0001. The mean support vector is 334.8. B. BP NEURAL NETWORK The structure of a linear classifier can be defined as follows: where ω i indicates the coefficients of the linear classifier, φ i (x) is the nonlinear function of input data x, and function f (·) is the nonlinear activation function. In the BP neural network, the nonlinear function φ i (x) can be regarded as the same model of (22). The model of the BP neural network is defined as follows: if the nonlinear function η(·) is defined as the network is regarded as a two-layer-network. If the number of layers increases, the function η(·) has the same form as (23). The structure of a two-layer BP neural network is shown in Figure 11, which is divided into an input layer, a middle layer, and an output layer. The input layer has 15 nodes for the 15-dimensional feature vector, the middle layer has 500 nodes, and the output layer has 3 nodes. The outputs of the layer are the probabilities of occurrence of every type of sound. A rectified linear unit (ReLU) is selected as the activation function. The ReLU is defined as follows: The Softmax function is selected as the activation function of the output layer. The Softmax function is defined as The target of the training process is to minimize the loss function. Cross entropy is chosen as the loss function. where y i is the real label of the data, andŷ i is the predicted label of the data. The coefficients are updated by the back propagation. For the optimizer, the root mean square prop (RMSProp) is chosen. C. KTH NEAREST NEIGHBORS The principle of the KNN depends on the Euclidean distance of the different points in an N -dimensional hyperplane, which is defined as follows: where x 1,i and x 2,i are the coordinates of two points. To classify a new sample, the K -nearest points between the sample points are chosen. The sample can be trusted to belong to the classification, which repeats the most in K points. In this research, the number of neighbors K is five. A. ACCURACY OF MODELS To avoid an over-fitting, a k-fold cross-validation process is designed for proving the correction of the scheme. The sampling set is divided into five groups, with 58 rhonchi samples, 42 crackle samples, and 41 normal lung sound samples. During every step of training, four groups are chosen as the training group, and the remaining group is chosen as the test group. The classification accuracy of the model is shown in Figure 12. The average classification accuracies of the SVM, ANN, and KNN are 69.50, 85.43, and 68.51%, respectively. The ANN model is chosen as the classifier of this research. B. 
FURTHER ESTIMATION OF THE ANN MODEL Accuracy, sensitivity, specificity, AUC (micro), AUC (macro), and F1 score are selected as the performance measures. The 5-fold cross-validated models are evaluated and the results are presented in Table 3. The system has multiple classifications. Therefore, the sensitivity and specificity of every category are calculated and the sensitivity and specificity are obtained based on the mean values. The average sensitivity of all models is 86.16%, and the average specificity of all models is 90.49%. The receiver operating characteristic (ROC) curves are shown for all models in Figure 13. We demonstrate the ROC curves of all classes, the macro-average ROC curve, and the micro-average ROC curve together. The area under the curve (AUC) are obtained for the macro-and micro-averages. All AUCs are higher than 90%. The F1 score was also used to evaluate the models. The average F1 score was 0.8608. The models showed good performances for the three categories applied to the test dataset. The models do not overfit the dataset. The true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) are calculated by the mean value of every category. For example, the crackles, the patients with crackles were considered the positive group were evaluated, whereas the patients with rhonchi or normal lung sounds were considered as the negative group. In addition, three groups of TP, FP, TN, and FN were obtained. The final TP, FP, TN, and FN values were calculated using the mean value of the three groups. C. DEEP LEARNING RESULT Deep learning is widely used in sound classification. In this research, the long short-term memory (LSTM) deep learning model, which is widely used in sound recognition, is implemented for lung sound classification. The data are divided into five groups similar to machine learning methods. The network has 50 LSTM cells and a dense layer with the activation function of the soft-max function. The loss function is the categorical cross-entropy. The sensitivity, specificity, and accuracy of the training and test sets are presented in Table 3. The sensitivity, specificity, and accuracy are 100% for the training data. However, they are much lower on the test set, particularly sensitivity. All sensitivities are lower than 70% on the test set. Therefore, the deep learning model overfits when a small sample dataset is used. D. COMPARISON WITH SIMILAR APPROACHES Most studies classifying respiratory sounds have only considered two types of respiratory sounds. Xaviero et al. [20] studied normal lung sounds and wheezes. Ashok et al. [22] studied respiratory sounds to distinguish between normal and abnormal subjects. However, crackles and wheezes are the most common abnormal lung sounds. Crackles and wheezes are symptoms of different respiratory disorders. Therefore, it is necessary to distinguish lung sounds from among crackles, wheezes, and normal lung sounds. Sengupta et al. [14] classified respiratory sounds into crackles, wheezes, and normal lung sounds. However, they used only 72 cycles as the training data. A multi-classification requires more training samples than the binary classification because the models overfit with small training sets. In addition, the feature vector used in this study is low dimensional, and the structure of the neural network is simple. 
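A sketch of the 15-500-3 BP network described in Section V-B follows, assuming TensorFlow/Keras; the number of epochs and the batch size are not reported in the paper and are placeholder values. Note that the layer sizes give 15×500 + 500 = 8,000 hidden-layer parameters and 500×3 + 3 = 1,503 output-layer parameters, i.e., 9,503 trainable parameters in total, consistent with the count quoted next.

```python
import numpy as np
from tensorflow import keras

# 15-500-3 network: ReLU hidden layer, softmax output, cross-entropy loss,
# RMSProp optimizer, as described in the paper. Epochs and batch size are
# not reported and are placeholder values here.
model = keras.Sequential([
    keras.Input(shape=(15,)),
    keras.layers.Dense(500, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()                                # total params: 9,503

# toy training call on random placeholder features/labels
x_train = np.random.rand(564, 15).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(0, 3, size=564), num_classes=3)
model.fit(x_train, y_train, epochs=50, batch_size=32, verbose=0)
```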
There are only 9503 parameters in the model, which is a much smaller number than in deep learning models, such as those developed by Perna and Tagarelli [26] and Chamberlain et al. [19]. The model is only 93 kilobytes (kB) in size and has low computational complexity. Therefore, the model can easily be transplanted into a small auscultation device based on a micro-programmed control unit. Haider et al. [21] use the median frequency and linear predictive coefficients combined with spirometry parameters to classify normal and COPD patients; the classification accuracy reaches 100%. These results show that respiratory sound classification may improve when combined with other medical indexes. Therefore, we will perform pulmonary disease studies in the future by combining respiratory sounds with other medical indexes. E. RESULT OF FEATURE VECTORS WITHOUT SIMRs The novelty of this research is that the similarity between the sub-signals in different wavelet layers and the original signal reflects the frequency distribution of the signal. Therefore, the Gaussian kernel function is used to evaluate the signal similarities. The classification results are compared for feature vectors with and without the wavelet sub-signal similarities. The results are presented in Table 5. From the results, the proposed wavelet sub-signal similarities increase the classification accuracy. F. TEST RESULT WITH OPEN-SOURCE RESPIRATORY SOUND DATABASES The algorithm was also tested using the 2017 ICBHI dataset. The signals are divided into two categories, normal and abnormal sounds, and are split into training, validation, and test groups. The classification results are shown in Table 6. The model proved to be effective on the ICBHI database. VII. CONCLUSION In this paper, a classification model was proposed to classify crackles, rhonchi, and normal lung sounds of patients. The sample contained 705 signals acquired from 130 patients at an AAA-grade hospital in China. The feature vector comprised the relative wavelet energy in seven wavelet layers, the wavelet entropy, and the Gaussian kernel functions measuring the similarity between the wavelet sub-signals in seven layers and the original signal. The artificial neural network showed the highest classification accuracy of 85.43% among the SVM, ANN, and KNN methods. It was found that the similarity between the wavelet decomposition sub-signals and the original signal reflects the time-frequency characteristics of the signals, and these statistics were chosen as elements of the feature vector for classifying normal and abnormal lung sounds. However, some limitations of the methods exist and need to be improved. Because the Gaussian kernel function used in this study is related to the amplitude of the sub-signals, a normalization step is conducted before obtaining the signal similarities; this step may lead to an accumulation of errors. Therefore, a new statistic that measures the signal similarity should be proposed. In addition, the respiratory sounds can be combined with other medical parameters, such as spirometry parameters, for intelligent disease recognition. Furthermore, multi-signal integration studies should be conducted in medical sound signal research. Although there are several wavelet families, including Daubechies, a coif2 wavelet function was chosen in this study because it achieves good performance in practical situations.
Different wavelet functions and their classification accuracies will be considered in future studies.
6,858.6
2020-01-01T00:00:00.000
[ "Computer Science" ]
Novel Biocompatible Au Nanostars@PEG Nanoparticles for In Vivo CT Imaging and Renal Clearance Properties Nanoprobes are rapidly becoming potentially transformative tools in disease diagnostics for a wide range of in vivo computed tomography (CT) imaging applications. Compared with conventional molecular-scale contrast agents, nanoparticles (NPs) promise improved abilities for in vivo detection. In this study, novel star-shaped, polyethylene glycol (PEG)-functionalized Au nanoparticles (AuNS@PEG) with a strong X-ray mass absorption coefficient were synthesized as CT imaging contrast agents. Experimental results revealed that the AuNS@PEG nanoparticles are well constructed, with ultrasmall sizes, effective metabolisability, a high computed tomography value, and outstanding biocompatibility. In vivo imaging also showed that the obtained AuNS@PEG nanoparticles can be efficiently used in CT-enhanced imaging. Therefore, the synthesized AuNS@PEG nanoparticles are a contrast agent with great potential for wide use in CT imaging. Background The past decade has witnessed the rapid development of nanoparticles in nano-biotechnology, owing to their diverse constituent materials and large surface area [1,2]. Among these nanoparticles, Au has wide applications in the biomedical field because of its excellent biocompatibility and affinity [3,4]. In recent years, Au nanoparticles have been widely used in CT imaging owing to their high atomic number, noble-metal character, and chemical inertness, as well as their low reactivity with proteins in the body [5][6][7]. CT imaging is a noninvasive clinical diagnostic tool: tissues or organs of different density and thickness attenuate the X-rays from the generator to varying degrees, forming gray-scale image contrast that reflects their distribution and thereby reveals the relative position of a lesion and changes in its size and shape [8][9][10][11]. Currently, clinical CT contrast agents mainly consist of small-molecule iodine compounds, including organic and inorganic iodine small-molecule compounds such as diatrizoate (diatrizoic acid, DTA) and iohexol (Omnipaque) [12]. However, small-molecule iodine-based contrast agents are cleared quickly, which allows only a very short imaging time, and their kidney toxicity is not low [13,14]. In clinical practice, a deterioration of renal function is one complication of iodinated radiocontrast agents [15]. Therefore, the development of nano-materials provides new ideas and methods to solve these problems. Recent studies have also confirmed that nanoparticle-based CT contrast agents can effectively extend the imaging time, weaken the kidney toxicity, and provide better X-ray attenuation than iodine-based contrast agents; for example, gold nanoparticles and nano-silver particles used as CT contrast agents have attracted researchers' attention [16][17][18]. Dendrimer nano-platforms serve not only to modify small-molecule iodinated contrast media but also as templates to encapsulate and stabilize different inorganic nanoparticles, improving the blood circulation time of the contrast agent and making it better suited for CT imaging [19]. In this study, we prepared PEG-functionalized Au nanostar nanoparticles (AuNS@PEG); owing to their larger surface area compared with normal Au nanoparticles of the same size, Au nanostars can greatly enhance CT imaging. After functionalization with PEG, the Au nanostar nanoparticles gain improved biocompatibility and renal clearance properties.
Various methods, including TEM, EDX, XPS, MTT, and flow cytometry, were used to determine the characters and biocompatibility of AuNS@PEG nanoparticles. In addition, histological analysis and hematology studies had been used for tests about the toxicity of AuNS@PEG nanoparticles in vivo, and the results confirmed the nice biocompatible of AuNS@PEG nanoparticles. Moreover, in vitro and in vivo CT imaging experiments also exhibited the excellent CT imaging capabilities of AuNS@PEG nanoparticles. All of these results revealed that the synthesized contrast agent AuNS@PEG nanoparticles as a great potential candidate could be widely used for CT imaging and had good renal clearance properties. Methods All experimental protocol including any relevant details were approved by the Regional Ethics Committee, Jinzhou Medical University, Liaoning Province, China. Materials and Instruments All chemicals were purchased from Sigma-Aldrich (St. Louis, MO) and used directly unless otherwise noted. Synthesized nanoparticles were characterized by transmission electron microscopy (TEM) and energy dispersive X-ray (EDX) analyses using 200-kV acceleration voltage (Tecnai G2 Twin, FEI, Hillsboro, OR). The TEM sample was prepared by drying diluted nanoparticle solutions on a formvar/carbon-coated copper grid. The samples were prepared by depositing a drop of a diluted colloidal solution on a carbon grid and allowing the liquid to dry in air at room temperature. UV-vis adsorption spectra were recorded on a Shimadzu UV-2450 UV/Vis/NIR spectrophotometer. Dynamic light scattering (DLS) measurement was performed on a Malvern Zetasizer NANO ZS at 25°C. Synthesis of Au Nanostars/PEG (AuNS@PEG) Nanoparticles Au nanostars (Au NS) were synthesized via seedmediated growth method according to a previous report [20][21][22] with some slight adjustments. Typically, Au seeds which were formed with 10-nm diameter were synthesized by the chemical reduction of HAuCl 4 according to previous report [23]; 6 ml HAuCl 4 solution (w/v 1%) was added to 140 ml ultrapure water and heated to boiling while stirring. Then, 0.75 ml of oleylamine was rapidly injected, and the resulting mixture was boiled for another 2 h. The Au colloid was naturally cooled down to room temperature; 60 ml of cyclohexane was added to the colloid, and the solution was magnetically stirred for another 1 h. Subsequently, 1.5 ml of NaOH (4 M) was injected into the mixture while vigorously stirring for another 30 min. The mixture was left to be hierarchical. The Au nanoseed contained in the upper layer was precipitated by adding ethanol. The precipitates were alternately purified with ethanol and water one more time and dispersed in water. Au nanostars with around 50-nm diameter were synthesized according to previous work by rapidly and simultaneously mixing AgNO 3 (1 ml, 3 mM) and ascorbic acid (500 μl, 0.1 M) with 100 ml of a solution containing 0.25 mM HAuCl 4 , 1 mM HCl, and 1.5 ml of the gold nanosphere seeds. Then, the thiolated polyethylene glycol (PEG, 6 kDa) polymer was added in large excess to passivate the nanoparticle surface. The mixture solution was continuously stirred for 24 h, then the obtained AuNS@PEG nanoparticles were collected through 3 cycles of centrifugation/ redispersion in water. The formed AuNS@PEG nanoparticles were redispersed in water for further use. Cell Culture and AuNS@PEG Nanoparticle Exposure Neuroglia cells were collected from rat spinal cord tissues. 
The cells were cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco, USA) supplemented with 10% fetal bovine serum, 100 U per ml penicillin, and 100 μg per ml streptomycin at 37°C in a humidified incubator with 5% CO 2 . Cells were seeded in culture plates followed by exposure to AuNS@PEG nanoparticles for 2 h at certain concentrations (50, 100, 200, 500, and 1000 ppm). DMEM without AuNS@PEG nanoparticles were used as the control group. Animals and Treatment This work was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Jinzhou Medical University (permit number: LMU-2013-368), China. Male Sprague Dawley rats (180-200 g) were purchased from Animal Centre of the Jinzhou Medical University (license number: SCXK 2009-0004). All rats were fed in a temperature-controlled room (25.0 ± 0.2°C) in a Specific Pathogen Free laboratory, with a 12-h/12-h light/dark photoperiod and 50% humidity. The rats were allowed free access to food and water. Humane endpoints are chosen to minimize or terminate the pain or distress of the experimental animals via euthanasia, including inhalant agents, noninhalant pharmaceutical agents, and physical methods, rather than waiting for their deaths as the endpoint. In this work, rats were divided into two groups: (1) control: rats were anesthetized by intraperitoneal injection of chloral hydrate solution (10 wt%), and then, 800 μL of phosphate-buffered saline were injected via the tail vein. (2) Test: rats were anesthetized by intraperitoneal injection of chloral hydrate solution (10 wt%), and then, 800 μL of AuNS@PEG nanoparticle solution (200 μg/ ml) were injected via the tail vein. For the H&E study, the rats were sacrificed by cervical dislocation without prior anesthesia, and their hearts, livers, kidneys, spleens, and intestines were immediately dissected, stored at − 80°C, and snap-frozen in isopentane on dry ice until further processed. Cell Viability Assay Logarithmic-phase neuroglia cells were seeded on a 96-well plate at 1 × 10 4 cells per well in 100 μl cell suspension. Phosphate-buffered saline (PBS) was added to the surrounding wells. The plate was incubated at 37°C and 5% CO 2 for 24 h to allow the cells to adhere. The cells were then allocated to four groups: cells in the control group were incubated in DMEM containing 10% fetal bovine serum; in the AuNS@PEG nanoparticle group, 0, 25, 50, 100, 200, 500, or 1000 ppm AuNS@PEG nanoparticles were added to the culture medium; cells were observed 24 h later under an inverted phase contrast microscope (Leica, Heidelberger, Germany). Subsequently, 20 μl MTT (Sigma, St. Louis, MO, USA) was added to each well for 4 h. The medium was removed, and the cells were incubated with 150 μl of dimethyl sulfoxide for 10 min at 37°C. Optical density (OD) values were measured at 490 nm with a microplate reader (Bio-Rad, Hercules, CA, USA). Flow Cytometry Cells were incubated in 6-well plates for 24 h, then grouped and treated as described above. A single-cell suspension was made using trypsin and centrifuged at 300g for 3 min. Following removal of the supernatant, cells were washed twice with precooled PBS and centrifuged in 1 ml annexin V (Tianjin Sungene Biotech Co, Ltd., Tianjin, China) for 10 min. Cells were adjusted to 10 5 /ml. Cell suspension was centrifuged and washed three times with PBS. 
Samples (100 μl) were added to Eppendorf tubes with 5 μl annexin V-APC (Tianjin Sungene Biotech Co., Ltd.) and 7-AAD (Tianjin Sungene Biotech Co, Ltd.) and mixed. The volume was made up to 500 μl with PBS, and the tubes were incubated at CT Imaging CT imaging was acquired using 128-row 64-slice spiral CT produced by General Electric Company (GE). Imaging parameters were as follows: slice thickness is 0.625; medium is nude mice; tube energy, kvp, is 120 μA and 100 mA; CTDIVOL is 6.53 mGy; and radius is 4.8 cm. All animals were scanned in the cranial to caudal direction from the low chest to the pelvis. CT data were analyzed by images and after-treatment. Histological Analysis The organs were removed and fixed in 4% paraformaldehyde, then with 30% paraformaldehyde sucrose solution once every 2 days, sectioned, and stained with hematoxylin and eosin (H&E) for histological examination using standard techniques. The sections were examined under an inverted phase contrast microscope. Assessment of Renal Function Biochemical analyzer (Jinzhou medical university) were used to evaluate BUN, Crea, β 2 -MG, and CO 2 in the blood. Kidney function was evaluated by the changes of serum levels of BUN, Crea, β 2 -MG, and CO 2 before and after injection of AuNS@PEG nanoparticles on rat. Statistical Analysis Data were expressed as the mean ± SD and were analyzed using GraphPad Prism 5.0 software (GraphPad Software, Inc., La Jolla, CA, USA) and SPSS. Groups were compared using one-way analysis of variance and the least significant difference test. P < 0.05 was considered statistically significant. Synthesis and Characterization of the AuNS@PEG Nanoparticles Nanomaterials enter the human body and play the role of detection. The physical and chemical properties of the nanoparticles are first considered before they enter into the circulatory system [24,25]. As we know, there are two key factors in the development of high-performance nanoprobes for in vivo CT imaging and renal clearance properties. One is further surface functionalization; the other is the size control. A large-scale transmission electron microscopy (TEM) image (Fig. 1a) was used to confirm the structure of AuNS@PEG nanoparticles, which showed obvious the star-structure AuNS@PEG nanoparticles were prepared, and these nanoparticles had the ideal sizes around 50 nm with high uniformity. Then, the elements of Au found in the energy dispersive X-ray (EDX) spectrum of AuNS@PEG nanoparticles also prove the preparation of Au nanostar (Fig. 1c). In addition, the composition on the surface of the AuNS@PEG nanoparticles was further characterized by XPS spectra, and the Au4f, C1s, and O1s derived from Au nanostars and PEG were clearly shown in the Fig. 1b which also confirm the formation of AuNS@PEG nanoparticles. The above characteristics demonstrated the successful synthesis of AuNS@PEG nanoparticles. CT Value of the AuNS@PEG Nanoparticles Au nanoparticles have been widely used as CT contrast agents because of their better X-ray attenuation property than conventional iodine-based small-molecule CT contrast agents. Iodine (Z = 53) has historically been the atom of the first choice in CT imaging field. To assess the feasibility of AuNS@PEG nanoparticles for X-ray computed tomography imaging, we measured the CT values (Hounsfield units, HU). Figure 2a shows that the AuNS@PEG nanoparticles have higher CT value compared to the iodine and DI water at the same concentration. 
When the AuNS@PEG nanoparticle concentration increased, the CT image intensity also continuously increased with brighter images. By plotting the CT value (in HU) of the AuNS@PEG as function of concentration (Fig. 2b), we could see a linear attenuate of the CT value of AuNS@PEG nanoparticles with the different concentrations. These results reveal that AuNS@PEG nanoparticles are ideal candidates for a positive CT imaging nanoprobe. Cytotoxicity Assay It was crucial to investigate the biocompatibility of AuNS@PEG nanoparticles in vitro before it was used in CT imaging in vivo as a contrast agent. MTT assay was performed to evaluate their cytotoxicity on neuroglia cells. After incubation with AuNS@PEG nanoparticles at different concentrations (25, 50, 100, 200, 500, and 1000 ppm, respectively) for 24 h, an MTT viability assay of neuroglia cells was carried out. It could be seen that the viability of the cells after treatment with AuNS@PEG nanoparticles in the studied concentration range is quite similar to the control, which clearly indicated that the formed AuNS@PEG nanoparticles have a good cytocompatibility at a concentration up to 200 ppm. Even at a relatively high dose of nanoparticles (1000 ppm), the cell viability still remained above 90% (Fig. 3a). The cytocompatibility of the AuNS@PEG nanoparticles was further confirmed by flow cytometric analysis of the cells treated with the AuNS@PEG nanoparticles at different concentrations for 2 h. In the flow cytometric analysis, cells were stained with annexin V-APC and 7-AAD after treatment with PBS and AuNS@PEG nanoparticles. Neuroglia cells treated with PBS without staining was used as the control (Fig. 3bi). It could be seen that cells treated with the AuNS@PEG nanoparticles at concentrations of 25, 50, 100, 200, 500, and 1000 ppm, respectively (Fig. 3bi-vii). Taken together with the results from MTT assay, our results exhibited that the AuN-S@PEG nanoparticles have good cytocompatible, and there was no obvious cellular morphology change after treatment with the AuNS@PEG nanoparticles, which agreed with the MTT data. In Vivo CT Imaging and Biodistribution Encouraged by their high CT contrast performance in the in vitro experiment, we have further confirmed the feasibility of AuNS@PEG nanoparticles as a CT contrast agent in vivo. AuNS@PEG nanoparticles (200 ppm) were injected intravenously into the tail veins of the rat. Such a dose of the AuNS@PEG nanoparticles was chosen because of the results of low toxicity and apoptosis percentage of MTT and flow cytometry and high sensitivity of CT. The CT imaging of the important organ regions were recorded before tail vein injection and at different time points post tail vein injection (Fig. 4). Our study aim to test the capacity of CT imaging and renal clearance. So we stress the change of the organ of kidney and bladder in the CT imaging. Figure 4a is the CT image of the rat kidney before injection. Compared with preinjection, the kidney imaging is greatly enhanced from (Fig. 4b-d). The time-dependent distribution of the AuNS@PEG nanoparticles in the rat was also tracked by CT signal value after intravenous injection. The kidney and bladder imaging were greatly enhanced from 0.5 to 2 h, and HU value of them rose from 95 to 464 and 105 to 664. After 6 h post-injection, the CT contrast intensity in the kidney of rat obviously decrease over time (Fig. 4e). 
After 24 h post-injection, the CT image of the bladder is completely clear, showing the excellent renal clearance properties of AuNS@PEG nanoparticles (Fig. 4f). Owing to their optimal particle size and surface functionalization, the elimination of AuNS@PEG nanoparticles from the blood during circulation is relatively slow. Hence, these results indicate that the as-prepared AuNS@PEG nanoparticles might be a unique and promising nanoprobe for providing real-time CT imaging in vivo. This is beneficial for future clinical applications because the contrast agents can be administered to patients in the hospital. H&E Staining Histological examination of the major organs was performed 24 h post-injection of AuNS@PEG nanoparticles, and the results are shown in Fig. 5. No obvious change in the histology of the major organs was observed and, most importantly, no residual AuNS@PEG nanoparticles were left in these organs. Based on the above results, the AuNS@PEG nanoparticles exhibited good biocompatibility and no obvious in vivo toxicity, which makes them promising as a new CT imaging contrast agent for biomedical applications. Renal Function Study of AuNS@PEG Nanoparticles To further evaluate the in vivo toxicity of AuNS@PEG nanoparticles, the serum parameters BUN, Crea, β2-MG, and CO2 were measured to study renal function. These values indicate whether the renal function of the rat is normal. (Table 1 reports the effect of AuNS@PEG nanoparticles on BUN (mmol/l), Crea (μmol/l), β2-MG (mg/l), and CO2 (mmol/l) levels before and after injection in rats, n = 5.) The value of BUN can assess the rat's urinary function. Changes in the Crea value reflect various diseases in the rat's body. The β2-MG concentration is mainly related to renal tubular function, and the value of CO2 can evaluate the acidification function of the renal tubule. Rats were given AuNS@PEG nanoparticles at a concentration of 200 ppm. These levels were examined 24 h after injection, and there was no difference before and after the injection of AuNS@PEG nanoparticles in the rats (Table 1). Conclusions In summary, we developed facile AuNS@PEG nanoparticles for applications in CT imaging. The formed AuNS@PEG nanoparticles have ultrasmall sizes, low toxicity, good water dispersibility, hemocompatibility, and cytocompatibility in the given concentration range. The CT values show that the AuNS@PEG nanoparticles provide good bright-contrast imaging. In vitro imaging results indicate that the AuNS@PEG nanoparticles possess strong X-ray attenuation properties as a new contrast agent for CT imaging applications, which was also demonstrated by CT imaging of the rat kidney in vivo. Moreover, the biodistribution study and the exploration of in vivo toxicity show that AuNS@PEG nanoparticles can be metabolized and have high biocompatibility. Thus, AuNS@PEG nanoparticles can be promising candidates for medical applications.
Generating Syntactically Controlled Paraphrases without Using Annotated Parallel Pairs Paraphrase generation plays an essential role in natural language processing (NLP), and it has many downstream applications. However, training supervised paraphrase models requires many annotated paraphrase pairs, which are usually costly to obtain. On the other hand, the paraphrases generated by existing unsupervised approaches are usually syntactically similar to the source sentences and are limited in diversity. In this paper, we demonstrate that it is possible to generate syntactically diverse paraphrases without the need for annotated paraphrase pairs. We propose the Syntactically controlled Paraphrase Generator (SynPG), an encoder-decoder based model that learns to disentangle the semantics and the syntax of a sentence from a collection of unannotated texts. The disentanglement enables SynPG to control the syntax of output paraphrases by manipulating the embedding in the syntactic space. Extensive experiments using automatic metrics and human evaluation show that SynPG performs better syntactic control than unsupervised baselines, while the quality of the generated paraphrases is competitive. We also demonstrate that the performance of SynPG is competitive with or even better than that of supervised models when the amount of unannotated data is large. Finally, we show that the syntactically controlled paraphrases generated by SynPG can be utilized for data augmentation to improve the robustness of NLP models. Introduction Paraphrase generation (McKeown, 1983) is a long-standing task in natural language processing (NLP) and has been greatly improved by recently developed machine learning approaches and large data collections. Paraphrase generation demonstrates the potential of machines in semantic abstraction and sentence reorganization and has already been applied to many NLP downstream applications, such as question answering (Yu et al., 2018), chatbot engines (Yan et al., 2016), and sentence simplification. Figure 1: Paraphrase generation with syntactic control. Given a source sentence and a target syntactic specification (either a full parse tree or the top levels of a parse tree), the model is expected to generate a paraphrase whose syntax follows the given specification. In recent years, various approaches have been proposed to train sequence-to-sequence (seq2seq) models on a large number of annotated paraphrase pairs (Prakash et al., 2016; Cao et al., 2017; Egonmwan and Chali, 2019). Some of them control the syntax of output sentences to improve the diversity of paraphrase generation (Iyyer et al., 2018; Goyal and Durrett, 2020; Kumar et al., 2020). However, collecting annotated pairs is expensive and induces challenges for some languages and domains. In contrast, unsupervised approaches build paraphrase models without using parallel corpora (Li et al., 2018; Roy and Grangier, 2019; Zhang et al., 2019). Most of them are based on the variational autoencoder (Bowman et al., 2016) or back-translation (Hu et al., 2019). Nevertheless, without explicit control of syntax, their generated paraphrases are often similar to the source sentences and are not syntactically diverse. This paper presents a pioneering study on syntactically controlled paraphrase generation based on disentangling semantics and syntax. We aim to disentangle one sentence into two parts: 1) the semantic part and 2) the syntactic part.
The semantic part captures the meaning of the sentence, while the syntactic part represents the grammatical structure. When two sentences are paraphrases of each other, their semantic parts are supposed to be similar, while their syntactic parts can differ. To generate a syntactically different paraphrase of a sentence, we can therefore keep its semantic part unchanged and modify its syntactic part. Based on this idea, we propose the Syntactically Controlled Paraphrase Generator (SynPG), a Transformer-based model (Vaswani et al., 2017) that can generate syntactically different paraphrases of a source sentence based on given target syntactic parses. SynPG consists of a semantic encoder, a syntactic encoder, and a decoder. The semantic encoder treats the source sentence as a bag of words without ordering and learns a contextualized embedding containing only the semantic information. The syntactic encoder embeds the target parse into a contextualized embedding containing only the syntactic information. The decoder then combines the two representations and generates a paraphrase sentence. The design of disentangling semantics and syntax enables SynPG to learn the association between words and parses and to be trained by reconstructing the source sentence given its unordered words and its parse. Therefore, we require only unannotated texts, not annotated paraphrase pairs, to train SynPG. We verify SynPG on four paraphrase datasets: ParaNMT-50M, Quora (Iyer et al., 2017), PAN (Madnani et al., 2012), and MRPC (Dolan et al., 2004). The experimental results reveal that, when provided with the syntactic structures of the target sentences, SynPG can generate paraphrases whose syntax is more similar to the ground truth than those of the unsupervised baselines. The human evaluation results indicate that SynPG achieves paraphrase quality competitive with the other baselines, while its generated paraphrases follow the syntactic specifications more accurately. In addition, we show that when the training data is large enough, the performance of SynPG is competitive with or even better than that of supervised approaches. Finally, we demonstrate that the syntactically controlled paraphrases generated by SynPG can be used for data augmentation to defend against syntactic adversarial attacks (Iyyer et al., 2018) and improve the robustness of NLP models. (Our code and the pretrained models are available at https://github.com/uclanlp/synpg.) Unsupervised Paraphrase Generation We aim to train a paraphrase model without using annotated paraphrase pairs. Given a source sentence x = (x_1, x_2, ..., x_n), our goal is to generate a paraphrase sentence y = (y_1, y_2, ..., y_m) that preserves the meaning of x but has a different syntactic structure from x. Syntactic control. Motivated by previous work (Iyyer et al., 2018; Zhang et al., 2019; Kumar et al., 2020), we allow our model to access an additional syntactic specification as a control signal to guide the paraphrase generation. More specifically, in addition to the source sentence x, we give the model a target constituency parse p as another input. Given the input (x, p), the model is expected to generate a paraphrase y that is semantically similar to the source sentence x and syntactically follows the target parse p. In the following discussion, we assume the target parse p to be a full constituency parse tree. Later, in Section 2.3, we relax the syntactic guidance to a template, defined as the top two levels of a full parse tree.
We expect that a successful model can control the syntax of output sentences and generate syntactically different paraphrases based on different target parses, as illustrated in Figure 1. Similar to previous work (Iyyer et al., 2018; Zhang et al., 2019), we linearize the constituency parse tree into a sequence. For example, the linearized parse of the sentence "He eats apples." is (S(NP(PRP))(VP(VBZ)(NP(NNS)))(.)). Accordingly, a parse tree can be treated as a sentence p = (p_1, p_2, ..., p_k), where the tokens in p are non-terminal symbols and parentheses. Proposed Model Our main idea is to disentangle a sentence into a semantic part and a syntactic part. Once the model learns the disentanglement, it can generate a syntactically different paraphrase of a given sentence by keeping its semantic part unchanged and modifying only the syntactic part. Figure 2 illustrates the proposed paraphrase model, SynPG, a seq2seq model consisting of a semantic encoder, a syntactic encoder, and a decoder. The semantic encoder captures only the semantic information of the source sentence x, while the syntactic encoder extracts only the syntactic information from the target parse p. (Figure 2: SynPG embeds the source sentence and the target parse into a semantic embedding and a syntactic embedding, respectively; SynPG then generates a paraphrase sentence based on the two embeddings.) The decoder then combines the encoded semantic and syntactic information and generates a paraphrase y. We discuss the details of SynPG in the following. Semantic encoder. The semantic embedding z_sem is supposed to contain only the semantic information of the source sentence x. To separate the semantic information from the syntactic information, we use a Transformer (Vaswani et al., 2017) without positional encoding as the semantic encoder. We posit that by removing position information from the source sentence x, the semantic embedding z_sem encodes less syntactic information. We assume that words without ordering capture most of the semantics of a sentence. Admittedly, semantics is also related to word order; for example, exchanging the subject and the object of a sentence changes its meaning. However, a decoder trained on a large corpus also captures selectional preferences (Katz and Fodor, 1963; Wilks, 1975) during generation, which enables it to infer a proper order of words. In addition, we observe that when two sentences are paraphrases, they usually share similar words, especially the words related to the semantics. For example, "What is the best way to improve writing skills?" and "How can I improve my writing skills?" are paraphrases, and the shared words (improve, writing, and skills) are strongly related to the semantics. In Section 4, we show that our semantic embedding captures enough semantic information to generate paraphrases. Syntactic encoder. Since the target parse p contains no semantic information but only syntactic information, we use a Transformer with positional encoding as the syntactic encoder. Decoder. Finally, we design a decoder that takes the semantic embedding z_sem and the syntactic embedding z_syn as input and generates a paraphrase y. In other words, y = (y_1, y_2, ..., y_m) = Dec(z_sem, z_syn). We use a Transformer as the decoder to generate y autoregressively. Notice that the semantic embedding z_sem does not encode position information and the syntactic embedding z_syn does not contain semantics.
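To make this division of labor concrete, the following is a minimal, hypothetical PyTorch sketch of the three-module design described above. It is not the authors' released implementation; the module names, dimensions, and the choice to let the decoder attend over the concatenation of the two encoder memories are illustrative assumptions.

```python
# Minimal sketch of a SynPG-style model (illustrative, not the official code).
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding (used only by the syntactic encoder and decoder)."""
    def __init__(self, d_model, max_len=512):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                       # x: (seq_len, batch, d_model)
        return x + self.pe[: x.size(0)].unsqueeze(1)

class SynPGSketch(nn.Module):
    def __init__(self, vocab_size, parse_vocab_size, d_model=512, nhead=8, nlayers=6):
        super().__init__()
        self.word_emb  = nn.Embedding(vocab_size, d_model)
        self.parse_emb = nn.Embedding(parse_vocab_size, d_model)
        self.pos_enc   = PositionalEncoding(d_model)
        sem_layer = nn.TransformerEncoderLayer(d_model, nhead)
        syn_layer = nn.TransformerEncoderLayer(d_model, nhead)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead)
        self.semantic_encoder  = nn.TransformerEncoder(sem_layer, nlayers)   # no positional encoding
        self.syntactic_encoder = nn.TransformerEncoder(syn_layer, nlayers)   # with positional encoding
        self.decoder  = nn.TransformerDecoder(dec_layer, nlayers)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_tokens, parse_tokens, tgt_tokens):
        # Semantic path: word embeddings only, so word order is invisible to this encoder.
        z_sem = self.semantic_encoder(self.word_emb(src_tokens))
        # Syntactic path: parse-token embeddings plus positions, so structure is preserved.
        z_syn = self.syntactic_encoder(self.pos_enc(self.parse_emb(parse_tokens)))
        memory = torch.cat([z_sem, z_syn], dim=0)          # decoder attends to both embeddings
        tgt = self.pos_enc(self.word_emb(tgt_tokens))
        causal = torch.triu(torch.full((tgt.size(0), tgt.size(0)), float("-inf")), diagonal=1)
        return self.out_proj(self.decoder(tgt, memory, tgt_mask=causal))
```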
This separation forces the decoder to extract the semantics from z_sem and retrieve the syntactic structure from z_syn. The attention weights over z_sem and z_syn let the decoder learn the association between semantics and syntax as well as the relation between word order and parse structure. Therefore, SynPG is able to reorganize the source sentence and rephrase it according to the given syntactic structure. Unsupervised Training Our disentanglement design makes it possible to train SynPG without using annotated pairs. We train SynPG with the objective of reconstructing the source sentences. More specifically, when training on a sentence x, we first separate x into two parts: 1) an unordered word list x̃ and 2) its linearized parse p_x (obtained with a pretrained parser). SynPG is then trained to reconstruct x from (x̃, p_x) with a reconstruction loss (the token-level cross-entropy of predicting x given (x̃, p_x)). Notice that if we do not disentangle the semantics and the syntax and directly use a seq2seq model to reconstruct x from (x, p_x), the seq2seq model is likely to learn only to copy x and to ignore p_x, since x contains all the information necessary for the reconstruction. Consequently, at inference time, no matter what target parse p is given, the seq2seq model always copies the whole source sentence x as the output (more discussion in Section 4). In contrast, SynPG learns the disentangled embeddings z_sem and z_syn, which makes it capture the relation between the semantics and the syntax in order to reconstruct the source sentence x. Therefore, at test time, given the source sentence x and a new target parse p, SynPG is able to apply the learned relation to rephrase the source sentence x according to the target parse p. Word dropout. We observe that the ground truth paraphrase may contain some words that do not appear in the source sentence; however, the paraphrases generated by the vanilla SynPG tend to include only words appearing in the source sentence because of the reconstruction training objective. To encourage SynPG to improve the diversity of word choices in the generated paraphrases, we randomly discard some words from the source sentence during training. More precisely, each word has a fixed probability of being dropped in each training iteration. Accordingly, SynPG has to predict the missing words during the reconstruction, which enables it to produce words beyond those in the source sentence when generating paraphrases. More details are discussed in Section 4.5. Templates and Parse Generator In the previous discussion, we assumed that a full target constituency parse tree is provided as input to SynPG. However, the full parse tree of the target paraphrase sentence is unlikely to be available at inference time. Therefore, following the setting in Iyyer et al. (2018), we consider generating the paraphrase based on a template, defined as the top two levels of the full constituency parse tree. For example, the template of (S(NP(PRP))(VP(VBZ)(NP(NNS)))(.)) is (S(NP)(VP)(.)). Motivated by Iyyer et al. (2018), we train a parse generator to generate full parses from templates. The parse generator has the same architecture as SynPG, but its input and output are different. The parse generator takes two inputs: a tag sequence tag_x and a target template t. The tag sequence tag_x contains all the POS tags of the source sentence x. For example, the tag sequence of the sentence "He eats apples." is "<PRP> <VBZ> <NNS> <.>".
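The preprocessing steps described above (linearizing a parse, truncating it to a two-level template, extracting the POS tag sequence, and applying word dropout to the shuffled source tokens) can be illustrated with a small, self-contained sketch. The tree representation and helper names here are illustrative assumptions, not the paper's actual code.

```python
# Illustrative preprocessing helpers for a SynPG-style setup (hypothetical, not the official code).
import random

# A constituency parse is represented here as a nested tuple: (label, children...),
# where a pre-terminal POS tag has no children, e.g. ("PRP",).
PARSE = ("S", ("NP", ("PRP",)), ("VP", ("VBZ",), ("NP", ("NNS",))), (".",))

def linearize(tree):
    """Turn a parse tree into a bracketed string, e.g. (S(NP(PRP))...)."""
    label, children = tree[0], tree[1:]
    return "(" + label + "".join(linearize(c) for c in children) + ")"

def template(tree, depth=2):
    """Keep only the top `depth` levels of the tree (the paper's template uses depth 2)."""
    label, children = tree[0], tree[1:]
    if depth == 1 or not children:
        return (label,)
    return (label,) + tuple(template(c, depth - 1) for c in children)

def tag_sequence(tree):
    """Collect the pre-terminal POS tags left to right, e.g. <PRP> <VBZ> <NNS> <.>."""
    label, children = tree[0], tree[1:]
    if not children:
        return ["<" + label + ">"]
    return [t for c in children for t in tag_sequence(c)]

def make_training_input(sentence, dropout=0.4, seed=0):
    """Shuffle the source words (bag of words) and drop each with probability `dropout`."""
    rng = random.Random(seed)
    words = sentence.split()
    rng.shuffle(words)
    return [w for w in words if rng.random() >= dropout]

print(linearize(PARSE))                 # (S(NP(PRP))(VP(VBZ)(NP(NNS)))(.))
print(linearize(template(PARSE)))       # (S(NP)(VP)(.))
print(" ".join(tag_sequence(PARSE)))    # <PRP> <VBZ> <NNS> <.>
print(make_training_input("He eats apples ."))
```

At training time, the shuffled, dropout-filtered word list plays the role of the unordered input x̃, while the linearized full parse of the same sentence serves as the syntactic input.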
Similar to the source sentence in SynPG, we do not consider the word order of the tag sequence during encoding. The expected output of the parse generator is a full parse p̃ whose syntactic structure follows the target template t. We train the parse generator without any additional annotations as well: letting t_x be the template of p_x (the parse of x), we train the parse generator end-to-end with input (tag_x, t_x) and output p_x. Generating paraphrases from templates. The parse generator allows us to generate paraphrases by providing target templates instead of target parses. The steps to generate a paraphrase given a source sentence x and a target template t are as follows: 1. Get the tag sequence tag_x of the source sentence x. 2. Use the parse generator to generate a full parse p̃ with input (tag_x, t). 3. Use SynPG to generate a paraphrase y with input (x, p̃). Post-processing. We notice that certain templates are not suitable for some source sentences, and the resulting paraphrases are therefore nonsensical. Following Iyyer et al. (2018), we use n-gram overlap and paraphrastic similarity computed by a pretrained sentence-similarity model to filter out nonsensical paraphrases. Datasets For the training data, we consider ParaNMT-50M, a paraphrase dataset containing over 50 million pairs of reference sentences and the corresponding paraphrases as well as quality scores. We select about 21 million pairs with higher quality scores as our training examples. Note that we use only the reference sentences to train SynPG and the unsupervised paraphrase models since we do not require paraphrase pairs. We sample 6,400 pairs from ParaNMT-50M as the testing data. To evaluate the transferability of SynPG, we also consider three other datasets: 1) Quora (Iyer et al., 2017) contains over 400,000 paraphrase pairs, from which we sample 6,400 pairs. 2) PAN (Madnani et al., 2012) contains 5,000 paraphrase pairs. 3) MRPC (Dolan et al., 2004) contains 2,753 paraphrase pairs. Evaluation We use paraphrase pairs to evaluate all the models. For each test paraphrase pair (x_1, x_2), we consider x_1 as the source sentence and treat x_2 as the target sentence (ground truth). Let p_2 be the parse of x_2; given (x_1, p_2), the model is expected to generate a paraphrase y that is similar to the target sentence x_2. We use the BLEU score (Papineni et al., 2002) and human evaluation to measure the similarity between x_2 and y. Moreover, to evaluate how well the generated paraphrase y follows the target parse p_2, we define the template matching accuracy (TMA) as follows. For each ground truth sentence x_2 and the corresponding generated paraphrase y, we obtain their parses (p_2 and p_y) and templates (t_2 and t_y), and we calculate the percentage of pairs whose t_y exactly matches t_2 as the template matching accuracy. Models for Comparison We consider the following unsupervised paraphrase models: 1) CopyInput: a naive baseline that directly copies the source sentence as the output without paraphrasing. 2) BackTrans: back-translation has been proposed for generating paraphrases (Hu et al., 2019). In our experiment, we use the pretrained EN-DE and DE-EN translation models proposed by Ng et al. (2019). Note that training translation models requires additional translation pairs, so BackTrans needs more resources than our approach, and such translation data may not be available for some low-resource languages. 3) VAE: we consider a vanilla variational autoencoder (Bowman et al., 2016) as a simple baseline.
4) SIVAE: the syntax-infused variational autoencoder (Zhang et al., 2019) utilizes additional syntax information to improve the quality of sentence generation and paraphrase generation. Unlike SynPG, SIVAE does not disentangle the semantics and the syntax. 5) Seq2seq-Syn: we train a seq2seq model with the Transformer architecture to reconstruct x from (x, p_x) without the disentanglement. We use this model to study the influence of the disentanglement. 6) SynPG: our proposed model, which learns disentangled embeddings. We also compare SynPG with supervised approaches: 1) Seq2seq-Sup: a seq2seq model with the Transformer architecture trained on the full set of ParaNMT-50M pairs. 2) SCPN: the syntactically controlled paraphrase network (Iyyer et al., 2018), a supervised paraphrase model with syntactic control trained on ParaNMT-50M pairs; we use their pretrained model. Implementation Details We use byte pair encoding (Sennrich et al., 2016) for tokenization and the Stanford CoreNLP parser to obtain constituency parses. We set the maximum sentence length to 40 and the maximum length of linearized parses to 160 for all models. For the encoders and the decoder of SynPG, we use the standard Transformer (Vaswani et al., 2017) with default parameters. The word embeddings are initialized with GloVe (Pennington et al., 2014). We use the Adam optimizer with a learning rate of 10^-4 and a weight decay of 10^-5. We set the word dropout probability to 0.4 (more discussion in Section 4.5). The number of training epochs is set to 5. Seq2seq-Syn and Seq2seq-Sup are trained with similar settings. We reimplement VAE and SIVAE, and all parameters are set to the default values in the original papers. Syntactic Control We first discuss whether the syntactic specification enables SynPG to control the output syntax better. Table 1 shows the template matching accuracy and BLEU score for SynPG and the unsupervised baselines. Note that here we use the full parse trees as the syntactic specifications; we discuss the influence of using templates as the syntactic specifications in Section 4.3. Although we train SynPG on the reference sentences of ParaNMT-50M, we observe that SynPG also performs well on Quora, PAN, and MRPC. This validates that SynPG indeed learns syntactic rules and can transfer the learned knowledge to other datasets. CopyInput obtains high BLEU scores; however, due to the lack of paraphrasing, it obtains low template matching scores. Compared with the unsupervised baselines, SynPG achieves higher template matching accuracy and higher BLEU scores on all datasets, which verifies that the syntactic specification is indeed helpful for syntactic control. Next, we compare SynPG with Seq2seq-Syn and SIVAE. All models are given syntactic specifications; however, without the disentanglement, Seq2seq-Syn and SIVAE tend to copy the source sentence as the output and therefore get low template matching scores. Table 2 lists some paraphrase examples generated by all models. Again, we observe that without syntactic specifications, the paraphrases generated by the unsupervised baselines are similar to the source sentences. Without the disentanglement, Seq2seq-Syn and SIVAE always copy the source sentences. SynPG is the only model that can generate paraphrases syntactically similar to the ground truths. Human Evaluation We perform human evaluation using Amazon Mechanical Turk to assess the quality of the generated paraphrases.
We follow the setting of previous work (Kok and Brockett, 2010; Iyyer et al., 2018; Goyal and Durrett, 2020). For each model, we randomly select 100 pairs of source sentence x and corresponding generated paraphrase y from the ParaNMT-50M test set (after the post-processing described in Section 2.3) and have three Turkers annotate each pair. The annotations are on a three-point scale: 0 means y is not a paraphrase of x; 1 means x is paraphrased into y but y contains some grammatical errors; 2 means x is paraphrased into y and y is grammatically correct. The results of the human evaluation are reported in Table 3 (human evaluation on a three-point scale: 0 = not a paraphrase, 1 = ungrammatical paraphrase, 2 = grammatical paraphrase; SynPG performs better on hit rate, defined as the percentage of generated paraphrases rated 2 that also match the target parse, than other unsupervised models). If paraphrases rated 1 or 2 are considered meaningful, we notice that SynPG generates meaningful paraphrases at a similar frequency to SIVAE. However, SynPG tends to generate more ungrammatical paraphrases (those rated 1). We think the reason is that most paraphrases generated by SIVAE are very similar to the source sentences, which are usually grammatically correct, whereas SynPG is encouraged to use syntactic structures different from those of the source sentences, which may introduce some grammatical errors. Furthermore, we calculate the hit rate, the percentage of generated paraphrases that are rated 2 and match the target parse at the same time. The hit rate measures how often the generated paraphrases follow the target parses and preserve the semantics (as verified by human evaluation) simultaneously. The results show that SynPG achieves a higher hit rate than the other models. Target Parses vs. Target Templates Next, we discuss the influence of generating paraphrases using templates instead of full parse trees. For each paraphrase pair (x_1, x_2) in the test data, we consider two ways to generate the paraphrase. 1) Generating the paraphrase with the target parse: we use SynPG to generate a paraphrase directly from (x_1, p_2). 2) Generating the paraphrase with the target template: we first use the parse generator to generate a parse p̃ from (tag_1, t_2), where tag_1 is the tag sequence of x_1 and t_2 is the template of p_2, and then use SynPG to generate a paraphrase from (x_1, p̃). We calculate the template matching accuracy to compare these two ways of generating paraphrases, as shown in Table 4 (influence of using templates). We also report the template matching accuracy of the generated parse p̃. We find that most of the generated parses p̃ indeed follow the target templates, which means that the parse generator usually generates good parses. Next, we observe that generating paraphrases with target parses usually performs better than generating with target templates. The results show a trade-off: using templates requires less effort during generation but may compromise syntactic control, whereas using target parses requires providing more detailed parses but gives the model better control over the syntax. Another benefit of generating paraphrases with target templates is that we can easily generate many syntactically different paraphrases by feeding the model with different templates.
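The template matching accuracy used in Tables 1 and 4 reduces to an exact-match check over two-level templates. A toy, hypothetical sketch of that check, assuming parses are given as bracketed strings, might look as follows.

```python
# Toy template matching accuracy (TMA): the fraction of pairs whose generated
# paraphrase has exactly the same two-level template as the ground truth.

def top_two_levels(linearized_parse: str) -> str:
    """Truncate a bracketed parse string to its top two levels of labels."""
    out, depth = [], 0
    for tok in linearized_parse.replace("(", " ( ").replace(")", " ) ").split():
        if tok == "(":
            depth += 1
            if depth <= 2:
                out.append("(")
        elif tok == ")":
            if depth <= 2:
                out.append(")")
            depth -= 1
        else:
            if depth <= 2:
                out.append(tok)
    return "".join(out)

def template_matching_accuracy(generated_parses, target_parses):
    matches = sum(
        top_two_levels(g) == top_two_levels(t)
        for g, t in zip(generated_parses, target_parses)
    )
    return matches / len(target_parses)

gold = ["(S(NP(PRP))(VP(VBZ)(NP(NNS)))(.))"]
gen  = ["(S(NP(NNS))(VP(VBP)(NP(PRP)))(.))"]   # different tags below, same two-level template
print(template_matching_accuracy(gen, gold))    # 1.0
```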
Table 5 lists some paraphrases generated by SynPG with different templates. We can see that most generated paraphrases are grammatically correct and have meanings similar to the original sentence. Training SynPG on a Larger Dataset Finally, we demonstrate that the performance of SynPG can be further improved, and can even be competitive with supervised models on some datasets, if we use more training data. The advantage of unsupervised paraphrase models is that we do not require parallel pairs for training; therefore, we can easily boost the performance of SynPG by incorporating more unannotated text into training. We consider SynPG-Large, the SynPG model trained on the reference sentences of ParaNMT-50M as well as the One Billion Word Benchmark (Chelba et al., 2014), a large corpus for training language models. We sample about 24 million sentences from One Billion Word and add them to the training set. In addition, we fine-tune SynPG-Large on only the reference sentences of the testing paraphrase pairs, which we call SynPG-FT. From Table 6, we observe that enlarging the training set indeed improves performance. Moreover, with fine-tuning, the performance of SynPG improves substantially and is even better than the performance of supervised models on some datasets. These results demonstrate the potential of unsupervised paraphrase generation with syntactic control. Word Dropout Rate The word dropout rate plays an important role for SynPG since it controls the ability of SynPG to generate new words in paraphrases. We test different word dropout rates and report the BLEU scores and the template matching accuracy in Figure 3 (influence of the word dropout rate). From Figure 3a, we observe that setting the word dropout rate to 0.4 achieves the best BLEU score on most datasets; the only exception is ParaNMT, which is the dataset used for training. On the other hand, Figure 3b shows that a higher word dropout rate leads to better template matching accuracy. The reason is that a higher word dropout rate gives SynPG more flexibility in generating paraphrases, so the generated paraphrases can match the target syntactic specifications better. However, a higher word dropout rate also makes SynPG less able to preserve the meaning of the source sentences. Considering all these factors, we recommend setting the word dropout rate to 0.4 for SynPG. Improving Robustness of Models Recently, much work has shown that NLP models can be fooled by different types of adversarial attacks (Alzantot et al., 2018; Ebrahimi et al., 2018; Iyyer et al., 2018; Tan et al., 2020; Jin et al., 2020). These attacks generate adversarial examples by slightly modifying the original sentences without changing their meanings, yet the NLP models change their predictions on those examples, whereas a robust model is expected to output the same labels. Therefore, making NLP models insensitive to adversarial examples has become an important task. Since SynPG is able to generate syntactically different paraphrases, we can improve the robustness of NLP models through data augmentation, as sketched below. The models trained with data augmentation are thus more robust to syntactically adversarial examples (Iyyer et al., 2018), i.e., adversarial sentences that are paraphrases of the original sentences but differ in syntax.
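The augmentation procedure can be summarized with a short, hypothetical sketch: for each training sentence, a handful of templates are fed to the paraphrase model and the resulting paraphrases are added to the training set with the original label. The `generate_paraphrase` callable and the template list stand in for the actual model interface and are assumptions, not the released API.

```python
# Hypothetical data-augmentation loop using a SynPG-style paraphrase generator.
# `generate_paraphrase(sentence, template)` stands in for the real model call.
from typing import Callable, List, Tuple

COMMON_TEMPLATES = [          # illustrative two-level templates
    "(S(NP)(VP)(.))",
    "(SBARQ(WHADVP)(SQ)(.))",
    "(S(VP)(.))",
    "(S(PP)(,)(NP)(VP)(.))",
]

def augment_dataset(
    data: List[Tuple[str, int]],
    generate_paraphrase: Callable[[str, str], str],
    n_per_example: int = 4,
) -> List[Tuple[str, int]]:
    """Return the original labeled examples plus syntactically varied paraphrases."""
    augmented = list(data)
    for sentence, label in data:
        for template in COMMON_TEMPLATES[:n_per_example]:
            paraphrase = generate_paraphrase(sentence, template)
            # Keep only genuine, non-duplicate rewrites.
            if paraphrase and paraphrase != sentence and (paraphrase, label) not in augmented:
                augmented.append((paraphrase, label))
    return augmented

# Example with a trivial stand-in generator (real usage would call the trained model):
toy = [("the movie was surprisingly good", 1)]
print(augment_dataset(toy, lambda s, t: "surprisingly , the movie was good"))
```

In the actual experiments, the paraphrases come from the trained SynPG model rather than a stand-in function, and four paraphrases per training example are used, as described next.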
We conduct experiments on three classification tasks from the GLUE benchmark (Wang et al., 2019): SST-2, MRPC, and RTE. For each training example, we use SynPG to generate four syntactically different paraphrases and add them to the training set. We follow the setting of generating syntactically adversarial examples with SCPN (Iyyer et al., 2018): for each testing example, we generate five candidate adversarial examples, and if the classifier gives at least one wrong prediction on the candidates, we consider the attack successful. We compare the model without data augmentation (Base) and with data augmentation (SynPG) in Table 7. We observe that with data augmentation, the accuracy before attacking is slightly worse than Base. However, after attacking, the percentage of examples with changed predictions is much lower than for Base, which implies that data augmentation indeed improves the robustness of models. Related Work Paraphrase generation. Traditional approaches usually require hand-crafted rules, such as rule-based methods (McKeown, 1983), thesaurus-based methods (Bolshakov and Gelbukh, 2004; Kauchak and Barzilay, 2006), and lattice matching methods (Barzilay and Lee, 2003). However, the diversity of their generated paraphrases is usually limited. Recently, neural models have achieved success in paraphrase generation (Prakash et al., 2016; Cao et al., 2017; Egonmwan and Chali, 2019; Gupta et al., 2018). These approaches treat paraphrase generation as a translation task and design seq2seq models based on a large amount of parallel data. To reduce the effort of collecting parallel data, unsupervised paraphrase generation has attracted attention in recent years. Wieting et al. (2017) use translation models to generate paraphrases via back-translation. Zhang et al. (2019) and Roy and Grangier (2019) generate paraphrases based on variational autoencoders. Reinforcement learning techniques have also been considered for paraphrase generation (Li et al., 2018). Conclusion We present the syntactically controlled paraphrase generator (SynPG), a paraphrase model that can control the syntax of generated paraphrases based on given syntactic specifications. SynPG is designed to disentangle the semantics and the syntax of sentences. The disentanglement enables SynPG to be trained without the need for annotated paraphrase pairs. Extensive experiments show that SynPG performs better syntactic control than unsupervised baselines, while the quality of the generated paraphrases is competitive with supervised approaches. Finally, we demonstrate that SynPG can improve the robustness of NLP models by generating additional training examples. SynPG is especially helpful for domains where annotated paraphrases are hard to obtain but a large amount of unannotated text is available. One limitation of SynPG is the need to manually provide target syntactic templates at inference time; we leave automatic template generation for future work.
Expression of Ceramide Synthase 6 Transcriptionally Activates Acid Ceramidase in a c-Jun N-terminal Kinase (JNK)-dependent Manner Background: Ceramide is important for cellular signaling. Results: Increasing the expression of ceramide synthase 6 (CerS6) results in transcriptional activation of acid ceramidase independent of catalytic CerS6 activity. Conclusion: Modulation of a single member of the ceramide synthase family impacts on sphingolipid composition and ceramide metabolizing enzymes. Significance: Understanding how CerS impacts gene expression and signaling is important for the development of novel therapeutic approaches. A family of six ceramide synthases with distinct but overlapping substrate specificities is responsible for generation of ceramides with acyl chains ranging from ∼14–26 carbons. Ceramide synthase 6 (CerS6) preferentially generates C14- and C16-ceramides, and we have previously shown that down-regulation of this enzyme decreases apoptotic susceptibility. In this study, we further evaluated how increased CerS6 expression impacts sphingolipid composition and metabolism. Overexpression of CerS6 in HT29 colon cancer cells resulted in increased apoptotic susceptibility and preferential generation of C16-ceramide, which occurred at the expense of very long chain, saturated ceramides. These changes were also reflected in sphingomyelin composition. HT-CerS6 cells had increased intracellular levels of sphingosine, which is generated by ceramidases upon hydrolysis of ceramide. qRT-PCR analysis revealed that only expression of acid ceramidase (ASAH1) was increased. The increase in acid ceramidase was confirmed by expression and activity analyses. Pharmacological inhibition of JNK (SP600125) or curcumin reduced transcriptional up-regulation of acid ceramidase. Using an acid ceramidase promoter driven luciferase reporter plasmid, we demonstrated that CerS1 has no effect on transcriptional activation of acid ceramidase and that CerS2 slightly but significantly decreased the luciferase signal. Similar to CerS6, overexpression of CerS3–5 resulted in an ∼2-fold increase in luciferase reporter gene activity. Exogenous ceramide failed to induce reporter activity, while a CerS inhibitor and a catalytically inactive mutant of CerS6 failed to reduce it. Taken together, these results suggest that increased expression of CerS6 can mediate transcriptional activation of acid ceramidase in a JNK-dependent manner that is independent of CerS6 activity.
Sphingolipids are important signaling molecules and can significantly impact cellular function. Ceramide, the central molecule in sphingolipid biosynthesis, can be generated through the action of ceramide synthases (CerS) in the de novo or the salvage pathway (1). CerS comprise a family of six enzymes that preferentially conjugate a fatty acyl-CoA moiety to the sphingoid base, thereby generating ceramides with fatty acid side chains ranging from 14 to 26 carbons. Recent studies have demonstrated associations between specific ceramide species and cellular responses (2). We have previously shown that RNAi-mediated down-regulation of CerS6 results in a specific decrease in C16-ceramide and increased resistance to the death receptor ligand TRAIL, whereas overexpression of CerS6 increased susceptibility to TRAIL (3). CerS6 has also been implicated in apoptosis induced by 17AAG (4) and MDA-7 (5), the combination of sorafenib and vorinostat (6,7), celecoxib-mediated chemoprevention of colon cancer (8), and the efficacy of photodynamic therapy (9). These studies suggest that CerS6 activity contributes to the efficacy of existing therapies and might be a potential biomarker to predict responsiveness. Evidence that CerS can have opposing and tissue-specific roles is also emerging. Thus, while several studies have found links between CerS6/C16-ceramide and apoptosis (10), overexpression of CerS2, which generates C24-ceramides, can promote proliferation and offer protection against radiation-induced cell death (11,12). Generation of CerS-deficient mice is revealing tissue-specific effects as well. For example, knockout of CerS2, which is expressed at similar levels in the mouse liver and kidney, results in liver abnormalities while kidney function remains normal (13,14). CerS6 also appears to have tissue-specific effects, as decreased expression induces ER stress and apoptosis in head and neck squamous carcinoma cells but not in A549, MCF-7, or SW480 cells, which were derived from lung, breast, and colon cancer, respectively (3,15,16). Altered ceramide distributions and/or changes in ceramide synthase expression are beginning to be associated with specific diseases (17). For example, elevated expression of CerS6 has recently been suggested to play a role in the onset of disease in chronic experimental autoimmune encephalomyelitis, a model of multiple sclerosis (18). CerS6 is highly expressed in the intestinal tract (19), and we have therefore focused our investigations on the role of CerS6 in colon cancer. In HCT-116 colon cancer cells, overexpression of CerS6 induced spontaneous apoptosis (11).
In our hands, elevated expression of CerS6 in SW620 colon cancer cells did not result in spontaneous apoptosis but increased susceptibility to apoptotic stimuli (3). However, the consequence of CerS6 expression was not previously investigated in detail, and we hypothesized that, due to the highly dynamic nature of sphingolipid metabolism (13,14,20,21), alterations in the expression of CerS6 may have impacts beyond increased generation of C16-ceramide. In support of this hypothesis, the current study reveals a novel connection between ceramide synthases and acid ceramidase. Experimental Procedures-The adenovirus expressing CerS6 was generated using the AdEasy system (ATCC) (25). Briefly, the EcoRI and HindIII fragment from pCerS6-IRES-GFP was subcloned into the Shuttle vector, which was then recombined with the AdEasy vector in the Escherichia coli BJ5183 strain. Following screening by PacI digestion, DNA of positive recombinants was transiently transfected into HEK293A cells using Lipofectamine 2000, and CerS6 expression was verified by Western blot. A positive recombinant was transfected into HEK293A cells, and cultures were observed for formation of viral plaques (25). Crude viral lysate was provided to Vector Biolabs (Philadelphia, PA) for amplification and determination of titer. The control adenovirus was also obtained from Vector Biolabs. Flow Cytometry-Flow cytometry (LSRFortessa) and cell sorting (MoFlo) were performed in the MUSC flow cytometry shared resource. Transfections and Transductions-HT29 transfectants expressing CerS6 or GFP only were generated by transfection of pCerS6-IRES-GFP (3) or pIRES-GFP plasmids using Lipofectamine 2000 (Invitrogen, Grand Island, NY) followed by selection and maintenance in 1.5 mg/ml neomycin (Fisher Scientific). Experiments with stably transfected mass clones were performed within 25 passages. HT29 cells expressing pGL3-ASAH1 were generated by co-transfection with a G418 resistance plasmid and selected as above. SW480 cells were transfected with pLKO-Tet-On shRNA-CerS6 followed by selection in 1 µg/ml puromycin. For adenoviral transductions, cells were plated overnight, and the next day adenovirus was added to the culture medium at the indicated concentration. Luciferase activity was measured 4 days post-infection. Transient transfections of HEK293A cells were performed in 96-well plates using 200 ng DNA and 0.5 µl Lipofectamine per well according to the manufacturer's instructions. Viability and Luciferase Activity Assays-Viability was measured using the CellTiterBlue substrate, and luciferase activity was determined using the Steady-Glo kit. Both kits were purchased from Promega (Madison, WI). Signals were quantified using a BMG Optima plate reader. Sphingolipid Analysis and Metabolic Labeling-Sphingolipid analysis was performed as described previously (27). The cell pellets were stored at −80 °C until processing for sphingolipid analysis by liquid chromatography/mass spectrometry (LC-MS/MS) in the MUSC Lipidomics facility (28). An aliquot of the lipid extract was used to carry out lipid phosphate estimation using Bligh-Dyer extraction and a colorimetric assay (29). For sphingolipid metabolic labeling, cells were incubated with 1 µM 17C-sphingosine (Avanti Polar Lipids) for 30 min. The LC-MS/MS analysis was modified to detect only 17C-sphingolipids (30). In Vitro Acid Ceramidase Activity Assay-Cells were plated at a density of 1 × 10^7 cells on 150-mm dishes and allowed to form a monolayer for 2 days.
At harvest, cells were counted and lysed in an acidic buffer (50 mM sodium acetate, 5 mM magnesium chloride, 1 mM EDTA, and 0.5% Triton X-100, pH 4.5) to determine acid ceramidase activity as described previously (26). The assay was carried out in duplicate using equal amounts of protein lysate (close to 200 µg), and results are displayed as pmol palmitate liberated per hour per mg protein. Statistical Analysis-Differences in viability and lipid composition were determined using the unpaired Student's t test in GraphPad software. To evaluate differences in fold change of luciferase activity in the transient transfection experiments in HEK293A cells, we used a hierarchical linear mixed effects regression model fit with the main effects of marker (CerS1, CerS2, etc.) and nested random effects for experiment and well, over 10 cycles (10 min/cycle) of luciferase activity measurements. Regression coefficients and their standard errors were used to make inferences regarding statistical significance at the α = 0.05 level. Fold change and S.E. were based on the model results. Cells with Elevated CerS6 Expression Have Increased Susceptibility to TRAIL and 5-Fluorouracil Chemotherapy-HT29 colon carcinoma cells were stably transfected with pCerS6-IRES2-EGFP or pIRES2-GFP plasmids to generate HT-CerS6 and HT-GFP cells, respectively. Analysis of mRNA from G418-resistant mass clones indicated that total CerS6 transcript levels increased ∼3-fold in the HT-CerS6 transfectants compared with the HT-GFP cells (Fig. 1A). Levels of endogenous CerS6 mRNA were not significantly changed (Fig. 1A). These results suggest that the increase in total CerS6 mRNA resulted from expression of the transgene. Based on GFP expression, at least 20% of cells within the mass clones expressed the transgene (Fig. 1B). An increase in CerS6 protein in the mass clone was observed when GFP-positive cells were analyzed by Western blot following sorting by flow cytometry (Fig. 1C). Consistent with our previous findings in SW620 cells, elevating CerS6 increased the susceptibility of HT29 cells to the death ligand TRAIL (Fig. 1D). Sensitivity to 5-fluorouracil, a chemotherapeutic agent frequently used in colorectal cancer patients, was also enhanced in HT-CerS6 cells compared with HT-GFP cells (Fig. 1D). Up-regulation of CerS6 Expression Increases C16-ceramide Generation at the Expense of Very Long Chain Ceramides-Although cells expressing elevated CerS6 are more susceptible to cell death, the exact impact of CerS6 expression on sphingolipids in stably transfected cells had not previously been determined. Therefore, we next used a cell-based assay, in which 17C-sphingosine serves as a metabolic label, to determine how increased expression of CerS6 impacts the incorporation of sphingosine into ceramides. HT-GFP and HT-CerS6 cells were incubated with 1 µM 17C-sphingosine for 30 min, followed by analysis of the 17C-sphingolipid profile. Total 17C-ceramide levels were similar between the GFP and the CerS6 transfectants (Fig. 2A), indicating that the total cumulative activity of CerS, at least in the salvage pathway in which sphingosine is recycled into ceramides, is comparable between HT-GFP and HT-CerS6 cells. The majority of 17C-sphingosine was incorporated into 17C16-, 17C22-, and 17C24-ceramides (Fig. 2B), but CerS6 expression influenced the distribution of ceramide species. Compared with HT-GFP, HT-CerS6 cells contained significantly more 17C16- and 17C24:1-ceramides and significantly less 17C22:0-, 17C24:0-, and 17C26:0-ceramides.
These results suggest that increased generation of C16-ceramide occurs at the expense of very long chain (C22-C26) saturated but not unsaturated ceramides. Changes in ceramide composition observed in the metabolic labeling assay were also reflected at steady state, i.e., the very long chain saturated ceramides C24:0-ceramide and C26:0-ceramide were significantly decreased, while C16-ceramide and C24:1-ceramide were increased (Fig. 3A). Similar results were observed in SW620 cells upon adenoviral expression of the CerS6 transgene (data not shown), which suggests that CerS6-mediated alterations in ceramide species distribution are not cell line specific. Since ceramide can be further metabolized into complex sphingolipids, sphingomyelin (SM) composition was also analyzed. We found that expression of CerS6 increased the C16-SM content from 43% in HT-GFP cells to 51% in HT-CerS6 cells (Fig. 3B). This significant increase in C16-SM was accompanied by an almost 50% decrease in C24:0-SM. The increase in C24:1-ceramide in HT-CerS6 cells corresponded to an increase in C24:1-SM (Fig. 3B). These results suggest that changes in ceramide composition as a consequence of increased CerS6 expression are also reflected in complex sphingolipids such as sphingomyelin. HT-CerS6 Cells Have Elevated Acid Ceramidase Expression and Activity-In addition to being incorporated into complex sphingolipids, ceramide can also be hydrolyzed to sphingosine by ceramidases. We found that HT-CerS6 cells contained nearly twice as much sphingosine as HT-GFP cells (Fig. 4A). Five ceramidases, including ASAH1, ASAH2, and ACER1-3, have been identified. The qRT-PCR threshold cycle for ASAH2 and ACER1 was high (>30), indicating that the relative expression levels of these genes are low in HT29 cells. Of the remaining ceramidases, mRNA levels were increased only for acid ceramidase (ASAH1) (Fig. 4B). Western blot analysis of two colon cancer cell lines confirmed that ASAH1 expression is elevated when CerS6 is overexpressed (Fig. 4C). An in vitro assay confirmed that acid ceramidase activity is also higher in HT-CerS6 cells than in HT-GFP cells (Fig. 4D). Sphingosine serves not only as a substrate for ceramide synthases in the salvage pathway but can also be further metabolized to sphingosine-1-phosphate (S1P) through the action of sphingosine kinases. Steady-state levels of intracellular S1P were below the detection limit in our analysis. We therefore used 17C-sphingosine as the substrate for metabolic labeling and found that HT-CerS6 cells had a significantly reduced capacity to generate S1P compared with HT-GFP cells (Fig. 4E). Our results suggested that increasing CerS6 results in elevated expression of acid ceramidase. To investigate whether decreasing CerS6 reduces acid ceramidase expression, we chose SW480 cells, which express higher levels of CerS6 than the isogenic SW620 cells used in the overexpression studies (3). SW480 cells were transfected with an inducible shRNA against CerS6, and analysis of acid ceramidase was performed in two individual clones. As shown in Fig. 4F, knockdown of CerS6 did not appear to decrease acid ceramidase expression. Increased Expression of Acid Ceramidase in Response to CerS6 Expression Occurs via a JNK/AP-1-dependent Mechanism and Is Important for Survival-We hypothesized that increased expression and activity of acid ceramidase occur in response to increased generation of C16-ceramide.
Ceramide stress has been shown to activate the JNK pathway (31-33), and more recently it has been shown that radiation-induced ceramide stress and subsequent up-regulation of acid ceramidase occur in an AP-1-dependent manner (34). Therefore, we treated HT-CerS6 cells with the JNK inhibitor SP600125 or with curcumin, a natural compound with anti-tumor activity that has been shown to directly interfere with DNA binding of the AP-1 transcription factor (35,36), and examined the impact on phosphorylation of c-Jun and on acid ceramidase expression. Compared with HT-GFP cells, HT-CerS6 cells had increased phosphorylation of c-Jun on Ser-63 (Fig. 5A). Treatment with the JNK inhibitor SP600125 decreased phosphorylation of c-Jun to undetectable levels. In contrast to SP600125, curcumin did not decrease phosphorylation of c-Jun, which is consistent with its function of inhibiting AP-1 DNA binding downstream of c-Jun. Treatment with either SP600125 or curcumin also diminished levels of acid ceramidase (Fig. 5A). Morphological assessment of treated cultures suggested that HT-CerS6 cells were beginning to die when exposed to inhibitors of the JNK pathway. Analysis of PARP, a marker of apoptosis, confirmed that this protein was cleaved when HT-CerS6 cells were treated with SP600125 or curcumin. To more directly quantify the effect of SP600125 and curcumin on viability, HT-GFP and HT-CerS6 cells were cultured in the absence or presence of these agents. As shown in Fig. 5B, HT-CerS6 cells were significantly more susceptible to SP600125 than control cells. Viability in the presence of curcumin was also reduced but did not reach statistical significance. Taken together, these results suggest that inhibition of acid ceramidase expression by either SP600125 or curcumin may be responsible for the induction of PARP cleavage and the preferentially decreased viability of HT-CerS6 cells. To directly test how inhibition of acid ceramidase impacts viability, HT-GFP and HT-CerS6 cells were treated with the acid ceramidase inhibitor LCL-521 (26). As shown in Fig. 5C, HT-CerS6 cells were more susceptible to inhibition of acid ceramidase than HT-GFP cells. Transcriptional Activation of Acid Ceramidase by Ceramide Synthases-The increase in acid ceramidase (ASAH1) mRNA suggested that CerS6 may transcriptionally activate expression of this enzyme. To further explore this possibility, we stably transfected HT29 cells with pGL3-ASAH1, a plasmid in which luciferase expression is under the control of the full-length acid ceramidase promoter (23). Stable transfectants were then transduced with an adenovirus expressing CerS6 (AdCerS6). As shown in Fig. 6A, the control virus did not alter ASAH1-driven luciferase activity, whereas AdCerS6 increased luminescence 2-3-fold. Next, we asked whether the ability to transcriptionally activate acid ceramidase expression is unique to CerS6. HEK293A cells were co-transfected with the acid ceramidase reporter construct (pGL3-ASAH1) and plasmids expressing CerS1-6. Expression of CerS3-6 transcriptionally activated the ASAH1 promoter, as evidenced by a significant ∼2-fold increase in luciferase reporter gene activity (Fig. 6B). Expression of CerS1 did not significantly alter ASAH1 reporter activity, while expression of CerS2 slightly but significantly decreased luciferase activity. Since treatment with SP600125 or curcumin decreased acid ceramidase expression in HT-CerS6 cells, we next investigated how these inhibitors affect transcriptional activation of ASAH1.
We verified, based on GFP expression, that neither inhibitor interfered with transfection efficiency. When CerS6 was expressed in the presence of SP600125, ASAH1 reporter activity was not significantly different from that in GFP-transfected cells (Fig. 6C). Similar to inhibition of JNK, treatment with 25 µM curcumin prevented the CerS6-mediated transcriptional activation of acid ceramidase (Fig. 6C). Next, we investigated whether the increase in ASAH1 transcription upon elevated CerS6 expression is a direct consequence of increased generation of C16-ceramide. HEK293A cells were transfected with pGL3-ASAH1, and after 20 h, C16-ceramide was added exogenously for 6 h. In the presence of 10 µM C16-ceramide, we did not observe an increase in pGL3-ASAH1 reporter activity (Fig. 6D). At higher concentrations of exogenous C16-ceramide, luciferase reporter activity declined and cells began to lose viability (Fig. 6, D and E). Similarly, exogenous C6-ceramide, which is preferentially metabolized into C16-ceramide but was less toxic under our assay conditions, failed to alter luciferase reporter gene activity (Fig. 6, D and E). Furthermore, the ceramide synthase inhibitor fumonisin B1 did not prevent the CerS6-induced increase in luciferase reporter activity, suggesting that enzymatic activity does not significantly contribute to increased acid ceramidase expression (Fig. 6F). Finally, to substantiate this observation, we utilized a CerS6 mutant in which the catalytic domain has been inactivated through a histidine-to-alanine substitution at residue 212 (15). As shown in Fig. 6F, the mutant H212A CerS6 retained the ability to significantly increase pGL3-ASAH1 reporter activity. Discussion Several groups including ours have shown that decreased expression of CerS6 results in a specific decrease in C16-ceramide (3,37). The current study was initiated to understand how overexpression of CerS6 impacts sphingolipid composition and signaling. We show that cells with increased expression of CerS6 preferentially generate C16-ceramide, which is consistent with the previously observed activity of the enzyme in vitro (1). Incorporation of sphingosine into ceramide was comparable between HT-GFP and HT-CerS6 cells, suggesting that the increase in C16-ceramide occurred at the expense of saturated very long chain ceramides (C22:0, C24:0, and C26:0) (Fig. 2). Similar results have been observed in models that modulate other CerS family members. For example, in CerS2-deficient mice and in SMS-KCNR neuroblastoma cells treated with CerS2 RNAi, long chain ceramides such as C16-ceramide compensated for the decrease in C24- and C24:1-ceramides (13,38). Mullen et al. also showed that down-regulation of individual CerS in MCF7 breast cancer cells can transcriptionally impact the expression of non-targeted CerS (37). The distribution of ceramide species was mirrored in sphingomyelin composition (Fig. 3), suggesting that incorporation of ceramides into complex sphingolipids occurs without preference for a specific ceramide species. In addition to altered ceramide composition, HT29 cells expressing CerS6 also contained increased intracellular sphingosine, which suggested the possibility that CerS6 expression also increases ceramidase expression and activity. Our results indicate that expression of CerS6 can stimulate expression of acid ceramidase, resulting in increased mRNA, protein expression, and activity of the enzyme (Fig. 4, A-C).
The increase in acid ceramidase following CerS6 expression was not unique to HT29 cells and was also observed in SW620 colon cancer cells that overexpress CerS6 (previously described in (3)) as well as in PPC1 prostate cancer cells transduced with an adenovirus expressing CerS6 (34). These data suggest that the increase in acid ceramidase following CerS6 expression is not a cell line- or tissue-specific response. Interestingly, both HT29 and SW620 cells transfected with CerS6 express higher levels of acid ceramidase, yet are more susceptible to apoptotic stimuli (3,6). This is in contrast to prostate cancer cells, in which elevated acid ceramidase expression has been associated with apoptosis resistance and relapse following radiation therapy (34,39). It was previously observed that an increase in acid ceramidase, which is a lysosomal enzyme, resulted in elevated lysosomal density and increased levels of autophagy (40). Autophagy has been demonstrated to serve as a cellular mechanism to limit ceramide levels in the liver (41) and also occurs as a consequence of increased sphingolipid synthesis in RAW264.7 cells following TLR4 stimulation (42). In contrast to these studies, we did not detect an increase in overall ceramide synthesis but rather a shift in composition (Fig. 2), and using LysoTracker staining we were unable to detect any differences in lysosomal density (C. Voelkel-Johnson, unpublished data). One possibility for the discrepancy in apoptotic responsiveness between prostate cancer cells and our model system is a difference in the subsequent metabolism of the sphingosine that is generated as a consequence of increased acid ceramidase expression. Sphingosine holds a unique position in sphingolipid metabolism in that it can be further metabolized to sphingosine-1-phosphate (S1P) by sphingosine kinases or serve as a substrate for ceramide synthases in the salvage pathway (10). Irradiation of prostate cancer cells increased the pro-apoptotic sphingolipids ceramide and sphingosine but also elevated sphingosine-1-phosphate (S1P), indicating that sphingosine was further metabolized by sphingosine kinases (34). In contrast, in HT29 colon cancer cells intracellular S1P generation is greatly diminished upon CerS6 expression (Fig. 4E). It is possible that ceramide synthases and sphingosine kinases compete for the sphingosine substrate, although this idea would need to be reconciled with the subcellular localization of the enzymes involved. Ceramide synthases are primarily localized to the ER and have also been detected in mitochondria, while sphingosine kinases have been localized to the cytosol/plasma membrane (SK1) or the nucleus (SK2) (43). Therefore, although sphingosine is a soluble product of sphingolipid metabolism, it is unlikely that ceramide synthases and sphingosine kinases directly compete for the substrate in the same compartment. While the details remain to be investigated, the increased apoptotic susceptibility of HT-CerS6 or SW620-CerS6 cells, despite elevated acid ceramidase expression, may offer an explanation for recent studies in ovarian and We also observed that HT-CerS6 cells were more susceptible to treatment with the acid ceramidase inhibitor LCL-521 (Fig. 5C), which suggests that higher levels of acid ceramidase may be important for cells to maintain viability when CerS6 expression is elevated.
However, as a consequence of elevated acid ceramidase activity, intracellular levels of sphingosine increase, which may result in heightened susceptibility to apoptotic signals, either in a therapeutic setting or endogenously through the immune system. The dynamic nature of sphingolipids and the complexity of crosstalk between sphingolipid metabolic pathways suggest that it may be very difficult to utilize a single sphingolipid enzyme such as acid ceramidase as a biomarker for prognosis or therapy responsiveness. Using an ASAH1-promoter driven luciferase reporter plasmid, we confirmed that CerS6 induced transcriptional up-regulation of acid ceramidase (Fig. 6). Experiments with pharmacological inhibitors suggest that transcriptional activation of acid ceramidase by CerS6 occurs in a JNK/AP-1-dependent manner (Figs. 5 and 6). These results are consistent with previous studies that show activation of the JNK pathway following ceramide stress, as well as with ceramide/AP-1-dependent transcriptional activation of acid ceramidase following radiation therapy in prostate cancer cells (31)(32)(33)(34). JNK belongs to the larger group of mitogen-activated protein kinases and responds to a variety of signals including cytokines, radiation, heat shock, and autophagy (46,47). How CerS6 overexpression impacts signaling pathways other than apoptosis largely remains to be determined. Curcumin, which also inhibited CerS6-induced transcriptional induction of acid ceramidase, affects numerous signaling pathways. In addition to inhibiting AP-1 binding, it has recently been shown to stimulate ceramide synthase activity through increased CerS dimer formation (48). Although it is unclear whether CerS dimer formation can occur at the concentration of curcumin used in our study (25 μM versus 50 μM), these studies suggest that an impact of curcumin on sphingolipid signaling may occur through both down-regulation of acid ceramidase expression and increased CerS dimer formation. To investigate if the ability to enhance acid ceramidase transcription is unique to CerS6, we extended our study to all members of the CerS family. CerS1 was the only CerS family member that did not alter acid ceramidase reporter activity, which was not completely unexpected, as CerS1 is phylogenetically the most distant family member and appears to have functions that are distinct from CerS6 (49). For example, CerS1 but not CerS6 has been shown to induce mitophagy (50). Transfection of the CerS2 plasmid slightly but significantly decreased ASAH1-promoter-driven reporter activity (Fig. 6B). Whether the opposing effects of CerS2 and CerS6 on transcriptional activation of acid ceramidase are a consequence of differential heterodimer composition remains to be investigated. CerS3, -4, and -5 shared the capacity of CerS6 to stimulate transcriptional activation of acid ceramidase. Somewhat surprisingly, there was no correlation between the ability to stimulate ASAH1-luc reporter activity and the fatty acid specificities of the CerS isoforms (19). For example, both CerS1 and CerS4 can generate C18-ceramide, yet transcriptional activation of ASAH1-luc was only observed with CerS4 and not CerS1. A similar discrepancy was observed between CerS2 and CerS3/4, which have overlapping abilities to generate very long chain ceramides (19).
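As one way to make the family-wide comparison concrete, the following sketch summarizes reporter activity as fold change over the GFP control transfection, which is how the comparison above is framed. The replicate values are invented for illustration and are not the data plotted in Fig. 6B.

```python
# Minimal sketch: fold activation of the ASAH1 promoter reporter relative to the
# GFP control transfection. Numbers are hypothetical, not the reported values.
import statistics

raw_luciferase = {                 # replicate readings, arbitrary units
    "GFP":   [1.0, 1.1, 0.9],
    "CerS1": [1.0, 0.9, 1.1],
    "CerS2": [0.80, 0.70, 0.75],
    "CerS3": [2.1, 2.4, 2.2],
    "CerS4": [2.0, 1.8, 2.2],
    "CerS5": [2.3, 2.5, 2.1],
    "CerS6": [2.8, 3.0, 2.6],
}

gfp_mean = statistics.mean(raw_luciferase["GFP"])
for construct, values in raw_luciferase.items():
    fold = statistics.mean(values) / gfp_mean
    print(f"{construct}: {fold:.2f}-fold vs GFP control")
```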
To further investigate the requirement for ceramide synthase activity for induction of acid ceramidase transcription, we used three different approaches: exogenously added ceramide, the CerS inhibitor FB1, and a CerS6 mutant that lacks catalytic activity. Exogenous ceramide failed to induce ASAH1-luc reporter activity, while FB1 and the CerS6 H212A mutant failed to significantly reduce it. Taken together, these results suggest that transcriptional activation of acid ceramidase in our model is not mediated by ceramide itself and does not depend on the catalytic activity of CerS. Recently it has been demonstrated that ectopically expressed Bcl2L13 binds to CerS6 (51). Therefore, one possibility is that elevated expression of acid ceramidase in response to CerS6 overexpression occurs as a consequence of altered protein-protein interactions with non-CerS proteins. This hypothesis would also explain why knockdown of CerS6 did not decrease acid ceramidase expression in SW480 cells (Fig. 4F). Future studies to investigate CerS binding partners may elucidate the exact mechanism by which overexpression of CerS6 (or CerS3, 4, 5) mediates a transcriptional increase in acid ceramidase. Conclusions This study shows how altered expression of a single ceramide synthase has profound effects on the sphingolipid network, impacting both sphingolipid composition as well as acid ceramidase expression, which leads to alterations in signaling pathways and cell death susceptibility. Furthermore, our results suggest that the catalytic activity of CerS6 is not required for the ability to transcriptionally activate acid ceramidase expression, thereby revealing a new level of complexity by which ceramide synthases can impact cellular responses.
6,651.4
2015-04-03T00:00:00.000
[ "Biology" ]
Light-Trapping Engineering for the Enhancements of Broadband and Spectra-Selective Photodetection by Self-Assembled Dielectric Microcavity Arrays Light manipulation has drawn great attention in photodetectors towards the specific applications with broadband or spectra-selective enhancement in photo-responsivity or conversion efficiency. In this work, a broadband light regulation was realized in photodetectors with the improved spectra-selective photo-responsivity by the optimally fabricated dielectric microcavity arrays (MCAs) on the top of devices. Both experimental and theoretical results reveal that the light absorption enhancement in the cavities is responsible for the improved sensitivity in the detectors, which originated from the light confinement of the whispering-gallery-mode (WGM) resonances and the subsequent photon coupling into active layer through the leaky modes of resonances. In addition, the absorption enhancements in specific wavelength regions were controllably accomplished by manipulating the resonance properties through varying the effective optical length of the cavities. Consequently, a responsivity enhancement up to 25% within the commonly used optical communication and sensing region (800 to 980 nm) was achieved in the MCA-decorated silicon positive-intrinsic-negative (PIN) devices compared with the control ones. This work well demonstrated that the leaky modes of WGM resonant dielectric cavity arrays can effectively improve the light trapping and thus responsivity in broadband or selective spectra for photodetection and will enable future exploration of their applications in other photoelectric conversion devices. Electronic supplementary material The online version of this article (10.1186/s11671-019-3023-x) contains supplementary material, which is available to authorized users. Introduction Photodetectors (PDs) are in great demand for enhancing responsivity, which is practically important to its commercial applications, such as optical communication, sensing, and imaging in our daily life. It is well acknowledged that the material extinction in active region of the devices must be high enough to allow the efficient light absorption and photocarrier generation [1]. Hence, the application of advanced light-trapping technology has been considered as the most important approach to realize the efficient photodetection in various broadband PDs [2]. Additionally, the newly raised demands for tunable selective spectral responsivity or multiple band sensing in photodetecting field also need to develop new light-manipulating methods [3][4][5][6][7][8][9]. Various optical capture strategies have been developed and employed in optical devices, e.g., the random texture interfaces [10] or three-dimensional (3D) nanostructures [11][12][13][14] for sensitivity improvement by fully utilizing the large surface-to-volume ratio and Debye length. Among these 3D light-trapping nanostructures, low Q resonant optical cavity has been considered as the most attractive medium to manipulate light in a broadband range through the multiple resonance modes [15][16][17][18][19][20][21][22][23]. The main principle is that the whispering-gallery-mode (WGM) resonances in the sphere can enhance the light-matter interactions in the cavity [16,19,23] or couple the light into the under-layer substrate through the waveguide mode [17,20]. Consequently, improved photoelectric conversion efficiency or photo-response can be realized in the corresponding optoelectronic devices [24,25]. 
This concept of light trapping in thin-film solar cells by utilizing wavelength-scale resonant dielectric nanospheres was proposed by Grandidier et al. with the aim of enhancing the light absorption in the active layer and thus the photocurrent of the device [15]. Further, significantly enhanced light absorption and power conversion efficiency have been demonstrated by Cui et al. [16]. The self-assembled dielectric hollow nanospheres, which support multiple low-Q WGM resonances in the visible light region, have also been demonstrated for effective light trapping and short-circuit current density improvement in thin-film solar cells in our previous work [17]. In principle, unlike conventional optical thin-film technology, this kind of multiple resonance should be applicable in PDs for specific wavelength manipulation or broadband light-trapping enhancement, but this possibility had not yet been investigated. In this work, 3D nanostructured dielectric microcavity arrays (MCAs) were introduced for light-trapping engineering in broadband and specific spectral regions on silicon-based PDs. Here, the wide bandgap semiconductor ZnO was selected as the cavity material, as it can be readily prepared through a variety of physical or chemical methods [26][27][28]. The hollow spherical ZnO cavities were fabricated using self-assembled PS nanosphere arrays as templates, combined with physical deposition and thermal annealing, as reported in our previous work [29]. Significant broadband light trapping was characterized in the optimized ZnO cavities, which theoretical calculations showed originates from the WGM resonances. Therefore, a broadband photodetection enhancement was achieved in the ZnO MCA-decorated PDs. Meanwhile, because of the multiple WGM resonances, especially the leaky modes in the MCA, the local optical density and the effective absorption in specific wavelength regions were promoted in the active layer of the silicon PDs. Consequently, besides the broadband responsivity enhancement, an up-to-25% increase in photo-sensitivity in a specific wavelength region (800-940 nm) at a bias of 0 V was successfully achieved. The employment of WGM-enhanced absorption for light management in PDs demonstrated in this work opens the door to various applications in other optoelectronic devices, such as efficient photovoltaics and light-emitting diodes (LEDs). Results and Discussion The cross-sectional and top views of the device structure of the ZnO MCA-decorated PIN silicon PD are schematically shown in Fig. 1a and b, respectively. Here, the as-fabricated ZnO MCAs, with an actual core diameter of 470 nm when using the 530-nm PS nanospheres as templates (see the experimental details and fabrication processes in Additional file 1: Figure S1), are well ordered on the PIN PDs in a monolayer arrangement with hexagonal close packing, as displayed in Fig. 1c. The acceptable spherical shape of the cavities, except for the contact area with the substrate, can be well recognized in the cross-sectional and tilted SEM images of Fig. 1d and Additional file 1: Figure S2a. The smooth inner surface can also be visualized in the internal morphology of this optical cavity, as seen in Additional file 1: Figure S2b, which is understandably beneficial for light resonating in the cavity structure. The actual shell thickness (Tshell) of the cavity was measured to be ~40 nm (Additional file 1: Figure S2b).
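For intuition about where such a cavity resonates, the lowest-order WGM condition can be approximated by fitting an integer number of wavelengths around the optical circumference, m·λ ≈ π·D·n_eff. The sketch below applies this to the 470-nm core diameter quoted above; the effective index and the geometric mode count are illustrative assumptions and need not match the mode labels or the FDTD results reported in the paper.

```python
# Rough sketch of the geometric WGM resonance condition for a spherical cavity:
# an integer number m of wavelengths fits around the optical circumference,
# m * lambda ≈ pi * D * n_eff. The effective index of the thin ZnO shell / air
# core cavity is an assumed illustrative value, not a fitted quantity.
import math

def wgm_wavelengths(diameter_nm, n_eff, orders):
    optical_circumference = math.pi * diameter_nm * n_eff   # nm
    return {m: optical_circumference / m for m in orders}

estimates = wgm_wavelengths(diameter_nm=470.0, n_eff=1.3, orders=range(2, 6))
for m, wavelength in estimates.items():
    print(f"m = {m}: lambda ≈ {wavelength:.0f} nm")
```

Even this crude estimate places several resonances across the visible range for a ~470-nm cavity, which is the qualitative behavior exploited in the devices.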
Additionally, a clear diffraction color can be seen on the large-scale fabricated ZnO MCA arrays on the PIN substrate, as shown in Additional file 1: Figure S3a, which originates from diffraction by the ZnO MCA layer at the specific angles satisfying Bragg's equation [30]. It is well acknowledged that when the cavity parameters (e.g., diameter and thickness) match the light wavelength, whispering-gallery-mode (WGM) resonances are generated. Therefore, in this kind of MCA-decorated PIN PD, light confinement and coupling into the active layer of the PD through the leaky modes [30], and the consequent light-trapping enhancement in the devices, can be expected. In order to verify the light confinement and trapping properties of the fabricated ZnO MCAs, the FDTD-simulated transmission spectrum of the ZnO MCAs on a sapphire substrate, as a simplified case, was first examined and compared with the experimental results, as shown in Fig. 2a and b. Several distinct valleys can be well resolved at wavelengths of 415, 495, 547, and 650 nm in the simulated transmission spectrum. Because of the intrinsic band-edge absorption of ZnO, no resonance appears in the UV region where the wavelength is shorter than 380 nm. Undoubtedly, these valleys in the transmission spectrum originate from the series of supported WGM resonances in the ZnO MCAs and can be well identified by their corresponding near-field distribution patterns under each resonance peak, as shown in Additional file 1: Figure S4. The typical resonance pattern for the second order of WGM resonance near 650 nm is selectively shown in the inset of Fig. 2a. An intensified field distribution is clearly resolved around the cavity, which is known as the leaky mode [31] and is favorable for light radiating into the underlying active layer of the devices. The experimental transmission spectrum agrees well with the simulated one at the corresponding resonance wavelengths, except for a slight shift of the peak wavelengths to 416, 492, 545, and 637 nm, as shown in Fig. 2b. These WGM resonances in the MCAs produce broad-angle scattering [32] of the incident light, which appears as a valley in the transmission spectra near the resonance wavelength. This scattering effect on the ZnO MCA-decorated Si substrate is also well evidenced by the simulated reflection spectrum shown in Fig. 2c, where a series of peaks can be found that match well with the resonance valleys shown in the transmission spectra [33]. Additionally, it was found that a broadband anti-reflection effect was successfully achieved on the MCA-decorated silicon substrate when compared with the bare silicon. The experimental reflection spectrum of the ZnO MCA-decorated silicon substrate (Fig. 2d) also shows an anti-reflection effect and resonance peaks similar to the theoretical results, except for a much lower resonance quality (Q), which might be caused by the non-ideal spherical structure and the defects within the experimentally prepared MCAs. However, this decreased resonance quality might be further conducive to the anti-reflection in the short wavelength region (< 550 nm), which would be beneficial for broadband light trapping in the corresponding devices, as evidenced in previous work [16,34].
Compared with the reflection from the bare silicon surface, both the theoretical and experimental reflection spectra from the MCA-decorated silicon demonstrate that the supported series of WGM resonances can be used for light trapping by utilizing the leaky modes. Interestingly, however, the largest decrease in reflection occurred in the off-resonance regions rather than at the on-resonance peaks. Further simulation indicated that a strong absorption enhancement can be realized in the MCA-coated silicon substrate under off-resonance illumination (840 nm) compared with bare silicon, while a much lower absorption profile was obtained under on-resonance illumination (660 nm), as shown in Fig. 2e (the detailed simulation setup is shown in Additional file 1: Figure S5). This result implies that a WGM resonance, especially one with a high quality factor at certain wavelength positions, might also scatter the light back [35], which is unfavorable for the light-trapping enhancement. The extracted near-field distribution shown in Additional file 1: Figure S6 also evidenced that a large amount of optical power was scattered back due to the resonance, leading to a decreased absorption profile in the active layer compared with bare silicon under on-resonance illumination. The functionality of the light-trapping MCA layer on silicon PIN PDs was then evaluated by characterizing the photo-response of the devices; the typical device characteristics are shown in Fig. 3a (I-V response) and Fig. 3b. The wavelength-dependent photo-responsivity shown in Fig. 3c presents a dramatically enhanced photo-response within a broadband spectrum, nearly over the whole visible and near-infrared (IR) region, after decorating the MCAs on the devices. The enhancement ratio was calculated and is shown in Fig. 3d. It can be seen that only within the wavelength region from 625 to 695 nm, with the center of the valley located at ~660 nm, is there no enhancement, which matches well with the second-order (n = 2) WGM resonance (peak wavelength at ~640 nm) seen in the transmission spectra (on-resonance region) of Fig. 2b. Within the commonly used near-infrared (IR) region (~800 to ~980 nm) for silicon PDs, an obviously enhanced responsivity of up to ~17% was successfully accomplished. Coincidentally, this wavelength region also lies in the off-resonance region; as shown in Fig. 2e, absorption could not be enhanced under on-resonance illumination, whereas obviously enhanced absorption occurs in the off-resonance region. However, for the short wavelength region (< 600 nm), a significant enhancement in absorption, as well as in the photo-response, can still be obtained, which matches well with the remarkable anti-reflection properties of the MCAs on silicon presented in Fig. 2d. As discussed above, the much lower actual resonance quality of the cavities within this region should be the main reason for the broadband light trapping, which is independent of the on-resonance or off-resonance condition. The above results demonstrate that the light-trapping properties of the WGM microcavities are highly related to the resonance quality, which depends on the cavity parameters.
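A minimal sketch of the enhancement-ratio calculation used for plots like Fig. 3d is given below: the responsivity of the MCA-decorated device is compared with that of the control device at each wavelength. The short arrays are hypothetical numbers chosen only to show the arithmetic, not the measured spectra.

```python
# Minimal sketch of the enhancement ratio: percent change of the MCA-decorated
# device responsivity relative to the control device at each wavelength.
# Spectra below are hypothetical placeholders, not measured data.

wavelengths_nm   = [500, 600, 660, 800, 900, 980]
resp_control_mAW = [250, 320, 340, 380, 300, 120]   # control PIN PD
resp_mca_mAW     = [300, 360, 330, 430, 350, 140]   # MCA-decorated device

for wl, r_ctrl, r_mca in zip(wavelengths_nm, resp_control_mAW, resp_mca_mAW):
    enhancement_pct = (r_mca / r_ctrl - 1.0) * 100.0
    print(f"{wl} nm: {enhancement_pct:+.1f}%")
```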
In order to further verify the enhancement mechanism mentioned above and to manipulate the responsivity enhancement of the devices in specific wavelength regions, such as the widely used near-infrared (IR) detection region for communication or sensing, the WGM resonances in the MCAs were regulated by controlling the cavity size. For the shell-structured cavity adopted in this work, the effective optical length can be easily increased by thickening the shell layer [36]. As shown in Fig. 4a, by increasing the shell thickness to 60 nm, many more resonance modes were observed in the transmission spectrum of the MCAs. These resonance modes can also be assigned to the corresponding WGM resonances by means of theoretical simulation, as shown in Additional file 1: Figure S7. Compared with the MCAs with a shell thickness of 40 nm (Fig. 2b), the same resonance mode exhibits an expected redshift due to the increased effective cavity length. The experimental reflection spectra in Fig. 4b also match well with the transmission spectrum. Different from the experimental reflection spectra for the MCAs with a shell thickness of 40 nm shown in Fig. 2d, the resonances are more distinguishable, indicating a higher resonance quality, which means that the backscattering effect might be stronger and thus unfavorable for light trapping. The wavelength-dependent responsivity curves shown in Fig. 4d demonstrate this inference: responsivity is enhanced in specific wavelength regions while decreased in others. From Fig. 4d, it can be noted that the most strongly enhanced regions consistently fall in the off-resonance areas, while the regions of decreased responsivity are located in the on-resonance areas. Additionally, compared to the MCA-decorated PDs with a shell thickness of 40 nm (shown in Fig. 3d), a much higher responsivity enhancement was achieved within the region of 800-980 nm, which is commonly used in communication and sensing for silicon PDs. An enhancement of up to ~25% was achieved at a wavelength of 820 nm, as shown in Fig. 4d. This much stronger enhancement should originate from the higher resonance quality of the second-order WGM of the MCAs, leading to a stronger light-trapping effect through the leaky mode of the WGM resonance in this wavelength region. The much lower reflectance in this wavelength region (Fig. 4b), compared with the reflection spectrum in Fig. 2d for the MCAs with a shell thickness of 40 nm, explains this significant enhancement in light trapping, as well as in responsivity. Additionally, this enhancement also mostly occurred in the off-resonance region. For the on-resonance region from ~640 to 710 nm, as shown in Fig. 4d (background marked in light red), an obviously decreased responsivity was obtained, reasonably due to the backscattering effect induced by the high resonance quality of this resonance mode, as discussed above. Similar to the MCAs with a shell thickness of 40 nm, strong enhancement can still be realized in the short wavelength region (< 500 nm), most likely because of the much lower resonance quality and higher anti-reflection effect. The stability of these enhancements achieved by the light-trapping engineering was further evaluated by examining the photo-response of the same device after storage in ambient air for 1 year, which showed nearly no decay in the current response compared with the control device under the same test conditions, as seen in Additional file 1: Figure S8.
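The tuning argument above comes down to the resonance condition scaling with the effective optical path length: for a fixed mode order m, λ_m ≈ OPL/m, so a thicker shell (larger OPL) redshifts every mode. The numbers in the sketch below are illustrative assumptions, not values extracted from the 40-nm and 60-nm shell measurements.

```python
# Back-of-the-envelope sketch: lambda_m ≈ OPL / m for a fixed mode order m, so an
# increase of the effective optical path length (thicker shell) redshifts each mode.
# OPL values are assumed for illustration only.

def resonance_nm(opl_nm, m):
    return opl_nm / m

opl_shell_40nm = 1900.0   # assumed effective optical circumference, 40-nm shell
opl_shell_60nm = 2050.0   # assumed larger value for the 60-nm shell

for m in (2, 3, 4):
    before = resonance_nm(opl_shell_40nm, m)
    after = resonance_nm(opl_shell_60nm, m)
    print(f"m = {m}: {before:.0f} nm -> {after:.0f} nm (redshift {after - before:+.0f} nm)")
```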
Conclusions In conclusion, a new strategy was proposed for light absorption improvement within broadband and specific wavelength regions for photodetectors (PDs) by utilizing the multiple WGM resonances generated in ZnO microcavity arrays (MCAs). With the decoration of the readily prepared dielectric MCAs on the silicon-based PIN PDs, broadband light trapping and photo-responsivity enhancement were successfully achieved covering nearly the whole ultraviolet-visible-near-infrared (300-1000 nm) region. Theoretical and experimental results indicated that the leaky-mode radiation of the WGM resonances, which works most effectively in the off-resonance region, is the main enhancement mechanism for light trapping. By further manipulating the WGM resonance peaks and resonance quality through increasing the shell thickness of the cavities, specific light trapping and responsivity enhancement were achieved in the commonly used communication and sensing region (800-980 nm), with a maximum improvement of up to ~25% at 820 nm. This work demonstrates a low-cost and highly compatible method to improve light trapping, and thus responsivity, over broadband or selected spectra for photodetection by introducing the leaky modes of WGM resonant dielectric cavity arrays. The light manipulation approach employed in this work provides an important guide for designing micro- and nanomaterial architectures to facilitate novel applications within specific wavelength ranges in optoelectronic devices. Methods/Experimental Fabrication Process of PIN PD Devices The PIN PDs were fabricated on a 200-μm-thick p-type (100) silicon substrate purchased from WaferHome [37] with a resistivity of 0.001 Ω cm. A 20-μm-thick intrinsic layer was epitaxially grown on the substrate. Then, n-type phosphorus-ion implantation with an implantation dose of 1 × 1016 cm−2 and an energy of 160 keV was performed on the intrinsic layer to form the final PIN device structure. Before the decoration of the MCA structures, the PIN wafer was cleaned using standard procedures to remove residual surface organic matter and metal ions. Finally, the chip-fabrication processes were carried out with a designed photosensitive region of 2.8 mm × 2.8 mm. A 100-nm-thick aluminum electrode with a diameter of 160 μm on the n-type surface and a 50-nm-thick Au film with a 5-nm Ti bonding layer on the back side were deposited by sputtering (Explorer-14, Denton Vacuum) to form metal ohmic contacts. Fabrication Process of ZnO MCA Layer The ZnO MCAs were produced using polystyrene (PS) nanospheres as the template, followed by sputtering deposition of a ZnO film; the PS nanospheres were finally removed by thermal annealing [29]. Commercial PS nanospheres with a diameter of 530 nm, purchased from Nanomicro (Suzhou Nanomicro Technology Co., Ltd.), were used as the template material to fabricate the ZnO microcavity arrays. The ZnO shell thickness (~40 and ~60 nm) was controlled by adjusting the deposition duration. Characterizations The morphology and structure were characterized with a Hitachi S-4800 field emission scanning electron microscope (FE-SEM). Experimental transmission and reflection spectra were collected with a Varian Cary 5000 UV-Vis-NIR spectrophotometer. The photocurrent and I-V characteristics of the devices were measured on an electrochemical workstation (CHI660D) equipped with a room-temperature probe station and LED light sources.
The external quantum efficiency (EQE) of the devices at 0 V bias was measured using an optical power meter (Newport 2936-R) equipped with a light source (Newport 66920) and a monochromator (Cornerstone 260, Newport). Simulated transmission/reflection spectra and near-field distributions were extracted with an FDTD simulation package (FDTD Solutions, Lumerical Inc.). Additional File Additional file 1: Figure S1. Fabrication of ZnO MCAs on Si PIN substrate. Figure S2. Detailed morphology of ZnO MCA arrays on PIN substrate. Figure S3. Large-scale ZnO MCA arrays on PIN silicon substrate. Figure S4. Near-field distribution patterns of ZnO MCA with shell thickness of 40 nm. Figure S5. Simulation method and setup for the absorption profile. Figure S6. Comparison of the absorption profile and near-field distribution for the MCAs on silicon substrates under on/off-resonance wavelengths. Figure S7. Near-field distribution patterns of ZnO MCA with shell thickness of 60 nm. Figure S8.
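EQE and responsivity report the same quantity in different units, related by R = EQE·qλ/(hc). The snippet below is a generic conversion included for reference; the 50% EQE value is a placeholder, not a figure reported for these devices.

```python
# Standard conversion between external quantum efficiency and responsivity,
# R = EQE * q * lambda / (h * c). The EQE value used here is a placeholder.
Q = 1.602176634e-19   # elementary charge, C
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

def responsivity_A_per_W(eqe, wavelength_nm):
    return eqe * Q * wavelength_nm * 1e-9 / (H * C)

print(f"EQE 50% at 820 nm -> R = {responsivity_A_per_W(0.5, 820):.3f} A/W")
```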
4,466
2019-05-30T00:00:00.000
[ "Engineering", "Physics" ]
Identification of 'erasers' for lysine crotonylated histone marks using a chemical proteomics approach Posttranslational modifications (PTMs) play a crucial role in a wide range of biological processes. Lysine crotonylation (Kcr) is a newly discovered histone PTM that is enriched at active gene promoters and potential enhancers in mammalian cell genomes. However, the cellular enzymes that regulate the addition and removal of Kcr are unknown, which has hindered further investigation of its cellular functions. Here we used a chemical proteomics approach to comprehensively profile 'eraser' enzymes that recognize a lysine-4 crotonylated histone H3 (H3K4Cr) mark. We found that Sirt1, Sirt2, and Sirt3 can catalyze the hydrolysis of lysine crotonylated histone peptides and proteins. More importantly, Sirt3 functions as a decrotonylase to regulate histone Kcr dynamics and gene transcription in living cells. This discovery not only opens opportunities for examining the physiological significance of histone Kcr, but also helps to unravel the unknown cellular mechanisms controlled by Sirt3, which has previously been considered solely a deacetylase. DOI: http://dx.doi.org/10.7554/eLife.02999.001 Introduction Histone posttranslational modifications (PTMs) play a crucial role in regulating a wide range of biological processes, such as gene transcription, DNA replication, and chromosome segregation (Kouzarides, 2007). Increasing evidence has indicated that PTMs of histones can serve as a heritable 'code' (the so-called 'histone code'), which provides epigenetic information that a mother cell can pass to its daughters (Jenuwein and Allis, 2001). The histone code is 'written' or 'erased' by enzymes that add or remove the modifications of histones (Goldberg et al., 2007;Kouzarides, 2007). Meanwhile, 'readers' of the histone code recognize specific histone modifications and 'translate' the code by executing distinct cellular programs necessary to establish diverse cell phenotypes, while the genetic code (DNA) is unaltered (Seet et al., 2006;Taverna et al., 2007). Lysine acetylation (Kac) was among the first covalent modifications of histones to be described. Since its identification, histone Kac has been correlated with gene expression. However, mechanistic insights into the regulation and functions of histone Kac remained elusive until the identification and characterization of the enzymes responsible for the addition and removal of this PTM, which are now known as histone acetyltransferases (Roth et al., 2001) and deacetylases (Sauve et al., 2006;Yang and Seto, 2008b;Haberland et al., 2009), respectively. Extensive studies have now revealed that Kac plays an important role in controlling chromatin structure and gene transcription (Grunstein, 1997;Yang and Seto, 2008a). By neutralizing positively charged lysine residues, acetylation alters the Coulombic interactions between the basic histones and the negatively charged DNA, and thereby influences chromatin compaction (Ura et al., 1997;Shogren-Knaak et al., 2006). In addition, acetylation may serve as a docking site for 'reader' proteins (e.g., bromodomain containing proteins), which are recruited onto chromatin to carry out downstream cellular processes, such as gene transcription (Dhalluin et al., 1999;Marmorstein and Berger, 2001;Zeng et al., 2010). Lysine crotonylation (Kcr) is a newly discovered histone PTM that is specifically enriched at active gene promoters and potential enhancers in mammalian cell genomes.
In postmeiotic male germ cells, Kcr specifically marks testis specific X-linked genes, suggesting it is likely that it is an important histone mark for male germ cell differentiation. However, further mechanistic and functional studies of histone Kcr have been limited by a lack of knowledge of the enzymes that catalyze the addition or removal of Kcr in cells. In a systematic screening of the activities of the 11 human zinc-dependent lysine deacetylases (i.e., HDAC1-HDAC11) against a series of C-terminal lysine acylated peptides, Olsen et al. found that HDAC3 in complex with nuclear receptor corepressor 1 (HDAC3-NCoR1) had detectable decrotonylase activity towards a model peptide substrate in a fluorometric assay (Madsen and Olsen, 2012). Recently, using a radioactive thin layer chromatography based assay, Denu et al. demonstrated that Sirt1 and Sirt2 can catalyze the removal of a crotonyl group from a histone H3K9Cr peptide (Feldman et al., 2013). However, this discovery was based on a single peptide substrate. Due to lack of further characterization of these identified enzymes, their mechanisms of catalysis and the molecular bases of substrate recognition remain unclear. More importantly, since both discoveries relied on peptide based in vitro screening assays, there is still an essential need to identify endogenous histone decrotonylases. eLife digest Most of the DNA in a cell is wound around histone proteins to form a compacted structure called chromatin. Enzymes can modify the histones by adding small chemical tags on to them, and these histone modifications can cause the chromatin to either become more tightly packed or more open. Opening up the chromatin makes the DNA more accessible to the cellular machinery involved in gene expression. Thus, cells can regulate which genes they express, and by how much, by modifying the histone proteins. Like all other proteins, histones are made of smaller molecules called amino acids. Specific amino acids within histone proteins can be modified in a number of different ways, with different effects. For instance, adding a chemical tag called an acetyl group onto an amino acid in a histone weakens the interaction between the histone and the DNA, which opens up the chromatin and increases gene expression. Another way that histones can be modified is by the addition of crotonyl groups. These chemical tags have not been examined much because the enzymes that add or remove them remain to be identified. However, it was recently suggested that enzymes called sirtuins-which are known to remove acetyl groups from histones-might also remove the crotonyl groups. Finding histone-modifying enzymes is challenging because the interactions between these enzymes and the histones are both weak and brief. Bao, Wang, Li, Li et al. have now overcome this challenge by developing a method to firmly link any protein that interacts with a crotonylated histone to the histone. Three out of the seven sirtuin enzymes found in humans were revealed to bind to crotonylated histones. All three of these enzymes-called Sirt1, Sirt2 and Sirt3-could remove crotonyl groups from histones in a test-tube, and Sirt3 could also do the same in living cells. Further biochemical experiments suggested that the mechanism used by these enzymes to remove crotonyl groups is the same as the mechanism they use to remove acetyl groups. Bao, Wang, Li, Li et al. 
then uncovered the three-dimensional structure of the Sirt3 enzyme bound to a crotonylated histone, and revealed that the enzyme recognizes the crotonyl group on the histone via a unique interaction between the crotonyl group and a specific amino acid in the binding pocket of Sirt3. This amino acid is also found in Sirt1 and Sirt2, but not in other sirtuins; this interaction can thus explain why decrotonylation activity was only detected for these three enzymes. Moreover, the levels of crotonylated histones and gene expression were higher in cells that lacked Sirt3, but not in those lacking Sirt1 or Sirt2. By identifying Sirt3 as the main decrotonylation enzyme in living cells, the role of histone crotonylation can now be investigated in greater detail. DOI: 10.7554/eLife.02999.002 To fill this knowledge gap, a method to profile 'eraser' enzymes that recognize Kcr is needed. A Cross-Linking Assisted and Stable isotope labeling of amino acids in cell culture (SILAC)-based Protein Identification (CLASPI) approach has recently been reported to identify histone PTM 'readers' (Li et al., 2012;Li and Kapoor, 2010). However, this approach has not previously been explored to identify histone PTM 'erasers', which are likely involved in weak and transient interactions. Here we present the application of an optimized CLASPI approach to comprehensively profile 'eraser' enzymes that recognize histone Kcr marks. We identified human Sirt1, Sirt2, and Sirt3 as decrotonylases in vitro and examined the molecular basis for how the enzymes recognize Kcr using X-ray crystallography. Furthermore, we demonstrated that Sirt3 can function as an 'eraser' enzyme to regulate histone crotonylation dynamics in living cells. Results Chemical proteomics approach to profile proteins recognizing histone H3K4Cr mark We first focused on a crotonylation mark discovered on histone H3K4. We designed a peptide probe (probe 1, Figure 1A) to convert non-covalent protein-protein interactions mediated by this Kcr into irreversible covalent linkages through photo-cross-linking. The probe is based on the unstructured N-terminal region of histone H3, with lysine-4 crotonylated, a photo-cross-linker (benzophenone) appended to alanine-7, and a bio-orthogonal handle (alkyne) at the peptide C terminus to enable selective isolation of captured binding partners. To identify proteins that bind H3K4Cr with high selectivity and high affinity, we performed two types of CLASPI experiments with cell lysates derived from HeLa S3 cells grown in medium containing either 'heavy' (13C,15N-substituted arginine and lysine) or 'light' (natural isotope abundance) amino acids (Figure 1B). In a 'selectivity filter' CLASPI experiment, the 'heavy' and 'light' cell lysates were photo-cross-linked with probe 1 and an unmodified H3 control probe (probe C, Figure 1A), respectively, and pooled for the subsequent steps. The captured proteins were then conjugated to biotin using click chemistry, followed by affinity purification, gel electrophoresis, and in-gel trypsin digestion. The digested peptide mixtures were separated by high performance liquid chromatography (HPLC) and analyzed with an LTQ-Orbitrap mass spectrometer. Using this method, proteins that show a high SILAC ratio of heavy/light (H/L) are likely selective H3K4Cr binders.
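The quantitative readout here is the per-protein heavy/light ratio. A minimal sketch of that arithmetic, aggregating peptide-level intensity pairs into a protein-level log2(H/L) value, is shown below; the intensities are invented, and this is only a schematic of the idea rather than the MaxQuant workflow actually used (described in the Methods).

```python
# Minimal sketch: aggregate peptide-level SILAC intensity pairs into a protein-level
# log2(H/L) ratio. A high ratio in the 'selectivity filter' experiment flags a
# candidate H3K4Cr-selective binder. Intensities are invented placeholders.
import math
import statistics

peptide_intensities = {   # protein -> list of (heavy, light) peptide intensities
    "Sirt3":      [(9.0e6, 7.0e5), (5.5e6, 5.0e5), (2.1e6, 1.8e5)],
    "background": [(3.0e6, 2.9e6), (1.2e6, 1.3e6)],
}

for protein, pairs in peptide_intensities.items():
    log2_ratios = [math.log2(heavy / light) for heavy, light in pairs]
    print(f"{protein}: median log2(H/L) = {statistics.median(log2_ratios):.2f} "
          f"({len(pairs)} quantified peptides)")
```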
To further distinguish the high affinity interactions, we performed an 'affinity filter' CLASPI experiment, in which both lysates were photo-cross-linked with probe 1 but the 'light' sample also contained H3K4Cr peptide as a competitor (30 μM) ( Figure 1B). We expected that the addition of the competitor peptide in the 'light' lysate would effectively inhibit 1-induced cross-linking of H3K4Cr binders that have high affinity (K d < 30 μM) towards the H3K4Cr peptide, and should thereby produce a high SILAC ratio of H/L for these proteins. Together, we consider a protein as a selective and tight binder of H3K4Cr when it shows high SILAC ratios of H/L in both 'selectivity filter' and 'affinity filter' experiments ( Figure 2-source data 1). Sirt1, Sirt2, and Sirt3 recognize histone H3K4Cr mark A two-dimensional plot with logarithmic (Log 2 ) SILAC ratios of H/L of the identified proteins in the 'selectivity filter' and 'affinity filter' experiments, along the x axis and y axis, respectively, is shown in Figure 2A. As expected, the majority of identified proteins did not show significant differences between the signal intensities of their 'heavy' and 'light' forms (i.e., H/L close to 1:1), suggesting they are not likely to be H3K4Cr binding proteins. In contrast, three nicotinamide adenine dinucleotide (NAD)-dependent deacetylases (Imai et al., 2000;Landry et al., 2000;Sauve et al., 2006), Sirt1, Sirt2, and Sirt3, were enriched by more than 10-fold by the K4 crotonylated probe (1) in the 'selectivity filter' experiment ( Figure 2A, B and Figure 2-figure supplement 1), indicating that they preferentially bind to this histone Kcr mark. However, among these three selective H3K4Cr binders, only Sirt3 showed the highest SILAC ratio of H/L and thereby appeared as an outlier outside of the background in the 'affinity filter' experiment ( Figure 2A, B and Figure 2-figure supplement 1). This result indicates that Sirt3 is likely a selective and relatively tight binding partner of H3K4Cr. We next examined whether Sirt3 can directly and selectively bind to this crotonylated histone peptide in vitro. As shown in Figure 2C, the recombinant Sirt3 was captured by probe 1 but not by probe C, and the cross-linking was competed by the H3K4Cr peptide with an IC 50 =32.3 μM ( Figure 2D), verifying a direct and selective interaction between Sirt3 and the K4 crotonylated H3 peptide. Indeed, the direct measurement of binding affinity using isothermal titration calorimetry showed that Sirt3 bound to the H3K4Cr peptide with K d =25.1 μM ( Figure 2E). Consistent with our 'affinity filter' CLASPI analysis, Sirt1 and 2 showed lower affinities towards the H3K4Cr peptide ( Figure 2-figure supplement 2), indicating that they are selective but relatively weak binders towards this histone Kcr mark. Molecular basis for how Sirt3 recognizes histone Kcr To study the molecular basis for the recognition of H3K4Cr by Sirt3, we determined the crystal structure of human Sirt3 in complex with an H3K4Cr peptide to 2.95 Å resolution (PDB 4V1C). The asymmetric unit consists of six molecules, each containing one Sirt3-H3K4Cr complex. The two globular domains of Sirt3 composed of an NAD binding Rossmann fold and a zinc binding motif are similar to other sirtuins ( Figure 3A) (Avalos et al., 2004;Du et al., 2011;Yuan and Marmorstein, 2012;Jiang et al., 2013). Residues 2 RTKQTAR 8 of the H3K4Cr peptide were clearly identified based on electron density. 
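The competition experiment quantifies affinity by the concentration of free H3K4Cr peptide needed to block half of the probe 1 cross-linking (the IC50 of 32.3 μM above). A hedged sketch of fitting such a dose-response curve with a four-parameter logistic model is shown below; the data points are synthetic, and the fitting procedure used in the original analysis is not specified in the text.

```python
# Sketch: fit cross-linking competition data to a four-parameter logistic curve to
# estimate an IC50. Data points are synthetic and chosen only to illustrate the fit.
import numpy as np
from scipy.optimize import curve_fit

def logistic(conc, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

competitor_uM  = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
crosslink_pct  = np.array([98.0, 92.0, 75.0, 52.0, 22.0, 8.0])   # % of uncompeted signal

params, _ = curve_fit(logistic, competitor_uM, crosslink_pct,
                      p0=[0.0, 100.0, 30.0, 1.0], maxfev=10000)
print(f"fitted IC50 ≈ {params[2]:.1f} uM")
```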
The way the substrate is bound is similar to that in the published complex structure of Sirt3 with a lysine-acetylated AceCS2 peptide (PDB 3GLR) (Figure 3-figure supplement 1) (Jin et al., 2009). The crotonyl lysine is located in a binding pocket formed by the hydrophobic residues Phe180, Ile230, His248, Ile291, and Phe294 of Sirt3 (Figure 3B). Residue His248, a catalytic residue for the deacetylation activity of Sirt3, interacts with the crotonyl amide oxygen via hydrogen bonding in the structure (Figure 3B). Strikingly, the phenyl ring of residue Phe180 aligns parallel to the planar crotonyl group and lies a short distance of 3.6 Å from its conjugated carbon-carbon double bond (C=C) (Figure 3C, D), indicating a robust π-π stacking interaction between the two functional groups. Interestingly, a primary sequence alignment of all sirtuins revealed that the phenylalanine residue (Phe180) of Sirt3 is conserved in Sirt1 and Sirt2, but not in other sirtuins (Figure 3-figure supplement 2), which may explain why Sirt4-Sirt7 were not identified in our CLASPI experiments. This π-π interaction therefore underlies the mechanism for the recognition of crotonyl lysine by Sirt1, Sirt2, and Sirt3. Sirt1, Sirt2, and Sirt3 catalyze hydrolysis of crotonylated histone peptides in vitro Inspired by the fact that Sirt3 binds crotonyl lysine in the catalytic pocket known for hydrolysis of acetyl lysine, we next tested whether Sirt3 has decrotonylation activity. Liquid chromatography-mass spectrometry (LC-MS) was used to monitor hydrolysis of the H3K4Cr peptide by Sirt3. A mutation of the catalytic residue (H248Y) that is crucial for the deacetylation activity of the enzyme also completely abolished its decrotonylation activity (Figure 4C). These data indicate that Sirt3 hydrolyzes crotonyl lysine with the same mechanism by which it hydrolyzes acetyl lysine (Figure 4-figure supplement 3) (Tanner et al., 2000;Tanny and Moazed, 2001). In addition to H3K4Cr, we also examined the activity of Sirt3 towards a collection of crotonylated histone peptides. As shown in Figure 4D-G, Sirt3 manifested varied decrotonylation activities towards these peptides, and this substrate selectivity can be partially explained by the binding affinities of Sirt3 for these peptides (Figure 4-figure supplement 4). The observation that Sirt3 binds a crotonylated peptide by recognizing both the modification site and its surrounding residues was also supported by the extensive hydrophobic and hydrogen-bonding interactions between Sirt3 and the peptide side chains in the Sirt3-H3K4Cr complex structure (Figure 3-figure supplement 1B). We next investigated whether other members of the sirtuin family could also function as decrotonylases. Consistent with the work of Denu and coworkers, Sirt1 and Sirt2 also catalyzed the hydrolysis of the H3K4Cr peptide, although they were relatively weaker binders of this substrate. The phenylalanine residue, which is involved in recognition of crotonyl lysine via the π-π stacking interaction (Figure 3C, D), is only conserved in Sirt1-Sirt3 (Figure 3-figure supplement 2). To further examine the importance of this conserved phenylalanine for the decrotonylase activity of the enzyme, we mutated Phe180 of Sirt3 to a leucine residue (F180L), which lacks an aromatic ring as a π donor but retains a similar hydrophobicity. We then carried out kinetic studies on this F180L mutant Sirt3.
The steady-state kinetic data showed that the catalytic efficiency of the Sirt3 F180L mutant (kcat/Km = 21 s−1 M−1) for the hydrolysis of the H3K4Cr peptide was about 40-fold lower than that of wild-type Sirt3 (Figure 4-figure supplement 1), indicating a critical role of the phenylalanine-mediated π-π interaction in the decrotonylation activity of the enzyme. Interestingly, the F180L mutation caused only about a two-fold decrease in the deacetylation activity of the enzyme (Figure 4-figure supplement 6). This result rules out the possibility that the observed significant decrease in the decrotonylase activity of the enzyme is caused by a potential disruption of the NAD binding pocket in the mutated Sirt3. Sirt1, Sirt2, and Sirt3 remove Kcr marks from histone proteins in vitro To test whether Sirt1-Sirt3 decrotonylate proteins, we incubated whole-cell proteins, resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred onto a poly(vinylidene fluoride) (PVDF) membrane, with the enzymes in the presence of NAD. A pan antibody against Kcr was used to assess protein crotonylation levels. While the incubations with Sirt1-Sirt3 had little influence on lysine crotonylation in most of the protein bands, substantial reductions in Kcr levels were observed in two bands with a molecular mass of approximately 15 kDa (Figure 5A). Considering that Sirt1-Sirt3 can decrotonylate histone peptides in vitro, we speculated that these 15 kDa proteins with reduced Kcr levels could be histones. We therefore examined the decrotonylation activity of Sirt1-Sirt3 using purified core histone proteins as substrates. Indeed, Sirt1-Sirt3 not only reduced global Kcr levels of all core histones, they also showed robust decrotonylation activity towards two known histone Kcr sites, H3K4Cr and H3K27Cr (Figure 5B). Sirt3 regulates histone Kcr levels in cells We next examined whether Sirt1-Sirt3 regulate histone lysine crotonylation in cells. Although Sirt1 and Sirt2 can decrotonylate histone peptides and proteins in vitro, their knockdown by siRNA did not cause an appreciable increase in crotonylation levels for either global histones or the two tested Kcr sites (i.e., H3K4Cr and H3K27Cr) (Figure 5-figure supplement 2). In contrast, Sirt3 knockdown caused accumulation of global histone crotonylation and the H3K4Cr mark, while the histone H3K4Ac and H3K4Me3 levels were unaltered (Figure 5C, D), suggesting that Sirt3 selectively targets histone crotonylation. Interestingly, the crotonylation level on H3K27 was not influenced by the knockdown of Sirt3, which may be explained by the observation that Sirt3 showed weaker activity towards the H3K27Cr peptide in vitro (Figure 4-figure supplement 3). It should be noted that Sirt3 had been found to localize predominantly to mitochondria and to be mainly involved in metabolic regulation through controlling protein acetylation dynamics. However, recent evidence has suggested that Sirt3 can also be present in the nucleus in its full length form (Scher et al., 2007;Iwahara et al., 2012). Indeed, using an antibody that targets the N-terminal region of Sirt3, we detected endogenous full length Sirt3 in the nucleus of HeLa cells by both immunofluorescence and western blotting analyses (Figure 5-figure supplement 3). Taken together, these data suggest that endogenous Sirt3 can function as an 'eraser' enzyme to regulate histone crotonylation dynamics in cells.
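To connect the kinetic numbers to observable rates: at substrate concentrations well below Km, the initial rate is approximately v ≈ (kcat/Km)·[E]·[S], so the ~40-fold difference in kcat/Km between wild-type Sirt3 and F180L maps directly onto a ~40-fold rate difference. In the sketch below, only the F180L value (21 s−1 M−1) comes from the text; the wild-type value is reconstructed as roughly 40-fold higher, and the assay concentrations are arbitrary examples.

```python
# Low-substrate approximation of the Michaelis-Menten rate, v ≈ (kcat/Km) * [E] * [S].
# F180L efficiency is from the text; the wild-type value is reconstructed as ~40x higher.
kcat_over_km = {                       # catalytic efficiency, s^-1 M^-1
    "Sirt3 WT (reconstructed)": 21.0 * 40.0,
    "Sirt3 F180L":              21.0,
}

enzyme_M, substrate_M = 1e-6, 10e-6    # example assay concentrations (M), [S] << Km
for enzyme, efficiency in kcat_over_km.items():
    v = efficiency * enzyme_M * substrate_M          # M/s
    print(f"{enzyme}: kcat/Km = {efficiency:.0f} s^-1 M^-1, v ≈ {v * 1e9:.2f} nM/s")
```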
Sirt3 regulates histone Kcr levels and gene expression at its defined chromatin regions Finally, we sought to determine the potential biological consequence of histone decrotonylation mediated by Sirt3. It has been reported that Sirt3 can bind to chromatin and cause repression of neighboring genes in U2OS cells (Iwahara et al., 2012). We therefore hypothesized that Sirt3 could regulate gene transcription by controlling local histone Kcr levels. To test this hypothesis, we focused on seven candidate genes, Baz2a, Brip1, Corin, Ptk2, Tshz3, Wapal, and Zfat, whose transcription start sites are close to the Sirt3-enriched region. Chromatin immunoprecipitation (ChIP) coupled with quantitative PCR (qPCR) was performed in U2OS cells with the pan anti-Kcr antibody to measure Kcr levels near the transcription start sites of the candidate genes. As shown in Figure 5E, Sirt3 knockdown by siRNA resulted in significant increases in Kcr levels for five of the seven genes analyzed, indicating that Sirt3 may directly regulate crotonylation dynamics at the genomic loci where it binds. Interestingly, the mRNA levels of three of the candidate genes with increased Kcr levels, Ptk2, Tshz3, and Wapal, were also increased upon Sirt3 knockdown (Figure 5F). Given that histone Kcr is enriched at active gene promoters and potential enhancers, this positive correlation between the gene transcription level and the nearby histone Kcr level on Sirt3 knockdown suggests that Sirt3 might relieve a repressive effect on these target genes through 'erasing' histone Kcr 'marks'. Discussion We have established a robust chemical proteomics approach to comprehensively profile histone decrotonylases. There have been important advances in our ability to detect PTMs. However, we currently lack reliable methods to identify, without bias, enzymes that regulate the addition and removal of PTMs, as interactions between PTMs and their regulating enzymes can be weak and transient, thereby limiting the applicability of conventional biochemical 'pull-down' methods. Our CLASPI approach overcame this difficulty by applying photo-cross-linking chemistry to convert weak and transient enzyme-PTM interactions into irreversible covalent linkages, and enabled a systematic profiling of the 'erasers' of protein PTMs. The present study has also largely broadened the scope of the CLASPI technology from finding PTM 'readers' (Li et al., 2012;Li and Kapoor, 2010), which are usually involved in relatively more stable protein-protein interactions, to identifying dynamic and transient interactions between PTMs and their 'erasers'. We anticipate that this approach can be used to comprehensively profile 'erasers' of other PTMs, such as arginine demethylases. Sirtuins were initially recognized as NAD-dependent deacetylases (Imai et al., 2000;Landry et al., 2000;Sauve et al., 2006). However, emerging evidence revealed that some sirtuins that displayed weak deacetylation activity had substrate specificity towards other acyl groups attached to lysine residues. For example, Lin et al. recently demonstrated that Sirt5 can preferentially hydrolyze malonyl and succinyl lysine (Du et al., 2011;Peng et al., 2011), and that Sirt6 can remove long chain fatty acyl groups (e.g., the myristoyl group) from lysine residues. In this study, we demonstrated that the three human sirtuins, Sirt1-Sirt3, catalyzed the hydrolysis of crotonyl lysine.
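For readers unfamiliar with the readout, ChIP-qPCR signals of this kind are commonly expressed as percent of input chromatin. The sketch below shows that standard calculation with invented Ct values; the paper reports fold changes in Kcr levels but does not give its raw Ct data, and its exact normalization scheme may differ.

```python
# Common ChIP-qPCR quantification (percent of input): correct the input Ct for the
# fraction of chromatin kept as input, then express the IP signal as a percentage
# of total input chromatin. Ct values below are invented for illustration.
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)

samples = {"siControl": (26.5, 20.0), "siSirt3": (25.2, 20.0)}   # (IP Ct, input Ct)
for name, (ct_ip, ct_in) in samples.items():
    print(f"{name}: {percent_input(ct_ip, ct_in):.3f}% of input")
```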
This newly discovered decrotonylase activity broadens the landscape of PTMs that are targeted by sirtuins, and it also provides new impetus to investigate the cellular mechanisms and functions of Sirt1-Sirt3, which to date have been considered solely as deacetylases. This finding is also partially in agreement with the work of Denu and coworkers, in which only Sirt1 and Sirt2 exhibited decrotonylase activity towards a histone H3K9Cr peptide, whereas Sirt3 was totally inactive, in their radioactive [32P]-NAD thin layer chromatography assay. In contrast, Sirt3 displayed robust decrotonylase activity against a variety of crotonylated histone peptides, including an H3K9Cr peptide, in this study (Figure 4). Given that the activity of Sirt3 can be peptide sequence-dependent (Figure 4), this discrepancy may be caused by the different H3K9Cr peptide substrates used in Denu's study and this study, which consisted of amino acid residues 5-13 and 1-15 of histone H3, respectively. We have demonstrated that endogenous Sirt3 functions as an 'eraser' to regulate histone crotonylation in cells. This finding opens new opportunities to investigate the cellular mechanisms and functions of histone crotonylation. Although the knockdowns of Sirt1 and Sirt2 did not cause accumulation of global histone or H3K4 crotonylation (Figure 5-figure supplement 2), we cannot rule out the possibility that these two sirtuins could target other histone crotonylation sites. Future studies are therefore needed to systematically profile the mammalian crotonylome and analyze the lysine crotonylation sites that are targeted by Sirt1, Sirt2, and Sirt3, by comparing the corresponding wild-type and genetic knockout cells or tissues in conjunction with quantitative proteomics approaches. The seven human sirtuins have distinct subcellular localizations. Sirt1, Sirt6, and Sirt7 are in the nucleus, Sirt3-Sirt5 localize to the mitochondria, and Sirt2 is primarily found in the cytoplasm (Houtkooper et al., 2012). However, Sirt3, in its full length form, has recently been found in the nucleus, and nuclear Sirt3 can associate with chromatin and result in repression of nearby genes (Scher et al., 2007;Iwahara et al., 2012). Based on the focused analysis of several Sirt3 target gene loci, the current study suggests a potential correlation between the transcriptional upregulation and the increase in local histone Kcr levels upon Sirt3 knockdown. It also generates the hypothesis that Sirt3 could lead to gene silencing through 'erasing' Kcr at target genes. Testing this hypothesis and examining the correlation between Sirt3-catalyzed histone decrotonylation and gene expression genome-wide will require comprehensive profiling of global histone Kcr and of gene expression regulated by Sirt3 using ChIP coupled to high throughput sequencing, in combination with RNA sequencing, in future studies. In addition, the same type of PTM at different modification sites of histones may have distinct effects on gene expression. For example, trimethylation at histone H3 Lys-4 (H3K4Me3) 'marks' genes that are being actively transcribed, whereas the same modification at H3 Lys-27 (H3K27Me3) 'marks' transcriptionally silent chromatin (Martin and Zhang, 2005). By analogy, it is possible that crotonylation at specific lysine sites of histones could also play different roles in the regulation of gene expression. This possibility may account for the fact that the transcription of the two genes (i.e., Brip1 and Zfat) with elevated Kcr levels was not influenced in our study.
The study of the effects of site-specific histone Kcr 'marks' (e.g., H3K4Cr) targeted by Sirt3 on the regulation of gene expression is an important next step. Instrumentation In-gel fluorescence scanning was performed using a Typhoon 9410 variable mode imager (excitation 532 nm, emission 580 nm). Isothermal titration calorimetry measurements were performed on a MicroCal iTC200 titration calorimeter (Malvern Instruments, United Kingdom). Peptides were purified on a preparative HPLC system. Peptide synthesis and purification All peptides were synthesized on Rink-Amide MBHA resin following a standard Fmoc-based solid phase peptide synthesis protocol. Removal of protecting groups and cleavage of peptides from the resin were done by incubating the resin with a cleavage cocktail containing 95% trifluoroacetic acid (TFA), 2.5% triisopropylsilane, 1.5% water, and 1% thioanisole for 2 hr. Peptides were purified by preparative HPLC with an XBridge Prep OBD C18 column (30 mm × 250 mm, 10 μm; Waters). The mobile phases used were water with 0.1% TFA (buffer A) and 90% acetonitrile (ACN) in water with 0.1% TFA (buffer B). Peptides containing the photo-cross-linker (benzophenone) were eluted with a gradient of 15-40% buffer B over 40 min; all other peptides were eluted with a gradient of 5-35% buffer B over 40 min. The elution rate was 15 mL/min. The purity and identity of the peptides were verified by LC-MS. Stable isotope labeling of amino acids in cell culture HeLa S3 cells were grown in suspension at 37 °C in a humidified atmosphere with 5% CO2 in DMEM medium (-Arg, -Lys; Life Technologies) containing 10% dialyzed fetal bovine serum (Life Technologies) and penicillin-streptomycin, supplemented with 22 mg/L 13C6,15N4-L-arginine (Cambridge Isotope Laboratories, Tewksbury, MA) and 50 mg/L 13C6,15N2-L-lysine (Cambridge Isotope) or the corresponding non-labeled amino acids (Peptide International, Louisville, KY). Harvested cell pellets were washed with ice cold phosphate buffered saline (PBS) and frozen in liquid N2. The cell powder, ground with a ball mill (Retsch MM301), was stored at −80 °C until use. Preparation of whole-cell lysates for CLASPI experiments To prepare whole-cell lysates, the frozen cell powder was first resuspended in a hypotonic buffer (10 mM HEPES, pH 7.5, 2 mM MgCl2, 0.1% Tween-20, 20% glycerol, 2 mM phenylmethylsulfonyl fluoride (PMSF), and Roche Complete EDTA-free protease inhibitors) and incubated for 10 min at 4 °C. The suspension was centrifuged at 16,000×g for 15 min at 4 °C and the supernatant was kept for later use. The pellet was resuspended in a high salt buffer (50 mM HEPES, pH 7.5, 420 mM NaCl, 2 mM MgCl2, 0.1% Tween-20, 20% glycerol, 2 mM PMSF, and Roche Complete EDTA-free protease inhibitors) and incubated for 30 min at 4 °C. The suspension was centrifuged at 16,000×g for 15 min at 4 °C, and the supernatant was combined with the soluble fraction in hypotonic buffer to give the whole-cell lysates. CLASPI photo-cross-linking In a 'selectivity filter' experiment, probe 1 and probe C were incubated with heavy and light SILAC whole-cell lysates, respectively, in the binding buffer (50 mM HEPES, pH 7.5, 168 mM NaCl, 2 mM MgCl2, 0.1% Tween-20, 20% glycerol, 2 mM PMSF, and Roche Complete EDTA-free protease inhibitor cocktail) for 15 min at 4 °C. The samples were then irradiated at 365 nm using a Spectroline ML-3500S UV lamp for 15 min on ice.
In 'an affinity filter experiment', the heavy and light SILAC lysates were reacted with probe 1 in the absence and presence, respectively, of H3K4Cr (1-15) peptide (30 μM) as a competitor. After photo-cross-linking, the heavy and light lysates were pooled. Streptavidin affinity enrichment of biotinylated proteins After the click chemistry with cleavable biotin-azide, the reaction was quenched by adding 4 volumes of ice cold acetone to precipitate the proteins. After washing with ice cold methanol twice, the air dried protein pellet was dissolved in PBS with 4% SDS, 20 mM EDTA, and 10% glycerol by vortexting and heating. The solution was then diluted with PBS to give a final concentration of SDS of 0.5%. High capacity streptavidin agarose beads (Thermo Fisher Scientific) were added to bind the biotinylated proteins with rotating for 1.5 hr at room temperature. To remove non-specific binding, the beads were washed with PBS with 0.2% SDS, 6 M urea in PBS with 0.1% SDS, and 250 mM NH 4 HCO 3 with 0.05% SDS. The enriched proteins were then eluted by incubating with 25 mM Na 2 S 2 O 4 , 250 mM NH 4 HCO 3 , and 0.05% SDS for 1 hr. The eluted proteins were dried down with SpeedVac. Sample preparation for mass spectrometry The dried proteins were resuspended in 30 μL of lithium dodecyl sulfate sample loading buffer (Life Technologies) with 50 mM dithiothreitol (DTT), heated at 75 °C for 8 min, and then reacted with iodoacetamide in the dark for 30 min to alkylate all of the reduced cysteines. Proteins were then separated on a Bis-Tris gel, followed by fixation in a 50% methanol/7% acetic acid solution. The gel was stained by GelCode Blue stain (Pierce). The diced 1 mm (Goldberg et al., 2007) cubes of gels were then destained by incubating with 50 mM ammonium bicarbonate/50% acetonitrile for 1 hr. The destained gel cubes were dehydrated in acetonitrile for 10 min and rehydrated in 25 mM NH 4 HCO 3 with trypsin for protein digestion at 37 °C overnight. The resulting peptides were enriched with StageTips. The peptides eluted from the StageTips were dried down by SpeedVac and then resuspended in 0.5% acetic acid for analysis by LC-MS/MS. Mass spectrometry Mass spectrometry was performed on an LTQ-Orbitrap Velos mass spectrometer (Thermo Fisher Scientific). First, peptide samples in 0.1% formic acid were pressure loaded onto a self-packed PicoTip column (New Objective, Woburn, MA) (360 μm od, 75 μm id, 15 μm tip), packed with 7-10 cm of reverse phase C18 material (ODS-A C18 5-μm beads from YMC America, Allentown, PA), rinsed for 5 min with 0.1% formic acid, and subsequently eluted with a linear gradient from 2% to 35% B for 150 min (A=0.1% formic acid, B=0.1% formic acid in ACN, flow rate ∼200 nL/min) into the mass spectrometer. The instrument was operated in a data-dependent mode, cycling through a full scan (300-2000 m/z, single μscan) followed by 10 CID MS/MS scans on the 10 most abundant ions from the immediate preceding full scan. Cations were isolated with a 2 Da mass window and set on a dynamic exclusion list for 60 s after they were first selected for MS/MS. The raw data were processed and analyzed using MaxQuant (version 1.2.2.5). A human fasta file (ipi.HUMAN.v.3.68.fasta) was used as the protein sequence searching database. Default parameters were adapted for protein identification and quantification. In particular, parent peak MS tolerance was 6 ppm, MS/MS tolerance was 0.5 Da, minimum peptide length was 6 amino acids, and maximum number of missed cleavages was 2. 
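As a downstream illustration (not part of the original methods), the heavy/light ratios reported by MaxQuant for the 'selectivity filter' and 'affinity filter' experiments can be combined to shortlist candidate probe-binding proteins. The sketch below assumes a hypothetical tab-separated proteinGroups export with made-up column names ('Ratio H/L selectivity', 'Ratio H/L affinity') and an arbitrary enrichment cutoff; the actual column layout and thresholds depend on the MaxQuant version and study design.

```python
import pandas as pd

# Hypothetical MaxQuant proteinGroups export; the file name and column names
# are placeholders, not the actual output used in this study.
groups = pd.read_csv("proteinGroups.txt", sep="\t")

RATIO_CUTOFF = 2.0  # illustrative heavy/light enrichment threshold

# Require quantification in both experiments, mirroring the criterion that
# only proteins identified and quantified in all experiments are reported.
quantified = groups.dropna(subset=["Ratio H/L selectivity", "Ratio H/L affinity"])

# Keep proteins enriched by probe 1 over the control probe ('selectivity filter')
# and whose labeling is blocked by the H3K4Cr competitor ('affinity filter').
candidates = quantified[
    (quantified["Ratio H/L selectivity"] >= RATIO_CUTOFF)
    & (quantified["Ratio H/L affinity"] >= RATIO_CUTOFF)
]

print(candidates["Protein IDs"].tolist())
```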
The proteins quantified were supported by at least two quantification events. Both the 'selectivity filter' and 'affinity filter' experiments were repeated twice, and only the proteins that were identified and quantified in all experiments were reported. In-gel fluorescence visualization The click chemistry reactions were quenched by adding 1 volume of 2×sample buffer. The proteins were heated at 85 °C for 8 min, and resolved by SDS-PAGE. The labeled proteins were visualized by scanning the gel on a Typhoon 9410 variable mode imager (excitation 532 nm, emission 580 nm). Expression and purification of recombinant human sirtuins Plasmids of , and Sirt6 (1-314) for Escherichia coli expression were generated as previously described (Finnin et al., 2001;Du et al., 2011;Hubbard et al., 2013;Jiang et al., 2013). Plasmids of Sirt3 (102-399) cloned in pTrcHis 2C vector for E. coli expression and full length Sirt3 (wide-type and mutant H248Y) cloned into pcDNA3.1 vector for mammalian cell expression were generous gifts from Dr Eric Verdin (University of California, San Francisco). Sirt3 mutant F180L was generated by site directed mutagenesis. All of the proteins were expressed in E. coli Rosetta cells. To induce expression of target proteins, isopropyl β-D-1-thiogalactopyranoside was added to a final concentration of 0.2 mM when OD 600 reached 0.6, and the culture was grown at 15 °C (Sirt3 at 25 °C) for 16-18 hr. Cells were harvested and resuspended in lysis buffer A (50 mM Tris-HCl, pH 7.5, 500 mM NaCl, 1 mM PMSF, and Roche EDTA free protease inhibitors, for Sirt1, Sirt2, and Sirt6) or buffer B (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM PMSF, and Roche EDTA free protease inhibitors, for Sirt3 and Sirt5). Following sonication and centrifugation, the supernatant was loaded onto a nickel column pre-equilibrated with lysis buffer. The column was washed with 5 column volumes of wash buffer (lysis buffer with 30 mM imidazole) and then the target proteins were eluted with elution buffer (lysis buffer with 250 mM imidazole). After purification, Sirt2 was digested by UPL1 at 4 °C overnight and purified by a Highload 26/60 Superdex75 gel filtration column (GE Healthcare Life Sciences, United Kingdom). Sirt6 was purified by SP column and Superdex75 gel filtration column. Others were loaded onto a Superdex75 gel filtration or Highload 26/60 Superdex200 (for Sirt1) column. After concentration, the target proteins were frozen and stored at −80 °C. Isothermal titration calorimetry measurements Experiments were performed at 25 °C on a MicroCal iTC200 titration calorimeter (Malvern Instruments). The reaction cell containing 200 μL of 100-200 μM proteins was titrated with 17 injections (firstly 0.5 μL, and all subsequent injections 2 μL of 1.5-2.5 mM peptides). The binding isotherm was fit with Origin 7.0 software package (OriginLab, Northampton, MA) that uses a single set of independent sites to determine the thermodynamic binding constants and stoichiometry. Crystallization, X-ray data collection, and structure determination Sirt3/H3K4Cr mixtures were prepared at a 1:20 protein/peptide molar ratio and incubated for 60 min on ice. Crystals of Sirt3 (102-399) complexed with H3K4Cr (1-10) peptide were obtained by the hanging drop vapor diffusion method at 291 K using commercial screens from Hampton Research (Aliso Viejo, CA). 
Each drop, consisting of 1 μL of 10 mg/mL protein complex solution (20 mM Tris-HCl, pH 7.4, 100 mM NaCl, and 5 mM DTT) and 1 μL of reservoir solution, was equilibrated against 400 μL of reservoir solution. The qualified crystals of Sirt3 grew with a cube profile within 1 week with a reservoir containing 12% PEG4K, 0.1 M sodium malonate, pH 6.5, and 5% isopropanol. The mixture of 25% glycerol with the reservoir solution above was used as the cryogenic liquor. The X-ray diffraction data were collected at 100 K in a liquid nitrogen gas stream using the Shanghai Synchrotron Radiation Facility beamline 17U (λ = 0.9791 Å). A total of 120 frames were collected with a 1° oscillation and the data were indexed and integrated using the program HKL2000 (Otwinowski and Minor, 1997). The complex structure of Sirt3 with H3K4Cr peptide was solved by molecular replacement using the program Molrep from the CCP4 Suit (Collaborative Computational Project, Number 4, 1994), with the published Sirt3 structure (PDB: 3GLR) (Jin et al., 2009) as the search model. Refinement and model building were performed with REFMAC5 and COOT from CCP4. The X-ray diffraction data collection and structure refinement statistics are shown in Supplementary file 1. Enzymatic reactions The enzymatic activities of human sirtuins were measured by detecting the removal of the crotonyl group from peptides (Du et al., 2011). Sirtuin protein (5 μM) was incubated with 500 μM of corresponding crotonylated peptides and 1 mM of NAD in a reaction buffer containing 20 mM Tris-HCl buffer (pH 7.5) and 1 mM DTT at 37 °C for 2 hr. The reactions were stopped by adding one-third reaction volume of 20% TFA and immediately frozen in liquid N 2 . For Sirt3, samples without NAD or without enzyme were treated under the same conditions as the controls. Samples were then analyzed by LC-MS with a Vydac 218TP C18 column (4.6 mm×250 mm, 5 μm; Grace Davison, Columbia, MD). Mobile phases used were 0.05% TFA in water (buffer A) and 0.05% TFA in ACN (buffer B). The flow rate for LC was 0.6 mL/min. The peptide mixtures were eluted by buffer A for 10 min and then 0-30% buffer B over 10 min. MS started to record at 10 min for each injection. Determination of k cat and K m Enzyme was incubated with different concentrations of corresponding peptides bearing two tryptophans at the C terminus (20,40,60,80,100,200, and 500 μM) and 1.0 mM NAD in 20 mM Tris-HCl buffer (pH 7.5) containing 1 mM DTT in 25 μL reaction at 37°C for a certain period of time within the initial linear range. The enzyme concentration and reaction time used were: Sirt3-H3K4Ac: 1 μM enzyme, 5 min; Sirt3-H3K4Cr: 1 μM enzyme, 20 min; Sirt3 (F180L)-H3K4Ac: 0.8 μM enzyme, 5 min; and Sirt3 (F180L)-H3K4Cr: 5 μM enzyme, 20 min. The reactions were stopped by adding one-third reaction volume of 20% TFA and immediately frozen in liquid N 2 . Samples were then analyzed by HPLC with a Vydac 218TP C18 column (4.6 mm×250 mm, 5 μm; Grace Davison). Mobile phases used were water with 0.1% TFA (buffer A) and 90% ACN in water with 0.1% TFA (buffer B). The wavelength for UV detection was 280 nm. The analysis gradient for deacetylation samples was 16% buffer B for 20 min with a flow rate at 1.5 mL/min. The analysis gradient for decrotonylation samples was 15-35% buffer B in 12 min with a flow rate at 1.0 mL/min. Detection of O-Cr-ADPR Sirt3 (5 μM) was incubated with 500 μM of H3K4Cr (1-15) peptide and 1 mM of NAD in a reaction buffer containing 20 mM Tris-HCl buffer (pH 7.5) and 1 mM DTT at 37°C for 2 hr. 
The reactions were stopped by immediately freezing in liquid N 2 . Sample was then analyzed by LC-MS with a VisionHT C18 column (2.1 mm×150 mm, 3 μm; Grace Davison) on an Agilent 1260 Infinity HPLC system, followed by Thermo Fisher Scientific LCQ DecaXP MS Detector. Mobile phases used were 0.02% TFA in water (buffer A) and 90% ACN in water with 0.02% TFA (buffer B). The flow rate for LC was 0.2 mL/min. The sample was eluted by buffer A for 10 min and then 0-10% buffer B over 10 min. The wavelength for UV detection was 260 nm. MS started to record at 10 min. RNAi experiments Sirt1 siRNA 15 nM (Santa Cruz Biotechnologies), Sirt2 siRNA 30 nM (Thermo Fisher Scientific), or Sirt3 siRNA 30 nM (Thermo Fisher Scientific) was transfected into a HeLa cell line with DharmaFECT 1 Transfection Reagent (Thermo Fisher Scientific), according to the manufacturer's instructions. Corresponding concentrations of control siRNA were used as negative controls. Following transfection, cells were then maintained in a humidified 37 °C incubator with 5% CO 2 for another 48 hr (for Sirt1 and Sirt2) or 72 hr (for Sirt3). Histone extraction An acid extraction method was used to isolate histones from HeLa S3 cells (Shechter et al., 2007). Briefly, the harvested HeLa S3 cell pellet was resuspended with lysis buffer (10 mM Tris-HCl pH 8.0, 1 mM KCl, 1.5 mM MgCl 2 , 1 mM DTT 2 mM PMSF, and Roche Complete EDTA free protease inhibitors) and incubated at 4 °C by rotating for 1 hr. The intact nuclei were pelleted by centrifuging at 10,000×g for 10 min at 4 °C. To extract histones, 0.4 N H 2 SO 4 was added to resuspend the nuclei, followed by rotating at 4°C overnight. After centrifuging to remove the nuclei debris, histones were precipitated by adding 100% trichloroacetic acid drop by drop (trichloroacetic acid final concentration 33%). The precipitated histones were pelleted at 16,000×g for 10 min at 4 °C and washed with ice cold acetone twice. The air dried protein pellet was dissolved with ddH 2 O and stored at −80 °C for later use. On-membrane decrotonylation experiment HeLa S3 whole-cell lysate (20 μg) or 5 μg of extracted histones were resolved by SDS-PAGE gel and transferred to PVDF membranes. The membranes were incubated with or without 0.1 μM of Sirt3 in reaction buffer (25 mM Tris-HCl, 130 mM NaCl, 3 mM KCl, 1 mM MgCl 2 , and 1 mM DTT, pH 7.5) containing 1 mM NAD at 37 °C for 2 hr. Immunofluorescence HeLa cells grown on coverslips were fixed with 3.7% polyformaldehyde in PBS, permeabilized with 0.1% Triton X-100 in PBS, and blocked for 30 min at room temperature using 5% bovine serum albumin (dissolved with PBS containing 0.1% Triton X-100). Cells were incubated with primary antibody overnight at 4°C and washed trice with PBST (0.1% Tween-20 in PBS) prior to secondary antibody (containing DAPI for nucleus staining) incubation at room temperature for 1 hr. Washed cells were then subjected to a Zeiss LSM 510 laser scanning confocal microscope. Subcellular fractionation In brief, HeLa cells were harvested by centrifugation and washed with PBS twice; all subsequent steps were performed at 4 °C. Cells were then suspended in 5 cell pellet volumes of buffer A (10 mM HEPES, pH 7.9 at 4 °C, 1.5 mM MgCl 2 , 10 mM KCl, and 0.5 mM DTT) followed by incubation for 10 min. After centrifugation, cells were resuspended in 2 cell pellet volumes of buffer A and lysed by Dounce homogenizer (B type pestle) with homogenate checked by microscopy. 
The cell lysis was layered over 30% sucrose in buffer A and then centrifuged for 15 min at 800×g. The resulting pellet was recovered from the sucrose phase, washed by buffer A twice, and then extracted by buffer C (20 mM HEPES, pH 7.9, 25% (vol/vol) glycerol, 0.42 M NaCl, 1.5 mM MgCl 2 , 0.2 mM EDTA, 0.5 mM PMSF, and 0.5 mM DTT) for 30 min at 4 °C. After centrifugation at 12,000×g for 30 min, the supernatant was termed the nuclear fraction. The resulting supernatant was centrifuged twice at 800×g to complete the pellet nuclei and intact cell. The supernatant was then centrifuged at 7,000×g to pellet the mitochondria followed by washing twice with buffer A. The mitochondria were then lysed by TXIP-1 buffer (1% Triton X-100 (vol/vol), 150 mM NaCl, 0.5 mM EDTA, and 50 mM Tris-HCl, pH 7.4). Protein concentration was determined by BCA assay. Immunoblotting Proteins separated by SDS-PAGE were transferred onto a PVDF membrane which was then blocked (5% non-fat dried milk and 0.1% Tween-20 in PBS) for 1 hr at room temperature. The membrane was incubated with primary antibody diluted in PBST with 2% bovine serum albumin, followed by washing with PBST for 5 min trice, incubated with goat anti-rabbit-horseradish peroxidase conjugated secondary antibody (1:20000; Santa Cruz Biotechnologies), or rabbit anti-mouse-horseradish peroxidase conjugated secondary antibody (1:5000; Santa Cruz Biotechnologies) diluted in PBST for 1 hr at room temperature, and then visualized with western blotting detection reagents (Thermo Fisher Scientific). Gene expression analysis Total RNA was isolated using TRIzol Reagent (Life Technologies). RNA was reverse transcribed into cDNA by M-MLV Reverse Transcriptase (Life Technologies) using oligo (dT) primers. qPCR was performed using Power SYBR Green PCR Master Mix (Life Technologies) on an ABI StepOnePlus system following the manual's instructions. All primers used are listed in Supplementary file 2. ChIP and qPCR Cells were cross linked by 1% formaldehyde for 10 min and quenched by 0.125 M glycine for 5 min at room temperature. Cells were then lysed by ChIP lysis buffer (5 mM PIPES pH 8.0, 85 mM KCl, and 1% IGEPAL CA-630) and homogenized using a glass Dounce homogenizer (type B pestle). The nuclear fraction was precipitated and lysed in nuclei lysis buffer (50 mM Tris-HCl, pH 8.0, 10 mM EDTA, and 1% SDS) for 30 min at 4 °C. The nuclear lysis was sonicated to a chromatin ranging from 600 bp to 800 bp. Immunoprecipitation was done in immunoprecipitation dilution buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1% IGEPAL CA-630, 0.25% deoxycholic acid, and 1 mM EDTA) using Dynabeads coupled with Protein G (Life Technologies). Chromatin (5 μg) and 8 μg of pan anti-crotonyllysine antibody were used for each ChIP reaction. Chromatin complex was eluted from beads by ChIP elution buffer (50 mM NaHCO 3 and 1% SDS) and added to 5 M NaCl to a final concentration of 0.54 M. To reverse cross links of protein/DNA complex to free DNA, samples were incubated at 65 °C for 2 hr followed by 95 °C for 15 min. After incubation with RNase (Thermo Fisher Scientific) for 20 min at 37 °C, DNA was recovered and used for qPCR, as described above. All primers used are listed in Supplementary file 2.
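For orientation only, the relative quantification behind the qPCR readouts can be sketched as follows: a generic 2^-ddCt calculation for the expression analysis and a percent-of-input calculation for the ChIP-qPCR. The Ct values and the 1% input fraction below are made up for illustration and are not taken from this study; both formulas assume roughly 100% primer efficiency.

```python
import math

def delta_delta_ct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression (fold change) by the 2^-ddCt method,
    normalizing the target gene to a reference gene in each condition."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_treated - d_ct_control))

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """ChIP signal as percent of input; the input Ct is first adjusted
    for the fraction of chromatin set aside as input (assumed 1% here)."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Illustrative numbers only.
print(delta_delta_ct(24.1, 18.3, 25.6, 18.2))   # fold change, e.g. knockdown vs control
print(percent_input(ct_ip=28.5, ct_input=24.0))  # Kcr ChIP enrichment at a target locus
```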
The origin of the stingless bee species described by Frederick Smith from Brazilian specimens brought to the London International Exhibition of 1862 ABSTRACT For a long time, the provenance of the specimens used by Frederick Smith to describe the species of stingless bees from Brazil remained a mystery. The recent digitalization of 19th century publications has made it possible to trace the origin of the material brought to the London International Exhibition of 1862 by the Brazilian delegation. We document that the bee specimens shown at the International Exhibition, and that served as the type material of the species described by Smith, were collected by Manuel Ferreira Lagos, head of the Zoology section of the Comissão Científica de Exploração, during their stay in Ceará, from 1859 to 1861. Even if late, it is important to give due credit to the Comissão Científica de Exploração, and more specifically to Lagos, for their contribution to the knowledge of the stingless bee fauna from Brazil. Frederick Smith (1805–1879) was a prolific English zoologist at the British Museum, who contributed massively to the description of hymenopteran species from around the globe during the 19th century (Dunning, 1879). In a paper entitled "Descriptions of Brazilian Honey Bees belonging to the Genera Melipona and Trigona, which were exhibited, together with Samples of their Honey and Wax, in the Brazilian Court of the International Exhibition of 1862", published in 1863, Smith recorded a total of 16 species of stingless bees and seven of social paper wasps based on material presented at the London International Exhibition of 1862. For the newly proposed taxa, Smith adopted the vernacular names as species epithets, with the exception of his Trigona mellea and T. recursa. He does not provide details of the provenance of the specimens and probably considered it enough to know that they came from Brazil. Earlier authors, such as Ducke (1916) and Schwarz (1932, 1948), paid little attention to the origin of the specimens brought to the London Exhibition. An exception was the short note made by Ducke (1910, p. 109) about the type material of Smith's Trigona tataira: "The types of Smith are from Ceará" ("Les types de Smith sont du Ceara", in the original). Apparently unaware of Ducke's statement, Camargo and Moure (1996), in their revisionary work on Geotrigona, took the matter into consideration when discussing the identity of G. mombuca, one of the species described in the work of Smith. They reached the conclusion that the specimens had been collected somewhere in southeastern Brazil, most likely in eastern Minas Gerais, considering that their vernacular names derived from the Tupi language (Camargo and Moure, 1996, p. 110). This same assumption was followed by Pedro and Camargo (2003), when interpreting the identity of Partamona cupira, and by Melo (2003) in relation to Melipona mondury, additional species described by Smith (1863).
The first author has recently come across a digital copy of the report about Brazil's participation in the International Exhibition of London, prepared by Francisco Ignacio de Carvalho Moreira, Baron of Penedo, and published in 1863. Carvalho Moreira was assigned president of the Brazilian committee sent to represent the country during the event and made the report at the request of Dom Pedro II, Emperor of Brazil. The bees were treated in a section of their own of the "Annexo XXI", written by John Miers, who served as a juror in the Brazilian section of the International Exhibition. On pages 67−69, Miers mentions that the wax and honey brought to the Exhibition were produced by different qualities of bees from the "Provincia do Ceará". He then states that specimens of these same bees, as well as of honey-producing wasps, were also brought to the Exhibition. He also reports having shown these specimens to Frederick Smith, who offered to study them and publish a memoir on these Brazilian insects. Miers' report contains a list of the material shown in the Exhibition which, between bees and wasps, amounts to a total of 24 samples (see Table 1). In the same document, the "Annexo XLIV" deals with the awards received by the Brazilian exhibitors, where the honey, wax and bees figure again, now with the exhibitor in charge named: "excellent wax and bees that produce it" exhibited by M. F. Lagos and awarded a medal, and "collection of bees that produces wax" by the same exhibitor, rewarded with an honorable mention (both on page 507). In order to assemble, inventory and select national products to be sent for exposition at London, the Brazilian Empire, in partnership with two private institutions, organized a Brazilian exposition that took place in December of 1861, at Rio de Janeiro, Capital of the Empire (Cunha, 1862; Martins, 2020). All Brazilian provinces were invited to contribute by sending and cataloguing local products, either natural or manufactured. The products would then be exhibited and evaluated by a specialized jury, divided into five categories, which yielded the reports compiled and published by Antonio Luiz Fernandes da Cunha, in 1862. The bees and their samples of honey and wax appear in the report about the agricultural industry, which mentions that it was up to Mr.
Manoel Ferreira Lagos the glory of presenting the best collection that has been seen of this genre; this gentleman exhibited no less than 24 different species or varieties of bees from Ceará, preserved dry in a beautiful frame, and in vials with alcohol; they were accompanied by the respective samples of wax and honey. (Cunha, 1862, p. 146). It is also mentioned that Mr. Lagos received an honorable mention, especially due to his collection of bees and their products. In Cunha's (1862) report, the section about the manufacturing industry, written by Luiz Cypriano Pinheiro de Andrade, lists some of the products chosen to be exhibited at the Universal Exposition, among them a board with 24 species of bees, from Ceará; 23 vials with prepared bees, from the same province; samples of different species of wax from these same bees, and different species of honey. (Cunha, 1862, pp. 314−315). Apparently, the board was exhibited in London exactly as it was in Brazil, agreeing with the list provided in Miers' report in Carvalho-Moreira (1863). It is worth mentioning that seven of the 24 species cited as indigenous bees are in fact social wasps that Smith (1863) attributed to the genera Apoica, Nectarina and Polybia (Table 1). Also, as far as we could investigate (see details in Table 1), many of the specimens studied by Smith (1863) bear a numbering label, which to a certain degree corresponds to the order in which the species are listed in Miers' report and presented in Smith's paper. In addition, some of them bear an additional label in which the vernacular name is accompanied by the same number written in the smaller numbering label, as for example in the types of Trigona jaty and T. mombuca. We believe that these numbers correspond to some sort of numbering system used by Lagos when preparing the board. However, further study of this material will be required to better understand these number labels.
Abbreviations and footnotes to Table 1: T = Trigona; M = Melipona; N = Nectarina; P = Polybia; A = Apoica (see Smith, 1863); NP = no numbering label present in the specimens; NE = specimens not examined. (1) Smith was consistent in writing the species epithets given after vernacular names with a capital letter. (2) In addition to the worker lectotype, Pedro and Camargo (2003, p. 69) also list four additional workers, all of them bearing the numbering label "2". (3) Correct spelling "Arapuá". (4) In addition to the worker lectotype, Camargo and Moure (1996, p. 109) also list two additional males, all of them bearing the numbering label "14". (5) Based on the type specimen of Trigona meadewaldoi Cockerell deposited at the USNM. (6) Originally described by Smith (1854) based on a specimen from "Pará" (= Belém; collected by Bates). (7) The taxon described by Smith (1863) under the name Trigona longipes was given the new name Melipona longicrus by Dalla-Torre (1896, p. 580).
The link between the bee species described by Smith ( 1863) with the information provided by the reports of Carvalho Moreira (1863) and Cunha (1862) reveals that the voucher specimens were collected during a well-known historical expedition in Brazil that took place between 1859 and 1861, conducted by the Comissão Científica de Exploração (Scientific Commission of Exploration, CCE).This scientific commission was proposed by the Instituto Histórico e Geográfico Brasileiro (Brazilian Historical and Geographical Institute, IHGB) in 1856 and approved by the government in the same year.The goal of this commission was to explore and gather information about the most unexplored provinces of Brazil, which led them to travel to the province of Ceará in 1859.To reach this objective, the commission counted with the participation of Brazilian naturalists and engineers (Dias, 1862;Braga, 1962;Teixeira, 2014;Santos, 2020). The CCE was composed of five sections: (1) Geology and mineralogy, (2) Astronomy and geography, (3) Botany, (4) Zoology, and (5) Ethnography and travel narrative.A president was assigned to coordinate the mission, while each section had its own head person.The section's heads were responsible for writing the guidelines for the activities of their respective sections during the travel in the province of Ceará (Dias, 1862).The zoology section, by which the collection of the bees, wax and honey exposed in London was obtained, was headed by Manuel Ferreira Lagos, zoologist and director of the division of comparative anatomy of the Museu Nacional, in Rio de Janeiro.The zoology section was given the assignment to assemble, list and describe the local fauna, including both vertebrates and invertebrates, either exotic or native to the province of Ceará.Whenever possible, the uses of the animals and their products by humans should be described, aiming to increase the knowledge on the natural resources of Brazil. The guidelines prepared by Lagos include a special observation on native bees, referring more specifically to the stingless bees: In collecting Hymenoptera, great attention must be paid to our species of bees, which are numerous, provide wax in abundance, and a somewhat fragrant honey; although it cannot be said that the quality of their products may rival with those of the common bee (Apis mellifica), it would be desirable to try to explore them.(Dias, 1862, p. XXIX). This part of the text evidences the special care given to the bees by Lagos, who aimed to show the relevance of commercially exploring the products of native bees. The results of this directive are dealt in the report prepared by Lagos, read in the session of the IHGB of December 6 th , 1861: Collecting hymenopterans was very profitable, serving as proof the frame shown in the national exposition, that took place recently in this Court, in which 26 species of bees, native to the province of Ceará, were accompanied by samples of honey and wax produced by each of them; some of these honeys have an exquisite flavor, and others are recommended for their medicinal properties.(Lagos, 1862, p. CLXV). In his report, Lagos then ensued: Beekeeping, which could flourish in that province and produce a good profit, is not properly exploited there, and only a few people keep nests out of curiosity or for domestic use: the indigenous wax and the honey that is sold are almost entirely collected in the woods.One wishes to see followed the example of Mr. 
Francisco Alves de Lima, a public teacher at Missão Velha, who in the backyard of his house has gathered about 150 colonies, in tree trunks, of the species of bees called canudo, mandaçaia, tubiba, moça-branca and cupira, whose honey he sells at 320 reis the bottle, also taking advantage of the wax.(Lagos, 1862, p. CLXVI). The CCE's expedition through Ceará lasted two years and by July, 1861, their members had returned to Rio de Janeiro.They brought back around 14,000 plant exsiccates, 17,000 animal specimens, among them circa 12,000 insects and 4,000 bird skins, as well as a large number of indigenous artifacts (Braga, 1962).In spite of the painstaking efforts to carry out the expedition, its results were never gathered in major publications as it was initially intended by the involved participants.This was partly caused by the premature death of a member of the Botany section, health problems faced by the head of this same section, the transference of Lagos and the head of the Geology section to positions in the government, and the diversion of resources from the Brazilian Empire to the Paraguayan War (Teixeira, 2014). As mentioned earlier, many of the objects brought from Ceará by the CCE were shown in the national exposition, in December, 1861, and some of them were chosen to integrate the Brazilian samples taken to the International Exhibition of London, among them the frame prepared by Lagos with the social bees and wasps.Before the national exposition, Lagos exposed in the Museu Nacional many of the objects he assembled during the expedition, in an event open to the public from the 7 th to the 15 th of September (reported in the editions of September 7 th and September 9 th , 1861, of the newspaper "Diário do Rio de Janeiro"; see also Braga, 1962, pp. 115-129).In the first newspaper article, from September 7 th , it is mentioned the collection of "bees" assembled by Lagos in Ceará, in which "one finds exposed eighteen qualities with their honey and their wax" and seven species of wasps, giving a total of 25 species. Little can be said about how the samples of bees and wasps were obtained, since no detailed report was ever published by Lagos.Information on the localities visited by the Zoology section during the expedition comes from reports made by the Botany section, since both delegations remained together for most of the trip, with only a few divergences along the way (Dias, 1862, p. LXVIII).The itinerary of the trips made by the Botany section can be found in the report written by Francisco Freire Alemão, head of this section (Alemão, 1862).Further details of the localities and events can be found in the manuscripts and letters of Freire Alemão, assembled and transcribed by Damasceno and Cunha (Alemão, 1964), and in his transcribed diary published posthumously (Alemão, 2011).The expedition departed from the town of Fortaleza in February of 1859, and, after going through the state, it returned to Fortaleza and then back to the Capital of the empire in July, 1861.Unfortunately, the specific localities where bees were collected could not be found in any of the texts derived from the works of the CCE nor from the manuscripts of Alemão.It is known that the expedition has basically been kept within the boundaries of the province of Ceará, with short visits to two nearby localities in the province of Pernambuco (Fig. 
1).Given this, we can safely assume that all specimens examined by Smith (1863) originated from Ceará, although we cannot completely discard the possibility that some might have been obtained in these nearby localities in Pernambuco.Even if this latter possibility might have occurred, the taxonomic and biogeographic interpretations related to the origin of the specimens would be basically the same. Although most of Ceará is covered by dry savannic steppes (also known as caatinga), other five types of vegetation can be found to a smaller extent within the state's territory (IBGE, 2004).Probably due to the Botany section's interest in maximizing the diversity of plants sampled, all different formations in the state were visited during the trip (see Fig. 1), which may have influenced the diversity of bees sampled as well.Given that samples of wax and honey were present, as well as multiple specimens of each species, it is likely that most or all of the samples were collected directly from colonies.Additional evidence on that direction is the presence of males and of teneral adults (as for example the type material of Trigona mosquito) among the preserved material.According to the vernacular names mentioned by Lagos (1862) for the bees reared in the municipality of Missão Velha (see above), it is possible that at least the type material of Melipona mandacaia (mandaçaia), Trigona cupira (cupira), T. meadewaldoi (moça branca) and T. tubiba (tubiba), as well as the specimens of Scaptotrigona bipunctata (canudo), might have been collected from nests found at this locality.Although the precise provenance of the specimens for each of the species likely will never be known, case-by-case studies might help pinpoint the probable region of Ceará from which the specimens originated.A detailed documentation of the distribution of the stingless bees in the state of Ceará will certainly contribute to reach this goal.Also, possible changes to current taxonomic interpretations involving the taxa proposed by Smith (1863) are being dealt by the authors and will be published soon in forthcoming contributions. At the time, the CCE ended up being seen as a project that did not meet expectations and investments (Teixeira, 2014) and its contributions have gradually fallen into oblivion (Braga, 1962).Even if late, it is important to give due credit to Manuel Ferreira Lagos for his contribution to the knowledge of the stingless bee fauna in Brazil by collecting the material in Ceará.This early contribution of the CCE unfortunately went unnoticed because the origin of the material studied by Smith was not made explicit in his original article. Figure 1 Figure 1 Track of the travels in Ceará of the Botany section of the Comissão Científica de Exploração, during the years of 1859 to 1861.The plant physiognomies were based on IBGE (2004).The localities are indicated under their current names.See text for further details. Table 1 Brazilian stingless bees and social paper wasps brought to the London International Exhibition of 1862, listed according to Miers' report published by Carvalho Moreira (1863, p. 67).
Non-commutativity and non-associativity of the doubled string in non-geometric backgrounds We use a T-duality invariant action to investigate the behaviour of a string in non-geometric backgrounds, where there is a non-trivial global O(D, D) patching or monodromy. This action leads to a set of Dirac brackets describing the dynamics of the doubled string, with these brackets determined only by the monodromy. This allows for a simple derivation of non-commutativity and non-associativity in backgrounds which are (even locally) non-geometric. We focus here on the example of the three-torus with H-flux, finding non-commutativity but not non-associativity. We also comment on the relation to the exotic 522 brane, which shares the same monodromy. Introduction Dualities lie at the heart of much of our understanding of string and M-theory.However, the normal formulations of these theories may not be the best suited for an efficient description of these dualities, and may not allow us to fully appreciate their effects.As a result, one is led to the idea of modifying and extending the original formulations, to obtain duality manifest models. One of the ambitions of these approaches has been to understand so-called "non-geometric backgrounds."[6,7,[38][39][40][41][42].These are backgrounds which are patched by T-dualities rather than just normal diffeomorphisms and gauge transformations; they are important in the context of flux compactifications [42].By considering such backgrounds on the same stringy footing as any other we are able to extend the set of branes we have to play with to include new "exotic" objects related by T-dualities to familiar branes [43,44].The non-geometrical features of these backgrounds may be better understood using ideas related to the doubled formalism such as non-geometric frames [45][46][47][48][49][50][51][52][53][54][55][56], while the "patching" by a T-duality can be understood as a particular finite coordinate transformation of the doubled geometry [37,[57][58][59][60]. The behaviour of strings in such backgrounds is expected to deviate from our usual expectations.In particular, there is plenty of evidence pointing towards non-commutativity and non-associativity in the non-geometric setting [61][62][63][64][65][66][67].In this paper our aim is to use the doubled string action developed in [20] to examine this behaviour.The advantage of this approach is that T-duality is manifest, and it is trivial to treat in a unified manner non-geometric backgrounds and their (geometric) T-duals 1 .Indeed from the point of view of the doubled space the notion of a non-geometric background does not make sense: all such backgrounds are (generalised) geometric.Only when we single out half of the coordinates as belonging to a physical spacetime do we have a meaningful notion of non-geometric properties. We will focus mainly on the three-torus with H-flux.Although not a true string theory background, this provides an interesting simplified model of the sort of non-geometric effects we are interested in, and the behaviour under T-duality of the string in this background has received plenty of attention [11,61,63,[65][66][67][69][70][71][72].By carrying out T-dualities along the two isometry directions of this model one obtains firstly a twisted torus (with geometric flux) and secondly a non-geometric background (with Q-flux).In the nongeometric background the metric and B-field can only be globally defined up to a T-duality transformation. 
A further T-duality along a direction which is not an isometry leads to a background which depends explicitly on a dual coordinate, and so is not even locally geometric (though it may still be thought of as carrying an R-flux). In the non-geometric background one encounters non-commutativity of string coordinates, and we will be able to give a simple derivation of the explicit non-commutative bracket found in [66].There a detailed and careful study of the T-duality rules was used to carry results from the geometric backgrounds over to the non-geometric one: we will be able to do a much more straightforward computation of the Dirac brackets of the doubled model leading to the same result (note that our results are all classical and in terms of Poisson or Dirac brackets, but this carries directly over to the quantum theory). Using the doubled model also gives us more control over the situation in which the background is not even locally geometric.By treating coordinates and their duals on an equal footing there is no difficulty at least in principle with considering T-dualities along directions which are not isometries.Thus we are also able to work out explicitly the brackets in this case.These brackets are more pathological.However by computing the Jacobi identity, which vanishes, we find that in fact our brackets do not display the non-associative property.This may be due to a subtle discrepancy between the coordinates used here and those used in other investigations such as [63]. In our doubled model the only piece of information about the background which is relevant to determining the Dirac brackets and hence non-commutativity (or non-associativity) is the O(D, D) monodromy which describes the (lack of) periodicity of the generalised metric (which in the T-duality description unifies the metric and B-field).This is a consequence of the fact that the underlying origin of such effects, is the mixing of physical and dual coordinates in the closed string boundary conditions, which is also encoded by the monodromy. This means that although the three-torus with H-flux is not a true string background but rather an interesting toy model, a T-duality chain of genuine string theory backgrounds with the same monodromy would inherit all our results.A particular example is provided by the NS5-KKM-5 2 2 T-duality chain, and one may conclude (as mentioned in [54]) that a string in the background of the exotic 5 2 2 brane also exhibits non-commutativity. 
The outline of this paper is as follows. In section 2 we review the T-duality chain of the three-torus with H-flux, and write down the standard string Polyakov action for this background. In section 3 we give the modification of our doubled action which allows us to describe strings in backgrounds with non-trivial O(D, D) monodromy. We then proceed to use this to study a doubled string in the background defined by the three-torus with H-flux: this doubled string simultaneously describes the various interesting T-duals of the original background. We show the equivalence of our description with the usual one. In section 4 we analyse the Poisson brackets and constraints of the doubled model and work out the Dirac brackets which are used to describe the dynamics: these Dirac brackets give rise to non-commutativity in the non-geometric background. In section 5 we then extend our analysis to study the effects of carrying out a T-duality along a direction which is not an isometry. We calculate here a triple bracket of string coordinates to investigate the appearance of non-associativity. Finally we include a short appendix to demonstrate the similarities between the toy background considered here and the 5^2_2 brane. 2 Torus with H-flux: ordinary sigma model and dual coordinates Reminder of T-duality in the doubled formalism We will be interested solely in backgrounds with non-vanishing metric g and B-field B. The T-duality properties of these fields can be expressed by combining them into the generalised metric: H_{MN} = \begin{pmatrix} g_{ij} - B_{ik} g^{kl} B_{lj} & B_{ik} g^{kj} \\ -g^{ik} B_{kj} & g^{ij} \end{pmatrix}. (2.1) T-duality transformations P^M{}_N are elements of the group O(D, D), and so by definition obey P^T \eta P = \eta. (2.2) The O(D, D) structure \eta_{MN} can be used to raise and lower the indices on the transformation matrix P^M{}_N. In the doubled picture the coordinates consist of the usual spacetime coordinates X^i together with their duals \tilde{X}_i, which form a single O(D, D) vector X^M = (X^i, \tilde{X}_i). In order to make contact with the usual spacetime picture we have to specify a polarisation, or choice of coordinates which we take to be the physical ones. One way of seeing the action of T-duality is to keep the polarisation fixed and rotate the geometry: H_{MN} \rightarrow P^P{}_M H_{PQ} P^Q{}_N. (2.4) The new "physical" coordinates after this T-duality will still be the upper D components of X^M. An equivalent point of view is to keep the geometry (i.e. the generalised metric) fixed but have T-duality act on the polarisation, i.e. after acting with a T-duality we will now select a different set of physical coordinates. T-dual backgrounds Our example background for investigating non-geometric effects is that of the three-torus with H-flux. It is important to remember that this is not a true string theory background. We will mainly be interested in viewing it as a toy model with which to develop our understanding, although one can be more precise about viewing it as an approximation to a true background as for instance in [66].
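As a quick numerical check of these definitions (our own illustration, not taken from the paper), one can build the generalised metric from a chosen g and B, verify the O(D, D) condition for a factorised T-duality, and act on H by conjugation as in (2.4). The background values and the duality direction in the sketch below are arbitrary.

```python
import numpy as np

def generalised_metric(g, B):
    """Generalised metric H_{MN} built from g and B as in (2.1)."""
    g_inv = np.linalg.inv(g)
    return np.block([[g - B @ g_inv @ B, B @ g_inv],
                     [-g_inv @ B, g_inv]])

D = 2
eta = np.block([[np.zeros((D, D)), np.eye(D)],
                [np.eye(D), np.zeros((D, D))]])  # O(D, D) structure

# Factorised T-duality exchanging X^1 with its dual coordinate.
P = np.eye(2 * D)
P[0, 0] = P[D, D] = 0.0
P[0, D] = P[D, 0] = 1.0

# Arbitrary example background: flat metric with a constant B-field.
g = np.eye(D)
B = np.array([[0.0, 0.3], [-0.3, 0.0]])
H = generalised_metric(g, B)

assert np.allclose(P.T @ eta @ P, eta)   # O(D, D) condition (2.2)
assert np.allclose(H @ eta @ H, eta)     # H itself is an O(D, D) element
H_dual = P.T @ H @ P                     # T-dual generalised metric, cf. (2.4)
print(np.round(H_dual, 3))
```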
Torus with H-flux We take the metric on the torus to be flat ds 2 = dX 2 + dY 2 + dZ 2 and pick a gauge such that the B-field is B = HZdX ∧ dY .There is then a constant flux H = HdX ∧ dY ∧ dZ through the three-torus.This background can be treated as a two-torus with coordinates X, Y fibred over a base circle parameterised by the coordinate Z.For the time being we will only double this two-torus, introducing dual coordinates X and Ỹ .The generalised metric is then a four-by-four matrix, and has the form As we loop around the Z direction this generalised metric is not periodic, but changes as with O(2, 2) monodromy This monodromy has a straightforward physical interpretation as a gauge transformation of the B-field: B → B + d(2πHX ∧ dY ), as is easily seen from the decomposition of the generalised metric in (2.1). One T-duality: the twisted torus We implement T-duality in the X direction by the matrix The physical configuration this leads to is known as the twisted torus.The physical coordinates are X (the T-dual of the original coordinate X), Y and Z.This background has vanishing B-field, and metric We need to make a geometric identification of the coordinates X, Y as we go around the Z circle for this space to make sense.However, this is nothing out of the ordinary.It corresponds to an O(2, 2) monodromy which lies in the GL(2) subgroup of O(2, 2) corresponding to coordinate transformations.This background is considered to have geometric flux, f i jk with f x yz = H. Another T-duality: the non-geometric background T-duality in the Y direction corresponds to the following element of O(2, 2) and produces a non-geometric configuration: (2.12) Though the generalised metric is still periodic up to an O(2, 2) element this transformation however cannot be interpreted as a coordinate transformation or gauge transformation. The background is said to carry a Q-flux, A T-duality too far? The conventional Büscher T-duality rules [73,74], only apply along directions which are isometries.However, in the doubled framework every coordinate appears together with its dual.This allows one to at least attempt to make sense of carrying out T-duality in arbitrary directions.If we were to therefore perform a further T-duality in the Z direction we would end up in a situation where our physical space has coordinates X, Ỹ and Z, and fields (2.14) The fields still depend however on Z which is the dual coordinate in this picture.As a result, this space is believed to not even be locally geometric.However it can still be thought of as carrying an R-flux, R ijk , with R xyz = H. For the time being we shall ignore the possibility of carrying out this T-duality, and return to it in section 5. The sigma model and dual coordinates Our goal in this paper is to use a doubled formalism to study all of the above backgrounds simultaneously. As a check on the doubled formalism we will first set up the usual string sigma model in the simplest case of the flat three-torus with H-flux.The Polyakov action in the H-flux background is leading to the equations of motion (2.16) The momenta are The action (2.15) is equivalent to the Hamiltonian form where H MN is the generalised metric given in (2.5) and We want to describe string configurations with winding in all directions, so that X(σ + 2π) = X(σ) + 2πN 1 and similarly for the others.This means that we can write mode expansions γ n e inσ . (2.20) While here the winding numbers N i as constant, in the doubled picture they will be dynamical variables. 
The dual coordinates X and Ỹ are defined in terms of the momenta by If we have winding then these are not periodic but obey Using the mode expansions (2.20) one can explicitly show that (2.23) This quantity is also not periodic but obeys where The equations of motion imply that ṗ1 = 0. Similarly, one has with and (2.28) Finally, the momentum conjugate to Z is just Ż, so that This has more conventional periodicity properties: where p 3 = ż.As this direction is not an isometry we do not have ṗ3 = 0. 3 Doubled sigma model for three-torus with H-flux The action The doubled sigma model will allow us to treat all three of the above backgrounds in a unified manner.To do so, we need to slightly generalise the derivation of the doubled action from [20] to describe non-trivial monodromies.As a first step to doing so, let us recall the steps taken in [20] where no non-trivial monodromy was assumed. The usual string action in Hamiltonian form and conformal gauge is The Hamiltonian depends only on X ′i and the momentum P i in a manifestly O(D, D) invariant form, with the O(D, D) vector We make the replacement P i → X′ i , thereby replacing the momentum with dual coordinates.The kinetic term can be manipulated by integration by parts into the form assuming for now that we have no O(D, D) monodromy, but do have winding such that X i (σ + 2π) = X i (σ) + 2πw i and Xi (σ + 2π) = Xi + 2πp i .In the usual string case we have constant w i , but we do not assume this for the momentum zero mode p i .Now, the second term of the last line of (3.3) is not O(D, D) covariant.One might proceed naively to drop it from the action entirely in order to obtain a proposed O(D, D) manifest sigma model.However, one can check [20] that this does not lead to the correct equations of motion or Dirac brackets, with the treatment of the zero modes turning out to be incorrect.The correct solution is to first note that as we are using a Hamiltonian form of the action it involves the dynamical quantity p i , and we wish to treat the winding w i on the same dynamical footing.One is therefore led to the following modification: adding to the action a term + π dτ Ẋi (0)w i , ( such that we have total action The generalised winding w M is treated as dynamical, and must be taken into account when varying the action.In this way one obtains equations of motion equivalent to those of the Polyakov action, and the expected covariant Dirac bracket: where ǫ ′ (σ) = δ(σ), such that we reobtain the usual brackets between coordinates and momenta.The situation is broadly similar for non-trivial monodromies.Given the general boundary condition then the generalisation of the action in [20] which describes non-trivial monodromies is The additional term has the effect of treating the zero modes correctly: it eliminates a cross-term involving ẇM and the oscillator modes which leads to incorrect Dirac brackets, and ensures the momentum p i are given by the expression obtained from the usual analysis. The equations of motion from the action (3.8) can be expressed as (3.9) Integrating over σ from 0 to 2π implies that If the generalised metric is constant, i.e. 
we have only doubled directions in which there are isometries, then the right-hand side of the above is zero.In this case if the matrix η MN + P MN is invertible, as will be the case for the backgrounds considered in this paper, then the winding must in fact be constant, ẇM = 0.This then implies that integrated equations of motion give exactly which is the correct duality relation for the doubled string [2,3] and shows that our doubled model indeed reproduces the correct physics of T-duality.When the generalised metric depends on a coordinate the result will generically be non-local duality relations in place of (3.11) as well as non-constant generalised winding. Mode expansions and equations of motion We will now use the action (3.8) to study the three-torus with H-flux.For now consider the four-dimensional doubled space with coordinates X, Y, X and Ỹ .The monodromy matrix and generalised winding are We therefore need mode expansions for the coordinates and their duals obeying These expansions are provided by and (3.17) The quantities p 1 , p 2 and αn , βn will be determined from the equations of motion.To derive these we insert the above mode expansions into the action (3.8) and carry out the σ integration.Note that the generalised metric for this background determines the Hamiltonian term to be explicitly The result of the σ integration is that the action can be expressed as where the symplectic terms are the part of the action involving the zero modes and the winding is and that involving the "dual oscillators" α and β is We do not need the explicit form of S α,β in what follows. We can now work out the equations of motion following from the doubled string with dynamical winding. First of all, the equations of motions for the zero modes x and ỹ imply that so we must have only constant winding, as expected.If we vary N 1 and N 2 then we obtain equations determining ẋ and ẏ, which has no effect on the physics.Varying with respect to the momenta (or dual winding) p 1 and p 2 we find that Using these equations in the equations of motion resulting from varying the zero modes x and y we learn that in fact also ṗ1 = 0 = ṗ2 . (3.26) Finally, for the dual oscillator modes α and β we have If we insert these expressions as well as those for p 1 , p 2 into the mode expansions (3.16) and (3.17) we find that we exactly recover the expressions for X and Ỹ that are found by integrating the momentum P X and P Y , (2.23) and (2.26). 
After solving the equations of motion as above we can identify the quantities X′ and Ỹ ′ appearing in the doubled action as the momentum P X and P Y .We thus have the same Hamiltonian as in the ordinary string action.In addition we know that all the winding are constant.Then the action of the doubled model is having integrated by parts and discarded total τ derivatives.The extra term appearing here is essentially We can also understand its appearance as follows.In deriving the equations of motion from S doubled we assumed from the start that the coordinates were not periodic.This was necessary as we wanted to have dynamical winding on the same footing as the ordinary momenta.In this case one finds that δS single gives rise to the standard string equations of motion (2.16) plus an additional term −2π dτ HN 3 (δX(0) Ẏ (0) − δY (0) Ẋ(0)), resulting from the sigma integration by parts if Z is assumed here to wind.This exactly cancels the contribution from the extra term in (3.29).Thus we have that δS doubled = δS single | no winding , so that the equations of motion of the doubled model are equivalent to the standard equations of motion of the usual Polyakov action, (2.16), as expected from the general discussion in section 3.1. 4 Non-geometry and non-commutativity Derivation of Dirac brackets We now study the Poisson brackets arising from the symplectic terms of equation (3.20).In the doubled picture the "momenta" such as p 1 and p 2 are to be treated as coordinates.This leads to second-class constraints when we come to define the momenta of all variables in the model.For each variable χ we define the conjugate momentum Π χ , and have constraints with initially the standard Poisson brackets Starting with the zero modes only, the second-class constraints are (4. 3) The matrix of Poisson brackets of second-class constraints for the zero modes is }, and works out as The inverse is From this one can read off the non-zero Dirac brackets, which are defined by so that we obtain Meanwhile, for the oscillators we have for which the matrix of Poisson brackets is (with now χ = {α n , β n , αn , βn }) so that Non-commutativity We can now use these Dirac brackets with the mode expansions (3.15), (3.16), (3.17) to determine the brackets between our coordinates.For each coordinate and its dual we find as expected which is compatible with the relationship X′ = P X between dual coordinates and momenta. In addition we find a non-zero bracket between X and Ỹ : These are the coordinates on the non-geometric background and we interpret this result as non-commutativity in this background.Observe that it is proportional both to the H-flux H and to the winding N 3 about the Z-direction.The latter reflects the fact that this non-commutativity is a consequence of the global properties of the background, with a string wound in the Z-direction feeling the effects of the global patching. 
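To make concrete the claim that only the monodromy matters, the gauge-shift patching of the H-flux background can be checked numerically: going once around the base circle adds a constant to the fibre B-field, which acts on the fibre generalised metric by conjugation with a B-shift element of O(2, 2). The sketch below is our own illustration; the explicit form and sign conventions of the shift matrix are assumptions to be checked against (2.5)-(2.7), and the numerical values are arbitrary.

```python
import numpy as np

def generalised_metric(g, B):
    g_inv = np.linalg.inv(g)
    return np.block([[g - B @ g_inv @ B, B @ g_inv],
                     [-g_inv @ B, g_inv]])

EPS = np.array([[0.0, 1.0], [-1.0, 0.0]])  # antisymmetric unit, dX ^ dY

def fibre_metric(H_flux, Z):
    """Generalised metric of the (X, Y) fibre at base position Z, for a flat
    metric and B = H * Z dX ^ dY (cf. (2.5))."""
    return generalised_metric(np.eye(2), H_flux * Z * EPS)

def b_shift(b):
    """O(2, 2) element implementing B -> B + b (sign convention assumed here)."""
    return np.block([[np.eye(2), np.zeros((2, 2))],
                     [-b, np.eye(2)]])

H_flux, Z0 = 0.7, 0.4  # arbitrary test values
P = b_shift(2 * np.pi * H_flux * EPS)  # candidate monodromy, cf. (2.7)

lhs = fibre_metric(H_flux, Z0 + 2 * np.pi)
rhs = P.T @ fibre_metric(H_flux, Z0) @ P
assert np.allclose(lhs, rhs)  # H(Z + 2*pi) = P^T H(Z) P: the monodromy is a B-field gauge shift
```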
The bracket (4.14) is in exact agreement with the result of [66] up to different choices of α ′ , and up to the overall constant term −πHN 3 /3 resulting from the {x, ỹ} * bracket.This discrepancy is not a complete surprise as in [66] the bracket between these zero modes was not fixed by the T-duality rules but instead argued for indirectly.In our case we may note that the origin of this bracket can be traced uniquely to the presence of the term in the Lagrangian.We observe that adding any multiple of dτ P M N ẇN η MP w P to the action does not alter any of our previous results about agreement with the standard sigma model.Doing so would only affect the N 1 and N 2 equations of motion, with the result of shifting the expressions for ẋ and ẏ by terms proportional to Ṅ 1 and Ṅ 2 , which are both zero.Hence in principle we could modify our original action to remove or modify the term (4.15).It would be interesting to have a precise check on the presence of this constant term: it is expected that the bracket of non-commutativity is related to the non-geometric fluxes [61][62][63][64][65][66][67] and perhaps there is a direct argument from a more (generalised) geometrical point of view (perhaps a derivation of the sigma model from an underlying geometrical principle such as in [19], or some relationship to a geometric or flux formulation of double field theory [55,75]). Comment on section condition In the Hamiltonian formalism of the bosonic string one has a pair of first class constraints which generate worldsheet parameterisations.Under Poisson brackets these constraints form a closed algebra.In the doubled formalism these constraints are Given a generalised metric H MN which is an arbitrary function of the doubled coordinates X M then the algebra of these constraints does not necessarily close, leading to a loss of worldsheet diffeomorphism invariance. Thus, as shown in [20], one is led to impose a condition on the background generalised metric. 
In [20] only backgrounds with trivial O(D, D) monodromy were considered.The Dirac brackets were , and worldsheet diffeomorphism invariance as manifested by the closure of the algebra of constraints required the vanishing of a term involving the Dirac bracket of the generalised metric with itself: This can be implemented by requiring the section condition of double field theory hold on the generalised metric in the form Now, for the torus with H-flux we have started with a background that obeys the section condition, but then gone on to show that we have an additional non-zero Dirac bracket, between two of the dual coordinates.In our situation this does not modify any of the above concerns, as the generalised metric depends only on the coordinate Z and its brackets receive no modifications due to the H-flux.However one might wonder about the general situation.A sufficient condition for closure of the worldsheet diffeomorphism algebra would be that a background cannot depend on any two coordinates who have a nonzero Dirac bracket, which ensures that {H P Q (σ), H RS (σ ′ } * = 0.The only property of the background that is relevant to determining the bracket is of course the monodromy.Ideally one would like to be able to calculate the Dirac brackets for an arbitrary monodromy P M N , which would allow one to rule out the validity of a given background by checking its monodromy.Here we were only able to work with a specific example, and check at the end when we had the brackets that they were consistent with the algebra closure analysis of [20].It is also tempting to speculate about whether one may be able to weaken the section condition for particular choices of P M N .We leave this for future work. Non-isometry and no non-associativity In this section we allow the non-isometric Z direction to be doubled, allowing us to derive the Dirac brackets involving the dual coordinate Z.As the background obtained by T-dualising in all three directions is not even locally geometric, this leads to new interesting behaviour.The brackets of Z with the other coordinates of this background are non-zero and in fact depend on the modes of the original string coordinates in an involved fashion.Thus we do not have a geometric interpretation of the brackets.By taking an additional bracket we can then compute the Jacobi identity, the failure of which in these backgrounds has been interpreted as evidence for non-associative behaviour [61][62][63][64][65][66][67].However, we find that the Jacobi identity is satisfied, and thus our model appears to not see the non-associative behaviour. Dirac brackets for the doubled non-isometry direction If the Z direction is doubled then the winding N 3 will be treated as dynamical, although we expect equations of motion to set it to be constant.This puts us in a situation where our monodromy P M N , which is now meant to be an element of the global O(3, 3) group, now involves a dynamical quantity.However we shall ignore this issue, and think of our boundary conditions as merely involving a set of coordinates and dynamical winding.We can then proceed as before by writing mode expansions for Z and Z: γ n e inσ , Z = z + p 3 σ + n =0 γn e inσ . (5.1) Our action is supplemented by the following terms: (5. 2) It is clear that the equations of motion work as expected, ensuring that N 3 is constant and Z′ = P Z .Let us now see what brackets we obtain.Our new second-class constraints are (5. 
3) The non-trivial behaviour here results from the C N 3 constraint.This has non-zero Poisson brackets with , C αn and C βn of (4.3) and (4.9).The new and important feature is that these Poisson brackets are not constant, but depend directly on the modes.After carefully inverting the matrix of all second-class constraints one finds that this induces the following non-zero Dirac brackets (in addition to the ones we had previously, which are unchanged): {z, αn ) That the Dirac brackets of the coordinate Z with X and Ỹ involve only the zero mode z is consistent with Z′ being the momentum P Z conjugate to Z, which must have the usual Dirac brackets. The above Dirac brackets guarantee the existence of non-zero Dirac brackets between the coordinate Z and both of X and Ỹ .These brackets depend in an involved way on the different modes and it is not clear if they have any geometric interpretation.This fits in with the general expectation that the space we are now considering cannot even locally be described geometrically. Non-associativity? Following [61][62][63][64][65][66][67] we can however study the possible non-associativity of these brackets.Although a single bracket of Z with X or Ỹ has an involved dependence on the modes, the bracket of such a bracket has a simpler structure.In particular, one has (5.9) Using also the result (4.14) to get we find that the triple bracket in fact vanishes: Thus we do not find evidence of a non-associative structure.Note that in papers such as [62,63] which find non-associativity in the presence of H-flux, the coordinates ultimately used are not the naive dual coordinates X, Ỹ , Z used here.Rather, there the proper coordinates one adopts conformal field theory methods and defines new coordinates adapted to the non-trivial conformal field theory with H-flux.It would be interesting to understand if these different coordinates can be defined in our approach, and to see if they then lead to non-associative behaviour using the above Dirac brackets.This may be related to the loss of a well-defined target space for a locally non-geometric background. Additionally, it may be that there are subtle issues with defining the doubled string action when one wishes to consider T-dualities along non-isometric directions.As noted above, the boundary conditions encoded by the monodromy matrix P M N explicitly involve the dynamical winding N 3 , and yet this matrix is supposed to be a constant T-duality.Resolving these issues may require a deeper understanding of the precise geometrical nature of our model in the doubled space, and of how finite generalised diffeomorphisms are seen by the sigma model. Conclusions We have written down in this paper a proposed doubled string action (3.8), which is a simple generalisation of that in [20] to describe the string in backgrounds with non-trivial O(D, D) patching.This action reproduces the standard equations of motion and duality relations, and leads straightforwardly to the Dirac brackets of string coordinates.In particular, for the non-geometric background obtained by acting on the three-torus with H-flux with two T-dualities we have a non-vanishing Dirac bracket between the two coordinates X and Ỹ , e in(σ−σ ′ ) 1 n 2 .(6.1) This is in agreement with the result found in [66] by a different method, and implies that the doubled action (3.8) truly knows about the behaviour of the string in non-geometric backgrounds. 
In addition we are able to use the doubled formalism to investigate the brackets involving the coordinate Z on the background obtained by a further T-duality along a non-isometry direction.There however we found that the Jacobi identity was satisfied, so that we find no evidence of non-associativity.This may be due to a breakdown of the validity of our approach when considering T-dualities along non-isometry directions, which would be disappointing given what one expects to be able to do with a doubled formalism, or else a subtlety involving which coordinates are appropriate for investigations of these effects. One of the strengths of our approach is that the only information about the background which determines the Dirac brackets is the monodromy matrix.This means that our results hold for any background sharing the same monodromy, for instance the interesting exotic 5 2 2 brane example.Unlike the torus with H-flux this is a true supergravity background.However its global non-geometric properties are determined by the same monodromy as in the case of the simpler model. There remain open questions regarding the interpretation of both the doubled action and the resulting Dirac brackets in terms of the geometry and topology of the doubled space.This may have connections with other approaches to the doubled sigma model such as [19], where a closely related action can be written down by starting from a simple observation about the consequences of the section condition in double field theory.It is also clear that non-geometric fluxes play an important role which does not seem to be fully understood for the sigma model, so it would be worthwhile to investigate how exactly the doubled string feels their effects.This may involve the flux formulation of double field theory [55] or the similar torsionful geometry of [75]. One would also prefer to be able to derive the Dirac brackets for an arbitrary monodromy, without having to treat specific cases individually.This would be important for understanding the relationship to the section condition, which restricts allowed backgrounds.A related goal would be to study so-called "truly non-geometric" backgrounds, which are not T-dual to anything geometric. The quantum theories of the backgrounds studied in this paper could also be treated in this approach.
7,515.2
2014-05-09T00:00:00.000
[ "Mathematics", "Physics" ]
“GeSn Rule-23”—The Performance Limit of GeSn Infrared Photodiodes Group-IV GeSn photodetectors (PDs) compatible with standard complementary metal–oxide-semiconductor (CMOS) processing have emerged as a new and non-toxic infrared detection technology to enable a wide range of infrared applications. The performance of GeSn PDs is highly dependent on the Sn composition and operation temperature. Here, we develop theoretical models to establish a simple rule of thumb, namely “GeSn−rule 23”, to describe GeSn PDs’ dark current density in terms of operation temperature, cutoff wavelength, and Sn composition. In addition, analysis of GeSn PDs’ performance shows that the responsivity, detectivity, and bandwidth are highly dependent on operation temperature. This rule provides a simple and convenient indicator for device developers to estimate the device performance at various conditions for practical applications. Introduction The Group-IV GeSn material system is under extensive development for low-cost and high-performance infrared (IR) photodetectors (PDs) for a wide spectrum of applications covering military, communication, and thermal vision [1,2].While the present marketdominated IR PDs made of compound semiconductors such as InGaAs, InSb, HgCdTe, PbSe, PbS, etc. offer good quantum efficiencies, they are less compatible with the standard complementary metal-oxide-semiconductor (CMOS) processing, leading to high cost and complex fabrication.By contrast, the CMOS compatibility of GeSn PDs makes them ideal for monolithic integration with electronics on the same Si or silicon-on-insulator (SOI) chip for seamlessly manufacturing in modern CMOS foundries, allowing for the development of low-cost, high-performance, complex, and functional image systems [3,4].In addition, GeSn alloys offer a wide range of bandgap tunability by adjusting the Sn composition [5,6], thereby permitting expansion of their direct-gap absorption edge from ~1500 nm to shortwave IR (SWIR) range (1.5-3 µm), mid-wave infrared (MWIR) range (3-8 µm), and even long-wave infrared (LWIR) (8-14 µm).Most importantly, the presence of the indirect conduction band enables a unique momentum-space carrier separation scheme to enable high-performance photodetection [7,8].These unique advantages have encouraged the development of various types of GeSn PDs [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25] with remarkable photodetection range up to 4600 nm [25].More recently, a comprehensive theoretical study of GeSn PDs has indicated that the performance of GeSn PDs when reaching material maturity is comparable with, and even better than, market-dominated IR PDs [6], showing great promise for low-cost and high-performance IR detection. 
IR PDs usually require cryogenic cooling to suppress dark current in order to reach high performance for practical photodetection.In addition, the device performance such as dark current, cutoff wavelength, responsivity, and detectivity are strongly dependent on the operation temperature as well as alloy composition.Thus, from the viewpoint Sensors 2023, 23, 7386 2 of 15 of device and system developers, a rule of thumb is extremely useful to describe the performance of IR PDs in terms of operation temperature and alloy composition to achieve desired performance for meeting the requirement of various applications.Such simple relationships have been established to evaluate the performance of IR PDs including Mercury-Cadmium-Telluride (MCT) PDs (rule-07) [26] and extended-wavelength InGaAs PDs (IGA-rule-17) [27].However, such a simple rule of thumb has not been developed for GeSn PDs so far.Here, for the first time, a heuristic rule is proposed and established for GeSn PDs with various Sn compositions and operation temperatures as GeSn-rule 23, where the prefix GeSn stands for GeSn PDs and the postfix 23 represents the year 2023, to evaluate the performance of GeSn PDs for the device and system developers to advance the GeSn IR PD technology. The rest of the paper Is organized as follows: the structure of the GeSn p-i-n PD under investigation is described in Section 2; the theoretical models of temperature-dependent bandgap energies for analyzing the cutoff wavelength in terms of Sn composition and operation temperature are presented in Section 3; the theoretical models for evaluating dark current density and the establishment of GeSn-rule-23 are presented in Section 4; the theoretical models of temperature-dependent absorption coefficient and optical responsivity are shown in Section 5; the detectivity and noise-equivalent power in terms of Sn content and operation temperature are discussed in Section 6; the analysis of temperature-dependent bandwidth of GeSn PDs is given in Section 7; and finally the conclusion is summarized in Section 8. 
GeSn Device Structure To establish the GeSn PDs' fundamental performance, here we shall consider a general normal-incidence Ge1−xSnx p−i−n homojunction PD with a circular mesa, as shown in Figure 1, so the optical response of the Ge1−xSnx PD is independent of the incident light's polarization. The Ge1−xSnx p-i-n diode is grown on a (001) silicon substrate via a fully strain-relaxed Ge1−xSnx virtual substrate (VS), so the entire Ge1−xSnx p−i−n stack is strain-free. It is noted that inhomogeneity of the Sn content in the material may significantly affect the material properties [28]. In this study, we assume the Sn content in the Ge1−xSnx p-i-n stack is uniform. The intrinsic Ge1−xSnx layer converts the incident IR photons to electron-hole pairs via optical absorption, which are then swept across the p−i−n junction and collected as electrical current. To achieve high responsivity and suppress tunneling dark current, a thick intrinsic Ge1−xSnx layer is necessary. Here, we set the thickness of the intrinsic Ge1−xSnx layer to t_i = 3000 nm to enhance optical absorption and quantum efficiency. The thickness of the n-Ge1−xSnx layer is kept thin, at t_n = 100 nm, to enhance the optical absorption by the intrinsic Ge1−xSnx layer. On the other hand, the thickness of the p-Ge1−xSnx layer is fixed to t_p = 500 nm to ensure sufficient etching tolerance for mesa definition. The doping concentrations in the n−Ge1−xSnx and p−Ge1−xSnx layers are set to N_d = 1 × 10^19 cm^−3 and N_a = 1 × 10^19 cm^−3, respectively, because high doping concentrations help suppress diffusion dark currents and thus enhance detectivity [5]. (For different thicknesses and doping concentrations of the layers, the device performance can also be evaluated using our theoretical models.) We make several assumptions in order to evaluate the achievable performance. First, we assume that the entire Ge1−xSnx p-i-n stack is defect-free, as the defects can be properly confined in the Ge1−xSnx VS [8]. The diameter of the GeSn diode mesa is set to D = 50 µm, which is significantly larger than the wavelengths of interest, so the diffraction effect is negligible. On top of the GeSn is an antireflection layer which minimizes the reflection loss and thereby enhances quantum efficiency. The parameters for GeSn alloys used in this study were obtained from linear interpolation between those of Ge and α-Sn [29], and their dependences on wavelength (electrical frequency) are neglected.
Temperature-Dependent Bandgap Energies and Cutoff Wavelength The cutoff wavelength (λc) of the Ge1−xSnx PDs is determined by the optical absorption edge of the i−Ge1−xSnx active layer. For Ge1−xSnx alloys, the optical absorption has two contributors, direct-gap and indirect-gap interband absorption, owing to the proximity of the Γ- and L-valley conduction bands (CB). However, the indirect-gap interband absorption is much weaker than the direct-gap one [6] because of the need for additional phonons for momentum conservation. As a result, the cutoff wavelength of the Ge1−xSnx PDs is dominated by the direct bandgap (E_g^Γ) via the expression λc = 1.24/E_g^Γ. The temperature- and composition-dependent direct bandgap energy of Ge1−xSnx alloys can be calculated using the Varshni equation [30-32], E_g^Γ(x, T) = (1 − x)E_g,Ge^Γ(T) + x E_g,Sn^Γ(T) − b^Γ x(1 − x), where each binary gap follows E_g^Γ(T) = E_g^Γ(T = 0) − αT²/(T + β); here α and β are the Varshni parameters for Ge1−xSnx alloys, E_g^Γ(T = 0) is the Γ-valley bandgap energy at T = 0 K, and b^Γ = 2.46 eV is the bowing parameter [30]. The Varshni parameters for Ge1−xSnx alloys are obtained by linear interpolation from those of Ge and α-Sn given in Table 1. The relationships between the operation temperature and cutoff wavelength for GeSn PDs with different Sn compositions are shown in Figure 2. For a fixed Sn content, the cutoff wavelength increases with increasing temperature. For pure Ge (x = 0%), the cutoff wavelength can only reach the NIR. As the Sn content increases, the cutoff wavelength of Ge1−xSnx PDs significantly redshifts, attributed to the bandgap shrinkage caused by the incorporation of Sn, and can reach the SWIR, MWIR, and even LWIR range with a sufficiently high Sn content. These results show that Ge1−xSnx PDs can operate in different IR spectral ranges by adjusting the Sn content for different applications.
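As a quick numerical companion to the cutoff-wavelength discussion, the sketch below evaluates λc = 1.24/E_g^Γ with the Varshni-plus-bowing interpolation described above. The Ge and α-Sn endpoint parameters are rough placeholder values of the kind found in the general literature, not the entries of the paper's Table 1 (which is not reproduced here), so the numbers it prints are only indicative.

```python
import numpy as np

# Placeholder endpoint Varshni parameters (assumed, NOT the paper's Table 1).
GE = dict(Eg0=0.89, alpha=5.8e-4, beta=296.0)    # eV, eV/K, K
SN = dict(Eg0=-0.41, alpha=8.0e-4, beta=100.0)   # eV, eV/K, K
B_GAMMA = 2.46                                   # bowing parameter, eV [30]

def varshni(Eg0, alpha, beta, T):
    """Varshni temperature dependence Eg(T) = Eg(0) - alpha*T^2/(T + beta)."""
    return Eg0 - alpha*T**2/(T + beta)

def Eg_gamma(x, T):
    """Direct (Gamma-valley) band gap of relaxed Ge(1-x)Sn(x) in eV."""
    Eg_Ge = varshni(T=T, **GE)
    Eg_Sn = varshni(T=T, **SN)
    return (1 - x)*Eg_Ge + x*Eg_Sn - B_GAMMA*x*(1 - x)

def cutoff_um(x, T):
    """Cutoff wavelength lambda_c = 1.24/Eg (um, with Eg in eV)."""
    return 1.24/Eg_gamma(x, T)

for x in (0.0, 0.10, 0.20):
    print(f"x = {x:.2f}:  Eg = {Eg_gamma(x, 300):5.3f} eV,"
          f"  lambda_c(300 K) = {cutoff_um(x, 300):5.2f} um")
```

With these placeholder parameters the x = 0 case already lands near the ~1.5 µm Ge edge quoted in the text, while higher Sn contents push the cutoff into the SWIR and MWIR, in line with the trend of Figure 2.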
Dark Current Density and GeSn-Rule 23 For defect-free Ge1−xSnx PDs, the dark current density is dominated by minority carrier diffusion currents [6]. The diffusion dark current density at zero bias under the short-base approximation can be calculated using [6], where q is the elementary charge; D_p, D_n^Γ, and D_n^L are the diffusion coefficients for holes and for electrons in the Γ−CB and L−CB, respectively; p_n0 is the minority hole density in the n−GeSn region; and n_p0^Γ and n_p0^L are the minority electron densities in the Γ− and L−CB in the p−GeSn region. The diffusion coefficient can be converted from the mobility (µ) via the Einstein relationship D = µkT/q, with k being the Boltzmann constant [32]. The minority carrier concentrations in the doped layers can be linked to the intrinsic carrier concentration and the doping concentrations. The intrinsic carrier concentration (n_i) in GeSn alloys can be calculated using [6,32], where ħ is the reduced Planck constant, m*_Γ and m*_L are the electron effective masses in the Γ− and L−CB, respectively, and m*_h is the hole effective mass in the valence band, which are taken from a 30-band full-zone k•p model [33]. The electron mobilities in the Γ−CB (µ0_e,Γ) and L−CB (µ0_e,L) and the hole mobility (µ0_h) (in units of cm^2 V^−1 s^−1) for intrinsic Ge1−xSnx at T = 300 K can be expressed as in [34,35]. The temperature-dependent mobility can be approximated with the power law µ ∝ T^−p [36], where p is a constant. Owing to the lack of experimental data for GeSn alloys, the coefficients are approximated by those of Ge (p = 1.66 for electrons and p = 2.33 for holes [36]). With the mobilities, the minority carrier mobility can be estimated using [37,38], where N_a and N_d are the doping concentrations (in units of cm^−3) in the p− and n−GeSn regions, respectively. With the dark current density, the R0A product can be obtained using [32]. Figure 3a shows the calculated dark current density as a function of operation temperature for Ge1−xSnx PDs with different Sn compositions. For a fixed Sn content, the dark current increases with increasing temperature owing to the higher intrinsic carrier concentration. In addition, the dark current density also increases with increasing Sn content owing to the higher intrinsic carrier concentration resulting from the reduced bandgap energy. Figure 3b shows the calculated dark current density compared with selected experimental data from the reported Ge1−xSnx PDs operated at −1 V bias voltage and T = 300 K [9-24]. It can be found that the calculated dark current density is generally 2-3 orders of magnitude smaller than the experimental data at T = 300 K. The results suggest that it is possible to significantly improve the dark current density by continuously improving the material quality. Figure 3c,d shows the calculated dark current density and R0A product as a function of cutoff wavelength at various temperatures in the range of T = 200-300 K, respectively. At a fixed temperature, Ge1−xSnx PDs require a higher Sn composition to extend the cutoff wavelength, causing a larger dark current density and a smaller R0A product.
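The following sketch illustrates the kind of diffusion-limited dark-current estimate described above. Since the paper's equations are not reproduced in the text, it falls back on the generic short-base saturation-current expression with a single electron valley, Js = q ni² [Dp/(Nd tn) + Dn/(Na tp)], together with the Einstein relation and the ideal-diode result R0A = kT/(q Js); all numerical inputs are illustrative placeholders rather than the paper's calculated values.

```python
import numpy as np

q  = 1.602e-19        # elementary charge, C
kB = 1.381e-23        # Boltzmann constant, J/K

def diffusion_dark_current(ni, mu_p, mu_n, Na, Nd, t_n, t_p, T):
    """Schematic zero-bias diffusion (saturation) current density of a
    short-base p-i-n diode: Js = q ni^2 [Dp/(Nd tn) + Dn/(Na tp)].
    ni, Na, Nd in cm^-3; mobilities in cm^2/Vs; thicknesses in cm."""
    Dp = mu_p*kB*T/q          # Einstein relation, cm^2/s
    Dn = mu_n*kB*T/q
    return q*ni**2*(Dp/(Nd*t_n) + Dn/(Na*t_p))   # A/cm^2

def R0A(Js, T):
    """Zero-bias resistance-area product of an ideal diode, R0A = kT/(q Js)."""
    return kB*T/(q*Js)        # Ohm cm^2

# Illustrative numbers only (not the paper's): ni ~ 1e13 cm^-3 for a
# narrow-gap alloy at 300 K, 1e19 cm^-3 doping, 100 nm n / 500 nm p layers.
Js = diffusion_dark_current(ni=1e13, mu_p=1000, mu_n=3000,
                            Na=1e19, Nd=1e19, t_n=100e-7, t_p=500e-7, T=300)
print(f"Js ~ {Js:.2e} A/cm^2,  R0A ~ {R0A(Js, 300):.2e} Ohm cm^2")
```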
At higher temperatures, the dark current density goes up while the R0A product goes down. It is noted that the use of three-stage thermoelectric (TE) coolers can decrease the operation temperature of IR PDs to 210 K [39]. Thus, it is anticipated that decreasing the operation temperature using TE coolers can effectively suppress the dark current density of Ge1−xSnx PDs. Figure 3e depicts the calculated dark current density as a function of the reciprocal of the cutoff wavelength-temperature product, (λcT)^−1, for Ge1−xSnx PDs with various Sn compositions. Also plotted in Figure 3e are selected experimental data from the reported Ge1−xSnx PDs operated at −1 V bias voltage [9-24] as well as the MCT rule-07 [26] and IGA-rule-17 [27]. It can be seen that, for a fixed Sn composition, the dark current density decreases with increasing (λcT)^−1 because lower temperatures suppress the dark current density. The experimental data observed in Ge1−xSnx PDs show dark current densities about 1-3 orders of magnitude higher than the calculated performance. This discrepancy is likely attributed to (1) the residual compressive strain in the Ge1−xSnx active layer, which enlarges the bandgap energy and thus blueshifts the cutoff wavelength, and (2) the defects in the Ge1−xSnx active layer, which induce defect-related dark currents. These results suggest that there is considerable room to reduce the dark current density of Ge1−xSnx PDs by improving the material quality. In comparison with IGA-rule-17, it is found that Ge1−xSnx PDs with low Sn compositions (<5%) have higher dark current density than the E−InGaAs PDs for small (λcT)^−1. As the Sn composition increases, the dark current density decreases, and eventually becomes comparable to, or even lower than, that of the E-InGaAs PDs, suggesting superior performance can be obtained with Ge1−xSnx PDs operating in longer-wavelength and higher-temperature conditions. Relative to MCT-rule-07, however, the Ge1−xSnx PDs exhibit higher dark current densities than MCT PDs. We now establish GeSn-rule-23 based on our calculation results. The relationship between the dark current density, cutoff wavelength, and operation temperature can be described by the empirical expression of [26,27], where C and J0 are the fitting parameters, which are obtained using the calculated results in Figure 3e and are listed in Table 2. These results allow device and system developers to easily evaluate the performance of GeSn PDs at different operating conditions. The dark current density of Ge1−xSnx PDs at T = 300 K (Js@T = 300 K) is also listed in Table 2 for comparison. Js@T = 300 K significantly increases with increasing Sn content as a result of the narrower bandgap energy.
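A minimal sketch of how such a rule of thumb can be encoded and fitted is given below. It assumes the same log-linear dependence on (λcT)^−1 that underlies Rule 07 and IGA-Rule 17, i.e. J = J0·exp[C/(λcT)]; the actual functional form and the fitted (J0, C) values of GeSn-rule-23 are those of the paper's Table 2, which are not reproduced here, so the constants and data below are synthetic.

```python
import numpy as np

def rule_dark_current(lambda_c_um, T, J0, C):
    """Rule-of-thumb dark current density J = J0 * exp(C / (lambda_c * T)),
    i.e. log-linear in 1/(lambda_c*T) as plotted in Fig. 3e.  J0 [A/cm^2]
    and C [um*K] are fitting parameters (placeholders, not Table 2)."""
    return J0*np.exp(C/(lambda_c_um*T))

def fit_rule(lambda_c_um, T, J):
    """Least-squares fit of ln J = ln J0 + C/(lambda_c*T); returns (J0, C)."""
    x = 1.0/(np.asarray(lambda_c_um)*np.asarray(T))
    slope, intercept = np.polyfit(x, np.log(np.asarray(J)), 1)
    return np.exp(intercept), slope

# Synthetic demonstration data (for illustration only).
lam = np.array([2.0, 2.5, 3.0, 4.0])       # um
T   = np.array([300., 300., 300., 300.])   # K
J   = 5e3*np.exp(-9000.0/(lam*T))          # A/cm^2
J0, C = fit_rule(lam, T, J)
print(f"fitted J0 = {J0:.3g} A/cm^2, C = {C:.3g} um*K")
```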
Temperature-Dependent Optical Absorption and Responsivity Next, we calculate the temperature-dependent optical absorption coefficient and responsivity of the GeSn PDs. For GeSn alloys, both direct-gap and indirect-gap interband transitions contribute to optical absorption. However, the indirect-gap interband transition is much weaker than the direct-gap one because additional phonons are necessary to conserve momentum. Thus, the absorption coefficient is dominated by the direct-gap contribution. The optical absorption coefficient due to direct-gap transitions can be calculated using Fermi's golden rule, taking into account the Lorentzian lineshape function and the nonparabolicity effect, as in [4], where n_r is the refractive index of GeSn alloys; c is the velocity of light in vacuum; ε0 is the free-space permittivity; m0 is the rest mass of the electron; ω is the angular frequency of the incident light; E is the incident photon energy; |ê·p_CV|^2 = m0 E_P/6 is the momentum matrix element, with E_P denoting the optical energy parameter; and γ is the full-width-at-half-maximum (FWHM) of the Lorentzian lineshape function. E_CΓ(k) and E_m(k) are the electron energy in the Γ-CB and the hole energy in the valence band, respectively, which are calculated using a multi-band k•p model presented in Refs. [4,40]. Figure 4 depicts the calculated absorption coefficient spectra of GeSn alloys with different Sn contents in the temperature range of T = 200-300 K. For a fixed Sn content and temperature, the absorption coefficient gradually decreases with increasing wavelength, followed by a sharp decrease near the direct-bandgap energy. It can also be observed that the absorption spectra redshift as the temperature increases because of the reduced direct bandgap energy. Note that the direct-gap absorption coefficient is related to the joint density-of-states (JDOS), which is proportional to E − E_g [31]. As a result, the absorption coefficient at a fixed wavelength increases with increasing temperature. As the Sn content increases, the absorption coefficient significantly redshifts owing to the narrowed bandgap energy caused by Sn alloying. As a result, the absorption coefficient in the MWIR region can be significantly enhanced, thereby enabling efficient MWIR photodetection.
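For a rough feel of the absorption edge, the snippet below uses a deliberately simplified parabolic-band, direct-gap expression α(E) ∝ √(E − Eg)/E instead of the full k·p calculation with Lorentzian broadening used in the paper; the prefactor A0 is an arbitrary illustrative value, not a fitted material parameter.

```python
import numpy as np

def alpha_direct(E_eV, Eg_eV, A0=2.0e4):
    """Very simplified parabolic-band direct-gap absorption coefficient
    alpha(E) = A0*sqrt(E - Eg)/E  [cm^-1], with E in eV.  Band nonparabolicity
    and Lorentzian broadening (as used in the paper) are ignored; A0 is an
    illustrative prefactor only."""
    E = np.asarray(E_eV, dtype=float)
    out = np.zeros_like(E)
    above = E > Eg_eV
    out[above] = A0*np.sqrt(E[above] - Eg_eV)/E[above]
    return out

# Example: absorption edge of an alloy with Eg ~ 0.44 eV (~2.8 um cutoff).
E = np.linspace(0.3, 0.9, 7)
print(np.round(alpha_direct(E, 0.44), 1))
```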
With the absorption coefficient, we can then calculate the optical responsivity (Rλ) of the GeSn PDs using [31], where ηi is the internal quantum efficiency, whose value is assumed to be ηi = 100%, and Rrefl is the reflectivity of the top surface of the GeSn PDs, whose value is assumed to be zero as the anti-reflection coating can minimize the reflection. Figure 5 depicts the calculated optical responsivity spectra of the GeSn PDs with various Sn contents in the temperature range of T = 200-300 K. For pure Ge PDs (x = 0%), as shown in Figure 5a, the responsivity increases with increasing wavelength and then sharply decreases near the direct-gap absorption edge, corresponding to the cutoff wavelength of the PD. With an increase in the operation temperature, the cutoff wavelength exhibits a redshift, indicating a wider photodetection range. As the Sn content increases to 10%, as shown in Figure 5b, the cutoff wavelength is significantly extended to ~2600 nm. Meanwhile, the optical responsivity significantly increases because the reduced energy per photon means more photons per watt of incident optical power. As the Sn content further increases to 20%, the cutoff wavelength is further extended to ~7000 nm with enhanced optical responsivity, enabling sensitive MWIR photodetection.
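A small sketch of the responsivity calculation under the stated assumptions (ηi = 100%, Rrefl = 0, ti = 3000 nm) is shown below; it uses the standard single-pass expression Rλ = (1 − Rrefl)·ηi·(1 − e^(−α·ti))·λ/1.24, which may differ in detail from the paper's equation.

```python
import numpy as np

def responsivity(alpha_cm, lambda_um, t_i_cm=3000e-7, eta_i=1.0, R_refl=0.0):
    """Schematic single-pass responsivity of the p-i-n PD in A/W:
       eta_ext = (1 - R_refl) * eta_i * (1 - exp(-alpha * t_i))
       R_lambda = eta_ext * q*lambda/(h*c) = eta_ext * lambda[um]/1.24"""
    eta_ext = (1.0 - R_refl)*eta_i*(1.0 - np.exp(-alpha_cm*t_i_cm))
    return eta_ext*lambda_um/1.24

# Example: alpha = 4000 cm^-1 at 2 um wavelength (illustrative values only).
print(f"R ~ {responsivity(4000.0, 2.0):.2f} A/W")
```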
Temperature-Dependent Detectivity and Noise-Equivalent Power The figures-of-merit for the performance of PDs include not only the dark current density but also the detectivity. Next, we calculate the detectivity of Ge1−xSnx PDs in terms of Sn content and temperature. With R0A and the responsivity Rλ, we can obtain the specific detectivity (D*λ), one of the most important figures-of-merit used to characterize PDs' performance, as in [6,31]. The corresponding metric, the noise-equivalent power (NEP), which represents the minimum detectable power per square root of bandwidth, can be computed as in [31], where A (= πD^2/4) is the area of the photosensitive region of the GeSn PDs and Δf is the bandwidth of the PD. Figure 6a shows the calculated specific detectivity spectra of the GeSn PDs with various Sn contents at T = 300 K. For a fixed Sn content, D*λ increases with increasing wavelength, followed by a rapid decrease near the direct-gap absorption edge. As a result, the peak detectivity (D*λp) is obtained at a wavelength of about λp = 0.9λc. For x = 0, a high D*λ ~ 3 × 10^11 cm Hz^1/2 W^−1 is achieved at λ ~ 1500 nm. As the wavelength increases, D*λ drops rapidly and becomes negligible near the direct-gap absorption edge because incident IR photons cannot be fully absorbed by the GeSn active region. As the Sn content increases, the detectivity spectrum is extended to longer wavelengths owing to the reduced direct bandgap, and can fully cover the entire SWIR (MWIR) spectral range with a Sn content of ~12% (~20%). However, D*λ also decreases as a result of the increased dark current density. Figure 6b shows the calculated specific detectivity spectra for GeSn PDs with x = 10% in the temperature range of T = 200-300 K. As the temperature decreases, D*λ significantly increases owing to the suppressed dark current density. Meanwhile, the photodetection range exhibits a blueshift as a result of the increased direct bandgap energy. (For other Sn contents, similar results are obtained.) Figure 6c shows the peak detectivity D*λp (defined as the detectivity at the wavelength λp = 0.9λc) and the corresponding NEP normalized to a bandwidth of Δf = 1 Hz as a function of cutoff wavelength in the temperature range of T = 200-300 K. At a fixed temperature, D*λp decreases and NEP increases with increasing cutoff wavelength of the Ge1−xSnx PDs owing to the higher Sn compositions. Meanwhile, D*λp decreases and NEP increases at higher operation temperatures owing to the increased dark current density.
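The sketch below shows how the detectivity and NEP follow from R0A and Rλ under the usual zero-bias, thermal(Johnson)-noise-limited assumption, D*λ = Rλ·√(R0A/4kT) and NEP = √(AΔf)/D*λ; the numerical inputs are placeholders, not the paper's computed values.

```python
import numpy as np

kB = 1.381e-23     # Boltzmann constant, J/K

def detectivity(R_lambda, R0A_ohm_cm2, T):
    """Thermal(Johnson)-noise-limited specific detectivity at zero bias:
       D* = R_lambda * sqrt(R0A / (4 k T))   [cm Hz^1/2 W^-1]."""
    return R_lambda*np.sqrt(R0A_ohm_cm2/(4.0*kB*T))

def NEP(D_star, diameter_um=50.0, bandwidth_hz=1.0):
    """Noise-equivalent power NEP = sqrt(A * df) / D*   [W]."""
    A = np.pi*(diameter_um*1e-4)**2/4.0     # mesa area in cm^2
    return np.sqrt(A*bandwidth_hz)/D_star

# Illustrative numbers only: R ~ 1 A/W and R0A ~ 1e3 Ohm cm^2 at 300 K give
# D* of a few 1e11 cm Hz^1/2/W, the same order as quoted for x = 0 above.
D = detectivity(1.0, 1e3, 300.0)
print(f"D* ~ {D:.2e} cm Hz^1/2 / W,  NEP ~ {NEP(D):.2e} W/Hz^1/2")
```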
Temperature-Dependent Bandwidth We then investigate the temperature-dependent 3-dB bandwidth of the GeSn PDs. The 3-dB bandwidth of PDs is usually governed by the transit-time delay and the RC time delay. The transit-time-delay-limited bandwidth (f_T) can be calculated using [41,42], where v_s is the carrier saturation velocity. The carrier saturation velocity at T = 300 K can be calculated using [42,43], where ΔE = hc_s/a_GeSn; c_s is the velocity of sound in the GeSn material, which can be obtained from c_s = √(G/ρ), with G being the shear modulus and ρ the density; a_GeSn is the bulk lattice constant of GeSn alloys; and m* is the conductivity effective mass. The temperature-dependent saturation velocity can be approximated following [44], where σ is a constant. Owing to the lack of experimental data for GeSn alloys, the coefficient for GeSn alloys is approximated by that of Ge (σ = 0.45 for electrons and 0.39 for holes). On the other hand, the RC-delay-limited bandwidth can be calculated using [41,42], where C is the capacitance of the GeSn junction, which can be calculated using C = εA/t_i, and R is the load resistance, which is set to the standardized RF impedance of R = 50 Ω. Note that the permittivity of the materials is a function of temperature. Owing to the lack of experimental data for GeSn alloys, the permittivity is obtained by linear interpolation of those of Ge and α-Sn. The temperature-dependent permittivity of Ge is taken from Ref. [45].
For α-Sn, the temperature-dependent permittivity is not available, so we approximate it by the value at T = 300 K (εr = 24). As the Sn content in this study is not very high, we believe that this is still a good approximation. With the transit-time-delay and RC-delay bandwidths, the total 3-dB bandwidth of the PD can be obtained using [41,42]. With the 3-dB bandwidth, the response time (t_r) can be obtained using [42,46] as t_r = 0.35/f_3dB (23). Figure 7 shows the calculated saturation velocities of electrons in the Γ-CB and L-CB and of holes in the valence band as a function of Sn content at various temperatures. For a fixed temperature, the saturation velocities increase with increasing Sn content because of the increased carrier mobilities. As the temperature increases, the electron and hole saturation velocities decrease because of the reduced mobilities. For the electron saturation velocities, electrons in the Γ-CB have much higher saturation velocities than those in the L-CB because of their much smaller effective mass (m* = 0.045 − 0.166x + 0.043x^2 for Γ-CB electrons and m* = 0.566 − 0.449x + 1.401x^2 for L-CB electrons [33]). The hole saturation velocity is smaller than the electron ones because of the much larger effective masses. Thus, when electron-hole pairs are created by the absorption of incident photons in the GeSn active region, holes require a longer time to transit through the active region than electrons. As a result, the transit-time-delay-limited bandwidth is dominated by the transit time of holes. Figure 8a depicts the calculated transit-time-delay-limited, RC-delay-limited, and 3-dB bandwidths of the GeSn PD as a function of Sn content at T = 300 K. The transit-time-delay-limited bandwidth increases with increasing Sn content as a result of the increased carrier saturation velocity. On the other hand, the RC-delay-limited bandwidth decreases with increasing Sn content owing to the increased permittivity of the GeSn active layer. However, the transit-time-delay-limited bandwidth is significantly lower than the RC-delay-limited bandwidth owing to the thick GeSn active layer. As a result, the 3-dB bandwidth is dominated by the transit-time-delay-limited bandwidth. As the Sn content increases, the 3-dB bandwidth increases slightly and can be greater than 10 GHz, corresponding to a response time of <3.5 ps, indicating the capacity for high-speed operation. Figure 8b shows the calculated 3-dB bandwidth and response time of the GeSn PDs as a function of Sn content in the temperature range of T = 200-300 K. With an increase in temperature, the 3-dB bandwidth slightly decreases, but can remain >10 GHz.
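A compact sketch combining the transit-time and RC contributions is given below. Because the paper's bandwidth equations are not reproduced in the extracted text, it uses the commonly quoted approximation f_T ≈ 0.45 v_s/t_i together with f_RC = 1/(2πRC), C = εA/t_i, f_3dB = (f_T^−2 + f_RC^−2)^−1/2 and t_r = 0.35/f_3dB; the saturation velocity and permittivity used are illustrative, Ge-like values rather than the paper's.

```python
import numpy as np

eps0 = 8.854e-14     # vacuum permittivity, F/cm

def bandwidth(v_s_cm_s, eps_r, t_i_cm=3000e-7, diameter_um=50.0, R_load=50.0):
    """Schematic 3-dB bandwidth of the p-i-n PD.  The transit-time term uses
    the common approximation f_T ~ 0.45 v_s/t_i (an assumption; the paper's
    exact prefactor is not reproduced here)."""
    A = np.pi*(diameter_um*1e-4)**2/4.0            # mesa area, cm^2
    f_T  = 0.45*v_s_cm_s/t_i_cm                    # transit-time limit, Hz
    C    = eps_r*eps0*A/t_i_cm                     # junction capacitance, F
    f_RC = 1.0/(2.0*np.pi*R_load*C)                # RC limit, Hz
    f3dB = 1.0/np.sqrt(1.0/f_T**2 + 1.0/f_RC**2)   # combined 3-dB bandwidth
    return f_T, f_RC, f3dB, 0.35/f3dB              # last entry: rise time t_r

# Illustrative hole saturation velocity ~ 6e6 cm/s and eps_r ~ 16 (Ge-like).
fT, fRC, f3, tr = bandwidth(6e6, 16.0)
print(f"f_T = {fT/1e9:.1f} GHz, f_RC = {fRC/1e9:.1f} GHz, "
      f"f_3dB = {f3/1e9:.1f} GHz, t_r = {tr*1e12:.0f} ps")
```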
Conclusions We have developed GeSn-rule-23 for the purpose of evaluating the performance limit of Ge1−xSnx IR PDs. This rule establishes the relationship between the cutoff wavelength and operating temperature for GeSn PDs with different Sn compositions. Fitting parameters that describe the dependence of dark current density on cutoff wavelength and temperature are determined. In comparison with the experimental data obtained so far from the reported Ge1−xSnx PDs, this study suggests that their performance has significant room to improve. In addition, temperature-dependence analysis of the optical responses indicates that the optical responsivity increases, while the detectivity and 3-dB bandwidth decrease, at higher temperatures. The GeSn-rule-23 is expected to provide useful guidelines for device and system developers to select proper Sn compositions and operation temperatures to achieve the desired device performance to meet the requirements of practical applications.
Figure 1. Schematic diagram of the normal-incidence GeSn p−i−n homojunction photodetector on a Si (001) substrate with a GeSn virtual substrate.
Figure 2. Calculated cutoff wavelength versus operation temperature for GeSn PDs with different Sn contents.
Figure 3. Calculated GeSn−rule−23 performance for GeSn PDs. (a) Calculated dark current density as a function of temperature. (b) Calculated dark current density at T = 300 K for GeSn PDs as a function of Sn content. Experimental dark current densities taken from Refs. [9-24] are also shown for comparison. Calculated (c) dark current density and (d) R0A product of the GeSn PDs as a function of cutoff wavelength in the temperature range of T = 200-300 K. (e) Calculated GeSn−rule−23 performance by means of dark current density as a function of the reciprocal of the cutoff wavelength-temperature product (solid lines) compared with reported experimental data (scatters) in the literature [9-24]. MCT rule−07 (dashed−dotted line) [26] and IGA−rule−17 (dashed line) [27] are also depicted for comparison.
Figure 5. Calculated optical responsivity spectra of the GeSn PDs with various Sn contents in the temperature range of T = 200-300 K.
Figure 6. (a) Calculated specific detectivity spectra of GeSn PDs with various Sn contents at T = 300 K. (b) Calculated specific detectivity spectra for the Ge0.9Sn0.1 PD in the temperature range of T = 200-300 K. (c) Calculated peak detectivity and NEP as a function of cutoff wavelength in the temperature range of T = 200-300 K.
Figure 7. Calculated (a) Γ-CB and (b) L-CB electron and (c) hole saturation velocities as a function of Sn content in the temperature range of T = 200-300 K.
Figure 8. (a) Calculated transit-time-delay-limited bandwidth, RC-delay-limited bandwidth, and 3-dB bandwidth of GeSn PDs as a function of Sn content at T = 300 K. (b) Calculated 3-dB bandwidth and response time as a function of Sn content at various temperatures.
8,757
2023-08-24T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Managing social-educational robotics for students with autism spectrum disorder through business model canvas and customer discovery Social-educational robotics, such as NAO humanoid robots with social, anthropomorphic, humanlike features, are tools for learning, education, and addressing developmental disorders (e.g., autism spectrum disorder or ASD) through social and collaborative robotic interactions and interventions. There are significant gaps at the intersection of social robotics and autism research dealing with how robotic technology helps ASD individuals with their social, emotional, and communication needs, and supports teachers who engage with ASD students. This research aims to (a) obtain new scientific knowledge on social-educational robotics by exploring the usage of social robots (especially humanoids) and robotic interventions with ASD students at high schools through an ASD student–teacher co-working with social robot–social robotic interactions triad framework; (b) utilize Business Model Canvas (BMC) methodology for robot design and curriculum development targeted at ASD students; and (c) connect interdisciplinary areas of consumer behavior research, social robotics, and human-robot interaction using customer discovery interviews for bridging the gap between academic research on social robotics on the one hand, and industry development and customers on the other. The customer discovery process in this research results in eight core research propositions delineating the contexts that enable a higher quality learning environment corresponding with ASD students’ learning requirements through the use of social robots and preparing them for future learning and workforce environments. Social robots, especially humanoids, are popular with humans due to their anthropomorphic, humanlike features and their capability to perform autonomous movements, sensory-motor tasks, and verbal and non-verbal communications (Zhang et al., 2019;Arora et al., 2022;Bertacchini et al., 2022).Social robotics researchers have defined 'social robots' in the HRI literature as 'sociable' (i.e., robots can be used as tools/aid for social cognition), 'socially evocative' (i.e., robots are anthropomorphic and evoke positive feelings in humans during human-robot interaction), 'socially intelligent' (i.e., robots portray social intelligence and exhibit models of social competence and human cognition), 'socially situated' (i.e., robots are intelligent beings and can distinguish objects and other social agents in their social space), and 'socially interactive' (i.e., robots can be utilized for peer-to-peer HRI for social interaction and interventions with humans) agents (Mahdi et al., 2022;Newman et al., 2022;Roesler, 2023). In the Springer Handbook of Robotics, social robots are defined as being 'human-centric' with the capabilities of operating in human-centered environments (Breazeal et al., 2016).They can be humanoids or animal-like with a unifying feature of engaging "people in an interpersonal manner, communicating and coordinating their behavior with humans through verbal, nonverbal, and affective modalities" (Breazeal et al., 2016(Breazeal et al., p. 1936)). 
Autism Spectrum Disorder (ASD) is a pervasive developmental disorder characterized by abnormalities in social interaction and communication, and restricted, repetitive patterns of behavior, interests, or activities (DSM-5, 2013).Many students with ASD typically avoid direct physical contact, do not orient toward others, do not point to communicate, and do not display signs of happiness or interest (Rutter, 2011).Some individuals with ASD require a high level of assistance in their daily lives, while others may function independently.ASD usually manifests before three and can last throughout a person's life, though symptoms may improve with age (Rutter, 2011;Rutter et al., 2011). Our research aims to fill significant gaps in the robotics education literature in the context of HRI by focusing on how high school students diagnosed with ASD engage in instruction provided by educational-social robots.Many research studies are available for educational robotics with elementary and middle school students.Still, little research is available on high school ASD students' motivation, cognition, and engagement with educationalsocial robots.Additionally, from robot design and curriculum development perspectives, there is a lack of expertise in creating a versatile methodology (e.g., Business Model Canvas or BMC) for robot design and curriculum development aimed at ASD students that can be validated through systematic investigation across educational research and industry applications.Even though many aspects of the BMC methodology, such as defining the system's goals, participatory design, and conducting ethical research on children/adolescents are intuitive, no visual, structured, and standardized methodology like BMC is available to the HRI and ASD community.As a strategic management tool, BMC is used to develop new business models or improve existing ones.It considers key stakeholders, value propositions, infrastructure, customers, customer relationships, and finances.Such a tool can help bridge existing gaps in HRI research among robotic designers, roboticists, industry, academics, students (interacting with social-educational robots), parents, teachers, and counselors (Arora et al., 2022). 
This research makes three significant contributions.First, we develop an integrative ASD student-teacher (co-working with social robot)-social robotic interactions triad framework that considers the social context in which robots operate with ASD students and teachers co-working with social robots and robotic technology.We offer eight core research propositions that highlight avenues for research.Robotic interactions and collaborations between humans (ASD students and teachers co-working with robots to help students with ASD) and social robots help to design and make knowledgebased, social robots and robotic technology more relevant and effective.Second, we highlight the importance and relevance of the Business Model Canvas (BMC) framework, signifying the triad of ASD student-teacher (co-working with social robot)-social robots.We conducted a series of customer discovery interviews in high school contexts with ASD students and their teachers (coworking with robots to help ASD students) in a large metropolitan area and a federal district of the United States of America to illustrate the BMC framework's relevance.Third, we will help connect the interdisciplinary fields of consumer behavior research, AI, social robotics, and human-robot interaction (HRI).Through this research, we wish to illustrate how the field of social robotics is helping to shape a sustainable future involving neurodivergent ASD individuals, which is far beyond the mere replacement of human workers. In this research, we propose a conceptual framework through a business model canvas methodology and customer discovery interviews of key stakeholders engaged in social robotic interactions with ASD students.Our study aims to target the 2030 Sustainable Development Goals (SDGs) of the United Nations Organization, which were developed as an internationally agreed "plan of action for people, planet and prosperity, " especially item 3-Good health and wellbeing (ensuring healthy lives and wellbeing at all ages), and item 4-Quality education (ensuring inclusive and equitable quality education and promoting lifelong learning opportunities for all).In this investigation, we broaden the research focus by considering the combinational, complex dynamics of the Business Model Canvas (BMC incorporating the triad: ASD student-teacher (co-working with social robot)-social robotic interactions framework. The rest of the paper is organized as follows.The upcoming sections focus on the literature review followed by business model canvas methodology and customer discovery interviews addressing previously mentioned research questions.After that, we focus on our conceptual framework: ASD student-teacher (co-working with social robot)-social robotic interactions triad framework leading to the development of research propositions.Lastly, we present conclusions, limitations, and future research directions. 
2 Social robotics and autism: a review of literature Social robots are proven to help both typically developing (TD) students and students with autism spectrum disorder (ASD) (Chevallier et al., 2012), a neurodevelopmental disorder characterized by social communication impairments and abnormal (repetitive) behaviors (DSM-5, 2013). For example, social robots can enhance engagement and motivation, promote personalized learning, and encourage STEM education for TD students (Zhang et al., 2019; Arora et al., 2023). In contrast, for ASD students, they offer a safe and predictable environment for social interaction training, provide consistent and repetitive practice sessions, and can be customized to address individual needs, thus reducing overstimulation. These benefits, underscored by previous research (e.g., Chevallier et al., 2012; Belpaeme and Tanaka, 2021; Arora et al., 2023), highlight the versatility and effectiveness of social robots in educational settings, catering to the diverse needs of students across the spectrum of development. The development of ASD-specific social robots can be traced back to the seminal study by Emanuel and Weir (1976), in which a computer-controlled electrotechnical device, a turtle-like robot (LOGO) moving on wheels around the floor, was used as a remedial tool for a student diagnosed with ASD. It was not until the late 1990s that numerous laboratories started investigating this topic (see Begum et al., 2016; Ismail et al., 2019; Leoste et al., 2022; Bertacchini et al., 2022; Soleiman et al., 2023; Bharatharaj et al., 2023 for reviews). In the current research, a 'student diagnosed with ASD' is referred to as an 'ASD student.' As stated earlier, ASD is a pervasive developmental disorder that affects social interaction, communication, and behavior development, impacting each person differently and to varying degrees of severity, as the word "spectrum" implies. ASD symptoms can appear in any order and range from mild to severe. In the social context of high schools, communication and engagement challenges can lead to social isolation and bullying for ASD adolescents (Humphrey and Symes, 2010; Salhi et al., 2022). Adolescence brings increasing self-awareness of social challenges for some students with autism, and negative encounters with peers can intensify social anxiety (White et al., 2011). Healthy peer interactions have been shown to enhance positive social and academic results (Lynch et al., 2013). Social issues are a significant obstacle for high school adolescents with ASD in achieving their scholastic goals (Camarena and Sarigiani, 2009). Since most social encounters occur outside the classroom, in hallways, in school cafeterias, and during extracurricular activities, the more challenging aspects of social life for students with ASD (e.g., entering social circles, making friends, and cultivating intimate relationships) may go unaddressed or overlooked by teachers and administrators. 
Social Motivation Theory (SMT: Chevallier et al., 2012) highlights that ASD students usually prefer nonhuman and mechanical stimuli rather than seeking out or maintaining relationships with human partners (Fosch-Villaronga and Heldeweg, 2018; Tavakoli, Carriere, and Torabi, 2020; Burns et al., 2022). Social interaction challenges for ASD students stem from abnormal processing of social rewards, leading to decreased attention towards social cues early on. This diminished social focus then hinders the acquisition of social skills by limiting exposure to social learning experiences, consequently contributing to difficulties in social communication and interaction (Chevallier et al., 2012; Fosch-Villaronga and Heldeweg, 2018; Tavakoli et al., 2020). Building on SMT, social robots can engage ASD students through three socio-biological mechanisms.
• Robots represent "social agents" that can move in a three-dimensional space and physically interact with people and the environment through social orienting.
• Adjustable sensory-cognitive stimulation can promote a more significant perceptive experience as a social reward than a simple video game.
• A robotic system is perceived as an "artificially intelligent humanlike agent" that can simulate human behavior in social-affective development through social maintenance, guiding ASD students in the complex world of social interactions (Chevallier et al., 2012; Burns et al., 2022).
The social motivation theory of autism (SMT: Chevallier et al., 2012) suggests that individuals with ASD may have impaired social motivation, affecting their social learning and interactions. We can connect this theory with the Business Model Canvas (BMC) methodology by understanding the social motivation challenges faced by individuals with ASD in the business context. The Business Model Canvas (BMC) is a strategic management tool/methodology that allows one to describe, design, challenge, invent, and pivot a business model. Our research examines the use of the business model canvas and customer discovery interviews to develop responsive robotics education for high school students with ASD. One of the research questions is: how can we use customer discovery interviews and the associated inquiry processes to develop responsive robotics education through the Business Model Canvas (BMC) to capture all stakeholders in the robotic intervention process with ASD students? We address this question in the following sections. 3 Research methodology Business model canvas (BMC) as a research methodology The Business Model Canvas (BMC) is a research-based, industry-oriented framework highlighting key partners, key activities, and resources related to the research, value propositions, customer relationships, customer segments, and channels (as shown in Table 1). The BMC framework integrates user experience (UX) at its core, emphasizing UX best practices to develop responsive, ethical-educational-social robots that are commercially viable in HRI situations. As the HRI literature points out, "there is a lack of expertise in integrating and adapting UX best practices and defining UX goals in the context of HRI" (Nielsen et al., 2021, p. 266). The BMC seeks to address the gaps in the literature by providing a flexible, industry-oriented framework for developing and designing ethical robots or an ethical curriculum for educational-social robots. A business model canvas is developed to design ethical robots engaged in robotic interventions for high school and university students with learning disabilities (refer to Table 1). 
Table 1 highlights the key partners of the BMC Framework, including the Public School System (primarily middle and high schools), ASD students and teachers, and robotic companies.Our first value proposition is to increase the engagement of ASD students through social robotics.Our second value proposition is to increase robotic companies' revenue through potential partnerships with K-12 schools.Our third value proposition is to help minimize the time for engagement of ASD students in schools and universities through our recommended technology/robotics.The potential impact would extend beyond robotics companies to the entire K-12 school system.The customer segments include public schools, ASD students and teachers, parents, associations, technology heads (as Influencers), and robotic companies (e.g., RobotLAB from San Francisco, CA) as economic buyers and partners. Application of Business Model Canvas (BMC) in Human-Robot Interaction (HRI) Design.BMC methodology has been used in previous research related to robotics.Metelskaia et al. (2018) examined a specialized BMC for AI solutions in the context of robotics and AI.This framework is instrumental in aligning AI engineering, including HRI design, with broader business strategies.The study emphasized the importance of integrating technical development with market-oriented approaches, a highly applicable principle to HRI design (Metelskaia et al., 2018).This research presented BMC as a useful tool for creating and analyzing robotic and AI solutions.Exploring the dynamic aspects of BMC, Romero et al. (2015) presented an enriched BMC design using system dynamics.This approach offered a more nuanced understanding of the complexities involved in HRI design, emphasizing the flow network and the potential for identifying and testing changes in the business model.This modified approach showed additional benefits that can be obtained with its application.Zec et al. (2014) discussed the strengths and limitations of the BMC approach in collaborative environments.Their analysis provided insights into how software support can enhance collaborative design and evaluation of business models, a concept that can be extrapolated to collaborative HRI design processes.Bätz and Siegfried (2022) critically examined BMC's use in entrepreneurial contexts, suggesting that it might oversimplify the multifaceted nature of business environments, such as those in robotics.This critique is crucial in assessing BMC's applicability in the intersecting domains of technology, human interaction, and business goals.Joyce and Paquin (2016) introduced a triple-layered business model canvas, adding environmental and social layers to the traditional BMC.This extension is particularly relevant for HRI design, underscoring the need for sustainable and socially responsible robotics solutions. 
Despite the above shortcomings, BMC methodology offers a viable and effective framework to understand the applicability of social robotics for students with ASD.Even though the research on BMC and HRI environments is limited, the application of BMC in HRI design offers a comprehensive framework for aligning robotic technology with strategic business objectives.Previous research highlights the versatility of BMC in addressing diverse aspects of HRI design, from enhancing learning environments to ensuring sustainability and social responsibility.The convergence of BMC and HRI design has the potential to pave the way for more integrated, effective, and responsible robotic solutions in various sectors.Our research directly applies BMC methodology aided by customer discovery interviews to develop responsive robotics education for high school students with ASD. Business Model Canvas (BMC) and Social Motivation Theory of Autism (SMT).When considering the connection between SMT and BMC, it is important to integrate the understanding of social motivation challenges faced by ASD individuals into the various elements of the business model.For instance, in the customer segments and customer relationships sections, businesses can consider how to adapt their approaches to account for the social motivation difficulties of ASD individuals.This may involve creating inclusive and accessible customer experiences and communication strategies. Furthermore, in the key activities and resources sections of the BMC framework, businesses can explore how to support employees with ASD by providing appropriate accommodations that consider their social motivation challenges.This may involve tailored training programs, workspace adjustments, and communication support.By integrating the principles of the SMT into BMC, businesses can work towards creating more inclusive environments for ASD individuals, thereby tapping into a potentially underutilized talent pool, and better serving a diverse customer base.For the value propositions section of BMC, we need to develop a deeper understanding of SMT that resonates with individuals with ASD, such as creating environments or products that are less overwhelming and more accommodating to sensory sensitivities.SMT can influence the choice of channels used to reach out to ASD individuals, opting for those that are more aligned with their social preferences and comfort zones.Adapting a business model to cater to ASD individuals might involve unique cost considerations.Still, it could also open up new revenue streams by tapping into an often underserved market. 
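To make the canvas structure concrete, the sketch below represents the nine standard BMC building blocks as a small Python data structure and populates some of them with the entries described for Table 1. This is a minimal illustration only: the class and field names are ours, and the blocks that the text does not enumerate (key activities, channels) are filled with hypothetical placeholders.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class BusinessModelCanvas:
    """The nine BMC building blocks, each held as a simple list of entries."""
    key_partners: List[str] = field(default_factory=list)
    key_activities: List[str] = field(default_factory=list)
    key_resources: List[str] = field(default_factory=list)
    value_propositions: List[str] = field(default_factory=list)
    customer_relationships: List[str] = field(default_factory=list)
    channels: List[str] = field(default_factory=list)
    customer_segments: List[str] = field(default_factory=list)
    cost_structure: List[str] = field(default_factory=list)
    revenue_streams: List[str] = field(default_factory=list)

    def summary(self) -> Dict[str, int]:
        """Count how many entries each block currently holds."""
        return {name: len(entries) for name, entries in self.__dict__.items()}


# Entries below paraphrase the Table 1 description in the text; blocks marked
# "placeholder" are illustrative assumptions, not statements from the study.
asd_robotics_bmc = BusinessModelCanvas(
    key_partners=[
        "Public school system (primarily middle and high schools)",
        "ASD students and their teachers",
        "Robotic companies (e.g., RobotLAB)",
    ],
    value_propositions=[
        "Increase engagement of ASD students through social robotics",
        "Grow robotic companies' revenue via K-12 partnerships",
        "Reduce the time needed to engage ASD students in schools",
    ],
    customer_segments=[
        "Public schools", "ASD students and teachers", "Parents",
        "Associations", "Technology heads (influencers)",
        "Robotic companies (economic buyers and partners)",
    ],
    key_activities=["Robotic intervention sessions", "Curriculum development"],  # placeholder
    channels=["School counselors and the IEP process"],  # placeholder
)

if __name__ == "__main__":
    for block, count in asd_robotics_bmc.summary().items():
        print(f"{block}: {count} entries")
```

Keeping each block as a plain list mirrors how the canvas is meant to be revised iteratively as new customer discovery insights arrive.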
Customer discovery interviews Customer discovery interviews are a crucial component of the business model canvas (BMC) research methodology, particularly in the field of social robotics and human-robot interaction (HRI) (Arora et al., 2023). These interviews involve engaging with potential customers to understand their needs, preferences, and problems (or pain points), which can then be used to inform the development of a business model. In the context of social robotics and HRI, customer discovery interviews can provide valuable insights into the specific use cases and applications of robots in various industries, such as hospitality and tourism (Tung and Au, 2018; de Kervenoael et al., 2020). For example, Tung and Au's (2018) study on consumer experiences with robotics in hospitality highlights the influence of robotic embodiment and human-oriented perceptions on consumer experiences, which can offer valuable insights for businesses in this sector. Similarly, de Kervenoael et al.'s (2020) work on visitors' intentions to use social robots in hospitality services underscores the importance of perceived value, empathy, and information sharing in driving these intentions, providing further guidance for businesses in this field. The BMC methodology incorporates insights gathered from customer discovery interviews, ensuring that user needs and preferences are considered during the design and development process (Arora et al., 2023). These customer discovery insights can then be integrated into the BMC methodology to develop a sustainable and effective business model for social robotics and HRI. BMC's visual representation (refer to Table 1) encourages collaboration among team members, providing a clear and concise representation of the social robot's components and their interrelationships. BMC's modular structure enables researchers and developers to easily modify and update different aspects of the social robot as new insights or technological advancements emerge (Alves-Oliveira et al., 2022; Arora et al., 2023). By utilizing the BMC, researchers can create a visual representation of the various components that contribute to successful social robot interaction and implementation, including customer segments, value propositions, channels, and revenue streams (Arora et al., 2023). By combining customer discovery interviews with the BMC methodology, robotic creators, developers, and researchers can create social robots that are not only technologically advanced but also user-centric, maximizing user experience and ultimately leading to more successful and effective HRI implementation (Alves-Oliveira et al., 2022). We conducted two studies utilizing customer discovery interviews. Study one engaged ASD students and their teachers, with a total of 25 customer discovery interviews conducted. Study two involved other stakeholders from schools (e.g., school principals, technology heads) in addition to industry professionals from robotic companies, with a total of 35 customer discovery interviews conducted. Study one is described below. Study two is described later, under Section 5.3. 
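Before turning to the two studies, the following is a minimal, hypothetical sketch of how interview notes and stated pain points might be organized by stakeholder segment; the example notes are illustrative stand-ins, not actual interview data from either study.

```python
from collections import Counter, defaultdict
from typing import Dict, List, NamedTuple


class InterviewNote(NamedTuple):
    stakeholder: str       # e.g., "ASD student", "teacher", "technology head"
    study: int             # 1 or 2, matching the two interview studies
    pain_points: List[str]


def aggregate_pain_points(notes: List[InterviewNote]) -> Dict[str, Counter]:
    """Tally how often each pain point is raised, grouped by stakeholder segment."""
    by_segment: Dict[str, Counter] = defaultdict(Counter)
    for note in notes:
        by_segment[note.stakeholder].update(note.pain_points)
    return dict(by_segment)


# Hypothetical notes for illustration only.
notes = [
    InterviewNote("teacher", 1, ["needs training to co-work with robot"]),
    InterviewNote("teacher", 1, ["fears being replaced by robot"]),
    InterviewNote("ASD student", 1, ["wants teacher present during robot session"]),
    InterviewNote("technology head", 2, ["data privacy and student safety"]),
]

for segment, tally in aggregate_pain_points(notes).items():
    print(segment, dict(tally))
```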
Study 1: Participants and Educational Settings. We conducted sixteen customer discovery interviews with ASD students from high schools after they interacted with social robots during HRI field experiments or social robotic intervention sessions. We also conducted nine interviews with their teachers, who had interacted with both the robots and the students. These interviews were conducted at three different public high schools in a large metropolitan federal district of the United States. We used Individualized Education Plans (IEPs) to recruit students in consultation with the school counselors and teachers. A student's IEP confirmed that the recruited participant had a professional diagnosis of ASD and that the teachers interviewed were aware of the student's ASD diagnosis. The high school students were 15-17 years old, with 11 males and five females. We used two kinds of social robots: NAO and Pepper. Both robots easily create an empathetic link with students, teachers, and researchers through their eye-catching appearance, moderate size, and humanoid behaviors (https://www.aldebaran.com/en/pepper-and-nao-robots-education). Our research proposal, including its objectives, methodologies, and participant engagement strategies, was approved by the Institutional Review Board. Procedure. Five sessions (1 hour each) were conducted using the social robots with 16 ASD high school students (see Supplementary Appendix S1). At the end of each session, researchers filled out an evaluation form with five variables (focused attention, following instructions, physical and verbal imitation, emotional response, and performance). Parents were informed of the study, and their consent was obtained before it began. The inclusion criteria for student selection were: (a) high school students diagnosed with ASD, (b) between 15 and 17 years of age, (c) 'informed consent' signed by their parents, and (d) selected by high school counselors for HRI experiments with social robots according to their respective IEPs. The exclusion criteria were: (a) high school students who did not meet the age criteria (15-17 years of age), (b) students whose parents did not provide 'informed consent', and (c) students with hearing, speech, or vision deficits, with abnormal eye movements, with comorbidities such as Fragile X syndrome or Down's syndrome, and/or students diagnosed with other learning disorders. 
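A minimal sketch of the screening logic implied by these inclusion and exclusion criteria is shown below; the field names are ours and the example candidates are hypothetical, not participants from the study.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    age: int
    asd_diagnosis_in_iep: bool     # ASD diagnosis confirmed via the student's IEP
    parental_consent: bool         # signed informed consent from parents
    selected_by_counselor: bool    # counselor selection per the IEP
    # True if any exclusion applies: hearing/speech/vision deficits, abnormal eye
    # movements, Fragile X syndrome, Down's syndrome, or another learning disorder
    has_exclusion_condition: bool


def eligible(c: Candidate) -> bool:
    """Apply the study's stated inclusion and exclusion criteria."""
    meets_inclusion = (
        c.asd_diagnosis_in_iep
        and 15 <= c.age <= 17
        and c.parental_consent
        and c.selected_by_counselor
    )
    return meets_inclusion and not c.has_exclusion_condition


print(eligible(Candidate(16, True, True, True, False)))  # True
print(eligible(Candidate(14, True, True, True, False)))  # False: outside the 15-17 age range
```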
Supplementary Appendix S1 provides information on the demographics of the ASD students and teachers, and Supplementary Appendix S2 provides more detail about the interview procedure and questions. Both sets of interviews included questions dealing with task accomplishment according to the curriculum developed for social-emotional skills targeting ASD students, and with the interpersonal/people dimension of the task, focused on social-emotional skills development. Multiple robotic intervention sessions were conducted with these 16 ASD students. The curriculum-related, educational-ethical robotic intervention scenarios, focused on the social-emotional learning (SEL) skills of comfort zone, conflict resolution, and job search, were developed as part of the current research. These newly developed curriculum-related robotic intervention scenarios include:
• Comfort Zone: This human-robot activity introduces humans (ASD individuals) to the concept of a comfort zone through the social robot and explains its benefit to the ASD individual.
• Conflict Resolution: This human-robot activity explores the skill of conflict resolution, helping someone resolve a conflict within themselves or between others: communicate, compromise, ask for help, apologize, and write a pro-and-con list, all through the social robot.
• Job Search: This activity explores the skill of job search (explained to the human/ASD individual through the robot) and making use of available job opportunities: talking with others, business signs, community support, the internet, and/or newspapers.
Lessons on moral values and ethics were integrated for the ASD students in each SEL skill. These human-robot activities were developed as case-based, ethical curriculum-related robotic intervention scenarios and individual lesson plans for students with ASD and other learning disabilities (see Arora et al., 2022). Interviews have been used in previous research across disciplines as a robust method for formulating research propositions. Specifically, van Doorn et al. (2023) leverage the power of qualitative interviews to explore significant relationships at the intersection of consumers, autonomous technology, and workers. This approach is pivotal in developing their Consumer-Autonomous Technology-Worker (CAW) framework, which sheds light on the evolving landscape of organizational frontlines in the digital age. By engaging directly with workers co-working with robots and consumers interacting with these human-robot teams, van Doorn et al. (2023) gather rich, contextually nuanced insights that are critical for framing their research propositions. All interviews were transcribed by the researchers to ensure that the rich, qualitative data contained within could be analyzed. Our methodology was informed by the principles and practices suggested by van Doorn et al. (2023), tailored to explore the specific nuances and dynamics observed in HRI settings. Our analytical process involved a detailed examination of the transcriptions to identify key patterns, behaviors, and insights related to the interaction between students and social robots. This involved an iterative process of coding the data (first-order codes, second-order codes, and aggregate dimensions) following the Gioia approach (Gioia et al., 2013), discussing emergent patterns among the research team, and refining our understanding of the data in light of the broader literature and the specific objectives of our research. 
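To illustrate the coding roll-up just described (first-order codes grouped into second-order themes and then into aggregate dimensions), the sketch below uses a small, hypothetical code book; the actual codes emerged from the research team's iterative analysis and are not reproduced here.

```python
from typing import Dict, List

# Hypothetical code book for illustration only.
first_to_second: Dict[str, str] = {
    "student waits for robot session": "engagement with robot",
    "student happier when teacher brings robot": "teacher-mediated engagement",
    "teacher worries about replacement": "teacher apprehension",
}
second_to_aggregate: Dict[str, str] = {
    "engagement with robot": "student-robot relationship",
    "teacher-mediated engagement": "student-teacher-robot triad",
    "teacher apprehension": "teacher-robot relationship",
}


def roll_up(first_order_codes: List[str]) -> Dict[str, List[str]]:
    """Group first-order codes under their aggregate dimensions via second-order themes."""
    grouped: Dict[str, List[str]] = {}
    for code in first_order_codes:
        theme = first_to_second.get(code, "uncoded")
        dimension = second_to_aggregate.get(theme, "uncoded")
        grouped.setdefault(dimension, []).append(f"{code} -> {theme}")
    return grouped


for dimension, trail in roll_up(list(first_to_second)).items():
    print(dimension, trail)
```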
By interviewing students, teachers, and robotic professionals from the industry, we gain access to their lived experiences and insights, which are crucial for grounding our research propositions in real-world contexts. This approach aligns with the qualitative research tradition, where the depth and richness of data gathered through interviews offer a robust foundation for developing meaningful and relevant research propositions. Thus, the methodology employed in our study, inspired by van Doorn et al.'s (2023) work, stands on solid academic ground, demonstrating that customer discovery interviews are powerful instruments for generating deep insights essential for scholarly research. Conceptual framework: ASD student-teacher-robot triad framework A social robot can be designed along a spectrum of autonomy, ranging from non-autonomous (Wizard-of-Oz), to semi-autonomous, to fully autonomous. Autonomous technology is defined as "machines capable of performing actions without (or with minimal) human intervention that can change their behavior in response to unanticipated events (Watson and Scheidt, 2005) … developed remarkably over recent decades and has become a top priority of both researchers and managers" (van Doorn et al., 2023, p. 2). Much research is available on consumer-facing AT (e.g., chatbots or digital voice assistants such as Alexa and Siri) that helps consumers select the right goods and services (Guha et al., 2021). In this research, embodied robots (e.g., Pepper) guiding consumers in a store, a building, or school premises are considered consumer-facing AT. An example of employee (or worker)-facing AT is a medical AI assisting hospital doctors (Longoni et al., 2019). In our research scenario, teachers working and collaborating with NAO and Pepper robots to teach ASD students can be considered employee-facing AT. Utilizing van Doorn et al.'s (2023) framework for consumer-facing AT and worker-facing AT, we propose our ASD student-teacher (co-working with social robot)-social robot triad framework (refer to Figure 1) with eight core research propositions. Figure 1 illustrates the relationships between ASD students and their teachers and how these relations change when social robots are integrated into curriculum planning for ASD students. We acknowledge that reality may be more complex than these portrayed relationships between ASD students and their teachers. In each quadrant of Figure 1, we provide anecdotal evidence from a series of interviews conducted during human-robot interaction (HRI) field experiments or social robotic intervention sessions between the robots and the ASD students. Both ASD students' and teachers' interviews focused on social robotic interventions (or HRI field experiments). They emphasized the need for collaboration between the teacher and their ASD student in a way that the ASD student-teacher (co-working with social robot)-social robot triad is considered as a whole. 
Utilizing the Business Model Canvas (BMC) framework to signify the triad of ASD student-teacher (co-working with social robot)-social robot (as seen in Figure 1), we develop eight research propositions in the next section. We start by focusing on the ASD student-social robot and teacher (co-working with social robot)-social robot dyads. Each quadrant of the ASD student-teacher (co-working with social robot)-social robot triad framework (Figure 1) is explained in detail in the following section. Thereafter, we bring the three actors together in the triad framework by offering eight core research propositions. Development of research propositions 5.1 Teacher-student relationship through the social robot: how does the presence of a 'social robot' change (a) the way ASD students relate to their teachers, and (b) the way teachers interact and relate to ASD students? This sub-section focuses on the top-left and bottom-left quadrants of the ASD student-teacher (co-working with social robot)-social robot triad framework (see Figure 1). The top-left quadrant focuses on the relationship between the ASD student and the teacher (co-working with the robot) through the presence and use of a social robot. The presence of a social robot helps both ASD students and their teachers focus on social-emotional skills and provides more time for teachers and counselors to engage in interaction, feedback, and future curriculum planning. Social robots can help ASD students engage with the learning material by providing a novel and interactive learning experience (Belpaeme and Tanaka, 2021). This aspect can be particularly beneficial for students with ASD, who may struggle with traditional teaching methods, and indicates that social robots can play a supportive role in strengthening the ASD student-teacher relationship in the context of ASD education (refer to the top-left quadrant of Figure 1). The bottom-left quadrant focuses on the relationship between the teacher (co-working with the robot) and the ASD student through the presence and use of a social robot. When ASD students are exposed to social robotic interactions through their teachers, it may affect how they relate to their teachers and how their teachers interact and relate to the students. Teachers may frequently switch between different perspectives when interacting with social robots, such as viewing the robot as a didactic tool or a social actor (Ekström and Pareto, 2022). This flexibility could potentially help teachers adapt their teaching strategies to support ASD students better. Teachers may try to create an inclusive approach, encourage collaboration, and establish mutual trust between the actors in their assigned roles (Ekström and Pareto, 2022). This could lead to a more supportive and inclusive learning environment for ASD students, created by their teachers through social robots (refer to the bottom-left quadrant of Figure 1). Social-educational robotics is an innovative and viable platform for:
• teaching and learning science, technology, engineering, and mathematics (STEM) and STEM-related curricula across diverse disciplines;
• developing broad learning skills such as scientific inquiry, engineering design, problem-solving, creative thinking, and teamwork; and
• fostering students' motivation to engage in science and technology while reducing psychological and cultural barriers for minority students from underprivileged communities. 
Robotics education can drive students to behave as coconstructors of learning rather than passive knowledge receivers or technology consumers.Broadening involvement in STEM education is essential for providing equitable learning opportunities for students with varying needs and diverse backgrounds (Lee and Buxton, 2010;Bellas and Sousa, 2023).Social-educational robots may serve as social mediators, encouraging prosocial behaviors in interactions with individuals.These behaviors include orienting the eyes and head, initiating physical contact, and pointing to shared interests (Dautenhahn et al., 2003;Diehl et al., 2012).Thus, our central hypothesis revolves around applying the social motivation theory of ASD to social-educational robotics.Our research focuses on students diagnosed with ASD, referred to as ASD students.It aims to understand how these ASD individuals positively react to sensory rewards delivered by a social robot, indicating their interest and satisfaction exposed to these stimuli (Kostrubiec and Kruck, 2020;Bellas and Sousa, 2023).During our interviews with ASD students, we found that students enjoyed interacting with robots more than their teachers or counselors.However, when teachers incorporated social robots into the classroom curriculum, ASD students enjoyed interacting with their teachers and the robots.One teacher interviewee articulated this by stating that "While social robots take care of the routine task (lesson plan) accomplishment of teaching social, emotional skills (e.g., comfort zones, conflict resolution, preparing for college and job applications, etc.), teachers have extra time to focus on an individual student's progress and growth, thus improving their relationships with their students."Consequently, we propose the following. Research Proposition 1: Teachers are more likely to forge stronger relationships with ASD students when social robots focus on routine tasks (e.g., teaching lesson plans), and teachers focus on creative, relational tasks, leading to overall student development and growth. While interviewing teachers and ASD students, it was evident that some teachers had concerns about robots replacing them (Bellas and Sousa, 2023).Not all teachers are comfortable working with robots; some need training to collaborate with robots and students effectively.However, when teachers overcome their fears and view robots as valuable aids rather than their replacements, they tend to relate better with their students.One teacher interviewee emphasized this by saying, "When my student interacts with NAO or Pepper directly, of course, s/he enjoys the interaction.However, when I bring the robot with me, I find my students happier than them enjoying the interaction without me."Thus, we have the following research proposition. Research Proposition 2: Teachers are more likely to forge weaker relationships with their ASD students when teachers (a) are not provided with adequate training to work and collaborate with robots, and (b) they fear robots replacing them in their jobs. Student-Teacher Relationship through the Social Robot: How does the presence of (a) teachers collaborating with social robots affect the way ASD students relate to their teachers and robots, and (b) ASD students interacting with social robots change the way teachers relate to their ASD students and robots? 
In this sub-section, we focus on the top right and bottom right quadrants of the ASD student-teacher (co-working with social robot)-social robot triad framework (see Figure 1).The top-right quadrant focuses on the relationship between an ASD student and a social robot through the presence of a teacher (co-working with robot).When a teacher is present alongside the social robot, the focus is on how the teacher engages the ASD student in various activities and learning scenarios through the social robot.Teachers can provide personalized and engaging experiences that cater to the specific needs of ASD students.They can offer a range of stimuli that help ASD students improve their social interaction, communication skills, and emotional recognition.The social robot serves as a consistent and predictable assistant to the teacher, which can be especially comforting for ASD students, who often prefer structured and routine interactions.This quadrant highlights the potential of social robots as complementary tools for facilitating learning and social interaction in ASD students (Belpaeme and Tanaka, 2021). The bottom-right quadrant of the ASD student-teacher (coworking with social robot)-social robot triad framework (see Figure 1) explores the relationship between the teacher (co-working with robot) and the social robot when the ASD student is present.During robotic interventions with ASD students, the teacher (co-working with social robot) utilizes the robot as a teaching aid, and the subsequent collaboration aids and influences the educational process.Teachers can leverage the capabilities of social robots to enhance their teaching methods, using them as tools to demonstrate concepts or as interactive elements that add novelty and engagement to lessons.Social robots also allow teachers to observe and understand how ASD students interact with technology, providing valuable insights that can be used to tailor educational approaches (Ekström and Pareto, 2022).This quadrant underscores the collaborative potential between human educators and robotic technologies in creating a more effective and inclusive educational environment for ASD students. Social-educational robotics has proven to be successful with ASD students.However, the potential of robotics in teaching has been debated with little regard for different types of students (Alimisis, 2013).Previous research focused on what robotics concepts and skills ASD and typically developing (TD) students can learn (e.g., Bers et al., 2014;Atmatzidou and Demetriadis, 2016) rather than how they learn.When teachers use social robots by focusing on student learning outcomes, detailed descriptions of robotic kits and curricula for instructional approaches receive much attention.Still, there is insufficient explanation for how different students interacted with the robots and participated in the activities.According to Johnson (2003), "the universality of the robotics phenomenon" (p.16) implies that robotics education is effective for all students regardless of their unique learning styles and diverse backgrounds.It is critical to investigate how children with varying needs and abilities engage in robotics to develop and implement responsive educational programs (Alimisis, 2013;Apraiz et al., 2023) to meet their needs. 
We interviewed the educational technology head and four of their associates for the entire public school system.We inquired about the technology assistance for students with learning disabilities.Evidently, the school system grants autonomy to individual schools to make their own technology choices.Assistive Technology is utilized through computers and/or gaming targeted at students with learning disabilities.Ensuring the safety of technology and safeguarding the privacy of students' data is of utmost importance when engaging students with different learning disorders.In fact, the public schools' system provides autonomous robotic technology (Sphero robots-please see Figure 2) to be used by each system school for all students.Additionally, special education teachers and counselors can request a set of 20 Sphero robots for their classrooms free of cost. One of the ASD student interviewees (Interviewee #3) mentioned: "I wait to meet my NAO on a regular basis.Initially, there were university professors who introduced us to NAO.Now, when my teacher uses NAO and brings it with her, I feel very happy.I feel that robots are our common interests."Students enjoy the overall learning experience with social robots when it is facilitated by their teachers.Thus, we have the following research proposition.Research Proposition 3: ASD students are more likely to forge stronger relationships with their teachers when teachers co-work and collaborate with social robots for teaching purposes due to shared humanness, interests, and engagement. However, in a different situation, if the teacher relies heavily on social robots without involving herself/himself in the class and engaging students, students do not enjoy the overall learning experience.One student interviewee (Interviewee #6) said, "My teacher brings in the robot and then leaves.University researchers make us interact with the robot, but I miss my teacher's voice.I wish she can be present when the robot interacts with us."Another mentioned: "It was my first time interacting with Pepper, and my teacher left me with Pepper and other university professors/researchers.I was intimidated and immediately left the room."Thus, we have the following research proposition. Research Proposition 4: ASD students are more likely to have weaker relationships with their teachers when student-robot interaction is (a) not mediated by their teachers, and (b) when teachers are not a part of the overall social robotic intervention process between ASD students and robots. Social robots' and robotic companies' perspectives Having examined the perspectives of teachers co-working with robots and ASD students interacting with social robots in the above sections, we now focus on the robotic companies' perspectives.Critics of educational-social robotics have argued that emotional bonding created between humans and anthropomorphic robots can make people vulnerable to emotional manipulation (Zhang et al., 2019;Arora, Arora, Jentjens et al., 2022;Arora et al., 2022;Sammonds et al., 2022;Yepez et al., 2022;Apraiz et al., 2023;Roesler, 2023) and can create ethical challenges.Regulation and ethics are interrelated and are essential to regulate robotic frameworks. 
Earlier standards in robotics technology separated robots from human operators for safety reasons, through European legislation such as the Machinery Directive 2006/42/EC and ISO 10218 (robots and robotic devices: safety requirements for industrial robots, Parts 1 and 2), among other laws (Danks and London, 2017; Apraiz et al., 2023). European harmonized standards do not cover robots in educational-social spheres, autonomous vehicles, and/or additive manufacturing. Only industrial robots are covered by these standards and laws. The ISO 13482:2014 standard focuses on human-robot interaction situations involving voice-controlled robotic wheelchairs, exoskeletons, and other social robots, where minimum safety requirements for HRI are defined in terms of design factors dealing with, but not limited to, robot shape, robot storage, robot motion, and other design considerations (Danks and London, 2017). Since there is a lack of standardization in incorporating safety laws and standards into robot design worldwide, there is growing potential for developing such standards for educational-social and collaborative robotics, including service, healthcare, medical, personal care, and/or therapeutic robotics. Some of these standards deal with moral hazards associated with robots. For example, the British Standard BS 8611:2016 on Robots and Robotic Devices enables roboticists to perform an ethical risk assessment of artificial agents. The USA-based IEEE Ethically Aligned Design framework calls for roboticists and engineers to be empowered to take control of ethical design considerations in the development of robots. During the customer discovery interviews, a robotics company professional interviewee mentioned: "We have always wanted to design better ethical robots by working directly with high schools and university researchers. We want to make our robots HIPAA and FERPA compliant for use by vulnerable populations, e.g., ASD students at high schools." Thus, we have the following research proposition. Research Proposition 5: Robot-based ethical interactive intervention scenarios based on school curricula will enhance learning by ASD students. Through the customer discovery interviews with 16 ASD students and nine teachers, we analyzed the curriculum-related, educational-ethical robotic intervention scenarios. These scenarios focused on social-emotional learning (SEL) skills, including comfort zone, conflict resolution, and job search. Pictures and videos of multiple robotic intervention sessions with some of these high school students can be found at: https://photos.app.goo.gl/9EXAW9fBfdkG5Ca5A. Thirteen out of 16 ASD students showed interest in interacting with robots through the three lesson plans: comfort zone, conflict resolution, and job search skills. All the human-robot activities or SEL skills developed as three robotic intervention scenarios were completed in approximately 30- to 45-minute sessions over five instances each. ASD students with high cognitive and low social skills addressed the robot as 'he' or a 'human companion.' Conversely, ASD students with high social and low cognitive skills addressed the robot as 'it' or a 'technology/tool.' 
These interesting findings relate to the fact that ASD students learn by focusing on skills they lack more than the skills they possess.ASD students lacking social skills found robots to be their companions, while ASD students lacking cognitive skills found robots to be their tutors or technology tools for education.Analyzing the overall performance, we found that most ASD students understood the concepts associated with SEL skills through robotic interventions.Thus, we have the following research proposition. Research Proposition 6: ASD students with high cognitive and low social skills are more likely to address the robot as a 'human' companion.In contrast, ASD students with low cognitive and high social skills are more likely to address the robot as 'it' or a 'technology/tool.' Other stakeholders' perspectives Study 2: Participants and Educational Settings.In a separate study, we conducted thirty-five (35) more customer discovery interviews with schools and robotic companies.We interviewed principals, special education counselors, technology heads, and PTAs (parent-teacher associations) at three high schools in the public school system (a total of 20 interviews).Further, we interviewed 15 robotics company professionals. Procedure.Supplementary Appendix S3 includes the interview questions from 35 customer discovery interviews (20 interviews with school professionals and 15 interviews with robotics company professionals) using BMC methodology.In response to the customer discovery interviews conducted at schools, a well-known middle school in the public school system (with a high focus on education and technology) reported learning disabilities and disorders (e.g., anxiety, autism spectrum disorder or ASD, attention deficit hyperactivity disorder or ADHD, learning disabilities linked to diabetes or physical medical condition) for about 20% students (259 out of 1,524 students).At least six customer discovery interviews were conducted with different stakeholders at the school: the principal, PTA president, technology head, and two special needs counselors.ASD was identified as a critical issue, and communication and educational support (CES) services were identified and provided.Two special education programs, 14 special education counselors, three social workers (one per grade level from sixth to eighth grades), and a behavioral analyst worked closely with special needs students.The school had a good focus on technology, and Assistive Technology was already in use for ASD students.Reader Pens (which read to students through iPads) and special hi-tech chairs/furniture were in use.The school system has a policy of 'laptops for every student, ' and the educational system was found to be technology savvy (having funds/resources to use for learning and assistive technology through federal government initiatives/programs). 
A highly ranked high school was selected to conduct six customer discovery interviews with school stakeholders.Of the 1800+ students, about 200 were identified with special needs and learning disabilities (Autism Spectrum with a combination of high/low social/cognitive skills).The school was using a robotic arm for educational purposes.A science teacher organized a gaming club (supported by donations from local businesses).As a part of the gaming club activities, ASD and typically developing (TD) students built gaming computers, conducted gaming activities, worked on flight simulators, and used Xbox for gaming focused on cyber security.We conducted four customer discovery interviews with the principal, technology head, and special educational counselors at another (smaller) high school with 110 students in the Engineering program.Of those students, 20-25 students were identified with learning disabilities.The school had two special needs teachers or counselors.Assistive Technology was utilized through computers. Through the above 20 customer discovery interviews with school administrators, teachers, parents, and technology heads, we discussed the short-term and long-term impacts of humanrobot interaction on ASD students, and the potential pitfalls of over-exposure, over-engagement, and over-attachment with ASD students.We received consensus about the teachers' role and engagement in the overall social robotic intervention process with ASD students.One of the parent interviewees stated: "I think teachers do a fabulous job in avoiding any potential negative effects of ASD students indulging with social robots."In another instance, a high school principal noted: "A teacher's presence in the classroom ensures that technology is not seen as intrusive and there is no over-indulgence with social robots."Teachers were found to be effective in avoiding any potentially adverse effects on ASD students indulging in social robots.Teacher engagement and collaboration with social robots help in successful human-robot interaction (HRI) implementation over the long term.Thus, we have the following proposition. Research Proposition 7: Teacher (co-working with the robot) during student-robot interaction (a) helps ASD students relate more to the social robot in the short term, and (b) decreases any potential over-attachment or over-involvement (or other potential negative consequences) with the social robot over the long term. 
Technology heads at schools believe that technology helps all students, but it should be safe, secure, and ethical (with privacy considerations) when engaging students with ASD and other learning disabilities.Customer discovery interviews with 15 robotic company professionals/roboticists confirm these findings.Robotic companies were keen to work with the public school system.They were open to using academic support, especially where academic researchers can act as 'mediators' between schools and robotic companies for designing ethical curricula for students with learning disabilities.The robotic companies were focused on HIPAA and FERPA compliances for helping students with ASD and learning disabilities.The technology head of the school system stated: "We need to build safeguards with robotic technology.Technology (in any form) should be safe, secure, and ethical (with privacy considerations), especially while engaging students with ASD and other learning disabilities."School technology heads and robotic companies were happy to integrate security and privacy considerations in the robots and robotic systems through web-enabled platforms.Thus, we have the following research proposition. Research Proposition 8: Ethical technological interactions will lead to better (enhanced) learning for ASD students with learning disabilities. Implications of research The human factor plays a significant role in a successful ASD student-social robot interaction mediated by a teacher's presence.This research draws a parallel between ASD student-social robot interaction and van Doorn et al. 's (2023) research highlighting autonomous technology interaction with frontline workers and consumers, examining consumer-AT and worker-AT dyads.Our current research explored the ASD student-teacher-social robot interactions triad framework by considering the social context in which robots operate with ASD students and teachers co-collaborating with social robots and robotic technology.Building on previous literature and customer discovery interviews derived from the business model canvas (BMC) and social motivation theory of autism, we provided eight core research propositions highlighting avenues for research in the triad framework.Robotic interactions and collaborations between humans (ASD students and teachers co-working with robots to help students with ASD) and social robots help in the education (service) sector by bridging the fields of education, artificial intelligence (AI), human-robot interaction (HRI), and consumer behavior.The complex interactions between humans (ASD students, teachers) and social robots need to be studied simultaneously to understand the utilization of social robotics in the education sector.Some industry examples that could potentially work with teachers and ASD students, based on the ASD student-teacher-social robot interactions triad framework are as follows: • Education Technology (EdTech) Companies: These companies develop and provide tools and platforms for educational purposes, including those that can be adapted for students with ASD.We highlighted the relevance of the Business Model Canvas (BMC) framework, signifying the triad: ASD student-teacher (coworking with social robot)-social robots.We also conducted a series of customer discovery interviews in high schools with ASD students along with their teachers/counselors (co-working with robots to help ASD students), parents, technology heads, and robotic company professionals.Through this research, we illustrate how the field of 
social robotics is helping to shape a sustainable future involving neurodivergent ASD individuals, which is far beyond the mere replacement of human workers. While robotic anthropomorphism has been studied extensively, we predicted that the negative impact of over-involvement can be reduced by the presence of a human (i.e., a teacher during HRI). Through the interdisciplinary fields of consumer behavior research, AI, social robotics, and human-robot interaction (HRI), we illustrated the relevance of social robotics and how it changes the relationships between the various actors depending on a series of factors. A division of labor between the social robot and the teacher ensures successful HRI for ASD students, whereby technology (i.e., the robot) augments HRI instead of replacing the teacher (Tsai et al., 2022; Engwall et al., 2023; van Doorn et al., 2023). Human leadership and human factors, through the presence of a teacher who is comfortable collaborating with the robot, help strengthen the HRI impact in the short term for the ASD student and avoid the potential pitfalls of over-exposure and over-attachment to robots among ASD students. This aligns with social presence theory (He et al., 2012). Our research is the first to integrate the research domains of social robotics and human-robot interaction (HRI), the BMC framework, and learning and education (as depicted in Figure 3). Through the current research, we aimed to: (a) develop responsive robotics education through the Business Model Canvas (BMC) to engage all stakeholders in the robotic intervention process with ASD students, (b) create the ASD student-teacher-social robot interactions triad framework by conducting HRI field experiments with ASD students in public schools, employing the BMC and the customer discovery process, and (c) investigate how educational-social robotic interventions, specifically involving humanoid robots, contribute to the progress of high school students diagnosed with ASD and other learning/cognitive disabilities. The research involved the active participation of various stakeholders such as ASD students, teachers collaborating with robots, parents, school technology heads, and robotics company professionals. Future research directions As previously mentioned, our research aims to address two of the 2030 Sustainable Development Goals (SDGs): SDG 3 focuses on good health and wellbeing (ensuring healthy lives and wellbeing at all ages), and SDG 4 centers on ensuring inclusive and equitable quality education as well as promoting lifelong learning opportunities for all. There is a significant lack of awareness and understanding regarding the SDGs within the robotics community and among decision-makers. This knowledge gap is an obstacle to leveraging the contributions of robotics and AI towards achieving the SDGs (Mai et al., 2022). To overcome this challenge, future researchers should prioritize the integration of the SDGs with robotics. Additionally, there should be a stronger emphasis on interdisciplinary, human-centered, systemic thinking to highlight the benefits and relevance of social robots and robotic interventions in the context of the SDGs (Mai et al., 2022). 
We acknowledge the modest sample size utilized in our study, given the constraints of the novel nature of our research question, the absence of prior research in this domain, and the contextualization of our in-depth customer discovery interviews to a specific field of social robotics and HRI. Despite addressing a timely and relevant issue related to HRI, due to the modest sample size, we note that our findings must be considered preliminary and not extrapolated beyond our research setting. However, our findings do provide valuable insights that advance both knowledge and practice. Furthermore, our results serve as a strong foundation for subsequent research employing larger sample sizes and examining diverse application scenarios. Future research efforts can further validate our findings and delineate the boundary conditions governing them. Preparing ASD students for the future is a challenging endeavor. Schools and universities are working with ASD students; however, the current effort is insufficient (Engwall et al., 2023). To address this gap, there is a growing need for more technological support (through robotics) to facilitate the development of SEL and other essential life skills like critical thinking, problem-solving, decision-making, and creative solutions. Integrating these SEL and life skills into our current educational landscape is a complex undertaking. Such integration may be achieved through HRI field experiments and by building curriculum-related robotic intervention scenarios focused on the life skills needed to excel in the future. Furthermore, it is unclear whether social robots forge stronger or weaker ASD student-teacher relationships. Jackson et al. (2020) predicted stronger relationships between humans and robots where interhuman differences based on race and religion are not relevant. On the other hand, some studies show weakened interhuman relationships because humans (i.e., teachers collaborating with robots) can potentially be dehumanized (Herak et al., 2020). Future researchers should investigate the ASD student-teacher-social robot interactions triad framework provided in our study and its implications for the relationships involved. One major limitation of our study is that, within our ASD student-teacher-social robot interactions triad framework, we primarily examined interactions between ASD students and social robots, as well as between teachers and social robots, in individual settings. We did not explore the group dynamics of decision-making within these interactions. Future researchers should investigate a broader range of research contexts. 
In our research, we utilized social robots deployed by the school system (i.e., a business context). However, it is important to acknowledge that in different settings, such as when robots are deployed directly by families of ASD students (a consumer context), the outcome may be different. Further, our research focused on external stakeholders. Future researchers can concentrate on robotic companies and their influence on the education sector to further advance the research enterprise. Similarly, future research should delineate the usage of social robots for neurodivergent and neurotypical employees in organizations and how social robots can impact human capital and corporate culture (van Doorn et al., 2023). Furthermore, we did not investigate our framework for its relevance to robotic companies' suppliers, competitors, and policymakers. Future research may explore complex configurations of our ASD student-teacher-social robot interactions triad framework in diverse research contexts for different stakeholders. We hope that our research propositions hold promise for advancing research and practice in the social robotics and HRI domains, and for using robotic technology to address learning disabilities in the digital age. Such progress is both timely and relevant to create a positive impact on society.
FIGURE 1: Relationships within the triad [ASD student-teacher (co-working with social robot)-social robot] framework (adapted from van Doorn et al., 2023).
TABLE 1: Business model canvas (BMC) framework signifying the triad framework; the impact of this research extends beyond the K-12 school system to robotic companies.
TABLE 3: Research propositions within the ASD student-teacher (co-working with social robot)-social robotic interactions triad framework.
FIGURE 3: Current research: the intersection of social robotics and HRI, BMC, and learning and education.
12,846.4
2024-04-24T00:00:00.000
[ "Education", "Psychology", "Computer Science", "Engineering" ]
A Corrosion Sensor for Monitoring the Early-Stage Environmental Corrosion of A36 Carbon Steel An innovative prototype sensor containing A36 carbon steel as a capacitor was explored to monitor early-stage corrosion. The sensor detected the changes of the surface- rather than the bulk- property and morphology of A36 during corrosion. Thus it was more sensitive than the conventional electrical resistance corrosion sensors. After being soaked in an aerated 0.2 M NaCl solution, the sensor’s normalized electrical resistance (R/R0) decreased continuously from 1.0 to 0.74 with the extent of corrosion. Meanwhile, the sensor’s normalized capacitance (C/C0) increased continuously from 1.0 to 1.46. X-ray diffraction result indicates that the iron rust on A36 had crystals of lepidocrocite and magnetite. Introduction Corrosion is a destructive attack on a metal such as carbon steel, aluminum, zinc and copper by chemical or electrochemical reactions with its environment [1]. It is a spontaneous process. If corrosion is not monitored and correctly fixed, it could threaten public welfares and people's lives [2]. Among various causes of corrosion, environmental factors are the most common ones because of ubiquitousness. Practically all environments are corrosive to some degree [3]. Some examples are air and moisture; fresh, distilled, salt, and mine waters; steam and other gases such as chlorine, ammonia, hydrogen sulfide, and fuel gases; mineral and organic acids [3,4]. Among these environmental factors, chloride is an important one. It is well known that chloride ions can cause passive layer breakdown and corrosion of metals [5,6]. Structures can be exposed to chloride ions through various means including deicing salts, fresh water, and a marine environment [7]. Manual inspection of corrosion is costly, low efficient, subjective and sometimes dangerous. It typically requires a large amount of time for professionals to travel and inspect each site. Especially when there are difficult-to-access or completely inaccessible areas, manual inspections are almost impossible. As a result, it is highly desirable to use corrosion sensors for automatic data collection, processing, and evaluation. Compared to manual inspections, automatic monitoring by corrosion sensors has significant advantages, such as promptness, comprehensiveness and efficiency. In addition, electrical signals from corrosion sensors are much easier to transmit, analyze and store than manual methods. Conventional corrosion sensors are typically based on the mechanism of an increase in electrical resistance of iron with the degree of corrosion [8,9]. However, a lengthy response time is required to register a significant change in corrosion rate [10], because the percentage of the thickness change of the sensor has to be noteworthy. Detection of the early-stage environmental corrosion is of critical importance to maintain the integrity and the safety of structures and systems, because the corrosion can be a self-accelerating process when no corrosion inhibitors are present [11,12]. As a result, a prototype corrosion sensor has been explored in this study, in order to find the sensitive and systematic change in electrical properties of a metal surface (e.g., A36 carbon steel as investigated in this study) during the early-stage corrosion as it is exposed to a corrosive environment. A36 carbon steel is commonly used in steel bridges and other structures [13,14]. 
The definition of the early-stage corrosion is the mass change of the sensor is within 0.2% (or iron loss per exposed surface area is within 187.7 g/m 2 ) as explored in this study. The prototype sensor was essentially a capacitor composed of two parts: (i) the same metal with the same passivation/coating as the metallic structure or the system to be monitored; (ii) a corrosion-resistant conductor. The two parts were separated by air, the same corrosion environment of the structure or the system in service. As a result, the corrosion of the sensor represents the extent of corrosion of the structure/system being monitored. During the course of corrosion and degradation, there is a rapid change of the surface morphology and the property of the metal, rather than a slow change of the bulk electrical resistance measured by the conventional corrosion sensors. Therefore, the degree of corrosion can be sensitively reflected by the systematic change of the capacitance and the resistance readings from the prototype sensor. To our best knowledge, this type of corrosion sensor based on surface electrical resistance and capacitance measurements has not been reported yet. In practice, the sensor can be connected to a wired or wireless network for automatic data acquisition, processing and storage. Multiple sensors can be deployed at varied locations of a structure to provide a comprehensive monitoring network without the need for a site visitation. Thus it is more efficient and cost-saving. This can make it possible to remotely monitor the extent of corrosion of a structure or a system that the sensors are attached to. Figures S1 and S2 in the Supporting Information show an example of the installed corrosion sensors from this study on a steel bridge, the data acquisition and the monitoring system. The sensors can be combined with conventional bulk electrical resistance sensors, which are not sensitive to early-stage corrosion. Thus a monitoring system is formed to examine corrosion of varied stages. The main objective of this study was to develop a sensor system that could sensitively determine the degree of the early-stage corrosion of steel and steel structures during service; thereby, giving a chance to estimate the integrity of the infrastructures and to apply corrosion-control measures timely, so that a catastrophic failure could be prevented. Iron Loss during Corrosion A new cylindrical corrosion sensor consisted of a rust-free A36 carbon steel rod in the centre and the 316 stainless steel ring (see the section of Experimental Procedures). Two electrical wires were connected to them respectively. During the corrosion test, rust was visually observed on the A36 steel as early as 2 h of exposure to an aerated 0.2 M NaCl solution. In the meantime, the NaCl solution turned yellowish in colour with suspended small rust particles. After accumulated 225.5 h in an aerated 0.2 M NaCl solution, rust was very apparent and had covered a large surface area of the A36 steel rod. However, as expected, no visible corrosion was observed on the 316 stainless steel ring or the stainless steel reference sensor. During the corrosion process, yellowish iron rust continuously released from the sensor to the NaCl solution. The amount of iron in the solution was quantified by Atomic Absorption Spectroscopy (AAS) after dissolution of the rust with 10% (v/v) nitric acid. The results indicate that 0.16-0.81 mg/day of iron was released from the sensor to the solution. 
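As a quick worked example of how the reported iron release translates into an area-normalized corrosion rate, the short sketch below (a minimal illustration, not part of the original analysis) divides the daily iron release by the exposed A36 surface area of about 2.66 cm² given later in the Experimental Procedures; the resulting values are close to the 0.60-3.02 g/(m²·day) range reported in the next paragraph.

```python
# Convert the measured iron release (mg/day) into an area-normalized corrosion rate,
# assuming the ~2.66 cm^2 exposed A36 area reported in the Experimental Procedures.
exposed_area_m2 = 2.66e-4            # 2.66 cm^2 expressed in m^2

def corrosion_rate(release_mg_per_day: float) -> float:
    """Corrosion rate in g/(m^2 * day) from an iron release in mg/day."""
    return (release_mg_per_day * 1e-3) / exposed_area_m2

for r in (0.16, 0.81):               # measured daily release range, mg/day
    print(f"{r} mg/day -> {corrosion_rate(r):.2f} g/(m^2*day)")
# prints roughly 0.60 and 3.05 g/(m^2*day), consistent with the range reported below
```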
As shown in Figure 1, at the end of the test of 225.5 h, 3.24 mg of the accumulated iron was in the solution. The corresponding corrosion rate varied between 0.60 and 3.02 g/(m²·day). It should be noted that there was rust on the A36 steel surface in addition to the amount found in the NaCl solution. The rust was intentionally not cleaned from the sensor, in order to maintain its natural condition during the corrosion process. As a result, the overall iron loss and corrosion rate should be greater than those found in the solution as shown in Figure 1. The varied corrosion rate in this study is likely due to different amounts of rust spalling from the sensor from time to time, which can affect the quantity of iron in the solution significantly. However, for the control test of the 316 stainless steel ring alone or the stainless steel reference sensor, the dissolved iron was less than 0.02 mg at the end of 225.5 h of corrosion, indicating that the corrosion was insignificant. Despite the apparent corrosion of the A36 steel rod indicated by AAS measurements and visual observations, the mass of the sensor remained almost constant at 25.20 ± 0.05 g (i.e., the variation was within ±0.2%) throughout the test. This result is consistent with Figure 1, in which the iron loss in the solution was within the mg range. Based on the reactions (1) to (9), the loss of iron (Fe) from the sensor can be compensated by gains of oxygen and hydrogen atoms in the rust. As a result, mass, as a bulk parameter, is not a sensitive measure of corrosion. On the other hand, the minor change of the mass suggests early-stage corrosion. The air gap distance between the A36 steel rod and the 316 stainless steel ring of the sensor did not increase significantly, which is important evidence for explaining the electrical resistance and capacitance changes of the sensor during corrosion in later discussions. Compositions of the Rust The corrosion of carbon steel can occur as an electrochemical reaction, with one anodic reaction and one cathodic reaction [3,15]. The anodic reaction usually occurs as:

Fe → Fe²⁺ + 2e⁻ (1)

The cathodic reaction, however, can be different depending on what is in the environment. This cathodic reaction is the main factor that influences the rate of corrosion [3,15]. In an aerated solution it is most likely:

O₂ + 2H₂O + 4e⁻ → 4OH⁻ (2)

Combining the cathodic and anodic reactions gives:

2Fe + O₂ + 2H₂O → 2Fe(OH)₂ (3)

As a result, ferrous hydroxide precipitates from the solution. However, dissolved oxygen can oxidize ferrous hydroxide to ferric hydroxide:

4Fe(OH)₂ + O₂ + 2H₂O → 4Fe(OH)₃ (4)

Yellowish/brownish rust was observed at the A36 carbon steel surface and in the solution. In addition, black rust was also seen on the A36 steel underneath the yellowish/brownish rust, which is likely magnetite (Fe 3 O 4 ). The formation of magnetite is due to iron not having enough oxygen present for the reaction [16]. The following additional reactions may occur involving oxidation of iron and producing rust at the A36 carbon steel surface [15,17,18]. To examine the crystalline compositions of the iron rust on the A36 steel surface, x-ray diffraction (XRD) analysis was performed with CuKα radiation. Figure 2a shows the XRD spectrum of uncorroded A36 steel. Figure 2b shows the XRD spectrum of the corroded A36 steel soaked in an aerated 0.2 M NaCl solution for 225.5 h. Three types of crystalline substances were found on the corroded steel surface according to their characteristic diffraction patterns [19,20], which were 1: iron, 2: lepidocrocite, and 3: magnetite.
Consistently, it has been reported that the rust formed on steel surface is a mixture of lepidocrocite (γ-FeOOH), magnetite (Fe 3 O 4 ), hematite (α-Fe 2 O 3 ), goethite (α-FeOOH), and amorphous iron oxide [21][22][23][24], although only the first two were found in this study. To make the study more general, the resistivity and the dielectric constant of the common rust materials, along with iron and air are listed in Table 1. Figure 3 shows the electrical resistance of the sensor in parallel (i.e., the built-in measurement mode of a resistance, capacitance and inductance (RCL) meter) with the extension of corrosion time in the NaCl solution. In Figure 3, during the course of corrosion, the resistance gradually decreased following an apparent trend. At the end of the corrosion test of 225.5 h, the normalized resistance (R/R 0 ) decreased from 1.0 to 0.74, a decline of 26%. The electrical resistance for a coaxial cylinder can be expressed by the following equation. (10) where R = electrical resistance of a material (Ω); ρ = electrical resistivity of the material (Ω·m); h = height of the cylinder sensor (m); b = inner radius of the 316 stainless steel ring (m); a = radius of the A36 steel rod (m). Electrical Resistance of the Sensor After corrosion, the multi-material resistor (iron rust on A36 steel surface and the air gap) can be treated as resistance in-series to calculate the overall resistance of the sensor, (11) where = equivalent electrical resistivity of the porous iron rust on the A36 steel rod surface (Ω·m); = electrical resistivity of the air gap between the iron rust on the A36 steel rod surface and the 316 stainless steel ring (Ω·m); = average thickness of the iron rust on the A36 steel rod surface (m); = average porosity of the iron rust on the A36 steel rod surface. The resistance of a new sensor (before corrosion) is mainly due to the air gap between the central cylindrical A36 steel rod and the 316 stainless steel ring (see the section of Experimental Procedures). Air has an electrical resistivity of 4 × 10 13 Ω·m [31]. This is different from the conventional corrosion sensors, which measure the bulk electrical resistance of the metal. When corrosion of the steel rod happened, rust formed a porous and loose structure [22] extended from the A36 steel surface to the surrounding air. As a result, the rust took a partial space that was previously occupied by air. In other words, after corrosion the gap between the A36 steel rod and the 316 stainless steel ring was partially filled with porous rust (small portion) and air (big portion). Equation (11) shows the overall resistance of the sensor as a multi-material resistor (iron rust and air gap). As mentioned earlier, the iron rust is a mixture of lepidocrocite (γ-FeOOH) and magnetite (Fe 3 O 4 ) as shown in Figure 2b, along with reported hematite (α-Fe 2 O 3 ), goethite (α-FeOOH) and amorphous iron oxide [21][22][23][24]. Their electrical resistivity is shown in Table 1. As can be seen, the electrical resistivity of the rust components is at least eight orders of magnitude lower than air. According to Equation (11), a lower electrical resistivity has smaller resistance. Consequently, the electrical resistance decreases with the time or the extent of corrosion. Different from conventional corrosion sensors, which detect an increase in the bulk electrical resistance of iron with corrosion [8,9], the cylindrical sensor explored in this study examined the A36 surface property and morphology changes. 
Thus it is more sensitive to monitor the early-stage corrosion. In addition, the sensitivity of the sensor can be fine-tuned by optimizing the values of a and b. The closer the values of a and b, the greater sensitivity of the corrosion sensor is expected to have. Although spalling of rust from the sensor could enlarge the air gap between the A36 steel rod and the 316 stainless steel ring, the insignificant mass-loss result (i.e., mass change ≤0.2% or 187.7 g/m 2 ) indicates this was not the case during the testing period of early-stage corrosion. However, if significant spalling of rust happens (i.e., becomes much small) and thus the air gap between the A36 steel rod and the 316 stainless steel ring increases, the trend of electrical resistivity can be reversed (i.e., electrical resistance increases with time instead), suggesting much severe corrosion, which can be regarded as middle-or late-stage corrosion. This turning point can be used to rank the risk level of corrosion. In contrast, the electrical resistance of the stainless steel reference sensor was stable, as shown in Figure 3. In practice, the reference sensors can be used to normalize the baseline signal of electrical resistances including the effects of air moisture and temperature, in order to distinguish the electrical resistance changes caused by rust formation on the steel surface. Capacitance of the Sensor In addition to the electrical resistances, the capacitance of the sensor during corrosion was also examined. Again, the capacitance was from the built-in parallel measurement mode of the RCL meter. Figure 4 shows the change of the capacitance vs. accumulated time of corrosion in an aerated 0.2 M NaCl solution. A positive trend of the capacitance with the extent of corrosion was observed. More specifically, the normalized capacitance (C/C 0 ) increased from 1.0 at the beginning to 1.46 after 225.5 h. In other words, the capacitance had an increase of 46%. The capacitance of an infinite cylindrical sensor, neglecting the fringing effect, can be calculated from the following equation [32] (see the Supporting Information of a reference equation of the capacitance with fringing effect considered), (12) where C = capacitance (pF); = free space permittivity, 8.85 pF/m; h = height of the cylinder sensor (m); = dielectric constant of the material(s) between the A36 steel rod and the 316 stainless steel ring. Before corrosion, air was between the A36 steel rod and the 316 stainless steel ring. Air has a dielectric constant of 1 [31]. After corrosion, rust was formed at the A36 steel surface. It means the space between the A36 steel rod and the 316 stainless steel ring was filled with both rust and air, although rust has a much smaller volume than air in this case. The multi-material capacitor (iron rust on A36 steel surface and the air gap) can be treated as capacitors-in-series to calculate the overall capacitance of the sensor, where = equivalent dielectric constant of the porous iron rust on the A36 steel rod surface; = dielectric constant of the air gap between the iron rust on the A36 steel rod surface and the 316 stainless steel ring. As corrosion takes place, iron rust gradually grows on the steel surface (i.e., increase in from zero initially). As shown in Table 1, the dielectric constant of the substances composing the iron rust ranges from 2.6 to 20, much greater than air. Equation (13) shows the overall capacitance C is proportional to . 
An increase in and/or raises the capacitance of the corrosion sensor, although further sensitivity analysis of Equation (13) indicates that C is much more sensitive to than . However, spalling of the rust from the A36 steel and thus a decrease in was insignificant during the early-stage corrosion, because of the measured almost constant mass of the sensor during the test. Consequently, a higher capacitance reading reflects more rust formation at the A36 steel surface of the sensor during the early-stage corrosion. However, if spalling of the rust is significant enough (i.e., becomes much small), a decrease in capacitance would indicate severe corrosion, which can be regarded as middle-or late-stage corrosion. Same as the resistance, this turning point of capacitance trend (i.e., decrease in capacitance with time instead) can be used to rank the risk level of corrosion. Again, the reference sensor had little change in capacitance with corrosion time as shown in Figure 4. The reference sensor can be used to normalize the environmental factors (e.g., air moisture and temperature) caused capacitance changes other than corrosion. The configuration of a short cylindrical sensor (height is comparable to diameter/gap) was designed intentionally to enhance the sensitivity to the change of surface property. In other words, the capacitor of shorter height is more sensitive to the changes of the dielectric constant ε and the surface morphology of the sensor during the course of corrosion, because of greater specific surface area. Although Equation (13) does not consider the effect of fringe field, which lacks of a reliable equation to quantify, it does describe the fundamental relationship between capacitance C and the dielectric constant and the average thickness of the rust layer , which is verified by the experimental results from this study. During corrosion monitoring of a structure, the empirical equations can be developed based on laboratorial tests in an environmental chamber through multiple regressions, i.e., the differences in resistance and capacitance readings between the corrosion and the reference sensors as a function of the parameters including the extent of corrosion (iron loss), temperature and moisture level. The equations are used to interpret the resistance and capacitance data from the site along with the site's temperature and moisture information. As a result, the extent of corrosion can be determined by plugging the site's temperature and moisture data in the equation and then solving the extent of corrosion numerically. In addition, depending on the specific requirement of a monitoring site, a feature of periodic sleep and wakeup time can be adopted to save energy and cost. In order to mitigate potential fouling problem of the sensors on site, paired corrosion and reference sensors are installed side-by-side with the identical coating/passivation in order to minimize the uncertainty brought by location-dependent fouling issue caused by such as particles and debris. Special cares are taken to ensure the orientation of the sensors is not prone to accumulate dust; while rainfall and snow melting can help clean the sensors. After normalizing the capacitance of both of the corrosion and the reference sensors, the systematic differences in resistance and capacitance readings between the corrosion and the reference sensors are expected to reflect the extent of corrosion. In addition, sufficient paired corrosion and reference sensors are attached to the structure being monitored. 
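To make the geometric reasoning behind Equations (10)-(13) concrete, the following sketch evaluates the ideal coaxial-shell resistance, R = ρ ln(b/a)/(2πh), and capacitance, C = 2πε₀εh/ln(b/a), treating a thin rust layer in series with the remaining air gap. The sensor dimensions are taken from the Experimental Procedures; the rust thickness, rust resistivity and rust dielectric constant are illustrative assumptions (the text states only that the rust resistivity is many orders of magnitude below that of air and that its dielectric constant lies between 2.6 and 20). Because the idealized formulas neglect fringing, surface moisture and the meter's behaviour, they will not reproduce the measured R₀ and C₀; they are intended only to illustrate why the resistance falls and the capacitance rises as rust grows.

```python
import math

# Sensor geometry from the Experimental Procedures (metres)
a, b, h = 0.0127 / 2, 0.0222 / 2, 0.0064   # rod radius, ring inner radius, height

eps0 = 8.85          # pF/m, free-space permittivity as used in Eq. (12)
rho_air = 4e13       # Ohm*m, resistivity of air quoted in the text
rho_rust = 1e4       # Ohm*m, illustrative value for porous rust (assumption)
k_air, k_rust = 1.0, 10.0   # dielectric constants; 10 is an assumed mid-range rust value
d_rust = 50e-6       # assumed average rust thickness (m)

def shell_R(rho, r1, r2):
    """Radial resistance of a coaxial shell, in the form of Eq. (10)."""
    return rho * math.log(r2 / r1) / (2 * math.pi * h)

def shell_C(k, r1, r2):
    """Capacitance (pF) of a coaxial shell, in the form of Eq. (12)."""
    return 2 * math.pi * eps0 * k * h / math.log(r2 / r1)

# Before corrosion: the whole gap is air
R0, C0 = shell_R(rho_air, a, b), shell_C(k_air, a, b)

# After some corrosion: a thin rust shell in series with the remaining air gap
R = shell_R(rho_rust, a, a + d_rust) + shell_R(rho_air, a + d_rust, b)
C = 1.0 / (1.0 / shell_C(k_rust, a, a + d_rust) + 1.0 / shell_C(k_air, a + d_rust, b))

print(R / R0, C / C0)   # R/R0 < 1 and C/C0 > 1, matching the measured trends
```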
Statistical analysis is an important part of the corrosion monitoring network. The averaged differences in resistance and capacitance readings between the corrosion and the reference sensors can further minimize the uncertainty caused by fouling problem. Cylindrical Capacitor ASTM A36 steel was used in this study. It contains at least 99.05% of Fe, and max 0.26% C, max 0.04% P, max 0.05% S, max 0.40% Si, and max 0.20% Cu (by wt) [33]. As shown in Figure 5, a cylindrical capacitor was created with the inner cylinder made from A36 carbon steel rod and the outer ring made from 316 stainless steel. Each of them had a height of 0.64 cm. The inner cylinder had a diameter of 1.27 cm while the outer ring had an outer diameter of 2.64 and inner diameter of 2.22 cm. The A36 carbon steel was polished by a 3M ® 80 grit then a 3M ® 600 grit sandpaper. Two wires were attached to the capacitor, one was soldered to the base of the center A36 steel rod and the other was welded to the base of the outer ring of the 316 stainless steel. A bridge made of a glass substrate epoxy resin insulator from the circuit board (BM-FR4-1SS2, T-Tech Inc., Norcross, GA, USA) was adhered on the bottom of both the inner cylinder and the ring through a waterproof epoxy (15206 Anchor-Tite, Super Glue Corporation, Rancho Cucamonga, CA, USA) to fix their relative positions. Finally, the waterproof epoxy was used to cover the connections of these wires as well as the base of the sensor to prevent corrosion of the connections and the wires, as well as to insulate the sensor from the infrastructure to be monitored. The uncovered A36 steel had a surface area of about 2.66 cm 2 , which was subject to corrosion. Similarly, a reference sensor was made following the same procedures and dimensions except replacing the A36 carbon steel rod with a 316 stainless steel rod of the same dimensions, and then being welded to connect to the circuit-board bridge. The connections including welding points of the reference sensor were coated with the waterproof epoxy to prevent corrosion. The reference sensor was served to draw baseline information by addressing environmental conditions such as the temperature and the moisture level of air other than corrosion. Corrosion Test A 500-mL 0.2 M sodium chloride solution at 20 ± 1 °C was used for corrosion testing. An air pump (Aqua Culture ® , China) with a flow rate of ~1.2 L/min was continuously bubbling air through a diffuser to the solution to provide oxygen for the corrosion process ( Figure 6). The air diffuser was porous sandstone. The dissolved oxygen level of the NaCl solution was maintained at around 8.8 mg/L. The corrosion sensor was submerged in the sodium chloride solution above the air diffuser. Every day during the course of the test, the sensor was removed from the solution, rinsed with DI water, dried at room temperature for 2-3 h, and then tested with an automatic RCL meter (PM6303A, Fluke Corporation, Everett, WA, USA). At each measurement, multiple readings from the RCL meter with time were recorded until stable readings obtained, indicating the senor was dried at equilibrium with air moisture. As a result, the sensor experienced periodically wet/dry cycles twice a day for total 11 days. A new 0.2 M sodium chloride solution was freshly made daily. The accumulated corrosion time of the senor in the sodium chloride solution was 225.5 h. 
As a control test, a 316 stainless steel ring and the reference sensor was soaked in an aerated 500-mL 0.2 M sodium chloride solution separately to investigate the degree of corrosion of the 316 stainless steel and the reference sensor, respectively. Sensor Measurements The RCL meter was used to measure the resistance and the capacitance of the prototype corrosion sensors or the reference sensors. The readings were obtained in parallel mode, which was one of the built-in functions of the meter. The meter used an AC power with frequency of 1 kHz and a voltage of 1.9 V for the measurements to minimize electrolysis and electrode polarization problems brought by a DC power. With an increase in AC frequency, the interval between successive anodic and cathodic half-cycles becomes progressively shorter [34]. Thus the electrolytic reactions do not have enough time to complete, and it even increases the potential to reverse the reactions of the immediate prior cycle. When the power frequency is high enough (e.g., 1 kHz [35]), no electrolysis of AC was observed, because all current would pass via the double layer of the electrodes [34,36]. The RCL meter took five measurements: a reference reading, a voltage reading at the phase angle of 0° and 90°, and a current reading at the phase angle of 0° and 90°, it then calculated the values for resistance and capacitance (which could be in either series and parallel) based on this model. It was decided to use the parallel mode of this meter as the focus of these measurements is on a capacitor, so all the measurements in this study were found using the parallel mode. In real time monitoring of an infrastructure, instead of using a RCL meter, the corrosion sensor data acquisition algorithm, coded in VbScript, running under a Windows XP embedded computer, produced one data file for each sensor at 30 min interval (see Supporting Information). In addition, the mass of the dried sensor was examined by a digital balance (ESA-3000, Brecknell, Fairmont, MN, USA) with a precision of 0.05 g. The initial weight of the prototype corrosion sensor was 25.20 g. For the prototype corrosion sensor, the initial resistance (R 0 ) was 48.63 × 10 6 Ω; the initial capacitance (C 0 ) was 7.8 pF before corrosion test. For the stainless steel reference sensor, the initial resistance was 37.66 × 10 6 Ω; the initial capacitance (C 0 ) was 24.9 pF. The difference of the initial readings between the corrosion and the reference sensor is likely due to welding connection to 316 stainless steel rod (rather than soldering to A36 carbon steel), because soldering is not feasible for stainless steel. Sample Analyses After the corrosion sensor was removed from the NaCl solution, the daily solution samples were acidified with ACS grade nitric acid from Mallinckrodt to 10% (v/v) to dissolve the iron rust in the solution. AAS (AAnalyst 200, PerkinElmer, Waltham, MA, USA) was utilized to measure the total dissolved iron in the solution. The crystalline substances on both of the corroded and uncorroded A36 steel surface were examined by XRD with CuKα radiation (APD3520, Philips, Amsterdam, The Netherlands). For the uncorroded sample, a piece of the A36 steel of 1.27 cm diameter and 0.2 cm thickness was polished by a 3M ® 80 grit then a 3M ® 600 grit sandpaper. The steel sample was cleaned by blowing thoroughly with compressed nitrogen. 
For the corroded sample, the same polishing procedures were followed as the uncorroded sample, then the A36 steel sample was submerged in an aerated 500-mL 0.2 M sodium chloride solution for 225.5 h. The corroded steel sample was rinsed with DI water and dried in air for XRD examination. Conclusions Automatic detection of the early-stage corrosion is highly important to find the potential problem and apply corrosion control techniques timely for safety and integrity concerns. This study explored an innovative cylindrical corrosion sensor made of A36 carbon steel (representing the material of a structure or a system to be monitored for corrosion) and a 316 stainless steel ring (representing an inert material of low corrosion potential). A capacitor was formed with both conductors separated by air. The sensor is more sensitive than the conventional corrosion sensors based on the bulk electrical-resistance method. After corrosion in an aerated 0.2 M NaCl solution for 225.5 h, the cylindrical corrosion sensor has shown a systematic decrease in the normalized electrical resistance (R/R 0 ) from 1.0 to 0.74. Meanwhile, the normalized capacitance (C/C 0 ) of the sensor increased from 1.0 to 1.46. However, the weight change of the sensor was within 0.2% (or 187.7 g/m 2 ), an indication of the early-stage corrosion. In the same time, the reference senor, which was not subject to corrosion apparently, showed a stable normalized reading around 1.0. XRD result shows that the rust contained lepidocrocite and magnetite. By attaching the paired corrosion and reference sensors with the identical passivation/coating to a steel structure in air, the extent of corrosion of the structure can be directly reflected by the electrical resistance decrease and/or capacitance increase of the sensor with time during the early-stage corrosion.
6,722.8
2014-08-01T00:00:00.000
[ "Materials Science" ]
Feature extraction and identification of gas–liquid two-phase flow based on fractal theory Due to the gas–liquid two-phase flow system with nonlinear characteristics, the fractal theory has a significant impact on nonlinear analysis, so the paper proposes applying the fractal theory to characterize the fractal characteristics of two-phase flow. Firstly, it performs the mathematical morphology fractal dimension (MMFD) to analyse the fractal dimension of typical signals, and the box-counting dimension is employed as a comparison. The results indicate that the MMFD has better accuracy in estimating the fractal dimension of the typical signals. The MMFD can reflect the complexity and nonlinear of a chaotic system; Finally, it applies the MMFD to extract features and analyse two-phase flow characteristics. The experimental results show that the MMFD can effectively identify signals of different flow patterns, especially the transitional flow pattern, and reflect the complexity of gas–liquid two phases. Introduction The two-phase flow system widely exists in industrial production processes such as petroleum, chemical industry, nuclear power and metallurgy. Moreover, it seriously affects the safety, energy-saving and environmental protection of industrial production through heat and mass transfer rate, momentum loss, pressure gradient and other parameters. However, its production process involves physical and chemical reactions, the conversion and transfer of substances and energy, which leads to the problems that the process parameters are difficult to be detected. In a gas-liquid two-phase flow system, the flow patterns and dynamic characteristics of the two-phase flow are closely related to its system parameters (Zhou, Yunlong, 2010;Xiaolei et al., 2020). It can measure and control the system by identifying the flow patterns of the two-phase flow. Therefore, it can optimize the pipeline design to ensure the safety of industrial production. The gas-liquid interface of the gas-liquid two-phase flow is randomly variable, and the flow shape of the two-phase flow system is complex and changeable, so we need to investigate its characteristics further to identify the flow patterns. The signal is a physical quantity representing the information. For example, electrical signals can express different information by changing amplitude, frequency, and phase (Meribout & Shehzad, 2020). Conductance CONTACT Chunling Fan<EMAIL_ADDRESS>fluctuation signal contains much information in the gas-liquid two-phase flow system. During the flow of gas-liquid flow in the pipeline, the flow pattern's conductance fluctuation exhibits nonlinear characteristics, so the feature extraction is essential for flow pattern recognition. Several signal processing techniques are widely used to extract certain system features, such as Fourier transform (Qiu et al., 2019;Sung et al., 2016), wavelet decomposition (Yan et al., 2018;Wang et al., 2018), and multi-scale complexity entropy causality plane (Dou et al., 2014). Meanwhile, many researchers apply the entropy theory and complex network features to characterize the twophase flow's complex characteristics. Gao (2020) developed a novel multiple entropy-based multilayer network (MEMN) for exploring the complex gas-liquid two-phase flow. The results show that the MEMN framework can effectively characterize the nonlinear evolution of the gas-liquid flow. 
Multivariate multi-scale weighted permutation entropy (MWMPE) can reflect the instability of oil-water two-phase flow and uncover the underlying evolution instability of the flow structures in oil-water flows (Han & Jin, 2018). Fan et al. ( 2018) proposed combining base-scale entropy with root mean square energy to analyse the gas-liquid two-phase flow. This method is a simple and straightforward strategy to extract the gas-liquid two-phase flow features and characterize the different flow patterns. Wavelet multiresolution complex network was applied to analyse multivariate nonlinear time series from oil-water two-phase flow experiments, and the results suggest that this method can characterize the nonlinear flow behaviour underlying the transitions of oil-water flows (Gao et al., 2017). Gao et al. (.,2016) inferred complex networks from multi-channel measurements in terms of phase lag index, aiming to uncover the phase dynamics governing the transition and evolution of different oil-in-water flow patterns. Although, in the field of two-phase flow we have made these achievements, we need to conduct more research deeply. Fractal theory is a science that studies the complexity of complex systems (J., C. & R, 2020; Razminia et al., 2019). Complex systems need to meet the characteristics of local and overall self-similarity. Therefore, we can study the complexity characteristics of gas-liquid two-phase flow pattern signals by the fractal theory. Zhou et al. (2017) employed the box-counting dimension algorithm theory to analyse multi-source information of gas-solid twophase flow. Their experiments show that it is conducive to deepen the understanding of the complicated flow behaviour in gas-solid two-phase systems, provided that the fractal characteristic of multi-source information can be correlated, and this theory is simple and convenient calculation. A novel relative permeability (RP) model for two-phase flow in fracture networks based on the fractal theory was proposed by Miao et al. (2018) This model can provide a better understanding of the fundamental mechanism of water-gas phase flow in fracture network. When the gas-liquid flow in the pipeline, the flow pattern's conductance fluctuation signal exhibits nonlinear characteristics. Therefore, the feature extraction is vital for identifying the flow pattern. Precisely, the fractal dimension is a significant parameter to describe complex and nonlinear systems, and the fractal theory has a significant effect on the process of nonlinear analysis. Therefore, this paper attempts to use mathematical morphology fractal dimension (MMFD) to study the characteristics of gas-liquid two-phase flow and achieve a good result. In this paper, it verifies the effectiveness of the MMFD, implements it to calculate the fractal parameters corresponding to different flow patterns of twophase flow and explores the characteristics of the conductance fluctuation signals of different flow patterns from the perspective of complexity. According to the analysis of experiment results, the MMFD can not only reveal the complexity and nonlinear characteristic but also distinguish different flow patterns of two-phase flow, especially the transitional flow patterns. Mathematical morphology fractal dimension theory The mathematical morphology fractal dimension (Maragos & Sun, 1993) is very flexible and direct to calculate area of signals. We can directly perform one-dimensional morphological dilations and erosions. 
This algorithm solves the shortcoming of a large amount of calculation caused by converting a one-dimensional signal f (t) to a two-dimensional signal K(f ). The specific calculation method as follows: Suppose that G is a compact support, g is a unit structure function which is defined on G satisfying Equation (1). We perform one-dimensional morphological dilations and erosions on f (t): x ∈ G}, ⊕ represents dilation operation and represents erosion operation. Defining the structure element function under scale ε: then the area A g (ε) covered by the morphology of the signal at scale ε is defined as: By referring to related literature (Maragos & Potamianos, 1999), the morphological coverage area obtained by performing two-dimensional dilations on K(f ) is equal to the morphological coverage area obtained from a onedimensional morphological erosions and dilations on f (t). Therefore, for a one-dimensional signal: Because the signals in the real situation are mostly discrete, it is necessary to discuss the calculation method of the MMFD in the case of discrete signals. Supposing that the discrete signal is f (n), n = 1, 2, · · · , N. The discrete scale is ε, and the unit structural element is defined as g. Then in the case of scale ε, the structural elements are defined: That is, the unit structural element g dilates ε times. Therefore, the erosions and dilations results of signal f (n) at scale ε are: The coverage area of the signal under scale ε is defined as: A g (ε) satisfies the following conditions: where D M is the fractal dimension obtained by MMFD,c is a constant, and ε max is the maximum scale. Therefore, the least squares fitting of log(A g (ε)/ε 2 ) and log(1/ε) can get the estimation of the MMFD. The fractal dimension of typical signal In this section, we take sine, cosine and Logistic mapping as examples to verify the effectiveness and accuracy of the MMFD. Figure 1 shows the modeling of typical signals. We regard time series as X(t), and employ the MMFD on X(t) to get fractal dimensions. The steps of the MMFD are shown in Table 1 First, we select g(1, 1, 1) and ε ∈ [1, 30]. X(t) performs erosion and dilation operations, as shown Equations (8) and (9), and we calculate log(1/ε) and log(A g (ε)/ε 2 ), Finally, we implement least-square fitting and obtain the fractal dimensions. Analyse the characteristic of periodic signals This section takes the X = sin(t) and Y = cos(t) as the experimental object. We take the sampling length Table 2 shows the calculation results of fractal dimension of the MMFD. Table 3 lists that the calculation results of fractal dimension of the box-counting dimension. We know that the real dimensions of the sine and cosine wave signal are one by consulting the literature (Wang, 2006). From Table 2, we can get that the fractal dimensions of the MMFD of sine and cosine signals are entirely equal to the real dimensions of sine and cosine signals. However, from Table 3, the box-counting dimension of sine is 1.0004. It can be seen that the calculation result of box-counting dimension is slightly higher than the real dimension 1, and the error from the real dimension is 0.04%. The grid dimension of the cosine wave signal is 0.9999. The box-counting dimension of the calculated cosine signal is slightly smaller than the true value 1, and the error from the true dimension is 0.01%. Therefore, we can concluded that the MMFD accurately analyses the periodic signal's fractal dimensions. 
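The procedure summarized in Table 1 can be written compactly in code. The sketch below is a minimal implementation assuming a flat unit structuring element, so that dilation and erosion at scale ε reduce to a moving maximum and minimum over a window of 2ε + 1 samples; D_M is then estimated as the slope of the least-squares fit of log(A_g(ε)/ε²) against log(1/ε), as described above. The exact value obtained depends on the sampling density and on the structuring element chosen.

```python
import numpy as np

def mmfd(x, eps_max=30):
    """Estimate the mathematical-morphology fractal dimension of a 1-D signal.

    A flat unit structuring element is assumed, so dilation/erosion at
    scale eps reduce to a moving maximum/minimum over 2*eps + 1 samples.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_eps, log_area = [], []
    for eps in range(1, eps_max + 1):
        dil = np.array([x[max(0, i - eps):i + eps + 1].max() for i in range(n)])
        ero = np.array([x[max(0, i - eps):i + eps + 1].min() for i in range(n)])
        area = np.sum(dil - ero)                     # morphological coverage area A_g(eps)
        log_inv_eps.append(np.log(1.0 / eps))
        log_area.append(np.log(area / eps ** 2))
    slope, _ = np.polyfit(log_inv_eps, log_area, 1)  # least-squares fit; slope estimates D_M
    return float(slope)

t = np.linspace(0, 2 * np.pi, 4000)
print(mmfd(np.sin(t)))   # expected to be close to 1 for a smooth, densely sampled sine wave
```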
Analyse the characteristic of chaotic systems This section we take typical Logistic mapping as an example to verify that the MMFD can analyse the complexity and nonlinear characteristics of chaotic systems broadly. The Logistic mapping equation can be expressed as Equation (12). where μ ∈ [2.8, 4], x(n) ∈ (0, 1). In the experiment, the generated data's length is 12,000, which is divided into 12 groups. Then we perform the MMFD on each group. The detailed steps of performing the MMFD can refer to Table 1 in section 3. The bifurcation diagram of the Logistic map is shown in Figure 2. The least-square fitting results of the fractal dimension of the MMFD are depicted in Figure 3. In Figure 2, we can see that when µ is from 2.8 to 3.1, each µ corresponds to only one value of x, and the range is called the fixed point or period-1 curve. When 3.1 < µ ≤ 3.5, the first double period bifurcation occurs from period-1 to period-2. When µ is from 3.5 to 3.6, the second double period bifurcation occurs, and the Logistic system lies in period-4. when µ is from 3.6 to 4, the third double period bifurcation occurs, from period-4 to period-8. The system works in a chaotic state. We can conclude from Figures 2 and 3 that when µ is from 2.8 to 4, as µ increases, the dimensions gradually increase. The larger the dimension of the system, the more complicated the system. When the chaotic system in period-1, the dimensions are the smallest, the dimension value is from 1.23479 to 1.23499, and the system complicated is smallest. When the chaotic system transforms into period-2, the dimension's value growth rate suddenly increases, the values enhance sharply from 1.23499 to 1.236. The complexity of the system is strengthened. When the chaotic system in period-8, corresponding serial number of Figure 3 is 8, the value of dimension is 1.23735. When the system works in a chaotic system, the dimensions of Logistics mapping increases bigger, the growth rate becomes a little faster, and the complexity of it becomes greater. According to the analysis, the method of the MMFD can characterize the complexity and dynamic characteristics of chaotic systems greatly. And the transition process of complexity can be indicated explicitly. Therefore, we can apply the MMFD to analyse the two-phase flow patterns' characteristics and evolution dynamics. Data collection and modeling In the experiment, the water pumped from the pool or the air sucked by the compressor, and they are well mixed through a mixer and flow into the test vertical pipeline. And then flows into the water pool. The velocity of the gas phase is measured by the gas rotameter, and the water phase flow rate is controlled by the Leif YZ35 peristaltic pump. The Schematic diagram of the experimental setups shows in Figure 4. The experimental scheme is to fix the gas phase velocity and gradually increase the water phase velocity. The flow rates of water and air are varied within 1-12 m 3 /h and within 0.1-140 m 3 /h in the experiment. The Modeling of gas-liquid two-phase flow is shown in Figure 5. First, we attain the conductivity signals under different working conditions through the acquisition system of conductivity signals of gas-liquid two-phase flow and regard it as X(t). The length of X(t) is 16,000, and we divide it into 16 groups; each group performs the MMFD operation. The detailed steps of conductance signal to perform the MMFD can refer to Table 1 in section 3. 
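As a usage illustration of the grouping scheme just described, the sketch below generates a Logistic-map series x(n+1) = μx(n)(1 − x(n)) in the chaotic regime, splits it into twelve equal-length groups, and computes one fractal-dimension estimate per group; the 16,000-sample conductance records are handled in the same way with sixteen groups. It assumes the mmfd() helper defined in the earlier sketch is available, and the initial value x0 is an arbitrary seed; the absolute dimension values depend on the structuring element and scale range, so they need not coincide with those in Figure 3.

```python
import numpy as np
# Assumes the mmfd() helper defined in the earlier sketch is available in the same session.

def logistic_series(mu, n, x0=0.3):
    """Iterate x(n+1) = mu * x(n) * (1 - x(n)) and return n samples."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = x
    return out

series = logistic_series(3.9, 12000)   # chaotic regime, 12,000 samples as in the text
groups = np.split(series, 12)          # twelve groups of 1,000 samples each
dims = [mmfd(g) for g in groups]       # one fractal-dimension estimate per group
print(dims)
# The 16,000-sample conductance records are processed the same way, split into 16 groups.
```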
Analysis of the characteristics of two-phase flow pattern The two-phase flow system is a nonlinear dynamic system with complex characteristic properties affected by many factors. At present, we mainly employ sensors to obtain conductance fluctuation signals of different flow patterns. When the water flow rate is 12 m 3 /h, the conductance fluctuation signals of the three typical flow patterns and two transitional flow patterns under different gas flow conditions are shown in Figure 6. The bubble flow usually occurs at low airflow speeds, its signal resembles a random signal, and the signal amplitude is very low. Due to the liquid flow instability, the twophase flow's conductance fluctuation signal has intermittent peaks and high amplitude. With the gas phase rate increases, the bubble transforms into bubble-slug. The randomness of the system is decreased. As to slug flow, it exhibits periodic behaviour. The slug-churn flow is the transitional flow pattern of slug flow and churn flow. As to the churns flow, when the gas plugs and liquid plugs rise in the tube, because of the gravity, the liquid plugs fall and collide with the incoming flows of the next moment. It vibrates alternately upward and downward in the pipe, exhibiting the irregularity and chaotic characteristics of conductance signals, similar to bubble flow patterns. However, the churn flow has a higher amplitude and weaker randomness than the bubble flow. The MMFD analysis of two-phase flow patterns To further investigate the characteristics and distribution of flow patterns under different operating conditions, the distribution graphs of the MMFD are obtained when the water flow are 2, 4, 6, 8, 12 m 3 /h as shown in Figure 7. Figure 7 shows that with the increase of the gas flow rate, the dimension values increase accordingly when the flow pattern is bubble-slug flow. The complicated of the two-phase flow is strengthened. Next, the flow pattern transforms into the slug flow, its dimension values are biggest. This phenomenon indicates that the complicated of the two-phase flow system are the most complex. While the slug flow transforms into slug-churn flow, the values of dimensions become smaller, and the complexity of two-phase flow decreases. When the flow pattern is churn flow, the two-phase flow system motion behaviour's randomness becomes weaker. For the three typical flow patterns: the bubble flow, slug flow and churn flow, the slug flow has the largest dimension values, which indicates that it is more periodic. In comparison, the bubble flow's dimension values are smallest, which shows that its motions are the most random. The churn flow's dimension values are slightly more prominent than the bubble flow, which indicates that its randomness is weaker than bubble flow. According to the analysis, there are apparent boundaries of different flow patterns, especially the transitional flow patterns. Therefore, MMFD can distinguish different flow patterns. Meanwhile, this method can reflect the complexity and dynamic characteristics of flow patterns of two-phase flow broadly. Conclusions Considering the two-phase flow's complex non-linear characteristics and non-stationary properties of the twophase flow, we employ the mathematical MMFD to analyse the gas-liquid two-phase flow's characteristic information. We take sine, cosine and Logistic mapping as examples to verify the effectiveness and accuracy of the MMFD. The results show that that the MMFD is better than grid fractal dimension in accuracy. 
Moreover, the MMFD can reflect the characteristics of chaotic systems. According to the analysis, this method is successfully applied to feature extraction and recognition of two-phase flow patterns; in particular, the transitional flow patterns can be recognized well. The paper thus provides a novel method for analysing the flow-pattern characteristics of gas-liquid two-phase flow. Disclosure statement No potential conflict of interest was reported by the author(s). Funding This project is supported by the Natural Science Foundation of Shandong (grant number ZR2019MEE071).
3,899
2020-11-25T00:00:00.000
[ "Engineering", "Physics" ]
A collaborative method to survey and store urban components . Urban components are the important part of the city, and the rapid and efficient survey of urban components is a key requirement of urban digitization. In this paper, a method of the collaborative survey and storage of urban components is proposed. The national standard code for urban components survey is optimized to accelerate the survey and storage of urban components, and the time cost of error discovery is shorten by using AutoCAD to check the spatial location and employing a VBA macro programme to search for the attribute data errors. At the same time, a collaborative processing flow of components data is constructed through ArcPy to further speed up the storage of urban components. The experiment of urban components survey in Wanxiu District of Wuzhou City shows that this method can effectively reduce the complexity of urban component data storage procedure and the error rate. Compared with the traditional method, the proposed method is about 2 times more efficient to input the urban component data into the database. Introduction Urban components are the important part of the city, and are also an important data source of the so-called "digital city" [1] . Effective city management requires us to have a better way to obtain the information such as quantity, category, location, attributes, distribution, and so on, of the urban components. However, the survey of urban components is often time-consuming and need very great effort to collect their information one by one, and it is quite easy to make mistakes when processing the data and inputting them into the database. Therefore, it is necessary to establish an efficient method to facilitate the survey of urban components. Although some existing basic software, such as AutoCAD, ArcGIS, SuperMap, and MapGIS, are able to process urban components data, but they mostly focus on the basic editing function of spatial data, and do not deal with the storage of urban components. Therefore, these software are not able to support efficient entry of urban components data [2] . In the conventional technical process of urban components survey and storage, field workers rely on basic map data to obtain the geographical location of urban components and record their attribute information in the field. And then the quality of the field data is checked by the office operators, and the data for each field survey is entered into the database one by one. If there is an error, field workers need to go to the site for quality inspection and rechecking attribute data [3] . The way of conventional components data storage is easy to cause errors, which are mainly reflected in the following aspects: the first is the spatial position error. This is mainly caused by the measurement equipment and human factors. For example, the street trees are not on the same horizontal line, and the distance between the rain grate and the edge of the road is large. There are obvious logic errors in these spatial positions. The second is the attribute data error. This is mainly caused by human error. The error is that setting the street lamp category to the manhole category, having obvious factual errors for height and width of the tree, and mismatching the photos of the interest points, etc. The third is the attribute data omission. This is mainly caused by that some required components attributes are not collected. 
For example, the photos of the interest points, the heights and radiuses of the trees and the numbers of the parking lot, etc., are not collected. As for the deficiency of traditional urban component survey, this paper optimize the unified code rules for the fast survey of urban components based on the national standard code. And the VBA macro programme, AutoCAD and ArcPy are used to complete the batch data processing and implement the storage and management of the urban components data on ArcGIS. Collaborative survey and storage technology of urban components In order to improve the efficiency of quality survey and storage, project managers often divide a task into several parts. When managing multi-task parallel storage, they need to understand the processing procedure of all workers in real time, so as to adjust the workload at any time according to the progress. The workers should also be clear about the scope of their own operations, and simultaneously be aware of the operation progress of other members, so as to coordinate with other workers. In this paper, combined with the relevant norms of urban components survey, the survey data processing process is reorganized to construct the collaborative storage mode of three standards (spatial location check, attribute information check and component quality check) and two processing (pre-processing and verification processing). As shown in Figure 1, this process mainly involves two steps: one is to expand the code of urban components survey, so as to parse into attribute information in the later stage. The other is to use ArcPy plugin to implement data storage, automatic layering, automatic symbolization and batch drawing, etc. The collaborative survey and storage process of urban components constructed in this paper can realize the collaborative processing, transparent inspection and fast storage of the data, etc., which greatly improves the survey and storage efficiency of urban components. . Extension of survey codes for urban components According to the classification code of national urban components survey, a type of urban components is generally represented by four digits, which can only identify different types of components, but cannot distinguish the attributes of the same type of components. To parse the code into the types and attributes of the components, expanding and optimizing the original classification code is essential [4] . Therefore, seven to nine digits are used to code the urban components. The first digit code is the usage status of the components, indicating that the status of the components is intact/broken/lost/occupied, etc.; the second to fifth digit codes are the national standard classification code of the components, representing the types of the urban components; the sixth to ninth digit codes are the extended attribute code, representing the material, quantity and other attributes of urban components. For example, the code of a communication manhole cover is 10107112, and its the first digit "1" means intact, the second to fifth digits "0107" signifies the type of communication manhole cover, the sixth digit "1" denotes cast iron, the seventh digit "1" indicates that the manhole cover belong to the mobile company, and the eighth digit "2" implies that there are two communication manhole covers at this location. 
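To illustrate how such an extended code can be parsed back into component attributes during office processing, a minimal Python sketch is given below (the workflow described in this paper uses a VBA macro and ArcPy; Python is used here purely for illustration). The digit positions follow the scheme just described, including the street-tree code discussed in the next example; the lookup tables for status, material and ownership are illustrative assumptions, since the full code lists are not reproduced in the paper.

```python
# Illustrative lookup tables -- the real code lists are project-specific (assumptions).
STATUS = {"1": "intact", "2": "broken", "3": "lost", "4": "occupied"}
MATERIAL = {"1": "cast iron", "2": "concrete"}
OWNER = {"1": "mobile company", "2": "telecom company"}

def parse_component_code(code: str) -> dict:
    """Parse an extended 7-9 digit survey code into attributes.

    Digit 1: usage status; digits 2-5: national classification code;
    digits 6 onwards: extended attributes, whose meaning depends on the class.
    """
    attrs = {"status": STATUS.get(code[0], "unknown"), "class_code": code[1:5]}
    ext = code[5:]
    if attrs["class_code"] == "0107":            # communication manhole cover
        attrs.update(material=MATERIAL.get(ext[0], "unknown"),
                     owner=OWNER.get(ext[1], "unknown"),
                     count=int(ext[2]))
    elif attrs["class_code"] == "0402":          # street tree
        attrs.update(height_m=int(ext[0:2]), crown_radius_m=int(ext[2:4]))
    return attrs

print(parse_component_code("10107112"))   # manhole cover example from the text
print(parse_component_code("104020802"))  # street tree example from the text
```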
For another example, the code of a tree is 104020802, and its the first digit "1" means the tree is in good condition, the second to fifth digits "0402" signifies the type of street tree, and the sixth to seventh digits "08" implies the height of the tree is 8 meters, and the eighth to ninth digits "02" indicates the radius of the tree crown is 2 meters. If the code of components needs to be updated, the code can be added from the last digit of the original code while the original code remains unchanged. Thus, the code with the corresponding attribute is expanded according to the actual needs, which can greatly facilitate the internal workers to quickly parse the code and get the component attributes, avoid the input of component attributes one by one, and improve the storage efficiency of urban components survey data. The coding method is shown in Figure 2. Collaborative processing of urban components survey data. In this paper, an ArcPy plugin for ArcGIS is used to modify data, symbolize data, and edit data, and so on. A variety of logic checks, data batch modification and entry can be carried out through this plugin, which can maximize the efficiency of the collaboration of all the users of the ArcPy program. For an urban component, each mandatory attribute should be checked to make sure whether it is integral, if not, it should be modified according the integrity rules. For example, as some attributes may have a default value, and these attributes will be set to default value when some users do not enter their actual values for the sake of convenience, then it will cause errors and data integrity problem, which make a integrity check necessary. The rules of checking the required attribute information are shown in Table 1. After using the VBA programme to pre-process and check the data, the data is added, deleted and modified according to the feedback information of the program, and then the data is exported for importing into ArcGIS for collaborative processing. During the collaborative processing, the ArcPy plugin for ArcGIS is employed to check the data for errors, automate the data hierarchy, and perform batch symbolisation. Before components data being partitioned for producing a diagram through a self-defined grid, using the rules of the Table 1 to mark the missing items for prompting the quality inspection personnel is necessary, so as to increase the efficiency of quality inspection and simultaneously simplify the work of the internal processing. To get results data that met certain survey accuracy, the ArcPy program is adopted to merge regularised data according to the grid number. The specific method of the collaborative processing is depicted in Figure 3. Firstly, processing components data. Using the ArcPy programme to maintain the mapping from the full attribute field to each components attribute field for the purpose of automating the hierarchy of the components data. Then, after completing spatial position inspection, the components data is imported into Excel for parsing the code into the attribute information by using VBA macro programme. If the photos of some components are missing or their names cannot be matched to the components points because of misnaming, the corresponding components points is counted so that the field quality inspectors can take the corresponding photos later. 
Finally, the component symbol library is built using the component symbol standard established by the state, and batch symbolization is carried out according to the mapping between symbols and component types. Secondly, producing the map. A diagram is produced so that erroneous component data can be corrected according to row and column number during the data quality inspection. Attribute correction converts the codes into the corresponding text while performing several logic checks to correct unreasonable attributes. Thirdly, organizing the photos. There are two cases: in the first case, the quality inspection photos are renamed and moved to the target path according to the component category and the photo coding rules; in the second case, each subtype of component needs to be photographed, and the photos are then reorganized by sub-category and named according to the agreed naming rules. Finally, filling in the ownership information. Based on the component ownership account information, an ownership table is created and the component codes are filled in to meet the requirements for populating the required fields. Case study and analysis Using the technology described above, the rapid survey and storage of urban components with collaborative office and field operation is implemented using the VBA program and ArcPy scripts. This paper takes the component survey of Wanxiu District, Wuzhou City as an experiment. In the same area, two groups with the same number of people carry out the survey and quality inspection, one with the traditional method and one with the method proposed in this paper, and the missing rate, error rate and time cost of survey and storage are then counted. The urban components of Wanxiu District are divided into 6 categories and 123 sub-categories. 706 LAS points are collected, and 496 urban components are collected in the experimental area of Wanxiu District along a road section of 3.5 kilometers. The traditional method and the collaborative method are used to survey and store the components respectively. The comparison of component input efficiency is shown in Table 4, and the comparison of component inputs and omissions is shown in Table 5. It can be seen from the tables that traditional component data storage is carried out after the end of the field work every day, with the component attributes of each field survey entered one by one, whereas the collaborative survey program is about twice as fast as the traditional technology in working hours. The distribution of field staff is shown in Table 2, and the comparison of office processing plans is shown in Table 3. According to the statistics, the efficiency and time of rapid survey and storage with the collaborative method are significantly better than with the traditional method. It not only realizes the seamless combination of urban component data and attribute information, but also improves the accuracy of attribute input while improving its efficiency. Verification against the experimental data shows that this method is 2.14 times faster than the traditional method. The component attribute error rate is computed as m/N [5], where m represents the number of component attribute errors and N represents the total number of components entered.
Component attribute errors typically include attribute values containing wrong characters, incorrect attribute values, format errors in the result documents, etc. The component missing rate is computed as O/N [5], where O is the number of missing components and N represents the total number of components entered. The accuracy of the components is calculated for the two processing methods under the same operating environment and with the same component points entered. The comparison results are shown in Figure 4. Conclusion In this paper, a method for the collaborative survey and storage of urban components is proposed. The national standard code for urban component surveys is optimized to accelerate the survey of urban components; AutoCAD is used to check the spatial location, and the VBA macro programme is employed to search for attribute data errors. At the same time, the plugin developed with ArcPy is used to process the component data collaboratively. The method is validated in the urban component survey project of Wanxiu District, Wuzhou City. Compared with the traditional method, this method is about 2 times more efficient in surveying and storing the urban component data. As the method optimizes the workflow and realizes online scheduling and cooperation of data processing and quality control, it greatly improves working efficiency and promotes unified data management during an urban component survey project. Although this paper explores this topic and achieves some results, the method has yet to be further improved in practical application. Owing to the limitations of the project background, the survey scope in this paper is not large. As a next step, a larger survey scope is needed for verification, in order to further optimize the method proposed in this paper.
3,494.4
2021-01-01T00:00:00.000
[ "Engineering", "Computer Science" ]
A Variable Structural Control for a Hybrid Hyperbolic Dynamic System Abstract: In this paper, we are concerned with a hybrid hyperbolic dynamic system formulated by partial differential equations with initial and boundary conditions. First, the system is transformed to an abstract evolution system in an appropriate Hilbert space, and spectral analysis and semigroup generation of the system operator is discussed. Subsequently, a variable structural control problem is proposed and investigated, and an equivalent control method is introduced and applied to the system. Finally, a significant result that the state of the system can be approximated by the ideal variable structural mode under control in any accuracy is derived and examined. In order to investigate the variable structural control problem for the system, first, let's transfer the system to an abstract Cauchy problem in an appropriate Hilbert space, then discuss the spectral properties and semigroup generation of the system operator. Spectral Analysis and Semigroup Generation We start this section with considering the system (1.1) in the underlying Hilbert space H = L 2 (0, 1) 2 . Define the (2.1) Then the system (2.1) can be written an an evolution equation in H: Proof. Given (f, g, b) ∈ X, we solve that is, Let's denote by M (x, y, λ) the fundamental matrix of the system d dx On the other hand, we see from the boundary condition in (2.1) that where It can be eventually seen from (2.5) that R(λ, A) is compact for any λ ∈ ρ(A). Theorem 2.2. The operator A defined by (2.1) generates a C 0 -semigroup T (t) on H. Proof. We need only to prove the assertion for the case C ≡ 0 because is a bounded operator by assumption (H2), and bounded perturbations do not affect C 0 -semigroup generations. For the sake of simplicity, we assume that H is real. The idea is to define an equivalent norm on H by properly choosing some positive weighting functions It is easily verified that H * , the dual space of H, consisting of all elements where q denotes the conjugate number of p, which satisfies 1 p + 1 q = 1. For any We estimate I i separately. It is clear from the expression of I 3 that e ij v j (0), we see that Because λ i (0) > 0 and µ j (0) < 0 from (H1) , we can always find g j (0) > 0 and f i (0) > 0 such that holds, which implies that I 2 ≤ 0. We now estimate I 4 by means of the inequalities (|a| + |b|) p ≤ 2 p (|a| p + |b| p ) and |a| 1 p |b| 1 q ≤ |a| p + |b| q which hold for any real a and b, we have with α i and β j denoting the obvious constants. Subsequently, it can be seen that If we choose f i (1) > 0, g j (1) > 0 such that for any 1 ≤ i ≤ N and N + 1 ≤ j ≤ n, then The estimations of I i above show that there exists a constant M such that Now we choose a weighting functions f i (x) and g i (x) such that they satisfy (2.10) and (2.11), and hence define a norm in H according to (2.3). Because A − M is dissipative and A has the properties stated in the Lemma 2.1, we can assert from [9] and [11] that A generates a C 0 -semigroup on H, and the Theorem 2.2 is established now. A Variable Structural Control Let's establish and discuss a structural control problem for the hybrid hyperbolic system (2.2) In the rest part of this paper, we are going to show that the actual sliding mode W (t) will approach uniformly to the ideal sliding modeW (t) under certain conditions. and I − P are commutative because A and P are commutative. We see that Letà denote the infinitesimal generator of T 2 (t). 
Since the limit on the left exists, we can assert that x ∈ D(Ã) and In the boundary layer T 1 (t) ≤ δ, let's introduce the equivalent control as follows Hence, the solution of (3.7) can be expressed as follows: (3.8) and therefore, the solution of (3.4) can be written as Subtracting (3.9) into (3.8) yields Since P A = AP , we see that P T 1 (t) = P T 1 (t). It should be emphasized that (I − P )P = 0 and T 2 (t) = (I − P )T 1 (t), and consequently, Thus, The proof of the theorem is complete. We see from the Theorem 3.2 that the solution of the beam system can be approximated by ideal sliding mode in any accuracy. Conclusion In the present paper, a variable structural control problem for a hybrid hyperbolic dynamic system dominated by partial differential equations subject to the boundary shear force feedback is investigated. An evolution equation corresponding to the beam system is established in an appropriate Hilbert space. A spectral analysis and semigroup generation of the system operator for the system are studied. Finally, a variable structural control is proposed, and a significant result that the solution of the system can be approximated by the ideal variable structural model under the control is obtained.
1,254.4
2021-03-11T00:00:00.000
[ "Mathematics" ]
Taking into account camera tilt when correcting a three-dimensional model of a hollow pipe during visual diagnostics with a self-propelled complex . This paper investigates the problem of assessing the degree of pipe deformation and the deviation of its longitudinal section profile from a cylindrical shape based on a three-dimensional point model of a hollow pipe under conditions when this model turns out to be displaced (rotated in space) relative to its real position. An algorithm for correcting a three-dimensional point model based on data on the current tilt of the camera is considered, as well as an algorithm for the actually obtained image overlaid with virtual cross-sections of the pipe, close to ideal. Introduction Long-term operation of hollow pipes often leads to the fact that such pipes can be deformed under the influence of miscellaneous external factors, can become corroded, and accumulate accidentally falling foreign bodies within it. All these factors can lead to a deterioration in the permeability of the pipe or even to a violation of its integrity. One of the approaches that makes it possible to diagnose such hollow pipes is to use video diagnostics of the pipe condition by using self-propelled robotic platforms equipped with video cameras and other sensors, which composition depends on the purpose of diagnostics and the destination of the pipeline. Modern technical devices of capturing video images allow us to combine receiving a video stream with obtaining a three-dimensional point model of the considered object, which allows an operator to use software instruments to measure the visible dimensions of pipe defects or objects within it. The creation of the software for an operator of a self-propelled robotic platform made it necessary to measure a deformation of the pipe and the deviation of its longitudinal section profile from the cylindrical shape. This problem can be solved by the actually obtained image overlaid with virtual cross-sections of the pipe, close to ideal. However, since the camera mounted on a self-propelled platform can change the angle of inclination to the longitudinal axis of movement, the three-dimensional model obtained from the camera turns out to be displaced (rotated in space) relative to its real position, and it is required to correct this model using the data on the current tilt of the camera. This paper considers a special case, when a self-propelled platform is installed in such a way so the vector of its movement direction coincides with the pipe axis. The method of turning a three-dimensional point model obtained from the directional camera of a selfpropelled platform is used, and an algorithm for constructing pipe cross-sections close to ideal, based on the obtained data is being developed. The following designations were used: C is the point where the camera is located, t is the vector of the direction of the camera's view, α is the angle of inclination of the camera along the pipe axis, r is the radius of the pipe cross-section. Formulation of the problem The self-propelled platform is located inside the hollow pipe so that the vector of its movement direction coincides with the axis of pipe. It is known that a camera mounted on a self-propelled platform is located at an angle α to the pipe axis, as it is shown in the Figure 1. 
It is necessary to define an unambiguous mathematical description of the camera, represented by the position of the camera in world space, the direction it is looking at, a vector pointing to the right of the camera, and a vector pointing upwards from the camera; to implement the camera in software on the basis of this mathematical description; to define a rotation matrix to transform the three-dimensional point model of the hollow pipe; and, after transforming the 3D model, to construct pipe cross-sections close to ideal. The solution of the problem World space means coordinates relative to a single global reference point in a three-dimensional Euclidean coordinate system. By convention, OpenGL uses a right-handed coordinate system, which means the positive X-axis points to the right, the positive Y-axis points upward, and the negative Z-axis points into the monitor screen [1]. The point C (x0, y0, z0) describes the position of the camera in world space. Let C be the origin of the view space. It is known that the self-propelled platform and the pipe in which it is located share the same coordinate system, and that the camera mounted on the platform is rotated around the global Ox axis by an angle α, so the axis Oxt of the pipe and the axis Oxv of the camera view space coincide. The camera direction vector lies on the positive Z-axis of the camera, and the viewing direction of the camera is on the negative Z-axis of the camera. The Oyv axis can be calculated as the cross product of the Oxv and Ozv axes. Thus, we obtain a triplet of vectors generating the right-handed coordinate system of the camera, as shown in Figure 2. This mathematical description is necessary because the concept of a camera is absent in OpenGL, and it allows us to implement a software camera by defining a special matrix that transforms vectors into the coordinate space of the camera view. The LookAt matrix in its general form, according to [1], is built from R, the right vector (X-axis of the camera), U, the vector pointing upward (Y-axis of the camera), D, the camera direction vector (Z-axis of the camera), and P, the position of the camera. With this software implementation and the camera position shown in Figure 2, the images obtained from the directional camera were converted into three-dimensional point models without taking the camera rotation by the angle α into account; one such model is shown in the corresponding figure. According to the conditions of the problem, it is necessary to rotate the obtained three-dimensional point model around the X-axis by the known angle α for further detection and analysis of defects in the pipe. Since this paper considers a special case in which the global X-axis coincides with the Oxv axis of the camera, it is sufficient to apply the basic geometric rotation of the points of the three-dimensional model around the X-axis, which, according to [2, p. 35], can be described by a matrix Rx. The rotation of the three-dimensional point model of the pipe by the angle α around the X-axis is implemented in software as follows: Points.ModelMatrix = Matrix4.CreateRotationX(MathHelper.DegreesToRadians(PointsXRotAngle)); This implementation made it possible to obtain images converted into three-dimensional point models that take the rotation of the camera by the angle α into account, as shown in Figure 4, and also made it possible to further analyze pipe defects by overlaying them with pipe cross-sections close to ideal.
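As a rough illustration of this correction step, the sketch below builds the standard rotation matrix Rx(α) about the X-axis in Python/NumPy and applies it to a point cloud. It mirrors the CreateRotationX call shown above, but it is only a sketch: the array shapes, the example angle and the sign convention for α are assumptions, not the project's actual code.

```python
import numpy as np

def rotation_x(alpha_deg: float) -> np.ndarray:
    """Basic rotation matrix about the X-axis (angle given in degrees)."""
    a = np.radians(alpha_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def correct_point_cloud(points: np.ndarray, camera_tilt_deg: float) -> np.ndarray:
    """Rotate an (N, 3) point cloud about the X-axis (assumed to coincide with the
    pipe axis) to compensate for the camera tilt; the sign of the angle depends on
    how the tilt is measured."""
    return points @ rotation_x(camera_tilt_deg).T

# Example usage with a placeholder cloud and an assumed tilt angle of 12 degrees
cloud = np.random.rand(1000, 3)
corrected = correct_point_cloud(cloud, camera_tilt_deg=12.0)
```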
When shooting with a real camera, the "point cloud" images may contain various defects: the presence of extra points outside the real pipe, or the absence of groups of points where they exist in the real pipe. For further analysis it is necessary to exclude such defects as much as possible, since they can interfere with the construction of a correct model. To exclude them, it is necessary, using the existing three-dimensional point model and the radius r of the pipe cross-section, to reject all points that do not fall within a certain range. Defects are excluded and a pipe model close to ideal is built according to the following algorithm: 1. having the coordinates of the points of the three-dimensional model of the pipe, sort these points by their distance from the camera centred at point C; in this case, the sorting is done by the Z-coordinate, and the result is shown in Table 1. This kind of sorting allows the points that form the cross-sectional planes to be separated into layers; 2. according to the algorithm from [3], a hundred triplets of points are selected on each layer, and for each triplet the circle passing through the points of the triplet is calculated; 3. after obtaining a set of circles with their centres for each layer, calculate the radius of each circle; then, based on this set of radii, exclude those circles whose radius deviates excessively from the median radius of the set; 4. after excluding, for each layer, the circles whose radius does not fall within the allowable range, calculate the median radius and the average centre of the remaining sets; 5. grouping the layers by ten, find the average of the centres and of the radii over the corresponding sets; 6. on the basis of the averaged radius and centre obtained for each group of layers, construct a circle, which is a cross-section of the pipe close to ideal. The result of this algorithm is shown in Figure 5. Conclusion In this paper, a special case was considered, in which a self-propelled platform is installed in such a way that the vector of its direction of movement coincides with the axis of the pipe in which it is located. To solve the problem of rotating the three-dimensional point model, a mathematical description of the camera was given, a software implementation of the camera was carried out on the basis of this description, and a rotation matrix for transforming the three-dimensional point model of the hollow pipe was determined. In addition, an algorithm for constructing pipe cross-sections close to ideal, based on the obtained data, was developed.
2,220.4
2020-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Exploring miRNAs’ Based Modeling Approach for Predicting PIRA in Multiple Sclerosis: A Comprehensive Analysis The current hypothesis on the pathophysiology of multiple sclerosis (MS) suggests the involvement of both inflammatory and neurodegenerative mechanisms. Disease Modifying Therapies (DMTs) effectively decrease relapse rates, thus reducing relapse-associated disability in people with MS. In some patients, disability progression, however, is not solely linked to new lesions and clinical relapses but can manifest independently. Progression Independent of Relapse Activity (PIRA) significantly contributes to long-term disability, stressing the urge to unveil biomarkers to forecast disease progression. Twenty-five adult patients with relapsing–remitting multiple sclerosis (RRMS) were enrolled in a cohort study, according to the latest McDonald criteria, and tested before and after high-efficacy Disease Modifying Therapies (DMTs) (6–24 months). Through Agilent microarrays, we analyzed miRNA profiles from peripheral blood mononuclear cells. Multivariate logistic and linear models with interactions were generated. Robustness was assessed by randomization tests in R. A subset of miRNAs, correlated with PIRA, and the Expanded Disability Status Scale (EDSS), was selected. To refine the patient stratification connected to the disease trajectory, we computed a robust logistic classification model derived from baseline miRNA expression to predict PIRA status (AUC = 0.971). We built an optimal multilinear model by selecting four other miRNA predictors to describe EDSS changes compared to baseline. Multivariate modeling offers a promising avenue to uncover potential biomarkers essential for accurate prediction of disability progression in early MS stages. These models can provide valuable insights into developing personalized and effective treatment strategies. Introduction Multiple sclerosis (MS) is a chronic autoimmune disease affecting the central nervous system, triggering a wide range of symptoms resulting in cognitive, motor, sensory, sphincteric, and visual function impairments [1].MS is the most widespread source of non-traumatic neurological disability in young adults, with an incidence of around 1 per 1000 [2].Furthermore, MS displays a significant social and economic burden on patients, their families, and society, with substantial costs beyond direct healthcare expenses, including reduced quality of life and the emotional toll on caregivers [3].MS patients accumulate progressive disability due to chronic inflammation and neurodegeneration.In some patients, neurodegeneration may develop early in the disease process and represents a driver of disease progression [4][5][6].It is generally felt that the existing classification of MS in different subtypes does not reflect the clinical and biological heterogeneous nature of the disease [4].Although many DMTs are now available [7][8][9], the choice and sequencing require a personalized approach.Treatment selection is also based on individual demographic and clinical factors such as MS-related prognostic factors, patient comorbidities, risk tolerance, pregnancy planning, and route of administration [8]. 
Recent evidence has emphasized the effectiveness of high-efficacy-DMTs in reducing relapse rates, inflammation foci, and slowing down the relapse-associated accumulation of disability over time [10].Current research indicates, however, that disability progression in MS patients is not solely linked to new focal inflammatory demyelinating lesions and clinical relapse [11].Instead, it is increasingly recognized that progression independent of relapse activity (PIRA) [12][13][14] and the accumulation of disability in the absence of relapse-associated worsening (RAW) [12], as determined by the Expanded Disability Status Scale (EDSS), may occur from the disease onset [10,15].In the early stages of MS, PIRA is a significant provider of long-term disability, even in the absence of relapses [16,17].Relapsing multiple sclerosis (MS) appears to be orchestrated by the activation and migration of peripheral immune cells into the central nervous system (CNS), with a compelling focus on the interplay between T and B cells.Non-relapsing progressive MS seems to be related to neurodegeneration phenomenon and/or smoldering CNS inflammation [18,19].PIRA has been associated with several MRI features, including brain and spinal cord atrophy, as well as an increase in paramagnetic rim lesions [20,21].The pathophysiological substrate for PIRA is currently not extensively characterized, with a combination of pathological processes probably contributing to degenerative progression in early MS [20,[22][23][24].Moreover, studies have highlighted the challenges of compensatory mechanisms to leptomeningeal inflammation failure and focal spinal cord pathology, which are potentially linked to PIRA [25].To date, the clinical and neuroimaging predictors of PIRA at disease onset have not yet been outlined, nor is its association with inflammation [16,26].As a result, PIRA has gained significant attention in both research and clinical settings.Given that patients experiencing PIRA early in the disease course often face a challenging prognosis, there is a pressing need to unveil biomarkers capable of predicting and monitoring the clinical evolution of MS and the response to various DMTs. Among potential biomarkers, miRNAs have gained significant consideration over time due to their role in gene expression regulation at the post-transcriptional level.Furthermore, miRNAs' involvement in diverse cellular processes, such as inflammation, neurodegeneration, and remyelination, has been extensively investigated [27,28].miRNAs are small, non-coding RNA molecules, between 20 and 25 nucleotides, which regulate a multitude of cellular processes.Different signatures of miRNA expression have been identified in MS subjects in the relapse versus remitting phase, able to identify specific drug responses and radiological patterns [29][30][31].A specific signature of deregulated miRNAs in peripheral blood mononuclear cells (PBMC) has been shown as a biomarker in disease stratification or DMT response [32]. Modeling approaches, such as logistic and multilinear regressions are essential in modern medicine for predicting outcomes and understanding complex relationships between variables.These statistical tools enabled researchers using a backstep multivariate regression analysis to identify miRNAs significantly correlated with EDSS changes over time in MS patients [33][34][35]. 
As of today, there is limited knowledge regarding biomarker-based models as potential predictors of MS disease progression. The study aimed to identify miRNAs associated with MS progression by analyzing their expression levels in relation to PIRA and EDSS scores. MiRNAs correlated with PIRA status in MS patients were highlighted using a predictive probabilistic regression model to accurately stratify future PIRA risk. A multivariate analysis further revealed miRNA predictors associated with changes in EDSS scores, demonstrating the utility of regression in forecasting disability progression. Patient Screening and Enrollment Twenty-five adult patients with relapsing-remitting MS (RRMS) (17 females and 8 males) were enrolled in this study, according to the latest McDonald criteria [9]. Patients underwent clinical assessments before (time T0) and after high-efficacy DMTs at different time points (6, 12, 18, 24 months: T1, T2, T3, T4, respectively). Patients' demographic and main clinical features are reported in Table 1. RAW was defined as a confirmed and sustained disability worsening (CDW) event with an onset within 90 days from the beginning of a relapse. PIRA was defined as a CDW event either without any preceding relapse or with an onset occurring more than 90 days after the beginning of the reported relapse. Peripheral blood was collected at baseline (T0) to measure miRNA expression profiles. Identifying Candidate miRNAs Linked with Multiple Sclerosis Progression With the aim of designing an optimal model to associate miRNA expression levels with the clinical trajectory as quantified by PIRA and the EDSS score, we performed a first filtering step to select miRNAs with a significant correlation to PIRA status (0/1). This resulted in a pre-selection of nine miRNA genes (hsa-miR-4485-5p, hsa-miR-1973, hsa-miR-424-5p, hsa-miR-4466, hsa-miR-6126, hsa-miR-223-3p, hsa-miR-24-3p, hsa-miR-340-3p, hsa-miR-6090) (Table 2), shown in a heatmap (Figure 1). Table 2. Correlation values between selected miRNAs, PIRA status, and EDSS (T4-T0) difference. Nine miRNAs were selected based on their significant correlation both with PIRA and with EDSS change at 24 months (T4). These miRNAs were then used to build logistic and multilinear models. All correlations are statistically significant (p < 0.05). The Principal Component Analysis (PCA) highlighted how these genes collectively could partially stratify the subjects based on PIRA (Figure 2). The trajectory of EDSS scores in subjects was influenced by the stratification of PIRA, with this relationship becoming increasingly visible at later disease stages (Figure 3A). Moreover, the change in EDSS score at T4 compared to T0 (EDSS T4 − EDSS T0) differed according to PIRA status, with a significantly larger deviation from baseline when PIRA was present (Figure 3B). Developing a Predictive Model for PIRA In order to refine the patients' stratification, which is tightly connected to the EDSS score trajectory, we looked for a robust logistic classification model constructed on four baseline (T0) miRNA expression levels and their first-order interactions, to predict the PIRA status at follow-up (0 = No PIRA, 1 = PIRA). The selected optimal logistic score-based model allowed a good prediction of PIRA status based on a score cut-off, and had statistically significant coefficient values (Equation (1)), where a score of 0.277 is the optimal threshold of the logistic score according to the maximum Youden method in the ROC curve (Figure 4).
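The authors fit and selected their models in R (glmulti, ROCR). As a rough, language-agnostic illustration of the same idea — a logistic model on a handful of baseline miRNA predictors plus pairwise interactions, scored by ROC AUC with a Youden-index cut-off — a minimal Python sketch might look as follows. The file name, column names and data layout are assumptions; the actual predictor set and coefficients are those of Equation (1).

```python
# Sketch only: logistic classification of PIRA from baseline miRNA expression
# with first-order interaction terms, evaluated by ROC AUC and a Youden cut-off.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import roc_curve, roc_auc_score

df = pd.read_csv("mirna_baseline.csv")            # hypothetical table: one row per patient
predictors = ["miR_A", "miR_B", "miR_C", "miR_D"]  # placeholders for the four selected miRNAs
X = df[predictors].to_numpy()
y = df["PIRA"].to_numpy()                          # 0 = No PIRA, 1 = PIRA

# Add pairwise interaction terms (x_i * x_j), mirroring the first-order interactions used
X_int = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(X)

# Unpenalized fit (use penalty='none' on older scikit-learn versions)
model = LogisticRegression(penalty=None, max_iter=5000).fit(X_int, y)
scores = model.predict_proba(X_int)[:, 1]

auc = roc_auc_score(y, scores)
fpr, tpr, thresholds = roc_curve(y, scores)
youden_cutoff = thresholds[np.argmax(tpr - fpr)]   # maximum Youden index J = TPR - FPR
print(f"AUC = {auc:.3f}, optimal score cut-off = {youden_cutoff:.3f}")
```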
Using this model, logistic scores were computed, together with the corresponding ROC curve. The model is able to predict the correct PIRA status, with a computed AUC = 0.971 (bootstrapped AUC = 0.990 ± 0.023, indicating a high degree of similarity). The maximum Youden index criterion allowed the selection of an optimal score cut-off of 0.277 to discriminate between positive (>cut-off) and negative (<cut-off) PIRA status. The robustness of the AUC was assessed by randomization tests on reshuffled PIRA status and predictor identity (Figure 4A,B). Evaluating EDSS Changes with a Multivariate miRNA Analysis Using four other miRNA predictors associated with EDSS, it was possible to build an optimal multilinear model, including interactions, to describe the relative EDSS changes after 24 months (EDSS T4 − EDSS T0, Equation (2)). All model coefficients, including intercept, were again statistically significant (Wald test, p < 0.05). Though each one of these four miRNA genes was significantly correlated with the EDSS changes at T4 (Figure 5), none singly could be used for an optimal univariate regression model, since the performance was consistently worse than the full multivariate model in Equation (2), which combines both single predictors and their interactions. The multivariate model performance in predicting changes in EDSS compared to baseline is shown in Figure 6, with a very good correlation between predicted and real data. Discussion Accurately forecasting the trajectory of multiple sclerosis is a critical medical need, as predicting disease progression can equip clinicians to intervene in a timely manner and improve patient outcomes. This study aimed to investigate whether baseline miRNA expression profiles in PBMC predicted disability worsening due to PIRA in RRMS patients. We designed logistic models to associate miRNA expression with the clinical trajectory as quantified by PIRA, a binary variable, and EDSS scores. We used multivariate logistic and linear models, including interaction terms, to improve the regression quality. We first pre-selected nine miRNA genes (hsa-miR-4485-5p, hsa-miR-1973, hsa-miR-424-5p, hsa-miR-4466, hsa-miR-6126, hsa-miR-223-3p, hsa-miR-24-3p, hsa-miR-340-3p, hsa-miR-6090), significantly correlated with PIRA and also correlated with EDSS changes (T4-T0) (Table 2). This first step was necessary in order to perform a preliminary feature selection to narrow the set of predictor combinations for model building.
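A minimal sketch of that pre-selection step (Pearson correlation against PIRA and against the EDSS change, keeping only miRNAs significant for both at p < 0.05, as described in the Methods) could look like this in Python; the file names, column names and data-frame layout are assumptions for illustration only.

```python
# Sketch of the correlation-based pre-selection of candidate miRNAs (assumed data layout).
import pandas as pd
from scipy.stats import pearsonr

expr = pd.read_csv("mirna_expression_T0.csv", index_col=0)  # hypothetical: rows = patients, cols = miRNAs
clinical = pd.read_csv("clinical.csv", index_col=0)         # hypothetical: PIRA (0/1), EDSS_T0, EDSS_T4
clinical = clinical.loc[expr.index]                          # align patients across the two tables
edss_change = clinical["EDSS_T4"] - clinical["EDSS_T0"]

selected = []
for mirna in expr.columns:
    r_pira, p_pira = pearsonr(expr[mirna], clinical["PIRA"])
    r_edss, p_edss = pearsonr(expr[mirna], edss_change)
    if p_pira < 0.05 and p_edss < 0.05:   # keep miRNAs significant for both endpoints
        selected.append((mirna, r_pira, r_edss))

print(f"{len(selected)} candidate miRNAs retained for model building")
```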
The PCA of samples (Figure 2) obtained using these nine potential predictors highlighted how miRNAs partially discriminated between positive and negative PIRA conditions, but a further optimization step was required.The EDSS trajectories of subjects, as the disease severity increased, strictly depended on the PIRA stratification (Figure 3).No studies are available in the literature that clearly associates miRNA expression levels with PIRA.Hence, we were unable to benchmark our model with other results, nor were we able to validate the model using an independent dataset, as miRNA data collection is not a common practice in everyday clinical activities.We constructed the logistic formula taking care by selecting an optimal model with statistically significant coefficients for reasonable robustness.Additionally, we assessed the AUC significance using two different empirical null distributions, which were obtained by randomizing the PIRA status or by randomly selecting unrelated miRNA predictors. Different authors investigated the link between miRNA expression and EDSS in serum.Using a backstep multivariate regression analysis, Casanova et al. identified hsa-miR-9-5p as significantly correlated with EDSS change over 24 months, mirroring the timeframe of our research [34].Conversely, unlike our approach, they did not construct a multivariate model to derive an optimized score associated with EDSS changes over time.Nevertheless, consistent with the analysis approach in the current study, prior research has found that single miRNA predictors were unable to achieve optimal performance in univariate logistic models.In contrast, multivariate logistic models were able to attain better predictive performance [36,37].However, many of the model coefficients in these multivariate analyses did not reach statistical significance.This highlights the challenge of identifying robust miRNAs as biomarkers for multiple sclerosis disease staging using a univariate approach.The higher number of studies using multivariate models suggests the need to consider the combined effects of multiple miRNAs rather than relying on individual miRNA predictors alone.The lack of statistical significance for some model coefficients underscores the complexity of establishing definitive associations between specific miRNAs and disease staging in multiple sclerosis.Based on our findings, we advance the hypothesis that the nine chosen miRNAs may have potential associations with both PIRA status and EDSS change during 24-month follow-up.Additionally, a stronger and more comprehensive association could potentially be achieved through the implementation of appropriate multivariate logistic modeling for PIRA and linear modeling for EDSS.A more robust understanding of the relationships between miRNA expression, PIRA, and EDSS increase and progression could be achieved by integrating a specific subset of predictors along with their multiplicative interaction terms. 
Furthermore, with the logistic model being indicated as a possible marker of progression in this study, some of the microRNAs identified and correlated with PIRA and EDSS appear to have specific associations with the pathophysiology of multiple sclerosis based on the literature.Some miRNAs have been recently proposed as biomarkers for RRMS in serum [34] and extracellular vesicles (EVs) [35,38,39].In the present study, we showed that miRNA expression levels were associated with PIRA.MiR-223-3p has a pivotal role as an anti-inflammatory in immune cells and serves as a suppressor of NLRP3, a key protein of the inflammasome, recognized as a central component in the development of several inflammatory and autoimmune diseases [40].NLRP3 inflammasome activity is also critical for the inflammation-based microenvironment following demyelination and is a potential therapeutic target for inflammatory-mediated demyelinating diseases, including MS [41].The overexpression of miR-24-3p [42] and the downregulation of miR-223-3p [43] have already been shown in RRMS, and the correlation with EDSS has been reported [34,44].Moreover, Scaroni et al. in 2022 found both miR-223-3p and miR-24-3p overexpressed in serum EVs in CI (cognitively impaired) MS patients when compared to CP (cognitively preserved) MS patients [38].Vistbakka et al., 2022 described miR223-3p variation in all the subtypes studied across a 4-year follow-up but without a clear correlation with the clinical disability, as measured by EDSS.Over the same follow-up period, the expression of miR-24-3p was stable longitudinally, while miR-223-3p resulted in temporary variation [42]. In agreement with a previous study, we found that miR-24-3p correlated with the disability progression in RRMS.Our findings confirm the temporal correlation of miR-223-3p and miR-24-3p with clinical disability as measured by EDSS in RRMS. Moreover, some of the miRNA in our model, such as miR-340-3p and miR-424-5p, have a potential role in inflammation and in MS.MiR-340-3p is involved in the inflammatory processes and is reduced in B cells of RRMS patients and able to induce a specific cytokine and chemokine response [45].MiR-340-3p has been described by Wallach et al. as a novel TLR7/8 activator involved in CNS injury, thereby providing its potential role as a signaling molecule in CNS diseases.MiR-424-5p has been identified in the plasma of subjects who remained as Radiologically Isolated Syndrome (RIS) after 5 years of follow-up [46]. We propose a multivariate-based approach model that can explore the association between miRNAs and clinical activity and progression in MS.This model, based on measures of disability, such as the EDSS and PIRA, could uncover potential biological biomarkers essential for accurately predicting disability progression in the early stages of the disease. 
While these findings are promising, the lack of an independent validation cohort and the complexity of establishing definitive associations between specific miRNAs and disease staging in MS highlights the need for further research on a larger patient subset.Expanding the analysis by including a broader range of variables may yield additional insights into MS disease progression mechanisms and provide a more robust and reliable disease trajectory modeling.To elucidate and investigate the complex pathophysiology of MS and the specific response to DMTs, a comprehensive approach is further needed.This probably would involve the integration of biomarkers, such as miRNAs, from diverse sources, including PBMC, serum, EVs, and single-cell technologies, through modeling strategies that can capture disease progression course.If these miRNA-based models were validated and optimized in a larger, well-characterized population, they would unlock the potential for targeted pharmacological interventions in preventing disease progression. Study Design and Participants Patients with RRMS, according to the latest McDonald revised criteria [9], were enrolled.DMTs have been chosen as indicated [47].The patients were clinically tested before (time 0 = T0) and after treatment (T1-T4, 6-24 months).To protect sensitive data, anonymous codes were assigned to each participant and preserved for the study duration.All subjects gave written informed consent to participate in the study.The research was conducted following the Helsinki Declaration and approved by the Ethics of Sapienza University-Policlinico Umberto I (Rif.6361, protocol number 0635/2021).To minimize potential bias factors, all clinical data were gathered in the same clinical center following the same guidelines. PBMC Sample Collection The patient's peripheral blood was drawn via venipuncture at T0 for miRNAs' profiling and other laboratory tests.The blood samples were collected in Vacutainer tubes containing EDTA.Then, 15 mL of phosphate-buffered saline (PBS; without Ca 2+ , Mg 2+ ) was added to 10 mL of each sample's blood; after mixing, the diluted blood samples were carefully layered onto 7.5 mL of Ficoll for 30 min of centrifugation (18-20 • C) at 1800 rpm.The lymphocytes/monocytes layer was accurately collected in clean tubes; the cells were then pelted (1400 rpm for 10 min at 18-20 • C) and washed with PBS.The dry pellet was finally stored at −80 • C. RNA Extraction and Quality Checked RNA extraction and quality control RNA extraction was performed according to the miRNeasy Tissue/Cells Advanced Mini Kit (QIAGEN, Redwood City, CA, USA) instructions; the cells were suspended in 500 mL of RTL buffer + b mercaptoethanol, incubated at 37 • C for 10 min, and homogenized.The samples were passed through two different spin columns: the gDNA Eliminator spin column to remove all DNA and the Rneasy spin column to select RNA molecules.The miRNeasy Tissue/Cells Advanced Kits enabled efficient RNA enrichment down to approximately 18 nucleotides in size.All RNA samples were again stored at −80 • C. 
RNA purity and concentration quality control included the evaluation of absorbance at 260 nm by NanoDrop ND-1000 (Labtech International, Ringmer, UK).To assess the RNA integrity, samples were tested in the Agilent 2100 Bioanalyzer (Agilent Technologies, Santa Clara, CA, USA) via the Eukaryote Total RNA 6000 Nano kit (Agilent Technologies, Santa Clara, CA, USA) and the Small RNA kit (Agilent Technologies, Santa Clara, CA, USA).The bioanalyzer assessed each sample's RNA integrity number (RIN).Samples displaying an under-threshold RIN value (<8.0) were excluded from analysis. Agilent Microarray miRNA Profiles The miRNA profiles were performed according to the standard Agilent miRNA Microarray protocol (Agilent Technologies, Version 3.1.1,2015, Santa Clara, CA, USA).After a phosphatase treatment and a denaturation process via DMSO, 100 ng of RNA extracted from each sample was labeled with 3-pCp cyanine.Samples were hybridized to the Agilent Human miRNA Microarrays chip 8 × 60 K (Agilent PN G4870-60530, grid ID = 070156) containing 2549 human miRNAs.The glasses were incubated in the Agilent Hybridization Oven at 55 • C, 10 RPM, for 20 h, washed according to the protocol, and scanned using the Agilent DNA Microarray Scanner (G2539C). Statistical Analyses MiRNA expression values are median normalized and Log2 transformed.Only miR-NAs with expression values > 0.0 in every sample were included in the analysis.Data analysis was performed using R-Bioconductor [48,49].Pearson's correlation coefficient was used to estimate the association between miRNA profiles and clinical measures.The Shapiro-Wilk test was first applied to verify that the sample data followed a normal distribution, with a p-value > 0.05 indicating the data was consistent with a normal distribution [50].Only miRNAs with statistically significant correlation (p < 0.05) were selected for further analysis.Multivariate logistic and linear models were generated by the R package glmulti [51], including binary interaction terms.We selected the best model with 4 miRNA predictors out of 9 miRNA genes significantly correlated with PIRA.The statistical significance of coefficients was assessed using the Wald test (null hypothesis coeff = 0).ROC curves were obtained by the R package ROCR [52] and plotted by ggplot2 [53].Heatmaps were obtained by pheatmap [54].The robustness of AUC for the optimal logistic model was assessed by 1000 bootstrapped resampling of data and by two different randomization tests: 1000 randomizations of binary logistic response variable (PIRA) and 10,000 randomizations of miRNA predictors by randomly selecting non-optimal or unrelated miRNAs to evaluate the same model, so creating empirical null distributions.Experimental groups were compared using the Mann-Whitney two-sided test.Informed Consent Statement: Written informed consent was obtained from all subjects involved in the study. Data Availability Statement: MiRNAs raw and processed data are publicly available from the Gene Expression Omnibus database with the GSE230064 accession. Figure 1 . Figure 1.Sample heatmap visualization.The heatmap plot is based on Log2 normalized and standardized expression data (zero-centered, SD = 1.0) of the 9 miRNA genes significantly correlated with PIRA status (0/1).Samples are labeled according to PIRA. Figure 2 . Figure 2. 
Principal component analysis of samples. The PCA plot is based on Log2 normalized expression data of the 9 miRNA genes significantly correlated with PIRA status and EDSS score change. Samples are labeled according to PIRA. Figure 3. EDSS score variation and trajectory based on PIRA. (A) EDSS trajectory based on PIRA status. Subjects are divided according to the PIRA status and compared using the Mann-Whitney two-sided test. (*) p < 0.05. (B) EDSS score variation between T4 and T0 time points. Subjects are divided according to the PIRA status. The two groups are compared using the Mann-Whitney two-sided test. (****) p < 0.0001. Figure 4. (A) Predicted scores obtained by the logistic model. The horizontal dashed line corresponds to the cut-off, estimated as the maximum Youden index in the corresponding ROC curve, between the two levels 0/1 = negative/positive of PIRA status based on actual clinical data. Negative PIRA subjects in clinical data are plotted in blue and similarly positive ones in red. All model coefficients, including intercept, are significant (Wald test, p < 0.05); see Equation (1). (B) ROC curve of the binary classifier logistic model. The AUC is significantly large, according to reference null distributions obtained by randomizing PIRA status (p < 0.01) or miRNA predictors (p < 0.05); see Section 4. Figure 5. Correlation plots between selected miRNAs and EDSS (T4-T0). The four miRNAs used for the multilinear regression model in Equation (2) are significantly correlated to EDSS change at T4. The linear regression line in red is surrounded by the confidence interval in dark gray. MiRNA and EDSS data are normally distributed (Shapiro-Wilk test and Kolmogorov-Smirnov test, p < 0.05).
Figure 6.Multilinear model to predict EDSS (T4-T0).The multilinear model is based on the same four miRNA predictors used for the logistic model.The gray band represents the confidence interval around the linear regression line in red.The Pearson correlation between actual data and prediction is significant (p < 0.00001); see Equation (2).Prediction values and EDSS data are normally distributed (Shapiro-Wilk test and Kolmogorov-Smirnov test, p < 0.05). Table 1 . Epidemiological and clinical data of the enrolled patients.Values are expressed as the number of subjects or (mean ± SD) otherwise.
5,773.6
2024-06-01T00:00:00.000
[ "Medicine", "Biology" ]
Effects of small surface tension in Hele-Shaw multifinger dynamics: an analytical and numerical study We study the singular effects of vanishingly small surface tension on the dynamics of finger competition in the Saffman-Taylor problem, using the asymptotic techniques described in [S. Tanveer, Phil. Trans. R. Soc. Lond. A 343, 155 (1993)]and [M. Siegel, and S. Tanveer, Phys. Rev. Lett. 76, 419 (1996)] as well as direct numerical computation, following the numerical scheme of [T. Hou, J. Lowengrub, and M. Shelley,J. Comp. Phys. 114, 312 (1994)]. We demonstrate the dramatic effects of small surface tension on the late time evolution of two-finger configurations with respect to exact (non-singular) zero surface tension solutions. The effect is present even when the relevant zero surface tension solution has asymptotic behavior consistent with selection theory.Such singular effects therefore cannot be traced back to steady state selection theory, and imply a drastic global change in the structure of phase-space flow. They can be interpreted in the framework of a recently introduced dynamical solvability scenario according to which surface tension unfolds the structually unstable flow, restoring the hyperbolicity of multifinger fixed points. I. INTRODUCTION The displacement of a viscous fluid by a less-viscous one in a Hele-Shaw cell, the so-called Saffman-Taylor problem [1,2,3,4,5], is a prototypical pattern formation problem. Since the seminal work of Saffman and Taylor [1] a considerable effort has been aimed at understanding both steady and unsteady interfacial patterns formed during this flow. The Saffman-Taylor problem is the simplest member of a wide class of interfacial pattern formation problems such as free dendritic growth, directional solidification, or chemical electro-deposition [6,7,8]. As such, a theoretical understanding of Hele-Shaw flow may help elucidate generic behavior common to many pattern forming systems. Despite its relatively simple formulation and the large amount of work devoted to it, however, several aspects of interfacial dynamics in Hele-Shaw flow are still poorly understood, in particular concerning the highly nonlinear and nonlocal dynamics of finger competition. One of the main reasons for the wide interest in Hele-Shaw flow, at least from a mathematical point of view, is that explicit time-dependent solutions can be found in the case of zero surface tension [9,10,11,12]. However, it is also known that the zero surface tension Saffman-Taylor (ST) problem is ill-posed as an initial value problem [13] and finite-time singularities appear frequently [14]. Nevertheless, rather large classes of zero surface tension solutions have been found which exhibit the variety of morphologies observed both in experiments and numerical simulations. Then, the question that naturally arises is to what extent smooth (nonsingular) zero surface tension solutions reproduce the dynamics of the physical problem with finite surface tension, in particular in the limit of vanishing dimensionless surface tension, B → 0. It is well known that surface tension is a singular per-turbation to the zero surface tension problem [13]. This singular character manifests dramatically in the classical selection problem posed by Saffman and Taylor [1] and only solved three decades later [15,16,17,18], where an arbitrarily small surface tension selects out a single, stable solution from the continuum of steady single-finger B = 0 solutions. 
Another manifestation of the singular nature of surface tension which is directly relevant to the present work is its effect on the dynamics. Siegel, Tanveer and Dai [19,20] showed that interfacial evolution for the regularized problem (i.e., vanishingly small B) may differ signifficantly from that for the B = 0 problem in order one time. Then, smooth time dependent solutions of the B = 0 case do not coincide, in general, with the limiting solutions for B → 0. Accordingly, the study of finite B dynamics encounters considerable difficulties since B = 0 solutions cannot be naively used as a starting point for the study of the problem with finite B. The physical content of exact zero surface tension solutions with pole-like singularities has been recently addressed in Refs. [5,21,22] using a dynamical systems approach. Through a detailed study it has been shown that the exact zero-surface tension phase flow, considered in a global sense, is structurally unstable. In other words, the zero surface tension phase dynamics are not topologically equivalent to the phase space flow of the physical problem, regularized by surface tension. Indeed, the zero surface tension phase flow omits the necessary saddle-point structure of multifinger fixed points, which is crucial to the physical finger competition process [22]. A natural extension of the well known solvability mechanism (first applied to 'select' a finger of width 1/2 out of a continuum of solutions in the single finger case) was proposed for multi-finger solutions in [22]; this helps clarify how the introduction of surface tension modifies the global phase space structure of the flow and restores the hyperbolicity of multi-finger fixed points. The approach of Ref. [22], however, was qualitative in nature and could not quantify the extent to which zero surface tension trajectories might resemble the evolution with small surface tension. In particular it was recognized that, while some trajectories appear to be qualitatively correct for infinite time, others may have a dramatically different evolution. In particular, adding an infinitessimal surface tension could give the opposite outcome in the finger competition, that is, make the 'losing' finger for B = 0 become the 'winning' finger when B > 0, for sufficiently generic sets of initial conditions. A satisfactory analytical understanding of the problem with B = 0 has been achieved in two regimes: the initial linear instability of the flat interface followed by the weakly non-linear regime [23], and the asymptotic regime where surface tension selects the width of the single finger [15,16,17,18]. The highly non-linear intermediate regime that connects the quasi-planar interface with the asymptotic single-finger regime has mostly been studied through numerical computation. The first exhaustive numerical studies were reported by Tryggvason and Aref [24,25], who paid considerable attention to the influence of viscosity contrast on the problem and studied both single-finger and multi-finger configurations. Later, Casademunt and Jasnow [26,27] showed that the basin of attraction of the single-finger solution depends strongly on viscosity contrast and that only when one of the two fluid viscosities is negligible it can be claimed that the single finger is the universal attractor of the problem. In the present work we will restrict to this limiting case. DeGregoria and Schwartz [28,29] observed that welldeveloped fingers split when surface tension is sufficiently decreased. 
This tip-splitting instability is related to the fact that the Saffman-Taylor finger is linearly stable but non-linearly unstable, and the size of the perturbation that triggers the tip-splitting decreases quickly with surface tension [30]. Dai and Shelley [31] showed that for small B numerical computations are extremely sensitive to the precision used in the computations. As a consequence noise level has to be controlled with care in order to ensure that the computation is sufficiently accurate. Hou et al. [32] developed a numerical method that deals with the numerical stiffness of the problem in an efficient manner, aiding the ability to perform long time computations. More recently Ceniceros et al. [33,34], using very high precision arithmetic have been able to study the effect of extremely small surface tension in the circular geometry with suction, and they have observed that surface tension can produce complex ramified patters even without the presence of noise. An analytical treatment of this highly nonlinear and nonlocal free-boundary problem faces challenging difficulties. In particular, a perturbative study for small B is complicated by the ill-posedness of the zero surface tension problem. Tanveer [13] was able to overcome this obstacle by embedding the zero surface tension problem in a well-posed one. In addition, this well-posed extension of the B = 0 problem allowed Baker et al. [35] to develop a numerical method to compute the time evolution of zero surface tension dynamics in a well-posed manner. Once the B = 0 problem is formulated in a well-posed way the B = 0 case can be studied using a perturbative approach. The main result of the asymptotic perturbative theory developed by Tanveer [13] is that the effect of surface tension may be manifest in a O(1) time: the evolution of the same initial interface for B = 0 and B = 0 will in general differ after a time of order one, even if the B = 0 solution is smooth for all time. Siegel et al. [20] have extended the work of Ref. [13] to later stages of the evolution, and through numerical computation for very small values of B they showed that smooth B = 0 solutions are indeed significantly affected by the presence of arbitrarily small B in order-one time, thus confirming the predictions of the perturbative theory. The zero surface tension solutions studied by Siegel et al. [19,20] in the channel geometry were single-finger solutions with an asymptotic width λ, specifically chosen to be incompatible with selection theory for vanishing surface tension. They found that the singular effect of surface tension was to widen the finger in order to reach the selected width. The surprising feature here is that the effect of surface tension is felt in order-one time, i.e., that the time lapse for which the regularized solution approaches the unperturbed one as B → 0 is bounded. The present paper expands the work of Refs. [19,20] in the spirit of Ref. [22], towards the study of multi-finger solutions. However, unlike the studies of [19,20] we chose zero surface tension multi-finger soltuions which are compatible with asymptotic selection theory, that is, with an asymptotic finger width λ = 1/2-the selected value in the limit B → 0. In this way we isolate the intrinsic finger competition dynamics from the selection effects responsible of restoring the asymptotic width. 
Two different kinds of two-finger zero surface tension solutions are studied, and in both cases it is shown that surface tension acts as a singular perturbation to the dynamics in order-one time, modifying dramatically the late time configuration of the interface not only quantitatively but also qualitatively. Specifically we show that paths in phase space associated with zero and nonzero surface tension evolution, and indeed the global topological structure of the phase spaces, may differ appreciably, even for arbitrarily small B. In physical terms, our evidence suggests that the presence of arbitrarily small surface tension can completely alter the outcome of finger competition when compared with the zero surface tension evolution.

The paper is organized as follows: in Sec. II the equations describing Hele-Shaw flow are introduced, and a class of two-finger zero surface tension solutions relevant to two-finger competition is presented and briefly discussed. This class of solutions will be used as initial condition for numerical computation with B > 0. In Sec. III the basic features of the asymptotic theory are recalled, and the theory is applied to the zero surface tension solutions introduced in the previous section. The numerical computations with finite (but small) B are presented in Sec. IV. Sec. V discusses and summarizes the results obtained in previous sections.

II. ZERO SURFACE TENSION

In this section we present the equations which govern the interfacial dynamics in a rectilinear Hele-Shaw cell, following the formalism of [13]. We consider a class of exact, time-dependent zero surface tension solutions that are relevant to the finger competition problem, and briefly describe the solutions within this class. Consider Hele-Shaw flow in the channel geometry, in which a fluid of negligible viscosity displaces a viscous liquid. The equations governing the interfacial evolution can be conveniently formulated by first introducing a conformal map z(ζ, t) which takes the interior of the unit semicircle in the ζ plane into the region occupied by the viscous fluid in the complex plane z = x + iy, in such a way that the arc ζ = e^{is} for s ∈ [0, π] is mapped to the interface and the diameter of the semi-circle is mapped to the channel walls [38]. The mapping function z(ζ, t) has the form z(ζ, t) = −(2/π) ln ζ + i + f(ζ, t), and inside and on the unit semi-circle we require f(ζ, t) to be analytic and z_ζ(ζ, t) ≠ 0. In addition, we require that Im f = 0 on the real diameter of the semi-circle. This latter condition ensures that z maps the diameter to the channel walls. Under suitable assumptions (see [13]) the Schwartz reflection principle may be applied to show that f is analytic and z_ζ ≠ 0 for |ζ| ≤ 1.

The effective velocity field, averaged across the plate gap, is a two-dimensional potential flow satisfying Darcy's law u = ∇ϕ. Here ϕ is a velocity potential defined by ϕ = −(b²/12µ) p, where p is the pressure, µ is the viscosity and b is the gap width. Under the assumption of incompressibility (∇ · u = 0) the potential satisfies Laplace's equation ∇²ϕ = 0. Incompressibility also implies the existence of a stream function ψ. Therefore, one can define a complex velocity potential W(z, t) = ϕ + iψ which is analytic for z in the fluid region of the channel. Its form as a function of ζ is given in [13] in terms of a function ω(ζ, t), which is analytic inside the unit circle. The condition that there is no flow through the walls implies that Im ω = 0 must hold on the real diameter of the unit semi-circle.
In the absence of surface tension, ω = 0 (see Eq. (6)). At the interface we impose the usual boundary conditions. The kinematic boundary condition states that the normal component of fluid velocity at a point on the interface equals the normal velocity of the interface at that point. The dynamic boundary condition specifies that the pressure jump across the interface is balanced by surface tension. The parameter B appearing in the dynamic condition is the nondimensional surface tension, constructed from the surface tension T, the fluid velocity V at infinity, the gap width b, the viscosity µ and half the cell width a. The equations given in (1)-(5) are in nondimensional form, with lengths and velocities nondimensionalized by a and V, respectively.

When B = 0 it is well known that pole singularities in z_ζ (i.e. in f_ζ) present in the exterior of the unit disk are preserved under the dynamics, i.e., such singularities are neither created nor destroyed, although the location of those which are initially present will evolve with time. Exact B = 0 solutions consisting of a collection of pole singularities with constant amplitude have been the focus of extensive studies (see e.g. [9]). The simplest such solution leading to nontrivial finger competition consists of a pair of singularities in the upper half of the region |ζ| > 1, located at positions that are symmetric with respect to the y-axis. A second pair of poles conjugate to the first pair is required to satisfy the symmetry restriction (1). This exact solution is given by Eq. (7) (see [5,21,22]), where λ and ǫ are real constants with 0 < λ < 1 and ǫ ≥ 0, and d(t) is real. The singularity locations are given by the complex parameter ζ_s(t), which satisfies a simple differential equation given in [21]. Analyticity of f(ζ, t) in the unit circle implies that |ζ_s(t)| > 1. We employ the convention that ζ_s(t) is a complex number in the first quadrant. The amplitudes of the singularities, given here by the numbers 1 − λ + iǫ and its conjugate, are chosen so that the asymptotic form of the solution consists of one or two steadily propagating fingers of total width λ. The parameter ǫ determines the nature of the finger competition for B = 0.

The solution (7) describes generically two different fingers of the nonviscous phase penetrating the viscous one. In the linear regime |ζ_s(t)| ≫ 1 the interface consists of a single bump or finger, and as time increases a second finger may develop and grow, depending on the value of Arg ζ_s(0). We summarize the features of the solution (7) that are most relevant to the study of finger competition. Consider first ǫ = 0. In this case the asymptotic configuration consists of one or two fingers of total width λ, depending on the initial condition. The singularities move toward the unit disk, with the limit as t → ∞ denoted by ζ_s(t) → e^{iθ}. When θ = 0 the asymptotic configuration is a single Saffman-Taylor finger growing in the center of the channel (this asymptotic configuration is denoted ST(R)), for θ = π/2 it is a 'side' Saffman-Taylor finger, i.e. a pair of half-fingers of total width λ with tips located at the cell walls (denoted ST(L)), and for θ = π/4 it is a 'double' Saffman-Taylor finger, namely two identical fingers of width λ/2 with tips at y = 0, ±1 (denoted 2ST). For any other value of θ the asymptotic configuration consists of two unequal steadily growing fingers.
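Before proceeding, it may help to illustrate how an interface profile is extracted from a mapping of this type: the map is evaluated on the arc ζ = e^{is}, s ∈ [0, π], and its real and imaginary parts give the physical interface. In the minimal Python sketch below the analytic part f is a deliberately simple, made-up placeholder rather than the actual pole decomposition of Eq. (7); only the evaluation step is being shown.

```python
import numpy as np

def interface_from_map(f, n=400):
    """Sample the physical interface from a mapping of the assumed form
    z(zeta) = -(2/pi) ln(zeta) + i + f(zeta) on the arc zeta = exp(i s),
    s in [0, pi].  f is a user-supplied analytic function (placeholder here)."""
    s = np.linspace(0.0, np.pi, n)
    zeta = np.exp(1j * s)
    z = -(2.0 / np.pi) * np.log(zeta) + 1j + f(zeta)
    return z.real, z.imag          # x(s), y(s) along the interface

# purely illustrative analytic part: one weak logarithmic term with a
# singularity outside the unit disk (|zeta_s| > 1), loosely mimicking Eq. (7)
zeta_s = np.sqrt(20) * np.exp(1j * np.pi / 12)
x, y = interface_from_map(lambda z: 0.3 * np.log(1.0 - (z / zeta_s) ** 2))
```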
The two-finger asymptotic configuration is a consequence of the continuum of fixed points that is present in the phase portrait of the dynamical variables which govern the shape of the interface, namely (Re ζ_s(t), Im ζ_s(t)). To make contact with the notation of [21] we describe the singularity position in terms of the transformed variable α introduced there. Then the planar interface corresponds to α = 0, the center Saffman-Taylor finger to α = −i, the side Saffman-Taylor finger to α = i, and the double Saffman-Taylor finger to α = 1. Figure 1 shows the phase portrait of the dynamical system obtained from the substitution of the mapping defined by Eq. (7) into the evolution Eqs. (5,6) for B = 0, using the dynamical variables (α′, α″). The asymptotic states with two advancing fingers correspond in the dynamical system (α′, α″) to a continuum of fixed points given by |α| = 1. Therefore, for ǫ = 0 the solution (7) does not exhibit finger competition. In addition, it is important to note that the evolution of (7) with ǫ = 0 is free of finite time singularities, i.e., z_ζ ≠ 0 in the domain |ζ| ≤ 1 for all time.

For ǫ ≠ 0 the continuum of fixed points is removed (see Fig. 1), as is the double Saffman-Taylor finger fixed point. Consequently, the solution to Eq. (7) exhibits 'successful' competition, in the sense that the asymptotic interface shape consists of a single Saffman-Taylor finger or side Saffman-Taylor finger. The price to pay is the appearance of finite time singularities for a certain subset of initial conditions, in the form of a zero of z_ζ impacting the unit disk (this is a generic feature of conformal map solutions whose z_ζ is composed of a finite number of pole singularities; see [9]). Then, only the subset of initial conditions free of finite time singularities is capable of sustaining finger competition all the way to the t → ∞ outcome. Nevertheless, one may ask whether the class of B = 0 solutions that are free of finite time singularities may describe, at least qualitatively, the physical finger competition for positive surface tension in the limit B → 0. In the following sections we focus on the class of initial data for which the B = 0 solutions are devoid of finite time singularities, and examine the B > 0 dynamics. We develop a general theory for how the presence of positive surface tension affects the outcome of finger competition. This will enable us to predict the winner of finger competition, i.e., the eventual asymptotic state. Most interestingly, we find instances in which the presence of arbitrarily small surface tension leads to dramatically different outcomes in finger competition when compared with the zero surface tension evolution.

III. ASYMPTOTIC THEORY

Little is known about the effect of finite (but small) surface tension B on the dynamics of zero surface tension multifinger solutions, and in particular on the class of exact solutions (7). For single finger configurations, steady state selection theory predicts that the finger cannot have an arbitrary width. Indeed, for vanishing surface tension B → 0 the width λ = 1/2 is selected, asymptotically in time. Thus, it is clear that surface tension has a critical influence on single finger solutions with λ ≠ 1/2. The nature of this influence in the limit B → 0 has been investigated by Siegel, Tanveer and Dai [19,20], who present evidence that zero surface tension single finger solutions with λ < 1/2 are significantly perturbed by the inclusion of an arbitrarily small amount of surface tension in order one time.
The effect of surface tension is to increase the finger width until it reaches the width predicted by selection theory. Consider now the effect of small surface tension on the exact (B = 0) two finger solution (7). When 0 < B ≪ 1 the asymptotic perturbation theory developed in Refs. [13,19,20] can be applied. This perturbation theory describes the effects of the introduction of a small amount of surface tension on initial data z(ζ, 0) specified in the extended complex plane, i.e., in a domain including the 'unphysical' region |ζ| > 1 (the extended domain is required to make the B = 0 problem well-posed). The effect of finite B is most important near isolated zeros and singularities of z_ζ(ζ, 0), where a regular perturbation expansion in B breaks down. (Away from these points the perturbation expansion is regular, at least initially.) For the class of solutions (7) we are discussing, the isolated singularities of z_ζ(ζ, 0) are simple poles. The theory suggests that the introduction of finite surface tension modifies the poles (ζ_s) by transforming them into localized clusters of singularities of exponent −4/3, but these clusters move at leading order according to the B = 0 dynamics. Thus the effect of one of these clusters on the interface is approximately equivalent to that of the unperturbed (B = 0) pole-like singularity that has given birth to it.

The influence of surface tension on the zeros of z_ζ(ζ, 0) is more complex. Each initial zero instantly gives birth to two localized inner regions, i.e., regions where the B = 0 and B > 0 solutions differ by O(1). One of the two inner regions moves, at least initially, according to the B = 0 dynamics of the original zero ζ_0 [39]. Since the particular zero surface tension solutions considered here have zeros that are either bounded away from the unit disk for all time or impact the unit disk only after long times, the inner region around ζ_0(t) has a negligible influence on the interface. The second inner region created around ζ_0(0) moves differently. The theory suggests that this inner region consists of a cluster of singularities, whose size scales like B^{1/3}. Unlike the case discussed above, this second inner region moves away from the B = 0 zero since, to leading order in B, it moves like a singularity of the zero surface tension problem, and this speed is different from the speed of the zero ζ_0(t) which spawned the cluster. As this singularity cluster approaches the physical domain it may perturb the flow, and the interface shape may differ significantly from the B = 0 shape. The location of this singularity cluster will be denoted by ζ_d(t), and following [13] we shall call it the daughter singularity. We emphasize that the dynamics of the daughter singularity cluster is determined at lowest order solely by the B = 0 solution z_0(ζ, t), at least until it arrives at the surroundings of the unit circle, and therefore can be simply computed once the initial locations of the zeros of z_ζ(ζ, 0) are determined. The daughter singularity evolution equation, Eq. (8), is given in [13] in terms of a function q_1^0, where the superscript 0 denotes that the function evaluations are done using the corresponding B = 0 solution. The function −q_1^0(ζ, t) also gives the characteristic velocity of a pole or branch point singularity of z_ζ(ζ, t) located at position ζ in the region |ζ| > 1. The initial position is ζ_d(0) = ζ_0(0), since each zero ζ_0(0) of the zero surface tension solution gives birth to a daughter singularity. From Eq.
(8) it can be shown [13] that d|ζ_d|/dt < 0, so that the daughter singularity approaches the unit circle and can impact it in a finite time t_d, the daughter singularity impact time, satisfying |ζ_d(t_d)| = 1. In the limit B → 0, the daughter singularity impact time t_d signals the time when the effects of the surface tension are felt on the physical interface. For times larger than t_d the B = 0 interface and the B → 0 interface are expected to differ significantly.

For the family of exact B = 0 solutions the mapping function (7) has four pole-like singularities, ±ζ_s and their conjugates, and four zeros ±ζ_{0+} and ±ζ_{0−} of z_ζ, whose locations are given in Eq. (10). For the particular case λ = 1/2 this solution presents only one pair of zeros ±ζ_0, located as given in Eq. (11). In the following it will be useful to refer to the real quantity β which appears in (10) and (11). Depending on the value of λ the initial data may have zeros on both the real and imaginary axes, or all the zeros may lie on a single axis. This difference has significant consequences in the finite surface tension dynamics. More specifically, when λ < 1/2 the zeros described in (10a) and (10b) are located on both the real and imaginary axes of |ζ| > 1, namely at ±|ζ_{0+}| and ±i|ζ_{0−}|. The situation is different for λ > 1/2, which is further divided into two cases, depending on whether β² + 4(1 − 2λ)|ζ_s|² > 0 or < 0. In the former case all four singularities lie on the real axis (for β > 0) or on the imaginary axis (for β < 0). In the latter case the four zeros are located off the axes in conjugate pairs, i.e. at ±ζ_0 and their conjugates. Finally, when λ = 1/2 the solution (7) has only two zeros, located on the real axis at ±|ζ_s|²/√(−2β) when β < 0 and on the imaginary axis at ±i|ζ_s|²/√(2β) when β > 0. Note that for λ = 1/2 the B = 0 solution has two fewer zeros than for λ ≠ 1/2.

The initial zero locations described above have a critical bearing on whether the daughter singularity will impact the unit disk [40]. Although all daughter singularities approach the unit disk, their impact may be shielded by the presence of an inner region corresponding to a pole singularity. More precisely, since ζ_d and ζ_s obey the same dynamical equation, they will move together if they get close enough to each other. However, the inner region around a pole moves to leading order like the B = 0 pole, i.e., it moves exponentially slowly toward |ζ| = 1 when |ζ_s| − 1 ≪ 1, and does not impinge upon the unit disk in finite time [9]. In this case the O(B^{1/3}) inner region around the daughter singularity will not affect the dynamics on |ζ| = 1, at least until t = O(− ln B). Before this time, we expect the interface to be uninfluenced by the presence of the daughter singularity. This shielding mechanism is discussed in the context of single fingers in [20].

Knowledge of the t → ∞ asymptotic state and the initial locations of zeros can be used to ascertain whether shielding can occur. The B = 0 asymptotic state corresponds to ζ_s²(t → ∞) → ±1. Thus, for λ < 1/2, only one pair of daughter singularities may be shielded, never both, so at least one pair of daughter singularities will impinge on the unit disk. The daughter singularities will also not be shielded when λ > 1/2 and β² + 4(1 − 2λ)|ζ_s|² < 0. However, for λ > 1/2 and β² + 4(1 − 2λ)|ζ_s|² > 0 it is possible for all the daughter singularities to be shielded, since they lie on a single axis. The daughter singularities can also be completely shielded when λ = 1/2. The different possibilities are schematically depicted in Fig. 2.
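In practice, the impact time t_d is obtained by integrating the daughter singularity trajectory until it reaches the unit circle. The Python sketch below assumes an evolution law of the form dζ_d/dt = −q_1^0(ζ_d, t), as suggested by the discussion above, and uses a made-up placeholder for q_1^0; in an actual computation q_1^0 must be evaluated from the exact B = 0 mapping, following [13].

```python
import numpy as np
from scipy.integrate import solve_ivp

def q1_zero(zeta, t):
    """Placeholder for q_1^0(zeta, t).  In a real computation this quantity is
    evaluated from the exact B = 0 solution; here we use a simple illustrative
    function that drives |zeta| toward the unit circle (an assumption)."""
    return 0.5 * zeta / abs(zeta)

def daughter_rhs(t, y):
    zeta_d = y[0] + 1j * y[1]
    dz = -q1_zero(zeta_d, t)          # assumed law: d(zeta_d)/dt = -q_1^0(zeta_d, t)
    return [dz.real, dz.imag]

def impact(t, y):
    """Event: daughter singularity reaches the unit circle, |zeta_d| = 1."""
    return np.hypot(y[0], y[1]) - 1.0
impact.terminal = True
impact.direction = -1

zeta_d0 = np.sqrt(20) * np.exp(1j * np.pi / 12)   # example initial zero location
sol = solve_ivp(daughter_rhs, (0.0, 50.0),
                [zeta_d0.real, zeta_d0.imag], events=impact,
                rtol=1e-10, atol=1e-12)

if sol.t_events[0].size:
    print(f"daughter singularity impact time t_d = {sol.t_events[0][0]:.4f}")
else:
    print("no impact within the integration window")
```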
We have numerically computed the daughter singularity impact time t_d for various values of λ and ǫ, using initial conditions close to the planar interface, |ζ_s|² = 20, and various values of Arg[ζ_s²]. Figure 3 shows the phase portrait for different values of λ and ǫ with the daughter singularity impact indicated. From the plots it is immediately seen that for λ < 1/2 at least one daughter singularity always hits the unit circle, while for λ ≥ 1/2 some trajectories are free from daughter singularity impact. In addition, it is observed that for fixed λ a larger value of ǫ causes the daughter singularities to hit in shorter times (or, equivalently, at less developed fingers) than a smaller value of ǫ, and for fixed ǫ larger λ implies larger impact times. We have also checked that the daughter singularity impact occurs well before a finite time singularity, i.e., the impact of a zero of z_ζ. Thus, the effect of surface tension is significant well before the curvature in the zero surface tension solution becomes large.

It is noted that the λ dependence of the daughter singularity impact is consistent with the results of steady state selection theory [15,16,17,18]. According to selection theory, for small B the possible values of λ are discretized: λ must satisfy the relation λ = λ_n(B), given to leading order by Eq. (12), where n parameterizes the branch of solutions. Note that λ_n > 1/2 for all n. The steady finger shape is to leading order a Saffman-Taylor finger, with the above values of λ_n substituted for the width λ. On the other hand, for ǫ > 0 the asymptotic state of (7) is a Saffman-Taylor finger of width λ. From Eq. (12) it is clear that there exists a steady solution with width λ_n(B) close to that of a Saffman-Taylor finger of arbitrary width λ > 1/2. Thus the shielding of the daughter singularity, which leads to the persistence of a Saffman-Taylor solution with λ > 1/2 over long times, is consistent with steady state selection theory [41]. In contrast, for λ < 1/2 there are no nearby steady solutions. Thus, a Saffman-Taylor finger with λ < 1/2 cannot persist over a long time. We see that the impact of a daughter singularity provides a mechanism for the onset of finger competition, finger widening, and selection of a width λ > 1/2.

For ǫ = 0 the scenario is similar, except there is an added class of exact B > 0 solutions. Magdaleno and Casademunt [36] have shown that two-finger solutions composed of steadily propagating but unequal fingers do exist for small nonzero B. The introduction of a small nonzero surface tension selects a discrete set of solutions from the continuum of fixed points of the B = 0 phase portrait. The solutions are parameterized by the total width of the fingers λ = λ_1 + λ_2 and the relative width q = λ_1/λ, and the introduction of finite B discretizes the possible values of the parameters. In particular, they must satisfy a condition of the form λ = λ_n(B) and q = q_{n,m}(B), where n and m are integers. The expression for λ_n(B) at lowest order is equivalent to Eq. (12), but with different coefficients C_n. The shape of these solutions is given to leading order (in the limit t → ∞) by (7) with the allowed values of λ_n(B) substituted for the width λ. Again, λ_n(B) > 1/2, and the consistency between daughter singularity impacts and steady state selection theory follows as above.
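As a side note, the shielding criteria summarized before Fig. 2 can be encoded in a few lines. The sketch below, which assumes that λ, β and |ζ_s| are already known from the initial data, merely reproduces that case analysis; it does not compute β or the zero locations themselves.

```python
import math

def shielding_class(lam, beta, abs_zeta_s):
    """Classify whether all daughter singularities can possibly be shielded,
    following the case analysis given in the text (lam is the width lambda)."""
    disc = beta ** 2 + 4.0 * (1.0 - 2.0 * lam) * abs_zeta_s ** 2
    if lam < 0.5:
        # zeros on both axes: at most one pair can be shielded
        return "impact guaranteed: at least one pair of daughter singularities hits"
    if lam > 0.5 and disc < 0.0:
        # zeros off the axes in conjugate pairs: no shielding
        return "impact guaranteed: daughter singularities are not shielded"
    # lam > 0.5 with disc > 0, or lam == 0.5: all zeros on a single axis
    return "full shielding possible"

print(shielding_class(0.5, -1.0, math.sqrt(20)))   # example: the lambda = 1/2 case
```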
We conjecture that the outcome of interfacial shape evolution after the daughter singularity impinges is in general independent of the particular finger on which the impact first occurs, i.e., independent of the point at which ζ_d(t) impacts on |ζ| = 1. More specifically, we surmise that impact on either the shorter (trailing) or longer (leading) finger retards the velocity of that finger, and is accompanied by the widening of the leading finger, so as to maintain a constant fluid flux at infinity. The widened leading finger then shields the trailing finger, preventing it from further growth. Thus, the finger which is leading at the time of the daughter singularity impact 'wins' the competition, in the sense that it will evolve for t → ∞ to the ST steady finger. To examine this conjecture and study the dynamics of finger competition with finite (but small) surface tension we have numerically computed the evolution of an interface with initial conditions given by the conformal mapping Eq. (7) close to the planar interface (|ζ_s(0)|^{−2} ≪ 1). The results are reported in the next section.

IV. NUMERICAL RESULTS

Numerical computations have been performed for B > 0, using an initial interface corresponding to the explicit B = 0 solutions discussed in Sec. II. The effect of positive surface tension on this class of solutions is explored for various values of ǫ and a variety of initial pole positions. We employ the numerical method introduced by Hou et al. [32] and used in other studies of small surface tension effects in Hele-Shaw flow [19,20,33,34]. The method is described in detail in Ref. [32]. It is a boundary integral method in which the interface is parameterized at equally spaced points by means of an equal-arclength variable α. Thus, if s(α, t) measures arclength along the interface then the quantity s_α(α, t) is independent of α and depends only on time. The interface is described using the tangent angle θ(α, t) and the interface length L(t), and these are the dynamical variables instead of the interface x and y positions. The evolution equations are written in terms of θ(α, t) and L(t) in such a way that the high-order terms, which are responsible for the numerical stiffness of the equations, appear linearly and with constant coefficients. This fact is exploited in the construction of an efficient numerical method, i.e., one that has no time step constraint associated with the surface tension term yet is explicit in Fourier space. We have used a linear propagator method that is second order in time, combined with a spectrally accurate spatial discretization.

Results in this section are specified in terms of the scaled variables t̃ = πt, B̃ = π²B, x̃ = πx, ỹ = πy (13), instead of the original ones used in previous sections. The number of discretization points is chosen so that all Fourier modes of θ(α, t) with amplitude greater than round-off are well resolved, and as soon as the amplitude of the highest-wavenumber mode becomes larger than the filter level the number of modes is increased, with the amplitude of the additional modes initially set to zero. The time step ∆t is decreased until an additional decrease does not change the solution to plotting accuracy or lead to any significant differences in the quantities of interest. In a typical calculation 512 discretization points are initially used, and the initial time step is ∆t = 5 · 10^{−4}.
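To convey the flavour of this kind of scheme, the following sketch performs one generic integrating-factor (linear-propagator) step for θ(α, t) in Fourier space, together with a simple mode filter of the type referred to above. The linear symbol, the nonlinear remainder and the interpretation of the filter as Krasny-style mode truncation are all placeholders and assumptions: the actual operators of Hou et al. [32] for Hele-Shaw flow are more involved, and the real scheme is second order in time rather than the first-order step written here. Only the filter-level values are taken from the text.

```python
import numpy as np

FILTER_LEVEL = 1e-13   # filter level quoted in the text (1e-27 with quadruple precision)

def linear_symbol(k, L, B):
    """Placeholder symbol for the stiff, constant-coefficient part of the
    theta equation (assumed here to be a |k|^3 surface-tension-like damping)."""
    return -B * (2.0 * np.pi / L) ** 3 * np.abs(k) ** 3

def step_theta(theta, L, B, dt, nonlinear):
    """One integrating-factor step: the stiff linear term is treated exactly in
    Fourier space, the user-supplied lower-order terms are treated explicitly."""
    k = np.fft.fftfreq(theta.size, d=1.0 / theta.size)   # integer wavenumbers
    th_hat = np.fft.fft(theta)
    n_hat = np.fft.fft(nonlinear(theta, L))
    prop = np.exp(linear_symbol(k, L, B) * dt)
    return np.real(np.fft.ifft(prop * (th_hat + dt * n_hat)))

def mode_filter(theta, level=FILTER_LEVEL):
    """Zero every normalized Fourier mode whose amplitude is below the filter
    level, suppressing the spurious growth of round-off noise."""
    c = np.fft.rfft(theta) / theta.size
    c[np.abs(c) < level] = 0.0
    return np.fft.irfft(c * theta.size, n=theta.size)

def needs_more_modes(theta, level=FILTER_LEVEL):
    """True once the highest-wavenumber mode exceeds the filter level,
    signalling that the number of discretization points should be increased."""
    c = np.fft.rfft(theta) / theta.size
    return np.abs(c[-1]) > level

# illustrative use with 512 points and dt = 5e-4, as quoted in the text
alpha = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
theta = 0.1 * np.sin(alpha)
theta = step_theta(theta, L=2.0 * np.pi, B=1e-3, dt=5e-4,
                   nonlinear=lambda th, L: np.zeros_like(th))
theta = mode_filter(theta)
```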
For small values of surface tension numerical noise is a major problem, and the spurious growth of short-wavelength modes induced by round-off error must be controlled. To help prevent this noise-induced growth at short wavelengths, spectral filtering [37] is applied. Additionally, we minimize noise effects, and also assess the time at which these effects become prevalent, by employing extended precision calculations, as described below.

Our main interest is to uncover the role of surface tension in the dynamics of finger competition. To isolate the features of finger competition from those of width selection, we will concentrate on B = 0 solutions with λ = 1/2, the value selected by surface tension in the limit B → 0. Since the B = 0 dynamics for ǫ = 0 and ǫ ≠ 0 are quite different, the numerical results for the two cases will be presented separately.

A. Solutions with ǫ = 0

We first consider parameter values λ = 1/2 and ǫ = 0. A typical set of interfacial profiles is shown in Fig. 4. The initial data is given by the mapping function Eq. (7), with λ = 1/2, ǫ = 0, d(0) = 0 and ζ_s²(0) = 20 exp(iπ/6). With this value of ζ_s²(0) the initial interface is well inside the linear regime. Evolutions are shown for different values of B̃, and the B = 0 interface evolution is also plotted for comparison. In all these evolutions the filter level is set to 10^{−13}, although later we shall make comparisons to profiles computed at higher precision. For the largest value of surface tension the computed B > 0 and the exact B = 0 solutions first differ appreciably at the seventh curve, corresponding to t̃ ≈ 3. At this point the velocity of the small finger (at the channel sides) begins to decrease and it is clearly left behind when compared with the small finger evolution in the B = 0 solution. Eventually, the advance of the small finger is completely suppressed and the larger finger widens to attain a width close to 1/2 of the channel. For a smaller value of surface tension, for instance B̃ = 0.001, the evolution displays qualitatively the same behavior. The B > 0 interface differs appreciably from the B = 0 one slightly later than before (i.e., at the eighth curve), and the region where the two solutions differ most is to some extent more localized around the small finger than for larger values of B. Additionally, for this value of surface tension the effect of numerical noise is clearly exhibited in the interfacial profiles. Here the tip-splitting and side-branching activities are a clear effect of numerical noise, as can be easily checked by redoing the computation with a different noise filter level.

In order to suppress or delay the branching induced by numerical noise that appears for small values of surface tension it is necessary to use higher precision arithmetic, e.g. quadruple precision (128-bit arithmetic). The filter level can then be reduced by a large amount and the onset of spurious oscillations is substantially delayed. Figure 5 shows the effect of reducing the filter level to 10^{−27}. The B = 0 solution is plotted, as well as the computation with double precision. For B̃ = 0.001 the branching is totally suppressed, at least for the times we have computed, but for smaller values of B̃ the use of quadruple precision is only able to delay the branching and not totally suppress it. The quadruple precision computation confirms the results observed with lower precision: the introduction of finite (but small) surface tension results in the suppression of the small finger. From Fig.
5 one can also see that for long times, when the interface is clearly affected by numerical noise (in the double precision curve), the noise-induced branching is restricted to the large finger, and the small finger is basically unaffected by noise. This observation suggests that the small finger shape, as well as its tip velocity and tip curvature, can be trusted even when the large finger has developed tip-splittings and side-branchings due to the spurious growth of round-off error.

Figure 6 shows the tip velocity of both fingers versus t̃ for decreasing values of surface tension. It can be seen that the velocity of the large finger is only slightly affected by surface tension, whereas the velocity of the small finger is substantially reduced by the inclusion of finite B. As B̃ is decreased the tip velocity of the small finger is more faithful to the B = 0 evolution before the daughter singularity impact (shown by a cross), and clearly veers away from the B = 0 velocity later in the evolution, consistent with asymptotic theory. Note that at the smallest value of B̃ the tip velocity of the large finger drastically differs from the B = 0 velocity at late times. This discrepancy is a manifestation of noise effects in the neighborhood of the large finger tip. However, as previously seen, the small finger is basically unaffected by noise at the times we have plotted.

In order to further verify that the daughter singularity impact is responsible for the observed change in the small finger tip speed we follow the scheme introduced in [20]. Define t_p as the time when the computed tip velocity differs by p from the B = 0 tip velocity. According to asymptotic theory this t_p will be a linear function of B^{1/3} in the limit B → 0 as long as p is small enough. Figure 7 shows t̃_p versus B̃^{1/3} for various values of p, and it can be seen that t̃_p exhibits the predicted behavior. Moreover, we have extrapolated the B = 0 value of t̃_p using the two points of lowest B̃, and the result is very close to t̃_d, whose value is represented by a cross. We conclude that the impact of the daughter singularity is associated with the dramatic change of the B > 0 solution when compared to the zero surface tension solution, reducing the velocity of the small finger and eventually suppressing it. In contrast, for the B = 0 dynamics the small finger 'survives', propagating with the same asymptotic speed as the larger finger. Note that the average interface advances at unit velocity, and a tip velocity below 1 implies that the finger is retreating in the reference frame of the average interface.

In summary then, our numerical results show that the computed interface for B ≠ 0 follows the B = 0 evolution for an O(1) time interval, roughly corresponding to the daughter singularity impact time, and that at later times the velocity of the small finger decreases while the large finger widens. The small finger eventually comes to a halt and the larger (leading) finger reaches an asymptotic width slightly above 1/2, the width singled out by selection theory. It is noted that for the initial condition we have studied the daughter singularity impact takes place on the tip of the small finger. Therefore, the influence of surface tension on the interface should be significant first around the impact point, that is, the small finger tip. Our numerical results show that in fact this is the case; the initial effect of the daughter singularity impact is to slow and then completely stop the growth of the small finger.
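Returning briefly to the t̃_p diagnostic described above: the extrapolation is a short linear fit in B̃^{1/3}. The numbers in the sketch below are invented placeholders used only to show the procedure, not values taken from the computations.

```python
import numpy as np

# illustrative (made-up) data: surface tensions and measured divergence times t_p
B_tilde = np.array([1e-2, 1e-3, 1e-4])
t_p = np.array([2.6, 2.9, 3.05])

x = B_tilde ** (1.0 / 3.0)
# linear fit t_p = a * B^(1/3) + b through the two smallest-B points, as in the
# text; the intercept b is the extrapolated B -> 0 value to compare with t_d
a, b = np.polyfit(x[-2:], t_p[-2:], 1)
print(f"extrapolated t_p(B -> 0) = {b:.3f}  (compare with the daughter impact time)")
```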
Later on, as the singularity cluster centered at ζ_d spreads over the unit circle, the effect of surface tension is felt by the whole interface and the large finger widens to reach the selected width. We have also studied the finite surface tension dynamics for a more general class of initial conditions. More precisely, we have studied initial conditions of the form ζ_s²(0) = 20 exp(inπ/12) where n = 0, ±1, ..., ±6, and have obtained the same qualitative results as in the case previously studied, namely that the presence of small surface tension suppresses the growth of the finger which is trailing at the time of daughter singularity impact.

In order to compare the B = 0 and the B ≠ 0 dynamics in a compact and global way we have plotted the phase portrait for B = 0 using the tip velocities v_1, v_2, measured in the laboratory frame, as dynamical variables. A comparison between the dynamics for B = 0 and B ≠ 0 is then straightforward, since the trajectories can be plotted together and compared. In addition, the tip velocity is a useful variable because it contains geometric information; specifically, the inverse of the tip velocity is equal to the width of the finger in the asymptotic (t → ∞) regime. It is important to note that (v_1, v_2) are dynamical variables for the B = 0 problem, so that the plot of the zero surface tension trajectories onto the space (v_1, v_2) is a true phase portrait. On the other hand, (v_1, v_2) are not state variables of the problem with finite surface tension, so in this case we simply obtain a projection onto the (v_1, v_2) space of the original B ≠ 0 trajectory, which is embedded in the infinite-dimensional phase space of interface configurations.

Figure 8 shows the phase portrait for B = 0 together with the tip velocities obtained from the initial conditions described above for B̃ = 0.01. From the figure it is evident that the introduction of finite surface tension has substantially changed the global phase dynamics of the problem. Only one B̃ = 0.01 trajectory connects the planar interface (1, 1) and the 2ST point (2, 2), corresponding to the unsteady double Saffman-Taylor finger. Any other B̃ = 0.01 trajectory ends in one of the two ST finger points, ST(L) at (2, 0) and ST(R) at (0, 2). In contrast, the (2, 2) point, equivalent to the continuum of fixed points present with the (α′, α″) or (Re ζ_s, Im ζ_s) variables, has a finite basin of attraction for B = 0. The introduction of finite surface tension has dramatically changed the zero surface tension (v_1, v_2) trajectories, to the extent that the B = 0 phase portrait and the B ≠ 0 projection are not topologically equivalent. This result is not a complete surprise, since it was anticipated from the structural instability of the dynamical system governing the evolution of Eq. (7) for ǫ = 0 [21]. A more dramatic example of topological inequivalence of phase portraits will be given in the next subsection, when we consider the case ǫ ≠ 0.

Although the use of the variables (v_1, v_2) has allowed us to project the finite surface tension dynamics onto the zero surface tension phase portrait, this projection has one major limitation: it only considers a local quantity, the tip velocity. We have also considered a projection that takes more global properties of the interface into account.
Specifically, given a computed B ≠ 0 solution for an initial condition of the form (7), one can use a suitable norm to define a 'distance' between the computed interface and the B = 0 interface obtained from the mapping function Eq. (7). We choose this 'distance' to be the area enclosed between the two interfaces at a given time. Additionally, we define a projection of the B ≠ 0 interface onto the B = 0 phase space (with phase space variables (Re ζ_s, Im ζ_s)) by selecting the value of ζ_s that minimizes the 'distance' between the two interfaces, with the restriction that the position of the two mean interfaces must be the same. The latter condition ensures that the projection satisfies mass conservation.

Figure 9 shows the B = 0 phase portrait and the corresponding projected evolution for surface tension B̃ = 0.01. Again, the plot clearly shows that the introduction of finite surface tension modifies the B = 0 phase portrait. The projected trajectories are initially close to the B = 0 dynamics, but for well-developed fingers (corresponding to |α| ∼ 1) the projection departs from the B = 0 trajectory towards the Saffman-Taylor fixed point, located at α′ = 0, α″ = 1. The projected trajectory only remains close to the corresponding B = 0 trajectory when the latter evolves towards the Saffman-Taylor fixed point. More precisely, the continuum of fixed points present for B = 0 has been removed by surface tension and the Saffman-Taylor fixed point is the universal attractor of the dynamics for finite surface tension. In Fig. 10 the projection for decreasing values of B̃ is plotted, using the initial condition ζ_s²(0) = 20 exp(iπ/6). As B̃ is decreased the projected trajectory gets closer to the B = 0 trajectory, but as it approaches the point where the daughter singularity impinges on the unit circle (this point is signaled by a cross) the projection departs from the B = 0 trajectory and approaches the Saffman-Taylor fixed point, consistent with asymptotic theory.

B. Solutions with ǫ ≠ 0

The continuum of fixed points present for ǫ = 0 is absent for ǫ ≠ 0, but in this case finite-time singularities in the form of zeros of z_ζ impinging on the unit disk do appear for some initial conditions. Therefore, we can expect that the effect of finite surface tension will be somewhat different than for ǫ = 0. Firstly, the presence of surface tension should eliminate finite-time singularities, and secondly, finite B could modify the basin of attraction for the two attractors of the B = 0 dynamical system, namely the side Saffman-Taylor finger and the center Saffman-Taylor finger. To explore this, we have performed computations with finite surface tension for a set of initial conditions with ǫ ≠ 0, and plotted the resulting tip-velocity trajectories together with the B = 0 phase portrait. From the plot one can see that most B̃ = 0.01 velocity trajectories follow (at least qualitatively) their B = 0 counterparts, in the sense that they end up in the same fixed point. However, the second, third and fourth trajectories (counting from the upper left trajectory in clockwise direction) differ significantly from their B = 0 counterparts. The second B̃ = 0.01 trajectory moves apart from the B = 0 solution simply because the latter develops a finite-time singularity, which is regularized by the introduction of finite surface tension. However, the third and fourth trajectories exhibit a quite surprising behavior: the computed interface with B̃ = 0.01 ends up in a different fixed point than the exact B = 0 solution, despite the fact that the B = 0 solution is smooth for all time and has the asymptotic width that would be selected by vanishing surface tension.
In order to get further insight into this behavior we have computed the evolution for decreasing values of B̃ using the specific initial pole position ζ_s²(0) = 20 exp(−iπ/6), with λ = 1/2 and ǫ = 0.1. Quadruple precision has been used when necessary. Figure 12 shows the evolution for four values of the surface tension parameter, together with the B = 0 solution. The differences between the two interfaces for long times are readily apparent. When B = 0 the finger in the central position stops growing and the side finger wins the competition, whereas for B > 0 we encounter the opposite situation: the central finger surpasses the side finger and wins the competition. For the smaller values of B the finger on the sides has not quite stopped growing when the computation is stopped, although its tip speed shows a marked decrease over that for B = 0 and is less than that of the central finger. The side finger tip speed is also decreasing at the final stage of the computation. The tip speed trend in the limit B → 0 is further illustrated in Fig. 13. This figure shows the tip speed versus time of each finger for a sequence of decreasing B. The plot suggests that upon impact of the daughter singularity the side finger velocity levels off and eventually decreases, whereas the velocity of the center finger is nearly unaffected and continues to increase. The trend is indicative of the center finger 'winning' the competition in the B > 0 dynamics, while the opposite occurs for B = 0. Finally, it is noted that the influence of surface tension is first felt by the smaller finger, which is the recipient of the daughter singularity impact. Afterwards the leading finger begins to widen, in a manner consistent with the conjecture in Sec. III. Further remarks on this point are made in Sec. V.

The projection method described in the previous section has also been applied to this case, and the results are displayed in Fig. 14 for the particular case B̃ = 0.01. It can be seen that for most trajectories the projection stays close to the B = 0 curves, even for long times. The daughter singularity impact still leads to O(1) differences between the B = 0 and B > 0 solutions, although the impact does not produce changes in the outcome of finger competition. However, as expected, some of the trajectories (namely the third and fourth as measured clockwise from the bottom) do indicate significant qualitative differences in the long time evolution. The plot provides a simple depiction of the topological inequivalence of the B > 0 and B = 0 dynamics [42]. It has been shown that the introduction of a finite B has not changed the attractors of the problem, but it has changed their basins of attraction. Interestingly, in the B = 0 case there does not exist a single separatrix trajectory between the two Saffman-Taylor attractors, but rather a finite region, corresponding to the set of trajectories ending in cusps, that acts as an effective separatrix. Since for finite surface tension there are no cusps, it can be assumed that there is a single trajectory that separates the two basins of attraction. Obviously, this trajectory will depend on the value of the surface tension parameter. More precisely, the initial condition ζ_s²(0) corresponding to the separatrix trajectory will be a function of the surface tension B.
To quantitatively characterize this set of initial conditions we have studied the dependence of the separatrix trajectory in a neighborhood of the planar interface fixed point as a function of B̃, using initial conditions of the form ζ_s²(0) = 20 exp(iθ). For a given surface tension we introduce the parameter θ_sep(B̃), defined as the unique value of θ for which the evolution is attracted toward the fixed point ST(L) when θ > θ_sep and toward the fixed point ST(R) when θ < θ_sep. Figure 15 shows the plot of θ_sep versus B̃, and it is observed that as B̃ decreases, θ_sep saturates to a fixed value, namely θ_sep(B̃ → 0) = −0.4843 ± 0.0009. It is interesting to compare this value to the position of the separatrix region for B = 0, which is located between θ_+^{B=0} = −0.95758 and θ_−^{B=0} = −1.04796. The separatrix for finite B̃ lies outside and far away from the separatrix region for B = 0, even for vanishing surface tension. Our evidence therefore suggests that any B = 0 trajectory located between the trajectories defined by θ_sep(B̃ → 0) and θ_+^{B=0} will not describe, even qualitatively, the regularized dynamics in the limit B̃ → 0, since the finger that will 'win' the competition under the B = 0 dynamics will 'lose' under the B → 0 dynamics. Thus, there exists a positive measure set of initial conditions of the form (7) such that the evolution with B → 0 cannot be qualitatively described by its evolution under B = 0 dynamics. This is a dramatic consequence of the singular nature of surface tension on the dynamics of finger competition which is not related to steady state selection, and it confirms the ideas of the dynamical solvability scenario proposed in Ref. [22].

V. SUMMARY AND CONCLUDING REMARKS

The asymptotic theory developed in Refs. [13,20] predicts the existence of regions of the complex plane where the zero surface tension solution and the finite surface tension solution differ by O(1). These regions are the daughter singularity clusters, and their influence is felt in the physical interface when they are close to the unit circle. Daughter singularities move towards the unit circle, and when their motion is not impeded by other singularities they reach the unit circle in O(1) time. When the distance between the daughter singularity and the unit circle is O(B^{1/3}) the interface can display O(1) discrepancies with respect to the interface of the B = 0 solution. However, the asymptotic theory does not predict the nature of the discrepancies caused by daughter singularity impact. Siegel et al. [20] showed numerically that the effect of the daughter singularity impact on the tip (in a single-finger configuration with λ < 1/2) was a slowing of the finger accompanied by a widening of the finger. However, this provided little insight into the effect of the impact in multi-finger configurations, where finger competition could be substantially affected by the presence of finite surface tension, as suggested in Refs. [5,21,22]. Since the precise effect of the daughter singularity cannot be established by the asymptotic theory, it is necessary to resort to numerical computation in order to determine the effects of the daughter singularity on the dynamics of the interface.

We have focused our efforts on uncovering the role of surface tension in the dynamics of two-finger configurations, which is the simplest situation exhibiting nontrivial finger competition. Two different types of two-finger zero surface tension solutions have been studied.
The first type (ǫ = 0) does not exhibit finger competition when B = 0 but rather contains asymptotic configurations consisting of two unequal steady fingers advancing with the same speed. These two-finger steady state solutions form a continuum of fixed points in the phase space of the corresponding reduced dynamical system, which is structurally unstable. Numerical computations with small surface tension show that the introduction of a small B removes the continuum of fixed points and triggers the competition process which was absent for B = 0, by restoring the saddle-point (hyperbolic) structure of the appropriate multifinger fixed point. The second type (ǫ ≠ 0) of two-finger solution we have studied exhibits finger competition for B = 0, but the numerical computation with small B has shown that the long time configuration of the computed interface may be qualitatively different from the B = 0 solution for a broad set of initial conditions, in the sense that the finger that 'wins' the competition is not the same with and without surface tension. Thus, the presence of surface tension seemingly can change the outcome of finger competition even in configurations that are well behaved and smooth for all time and whose asymptotic width is fully compatible with the predictions of selection theory for vanishing surface tension. This unexpected result shows that surface tension is not only necessary to select the asymptotic width and to prevent cusp formation, but also plays an essential role in multifinger dynamics through a drastic reconfiguration of the phase space flow structure.

Our calculations support the conjecture that impact on either the shorter or the longer finger retards the velocity of that finger, and is accompanied by the widening of the larger finger. As a consequence, in general the outcome of finger competition is independent of the particular finger on which the impact first occurs, and the finger which is leading at the time of the daughter singularity impact 'wins' the competition. This recipe fails only for interfacial configurations with very similar fingers, when not only the finger positions (which finger is leading) but also the tip velocities at the impact time (a trailing finger can, for a certain time, have a larger velocity than the leading one) may play a role.

The main conclusion of the present work is that surface tension is essential to describe multifinger dynamics and finger competition, even when the corresponding zero surface tension evolution is well behaved and compatible with selection theory. That is, we have detected singular effects of surface tension on the dynamics of finger competition that are not directly related to steady state selection. These can be properly interpreted in the context of an extended dynamical selection scenario as described in Ref. [22], where the reconfiguration of phase space flow by surface tension can be traced back to the restoring of hyperbolicity of multifinger fixed points.
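As a closing practical remark, the separatrix angle θ_sep(B̃) studied in Sec. IV B is naturally located by bisection on the initial angle, with one full B̃ > 0 computation per step. The sketch below shows only the bisection logic; the outcome classifier run_and_classify is a placeholder standing in for the actual interface evolution, which must report which Saffman-Taylor attractor is reached.

```python
def theta_sep(run_and_classify, theta_lo, theta_hi, tol=1e-3):
    """Bisection for the separatrix angle theta_sep.

    run_and_classify(theta) is assumed to run the B > 0 evolution for the
    initial condition zeta_s(0)**2 = 20 * exp(1j * theta) and to return the
    label 'ST(R)' or 'ST(L)' of the attractor that is reached.
    theta_lo is assumed to flow to ST(R) and theta_hi to ST(L)."""
    assert run_and_classify(theta_lo) == "ST(R)"
    assert run_and_classify(theta_hi) == "ST(L)"
    while theta_hi - theta_lo > tol:
        mid = 0.5 * (theta_lo + theta_hi)
        if run_and_classify(mid) == "ST(R)":
            theta_lo = mid
        else:
            theta_hi = mid
    return 0.5 * (theta_lo + theta_hi)
```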
14,124.6
2002-04-19T00:00:00.000
[ "Physics" ]
The New Generation Planetary Population Synthesis (NGPPS). I. Bern global model of planet formation and evolution, model tests, and emerging planetary systems

Aims. Comparing theoretical models with observations allows one to make a key step forward towards an understanding of planetary systems. It however requires a model able to (i) predict all the necessary observable quantities (not only masses and orbits, but also radii, luminosities, magnitudes, or evaporation rates) and (ii) address the large range in relevant planetary masses (from Mars mass to super-Jupiters) and distances (from stellar-grazing to wide orbits). Methods. We have developed a combined global end-to-end planetary formation and evolution model, the Generation III Bern model, based on the core accretion paradigm. This model solves as directly as possible the underlying differential equations for the structure and evolution of the gas disc, the dynamical state of the planetesimals, the internal structure of the planets yielding their planetesimal and gas accretion rates, disc-driven orbital migration, and the gravitational interaction of concurrently forming planets via a full N-body calculation. Importantly, the model also follows the long-term evolution of the planets on Gigayear timescales after formation including the effects of cooling and contraction, atmospheric escape, bloating, and stellar tides. Results. To test the model, we compared it with classical scenarios of Solar System formation. For the terrestrial planets, we find that we obtain a giant impact phase provided enough embryos (~100) are initially emplaced in the disc. For the giant planets, we find that Jupiter-mass planets must accrete their core shortly before the dispersal of the gas disc to prevent strong inward migration that would bring them to the inner edge of the disc. Conclusions. The model can form planetary systems with a wide range of properties. We find that systems with only terrestrial planets are often well-ordered while giant-planet bearing systems show no such similarity.

Introduction

Since the discovery of the first exoplanet detected around a main sequence star (Mayor & Queloz 1995), the number of known exoplanets has greatly increased. These planets span a wide range of masses and sizes, and they were detected using various techniques, such as radial velocity, transit, direct imaging, and microlensing. Despite all these observational constraints, the exact formation pathways are not yet certain. To highlight this, we first discuss possible formation mechanisms for different kinds of planets.

Giant planets have been found orbiting their host star over a wide range of periods. Some have periods of the order of days or tens of days, corresponding to orbits well within that of Mercury (Mayor et al. 2011; Fabrycky et al. 2014); others were detected at large separations using the direct imaging technique (Marois et al. 2008; Lagrange et al. 2010; Rameau et al. 2013; Macintosh et al. 2015; Chauvin et al. 2017; Keppler et al. 2018). Most giant planets are thought to form via the core accretion mechanism, as gravitational instability (Boss 1997, 2003) is found to work only at large separations (several tens of astronomical units; Rafikov 2005; Schib et al. 2021), though the clumps could migrate after formation (Nayakshin 2010), and only for bodies above about 5 Jupiter masses (Schlaufman 2018) or even the deuterium-burning limit (Kratter et al. 2010). On the other extreme, for very close-in giant planets, in-situ formation by core accretion (Perri & Cameron 1974; Mizuno 1980) would require a substantial local build-up of solids.
While this has been proposed (Boley et al. 2016; Batygin et al. 2016; Bailey & Batygin 2018), the possibility remains heavily debated. A scenario where these planets formed further out and were subsequently moved to their final location (Lin et al. 1996) is usually considered more likely.

In the standard view, giant planets form from embryos located beyond the ice line, where solids are abundant owing to the volatiles being present in the solid form. This allows the embryo to form rapidly enough before the dispersal of the gas disc, which occurs in a time frame of several million years (Haisch et al. 2001; Fedele et al. 2010; Richert et al. 2018). Embryos initially accrete solids and a small quantity of gas. The further growth results in the accretion of gas, which is governed by the ability of the planet to radiate away the accretion energy (Pollack et al. 1996). The cooling process becomes more efficient as the mass increases, so that when the planet reaches a mass of several times that of the Earth, the amounts of solids and gas are equal (the critical mass; Stevenson 1982). Once the accretion rate becomes greater than what the disc is able to supply, the envelope can no longer remain in equilibrium with the surrounding nebula and it contracts. This process is further complicated by the implications of planetary migration (Baruteau et al. 2014, 2016). The final mass and location of the planet thus depend on the interplay between growth and migration, not to mention the interactions with the other planets forming in the same system.

Observations show that the giant planets are divided into two sub-groups depending on the host-star metallicity (Dawson & Murray-Clay 2013; Buchhave et al. 2018). Hot Jupiters around metal-poor stars exhibit lower stellar obliquity and eccentricity than the ones around metal-rich stars. The usual concept of inward migration due to interaction with the gas disc (Goldreich & Tremaine 1979; Ward 1997; Tanaka et al. 2002) cannot account for the obliquity of these planets, which more likely were brought there by few-body interactions combined with tidal circularisation (Dawson & Johnson 2018). For the distant giant planets, core accretion is still favoured (Wagner et al. 2019). A possible formation pathway for some of these distant planets is accretion in the inner region of the disc followed by close encounters and scattering (Marleau et al. 2019a). This pathway is supported by evidence that it is able to reproduce the distribution of eccentricities of giant planets (Chatterjee et al. 2008; Jurić & Tremaine 2008; Raymond et al. 2010; Sotiriadis et al. 2017), and that most giant planet-harbouring systems are multiple (Knutson et al. 2014; Bryan et al. 2016; Wagner et al. 2019).

Exoplanets also include planets with no counterpart in the solar system, namely those with sizes between that of the Earth and Neptune (Mayor et al. 2011; Youdin 2011; Howard et al. 2012). The densities of these planets vary by more than one order of magnitude (Hatzes & Rauer 2015; Otegi et al. 2020). Sub-Neptunes exhibit a low bulk density, indicating the presence of a gaseous envelope (Weiss & Marcy 2014; Rogers 2015). This implies that they mostly formed on a time scale comparable to the lifetime of the protoplanetary disc. However, whether they formed early (in the same way as the cores of giant planets) or towards the end of the disc lifetime (Lee & Chiang 2016) is not yet settled. Super-Earths on the other hand are compatible with being gas-free.
They are not constrained by the lifetime of the protoplanetary disc and can form over longer periods of time (Lambrechts et al. 2019; Ogihara et al. 2018). These could also have had a gaseous envelope in the past that was removed by, for instance, atmospheric escape (e.g. Jin et al. 2014) or giant impacts (Schlichting & Mukhopadhyay 2018).

Multi-planetary systems provide additional information. Many super-Earth systems have planets of similar mass and spacing (Millholland et al. 2017; Weiss et al. 2018), though this is debated (Zhu 2020; Weiss & Petigura 2020). However, most of the planet pairs are out of mean-motion resonances (MMRs) (Fabrycky et al. 2014). The low number of planets in MMRs may be surprising, as gas-driven migration is efficient at capturing planets in MMRs. But it is possible for the resonances to be broken during the retreat of the gas disc or, after its dispersal, by dynamical instabilities (Inamdar & Schlichting 2016; Izidoro et al. 2017, 2021). The mutual inclinations remain relatively low (Lissauer et al. 2011; Fang & Margot 2012) and the planets exhibit low-to-moderate eccentricities (Xie et al. 2016; Mills et al. 2019).

As the model has many parameters, a large number of planets with different properties are required to constrain their possible values. The model must then be able to predict all the necessary observable quantities for the different observational techniques, not only masses and distances, but also radii (for transits), luminosities, magnitudes (for direct imaging), and evaporation rates. To leverage the enormous amount of statistical observational data on exoplanets, the models should also be able to make quantitative predictions which can be compared statistically with the actual planetary population. For this, planetary population synthesis (Ida & Lin 2004a; Mordasini et al. 2009a) is a frequently used approach.

In this work, we introduce a strongly improved and extended version of the Bern global end-to-end model of planetary formation and evolution for multi-planetary systems. The model combines the work of Alibert et al. (2013), hereafter A13 (inclusion of N-body interactions), and the internal structure calculations and long-term thermodynamical evolution model of Mordasini et al. (2012c,b). Here, we track the planets with full N-body interactions, in contrast to Ida & Lin (2010) for instance, who introduced a semi-analytical approach, an improvement over previous works such as Ida & Lin (2004a), to follow planet-planet interactions. The model follows the formation of many embryos, as is usually obtained from the end stage of the runaway growth of solids (Kokubo & Ida 1998), so that both terrestrial and giant planetary systems can be obtained.

The structure of this work is as follows: in the first part, we describe our global model. In Sect. 2, we introduce the new version of our model with a general overview of its conception, along with its relationship to previous work. Detailed descriptions are provided in Sect. 3 for the stellar and nebular components, in Sect. 4 for the planets, and in Sect. 5 for the migration and dynamical evolution. In a second part we perform some tests for different kinds of planets and show possible resulting systems. In Sect. 6, we aim at reproducing the formation of terrestrial planets with our improved model to determine whether it is applicable to this kind of planets. The formation of giant planets and the implications for Jupiter are discussed in Sect. 7. Finally, in Sect.
8, we apply the presented model to specific systems to assess the interaction between the different mechanisms occurring during the formation and evolution of planetary systems. This work is the first of a series of several. In a companion paper, Emsenhuber et al. (in rev., hereafter referred to as Paper II), we will use this model to compute synthetic populations of planetary systems and perform statistical analysis. In subsequent articles, we will perform more detailed comparison with observations, and analyse various parameters that we have in the present model. History The model presented in this work, the Generation III Bern model, combines the formation and evolution stage of planetary system. It is based on many contributions in the field that aim to study different aspects of the physics of planetary formation and evolution. We thus start by a short history of the series of model, and its different branches that we couple together in this work. A graphical sketch of the different generations of the Bern model is provided in Fig. 1. The original model was introduced in Alibert et al. (2004Alibert et al. ( , 2005a for individual planets, then used in Mordasini et al. (2009a,b) for entire planetary populations. We refer to it as Generation I, which computed the formation on a single planet until the gas disc disperses. The model subsequently diverged into two different branches: one with the aim to follow the longterm evolution of the formed planet (Generation Ib; Mordasini et al. 2012c,b) while the other obtained the ability to form multiplanetary systems with an improved description of the planetesimals disc (Generation II; Alibert et al. 2013;Fortier et al. 2013). In this work we bring these two variants of the model back together so that we can follow the formation and the long-term evolution of multi-planetary systems. At the same time, we extend the model with new elements, which are shown in italic on Fig. 1. Previous versions of the model have been extensively described in referenced papers (see also Benz et al. 2014;Mordasini et al. 2015;Mordasini 2018 for reviews and the interactions between the different mechanisms involved in planet population syntheses). We nevertheless describe this new version in the remainder of this section. General description We base our study on the Bern model of planetary formation and evolution. This global model self-consistently computes the evolution of the gas and planetesimals discs, the accretion of gas and solids by the protoplanets, their internal and atmospheric structure, as well as interactions between the protoplanets and between the gas disc and the protoplanets. We provide a diagram of the main components of the overall model as well as the most important exchanged quantities in Fig. 2. In our coupled formation and evolution model, we first model the planets' main formation phase for a fixed time interval (set to 20 Myr, see the related discussion in Section 6 regarding the impact of this specific choice). Afterwards, in the evolu-tionary phase, we follow the thermodynamical evolution of each planet individually to 10 Gyr. Formation phase During the formation stage (0-20 Myr), the model follows the evolution of a gaseous protoplanetary disc and the dynamical state of planetesimals (Section 3). These serve as sources for the accretion of the protoplanets (Section 4). The lifetime of the gas disc is shorter than the simulated formation stage, so that solids accretion in a gas-free environment can also take place. 
The gas disc also leads to planetary migration, and interactions (scattering, collisions) between the concurrently growing protoplanets are tracked with an N-body integrator (Section 5). The formation of planets is based on the core accretion paradigm (Mizuno 1980; Pollack et al. 1996): first, a solid core is formed, and once it becomes massive enough, it starts to bind a significant H/He envelope. Core growth results from the accretion of planetesimals. Gas accretion is initially governed by the ability of the planet to radiate away the energy released by the accretion of both solids and gas. Once the gas accretion rate of the envelope exceeds the limit set by the disc, the envelope can no longer maintain equilibrium with the disc; it subsequently contracts and passes into the detached phase (Bodenheimer et al. 2000).

Evolution phase

The long-term evolution of the planets (20 Myr to 10 Gyr) is calculated by solving, as already in the formation phase, the standard spherically symmetric internal structure equations, but with different boundary conditions and taking into account additional physical effects such as atmospheric escape or radius inflation. In this phase, the planets evolve individually; N-body interactions and the accretion of planetesimals are no longer considered. The orbits and masses of the planets may, however, still evolve because of effects like tides and atmospheric escape. As described in Mordasini et al. (2012c), the coupling between the formation and evolution phases is made self-consistently, that is, both the compositional information and the gravothermal heat content given by the formation model are passed to the evolution model as initial conditions. Regarding the temporal evolution, we now also take the thermal energy content of the planet's core into account for a planet's luminosity, as described in Linder et al. (2019). This is important for core-dominated low-mass planets (e.g. Lopez & Fortney 2014). As in previous calculations, the other gravothermal energy sources are the cooling and contraction of the H/He envelope, the contraction of the core, and radiogenic heating due to the presence of long-lived radionuclides in the core.

Envelope structure

The calculation of the internal structure of all planets (Section 4) during their entire formation and evolution is a crucial aspect of the Bern Model, as is visible from its central position in Figure 2. It not only yields the planetary gas accretion rate in the attached phase but is also key for the accretion of planetesimals via the drag-enhanced capture radius. It also yields the radii and luminosities, which on one hand enter multiple other sub-modules and on the other hand are key observable quantities. The internal structure model assumes that planets have an onion-like spherically symmetric structure with an iron core, a silicate mantle, and, depending on a planet's accretion history, a water ice layer and a gaseous envelope made of pure H/He.
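As a compact illustration of the attached-to-detached switch described above (the envelope grows at the rate allowed by its cooling until that rate exceeds what the disc can supply, at which point it contracts and detaches), the following minimal sketch shows the bookkeeping. The function name, the fixed-rate inputs, and all numbers are illustrative placeholders, not the actual Bern-model implementation.

```python
# Minimal sketch of the gas-accretion bookkeeping: the envelope grows at the
# cooling-limited (Kelvin-Helmholtz) rate while attached, and the planet switches
# to the detached phase once that rate exceeds the disc-limited supply.
# Rates are in illustrative units of Earth masses per year.

def gas_accretion_step(mdot_kh, mdot_disc_max, attached):
    """Return (effective gas accretion rate, still_attached)."""
    if attached and mdot_kh > mdot_disc_max:
        attached = False          # envelope contracts: transition to the detached phase
    mdot = mdot_kh if attached else mdot_disc_max
    return mdot, attached

# Example: the cooling-limited rate eventually outgrows the disc supply.
attached = True
for mdot_kh, mdot_max in [(1e-6, 1e-4), (5e-5, 1e-4), (3e-4, 1e-4)]:
    mdot, attached = gas_accretion_step(mdot_kh, mdot_max, attached)
    print(f"KH rate {mdot_kh:.1e}, disc limit {mdot_max:.1e} -> accrete {mdot:.1e}, attached={attached}")
```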
Fig. 1. Physical mechanisms and base assumptions included in all model generations:
- Formation paradigm: core accretion
- Protoplanetary disc model: solution of the 1D evolution equation for the gas surface density in an axisymmetric constant α-disc with photoevaporation
- Solid accretion: rate equation (Safronov-type) from planetesimals of a single size; planetesimals are represented by a solid surface density with a dynamical state
- Gas accretion and planet interior structure: from solving the 1D radially symmetric hydrostatic planet interior structure equations
- Orbital migration: gas disc-driven, types I and II

Evolution of the physical mechanisms considered in the various model generations:

Generation I (Alibert et al. 2005a): base model
1. 1 embryo per disc (no N-body), 0.6 M⊕
2. Formation only (to t_disc)
3. Runaway planetesimal accretion, 100 km
4. Attached phase only
5. Vertical disc structure, no stellar irradiation, no stellar evolution
6. Isothermal type I, equilibrium type II, thermal-only transition criterion
7. Equilibrium gas flux in disc
8. Stellar irradiation of the disc (Fouchet et al. 2012)
9. Masses, orbital distances, bulk composition
10. Mordasini et al. (2009a,b); Alibert et al. (2011); Mordasini et al. (2012a)

Generation Ib (Mordasini et al. 2012c,b): inclusion of long-term evolution
1. 1 embryo per disc (no N-body), 0.6 M⊕
2. Formation (to t_disc) and (thermodynamic) evolution (to 10 Gyr)
3. Runaway planetesimal accretion, 100 km
4. Attached, detached and evolutionary, with core structure
5. Vertical disc structure, with stellar irradiation, no stellar evolution
6. According to Dittkrist et al. (2014): non-isothermal type I, non-equilibrium type II, thermal and viscous transition criterion
7. Non-equilibrium gas flux in disc
8. D-burning, atmospheric escape
9. Radii, luminosities, envelope evaporation rates
10. Mordasini et al. (2014, 2017); Jin & Mordasini (2018)

Generation II (Alibert et al. 2013): inclusion of N-body interaction
1. Several embryos per disc (EMPS N-body integrator), 0.01 M⊕
2. Formation only (to t_disc)
3. Oligarchic planetesimal accretion, 300 m
4. Attached phase only
5. Vertical disc structure, no stellar irradiation, no stellar evolution
6. According to Dittkrist et al. (2014): non-isothermal type I, non-equilibrium type II, thermal and viscous transition criterion
7. Non-equilibrium gas flux in disc
8. Composition tracking (Thiabaud et al. 2015)
9. Multiplicity, eccentricities, MMR
10. Pfyffer et al. (2015); Alibert & Benz (2017)

Generation III (this work): long-term evolution and N-body evolution
1. Many embryos per disc (Mercury N-body integrator), 0.01 M⊕
2. Formation (to 20 Myr) and (thermodynamic) evolution (to 10 Gyr)
3. Oligarchic planetesimal accretion, 300 m
4. Attached, detached, evolutionary (with D-burning, escape, bloating, Roche-lobe overflow, core structure)
5. Vertically integrated, with stellar irradiation and stellar evolution
6. Non-isothermal type I, non-equilibrium type II, thermal and viscous transition criterion
7. Bondi-limited gas accretion
8. None
9. Combines output of Ib and II
10.
This work (NGPPS series)

The numbered entries denote: (1) number of initial embryos per disc, N-body integrator type, and initial embryo mass; (2) phases simulated; (3) planetesimal accretion mode and planetesimal size; (4) phases with calculation of the planets' internal structure; (5) disc model characteristics; (6) orbital migration: type I, type II, and the transition criterion from type I to type II (here 'thermal' refers to a criterion using only the ratio between the Hill radius and the scale height of the disc, while 'thermal and viscous' refers to the full criterion of Crida et al. 2006); (7) disc-limited gas accretion rate; (8) later additions and improvements; (9) additional output relative to the older generation; (10) population synthesis publications using this generation. In the bottom-right panel of Fig. 1, text in italics indicates new elements.

In contrast to earlier syntheses predicting planetary radii (Mordasini et al. 2012c), we now self-consistently use the iron mass fraction as given by the disc compositional model (according to Thiabaud et al. 2014; see Section 3.3.3), instead of assuming a fixed 2:1 silicate:iron mass ratio. Physical effects that are included in the model besides the usual cooling and contraction are XUV-driven atmospheric escape (Jin et al. 2014), D-burning (Mollière & Mordasini 2012), Roche-lobe overflow, and bloating of the close-in planets (Sarkis et al. 2021). Compared to some other 1D internal structure models in the literature (e.g. Vazan et al. 2013; Venturini et al. 2016; Valletta & Helled 2020), the model is simplified first by assuming that the gaseous envelope consists of pure H/He, while accreted solids sink to the core. In this sense, the model is similar to the original Pollack et al. (1996) model. We thus neglect the consequences of heavy-element enrichment and compositional gradients in the envelope. These effects will be added in future work. One should note that other modern models also make use of the simplification of pure H/He envelopes (D'Angelo et al. 2021). Including enrichment would generally speed up gas giant formation (Venturini et al. 2016).

Fig. 2. Sub-modules and most important exchanged quantities of the Generation III Bern model. The colours denote the stages at which processes are considered. Blue indicates processes active in the formation stage, but only before the dispersal of the gas disc. Green processes are considered during the entire formation stage, even after the dispersal of the gas disc. Processes in red are only considered during the evolution stage. The processes in black are included in all stages.

Second, the effects of hydrodynamic flows affecting the (upper) envelope structure and cooling behaviour are currently also neglected (Ali-Dib et al. 2020; Moldenhauer et al. 2021). On the other hand, our internal structure model allows us to model the entire 'life' of planets from t = 0 to 10 Gyr, modelling and coupling self-consistently all phases (attached, detached, evolutionary), for both the gaseous envelope and the solid core. Importantly, the model is capable of calculating the internal structure and temporal evolution of planets ranging in mass from 10^-2 M⊕ to the lithium-burning limit (about 63 Jovian masses; Burrows et al. 2001). Besides the standard aspects (accretion, cooling, contraction), it also includes atmospheric escape, bloating, Roche-lobe overflow, and deuterium burning.
In particular, this makes it possible to model planets that reside very close to their host star. This quite unique general applicability to very different planet types reflects the needs arising from a population synthesis calculation. As shown in Figure 2, atmospheric escape is only included in the evolution phase starting at 20 Myr. In reality, it would start immediately once the gas disc has dissipated and the planets start to 'see' the stellar XUV irradiation. This could lead to a certain under-estimation of atmospheric escape. The consequences should, however, be small, since escape continues to be important for at least the first 100 Myr when stars are in the saturated phase of high XUV emission, and not only for the first 20 Myr. The effect that atmospheric escape can destabilise resonant chains for sufficiently high mass loss (Matsumoto & Ogihara 2020) is thus not included. On the other hand, we include during the entire formation phase (also after gas disc dissipation) the accretion of planetesimals, which also changes planet masses. In the following sections, we describe in detail all the submodules visible in Figure 2. Article number, page 5 of 45 A&A proofs: manuscript no. model Stellar model Instead of assuming a fixed 1 L stellar luminosity for a 1 M star as in previous model generations, stellar evolution is now considered by incorporating the stellar evolution tracks from Baraffe et al. (2015). These provide the radius R , luminosity L and temperature T for a given stellar mass M at any moment. Stellar temperature and radius are used for the outer boundary conditions of the gas disc; stellar radius is also used in the Nbody integrator to detect collisions with the star and to calculate the stellar tidal migration. Finally, the stellar irradiation enters into the calculation of the outer (atmospheric) temperature (at τ = 2/3) of the planets' interior structure as described in Mordasini et al. (2012c) and radius bloating (Sect. 4.2.2). Gas disc The protoplanetary gas disc is modelled with a 1D radial axisymetric structure. The evolution is given by solving the viscous diffusion equation as function of the time t and orbital distance r (Lüst 1952;Lynden-Bell & Pringle 1974), where Σ g = ∞ −∞ ρdz is the surface density of gas, andΣ g,photo andΣ g,planet are the sink terms related to photo-evaporation (Section 3.2.2) and accretion by the planets respectively. The viscosity is parametrised, following Shakura & Sunyaev (1973), with This equation is solved on a grid spaced regularly in log with 3400 points that extends from the inner location of the disc r in (an initial condition) to r max = 1000 au. At these two locations, the surface density is fixed to zero. Vertical structure The disc's vertical structure is computed at each step of the evolution following the approach of Nakamoto & Nakagawa (1994). This change is necessary to accommodate the new stellar model with variable quantities. With this approach, the link between the outer and midplane temperatures is given by with T mid the disc mid-plane temperature, T s the temperature due to irradiation (see below), σ SB the Stefan-Boltzmann constant, τ R and τ P are the Rosseland and Planck mean optical depths respectively, andĖ is the viscous dissipation rate. This formula yields the mid-plane temperature both in the optically-thick (the term with τ R ) and optically-thin (the term with τ P ) regimes. 
The Rosseland optical depth is given by τ R = κ disc (ρ mid , T mid )Σ where ρ mid = Σ/( √ 2πH) is the central density, H = c s /Ω the disc's vertical scale height, c s = k B T mid /(µm H ) the isothermal sound speed, µ = 2.24 the mean molecular weight of the gas, and m H the mass of an hydrogen atom. The opacity κ disc is given by the maximum of the opacities computed according to Bell & Lin (1994) (which accounts for micrometre size with a fixed interstellar dust-to-gas ratio of 1 %, independently of the dust-togas ratio chosen for the solids disc) and Freedman et al. (2014) (which gives molecular opacities for a grain-free gas). For the Planck optical depth, we follow further Nakamoto & Nakagawa (1994) and set τ P = 2.4τ R . It is clear that this treatment of the opacities is simplified: in reality, the evolution of the dust via coagulation, fragmentation, and drift influences via the resulting grain opacity the thermal and density structure of the disc. This structure in turn feeds back onto the dust evolution, meaning that the processes must be treated together in a self-consistently coupled way (Gorti et al. 2015;Savvidou et al. 2020). Such a more realistic coupled model affects for example the disc lifetime, the local dust-to-gas ratio (Gorti et al. 2015), orin the context of planet formation -the locations of the outward migration zones (Savvidou et al. 2020, see Sect. 5.1.3). They also show that the ratio of Planck and Rosseland opacity is in reality not simply a constant as currently assumed. In Voelkel et al. (2020) we have recently coupled the Birnstiel et al. (2012) dustpebble evolution model to the Bern Model. Based on this, future version of the Bern Model will include also a more physically realistic grain opacity and therefore disc structure model. This will in particular also include the dependency of the disc opacity on the stellar metallicity, which is currently not taken into account. In equilibrium, the radiative flux is identical as the viscous dissipation rate, which is given bẏ with Ω being the Keplerian angular frequency at distance r from the star. The second equality holds only if purely the mass of the central star is accounted for in the Keplerian frequency, that is Ω = GM /r 3 , G being the gravitational constant. The selfgravity of the disc has been neglected. The disc's outer temperature due to irradiation is given by (5) following Hueso & Guillot (2005), but also accounting for the direct irradiation through the disc's mid plane. The first term inside the bracket is the irradiation of the star onto a flat disc. The second term in the square brackets accounts for the flaring of the disc at large separation. In our case, we do not compute this factor explicitly and instead adopt ∂ ln H/∂ ln r = 9/7 (Chiang & Goldreich 1997). The T irr term accounts for the direct irradiation through the disc midplane. It is computed as which is the black-body equilibrium temperature accounting for the optical depth through the midplane of the disc τ mid = ρ mid κ(ρ mid , T mid )dr. This contribution is usually important only at the very end of the disc lifetime while it clears; otherwise, the optical depth confines the contribution to the very innermost region. However, taking this contribution in account is necessary to provide a smooth transition of the temperature at the surface of the planets (see Sec. 4.1) from the time when they are embedded in the nebula to the time when they are exposed to the direct stellar irradiation. 
The last term accounts for the heating by the surrounding environment (molecular cloud), which we set constant to T cd = 10 K. We thus neglect possible variations of this background temperature depending on the stellar cluster environment in which a star and its planetary system are born (Krumholz 2006;Ndugu et al. 2018). On the other hand, different cluster environments and thus different levels of the interstellar FUV field (Fatuzzo & Adams 2008) are taken into account by varying in the population syntheses (see Paper II) the magnitude of the external photoevaporation rateṀ wind . External photoevaporation is likely the most important environment-related factor for discs (Winter et al. 2020). Disc photoevaporation Photoevaporation in the protoplanetary discs is the principal means of controlling their lifetimes. For the prescription, we follow Mordasini et al. (2012b). In this scheme, we include contributions from both internal (due the host start itself) and external (due to nearby massive stars in the birthplace of the system) sources. For the external photo-evaporation, we use the far-ultraviolet (FUV) description of Matsuyama et al. (2003). FUV radiation (6-13.6 eV) creates a neutral layer of dissociated hydrogen whose temperature is T I ≈ 10 3 K. The corresponding sound speed is then where the mean molecular weight µ I = 1.35 for the dissociated gas. It corresponds to the gravitational radius (where the sound speed equals the escape velocity) of We assume that mass is removed uniformly outside of β I r g,I with β I = 0.14 (similar to Alexander & Pascucci 2012), so that the rate is given bẏ withṀ wind a parameter that provides the total mass loss rate if the disc would extend to r max = 1000 au. In practice however, the actual mass loss rate due to external photoevaporation is clearly smaller than that parameter, as the disc does not extend up to r max , but to a dynamically obtained radius which results from the interplay of viscous spreading (increasing the outer radius) and external photoevaporation (decreasing the outer radius). For the internal photoevaporation, we follow Clarke et al. (2001), which in turn is based on 'weak stellar wind' case of (Hollenbach et al. 1994). Here, extreme-ultraviolet (EUV; > 13.6 eV) creates a layer of ionised hydrogen whose temperature is T II ≈ 10 4 K and with a mean molecular weight µ II = 0.68. The sound speed and gravitational radius are computed in analogy with Eqs. (7) and (8). The scaling radius r 14 = β II r g,II /10 14 cm follows Clarke et al. (2001) while we select again β II = 0.14 following Alexander & Pascucci (2012). With this, we can estimate the base density with n 0 (r 14 ) = k Hol Φ 1/2 41 r −3/2 14 , where we set k Hol = 5.7 × 10 4 following the hydrodynamical simulations of Hollenbach et al. (1994) and Φ 41 = 0.1 √ M /M , which is the ionising photon luminosity in the units of 10 41 s −1 . The distance-dependent base density can then be calculated as n 0 (r) = n 0 (r 14 ) r r g,II − 5 2 . (11) We further follow Clarke et al. (2001) to getΣ g,photo,int = 2c s,II n 0 m H outside of β II r g,II . The final photoevaporation rate is given by the sum of the effects of host star + nearby massive stars witḣ Σ g,photo =Σ g,photo,ext +Σ g,photo,int . 3.2.3. Initial gas surface density profile and example We initialise the gas surface density profile with (Veras & Armitage 2004) where r 0 = 5.2 au is the reference distance, β g = 0.9 the powerlaw index (Andrews et al. 
2010), r cut,g the characteristic radius for the exponential decay and r in the inner edge of the disc. The conversion between the total mass and the normalisation surface density Σ g,0 at r 0 is obtained with It should be noted that this formula neglects the lack of gas within r in , but since the total mass is dominated by the outer disc as we have a shallow power-law, there has in practice very limited effect. An example of evolution of such as disc, without any planets (i.e.Σ g,planet = 0), is provided in Fig. 3. The initial conditions and parameter are provided in Table 1 (note that the table also contains planetesimals disc properties that are not used here). The lifetime of that disc is nearly 5.3 × 10 6 yr. The temporal evolution shows overall a decrease in the surface density. A hole forms inside roughly 2 au by about 4.7 Myr. The change in the temperature profile initially between 1.5 and 3 au and that moves inwards is due to a maximum in the opacity (top right panel). This different behaviour is reflected in the surface density as the temperature affects the sound speed, hence the viscosity. For the midplane temperature, the direct irradiation term is only important in the innermost region (within about 0.2 au) until a few hundred thousand years before the dispersal of the gas disc. The last profile before dispersal shows an increase of temperature within 0.2 au due to this contribution; otherwise the midplane temperature remains below the equilibrium temperature, apart from the inner region (<<3 au) at early times. We compared the results obtained here with prescriptions from other works, as for instance Bitsch et al. (2015a). We find that in general, for a given stellar accretion rate (which is the parametrisation of the Bitsch et al. 2015a prescription) we obtain lower surface density profiles by 30 % coupled with larger temperature by 20-40 %. The two models cannot be very well compared directly due to the different underlying assumptions, like constant radial flow rate in Bitsch et al. (2015a). Our models accounts for the full evolutionary equation for the surface density including photoevaporation and gas accretion by the protoplanets, which means we have the radial flow rate varying with distance (bottom right panel of Fig. 3, where the radially constant 10 −1 10 0 10 1 10 2 10 3 Distance [AU] of a protoplanetary disc. The lines represent each one snapshot the state, and are spaced by about 2 × 10 5 yr. The blue line in both panels shows the initial profile, which has not yet been evolved at all, and is therefore not in equilibrium. The green line in the temperature profile shows the profile at disc's dispersal, which is given by the equilibrium temperature with the host star's luminosity. inflow in the inner disc, and the viscous spreading (outflow) in the outer disc can be seen). There are other model assumptions that result in the differences between the surface density and temperature in the two models: 1) the stellar luminosity, which in our case it starts with roughly 3 L as predicted by the Baraffe et al. (2015) tracks whereas Bitsch et al. (2015a) begins with 1.5 L following Baraffe et al. (1998), 2) the opacity which affects the relation between midplane and disc photospheric temperature, and 3) the different approach of including stellar irradiation (vertically integrated assuming an equilibrium for the flaring angle versus an explicit 1D vertical structure with radiative transfer). 
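Since the equation for the initial profile and the conversion between the total disc mass and the normalisation Σ_g,0 are only described in words above, the following minimal sketch illustrates one way to carry out that normalisation numerically and to advance the gas surface density with the α-viscosity of Sect. 3.2. It uses the example-disc values from Table 1, but the exponential-cutoff form of the profile, the assumed temperature power law, the grid resolution, and the time step are illustrative choices of this sketch, not the model's actual implementation; the photoevaporation and planet-accretion sink terms are omitted.

```python
import numpy as np

G, KB, MH, MSUN, AU, YR = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33, 1.496e13, 3.156e7

# Example-disc parameters (Table 1)
mstar, alpha = 1.0 * MSUN, 2e-3
r0, beta_g = 5.2 * AU, 0.9
r_in, r_cut, m_disc = 0.091 * AU, 66.5 * AU, 0.039 * MSUN

r = np.logspace(np.log10(r_in / AU), 3.0, 400) * AU     # log-spaced grid out to 1000 au

def shape(r):
    """Assumed initial profile shape: power law with an exponential outer cutoff."""
    return (r / r0) ** -beta_g * np.exp(-(r / r_cut) ** (2.0 - beta_g))

# Normalise Sigma_g,0 so that the integrated profile matches the total disc mass.
integrand = 2.0 * np.pi * r * shape(r)
sigma0 = m_disc / np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
sigma = sigma0 * shape(r)
print(f"Sigma_g,0 = {sigma0:.0f} g/cm^2 (Table 1 quotes 145 g/cm^2 for this disc)")

# One explicit step of the viscous diffusion equation (sink terms omitted).
omega = np.sqrt(G * mstar / r**3)                        # Keplerian frequency
T_mid = 280.0 * (r / AU) ** -0.5                         # assumed temperature profile [K]
cs = np.sqrt(KB * T_mid / (2.24 * MH))                   # isothermal sound speed
nu = alpha * cs * (cs / omega)                           # nu = alpha * c_s * H

def viscous_step(sigma, dt):
    """dSigma/dt = 3/r d/dr[ sqrt(r) d/dr( nu Sigma sqrt(r) ) ], explicit update."""
    g = nu * sigma * np.sqrt(r)
    flux = np.sqrt(r) * np.gradient(g, r)
    new = sigma + dt * 3.0 / r * np.gradient(flux, r)
    new[0] = new[-1] = 0.0                               # Sigma fixed to zero at both edges
    return np.maximum(new, 0.0)

sigma = viscous_step(sigma, dt=0.1 * YR)
print(f"Sigma at 5.2 au after one step: {np.interp(5.2 * AU, r, sigma):.1f} g/cm^2")
```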
Planetesimal disc Planetesimals are represented by a fluid-like description, that is they are modelled not as individual particles but on a grid as a surface density (Σ s ) with eccentricity (e plan ) and inclination (i plan ) as dynamical state. Dynamical state For the time evolution of the dynamical state, we use the approach of Fortier et al. (2013) and explicitly solve the differential equations describing the change of eccentricity and inclination. In this framework, these are stirred by both the protoplanets, and to a lesser extent the other planetesimals, and damped by drag from the gas disc. The equations for the root mean square (RMS) of the planetesimals' eccentricity e plan and inclination i plan read aṡ The contributions from the aerodynamical drag, stirring by the protoplanets and the planetesimals are denoted by 'drag', 'VS,M' and 'VS,plan' respectively. The dynamical state is followed during the entire formation stage. The drag term is only Table 1. Initial conditions and parameters for the example system. The upper part contains the gas disc properties, the middle part the planetesimals disc properties, and the bottom part show planetary embryos properties. Quantity Value Stellar mass M 1 M Reference surface density Σ g,0 at 5.2 au 145 g cm −2 Initial gas disc mass M g 3.90 × 10 −2 M Inner edge of the gas disc r in 0.091 au (10 d) Characteristic radius of the gas disc r cut,g 66.5 au Disc viscosity parameter α 2 × 10 −3 External photoevaporation rateṀ wind 6.42 × 10 −7 M /yr Power law index of the gas disc β g 0.9 Dust-to-gas ratio 3.4 % Planetesimal disc mass 348 M ⊕ Power law index of the solids disc β s 1.5 Characteristic radius of the solids disc r cut,s r cut,g /2 Planetesimal radius 300 m Planetesimal density (rocky) 3.2 g cm −3 Planetesimal density (icy) 1 g cm −3 Embryo mass M emb,0 1 × 10 −2 M ⊕ Opacity reduction factor f opa 3 × 10 −3 evaluated while the gas disc is still present. After the dissipation of the gas disc, the term is set to 0. The form of the drag term depends on the regime: Epstein, Stokes (laminar) or quadratic (turbulent). The distinction between those regimes is made using the criterion proposed by Rafikov (2004) using the molecular Reynolds number Re mol = v rel R plan /ν mol , where ν mol = λc s /3 is the molecular viscosity, λ = (n H 2 σ H 2 ) −1 the gas molecules' mean free path, n H 2 the number density assuming all of the gaseous molecules having hydrogen mass, σ H 2 their collisional cross-section, R plan the planetesimals' radius, v rel = v K η 2 + 5/8e 2 plan + 1/2i 2 plan (17) their relative velocity, the deviation between the gas and Keplerian velocities due the support of the gas by the radial pressure gradient, ρ mid the midplane gas density, and v K = Ωr the Keplerian velocity. When R plan < λ, the gas drag is assumed to be in the Epstein regime. Otherwise, if Re mol > 20, the gas drag is taken to be in the quadratic (or turbulent) regime and in the Stokes regime if not. The expressions for the drag in the quadratic regimes are (Adachi et al. 1976;Chambers 2006 where is the gas drag time scale and C D = 1. In the Stokes regimes the drag expressions arė c s ρ mid ρ plan R plan (25) (Adachi et al. 1976;Rafikov 2004;Fortier et al. 2013). We also want to point out that we do not model the formation of gap in the gas disc by giant planets. This means that drag in the vicinity of such planets might be overestimated, resulting in lower eccentricities and inclination. 
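The regime selection just described (Epstein drag when the planetesimal is smaller than the gas mean free path, otherwise quadratic above a molecular Reynolds number of 20 and Stokes below) lends itself to a compact sketch. The H2 collisional cross-section and the input values below are illustrative assumptions, not values taken from the model.

```python
# Sketch of the drag-regime selection for planetesimals: Epstein if the planetesimal
# is smaller than the gas mean free path, otherwise quadratic (turbulent) for
# molecular Reynolds numbers above 20 and Stokes (laminar) below.

sigma_H2 = 2e-15          # cm^2, assumed H2 collisional cross-section
m_H2 = 2.0 * 1.673e-24    # g

def drag_regime(R_plan, rho_gas, c_s, v_rel):
    n_H2 = rho_gas / m_H2                 # number density, all molecules taken as H2
    mfp = 1.0 / (n_H2 * sigma_H2)         # mean free path lambda
    if R_plan < mfp:
        return "Epstein"
    nu_mol = mfp * c_s / 3.0              # molecular viscosity
    re_mol = v_rel * R_plan / nu_mol      # molecular Reynolds number
    return "quadratic" if re_mol > 20.0 else "Stokes"

# A 300 m planetesimal in a typical inner-disc environment (illustrative cgs numbers)
print(drag_regime(R_plan=3e4, rho_gas=1e-10, c_s=1e5, v_rel=3e3))
```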
As consequence, the accretion rate of planetesimals would be overestimated in this stage, which affects the heavy element contents of the planets. As in Fortier et al. (2013), the stirring by the protoplanets follows the approach of Guilera et al. (2010), where the stirring of Ohtsuki et al. (2002) is modulated with the separation from the protoplanets. The contribution reads aṡ where the sum is over all the protoplanets present in the system, is the modulation due to separation so that the perturbation is effectively restricted to the planet's feeding zone, the planet's Hills radius, and b = 5 is the half-width of the feeding zone (see Sect. 4.3.3). The terms P VS and Q VS are given by , Here,ẽ plan = re plan /R H andĩ plan = ri plan /R H are respectively the reduced planetesimals' eccentricity and inclination, Λ = ı plan (ẽ 2 plan +ĩ 2 plan )/12, β = i plan /e plan , while for I PVS and I QVS we use the approximations obtained by Chambers (2006): The stirring by the other planetesimals is given by, following Ohtsuki et al. (2002), and M plan = 4/3πR 3 plan ρ plan , the mass of a planetesimal. To set the initial dynamical state, we assume that the disc is initially in a cold state, that is only the self-stirring of the planetesimals contributes to their eccentricities and inclinations. In other words, this assumes that the embryos appear instantly at the beginning of the simulation. The equilibrium values can be derived by equating the contributions of self-stirring and damping (Thommes et al. 2003;Chambers 2006), which results in e plan = 2.31 M 4/15 plan r 1/5 ρ 2/15 plan Σ 1/5 and We also compared our prescription for the dynamical state with gamma-stirring from, for instance, and Okuzumi & Ormel (2013). Although this is not straightforward due to the differences in the sources, we find that, generally, the eccentricities resulting from γ-stirring are larger than the selfstirring from the planetesimals, but lower than the stirring by the forming protoplanets. Thus, accounting for the stirring of planetesimals by turbulent diffusion in the disc would increase their eccentricities at locations far away from growing protoplanets. Close to the growing protoplanets however, where the planetesimals' eccentricities are important for the solids accretion rate, neglecting this effect does not significantly affect planetesimals' eccentricities. 3.3.2. Size, initial surface density profile, and evolution To roughly take into account the observational (e.g. Ansdell et al. 2018) and theoretical (e.g. Birnstiel & Andrews 2014) finding that solids have a more concentrated distribution than the gas, the initial surface density profile of planetesimals now follows a steeper slope than the one of the gas disc (Lenz et al. 2019;Voelkel et al. 2020). This leads to a higher concentration of solids in the inner part of the disc. As already in the Generation II Model ), we assume a constant planetesimals radius of 300 m throughout the disc, which is a strong assumption and simplification. There is an ongoing discussion about the characteristic primordial planetesimal size in the literature. Observations of extrasolar debris belts (Krivov & Wyatt 2021), the presence of hypervolatile ices in comets that can only be preserved in impacts involving small bodies (Golabek & Jutzi 2021), direct size determinations by stellar occultations (Arimatsu et al. 2019) and some theoretical studies (Fraser 2009;Schlichting et al. 2013) suggest small (∼1 km) characteristic planetesimals sizes. 
On the other hand, the absence of small craters on Pluto (Singer et al. 2019), the size distribution in the asteroid belt (Morbidelli et al. 2009), and the theoretical predictions of planetesimal formation models (e.g. Klahr & Schreiber 2020) rather point at ∼100 km planetesimals. The first two points can, however, also be explained with other effects (Zheng et al. 2017;Wei et al. 2018, although the former work makes no determination about the initial size frequency distribution of planetesimals). In the more specific context of the simulations presented here, this choice was made for the following reasons: 1) small planetesimals undergo sufficient eccentricity and inclination damping by the disc gas to sustain a planetesimal accretion rate in the oligarchic growth regime that is high enough to build giant planet cores during typical disc lifetimes . We note that the Generation I and Ib Bern Models assumed in contrast runaway planetesimal accretion as Pollack et al. (1996). In the runaway regime, the eccentricities and inclinations of the planetesimals are assumed to remain low even without damping by the disc gas. Therefore, fast core growth occurred in these models also with 100 km planetesimals, which was the assumed size in these early model generations. 2) their drift time scales are longer than typical lifetimes of gas discs (Burn et al. 2019) and 3) this size was shown to be able to reproduce several of the known exoplanet properties across a wide range of masses . In any case, the constant planetesimal size is an important limitation of the model. Including in the Bern Model an explicit model for the evolution of the solid building blocks across the entire size range (dust-pebble-planetesimals) is thus subject of ongoing research. A first important step was recently made in Voelkel et al. (2020) where we have coupled the dust-and-pebble model of Birnstiel et al. (2012) and the planetesimal formation model of Lenz et al. (2019) to our global model. These effects are, however, not yet included in the Generation III Model presented here. To set the initial surface density profile of planetesimals, we thus use a slightly different description than for the gas, that is, with the power-law exponent is set to β s = 1.5, as in the MMSN, and r cut,s = r cut,g /2 is the exponential cutoff radius of the solids, set half the value of the gas disc following Ansdell et al. (2018). This formula also enables us to model relatively sharp outer edges of the solids disc (Birnstiel & Andrews 2014). The reference surface density value Σ s,0 is adjusted so that the bulk solids-to-gas ratio remains to the prescribed value (e.g. 1 %). The surface density of planetesimals is reduced by accretion onto and ejection by the protoplanets to ensure mass conservation (see Sect. 4.3), or removed entirely if e 2 plan > 0.95. Our model only includes ejection (Sect. 4.3.2) and not scattering by the forming planets. Thus, we do not have redistribution of material to other regions of the disc by planets, as obtained by for instance. Finally, the planetesimals disc remains after the dispersal of the gas disc; the only difference is that the damping terms for eccentricityė 2 plan drag and inclinationi 2 plan drag vanish. Compositional model The Bern model includes the simple condensation model of Thiabaud et al. (2014) and Marboeuf et al. (2014a). The initial abundance of volatile and refractory species is identical to the one given in Marboeuf et al. (2014b). 
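To make the planetesimal-disc initialisation of Sect. 3.3.2 above concrete, the sketch below builds an initial planetesimal surface density with the steeper slope β_s = 1.5 and a cutoff at half the gas cutoff radius, modulates it by a condensed-solids fraction f_s(r), and normalises it so that the bulk solids-to-gas ratio matches a prescribed value. The two-step form of f_s, the assumed ice-line location, the exponent of the cutoff, and the gas profile used for the normalisation are simplifications of this sketch, not the model's actual prescription.

```python
import numpy as np

AU = 1.496e13
r0, beta_s = 5.2 * AU, 1.5
r_cut_g = 66.5 * AU
r_cut_s = 0.5 * r_cut_g          # half the gas cutoff radius
r_ice = 3.0 * AU                 # assumed water-ice line location

def f_s(r):
    """Condensed mass fraction of heavy elements (two-step simplification)."""
    return np.where(r < r_ice, 0.4, 1.0)

def sigma_solids(r, sigma_s0):
    shape = (r / r0) ** -beta_s * np.exp(-(r / r_cut_s) ** 2)   # cutoff exponent assumed
    return sigma_s0 * shape * f_s(r)

def disc_mass(r, sigma):
    integrand = 2.0 * np.pi * r * sigma
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

def normalise(r, sigma_gas, solids_to_gas):
    """Pick Sigma_s,0 so that the total solid mass is solids_to_gas times the gas mass."""
    return solids_to_gas * disc_mass(r, sigma_gas) / disc_mass(r, sigma_solids(r, 1.0))

r = np.logspace(np.log10(0.1), 2.0, 500) * AU
sigma_gas = 145.0 * (r / r0) ** -0.9 * np.exp(-(r / r_cut_g) ** 1.1)
s0 = normalise(r, sigma_gas, solids_to_gas=0.034)
print(f"Sigma_s,0 = {s0:.1f} g/cm^2; Sigma_s(1 au) = {float(sigma_solids(1.0 * AU, s0)):.1f} g/cm^2")
```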
Volatile species are composed of H, O, C, and S atoms whose abundance reflect solar composition (Lodders 2003). The relative abundances of the molecules are set according to interstellar medium. Then at each location in the disc at t = 0, we check whether each molecule is the solid or gas phase assuming local thermodynamical equilibrium. This yields the fraction of heavy elements that is locally condensed and thus contributes to the solid surface density (the ice line locations), and the chemical composition. This composition is tracked into the protoplanets when a propotoplanet accretes planetesimals, and in giant impacts between protoplanets. This yields in particular the final iron to silicate ratio and the volatile mass fraction of all the planets. The factor f s (r) in Eq. (40) for the initial planetesimal surface density accounts for the mass fraction of all elements that are in the solid phase at a given location. To compute its value, we use the aforementioned condensation model. Only the contribution of molecules in the solid phase are accounted for the resulting solid surface density. Thus, the value of f s in the inner locations is the mass fraction of condensed to total solids and this value increases by small jumps each time an ice line is crossed until it becomes unity at large separation. For the density of planetesimals, we assume that in the region where only refractory materials contributes to the solid phase ρ plan = 3.2 g cm −3 while when volatiles are in the solid phase we take ρ plan = 1 g cm −3 . This transition corresponds to the H 2 O-ice line in all discs, which induces the largest surface density jump because H 2 O makes up ∼60 % of all ices in mass (Marboeuf et al. 2014a). Example An example of the dynamical state of planetesimals is provided in Fig. 4. The initial conditions and parameters are provided in Table 1. This is the same initial disc as shown in Fig. 3, except than ten embryos were added to the disc, at the locations shown by the dashed vertical lines. In addition, both migration and Nbody interactions were artificially disabled so that the embryos remain at the same location throughout the simulation. The different jumps in the initial surface density profile are due to the crossing of the different ice lines; the most consequential one at at about 3 au is due to the water-ice line. The surface density of planetesimals is equalised inside the feeding zone of each planet. It should be noted that besides this effect, we do not include planetesimals redistribution, as was found by, for instance, Levison et al. (2010). In total, the planets accreted 61 M ⊕ of planetesimals (47 M ⊕ of which by the giant planet) while 89 M ⊕ were ejected (according to the prescription detailed in Sect. 4.3.2; virtually all of them by the giant planet). The feeding zones are all nearly depleted by the planets due to accretion, except for the giant planets where 65 % of planetesimals were ejected and 35 % accreted. The stirring by the protoplanets heats the planetesimals in the surrounding region. This effect is heavily dependent on the protoplanet's masses; the most massive one is the second outermost one (close to 10 au), which reaches a mass of about 5.4 M at the end of the formation stage. That planet has a core mass of 47 M ⊕ , which corresponds (for a pure H/He envelope) to a metallicity slightly lower than that of the star (2.8 % versus 3.0 %). This is below the relationship found by Thorngren et al. (2016) for the planet's mass. 
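As a quick arithmetic check of the bulk metallicity quoted for the giant planet in this example: for a pure H/He envelope, the heavy-element mass fraction is simply the core mass divided by the total mass, and the quoted core mass of 47 M⊕ together with a total mass of about 5.4 Jupiter masses indeed gives a value close to the stated ~2.8 %.

```python
# Bulk metallicity of the example giant planet: core mass over total mass,
# assuming a pure H/He envelope (values quoted in the text above).
M_EARTH_PER_JUPITER = 317.8

core_mass = 47.0                           # Earth masses
total_mass = 5.4 * M_EARTH_PER_JUPITER     # Earth masses
print(f"bulk metallicity = {core_mass / total_mass:.1%}")   # roughly 2.7-2.8 %
```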
This, however, is not unexpected for the idealised setup used here: first, planets that form in situ tend to have lower core masses than planets that migrated (e.g. Alibert et al. 2005b). Second, with N-body interactions switched off here, giant impacts, which would otherwise increase the heavy-element content, are not possible. In the more realistic example in Sect. 8.1.2, where these effects are included, giant impacts strongly increase the solid content of the giant planets, by a factor of 2-3 relative to the value at the moment gas runaway begins. The impacts are themselves triggered by the fast mass growth, which destabilises neighbouring lower-mass protoplanets. As noted by Fortier et al. (2013), the usual assumption that β = i_plan/e_plan ≈ 1/2 does not hold. We find that the stirring of the eccentricities takes place over larger separations from the protoplanets than that of the inclinations. This can be seen, for instance, in the region affected by the most massive planet. Further, the effect of that planet is not limited to its immediate surroundings: the massive planet is able to significantly reduce the inward gas flow, such that the region inside its orbit becomes gas-poor. This greatly reduces the damping of the planetesimals' dynamical state, to such a point that their eccentricities become close to unity.

Fig. 4 (caption, partly recovered). Dynamical state and surface density (bottom left) of a circumstellar disc that also contains 10 embryos. The lines represent temporal snapshots of the three quantities and are spaced by about 2 × 10^5 yr. The blue lines denote the initial profiles. The dashed vertical lines represent the locations of the embryos, which are fixed in this case. N-body interactions were also disabled. The lifetime of the gas disc is shorter than in the case presented in Fig. 3 due to the accretion by the protoplanets.

Envelope structure

In the Bern model, the internal structure of the planets (and thus their gas accretion rate, radius, and luminosity) is found at all stages (attached, detached, evolution) by directly solving the 1D structure equations. In contrast, many other global models use approximations and fits to find, for example, the gas accretion rate (see Alibert & Venturini 2019). While the 1D hydrostatic picture is also not the final word for low-mass planets because of multidimensional hydrodynamic effects (e.g. Ormel et al. 2015; Lambrechts & Lega 2017; Cimerman et al. 2017; Moldenhauer et al. 2021), the fits (except for deep neural networks) often fail grossly to reproduce the results of the 1D structure equations that they should in principle recover (Alibert & Venturini 2019). Many fits also neglect the influence of the luminosity on the gas accretion rate (e.g. Ida & Lin 2004a; Bitsch et al. 2015b). In reality, there is an important interplay between solid accretion, which dominates the luminosity at early stages, and gas accretion. This leads to important feedbacks that can only be captured when solving the internal structure equations (Dittkrist et al. 2014). Also, from the point of view of guiding and interpreting astronomical observations, it is crucial to solve the internal structure equations, as this gives self-consistently at each moment in time the planet's radius and luminosity and the associated magnitudes. These are the observable quantities for transit and direct imaging surveys.
By predicting them self-consistently, the output of the Bern model can be compared in population syntheses not just with methods measuring quantities depending on the planets' mass (like RV, astrometry or microlensing), but also transit and direct imaging surveys. The downside is that solving the internal structure for bodies ranging in mass from 10 −2 M ⊕ to beyond the deuterium limit requires an internal structure model that is very versatile and numerically stable in all stages of planetary formation and evolution. Solving the internal structure also comes with significant computational cost. Attached phase In the initial phase, known as the attached phase, the envelope is in equilibrium with the gas disc and the gas density smoothly transitions from the value in the protoplanetary envelope to the one in the background nebula. The planets do not yet have a well-defined outer radius. During this phase, the gas accretion rate is governed by the ability to radiate the gravitational energy liberated by the accretion of solids and gas, and the envelope's contraction. For the forming giant planets, this phase generally lasts until the planets reach a total mass in the range of 30 to 100 M ⊕ where envelope contraction becomes fast, depending on the conditions. There is no fixed mass boundary; the transition occurs when the gas accretion rate obtained from solving the internal structure equations (that is the envelope's Kelvin-Helmholtz contraction) becomes larger than the disc-limited rate (Sect. 4.1.2. For low-mass planets which have very low gas accretion rates (very long Kelvin-Helmholtz timescales), the attached phase lasts (almost) until the gas disc dissipates. Gas accretion is calculated by solving the classical 1D radially symmetric internal structure equations (Bodenheimer & Pollack 1986), with M the mass enclosed in the radius R, P the pressure, T the temperature, ρ = ρ(P, T ) the density, computed using the SCvH equation of state (Saumon et al. 1995), and ∇ ad and ∇ rad the adiabatic and radiative gradients respectively. The minimum of the two indexes is the Schwarzschild criterion (e.g. Kippenhahn & Weigert 1994), and is used to ensure stability against convection. The adiabatic gradient comes from the equation of state, while the radiative gradient is given by with L being the luminosity. The opacity in the envelope κ is obtained in similar way as for the gas disc, but following Mordasini et al. (2014), the interstellar medium (ISM) grain opacity contribution in Bell & Lin (1994) is multiplied by a factor f opa = 0.003. This value was found in Mordasini et al. (2014) to fit best the detailed simulations by Movshovitz & Podolak (2008) and Movshovitz et al. (2010) of the grain dynamics in protoplanetary atmospheres (growth, settling) and the resulting dust opacities. Using one global reduction factor of the ISM opacity can of course not reproduce the full complex behaviour of the grain opacity which depends on planetary properties like the core or envelope mass as found in grain dynamics models (Movshovitz & Podolak 2008). But as shown in Mordasini et al. (2014), it still provides a useful first approximation. The value is not increased when a planetary system with higher metallicity is simulated. The reason is that a higher dust input in the outermost layer (as possibly associated with a high metallicity system) does not lead to a strong increase of the opacity. 
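The iteration on the total mass described above can be summarised in a few lines: guess M_tot, integrate the structure inward, and accept the guess once the enclosed mass at the core radius matches the known core mass; the gas accretion rate then follows from the change of the envelope mass between two structure calculations. The 'core_mass_from_structure' function below is a toy stand-in for the actual 1D structure integration, included only so the bisection loop runs; all numbers are illustrative.

```python
def core_mass_from_structure(m_tot_guess, m_core_true=10.0):
    """Toy stand-in for the structure integration: returns the enclosed mass at R_core.
    It decreases as the guessed total (and hence envelope) mass increases."""
    return m_core_true + 0.5 * (12.0 - m_tot_guess)

def solve_total_mass(m_core, lo, hi, tol=1e-8):
    """Bisection on M_tot so that the integrated M(R_core) equals the known core mass."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if core_mass_from_structure(mid) > m_core:
            lo = mid       # too much mass left at R_core: the guessed M_tot is too small
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

m_core = 10.0                                # Earth masses
m_tot_previous, dt = 11.5, 1.0e3             # previous total mass [Mearth], time step [yr]
m_tot = solve_total_mass(m_core, lo=m_core, hi=100.0)
mdot_env = ((m_tot - m_core) - (m_tot_previous - m_core)) / dt
print(f"M_tot = {m_tot:.3f} Mearth, gas accretion rate = {mdot_env:.2e} Mearth/yr")
```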
This was found numerically in (Movshovitz & Podolak 2008) and explained analytically in Mordasini (2014): a higher dust input leads to a higher dust-to-gas mass ratio (which increases the opacity), but also larger grains (which decreases the opacity). These effects cancel each other out in the dominating growth regime of differential settling. The boundary conditions for the integration are taken as follows: the outer radius is given by, following Lissauer et al. where is the Bondi radius, R H is the Hill's radius (Eq. 29), k 1 = 1 and k 2 = 1/4. The pressure and temperature are derived from the local properties of the disc with P(R tot ) = P neb (a planet ) and (47) and being the optical depth at the surface of the planet (Mordasini et al. 2012c), using the reduced opacities for the grains. The more complex parts come from the luminosity and the mass. The calculation of the outer luminosity L(R tot ) is described in Section 4.2. In the case of the mass, what is known is the core mass, that is M(R core ) = M core , while M(R tot ) = M tot is the quantity that is being searched for. We thus use an iterative method by guessing M tot , which is then used to integrate the internal structure equations until the boundary condition at the inner boundary is fulfilled, that is M(R core ) = M core . Once M tot is found, the envelope mass can be retrieved by M env = M tot − M core , and the gas accretion rate by taking the difference of the envelope mass between two successive steps of the envelope structure calculatioṅ M env = (M env (t) − M env (t − ∆t))/∆t. Maximum gas accretion rate In the initial stages, the gas accretion is limited by the planet's ability to radiate away the potential energy provided of the accretion material, that is the Kelvin-Helmholtz process. The rate at which gas can be accreted is set by the Kelvin-Helmholtz time scale, However, as the planet's core reaches a mass of about 10 M ⊕ , the value of τ KH becomes so low that the planet undergo runaway gas accretion. In this phase, the amount of gas that the planet can accrete is constrained by the supply from the gas disc. Therefore, we compute the quantityṀ env,max , which is used to limit the value ofṀ env found by solving the internal structure equations. Our approach to compute the maximum rate is similar to Mordasini et al. (2012c) but using only the 'local reservoir' component. This a major difference from the previous versions of the Bern model, where gas accretion was constrained from the radial flow of the gas. Following D'Angelo & Lubow (2008) and Zhou & Lin (2007), we adopt a Bond-or Hill-like accretion in a region of size R gc around the planet. For simplicity, we compute R gc according to Eq. (45). Depending on the value of R gc with respect to H, the local disc's scale height, two different regimes occur. In the case where R gc < H, the planet will not accrete from the full vertical extent of the disc, and so the gas flow through the gas capture cross section σ cross = πR 2 gc is given bẏ with ρ ≈ Σ/H the approximate density of the gas and v rel = max (ΩR tot , c s ) the relative velocity between the gas and the planet. On the other hand, in the case R gc > H, the planet will accrete from the whole gas column and the approximation of constant gas density breaks down. 
In this situation, the gas flow through the planet's capture radius is provided only by the radial extent of the gas capture area, so that the maximum rate is set by the full gas column crossing the capture radius (of order R_gc Σ v_rel). To distinguish between the two regimes, we use the lower of the two rates. Finally, to ensure that no more gas than available in the feeding zone M_feed is accreted during one time step, we further constrain Ṁ_env,max < M_feed/Δt. We consider the limiting case to be that gap formation does not reduce the planetary gas accretion rate. Such a situation arises if the eccentric instability (Papaloizou et al. 2001; Kley & Dirksen 2006) allows the planets to efficiently access disc material even after a gap has formed. For circular orbits, gap formation would in contrast strongly reduce the gas accretion rate (Lubow et al. 1999; Bryden et al. 1999) and limit planetary masses to ∼5-10 M_Jup. The radial extent of the feeding zone is set such that, with f_feed = 0.5, the overall extent is half a Hill radius larger than the radial excursion of the planet's orbit. This radial extent defines the region over which the disc properties (Σ, H, etc.) are averaged for the calculation of the maximum rate, and over which the accreted gas is removed via the Σ̇_g,planet sink term. The planet's eccentricity consequently does not directly affect the maximum gas accretion rate, but only indirectly through the size of the feeding zone. The self-limitation of gas accretion by the removal of local disc gas by the planet, which then needs to be replenished by the inflow from more distant disc regions (i.e. mass conservation), is fully taken into account in our scheme via the Σ̇_g,planet term entering the evolution equation of the disc gas surface density. We also take into account that, for planets of any mass growing in multi-planet systems, the eccentricity can be increased via gravitational planet-planet interactions, which then affects the feeding-zone width and thus indirectly the gas accretion rate. On the other hand, we currently do not take into account that the eccentric instability (i.e. the increase of a single giant's eccentricity because of gravitational interaction with the gas disc) in reality only acts for sufficiently high giant planet masses (Papaloizou et al. 2001; Kley & Dirksen 2006).

Fig. 5 (caption, partly recovered). Top panel: maximum gas accretion rate that can be supplied by the gas disc (labelled 'Max.') and effective accretion rate (labelled 'Eff.'), which is given by intrinsic cooling in the initial attached phase and by the maximum rate in the detached phase. Bottom panel: corresponding envelope mass (i.e. total gas accreted).

This could potentially explain why our current model of disc-limited gas accretion seems to reduce the stellar gas accretion rate too strongly (Manara et al. 2019; Bergez-Casalou et al. 2020). Gap formation would reduce this effect, but could potentially lead to another issue: observationally, the giant planet mass function seems to extend smoothly to about 30 M_Jup (Sahlmann et al. 2011; Adibekyan 2019) (though Santos et al. 2017 and Schlaufman 2018 found a change in the metallicity dependency at about 4 to 5 M_Jup and concluded that the planets above that threshold formed predominantly by gravitational instability). Reaching such high masses could be difficult given the expected reduction of gas accretion because of gap formation in the circular case. The reduction of gas inflow into the inner disc because of an accreting giant planet can result in the clearing of the inner region of the protoplanetary disc by photoevaporation (Rosotti et al. 2013).
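A minimal sketch of the disc-limited rate described in this subsection: a capture-cross-section rate when the capture radius is smaller than the disc scale height, a column-limited rate otherwise, the lower of the two being retained and capped by the gas mass available in the feeding zone. The numerical prefactors (π R_gc² and 2 R_gc) and the example numbers are assumptions of this sketch rather than the model's exact expressions.

```python
import numpy as np

def mdot_env_max(sigma, H, r_gc, omega, r_tot, c_s, m_feed, dt):
    rho = sigma / H                                # approximate midplane density
    v_rel = max(omega * r_tot, c_s)                # relative gas-planet velocity
    mdot_thick = np.pi * r_gc**2 * rho * v_rel     # R_gc < H: full capture cross-section
    mdot_thin = 2.0 * r_gc * sigma * v_rel         # R_gc > H: limited by the gas column
    mdot = min(mdot_thick, mdot_thin)
    return min(mdot, m_feed / dt)                  # never exceed the feeding-zone content

# Illustrative cgs numbers for a giant planet in the runaway phase
rate = mdot_env_max(sigma=100.0, H=5e11, r_gc=8e11, omega=2e-8,
                    r_tot=8e11, c_s=1e5, m_feed=5e29, dt=3.15e9)
print(f"disc-limited gas accretion rate = {rate:.2e} g/s")
```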
This effect is also automatically taken into account by our model. To compare the prescription presented here with previous work, we provide in Fig. 5 the comparison of the gas accretion rate for the second outermost planet from the case shown in Fig. 4. The previous methodology, using the radial gas flow and taking into account the geometry was described in Mordasini et al. (2012c), with a limit of 0.9 of the radial flow to allow some gas to flow through the gap (Lubow & D'Angelo 2006). The results show that using the Bondi rate, as we presented here, gives a somewhat stronger limitation of gas accretion by the forming planet, especially during the onset of the runaway gas accretion. As a result, the final planet's mass is a bit lower when using the Bondi rate. Detached phase Once the gas accretion rate exceeds the maximum that can be provided by the disc -which includes the planet no longer being in a region where gas is present -the accretion regimes changes to the detached phase (Bodenheimer et al. 2000). In the detached phase, the solid and gas accretion rate are known (for the gas, it is given by the disc-limited rate), but not the planet's radius. The radius is determined following the approach of Mordasini et al. (2012c,b), that is by using the same internal structure equations as in the attached, but iterating on the radius until convergence is reached. The pressure outer boundary conditions are modified to take into account that the disc and the envelope are no longer connected, and that the gas free-falls onto the surface of the planet P(R tot ) = P neb (a planet ) + P edd + P ram + P rad (57) with P neb (a planet ) being the pressure at the midplane of the gas disc, P edd = (2g)/(3κ) the Eddington expression for the photospheric pressure due to the material residing above the τ = 2/3 surface, P rad = (2σ SB T 4 (R tot ))/(3c) the radiation pressure, c being the speed of light in vaccum, and being the ram pressure due to the accretion shock and the freefall velocity at the surface of the planet. Evolutionary phase For the evolutionary phase (after the dispersal of the gas disc), the outer boundary conditions are set to where T 4 int = L tot /(4πσ SB R 2 tot ) is the intrinsic temperature, T eq = T * R /(2 * a planet ), and A = 0.343 is the albedo, which is taken be the same as Jupiter (Guillot 2005). This value was selected for simplicity, although hot-Jupiter planets may have lower values (e.g. Mallonn et al. 2019). We thus use an Eddington grey boundary condition taking the stellar irradiation into account, as described in Mordasini et al. (2012c). During evolution, we assume a solar-composition condensate-free gas for the opacities, using the opacity tables of Freedman et al. (2014). Nebular grain opacity is neglected, at they are found to rain out quickly once gas accretion stops (Movshovitz & Podolak 2008). The identical envelope and atmospheric composition (pure H/He, solar composition opacities) in all planets means that for planets with identical bulk properties (orbital distance, core and envelope mass), the predicted radii will exhibit an artificially reduced spread. In reality, planets have different enrichment levels of heavy elements in the envelope (e.g. Fortney et al. 2013). This affects the equation of state and opacity, resulting in particular in a larger spread of the radii (e.g. Burrows et al. 2011;Müller et al. 2020). Example To illustrate the calculation of the internal structure, we provide snapshots of envelope structures in Fig. 
Example

To illustrate the calculation of the internal structure, we provide snapshots of envelope structures in Fig. 6 and the time evolution of the radius and luminosity in Fig. 7. These are taken from the second outermost planet of the system shown in Fig. 4, which is a giant planet whose final mass is 6.4 M_Jup. Due to the different scales involved in the attached, detached, and evolutionary phases, they are shown in different panels. During the attached phase, the structure extends to the Bondi radius (Eq. 46), which is much larger than the core radius. Therefore, the structure spans a wider range of pressures. The upper part of the envelope is radiative while the lower part is convective, with several transitions in the mid region. The red profile shows the internal structure at the beginning of the transition from the attached to the detached phase (the time marked with a dashed vertical line in the inset of Fig. 7). Note that the planet is still accreting during the initial stages of the detached phase.

Accretion and contraction

The luminosity calculation suffers from the same problem as the total mass in the attached phase, or the outer radius in the detached phase: the new structure needs to be known to retrieve its energy, and hence the luminosity. This means that the total energy of the new structure needs to be estimated for a luminosity to be obtained. The model uses the approach of Mordasini et al. (2012c). The total energy (Eq. 61) is the sum of the gravitational binding energy and the internal energy, with u being the specific internal energy of the gas, as obtained from the equation of state. The gravitational binding energy term includes the contribution from the core. For simplicity, we assume that the core has a constant density, so its contribution is taken as −3/5 G M_core²/R_core. It should be noted that this is not strictly self-consistent with our model for determining its density and radius, which assumes differentiation (Mordasini et al. 2012b); however, the difference remains small (Linder et al. 2019). The parameter ξ in Eq. (61) represents, as in polytropic models, the mass distribution and, additionally, the thermal energy content; it is retrieved from Eq. (61). The internal luminosity L_int resulting from accretion, cooling, and contraction can then be obtained from the change of the total energy over a time step, with Ṁ_tot = Ṁ_core + Ṁ_env being the total accretion rate of the planet (solids and gas). The values of Ṁ_tot in the attached phase and of Ṙ_tot in the detached phase are determined from the guess for the mass or radius during the iterations. The same is not true for ξ̇_tot. [Caption of Fig. 6: the red line shows the first profile of the detached phase and appears in both panels; the green and blue profiles lie at the transitions between two stages and are shown in two panels each; in each profile, thin lines mark where energy transport is radiative and thick lines where it is convective.] To circumvent this problem, we estimate the luminosity with a simplified expression in which the ξ̇_tot term is neglected and a correction factor C is applied. The value of C can be calculated a posteriori by determining the actual total energy of the new planet; it is then used for the next time step. Marleau et al. (2017, 2019b) conducted 1D radiation-hydrodynamic simulations of the planetary gas accretion shock, a feature that is seen in various 3D radiation-hydrodynamic simulations of accreting protoplanets of sufficiently high mass (e.g. Szulágyi & Mordasini 2017; Schulik et al. 2020). High post-shock entropies were found, suggesting that warm or hot gas accretion is more plausible than cold accretion.
We therefore assume in our model that gas accretion in the detached phase is hot, which means that we do not subtract the accretion shock luminosity from L_int (see Mordasini et al. 2012c). In addition to the accretion and contraction luminosity, we include the luminosity from radioactive decay, from bloating for close-in planets, and, in the case of brown dwarfs, from deuterium fusion. The radiogenic luminosity L_radio includes contributions from the three most important long-lived radionuclides, 40K, 238U, and 232Th (Wasserburg et al. 1964). To compute the luminosity contributions, we follow the procedure of Mordasini et al. (2012b): we assume that the mantle of the protoplanets has a chondritic composition, and the energy production rates are retrieved from the meteoritic values of William (2007). The initial radiogenic contribution is Q_0 ≈ 5 × 10^-7 erg g^-1 s^-1 of mantle material (all elements besides iron).

Bloating of close-in planets

Massive, close-in planets exhibit anomalously large radii (Laughlin et al. 2011). To reproduce this effect, we include a bloating mechanism based on Sarkis et al. (2021). For planets which are in the detached or evolutionary phase and directly irradiated by the host star, we include an additional luminosity contribution that is based on the best empirical fit obtained by Thorngren & Fortney (2018), where F = L_*/(4π a_planet²) is the total stellar flux at the planet's location and τ_mid is the optical depth from the star to the planet's location through the mid-plane of the disc, as in Eq. (6). We only apply bloating if the stellar flux F (in the evolutionary phase), or the stellar flux attenuated by the optical depth, F exp(−τ_mid) (before the dispersal of the gas disc), is greater than 2 × 10^8 erg cm^-2 s^-1 (Demory & Seager 2011).

Deuterium burning

For the calculation of the luminosity due to deuterium fusion, we follow the procedure of Mollière & Mordasini (2012). In this framework, the energy generation rate (per unit mass and time) is given by Kippenhahn & Weigert (1994), with the assumption that the nuclei are fully ionised and non-degenerate. The energy released in each process is computed according to Fowler et al. (1967). The specific deuterium burning luminosity of a planet depends on the conditions in the planet's gaseous envelope, most notably the density, the temperature, and the remaining deuterium nuclei. This implies that there is no universal mass at which deuterium burning starts; but, as already found in Mollière & Mordasini (2012) (see also Bodenheimer et al. 2013), the mass where burning becomes important clusters around about 13 M_Jup. The presence of a solid core thus does not significantly alter the mass where burning starts relative to (coreless) brown dwarfs (Chabrier & Baraffe 2000). We use an initial deuterium number fraction [D/H] = 2 × 10^-5, which is the primordial value. Our model also includes the enhancement of the reaction rate by screening, that is, the shielding of the positive charges by the surrounding electrons. In turn, screening is affected by the electron degeneracy, as we are dealing with objects of high central densities. This procedure follows the work of Dewitt et al. (1973) and Graboske et al. (1973).

Total luminosity

The final luminosity is then given by the sum of the contributions from accretion and contraction, radiogenic decay, bloating, and deuterium burning. We assume that at a given time the luminosity does not change within the envelope, that is, ∂L/∂r = 0. This approximation is fine under most circumstances because energy transport is due to convection and the luminosity enters only in the radiative gradient. During rapid gas accretion in the detached phase, under the effect of hot accretion, the interior may become radiative (Berardo et al. 2017; Berardo & Cumming 2017), and we do not account for the decrease of the luminosity with depth. This will be addressed in future work.
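A minimal sketch of how the total luminosity could be assembled, applying the bloating contribution only above the flux threshold of 2 × 10^8 erg cm^-2 s^-1 quoted above; all argument names are hypothetical, not the model's API.

```python
import math

def total_luminosity(l_int, l_radio, l_bloat, l_deuterium,
                     stellar_flux, tau_mid=0.0, disc_present=False):
    """Assemble the total luminosity (erg/s) from its contributions.

    Bloating is only applied when the stellar flux (attenuated by the
    mid-plane optical depth while the gas disc is still present) exceeds
    2e8 erg cm^-2 s^-1 (Demory & Seager 2011).
    """
    flux_threshold = 2.0e8  # erg cm^-2 s^-1
    effective_flux = stellar_flux * math.exp(-tau_mid) if disc_present else stellar_flux

    l_tot = l_int + l_radio + l_deuterium
    if effective_flux > flux_threshold:
        l_tot += l_bloat
    return l_tot
```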
Accretion of solids

The growth of the solid core of the planets can occur via three channels: 1) the accretion of planetesimals (e.g. Greenzweig & Lissauer 1992; Thommes et al. 2003), 2) the accretion of pebbles (e.g. Ormel & Klahr 2010; Johansen & Lacerda 2010; Lambrechts & Johansen 2012), and 3) collisions with other embryos (which we call giant impacts). In the Generation III model, we consider accretion of planetesimals and giant impacts; the inclusion of pebble accretion is the subject of ongoing work (Voelkel et al. 2020). For planetesimal accretion, core growth is given by the probability of collisions with planetesimals in the oligarchic regime (Ida & Makino 1993), as described in Fortier et al. (2013). This is a major difference to the first generation of the Bern model, which followed Pollack et al. (1996) for the planetesimal accretion rate. Following Chambers (2006), the core growth rate is computed in a particle-in-a-box approximation, with Σ̄_s the mean surface density of planetesimals in the planet's feeding zone and p_coll the collision probability with planetesimals. As in Ida & Lin (2008), we use the same prescription to calculate the planetesimal accretion rate independently of a protoplanet's orbital migration rate. In addition, we address the possible impact that orbital migration could have in the context of the shepherd/predator regimes proposed by Tanaka & Ida (1999): in the idealised situation studied by Tanaka & Ida (1999) (a single protoplanet per disc, no local reservoir of planetesimals, no growth via collisions with other protoplanets), shepherding was found to significantly reduce the planetesimal accretion rate for protoplanets migrating sufficiently slowly. However, in the more realistic N-body simulations by Daisaka et al. (2006), where multiple protoplanets (oligarchs) form and grow concurrently as expected in the oligarchic regime (Kokubo & Ida 1998), the trapping of planetesimals by the protoplanets is only tentative and does not significantly reduce their accretion rates. The situation we consider here is similarly more realistic: there are many embryos per disc, each protoplanet has a local reservoir of planetesimals in its initial feeding zone that is accessible without migration, and solid accretion is dominated initially by planetesimals and later by collisions with other protoplanets (Sect. 8.1). We therefore expect shepherding to be of limited importance; we discuss these points further in Appendix A.

Capture probability

We distinguish three different accretion regimes depending on the random velocities: low, mid, and high velocity. The distinction is based on the reduced planetesimal eccentricity ẽ_plan = r e_plan/R_H and inclination ĩ_plan = r i_plan/R_H (r being the heliocentric distance): the high-velocity regime applies for ẽ_plan, ĩ_plan ≳ 2, the mid-velocity regime for 2 ≳ ẽ_plan, ĩ_plan ≳ 0.2, and the low-velocity regime for 0.2 ≳ ẽ_plan, ĩ_plan.
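The regime selection based on the reduced eccentricity and inclination, together with a particle-in-a-box growth rate, could look like the following sketch. Taking the maximum of ẽ and ĩ for the classification and the 2πΣ̄_s R_H² p_coll/P_orb prefactor are assumptions; the collision probability itself must come from the regime-dependent expressions of Inaba et al. (2001).

```python
import numpy as np

def velocity_regime(e_plan, i_plan, r_helio, r_hill):
    """Classify the planetesimal random-velocity regime (low / mid / high)
    from the reduced eccentricity and inclination, as described above."""
    e_red = r_helio * e_plan / r_hill
    i_red = r_helio * i_plan / r_hill
    level = max(e_red, i_red)   # assumption: use the larger of the two
    if level >= 2.0:
        return "high"
    if level >= 0.2:
        return "mid"
    return "low"

def core_accretion_rate(sigma_s_mean, r_hill, p_coll, period_orb):
    """Particle-in-a-box core growth rate (sketch, cgs).

    The prefactor 2*pi/P_orb is an assumed scaling with the local orbital
    frequency; p_coll is the regime-dependent collision probability."""
    return 2.0 * np.pi * sigma_s_mean * r_hill**2 * p_coll / period_orb
```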
According to Inaba et al. (2001), each regime has a different expression for the collision probability, where R_cap is the planetesimal capture radius of the planet, β = i_plan/e_plan, and the I_F and I_G functions can be approximated following Chambers (2006). The final collision probability is then given by the combination of the three regimes. In the initial stage, the capture radius R_cap is the physical radius of the core, R_core. Once the planet has a sufficiently massive H/He envelope, the envelope enhances the capture cross-section for planetesimals. As in Fortier et al. (2013), the capture radius is then obtained following Inaba & Ikoma (2003) by solving an implicit equation for the radius at which incoming planetesimals are captured. The enhancement of the capture radius over the physical radius is very important for increasing the planet's planetesimal accretion rate (Podolak et al. 1988; Venturini & Helled 2020). We highlight this in Fig. 8, which compares the planetesimal capture radius to that of the core for the same planet we highlighted in Fig. 7. The calculation of the envelope structure begins at about 10^4 yr; before that, the capture radius is equal to that of the core. At that moment, the core mass is 9 × 10^-2 M_⊕. By the time the core reaches 1 M_⊕ at 4.8 × 10^5 yr, the capture radius is 9 times the core radius. Therefore, for small, roughly km-sized planetesimals as in our case, the enhancement of the capture radius is already important for low-mass bodies (starting at about 10^-1 M_⊕), and the calculation of gaseous envelopes cannot be omitted at any stage. The eccentricity and inclination damping by nebular gas drag is more efficient for smaller planetesimals, which leads to a larger gravitational cross-section (a larger Safronov factor); the envelope drag that further enhances the capture radius is a second effect making the accretion of small planetesimals more efficient. This reflects that the accretion of km-sized planetesimals is not a purely gravitational process.

Ejection of planetesimals

Planets not only accrete material; they also induce gravitational perturbations on the planetesimals that come close by but are not accreted. These planetesimals, if they receive a sufficient velocity kick from a close approach with a planet, can be ejected from the system. To estimate this effect, we follow a procedure similar to Ida & Lin (2004a). The planetesimals that receive a velocity kick greater than the escape velocity from the primary, v_esc = [2 G M_*/a_planet]^(1/2), will likely be ejected from the system. The fraction of accreted-to-ejected planetesimals then follows Ida & Lin (2004a). [Caption of Fig. 9: illustration of the procedure used to separate planetesimal feeding zones when the zones would otherwise overlap. The horizontal axis represents the separation from the central star and four planets are shown; the light-coloured areas below the horizontal line show the initial feeding zones, while the ones above show the final zones; a_mid2,3 and a_mid3,4 are the edges of the new feeding zones.]

Feeding zone

To obtain the mean surface density of planetesimals in the feeding zone, we must determine its extent. The half-width of the feeding zone (centred at the planet's location) is usually given in terms of the Hill radius as R_feed = b R_H. For a planet on a circular orbit, conservation of the Jacobi energy implies that b = [12 + (4/3)(ẽ_plan² + ĩ_plan²)]^(1/2) (e.g. Hayashi et al. 1977). So, in a quiescent disc with ẽ_plan, ĩ_plan ≪ 1, b = 2√3 ≈ 3.5. For numerical stability reasons, however, we assume b = 5, as in Fortier et al. (2013).
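A short sketch of the feeding-zone half-width described above, computing both the Jacobi-energy value and the fixed value b = 5 adopted in the model; treating b = 5 as a floor rather than a constant is an assumption made for illustration.

```python
import numpy as np

def feeding_zone_half_width(r_hill, e_red=0.0, i_red=0.0, b_adopted=5.0):
    """Half-width R_feed = b * R_H of the planetesimal feeding zone (sketch).

    b_jacobi follows from conservation of the Jacobi energy and gives
    2*sqrt(3) ~ 3.5 in a quiescent disc; the model adopts b = 5 for
    numerical stability.  Using b = 5 as a floor is an assumption.
    """
    b_jacobi = np.sqrt(12.0 + 4.0 / 3.0 * (e_red**2 + i_red**2))
    b = max(b_jacobi, b_adopted)
    return b * r_hill

# Quiescent-disc check: b_jacobi = 2*sqrt(3) ~ 3.46, so the adopted b = 5 applies.
```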
In the general case, to account for a non-circular orbit of the planet, we take the feeding zone to span from r_peri − R_feed to r_apo + R_feed, with r_peri and r_apo being the peri- and apocentre of the planet's orbit, respectively. When multiple planets are present in the same disc, their feeding zones may overlap. To avoid problems with two planets accreting from the same location, such as mass-conservation issues, we separate the feeding zones so that there is at most one planet accreting at any location in the disc. A graphical representation of the following procedure is provided in Fig. 9. First, we compute the regions in the disc from which planets accrete. If a region contains a single planet, then its feeding zone is the same as in the single-planet case (as in Region 1 of Fig. 9). If there are multiple planets in one region (as in Region 2 of that figure), the inner edge of the innermost planet and the outer edge of the outermost planet are set to the edges of the region. For the other edges, we sort the planets by distance and, for each pair, we compute the location of the boundary between their feeding zones by weighting with the square roots of the planet masses, the subscripts 'in' and 'out' denoting the inner and outer planet of the pair. We scale with the square root of the planet masses because the area of a feeding zone scales with the square of the distance; this scaling keeps the areas of the feeding zones related to the planet masses. We tested alternative prescriptions, such as using the cubic root of the mass (as for the Hill sphere) or the midpoint between the two planets, and found that the choice of prescription does not significantly affect the outcomes of the simulations.
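The construction and separation of the planetesimal feeding zones described above can be sketched as follows. The precise square-root-of-mass weighting of the interior boundaries and the dictionary-based interface are illustrative assumptions; the published model may place the boundary somewhat differently.

```python
import math

def feeding_zone_extent(a, e, r_feed):
    """Feeding zone of a single planet: from (pericentre - R_feed) to
    (apocentre + R_feed), as described above (distances in au)."""
    r_peri, r_apo = a * (1.0 - e), a * (1.0 + e)
    return r_peri - r_feed, r_apo + r_feed

def separate_feeding_zones(planets):
    """Separate overlapping feeding zones within one accretion region (sketch).

    `planets` is a list of dicts with keys 'a', 'e', 'r_feed', 'mass',
    sorted by semi-major axis (hypothetical names).  Interior boundaries
    between adjacent planets are placed so that the heavier planet keeps
    the larger share of the contested region, using square roots of the
    masses as weights.
    """
    zones = [list(feeding_zone_extent(p['a'], p['e'], p['r_feed'])) for p in planets]
    for k in range(len(planets) - 1):
        inner, outer = planets[k], planets[k + 1]
        if zones[k][1] > zones[k + 1][0]:  # adjacent zones overlap
            w_in, w_out = math.sqrt(inner['mass']), math.sqrt(outer['mass'])
            # boundary closer to the lighter planet (heavier planet gets more area)
            a_mid = (w_out * inner['a'] + w_in * outer['a']) / (w_in + w_out)
            zones[k][1] = a_mid
            zones[k + 1][0] = a_mid
    return zones
```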
Core radius

To obtain the radius of the core (and its density), we apply a methodology similar to Mordasini et al. (2012b). This model also accounts for the composition of the core and the pressure burden exerted by the envelope. The principle is to solve structure equations similar to those for the envelope, that is, Eqs. (41) and (42), but with an equation of state that takes the form of a modified polytrope from Seager et al. (2007), which reads ρ(P) = ρ_0 + c P^n. We include three different materials: iron, silicates (perovskite, MgSiO3), and ice, whose parameters ρ_0, c, and n are taken from Seager et al. (2007). Because of the small thermal expansion coefficient of these materials compared to H/He, the temperature-independent modified polytropic EOS neglects a possible temperature dependence of the core radius; it should, in any case, be small (Grasset et al. 2009). For gas giant planets, whose envelopes can reach masses of thousands of Earth masses, the envelope can cause a significant compression of the core (Baraffe et al. 2008). Thus, the pressure at the core's surface is taken as the boundary condition of the calculation to include this effect. Core compression can be observed in Fig. 8, where the core radius shrinks after the envelope contracts at 1.06 Myr. The core composition is retrieved from the accreted planetesimals described in Sect. 3.3.3 and from other embryos in the case of giant impacts. The chemical composition is used to obtain the fractions of the different elements needed to compute the core radius. While the chemistry model includes 32 refractory (Thiabaud et al. 2014) and 8 volatile (Marboeuf et al. 2014b) chemical species, the core radius calculation groups them into only three types: iron, silicates, and water ice. Thus, we map all ice species to water ice when calculating the core structure, and all refractories except iron to the silicate mantle. The reason for this is that, first, equations of state are only available for a limited number of species and, second, the differences between different types of, for instance, silicates are not very large (Seager et al. 2007).

Atmospheric escape

During the evolutionary phase, that is, after the dissipation of the gaseous disc, planets at small distances from their host star (∼ 0.1 au) receive intense XUV stellar irradiation, which drives atmospheric escape. This effect is especially important for low-mass planets, which can lose the whole of their gaseous envelope due to their low gravitational binding energy (e.g. Lammer et al. 2009; Lopez et al. 2012; Owen & Jackson 2012; Jin et al. 2014; Jin & Mordasini 2018). The stripping of the whole envelope has a significant effect on the planets' radii. Due to the low density of the gas, the presence of an envelope results in a significant increase of a planet's size even if the envelope mass is only at the percent level of the total planet mass. Bare cores are thus clearly separated from objects that retain a gaseous envelope, and a gap is observed in the distribution of planetary radii (Owen & Wu 2013; Lopez & Fortney 2013; Jin et al. 2014; Chen & Rogers 2016; Fulton & Petigura 2018). The evaporation model is based on Jin et al. (2014). It takes into account contributions from X-ray and extreme-ultraviolet (XUV) irradiation. At early stages, the evaporation is typically X-ray driven. We describe this regime using the energy-limited rate from Jackson et al. (2012), using the flux in the 1 to 20 Å range from Ribas et al. (2005) and assuming an efficiency factor ε = 0.1. At later stages, evaporation driven by the EUV flux takes over. We also use the work of Ribas et al. (2005) to obtain the time-dependent EUV stellar luminosity for a Sun-like star. EUV evaporation can be divided into two sub-regimes (Murray-Clay et al. 2009). At low EUV fluxes, the same energy-limited approximation as for the X-ray flux is used; in this case, the escape rate depends on the EUV flux F_EUV and on the radius of the photoionisation base, R_base, calculated as in Murray-Clay et al. (2009). On the other hand, the energy-limited approximation is not suitable when the EUV flux is high (> 10^4 erg cm^-2 s^-1), as a substantial part of the heating is lost to cooling radiation. In this regime, we adopt the radiation-recombination-limited approximation of Murray-Clay et al. (2009). The mass-loss rate is then given by the escaping wind, Ṁ_env,rr ∼ 4π ρ_s c_s R_s² (Eq. 81), evaluated at the sonic point R_s, which is calculated in the same way as R_acc. Here, c_s = [k_B T/(m_H/2)]^(1/2) is the isothermal sound speed of ionised gas with T = 10^4 K. The density ρ_s can be related to the one at the ionisation base, where τ = 1. The photoionisation base is located where there is equilibrium between photoionisation and recombination, with n_0,base the density of neutrals at the base, hν_0 = 20 eV, σ_ν0 = 6 × 10^-18 cm² (hν_0/13.6 eV)^-3, α_rec = 2.7 × 10^-13 cm³ s^-1, and ρ_base = n_+,base m_H. The model also includes the effect of Roche-lobe overflow. When solving the internal structure equations, there are sometimes solutions in the detached and evolutionary phases for which the radius is larger than the Hill sphere. This occurs in two situations. First, for close-in low-mass planets with a high envelope mass fraction: at the moment when the nebula dissipates (and thus the ambient pressure vanishes) and the star starts to irradiate the planets directly (resulting in an increase of the temperature, see Fig. 3), these planets bloat. Second, giant planets that get very close to the star because of tidal spiral-in (see Sect. 5.3) can also overflow their Roche lobe. In these cases, we remove at each time step the part of the H/He envelope that is outside of the Hill sphere.
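The selection between the EUV escape regimes and the radiation-recombination-limited rate Ṁ_env,rr ∼ 4πρ_s c_s R_s² given above could be sketched as follows; the energy-limited branch is intentionally not written out, since its expression is not reproduced in the text.

```python
import numpy as np

K_B = 1.381e-16   # erg K^-1
M_H = 1.673e-24   # g

def euv_regime(f_euv):
    """Return which EUV escape approximation applies, following the text:
    radiation-recombination-limited above 1e4 erg cm^-2 s^-1, else
    energy-limited."""
    return "radiation-recombination" if f_euv > 1.0e4 else "energy-limited"

def radiation_recombination_rate(rho_sonic, r_sonic, t_ion=1.0e4):
    """Mdot_env,rr ~ 4 pi rho_s c_s R_s^2, with c_s = sqrt(k_B T / (m_H/2))
    the isothermal sound speed of ionised gas at 1e4 K (cgs units)."""
    cs_ion = np.sqrt(K_B * t_ion / (M_H / 2.0))
    return 4.0 * np.pi * rho_sonic * cs_ion * r_sonic**2
```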
Initial conditions

The simulations begin with a predetermined number of embryos whose initial mass is M_emb,0 = 10^-2 M_⊕ (approximately the mass of the Moon). They are randomly placed with a uniform probability in log a, where a is the semi-major axis, between r_in and 40 au. The starting zone is slightly more extended than in previous studies, where the upper boundary was set to 20 au. Also, two embryos cannot be placed within 10 Hill radii of each other. It should be noted that, for the simulations with the largest initial number of embryos (100), this represents an average spacing of 28 Hill radii. The presence of a number of embryos right at the beginning of the simulations is a strong assumption, made because the model does not track the formation of the embryos themselves. This shortcoming will be addressed in future versions of the model (Voelkel et al. 2020), in which the evolution of dust, pebbles, and planetesimals, as well as the formation of embryos, is followed.
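A minimal rejection-sampling sketch of the initial embryo placement described above (log-uniform in semi-major axis between r_in and 40 au, with a minimum mutual separation of 10 Hill radii); the function names and the simple retry loop are assumptions, not the model's actual initialisation routine.

```python
import numpy as np

AU = 1.496e13        # cm
M_EARTH = 5.972e27   # g
M_SUN = 1.989e33     # g

def hill_radius(a_cm, m_planet, m_star=M_SUN):
    """Hill radius in cm for a planet of mass m_planet at distance a_cm."""
    return a_cm * (m_planet / (3.0 * m_star))**(1.0 / 3.0)

def place_embryos(n_embryos, r_in, r_out=40.0, m_emb=1.0e-2 * M_EARTH,
                  min_sep_hill=10.0, rng=None, max_tries=10000):
    """Draw embryo semi-major axes (au), log-uniform between r_in and r_out,
    rejecting candidates closer than `min_sep_hill` Hill radii to an
    already placed embryo."""
    rng = rng or np.random.default_rng()
    positions = []
    tries = 0
    while len(positions) < n_embryos and tries < max_tries:
        tries += 1
        a = np.exp(rng.uniform(np.log(r_in), np.log(r_out)))
        r_h_au = hill_radius(a * AU, m_emb) / AU
        if all(abs(a - b) > min_sep_hill * r_h_au for b in positions):
            positions.append(a)
    return sorted(positions)
```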
Dynamical evolution: Orbital migration, N-body interaction, and tides

As the planet mass increases, it generates a stronger perturbation in the density of the gas around the planet. This perturbation causes the nebula to no longer be axisymmetric and, as a consequence, produces a torque back on the planet, leading to planetary migration. At the same time, convergent migration can result in capture into mean-motion resonances or in orbital destabilisation. Hence, migration and dynamical evolution must be modelled together to capture all the effects.

Planetary migration

We include two types of migration: Type I for low-mass planets embedded in the gas disc and Type II for planets massive enough to open a gap in the disc.

Type I migration

For Type I migration, our model follows the approach of Coleman & Nelson (2014). This includes the torque formulation of Paardekooper et al. (2011), modified to account for the attenuation of the corotation torques by orbital eccentricity and inclination (Bitsch & Kley 2010, 2011). The total Type I torque on a planet, following Eqs. (50) to (53) of Paardekooper et al. (2011) and Eq. (15) of Coleman & Nelson (2014), combines Γ_L, Γ_hs,baro, Γ_hs,ent, Γ_c,lin,baro, and Γ_c,lin,ent, which are the Lindblad torque and the barotropic and entropy-related parts of the horseshoe drag and of the linear corotation torque, respectively. They are given by Eqs. (3) to (7) of Paardekooper et al. (2011). The function F governs saturation, while G and K provide the cutoff at high viscosity; they are given by Eqs. (22), (30), and (31) of Paardekooper et al. (2011). The other factors in Eq. (84) account for the shape of the orbit. F_L provides the reduction of the Lindblad torque for eccentric or inclined orbits following Cresswell & Nelson (2008). Here, ê = e/h = e/(H/r) and î = i/h = i/(H/r) are the planet's orbital eccentricity and inclination scaled by the disc's aspect ratio h = H/r. F_e and F_i provide the reduction of the corotation torques due to eccentricity and inclination (Bitsch & Kley 2010). We use the prescription suggested by Fendyke & Nelson (2014) for the reduction due to eccentricity and that of Coleman & Nelson (2014) for the reduction due to inclination. Eccentricity and inclination damping time scales follow Cresswell & Nelson (2008), with

τ_e = (t_wave/0.78) [1 − 0.14 ê² + 0.06 ê³ + 0.18 ê î²],

and an analogous expression for the inclination damping time scale τ_i, where t_wave is the characteristic time of evolution of density waves (Tanaka & Ward 2004).

Type II migration

The criterion to detect gap opening and switch migration to Type II is from Crida et al. (2006), with ν the viscosity from Eq. (2). Type II orbital migration follows the non-equilibrium approach of Dittkrist et al. (2014). Here, the planet follows the radial velocity of the gas (Pringle 1981), but the rate is limited if the planet's mass is much larger than the local disc mass (the fully suppressed case, see Alexander & Armitage 2009). For the larger planet masses, when the migration rate is constrained by the disc-to-planet mass ratio, this prescription results in a behaviour similar to the formula obtained by Kanagawa et al. (2018), although it does not take into account the aspect ratio h of the disc. For our migration scheme, we convert the radial velocity of the planet, v_planet, into a torque. This prescription in principle allows planets in Type II migration to migrate outwards if the disc is decreting (Veras & Armitage 2004). In practice, however, this mechanism is restricted to planets that are already at large distances or to the final moments of the disc, and it is limited by the small surface density there (Dittkrist et al. 2014). During Type II migration, the eccentricity and inclination damping time scales are set to τ_e = τ_i = |τ_a|/10 = a_planet/(10 |v_planet|). This relationship was selected because hydrodynamical simulations of migrating planets in this regime have shown that eccentricity and inclination damping act on time scales that are shorter than migration (Kley et al. 2004; Kley 2019).

Migration map

An example of the outcome of the whole migration scheme for one disc profile is provided in Fig. 10. The disc is the same as in the example shown in Fig. 3 at 1 Myr; at this time the disc mass is 1.46 × 10^-2 M_⊙. Its outer radius is 123 au, so we cut the figure at 200 au, since there is no migration outside this distance. Migration is most efficient for intermediate-mass planets, from about 10 M_⊕ up to the transition to Type II migration (shown with the dashed black line on the migration map). The outward migration at large separations in the Type II regime is due to the outward spreading of the gas disc. We also note two convergence zones for low- to mid-mass planets. These are due to opacity transitions (Lyra et al. 2010) or to structures in the gas disc (Kretke & Lin 2012), such as the increase of the surface density close to the inner edge of the disc. These are the locations where, for a given planet mass, outward migration occurs on the inner side and inward migration on the outer side. Hence, at this moment of the evolution, planets with masses of less than ≈ 8 M_⊕ cannot reach the inner edge of the disc by migration alone. However, as time passes and gas becomes scarcer, the zones of outward migration (hence the convergence zones) shift towards lower planetary masses. Thus, by the end of the gas disc, planets with masses down to ≈ 2 M_⊕ can reach the inner edge of the disc.
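The Type II migration speed limit and the associated damping time scales could be sketched as follows; the use of 2πΣ_g a² as the local disc mass is an assumed proxy for the 'fully suppressed' limitation described above, not the exact expression of the model.

```python
import math

def type2_radial_velocity(v_gas_radial, sigma_g, a_planet, m_planet):
    """Type II migration speed (sketch, cgs): the planet follows the radial
    gas velocity, reduced when the planet is more massive than the local
    disc (assumed proxy: 2*pi*Sigma_g*a^2)."""
    m_disc_local = 2.0 * math.pi * sigma_g * a_planet**2
    return v_gas_radial * min(1.0, m_disc_local / m_planet)

def type2_damping_timescales(a_planet, v_planet):
    """tau_e = tau_i = |tau_a| / 10 = a_planet / (10 |v_planet|)."""
    tau_a = a_planet / abs(v_planet)
    return tau_a / 10.0, tau_a / 10.0
```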
N-body integration

Gravitational interactions between the protoplanets are now modelled with the mercury N-body code (Chambers 1999) using the hybrid method. Unlike the direct resolution of the equations of motion (as performed in A13), this uses a symplectic integration scheme (see e.g. Sanz-Serna 1992, for a review). The basic principle is to use the solution of Hamilton's equations, where x denotes the position coordinates, p the momentum coordinates, and H the Hamiltonian of the system, with ∆x_ij = |x_i − x_j|. Here, the index i = 0 refers to the central star, with M_0 = M_*, while the subsequent indices refer to the planets, with M_i = M_planet,i, so that N is the number of planets in the system. While H has no analytical solution for N > 1, it is possible to split the Hamiltonian into several pieces, solve the simpler problems, and finally combine them back so that a solution of the full problem is recovered. The Hamiltonian is split into three components, so that H = H_K + H_S + H_I. Here, H_K represents the unperturbed Keplerian orbits of the planets about the central star, H_S the kinetic energy of the star, and H_I the interactions between the planets. The separation into three different Hamiltonians (rather than two) is required because the scheme uses mixed-centre coordinates (also called 'democratic heliocentric'): heliocentric positions and barycentric velocities. These coordinates are chosen so that H_K ≫ H_S, H_I, unless two planets come close together. The evolution of such a system by splitting is performed with a second-order method, where the notation H_...(τ) is used to represent the evolution under the given Hamiltonian for a step τ. For H_I, this means that the planets receive a kick in velocity due to the interactions with the other bodies (except the central star). In our case, H_I is extended to include additional forces representing the effect of the gas disc, see Section 5.2.1. The evolution under H_S results in a shift of the positions by (τ/(2 M_*)) Σ_i p_i, while the evolution under H_K is a Keplerian motion around the central star for a time τ. As we noted, the assumption that H_I is small compared to H_K is no longer valid when two bodies come close together. In that situation, the idea is to bring the interaction between the two close-by bodies into H_K so that the interaction Hamiltonian remains small. This implies that H_K is no longer analytically integrable for the orbits of the involved bodies during that period. In practice, the orbits of the two close-by bodies are integrated with a conventional Bulirsch-Stoer method (Stoer & Bulirsch 1980) for the duration of the encounter. The algorithm is described in detail in Chambers (1999). The symplectic integration scheme has a huge advantage in terms of computational requirements compared to a standard Bulirsch-Stoer method, as the interaction between the planets, the one part that is O(N²), is only computed once per step. We do not use the N-body integrator when there is only one protoplanet in a system, as the solution is then analytical. This happens either for populations with one embryo per system or in the unlikely case that only one planet survives in a planetary system with initially multiple embryos per system.

Additional forces

Migration and damping are included as additional forces in the N-body integration. The contributions from migration and eccentricity damping apply in the orbital plane and are split into tangential (θ) and radial (r) components, while the inclination damping acts on the vertical component (z). Here, a denotes the additional accelerations, v the planet's velocity along each direction, and v_K = Ωr the Keplerian velocity.
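To make the splitting concrete, the sketch below evaluates the three components H_K, H_S, and H_I for a set of planets in mixed-centre (democratic heliocentric) coordinates; it only illustrates the decomposition described above, not the actual integration steps of the mercury code.

```python
import numpy as np

G = 6.674e-8  # cgs

def hamiltonian_components(m_star, masses, pos_helio, mom_bary):
    """Compute H_K, H_S, and H_I for heliocentric positions and barycentric
    momenta of N planets (arrays of shape (N, 3), cgs units)."""
    masses = np.asarray(masses, dtype=float)
    r = np.linalg.norm(pos_helio, axis=1)

    # H_K: unperturbed Keplerian motion of each planet about the star
    h_kep = np.sum(np.sum(mom_bary**2, axis=1) / (2.0 * masses)
                   - G * m_star * masses / r)

    # H_S: kinetic energy associated with the star's motion
    p_sum = mom_bary.sum(axis=0)
    h_star = np.dot(p_sum, p_sum) / (2.0 * m_star)

    # H_I: pairwise planet-planet interactions
    h_int = 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            dx = np.linalg.norm(pos_helio[i] - pos_helio[j])
            h_int -= G * masses[i] * masses[j] / dx

    return h_kep, h_star, h_int
```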
Collision detection

Collisions are detected when two planets come closer than a predetermined distance, which is the sum of their radii. When the closest approach is found to occur during one of the substeps of the N-body integration, the minimum distance is retrieved by fitting a third-degree polynomial whose conditions are set by the relative separation and the radial velocity of the pair at the beginning and end of the substep (similar to A13). For planets with a significant and extended envelope (as during the attached phase), the assumption that planets have a unique radius that decides whether a collision occurs is not very accurate, as the outcome is determined by the gas dynamics inside the merging envelopes. As we do not have the full envelope structure in the N-body integration, we nevertheless retain the unique-radius approach. In the attached phase, the envelope transitions smoothly into the surrounding nebula. The outer radius, as provided by Eq. (45), is unsuitable for the detection of collisions, as it corresponds to very low gas densities. Thus, the radius used to detect collisions is computed assuming that the whole planet mass has the same density as its core. This is an approximation, but it reflects that the gas density in the envelope is much higher close to the (solid) core surface. In the detached phase, we use the planetesimal capture radius R_cap; this is normally an overestimation of the effective collision radius, as larger bodies need to penetrate deeper into the envelope to be captured. However, in this phase the envelope scale height is small compared to the radius, except for the very short time directly after detachment, so the actual error is small.

Collision treatment

When a collision is detected, the following procedure is applied: the cores merge, any envelope of the less massive body is assumed to be ejected, and the impact energy is added as an additional contribution to the luminosity for the structure calculation of the new body. Because of the merger of the cores, a part of the impact energy is already taken into account consistently by the luminosity calculation described in Section 4.2; the additional (supplementary) energy is therefore computed from the impact kinetic energy, where µ = M_tot,1 M_tot,2/(M_tot,1 + M_tot,2) is the reduced mass, the indices 1 and 2 refer to the larger and smaller body, respectively, and v_imp is the relative velocity at the time of contact. Here, E_acc,core = G M_tot,1 M_core,2/(R_core,1 + R_core,2) is the centre-of-mass impact energy of two bodies with the total mass of the target and the core mass of the impactor colliding at their mutual escape velocity. Also, we restrict the supplementary energy to positive values. Negative values can arise if the bodies collide at below the mutual escape velocity, which is possible due to the drag by the gas disc or in specific configurations, such as co-orbitals. However, the impact velocity is never much lower than the mutual escape velocity, so the error remains small. The additional core mass and luminosity are released over a time scale τ_impact = 10^4 yr after the time of the impact, t_impact, as in Broeg & Benz (2012). These two terms are added to the core accretion rate due to planetesimal accretion and to the luminosity (Sect. 4.2) used in the internal structure calculation, respectively. This impact model was tailored for the most common collisions that we find in our simulations. We highlight this by showing cumulative distributions of the impactor-to-target mass ratio γ for different ranges of target masses in Fig. 11. At low masses, most target/impactor pairs have similar masses, so this source of growth cannot be neglected. In contrast, most collisions involving giant planets are with much smaller impactors (the red curve in Fig. 11). Our model neglects the envelope of the impactor, but there are only few collisions where this could provide a significant source of mass.
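A sketch of the collision bookkeeping described above: the supplementary energy (floored at zero) and an assumed exponential release of that energy over τ_impact = 10^4 yr. The exact combination of terms and the functional form of the release in the published model may differ.

```python
import math

G = 6.674e-8           # cgs
YEAR = 3.156e7         # s

def supplementary_impact_energy(m_tot_1, m_tot_2, m_core_2,
                                r_core_1, r_core_2, v_imp):
    """Additional impact energy injected as luminosity after a merger (erg).

    Impact kinetic energy (reduced-mass form) minus the part already
    accounted for by accreting the impactor's core, floored at zero."""
    mu = m_tot_1 * m_tot_2 / (m_tot_1 + m_tot_2)
    e_acc_core = G * m_tot_1 * m_core_2 / (r_core_1 + r_core_2)
    return max(0.0, 0.5 * mu * v_imp**2 - e_acc_core)

def impact_release_rate(e_sup, t, t_impact, tau_impact=1.0e4 * YEAR):
    """Assumed exponential release of the supplementary energy over the
    time scale tau_impact, starting at the time of the impact (erg/s)."""
    if t < t_impact:
        return 0.0
    return e_sup / tau_impact * math.exp(-(t - t_impact) / tau_impact)
```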
Tidal evolution

During the evolution phase, we include the inward migration of planets due to the tides they raise on the central star. In addition to planets that are pushed inwards due to capture in mean-motion resonances, this provides another channel to obtain planets that are within the inner boundary of the gas disc. The tidal migration rate is computed following equilibrium-tide prescriptions (Ferraz-Mello et al. 2008; Jackson et al. 2009; Benítez-Llambay et al. 2011), where Q_* = 10^6 is the stellar dissipation parameter. It is clear that this model for the tidal spiralling-in is strongly simplified. It will be improved in future work along the lines of, for example, Bolmont & Mathis (2016).

Terrestrial planet formation

We begin by studying whether the new generation of the Bern model, with a higher initial number of embryos but still including a statistical description of planetesimals, is capable of reproducing models of terrestrial planet formation that are purely N-body (e.g. Chambers 2001), that is, where the planetesimals are represented as individual (test) particles. This test is crucial to assess whether we can reach our goal of having a formation model that is able to simulate the growth of planets over a very large mass range, from about that of Mars to brown dwarfs. This is in contrast with earlier generations of the Bern model, where the focus was mainly on more massive planets (or, more specifically, on planets for which the giant impact phase after disc dissipation is not very important). The formation of terrestrial planets is not subject to the same time constraint as that of gas giants. For planets with a significant H/He envelope, a sufficiently massive core must be formed before the dispersal of the gas disc, but this does not apply to terrestrial planets. Indeed, in the case of the Earth, cosmochemical evidence points to a formation time between a few tens of Myr (Yin et al. 2002; Kleine et al. 2002) and roughly 100 Myr (Touboul et al. 2007; Allègre et al. 2008; Kleine et al. 2009). This is longer than the expected lifetime of the solar system's nebula of about 4 Myr by roughly an order of magnitude or more. Hence, the modelling of the formation of planetary systems with terrestrial planets needs to span a longer time period for dynamical effects (i.e. the 'late stage') than for gas-dominated planets.

Setup

For these test cases, we performed a few modifications to our main model to mimic earlier work like Chambers (2001) and Raymond et al. (2005). Orbital migration has been disabled and, for the envelope structure calculation and the evolution phase, all planets are treated as purely rocky. We adopt an initial surface density profile close to the minimum-mass solar nebula (MMSN; Weidenschilling 1977; Hayashi 1981), with a reference surface density of Σ_0,s = 7.1 g cm^-2 at r_0 = 1 au, but truncated at 2 au, as we are primarily interested in the inner planets. This also helps to determine more precisely which fraction of the planetesimal disc has been accreted by the terrestrial planets during their formation. This gives a solid mass of 3.67 M_⊕.
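As a quick consistency check of the quoted solid mass, the following snippet integrates the truncated power-law planetesimal surface density; the inner edge of 0.1 au and the MMSN slope of −3/2 are assumptions, and with them the integral indeed gives ≈ 3.7 M_⊕.

```python
import math

AU = 1.496e13        # cm
M_EARTH = 5.972e27   # g

def disc_solid_mass(sigma_0=7.1, r_0=1.0, r_in=0.1, r_out=2.0, slope=-1.5):
    """Total solid mass (in Earth masses) of a power-law planetesimal disc,
    Sigma_s = sigma_0 * (r/r_0)**slope, truncated between r_in and r_out (au).

    sigma_0 is in g cm^-2; r_in = 0.1 au and slope = -3/2 are assumptions
    made only for this check."""
    # M = 2*pi * integral of Sigma(r) * r dr
    if abs(slope + 2.0) < 1e-12:
        integral_au2 = math.log(r_out / r_in) * r_0**2
    else:
        p = slope + 2.0
        integral_au2 = (r_out**p - r_in**p) / p * r_0**(-slope)
    return 2.0 * math.pi * sigma_0 * integral_au2 * AU**2 / M_EARTH

print(f"{disc_solid_mass():.2f} M_Earth")  # ~3.67, consistent with the quoted value
```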
The initial number of embryos is selected to give a spacing similar to the two populations presented in Emsenhuber et al. (in rev., Paper II) with the most embryos per system, which means that we have initially 23 (corresponding to 50 in Paper II) and 46 (corresponding to 100) lunar-mass (0.01 M_⊕) embryos. In addition, we perform one run with 9 embryos initially in Sect. 6.3, which corresponds to 20 embryos in Paper II. It should be noted that the model lacks the 'dynamical friction' obtained in N-body simulations with a large number of small bodies (O'Brien et al. 2006; Raymond et al. 2006), because we do not include the damping of the eccentricities and inclinations of the embryos by the planetesimals. However, after all the material has been accreted onto the planets, the remainder of the formation process is similar to pure N-body simulations of terrestrial planet accretion, as all the mass is then contained in bodies that are directly followed by the N-body integrator. For some simulations, we include Jupiter and Saturn to determine the effects they have on the formation of the inner planets. In that case, Jupiter and Saturn are on their present-day orbits, but they are rotated so that their invariant plane coincides with that of the disc (as in Chambers 2001, 2013; Emsenhuber et al. 2020). We do not model the formation of these planets, because they form over a period that is much shorter than that of the terrestrial planets. To obtain a better overview of the influence of the parameters we are studying, and to reduce (and better understand) the stochastic effects of the N-body interactions, we perform 10 simulations for each combination of parameters (initial number of embryos and presence of the outer planets). The only differences between the 10 simulations are the initial positions of the terrestrial planet embryos. For the 10 simulations, we consider the average outcomes as being representative (e.g. Fig. 12). The simulations start with a gas disc, which lives for roughly 4.4 Myr. Its only effect, however, is to damp the eccentricities and inclinations of the planetesimals. Planetesimal accretion continues after the dispersal of the gas disc. As the planets do not have envelopes, we perform only the formation stage of the calculation. However, the duration of that stage has been extended to 400 Myr to account for the much longer time needed for the solar system's terrestrial planets to reach their final configuration.

Gravitational interactions

If the embryos remain at their initial locations during the whole formation process, they grow to their isolation mass (Lissauer 1987). In our model, we obtain this behaviour if we artificially remove the N-body interactions, unless the feeding zones of two adjacent embryos overlap at some point, in which case the masses become slightly lower. When using this mode, the runs starting with 46 embryos have accreted roughly half of the disc's mass onto the embryos by about 4 Myr (the time at which the gas disc disperses) and accrete very slowly thereafter. For the runs starting with 23 embryos, only a quarter of the mass ends up in the embryos by 4 Myr. For the other parameter sets (all with gravitational interactions), Fig. 12 provides the results averaged over the 10 simulations, for the masses of solids and the number of embryos. The story is quite different when N-body interactions are included.
We see, for instance, that in the case with 46 embryos and no outer giant planets, nearly all the planetesimals have been accreted onto the embryos. For the case with 23 embryos initially and no outer giant planets, more than half of the planetesimals end up accreted. There are two aspects we point out here. First, in the figure, the planetesimal mass accreted by embryos that are later ejected is counted as accreted. Second, our planetesimal model does not include the redistribution of material by interactions with the embryos. For instance, in their less realistic setup where embryos only populate a limited orbital distance range in the disc, Levison et al. (2010) found that planetesimals can be redistributed to locations outside of the embryos' feeding zones rather than be accreted. However, when they add the mechanism that embryos can reside in all parts of the disc (which is more realistic, Levison et al. 2010), no gap in the planetesimal disc opens, as embryos mutually scatter planetesimals into their vicinity and eventually accrete them. This leads to an efficient formation of massive planets. Feeding zones also overlap in our simulations; therefore, the effect of planetesimal redistribution should play little role in our case, as there are very few locations in the disc where planetesimals would not be accreted by the local embryo and/or scattered back into the feeding zone of other embryos.

Interactions lead to more massive planets

To understand how the embryo-embryo interactions lead to a quasi-complete accretion of the planetesimal disc, we show the formation tracks for one particular system with a varying number of embryos in Fig. 13. We can easily observe that the larger the number of embryos, the more and the sooner they start to move around. In the system with only 9 embryos, they basically remain where they started and grow slightly above their isolation mass. For the other two simulations, however, the local isolation mass is sufficient to trigger significant embryo-embryo interactions that change their positions in the disc. This in turn enables them to accrete from regions that would otherwise be inaccessible, which creates a positive feedback, since more massive planets result in yet more interactions. This feedback only ends when nearly all planetesimals have been accreted onto the embryos. [Fig. 13 caption fragment: middle panels show mass versus time, where sudden increases in mass are due to embryo-embryo collisions; bottom panels show semi-major axis versus time.] Thus, more closely packed embryos lead to enhanced stirring of their eccentricities, which has two consequences: an increase of the feeding-zone size because of the radial excursion on eccentric orbits, and collisions between embryos. Embryos with a greater eccentricity can sample a broader region of the disc and thus grow to a larger mass before depleting the disc. Collisions with other embryos are capable of bringing in material from more distant regions of the disc that would otherwise not be accessible to a single embryo. In the end, we arrive at a result that is maybe counterintuitive at first: the larger the number of embryos, the fewer planets remain. We observe this, for instance, in the bottom panel of Fig. 12.

Time needed for formation

We find a similar pattern for the timing at which interactions start in the two simulations with the higher numbers of embryos in Fig. 13 (23 and 46 embryos).
In the early phase (a few 10 5 yr), no dynamical interactions occur, because the embryos need to reach a certain mass before the eccentricities can be significantly excited. Then, the first embryos to show an increased eccentricity are located at ∼ 0.3 au, and then this propagates both inwards and outwards. In the inner part of the system, collisions happen rather rapidly so that the system has essentially obtained its final configuration by several Myr. On the other hand, in the outer region we observe that embryos remain on eccentric orbits for a certain amount of time before suffering from collisions. It takes more than 10 Myr for the planets located at about 1 au to reach their final mass. In the even more distant regions, it takes even longer, and we see the phase with several embryos on eccentric orbits remaining for more than 100 Myr. Such a growth wave travelling from the inside to the outside is expected, as the growth process scales with the local Keplerian frequency. Therefore, our choice of the integration time dictates the location where and how accurately the model can follow the formation of the terrestrial planets. With our choice of an integration time limited to 20 Myr for the formation phase, the model can only track most of the giant impact stage inside of roughly 1 au for systems that have a MMSN-like surface density of solids. Even within 1 au, the giant impact stage is not entirely finished within our set time frame, as it can be see in the innermost planet by about 300 Myr in the bottom right panel of Fig. 13. Nevertheless, these events remain rare. Locations further away or systems with a lower amount of solids (as formation is slower for less massive systems, Kokubo et al. 2006;Dawson et al. 2015) will, however, not have reached a final state by end of the formation stage at 20 Myr. With outer giant planets As the final stage of terrestrial planet formation (the giant impact stage) takes longer than formation of the giant planets, we also want to consider the effects of their presence on terrestrial planet formation. Here we perform the same simulations again, each time with the addition of two outer giant planets that represent Jupiter and Saturn. To provide a better comparison point between the two cases, we provide in Fig. 14 several snapshots of the simulations. One general consequence at earlier times is that there is slower growth for the embryos beyond 1.5 au. We see in the two top rows than the outermost embryos remain smaller in the runs with outer giant planets. Also, their eccentricities have already increased in the first snapshot, while this is not the case at all for the runs without giant planets. The underlying cause is stirring of planetesimal's eccentricity and inclination by the giant planets; this heavily reduces the collision probability with the low-mass protoplanets (Inaba et al. 2001) and hence the accretion rate. A consequence of the longer timescales of accretion in the outer part of the disc is the state at the moment of the dispersal of the gas disc. In the runs with giant planets, a larger percentage of the planetesimals remains unaccreted at the moment the gas disperses. In addition, after that point, there is no longer gas present to counterbalance the effects of the stirring by the giant planets. This means that after a short moment, the planetesimals will reach eccentricities of the order of unity and will be ejected. This can be observed in Fig. 
12, where we see that up to a quarter of the original mass is ejected from the planetesimal disc. The final eccentricities of the terrestrial bodies are similar in both cases (Fig. 14): the inner region is subject to self-stirring, while in the outer region the excitation by the outer planets compensates for the weaker self-stirring due to the lower masses. Thus, the outer giant planets limit and delay the growth of the terrestrial planets in the outer region. The number of objects is a bit higher than that obtained by pure N-body simulations of terrestrial planet formation, but we use a somewhat smaller initial surface density compared to, for example, Raymond et al. (2006), which prevents the accretion into a smaller number of higher-mass bodies (Kokubo et al. 2006). [Caption of Fig. 14: stacked eccentricity versus distance snapshots of 10 simulations, each with 46 embryos initially. The left column shows the runs with outer giant planets, whereas the right column shows the runs without. In each column, the 10 systems are represented with a different colour each; the bodies are shown by points whose sizes are proportional to their physical sizes, and black crosses show the solar system planets.]

Summary for terrestrial planets

To summarise, we have seen that, as long as the separation between the embryos is sufficiently small that dynamical interactions are triggered before the embryos reach their local isolation mass, the model is capable of reproducing the main features of the formation of terrestrial planets in good agreement with pure N-body models. This is because embryo-embryo interactions are able to increase the eccentricities, so that the embryos can move out of their original locations and almost entirely deplete the planetesimal disc. An integration period (for the formation stage) longer than the lifetime of the protoplanetary disc is necessary to follow the giant impact phase. The time required for the bodies to obtain their final characteristics increases with distance (as shown here) and with a decreasing initial amount of solids (e.g. Kokubo et al. 2006; Dawson et al. 2015). The limitation of the formation stage to 20 Myr (Section 2.2) permits us to capture all of the accretion of planetesimals (provided there are enough embryos initially) and most of the dynamical interactions of Earth-mass and larger planets forming via giant impacts out to roughly 1 au, as well as of sub-Earth planets in the first few tenths of an au (corresponding to periods of roughly 100 days). For the population syntheses in Paper II, we estimate, from tracking major changes of the planets' orbits, that for orbital distances of 1 au around 90 % of the major instabilities should have been captured when integrating the systems for 20 Myr. The integration time needed to capture most instabilities within a given orbital distance range is a function of the architecture of the planetary systems that results from the previous growth stages. If the growth and migration during the presence of the gas disc lead, for example, to very closely packed systems of massive planets, instabilities will often occur shortly after disc dispersal. On the other hand, if at gas disc dispersal only low-mass, widely spaced planets are present, they will first have to grow further via the accretion of remaining planetesimals and embryos -- which can take a very long time -- to eventually (or never) become unstable.
For larger planet masses, gravitational interactions can extend further: Even on distant orbits, massive planets can destabilise the system as noted by Bitsch et al. (2020) and Matsumura et al. (2021). This could explain why Izidoro et al. (2021) find that by 20 Myr only a fraction of the instabilities between the planets have happened in their setup, whereas Mulders et al. (2020) on the contrary find that increasing the integration time from 10 Myr to 100 Myr only leads to minor further evolution in their simulations. For systems lacking outer planets, Izidoro et al. (2021) also found a convergence after ∼30 Myr (their Model III). This is in better agreement to the analysis for planets with a < 1 au done in Mulders et al. (2020). The purpose of the model here is to obtain planetary systems that can be compared with observations at the population level. For this, it is important to see that the region where long-term growth will be most important (distant low-mass planets) represents at the same time the parameter space currently not accessible to most detection techniques of extrasolar planets (radial velocity, transits, and direct imaging). This should minimise the impact of this limitation. We acknowledge, however, that generally speaking, not all dynamical interactions will have taken place by the end of the integration time of 20 Myr in the model. While the later evolution should not be substantial enough to strongly affect the statistical results in the inner systems, this limitations must be critically kept in mind when comparing for example to microlensing surveys (e.g. Suzuki et al. 2018) that probe more affected regions. Nevertheless, we conclude that the new generation of syntheses can be used to describe in a much more comprehensive way planetary sub-populations ranging from sub-Earths to super-Jupiters. Giant planets The formation of giant planets is quite different. Cores must form before the dispersal of the gas disc so that they can undergo runaway gas accretion, and since we have massive cores in a gas disc, migration is efficient. To gain an understanding of the interplay of accretion and migration, we here show some illustrative cases with a single embryo per disc. For this case, we use the model without modifications, but the N-body is not used. The following examples are taken from the single-embryo population of Paper II. Simulations parameters are the same as provided in Table 1, except for disc masses and surface density (both gas and planetesimals), inner edge, characteristic radius, and external photoevaporation rate of the gas disc. In the following simulations, the inner radius has negligible effect on the final outcome, as we do not study close-in planets, and so we do not mention it. The characteristic radius r cut,g of the gas disc is set as M g 2 × 10 −3 M = r cut,g 10 au (see Paper II for the motivation). We provide the remaining two parameters, the initial masses of the gas and planetesimals discs in the following. Formation and evolution of Jupiter-mass planets We show in Fig. 16 the formation tracks of a few synthetic giant planets whose masses are in the 100 to 500 M ⊕ range and have a wide range of final positions. Due the inclusion of migration in the model, we observe that the final position of these planets is closer-in that the initial location of the embryo: all the embryos start beyond 10 au, with one close to 30 au, while all the planets end up inside 10 au. 
During the initial stage, both accretion and migration are slow, but accretion is still faster. As the planets grow, migration becomes more efficient; we observe that most of the migration occurs while the planets are close to the transition to gas giants, with masses between 20 and 50 M ⊕ . The innermost planet shows a strong inward migration at this stage, but this is due to limited accretion while migration remains at the same rate. Once the planets undergo the runaway accretion of gas and switch to type II migration, accretion is strong, and they experience limited migration. This leads again to near-vertical tracks. The two changes (from an attached to a detached envelope and from type I to type II migration) happen in the same period, not always in the same order. In one case (the inner most planet shown in red), the change of the migration regime occurs first, while in the three other cases it is the reverse. Once the migration regime changes to type II, the rate slows down (bottom centre panel of Fig. 17) but the accretion remains mostly constant. Thus, accretion dominates at the onset of this stage, but this reverses at the end. In contrast, Mordasini et al. (2009a) used the equilibrium values of the radial gas flow for both gas accretion and migration. Thus, the slope of detached planet migrating with the planetdominated case of the Type II regime exhibited a common slope in the mass-distance diagram. It should also be noted that for planets inside roughly 1 au, it happens that the criterion limiting the gas accretion rate changes to the mass in the feeding zone which leads to a reduction of the rate at the end of the formation. It can also be noted that our model allows for the growth of embryos at large separation (up to about 30 au, unlike the work of . The difference is mainly related to the planetesimals size. Smaller planetesimals have lower eccentricities and inclinations because of more efficient damping by the disc gas, and in addition, a larger capture probability by the planets for a given surface density because of the more strongly drag-enhanced capture radius for small planetesimals. This results in a larger accretion rate of solids, which enables planets to sufficiently grow to undergo runaway gas accretion before the dispersal of the gas disc also in the outer parts of the disc. The formation of Jupiter-like planets with migration and planetesimals accretion follows a different pattern in the oneplanet-per-disc approximation studied here than what was found by some other models using the in situ (and one-embryo-perdisc) approximation. For the latter, the favoured scenario is that a core between 10 and 20 M ⊕ forms early (less than 10 5 yr) and undergoes runaway gas accretion only close to the dispersal of the gas disc (Pollack et al. 1996;Alibert et al. 2018). The slow accretion of planetesimals, resulting in a steady luminosity, is able to prevent runaway gas accretion during the intermediate stage. This intermediate stage is the problematic part when migration is included; the reason being that migration is most efficient for planets that are between 10 and 50 M ⊕ (see Sect. 5.1.3 and Fig. 10). Hybrid pebble-planetesimals (Alibert et al. 2018) or pure pebble ) accretion models can account for the migration during the intermediate phase, as the cores are able to form at larger separation, provided that far out, bodies emerge early and massive enough to be able to efficiently capture pebbles. 
The simulations presented here show a situation with fast migration where no intermediate stage is possible, because the planets would otherwise end up at the inner edge of the gas disc without the opportunity to undergo runaway gas accretion. This means that the cores must form just in time to undergo runaway gas accretion. The usual picture of the formation of Jupiter-mass planets in our model is then more similar to what was found by Alibert et al. (2005a), with an almost nonexistent intermediate phase (left panel on the second row of Fig. 17). As the accretion timescales are longer at large separations, the embryos will accrete their mass over a longer period. At the same time, the inward migration experienced by the protoplanets means that their feeding zone is not depleted as in the in situ formation scenario.

In contrast, for the multi-embryo simulations that we present in Paper II (see also Sect. 8.1), migration can be significantly altered by mean-motion resonance chains. In that case, the torque acting on one planet must be spread over all the bodies, meaning that the planet with the largest specific torque will migrate slower than it would were it not in a resonance chain. This provides a way to obtain an intermediate stage and a lower overall efficiency of migration, as we show in that work. This effect leaves open the possibility of an intermediate stage for the formation of giant planets, as obtained in Alibert et al. (2018). Thus, once multiplicity is included, the simulations here become more similar to Jupiter formation models like Alibert et al. (2018).

More massive planets

Figure 17 shows the formation tracks of planets that are in the 2 to 10 M♃ range. Compared to the planets previously discussed, these show a greater range of initial locations (from 6 to 40 au) and of the overall effect of migration. The planet shown in orange is the quickest to accrete a massive core and undergo runaway gas accretion, due to both the more massive disc and the inner starting location; the latter is made possible by the disc's mass. This is also the planet that migrates the least before reaching 10 M ⊕ , because 1) the fast formation limits the effect of migration and 2) it enters a convergence zone (see Fig. 10 and the discussion in Sect. 5.1.3). As the boundary of the convergence zone moves inward (Lyra et al. 2010; Dittkrist et al. 2014) and to lower planetary masses over time, the planet shown in red will encounter the convergence zone at a different location, which will not affect it as much.

Unlike the Jupiter-mass planets, all the planets of this group first switch to type II migration before entering the detached phase. This is seen in the top right panel of Fig. 17, where the tracks become dashed and thin during a brief section. The slope break that was discussed for the Jupiter-mass planets is stronger for the two innermost planets. Comparing the time evolution of the two, it can be noted that the migration rate remains mostly constant while in the type II regime, while the accretion rate decreases. Concerning the radius and luminosity, we observe that all the planets show a similar behaviour despite the difference in the final location.

Giant planets ending in the star by tidal migration

As an illustration of how close-in planets are affected by the newly added physical processes during evolution, we finally discuss the formation and evolution of a close-in giant planet. Such planets raise tides on the star, which result in tidal migration.
The consequence is that the planet can be accreted by the star at some point during its evolution. We show such a case in Fig. 18. The formation stage looks quite similar to the previous example, with the difference that the planet ends at a close-in location, 0.04 au. The radius shrinks already before the planet enters the detached phase, because it experiences strong inward migration at the same time (as can be seen in the lower right panel of Fig. 18). As the planet migrates inward, the Hill radius shrinks. Once the detached phase begins, the Hill radius continues to shrink as further inward migration proceeds.

As this planet is close to the star (0.04 au), the evolution stage is different from the case shown previously. The luminosity increases over time, the envelope gradually expands and loses mass due to atmospheric escape, and the planet migrates further inward due to the tides raised on the star. The migration rate increases over time due to its strong dependence on the distance between the planet and the star (see Eq. (112)). To determine the reason for the luminosity increase, we show, alongside the total value, the contribution from bloating (Eq. (64)). We see that from late in the formation stage until the end, this contributes nearly all of the planet's luminosity. As it scales with the stellar flux, it increases at late times due to tidal migration. The luminosity increase in turn leads to an expansion of the envelope, which increases the loss rate by atmospheric escape. But rather than this being the main cause of gas loss, we see that the bulk of the envelope is removed because it overflows the Hill sphere. This occurs suddenly at the end of the planet's life, once the outer radius becomes larger than the Hill sphere. Only a bare core remains, which gets accreted by the star shortly thereafter.

Summary for giant planets

The formation and evolution of giant planets involve multiple concurrent processes. Migration being most efficient around the onset of runaway gas accretion, this phase must occur in a relatively short time for the planets not to end up at the inner edge of the disc, in the absence of another planet to prevent migration. This also means that the cores must form late (i.e. shortly before the dispersal of the disc) to prevent a massive envelope from being accreted. Close-in planets will experience additional effects during their evolution, such as atmospheric escape and inward tidal migration that can lead to accretion by the star. In the latter case, it is possible for Hill-sphere overflow to cause the loss of most of the envelope.
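The Hill-sphere overflow criterion invoked above can be sketched with a few lines of arithmetic: the Hill radius shrinks linearly with the orbital distance as the planet spirals in, and the envelope is lost once the (bloated) planetary radius exceeds it. A minimal sketch, assuming a circular orbit and round-number constants; the example planet mass and radius are illustrative and not the values of the system in Fig. 18, and this is not the model's actual tidal prescription:

```python
M_SUN_IN_MEARTH = 332_946.0   # solar mass in Earth masses (approx.)
AU_IN_REARTH = 23_481.0       # 1 au in Earth radii (approx.)

def hill_radius_rearth(a_au, m_planet_mearth, m_star_msun=1.0):
    """Hill radius in Earth radii for a planet on a circular orbit."""
    q = m_planet_mearth / (3.0 * m_star_msun * M_SUN_IN_MEARTH)
    return a_au * AU_IN_REARTH * q ** (1.0 / 3.0)

# A ~300 M_earth planet with a bloated 12 R_earth envelope spiralling inward:
# the envelope starts to overflow once R_planet > R_Hill.
for a in (0.04, 0.02, 0.01, 0.005):
    r_hill = hill_radius_rearth(a, 300.0)
    print(f"a = {a:5.3f} au   R_Hill = {r_hill:5.1f} R_earth   "
          f"overflow: {12.0 > r_hill}")
```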
Individual systems

After discussing formation pathways of terrestrial planets under idealised conditions, and of single giant planets, we finally show results obtained with the full model. Using many embryos per system, the model is able to produce a very large variety of planetary systems. These range from terrestrial planets (as we saw in the previous section) to giant planets. We first provide two examples of the temporal emergence of planetary systems and then show the variety of the final architectures of 23 systems.

Low initial solid content

The first system forms in a disc with a metallicity of [Fe/H] = -0.13, corresponding to a dust-to-gas ratio of 0.011. This results in a low initial solid content of 65.1 M ⊕ in the disc of planetesimals. The disc is seeded with 100 lunar-mass embryos at t = 0, distributed uniformly in the logarithm of the semi-major axis inside of 40 au (see Paper II for more details on the initial conditions).

Many aspects of the emergence of the planetary systems can be understood by comparing the timescales of growth and migration, and from the consequences of (large-scale) dynamical instabilities caused by the gravitational interactions of the protoplanets. Therefore, in Fig. 19, which shows the temporal evolution of the system in the a − M plane, we colour code the tracks of the planets by the ratio |τ_mig/τ_grow| = |d ln m/d ln a|. Regarding the timescales, it is of fundamental importance that the oligarchic planetesimal accretion timescale increases with increasing planet mass (e.g. Thommes et al. 2003), whereas the orbital migration timescale in the type I regime decreases with planet mass (e.g. Ward 1989).

At the beginning (10^5 yr, top left panel of Fig. 19), the quasi in-situ accretion of planetesimals present in the initial feeding zone of the embryos is the dominating process. Migration occurs at these very low masses on a much longer timescale, leading to nearly vertical upward tracks. We note that the model does not include any artificial reduction factors for type I migration. The specific distance dependency of the mass to which the protoplanets have grown by 10^5 yr is given by the following interplay of the growth timescale as a function of orbital distance and the local availability of solids: from the innermost embryo at about 0.03 au to the one at about 0.6 au, the protoplanets have already grown to the local planetesimal isolation mass (Lissauer 1987). Given the planetesimal surface density scaling with r^-3/2, the isolation mass increases with orbital distance. As can be seen in Panel b of Fig. 20, which shows the mean planetesimal surface density in the feeding zone of the planets, at 10^5 yr the surface density is already strongly depleted in the inner parts of the disc. Between the local maximum at 0.7 au and the water iceline at 2.7 au, the mass is in contrast decreasing with distance, because protoplanets further out grow more slowly. The next feature is a sharp increase of the protoplanets' mass by about a factor 2 across the water iceline, because of the increase of the solid surface density. One protoplanet grows in the transition zone, giving it an intermediate mass. Outside of the iceline, the masses decrease again with distance because of the longer growth timescales. For the protoplanets in the inner part that have already reached the isolation mass, the growth is temporarily stalled. Because of the very low (isolation) masses of these protoplanets, orbital migration is nevertheless negligible.

At the very beginning, all protoplanets grow as if they were the only bodies in the disc, not feeling the influence of the other protoplanets. With increasing mass, the interactions, either directly via the N-body or indirectly via resonant migration, become important. By 10^5 yr, the first dynamical interactions have started among some of the more massive protoplanets, which is visible as a 'jitter' in some tracks, and two collisions have occurred, which are shown by two open grey circles.
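The colour coding introduced above, |τ_mig/τ_grow| = |d ln m/d ln a|, can be evaluated directly from the stored (a, m) samples of a track. A minimal sketch, assuming the track is available as time-ordered arrays; the array names and the toy track values are placeholders, not simulation output:

```python
import numpy as np

def mig_to_growth_ratio(a_au, m_mearth):
    """|tau_mig / tau_grow| = |d ln m / d ln a| along a recorded track,
    evaluated on track segments; nearly vertical tracks give large values."""
    dlnm = np.diff(np.log(m_mearth))
    dlna = np.diff(np.log(a_au))
    with np.errstate(divide="ignore"):
        return np.abs(dlnm / dlna)

# Toy track: strong growth with little migration, then the opposite.
a = np.array([10.0, 9.9, 9.7, 8.0, 5.0, 4.8, 4.7])   # semi-major axis [au]
m = np.array([1.0, 5.0, 20.0, 40.0, 60.0, 200.0, 400.0])  # mass [M_earth]
print(np.round(mig_to_growth_ratio(a, m), 2))
```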
At 1 Myr (top right panel of Fig. 19), inside of the iceline, the character of growth has changed from planetesimal-dominated to some first growth via giant impacts (embryo-embryo collisions) for some protoplanets, or stalled growth for others. As can be seen in Panel a of Fig. 20, which shows the semi-major axis of the (proto)planets as a function of time colour coding the mass, about 10 further giant impacts have occurred. This has allowed the protoplanets in the inner disc to grow beyond the local isolation mass. As visible in Panel b of Fig. 20, at 1 Myr the planetesimal disc is now depleted out to about 1.3 au, and as time proceeds, the depletion moves even further out. We thus see a growth wave moving outwards (Thommes et al. 2003). All solid mass has been transferred into the embryos in this part, and their mutual interactions (giant impacts) govern the further mass growth. This implies that the accretion of planetesimals is only important at the early phases, when the planets grow mostly in situ.

Fig. 20. Same system as in Fig. 19, but now showing the semi-major axes a of the planets as a function of time, colour coding in panel (a) the planets' mass, in (b) the planetesimal surface density in the planets' feeding zone, and in (c) the local gas surface density. Here, the vertical line indicates the moment of gas disc dissipation. Panel (d) shows mass as a function of time, colour coding the semi-major axis. Small black circles indicate giant impacts, by showing the position or mass of the target (the more massive collision partner) at the moment of the impact.

In the outer disc beyond the iceline, growth in contrast still proceeds mainly via planetesimal accretion, as there is a larger mass reservoir available. Between 2 and 4 au, a group of about 10 protoplanets with masses of about 1 M ⊕ has formed, meaning that the most massive planets are now found further out than before. These protoplanets originate from (just) beyond the iceline. The colours of the lines in Fig. 19 show that migration is still much slower than accretion for these planets at 1 Myr, but some slight inward migration is now occurring, causing the tracks to bend inwards. This applies also to the inner disc, where horizontal tracks are visible. They result from the depletion of the planetesimal disc and from the fact that the cores are of such a low mass that virtually no gas accretion is possible.

At 3 Myr (bottom left panel of Fig. 19), in the inner disc, the dominant effect is further growth via giant impacts. About 25 protoplanets with masses between those of Mars and Earth are now present. In the outer disc, beyond the iceline, the aforementioned group of about 10 most massive protoplanets has grown further, now reaching a maximum mass of 3 M ⊕ , and has also migrated further inward. As these planets migrate into zones that have previously been depleted by inner planets (in particular inside of the iceline), planetesimal accretion is quickly stalled. This means that planetesimal accretion for migrating planets is usually limited in low-mass multiple systems like the one present here, and that a possible shepherding effect (Tanaka & Ida 1999), which we do not include in the model, should not affect the outcome very much, except for a transition phase where τ_mig ≈ τ_acc for some planets. This phase can be seen for the outer group from the cyan line colours.

As can be seen in Panel a of Fig. 20, the planets capture each other in very large resonant convoys and migrate together (e.g. Cresswell & Nelson 2008; Alibert et al. 2013). In this configuration, outer more massive planets push inner smaller planets. As visible in Fig. 20 by the small black circles, many giant impacts seem to occur in groups (i.e. at similar moments in time in fast sequence): a first group occurs at about 3 Myr, the next one at 4 Myr, and again one at the moment when the disc inside of about 2 au becomes free of gas. This is visible in Panel c of Fig. 20, which colour codes the gas surface density at the planets' position.
This moment corresponds to the opening of the inner hole in the gas disc because of internal photoevaporation (cf. Fig. 3). At this moment, the damping effect of the gas vanishes, allowing orbit crossings and collisions (e.g. Ida & Lin 2010). The outer gas disc dissipates a bit later, at 5.1 Myr, shown by the vertical line in Panel c of Fig. 20. After the dissipation of the disc, only 3 more giant impacts occur in this system up to 20 Myr.

Fig. 21. Temporal evolution of the eccentricities of the planets of the system emerging in the low-mass disc shown in Fig. 19. Colours indicate the planet mass. For better visibility, only planets more massive than 0.1 M ⊕ are shown. The curves are running averages such that one sees more clearly the mean values instead of rapid variations of the eccentricities. The thick black line is the mass of the gas disc relative to the value at 10^5 years, which is in turn very similar to the initial value. The increase of the eccentricities at around 5 Myr when the gas disc dissipates is visible.

The temporal evolution of the eccentricities is shown in Figure 21. The colours show the planet mass. For clarity, only planets with a mass of at least 0.1 M ⊕ were included. One can clearly see the increase of the typical values of the eccentricities near the time the gas disc dissipates at around 5 Myr. Before, typical values of the eccentricities are of the order of 10^-3 to a few 10^-2. After disc dissipation, they increase to values between about 0.02 and 0.2. Such values are expected from the increase of the velocity dispersion of the orbits by close encounters once the damping by the gas is gone, which proceeds until the dispersion is comparable to the escape velocity from the protoplanets' surfaces (Goldreich et al. 2004). One also sees that more massive bodies tend to be less eccentric, likely a consequence of energy equipartition. In our model, dynamical friction by residual planetesimals is neglected. This would reduce the eccentricities and inclinations of the protoplanets. This implies that our model tends to overestimate the eccentricities and inclinations of lower-mass planets, for which dynamical friction by planetesimals would play a role.
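The post-dispersal eccentricities quoted above are of the order expected when excitation by close encounters saturates near the surface escape velocity of the protoplanets (Goldreich et al. 2004), i.e. roughly e ~ v_esc/v_K. A minimal sketch of that scale; the assumed bulk density and the example masses and distances are illustrative, and the estimate is a rough upper envelope rather than the model output:

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
AU = 1.496e11          # m

def esc_over_kepler(m_mearth, a_au, density=5500.0, m_star_msun=1.0):
    """Rough post-disc eccentricity scale e ~ v_esc / v_K for a rocky body
    of the given mass (assumed bulk density in kg m^-3) at distance a_au."""
    m = m_mearth * M_EARTH
    r = (3.0 * m / (4.0 * np.pi * density)) ** (1.0 / 3.0)   # body radius
    v_esc = np.sqrt(2.0 * G * m / r)
    v_kep = np.sqrt(G * m_star_msun * M_SUN / (a_au * AU))
    return v_esc / v_kep

for m_p, a in [(0.1, 0.5), (0.3, 1.0), (1.0, 1.0)]:
    print(f"m = {m_p:3.1f} M_earth at {a:3.1f} au  ->  e ~ {esc_over_kepler(m_p, a):.2f}")
```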
The general sequence of solid growth, first dominated by the near in-situ accretion of planetesimals and followed by a second phase of growth via giant impacts, is well visible in Panel d of Fig. 20. It shows the mass of the protoplanets as a function of time. The line colours show the semi-major axis. We note that the transition between the two regimes occurs later for more distant planets. At the largest orbital distances where embryos were inserted into the disc (the maximum starting distance is 40 au), nearly no growth at all has occurred during the simulated period. As described in Sect. 5.2.3, numerically speaking, we add the mass of the impactor in a giant impact to the target over a timescale of 10^4 yr. This is the reason why the steps in the curves corresponding to giant impacts (indicated with the black circles) are not strictly vertical. This is visible particularly at early times.

The bottom right panel of Fig. 19 shows the system at 20 Myr, which corresponds to the time when we stop the N-body integration and planetesimal accretion. Between 3 and 20 Myr, numerous giant impacts have reduced the number of planets and destroyed the mean-motion resonances (see also Fig. 20). The inner system now contains 8 roughly Earth-mass planets, exhibiting a certain intra-system similarity of the mass scale (Millholland et al. 2017) with an increase towards the exterior (Weiss et al. 2018). At 0.7 au, there is a sudden increase in the typical mass, corresponding to the transition from volatile-poor planets that have formed inside of the iceline to very volatile-rich planets originating from beyond the iceline. Compared to the original location of the iceline at 2.7 au, there was thus an inward shift of this transition by about 2 au because of orbital migration. In the end, the planetesimal disc is depleted out to about 5 au. Outside, about 35 M ⊕ remain in the form of planetesimals. This corresponds to a fraction of about 46 % of the initial planetesimal mass that was converted into planets. Since we follow the accretion for only 20 Myr, this remaining planetesimal mass must be considered an upper limit for the actual mass of remaining planetesimals, as over longer timescales, the distant protoplanets would continue to accrete. However, since the accretion timescales at several 10 au in the absence of eccentricity damping (corresponding to orderly growth) become extremely long (Ida & Lin 2004a), at least some part of these planetesimals could remain to eventually form a debris disc, in analogy to the Kuiper belt beyond the orbit of Neptune in the Solar System.

High initial solid content

The second system we consider is System 852 in NG76 (Paper II). The initial conditions are here a disc mass of 0.066 M_⊙ and a metallicity of [Fe/H] = 0.23. This leads to an initial planetesimal mass of 432 M ⊕ , 6.6 times as much as in the first example. As in the previous case, 100 lunar-mass embryos are put into the disc at the beginning, uniform in the logarithm of the semi-major axis out to an orbital distance of 40 au. The evolution in the a − M plane is shown in Fig. 22. The semi-major axis and mass as a function of time are shown in Fig. 23.

In the top left panel of Fig. 22 we see that at 10^5 yr, the basic picture regarding the (relative) mass of the protoplanets as a function of orbital distance is analogous to the one in the low-mass disc at the same time. In absolute terms, the planet masses are, however, about one order of magnitude larger. As can be seen in Panel b of Fig. 23, the planetesimal disc is already strongly depleted out to about 1 au. Some giant impacts have also already occurred in the inner disc. This fast development away from the initial conditions is a sign that the early phase of solid growth (from dust to embryos) should be treated more explicitly (e.g. Voelkel et al. 2021).

The situation at 0.5 Myr is already quite different, as a first core has undergone runaway gas accretion, at about 0.35 Myr. By 0.5 Myr, its mass has already grown to about 350 M ⊕ . In the end, it will have a mass close to 750 M ⊕ and be the innermost giant planet. The starting position of this embryo was 4.5 au. The water iceline in this system is for comparison found at about 3.4 au. The formation of this first giant planet does not yet strongly affect the rest of the system, at least at this moment.

Fig. 22. Example of the formation of a planetary system from initially 100 lunar-mass embryos in a high gas mass (initial mass 0.066 M_⊙), high metallicity ([Fe/H] = 0.23) disc. The initial mass of planetesimals is 432 M ⊕ . The plot is analogous to Fig. 19, but the y-axis now extends to much higher masses, and the moments in time that are shown are different. At the end of the simulation at 20 Myr, this system contains one close-in sub-Neptunian planet, three giant planets, and a group of outer very low-mass planets.
In the inner system, we in particular see a similar development as in the low-mass disc: the formation of very large resonant convoys and some giant impacts. However, shortly after 0.5 Myr, a second core, located about 0.5 au outside of the first giant, also starts runaway gas accretion. The embryo of this planet started at about 5.3 au, and was for some time in a resonant configuration with the first giant-to-be. It will eventually become the most massive giant planet in the system (about 2100 M ⊕ ) at 1.2 au.

The growth of this second giant planet has important system-wide consequences, as can be seen in the panel at 1 Myr. It not only destabilises several Neptunian planets in the vicinity of the forming giants, but it also sends a protoplanet of about 3 M ⊕ from about 0.9 au into the inner system (close to 0.1 au). The orbit of this planet is eccentric, and it triggers numerous giant impacts among the protoplanets in the inner system (see Panel d of Fig. 23). These orbit crossings and impacts are facilitated because the runaway gas accretion by the two forming giants temporarily and strongly reduces the gas surface density in the inner disc, reducing eccentricity damping (Panel c of Fig. 23). In the end, only the intruder from the exterior remains, the mass of which has increased to about 13 M ⊕ by accreting the local protoplanets.

The formation of the second giant also scatters an initially low-mass protoplanet (0.7 M ⊕ ) from about 2 au onto a very eccentric orbit with a semi-major axis of about 15 au. This protoplanet then grows out there (potentially in a monarchical growth mode; Weidenschilling 2005), reaching a mass of about 3 M ⊕ by 1 Myr. By about 1.4 Myr, its mass has increased to 8 M ⊕ , and a phase of rapid inward migration sets in. It then runs from outside into a group of 7 protoplanets at about 2 to 4 au that are captured in MMRs with the giant planet that had formed second (see Panel a of Fig. 23). A series of giant impacts occur, and at 1.8 Myr, the protoplanet coming from the outside starts runaway gas accretion. It eventually becomes the third giant planet in the system, with a mass of about 630 M ⊕ at 2.5 au.

Interestingly enough, this implies that giant planets in a system need not be strictly coeval, which could be of importance for example for direct imaging observations. Here, the outermost giant is nearly 1.5 Myr younger, and starts runaway accretion only when the inner two planets have already reached nearly their final mass. Actually, the fact that this third outer planet forms strongly reduces the gas accretion rate of the middle giant, by reducing the gas surface density in the inner system (see Panel c of Fig. 23). So, more precisely speaking, the formation of this third giant actually sets the final mass of the giant planet inside of it. A comparable, transient depletion of the inner gas disc is also already seen when the inner two giants form, as mentioned. It should be noted that the degree of depletion of the inner disc because of gas-accreting giant planets might be overestimated in our model (Manara et al. 2019; Nayakshin et al. 2019; Bergez-Casalou et al. 2020). Then, these indirect interactions via the disc would be reduced. The lifetime of the gas disc is in this example about 3.4 Myr.
This is less than the lifetime of the low mass system studied in the previous section, despite the higher starting mass. The difference is mainly a consequence of the higher external photoevaporation by nearly a factor 5 (it is an independent initial condition, see Paper II). The gas accretion of the giant planets also contributes to the dispersal by them containing in the end about 0.01 M of gas (out of the initial disc mass of 0.066 M ). The temporal evolution of this system shows how the growth of multiple giant planets strongly affects the overall system architecture. This also has important consequences for the giant planets themselves (see Panel d of Fig. 23): while they accrete their gas envelopes, they get hit by several lower-mass protoplanets that they destabilise. This increases the core mass of the three giants from about 24, 14 and 10 M ⊕ at the onset of gas runaway accretion to clearly higher finales values of 64, 26, and 21 M ⊕ , respectively. Such giants impacts thus strongly influence the final heavy element content (Thorngren & Fortney 2018), and could potentially lead to the existence of diluted cores as found in Jupiter (Liu et al. 2019). At the end of the simulation at 20 Myr, the system contains four planets more massive than one Earth mass. During the emergence of the system, eight protoplanets have collided with the host star and four were ejected. About 244 M ⊕ of planetesimals remain out of the starting value of 432 M ⊕ , corresponding to a difference of 188 M ⊕ . However, the planets actually existing at the end contain only 123 M ⊕ , meaning that about 65 M ⊕ of planetesimals were 'lost' because they were either directly ejected or contained in planets that were themselves ejected or fell into the star. This correspond to a solid conversion efficiency of planetesimals into planets of about 28 %. Over gigayear timescales, atmospheric escape reduces the mass of the close-in planet at 0.08 au from 13.2 to 11.6 M ⊕ , but it does retain a remaining H/He envelope. Under the effect of bloating, the planet therefore has a relatively large radius of 5.3 R ⊕ at 5 Gyr. It is an example of an inner sub-Neptunian planet in a system with outer giant planets (see Paper III). Finally, it is worth mentioning that systems with three giant planets are statistically a very rare outcome in the population synthesis (Paper II): There are only five such systems among the 1000 synthesised in the nominal population NG76. Systems with one or two giants are in comparison much more common (each about 100 systems). In the system at hand, orbital stability is provided by the giant planets residing in the 3:1 mean motion resonance for both pairs of planets. This allows them to remain stable (Alves et al. 2016) despite their relative proximity to each other, corresponding for both pairs to about 6-7 mutual Hill radii, and their significant eccentricities (about 0.08, 0.18, and 0.40 for the inner, middle, and outer planet). We have further tested the stability of this system by extending the orbital integration (including all bodies in the system) from 20 to 100 Myr. At least on this timescale, the system remained stable without secular growth of the eccentricities. Overview of the diversity of system architectures Figures Fig. 24 and 25 show the mass-distance and radiusdistance of 23 synthetic systems. The solar system is shown in the top-left panel for comparison. All these systems are again taken from the nominal synthetic population NG76 for 1 M star that will be presented in Paper II. 
However, here we study these as individual systems without taking into account the likelihood of such systems in populations. Hereafter we give an overview of some major correlations that we find. For quantitative results, we refer to the next papers of the series. It is important to point out a potential complication concerning the formation of the outer two giant planets in the system shown in the panel q of Fig. 24. These planets accreted their cores and started undergoing runaway gas accretion in the inner region of the system. They were subsequently moved to the outer region of the disc where they continue to accrete gas. However, the reason for their final distant locations are not planet-planet scatterings. The presence of a inner massive giant planet (in this case, the one at 0.2 au with nearly 30 M , which corresponds to 3 % of the mass of the central star, for an initial disc mass of 9 % of the stellar mass) results here in the outer planets obtaining large eccentricities. This, in turn, causes the prescription for the modulation of the torque (Eqs. 87 and 88) to reverse its sign. Via the additional forces added to the N-body integrator (Sect. 5.2.1), this if found to lead to outward migration in the present case. Generally speaking, a positive torque means that the angular momentum of a planet has to grow. For an eccentric planet, this can occur via two ways (Cresswell et al. 2007): by eccentricity reduction (circularisation) or outward migration (increase of the semi-major axis). The different approaches how to translate the positive torque found in hydrodynamical simulations into the additional N-body forces have been inconsistent with one another in the literature in the past in this regard (Ida et al. 2020). A reassessment was recently made in Ida et al. (2020), but is not yet included in the simulations shown here. The problem we encounter in the special setup here (the presence of an inner very massive giant planet) is likely that the eccentricity state towards which eccentricity damping is acting is becoming ill-defined. The setups used to derive the eccen-tricity and inclination damping expressions and their translation into additional N-body forces (e.g. Papaloizou & Larwood 2000;Cresswell & Nelson 2008;Bitsch et al. 2013;Ida et al. 2020) assume that the disc orbits on a nearly circular orbit centred on the star. However, in the case here, the planet and outer disc will tend to orbit the barycenter of the star-inner giant pair, which means that the eccentricity can likely not be stabilised near zero. Figuring out the consequences for the orbital evolution of the different involved planets would likely require dedicated hydrodynamical simulations. This shows a limitation of our N-body approach with additional forces instead of direct hydrodynamical simulations. This implies that the model results for distant giant planets with an inner massive planet must be taken with caution. Mass and final number of planets The number of planets that remain past the formation stage is anti-correlated to the mass of the formed planets. Systems forming giant planets loose more embryos than the ones forming lowmass planets only. We obtained some systems where only one giant planets remains, for instance in panels a and c (including a single one in the latter case), where all the other embryos were removed during the formation stage. When this occurs, at least one of the final planets remains on a wide orbit, as it needs to clear the outer embryos. 
If this is not the case, then we observe that some embryos with low masses remain in the outer region (e.g. panels e and i). Systems that still form giant planets, but of lower masses, are able to retain more bodies. We have a few examples that have an architecture in the fashion the solar system, with terrestrial planets inside of giants, such as in panels e, i, m and p. However, those are not comparable to the solar system for several reasons. First, the giant planets are quite more massive than in the solar system; it is not uncommon to find masses of the order of 5 to 10 M . Likewise, the terrestrial planets are many Earth masses. Further, the location of the giants is much closer in that Jupiter, with distances that are around 1 au. These findings indicate that 1) the gas accretion rate in the disc-limited regime could be too high and 2) the simple Type II migration model we employ in this work leads to too much inward migration. Finally, systems that form low-mass planets only remain with the largest number of bodies. This is seen for instance in panels l, n, r, t, v and w, where many ice-free bodies (shown in green circles) are present at the end. Similarity in the low-mass systems Systems where only terrestrial planets are present have planets with similar properties. It can be seen in panels d, g, h, l, n, r, s, t, v, and w. This is result consistent with observational results about masses and spacing (Millholland et al. 2017). To provide a comparison point with the similarity of planet radii (Weiss et al. 2018), we provide a radius-distance diagram in Fig. 25. For the rocky planets, both masses and radii show the same similarity. The transition from rocky to icy planets affects the radii only slightly. More important is the presence of (remaining) H/He envelopes that were not removed by photoevaporation. We observe a general slight increase of mass with distance, at least in the inner region. This is most likely linked to the surface density profile of solids. The isolation mass M iso ∝ r (1.5(2−β s )) (Lissauer 1987), and so since we have β s = 1.5, the value increases with distance. This increase stops at locations usually slightly outside of 1 au, which could be due to our limited integration time, as we discussed in the previous section. Composition of the close-in planets We find that close-in terrestrial planets are likely to be rocky, which is in agreement with inferences from observations (Jin & Mordasini 2018). This is especially the case for systems where no planets grow to more than a few Earth masses. We observe in all systems that icy planets are found inside the location of the ice line (the dotted vertical line). Nevertheless, the innermost planets only accrete from the inner region of the disc where the planetesimals are rocky. This indicates that these planets neither migrate from beyond the ice line to their current position, nor get moved to other locations by mean of dynamical instability. It should be noted that in our simulations, planetesimals composition is set from the initial temperature and pressure profile of the gas disc (Sect. 3.3.3). Nevertheless, there are systems without giant planet that consist of only ice-bearing bodies; these are shown in panels d, g and u. These systems form planets that are more massive than the previous ones, with most of them having at least one planet above 10 M ⊕ . 
The Type I migration timescale decreases with increasing mass; therefore these more massive planets can migrate from outside of the water ice line to their current position, increasing the compositional diversity of the systems (Raymond et al. 2018). Systems with giant planets exhibit different behaviours. Some have only ice-bearing planets (panels b, f and q) while others have also terrestrial planets. In the latter case, the giant planets do not necessarily separate rocky bodies from icy ones. Panels e and o show systems where rocky and icy planets are separated by giants, while in panels i, m and p icy planets are present both inside and outside of the gas giants. This points at a high diversity of the composition of planets in systems containing both giant and low-mass planets. Correlations between the occurrences of giant planets and others in planetary systems will be investigated in more details in Paper II. Schlecker et al. (in pressa, hereafter Paper III) will look thoroughly at correlations between close-in Super-Earth planets and long-period giants. Summary and conclusions In this work, we presented the Generation III version of the Bern global model of planetary formation and evolution. In this generation, the following two main aspects were improved. First is the ability to simulate planets with a mass range from Mars to deuterium-burning planets. Older generations of the Bern model could not address terrestrial planets, as they we lacking the giantimpact stage. To reach this goal, we improved the N-body integrator so that per disc, hundreds of concurrently forming embryos can now be included. This is crucial for the formation of low-mass planets in general and the Solar System. We also added several new physical processes to take into account the consequences of stellar proximity, allowing us to simulate with the new model planets that cover the widest range of orbital separations, from star-grazing to distant and even rogue planets. Second, the ability to predict self-consistently for multi-planet systems as many directly observable quantities as possible: not only masses and orbital elements as in the past, but also other key observables like luminosities, magnitudes, transit radii, or evaporation rates. To achieve this, we coupled our planet formation model (to 20 Myr) to our planet evolution model (20 Myr to 10 Gyr). Thanks to this, we can now self-consistently and statistically compare the same population to all important observational techniques, as will be done in the series of NGPPS papers. This is crucial, as different methods probe distinct planetary sub-populations. This combined comparison puts extremely compelling and powerful constraints on any theoretical model. The formation and evolution model follows the envelope structure of the giant planets during they entire lifetime. This allows for example to study the luminosities at any time , and enables the comparison with directlyimaged exoplanets (e.g. Vigan et al. 2017). The model now includes a multitude of physical processes (see Fig. 2). The following are included during both the formation and evolution phase: -A solution of 1D radially symmetric internal structure equations (Bodenheimer & Pollack 1986) is used to calculate the internal structure of the H/He envelope and thus the gas accretion rate (during the attached phase), radius and luminosity, which includes Deuterium burning (Mollière & Mordasini 2012) and bloating of close-in planets. 
-The solution of the 1D internal core structure is used to obtain the radius of the solid core with a modified polytropic EOS (Seager et al. 2007). -An atmospheric model yields the outer boundary conditions during the attached, detached, and evolutionary phase. For the detached phase, we assume hot gas accretion. For the evolutionary phase, we use a simple grey atmosphere. -The host star properties are retrieved from tabulated stellar evolution tracks (Baraffe et al. 2015). During formation, the following processes are included: -The radial structure of the protoplanetary gas disc is computed with a 1D radial (axis-symmetric) constant α-disc model. The effects of internal and external photoevaporation are included. -The vertical structure of the disc is modelled by building on radiative equilibrium (Nakamoto & Nakagawa 1994), including viscous heating and stellar irradiation (Fouchet et al. 2012). Irradiation now also includes the direct irradiation in the disc midlplane important when the disc becomes optically thin. -Planetesimals are presented by a 1D radial (axis-symmetric) disc, with a surface density and a dynamical state (eccentricity, inclination). The temporal evolution of e and i are explicitly followed, including the dynamic excitation by protoplanets and planetesimals, and damping from gas drag . The composition of the planetesimal and the position of ice lines is found from an equilibrium condensation model (Thiabaud et al. 2015). -The equation for the planetesimal accretion rate of the protoplanet is computed assuming the oligarchic regime (Chambers 2006). The enhancement of the planetesimal capture radius because of the planetary H/He envelope is included (Inaba & Ikoma 2003). -A prescription based on Bondi-and Hill-type gas accretion in the 2D and 3D cases limits the planetary gas accretion rate in the disc-limited regime. -Gas-driven Type I and Type II orbital migration are computed including the effects of non-isothermality and of the planet's eccentricity and inclination (Paardekooper et al. 2011;Coleman & Nelson 2014;Dittkrist et al. 2014). -Full N-body interaction between all the embryos forming concurrently in one disc are tracked using the mercury integrator (Chambers 1999). Orbital migration and the damping of eccentricity and inclination are input in the integrator via additional forces. In case of a collision, the impact energy is added as an additionally luminosity term (Broeg & Benz 2012) to the internal structure model. This can lead to the loss of the H/He envelope. During the evolutionary phase we include: -XUV-driven atmospheric photoevaporation in the energy and radiation-recombination-limited approximation (Jin et al. 2014), for close-in planets, the addition of a bloating luminosity modelled with the empirical relation of Thorngren & Fortney (2018), and tidal spiral-in because of stellar tides (Benítez-Llambay et al. 2011), along with Roche-lobe overflow. We show in Sect. 6 where we study the formation of terrestrial planets that provided there are initially enough embryo in each disc, mutual gravitational interactions will stir their eccentricities. Due to the radial excursions, embryos will have access to more material until all the planetesimals are accreted. Afterwards, a phase of giant impacts sets in. Thus, despite the use a fluid-like description for the planetesimals, the model is able to reproduce the giant impact phase of terrestrial planet formation. 
Due to the limitation of the integration time (20 Myr), this is only completely modelled within a distance of roughly 1 au. Giant planets, in contrast, are not affected by the integration time limitation, as they must in any case form before the dispersal of the gas disc. The model is thus able to track the formation of all planets in the inner part of planetary systems.

After the description of the model, we study how the many different sub-models included in the Bern Generation III Model interact in the full end-to-end model by simulating the formation of two planetary systems. To understand the results, it is helpful to compare the timescales of growth and migration, revealing which process is dominant. It is also helpful to study the planetesimal surface density, revealing the solid accretion mode (planetesimal accretion versus growth by giant impacts). Other key processes occurring during the emergence of the planetary systems include the capture of many protoplanets into large resonant convoys, and the consequences of dynamical instabilities caused by the gravitational interactions between the protoplanets. This includes the destabilisation of other protoplanets at the moment a giant planet (especially a second one in the system) starts runaway gas accretion, as well as series of giant impacts at the moment the gas disc dissipates.

We also give a short overview of the diversity of planetary systems that were obtained using the model. We find that systems containing giant planets can have a great diversity of configurations, while systems forming only low-mass (Earth-like) planets exhibit regularly arranged planets with similar masses.

This work is the first of a series. Here we present the outline of the series:
- Paper II will introduce the methods to calculate population syntheses. Several populations for solar-mass stars with different numbers of initial embryos per system are computed. The effects of this parameter at the population level will be investigated.
- Paper III will look for correlations between the occurrence of inner low-mass and outer giant planets.
- Paper IV (Burn et al. in press) will extend the population synthesis to lower-mass stars (down to late M dwarfs) and analyse the effects of the stellar mass.
- Paper V (Schlecker et al. in press b) will study the mapping of disc initial conditions to planet properties with machine learning.
- Paper VI (Mishra et al. subm.) will look at the diversity between planets in each system compared to the diversity of the overall population (Weiss et al. 2018).
- There are then three papers on the quantitative comparison with various observational techniques: radial velocity with HARPS and CARMENES, and transits with Kepler. The discrepancies uncovered in these comprehensive and multi-aspect comparisons with observations will be helpful to improve the understanding of planet formation and evolution.
Fig. 24. Mass-distance diagrams of specific systems with 100 embryos initially (panels a to w), which are taken from the nominal population predicted for a 1 M_⊙ star (NG76). Symbols are as follows: red points show gas-rich planets where M_env/M_core > 1. Blue symbols are planets that have accreted some volatile material (ices) outside of the ice line(s). Green symbols are planets that have only accreted refractory solids. Open green and blue circles have 0.1 ≤ M_env/M_core ≤ 1, while filled green points and blue crosses have M_env/M_core ≤ 0.1. For all these bodies, the grey horizontal bars go from a − e to a + e. The top left panel with black crosses shows the solar system. Bodies lost because of collisions or ejections are shown in light grey. Planets accreted by the central star are shown at the very left of each panel, the ejected ones at the very right, and planets that collided with another (more massive) planet are shown at their last position on the diagram. The dotted vertical line in each system shows the location of the ice line. The number after each panel name is the metallicity [M/H] of the system expressed in dex, while the value on the top right is the initial mass of the planetesimal disc.

Fig. A.1. Mass-distance diagram of the nominal synthetic population NG76 of solar-like stars with initially 100 moon-mass embryos per disc (see Paper II). The epochs of 0.1, 0.5, and 1 Myr are shown. Coloured points show protoplanets that can no longer accrete planetesimals from the initial local reservoir, that have a planetesimal accretion timescale of less than 3 Myr, and that are still embedded in the parent gaseous disc. When τ_mig > τ_mig,c, the planetesimal accretion of these planets could in principle be affected by shepherding if they were the only protoplanets growing in the disc.

Here, we assess which protoplanets might at least in principle be affected by shepherding. Figure A.1 shows the mass-distance diagram of the nominal synthetic population NG76 from Paper II. Three moments in time are shown where planetesimal accretion is in general important. We colour code the absolute value of the ratio of the normalised migration timescale of a planet, τ_mig, to the normalised critical migration timescale, τ_mig,c, which are both calculated as in Tanaka & Ida (1999). When this ratio is larger than approximately unity, shepherding would occur for a single protoplanet migrating alone through a disc of planetesimals. Only protoplanets which could in principle be affected by shepherding, by fulfilling the following criteria, are colour coded: First, the distance a planet has migrated away from its starting location is larger than five times the size of its Hill sphere. This means that it can no longer accrete from its initial local reservoir of planetesimals. Second, the planetesimal accretion timescale is less than three million years (the typical disc lifetime), meaning that planetesimal accretion (as opposed to growth via giant impacts) is still relevant. Third, the gas disc has not yet dissipated. Other protoplanets should in any case not be significantly affected by shepherding and are shown in grey.

The plot first shows that the large majority of protoplanets are grey, meaning that shepherding should not be important for them in any case. Then, more specifically, at 0.1 Myr, there is a group of Mars- to Earth-mass protoplanets inside of the ice line where τ_mig > τ_mig,c. At 0.5 Myr, there is a radial interval from about 1 to 4 au where τ_mig is longer than τ_mig,c, however in most cases by less than one order of magnitude. These are regions where usually groups of tens of protoplanets form together (see Sect. 8.1), so that it is not clear if shepherding would occur at all. Planets where the ratio is clearly larger, and thus where the effect could in principle be particularly strong, are rare. At 1 Myr, a similar pattern is seen, but the potentially affected region is reduced. It is clear that this simple a posteriori analysis cannot be seen as a final result; for this, simulations where planetesimals are included directly in the N-body would be necessary.
Nevertheless, together with the finding of Daisaka et al. (2006) that shepherding is in principle not important when several protoplanets form concurrently, this indicates that shepherding can only affect a relatively limited part of all the growing protoplanets.
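The a posteriori filter described in this appendix can be expressed compactly: a protoplanet is flagged as potentially shepherding-affected only if it has migrated more than five Hill radii from its starting location, still accretes planetesimals on a timescale shorter than roughly 3 Myr, and is still embedded in the gas disc; the colour code is then the ratio τ_mig/τ_mig,c. A minimal sketch, assuming the per-planet quantities have already been extracted from the simulation output; all field names are placeholders, and the normalised timescales of Tanaka & Ida (1999) are taken as precomputed inputs rather than derived here:

```python
from dataclasses import dataclass

@dataclass
class Protoplanet:
    a_au: float               # current semi-major axis
    a_start_au: float         # starting semi-major axis of the embryo
    r_hill_au: float          # current Hill radius
    tau_acc_yr: float         # planetesimal accretion timescale
    tau_mig_norm: float       # normalised migration timescale
    tau_mig_crit_norm: float  # normalised critical migration timescale
    gas_disc_present: bool

def shepherding_flag(p: Protoplanet, disc_lifetime_yr: float = 3e6):
    """Return None if shepherding is irrelevant for this protoplanet,
    otherwise the ratio tau_mig / tau_mig,c (values above ~1 mean the planet
    could be affected if it were migrating alone through the planetesimals)."""
    migrated_enough = abs(p.a_au - p.a_start_au) > 5.0 * p.r_hill_au
    still_accreting = p.tau_acc_yr < disc_lifetime_yr
    if migrated_enough and still_accreting and p.gas_disc_present:
        return p.tau_mig_norm / p.tau_mig_crit_norm
    return None

p = Protoplanet(a_au=0.8, a_start_au=1.4, r_hill_au=0.01,
                tau_acc_yr=8e5, tau_mig_norm=4.0, tau_mig_crit_norm=1.5,
                gas_disc_present=True)
print(shepherding_flag(p))   # -> about 2.67 (potentially affected)
```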
Sex Differences in the Effects of Prenatal Bisphenol A Exposure on Genes Associated with Autism Spectrum Disorder in the Hippocampus

Autism spectrum disorder (ASD) is a neurodevelopmental disorder inexplicably biased towards males. Although prenatal exposure to bisphenol A (BPA) has recently been associated with ASD risk, whether BPA dysregulates ASD-related genes in the developing brain remains unclear. In this study, transcriptome profiling by RNA-seq analysis of hippocampi isolated from neonatal pups prenatally exposed to BPA was conducted and revealed a list of differentially expressed genes (DEGs) associated with ASD. Among the DEGs, several ASD candidate genes, including Auts2 and Foxp2, were dysregulated and showed sex differences in response to BPA exposure. The interactome and pathway analyses of DEGs using Ingenuity Pathway Analysis software revealed significant associations between the DEGs in males and neurological functions/disorders associated with ASD. Moreover, the reanalysis of transcriptome profiling data from previously published BPA studies consistently showed that BPA-responsive genes were significantly associated with ASD-related genes. The findings from this study indicate that prenatal BPA exposure alters the expression of ASD-linked genes in the hippocampus and suggest that maternal BPA exposure may increase ASD susceptibility by dysregulating genes associated with neurological functions known to be negatively impacted in ASD, which deserves further investigation.

Prenatal BPA exposure alters hippocampal transcriptome profiles in a sex-dependent manner.

To examine whether prenatal BPA exposure could lead to dysregulation of ASD candidate genes in the developing brain in vivo, we conducted an RNA-seq analysis of hippocampal tissues isolated from male and female neonatal rats exposed to 5,000 µg/kg·maternal BW of BPA in utero or vehicle control. Notably, the dose of BPA used to treat rats in this study is equal to the No-Observed-Adverse-Effect Level (NOAEL) in humans as determined by the FDA and EFSA. We found that when all male and female rat pups under the same treatment condition were combined into one group, as many as 5,624 transcripts corresponding to 4,525 genes were significantly differentially expressed in the hippocampi of BPA-treated rats compared with the controls. In addition, to determine whether prenatal BPA exposure alters hippocampal transcriptome profiles in a sex-dependent manner, DEGs in each sex were identified. We found that 2,496 transcripts (corresponding to 2,078 genes) and 4,021 transcripts (corresponding to 3,522 genes) were significantly differentially expressed in the hippocampi of BPA-treated male and female pups, respectively, compared to controls (P-value < 0.05 and FDR < 0.05). This finding indicates that the brain transcriptome profiles of males and females were unequally disturbed by prenatal BPA exposure. The lists of DEGs are shown in Supplementary Table S1.

BPA-responsive DEGs in the hippocampus exhibit sex differences in ASD-associated genes.

To determine whether DEGs in response to prenatal BPA exposure are associated with ASD, the lists of BPA-responsive genes were overlapped with the lists of ASD candidate genes from two ASD databases, including the SFARI and AutismKB databases. When all male and female pups were combined, a total of 298 and 700 genes among the DEGs were found to be ASD candidate genes in the SFARI and AutismKB databases, respectively.
We next performed hypergeometric distribution analyses to assess the over-representation of ASD candidate genes among DEGs responsive to BPA. Hypergeometric distribution analysis of the list of DEGs in the combined male and female pups with respect to autism candidate genes showed no significant association. However, when each sex was analyzed separately, DEGs from male and female hippocampal tissues exhibited significant enrichment in ASD-related genes from the SFARI database (Table 1, Supplementary Table S2). Notably, DEGs in male hippocampal tissues tended to exhibit stronger associations with ASD genes than those in female tissues. This male bias was also observed when the list of DEGs was analyzed for enrichment of syndromic ASD genes in the AutismKB database. These results indicated that DEGs due to BPA exposure showed sex differences in their associations with ASD genes. In addition, to determine whether enrichment of ASD-related genes exists on the X chromosome in these DEG lists, we conducted hypergeometric distribution analyses between ASD-related genes on the X chromosome and each of these lists. Interestingly, we found significant enrichment of ASD-related genes on the X chromosome in the list of ASD-related DEGs in both sexes (9 from 298 genes; P-value = 6.45E-06), DEGs in males only (11 from 183 genes; P-value = 8.46E-10), and DEGs in females only (15 from 266 genes; P-value = 1.20E- 12), suggesting that the X chromosome may be involved in the underlying mechanism of BPA-associated risk for ASD. BPA-responsive genes in the hippocampus are involved in biological functions, canonical pathways, and networks associated with ASD. To predict biological functions, pathways, and interactome networks associated with BPA-responsive genes in the hippocampus, the lists of DEGs were analyzed using IPA software. DEGs in the hippocampus were associated with several functions impacted in ASD, including "nervous system development and function", "inflammatory response", and "digestive system development and function". Interestingly, the top canonical pathways significantly associated with DEGs in the male hippocampus included "glutamate receptor signaling", "axonal guidance signaling", and "circadian rhythm signaling", all of which have been associated with ASD. Similarly, "glutamate receptor signaling" and "axonal guidance signaling" were also present among the top canonical pathways significantly associated with DEGs in the female hippocampus (P-value < 0.05; Supplementary Table S3). Neurological diseases/disorders associated with DEGs included "autism or intellectual disability", "mental retardation", and "developmental delay". It was interesting to note that several neurological functions, including "morphogenesis of neurons", "neuritogenesis", and "formation of brain", were significantly associated with DEGs in males only (P-value < 0.05; Table 2). Additionally, the IPA comparison analysis between canonical pathways associated with DEGs in males and in females revealed several pathways that exhibited significant associations in a sex-dependent manner. Such canonical pathways included "DNA methylation and transcriptional repression signaling", "IGF-1 signaling", "synaptic long-term potentiation", and "androgen signaling", all of which have been associated with ASD (Supplementary Table S4). Interactome networks, which are collections of genes that interact with each other or with specific biological functions, were created using the lists of significant DEGs in males and females (Fig. 1). 
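The over-representation analyses described above, which test whether ASD candidate genes are enriched among the BPA-responsive DEGs relative to a genome-wide background, correspond to a standard one-sided hypergeometric test. A minimal sketch using scipy; the background, category, and overlap counts below are illustrative placeholders rather than the study's actual gene totals:

```python
from scipy.stats import hypergeom

def enrichment_p(n_background, n_category, n_selected, n_overlap):
    """P(overlap >= observed) when n_selected genes are drawn from a
    background of n_background genes containing n_category category
    (e.g. ASD candidate) genes: a one-sided hypergeometric test."""
    return hypergeom.sf(n_overlap - 1, n_background, n_category, n_selected)

# Illustrative counts: ~20,000 background genes, ~1,000 ASD candidates,
# ~2,000 DEGs of which 180 overlap the ASD candidate list.
p = enrichment_p(n_background=20_000, n_category=1_000,
                 n_selected=2_000, n_overlap=180)
print(f"hypergeometric enrichment P-value ~ {p:.2e}")
```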
A representative interactome network of DEGs in the male hippocampus revealed gene interactions among DEGs and associations with disorders/diseases, neurological functions, and behaviors, including mental retardation, neuritogenesis, social exploration, learning, and motor functions (Fig. 1). Similarly, the interactome network of DEGs in the female hippocampus showed associations with Rett syndrome, perseverance behavior, and mental retardation (Fig. 1). Interestingly, the hub gene in the interactome generated using DEGs from the male hippocampus is MeCP2, which is the key gene responsible for Rett syndrome.

Table 1. Association analysis between differentially expressed genes in hippocampi of offspring prenatally exposed to BPA and ASD-related genes. We overlapped the lists of significantly differentially expressed BPA-responsive genes in the neonatal hippocampus and ASD-related genes (SFARI and AutismKB databases). The lists of significantly differentially expressed genes in both sexes were analyzed using MeV software with a standard Bonferroni test (P-value < 0.05), and the lists of sex-specific significantly differentially expressed genes from the RNA-seq process were analyzed using Poisson distribution (FDR < 0.05, P-value < 0.05). P-values of association were calculated using hypergeometric distribution analysis and are shown in the table. SFARI scores represent the level of confidence. Score 1 = High confidence; Score 2 = Strong candidates; Score 3 = Suggestive evidence; Score 4 = Minimal evidence; Score 5 = Hypothesized; Syndromic: all syndromic genes associated with ASD.

These findings suggest that prenatal BPA exposure alters the expression of genes in the brain, which may in turn disrupt gene regulatory networks/pathways and neurological functions underlying the pathobiology of ASD. In addition, to investigate whether BPA-responsive genes have divergent effects on biological pathways and networks in males and females, the separate lists of DEGs in males and females were used to predict disorders/diseases associated with ASD using IPA. Interestingly, we found that the DEGs in male, but not female, hippocampal tissues were exclusively associated with autism (P-value = 1.18E-02, 10 genes) (Table 3). However, DEGs in both males and females were significantly associated with pervasive developmental disorder (P-value = 2.20E-02, 17 genes, and P-value = 1.44E-04, 41 genes, respectively), which is currently considered a component of ASD.

To determine whether the gene expression profiles in the hippocampi of rats prenatally exposed to BPA reflect those in the brains of ASD individuals, we obtained the lists of genes that are differentially expressed in post-mortem brain tissues of ASD individuals from two previously published ASD brain transcriptome profiling studies41,42, and overlapped them with our list of BPA-responsive genes. Interestingly, we found that as many as 206, 159, and 80 genes differentially expressed due to BPA exposure in both sexes, in females, and in males, respectively, were also dysregulated in the ASD post-mortem brain tissues identified by Voineagu, I., et al.41. In addition, as many as 1,045, 690, and 393 genes differentially expressed by BPA exposure in both sexes, in females, and in males, respectively, were also dysregulated in the ASD post-mortem brain tissues identified by Parikshak, N. N., et al.42.
The lists of DEGs identified by both ASD brain transcriptome studies and genes overlapping with BPA-responsive genes are shown in Supplementary Table S5. This finding suggests that prenatal BPA exposure may result in dysregulation of at least some genes reminiscent of those altered in the brains of ASD individuals. Quantitative RT-PCR analysis of BPA-responsive genes. To further examine whether prenatal BPA exposure causes the dysregulation of genes in the hippocampus, four DEGs (i.e., Auts2, Foxp2, Smarcc2, and Dicer1) identified by RNA-seq analysis were selected for further confirmation by qRT-PCR analysis in another set of hippocampal tissue samples (Fig. 2). Auts2 (Autism Susceptibility Gene 2), Foxp2 (Forkhead Box P2), and Smarcc2 (SWI/SNF Related, Matrix Associated, Actin Dependent Regulator of Chromatin subfamily C member 2) have been identified as ASD candidate genes, whereas Dicer1 (Dicer 1, Ribonuclease III) is involved in a post-transcriptional gene silencing mechanism that has been associated with ASD. We found that when both males and females were combined, the expression levels of the Auts2, Smarcc2, and Dicer1 genes were significantly reduced in the hippocampi of rats prenatally exposed to BPA (Fig. 2). Foxp2 expression tended to decrease in the BPA group, although the difference was not statistically significant. Interestingly, sex-specific dysregulation of genes was observed when qRT-PCR data from each sex were analyzed separately. The expression levels of Auts2 and Foxp2 were significantly decreased in males but not in females (Fig. 2), whereas Smarcc2 expression was significantly decreased in females but not in males (Fig. 2). These results indicate that prenatal BPA exposure causes the dysregulation of genes associated with ASD in the hippocampus in a sex-dependent manner. DEGs in response to BPA exposure based on the integration of data from multiple transcriptomic studies revealed an association with ASD candidate genes. To determine whether BPA-responsive genes identified by other independent investigators were also associated with ASD, transcriptome profiling data from cell lines, primary cells, or tissues from animal models treated with BPA were obtained from six independent transcriptomic studies previously deposited in the NCBI GEO DataSets database (https://www. ncbi.nlm.nih.gov/gds/). The details of each study, including the title, sample size, and sample type, are shown in Supplementary Table S6. Significantly differentially expressed genes in the BPA treatment group compared with the corresponding control group from each transcriptomic study were then identified using a common statistical www.nature.com/scientificreports www.nature.com/scientificreports/ program for large-scale expression analyses. The lists of BPA-responsive genes from the transcriptomic studies are shown in Supplementary Table S7. We next overlapped the list of DEGs from each study with ASD candidate genes previously deposited in two different ASD databases: SFARI (https://gene.sfari.org/) and AutismKB (http:// autismkb.cbi.pku.edu.cn/). Furthermore, hypergeometric distribution analyses were performed to determine whether ASD candidate genes were associated with the BPA-responsive genes from each study. 
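A minimal sketch of the list-overlap step that underlies these comparisons is shown below. The gene symbols and list contents are hypothetical and for illustration only; case normalization is included because rat symbols (e.g., Auts2) and human symbols (e.g., AUTS2) differ in capitalization:

```python
def overlap_genes(list_a, list_b):
    """Return genes common to two DEG lists, compared case-insensitively
    so that rat and human gene symbols can be matched."""
    a = {g.upper() for g in list_a}
    b = {g.upper() for g in list_b}
    return sorted(a & b)

# hypothetical lists for illustration only
bpa_degs = ["Auts2", "Foxp2", "Smarcc2", "Dicer1", "Mecp2"]
asd_brain_degs = ["AUTS2", "SHANK3", "FOXP2", "GAD1"]
print(overlap_genes(bpa_degs, asd_brain_degs))  # ['AUTS2', 'FOXP2']
```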
Interestingly, from several to hundreds of ASD candidate genes were found to be differentially expressed in response to BPA, and the hypergeometric distribution analyses revealed that the ASD candidate genes obtained from each ASD database were significantly enriched (P-value < 0.05) in the lists of BPA-responsive genes identified from four of the six transcriptomic studies (Table 4). To determine whether the BPA-responsive genes identified by our study were also dysregulated in the independent studies, the lists of BPA-responsive genes in the hippocampi of rats prenatally exposed to BPA were overlapped with the BPA-responsive genes from the previously published transcriptome studies. The numbers of overlapping genes are shown in Table 5. When the DEGs from the published studies were combined, as many as 914 DEGs identified by our study were also found to be dysregulated in at least one of the independent studies (Supplementary Table S8). IPA revealed that this set of genes was significantly associated with several canonical pathways, including "Aldosterone Signaling in Epithelial Cells" (P-value = 2.14E-04), "PTEN Signaling" (P-value = 1.62E-03), "PPARα/RXRα Activation" (P-value = 5.75E-03), "Dendritic Cell Maturation" (P-value = 1.02E-02), and "Circadian Rhythm Signaling" (P-value = 1.78E-02) (Table 6). Taken together, the results of these bioinformatic analyses suggest that BPA exposure may cause dysregulation of genes associated with ASD-related biological functions in the brain as well as in other tissues. Discussion Accumulating evidence from both in vitro and in vivo studies indicates that exposure to BPA, even at low doses, disrupts the expression of multiple genes in the brain and alters the behaviors of offspring from exposed females 43,44 . Increased BPA levels have been reported in the blood and urine of ASD children compared with typically developing children 37-39 , prompting the hypothesis that BPA may be an environmental risk factor for ASD and that exposure to BPA, especially during pregnancy, may cause and/or increase the risk of ASD. However, whether prenatal BPA exposure causes the dysregulation of genes associated with ASD in the brain that could lead to the pathobiological conditions associated with ASD has never been investigated. This is the first study to demonstrate that BPA exposure can cause sex-dependent changes in the transcriptome profiles of many genes involved in biological functions known to be negatively impacted in ASD, and that significant associations exist between BPA-responsive genes and the dysregulated genes observed in individuals with ASD. Using rats as an experimental model, we demonstrated that prenatal BPA exposure in pregnant dams dysregulated the transcriptome profiles of ASD candidate genes in the brains of the offspring. Specifically, RNA-seq analysis of hippocampal tissues isolated from prenatally exposed neonatal rats showed sex differences in the response to BPA exposure, with 2,078 and 3,522 DEGs in the hippocampi of males and females, respectively, indicating that prenatal BPA exposure affects brain transcriptome profiles in a sex-dependent manner. Sex differences in the effects of prenatal BPA exposure on brain transcriptome profiles have also been reported in recent studies 43,45 . Arambula et al.
(2016) conducted a transcriptome profiling analysis of hypothalami and hippocampi isolated from neonatal rats prenatally exposed to BPA 43 and found that BPA induced sex-specific effects on hypothalamic ERα and ERβ (Esr1 and Esr2) expression and on hippocampal and hypothalamic oxytocin (Oxt) expression. Moreover, prenatal BPA exposure was reported to disrupt the transcriptome of the neonate amygdala in a sex-specific manner 45 . Interestingly, when overlapped with the lists of ASD candidate genes, the list of DEGs in males identified in this study exhibited stronger associations with ASD genes than the DEGs in females. Moreover, we found significant enrichment of ASD genes on the X chromosome in the lists of ASD-related DEGs in both males and females, suggesting that BPA exerts its effect on the brain partly through X-linked genes, which provides a plausible explanation for the sex difference in BPA effects on the brain transcriptome. Notably, the X chromosome theory of ASD 46-48 posits that the male bias of ASD partly involves genes on the X chromosome, the dysregulation of which increases susceptibility to ASD. This result suggests that prenatal BPA exposure may elevate the risk of ASD in males and may help explain the higher male prevalence of ASD, which deserves further study. Additionally, IPA showed that DEGs in the hippocampus were significantly associated with ASD and mental retardation. Canonical pathways associated with DEGs in both males and females included glutamate receptor signaling, axonal guidance signaling, and circadian rhythm signaling, all of which have been associated with ASD 49-51 . Interestingly, several neuro/biological functions and disorders, including "autism", "global developmental delay", "formation of brain", "neuritogenesis", and "inflammatory response", were associated with DEGs in the male hippocampus only. The canonical pathway analysis also revealed significant associations of DEGs with "DNA methylation and transcriptional repression signaling" and "4-aminobutyrate degradation" in males only, both of which have been associated with ASD 3,52-54 . We then overlapped the DEGs in males with those in females, and the lists of genes that were found to be dysregulated in only males or only females were separately analyzed to identify diseases/disorders specific to male and female DEGs. [Table 3. Comparison of neurological diseases/disorders of DEGs uniquely found in males or females. The lists of genes that were dysregulated only in males or females were used to predict the neurological diseases/disorders associated with ASD using IPA. Significance was determined by Fisher's exact test, with a P-value of 0.05 as the cutoff.] The results revealed that genes that were dysregulated in males were significantly associated with "Autism" (P-value = 1.18E-02), while the dysregulated genes in females were associated with "Pervasive developmental disorder" (P-value = 1.44E-04). [Fig. 2 caption (reconstructed fragment): The expression levels of Auts2 (A), Foxp2 (B), Smarcc2 (C), and Dicer1 (D) were determined in both sexes and separately in males and females. The qRT-PCR analyses revealed that Auts2 and Foxp2 were significantly down-regulated in the hippocampi of both sexes and of males prenatally exposed to BPA. In contrast, Smarcc2 was significantly reduced in both sexes and in females, and Dicer1 was significantly reduced in both sexes. * P-value < 0.05.] Pervasive developmental disorder is a group of disorders characterized by developmental delays in socialization and communication skills, consisting of autism, Asperger syndrome, Rett syndrome, childhood disintegrative disorder, and pervasive developmental disorder-not otherwise specified (PDD-NOS). In the DSM-5, all of these neurodevelopmental conditions, except for Rett syndrome, were grouped into the new classification of autism spectrum disorder (ASD), which has an overall prevalence of approximately 1 in 59 children and is 4 times higher in males than in females 1 . This result suggests that exposure to BPA during pregnancy can cause divergent effects on the expression of genes associated with ASD in both sexes, but may be more directly associated with classic autism (typically considered the most severe subtype) in males. Interactome analysis showed that Mecp2, a gene located on the X chromosome encoding the methyl-CpG binding protein 2, served as the hub gene in a biological network of DEGs in the hippocampus. This protein mediates transcriptional repression through interaction with histone deacetylase 55,56 and plays a role in the maintenance of synapses and normal brain function 57,58 . Loss-of-function mutations of MeCP2 in humans are known to cause Rett syndrome, a childhood neurodevelopmental disorder with some ASD-related symptoms that affects females almost exclusively. An increased MeCP2 gene copy number was reported in males with neurodevelopmental delay who exhibited autistic-like features, absent speech, stereotypic movements, and infantile hypotonia 59 . Moreover, increased binding of MeCP2 to the promoters of GAD1 and RELN, which are candidate genes for ASD, was also found in the ASD cerebellum 60 . This evidence suggests that the up-regulation of Mecp2 due to prenatal exposure to BPA may lead to ASD-like symptoms, which should be further studied. We then conducted quantitative RT-PCR analyses to further investigate the expression levels of four DEGs (Auts2, Foxp2, Smarcc2, and Dicer1) in the hippocampi of neonatal rats prenatally exposed to BPA compared with the vehicle control. Auts2, Smarcc2, and Dicer1 were significantly reduced in the hippocampi of the BPA group compared with the control, whereas Foxp2 tended to decrease but did not show a statistically significant difference. Although the expression levels of these four genes seemed to be reduced in rats of both sexes exposed to BPA, there were some sex differences in the effects of BPA exposure on the expression levels of these genes. Auts2 and Foxp2 were significantly decreased in the hippocampi of male rats exposed to BPA compared with sex-matched controls, but these differences were not observed in females. Smarcc2, in contrast, was significantly decreased in females prenatally exposed to BPA, but not in males. These findings suggest that prenatal BPA exposure may pose an increased risk of ASD in males and females by disrupting the expression profiles of ASD-related genes, providing a plausible explanation for how an environmental factor can contribute to ASD susceptibility. The molecular mechanisms underlying how BPA affects differential gene expression between males and females should be studied further, but evidence indicates that exposure to BPA can alter genes related to global DNA methylation and histone modification processes 44,61 .
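The relative-expression comparisons summarized above are, per the Methods below, based on the 2^(-ΔΔCt) method with Rn18s as the endogenous control. A minimal worked sketch follows; the Ct values are hypothetical and chosen only to illustrate a down-regulated target:

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method (Livak & Schmittgen)."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# hypothetical triplicate-mean Ct values: BPA group vs vehicle control
fc = fold_change_ddct(26.4, 9.8, 25.1, 9.9)
print(round(fc, 2))  # ~0.38: fold change < 1 indicates down-regulation
```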
Auts2 (Autism Susceptibility Candidate 2) is an ASD candidate gene that has been associated with ASD and other neurodevelopmental disorders that are comorbid with ASD, including intellectual disability 62 and developmental delay 62 . Auts2 is abundantly expressed in the developing brain, mostly in the hippocampus, prefrontal cortex, and cerebellum 63 , which are brain regions known to be impacted in individuals with ASD 64 . Recent studies have revealed that Auts2 is important for neuronal development. Knockout of both coding and noncoding sequences of the Auts2 gene in zebrafish caused microcephaly and a decreased number of neuronal cells 65 , both of which are consistently found in ASD patients 66 . Foxp2 (Forkhead Box P2) encodes a member of the forkhead/winged-helix (FOX) family of transcription factors that is widely reported as a candidate gene associated with language development 67 . Foxp2 is expressed in the fetal and adult brain and is required for the development of the speech and language regions of the brain during embryogenesis. Mutation of this gene has been reported in speech-language disorder 1 (SPCH1), also known as autosomal dominant speech and language disorder with orofacial dyspraxia. A single-nucleotide polymorphism (SNP) in the FOXP2 gene has been associated with social deficits in ASD patients 68,69 . Moreover, the disruption of Foxp2 in mice caused altered ultrasonic vocalization 70 . [Table 4. Hypergeometric distribution analyses between significantly differentially expressed genes from BPA studies and autism candidate genes. Hypergeometric distribution analyses were used to analyze associations between differentially expressed genes from six previously published BPA transcriptome studies and autism candidate genes. Statistically significant associations were determined by hypergeometric distribution analysis (P-value < 0.05).] Smarcc2 (SWI/SNF Related, Matrix Associated, Actin Dependent Regulator of Chromatin subfamily C member 2) encodes a member of the SWI/SNF family of proteins. The functions of this gene include transcriptional activation and repression through a chromatin-remodeling process 71 . Smarcc2 is highly expressed in the brain and is required for the differentiation of stem/progenitor cells into mature neural cells during neural development. Recent studies reported that mutation of Smarcc2 resulted in alteration of chromatin remodeling complexes in ASD 6 . A de novo splice-site variant in this gene was also observed in ASD cases 72 . The Dicer1 (Dicer 1, Ribonuclease III) gene encodes a protein involved in the repression of gene expression. The protein acts as a ribonuclease that is required for RNA interference and for the production of small RNAs such as small temporal RNAs (stRNAs). There is evidence that post-transcriptional mechanisms are associated with ASD. Recent studies revealed dysregulated miRNAs in the ASD brain 73 and in lymphoblastoid cell lines derived from individuals with ASD 8,74 . To further understand the systemic effects of BPA, we identified BPA-responsive genes using the transcriptome profiles of cells/tissues isolated from animals exposed to BPA, because of the limited brain transcriptome data available in the GEO DataSets database.
In addition, we attempted to use several statistical tests, such as the t-test with standard Bonferroni correction, to identify the DEGs, but we were unable to identify any DEGs from these studies under such stringent conditions for multiple-testing correction. We then used Student's t-test to re-analyze the significant DEGs from the other studies with the goal of identifying genes that are dysregulated due to BPA exposure in other cells/tissues. Hypergeometric distribution analyses were then performed using the BPA-responsive genes from each transcriptomic study and the lists of ASD candidate genes obtained from two different ASD bioinformatic databases. We found that ASD candidate genes were significantly enriched in the BPA-responsive genes of four transcriptomic studies. Interestingly, one of these four transcriptomic studies investigated the effects of BPA exposure on the transcriptome profiles of mouse placenta 75 . That study found that in utero exposure to BPA disrupted blood vessel development and morphology in the placenta. BPA exposure caused narrowing of blood vessels and disrupted the embryonic head and forelimb structures 76 . A recent study revealed that maternal vascular malperfusion was significantly associated with the pathobiology of ASD and increased the risk of ASD 77 . Moreover, we overlapped the DEGs from our study with the DEGs from other BPA studies in different cell types or tissues. Interestingly, we found some overlapping genes among these sets of genes, suggesting that genes that are differentially expressed in the brain also show differential expression in response to BPA in other tissues. The set of overlapping genes was significantly associated with pathways impacted in ASD. There is some evidence showing that "Aldosterone Signaling" 78 , "PTEN signaling" 79 , and "Circadian Rhythm" 51 are implicated in ASD patients. These findings suggest that BPA exposure may cause changes in the transcriptome profiles of genes involved in biological functions known to be impacted in ASD. In addition to changes in transcriptome profiles, recent studies have shown that prenatal BPA exposure altered neurological functions, including neurogenesis in the hippocampus and hypothalamus and synaptic density in mouse models 31,80,81 . Moreover, prenatal BPA exposure induced behavioral impairments in offspring, such as in learning and memory 82 and in social interaction 82 , along with anxiety-like behavior 31 . Whether the changes in the transcriptome profiles observed in this study could lead to altered neurological functions and behaviors should be investigated further. Moreover, in this study we used oral administration of BPA at 5,000 µg/kg maternal BW/day, which is equal to the no-observed-adverse-effect level (NOAEL) in humans determined by the FDA and EFSA. The tolerable daily intake (TDI) in humans is 50 µg/kg BW/day, and the estimated BPA exposure levels from use in food-contacting materials in infants and adults are 2.42 µg/kg BW/day and 0.185 µg/kg BW/day, respectively 22 . The effects of prenatal BPA exposure at the TDI and at these estimated daily doses in humans on the brain transcriptome and functions warrant further investigation. Moreover, the molecular mechanisms through which BPA disrupts the expression of genes associated with ASD deserve further study. Conclusions In this study, transcriptomic profiling analysis of hippocampi isolated from rats prenatally exposed to BPA revealed sex-dependent dysregulation of gene expression, with a greater number of differentially expressed genes in females.
However, the genes that were disrupted in the male hippocampus showed more significant association with ASD than those in females. Interestingly, the expression of the ASD candidate genes selected for validation by quantitative RT-PCR, including Auts2, Foxp2, and Smarcc2, was also sex-dependent in response to prenatal BPA exposure. Finally, re-analyses of transcriptomic data obtained from multiple published studies on the effects of BPA in various cellular, tissue, and animal models support our current findings that BPA-responsive genes are significantly associated with ASD candidate genes as well as ASD-related neurological functions and disorders. Taken together, this study shows that prenatal BPA exposure causes changes in the hippocampal expression of genes associated with ASD in a sex-specific fashion, supporting the hypothesis that BPA is an environmental risk factor for ASD, and thus providing a plausible explanation for how BPA exposure may contribute to the sex bias of ASD. Methods Animal husbandry and treatment. Eight-week-old female and male Wistar rats were purchased from the National Laboratory Animal Center (NLAC), Thailand. All animals were housed at the Chulalongkorn University Laboratory Animal Center (CULAC) under standard temperature (21 ± 1 °C) and humidity (30-70%) conditions in a 12-h light/dark cycle with food and RO-UV water available ad libitum. Female rats (gestational day 1 (GD1); n = 8) were divided into 2 groups (control group and BPA treatment group), with 4 rats per group. The weight of each rat was measured daily and used to calculate the amount of BPA or vehicle control needed to treat each rat. For BPA treatment, BPA (Sigma-Aldrich, USA) was dissolved in absolute ethanol (Merck Millipore, USA) to a final concentration of 250 mg/ml to make a stock BPA solution. The stock solution was then further diluted with corn oil so that each rat received a dose of 5,000 µg BPA/kg maternal BW. The vehicle control treatment was prepared by mixing absolute ethanol with corn oil in amounts equivalent to those used for preparing BPA. After mating, each rat was intragastrically administered either BPA or the vehicle control from GD1 until parturition. To prevent cross-contamination of the treatment conditions, rats in the BPA and control groups were raised separately in individually ventilated cages in a biohazard containment housing system. Separate sets of stainless steel needles and all consumable products were used for oral gavage. All reusable materials were cleaned with ethanol and rinsed with copious amounts of Milli-Q deionized water before use. All experimental procedures were approved by the Chulalongkorn University Animal Care and Use Committee (Animal Use Protocol No. 1673007 and No. 1773011), Chulalongkorn University. We confirm that all experiments were performed in accordance with the relevant guidelines and regulations. RNA isolation and transcriptome profiling analysis. Male and female neonatal pups were euthanized (BPA n = 6; control n = 6), and the hippocampi were isolated as previously described, with slight modifications 83 . Briefly, neonatal pups were euthanized by decapitation on ice following intraperitoneal injection of 100 mg/kg BW sodium pentobarbital.
The brain was quickly removed from the head and placed in a pre-chilled tube containing ice-cold, freshly prepared 1X HBSS (Invitrogen, USA) containing 30 mM glucose (Sigma-Aldrich, USA), 2 mM HEPES (GE Healthcare Bio-Sciences, USA), and 26 mM NaHCO3 (Sigma-Aldrich, USA). The brain was then dissected, and the hippocampus was isolated under a Nikon SMZ18 Stereo Microscope (Nikon, Japan). Meninges were removed completely, and the hippocampal tissues were immediately placed in a tube with RNAlater (Ambion, USA) and stored at −80 °C, according to the manufacturer's protocol, until use. Total RNA from the hippocampus was isolated and purified using the mirVana miRNA Isolation Kit (Ambion, USA) according to the manufacturer's protocol. The RNA integrity was assessed using an Agilent Bioanalyzer (BGI, Hong Kong). To identify DEGs in the hippocampus in response to prenatal BPA exposure, a transcriptome profiling analysis of total RNA isolated from the hippocampi of neonatal rats from six independent litters prenatally exposed to BPA or vehicle control was performed by BGI Genomics Co., Ltd using the Illumina HiSeq 4000 next-generation sequencing platform with 4G reads (Illumina, Inc.) according to the manufacturer's protocol. Briefly, total RNA was treated with DNase I, and oligo(dT) treatment was used for mRNA isolation. Next, the RNA was mixed with fragmentation buffer to fragment the mRNA. Then, cDNA was synthesized using the mRNA fragments as templates. Subsequently, sequencing reads were filtered and subjected to quality control. Clean reads in a FASTQ file were mapped to the rat reference genome (RefSeq ID: 1174938) using Bowtie 2 84 , and gene expression levels were then calculated using RSEM 85 . We then compared the transcriptome profiles between the BPA and control groups, with all male and female pups under the same treatment condition combined into one group and, separately, for each sex. P-values were calculated using a Poisson distribution method, and DEGs with a P-value < 0.05 and FDR < 0.05 were considered statistically significant. Quantitative RT-PCR analysis. Four DEGs in the hippocampus identified by the RNA-seq transcriptomic analysis were selected for further confirmation by quantitative RT-PCR analysis. These four DEGs were selected for validation based on differential expression between males and females as well as known association with ASD. Total RNA was used for cDNA synthesis with the AccuPower® RT PreMix (Bioneer, Korea) according to the manufacturer's protocol. Briefly, 0.5 µg total RNA was mixed with 0.5 µg (100 pmol) oligo(dT)18 primer, and DEPC-treated water was added to 15 µl. The reaction was then incubated at 70 °C for 5 min and placed on ice. To perform the cDNA synthesis, the mixture (15 µl) was transferred to an AccuPower® RT PreMix tube, and DEPC-treated water was added to 20 µl. The cDNA synthesis reaction was performed by incubating the reaction at 42 °C for 60 min, followed by 94 °C for 5 min. The cDNA reaction mixture was further diluted to a volume of 50 μl with nuclease-free water and used as a template for subsequent qPCR analyses. Quantitative PCR analysis was conducted in triplicate using AccuPower® 2X GreenStar™ qPCR MasterMix (Bioneer, Korea) according to the manufacturer's instructions. Briefly, 1 μl of the cDNA was mixed with 2X GreenStar Master Mix, forward primer, reverse primer, and nuclease-free water.
The reaction was then incubated in a Bio-Rad CFX Connect Real-Time System (Bio-Rad, USA). The PCR amplification conditions were set as follows: an initial denaturing step at 95 °C for 15 min, followed by 40 cycles of 10 s at 95 °C for denaturation and 30 s at 55 °C for annealing/extension. Product formation was confirmed by melting curve analysis (65 to 95 °C). The expression levels were calculated by the 2^(-ΔΔCt) method using the 18S ribosomal RNA (Rn18s) gene as an endogenous control. The specific primers used in the qPCR analyses were designed using the UCSC Genome Browser (https://genome.ucsc.edu/), Ensembl (https://asia.ensembl.org/index.html), and Primer3 software (http://bioinfo.ut.ee/primer3-0.4.0/). Forward and reverse primers were designed for rat Auts2, Foxp2, Smarcc2, Dicer1, and Rn18s. The sequences of the qPCR primers are shown in Supplementary Table S9. Prediction of biological functions and interactome analysis. Biological functions, disorders, canonical pathways, and interactome networks associated with DEGs were predicted using IPA software (Qiagen Inc., USA, https://www.qiagenbioinformatics.com/products/ingenuity-pathway-analysis/). The list of DEGs was overlapped with the list of genes experimentally validated to be associated with each function/disorder/canonical pathway in the Ingenuity Knowledge Base. Fisher's exact test was then performed to calculate P-values, and a P-value < 0.05 was considered statistically significant. Transcriptome data collection. Transcriptome profiling data of cells/tissues dissected from animals exposed to BPA or vehicle controls were obtained from the NCBI Gene Expression Omnibus database (GEO DataSets: http://www.ncbi.nlm.nih.gov/gds) in a search performed on May 13, 2017, using the keyword "bisphenol A" and the following criteria: i) the experimental models were animals, primary cells, or cell lines; and ii) each treatment group consisted of more than three samples. Transcriptome profiling data of cells exposed to chemicals other than BPA, when present in any selected study, were excluded prior to subsequent differential expression analyses. Identification of BPA-responsive genes and association with ASD candidate genes. To identify significant BPA-responsive genes in cells/tissues exposed to BPA, the transcriptome profile from each BPA study was analyzed separately using Multiple Experiment Viewer (MeV) (http://mev.tm4.org/) 86 . All transcriptome profiling data were filtered using a 70% cutoff, which removed transcripts for which intensity values were missing in >30% of the samples. The available transcripts were then used for identifying DEGs in the BPA group with two-tailed t-tests. Lists of ASD-related genes were obtained from two different ASD databases: the SFARI database (updated on April 17, 2018) (https://gene.sfari.org/) and the AutismKB database (from May 25, 2012) (http://autismkb.cbi.pku.edu.cn/). To determine whether the BPA-responsive genes identified in each transcriptomic study were significantly associated with ASD candidate genes, the list of BPA-responsive genes was overlapped with the list of ASD candidate genes from each ASD database, and a hypergeometric distribution analysis was conducted using the Hypergeometric Distribution Calculator program in the Keisan Online Calculator package (http://keisan.casio.com/exec/system/1180573201).
There are four variables in the Hypergeometric Distribution Calculator: number of overlapping genes, total number of DEGs in the experiment, total number of ASD-candidate genes, and total number of genes from RNA-seq analysis. Statistical analyses. Statistical analyses were conducted using SPSS version 16.0. The criterion for statistical significance was a P-value < 0.05. A two-tailed Student's t-test was used to determine the statistical significance of differences between the mean values of two groups. A hypergeometric distribution analysis was performed to determine the association of DEGs with ASD candidate genes obtained from the SFARI (https://gene.sfari.org/) and AutismKB (http://autismkb.cbi.pku.edu.cn/) databases using the Hypergeometric Distribution Calculator in the Keisan Online Calculator program (http://keisan.casio.com/exec/system/1180573201). A P-value < 0.05 was considered statistically significant. Ethics approval and informed consent. All animal experimental procedures were approved by the Chulalongkorn University Animal Care and Use Committee (Animal Use Protocol No. 1673007 and No. 1773011), Chulalongkorn University. Data Availability The transcriptome profiling data used in this study have been published in the NCBI GEO DataSets database (GSE44387, GSE63852, GSE58642, GSE50527, GSE58516, and GSE86923). The RNA-seq data will be made publicly available in the GEO upon acceptance of this manuscript for publication.
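A scripted equivalent of this four-variable calculation is sketched below; scipy's hypergeom is assumed here in place of the Keisan calculator, and the counts passed in are hypothetical:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_overlap: int, n_degs: int,
                      n_asd: int, n_universe: int) -> float:
    """Upper-tail hypergeometric P-value, mapping the four calculator
    variables: overlapping genes (n_overlap), total DEGs (n_degs),
    total ASD candidate genes (n_asd), and total genes from the
    RNA-seq analysis (n_universe). sf(k-1) gives P(X >= k)."""
    return hypergeom.sf(n_overlap - 1, n_universe, n_asd, n_degs)

# hypothetical counts for illustration only
print(enrichment_pvalue(n_overlap=11, n_degs=183,
                        n_asd=120, n_universe=20000))
```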
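A second, equally hedged sketch covers the per-gene t-test screen described in the Methods and illustrates the multiple-testing point raised in the Discussion: with few replicates, a Bonferroni threshold tends to leave no significant genes, whereas the uncorrected test does. The data here are simulated; the actual analyses used MeV:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_reps = 5000, 4
ctrl = rng.normal(0.0, 1.0, (n_genes, n_reps))   # control expression values
bpa = rng.normal(0.0, 1.0, (n_genes, n_reps))    # BPA-treated expression values
bpa[:50] += 1.5                                  # 50 genes with a true shift

# two-tailed t-test computed gene-by-gene along the replicate axis
_, pvals = stats.ttest_ind(bpa, ctrl, axis=1)

alpha = 0.05
print("uncorrected hits:", int(np.sum(pvals < alpha)))
print("Bonferroni hits: ", int(np.sum(pvals < alpha / n_genes)))
```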
8,684.2
2019-02-28T00:00:00.000
[ "Psychology", "Biology" ]
Emergency Remote Teaching and Learning in Greek Universities During the COVID-19 Pandemic: The Attitudes of University Students. European Journal of Interactive Multimedia and Education, (1). ABSTRACT Undoubtedly, the COVID-19 pandemic has had a great global impact on our daily activities. To face this unprecedented situation, all educational institutions were compelled to conduct lessons over the internet. Under the current circumstances, this quantitative research detects, describes, and measures the attitudes of 807 students of 5 Greek universities towards the distance learning process. The data, collected using a 5-point Likert scale, reflect the strong agreement of the students that face-to-face teaching cannot be replaced by distance learning, especially when it comes to laboratory training. The consensus is also that remote learning has degraded the pedagogical relationships between professors and students, as well as among the students themselves. The findings indicate that students broadly agree that educational inequalities are worsened by the lack of digital equipment and undeveloped technological infrastructure. Furthermore, this study reveals a correlation between the responses of the sample and their demographic and social characteristics, something that offers possibilities for additional research. THEORETICAL UNDERPINNINGS Recently, and amid the circumstances created due to the coronavirus, new conditions have been established for almost the entire global higher education sector (Crawford et al., 2020). Given the impact of the pandemic crisis (COVID-19) on education, we consider it necessary to focus on two important issues. First, during the crisis that arose at the end of March 2020, more than 1.5 billion students and youth across the planet were affected by school and university closures due to the COVID-19 pandemic. UNESCO introduced the terms 'emergency' and 'educational disruption' for the effects of the crisis on educational institutions and systems. Over 100 million teachers and school personnel were impacted by the sudden closures of learning institutions. Today, two-thirds of the world's student population is still affected by full or partial closures of educational institutions. In 29 countries, schools remain fully closed, and 24 million children and youth are at risk of dropping out (Karalis, 2020; UNESCO Institute for Statistics Data, 2020). Liquidity (Bauman, 2000) and dynamic change have increased, especially under the pressure of the pandemic crisis's demands for social distancing, a phenomenon experienced worldwide (Dhawan, 2020). However, we should note here that, as under so-called 'normal circumstances', the digital transformation to remote teaching and digital classrooms bears and raises a variety of issues concerning quality, social interaction, and data protection, which need to be carefully discussed and tackled. The remarks above convey a sense of urgency within academia that pressures students to keep up with changes and raises concerns that some students may be left behind. The closure of universities created new economic, social, and educational phenomena and difficulties that had to be overcome in order to continue the educational activity.
A combination of soft and digital skills is required for educational and pedagogical practices in a complex social and digital universe, where the use and development of electronic means of communication were inevitable (Fotii, 2020). The Vice-Chancellor of the Open University (UK), speaking to the University Council, mentioned the following (Jones et al., 2020): "Most of our students think IM, text and Google are verbs not applications! They expect to be engaged by their environment, with participatory, sensory-rich, experiential activities (either physical or virtual) and opportunities for input. They are more oriented to visual media than previous generations, and prefer to learn by doing rather than by telling or reading. They explicitly prefer to discover rather than be told". It is well known among researchers that technology can facilitate our everyday lives (Dimensional Research, 2018). The most important benefit of online education for students, under the current circumstances, turned out to be the ability to study in the safety of their own home (Sahbaz, 2020). Students also listed flexibility as the main advantage of using digital infrastructure for studying (Serhan, 2020). But students' adaptation to distance learning under the pressure of the consequences of the pandemic crisis cannot be effective in countries where a vast majority of students do not have access to the internet due to technical, pedagogical, financial, or organizational issues (Adnan & Anwar, 2020). A typical example was the set of conditions in the southern regions of Italy, where approximately 20% of the students did not have access to any devices and were excluded from learning, a phenomenon which in turn generates a direct risk of increased adolescent delinquency (Ferraro et al., 2020). According to the UN, at least 463 million students, or nearly one-third of students around the globe, cannot access remote learning, mainly due to a lack of online learning policies or a lack of the equipment needed to connect from home. Most students do not have the appropriate connectivity, device, and digital skills required to find and use educational content dependent on technology (UNESCO Institute for Statistics Data, 2020). Because of these financial and technical obstacles, students are also challenged by electricity interruptions and/or by the storage capacity of their available digital devices. Meeting these needs requires individual or family internet expenditure (Rotas & Cahapay, 2020). Thus, students attending universities in the so-called developing countries, where the technological infrastructure is not very developed, face significant problems, as that infrastructure is not sufficient for an urgent educational transition (Crawford et al., 2020). This is the COVID-19 version of the digital divide. On the one hand, the middle class is working safely and with access to technology in comfortable homes; on the other, students of disadvantaged communities (often ethnic minorities) are unable to access such technology in cramped homes, coupled with the need and requirement to work on-site. Thus, while for the former the conditions of their safe home led to a lower-risk situation during COVID-19, for the latter the insecurity and risk increased (Breslin, 2021). The pandemic has exposed and deepened pre-existing education inequalities that were never adequately addressed.
Despite critical additional funding needs, two-thirds of low- and lower-middle-income countries have cut their public education budgets since the start of the pandemic, according to a recent joint report by the World Bank and UNESCO (UNESCO Institute for Statistics Data, 2020). According to Audrey Azoulay, UNESCO Director-General (Karalis, 2020): "We are entering uncharted territory and working with countries to find hi-tech, low-tech, and no-tech solutions to assure the continuity of learning". In addition to the Director-General, the British Prime Minister underlined the following (Johnson, 2020): "Most painfully of all, the costs of school closure have fallen disproportionately on the most disadvantaged, the very children who need school the most. Surveys estimate that while the majority of pupils have been learning at home, as many as a quarter of pupils were doing less than two hours of schoolwork a day. Keeping our schools closed a moment longer than is absolutely necessary is socially intolerable, economically unsustainable, and morally indefensible". Also, with the earlier European economic-financial crisis comprising a considerable part of their life trajectories, students who are currently entering university are most likely to be anxious about their individual or parental financial situation (Asselmann et al., 2020). Having inherited a set of economic, political, and social worries about their financial situation, they are already anxious about their future. Even when students in certain cases overcome their financial problems, adapt efficiently to online learning, and participate actively in the new educational environments, research data showed that there was a lack of enthusiasm (Agung et al., 2020). Emotions and emotional status are a major issue. While University of Patras students acknowledged that the university had to close because of the pandemic crisis, their emotional status was strongly negative (75%). Nevertheless, upon the beginning of electronic classes, the dominant emotions turned into positive ones (95%) (Karalis & Raikou, 2020). Moreover, an important issue in online learning is practice. Laboratory studies cannot be carried out with distance education (Sivrikaya, 2019). Art and Health students also need workshops, and Chemistry or Physics students cannot perform their experiments at home, since some components of Art Education or Medical and Health programs can only be carried out in a workshop environment or in clinical settings. Obviously, in both of the above cases, the practical dimension of course delivery is one of the biggest problems encountered in distance education in times of pandemic (Dilmac, 2020). The preclinical curriculum was transferred online, and students completed virtual clinical skill assessments. Specifically, medical education and hospital training may never be the same again, as many institutions experienced abrupt disruptions in the face of the pandemic crisis (Wayne et al., 2020). Researchers and academics have tried to understand students' perceptions of distance education during the COVID-19 pandemic and have carried out empirical studies in India (Mishra et al., 2020; Naik et al., 2021), Serbia (Bojovic et al., 2020), Pakistan (Malik et al., 2020), the USA (Aguilera-Hermida, 2020), South Africa (Armoed, 2021), Poland (Cicha et al., 2021), and elsewhere.
According to the relevant literature, technological infrastructure and monetary issues are not the only important factors that differentiate adaptation to online learning procedures. As for the negative elements of online education, apart from the technical obstacles that have arisen, they are mainly related to the lack of communication and cooperation, as well as to the general restriction of social contact in the academic context (Karalis & Raikou, 2020; Lassoued et al., 2020). Bao (2020), in her study at the University of Beijing with a sample of 44,700 subjects, notes that the greatest difficulties for students did not come from a lack of technological skills, but from a lack of self-discipline and appropriate learning materials. Depending on their experience, students differ in the adoption of ICT skills (Mehdar, 2020). For example, as Brown and Czerniewicz (2008) found when looking at the case of South African students, although almost all had been exposed to ICTs, their use of these technologies was rarely frequent, and despite the hype associated with Web 2.0 technologies, their use for teaching and learning was low. First-year students are responsive and receptive to the use of ICT in a distance learning pedagogical framework, contrary to students of higher semesters, who prefer the use of traditional teaching methods and appear opposed to distance learning processes, in which they face communication barriers (Amir et al., 2020). A similar study by Owusu-Fordjour et al. (2020) on a sample of 250 students from Ghana shows the negative impact of distance learning on learning outcomes, as students stated, among other things, that they had limited internet access and did not have the required know-how for using the educational platforms, with which even the universities' teaching staff were not familiar. Similar were the findings of Tsitsia et al. (2020), with the students in that study citing, in addition to limited internet access, the high cost of distance learning. Communication, and more specifically the absence of interaction between teaching staff and students, seems to be one of the main problems (Gokbulu, 2020). Other important issues are the lack of access to internet facilities and the lack of campus socialization, as the pedagogical benefits of face-to-face communication and personal contact with teaching staff and peers cannot be recreated in the distance learning environment (Adnan & Anwar, 2020). Also, another common theme in university students' responses was the reference to the feeling of disconnection and the increased sense of isolation caused by online classes (Al-Twait & Al-Saht, 2020). It should be emphasized here that connectivity is a very important feature in the daily life of students currently entering university. One of the most important effects of this process of connectedness is the multicultural and global conception of everyday life (Mack & Palley, 2012). As a result of this feeling of disconnection, students displayed behaviors of losing interest in the class, finding it hard to concentrate, feeling disengaged, and ultimately evaluating themselves as less productive (Yang, 2021).
Of particular interest is also research from the United Arab Emirates concerning the extent to which the pandemic has affected the selection of studies by students and their families, with the variables of economic cost, quality of student life, and provision of online courses appearing to be differentiated relative to the pre-pandemic period (Nanath et al., 2021). In the past, debates about the exposure of 'new students' to educational (Maton, 2004) or technological (Hickox & Moore, 1995) change were common. Not only university students but also professors faced self-imposed obstacles, as well as pedagogical, technical, and financial or organizational obstacles. According to Merriman (2015), the current generation consists of true digital natives. For them, this is a technological world (Mack & Palley, 2012). In the midst of this COVID-19 crisis, however, the generation currently entering university is more complex than the literature would lead an observer to expect. In their research concerning the use of technologies, Kennedy et al. (2008) found that amongst first-year Australian students there was significant diversity when looking beyond the basic and entrenched technologies. Their findings ran counter to the results of Prensky's research about the characteristics of 'Digital Natives' (Prensky, 2001) and the similar analyses and results by Tapscott (1998, 2008). RESEARCH METHODOLOGY Scope and Aims of the Research The purpose of the study (research problem) is to detect, describe, and "measure" the attitudes of the student population towards the distance learning process, which was adopted during the pandemic in Greek tertiary education. The aims of the research are as follows: 1. To record university students' attitudes towards distance education (in general). 2. To record students' attitudes towards the pedagogical relationship as it arises during distance education. 3. To record students' attitudes towards relationships among students. 4. To record students' attitudes towards educational inequalities. 5. To correlate students' attitudes with specific demographics, the social interaction of the students, and choices in using the new technologies. Population and Sampling The research population consisted of Greek students studying at the University of Athens, Aristotle University of Thessaloniki, University of Ioannina, University of Patras, and University of Peloponnese. Through several educational platforms (e-class, classweb, and ecourse), 2,500 invitations were sent to students to complete the questionnaire, and 807 of them ultimately replied. The sample is convenient and random and is considered large, which ensures representativeness and regularity of its distribution. Besides, the projections we attempted onto the general sets of students do not show significant discrepancies. Research Method & Tool The current study adopted a quantitative cross-sectional research design. The questionnaire was selected as the most appropriate tool for reviewing and mapping the attitudes of a large number of students in order to record and analyze as many parameters as possible of the terms and conditions of students' work. In the present research we chose the questionnaire as a research tool for the following reasons: 1. It easily arouses the interest of the respondents and increases participation in the research process. 2. The initial decision on the need to use a large sample of subjects and the technical capabilities of the research team favors the use of a questionnaire. 3.
The questionnaire is used to collect information about perceptions and opinions of subjects, which are not easy to observe. 4. The questionnaire as a research tool allows continuous testing, and interventions can be formulated in the most appropriate way. For the purposes of the study, a questionnaire was structured as a tool for the quantitative research of students' attitudes. The questionnaire incorporated Likert scales that cover the research questions. Four 5-point Likert scales were created. A face validity check of the scales was performed by 5 independent critical reviewers, twice before the first (pilot) use of the questionnaire with 25 persons. After the pilot use, some specific necessary revisions were made according to the results of the preliminary Cronbach's alpha test. After a second pilot use with another 37 persons, the questionnaire was distributed through the Moodle platforms (e-course, e-class) of university courses at the five Greek universities. In order to calculate the modified kappa statistic, we used the following equation: K = (I-CVI - Pc)/(1 - Pc), where I-CVI is the item-level content validity index and Pc the probability of chance agreement. The kappa statistic value is .81. Data Analysis Data were analyzed using IBM SPSS Statistics v. 25, and both descriptive and inductive statistics were used. To examine the effect of demographic factors, as well as data collection questions, on the degree of agreement-satisfaction expressed by the subjects, t-tests and ANOVA were used accordingly. Moral and Ethical Issues During the investigation, certain rules were followed. Primarily, distortion of the information about the real purpose of the investigation was avoided. In addition, the following were avoided: involving participants without previously informing them about the research, compelling them to participate, exposing them to stressful situations, and any violation of their privacy (Robson & McCartan, 2016). Therefore, the practices used at all stages of the research process are characterized by ethics and adherence to international standards of scientific research ethics. The Sample The 807 students of the research were distributed regarding their gender, the field, year, and level of their studies, the educational level of their parents, and their place of residence. Table 1 shows the characteristics of the sample. Table 2 shows the sample's computer use. RESULTS The four scales of our research were constructed in order to measure the level of agreement on distance education in general and its main disadvantages as described in the relevant literature (Coman et al., 2020; Ferri et al., 2020; Hassenburg, 2009; Jara & Mellar, 2007; Lozovoy & Zashchitina, 2019; Simonson et al., 2012; Sokolova et al., 2018; Xu & Jaggars, 2010). On that basis, the scales reflect the main negative characteristics of distance education, focusing on the experience that university students had while studying remotely during the COVID-19 pandemic. Indeed, the university students express agreement with these general negative characteristics. Table 3 shows Cronbach's alpha test for each scale. As we can see from Table 4, the mean of each scale is over 3.5, and especially for scales C and D the means of agreement reach the value of 4, which should be considered relatively high. The means of the A and B scales are over 3.5. Regarding scale C (relationships among students): C.8 Distance education has deprived many students of even the first contact with the university (4.72/5), C.6.
Distance education degrades the general socializing function of university education (4.27/5), and C.2 Online courses do not develop relationships between students that could develop into friendships (4.12/5). The t-test for the effect of gender on the responses of the subjects shows a small but statistically significant correlation for the responses on scale D (educational inequalities), where women seem to express a greater degree of agreement than men. According to the ANOVA analysis using the Bonferroni post hoc test, statistically significant differences were noticed between the subjects depending on their demographic and social characteristics. More specifically, the set of answers in the first 4 scales seems to be influenced by the attitude of the subjects towards the introduction of computers in education. Worthy of attention is that in scale D (educational inequalities) the field of study appears to be an influential factor (df: 4 & p<0.01), as those who study humanities show a higher degree of agreement than those who study science (MD: 0.28116 & p<0.05) or technology (MD: 0.326338 & p<0.05). Furthermore, the frequency of computer use also appears to affect the sample's responses in scale A (distance education) (df: 3 & p<0.01), as those who use the computer a few times a week show a greater degree of agreement than those who use the computer daily (MD: 0.7488 & p<0.05) or even several hours per day (MD: 0.7333 & p<0.01). Finally, the ideological position of the subjects seems to influence their answers on scale D (educational inequalities). DISCUSSION Living in times of the pandemic, apart from the devastating health consequences, the COVID-19 crisis has had immediate economic and social effects on the lives and studies of higher education students. Hence the increasing interest of researchers in examining how it has affected their daily lives, including teaching and learning and social contacts, as well as how students are coping with the situation emotionally in different parts of the world. According to the Dell Technologies survey (12,000 secondary and post-secondary students), those who were born after the mid-1990s bring new tech skills and high expectations. They use technology as part of their formal education (98%), say that technology literacy matters (97%), believe that technology and automation will create a more equitable work environment (80%), and rank their technological literacy as good or excellent (73%) (Dimensional Research, 2018). Relevant studies in Greece revealed that most first-year Greek students widely use technological media, having grown up with new digital technologies. Nevertheless, research results indicate that technological infrastructure and financial issues impinge on students' attendance and engagement (OECD, 2020). The circumstances created due to COVID-19, the pandemic crisis, have posed an unprecedented challenge to educational systems and the global higher education sector (Crawford et al., 2020). Over 100 million teachers and school personnel were impacted by the sudden closures of learning institutions. The complete or partial closure of university institutions due to COVID-19 led to social distancing. Unavoidably, during the pandemic crisis students use digital and networking technologies for learning more than before. Consequently, the digital transformation to remote teaching and digital classrooms raised a variety of issues concerning quality, social interaction, and data protection.
Moreover, a new field that needs to be carefully discussed and tackled has emerged. Given that the pandemic crisis (COVID-19) has substantial effects on education, we attempted to explore important phenomena such as technical, pedagogical, and financial or organizational issues. According to Adnan and Anwar (2020), students' adaptation to distance learning under the pressure of the pandemic crisis cannot be effective in countries where the vast majority of students do not have access to the internet due to technical, pedagogical, and financial or organizational difficulties. As we have already mentioned, and even more importantly, technological infrastructure and financial issues are not the only issues that differentiate adaptation to online learning procedures. Our data analysis, in line with Cameron et al. (2021) and Karalis and Raikou (2020), indicates the lack of communication and cooperation, as well as the general restriction of social contact in the academic context, as some of the major obstacles. According to our data, the pandemic has exposed and deepened pre-existing education inequalities, as there are statistically significant differences between the subjects depending on their demographic and social characteristics (Kyridis, 1996, 2003; Kyridis et al., 2011). The sample's responses on the educational inequalities scale show a small but statistically significant difference that depends on gender. More precisely, women seem to express a greater degree of agreement than men. As we have already pointed out, COVID-19 can bring about a digital divide, assuming that the pandemic is more likely to accelerate and reshape existing phenomena rather than create new ones. Freshmen are already anxious about their future. It is noteworthy that our data analysis indicates that students who are skeptical about the introduction of computers in education show a greater degree of agreement on all 4 scales than those who say they are positive about this possibility. Also, the ideological position of the sample seems to influence their answers on scale 4 (educational inequalities). To be specific, those who stated that they ideologically belong to the Left wing show a greater degree of agreement than those who belong to the Right wing. Furthermore, according to Agung et al. (2020), even when students in certain cases overcome their financial problems, adapt efficiently to online learning, and participate actively to a large extent, research data showed that there was a lack of enthusiasm. Finally, the year of study seems to play an important role in the sample's answers. According to Mehdar's research (2008), depending on their experience, students differ in the adoption of ICT. As stated by Amir (2020) and Brown and Czerniewicz (2008), first-year students are responsive and receptive to the use of ICT in a distance learning pedagogical framework, contrary to students of higher semesters, who prefer the use of traditional teaching methods and are opposed to distance learning processes, in which they face communication barriers. Also, in accordance with the data of the above surveys, in our analysis the first- and second-year students show a greater degree of agreement in comparison with the older students. In conclusion, from the findings presented above and the literature review, it becomes evident that learning and social obstacles were raised in the academic environment by the uncertain situation that the pandemic crisis created.
The remarks above convey a sense of urgency within the academy that pressures students to keep up with changes and raises concerns that some students may be left behind. In any case, the closure of university institutions, as the main precaution that was taken, created new economic, social and educational phenomena and difficulties that had to be explored in order to overcome obstacles and continue the educational activity.
5,873
2022-01-03T00:00:00.000
[ "Education", "Computer Science" ]
Thermomagnetic recording fidelity of nanometer-sized iron and implications for planetary magnetism Significance Extraterrestrial rocks that contain particles of iron or kamacite are thought to carry paleomagnetic recordings from the time of the formation of the Solar System. Interpretation of these recordings has hitherto falsely assumed particles were uniformly magnetized. We have reexamined the magnetic recording reliability of these minerals using numerical models that account for the more complex magnetic structures that are likely to exist and show that iron and kamacite particles are exceptionally good and thermally stable recorders of ancient magnetic fields, dominated by the recording made when iron cools through its Curie point. Additional recordings for thermal events that occur substantially below the Curie temperature will be difficult to extract from iron-dominated samples. Paleomagnetic observations provide valuable evidence of the strength of magnetic fields present during evolution of the Solar System. Such information provides important constraints on physical processes responsible for rapid accretion of the protoplanetesimal disk. For this purpose, magnetic recordings must be stable and resist magnetic overprints from thermal events and viscous acquisition over many billions of years. A lack of comprehensive understanding of magnetic domain structures carrying remanence has, until now, prevented accurate estimates of the uncertainty of recording fidelity in almost all paleomagnetic samples. Recent computational advances allow detailed analysis of magnetic domain structures in iron particles as a function of grain morphology, size, and temperature. Our results show that uniformly magnetized equidimensional iron particles do not provide stable recordings, but instead larger grains containing single-vortex domain structures have very large remanences and high thermal stability—both increasing rapidly with grain size. We derive curves relating magnetic thermal and temporal stability demonstrating that cubes (>35 nm) and spheres (>55 nm) are likely capable of preserving magnetic recordings from the formation of the Solar System. Additionally, we model paleomagnetic demagnetization curves for a variety of grain size distributions and find that unless a sample is dominated by grains at the superparamagnetic size boundary, the majority of remanence will block at high temperatures (∼100 °C of Curie point). We conclude that iron and kamacite (low Ni content FeNi) particles are almost ideal natural recorders, assuming that there is no chemical or magnetic alteration during sampling, storage, or laboratory measurement. M agnetic remanences recorded in meteorites and lunar samples have been used to investigate solar nebular formation (1,2), partial planetesimal differentiation (3,4), and the possibility of an early lunar dynamo (5)(6)(7). The magnetic recorder, Ni-poor kamacite (FeNi) (essentially metallic iron), is commonly found in such planetary materials; and due to kamacite's chemical instability, its presence is usually seen as an indicator of potentially pristine magnetic remanences. However, for a magnetic mineral to retain an original magnetic remanence, the magnetic carriers must also be thermally stable on geological timescales. Most of our theoretical understanding of the thermal stability of iron particles' remanences is based on single-domain (SD) theory, which assumes that ideal magnetic recorders are magnetically uniform (8). 
Using Néel's theory for SD grains, Pullaiah et al. (9) determined a series of curves (henceforth referred to as "Pullaiah curves") that describe the thermal response of the common terrestrial magnetic recorders magnetite and hematite. Paleomagnetists use such Pullaiah curves to estimate the temporal stability of natural magnetic remanences by linking measured laboratory unblocking temperatures to theoretical room-temperature relaxation times. These curves can be used in a variety of applications, e.g., magnetic dating (10,11) and determining the likely primary nature of magnetic remanences. With the exception of Winklhofer et al. (12) and Fabian et al. (13), all previously published Pullaiah curves found in the literature, e.g., Pullaiah et al. (9) and Garrick-Bethell and Weiss (14), are based entirely on SD theory, which does not take into account more complex magnetic domain structures such as the flower and single-vortex (SV) states (15). We know such nonuniform structures are ubiquitous in the vast majority of iron particles found in planetary materials (16-18). In fact, near-equant iron SD particles are theoretically thermally unstable at room temperature; i.e., they are superparamagnetic (19-21) with relaxation times of seconds, not billions of years. Given that the majority of magnetic remanence carriers in iron, and likely other minerals, are SV (22), the paleomagnetic recordings that they contain can be correctly understood only by a reevaluation of their thermomagnetic stability. Can such iron particles record and retain magnetic remanences over geological timescales and do Pullaiah curves for vortex states in natural kamacite significantly deviate from those of SD grains? A pioneering study by Winklhofer et al. (12) used a constrained micromagnetic model to calculate Pullaiah curves for magnetite for nonuniform magnetic structures. However, such constrained models make assumptions about possible transition paths (23) and may not necessarily correctly estimate the energy barriers needed to construct Pullaiah curves. Additionally the work of Winklhofer et al. (12) was limited by computers of the time, i.e., to calculating low-resolution models with only a few points for each curve. The aim of this study is to exploit new model developments (18,24), which allow us to quantify the thermal stability of nonuniform magnetic structures, such as those found in kamacite. These developments allow us to use unconstrained numerical micromagnetic approaches that use a nudged elastic band (NEB) algorithm to determine the thermal stability of complex nonuniform magnetic domain states (25). 
We determine relaxation times and thermal stability in submicrometer grains of iron as a function of grain size, shape, and temperature. We use the micromagnetic modeling package MERRILL (Micromagnetic Earth Related Robust Interpreted Language Laboratory) (26) to calculate relaxation times when producing new Pullaiah curves for realistic ferromagnetic domain states, i.e., flower and single-vortex counterparts in both spheres and cubes of iron. Results Domain States and Remanences. Although all three allotropes of iron available at atmospheric pressures have a cubic crystalline form, their occurrence in the terrestrial environment is rare because of the ease with which it oxidizes or alloys with other elements. In extraterrestrial settings, pure iron is often observed in spherical morphologies (27,28). Remanence characteristics of magnetic crystals are significantly affected by the grain morphology, and so we examine both cubic and spherical grain shapes of iron. The evolution of domain structure with grain size determined from unconstrained 3D micromagnetic models follows the well-established evolution seen in other materials (21,29,30) whereby the smallest particles have relaxation times of order 10 2 s or less and are termed superparamagnetic (SP). As particle size increases, grains become stable SD, followed by a transition to an unstable SV state and then to a stable SV state. For equidimensional cubes and iron spheres at room temperature, the critical grain size d0 marks the transition from SP to SD, d 0 that from stable SD to unstable SV, and d 1 that from unstable to stable SV. In iron, the stable SD grain size range is almost entirely absent (20,21) with the exception of a very narrow zone from 23 nm to 25 nm where the local energy minimum (LEM) is an SD-like flower state which switches via vortex nucleation and annihilation. The critical grain size (d 0 ) for iron at which the transition from an SD to an SV state occurs is at 28 nm equivalent spherical volume diameter (ESVD) for cubes, in agreement with the previous estimate of 24 nm edge length by Muxworthy and Williams (21), and 25 nm for spheres. However, SV grains at or just below the d 0 threshold are not thermally stable. Indeed, we find that the smallest stable SV domain states are at 32 nm (ESVD) for cubes and 43 nm for spheres. The SV state can also be further classified according to the alignment of the vortex core relative to the crystalline anisotropy axis. In equidimensional grains the transition from SD to SV initially favors a vortex core aligned with the hard axis (HSV), which like its counterpart seen in magnetite (24) is only weakly stable. At larger grain sizes the easy-aligned vortex (ESV) state dominates over a large grain size range and ESV states remain the lowest-energy state up to at least 200 nm, which is the largest grain size modeled in this study. Experimental observations indicate the SV states can be nucleated in substantially larger grains still (18). If SV states are to contribute substantially to the paleomagnetic signal in rocks, then each SV grain must contribute a significant net remanence. To this end we calculate the average net remanence at 20 • C as a function of grain size determined by averaging the domain state magnetizations from 100 solutions (with random initial magnetization) per grain size (Fig. 1). The most significant observation from Fig. 1A is that throughout the SV grain size range the remanence per particle increases monotonically for spheres. 
Given that the SD size range in iron is very restricted, it follows that SV grains provide both a large and a sta-ble remanence and are therefore almost certainly the dominant source of remanence in lunar and meteoritic samples whenever spherical particles of iron or kamacite are the primary magnetic mineral. The behavior for cubes of iron is somewhat more complicated. Grains smaller than d 0 are in a near uniform domain sate, and so we expect the remanence of each grain to increase as d 3 , which is what we observe. In this SD range, the remanence of cubes and spheres should have near identical values when plotted in ESVD units. Grains slightly larger than d 0 are in an unstable SV state, with the lowest-energy state having the vortex core paradoxically aligned with the hard crystalline axis (HSV). However, the energies of the easy-aligned vortex cores (ESV) are not predicted to be significantly higher than those of the hard-aligned vortex cores, and so both states are accessible, with a low-energy barrier between them. As a result, grains in this region are SP at room temperature so that d 0 < d 1 with respect to thermal stability. The ESV and HSV states have different remanences due to the slight deformation of the vortex core in response to the crystalline anisotropy. Additionally for cubes, the core axis length varies with direction (cubic diagonals vs. edges). Because of this, the SD of remanence (σr) increases as seen at about 30 nm grain size for both cubes and spheres. Cubic grains larger than 33.5 nm have only one stable state which is the ESV state, resulting in two distinct features of the remanence curve in Fig. 1B: first, the dramatic decrease in σr as expected (as the hard axis states are no longer easily accessible) and second, the decrease in average remanence value. The decrease is caused by the ESV core that carries the remanence aligning along the cube's 100 easy directions, which are shorter than those of the HSV 111 aligned core by a factor of √ 3, and so a net decrease in remanence is expected. In cubes larger than 60 nm, σr increases dramatically with grain size, marking the transition from simple symmetrical SV domain states to more complex twisted vortex states (31), where both the grain shape and crystalline anisotropy play an increasingly important part in determining both the number and form of the domain states that can be nucleated. Although still dominated by vortex-like structures, the increasing multiplicity and asymmetry of domain states that can be nucleated beyond 60 nm can be thought of as the slow transition toward a multidomain (MD) state. During this transition, the vortex cores distort along the hard crystalline directions and eventually evolve into domain walls. While we would not expect to see a decrease in remanence with grain size in spherical grains (because the core length is direction invariant), we might have expected to see a decrease in σr when the ESV state dominates. However, spheres, unlike cubes, do not have a shape that mirrors the crystalline anisotropy, and so preference of alignment of the vortex core along the easy crystalline axis is much weaker in spheres and their vortex cores often align in random directions in the HSV to ESV grain size range. Despite the increasing variance of the remanence with grain size in both spheres and cubes, the curves shown in Fig. 
1 indicate that in most lunar and meteoritic samples where iron or kamacite is the dominant magnetic mineral, the primary carrier of magnetic remanence will be SV domain states and that these provide both high remanence and high thermal and temporal stability, a result unexpected from SD theory. Thermal and Temporal Stability. Using similar NEB calculations to those we have previously applied (18,24,30), we determined energy barriers between various LEM states and calculated Pullaiah curves with relaxation times for both cubic and spherical iron grains. Fig. 2 A and B shows these Pullaiah curves together with the relaxation times calculated analytically (Eq. 1) for ideal SD iron up to 30 nm (ESVD), using the Néel relation τ = τ0 exp(∆E/(kB T)) with the barrier ∆E proportional to vK1(T) [1], where τ is the relaxation time at temperature T (in kelvin), τ0 is the switching time, v is the particle volume, K1 is the temperature-parameterized magneto-crystalline anisotropy constant, and kB is Boltzmann's constant. These times are calculated purely from the energy barrier that results from the cubic magneto-crystalline anisotropy; we neglect the microscopic coercivity due to the self-demagnetizing field because of the particle symmetry of both cubes and spheres. There are a number of key observations to be made from Fig. 2 A and B. First, we observe that iron exhibiting SD domain structures, i.e., both flower-state micromagnetic models (24.8 nm model in Fig. 2B) and the analytical ideal-SD particle calculations (yellow-orange lines on left side in Fig. 2 A and B), are poor thermal recorders that behave superparamagnetically at relatively low temperatures, in agreement with the literature [e.g., Kneller and Luborsky (19), Butler and Banerjee (20), and Muxworthy and Williams (21)]. The analytic calculations made for grain sizes from 25 nm to 30 nm are necessarily constrained to be in an SD state, and in reality these are all above the critical grain size and would exist only in SV domain states. Second, we find that the smaller iron particles containing SV domain states are also relatively poor magnetic recorders. The stability decreases very quickly with decreasing grain size, so that we observe SP behavior for grain sizes below ∼43 nm and below ∼32 nm ESVD in spheres and cubes, respectively (Fig. 2). There is a change in the gradient of the Pullaiah curves for SD and SV that reflects the different domain states and switching mechanisms. The result is that small SV grains have lower temporal, but higher thermal stability. In spheres the energy barrier between LEM states is traversed by simple rotation of the vortex structure so that the contribution from the exchange and self-demagnetizing energies to the energy barrier is zero, leaving magneto-crystalline anisotropy as the sole remaining term controlling thermal blocking in small iron spheres. Small iron cubes are again more complex than spheres of the same nominal size. The primary mechanism by which SV states traverse energy barriers is by structure coherent rotation (SCR) (24). In this case the domain structure changes slightly during reversal owing to configurational anisotropy (32) caused by the interaction of domain structures with grain shapes. In SCR, in addition to the magneto-crystalline anisotropy, both the exchange and demagnetizing energies play a crucial role in controlling the height of the energy barrier between LEM states. The smallest iron cube that we modeled (24.8 nm) contains a flower-domain state that behaves similarly to the 25.5-nm ideal-SD case except that its relaxation gradient is slightly lower. 
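For the ideal-SD end member, Eq. 1 can be evaluated directly. The sketch below is schematic only: the value of τ0, the linear decay of K1(T) toward the Curie point, and the use of the K1·v/4 saddle of the cubic anisotropy as the barrier are simplifying assumptions of this illustration, not the parameterization used for the full NEB calculations.

```python
# Schematic Néel-Arrhenius relaxation time for an ideal SD iron grain.
# tau0, the K1(T) parameterization, and the K1*v/4 barrier (the <100>-<110>
# saddle of cubic anisotropy) are simplifying assumptions of this sketch.
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant, J/K
TAU_0 = 1e-9                # switching (atomic reorganization) time, s (assumed)
T_CURIE = 1043.0            # Curie temperature of iron, K

def k1(T):
    """Rough placeholder for the cubic anisotropy constant K1(T) in J/m^3."""
    k1_room = 4.8e4
    return k1_room * max(0.0, (T_CURIE - T) / (T_CURIE - 293.0))

def relaxation_time(diameter_nm, T):
    """tau = tau0 * exp(dE / (kB*T)) with dE taken as v*K1(T)/4 (assumed barrier)."""
    v = (np.pi / 6.0) * (diameter_nm * 1e-9) ** 3   # ESVD volume in m^3
    dE = v * k1(T) / 4.0
    return TAU_0 * np.exp(dE / (K_B * T))

for d in (20.0, 25.0, 30.0):
    print(d, "nm :", relaxation_time(d, 293.0), "s at room temperature")
```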
The 33.5-nm state is unstable, entering the superparamagnetic regime above ∼325 °C. (Fig. 2 A and B: Pullaiah curves for small spherical (A) and cubic (B) grains of iron through the SD and SV grain size range, showing the relationship between the temporal and thermal stabilities of magnetization. Heating a sample and noting the temperature at which it loses its magnetization can therefore tell us the age of remanence acquisition. In B the sizes are ESVD. The dashed lines are the interpolated Pullaiah curves (using Eq. 8) that determine the blocking temperatures and maximum affected grain sizes for the remagnetization scenarios listed in Table 1.) Domain states just below this (from 25 nm to 31 nm) comprise multiple possible LEM structures, both easy-axis and hard-axis aligned vortices, with free energy values very near to each other and with relatively low energy barriers between domain states. At 33.5 nm and beyond, the ESV state prevails and the barrier increases steadily with grain size. Thus, by ∼43 nm we observe blocking temperatures of ∼640 °C (Fig. 3) and by 74.4 nm temperatures of ∼745 °C (compared with ∼370 °C for similar-sized spheres). We summarize the blocking temperatures in Fig. 3, in which the stark difference between the thermal behaviors of spheres and cubes can be seen. For cubes we observe an initial unstable region as the switching regime changes from SD, to flower, and then to SV. Once this zone is traversed, there is a very rapid increase in blocking temperature. Spheres on the other hand do not exhibit an unstable region at room temperature, and the increase in blocking temperature is relatively smooth, following a pattern similar to what would be expected for SD rotation. Simulated Remanent Magnetization and Thermal Demagnetization. Thermal demagnetization curves can be used to estimate the range of thermomagnetic responses from distributions of iron cubes and spheres. From these, we can assess the ability of meteorites and lunar samples to hold a recording of the intensity of one or more components of a paleomagnetic field. We constructed simulated remanent magnetizations (SiRMs) from the range of LEM domain states found from random initial states. The SiRM cannot be said to be a true thermomagnetic remanence (TRM) as we do not simulate cooling in an external field. In a true TRM the remanence is fixed by the fraction of the domain states that are aligned with the external field at their blocking temperature Tb, although the remanence continues to grow below Tb with Ms(T). For uniaxial SD grains Néel (8,33) calculated this fractional alignment as proportional to tanh(Em(v)/kB Tb). Because a grain's magnetic energy (Em) increases much faster with grain volume than Tb, the equation implies that the fractional alignment will increase with grain size. We expect a similar relationship for SV grains, although this has not yet been fully established. In our model we make the simplification that the fractional alignment of the domain states is constant for all grain sizes and that remanences of each grain are all aligned parallel to each other (they are saturated). The remanence attributed to any one grain size is simply the average magnetization from 100 random initial states. The SiRM will still have many of the characteristics of a TRM in terms of the expected demagnetizing (zero field) blocking temperature spectrum. 
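The SiRM construction just described—an average remanence per grain size from many random initial states, combined with the blocking-temperature curves and, as detailed in the next paragraph, a lognormal grain-size distribution—reduces to simple bookkeeping. The sketch below illustrates only that bookkeeping; remanence() and blocking_temperature() are rough placeholders standing in for the micromagnetically computed curves of Figs. 1 and 3.

```python
# Sketch of a simulated remanent magnetization (SiRM) thermal demagnetization
# curve: weight grains by a lognormal size distribution and their remanence,
# then drop grains whose blocking temperature lies below the heating step.
# remanence() and blocking_temperature() are placeholders, not the model curves.
import numpy as np

def lognormal_pdf(d, median_nm, sigma=np.log(2.0)):
    return np.exp(-(np.log(d) - np.log(median_nm)) ** 2 / (2 * sigma**2)) / (
        d * sigma * np.sqrt(2 * np.pi)
    )

def remanence(d):             # placeholder: grows with grain volume in the SV range
    return np.where(d > 32.0, (d / 32.0) ** 3, 0.0)

def blocking_temperature(d):  # placeholder: rises steeply toward the Curie point
    return np.clip(770.0 * (1.0 - np.exp(-(d - 32.0) / 10.0)), 0.0, 770.0)

def sirm_demag(median_nm, steps_C):
    d = np.linspace(1.0, 1000.0, 4000)             # grain diameters, nm
    weight = lognormal_pdf(d, median_nm) * remanence(d)
    total = np.trapz(weight, d)
    tb = blocking_temperature(d)
    # Fraction of the SiRM still blocked after heating to each temperature step.
    return [np.trapz(np.where(tb > T, weight, 0.0), d) / total for T in steps_C]

print(sirm_demag(median_nm=50.0, steps_C=[20, 200, 400, 600, 700, 760]))
```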
In calculating the SiRMs, the relative number of particles of each grain size was chosen from the probability density function of a lognormal distribution of grain sizes (see Fig. 5). The stepwise thermal demagnetization of the SiRM is then simply determined from which grains would remain blocked after heating to a given temperature according to the blocking temperature curves of Fig. 3. (Fig. 3: Blocking temperatures for small cubic (green) and spherical (blue) grains of iron. The small peak observed at the start of the cubic blocking temperature curve corresponds to a narrow unstable zone of hard-axis aligned single vortices (HSV) that marks the transition between stable SD and stable SV domain states (24,30). The dotted lines are extrapolations of blocking temperature beyond the size range for which full micromagnetic computations were performed.) It is important to note that although we have extrapolated grain remanences and blocking temperatures for grains much larger than those for which we have full micromagnetic simulations, the resulting uncertainties in the shape of the thermal demagnetization of the SiRM curves will be restricted to the relatively small region corresponding to temperatures above the maximum calculated blocking temperatures of 730 °C for iron cubes and 656 °C for iron spheres. The predicted SiRM demagnetization curves are shown in Fig. 4 for a range of assumed log-normal distributions (Fig. 5). Each curve in Fig. 4 has a shaded region representing the ±σr influence of the remanence curve uncertainties shown in Fig. 1. The grain size distributions shown in Fig. 5 all have the same standard deviation σd = log(2), but with various geometric means (medians) from d̄ = 0.5 nm to d̄ = 1,000 nm. A distribution with a median of 0.5 nm is clearly dominated by SP grains with a small percentage of stable SD and SV states. It is only these stable domain states that contribute to the remanence, and for this reason it is possible to distinguish remanence contributions from relatively large grains: For example, the relative population of stable ESV spheres at grain sizes 50 nm compared with 400 nm falls only by a factor of ∼10, so that these larger grains still make a significant contribution to the observed remanence. At the other extreme, the distribution with d̄ = 1,000 nm is dominated by low-remanence MD grains (assumed zero in our model) and thus does not contribute to the observed sample magnetization. Discussion Discriminating Primary and Secondary Remanences. As stated in the first two sections of Results, it has been known for some time that the SD grain size range for iron is vanishingly small (20,21) so that the remanence is carried by the larger inhomogeneously magnetized particles, previously called "pseudosingle-domain" (PSD) grains. The exact nature of the remanence of PSD states remained poorly understood (34,35), until the advent of unconstrained 3D micromagnetic modeling (36,37) which identified vortex domain structures. These were suggested as the cause of "PSD" behavior by refs. 12 and 38. Only recently has it been possible to attempt a comprehensive estimate of their thermal stability (18,24,30). As a consequence, the interpretation of paleomagnetic signals has hitherto been done on the basis of SD theory even though it has long been acknowledged that SD particles are unlikely to be the dominant remanence carriers (22). Nagy et al. 
(24) demonstrated that SV domain states provide surprisingly high temporal and thermal stability, even in excess of that of SD grains that were until recently generally regarded as "ideal" magnetic recorders. What we have shown in this paper in the case of iron is that not only do SV domain states have high thermal and temporal stability, but also the remanence grows steadily with size, so that SV states will likely dominate the observed remanence in lunar rocks and chondritic meteorites where iron or kamacite is the major magnetic mineral. The remanence and blocking temperature calculations provide the means for constructing simulated remanence and stepwise thermal demagnetization curves which can provide an insight into the ability of assemblages of iron particles to accurately record a thermomagnetic remanence and to what extent this type of natural remanent magnetization (NRM) might be susceptible to secondary viscous remanent magnetization (VRM) and/or thermo-viscous (TVRM) overprinting. We have constructed simulated remanence curves for a wide range of possible grain size distributions. The distribution of smallest grains (d = 0.5 nm) is dominated by grains at the SD-SV boundary (d 0 ) where only the finest SV particles contribute to the signal. Only in this case do we observe a relatively smooth decay of magnetization between room temperature and the Curie point. In all other grain distributions and for all cubic grains (which exhibit the sharpest increase in blocking temperature with grain size), the SiRM remanence remains blocked to within a few tens of degrees of the Curie point. In natural samples therefore we would normally expect most of the remanence to be blocked within 100 • C of the Curie point. Experimental evidence in support of the prevalence of highblocking-temperature demagnetization curves is difficult to find because of the ease with which iron oxidizes on heating and the difficulty in most laboratories to achieve the high temperatures required. In fact, many of the published thermal demagnetizing curves for iron show evidence of chemical alteration, with nonreversible heating curves and Curie points well below the expected value of 770 • C. Indeed, they commonly display a 580 • C magnetite Curie point [e.g., Lawrence et al. (6), Wasilewski (28), Grommé et al. (39), and Helsley (40)]. However, Lawrence et al. (6) did have a single specimen with apparent blocking temperatures up to 770 • C. Because the average stability of SV domain states increases with grain size and for nonspherical grains, we would expect characteristic demagnetization curves in most lunar and meteoritic samples to be dominated by the high-unblocking-temperature particles. The implication is that most extraterrestrial material that is free from oxidation should be dominated by its primary remanence, with any secondary VRM or TVRM component accounting for a small fraction of the observed sample magnetization. This conclusion differs from that of Garrick-Bethell and Weiss (14) who used classical SD theory and obtained a much broader spectrum of blocking temperatures. They suggested that lunar rocks would be capable of recording secondary remanences arising from (i) shallow burial below the lunar surface, −20 • C for 1 billion y; (ii) lunar surface exposure where it experienced diurnal solar heating, 100 • C for 300 My, and finally (iii) Earth storage of lunar rocks at 20 • C for 10 y (14). Using our calculated Pullaiah curves (Fig. 
2), we can predict the maximum temperature required to remove each of the VRM and TVRM secondary overprints shown in Table 1. Our Pullaiah curves indicate that in theory it is possible that secondary overprints may dominate the thermal demagnetization curves, and thus the Arai plots of any Thelliertype paleointensity experiment, up to temperatures of about 485 • C. However, these overprints occupy very different blocking temperature ranges in cubes and spheres, so that it may be impossible to separate different VRM components in samples which have a range of grain morphologies. More importantly, we can see from the simulated thermal demagnetization curves (Fig. 4) that, with the exception of the smallest grain size distribution withd = 0.5 nm, grains with blocking temperatures of less than 485 • C in cubes and 265 • C in spheres account for less than 0.05% of the total NRM. Even for a grain distribution withd = 0.5 nm a significant fraction of the NRM is overprinted only for spherical grains. The conclusions must therefore be that lunar and meteoritic samples could be exceptionally good paleomagnetic recorders, which are unlikely to acquire a significant overprint from a VRM or TVRM process relevant to the geological settings of lunar samples. Paleointensities and Chemical Alteration. We are left with the problem that many lunar samples demonstrate significant lowtemperature components with nearly all being completely unblocked by ∼580 • C (39,41,42), and, assuming this alteration occurs via a grain surface process leaving a core-shell structure (43), then the residual iron particles will be of a smaller size and thus also lower blocking temperature. We note, however, that Strangway et al. (44) suggested that many lunar samples were likely to have been exposed to moderate magnetic fields on return from the moon and Lawrence et al. (6) demonstrated that such samples were unlikely to preserve a pristine TRM. In the terrestrial environment, pure iron readily oxidizes so that thermal demagnetization experiments are extremely likely to fail even when attempted in vacuum or inert atmospheres (18). We suspect that many published thermal demagnetization curves or Arai plots for lunar and meteoritic samples will be contaminated by chemical alteration. The question remains as to whether it is possible to extract reliable paleointensities from these samples which have a high magnetic recording fidelity, but are exceptionally susceptible to thermochemical alteration. The answer is likely to reside in nonheating methods, but such techniques have been attempted several times with limited success (45)(46)(47). Such methods usually either rely on SD theory (48) or require construction of a transfer function between coercivities and blocking temperature (based on a derived "calibration factor"). This transfer function depends on the exact mineralogy and grain size distribution and critically on the magnetic domain structure that the grains contain. Hitherto a purely phenomenological approach has been taken where calibration factors have been assigned to certain rock types. These approaches can only ever be first-order approximations with poorly defined uncertainties given a lack of rigorous theoretical understanding of the underlying physical processes involved. Conclusions Butler and Banerjee (20) concluded that the proportion of stable naturally occurring magnetically single-domain, iron grains is extremely small. 
Although their purely analytical results systematically underestimated the critical grain size for the SD/SV transition region (21), this conclusion remains valid. We have shown here that SV domain states offer both high magnetic remanence and high magnetic stability and offer the possibility of holding a thermomagnetic recording over periods from the beginning of the Solar System. Thermomagnetic demagnetization curves are predicted to be dominated by high blocking temperatures with at least 80% of the remanence remaining until within 100 °C of the Curie point. This also implies that most meteoritic and lunar samples where iron or kamacite is the dominant magnetic mineral should contain a high-fidelity recording of an ancient magnetic field and be largely resistant to secondary TVRM overprints. However, the high-fidelity recording of iron particles remains tantalizingly out of reach using normal laboratory observations due to the ease with which iron particles thermochemically alter. Nonheating paleointensity methods may be the only way to access the paleomagnetic recordings in iron particles, and micromagnetic calculations such as those outlined in this study could eventually establish a complete theory to derive accurate transfer functions between coercivities and blocking temperatures for SV grains. This would significantly increase the reliability of nonheating paleointensity methods. The exposure of many lunar samples to moderate magnetic fields after sampling, however, remains a problem. Materials and Methods Calculation of Blocking Temperatures and Relaxation Times. Numerical micromagnetic modeling (26,49,50) is used to calculate the magnetization, m(x) = (mx(x, y, z), my(x, y, z), mz(x, y, z)), of a magnetic material denoted by Ω with (x, y, z) ∈ Ω. This technique divides the total energy, Etot, resulting from the magnetization into four components: the exchange Ee, demagnetizing Ed, magneto-crystalline anisotropy Ea, and external (Zeeman) Ez energies, according to the equations Ee = A(T) ∫Ω [(∇mx)² + (∇my)² + (∇mz)²] dV [2], Ed = −(µ0/2) Ms(T) ∫Ω m · Hd dV [3], Ea = K1(T) ∫Ω (mx²my² + my²mz² + mz²mx²) dV [4], and Ez = −µ0 Ms(T) ∫Ω m · Hz dV [5], where A(T), Ms(T), and K1(T) are the temperature-dependent exchange, saturation magnetization, and magneto-crystalline anisotropy constants, respectively (below), with Hd the demagnetizing field and Hz the externally applied Zeeman field. The total energy Etot is the sum of Eqs. 2-5. Magnetization configurations (m) that minimize Etot correspond to stable magnetization structures. In general, it is not possible to find analytical expressions for m that minimize Etot, and so the region Ω is subdivided into tetrahedral elements and m is spatially sampled at the n points comprising tetrahedra vertices. Etot then describes a 3n-dimensional energy landscape with respect to the three components of the magnetization; the task of micromagnetic algorithms is then to find magnetization structures that correspond to LEMs of this landscape. The total energy itself is calculated as the sum of partial energy contributions over each element, where the magnetization is assumed to vary linearly. The size of the elements is controlled by the exchange length ℓex (31,51), and below this size (taken as the average length of the side of a tetrahedral element) magnetization fields resemble uniform domains and no longer capture complex magnetization structure. Eq. 6 outlines the expression used for the exchange length in terms of the exchange A(T) and saturation magnetization Ms(T), ℓex = √(2A(T)/(µ0 Ms²(T))) [6], where µ0 is the permeability of free space. Numerical values of A(T) and Ms(T) are detailed in ref. 26. 
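As a small numerical check of the element-size criterion, the exchange length can be evaluated from the expression reconstructed as Eq. 6 above using commonly quoted room-temperature constants for iron. The values of A and Ms below are textbook approximations and are assumptions of this sketch, not the parameterization of ref. 26.

```python
# Exchange length l_ex = sqrt(2*A / (mu0 * Ms^2)) for iron at room temperature.
# A and Ms are commonly quoted approximate values, not the MERRILL parameterization.
import math

MU0 = 4.0e-7 * math.pi       # permeability of free space, H/m
A_EXCH = 2.1e-11             # exchange constant of iron, J/m (approximate)
MS = 1.71e6                  # saturation magnetization of iron, A/m (approximate)

l_ex = math.sqrt(2.0 * A_EXCH / (MU0 * MS**2))
print(f"exchange length ~ {l_ex * 1e9:.1f} nm")   # roughly 3 nm
```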
The magnetization m at a given temperature results in a high-dimensional energy surface via Eqs. 2-5. Some configurations of m correspond to wells in the energy landscape, which are stable magnetization structures. The blocking temperatures may then be approximated by calculating the energy barrier between these LEM states determined by the NEB method. To calculate blocking temperatures and relaxation times we use the Néel-Arrhenius (8) relation that equates the magnitude of an energy barrier with the relaxation time, τ = τ0 exp(∆E/(kB T)) [7], where τ0 is the atomic reorganization time taken to be ≈10⁻⁹ s (55), ∆E is the size of the energy barrier required to transition from one LEM state to another in joules, kB is Boltzmann's constant, and T is the temperature in kelvin. Note that the relaxation times will be reduced by the degeneracy of the minimum energy paths by which the domain can switch. For a cubic crystalline symmetry this may be of the order of 4, but will also depend upon the grain symmetry. Given the uncertainty in τ0 and that in determining the exact degeneracy, we have chosen simply to state the relaxation time for a single energy barrier; it should be noted that the actual relaxation times observed might be lower by a factor of typically 1-8. Once relaxation times τ(T) have been calculated for the complete temperature range (from 293 K to 1,038 K), it is a simple task to calculate the blocking temperature by selecting a reference relaxation time, typically a laboratory timescale (we take τref = 100 s in this study), and interpolating T to the temperature that corresponds to τ(T) = τref. Pullaiah Curve Interpolation. The following function was used to obtain the scenarios in Table 1 (dashed curves in Fig. 2) by interpolating between any two curves representing given sizes S1 and S2 on a Pullaiah diagram, P(T) = P1(T) + [(S − S1)/(S2 − S1)] (P2(T) − P1(T)) [8], where S is a chosen size between S1 and S2; P1(T) and P2(T) are the polynomials representing the Pullaiah curves at size S1 and S2, respectively; and P(T) is the interpolated line between P1(T) and P2(T).
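The last two steps—finding the blocking temperature from τ(T) = τref and interpolating between two Pullaiah curves—are simple numerical operations. The sketch below assumes τ(T) is available as a tabulated, monotonically decreasing function of temperature (here filled with illustrative values rather than NEB-derived barriers); it illustrates the procedure and is not the MERRILL implementation.

```python
# Blocking temperature from tabulated relaxation times, and linear interpolation
# between two Pullaiah curves (Eq. 8).  tau_of_T below is illustrative input only.
import numpy as np

def blocking_temperature(T_grid, tau_of_T, tau_ref=100.0):
    """Temperature at which the relaxation time drops to tau_ref (e.g. 100 s)."""
    log_tau = np.log10(tau_of_T)
    # np.interp expects increasing x, so reverse the (decreasing) log(tau) axis.
    return np.interp(np.log10(tau_ref), log_tau[::-1], T_grid[::-1])

def interpolate_pullaiah(P1, P2, S1, S2, S):
    """Eq. 8: P(T) = P1(T) + (S - S1)/(S2 - S1) * (P2(T) - P1(T))."""
    frac = (S - S1) / (S2 - S1)
    return lambda T: P1(T) + frac * (P2(T) - P1(T))

# Illustrative relaxation-time table (seconds), decreasing with temperature.
T_grid = np.linspace(293.0, 1038.0, 200)
tau_of_T = 10.0 ** (30.0 * (1038.0 - T_grid) / (1038.0 - 293.0) - 5.0)
print("blocking temperature (K):", blocking_temperature(T_grid, tau_of_T))
```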
8,107.4
2019-01-22T00:00:00.000
[ "Geology" ]
Why soft contacts are stickier when breaking than when making them Soft solids are sticky. They attract each other and spontaneously form a large area of contact. Their force of attraction is higher when separating than when forming contact, a phenomenon known as adhesion hysteresis. The common explanation for this hysteresis is viscoelastic energy dissipation or contact aging. Here, we use experiments and simulations to show that it emerges even for perfectly elastic solids. Pinning by surface roughness triggers the stick-slip motion of the contact line, dissipating energy. We derive a simple and general parameter-free equation that quantitatively describes contact formation in the presence of roughness. Our results highlight the crucial role of surface roughness and present a fundamental shift in our understanding of soft adhesion. Insects, pick-and-place manufacturing, engineered adhesives, and soft robots employ soft materials to stick to surfaces even in the presence of roughness. Experiments show that the force required for making contact is lower than for releasing it, a phenomenon known as the adhesion hysteresis. 1,2The common explanation for this hysteresis is either contact aging or viscoelasticity. 3,4Here, we show that adhesion hysteresis emerges even for perfectly elastic contacts and in the absence of contact aging and viscoelasticity because of surface roughness. We present a crack-perturbation model [5][6][7] and experimental observations that reveal discrete jumps of the contact perimeter.These stick-slip instabilities are triggered by local differences in fracture energy between roughness peaks and valleys.Pinning of the contact perimeter [8][9][10] retards both its advancement when coming into contact and its retraction when pulling away.Our model quantitatively reproduces the hysteresis observed in experiments and allows us to derive analytical predictions for its magnitude, accounting for realistic rough geometries across orders of magnitude in length scale. 11,12Our results explain why adhesion hysteresis is ubiquitous and reveal why soft pads in nature and engineering are efficient in adhering even to surfaces with significant roughness. Introduction Two solids stick to each other because of attractive van-der-Waals or capillary interactions at small scales. 3The strength of these interactions is commonly described by the intrinsic work of adhesion w int , the energy that is gained by these interactions per surface area of intimate contact.The work of adhesion is most commonly measured from the pull-off force F pulloff = −3πw int R/2 of a soft spherical probe (see Fig. 1a) with radius R which makes a circular contact with radius a (see Fig. 1b). 13For hard substrates, the measured apparent work of adhesion is smaller than the intrinsic value w int because roughness limits the area of intimate contact to the highest protrusions. 14,15Soft solids are sticky because they can deform to come into contact over a large portion of the rough topography. 
The overall strength of the adhesive joint is then determined by the balance of the energy gained by making contact and the elastic energy spent for conforming to the surface. Following Persson and Tosatti, 16 energy conservation implies that surface roughness reduces the apparent work of adhesion to w PT = w int − e el (Eq. 1), where e el is the elastic energy per unit contact area required to conform to the roughness (Fig. 1c). (Fig. 1 b-d: The contact forms a circle for contacting spheres, and its radius a can be measured from in-situ optical images of the contact area. (c) Most natural and technical surfaces are rough so that the solid needs to elastically deform to come into conforming contact. (d) The contact radius is larger and the normal force is more adhesive (negative) during retraction than during approach. The pull-off force is the most negative force on these curves.) As shown in Fig. 1d, experiments typically follow different paths during approach and retraction, leading to different apparent values of the work of adhesion for making and breaking contact, w appr and w retr. This adhesion hysteresis contradicts Persson and Tosatti's balance of energy, which gives the same value w PT for approach and retraction. In this letter, we present and experimentally validate a theory that allows us to predict these apparent works of adhesion during approach and retraction and thereby the adhesive hysteresis. For soft spherical probes, we can describe the circular contact perimeter as a crack. The crack front is in equilibrium when Griffith's criterion is fulfilled: 17 the energy per unit area required locally for opening the crack, w loc, is equal to the energy released from the elastic deformation, G δA = w loc δA, where δA is the contact area swept out by the moving crack front. A more common way of writing this equation is G = w loc (Eq. 2), where both the energy release rate G and w loc should be interpreted as forces per unit crack length. Johnson, Kendall and Roberts (JKR) 13 derived the expression for the energy release rate G for a smooth spherical indenter, G = G JKR (b, a). Equation (2) then allows the evaluation of not just the pull-off force, but of all functional dependencies between rigid body displacement b, contact radius a and normal force F during contact. For smooth spheres, w loc is the intrinsic work of adhesion w int, which is uniform along the surface. In the presence of roughness, we will show below that w loc becomes a spatially fluctuating field describing the topographic roughness. Equation (2) must then hold independently for each point on the contact perimeter. Axisymmetric chemical heterogeneity We first demonstrate the physical origin of the adhesion hysteresis using a simplified surface that has concentric rings of high and low adhesion energy, similar to the model by Guduru 20 and Kesari and Lew. 21,22 Rather than being random, w loc (a) varies in concentric rings of wavelength d as a function of distance a from the apex of the contacting sphere (Fig. 2a). Figure 2b shows the resulting behavior of the contact line: it is pinned by the first strong-enough obstacle it encounters, so that it is pinned at low contact radius when the contact area grows and at high radius when it shrinks. In the limit of roughness with small wavelength, d → 0, G JKR does not decrease significantly before the contact line arrests at the next peak (see Fig. 
2c). In this limit, the contact line samples the minimum values w appr of w loc during approach and the maximum values w retr during retraction. The functional relationship between b, a and F then becomes identical to the JKR solution for smooth bodies (Suppl. Eqs. (S-4) to (S-7)), but with a work of adhesion that is decreased during approach (w appr) and increased during retraction (w retr, see Fig. 2d). In this limit, the hysteresis w retr − w appr becomes equal to the peak-to-peak amplitude of w loc (a). 21 Random chemical heterogeneity The next step in complexity is moving from a simplified axisymmetric surface to a surface with random variation of the local work of adhesion, where the contact line is no longer perfectly circular (see Fig. 3a). The energy release rate G at a given point now depends on the whole shape of the contact a(s), where s is the length of the corresponding path along the contact circle. Based on the crack-perturbation theory by Gao and Rice, 5,7,23 we recently derived the approximate expression for G(s) (Eq. 3), 5,6 in which the fractional Laplacian (−∆s)^1/2 of a(s) penalizes excursions from circularity and can be interpreted as a generalized curvature, similar to the restoring force of an elastic line. Supplementary Section S-ID derives this expression and shows that near equilibrium, where G(s) = w int, the stiffness of the line is given by c = w int. Note that, counterintuitively, the stiffness of the line does not depend on the elastic modulus of the bulk. Numerical solution of Eqs. (2) and (3) (see Supplementary Section S-II) on a random field w loc (x, y) with lateral correlation of length d yields force-area curves similar to those of our axisymmetric model, with the contact line advancing through discrete jumps (Fig. 3b,c). 4,25 Between these jumps, the contact line is pinned. At the same rigid body penetration, pinning occurs at lower contact radii in approach than during retraction, leading to a hysteresis in apparent adhesion described by two JKR curves with constant apparent work of adhesion w appr and w retr (Fig. 3c), similar to the curves obtained from our 1D axisymmetric model (Fig. 2d). Our numerical data in Suppl. Fig. S-5 shows that the magnitude of hysteresis, w retr − w appr, is proportional to w rms², the variance of the random field w loc. To understand this expression, we first discuss the virtual limit c → 0 where the line is floppy and deviations from circularity are not penalized. Floppy lines (c < w rms) can freely distort and meander along valleys during approach (green line in Fig. 3d) and peaks during retraction (purple line). Because of this biased sampling of the work of adhesion along the line, the contact radius is larger during retraction than during approach. In this individual-pinning limit, 10,26,27 each angle θ along the contact perimeter independently yields our 1D model and we obtain w retr − w appr ∝ w rms. 
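In this picture, approach and retraction each follow a JKR curve with a different effective work of adhesion. The sketch below visualizes that envelope using the textbook JKR force-radius relation F(a) = 4E*a³/(3R) − √(8π w E* a³); this relation and all parameter values are standard-JKR assumptions of the illustration, not formulas quoted from this paper.

```python
# JKR force-radius curves with different effective works of adhesion for
# approach (w_appr) and retraction (w_retr).  Textbook JKR relation
# F(a) = 4 E* a^3 / (3R) - sqrt(8 pi w E* a^3); parameter values are illustrative.
import numpy as np

E_STAR = 1.0e6        # contact modulus, Pa (illustrative, soft-elastomer scale)
R = 1.25e-3           # sphere radius, m (illustrative)

def jkr_force(a, w):
    return 4.0 * E_STAR * a**3 / (3.0 * R) - np.sqrt(8.0 * np.pi * w * E_STAR * a**3)

a = np.linspace(1e-6, 3e-4, 400)                 # contact radii, m
w_appr, w_retr = 0.02, 0.10                      # J/m^2, illustrative apparent values
F_appr, F_retr = jkr_force(a, w_appr), jkr_force(a, w_retr)
print("pull-off on the approach curve:   %.2e N" % F_appr.min())
print("pull-off on the retraction curve: %.2e N" % F_retr.min())
print("JKR prediction -3*pi*w_retr*R/2:  %.2e N" % (-1.5 * np.pi * w_retr * R))
```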
In the opposite limit, c → ∞, the line is stiff and the contact remains circular (dashed line), randomly sampling as many regions of low and high adhesion. The fluctuations average out along the perimeter so that there is no hysteresis, w retr − w appr = 0. The contact radius is obtained from the JKR expression evaluated for the spatially averaged work of adhesion, w loc. (Fig. 3 caption: Each colored patch corresponds to an elastic instability during which the perimeter jumps between two pinned configurations (dark lines), and the color scale represents the energy dissipated during each instability. The Larkin length corresponds to the smallest extent of these jumps along the perimeter, and increases for weaker heterogeneity or for a stiffer line. The work of adhesion heterogeneity corresponds to a random roughness that has a flat power spectral density with short-wavelength cutoff λs = 0.07. (b) Contact radius as a function of the normal force in the simulation for the stronger pinning field shown in panel (a). The elastic instabilities correspond to sudden jumps in the contact area and in the normal force. The solid black line corresponds to increasing energy release rates at fixed rigid body penetration b = 0, and the points A and B show that the contact radius is higher during retraction than during approach. The red arrows show the jump-in and jump-out of contact instabilities. (c) Contact radius as a function of the normal force in a simulation on a random chemical heterogeneity with smaller feature size ≈ 0.01 and w rms /c ≈ 0.45. The force-radius curve is smooth because the random field has small features that trigger a large number of instabilities. The dashed lines are JKR curves with work of adhesion w appr and w retr predicted by our theory Eq. (6). In this simulation, w loc corresponds to a self-affine randomly rough topography with an elastic energy for fully conformal contact e el /w int = 0.05 and power spectrum shown by the blue circles in the inset of Suppl. Fig. S-5. (d) Contact lines at rigid body penetration b = 0 on the random work of adhesion heterogeneity shown by the blue colormap. Floppy lines are pinned at higher contact radii during retraction (purple line) than during approach (green line) because they meander predominantly between regions of low adhesion (white patches) during approach, and between regions of high adhesion (dark blue patches) during retraction. In the limit of a rigid line, the perimeter is perfectly circular (dashed line), randomly sampling as many regions of low and high adhesion. Units are nondimensionalized following the conventions of Refs. 18, 19, as described in the Supplemental Material.) Our simulations (and, as we show below, our experiments) display this pinning-controlled stick-slip behavior. Identical results were obtained previously for cracks in heterogeneous media. 10,28 Topographic roughness The final step in describing adhesion hysteresis on real surfaces is to relate the randomly rough topography to the spatial variations in local work of adhesion. For this we need to consider excursions of the contact line normal to the surface in addition to the lateral excursions that are described by the contact radius a(θ) (see Fig. 4). First note that the solid is always dilated near the crack tip. In order to conform to a valley, the elastic solid needs to stretch even more, requiring elastic energy. Using the same arguments that lead to Eq. 
( 1), this additional elastic energy manifests as an effectively decreased local work of adhesion.Conversely, conforming to a peak decreases the overall strain near the crack tip and releases elastic energy, leading to an increased equivalent work of adhesion. While this intuitive picture approximately describes the relationship between heights and local adhesion, the quantitative value of the local adhesion w loc depends nonlocally on the topographic field h(x, y) via an integral transformation derived in Suppl.Sec.I B and C. Supplementary Section III also shows that a crack-front simulation on w loc (x, y) yields results virtually indistinguishable from an exact boundary-element calculation. Comparison to experiments We contacted a rough nanodiamond film with a PDMS hemispherical lense while optically tracing the contact perimeter (see Methods).The nanodiamond film was characterized from atomic to macroscopic length scales using a variety of techniques, as described in Refs. 11,12The resulting power-spectral density (PSD) 29 comprehensively describes the topography of the film and is shown in Fig. 5a.This experiment is compared to a simulation carried out on roughness field with an identical PSD, leaving w int as the only free parameter that we fit to the approach curve.This yields w int = 63 mJ m −2 , within the range expected for van-der-Waals interaction. First, our experiments show the same instabilities as the simulations.The trace of the contact line in Fig. 5b shows the jerky motion of the line for both, with comparable amplitudes of deviations from the ideal contact circle.Videos of the contact area in the indentation experiment show stick-slip motion of the contact line, similar to our simulations (Suppl. Mat. SV1).The fundamental hysteresis mechanism in our model, elastic instabilities and stick-slip motion of the contact line, is clearly present in the experiment. Second, measurements of the mean contact radius as a function of normal force also agree with our simulation results (Fig. 5c).While the intrinsic work of adhesion was adjusted such that the simulations follow the experimental data during approach, the overall functional form is JKR-like (with an effective w appr ) and agrees between experiments and simulation. During retraction, we observe the same phenomenology: From the point of largest normal force, the sphere retracts first at constant contact radius before starting to follow a JKR-like curve with an increased work of adhesion w retr .While simulations retract at slightly different forces, the order of magnitude of the hysteresis is correctly predicted from our simple elastic model. Quantitative differences could come from intrinsic assumptions in our model, such as small strains, linear elastic properties, approximations made in deriving the simplified crack-front expressions, or the assumption of conforming contact.Increasing roughness increases adhesion only as long as the energy needed to fully conform the surface roughness e el is lower than the gain in surface energy w int . 
15,30,31 Many experiments that report a decrease of pull-off force with increasing roughness, as for example reported in the classic adhesion experiment by Fuller and Tabor, 32 may be in this limit where only partial contact is established within the contact circle. Unlike the theory presented here for soft solids and our understanding of nonadhesive contact, 15 there is presently no unifying theory that quantitatively describes adhesive contact for stiff solids. Large scale simulations with boundary-element methods are needed to better understand this intermediate regime. 14,31,33,34 Analytic estimates We now show that simple analytic estimates can be obtained from our crack-front model. The equivalent work of adhesion field has the property that its mean corresponds to the Persson-Tosatti expression, Eq. (1). Furthermore, it has local fluctuations with amplitude w rms = √(2 w int e el) which determine the adhesion hysteresis, which means that the main parameter determining the hysteresis is e el, see Eq. (5). We carried out crack-front simulations on self-affine randomly rough topographies (Fig. 3c and Suppl. Sec. IV) with varying parameters to confirm that the work of adhesion during approach and retraction is indeed given by Eq. (6), with a numerical factor of k ∼ 3. The elastic energy for fully conformal contact can be written in terms of the elastic contact modulus 35 and a geometric descriptor of the rough topography, h^(1/2) rms (Eq. (7)). In terms of the two-dimensional PSD 29 C iso, we define (h^(α) rms)² = (1/2π) ∫ dq q^(1+2α) C iso (q) (Eq. (8)), where q is the wavevector. This expression contains the root-mean-square (rms) amplitude of the topography, h rms (α = 0), the rms gradient of the topography, h' rms (α = 1), as well as arbitrary derivatives of order α. The elastic energy is given by the roughness parameter h^(1/2) rms, which is intermediate between rms heights and rms gradients. For most natural and engineered surfaces, h^(1/2) rms is dominated by relatively long wavelengths of roughness. [37][38] Our model is then consistent with the increase in pull-off force with h rms reported in Refs. 21, 39. We note that most measurements report insufficient details on surface roughness to allow definite conclusions on the applicability of a certain contact model. The range of length scales that dominate h^(1/2) rms in our own experiments is at the transition between power-law scaling and the flat rolloff at 2 µm, a length scale that is accessible with an atomic-force microscope. We illustrate the respective scales that contribute to h^(α) rms in Fig. 5a. Summary & Conclusion The work performed by a soft indenter during the approach-retraction cycle is dissipated in elastic instabilities triggered by surface roughness. The dissipated energy is the difference in energy between the pinned configurations just before and just after the instability (see Fig. 3a). This pinning of the contact line explains why adhesion is always stronger when breaking a soft contact than when making it, even in the absence of material-specific dissipation. Roughness peaks increase local adhesion, which pins the contact line and increases the pull-off force. By describing rough adhesion as the pinning of an elastic line, we were able to derive parameter-free, quantitative expressions for the hysteresis in terms of a simple statistical roughness parameter. This analysis paves the way to better understanding the role of surface roughness in adhesion, and provides guidance for which scales of roughness to control in order to tune adhesion. 
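Given an isotropic PSD, the roughness parameter h^(1/2)_rms and the derived quantities follow from straightforward quadrature. The sketch below assumes a self-affine model PSD and uses the common Persson-Tosatti full-contact estimate e_el ≈ E* (h^(1/2)_rms)²/4 together with w_rms = √(2 w_int e_el); the PSD parameters, the modulus, and the e_el prefactor are assumptions of this illustration, not the measured spectrum of the nanodiamond film.

```python
# Roughness parameter h^(1/2)_rms, conformal elastic energy e_el, and adhesion
# fluctuation amplitude w_rms from an isotropic PSD.  The model PSD, its
# parameters, and the e_el prefactor are illustrative assumptions.
import numpy as np
from scipy.integrate import simpson

E_STAR = 1.0e6        # elastic contact modulus, Pa (illustrative)
W_INT = 0.063         # intrinsic work of adhesion, J/m^2 (value quoted in the text)

def c_iso(q, C0=1e-28, q_r=2 * np.pi / 2e-6, H=0.8):
    """Self-affine model PSD with a roll-off at a 2 um wavelength (illustrative)."""
    return np.where(q < q_r, C0, C0 * (q / q_r) ** (-2.0 - 2.0 * H))

q = np.logspace(4, 9, 2000)                       # wavevectors, 1/m

def h_alpha_rms(alpha):
    """(h^(alpha)_rms)^2 = (1/2pi) * integral of q^(1+2*alpha) * C_iso(q) dq."""
    return np.sqrt(simpson(q ** (2 * alpha + 1) * c_iso(q), x=q) / (2 * np.pi))

h_half = h_alpha_rms(0.5)
e_el = 0.25 * E_STAR * h_half**2                  # assumed full-contact estimate
w_rms = np.sqrt(2.0 * W_INT * e_el)
print(h_half, e_el, w_rms)
```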
Rough substrate We contacted the PDMS lens against a nanocrystalline diamond (NCD) film of known roughness.The diamond film was deposited on a silicon wafer by chemical vapor deposition and subsequently hydrogen terminated to avoid polar interactions and hydrogen bond formation between the PDMS lens and the rough substrate.The roughness of the film was determined by combining measurements from the milimeter to atomic scales using stylus profilometer, atomic force microscopy (AFM), and transmission electron microscopy (TEM).The full experimental dataset along with the averaged PSD shown in 8)).Specifically, short wavelengths and long wavelengths beyond this range respectively contribute to only 10% of the value of (h (α) rms ) 2 .Evaluating Eq. ( 8) requires the 2D or isotropic power-spectral density of the surface topography, while only the 1D PSD is known.Following Refs., 2,29 we converted the 1D PSD C 1D to the isotropic 2D PSD using the approximation C iso (q) π q C 1D (q).The data used in this figure is available online in Ref. 40 (b) Position of the perimeter in the contact between a rubber sphere and a rough surface during approach.The perimeters on the left side are extracted from the experiment on NCD shown in Synthesis of PDMS hemispheres We synthesized PDMS hemispheres of 0.7 MPa Young's modulus by hydrosilylation addition reaction.Vinyl-terminated PDMS V-41 (weight-averaged Molar mass M w = 62, 700 g/mol) as monomer, tetrakis-dimethylsiloxysilane as tetra-functional cross-linker and platinum carbonyl cyclo-vinyl methyl siloxane as catalyst were procured from Gelest Inc. Monomer and cross-linker were first mixed in a molar ratio of 4.4 in an aluminum pan.The catalyst was added as 0.1 weight percent of the total mixture, and finally the batch was degassed in a vacuum chamber for 5 minutes.Hemispherical lenses were cast on fluorinated glass dishes using a needle and a syringe, and cured at 60°C for three days.Since the PDMS mixture has a higher surface energy than the fluorinated surface, the drops maintain a contact angle on the surface, giving a shape of a hemispherical lens.We extracted the radius of curvature R = 1.25 mm from a profile image of the lens. After curing reaction, the lenses were transferred to cellulose extraction thimble for Soxhlet extraction where toluene refluxes at 130°C for 48 hrs.PDMS lenses were again transferred to a fluorinated dish and dried in air for 12 h.Finally, the lenses were vacuum dried at 60°C for 16 h and then used for experiments.The radius of curvature was measured by fitting a 3-point circle to the image obtained using an optical microscope (Olympus). We determined Young's modulus E = 0.7 MPa by fiting the JKR theory to an indentation experiment against a flat silicon wafer covered with octadecyltrichlorosilane (OTS). This experiment also shows that in the absence of surface roughness, the work of adhesion hysteresis is below 10 mJ/m 2 , a value significantly smaller than the hysteresis measured against NCD ( 80 mJ/m 2 ). Indentation experiment We measured the force and area during approach and retraction of a PDMS hemisphere against a rough diamond film using the setup of Dalvi et al. 2 The lens and the substrate were approached at a constant rate of 60 nm/s until a repulsive force of 1 mN and then retracted with the same rate.The PDMS hemisphere is transparent, allowing simultaneous measurement of the force and of the contact area, Fig. 1b.The video recording, provided in the Suppl.Mat.SV1, has a frame interval of 0.3s, but Fig. 
5c shows values for the force and contact radius at intervals of ≈ 30 s.

Extraction of contact line from video

We extracted the perimeter from each time frame of the video of the contact area. The contact area appears as a bright region in the video, and we defined the contact perimeter as a contour line of fixed gray level. At the length scale of a few pixels, the position of the line is affected by noise in the image. To reduce the effect of noise on the position of the line, we subtracted the image of the contact area at maximum penetration and subsequently applied a spatial Gaussian filter of variance 2 pixels. The lines shown in Fig. 5b therefore only reflect the position of the perimeter on coarse scales. Supplementary Material video SV2 shows that these lines match the shape of the contact area at large scales and follow the same intermittent motion. The original video is available in the Suppl. Mat. SV1.

Our goal is to model the contact of a rough sphere on a deformable elastic flat (Fig. S-1a). The contact perimeter of the adhesive contact between a smooth sphere and a flat can be regarded as a circular crack (Fig. S-1b). This is the basis of the Johnson, Kendall and Roberts (JKR) model for adhesion. 13 JKR derived an expression for the elastic energy release rate G_JKR for this spherical geometry, and balanced it with the intrinsic work of adhesion, G_JKR = w_int. Here, we extend this result to rough spheres, where the crack shape deviates from circularity. Surface roughness perturbs the shape of the crack in the surface normal direction (Figs. 4 and S-1c). This perturbs the local balance of energy, leading to an additional deviation of the crack shape in the direction parallel to the surface (Fig. S-1d). The energy release rate at a point on the perimeter then decomposes as G([h], [a]; b, θ) = G_JKR(b, a) + (e_el + G_⊥([h]; a(θ), θ)) + G_∥([a], θ) (Fig. S-1), where e_el is the elastic energy required to fully conform to the surface roughness and the square brackets indicate a functional dependency.

The effect of the in-plane deflection on the elastic energy G_∥ was derived by Gao and Rice 5 and later extended by us to spheres. 6 In our simulations, the equilibrium condition determines the contact radius with O(h²) errors in the strength of the disorder. Its left-hand side represents the driving force to increase the contact radius, which fluctuates according to the surface roughness, while its right-hand side represents the elastic response of the line, which only depends on the spherical geometry and the material properties. The numerical implementation follows Refs. 6,42 and is summarized in supplementary material S-II. We validate our equations by comparing crack-front simulations to boundary-element-method simulations in supplementary material S-III. 42

A. Axisymmetric contact: The JKR model

We consider the contact of a sphere (to be exact, a paraboloid) adhering to an elastic half-space at a fixed rigid body penetration b (Fig. S-1a). This case can be mapped to the contact of two spheres with the same composite radius R and contact modulus E*. 35 When only one half-space deforms, E* = E/(1 − ν²), where E is Young's modulus and ν is Poisson's ratio. Fracture mechanics typically considers the contact of two elastic half-spaces, where E* = E/[2(1 − ν²)]. We assume the contact is frictionless and consider only vertical displacements of the half-space. The equilibrium radius and force for a perfect sphere against the axisymmetric work of adhesion heterogeneity w_loc(a) are given by the JKR theory.
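A minimal sketch of the perimeter-extraction procedure described above (background subtraction, Gaussian smoothing, iso-gray contour), using standard scipy/scikit-image calls; the function name, the normalization, and the contour level are assumptions, not the authors' actual processing script.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import measure

def contact_perimeter(frame, reference, sigma=np.sqrt(2.0), level=0.5):
    """Extract the contact perimeter from one grayscale video frame.
    reference: image at maximum penetration (subtracted to remove the static background).
    sigma: Gaussian smoothing width in pixels (variance of 2 pixels, as quoted above).
    Returns a list of (row, col) contour lines at a fixed gray level."""
    diff = frame.astype(float) - reference.astype(float)
    smooth = gaussian_filter(diff, sigma=sigma)
    smooth = (smooth - smooth.min()) / (np.ptp(smooth) + 1e-12)  # normalize to [0, 1]
    return measure.find_contours(smooth, level)
```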
JKR 13,18,19 described the adhesion of a paraboloid with radius R by superposing the displacement and the stress fields of the nonadhesive Hertzian contact 43 and the circular flat punch under tensile load. 44 The contact pressures p have a tensile singularity as the distance −ξ to the edge of the contact goes to 0, with the stress intensity factor K_JKR(b, a) = E*(a²/R − b)/√(πa). Here and below we use the subscript JKR to indicate the circular contact to a smooth sphere.

The energy release rate depends solely on the amplitude of this singularity, 45 G_JKR = K_JKR²/(2E*), and the equilibrium condition G_JKR(b, a) = w_loc(a) yields the contact radius. The normal force is given by F_JKR(b, a) = 2E*ab − (2/3)E*a³/R. At equilibrium, where G_JKR = w_loc, the relationship between force and contact area is given by F = (4E*/3R)a³ − √(8π w_loc E* a³). Once nondimensionalized using distinct vertical and lateral length units, the JKR contact is parameter free, 19,46,47 and we present our numerical results in the nondimensional units defined in Refs. 18,19. Specifically, lengths along the surface of the half-space (e.g., the contact radius) are normalized by (3πw_int R²/4E*)^(1/3), lengths in the vertical direction (e.g., displacements) by (9π²w_int²R/16E*²)^(1/3), and normal forces by πw_int R. The equations here are in dimensional form but can be nondimensionalized by substituting R = 1, w_int = 1/π and E* = 3/4.

B. Circular contact with surface roughness: Out-of-plane perturbation of the elastic energy release rate

We now determine the energy release rate at the perimeter of the contact with a rough sphere, but where the contact perimeter remains circular (Fig. S-1c). We denote the respective energy release rate by G•([h]; b, a, θ), where the brackets indicate a functional dependency on the height field h(x, y) that describes the roughness. Out-of-plane deflections of the surface of the solid make the elastic energy release rate G•([h]; b, a, θ) fluctuate along the contact perimeter; θ parameterizes the angle along the perimeter of the circular crack front. In the main text and in our simulations, we formally describe this perturbation of the energy release rate by the equivalent work of adhesion w_loc. In order to justify this mapping, we first discuss the true elastic energy release rate G• and show that the effects of the spherical geometry, the surface roughness, and the in-plane distortion of the crack front are decoupled.

The JKR contact is the superposition of the (adhesive) flat punch 44 and the Hertz solution. 43 For the rough sphere, we now additionally superpose the stresses and displacements needed to conform to the surface roughness. We do not need to determine the whole distribution of contact stresses, because the energy release rate only depends on the stress intensity factor at the contact edge via Irwin's relation, 45 G• = (K_JKR + K_⊥)²/(2E*), where K_⊥ captures the effect of roughness. K_⊥ can be thought of as the stress intensity factor in the conforming contact of a circular flat punch with roughness h at zero external load.
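A minimal numerical sketch of the JKR relations of Sec. A in the nondimensional units quoted above (R = 1, w_int = 1/π, E* = 3/4). The expressions follow the reconstruction given in this section; the bisection helper and its bracketing assume b ≥ 0 and are illustrative, not part of the paper's code.

```python
import numpy as np

R, W, ES = 1.0, 1 / np.pi, 0.75        # nondimensional JKR units: R = 1, w_int = 1/pi, E* = 3/4

def G_jkr(b, a):
    # Energy release rate of the circular JKR crack at fixed penetration b and radius a
    return ES * (a ** 2 / R - b) ** 2 / (2 * np.pi * a)

def force(b, a):
    # Normal force from the superposed Hertz and flat-punch fields
    return 2 * ES * a * b - 2 * ES * a ** 3 / (3 * R)

def equilibrium_radius(b, w=W, a_hi=10.0):
    # Outermost (stable) solution of G_JKR(b, a) = w by bisection, valid for b >= 0
    a_lo = np.sqrt(max(b, 0.0) * R) + 1e-9     # G_JKR vanishes here and grows outward
    for _ in range(200):
        a_mid = 0.5 * (a_lo + a_hi)
        if G_jkr(b, a_mid) < w:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)

a0 = equilibrium_radius(b=0.0)
print(a0, force(0.0, a0))   # ~1.39 and ~-1.33 (force in units of pi*w_int*R)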
Note that the stress intensity factors of the JKR solution and the influence of roughness can be superposed linearly, because in linear elasticity we can simply superpose stresses originating from different geometric contributions.The pressures needed to conform to the surface roughness in the infinite contact are, 16,49 p∞ ( We compute the stress intensity factor caused by surface roughness for a straight crack K⊥ using a superposition.(a) We consider a semi-infinite crack located at x, for which the positive x and ξ directions point towards the cracked area.The local coordinate system ξ, ζ is centered on the crack tip, so that ξ < 0 corresponds to the contact area.(b) For ξ > 0, the surface is free to move vertically and the pressure p = 0.For ξ < 0, the solid fully conforms to the surface roughness so that the displacements u are prescribed to be equal to the heights h.Note that the positive direction for displacements and heights, corresponding to roughness peaks, points into the elastic halfspace (downwards).In the contact area, surface roughness causes fluctuating contact pressures p(x, y) with stress intensity factor K⊥ (x, y).We compute K⊥ (x, y) by superposing the solutions of two elastic problems (c) and (d).(c) Displacements and pressures in an uncracked contact with the roughness h.The displacements u(x, y) = h(x, y) cause the pressure distribution p ∞ (x, y) (d) Semi-infinite crack with pressures applied on his crack faces.We apply the pressures −p ∞ so that the pressures cancel out on the crack faces when superposing to (c).The displacements are 0 in the contact area so that the contact condition u = h remains satisfied for ξ < 0 after superposition.Loading the crack faces at fixed displacements in the contact area causes the stress intensity factor.This stress intensity factor corresponds to K⊥ because there is no stress singularity in solution (c). where q = (q x , q y ) is the wavevector and the tilde denotes the Fourier transform, dx dy e −i(qxx+qyy) h(x, y). (S-10) The stress intensity factor at the edge of the contact results from the crack-face loading needed to cancel p ∞ outside the contact area, The quantity K⊥ is the stress intensity factor at position y along the tip of a crack advanced to position x (Fig. S-2a).The crack-face weight function 50 is the stress intensity factor at the origin of a semi-infinite crack caused by a unit point force at (ξ, ζ).Evaluating the convolution Eq. (S-11) for each position of the crack x yields a two-dimensional field of stress intensity factors, which can be most easily represented in terms of its Fourier modes, dq x dq y K⊥ (q x , q y )e i(qxx+qyy) .(S-14) Note that K⊥ has zero average (because of symmetry of the elastic surface response) and that the solids overlap where the stress intensity factor is negative.Our final result has no overlap provided that | K⊥ | < K JKR .Anderson and Rice 41 derived an equation equivalent to Eq. (S-13) to understand the interaction of crack tips with dislocations. We now detail the steps leading from Eq. (S-11) to Eq. (S-13).The pressures needed to conform to the surface roughness in the infinite contact p ∞ are easier to express in Fourier space, see Eq. (S-9).Using the Heaviside step function Θ(ξ), we now define the weight function on the whole plane as This allows us to extend the integration bound on Eq. (S-11) to infinity.The convolution advantage of the expression for the straight crack Eq. 
(S-13) is that it links the statistics of the chemical heterogeneity to the statistics of the surface roughness in a simple way, while the expressions for the circular contact are more difficult to evaluate and to interpret. We now rewrite Eq. (S-8) as The term K ⊥ is stochastic, as it describes the influence of surface roughness, which is typically a random field.Since K ⊥ is linear in h, its spatial average K ⊥ a,θ vanishes.For a random field with a short correlation length, even partial averages over just the angle θ must vanish.This means the middle summand in Eq. (S-20) does not contribute to the average energy release rate.However, the variance K 2 ⊥ a,θ must be positive and nonzero.Parseval's theorem tells us that, where C 2D (q x , q y ) = (L x L y ) −1 | h(q x , q y )| 2 is the power spectral density of the heights 29 and L x , L y are the period of the system in the respective direction.Note that while we consider the limit of an infinite system size L x , L y → ∞, C 2D remains finite.The variance gives the elastic energy e el for fully conformal contact.The average of Eq. (S-20) then becomes This equation is equivalent to a classic result by Persson and Tosatti. 16They approximated equilibrium by G • a,θ = w int .Formally, this can be described by the equilibrium of a smooth sphere with the uniform equivalent work of adhesion w loc = w int − e el , where This approximation only works in the adiabatic limit.Fluctuations become crucial when they are able to pin the crack front and trigger instabilities. We now show how to generalize Persson and Tosatti's result to describe local fluctuations. This means we need to consider the effect of the second term in Eq. (S-20), that disappears in the average but represents the leading-order effect of roughness on the fluctuations of G • .G ⊥ depends on the geometry and position of the indenter via K JKR . This coupling between macroscopic boundary conditions and the microscopic disorder is a second-order effect of the roughness, which we can neglect because our final equilibrium equation determines the crack shape with first-order accuracy only.To first order in h, we approximate This first-order approximation allows us to describe the effect of surface roughness by the equivalent quenched disorder in work of adhesion The equivalent work of adhesion Eq. (S-25) contains only the essential leading-order contributions of the roughness and is independent of the macroscopic geometry, so that our results generalizes to other adhesion setups where our approximations are valid. Our mapping to an equivalent work of adhesion establishes a link to the pinning of elastic lines by quenched disorder.Theoretical work on the pinning of elastic lines [8][9][10]28 allow us to link the hysteresis in apparent adhesion to the root-mean-square (rms) fluctuations of w loc . Inserting Eq. 
(S-13) into Eq.(S-24) and (S-25) yields The quantity h rms is the rms half-derivative (or quarter fractional Laplacian) given by The green solid represent a small section of the circular reference configuration with constant radius a(θ P ) that we perturb by δa(θ Q , θ P ) = a(θ Q ) − a(θ P ) (red).Advancing the contact area brings the crack faces closer together even in front of the point P that we hold fixed, because of the nonlocal interaction of the surface displacements.(b) At the crack tip, the displacements u(ξ) Ξ √ ξ with displacement intensity factor Ξ, so that closing the crack faces requires displacements δu(θ The length Ξ 2 is the out-of-plane diameter at the crack tip (red circle) and corresponds to the elastic energy release via G ∝ E Ξ 2 .At equilibrium, this diameter is proportional to the elastoadhesive length a = w int /E .(c) The diameter of the crack tip decreases by δ(Ξ 2 ) = 2ΞΞ as the crack faces come together at the point P. crack tip.In valleys, the solid needs to stretch even more, increasing the elastic energy and decreasing the equivalent adhesion, while on roughness peaks, the equivalent adhesion increases because the solid needs to stretch less than for a perfect sphere (see also Fig. 4 of the main text).The amplitude of these energy fluctuations are given by h (1/2) rms , a generalized measure of the sharpness of peaks sensitive to larger length scales than curvatures and slopes.For self-affine roughness, this parameter is dominated either by large scales like the rms height, or by small scales like slope and curvatures, depending on the Hurst exponent. 16 Non-circular contact: In-plane perturbation of the elastic energy release rate Above, we discussed the effect of out-of-plane perturbation on a perfectly circular contact. In reality, the contact shape will deviate from circularity.We now compute the energy release rate at point P on a nearly circular contact to a rough sphere, (S-28) rate, only the perturbation of crack-face displacements close to the crack tip matter.They are described by 5 Ξ ([h, a]; b, θ P ) =Ξ([h, a]; b, θ where Ξ • is the displacement intensity factor for the perfectly circular contact including roughness.The kernel of the integral was obtained from the ξ → 0 limit of the crack-face weight-function of a circular external crack by Gao and Rice. 5,7,59Equations (S-31) and (S-32) combined with the results from section S-I B yield the energy release rate for the nearly circular contact to a rough sphere. Equation (S-32) captures the dominating effect of the long-ranged elastic coupling of the surface displacements on the energy release rate.We illustrate this effect in We now show that within our first-order approximation, the reduction of G for convex a discussed in the previous paragraph is independent of the indenter geometry and stiffness, and corresponds to a generalized curvature, the half-fractional Laplacian (−∆ s ) 1/2 a(s), where s = a θ is an arclength along the contact perimeter.The principal value integral in Eq. 
(S-32) depends on roughness, indenter geometry and indenter position via Ξ (S-34) Here, the half-fractional Laplacian of the contact radius with respect to the arclength ds = a(θ P ) dθ P is defined by |n| ãn e inθ P , (S-35) where ãn are the coefficients of the Fourier series The wavelength of a Fourier mode is n = 2πa(θ)/|n|.The Fourier amplitude of (−∆ s ) 1/2 a(θ P ), ãn / n , is the slope of the Fourier mode, but unlike slopes, the maxima and minima of the fractional Laplacian are in phase with maxima and minima of a. Hence, (−∆ s ) 1/2 a can be interpreted as a generalized curvature scaling like a slope. Equations (S-33) and (S-34) yield the first-order perturbation of the energy release rate describing that the line penalizes deviations from circularity with a strength proportional to the equilibrium energy release rate w int and a generalized curvature.This means that for a fixed jump-depth δa = d, it is easier to deflect the line over a wider lateral section , δG = w int d/ , explaining why a row of several asperities can collectively pin the crack front while an individual asperity cannot. 24 S-II. NUMERICAL IMPLEMENTATION OF THE CRACK-FRONT MODEL Our numerical simulations use the algorithm by Rosso and Krauth 42 to solve for the equilibrium configurations (metastable states) visited by the crack front as we pull the sphere in and out of the contact.We discretize the crack front in N collocation points at equally spaced angles θ following Ref. 6 The surface roughness h(x, y) is a Gaussian random field, where the height spectrum h(q x , q y ) has uncorrelated phases and random amplitudes scaling according to the PSD, and defines the equivalent work of adhesion field via Eqs.(S-13) and (S-25).Equation (S-13) describes the stress intensity factor for a straight crack that is rotated to be tangential to the contact circle.Note that the prefactor in Eq. (S-13) is a complex number that introduces a minor phase-shift between w loc and h in the direction normal to the front.While this phase-shift, and thereby the orientation of the crack, are important when comparing deterministically the crack-front model to the BEM, they have no effect on the powerspectrum of w loc and on the work of adhesion hysteresis.When the correlation length is much smaller than the contact radius, the heights decorrelate before the orientation of the crack changes significantly along the perimeter.For this reason, and because the rotation becomes computationally intractable on large grids, we generate the equivalent work of adhesion fields used in the main text and in Suppl.Sec.S-IV using a constant orientation of the crack. S-III. VALIDATION OF THE CRACK-FRONT MODEL AGAINST THE BOUND-ARY ELEMENT METHOD We compare the crack-front model to a boundary element method (BEM) simulation to validate our mapping from surface roughness to an equivalent work of adhesion heterogeneity. The implementation of the BEM and the parameters of the simulation are similar to Ref., 6 where we validated the crack-front model for spheres with heterogeneous work of adhesion. 
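A minimal sketch of how such a random topography can be generated, following the recipe described above (Gaussian amplitudes scaled by the PSD, uncorrelated random phases); the function name, all parameters, and the rescaling to a target rms height are illustrative conveniences, not the paper's prescription.

```python
import numpy as np

def self_affine_topography(n, dx, hurst, q_roll, h_rms_target, seed=0):
    """Periodic Gaussian random field whose PSD is flat below q_roll and
    self-affine (~ q^(-2-2H)) above, with uncorrelated random phases."""
    rng = np.random.default_rng(seed)
    qx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    q = np.sqrt(qx[None, :] ** 2 + qx[:, None] ** 2)
    C = np.ones_like(q)
    mask = q > q_roll
    C[mask] = (q[mask] / q_roll) ** (-2 - 2 * hurst)
    C[0, 0] = 0.0                                     # enforce zero mean
    spectrum = np.sqrt(C) * rng.standard_normal((n, n)) * np.exp(2j * np.pi * rng.random((n, n)))
    h = np.fft.ifft2(spectrum).real
    return h * h_rms_target / h.std()                 # rescale to the desired rms height

h = self_affine_topography(n=512, dx=4e-9, hurst=0.8, q_roll=2 * np.pi / 2e-6, h_rms_target=20e-9)
```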
In the BEM simulation we perform here, the sphere is rough and the work of adhesion is uniform.The surfaces interact with a cubic cohesive law with a hard-wall repulsion.[62][63][64] The tensile pressures of the contact mechanics simulation during approach are shown in green, so that the perimeter of the contact is indicated by the darkest green pixels.The dashed lines represent the contact perimeter calculated with the crack-front model during approach (pink) and retraction (purple).The BEM simulation was discretized on a 1024 × 1024 grid with pixel size pix = 0.005.The roughness is a random Gaussian field with a flat power spectrum at wavelengths above the cutoff wavelength λ r = 0.2 and 0 below.The interaction is a cubic polynomial with a cutoff distance g c = 0.24, corresponding to a cohesive zone size coz = (π/36)g 2 c / a 0.012.In both simulations, we increased the penetration b in steps of 0.01 until the maximum penetration b max = 1 was reached and then decreased it until pull off.The results are nondimensionalized following the conventions of Refs. 18,1903 and a power spectrum that is flat for wavelengths above the correlation length λ r = 0.2 and 0 below.The force-penetration curves computed with the BEM and the crack-front model nearly overlap and contact perimeters agree well, confirming that the contact of rough spheres is equivalent to the pinning of a crack by the work of adhesion heterogeneity w loc mapped using equation (S-25).Note that in the BEM, the jump into contact instability occurs too early because of the finite interaction range.6,[65][66][67] This particular event converges slowly with interaction range, while the remainder of the force-penetration curve, including depinning instabilities, is well converged.Other discrepancies in the force-penetration curves are due to the linearization in the crack-front model.using k ≈ 3 that we fitted to the results.For e el /w int 0.01, the work of adhesion hysteresis in our numerical simulations (symbols) overlaps with the theoretically predicted scaling (dashed line) and is independent of the shape of the power spectrum.Below a critical value of e el , the hysteresis disapears because of the finite size of the contact.9,10,68 This onset of hysteresis depends on the shape of the power-spectral density: for roughness with a short correlation length (purple triangles), the contact process starts to dissipate energy at smaller e el than for a longer correlation length (pink crosses). The scaling of the hysteresis with w 2 rms was theoretically predicted and numerically verified on random fields with short-ranged correlation, 8,10,28 similar to our roughness with flat PSD represented by the green squares.Here we consider isotropic self-affine roughness leading to work of adhesion fields with long-ranged power-law correlations.Démery et al. 28 theoretically predicted that for isotropic fields, the relationship between hysteresis and w rms remains unaffected by these power-law correlations.They derived this result by analytically solving a small disorder expansion of the equation of motion of the elastic line.Our numerical simulations further confirm that Eq. (S-38) remains valid for isotropic self-affine roughness. FIG. 1 . 
FIG.1.Phenomenology of adhesive contact.(a) Many contacts can be described as spheres making contact with a flat surface.For soft materials, microscopic interactions are strong enough that the solids deform significantly near the contact edge.(b) The contact forms a circle for contacting spheres, and its radius a can be measured from in-situ optical images of the contact area.(c) Most natural and technical surfaces are rough so that the solid needs to elastically deform to come into conforming contact.(d) The contact radius is larger and the normal force is more adhesive (negative) during retraction than during approach.The pull-off force is the most negative force on these curves. shows w loc (a) alongside G JKR (b, a) for a fixed displacement b.Because of the spatial variations of w loc , there are multiple solutions to Eq. (2) indicated by the labels A and B. Moving into contact from the solution denoted by A leads to an instability where the solution A disappears, at which the contact radius jumps to the next ring of w loc (a).This samples the lower values of w loc shown by the green line in Fig. 2b.Conversely, moving out of contact progresses along a different path that samples the higher values of w loc (a), shown by the red line.The combination of fluctuations in w loc and the elastic restoring force G JKR acts like a ratchet resisting the growing and shrinking of the contact area and leads to a stick-slip motion of the contact line. FIG. 2 . FIG. 2. Simplified axisymmetric contact demonstrating the physical origin of adhesion hysteresis.The indenter is a perfect sphere with axisymmetric heterogeneity in local adhesion w loc (a).(a) Cross-section of the contact at rigid body penetration b = 0 (top) and top view of the axisymmetric work of adhesion heterogeneity w loc (a) (bottom).The blue color indicates regions of high adhesion.(b) Elastic energy release rates in an approach-retraction cycle for a sinusoidal work of adhesion w loc (a) with wavelength d = 0.36 (gray line).The black line shows the elastic energy release rate G JKR (b, a) as a function of contact radius for fixed rigid body penetration b = 0. Fluctuations of w loc (a) lead to several metastable states A, B at fixed b.During approach, the contact perimeter is pinned in metastable states with low adhesion (green curve), while during retraction the contact perimeter is pinned at higher radii by adhesion peaks (red curve).Arrows indicate elastic instabilities where the contact radius jumps between metastable states.(c) Energy release rates in an approach-retraction cycle for a work of adhesion heterogeneity with smaller wavelength d = 0.05.Note that the slope of G JKR appears to be flatter than in panel (b) because we show a smaller range of contact radii.For short wavelengths, the works of adhesion sampled during approach (light green curve) and retraction (light red curve) stay close to the constant values w appr and w retr .(d) The contact radius and the normal force during an approach-retraction cycle for wavelength d = 0.36 (darker colors) and d = 0.05 (lighter colors).The dashed lines are the prediction by the JKR theory using w retr and w appr for the work of adhesion.The solid black line corresponds to increasing energy release rates at fixed rigid body penetration b = 0. Energy release rates are displayed in units of the average work of adhesion and lengths and forces have been nondimensionalized following the conventions of Refs.18,19 as described in the Supplemental Material. FIG. 3 . 
FIG.3.Crack-front pinning by two-dimensional random heterogeneity.(a) Evolution of the contact line during retraction in a crack-front simulation on 2D random work of adhesion field.Each colored patch corresponds to an elastic instability during which the perimeter jumps between two pinned configurations (dark lines), and the color scale represents the energy dissipated during each instability.The larkin length corresponds to the smallest extent of these jumps along the perimeter, and increases for weaker heterogeneity or for a stiffer line.The work of adhesion heterogeneity corresponds to a random roughness that has a flat power spectral density with shortwavelength cutoff λ s = 0.07.(b) Contact radius as a function of the normal force in the simulation for the stronger pinning field shown in panel (a).The elastic instabilities correspond to sudden jumps in the contact area and in the normal force.The solid black line corresponds to increasing energy release rates at fixed rigid body penetration b = 0, and the points A and B show that the contact radius is higher during retraction than during approach.The red arrows show the jump-in and jump-out of contact instabilities.(c) Contact radius as a function of the normal force in a simulation on a random chemical heterogeneity with smaller feature size ≈ 0.01 and w rms /c ≈ 0.45.The force-radius curve is smooth because the random field has small features that trigger a large number of instabilities.The dashed lines are JKR curves with work of adhesion w appr and w retr predicted by our theory Eq. (6).In this simulation, w loc corresponds to a self-affine randomly rough topography with an elastic energy for fully conformal contact e el /w int = 0.05 and power spectrum shown by the blue circles in the inset of Suppl.Fig.S-5.(d) Contact lines at rigid body penetration b = 0 on the random work of adhesion heterogeneity shown by the blue colormap.Floppy lines are pinned at higher contact radii during retraction (purple line) than during retraction (green line) because they meander predominantly between regions of low adhesion (white patches) during approach, and between regions of high adhesion (dark blue patches) during retraction.In the limit of a rigid line, the perimeter is perfectly circular (dashed line), randomly sampling as many regions of low and high adhesion.Units are nondimensionalized following the conventions of Refs.18,19 as described in the Supplemental Material. FIG. 4 . FIG. 4. Mapping topographic roughness to equivalent chemical heterogeneity.The contact of a rough sphere (a) is equivalent to the contact of a sphere with a work of adhesion heterogeneity w loc (b).The solid is stretched at the crack tip and surface roughness perturbs this elastic deformation.The associated perturbation of the elastic energy can equivalently be described by fluctuations of the work of adhesion. Fig Fig.5bare available online.40Details on the film growth and the multiscale topography characterization are provided in Refs.11,12 Fig. 
1 , and the right side shows equilibrium positions of the perimeter in a crack perturbation simulation (see Supplementary material I and II) on random roughness similar to NCD.The contact perimeter is pinned where the black lines are close to each other, while regions with low density of lines indicates where the contact perimeter accelerates during an instability.The simulation predicts instabilities of various sizes, reaching a lateral extent up to several tens of µm.In the experiment, only the largest instabilities and the largest features of the contact line are visible because of the limited resolution of the camera and because we removed image noise using a spatially averaging filter.Details on the extraction of the position of the line from the video are described in the Methods section and the original video recording is provided in the Supplementary Video SV1.The positions of the perimeter are shown from jump into contact until the force reaches 0.64 mN.(c) Contact radius and normal force during approach and retraction of the experiment (diamonds) and simulation (continuous line) shown in panel (b).We extracted the intrinsic work of adhesion w int = 63 mJ/m 2 used in the simulation by fitting the work of approach.The sphere has Young's modulus E = 0.7 MPa and radius R = 1.25 mm.More details on the experimental setup are provided in Methods and in Ref. 2 Supplementary Material Fig. S-6 shows that the power-spectral density of the synthetic random roughness used in the simulation is close to the power-spectral density of NCD at the length scales that dominate h (1/2) rms . Figure S- 1 Figure S-1 illustrates this decomposition in terms of the energy release rate G.The surface roughness h locally perturbs the elastic energy by G ⊥ (Fig. S-1c) 41 and the perimeter distorts within the plane to satisfy equilibrium with the uniform work of adhesion w int (Fig. S-1d).As FIG. S-1.We consider the contact of a sphere of radius R with roughness h(x, y) superposed to it.(a) Because of surface roughness, the contact perimeter is no longer circular.We describe it by the contact radius a(θ), the planar distance between the tip of the sphere and the perimeter of the contact.The energy release rate G at the point P along the crack is decomposed into three contributions, G = G JKR + (e el + G ⊥ ) + G , illustrated in panels (b) to (d).(b) The energy release rate G JKR (b, a) for the smooth contact is given by the theory of Johnson, Kendall and Roberts.13(c)Surface roughness leads to out-of-plane displacements of the contact perimeter.This increases the average energy release rate by e el and leads to additional local fluctuations G ⊥ ([h]; a(θ P ), θ).Here, e el is the elastic energy needed to fully conform to surface roughness.(d) The in-plane deflection of the perimeter from circularity leads to the additional contribution G ([a], θ). 13 FIG. 
S-1 (caption above).

1. Stress intensity factor caused by roughness at the tip of a semi-infinite crack

We compute K_⊥ approximately by treating the contact as a semi-infinite crack (Fig. S-2a,b), i.e., the roughness features are small compared to the contact radius. We describe the semi-infinite crack in the coordinate system ξ, ζ, where ξ points in the direction normal to the crack front, with ξ < 0 in the contacting area, and ζ points parallel to it. This is essentially a locally rotated coordinate system at the angle θ on the crack, as shown in Fig. S-1c. The semi-infinite crack hence represents a small subsection of the circular perimeter centered at ξ = 0 and ζ = 0. We compute the stress intensity factor by a classic superposition 48 (Sec. 2.6.4, Full Stress Field for Mode-I Crack in an Infinite Plate), where we first compute the pressures needed to make the surface conform to the roughness in the absence of a crack (Fig. S-2c) and subsequently cancel out these pressures on the crack faces (ξ > 0) (Fig. S-2d). Loading the crack faces while keeping the displacements fixed in the contact area (ξ < 0) leads to the stress intensity factor K̄_⊥(ζ). The bar over K̄_⊥ indicates that the result is valid for a straight crack. FIG.
S-3.Effect of in-plane perturbations of the crack front on the energy release rate, G .(a)The green solid represent a small section of the circular reference configuration with constant radius a(θ P ) that we perturb by δa(θ Q , θ P ) = a(θ Q ) − a(θ P ) (red).Advancing the contact area brings the crack faces closer together even in front of the point P that we hold fixed, because of the nonlocal interaction of the surface displacements.(b) At the crack tip, the displacements u(ξ) Ξ √ ξ with displacement intensity factor Ξ, so that closing the crack faces requires displacements δu(θQ , ξ) = Ξ(θ Q ) √ ξ.The length Ξ 2 is the out-of-plane diameter at the crack tip (red circle) and corresponds to the elastic energy release via G ∝ E Ξ 2 .At equilibrium, this diameter is proportional to the elastoadhesive length a = w int /E .(c) The diameter of the crack tip decreases by δ(Ξ 2 ) = 2ΞΞ as the crack faces come together at the point P. Fig. S- 3 , where we hold a(θ P ) fixed and advance the contact area in the neighborhood, corresponding to a locally convex perturbation of the contact perimeter (Fig.S-3a).Closing the adhesive neck over the surface element dθ a(θQ )δa(θ Q ) requires the displacement δu(θ Q , ξ) = Ξ(θ Q ) √ ξ(Fig.S-3b), which perturbs the whole surface of the solid with amplitudes decaying with distance as || r P − r Q || −2 .We hold the crack front locally pinned in θ P , yet this nonlocal interaction along the crack front brings the crack faces together (Fig. S-3c) and thereby reduces the energy release rate G(θ P ). Figure S- 4 Figure S-4 shows a BEM and a crack front simulation on random roughness with e el /w int S -IV.HYSTERESIS ON RANDOM ROUGHNESS: CRACK-FRONT SIMULA-TIONSWe verify our theoretical prediction for the apparent work of adhesion (Main Text Eq. (6))w retr appr = w int − e el ± ke el , (S-38)using crack-front simulations on self-affine roughness with varying power spectra, and extract the numerical factor k ≈ 3 from these results.We show in Suppl.Sec.I B,C that self-affine surface roughness maps to an equivalent work of adhesion field with power-law correlation via the integral transform Eq. (S-13).The variance of this work of adhesion heterogeneity w 2 rms = 4e el w int , with e el the elastic energy required to conform to the surface roughness.
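A minimal sketch evaluating e_el and the work-of-adhesion fluctuation w_rms for a discretized periodic height map via Parseval's theorem, as used in this section; the E*/4 prefactor follows the reconstruction given earlier and the discrete-Fourier normalization is an assumption of this sketch, not code from the paper.

```python
import numpy as np

def e_el_and_wrms(h, dx, E_star, w_int):
    """Elastic energy per area for full conformal contact and rms fluctuation of the
    equivalent work of adhesion, from a periodic height map h(x, y):
    e_el ~ (E*/4) * sum_q q |h_q|^2 / A^2  (continuous-FT convention, h_q = FFT(h)*dx^2),
    w_rms^2 = 4 * e_el * w_int."""
    ny, nx = h.shape
    area = nx * ny * dx * dx
    qx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    qy = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    q = np.sqrt(qx[None, :] ** 2 + qy[:, None] ** 2)
    h_q = np.fft.fft2(h) * dx * dx
    e_el = E_star / 4 * np.sum(q * np.abs(h_q) ** 2) / area ** 2
    return e_el, 2 * np.sqrt(e_el * w_int)
```

Together with Eq. (S-38) and k ≈ 3, such a height-map evaluation provides the same approach/retraction estimate as the PSD integral sketched in the main text.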
Strong Supernova 1987A Constraints on Bosons Decaying to Neutrinos : Majoron-like bosons would emerge from a supernova (SN) core by neutrino coalescence of the form $\nu\nu\to\phi$ and $\bar\nu\bar\nu\to\phi$ with 100 MeV-range energies. Subsequent decays to (anti)neutrinos of all flavors provide a flux component with energies much larger than the usual flux from the "neutrino sphere." The absence of 100 MeV-range events in the Kamiokande-II and Irvine-Michigan-Brookhaven signal of SN 1987A implies that less than 1% of the total energy was thus emitted and provides the strongest constraint on the Majoron-neutrino coupling of $g\lesssim 10^{-9}\,{\rm MeV}/m_\phi$ for $100~{\rm eV}\lesssim m_\phi\lesssim 100~{\rm MeV}$. It is straightforward to extend our new argument to other hypothetical feebly interacting particles.

If the FIPs interact so strongly that they are trapped themselves or decay before leaving the SN, they contribute to energy transfer [19] and may strongly affect overall SN physics and the explosion mechanism. A class of low-explosion-energy SNe provides particularly strong constraints on such scenarios [20]. FIPs on the trapping side of the SN-excluded regime are often constrained by other arguments, although allowed gaps may remain, such as the historical "hadronic axion window" or more recently the "cosmic triangle" for axion-like particles, both meanwhile closed.

In other cases, FIP decays include active neutrinos. In the free-streaming limit, FIPs escape from the inner SN core and so their decays provide 100-MeV-range events, much larger than the usual neutrino burst of a few 10 MeV that emerges from the "neutrino sphere" at the edge of the SN core. The background of atmospheric muons has yet larger energies, and so the new signal would stick out in a future SN neutrino observation. This argument was first advanced in Ref. [7], and offers an intriguing future detection opportunity.

Our main point is that, by the same token, SN 1987A already provides restrictive limits because the legacy data do not sport any events with such intermediate energies. This constraint, which is available today without the need to wait for the next galactic SN, is far more restrictive than the traditional energy-loss argument.

We illustrate our new argument with the simple case of nonstandard or "secret" neutrino-neutrino interactions [4][5][6][7][8], mediated by a (pseudo)scalar φ (mass m_φ) that we call Majoron and take to interact with all flavors with the same strength g. We consider m_φ ≳ 100 eV so that neutrino masses and refractive matter potentials can be ignored. The lepton-number violating production channels νν → φ and ν̄ν̄ → φ and the corresponding decays yield the constraints previewed in Fig. 1.

FIG. 1. Constraints on the Majoron coupling in the m_φ-gm_φ plane from SN 1987A energy loss (green) and the absence of 100 MeV-range ("high-E") events (blue). The shaded range brackets the cold (upper curves) vs. hot (lower curves) SN models, i.e., the Garching muonic models SFHo-18.8 and LS220-s20.0 [29]. Above the dashed line, Majorons with a reference kinetic energy of 100 MeV decay before leaving the SN core. The "ceiling" of the energy-loss bound is probably outside this figure, but we are not confident about its exact location. The schematic big bang nucleosynthesis (BBN) bounds are taken from Fig. 1 of Ref. [30], based on the cosmic radiation density. Somewhat more restrictive limits may follow from the cosmic microwave background (CMB) (see text).
Majoron decay and production.—A universal ν-ν interaction by Majoron exchange is given by a pseudoscalar Yukawa coupling of strength g between φ and the two-component Majorana field ψ_ν [39], with g a real number. In the relativistic limit we refer to the Majorana helicity states as ν and ν̄ in the usual sense.

The decay into pairs of relativistic neutrinos requires equal helicities, implying the lepton-number violating channels φ → νν or ν̄ν̄. Each individual rate scales as g²m_φ and includes a symmetry factor of 1/2 for identical final-state particles. (We always use natural units with ℏ = c = k_B = 1.) The total rate requires a factor of 6 for six species [40]. For a relativistic Majoron, this rate is slower by the Lorentz factor m_φ/E_φ, implying that the laboratory decay rate depends only on the combination gm_φ.

The requirement that Majorons with E_φ = 100 MeV decay beyond the neutrino-sphere radius of 20 km thus implies gm_φ ≲ 10⁻⁷ MeV, shown as a dashed line in Fig. 1. On the other hand, the decay neutrinos should not be delayed by more than a few seconds, which limits how nonrelativistic the Majorons may be for E_φ = 100 MeV. The time-of-flight difference is much smaller for relativistic Majorons, so for the constraints shown in Fig. 1 the signals are indeed contemporaneous, although somewhat marginally for m_φ around 100 MeV.

The neutrino decay spectrum is flat between (E_φ − p_φ)/2 and (E_φ + p_φ)/2, where p_φ is the Majoron momentum. In a neutrino gas of one species α with occupation number f_α(E_ν), the spectral Majoron emission rate from ν_α ν_α coalescence then follows from the decay rate by detailed balance, weighted with the occupation numbers of the initial neutrinos. For local thermal equilibrium with temperature T and neutrino chemical potential μ_α, the corresponding Fermi-Dirac distribution is f_α(E_ν) = [e^((E_ν−μ_α)/T) + 1]⁻¹. The chemical potential enters with opposite sign depending on whether α denotes a ν or a ν̄. Notice that the lepton-number violation caused by the φ interaction implies μ_ν = 0 in true equilibrium. All Majorons decay close to the SN equally into all six neutrino species with a flat spectrum. Therefore, the effective single-species spectral neutrino emission rate carries a factor 2/6 relative to the Majoron emission rate: the first factor of 2 is for two neutrinos per decay, whereas the 1/6 appears because this is the rate into one of six species. The minimal E_φ required to produce a neutrino of energy E_ν is E_φ^min = E_ν + m_φ²/(4E_ν).

One-zone SN model.—For a first estimate we use a one-zone model of the collapsed SN core with a chemical potential μ_ν = 100 MeV for ν_e and vanishing for the other flavors, a volume (4π/3)R³ with R = 10 km for the emitting region, and a duration for substantial deleptonization of τ = 1 s [41]. After collapse, the SN core is cold (T ≲ 10 MeV) and heats up from the outside in as the material deleptonizes. Majoron emission is thus from the coalescence of ν_e ν_e alone, which we take as perfectly degenerate. (In contrast, novel particle emission usually becomes large only after the SN core has heated up, at around 1 s after collapse [24].)
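A minimal numeric sketch of the free-streaming condition above (decay length beyond the 20 km neutrino sphere). The rest-frame decay rate is taken as Γ ∝ g²m_φ with an assumed O(1/16π) prefactor, since the exact normalization depends on the Lagrangian convention; the result therefore only reproduces the quoted gm_φ ≲ 10⁻⁷ MeV scale up to that uncertainty.

```python
import numpy as np

HBAR_C_KM = 1.9733e-16                 # hbar*c in MeV*km

def decay_length_km(g, m_phi, E_phi=100.0, prefactor=1 / (16 * np.pi)):
    """Lab-frame decay length of phi -> (anti)neutrinos, summed over channels.
    Rest-frame rate taken as Gamma = prefactor * g^2 * m_phi (convention-dependent assumption)."""
    gamma_rest = prefactor * g ** 2 * m_phi             # MeV
    p_phi = np.sqrt(max(E_phi ** 2 - m_phi ** 2, 0.0))
    boost = p_phi / m_phi                                # gamma * beta
    return boost / gamma_rest * HBAR_C_KM                # km

m = 1.0                                                  # MeV, illustrative Majoron mass
for gm in [1e-7, 1e-8, 1e-9]:
    print(f"g*m_phi = {gm:.0e} MeV -> decay length ~ {decay_length_km(gm / m, m):.0e} km")
```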
For m_φ = 0 the integral in Eq. (3) is a "triangle function" that rises linearly to the value μ_ν at E_φ = μ_ν and then decreases linearly to zero at E_φ = 2μ_ν. The energy-loss rate per unit volume is then Q = (g²m_φ²/64π³) μ_ν³. Demanding that the corresponding Majoron luminosity stay below L_ν ≃ 2 × 10⁵² erg/s, as recommended by a simple recipe [2], implies gm_φ ≲ 4π √(3L_ν/R³μ_ν³) = 5.5 × 10⁻⁹ MeV. Likewise, the effective ν_α production rate per unit volume is Ṅ_α = (g²m_φ²/64π³) μ_ν²/3 and therefore the total emitted number is N_α = Ṅ_α (4π/3)R³τ. The fluence at Earth is N_α/(4πd_SN²), where d_SN = 49.6 kpc is the distance to SN 1987A [66]. The largest detector was IMB with a fiducial mass of 6.8 kton [15] and thus N_p = 4.5 × 10³² fiducial protons. The detection cross section is very roughly σ ≃ σ̂E_ν² with σ̂ ≃ 10⁻⁴³ cm²/MeV² and ⟨E_ν²⟩ = 7μ_ν²/18. The total number of 100-MeV-range events therefore reaches unity already for gm_φ of order 10⁻⁹ MeV.

Numerical SN models.—This constraint is much more restrictive than from energy loss, motivating a detailed study. To this end we use the Garching 1D models SFHo-18.8 and LS220-s20.0 that were evolved with the Prometheus Vertex code with six-species neutrino transport [67]. These muonic models were recently also used for other particle constraints [24,29]. With different final neutron-star masses and different equations of state, these models were taken to span the extremes of a cold and a hot case, reaching internal T of around 40 vs. 60 MeV. On the other hand, the initial μ_νe profiles are much more similar, in both cases around 150 MeV in the center and a "lepton core" reaching up to around 10 km. The lepton number of the outer core layers is released within a few ms after core bounce in the form of the prompt ν_e burst. More details about these models are provided in the Supplemental Material [42].

SN neutrinos follow a quasi-thermal spectrum that can be represented by a Gamma distribution [68][69][70]. We thus write the time-integrated spectrum in the form dN/dE = (E_tot/E_0²) (1+α)^(1+α)/Γ(1+α) (E/E_0)^α e^(−(1+α)E/E_0), where E_tot is the total SN energy release, E_0 the average ν̄_e energy, α a parameter that would be 2 for a Maxwell-Boltzmann distribution, and Γ the Gamma function, not to be confused with a Gamma distribution.

We compute the Majoron emission from the coalescence rate of Eq. (3), which we correct for gravitational redshift through the tabulated lapse factors as described in Ref. [24]. In the cold model, we find a Majoron luminosity at 1 s post bounce of L_φ(1 s) = (gm_MeV)² 6.46 × 10⁶⁸ erg/s, where m_MeV = m_φ/MeV. According to the traditional SN 1987A cooling argument [2,24,71] we compare it with L_ν(1 s) = 4.40 × 10⁵² erg/s, leading to gm_φ < 0.83 × 10⁻⁸ MeV, shown in Fig. 1. For larger masses, we include a cutoff for those Majorons that are produced with insufficient energy to escape the gravitational potential, as explained in the Supplemental Material of Ref. [20]. The total emission is E_tot,φ = (gm_MeV)² 1.94 × 10⁶⁹ erg, and nominally E_tot,ν = E_tot,φ for gm_φ = 0.99 × 10⁻⁸ MeV, practically identical to the luminosity comparison at 1 s. For the hot model we find a larger L_φ(1 s) and correspondingly somewhat more restrictive bounds (Fig. 1).

The SN 1987A neutrinos were observed by Kamiokande-II (2.14 kton) [9][10][11] and IMB (6.8 kton) [14][15][16]. They observed events with energies up to 40 MeV via inverse beta decay ν̄_e + p → e⁺ + n, whereas elastic scattering on electrons is small (but dominates for solar ν_e detection). For our 100 MeV-range energies, charged-current (CC) reactions on oxygen of the form ν_e + ¹⁶O → e⁻ + X and ν̄_e + ¹⁶O → e⁺ + X become important as well.
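A minimal numeric sketch of the one-zone estimate chain above (emission rate, fluence at Earth, expected IMB events), using only the constants quoted in this section plus standard unit conversions; the function name and the single-event criterion are illustrative.

```python
import numpy as np

MEV_CM = 1.9733e-11          # hbar*c in MeV*cm
MEV_S = 6.582e-22            # hbar in MeV*s
KPC_CM = 3.086e21

mu, R_cm, tau_s = 100.0, 1.0e6, 1.0         # nu_e chemical potential (MeV), core radius (10 km), duration (s)
d_cm, N_p = 49.6 * KPC_CM, 4.5e32           # distance, fiducial protons in IMB
sigma_hat, E2_mean = 1e-43, 7 * mu ** 2 / 18  # cm^2/MeV^2, <E_nu^2> in MeV^2

def events_imb(g_mphi):
    """Expected 100-MeV-range IMB events for a given g*m_phi (in MeV), one-zone model."""
    ndot = g_mphi ** 2 / (64 * np.pi ** 3) * mu ** 2 / 3      # nu_alpha rate per volume, MeV^4
    V = 4 * np.pi / 3 * (R_cm / MEV_CM) ** 3                  # emitting volume, MeV^-3
    N_alpha = ndot * V * (tau_s / MEV_S)                      # total emitted number per species
    fluence = N_alpha / (4 * np.pi * d_cm ** 2)               # cm^-2
    return fluence * sigma_hat * E2_mean * N_p

print(np.sqrt(1.0 / events_imb(1.0)))   # g*m_phi for one expected event, ~1e-9 MeV
```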
For energies above the muon production threshold (m_μ = 105.7 MeV), the corresponding muonic CC processes also happen, especially of course for atmospheric neutrinos at yet larger energies. Muons quickly come to rest by ionization and produce "Michel e±" with a characteristic spectrum ending at 53 MeV, half the muon mass. Below the muon Cherenkov threshold of about 160 MeV, they are termed "invisible muons." (For more details about these processes see the Supplemental Material [42].)

Figure 2 shows the spectral fluence (time-integrated flux) for the standard SN neutrinos from the cold model, averaged over νe, νµ and ντ. The energy-integrated fluence is 5.10 × 10⁹ cm⁻² for one species. We also show the corresponding e± spectrum in the detector; the total event number is 5.07 per kton (for 100% detection efficiency). Next we show the ν spectrum from φ decay, which is the same in every species; the total fluence in one species is (gm_MeV)² 1.90 × 10²⁵ cm⁻². The e± event number per kton, in units of (gm_MeV)², is 3.62 × 10¹⁷ produced by ν_e and ν̄_e in CC reactions and 0.37 × 10¹⁷ from Michel e± (E ≲ 53 MeV) caused by invisible muons, for a total of 3.99 × 10¹⁷.

Above the muon Cherenkov threshold of 160 MeV, and assuming the same detection efficiency as for e±, visible µ± contribute another 11% to the total events. After each such event, the IMB detector would be blind by trigger dead time, so we should not include the subsequent Michel events. However, even for µ± themselves, the Cherenkov threshold behavior and the detection efficiency are not available. Therefore, we do not include visible muons, making our Majoron bounds more conservative by some 5%.

A single event with 100% detection efficiency in IMB thus requires gm_φ = 6.06 × 10⁻¹⁰ MeV. For the hot model, the corresponding result is gm_φ = 3.71 × 10⁻¹⁰ MeV, both smaller than the estimate from the one-zone model, where we underestimated the cross section. Once more, the exact SN model is not crucial and we essentially find the limits shown in Fig. 1.

Analysis of SN 1987A data.—We now turn to a detailed analysis of the Kamiokande II and IMB data. We summarize several details in the Supplemental Material [42] and here only remark that event information was recorded depending on a hardware trigger. In an off-line analysis, one searched for low-energy few-seconds event clusters. "Low energy" was defined in Kamiokande-II as less than 170 photo electrons in the inner detector or E_e ≲ 50 MeV [9][10][11], whereas IMB used maximally 100 PMTs firing or E_e ≲ 75 MeV [14][15][16]. However, as discussed in the Supplemental Material [42], we can conclude that no high-energy events were actually observed even above these thresholds during the SN 1987A burst.
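A minimal sketch of the quasi-thermal spectrum of Eq. (5) as reconstructed above, folded with a generic cross section to obtain an expected event number. The cross-section shape, the detection efficiency, and all example parameters are placeholders; only the functional form of the spectrum follows the text.

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import quad

def dN_dE(E, E_tot, E0, alpha):
    """Time-integrated quasi-thermal spectrum in the Gamma-distribution form of Eq. (5)."""
    norm = E_tot / E0 ** 2 * (1 + alpha) ** (1 + alpha) / Gamma(1 + alpha)
    return norm * (E / E0) ** alpha * np.exp(-(1 + alpha) * E / E0)

def expected_events(E_tot_erg, E0, alpha, n_targets, d_cm, sigma_of_E, eff=1.0):
    """Fold the spectrum with a cross section sigma(E) in cm^2 and a target number."""
    E_tot = E_tot_erg / 1.602e-6                     # erg -> MeV
    integrand = lambda E: dN_dE(E, E_tot, E0, alpha) * sigma_of_E(E)
    folded, _ = quad(integrand, 0.0, 50 * E0)
    return eff * n_targets * folded / (4 * np.pi * d_cm ** 2)

# illustrative IBD-like cross section and one-species parameters (placeholders)
sigma_ibd = lambda E: 1e-43 * max(E - 1.29, 0.0) ** 2
n_ev = expected_events(5e52, 12.0, 2.5, 4.5e32, 49.6 * 3.086e21, sigma_ibd)
```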
The events from φ decay overlap with the standard SN signal, so one should perform a maximum likelihood analysis with g and m_φ as fit parameters. However, the standard SN signal depends on the chosen SN model. For example, our cold (hot) model (using the average νe-νµντ spectrum) would have produced 9.12 (21.3) events in Kamiokande II with an average detected electron energy of 20.1 (22.6) MeV, to be compared with the actually observed 12 events with 14.7 MeV average energy. In IMB they would have produced 3.49 (12.5) events on average with 31.3 (34.4) MeV, to be compared with 8 events with 31.9 MeV average. Neither of these models fits the data well, and the Kamiokande II and IMB data are themselves in tension with each other, although in terms of the E_tot-E_0-α parameters one finds credible overlapping values [72,73].

We do not have a suite of SN models that would allow us to find the one that best fits the SN 1987A data. Instead we represent the signal in the form of Eq. (5) and use an unbinned likelihood for the energies of the events in each detector, as defined in the Supplemental Material [42]. First we verify that the maximum of the likelihood for both experiments is at g = 0, i.e., neither of them prefers the new signal. Next we marginalize the combined likelihood by maximizing it for each value of g and m_φ over E_0 and E_tot. This guarantees our constraints to be conservative, because for each choice of the Majoron parameters we choose the SN neutrino spectral shape as the one that maximizes the agreement with the data. We then follow the procedure outlined in Ref. [74] to set upper bounds on the Majoron coupling for each value of the Majoron mass; more details on our statistical procedure are given in the Supplemental Material [42]. We show the corresponding constraints, dominated by the IMB data, in Fig. 1.

Discussion and outlook.—We have considered FIPs that escape from the inner SN core and later decay into active neutrinos. Our main result is that the lack of 100-MeV-range events in the SN 1987A data provides surprisingly restrictive constraints. Specifically, the energy loss by νν → φ Majoron emission must be less than 1% of the total binding energy, much more restrictive than the usual SN 1987A cooling limit.

Moreover, our new bound depends mainly on emission during the first second and not on the sparse late-time events or the predicted cooling speed that depends, e.g., on PNS convection. Our result is also insensitive to a concern that the SN 1987A neutron star has not yet been found (see however [75,76]) and that the late events could have been caused by black-hole accretion [77]. (See however [29] for a rebuttal of this scenario.) Our limit implies that the impact on SN physics and the explosion mechanism is small. However, our discussion leaves open what happens for much stronger couplings when Majorons do not freely escape. The SN core could deleptonize already during infall, perhaps preventing a successful explosion. On the other hand, a thermal bounce may still occur [35,78]. If the interactions are yet stronger, neutrinos and Majorons form a viscous fluid that is more strongly coupled to itself than to the nuclear medium. This peculiar case was recently examined [8]; the SN 1987A signal may exclude a certain range of parameters beyond the upper edge of Fig. 1. For m_φ ≲ 1 MeV, the cosmic radiation density measured by BBN provides comparable bounds (Fig. 1 of Ref.
[30], see also Refs.[79][80][81]), and those from the CMB may be more restrictive, but the exact reach in mass and coupling strength was not directly provided.Having different systematic issues, the cosmological and SN 1987A arguments are nicely complementary for m φ < ∼ 1 MeV, whereas the SN 1987A sensitivity is unique for larger m φ . Our method can be applied to any class of FIPs decaying to neutrinos.Examples include heavy neutral leptons [82,83] and gauge bosons arising from new symmetries like U (1) Lµ−Lτ [84,85], which can be further constrained relative to the existing bounds from energy loss [86,87].Notice also that bosons coupling exclusively to neutrinos have different production rates if the coalescence process is lepton-number conserving (ν ν → φ) or violating (νν → φ) because in the PNS core, the neutrino and antineutrino distributions differ. At present it remains open if there exist allowed Majoron parameters somewhere in the trapping regime, a question left for future study.Couplings below our limit leave open the exciting possibility of a detection in the neutrino signal of a future galactic SN [7] that would reveal FIP emission from the inner SN core. Note Added.-Sinceour paper had appeared on arXiv, our new argument was used to constrain the heavy-lepton model of Ref. [88]. [71] Scaling the original axion bounds to the Majoron case with that simple recipe ignores that here the particle emission is largest directly after core bounce, whereas in the axion or similar cases, the core first has to heat up and the emission is largest perhaps around 1 s post bounce.Moreover, Majorons remove both energy and lepton number.We suspect that the impact on the SN 1987A neutrino signal would be larger than implied by scaling the axion case.A detailed analysis would require including Majoron losses in self-consistent SN models.In view of the much more restrictive counting-rate argument, this exercise is not needed and we can post-process existing models. Supplemental Material for the Paper Strong Supernova 1987A Constraints on Bosons Decaying to Neutrinos We summarize some details about the detection cross sections for SN neutrinos in a water Cherenkov detector used in our analysis, the historical SN 1987A observations, our statistical analysis, and the Garching SN models. A. Detection cross sections The primary channel for neutrino detection from SN 1987A was inverse beta decay (IBD) νe + p → e + + n on the hydrogen nuclei of the water molecules.Neglecting the recoil of the nucleus, the final positron has an energy E e = E ν − Q νep , with Q νep = 1.29 MeV, and it emits Cherenkov radiation visible in the detector.At typical SN energies, νµ and ντ are kinematically unable to interact via charged current (CC). Above about 70 MeV, neutrino interactions in a water Cherenkov detector start to be dominated by CC reactions on oxygen of the form ν e + 16 O → e − + X, where X is a final excited nuclear state dominated by 16 F * [43-45] and a similar reaction for antineutrinos, where the dominant final state is 16 N * .The final state e ± retains memory of the initial neutrino energy.Specifically we use E e − = E ν − Q νeO , with Q νeO = 15.4MeV, and the positron energy is The cross sections are shown in Fig. S1, where the one for IBD is taken from Ref. [46], the one for νµ p scattering from Ref. [47], the ones for νe O and ν e O from Ref. [44], and the ones for νµ O and ν µ O from Ref. [48]. 
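A minimal sketch of the detected-energy mapping used for the channels listed in this section; the Q-value for the ν̄_e + ¹⁶O channel is not quoted here, so it is left as a clearly named placeholder rather than a value from the paper.

```python
import numpy as np

Q_IBD = 1.29        # MeV, nu_e-bar + p -> e+ + n (quoted above)
Q_NUE_O = 15.4      # MeV, nu_e + 16O -> e- + X (quoted above)
Q_NUEBAR_O = None   # MeV, nu_e-bar + 16O -> e+ + X; not given in this text, set before use

def detected_energy(E_nu, Q):
    """Approximate detected e+/e- energy, neglecting nuclear recoil: E_e = E_nu - Q."""
    if Q is None:
        raise ValueError("Q-value not specified in the text; supply an assumed value.")
    return np.maximum(np.asarray(E_nu, float) - Q, 0.0)

print(detected_energy([20.0, 100.0], Q_IBD))     # IBD positron energies for two sample E_nu
print(detected_energy([100.0, 150.0], Q_NUE_O))  # electron energies from nu_e on oxygen
```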
In this low-energy range, muon and tau neutrinos can only interact with nucleons via neutral-current interactions.In the interaction, nuclei can be excited and promptly decay to photons, leading to a potentially observable signature [49].For a future Galactic SN, this signature is likely to be observed.However, due to the lower cross sections of the neutral-current scattering, this process played no role for SN 1987A and we will not consider it even for our 100-MeV-range neutrinos. At energies above the muon production threshold (m µ = 105.6MeV), the muon-flavored neutrinos from φ decay also contribute to the analogous CC rates.Due to large energy losses by ionization, these µ ± are stopped within a short length of the order of 1 m from their interaction vertex, and they finally decay at rest and produce a visible e ± .They follow the well-known Michel spectrum, behavior and detection efficiency.Leaving out this signal causes only a small and conservative error in the Majoron bounds (see main text). B. SN 1987A Neutrino Observations Supernova 1987A, in the Large Magellanic Cloud at a distance of 49.59 ± 0.09 stat ± 0.54 syst kpc from Earth [66], was discovered independently by Ian Shelton, Oscar Duhalde, and Albert Jones [50] on February 23, 1987, and later targeted by searches in the entire electromagnetic spectrum.The first evidence for optical brightening was found at 10:38 UT (Universal Time) on plates taken by McNaught.The first naked-eye visible (in the southern hemisphere) SN since the invention of the telescope, its observation is narrated in detail in a review by Koshiba [12].This was also the first SN explosion with a known progenitor star, Sanduleak −69 202, a blue supergiant catalogued by Nicholas Sanduleak in 1970 [51].At the time of the explosion, there were four running experiments that were big enough that they could have detected the gargantuan flux of neutrinos emitted in the collapse of a stellar core. The largest one was the Irvine-Michigan-Brookhaven (IMB) water Cherenkov detector, an experiment built to look for proton decay [52], that was located in the Morton-Thiokol salt mine (Fairport, Ohio, USA).It was equipped with 2048 8-inch photomultiplier tubes (PMTs) such that 6,800 tons of water (of a total of 8,000 tons) were within the PMT planes, taken as the fiducial volume for the SN 1987A search [15].A failure of a high-voltage power supply shortly before SN 1987A left a contiguous quarter of the PMTs off-line with a geometric effect on the trigger efficiency that was later calibrated.The detector was triggered when at least 20 PMTs fired in 50 ns, corresponding to an energy threshold of 15-25 MeV for showering particles [15].(A trigger of 25 PMTs is mentioned in Ref. [14]).At the relatively shallow depth of 1570 m water equivalent, the flux of atmospheric muons caused a trigger rate of 2.7 Hz. Muons are recognized by tracks entering the detector from the outside and of course coming mostly from above.The detector is dead for 35 ms after each trigger.The SN 1987A signal consisted of 8 events and in addition 15 muons were recorded [16], a total of 23 triggers, amounting to 23 × 35 ms = 0.8 s dead time, or 13% of the SN signal duration of 6 s.In Fig. S3 we show the geometrically averaged detection efficiency, including the 0.87 reduction by dead time. 
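A small bookkeeping sketch of the IMB dead-time correction quoted above (23 triggers, 35 ms dead time each, 6 s burst), reproducing the stated 0.87 live fraction; the variable names are illustrative.

```python
# Dead-time bookkeeping for the IMB burst window, using the numbers quoted above.
n_triggers = 8 + 15             # SN 1987A events plus recorded background muons
dead_per_trigger = 0.035        # s of dead time after each trigger
burst_duration = 6.0            # s, duration of the SN signal
dead_time = n_triggers * dead_per_trigger              # ~0.8 s
live_fraction = 1.0 - dead_time / burst_duration       # ~0.87
print(f"dead time {dead_time:.2f} s, live fraction {live_fraction:.2f}")
```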
Atmospheric neutrinos are recognized as contained events and occurred at a rate of around 2/day in the energy range 20-2000 MeV [14]. Our new neutrino signal in the 100 MeV range would look like low-energy atmospheric neutrinos.

The SN 1987A burst was found by searching the recorded data for low-energy, few-second event clusters, where "low energy" was defined as fewer than 100 PMTs firing, corresponding roughly to a 75 MeV energy cut. However, other than the 8 SN 1987A events and 15 muons, no other triggers occurred that would have been interpreted as a rare atmospheric neutrino.

The SN 1987A events must be due to IBD with a practically isotropic distribution of final-state e⁺. However, IMB found a conspicuous directional correlation, pointing away from SN 1987A, i.e., the events look "forward peaked." This effect is not explained by the detector's geometrical bias due to the 25% PMT failure. One idea held that the signal was caused not by neutrinos but by some new X⁰ bosons that scatter coherently on oxygen and thus generate the observed angular characteristic [54]. However, the required cross section is excluded by stellar cooling bounds from the reverse process [55]. No viable explanation other than a rare statistical fluctuation is available.

With a fiducial mass of 6.8 kton, IMB would have seen the largest number of 100 MeV-range events. At lower energies it suffered from a trigger efficiency of only 15% at 20 MeV, rising to 80% at 70 MeV. During the SN 1987A burst, no events besides the 15 background muons + 8 SN events = 23 triggers were observed.¹ We conclude that there were no unreported events above the low-energy criterion of 75 MeV (a numerical sketch of the statistics of such a null result follows below).

The second largest detector was the Kamiokande II water Cherenkov detector (Mozumi Mine, Kamioka section of Hida, Gifu Prefecture, Japan), with a fiducial mass of 2,140 metric tons for the SN 1987A search, where again the entire volume up to the PMTs was taken [9, 10]. This detector was built in 1983 to search for nucleon decay (Kamiokande = Kamioka Nucleon Decay Experiment) [56] and later upgraded to Kamiokande II to search for solar ν_e in the 10 MeV range. The photocathode coverage was increased and radioactive backgrounds were decreased to lower the threshold, and solar data were taken from the end of 1986. Despite its smaller mass, the low threshold made Kamiokande II competitive for the SN 1987A discovery (see Fig. S3 for the trigger efficiency), although for our 100-MeV-range events IMB is better suited.

At a greater depth of 2700 m w.e., the atmospheric muon trigger rate was 0.37 Hz; indeed, 4 muons were found in the 20 s interval preceding the SN 1987A burst and several after it, but none until just before the 12th event. Atmospheric neutrinos, in the form of fully contained events, show up once every few days. Low-energy radioactive backgrounds triggered at a rate of 0.23 Hz. The trigger dead time is less than 50 ns after an event.
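The statistical weight of such null results follows from Poisson counting: with zero observed events and negligible background, any signal expectation µ with e^(−µ) < 0.05, i.e., µ > 3.0, is excluded at 95% C.L. A minimal sketch of this standard limit (our illustration, not the paper's code):

```python
from scipy.optimize import brentq
from scipy.stats import poisson

def poisson_upper_limit(n_obs, cl=0.95):
    """Classical upper limit: largest mu with P(N <= n_obs | mu) >= 1 - cl."""
    return brentq(lambda mu: poisson.cdf(n_obs, mu) - (1.0 - cl), 1e-9, 1e3)

print(poisson_upper_limit(0))  # ~ 3.0 expected events at 95% C.L.
```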
To find the SN 1987A burst, the data recorded on magnetic tape were searched for low-energy event clusters, where here the definition was fewer than 170 PMTs firing (E_e ≲ 50 MeV). We show the burst in Fig. S2 as a function of time after the first event. The absolute timing is poorly known, probably to within ±15 s based on comparing the computer clock with a wristwatch, but a conservative uncertainty of ±1 min was officially stated. A power outage in the mine on February 26 prevented a recalibration of the computer clock [57]. The signal arrived at 4:35 pm on Monday, 23 February 1987, but this was a substitute holiday. On the working-day schedule, the magnetic tape would have been exchanged at 4:30 pm and the signal might have been missed.

The highest-energy events are also forward peaked, in analogy to IMB, while most of the events are isotropically distributed, as expected for IBD. There is a conspicuous gap of 7.3 s between events 9 and 10; it is, however, filled with IMB data and probably has nothing to do with SN 1987A. Very recently, one member of the Kamiokande collaboration speculated that the gap could have been caused by a fault of the magnetic tape drive. He noted that during that gap there are also no other events (low-energy background or atmospheric muons) and that the probability for such a long gap is very small [13].²

For our analysis, we are mainly interested in the high-energy events that Kamiokande would have seen during the SN 1987A burst. Contained events with 30 MeV < visible energy < 1.33 GeV would have gone into the atmospheric neutrino analysis, but none were found in the period around SN 1987A. For this analysis, the fiducial volume may have been as small as 780 tons (more than 2 m from the wall).³ We conclude that, conservatively, no event of our interest was observed in this volume.

The third experiment was the Baksan Scintillator Underground Telescope (BUST) under Mount Andyrchi in the North Caucasus at a depth of 850 m w.e., operated by the Institute for Nuclear Research (Moscow) [17, 18]. It started operation in June 1980 and is still running today, with SN 1987A the only SN neutrino burst observed in more than four decades. BUST consists of 3156 segments of 70 × 70 × 30 cm. A possible SN 1987A event was selected as one that triggers one and only one segment, with E_e ≲ 50 MeV. The fiducial inner part has a mass of 130 t, which was enlarged to 200 t for the SN 1987A analysis. Its burst was reported at 7:36:06.571 UT and thus about 30 s later than IMB. While the clock synchronization with UT is usually ±2 s, the clock was observed to have shifted forward by 54 s between February 17 and March 11 for unknown reasons. So the observed signal is probably contemporaneous with IMB and Kamiokande II. Because of its small size, BUST is least useful for us, and so we have not investigated how our 100 MeV-range events would have shown up there.

A fourth instrument was the Liquid Scintillation Detector (LSD), located in the gallery of the Mont Blanc tunnel between Italy and France [58, 59]. It was specifically built to search for a galactic SN burst at a typical assumed distance of 10 kpc. LSD used 72 liquid scintillator modules of 100 × 150 × 100 cm³, arranged in three horizontal layers for a total mass of 90 tons. Each module was equipped with three PMTs of 15 cm diameter, and a signal was recorded whenever a threefold coincidence occurred within 150 ns.
The LSD collaboration was the first to declare the (possible) discovery of SN neutrinos, based on the detection of 5 events above the 7 MeV threshold in an interval of 7 seconds, beginning at UT 2:52:36.79 and compatible with the core-collapse standard model at 50 kpc. This signal is almost five hours earlier than that of the other detectors, which observed nothing special at the LSD time, while LSD observed nothing special at the time of the others. While high-multiplicity events can be caused, e.g., by spallation of oxygen induced by primary muons, no similar event was found during the entire LSD operation, which ended with the devastating fire in the Mont Blanc tunnel on March 24, 1999.

The community has settled on the LSD event being a rare or unexplained fluctuation. A credible physical origin at SN 1987A is astrophysically hard to construct. Schaeffer, Declais, and Jullian computed that, assuming a SN origin for the events seen by LSD, the total energy emitted by SN 1987A would have been 3 × 10⁵⁴ erg, much larger than the value expected from standard core-collapse supernova theory [60].

C. Statistical Analysis

We perform our maximum likelihood analysis along the lines of similar previous studies [72, 73]. For the standard SN ν̄_e signal we assume a quasi-thermal distribution of the form of Eq. (5), described by the three parameters E_tot, E_0, and α. We compute the standard e⁺ signal from the IBD cross section discussed in Sec. A, and for the event spectrum in each detector we use the efficiencies discussed earlier, including the IMB dead-time factor of 0.87.

The SN 1987A data are not informative about α [73], so we do not try to fit it, but rather use values motivated by numerical SN models. In particular, we use α = 2.39 (2.07) for the cold (hot) model. The instantaneous neutrino spectra are pinched, i.e., their variance is smaller than that of a Maxwell-Boltzmann spectrum (α > 2), whereas time-integrated spectra are close to Maxwell-Boltzmann. The SN spectra depend somewhat on flavor, but the effect of flavor oscillations is not yet well understood; moreover, because of the LESA effect [61], the spectrum depends on the observer direction relative to the 3D structure of the SN explosion.

For each of the two experiments, we thus define an unbinned likelihood in which E_i are the observed energies and E_det is the energy reconstructed from the number of firing PMTs, drawn from a Poisson distribution as in Ref. [72]. An unimportant normalization constant has been removed, because we only deal with likelihood ratios.

In the event rate, we also include the new signal prediction that depends on the parameters g and m_φ; at small masses these appear in the combination g m_φ and thus collapse to essentially a single parameter. For Kamiokande II, we reduce the fiducial volume from 2140 tons to 780 tons, as discussed above. We only consider the final-state e± from CC reactions and from muon decay, but not the Cherenkov signal caused by muons above the Cherenkov threshold, as discussed in the main text. We keep α fixed at the predicted value for the cold and hot SN models. We then marginalize over E_0 and E_tot as explained in the main text. In this way, we obtain an effective two-dimensional likelihood. We then define a test statistic; its asymptotic distribution, under the assumption that Majorons exist, is a half-chi-squared distribution [74], which allows us to set the threshold for 95% C.L. exclusion at χ² = 2.7. With this procedure, we find the limit contours shown in Fig. 1.
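The limit-setting machinery can be sketched as follows; a toy Poisson counting term stands in for the paper's full unbinned likelihood (whose per-event terms, efficiencies, and marginalization over E_0 and E_tot are not reproduced here), and the asymptotic χ² = 2.7 threshold is used as in the text:

```python
import numpy as np
from scipy.optimize import minimize_scalar

N_OBS = 0  # toy input: no candidate events above the high-energy cut

def neg2_log_l(mu):
    """Toy -2 ln L for a Poisson count with expectation mu (no background)."""
    return 2.0 * (mu - (N_OBS * np.log(mu) if N_OBS > 0 else 0.0))

def q_mu(mu_test):
    """Profile-likelihood test statistic q(mu) = -2 ln[L(mu)/L(mu_hat)],
    with the physical boundary mu_hat >= 0 enforced."""
    best = minimize_scalar(neg2_log_l, bounds=(0.0, 50.0), method="bounded")
    return neg2_log_l(mu_test) - best.fun

# Half-chi-squared threshold for one-sided 95% C.L. exclusion:
mu_limit = next(mu for mu in np.arange(0.0, 20.0, 1e-3) if q_mu(mu) > 2.7)
print(f"toy exclusion: mu > {mu_limit:.2f} expected events")
# Note: the asymptotic threshold is only approximate at such low counts;
# the paper's actual procedure handles the full event information.
```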
D. Garching Supernova Models

In our numerical analysis we use the SN models SFHo-18.8 and LS220-s20.0 from the Garching group, evolved with the Prometheus Vertex code with six-species neutrino transport [67] in spherical symmetry. These "muonic models" were recently also used for other particle constraints [24, 29], where more details are described and radial profiles of various physical quantities are given for specific snapshots in time. PNS convection was taken into account by a mixing-length treatment. Explosions were triggered by hand a few 100 ms after bounce at the Fe/Si or Si/O composition interface of the progenitor star.

Following Ref. [29], we note that the SFHo equation of state is fully compatible with all current constraints from nuclear theory, experiment, and astrophysics, including pulsar mass measurements and the radius constraints deduced from gravitational-wave and Neutron Star Interior Composition Explorer measurements. For comparison, some of the Garching muonic models also use the traditional LS220 equation of state.

The model SFHo-18.8 [29] uses a progenitor star with mass 18.8 M_⊙ that reaches a final neutron-star baryonic mass of 1.351 M_⊙ and gravitational mass of 1.241 M_⊙, hence a gravitational binding energy of (1.351 − 1.241) M_⊙ = 0.110 M_⊙ = 1.98 × 10⁵³ erg (checked numerically below). It lies at the lower end of plausible neutron-star masses and released binding energies. It reaches a maximum core temperature near 40 MeV, the coldest of this suite of models. We thus refer to it as our "cold" model; it is taken to bracket the lower end of neutron-star mass and core temperature.

The "hot" model LS220-s20.0 reaches a maximum core temperature of around 60 MeV. It has a progenitor mass of 20.0 M_⊙ and reaches a neutron-star baryonic mass of 1.926 M_⊙, near the upper end of observed neutron-star masses. Its final gravitational mass is 1.707 M_⊙, so it releases 0.219 M_⊙ = 3.93 × 10⁵³ erg. This model is taken to bracket the upper end of both energy release and internal temperature.

In Figs. S4 and S5 we show several internal properties of these two models as functions of time and mass coordinate. The left panels show the temperature, and we see that after collapse the models are cold. They heat up at the edge of the inner core as they contract, with the maximum T and the largest extent of the hot region reached at around 1 s. Therefore, the emission rate of new particles would be largest around this time if the emission rate depends on temperature, as often happens in other extensions of the Standard Model, because it is the thermal energy of the medium constituents that is emitted.
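As an aside before turning to the chemical-potential dependence, the binding energies quoted above follow from simple arithmetic on the baryonic and gravitational masses, Δm c² with M_⊙c² ≈ 1.787 × 10⁵⁴ erg (small deviations from the quoted values are rounding):

```python
M_SUN_C2 = 1.989e33 * (2.998e10) ** 2  # erg; ~1.787e54

models = {"SFHo-18.8 (cold)": (1.351, 1.241),   # baryonic, gravitational M_sun
          "LS220-s20.0 (hot)": (1.926, 1.707)}
for name, (m_baryonic, m_grav) in models.items():
    print(f"{name}: E_bind = {(m_baryonic - m_grav) * M_SUN_C2:.2e} erg")
# -> ~1.97e53 and ~3.91e53 erg, vs. 1.98e53 and 3.93e53 quoted in the text.
```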
However, in our case of Majoron emission by neutrino coalescence, the process ν_e ν_e → φ dominates by far, and so the chemical potential µ_νe rather than T is the key quantity. It is shown in the middle panels; we see that it is some 100 MeV out to roughly the inner 0.5 M_⊙, corresponding roughly to a radius of 10 km. At 1-2 s it drops quickly as the core deleptonizes. Beta equilibrium implies that ∆µ = µ_e − µ_νe = µ_µ − µ_νµ = µ_n − µ_p, whereas the number densities of ν_e and e⁻ must add up to the trapped lepton number of around 0.30 per baryon. However, the exact value of ∆µ depends on the nucleon properties in the medium and thus on the equation of state. Using free protons and neutrons provides the right order of magnitude, but it is not a good approximation for estimating the emission rate, because in our case the latter scales rapidly as µ_νe³ (see main text). In these models with six-species neutrino transport, a chemical potential also builds up for ν_µ, in the sense that a significant population of ν̄_µ develops, but the maximum of |µ_νµ| remains a factor of 2-3 smaller than µ_νe. As the emission rate scales with µ_ν³, the muonic contribution remains only a correction of order 10%.

In Fig. S6 we finally show contours of the Majoron emission rate per unit mass. While the Majoron emission rate per unit volume scales as µ_ν³ and thus peaks at the center of the star, the emission rate per unit mass peaks at the edge of the inner core, visible as the "yellow peak." This is because of the larger volume associated with the outer shells of the core. The coupling strength chosen for both the "hot" and "cold" models coincides with the corresponding energy-loss criterion detailed in the text, so that the Majoron luminosity at 1 s equals the neutrino luminosity. For the chosen coupling strengths of g_φ m_φ = 8.3 × 10⁻⁹ MeV (cold) and g_φ m_φ = 7.7 × 10⁻⁹ MeV (hot), the emission rate is around 1 × 10²⁰ erg g⁻¹ s⁻¹ throughout the inner core up to 0.50 M_⊙ for the first second and then drops quickly. In the hot model, there is significant emission at larger mass coordinates around 0.5 s, deriving from the relatively large ν̄_µ population.

E. Neutrino chemical potentials and older models

Previous authors have derived SN 1987A energy-loss bounds based on the same coalescence process, or have provided sensitivity forecasts for 100-MeV-range events from a future galactic SN [4, 7]. The emitting SN core was approximated as a one-zone model with µ_νe = 200 MeV over a volume with R = 10 km and, in the case of Ref. [7], for a time scale of 10 s. These assumptions yield far more restrictive limits, or far more ambitious signal predictions, than our one-zone model or the numerical Garching models.

In Ref. [7], the chemical potential was taken from the pioneering paper [62] (see their Fig. 11). In this proto-neutron star (PNS) cooling simulation, the nuclear equation of state was still relatively rough. Moreover, the starting value of the trapped lepton number per baryon, Y_L = 0.35, was chosen as an initial condition and did not follow from a self-consistent SN simulation. More recent systematic PNS cooling simulations [63] used more sophisticated nuclear and microphysics, chose a similar initial Y_L = 0.35, and found an initial central value of µ_νe ∼ 170 MeV (see their Fig. 9 for their baseline model).
Modern self-consistent simulations that include the infall phase find much smaller values of the trapped lepton number, 0.30 being more typical, depending on the progenitor model, and correspondingly smaller µ_νe. In the muonic Garching models used here, the central trapped lepton number at core bounce is around 0.28 for the hot and 0.29 for the cold model, with an initial central µ_νe ∼ 150 MeV.

For Majoron emission, the geometrically largest region, very roughly around a mass coordinate of 0.5 M_⊙, is more relevant than the values at the center, so this region is indicative of the parameters one could use for a one-zone description. This point is especially relevant for the time evolution, because deleptonization occurs earlier at larger radii. Figure 11 of Ref. [62] reveals that after only a few seconds µ_νe drops strongly; considering that the emission rate varies as µ_νe³, the signal would be strongly quenched at 2-3 s. A similar conclusion follows from Fig. 9 of Ref. [63].

However, the deleptonization time scale can be much shorter if the effect of PNS convection is included, in contrast to Refs. [62, 63] or, recently, Ref. [64], who studied the late neutrino signal. We refer to a recent study of PNS evolution [65] (see that paper for references to the earlier literature), which found that convection, implemented with a mixing-length approximation, speeds up deleptonization by about a factor of 4 (see especially their Sec. 4.1) and as such is crucial for determining the overall time scale. Of course, the exact quantitative impact on Majoron emission or on SN neutrino signal properties may not be captured by this single number, which refers to deleptonization at the center of the star.

In our study we have used the numerical Garching models described earlier, which include a mixing-length treatment of PNS convection, use nuclear equations of state that agree with modern information (notably on neutron-star masses and radii), and find trapped lepton abundances and chemical potentials commensurate with other modern simulations.

For the case of Majoron emission, one can actually characterize the different models by a single figure, the trapped number of ν_e in the core. In the degenerate limit, the Majoron luminosity of the SN core happens to be proportional to N_νe, the total number of ν_e present in the core, as explained around Eq. (S15) below. For our cold model at core bounce we find N_ν = 3.5 in units of (100 MeV)³ (10 km)³, whereas at 1 s post bounce it is 0.74. These numbers justify the one-zone parameters adopted in the main text.

If instead one uses µ_ν = 200 MeV with the same one-zone radius of 10 km, at 1 s one finds N_ν = 8, about a factor of 11 larger, thus leading to the much more restrictive energy-loss bounds reported, for example, in Refs. [4, 7].

For our argument about missing 100-MeV-range neutrinos in the 1987A data, or the earlier forecasts for a future galactic SN [7], what matters is a somewhat different quantity. In the degenerate limit, the number emission rate scales with µ_ν². If we assume very roughly that the detection cross section scales with energy squared, the count rate arising from a one-zone model scales with µ_ν⁴ R³ τ, as discussed in the main text. Therefore, one simple figure of merit for the source model is Ĉ_ν = 3 ∫dt ∫dr r² µ_ν⁴(r, t). Our cold model yields Ĉ_ν = 2.30 in units of (100 MeV)⁴ (10 km)³ s. If one were to use a one-zone model with µ_ν = 200 MeV, R = 10 km, and τ = 10 s, one instead finds Ĉ_ν = 160, a factor of 70 larger than our value.
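In the one-zone reading of Ĉ_ν = 3∫dt∫dr r² µ_ν⁴ (constant µ_ν inside radius R for a duration τ), the triple integral reduces to τ R³ µ_ν⁴ in the stated units; a one-line check of the quoted numbers:

```python
def c_nu_one_zone(mu, R, tau):
    """C_nu = 3 * tau * (R^3 / 3) * mu^4 for constant mu inside radius R.
    Units: mu in 100 MeV, R in 10 km, tau in s."""
    return 3.0 * tau * (R ** 3 / 3.0) * mu ** 4

c = c_nu_one_zone(mu=2.0, R=1.0, tau=10.0)  # mu_nu = 200 MeV, R = 10 km, 10 s
print(c, c / 2.30)  # -> 160.0 and ~70x the cold-model value, as in the text
```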
The sensitivity of the ν_e abundance in the SN core, and of its time-integrated value, to the microphysics input as well as to the deleptonization speed of the SN model mandates a somewhat careful gauging of one-zone parameters.

F. Majoron decay rate and emissivity

The matrix element for the decay of a single Majoron into a pair of neutrinos is also the matrix element for the coalescence of a pair of neutrinos into a Majoron; notice that there are no additional factors coming from averages over spin states, since we consider Majorana neutrinos.

The decay rate of a Majoron into a pair of neutrinos follows by integrating the squared matrix element over the final-state phase space, where we denote by p₁, p₂, and p_φ the four-momenta of the two neutrinos and the Majoron, respectively, and in bold their three-momenta. A factor 1/2 accounts for the presence of two identical particles in the final state. Performing the phase-space integral, we obtain the vacuum decay rate, Eq. (S7).

In the case of neutrino coalescence, the rate of Majoron production from a pair of neutrinos, restricted to a single flavor, involves the neutrino phase-space distribution function f_ν(E) (see, e.g., Ref. [1]). Performing the integral, we recover the emission rate per unit volume, Eq. (S9), as reported in the main text.

It is instructive to compare the rate of absorption Γ_A(E_φ) of a Majoron in the neutrino background with the spontaneous rate of emission Γ_E(E_φ). The rate of absorption is given by the vacuum decay rate of Eq. (S7) times a Lorentz factor m_φ/E_φ. Moreover, the final-state neutrinos are Pauli blocked, so that overall we find Eq. (S10), where E_± = (E_φ ± p_φ)/2, as defined in the main text. The integral expression appearing there equals 1 in the absence of Pauli blocking, because the interval of integration has length p_φ. The emission rate per unit volume, on the other hand, is given in Eq. (S9).

FIG. S1. Charged-current neutrino cross sections in a water Cherenkov detector.

FIG. S2. SN 1987A neutrino data collected at Kamiokande-II, IMB, and Baksan. We show the detected positron energy as a function of time after the first event in each detector. Because of clock uncertainties, the exact temporal offset between the observations is not fixed. We do not show events that are attributed to background.

FIG. S3. Detection efficiencies for electrons and positrons at Kamiokande and IMB, taken from Ref. [72], including the dead-time effect in IMB. We continue them to energies above 60 MeV by extrapolation.

FIG. S4. Temperature (left), chemical potential of electron neutrinos (center), and chemical potential of muon neutrinos (right) as a function of post-bounce time and mass coordinate for the Garching "cold" model. The red line identifies the density 3 × 10¹² g cm⁻³ and thus essentially the edge of the PNS. The final neutron-star mass is 1.351 M_⊙.

The factor 1/6 represents assumed flavor equipartition. The parameters are chosen such that E_tot, E_0 = ⟨E_ν⟩, and ⟨E_ν²⟩ agree with the numerical spectrum. The cold model releases E_tot = 1.98 × 10⁵³ erg. The exact impact of flavor oscillations on SN neutrinos is not yet fully understood. Averaging over all three ν flavors, we find E_0 = 12.7 MeV and α = 2.39. For the hot model, these parameters are E_tot = 3.93 × 10⁵³ erg, E_0 = 14.3 MeV, and α = 2.07.

SN 1987A cooling limit.—The local Majoron energy loss follows from Eq. (2).
Integrated over the SN core, this gives L_φ(1 s) = (g m_φ/MeV)² × 1.39 × 10⁶⁹ erg/s, to be compared with L_ν(1 s) = 8.29 × 10⁵² erg/s, leading to g m_φ < 0.77 × 10⁻⁸ MeV. Moreover, E_φ^tot = (g m_φ/MeV)² × 4.39 × 10⁶⁹ erg, with a corresponding total energy E_±^tot carried by the e± from φ decay.

FIG. 2. Normalized particle spectra from the time-integrated emission of the cold model SFHo-18.8. "Standard ν" is the flavor average of the usual SN ν and "Standard e±" the corresponding e± spectrum in the detector (ignoring detection efficiencies), whereas the new contributions are marked "from φ decay." They include Michel e± (endpoint 53 MeV) from µ± decays at rest, which themselves emerge from CC interactions of ν_µ and ν̄_µ that come from φ decay. The reactions ν̄_e + O → e⁺ + X and ν_e + O → e⁻ + Y, with X and Y excited final-state nuclei, dominate for E_ν ≳ 70 MeV.

¹ The absolute time of an event was recorded to an uncertainty of ±50 ms thanks to the WWVB clock, a time-signal radio station operated by the National Institute of Standards and Technology [53]. The first IMB event occurred at 7:35:41.374 Universal Time on 23 February 1987, corresponding to 2:35 am local time, very early on a Monday morning.
Lateral multilayer/monolayer MoS2 heterojunction for high-performance photodetector applications

Inspired by the unique, thickness-dependent energy band structure of 2D materials, we study the electronic and optical properties of a photodetector based on an as-exfoliated lateral multilayer/monolayer MoS2 heterojunction. Good gate-tunable current-rectifying characteristics are observed, with a rectification ratio of 10³ at V_gs = 10 V, which offers evidence for the existence of the heterojunction. Under illumination from ultraviolet to visible light, the multilayer/monolayer MoS2 heterojunction shows outstanding photodetection performance, with a photoresponsivity of 10³ A/W, a photosensitivity of 1.7 × 10⁵, and a detectivity of 7 × 10¹⁰ Jones under 470 nm illumination. An abnormal photoresponse under positive gate voltage is observed and analyzed, which indicates the important role of the heterojunction in the photocurrent generation process. We believe these results contribute to a better understanding of the fundamental physics of band alignment in multilayer/monolayer MoS2 heterojunctions and provide a feasible route toward novel electronic and optoelectronic devices.

Two-dimensional (2D) materials based on atomically thin films of layered semiconductors, such as the family of transition metal dichalcogenides (TMDCs), have exhibited great potential in various optoelectronic applications [1-5]. Among the various TMDCs, MoS2 is gaining increasing attention for applications in optoelectronic devices [6-9], owing to its suitable bandgap, relatively high carrier mobility, and high light absorbance [10]. Interestingly, bulk MoS2 is a semiconductor with an indirect bandgap of 1.2 eV [11], whereas single-layer MoS2 is a direct-gap semiconductor with a bandgap of 1.8 eV [12]. In particular, the ability to modulate the band structure by varying the layer number gives these materials their unique thickness-dependent electronic and optical properties [2]. Vertical or lateral semiconductor p-n junctions are the basic building blocks of modern optoelectronic devices [13-15], such as photodetectors, light-emitting diodes, and solar cells. Vertical junctions such as WSe2/MoS2 [16] and black phosphorus/MoS2 [17] can be formed by stacking two different 2D materials through van der Waals forces. However, the band offsets between different TMDCs are pivotal and could inhibit carrier transport. In addition, impurities are inevitably introduced at the interface during the multiple-transfer process [18]. In lateral junctions, which can be formed via localized chemical doping or electrostatic tuning [3,19], the impurities at the interface between p-type and n-type materials can be negligible. However, multiple complicated fabrication steps are usually required, and the band alignment between electrodes and 2D materials is technically challenging. Fortunately, utilizing the band offsets between regions with different numbers of TMDC layers to form lateral heterojunctions has been proposed in recent years [20,21]. In 2015, Ali Javey et al. [20] experimentally and theoretically demonstrated the formation of a type-I heterojunction in as-exfoliated MoS2 flakes by thickness modulation. Furthermore, Qiaoliang Bao et al. [21] reported a monolayer/bilayer WSe2 lateral junction and demonstrated that the whole 1L-2L WSe2 junction surface is an active area for photoresponse. However, the photoresponse capabilities as well as the photoresponse spectrum of this structure have not been investigated carefully.
Also, in those papers, the influence of the junction on the photocurrent was not demonstrated directly. In this study, an electrically tunable, as-exfoliated multilayer/monolayer MoS2 heterojunction is reported, which exhibits good gate-tunable current-rectifying characteristics. Furthermore, we investigate the photoresponse of the heterojunction at different wavelengths from the ultraviolet (UV) to the visible (vis). An abnormal photoresponse under positive gate voltage is observed and analyzed, which indicates the important role of the heterojunction in the photocurrent generation process. Under 470 nm illumination, the heterojunction shows a photoresponsivity of ~1 × 10³ A/W, a photosensitivity of 1.7 × 10⁵, and a detectivity of 7 × 10¹⁰ Jones, comparable to or higher than most recently reported vertical and lateral heterojunctions [3,19,22-25]. This work may provide a promising heterostructure for novel optoelectronic devices in future high-performance photodetector applications.

Results

Characterization of the multilayer/monolayer MoS2 heterojunction. Figure 1(a) shows optical microscopy images of the MoS2 before and after metal deposition. The colors differ with layer number: light gray for monolayer MoS2 and dark gray for multilayer MoS2. Figure 1(b) shows a schematic of the photodetector based on the multilayer/monolayer MoS2 heterojunction. In this device, the source electrode contacts the monolayer MoS2, the drain electrode contacts the multilayer MoS2, and the heavily p-doped Si serves as a global back gate. The thicknesses of the monolayer and multilayer MoS2 are ∼0.65 nm and ∼6.9 nm, respectively, as determined from the atomic force microscopy (AFM) measurements shown in Fig. 1(c). In the inset of Fig. 1(c), an obvious dividing line between monolayer and multilayer MoS2 can be observed, which further supports the existence of the heterojunction. The thickness of the MoS2 is also confirmed by the peak positions in the Raman spectrum shown in Fig. 1(d): the E¹₂g peak lies at 384.549 cm⁻¹ (379.214 cm⁻¹) and the A₁g peak at 402.318 cm⁻¹ (404.093 cm⁻¹) for monolayer (multilayer) MoS2, consistent with a previous report [26].

Electronic properties of the multilayer/monolayer MoS2 heterojunction. Next, the electrical characteristics of the multilayer/monolayer MoS2 heterojunction are studied. Figure 2(a) shows the typical n-type gating characteristics on a semi-log plot with the drain voltage V_ds changing from −3 V to 3 V. A high on/off current ratio of 10⁷ and a subthreshold swing (SS = ∂V_gs/∂log₁₀(I_ds)) close to 300 mV/decade are achieved for this device. The field-effect mobility extracted from the results in Fig. 2(a) is plotted as a function of V_gs in Fig. 2(b); a sketch of this standard extraction is given below. The heterojunction shows a typical mobility in the range of 0.1-10 cm² V⁻¹ s⁻¹, similar to previously reported values for MoS2 transistors [27]. Figure 2(c) shows the gate-tunable I_ds-V_ds characteristics of the heterojunction on a semi-log plot. The device exhibits excellent rectifying characteristics, indicating the existence of the multilayer/monolayer MoS2 heterojunction.
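The field-effect mobility referenced above is conventionally extracted from the linear-regime transconductance, µ_FE = (L/(W·C_ox·V_ds))·∂I_ds/∂V_gs. A minimal sketch using the device dimensions from the Method section; the drain bias of 1 V is an assumed example value, and the helper name is ours:

```python
import numpy as np

EPS0, EPS_SIO2, T_OX = 8.854e-12, 3.9, 90e-9  # F/m, relative permittivity, m
C_OX = EPS0 * EPS_SIO2 / T_OX                 # ~3.8e-4 F/m^2 for 90 nm SiO2
L, W = 3e-6, 7.4e-6                           # channel length and width, m
V_DS = 1.0                                    # V, assumed linear-regime bias

def field_effect_mobility(v_gs, i_ds):
    """mu_FE in cm^2 V^-1 s^-1 from a measured transfer curve I_ds(V_gs)."""
    gm = np.gradient(i_ds, v_gs)              # transconductance dI/dV, in S
    return gm * L / (W * C_OX * V_DS) * 1e4   # m^2 V^-1 s^-1 -> cm^2 V^-1 s^-1
```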
The influence of source/drain Schottky barriers on the rectifying behavior is excluded because of the almost linear output curves of the multilayer and monolayer MoS2 transistors, shown in Figure S1 in the supporting information. In Fig. 2(d), a rectification ratio I_fwd/I_rev (the ratio of forward to reverse current) of ∼10³ is obtained at V_ds = −3 V/3 V and V_gs = 10 V. Additionally, the ideality factor of the heterojunction reaches a minimum value of 1.95 at a back-gate voltage of 5 V. These strong current-rectifying characteristics and the small ideality factor indicate that a high-quality heterojunction has formed between the multilayer and monolayer MoS2.

Photoresponse of the multilayer/monolayer MoS2 heterojunction. With a high-quality multilayer/monolayer MoS2 heterojunction achieved, the optoelectronic characteristics of the device are then explored. First, we investigate the modulation effect of the gate voltage V_gs on the light-detection capabilities. Figure 3(a) shows the transfer curves (I_ds-V_gs) of the heterojunction under 470 nm illumination with the light intensity changing from 4.48 mW/cm² to 29.29 mW/cm². A marked increase of the current under illumination is observed, indicating the good photoresponse of the device. Furthermore, the n-type characteristic of the heterojunction becomes more pronounced with increasing light intensity, which demonstrates the tunable effect of light on the electronic behavior of the heterojunction. To better understand the photoresponse properties of the device, the characteristics significant for practical photodetector applications are summarized: the photosensitivity S = (I_light − I_dark)/I_dark, the photoresponsivity R = (I_light − I_dark)/P_incident, and the detectivity D* = A^0.5 R/(2q I_dark)^0.5, where I_light, I_dark, P_incident, A, and q are the current under illumination, the dark current, the incident power, the absorbing area, and the electronic charge, respectively (a small numerical helper implementing these definitions is given below). Figure S2 shows the dependence of the R and S values on gate voltage. Combining the low dark current and high R, D* represents the ability of a detector to detect weak optical signals, as shown in Fig. 3(b). D* increases with gate voltage, peaks at V_gs = −7.5 V, and then decreases as the gate voltage further increases. The maximum value of D* is about 7 × 10¹⁰ Jones, which is comparable to most reported MoS2-based photodetectors [19,28]. Figure 3(c) displays the output characteristics of the heterojunction under illumination with different incident powers. A linear dependence of R on incident power can be concluded from the inset of Fig. 3(c). From Fig. 3(d), R increases with V_ds and reaches a maximum of about 10³ A/W at V_ds = 3 V, which is comparable to or higher than most recently reported vertical and lateral heterojunctions [3,19,22-25]. To apply the multilayer/monolayer MoS2 heterojunction in a broadband photodetector [29,30], the photoresponse of the device at other wavelengths has also been investigated. The photoresponsivities of the device at various wavelengths (typically 365 nm, 470 nm, 590 nm, and 660 nm) are shown in Fig. 4(a). The device exhibits a broadband photoresponse from ultraviolet to visible light and shows a slightly larger R under 470 nm illumination. However, because of the relatively weak infrared light absorption [22], there is no obvious photoresponse in the infrared region.
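The three figures of merit defined above are direct ratios of measured quantities. A small helper implementing them (all input numbers below are hypothetical placeholders of the right order of magnitude, not the paper's measurements):

```python
import math

Q_E = 1.602e-19  # C, electronic charge

def photodetector_metrics(i_light, i_dark, p_density, area_cm2):
    """S = (I_l - I_d)/I_d, R = (I_l - I_d)/P_inc, D* = A^0.5 R/(2 q I_d)^0.5.
    Currents in A, p_density in W/cm^2, area in cm^2; D* comes out in Jones."""
    p_incident = p_density * area_cm2
    s = (i_light - i_dark) / i_dark
    r = (i_light - i_dark) / p_incident
    d_star = math.sqrt(area_cm2) * r / math.sqrt(2.0 * Q_E * i_dark)
    return s, r, d_star

# 3 um x 7.4 um channel ~ 2.2e-7 cm^2; illustrative currents and intensity:
print(photodetector_metrics(1e-6, 1e-11, 4.48e-3, 2.2e-7))
```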
For a more accurate investigation of the infrared region, an interdigitated finger structure and a laser light source are suggested. Response speed is also one of the key figures of merit for a photodetector, particularly for those used in optical communication, imaging, and so on. Figure 4(b) shows the time-resolved measurement used to study the photoresponse dynamics. The response is characterized by a typical rise time τ_rise of 2 ms and a decay time τ_decay of 2 s. The fast rise is induced by the depletion region of the heterojunction and the Schottky barriers of the source/drain contacts. However, owing to adsorbates, defects, or charged impurities in the surrounding MoS2 material, a slow relaxation is observed in the decay. To reduce the response time, a better-isolated channel environment is needed. Additionally, good photostability of the device over multiple cycles can be concluded from Figure S3.

Working mechanism of the multilayer/monolayer MoS2 heterojunction. To analyze the effect of the heterojunction on the photoresponse behavior, the transfer curves of monolayer MoS2 and multilayer MoS2 phototransistors are plotted in Fig. 5(a) and (b), respectively. Negligible photoresponse is observed in the on state (V_gs = 15 V) of these devices; a similar phenomenon has been reported in the literature [2,22]. However, an obvious photoresponse of the heterojunction is observed at forward gate bias (Fig. 3(a)). Furthermore, as shown in Fig. 5(c), the S of the heterojunction shows a linear dependence on gate voltage. In contrast, the S of the multilayer MoS2 transistor and of the monolayer MoS2 transistor both decrease exponentially as the gate voltage increases. The different photoresponse characteristics of the two kinds of devices might be owing to the existence of the heterojunction, for the reason discussed next. To better understand the working mechanism of the heterojunction, the energy band diagram is shown in Fig. 6. According to the reported experimental and theoretical bandgap values for monolayer and multilayer MoS2 [20,31], a type-I heterojunction in the equilibrium state is expected, as depicted in the qualitative band diagram of Fig. 6(a). Simultaneously, Schottky barriers between the MoS2 and the source/drain metal are formed [32]. Figure 6(b) shows the typical band alignments in the off state (negative gate voltage) and the on state (positive gate voltage). Under negative gate voltage, the conduction band (E_C) and valence band (E_V) are pulled downward, which induces MoS2/Ti Schottky barriers. In this condition, the effective photosensitive areas (blue regions in Fig. 6(b)) consist of the MoS2/Ti Schottky barriers and the multilayer/monolayer MoS2 heterojunction. As the gate voltage moves toward positive values, the MoS2/Ti Schottky contact changes to an Ohmic contact. Correspondingly, the photovoltage effect induced by the contact barriers is weakened. However, the multilayer/monolayer MoS2 heterojunction still plays an important role in the photoresponse process, so the devices exhibit the different decreasing trends seen in Fig. 5(c). In conclusion, the good photoresponse of the multilayer/monolayer MoS2 heterojunction might derive not only from the Schottky barrier at the MoS2/metal contact but also from the built-in field of the heterojunction.
Discussion

The lateral multilayer/monolayer MoS2 heterojunction was fabricated, and its electronic and optical characteristics were investigated under gate modulation. The lateral 2D heterojunction possesses a high on/off current ratio of 10⁷ and good current-rectifying characteristics, with a high rectification ratio of 10³ and a small ideality factor of 1.95 in the dark, revealing the high quality of the heterojunction. As a photodetector, the multilayer/monolayer MoS2 heterojunction exhibits good photodetection capabilities under illumination from the ultraviolet to the visible. Under 470 nm illumination, the device shows a maximum photoresponsivity of 10³ A/W, a high photosensitivity of 10⁵, and a detectivity of 7 × 10¹⁰ Jones. This work could offer an interesting platform for fundamental investigations of lateral multilayer/monolayer TMDC heterojunctions and will be valuable for fabricating flexible and transparent optoelectronic devices in the future.

Method

A multilayer/monolayer MoS2 flake was obtained from a bulk crystal by the mechanical exfoliation method and transferred onto a heavily p-doped Si (100) substrate with 90 nm of thermal oxide, as shown in the inset of Fig. 1(a). Metal source/drain (S/D) contacts were subsequently formed, with the source contact on the monolayer region and the drain contact on the multilayer region of the MoS2 flake. Electron-beam lithography (EBL) was used to pattern the source/drain contacts, followed by thermal evaporation of Ti/Au (10/50 nm) electrodes and a lift-off process. The resulting structure is shown in Fig. 1(a), with a channel length L of 3 μm and width W of 7.4 μm. Atomic force microscopy (AFM, SPA 500, Seiko Instruments Inc.) and Raman spectroscopy (RM-1000, Renishaw) with a wavelength of 532 nm were used to confirm the layer number of the MoS2 flakes. The electronic and optical properties of the multilayer/monolayer MoS2 heterojunction were characterized with an Agilent B1500 parameter analyzer at room temperature in ambient air. Monochromatic light of different wavelengths was provided by a CEL-LEDS35 LED illumination system (CEAULIGHT).
Visible/Near-Infrared Spectroscopy and Chemometrics for the Prediction of Trace Element (Fe and Zn) Levels in Rice Leaf

Two sensitive wavelength (SW) selection methods combined with visible/near-infrared (Vis/NIR) spectroscopy were investigated to determine the levels of the trace elements Fe and Zn in rice leaf. A total of 90 samples were prepared for the calibration (n = 70) and validation (n = 20) sets. Calibration models using SWs selected by latent variable analysis (LVA) and independent component analysis (ICA) were developed, and nonlinear regression models using a least squares-support vector machine (LS-SVM) were built. Among the nonlinear models, six SWs selected by ICA provided the optimal ICA-LS-SVM model when compared with LV-LS-SVM. The coefficient of determination (R²), root mean square error of prediction (RMSEP), and bias obtained by ICA-LS-SVM were 0.6189, 20.6510 ppm, and −12.1549 ppm, respectively, for Fe, and 0.6731, 5.5919 ppm, and 1.5232 ppm, respectively, for Zn. The overall results indicate that ICA is a powerful method for the selection of SWs and that Vis/NIR spectroscopy combined with ICA-LS-SVM is very efficient for the accurate determination of trace elements in rice leaf.

Various calibration methods have been used to relate near-infrared spectra (NIRS) to measured properties of materials. Principal component regression (PCR), partial least squares (PLS), multiple linear regression (MLR), and artificial neural networks (ANN) are the most used multivariate calibration techniques for NIRS [16-19]. PLS is used in a large number of applications in fruit and juice analysis and is widely used in multivariate calibration because it takes advantage of the correlations that already exist between the spectral data and the constituent concentrations. However, PLS is based on linear models, and unsatisfactory results may occur when nonlinearity is present [20,21]. The least squares-support vector machine (LS-SVM) can handle both linear and nonlinear relationships between the spectra and the response chemical constituents [22,23]; therefore, a new combination of ICA with LS-SVM is proposed here as a nonlinear calibration model for quantitative analysis using spectroscopic techniques. The performance of ICA-LS-SVM was evaluated in a case study on the determination of trace elements in rice, with the purpose of developing a fast and accurate nonlinear model using fewer selected variables. The objectives of this study were (1) to investigate the feasibility of using Vis/NIRS to predict trace elements such as Fe and Zn in rice leaf, and (2) to compare the performance of the newly proposed ICA-LS-SVM model against models built with different variable selection methods (PCA, LVA, and ICA) for predicting the trace elements in rice.

Experimental Design

The experimental samples in this study were 15 basins of rice, planted in conditioned soil with three nitrogen levels: 0, 120, and 240 kg/ha. To avoid accidental damage to the basins or samples, a duplicate set of basins was prepared, so there were 30 basins in total; for each nitrogen level there were 10 basins, including the duplicates. Each basin's inner diameter and height were 30 and 45 cm, respectively. Each basin contained 10 kg of soil and four rice plants. The basins were placed in a slotted field using the surrounding soil for backfill, and they were arranged along a north-south line. The soil used in this experiment was taken from the 20-40 cm depth of the experimental field.
Data Acquisition and Preprocessing

Three leaf samples from each of the 15 basins were selected for spectral measurement. Samples were also selected from the 15 replicate basins, so a total of 90 samples were obtained. The measurements were made at the booting stage. The reflectance of all 90 leaf samples was measured using a portable spectroradiometer (FieldSpec Vis/NIR, Analytical Spectral Devices, Boulder, CO, USA) with a sensitivity range from 325 to 1,075 nm. The instrument uses a sensitive 512-element photodiode-array spectroradiometer with a resolution of 3.5 nm. The scan number for each spectrum was set to 10 at the same position, and for each sample three reflection spectra were taken; thus a total of 30 data points per sample were stored for later analysis. To achieve relative reflectance measurements, the white reference (a white panel purchased with the spectroradiometer) was collected before scanning samples until a clean 100% reference line was obtained. All leaves were randomly divided into two sets: one was used as the calibration set (n = 70) and the remaining samples as the validation set (n = 20). In order to compare the performance of different calibration models, the samples in the calibration and validation sets were kept the same for all models.

Trace Element (Fe, Zn) Measurement

In this study, the national standard method was used to measure the trace elements Fe and Zn [24]. First, HNO₃, HClO₄, and distilled water were diluted and adjusted to the required solution concentration. Rice leaf samples were finely ground and then passed through a 20-mesh sieve to obtain very fine particles. An air-dried, ground, and sieved sample (2.0 g) was placed in an Erlenmeyer flask, and the extracting solution (20 mL) was added. The flask was then placed on a magnetic stirrer, and the mixture was stirred for 20 minutes. The resulting solution was filtered through filter paper into a 50 mL polypropylene vial and diluted to 50 mL with the extracting solution. After that, a Perkin-Elmer Analyst™ 800 atomic absorption spectrometer (PerkinElmer, Inc., Shelton, CT, USA) was used to measure the signal strength of Fe and Zn in each sample, and the results were obtained with the instrument's software package. After calculation, the Fe content ranged from 39.951 ppm to 134.254 ppm, and the Zn content from 9.085 ppm to 49.927 ppm across all 90 samples. Table 1 shows the statistics of the Fe and Zn contents in the calibration and validation sets.

Data Pretreatment

Due to potential system imperfections, obvious scattering noise could be observed at the beginning and end of the spectral data. Thus, the first and last 75 wavelength data points were eliminated to improve the measurement accuracy, i.e., all visible and NIR spectroscopy analyses were based on the 400-1,000 nm range. This spectral preprocessing was done in ViewSpec Pro V4.02 (Analytical Spectral Devices, Inc.). After that, the spectral data were preprocessed using Savitzky-Golay smoothing with a window width of 7 (3-1-3) points [25]. The data preprocessing was implemented in the software Unscrambler V9.6 (Camo Process AS, Oslo, Norway).

Principal Component Analysis (PCA)

Reducing the number of inputs to the LS-SVM can reduce the training time. Furthermore, it can also reduce repetition and redundancy in the input spectral data. PCA is a data reduction method that constructs new uncorrelated variables, known as principal components (PCs).
They account for as much of the variability of the original variables as possible and are then used as the inputs of the model. In addition, PCs can also suppress noise and random errors in the original data. The PCA model can be written as

X = T Pᵀ + E,   (1)

where X is the N × K data matrix, T is the N × A score matrix, P is the K × A loading matrix, E is the N × K residual matrix, N is the number of samples, K is the number of spectral variables, and A is the number of PCs.

Partial Least Squares Analysis

In the development of the PLS model, calibration models were built between the spectra and the content of the trace elements (Fe and Zn); full cross-validation was used to evaluate the quality of the calibration models and to prevent over-fitting. Latent variables (LVs) can be used to reduce the dimensionality of the data, and the optimal number of LVs was determined by the lowest value of the predicted residual error sum of squares (PRESS). The prediction performance was evaluated by the coefficient of determination (R²), the root mean square error of calibration (RMSEC) or prediction (RMSEP), and the bias. The ideal model should have a higher R² and lower RMSEC, RMSEP, and bias. The RMSEP and bias are calculated via

RMSEP = sqrt[ (1/I_p) Σ_{i=1..I_p} (ŷ_i − y_i)² ],   (2)
bias = (1/I_p) Σ_{i=1..I_p} (ŷ_i − y_i),   (3)

where ŷ_i is the predicted value of the ith sample in the prediction set, y_i is the measured value of that sample, and I_p is the number of samples in the prediction set.

Independent Component Analysis

Independent component analysis is a well-established statistical signal processing technique that aims to decompose a set of multivariate signals into a basis of statistically independent components with minimal loss of information content. The independent components are latent variables, meaning that they cannot be directly observed, and they must have non-Gaussian distributions. The noise-free ICA model can be expressed as

x = A s,   (4)

where x denotes the recorded data matrix, and s and A represent the independent components and the coefficient (mixing) matrix, respectively. The ICs are obtained via higher-order statistics, which is a much stronger condition than orthogonality. This goal is equivalent to finding a separating matrix W that satisfies

ŝ = W x,   (5)

where ŝ is the estimate of s. The separating matrix W can be trained as the weight matrix of a two-layer feed-forward neural network in which x is the input and ŝ is the output. There are many algorithms for performing ICA [26]. Among them, the fast fixed-point algorithm (FastICA), developed by Hyvärinen and Oja [27], is highly efficient for estimating the ICA model. FastICA was chosen here and carried out in Matlab 7.0 (The MathWorks, Natick, MA, USA).

Least Squares-Support Vector Machine

The least squares-support vector machine can perform linear or nonlinear regression and multivariate function estimation in a relatively fast way [28]. It solves a linear set of equations instead of a quadratic programming (QP) problem to obtain the support vectors (SVs). Details of the LS-SVM algorithm can be found in the literature [29,30]. The LS-SVM model can be expressed as

y(x) = Σ_{i=1..N} α_i K(x, x_i) + b,   (6)

where K(x, x_i) is the kernel function, x_i are the input vectors, α_i are Lagrange multipliers called support values, and b is the bias term. In the model development using LS-SVM with the radial basis function (RBF) kernel, the optimal combination of the gam (γ) and sig² (σ²) parameters was selected as the one giving the smallest root mean square error of cross-validation (RMSECV). In this study, γ was optimized in the range 2⁻¹-2¹⁰ and σ² in the range 2-2¹⁵, with adequate increments. These ranges were chosen from previous studies in which the magnitudes of the parameters were optimized. The grid search had two steps: the first was a crude search with a large step size, and the second a refined search with a small step size. The free LS-SVM toolbox (LS-SVM v1.5, Suykens, Leuven, Belgium) was applied with MATLAB 7.0 to develop the calibration models; a sketch of this selection-plus-regression pipeline follows below.
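The ICA-based wavelength selection and the RBF-kernel regression with a coarse-then-fine grid search can be sketched as follows. Scikit-learn's FastICA and kernel ridge regression (which shares the LS-SVM's linear-system solution but not its exact formulation or bias handling) stand in for the Matlab toolboxes, and the spectra here are random placeholders:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 600))        # placeholder 400-1000 nm spectra
y = rng.normal(size=90)               # placeholder Fe or Zn contents (ppm)
wavelengths = np.linspace(400.0, 1000.0, 600)

# 1) Sensitive wavelengths: largest |weight| in each of the first ICs
#    (take additional top-weight wavelengths per IC to reach six SWs).
ica = FastICA(n_components=4, random_state=0).fit(X)
sw_idx = np.unique(np.abs(ica.components_).argmax(axis=1))
print("SWs (nm):", wavelengths[sw_idx])

# 2) Coarse grid over the RBF parameters (sklearn's gamma = 1/(2*sigma^2));
#    a refined search around the best point would follow as a second stage.
grid = {"alpha": 2.0 ** np.arange(-10.0, 2.0, 2.0),
        "gamma": 1.0 / (2.0 * 2.0 ** np.arange(1.0, 16.0, 3.0))}
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X[:70, sw_idx], y[:70])    # calibration set (n = 70)
print(search.best_params_)
```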
Overview of Spectra and Statistics of the Trace Elements

A lack of trace elements such as Fe, S, Mg, and Mn may reduce the chlorophyll content of a plant leaf and affect its absorption of solar radiation, so changes in plant nutrition, such as nitrogen, water content, and trace elements, may directly result in spectral reflectance changes [31]. Figure 1(a) shows the Vis/NIR spectral curves of the 90 leaf samples. The trend of the spectral curves in the Vis/NIR region is similar: a small peak appears in the green band from 560 to 580 nm, and the reflectance increases rapidly at about 690-740 nm (the red edge) from 10% to 30%-70%. Wavelengths around 580 nm are close to those of the green pigments, and wavelengths near 680 nm and 710-730 nm lie at the red edge position [32]. After treatment with the 2nd derivative, several peaks and valleys appear, as shown in Figure 1(b): there are peaks at the wavebands near 690-700 nm, and troughs at the wavebands 720-740 nm and 550-570 nm.

PLS Models

Calibration models were built between the spectra and the contents of the trace elements (Fe and Zn). Different numbers of LVs were tried in building the calibration models, and no outliers were detected in the calibration set during the development of the PLS models. The models were used to predict the remaining 20 samples, and the best performance was achieved with six LVs for Fe and five LVs for Zn. The R², RMSEP, and bias were 0.3820, 26.1431 ppm, and −9.3674 ppm for Fe, and 0.5800, 6.9637 ppm, and 2.2320 ppm for Zn, respectively.

PCA-LS-SVM Models

The PCs obtained from PCA were applied as inputs of the LS-SVM models to improve the training speed and reduce the training error of the Vis/NIR model, because the training time increases with the square of the number of training samples and linearly with the number of variables. Following the PCA analysis described above, the PCs from the Vis/NIR region were used as new eigenvectors to enhance the features of the spectra and reduce the dimensionality of the spectral data matrix. Several PCs were extracted from the spectra of the 90 samples. Before the LS-SVM calibration model was built, three steps were crucial: choosing the optimal input feature subset, a proper kernel function, and the optimal kernel parameters. First, the six PCs obtained from the PCA analysis were used as the input data set; their accumulated contribution reached 95.2%. Second, the radial basis function was chosen because it can handle the nonlinear relationships between the spectra and the target attributes. Finally, the two important parameters gam (γ) and sig² (σ²) were optimized for the RBF kernel as described above in the multivariate analysis. The performance of the Vis/NIR models was evaluated on the 20 samples in the validation set. The R², RMSEP, and bias for the validation sets were 0.4012, 23.9920 ppm, and −7.8789 ppm for Fe, and 0.6109, 6.5308 ppm, and 2.0571 ppm for Zn, respectively. Figure 2(a,b) compares the predicted and measured values for Fe and Zn, respectively, by the PCA-LS-SVM model.
The diagonal line (y = x) represents the ideal result, in which the predicted values equal the measured values; the closer the sample points are to this line, the better the model. From these figures, the sample points of the validation set are distributed near the ideal line for Zn, but the prediction performance is not good for Fe.

LV-LS-SVM Models

The latent variables obtained from PLS were applied as inputs of LS-SVM models to improve the training speed and reduce the training error of the Vis/NIR model. Following the PLS analysis described above, the LVs from the Vis/NIR region were used as new eigenvectors to enhance the features of the spectra and reduce the dimensionality of the spectral data matrix. Several LVs were extracted from the spectra of the 90 samples. The performance of the Vis/NIR models was evaluated on the 20 samples in the validation set. Comparing the results for the calibration and validation sets, the best performance was achieved with six LVs for Fe and five LVs for Zn. The R², RMSEP, and bias for the validation sets were 0.4070, 23.3845 ppm, and −7.4975 ppm for Fe, and 0.6067, 6.4869 ppm, and 2.2336 ppm for Zn, respectively. Figure 3(a,b) shows the predicted-versus-reference charts. Compared with the PCA-LS-SVM models, the prediction performance for Fe improved a little, but it is still not good. The PCA-LS-SVM calibration model performs better than the LV-LS-SVM model for Zn.

ICA-LS-SVM Models

Independent component analysis was applied for the selection of sensitive wavelengths (SWs), which reflect the main features of the raw absorbance spectra. FastICA was applied to the preprocessed spectral data, and the main absorbance peaks and valleys were indicated by the spectra of the ICs. The SWs were selected from the weights of the first four ICs: the wavelengths with the highest weights in each IC were selected as the SWs. Figure 4(a,b) shows the four ICs for Fe and Zn. Six SWs were selected from the four ICs: wavelengths near 680, 580, 960, 730, 760, and 830 nm for Fe, and near 680, 710, 640, 720, 580, and 800 nm for Zn. To evaluate the performance of the SWs, they were applied as the input data matrix to develop the ICA-LS-SVM models. The validation results showed that the R², RMSEP, and bias were 0.6189, 20.6510 ppm, and −12.1549 ppm for Fe, and 0.6731, 5.5919 ppm, and 1.5232 ppm for Zn, respectively. Figure 5(a,b) shows the predicted-versus-reference graphs. The ICA-LS-SVM models achieved better performance than the best LV-LS-SVM models in both the calibration and validation sets. Wavelengths around 580 nm are related to the chlorophyll content of the leaf, and wavelengths at 680 nm, 710 nm, and 720 nm are near the red edge position. The wavelength 960 nm is close to the water absorbance bands, suggesting that the Fe signal may be affected by interference from water [33]. Therefore, the selection of SWs was suitable in the present study, and the effectiveness of the SWs was validated. The SWs represent most of the features of the original spectra and can replace the whole wavelength region for predicting the trace elements in rice. Ma et al. reported that the element Co had a high correlation near the wavelength 569.22 nm, with an R² value of 0.623 [31]. They claimed this might be caused by the variation of chlorophyll content. Al Abbas et al.
studied the spectra of "normal" and six types of nutrient-deficient maize leaves, and showed that the chlorophyll concentration of the leaves in all nutrient-deficiency treatments was lower than that of the leaves in the control [34]. This is consistent with the results concerning Co in the paper of Ma et al. In our study, Fe belongs to the family of iron elements, and Zn is akin to the sulfur elements. Fe and Co belong to the same element family, so it is normal that the spectral response of Fe is similar to that of Co, and both of them have a high correlation near the wavelength of 580 nm. For Zn, the sensitive wavelengths near 680, 710 and 720 nm were near the red edge. Analysis of the Results Comparing the above PLS, PCA-LS-SVM, LV-LS-SVM and ICA-LS-SVM models, the nonlinear PCA-LS-SVM, LV-LS-SVM and ICA-LS-SVM models turned out to be better than the linear PLS model. The best model for the prediction of trace elements in rice was obtained with the ICA-LS-SVM model. Table 2 shows the RMSEP and R² values for all four models. The ICA-LS-SVM models had a better performance, and the reason might be that the LS-SVM models took the nonlinear information of the spectral data into consideration, and this nonlinear information improved the prediction precision. The ICs from ICA were obtained using high-order statistics, which is a much stronger condition than orthogonality, so the SWs selected from the ICs were more effective, and this could be very helpful for the development of portable instruments or for real-time monitoring of rice trace elements. Conclusions Vis/NIR spectroscopy was successfully utilized for the determination of some trace elements (Fe, Zn) in rice. A new combination, ICA-LS-SVM, was proposed and compared with nonlinear LV-LS-SVM models, PCA-LS-SVM models and linear PLS models. The ICA-LS-SVM model turned out to be the best for the prediction of trace elements in rice, and was better than the nonlinear LV-LS-SVM model. The R², RMSEP and bias obtained by ICA-LS-SVM were 0.6189, 20.6510 ppm and −12.1549 ppm for Fe, and 0.6731, 5.5919 ppm and 1.5232 ppm for Zn, respectively. The overall results demonstrated that ICA is a powerful tool for variable selection, and the newly proposed ICA-LS-SVM method could be applied as an alternative fast and accurate method for the determination of trace elements in rice.
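As an illustration of the ICA-based variable selection highlighted in the conclusions above, the sketch below shows one plausible way to implement the sensitive-wavelength selection: FastICA is run on the preprocessed spectra and, for each of the first four independent components, the wavelengths with the largest absolute weights are retained. The original work was carried out in MATLAB; this Python version uses scikit-learn, and the mapping of IC weights onto wavelengths, the number of wavelengths kept per IC, and the synthetic spectra are assumptions made for illustration only.

```python
# Hedged sketch of ICA-based sensitive-wavelength (SW) selection.
# Assumption: the "weight" of a wavelength in an IC is read from the
# FastICA mixing matrix; the original study may have used a different
# convention.
import numpy as np
from sklearn.decomposition import FastICA

def select_sensitive_wavelengths(spectra, wavelengths, n_ics=4, per_ic=2):
    """Return the wavelengths with the highest |weight| in each of n_ics ICs."""
    ica = FastICA(n_components=n_ics, random_state=0, max_iter=1000)
    ica.fit(spectra)                        # spectra: (n_samples, n_wavelengths)
    weights = np.abs(ica.mixing_)           # (n_wavelengths, n_ics)
    selected = []
    for ic in range(n_ics):
        top = np.argsort(weights[:, ic])[::-1][:per_ic]
        selected.extend(int(w) for w in wavelengths[top])
    return sorted(set(selected))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    wl = np.arange(325, 1076)                       # Vis/NIR grid in nm (illustrative)
    base = np.exp(-((wl - 700.0) / 120.0) ** 2)     # synthetic spectral shape
    spectra = base + 0.01 * rng.normal(size=(90, wl.size))
    print(select_sensitive_wavelengths(spectra, wl))
```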
4,554.2
2013-02-01T00:00:00.000
[ "Chemistry", "Agricultural And Food Sciences" ]
Coatings in Decellularized Vascular Scaffolds for the Establishment of a Functional Endothelium: A Scoping Review of Vascular Graft Refinement Developments in tissue engineering techniques have allowed for the creation of biocompatible, non-immunogenic alternative vascular grafts through the decellularization of existing tissues. With an ever-growing number of patients requiring life-saving vascular bypass grafting surgeries, the production of functional small diameter decellularized vascular scaffolds has never been more important. However, current implementations of small diameter decellularized vascular grafts face numerous clinical challenges attributed to premature graft failure as a consequence of common failure mechanisms such as acute thrombogenesis and intimal hyperplasia resulting from insufficient endothelial coverage on the graft lumen. This review summarizes some of the surface modifying coating agents currently used to improve the re-endothelialization efficiency and endothelial cell persistence in decellularized vascular scaffolds that could be applied in producing a better patency small diameter vascular graft. A comprehensive search yielding 192 publications was conducted in the PubMed, Scopus, Web of Science, and Ovid electronic databases. Careful screening and removal of unrelated publications and duplicate entries resulted in a total of 16 publications, which were discussed in this review. Selected publications demonstrate that the utilization of surface coating agents can induce endothelial cell adhesion, migration, and proliferation therefore leads to increased re-endothelialization efficiency. Unfortunately, the large variance in methodologies complicates comparison of coating effects between studies. Thus far, coating decellularized tissue gave encouraging results. These developments in re-endothelialization could be incorporated in the fabrication of functional, off-the-shelf alternative small diameter vascular scaffolds. INTRODUCTION Cardiovascular diseases (CVD) describe a variety of linked pathologies affecting the heart and blood vessels and are often associated with ischemic tissue damage predominantly as a consequence of severe arterial occlusion resulting from common underlying conditions such as atherosclerosis (1). Accountable for approximately 17.9 million deaths every year with projected annual mortalities rising to a staggering 23.3 million by 2030, CVDs represent a significant public health concern and is currently the leading cause of mortality and morbidity around the globe (2,3). Despite these fatality rates, rapid identification and rectification of modifiable atherosclerotic risk factors through the employment of noninvasive treatment options such as dietary and lifestyle modifications (e.g., lipid control, smoking cessation) and pharmaceutical therapies (e.g., statins) can retard the occlusive process which greatly reduces the risk of premature CVD (4,5). Patients experiencing severe arterial occlusion, as defined by the presence of >50% stenosis in the left coronary artery or >70% stenosis in a major coronary vessel (6), may opt for invasive vascular procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass grafting (CABG) to achieve revascularization in affected arteries over extended periods of time (7). 
CABG has been associated with improved survival rates and reduced recurrence of major cardiovascular events compared to PCI, particularly in patients with multivessel occlusions, making it the preferable treatment option over PCI (8)(9)(10). The saphenous vein represents the current bypass conduit of choice for CABG procedures, owing to its ease of harvest with minimal complications, despite exhibiting poor long-term patency and high graft failure rates (∼50% failure after 10 years) compared to autologous arterial grafts such as the internal mammary artery (11,12). Nevertheless, the use of autologous vessels for vascular bypass grafting comes with its own set of issues: vessel removal could cause damage at the extraction site, and the extracted vessel may be of poor quality, thus preventing it from being used as a bypass conduit (13). Developments in tissue engineering techniques have allowed for the circumvention of these limitations, however, as the decellularization technique can be utilized to fabricate alternative biocompatible vascular scaffolds derived either from allogeneic or xenogeneic sources (14). These decellularized scaffolds can be used as conduits for vascular bypass, eliminating the need for autograft surgery. The decellularization process involves striking a fine balance between the targeted removal of cellular and nuclear material contained in existing vascular tissues and minimizing damage to the extracellular matrix (ECM) constituents (14). This can be achieved via treatment of vascular tissues with chemical agents such as detergents and alcohols, biological agents such as enzymes and chelating agents, and physical methods including freeze-thawing, hydrostatic pressure, and non-thermal irreversible electroporation (15). Successful decellularization allows vascular scaffolds to retain their biomechanical properties whilst reducing their immunogenicity (13,14,16). Inadequate cellular depopulation could lead to immune-mediated damage, which could result in graft failure. Overly aggressive decellularization is also unfavorable, as it potentially results in the elimination or disruption of critical ECM components, which would adversely affect the structural integrity and mechanical properties of the decellularized tissue (17)(18)(19). Decellularized vascular scaffolds have exhibited their capability of supporting the adhesion and development of endothelial cells (EC) and smooth muscle cells (SMC) in multiple studies (20)(21)(22), making the prospect of producing a non-immunogenic tissue-engineered alternative small diameter vascular graft very promising. A number of commercially available decellularized vascular grafts derived from bovine blood vessels and bovine ureters have been utilized in bypass surgeries in the past, but wide-scale implementation and utilization of these grafts as bypass conduits has not occurred to date due to their poor clinical outcomes post-implantation in regard to low patency and premature graft failure as a consequence of acute thrombogenicity and intimal hyperplasia (13,23). Multiple studies have implied that the absence of a luminal EC layer in decellularized vascular grafts is responsible for the development of thrombosis and ultimately premature graft failure upon in vivo implantation (13,(24)(25)(26)(27)(28)(29).
This occurs as the presence of a viable luminal endothelium aids to prevent exposed collagen from triggering the extrinsic blood coagulation pathway when coming into contact with peripheral blood, therefore an absence of ECs would result in thrombosis (29,30). Intimal hyperplasia also represents a significant factor adversely affecting the long-term patency of decellularized scaffolds (27,31,32), in which a thickening of the neovascular neointima is observed. Although the exact mechanism responsible for the development of intimal hyperplasia is not fully understood, excessive vascular SMC migration and proliferation from the vessel media to the intima along with excessive ECM protein deposition have previously been suggested to be the cause (33). The presence of a compliance mismatch between the vascular graft and its adjacent native vessel, and the absence of a functional endothelium are some additional causes of intimal hyperplasia that have been identified (34)(35)(36). Nevertheless, the risks of the aforementioned conditions can be mitigated through careful donor species, site, and tissue processing and antigen removal methodology selection alongside timely re-endothelialization with functional ECs (37,38). All tissue-engineered vascular grafts (TEVGs), including decellularized scaffolds, will encounter a certain degree of re-endothelialization post-transplantation. Current identified mechanisms reveal that a majority of the re-endothelialization processes required for the generation of an effective and persistent endothelium occurs spontaneously during in situ/in vivo regeneration, but in vitro re-endothelialization involving luminal cell seeding and maturation prior to graft implantation remains the most commonly used technique to achieve reendothelialization in TEVGs to date (39). The aforementioned in situ re-endothelialization processes include host intima ingrowth from the anastomotic regions predominantly occurring in TEVGs less than 2 cm in length (trans-anastomotic ingrowth) (40,41), migration of capillary endothelium from adventitial granulation tissue onto the intimal surface (trans-mural capillary ingrowth) (42, 43), and deposition of circulating endothelial progenitor cells (EPCs), a small subset of CD34 + mononuclear cells capable of differentiating into an EC-like phenotype whilst expressing EC-specific markers, through recognition of proteins adsorbed on the surface of graft (fallout endothelialization) (39,44,45). Upon successful re-endothelialization and cellular repopulation, Henry et al. observed graft remodeling with an influx of MHC + SMCs localized within the graft wall, and extensive matrix deposition of the ECM components collagen and elastin with a high percentage of cellular infiltration into graft (46). An increase in collagen and elastin deposition would be beneficial for the mechanical properties of a TEVG as collagen density has been shown to directly correlate with strength and stiffness whilst elastin imparts extensibility (47,48). Nevertheless, achieving and maintaining a healthy and functional endothelial lining has proven to be tricky. Depending on the length and diameter of the scaffold in question, in situ re-endothelialization mechanisms may or may not suffice for the generation of fully patent vascular grafts. 
Vascular grafts with an inner diameter larger than 6 mm have demonstrated excellent patency (93%) after a period of 5 years post-implantation without the need for active in vitro re-endothelialization (49,50), but these results cannot be extrapolated to shorter grafts with smaller inner diameters (<4 mm diameter, <5 cm length), as these grafts are extremely prone to acute thrombotic obstruction post-implantation; one study revealed that 5 of 6 rats implanted with non-endothelialized decellularized rat abdominal aortas succumbed to endothelial damage and acute thrombosis after 3 days, with a final graft patency of 50% after 14 days, while 4 out of 6 rats remained alive 14 days after implantation of the re-endothelialized graft, retaining a graft patency of 63% (51). Despite achieving confluent in vitro endothelialization prior to implantation of the re-endothelialized grafts, graft patency and rat survival rates were significantly lower than those of the control (undecellularized rat abdominal aorta; 100% patency and survival). This occurrence likely resulted from exposure to high shear stresses from the pulsatile blood flow present in the circulatory system; previous studies have reported a maximal cell loss of 70% in EC-seeded grafts within minutes of exposure to flow (52,53). As such, enhancing the efficiency of the re-endothelialization process and improving EC retention post-implantation prove to be critical for the development of functional small diameter decellularized vascular scaffolds. One approach that can be utilized to achieve these goals is the functionalization of the surface of decellularized scaffolds through the application and integration of specific proteins, such as growth factors, as bioactive surface coatings or modifications. This approach has previously been shown to result in grafts with improved patency and lower risks of thrombosis and intimal hyperplasia (32,(54)(55)(56). In the aforementioned study conducted by Hsia et al., sphingosine-1-phosphate (S1P), which has been shown to possess antithrombotic and proangiogenic properties (57), was utilized as the bioactive coating of choice (51). The results of their study demonstrated that rats implanted with re-endothelialized rat abdominal aortas coated with S1P exhibited 100% survival and patency rates as a result of increased EC migration and adhesion strength (51). This rise in EC migration could possibly be attributed to the pro-angiogenic properties of the S1P coating, as endothelial cells have a propensity to proliferate toward angiogenic stimuli (39). In principle, a combination of in vitro cell seeding in the presence of a bioactive coating appears to be a promising strategy to enhance in situ endothelial regeneration, which is ideal for achieving a persistent and functional endothelium. Thus, this article aims to systematically review the existing literature on various examples of surface modifications used to improve the efficiency and efficacy of the re-endothelialization process for the production of functional decellularized vascular scaffolds. METHODS This review was conducted in accordance with the guidelines stated in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (58) to systematically assess publications in regard to the application and effects of coatings on the re-endothelialization of decellularized vascular scaffolds.
Electronic databases including PubMed, Scopus, Web of Science (WOS), and Ovid were utilized to acquire relevant primary studies, with results included up to January 2021. The specific string used for all database searches was "((Decellularis * OR Decellulariz * ) AND (Arter * OR Vein OR Vessel)) AND (coat * ) AND (endotheliali * OR populat * ) NOT (review)" to ensure the inclusion of studies involving the coating of decellularized vascular scaffolds and cellular repopulation or re-endothelialization whilst excluding review articles. Search results were screened based on title, and only publications regarding relevant primary studies written in the English language were included. Publications qualifying through the preliminary title screening were subjected to abstract screening, and only articles with abstracts that illustrated the use of decellularized tissues or organs coated with bioactive molecules that have been shown to positively affect endothelial cell or endothelial progenitor cell recruitment/adherence/attachment/coverage or the re-endothelialization/cellular repopulation process were shortlisted for inclusion. Publications utilizing modifications that aim to improve the feasibility of decellularized grafts as potential cardiovascular therapeutics but do not affect the re-endothelialization process, such as those enhancing graft mechanical properties or reducing intimal hyperplasia/thrombosis without enhancing endothelial coverage, were deemed ineligible, as the scope of this review is to enhance the re-endothelialization process. All eligibility assessments were conducted by two reviewers to reduce author/selection bias in the article selection process, and disagreements regarding the eligibility of included publications and/or data were resolved through discussion between reviewers. Publications meeting any of the following exclusion criteria were also eliminated from further review: non-English language articles, conference abstracts, news articles, letters, editorials, case studies, and review articles. Duplicate entries were removed from the inclusion group, and a data extraction table (Table 1) was generated to summarize the following information from included publications: author, tissue decellularized, coating used, and findings. [Excerpt from Table 1 — VEGF: stimulates EC adhesion to ligands of the basement membrane, promotes formation of a functional neo-endothelium, and stabilizes the neo-endothelium against shear forces generated by the blood stream; although coated grafts showed a higher percentage of functional endothelium, the rate of neo-intimal hyperplasia was also increased, with coated grafts showing strong augmentation of neo-intimal hyperplasia between the 4th and 8th week in vivo, leading to a significantly increased intima-to-media ratio. Rat aorta, fibronectin: fibronectin coating induced medial graft repopulation without inflammatory reactions or adverse gene expressions, indicating the feasibility and potential of this strategy for the improvement of current clinically applied bioprostheses.] Search Results The primary database search yielded a total of 192 results, of which 19 articles were derived from PubMed, 24 articles from Scopus, 28 articles from WoS, and 121 articles from Ovid.
Independent screening of the initial results was conducted by two reviewers in accordance with the inclusion and exclusion criteria to reduce bias in the selection process, followed by a joint discussion which resulted in the unanimous decision to eliminate 135 unrelated articles and include 57 articles: 14 from PubMed, 14 from Scopus, 20 from WoS, and 9 from Ovid. Abstracts were carefully screened, and 20 articles were removed as they did not match the inclusion criteria, leaving a total of 37 articles in the inclusion group. Duplicate entries were screened for, and 21 articles were removed, resulting in a final tally of 16 publications, of which 11 were from PubMed, one from Scopus, one from WoS, and three from Ovid. A flow chart depicting the selection process is shown in Figure 1. Study Characteristics Decellularized biological scaffolds were utilized in all selected studies. However, the type of tissue decellularized and its origin differed. Decellularized scaffolds were modified through the addition of a coating agent prior to re-endothelialization in all included publications, but the coating agent utilized varied between each study. A summary of included publications is provided in Table 1. DISCUSSION Sixteen publications were selected from a pool of 192 results as they utilized different decellularized vascular scaffolds coated with various substances to improve re-endothelialization. Information regarding the types of decellularized vascular scaffolds and coatings utilized in the studies are summarized in Table 1. Effects of the coating agents on the re-endothelialization of decellularized vascular grafts are further discussed. Heparin The sulphated polysaccharide heparin has been well-documented as a potent, endothelial cell-binding anticoagulant agent. Heparin impedes the development of thrombosis via inhibition of specific serine proteinases involved in the blood coagulation cascade through the potentiation of antithrombin III (75). Hussein et al. (59) established that the coating of the decellularized porcine liver right lobe with heparin-gelatin, significantly increases attachment of EA.hy926 endothelial cells on vascular surfaces whilst reducing endothelial cell migration into the parenchyma. 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay results showed a 3.8-fold increase in cell attachment on heparin-gelatin coated samples compared to uncoated control samples. Heparin and gelatin coated samples yielded a smaller 2.1-fold and 2.5-fold increase, respectively. Increased EC attachment was observed in heparin-gelatin coated samples likely due to an increase presence of VEGF and bFGF, as these growth factors exhibit high affinity for heparin, and ECs have been shown to express receptors for these growth factors (59,(76)(77)(78). Musilkova et al. (67) used a different approach for their study, where decellularized human pericardium (DP) was strongly or weakly crosslinked with glutaraldehyde and/or genipin prior to coating with a fibrin mesh modified with heparin, fibronectin, heparin and fibronectin, or unmodified. Genipin acts as a natural, low-toxic crosslinking agent with prior experiments demonstrating its ability to stabilize decellularized scaffolds with minimal cytotoxicity and immunogenicity as compared to glutaraldehyde-crosslinked decellularized scaffolds (79)(80)(81)(82). 
Results from Musilkova et al.'s study indicated that glutaraldehyde crosslinking, irrespective of crosslinking strength, resulted in a reduction of seeded human umbilical vein endothelial cells (HUVEC) metabolic activity (MTA), while genipin crosslinking increased MTA across all DP samples. Highest MTA was produced in HUVECs seeded onto DP weakly crosslinked with genipin and coated with fibrin and fibronectin. MTA in samples coated with heparinmodified fibrin mesh and heparin and fibronectin-modified fibrin mesh were lower as compared to other treatments. Despite exhibiting lower MTA values, the anticoagulatory and anti-inflammatory effects of heparin-fibrin may be beneficial in inhibiting thrombosis prior to re-endothelialization (67,83). Heparin-treated decellularized porcine carotid arteries were coated with bFGF in Conklin et al.'s study (71) to analyze the impact of bFGF on the proliferation of human microvascular endothelial cells (HMEC) and canine endothelial progenitor cells (CEPC). The decellularized carotid arteries were pretreated with heparin to enhance adhesion of bFGF as it is a heparin-binding growth factor that has been previously shown to enhance EC migration and proliferation, angiogenesis, and re-endothelialization (84)(85)(86)(87). Experimental results shown that the coating improves in vitro proliferation rate of both HMECs and CEPCs, with bFGF-coated samples seeing a 2.4-fold increase in HMEC and 2.3-fold rise in CEPC cell quantities compared to uncoated samples after 4 days and 2 days, respectively. HMECs seeded on coated scaffolds were also found to be more resistant than uncoated samples when cultured under shear stresses (71). VEGF VEGF was also utilized in a number of studies as it functions as a powerful angiogenic and mitogenic agent that is capable of inducing EC migration and proliferation to accelerate the vascularization process when bound to its specific VEGF receptors (88)(89)(90)(91). A study conducted using decellularized murine descending aorta coated with VEGF established that incorporation of the growth factor facilitated the re-endothelialization process and led to decreased development of neointimal lesions in the murine aortic graft (66). This occurrence is presumably a result of VEGF's capacity to trigger differentiation of lesional progenitor cells toward an endothelial lineage (66), as evidenced by the development of endothelialspecific markers (eNOS, vWF, CD31, CD144, and VEGFR 1/2) on lesional progenitor cells cultured in vitro in the presence of VEGF (66). The authors also demonstrated that these in vitro results were extrapolatable to in vivo subjects, as local application of VEGF onto the implanted vascular graft decreased neointimal development and promoted EC localization to surface of the graft (66). Iijima et al. (68) explored the impacts of coating decellularized murine aorta with VEGF conjugated to a temperature-sensitive aliphatic polyester hydrogel (HG-VEGF) which allows for a steady and sustained release of the growth factor (92). They concluded that the sustained VEGF exposure enhanced EC adhesion to the basement membrane and promoted the formation of a functional endothelial layer, with HG-VEGF coated samples presenting 64.8 ± 7.6% EC coverage on their luminal surface 4-weeks post-treatment, while uncoated samples presented 40.4 ± 8.3% EC coverage. 
Medial repopulation was also increased in coated vessels, with the absolute cell count of coated samples at 4 weeks and 8 weeks being 7.3 ± 5.9 cells, and 22.1 ± 13.0 cells, respectively. Uncoated vessels had 0.80 ± 1.2 cells at 4 weeks and 3.2 ± 3.6 cells at 8 weeks. Despite the promising increase in re-endothelialization, use of the coating eventually resulted in neointimal hyperplasia, which led to a significantly increased intima-to-media ratio (68). Marinval et al. (63) proposed the modification of decellularized porcine heart valve scaffolds via a multi-layer application of the brown algae-derived sulphated polysaccharide, fucoidan, and VEGF (63,93). Multiple studies have shown that fucoidan promotes the adhesion, migration, and proliferation of endothelial cells, and has antithrombotic properties similar to that of heparin but carries a lower hemorrhagic risk than the former (93)(94)(95)(96). The authors disclosed that the coating led to an improvement in HUVEC adhesion accompanied by enhanced cell density and viability on the decellularized valvular scaffold in both static and dynamic cultures. Visual examination of HUVEC re-endothelialization in static culture conditions showed endothelial cells presenting as a highly connective homogenous monolayer, with a larger number of adherent, living cells present 6-h post-treatment with fucoidan/VEGF compared to untreated samples. Similar evaluations conducted on HUVECs cultured under perfusion revealed ECs aligning toward the direction of perfusion with greater endothelial cell adhesion in coated samples. Cell viability under perfusion was also significantly enhanced, with coated samples showing 4,549 ± 325 viable cells per field, while uncoated samples showed 3,343 ± 292 cells per field. Fucoidan/VEGF coating can be used to enhance re-endothelialization whilst reducing the risk of thrombosis (63). Human and rat kidneys were decellularized by Leuning and co-workers (74), and the effects of VEGF and Ang-1 coating on decellularized kidney scaffolds were investigated. Growth factor loading was determined to be an essential measure for maximal endothelial cell adherence, survival, and coverage, as samples coated with VEGF and Ang-1 exhibited increased EC adherence and viability. Human induced pluripotent stem cell-derived endothelial cells (hIPS-ECs) were also seeded onto the coated samples, and similar results were produced; VEGF and Ang-1 coating resulted in enhanced EC adherence and viability. hIPS-ECs were also seeded on decellularized human kidney, and the authors reported enhanced hIPS-EC proliferation in VEGF + Ang-1 coated samples with the potential to scale the re-endothelialization process throughout the entire kidney. Minimal vascular obstructions were also observed in coated samples, highlighting the importance of growth factor reconstitution for the re-endothelialization of decellularized kidney scaffolds (74). Cellular Communication Network Factor 1 (CCN1) CCN1 is a secreted surface-associated pro-angiogenic matricellular protein that is recognized to mediate cell adhesion and migration, and cell proliferation through integrin interaction and induction of growth factor-associated DNA synthesis respectively (97)(98)(99)(100). CCN1 has also recently been implicated to be capable of inducing the recruitment and localization of circulating CD34 + progenitor cells to the endothelial layer, contributing to the regeneration of the endothelium due to their ability to differentiate into mature ECs (101)(102)(103). 
The recruited EPCs may also positively influence angiogenesis and neovascularization through the paracrine secretion of pro-angiogenic cytokines (104,105). The study conducted by Bär et al. (61) utilized human cord blood-derived endothelial cells (hCBEC) to repopulate decellularized porcine small intestines with preserved pedicles (BioVaM) coated with CCN1. hCBEC attachment experiments were conducted on plates coated with gelatin or CCN1-enriched gelatin. Results showed significantly improved hCBEC adhesion and retention when seeded on CCN1-enriched gelatin-coated plates compared to hCBEC seeded on plates coated with only gelatin. When hCBECs were seeded on the decellularized scaffolds, enhanced re-endothelialization efficiency was observed in CCN1-coated BioVaMs compared to their uncoated counterparts (84 ± 9% cell retention vs. 47 ± 4% cell retention). DNA content analyses conducted 12 h post-re-endothelialization revealed significantly higher DNA content in coated samples (37 ± 2 µg/g) compared to uncoated samples (11 ± 3 µg/g), clearly showcasing the effects of CCN1 coating on decellularized porcine small intestine scaffolds (61). A similar study (65) conducted using decellularized equine carotid arteries coated with CCN1 observed that the CCN1 coating not only facilitates circulating endothelial cell attachment, but also induces endothelial and smooth muscle cell proliferation, neomedia formation, organized neovascularization, reduced local inflammatory reactions, and induced immunological tolerance which collectively enhances biocompatibility of decellularized vascular grafts significantly (65). Fibronectin Fibronectin (FN) represents a major ECM constituent that has the capacity to induce endothelial cell adhesion, migration, and differentiation through conjugation or adhesion with biomaterial surfaces (106). Therefore, Assmann et al. (70) and Flameng et al. (72) conducted studies utilizing FN as a coating for their decellularized vascular scaffolds to mimic what has been done in vitro to assess FN applicability in 3D. Assmann et al. (70) investigated the effects of FN on the autologous in vivo re-endothelialization of decellularized murine aortic conduits and concluded that FN greatly enhanced EC adhesion capacity and decellularized graft biocompatibility, resulting in accelerated re-endothelialization. Examination of the luminal surface 8-weeks post-treatment showed 89.9 ± 5.45% repopulation on the luminal surface of the coated samples, while uncoated samples displayed 73.6 ± 13.14% re-endothelialization. Immunofluorescence also revealed that the cells present in the luminal zone stained positive for vWF, confirming the presence of endothelial cells. Unfortunately, additional findings also suggest that FN exacerbated the development of hyperplastic neointima. Thus, FN may not represent the optimal coating agent for decellularized vascular grafts (70). Flameng et al. (72) coated decellularized ovine aortic valves with fibronectin and stromal cell-derived factor 1α (FN/SDF-1α) and noticed that the coating substantially improved re-endothelialization performance on coated decellularized samples, displaying values comparable to that of native cryopreserved aortic grafts, which exhibited 39 ± 8% re-endothelialization at the leaflet region and 37 ± 5% at the wall region of the aortic graft 5 months post-implantation in Lovenaar sheep. In contrast, uncoated decellularized samples only demonstrated 10-15% re-endothelialization within the same timeframe (72). 
Biomimetic Peptides RGD (arginine-glycine-aspartic acid) represents an extensively studied biomimetic peptide that has been widely utilized as a surface modifier for biomaterials. RGD has been shown to be capable of stimulating cell adhesion, cell migration, and cell proliferation through specific recognition and interaction with integrins (107,108). However, RGD peptides exhibit low biological activity innately, but this issue is easily counteracted via the introduction of chemical modifications to RGD to form biologically active peptides GRGDS and GRGDSPC (109). Research conducted by Wan et al. (73) modified decellularized murine pancreas with the GRGDSPC peptide to stabilize HUVECs on the scaffold. Immunofluorescent staining revealed that both GRGDSPC-conjugated samples and uncoated samples expressed Ki67 and CD31, suggesting that HUVECs effectively adhered and proliferated in groups, however, the levels of these markers were substantially higher in GRGDSPC-conjugated samples, indicating greater biocompatibility for growth and proliferation of HUVECs. The study concludes that the GRGDSPC peptide successfully binds to the pancreatic scaffold facilitating HUVEC proliferation and functional endothelialization, presumably through enhanced expression of integrins αvβ3, α5β1, and αIIβ3 (73). In their study, Lee et al. (69) incorporated a musselinspired polydopamine (pDA) coating for use on a decellularized canine vein matrix consisting of the inferior vena cava and jugular vein (DVM). pDA-coated decellularized vein matrices (pDA-DVMs) were then conjugated with the RGD and YIGSR peptides to produce (CGGRGD)-pDA-DVMs and (CGGYIGSR)-pDA-DVMs, respectively. Human cord bloodderived endothelial precursor cells (hCB-EPCs) and human embryonic stem cell-derived endothelial precursor cells (hESC-EPCs) were seeded onto pDA-coated DVMs (pDA-DVM) to analyze the effects of conjugated and unconjugated pDA coatings on the efficiency of re-endothelialization. Increased metabolic activity in hCB-EPCs seeded onto (CGGYIGSR)-pDA-DVMs were observed in comparison to uncoated DVMs and pDA-DVMs. qRT-PCR suggested enhanced precursor cell differentiation into endothelial cells on the peptide-modified DVMs, as indicated by an increased expression of endothelial specific markers, with the largest increase seen in cells seeded on (CGGRGD)-pDA-DVMs. Adhesion of hEPCs on peptide modified DVMs was also improved compared to uncoated control samples. With these results, Lee et al. confirmed that the modified PDA coating positively impacted EC adhesion and metabolic activity whilst inducing the differentiation of hEPCs into an endothelial lineage. All these factors contribute to increased re-endothelialization efficiency in decellularized vascular scaffolds (69). Other Coating Agents López-Ruiz et al. (60) demonstrated improved re-endothelialization with decreased thrombosis risk in decellularized porcine carotid arteries coated with the polymer poly(ethylmethacrylate-co-diethylaminoethylacrylate) (8g7). 8g7 has been previously described as a novel biocompatible polymer that carries the ability to facilitate and improve EC adhesion and viability (110). DAPI staining revealed cellular repopulation occurring in a similar manner in both 8g7-coated and uncoated samples, but with a significant higher number of visible cells attached to the coated sample. Platelet adhesion testing on 8g7-coated coverslips revealed reduced platelet adhesion under increasing shear stresses in a flow system. 
Additionally, ECs grown on coated coverslips exhibited increased angiogenic properties as evidenced by the formation of capillary-like tubes after 4 h. Biomechanical testing of the samples showed a substantial difference in the vessel burst pressure between native, non-decellularized arteries and 8g7-coated decellularized arteries (1,330 ± 135 mbar vs. 1,153 ± 138 mbar). Tensile strength of coated samples, however, was comparable to native arteries, while uncoated decellularized arteries showed a much lower maximum load (60). Fibrin glue (FG) is an essential biological adhesive generated from the activation of fibrinogen by thrombin and is essential in the blood coagulation cascade (111). This material was previously observed by Almelkar and colleagues to support angiogenesis, endothelial cell adhesion, migration, and proliferation (112). In their succeeding study, decellularized bovine aorta was coated with FG composed of a 1:1 ratio of fibrinogen and thrombin (62). Sheep external jugular vein endothelial cells (SEJVEC) cultured on coated and uncoated samples revealed that SEJVECs seeded onto FG-coated samples achieved full confluency in 5 days, with visual examinations showing that the seeded cells assumed a flat morphology with an expansion of filopodia, comparable to endothelial cells found in physiological conditions. Cell viability was also unaffected by the coating. In contrast, SEJVECs seeded onto uncoated samples only achieved 70% confluence after 10 days while adopting a cobblestone morphology. Immunocytochemistry revealed presence of vWF and lectin on FG-coated samples which was absent on uncoated samples. These results indicate that fibrin glue is a non-toxic coating that can be used to greatly enhance the re-endothelialization of large diameter decellularized vascular scaffolds (62). Kim et al. (64) proposed using an anti-CD31 aptamer coating to promote re-endothelialization in decellularized murine liver scaffolds. Nucleic acid aptamers are short sequences of synthesized single-stranded oligonucleotides that have low immunogenicity and high binding affinity for specific proteins, and thus, are often considered alternatives to antibodies. Compared to antibodies, however, aptamers are often easily adjusted and producible in massive amounts at a relatively low cost with limited risk of variability and contamination (113)(114)(115). Kim et al. determined that anti-CD31 aptamer coating of decellularized liver scaffolds facilitates re-endothelialization. These contribute to the development of vascular networks that can support perfusion with increased functionality and viability through the potentiation of integrin-Akt signaling cascades (64). ECs seeded onto anti-CD31 aptamer coated grafts exhibited significantly higher cell attachment compared to uncoated grafts or anti-CD31 antibody coated grafts. Attached ECs were also more resistant to shear stresses and were less likely to detach, with aptamer-coated grafts retaining 57.84 ± 2.9% cell adhesion under shear stress and uncoated grafts retaining 21.87 ± 1.2%. HUVEC endothelial coverage on aptamer-coated grafts were also significantly higher after 7 days in culture, at 76.10 ± 3.54% coverage compared to HUVECs on uncoated grafts which had 35.22 ± 7.74% endothelial coverage (64). 
Although the re-endothelialization of decellularized small diameter vascular grafts in the context of cardiovascular diseases remains the priority for this review, the included studies utilized both decellularized vascular grafts and other decellularized tissues due to the limited number of studies involving bioactive coatings or surface modifications of decellularized small diameter vascular grafts. Nevertheless, the results obtained from these studies remain valuable, as the core goal is to improve re-endothelialization efficiency and endothelial cell retention in decellularized scaffolds whilst contending against shear stresses post-implantation. As explained previously, exposure of decellularized grafts lacking a functional endothelium to the blood stream increases the risk of acute thrombogenesis and premature graft failure, and as such, improving the efficiency of the re-endothelialization process is critical to minimize contact between these surfaces and reduce the chances of the aforementioned risks. The coatings discussed in the included studies generally function to enhance in situ fallout endothelialization and transmural capillary ingrowth by increasing adhesion, migration, and proliferation while promoting angiogenesis in endothelial and/or endothelial progenitor cells. VEGF, fibronectin, and CCN1/RGD peptides represent the most utilized coatings. A similarity these molecules share is their ability to interact with various subtypes of the cell surface receptor integrin present on endothelial cells and endothelial progenitor cells, forming the basis for their effects on promoting endothelialization (65,116,117). To elaborate using VEGF as an example, the interaction between VEGF-A and its specific tyrosine kinase receptor VEGFR2 results in the autophosphorylation and thus the activation of the receptor. The association between VEGFR2 and integrin αvβ3 expressed on circulating ECs and EPCs leads to the phosphorylation of the β3 subunit of the integrin, resulting in its activation, which in turn upregulates VEGF expression, induces cell migration, and activates the mitogen-activated protein kinase (MAPK) pathway, which has been shown to be involved in progenitor cell proliferation, differentiation, and survival (117)(118)(119)(120), among many other crucial downstream processes (121)(122)(123). Fibronectin and CCN1 are both similarly able to interact with different integrin subtypes, i.e., FN and CCN1 with integrin αvβ3, FN with α5β1, and CCN1 with α6β1 (124,125), to carry out the aforementioned functions, which helps improve in situ re-endothelialization mechanisms for the production of functional decellularized vascular grafts. Some studies described in this review resorted to coating with a combination of bioactive materials or pretreating their decellularized grafts with different proteins prior to application of the main coating agent, as can be seen in Hussein et al.'s (59) study combining heparin with gelatin, and Marinval et al.'s (63) study incorporating fucoidan with VEGF. This is a promising approach that could enhance the effectiveness of existing coating materials, as the benefits of the secondary material, such as the antithrombogenic and anticoagulant properties of fucoidan and the myriad of integrin binding sites present on gelatin, could be garnered to produce a more effective coating.
The potential complications, such as cell toxicity, that could arise from combining these coatings have to be taken into consideration, however. Nevertheless, this could be an avenue for future studies aiming to improve the quality of re-endothelialization in decellularized vascular scaffolds. CONCLUSION Multiple different coating agents and their effects on re-endothelialization have been discussed in this review, and the use of coating agents is capable of inducing EC adhesion, migration, and proliferation, resulting in enhanced re-endothelialization efficiency. A schematic representation of the various positive effects of these coatings on decellularized vascular grafts is illustrated in Figure 2. [FIGURE 2 | The missing puzzle piece. Coatings of decellularized vascular grafts present a myriad of positive effects that promote cell adhesion, migration, and proliferation both on the luminal surface, enhancing re-endothelialization, and within the decellularized vascular graft, establishing a neomedia that consequently strengthens the mechanical properties of the graft. Coatings also halt thrombosis and reduce inflammatory reactions.] However, the variation in testing methodologies and types of decellularized tissues complicates the comparison of coating effects across the studies. This is illustrated when comparing the studies conducted by Iijima et al. (68) and Assmann et al. (70) with other studies employing the same coating agents, as both studies reported significant neointimal hyperplasia while the same did not occur in other studies. Thus, a more standardized analysis or set of criteria should be established to enable more robust comparison across studies. Furthermore, future studies could compare the performance of different coating agents to ascertain the optimal reagent required to produce the highest re-endothelialization efficiency. In addition, the feasibility of large-scale implementation of these coating methodologies as well as the long-term potential of coated scaffolds is also yet to be determined. Nevertheless, these findings are promising and suggest that the creation of fully functional, non-immunogenic, off-the-shelf tissue-engineered vascular graft alternatives could be feasible sooner rather than later with additional research. DATA AVAILABILITY STATEMENT The original contributions generated for this study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS NS, MY, and MA laid out the broad initial research question for this systematic review. JH, NS, and MY designed the search and refined the research question. JH performed article selection and screening, data collection and extraction, manuscript writing, and data analysis. NS oversaw, reviewed, and verified article selection, data collection, and analysis. MY and MA performed final proofreading of the manuscript. All authors contributed to the article and approved the submitted version.
8,423.8
2021-07-29T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
Physicochemical and Photocatalytic Properties under Visible Light of ZnO-Bentonite/Chitosan Hybrid-Biocomposite for Water Remediation In this investigation, a hybrid-biocomposite "ZnO-Bentonite/Chitosan" was synthesized using inexpensive and environmentally friendly materials (bentonite-chitosan) and ZnO. It was used as a photocatalyst for water remediation. The structural, optical, thermal, and morphological properties of the synthesized hybrid-biocomposite were investigated using XRD, FTIR spectroscopy, UV-vis diffuse reflectance spectroscopy, TGA, XPS, and SEM-EDS. The thermal measurements showed that the decomposition of CS was postponed progressively by adding PB and ZnO, and the thermal stability of the synthesized hybrid-biocomposite was improved. The characterization results highlighted strong interactions between the C–O, C=O, -NH2, and OH groups of chitosan and the alumina-silica sheets of bentonite on the one side, and between the functional groups of chitosan (-NH2, OH) and ZnO on the other side. The photocatalytic efficiency of the prepared hybrid-biocomposite was assessed in the presence of Methyl Orange (MO). The experiments carried out in the dark showed that the MO removal increased in the presence of the Zn-PB/CS hybrid-biocomposite (86.1%) by comparison with the PB (75.8%) and CS (65.4%) materials. The photocatalytic experiments carried out under visible light showed that the MO removal increased 268 times in the presence of Zn-PB/CS by comparison with ZnO. The hole-trapping experiments indicated that holes are the main oxidative active species involved in the MO degradation under both UV-A and visible light irradiations. Introduction Water consumption is continuously increasing due to water demand in the agricultural and industrial sectors as well as in domestic utilities. It has been reported that about two million tons of wastewater coming from agriculture and industrial plants are discharged daily into the world's natural water resources [1]. This inevitably leads to a drastic reduction of water reserves. Worse, the scarcity of water resources is accentuated by increased desertification, particularly in Mediterranean countries. To meet the challenge of water supply, wastewater treatment plants should be installed, and the reutilization of treated wastewater should be considered as an additional water resource in these countries' water policies. In this regard, several wastewater treatment processes are currently used, such as coagulation/flocculation, ozonation, adsorption, and Advanced Oxidation Processes (AOPs) [1][2][3][4]. The AOPs have proven their effectiveness as innovative wastewater treatment technologies. These processes are based on the in situ generation of highly reactive transitory species (i.e., HO•, O2•−, e−) for the mineralization of refractory organic compounds [4]. Among the AOPs, heterogeneous photocatalysis has demonstrated high efficiency in degrading a large number of refractory organics into readily biodegradable compounds [1,5,6]. The Synthesis of the Materials 2.2.1. ZnO Nanoparticles The ZnO nanoparticles were prepared using the precipitation method following the procedure described by El Mragui et al. [8]. The required amount of zinc acetate was dissolved in 100 mL of distilled water. Then, 20 mL of an aqueous NaOH solution (1 M) was added dropwise at 50 °C over 60 min. The obtained suspension was centrifuged, a white precipitate was collected, and then washed until the pH became neutral.
The obtained precipitate was dried at 100 °C overnight, then ground and calcined at 500 °C for 3 h. Bentonite/Chitosan "PB/CS" The synthesis of the PB/CS biocomposite was done as follows. An amount of chitosan was dissolved in an aqueous acetic acid solution (1.5% v/v). Simultaneously, a quantity of PB, corresponding to 7 wt% in the final PB/CS biocomposite, was dispersed in 100 mL of distilled water and then added dropwise to the suspension of chitosan. The mixture was stirred continuously at 50 °C for 24 h. The resulting material was centrifuged, washed with distilled water, and then dried at 55 °C for three days. ZnO-Bentonite/Chitosan "Zn-PB/CS" The ZnO-bentonite/chitosan hybrid-biocomposite was synthesized in two steps. The first one was dedicated to the preparation of the ZnO/bentonite material containing 7 wt% of ZnO, based on the results reported by Fatimah et al. [13]. These authors showed that the insertion of 5 wt% of ZnO in a montmorillonite matrix increased the adsorption capability of ZnO/montmorillonite and helped to enhance the rate of the photooxidation reaction. During this step, an aqueous suspension of PB was prepared by dispersing the necessary amount of PB in 100 mL of water and stirring for 12 h. Then, a suspension containing the desired amount of ZnO was prepared and added dropwise to the PB suspension, while stirring at 50 °C for 1 h. The second step concerned the synthesis of the hybrid-biocomposite containing 7 wt% of Zn-PB. The weight ratio used was based on a previous study [5], which reported that the optimal photocatalytic performance of the ZnO/Chitosan composite was achieved by adding 7 wt% ZnO to the chitosan biomaterial. Another study [29] reported that the biocomposite containing 5 wt% of bentonite provided better thermal and adsorptive properties. The preparation procedure consisted of adding the resulting mixture from the first step dropwise to 250 mL of an aqueous acetic acid solution (1.5% v/v) containing CS. The obtained mixture was stirred at 50 °C for 24 h, and then centrifuged. The resulting hybrid-biocomposite (Zn-PB/CS), containing theoretically 0.5 wt% of ZnO, 6.5 wt% of PB, and 93 wt% of CS, was washed and dried at 55 °C for three days. Characterization The crystalline structure of the materials was analyzed by powder X-ray diffraction (XRD) using an X'PERT MPD_PRO diffractometer (Malvern Panalytical Ltd., Malvern, United Kingdom) with Cu Kα radiation at 45 kV and 40 mA (λ = 1.5406 Å). FTIR spectra of the samples were recorded from 400 to 4000 cm−1 using an FTIR spectrometer type JASCO 4100 (Jasco International, Tokyo, Japan) and the KBr pellet method. The scanning speed was 2 mm/s, and 40 scans were accumulated with a resolution of 4 cm−1. The UV-vis diffuse reflectance spectroscopy (DRS) measurements were made on a JASCO V-570 spectrophotometer (Jasco International, Tokyo, Japan) equipped with a Labsphere DRA-CA-30I integration sphere using BaSO4 as a reference. The Differential Thermal Analysis (DTA) was performed between 20 and 400 °C under a stream of air using a Shimadzu simultaneous DTA-TG apparatus (DTG-60) (Kyoto, Japan). The heating rate was 10 °C/min. The chemical states of the elements were determined by X-ray photoelectron spectroscopy (XPS) analysis using a Kratos AXIS Ultra HAS instrument (Kyoto, Japan). The analysis was performed with a monochromatic Al Kα X-ray source (1486.7 eV).
The morphology and the chemical composition of the materials were obtained by scanning electron microscopy (Quanta 200 from FEI Company, Hillsboro, Oregon, USA) coupled with energy dispersive spectroscopy (SEM-EDS). The sample in the form of a powder was deposited onto a sample holder, and all loose particles from the sample were removed by spraying dry air on the sample. Then, the sample holder was introduced into the microscope for analysis. Pollutant Removal The photocatalytic effectiveness of the synthesized composite was assessed using methyl orange as a probe molecule pollutant. The photocatalytic reactions were carried out at room temperature (26 ± 2 • C) and pH 4 in a cylindrical beaker containing an aqueous suspension (10 −5 M of MO and 0.5 g L −1 of hybrid-biocomposite). The suspension was stirred using a magnetic stirrer in the dark for the necessary time to obtain the adsorption/desorption equilibrium. After that, the UV or visible lamp was turned on to initiate the photocatalytic reaction. Test samples were withdrawn at given times of reaction and filtered through a 0.45 µm Millipore filter. The MO concentration monitoring was carried out by measuring the absorbance at λ max = 465 nm using a UV-vis spectrophotometer (Shimadzu 2100 spectrophotometer). A low-pressure lamp (40 W, model Vilber, VL-340.BL, Eberhardzell, Germany) emitting UV radiation at 365 nm (light intensity ≈ 413 mW cm −2 ) and a commercial Feit White Compact Fluorescent lamp (23 W, cool daylight, 6500 K, 1311 Lumens, Mainhouse Electronic Co., Ltd., Xiamen, China) were used to produce UV-A and visible-light irradiations, respectively. The reactor was positioned at about 10 cm below the light source. When the visible lamp was used, the UV radiations were eliminated by placing between the reactor and the light source a chemical filter composed of an aqueous solution of sodium nitrite (0.73 M) [5]. The MO removal percentage was calculated using the following equation: MO removal (%) = 100 × (C 0 − C t )/C 0 , where C 0 and C t are the MO concentrations at the initial and t time of reaction, respectively. X-ray Diffraction The obtained XRD patterns are shown in Figure 1. The broad peaks appearing on the CS spectrum at 2θ = 9.35 and 19.25 • match well with those reported previously for CS biomaterial [5], and no further peaks are observed. The broadening of the peaks is due to the low crystallinity of the chitosan. The XRD pattern of PB sample displays peaks belonging to the montmorillonite structure (indicated by M on the spectrum of Figure 1) [26]. The intense peak at 2θ = 5.73 • corresponds to the interlamellar distance d 001 = 15.42 Å. The XRD patterns of PB/CS and Zn-PB/CS biocomposites show the two characteristic peaks of CS biomaterial (2θ = 9.35 and 19.25 • ) as well as a weak peak at 2θ = 5.73 • belonging to montmorillonite (d 001 ). Compared with XRD pattern of ZnO ( Figure S1), no peak of ZnO was observed for Zn-PB/CS hybrid-biocomposite. This is probably due to the very low amount of ZnO and/or the well intercalation of the ZnO in the interlayer of bentonite. On the other hand, the comparison of the XRD spectra of PB/CS and Zn-PB/CS shows that the intensity of the CS peak (d 020 ) increases while that of CS (d 110 ) decreases significantly. This behavior suggests that the peak appearing at 2θ = 19.25 • for PB/CS and Zn-PB/CS is due to the overlap of the peaks of CS (d 020 ) [5] and bentonite (d 100 ) [5,26] as reported in literature [27,30]. 
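The interlamellar distance quoted above for the d001 reflection can be cross-checked directly from Bragg's law, d = λ/(2 sin θ), using the Cu Kα wavelength given in the Characterization section. The short sketch below reproduces the calculation; the helper name is ours and the snippet is only a numerical check, not part of the authors' workflow.

```python
# Bragg's law check of the montmorillonite d001 spacing reported above:
# d = lambda / (2 sin(theta)) with 2-theta = 5.73 degrees and Cu K-alpha
# radiation (1.5406 angstrom).
import math

def bragg_d(two_theta_deg: float, wavelength_angstrom: float = 1.5406) -> float:
    """Interplanar spacing (in angstrom) for a first-order reflection."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

if __name__ == "__main__":
    # Prints ~15.4 angstrom, consistent with the reported d001 = 15.42 angstrom.
    print(f"d001 = {bragg_d(5.73):.2f} angstrom")
```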
FTIR Spectroscopy The FTIR spectra of the synthesized materials are shown in Figures S2, 2 and S3. The major characteristic bands can be assigned to the stretching vibrations of O-H and N-H (3460 cm−1) [5,27], the stretching vibration of C-H in -CH2 and -CH3 (2894 and 2927 cm−1) [5], the vibration of the carbonyl groups in amide I (1663 cm−1) [5], the vibration of the protonated amino groups (1580 cm−1) [5], the stretching vibration of C-N in amide III (1380 cm−1) [5], and the bending vibration of C-H in -CH2 (1320 cm−1) and -CH3 (1420 cm−1) [5]. The FTIR spectrum of ZnO (Figure S2) shows the characteristic absorption bands of ZnO würtzite appearing between 400 and 510 cm−1, and others at about 1425 and 1545 cm−1 belonging to the C-O and C=O stretching vibrations in acetate groups [9]. The FTIR spectrum of PB (Figure S3) shows bands around 3430 cm−1 and 1640 cm−1 associated respectively with the stretching and bending vibrations of OH in H2O adsorbed on the surface and between the interlayers [26]. The band at 3630 cm−1 is attributed to the stretching vibration of OH in (Al,Al)-OH, (Al,Mg)-OH or (Al,Fe)-OH [26]. The bands at 915 cm−1 and 880 cm−1 are assigned to the bending vibration of the OH group in (Al,Al)-OH and (Al,Fe)-OH, respectively [26].
The intense broad band centered at 1038 cm−1 is assigned to the Si-O vibration of the tetrahedral sheet [26]. The absorption band at 624 cm−1 is related to the perpendicular vibration of the octahedral cations (R-O-Si, where R = Al, Mg, or Fe) [26]. The band at 530 cm−1 is attributed to the bending vibration of Si-O in Si-O-Al [26]. The band at 466 cm−1 is assigned to Si-O-Fe and/or Si-O-Al vibrations [26]. Comparison of the FTIR spectra of PB, PB/CS and Zn-PB/CS (Figure 2) clearly indicates that all the characteristic bands of PB decrease in intensity or disappear for the PB/CS and Zn-PB/CS samples, particularly those of the Si-O (1038 cm−1) and (Al,Al)-OH (3630 cm−1) vibrations. On the other hand, comparison of the spectra of the samples containing chitosan (Figure 2) reveals a significant decrease in the intensity of the bands related to the C-OH stretching vibration (1150 cm−1) [31], the C-N stretching vibration of amide III (1380 cm−1), and the protonated amino group (1580 cm−1). Furthermore, the shift of the band at 1663 cm−1, belonging to the carbonyl groups of amide I, to 1640 cm−1 suggests that the -NH2 and -OH groups interact with ZnO [5] and PB [32]. The shift of the band at 1087 cm−1, belonging to the secondary -OH of chitosan, to 1070 cm−1 indicates coordination of the -OH groups with ZnO [5]. Therefore, the significant decrease in the intensity of the characteristic bands of both PB and CS, together with the shift of some characteristic bands of chitosan, suggests the establishment of strong interactions between the chitosan groups (C-O, C=O, -NH2, OH) and the aluminosilicate structure of bentonite (Al3+ and Si4+) on the one side, and between the functional groups of chitosan (-NH2, OH) and ZnO on the other side. Analogous results were reported for the complexation of ZnO with CS by several authors [5,33].

UV-Vis Diffuse Reflectance Spectroscopy
The DRS spectra of ZnO and Zn-PB/CS are shown in Figure 3. The analyzed samples present good absorption in the UV region, with a net improvement for the Zn-PB/CS hybrid-biocomposite (about 22%). In the visible domain, ZnO becomes practically transparent, while Zn-PB/CS exhibits a long tail that extends its absorption up to 500 nm.
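The band-gap values extracted in the following paragraph can be cross-checked against these absorption edges through λ (nm) ≈ 1239.84/Eg (eV); a minimal sketch:

```python
def absorption_edge_nm(band_gap_eV: float) -> float:
    """Convert a band-gap energy to the corresponding absorption-edge wavelength (lambda = hc/Eg)."""
    return 1239.84 / band_gap_eV  # hc ~ 1239.84 eV*nm

print(f"ZnO      (Eg = 3.12 eV): edge ~ {absorption_edge_nm(3.12):.0f} nm")  # ~397 nm, UV only
print(f"Zn-PB/CS (Eg = 2.73 eV): edge ~ {absorption_edge_nm(2.73):.0f} nm")  # ~454 nm, visible-light active
```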
Based on these results, it appears that the interactions established between chitosan, bentonite and the ZnO particles make the hybrid-biocomposite particles sensitive to visible light. Similar results were reported by Aadnan et al. [5] and Farzana et al. [11] for the ZnO-chitosan biocomposite. The estimated band gap energy (Eg) values of the as-prepared samples were obtained by plotting (αhν)2 versus hν (inset of Figure 3), assuming an indirect band gap transition for ZnO [5,8]. The Eg value obtained for ZnO (3.12 eV) indicates that ZnO cannot absorb wavelengths above about 400 nm, whereas that of Zn-PB/CS (2.73 eV) suggests that the hybrid-biocomposite becomes sensitive to visible light. Thereby, an improvement of the photocatalytic efficiency of the Zn-PB/CS sample under visible light is expected.

Thermogravimetric and Differential Thermal Analyses
The thermogravimetric curve of Zn-PB/CS, along with those of the CS, PB, ZnO, and PB/CS samples, is shown in Figure 4a. The thermogravimetric curves of ZnO and of the samples containing CS exhibit two main weight-loss regions. The first, below 240 °C, is accompanied by endothermic peaks (Figure 4b) due to the evaporation of adsorbed water and residual solvents [34], consistent with the FTIR results. The second, above 240 °C, corresponds to the thermal decomposition of CS as well as of the acetate groups linked to Zn [5,11,34], since a wide exothermic peak is observed for all samples containing CS (Figure 4b). It is noteworthy that the maximum decomposition temperature of CS increases progressively from 335 °C (for CS) to 353 °C (for PB/CS) and then to 360 °C (for Zn-PB/CS) (Figure 4b). This behavior demonstrates experimentally that the decomposition of CS was postponed progressively by adding PB and ZnO, and that the thermal stability of the synthesized Zn-PB/CS hybrid-biocomposite was improved.
The comparison of the weight losses observed above 240 °C for CS (58%), PB/CS (45%), ZnO (31%) and Zn-PB/CS (38%) (Figure 4a) suggests that the ZnO and PB particles tend to hinder the thermal decomposition of CS; therefore, the thermal stability of the PB/CS and Zn-PB/CS biocomposites is improved. Based on these results, it is reasonable to suggest the existence of strong interactions between the chitosan groups (C-O, C=O, -NH2, OH) and both the ZnO nanoparticles and bentonite. Analogous results were reported by Aadnan et al. [5], who indicated that the incorporation of an optimal amount of ZnO into the chitosan polymer creates a strong interaction between the amino and hydroxyl groups of chitosan and Zn2+. In addition, Hristodor et al. [35] and Kausar et al. [36] suggested that the intercalation of CS in clay allows a strong interaction between the amino and hydroxyl functional groups of CS and the silicate layers of the clay. The TGA analysis of the PB sample records a 13% weight loss between 25 and 400 °C (Figure 4a), accompanied by a wide endothermic peak (Figure 4b). This behavior is assigned to the evaporation of water molecules adsorbed onto the surface and/or into the interlayers, and is probably associated with the dehydration of the exchangeable cations [37].
X-ray Photoelectron Spectroscopy
Figure S4 gives the full-scan XPS spectra of the prepared materials, in which the main observed peaks (O1s, C1s, N1s, Zn2p, Al2p, and Si2p) were identified. The high-resolution spectra of the O1s, N1s and Zn2p signals are shown in Figure 5, while those of C1s, Al2p and Si2p are given in Figures S5-S7, respectively. The full scan confirms the presence of Al2p at 75 eV and Si2p at 103 eV [38,39] for the PB/CS and Zn-PB/CS biocomposites. The deconvolution of the O1s spectrum of CS (Figure 5) reveals the contribution of two peaks at 531.32 and 532.8 eV, which can be linked to the C=O bond and to OH and/or >C-O bonds [40]. In the high-resolution spectra of PB/CS and Zn-PB/CS, the intensity of the peak at about 532.8 eV decreases, while that at 531.3 eV increases slightly because of the contribution of the binding energy due to Si-O-Si and Al-(OH) coming from the aluminosilicate structure of bentonite [41]. Furthermore, a third peak at about 530 eV is observed for the biocomposites containing bentonite, which is attributed to the Si-O-Si and Si-O-Al bonds in PB/CS and Zn-PB/CS [41], and/or to the Zn-O bond in the Zn-PB/CS hybrid-biocomposite [5,42]. The deconvolution of the N1s spectrum of CS reveals the contribution of three peaks at 399.04, 399.71, and 400.34 eV, which can be attributed to the -NH-, -NH2 and -NH3+ groups [5,40,43], confirming the FTIR results. The high-resolution N1s spectra of the PB/CS and Zn-PB/CS samples show a slight shift of the characteristic peaks of the -NH2 and -NH3+ groups, with a significant variation in their corresponding intensities. This indicates that the functional groups of chitosan interact strongly with both ZnO [5] and the structure of bentonite, as highlighted by the FTIR, XRD, and thermal measurements. The binding energies at about 1022 and 1047 eV are attributed to Zn2p3/2 and Zn2p1/2, respectively [5], validating the presence of the Zn2+ state in the Zn-PB/CS sample. The deconvolution of the C1s XPS spectra (Figure S5) obtained for the samples containing CS highlights the contribution of three peaks. The first, at about 285 eV, is attributed to C-C (sp3) bonds [43,44]; the second, at 286 eV, is associated with C-O and/or C=O groups [43,44]; and the third, at about 289 eV, is assigned to C=O and/or O-C-O bonds [43,44]. Compared with the spectrum of CS, the intensity of the peak at 285 eV clearly increases for PB/CS and Zn-PB/CS, while the opposite behavior is observed for the peak at 286 eV. Meanwhile, the intensity of the peak at 289 eV remains practically unchanged. This suggests that the C-O and C=O groups of chitosan interact with the aluminosilicate structure of bentonite. Figure S6 shows that the position of the Al2p peak (74.7 eV) for PB/CS and Zn-PB/CS is slightly lower than for PB (75.06 eV).
A similar observation is made for the Si2p peak (Figure S7), which shifts slightly from 103.1 eV (for PB) to 102.8 eV (for PB/CS and Zn-PB/CS). All of these results, supported by those obtained by XRD, FTIR and thermal measurements, clearly indicate the existence of strong interactions between the C-O, C=O, -NH2, and OH groups of chitosan and the alumina-silica sheets of bentonite on the one side, and between the chitosan functional groups (-NH2, OH) and ZnO on the other side. Scheme 1 illustrates the interactions existing between the ZnO nanoparticles, the functional groups of chitosan, and bentonite in the hybrid-biocomposite system. In this system, it is very plausible to assume that the ZnO nanoparticles interact with the -NH2 and -OH groups of chitosan [5,33] to form a ZnO-chitosan complex, as indicated by the XRD, FTIR, and XPS results. On the other hand, the chitosan polymer interacts with the alumina-silica sheets of bentonite via the C-O, C=O, -NH2, and OH groups, as highlighted by the XPS analysis. From Figure 6c,d, it can be observed that the surfaces of the PB/CS and Zn-PB/CS hybrid-biocomposites become less compact and rougher in comparison with CS, and present heterogeneous structures with holes and fractures. The presence of bentonite particles scattered over the surface is readily apparent. The EDS spectra (Figure S8) highlight the presence of the different oxides of bentonite for the PB and PB/CS samples, along with ZnO for the Zn-PB/CS sample. Moreover, the EDS mapping analyses (Figure S9) clearly show the homogeneous dispersion of Zn on the surface of the Zn-PB/CS hybrid-biocomposite, promoting interaction between ZnO, bentonite and chitosan.

Photocatalytic Tests
The photocatalytic efficiency of the prepared hybrid-biocomposite was evaluated using methyl orange as a probe molecule. In order to have a fair evaluation, experiments were also carried out in the presence of the ZnO, CS and PB samples. Before lighting the UV-A or visible lamps, the aqueous suspension containing 0.5 g L−1 of ZnO, CS, PB, or Zn-PB/CS was stirred in the dark for the time necessary to achieve the adsorption/desorption equilibrium. Figure 7 shows the MO removal curves as a function of reaction time under UV-A and visible light. The results obtained in the dark (Figure 7a,b) show that the MO removal is significantly improved in the presence of the Zn-PB/CS hybrid-biocomposite compared with the ZnO, PB and CS materials: the final MO removal obtained with PB (75.8%) and CS (65.4%) increases significantly in the presence of Zn-PB/CS (86.1%). Under irradiation, the results prove clearly that the photocatalytic efficiency of Zn-PB/CS is improved by 3% under UV-A and by 268% under visible light. This behavior is expected, taking into account the DRS results, which showed an improvement of the absorbance of the Zn-PB/CS hybrid-biocomposite (+22%) by comparison with the ZnO nanoparticles. Therefore, the very impressive improvement of the MO removal (+268%) observed under visible light can be ascribed to the reduction of the band gap energy (2.73 eV) by comparison with ZnO (3.12 eV). Many studies dealing with the photocatalytic performance of various composites containing chitosan modified by ZnO and/or montmorillonite have been reported [5,12,28,45]. They indicated that the improvement of the photocatalytic activity of ZnO-chitosan composites can be ascribed to the synergistic effect of both the reduction of the band gap energy and the separation of the charge carriers.
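For reference, the MO removal defined in the experimental section can be evaluated directly from the absorbance readings at 465 nm, assuming the usual Beer-Lambert proportionality between absorbance and concentration; the readings below are illustrative, not the authors' raw data.

```python
def mo_removal_percent(abs_initial: float, abs_t: float) -> float:
    """MO removal (%) = 100 * (C0 - Ct) / C0, with C taken proportional to absorbance at 465 nm."""
    return 100.0 * (abs_initial - abs_t) / abs_initial

# Illustrative absorbance readings before and after the dark adsorption stage
a0, at = 0.820, 0.114
print(f"MO removal: {mo_removal_percent(a0, at):.1f} %")  # 86.1 %
```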
In this study, taking into account the XRD, FTIR, DRS, DTA, and XPS results, it is reasonable to assume that the best MO conversion, obtained under visible-light irradiation in the presence of Zn-PB/CS, can be attributed to the decrease of the Eg observed for the hybrid-biocomposite, as well as to the blocking of the recombination of the charge carriers, which is promoted by the strong interactions established between the C-O, C=O, -NH2, and OH groups of chitosan and the aluminosilicate structure of bentonite on the one side, and between the functional groups of chitosan (-NH2, OH) and ZnO on the other side. In order to identify the primary active species engaged in the degradation of MO under UV-A and visible light for the prepared hybrid-biocomposite, scavenging experiments for some active species (e−, h+, HO•) were carried out, and the results were compared with those obtained in the absence of any scavenger. The trapping experiments were conducted using K2S2O8 as an electron scavenger, KI as a hole scavenger and isopropyl alcohol as a hydroxyl radical scavenger [46]. As shown in Figure 8, the addition of the electron and hydroxyl radical trappers slightly increases the MO removal under both UV-A and visible light, indicating that these two active species do not play the major role in the photocatalytic process. In contrast, the addition of KI inhibits the MO degradation significantly, by about 37% under both UV-A and visible light, indicating clearly that the holes are the main oxidative active species involved in the MO degradation. Analogous results have been reported for a ZnO-chitosan biocomposite used as a photocatalyst for MO degradation under UV and visible light [5]. Therefore, the main step of the suggested mechanism of the photocatalytic degradation of MO under UV-A and visible light in the presence of the Zn-PB/CS hybrid-biocomposite can be written as: MOads + OH•ads → R•ads → degradation products.
Conclusions
In this investigation, a novel "ZnO-Bentonite/Chitosan" hybrid-biocomposite was synthesized using natural materials (bentonite and chitosan) and ZnO nanoparticles. The eco-friendly hybrid-biomaterial was used as a photocatalyst for water decontamination. The thermal measurements showed that the decomposition of CS was postponed progressively by adding PB and ZnO, and that the thermal stability of the synthesized Zn-PB/CS hybrid-biocomposite was improved. The XRD, FTIR, DRS, XPS, and SEM results highlighted the existence of strong interactions between the C-O, C=O, -NH2, and OH groups of chitosan and the aluminosilicate structure of bentonite (Al3+ and Si4+) on the one side, and between the functional groups of chitosan (-NH2, OH) and ZnO on the other side. The experiments carried out in the dark showed that the MO removal was significantly improved in the presence of the Zn-PB/CS hybrid-biocomposite (86.1%) by comparison with the PB (75.8%) and CS (65.4%) materials. The photocatalytic experiments carried out under visible light showed that the MO removal was strongly increased (by 268%) in the presence of Zn-PB/CS. The radical-trapping experiments suggested that the MO photocatalytic degradation under both UV-A and visible-light irradiations involves holes as the main oxidative active species.

Data Availability Statement: No new data were created or analyzed in this study. Data sharing is not applicable to this article.
8,454.6
2021-12-29T00:00:00.000
[ "Materials Science" ]
Fracture toughness and structural evolution in the TiAlN system upon annealing Hard coatings used to protect engineering components from external loads and harsh environments should ideally be strong and tough. Here we study the fracture toughness, KIC, of Ti1−xAlxN upon annealing by employing micro-fracture experiments on freestanding films. We found that KIC increases by about 11% when annealing the samples at 900 °C, because the decomposition of the supersaturated matrix leads to the formation of nanometer-sized domains, precipitation of hexagonal-structured B4 AlN (with their significantly larger specific volume), formation of stacking faults, and nano-twins. In contrast, for TiN, where no decomposition processes and formation of nanometer-sized domains can be initiated by an annealing treatment, the fracture toughness KIC remains roughly constant when annealed above the film deposition temperature. As the increase in KIC found for Ti1−xAlxN upon annealing is within statistical errors, we carried out complementary cube corner nanoindentation experiments, which clearly show reduced (or even impeded) crack formation for annealed Ti1−xAlxN as compared with their as-deposited counterpart. The ability of Ti1−xAlxN to maintain and even increase the fracture toughness up to high temperatures in combination with the concomitant age hardening effects and excellent oxidation resistance contributes to the success of this type of coatings. We carried out cantilever deflection (and cube corner nanoindentation experiments) to study the evolution of the fracture toughness of up to 1000 °C ex-situ vacuum annealed Ti 1−x Al x N free-standing films and correlated them with the film structural evolution and the mechanical properties, hardness (H) and Young's modulus (E), obtained from independent experiments. The mechanical properties were corroborated with HRTEM investigations to give atomic scale insights into the thermally decomposed Ti 1−x Al x N structure. TiN coatings are used as a benchmark, as no decomposition processes are active that would lead to the formation of new nm-sized domains. Results Structural evolution. Energy dispersive X-ray spectroscopy (EDXS) analysis rendered a chemical composition of Ti 0.40 Al 0.60 N. Due to the specific sputter condition of the Ti 0.5 Al 0.5 compound target, the coatings prepared are slightly richer in Al than the target for the deposition parameters used 15 . The oxygen content within the coatings is below 1 at.%, as obtained by elastic recoil detection analysis of coatings prepared under comparable conditions 15 . Figure 1a shows the X-ray diffraction patterns of our Ti 0.40 Al 0.60 N films grown onto Al 2 O 3 (1102) substrates after vacuum annealing at different annealing temperatures, T a , for 10 min. Up to 750 °C, Ti 0.40 Al 0.60 N maintains its single phase face-centered cubic (rock-salt-type, B1) structure. The slight peak shift to higher 2θ angles and decrease in peak broadening indicate recovery of built-in structural point and line defects, which results in a lattice parameter decrease in the films. The peak shift to higher 2θ angles also suggests B1 AlN formation (its lattice parameter is smaller as compared to Ti 0.40 Al 0.60 N 16 , hence the diffraction peaks occur at higher 2θ angles). Between 850 and 1000 °C, an asymmetric peak broadening is observed, which indicates isostructural formation of cubic AlN-and TiN-rich domains. 
Especially, the right shoulder in vicinity of the cubic (200) peak -indicative for cubic AlN formation -is clearly visible and becomes more pronounced with increasing temperature. Hexagonal (wurtzite-type, B4 structured) AlN first emerges at 850 °C and its phase fraction increases with increasing temperature. The shift of the XRD reflections from the major cubic structured Ti 1−x Al x N matrix phase to lower 2θ angles is a result of decreasing Al content (hence, the XRD peaks shift towards the lower 2θ position of TiN). On the other hand, compressive stresses, e.g., induced by the B1 to B4 phase transformation of AlN 17 under volume expansion of ~26% 16 or by thermal stresses, contribute to the peak shift to lower 2θ angles (for the thermal expansion coefficients, α, holds as α B1-(Ti,Al)N > α Al2O3 > α B4-AlN , see refs [18][19][20]. The structural evolution of single phase cubic structured TiN, Fig. 1b, is dominated by recovery of built-in structural point and line defects and results in smaller lattice parameters. Accordingly, the peaks are shifted to larger 2θ angles and become sharper with increasing temperature. Both, Ti 0.40 Al 0.60 N and TiN crystallized in a TEM/HRTEM study. TEM studies were performed on the sample annealed at 900 °C using cross-section samples. A low-magnified image presents an overview of the coating morphology (Fig. 2a), where columnar grains are clearly visible. At this annealing temperature, AlN based hexagonal phases emerge. An atomic resolution TEM image of one portion of grain interfaces are shown in Fig. 2b, the corresponding fast Fourier transforms (FFTs) are seen on the right-hand side. Analysis indicates that a cubic structured Ti 1−x Al x N grain is oriented along the [001] direction while the adjacent hexagonal AlN grain is close to [2110] direction, with an orientation relationship of Ti 1−x Al x N (220)//AlN (0001). This implies that hexagonal AlN (0001) grows on Ti 1−x Al x N (220) planes along this direction. The corresponding FFTs clearly signify the plane relationship between these two phases. This has also been proved by tilting the grains to another orientation. Figure 2c shows one hexagonal AlN grain, grown in between two cubic Ti 1−x Al x N grains, viewed along the [1120] direction while Ti 1−x Al x N is off [001] zone axis, as illustrated in the corresponding FFTs (inserted). Here, only a series of planes appear. The orientation relation is Ti 1−x Al x N (220)//AlN (1100) for this case. It is further noted that the planes in hexagonal AlN are severely distorted or inclined which means that internal stress is strongly involved during the phase transformation. There are numerous defects present in the hexagonal AlN regions, for instance stacking faults and nano-twins marked exemplarily with white arrows in Fig. 2c. In some cases, the AlN phase seems to form in the Ti 1−x Al x N matrix, i.e. Fig. 2b, since the FFT from AlN contains Ti 1− x Al x N spots. However, hexagonal AlN frequently forms at the grain boundary as demonstrated in Fig. 2c, in which the hexagonal AlN and Ti 1−x Al x N phases are separated and formed in between two Ti 1−x Al x N grains. Consequently, the AlN phase transformation (from cubic to hexagonal) can take place in the matrix and also at the grain boundaries, in agreement with earlier studies 21 . Nanoindentation. The mechanical properties as a function of annealing temperature are presented in Fig. 3 and are in line with previous studies reported in literature 9 . The indentation hardness (H), Fig. 
3a, increases for Ti0.40Al0.60N (red curves) by ~9%, from 34 ± 1 GPa in the as-deposited state to 37 ± 2 GPa at 900 °C, before decreasing again to 28 ± 2 GPa at 1000 °C. The Young's modulus (E), Fig. 3b, shows a similar trend. In contrast, the hardness of TiN (blue curves) steadily decreases with increasing Ta, from 32 ± 1 GPa at room temperature to 27 ± 1 GPa at 850 °C (Fig. 3a), while the Young's modulus decreases only marginally (Fig. 3b). The chosen deposition conditions used in the present study resulted in coatings with excellent mechanical properties in the as-deposited state. In general, age hardening effects are more pronounced for softer coatings; e.g., a relative increase of ~25% was observed for Ti1−xAlxN with an as-deposited hardness of 'only' ~26 GPa 21. The elastic strain to failure [22][23][24][25][26], (H/E), which is often used to qualitatively rate materials for their failure resistance, suggests superior properties of Ti0.40Al0.60N compared with TiN. The load-displacement curves recorded during the micro-cantilever bending tests are shown in Fig. 4a. (Please note that the actual cantilever dimensions, lever arms, and pre-notch depths differ from sample to sample. Hence, Fig. 4a does not allow direct ranking of the samples with respect to their stiffness and fracture toughness.) Figure 4b shows a typical free-standing cantilever. The substrate material had been removed by focused ion beam milling to avoid the influence of residual stresses and substrate interference. Scanning electron micrographs of the post-mortem fracture cross-sections, Fig. 4c,d, do not show discernible changes of the film morphology upon annealing. However, the structure of TiN (Fig. 4d) appears more columnar-grained in comparison with Ti0.40Al0.60N (Fig. 4c). The KIC values, as calculated from the maximum load at failure, the actual pre-notch depth, and the cantilever dimensions using a linear elastic fracture mechanics approach 27, are presented in Fig. 5. The data suggest an increase in KIC from 2.7 ± 0.3 MPa·√m in the as-deposited state to 3.0 ± 0.01 MPa·√m at 900 °C, followed by a decrease to 2.8 ± 0.4 MPa·√m at 1000 °C (red curve). The relative increase of ~11% in fracture toughness of Ti0.40Al0.60N is similar to the relative increase in hardness of ~9%. Please note, however, that strictly speaking the increase in fracture toughness is within statistical error. Interestingly, the pronounced decrease in hardness at 1000 °C due to wurtzite AlN formation is not observed for KIC, which, in agreement with the H/E criterion, only slightly decreases. Lower KIC values of ~1.9 MPa·√m are found for as-deposited and annealed TiN (blue curve). To qualitatively prove that KIC increases upon annealing, we carried out independent cube corner nanoindentation experiments on coated Al2O3 (1102) substrates. Scanning electron microscopy images of the indents show reduced (or even impeded) crack formation for annealed Ti1−xAlxN samples as compared to the as-deposited counterpart, see Fig. 6. Please note that in the cube corner experiment, residual stresses (e.g., massive compressive residual stresses forming due to the cubic to wurtzite AlN phase transformation under volume expansion) and the underlying substrate can influence the formation of cracks.
Discussion The structural evolution of supersaturated cubic Ti 1−x Al x N upon annealing has been experimentally proven in the literature by atom probe tomography 11,28 , small angle X-ray scattering 29 , transmission electron microscopy 30 , and described by phase field simulations 30 : During the early stage, very few nanometer-sized B1 AlN-and TiN-rich domains form in a coherent manner (that is, the crystallographic orientation of the domains correspond to that of the Ti 1−x Al x N parent grain). With progressive annealing time, the domains gain in size and the compositional variations become more pronounced, so that the modulation amplitudes (Ti-and Al-rich) become larger. If the annealing is continued for too long or performed at higher temperatures, coherency strains are relieved by misfit dislocations. Eventually, cubic structured AlN-rich domains transform into the softer but thermodynamically stable (first (semi) coherent then incoherent) hexagonal AlN. The cubic to hexagonal AlN phase transformation is associated with a large volume expansion of ~26% 16 . Thermally-induced hardening effects in the TiAlN system have been attributed to coherency strains 9 . Coherency strains hinder the movement of dislocations 31 , as it is more difficult for dislocations to passage through a strained than a homogenous lattice. In addition, the coherent domains differ in their elastic properties due to the strong compositional dependent elastic anisotropy of Ti 1−x Al x N 32 , which also hinders the dislocation motion and contributes to the hardness enhancement 32 . The structural evolution observed in the present study is in line with the literature reports mentioned above. Additionally, we have evidenced severely distorted or inclined lattice planes and numerous defects (including stacking faults) in the hexagonal AlN phase by HRTEM investigations (Fig. 2). This could explain why the measured hardness at 900 °C is relatively high despite the presence of the "soft" hexagonal AlN phase, which is usually reported to deteriorate the hardness. We have been able to show that besides age hardening effects, the fracture toughness increases upon annealing. Both properties show a similar relative increase of around 10% as compared to the as-deposited state and peak at the same temperature of 900 °C. This suggests that similar microstructural characteristics are responsible for the enhancement of the mechanical properties. We could demonstrate in an earlier study 3 that a coherent nanostructure composed of alternating materials has the potential to enhance the fracture toughness for a certain bilayer period of a few nanometers. In the superlattice films, also coherency strains 33,34 and variations in the elastic properties are present. It should be mentioned, however, that in contrast to the hardness, the fracture toughness is not primarily governed by the hindrance of dislocation motion: the load-displacement data collected during the cantilever deflection experiments (Fig. 4a) suggest a linear elastic behavior until failure with no indications of plastic deformation. In agreement with literature reports 21 , we found that cubic AlN forms preferentially at high diffusivity paths such as grain boundaries. If grain boundaries represent the weakest link where cracks preferentially propagate 35 , grain boundary reinforcement 36 has the potential to effectively hinder the crack propagation. 
Another important mechanism for increased fracture toughness is phase transformation toughening, which is omnipresent in partially stabilized zirconia bulk ceramics 12, for example. For Ti1−xAlxN coatings, the spinodally formed cubic structured AlN-rich domains represent the phase with the ability to undergo a martensitic-like phase transformation from the metastable cubic structure to the stable wurtzite-type (w) variant. The associated volume expansion of ~26% 16 slows down or closes advancing cracks, leading to a significant KIC increase. Therefore, the evolution of KIC with Ta of our Ti0.40Al0.60N coatings is not proportional to that of H with Ta, especially at temperatures above 850 °C. The hardness significantly decreases for an increase of Ta from 950 to 1000 °C, as the w-AlN formation also increases significantly (please compare Figs 1 and 3a), but at the same time the fracture toughness KIC only slightly decreases. The KIC value of 2.8 ± 0.4 MPa·√m after annealing at 1000 °C is still above that of the as-deposited state (with KIC = 2.7 ± 0.3 MPa·√m), whereas the hardness of H = 28 ± 2 GPa after annealing at 1000 °C is significantly below the as-deposited value of 34 ± 1 GPa. Hence, other effective mechanisms are present in this type of material, especially when decomposition of the supersaturated matrix phase occurs and w-AlN based phases are able to form. Note that in the chosen free-standing cantilever setup macro-stresses are relieved and thus do not contribute to the observed toughness enhancement. However, due to the extensive difference in molar volume between cubic and wurtzite AlN, the thermally-induced formation of hexagonal AlN results in pronounced compressive stresses 17,37 in applications where the coatings are firmly attached to a substrate/engineering component. Compressive stresses result in apparent toughening of Ti1−xAlxN, as the coating can withstand higher tensile stresses before cracks are initiated (the compressive stresses have to be overcome first before crack formation). The effect of compressive stresses on the fracture toughness is supposed to be much more pronounced than their influence on the hardness. This is why, in real applications, the KIC increase upon annealing is expected to be significantly larger than the KIC enhancement found from free-standing micro-cantilever bending tests. This is reflected in the reduced crack formation observed in the cube corner experiments, see Fig. 6. As the 'inherent' fracture toughness enhancing effects are strongly connected with the spinodal decomposition, we anticipate that alloying [38][39][40][41] and other concepts to modify the spinodal decomposition characteristics (formation of coherent cubic AlN domains at lower temperatures but delayed formation of the thermodynamically stable wurtzite AlN phase, different shape and size of cubic AlN domains) are applicable to optimize the self-toughening behavior. In general, alloying has the potential to enhance the inherent toughness by modifying the electronic structure and bonding characteristics 42,43. The peak in hardness and fracture toughness at 900 °C corresponds to spinodally decomposed TiAlN with fractions of hexagonal AlN, as indicated by XRD (Fig. 1) and TEM (Fig. 2). The severely distorted hexagonal AlN with multiple stacking faults suggests that nano-twinning might also become a relevant mechanism.
The presence of twins impedes dislocation motion and induces strengthening, but multiple twinning systems can also enhance ductility by acting as carriers of plasticity 14. Based on our results we propose that the additional functionality of Ti1−xAlxN, i.e. the self-toughening ability at temperatures typical for many applications, contributes to the outstanding performance of Ti1−xAlxN coatings in, e.g., dry or high-speed cutting.

Methods
Sample preparation. Ti0.40Al0.60N films were deposited in a lab-scale magnetron sputter system (a modified Leybold Heraeus Z400) equipped with a 3 inch powder-metallurgically processed Ti0.50Al0.50 compound target. Polished single crystalline Al2O3 (1102) platelets (10 × 10 × 0.53 mm3) were chosen as substrate materials due to their high thermal stability and inertness, and to avoid interdiffusion between film and substrate materials upon annealing up to 1000 °C. Before the deposition, the substrates (ultrasonically pre-cleaned in acetone and ethanol) were heated within the deposition chamber to 500 °C, thermally cleaned for 20 min and sputter cleaned with Ar ions for 10 min. The deposition was performed at the same temperature in a mixed N2/Ar atmosphere with a gas flow ratio of 4 sccm/6 sccm and a constant total pressure of 0.35 Pa by setting the target current to 1 A (DC) while applying a DC bias voltage of −50 V to the substrates. The films were grown to a thickness of about 1.8 µm with an average deposition rate of about 75 nm/min. The base pressure was below 5·10−6 mbar. TiN coatings of about 1.2 µm were synthesized by powering a 3 inch Ti cathode with 500 W within an N2/Ar gas mixture (flow ratio of 3 sccm/7 sccm, constant total pressure of 0.4 Pa) and applying a bias voltage of −60 V to the substrates. The deposition rate was about 13 nm/min. Energy dispersive X-ray spectroscopy (EDXS) measurements of the films were performed with an EDAX Sapphire EDS detector inside a Philips XL-30 scanning electron microscope. Thin film standards characterized by elastic recoil detection analyses were used to calibrate the EDX measurements. The films on Al2O3 were annealed in a vacuum furnace (Centorr LF22-2000, base pressure <3·10−3 Pa) at different maximum temperatures (Ta) between 750 and 1000 °C using a heating rate of 20 °C min−1 and passive cooling. At Ta, the temperature was kept constant for 10 min. Structural investigations of the coated Al2O3 substrates were performed by X-ray diffraction in symmetric Bragg-Brentano geometry using a PANalytical X'Pert Pro MPD diffractometer (Cu-Kα radiation). Cross-sectional TEM specimens were prepared using a standard TEM sample preparation approach including cutting, gluing, grinding and dimpling; finally, Ar ion milling was carried out. A JEOL 2100F field emission microscope operated at 200 kV and equipped with an image-side CS-corrector, providing a resolution of 1.2 Å, was used. The aberration coefficients were set to be sufficiently small, i.e. CS ~ 10.0 μm. The HRTEM images were taken under a slight over-focus and were carefully analysed using the Digital Micrograph software.

Micromechanical testing. The mechanical properties, hardness and indentation modulus, were measured using a UMIS nanoindenter equipped with a Berkovich tip. At least 30 indents per sample, with increasing loads from 3 to 45 mN, were performed. The recorded data were evaluated using the Oliver and Pharr method 44.
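For orientation, the Oliver and Pharr evaluation referred to above reduces, for an ideal Berkovich tip, to the relations sketched below; this is a generic illustration with assumed correction factors (β ≈ 1.034, ε = 0.75) and purely illustrative input values, not the authors' analysis code.

```python
import math

def oliver_pharr_berkovich(p_max_mN: float, stiffness_mN_per_nm: float, h_max_nm: float,
                           beta: float = 1.034, eps: float = 0.75):
    """Hardness and reduced modulus (GPa) from peak load, unloading stiffness S and max depth."""
    h_c = h_max_nm - eps * p_max_mN / stiffness_mN_per_nm   # contact depth, nm
    area_nm2 = 24.5 * h_c ** 2                               # ideal Berkovich area function
    hardness = p_max_mN / area_nm2 * 1e6                     # mN/nm^2 -> GPa
    e_reduced = (math.sqrt(math.pi) / (2 * beta)) * stiffness_mN_per_nm / math.sqrt(area_nm2) * 1e6
    return hardness, e_reduced

h, er = oliver_pharr_berkovich(p_max_mN=10.0, stiffness_mN_per_nm=0.40, h_max_nm=150.0)  # illustrative numbers
print(f"H ~ {h:.0f} GPa, Er ~ {er:.0f} GPa")
```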
To minimize substrate interference, only indents with indentation depths below 10% of the coating thickness were taken into account. The cube corner experiments were carried out with the UMIS nanoindenter using a peak indentation load of 150 mN. The high load needed to create cracks resulted in indentation depths of about 1.3 µm in the cube corner experiment. The fracture toughness was determined from micromechanical cantilever bending tests of free-standing film material. As-deposited and annealed coated Al2O3 samples were broken and their cross-sections carefully polished. The substrate material was removed by Focused Ion Beam (FIB) milling perpendicular to the film growth direction using a FEI Quanta 200 3D DBFIB workstation. Then the sample holder was tilted by 90° and cantilevers were milled perpendicular to the film surface. The cantilever dimensions of ∼t × t × 6t μm3, with t denoting the film thickness, were chosen based on the guidelines reported in Brinckmann et al. 45. For the final milling step, the ion beam current was reduced to 500 pA; the initial notch was milled with 50 pA. To circumvent the problem of a finite notch root radius affecting the fracture toughness measurements, bridged notches according to Matoy et al. 27 were used (the notch length was chosen to be ∼0.75t). The micromechanical experiments were performed inside a scanning electron microscope (FEI Quanta 200 FEGSEM) using a PicoIndenter (Hysitron PI85) equipped with a spherical diamond tip with a nominal tip radius of 1 μm. The micro-cantilever beams were loaded displacement-controlled at 5 nm/s with the loading axis perpendicular to the film surface. At least 3 tests were conducted per annealing temperature. The fracture toughness, KIC, was determined using linear elastic fracture mechanics according to the formula given in ref. 27, KIC = (Fmax·L)/(B·W^(3/2))·f(a/W). In the equation, Fmax denotes the maximum load applied, L the lever arm (distance between the notch and the position of loading), B the width of the cantilever, W the thickness of the cantilever, a the initial crack length (measured from the post-mortem fracture cross-sections), and f(a/W) the dimensionless geometry function of the notched beam.
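A minimal sketch of how a KIC value follows from these measured quantities is given below; the polynomial geometry factor is the fit commonly quoted for the Matoy et al. bridged-notch geometry (ref. 27) and should be treated as an assumption to be checked against that reference, and the numerical inputs are purely illustrative.

```python
def k_ic_cantilever(f_max_N: float, L_m: float, B_m: float, W_m: float, a_m: float) -> float:
    """K_IC = F_max*L / (B*W**1.5) * f(a/W) for a pre-notched micro-cantilever (LEFM)."""
    x = a_m / W_m
    f_geom = 1.46 + 24.36 * x - 47.21 * x**2 + 75.18 * x**3   # assumed Matoy-type geometry fit
    return f_max_N * L_m / (B_m * W_m**1.5) * f_geom          # Pa*sqrt(m)

# Illustrative values for a ~1.8 um thick film (t x t x 6t cantilever geometry)
k = k_ic_cantilever(f_max_N=190e-6, L_m=9e-6, B_m=1.8e-6, W_m=1.8e-6, a_m=0.6e-6)
print(f"K_IC ~ {k / 1e6:.1f} MPa*sqrt(m)")  # ~2.8 MPa*sqrt(m)
```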
5,089
2017-11-28T00:00:00.000
[ "Materials Science", "Engineering" ]
Direct RNA Sequencing of the Complete Influenza A Virus Genome For the first time, a complete genome of an RNA virus has been sequenced in its original form. Previously, RNA was sequenced by the chemical degradation of radiolabelled RNA, a difficult method that produced only short sequences. Instead, RNA has usually been sequenced indirectly by copying it into cDNA, which is often amplified to dsDNA by PCR and subsequently analyzed using a variety of DNA sequencing methods. We designed an adapter to short highly conserved termini of the influenza virus genome to target the (-) sense RNA into a protein nanopore on the Oxford Nanopore MinION sequencing platform. Utilizing this method and total RNA extracted from the allantoic fluid of infected chicken eggs, we demonstrate successful sequencing of the complete influenza virus genome with 100% nucleotide coverage, 99% consensus identity, and 99% of reads mapped to influenza. By utilizing the same methodology we can redesign the adapter in order to expand the targets to include viral mRNA and (+) sense cRNA, which are essential to the viral life cycle. This has the potential to identify and quantify splice variants and base modifications, which are not practically measurable with current methods. Introduction Decades ago, a method was published describing the use of base-specific chemical degradation with chromatographic and autoradiographic resolution as a way of directly sequencing short stretches of RNA 1 . Since then, little progress has been made on directly sequencing RNA. Instead, the elucidation of RNA sequences is typically indirect and primarily requires methods that synthesize cDNA from RNA templates. While these methods are powerful 2 , they suffer from limitations inherent to cDNA synthesis and amplification such as template switching 3 , artifactual splicing 4 , loss of strandedness information 5 , obscuring of base modifications 6 , and propagation of error 7 . In 2009, a method for RNA sequencing was developed on the Helicos Genetic Analysis System where poly(A) mRNA is sequenced by the step-wise synthesis and imaging of nucleotides labeled with an interfering but cleavable fluorescent dye 8 . While the input material requirements for this method are extremely low, the long workflow and short reads are limiting. Nevertheless, these approaches expose two major limitations of RNA sequencing: sequencing by synthesis and short read length. Overall, current technologies for sequencing RNA templates present difficulties in the assessment of base modifications, splice variants, and analysis of single RNA molecules. Influenza viruses are negative-sense segmented RNA viruses [9][10][11] , and sequencing these viruses has played an important role in their understanding for 40 years 12,13 including the discovery of highly conserved viral RNA termini 14 (Figure 1A). These 3' and 5' termini are 12 and 13 nucleotides in length, respectively, and they are highly conserved across the PB2, PB1, PA, HA, NP, NA, M, and NS genome segments of influenza A viruses, which enabled the development of a universal primer set for influenza A virus genome amplification 15,16 . Even though these conserved vRNA termini have been readily exploited for efficient next generation sequencing (NGS) of influenza virus segments [16][17][18] , current methods retain some of the limitations inherent to cDNA-based techniques [3][4][5][6][7] . 
A new tool for long-read direct RNA sequencing could reduce these biases and greatly aid efforts to directly sequence influenza virus and other RNA viruses. Oxford Nanopore Technologies (ONT) recently released their direct RNA sequencing protocol. This method involves the sequential ligation of a reverse transcriptase adapter (RTA) and a sequencing adapter 19. The RTA is a small dsDNA molecule (Figure 1B) that contains a T10 overhang designed to hybridize with poly(A) mRNA and a 5' phosphate (Pi) that ligates to the RNA, creating a DNA-RNA hybrid. The RTA also serves as a priming location for reverse transcription of the entire length of the RNA molecule, though the cDNA generated is not sequenced. The DNA-RNA hybrid is then ligated to the sequencing adapter, which directs the RNA strand of the assembled library into the nanopore for sequencing 19. We describe direct RNA sequencing of an influenza A virus genome through modification of the recently released RNA methods from Oxford Nanopore Technologies 19 (Figure 1C), by targeting the conserved 3' end of the genome with an adapter to capture it (Figure 1D), rather than a primer to amplify it. The efficacy of the adapter is tested by sequencing the RNA genome of an influenza virus generated by reverse genetics, A/Puerto Rico/8/1934.

Nanopore sequencing. First, the RNA calibration strand enolase was directly sequenced on the MinION platform. Three sequencing experiments covered 100% of the 1,314 nucleotide long RNA molecule to an average depth of 122,207 ± 8,126 (sd). Of the 169,041 ± 28,741 reads, 98.6 ± 1.7% mapped to the reference sequence (Table 1), with 100% of the mapped reads in the sense orientation. The direction of the reads and the positive slope of the coverage diagram (Figure S1) are indicative of directional sequencing of mRNA from the 3' end. The distribution of read lengths (Figure S2 and Table S1) accurately corresponds to the expected length of 1,314 nucleotides. The read level accuracy was 90.4 ± 0.8%, and the consensus sequence was 99.7% in concordance with the known reference. Based on available details of the RTA system, it was possible to make further modifications to target other RNA species (Figure 1). To adapt this technique for the influenza virus genome, the target sequence of the RTA was changed from oligo-dT to a sequence complementary to the 12 nucleotides that are conserved at the 3' end of the RNA segments of influenza A viruses (Table S2). To test the effectiveness of the modified adapter, total RNA from allantoic fluid (crude) harvested from infected chicken eggs was sequenced via MinION. Three sequencing experiments covered 100% of the PB2, PB1, PA, HA, NP, NA, M, and NS gene segments to an average depth of 3,269 ± 1,892 (Figure 2). There is, however, reduced coverage at the extreme termini (Figure 3) and a heavy coverage bias towards the 3' terminus of the negative-sense RNA, since this approach reads from the 3' to the 5' end of the molecule. Of the 54,353 ± 15,314 reads, 98.8 ± 0.1% mapped to influenza (Table 1) in a roughly even distribution among the 8 segments (Figure S3), with 100% of the mapped reads in the negative-sense orientation. The distribution of read lengths (Figure 4 and Table S1) corresponds well to the expected length of the respective segment.
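The adapter redesign described above amounts to replacing the oligo(dT) overhang with the DNA complement of the conserved vRNA 3' terminus. A minimal sketch of that derivation is shown below; it uses the canonical 12-nucleotide terminus written 5'->3' and ignores the known position-4 degeneracy and the rest of the RTA oligo chemistry, which are not specified in this excerpt.

```python
def revcomp_dna(rna_5to3: str) -> str:
    """Return the DNA reverse complement of an RNA sequence given 5'->3'."""
    comp = {"A": "T", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(rna_5to3.upper()))

# Conserved 3' terminus of influenza A vRNA segments, written 5'->3' (i.e. 3'-UCGUUUUCGUCC-5')
vrna_3prime_12nt = "CCUGCUUUUGCU"
print(revcomp_dna(vrna_3prime_12nt))  # AGCAAAAGCAGG, the familiar Uni12 hybridization sequence
```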
The read level accuracy was 86.3 ± 0.3%, and the consensus sequence was 98.97 ± 0.01% in concordance with the consensus sequence generated using our modified version of the multi-segment reverse transcriptase polymerase chain reaction (M-RTPCR) 15,16, Nextera, and MiSeq approach. For RNA from the purified virus preparation, the distribution of read lengths (Figure S5 and Table S1) corresponds to the expected lengths of each respective segment. The read level accuracies for the two runs were 85.2 and 83.8%, and the consensus sequences were 98.7 and 98.5% in concordance with the consensus sequence generated using our standardized M-RTPCR amplified genome and MiSeq approach.

Illumina MiSeq sequencing. The viral RNA segments from the pure and crude preparations were amplified by M-RTPCR 15,16, and size fractionation of those amplicons showed the characteristic banding pattern of the amplified influenza virus genome (Figure S6). Sequencing of the RNA from purified virus or crude virus produced 163,264 and 143,572 reads, respectively, of which 99.9% mapped to influenza A virus (Table 1). The reads were roughly evenly distributed among the 8 segments (Figure S3). The mapped reads covered 100% of all 8 genome segments (Figures 2 and S4), with reduced coverage at the extreme termini (Figure S7). The read level accuracy was 99.6%, and the consensus sequences, which were used as the reference genome for the nanopore assemblies, were defined as 100% accurate and were 100% identical to each other.

Discussion
We have demonstrated, for the first time, complete sequencing 20 of an RNA virus genome by direct RNA sequencing. Using a method originally designed to sequence mRNA, we adapted the target sequence to bind the 3' sequence conserved among influenza A viruses. The specificity of this adapter allowed efficient sequencing of influenza virus RNA genomic segments from RNA isolated from purified virus particles (control) or from RNA isolated from a crude extract that contains a myriad of viral and host (chicken) RNAs. Using this adapter, 98.8% of reads from the crude RNA preparation mapped to the influenza virus, which is practically as efficient as with the purified virus RNA sample (99.3%). This performance on crude virus stocks demonstrates that sequence-directed library preparation is a very effective method to select specific target RNA species among a population of RNAs, as the vast majority of reads mapped to A/Puerto Rico/8/1934 using 12 ribonucleotides as the target sequence. The data show that other modifications to the adapter could target other RNA species, such as RNAs from specific pathogens and different RNA species within a particular pathogen. For example, one could compare (+) sense cRNA [the replication intermediate of (-) sense vRNAs], (+) sense mRNAs, or (-) sense RNAs present during RNA virus infections (such as for influenza viruses). The data illustrate that the adapter sequence could be modified to target specific viral families, genera, or species by extending the target sequence and/or by adding degeneracies. This is an advantage over poly(A) methods, which have a reduced signal-to-noise ratio due to host mRNA. Targeting influenza A virus vRNA and cRNA independently may prove difficult, as there is complementarity between the two conserved termini of the vRNA segments, and therefore high sequence identity between the 3' termini of the (-) sense vRNA and (+) sense cRNA. Rather, cRNA and vRNA reads can be sorted based on their (+) and (-) polarity, respectively.
In addition to avoiding any of the previously discussed limitations of cDNA synthesis and PCR amplification strategies, the technique developed for direct RNA sequencing is highly amenable to sequencing a variety of non-poly-adenylated RNAs from hosts and pathogens, including untranslated regions (UTRs), without biasing the sequence to the primer. This allows the examination of the UTRs in their native form, which we have done here with influenza A virus. However, direct RNA sequencing of UTRs is limited by read level accuracy and a loss of coverage at the extreme 5' end of the molecule. The extreme 3' termini (Uni-12) of all segments were fully sequenced and matched the expected sequence with the exception of the degeneracy at the +4 position which was not resolved. The sequences for the extreme 5' termini (Uni-13) that were obtained match the expected sequences with the exception of a C to G substitution at the -9 position in the segments PB1 and PB2. The loss of coverage at the extreme 5' end of the molecule is most likely due to unreliable processivity as the last of the molecule passes and resulted in the final nine nucleotides not being sequenced in some of the segments. The data presented demonstrates the adaptability of the platform and RNA sequencing protocol. The unmodified components were used to target enolase mRNA and could be used to target the variety of mRNA species present in any sample. Specifically, one could dissect viral replication processes as well as host mRNAs activated during an influenza infection at a given point in time. Genomic length and quantitative sequencing of viral mRNA species has the potential to provide direct detection of base modifications, splice variants, and transcriptional changes under different replication conditions, such as viruses used for vaccine production that are transferred between mammalian and avian hosts. The primary limitations of this technology are the high read level error rate and high input material requirements. Reducing the error rate would enable multiplexing and more accurate consensus sequence determination and is a requirement for understanding nucleotide polymorphisms and genome sub-populations, particularly in viruses such as influenza that have significant intra-host diversity and or base modifications to be identified. There are currently several bioinformatic tools for detecting DNA base modifications such as Tombo, Nanopolish, SignalAlign, and mCaller; however, RNA specific tools have yet to be released 19 . Currently, the RNA input requirements for direct RNA sequencing are high and are not physically achievable with most original clinical samples. Lessening the RNA input requirement of the direct RNA sequencing would take full advantage of the unbiased nature of direct RNA sequencing and allow for the detection and description of the rich diversity intrinsic to influenza and other viruses. Although ONT has continuously improved their basecaller Albacore, there is still demonstrable potential for improvement. The RNA basecaller was likely developed using the very same enolase mRNA used here, which would make it most effective at basecalling enolase mRNA. The marked difference in accuracy between the enolase and influenza virus reads demonstrates that further development of the RNA basecaller can, at a minimum, bring the accuracy of all RNA reads up to that of enolase reads. 
Moreover, the DNA basecaller is overall more developed and more accurate than the RNA basecaller (89% versus 85% read level accuracy for influenza samples). The continued effort to advance this technology by ONT will undoubtedly result in higher accuracy reads and greatly improved utility. Concentration and purification of A/Puerto Rico/8/1934 reassortant virus A/Puerto Rico/8/1934 reassortant virus was grown in 11-day-old embryonated hen eggs at 35°C for 48 hours. Allantoic fluid was harvested from the chilled eggs and clarified at 5,400 x g, 10 minutes, 4°C (Sorvall SLA-1500 rotor). The virus was clarified twice more by centrifugation at 15,000 x g, 5 minutes, 4°C (Sorvall SLA-1500 rotor). Virus was pelleted by centrifugation at 39,000 x g, 3 hours at 4°C (Sorvall A621 rotor). Virus pellets were resuspended overnight in PBS and loaded onto a 30%/55% (w/w) density sucrose gradient. The gradient was centrifuged at 90,000 x g for 14 hours at 4°C (Sorvall AH629 rotor). The virus fractions were harvested and sedimented at 131,000 x g (Sorvall AH629 rotor) for 2.5 hours. The resulting virus pellet was resuspended in PBS and aliquoted for future use. RNA isolation Enolase II (YHR174W) mRNA is supplied in the ONT materials as the calibration RNA strand (CRS) at a concentration of 50 ng/µL. For influenza virus samples, total RNA was isolated by Invitrogen TM TRIzol® extraction 21 according to the manufacturer's instructions with additional considerations for biosafety. The virus was inactivated by the addition of 10 volumes of TRIzol® in a Biosafety Level 2 biosafety cabinet. Following inactivation, a fume hood was used for the chloroform addition and aqueous phase removal steps. RNA pellets were resuspended in 10-40 µL nuclease-free water and quantified by the Quant-iT TM RiboGreen® RNA Assay Kit. Due to the difficulty in acquiring sucrose-purified material, the pure controls were limited to one MiSeq run and two separate MinION experiments. The greater availability of the crude viral sample allowed it to be sequenced once on MiSeq and three times on MinION from the same RNA preparation. Nanopore Sequencing The ONT direct RNA library preparation input material requirement is 500 ng of target molecule in a 9.5 µL volume (Table S4). For mRNA sequencing of the enolase control, the protocol was used according to the manufacturer's instructions. For influenza viral RNA sequencing, modifications were made to the protocol components (Table S2). We altered the supplied reverse transcriptase adapter (RTA), which has a T10 overhang. Alignment read lengths were calculated as matching + inserted bases per read (CIGAR M+I). Illumina MiSeq Sequencing The complete influenza genome was amplified with the RNA from both the sucrose-purified virus and the allantoic fluid. The M-RTPCR used the Uni/Inf primer set 16 with SuperScript III One-Step RT-PCR with Platinum Taq High Fidelity (Invitrogen). Following amplification, indexed paired-end libraries were generated from 2.5 µL of 0.2 ng/µL DNA using the Nextera XT Sample Preparation Kit (Illumina), following the manufacturer's protocol with half-volume tagmentation reactions. Libraries were purified with 0.8X AMPure XP beads (Beckman Coulter, Inc.) and assessed for fragment size (QIAxcel Advanced System, Qiagen) and quantitated using the Quant-iT dsDNA High Sensitivity Assay (Invitrogen). Six pmol of pooled libraries were sequenced on the Illumina MiSeq with the MiSeq v2 300-cycle kit and a 5% PhiX spike-in to increase sequence diversity. 
Sequence analysis was performed using IRMA 25 as part of the current Illumina-based pipeline utilized by the Influenza Genomics Team at the Centers for Disease Control and Prevention.
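As a concrete illustration of the read-length and accuracy conventions quoted above (aligned read length as CIGAR M + I, and per-read accuracy measured against the reference consensus), the following minimal Python sketch parses a CIGAR string together with an NM (edit distance) tag. It is not the pipeline used by the authors (IRMA and the in-house scripts are not reproduced here); the example CIGAR string, the NM value, and the "matches over aligned columns" accuracy convention are illustrative assumptions.

```python
import re

CIGAR_RE = re.compile(r"(\d+)([MIDNSHP=X])")

def cigar_counts(cigar):
    """Total length of each CIGAR operation type, e.g. {'M': 1480, 'I': 60, ...}."""
    counts = {}
    for length, op in CIGAR_RE.findall(cigar):
        counts[op] = counts.get(op, 0) + int(length)
    return counts

def aligned_read_length(cigar):
    """Aligned read length as matching + inserted bases (CIGAR M + I), as in the text."""
    c = cigar_counts(cigar)
    return c.get("M", 0) + c.get("=", 0) + c.get("X", 0) + c.get("I", 0)

def read_level_accuracy(cigar, nm):
    """Approximate per-read accuracy from the CIGAR string and the NM (edit distance) tag.

    Assumes the common convention that NM counts mismatches + inserted + deleted bases,
    so matches = M - mismatches and accuracy = matches / (M + I + D).
    """
    c = cigar_counts(cigar)
    m = c.get("M", 0) + c.get("=", 0) + c.get("X", 0)
    ins = c.get("I", 0)
    dele = c.get("D", 0)
    mismatches = nm - ins - dele
    matches = m - mismatches
    return matches / float(m + ins + dele)

if __name__ == "__main__":
    # Hypothetical alignment: 1,480 aligned bases, 60 insertions, 35 deletions, NM = 250.
    cigar, nm = "1480M60I35D25S", 250
    print("aligned length (M+I):", aligned_read_length(cigar))
    print("read-level accuracy: %.3f" % read_level_accuracy(cigar, nm))
```

Different aligners and pipelines define per-read accuracy slightly differently, so the numbers produced by a sketch like this are only comparable within one convention.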
3,813.6
2018-05-08T00:00:00.000
[ "Biology", "Medicine" ]
Varicella Zoster Virus Encephalitis Varicella zoster virus in the adult patient most commonly presents as shingles. Shingles is a painful vesicular eruption localized to a specific dermatome of the body. One of the potential complications of this infection is involvement of the central nervous system causing encephalitis. An increased risk of this complication is associated with the immunocompromised patient. In this case report, we review the history and physical exam findings that should raise clinical suspicion for varicella zoster encephalitis, as well as the epidemiology, risk factors, treatment, and prognosis of this type of infection. INTRODUCTION We present a case of a patient with varicella zoster virus (VZV) encephalitis caused by a combination of the patient having active virus reactivation in the form of shingles on the right leg, in addition to being immunocompromised due to a kidney transplant. According to the World Health Organization, encephalitis occurs in one out of every 33,000-50,000 cases of VZV. It also carries a less favorable prognosis compared to the other extracutaneous complications of VZV. This case report shows how prompt recognition and treatment of this type of infection can decrease mortality and progression of the infection in the high-risk, immunocompromised patient. CASE REPORT A 67-year-old man with a medical history of kidney transplant, chronic renal dysfunction, prior cytomegalovirus infection causing retinal damage and vision loss and prescribed valacyclovir presented to the emergency department (ED) with a complaint of hallucinations and weakness. This was the patient's fifth healthcare encounter in three weeks. The first visit was to the ED for heel pain, and he was discharged home after an unremarkable right foot radiograph. The patient then returned to the ED for his second visit with a painful vesicular rash along the second sacral dermatome of his right leg and was prescribed valacyclovir 1 gram orally three times a day for seven days for shingles. Vaccination status was unknown at the time of diagnosis. On the third ED visit two days later, the patient presented with vomiting after being seen by his primary care doctor that morning. The patient was able to tolerate two doses of valacyclovir; and while being seen by his primary care doctor, his valacyclovir dosing was adjusted to account for his renal disease. The patient also was experiencing hallucinations but was discharged home with the explanation that his symptoms could have been due to dehydration after a "negative workup." On his fourth visit to the ED seven days later, the patient stated that he would "close his eyes and see bands playing and rolling plains of green grass." He stated that these images were very vivid but would go away when he opened his eyes. The patient also had difficulty ambulating and generalized weakness. A family member reported that he also had difficulty with finding words. Vital signs during this fourth ED visit included the following: temperature 99.4° Fahrenheit; pulse 92 beats per minute; respiratory rate 20 respirations per minute; room air pulse oximetry 98%, and a blood pressure of 196/91 millimeters of mercury. Physical examination revealed crusted lesions following the second sacral dermatome on the posterior right leg extending from the sacral region to the lower calf. A neurological exam revealed generalized weakness and difficulty with ambulation without any focal deficits. 
Laboratory testing, including complete blood count, metabolic panel and urinalysis were unremarkable except for serum blood urea nitrogen, creatinine and glomerular filtration rate, which were 23.1 milligrams per deciliter (mg/ dL) (normal range 6.0-20.0 mg/dL), 3.03 mg/dL (normal range 0.67-1.17 mg/dL) and 22 milliliters per minute (mL/ min) (normal is >60 mL/min), respectively. Chest radiograph was unremarkable and brain computed tomography (CT) demonstrated only chronic mild to moderate degenerative changes. Based on the recent diagnosis of shingles, history of immunocompromise and hallucinations with weakness, lumbar puncture was performed. Results included elevated protein with lymphocyte predominance consistent with viral infection. Cerebral spinal fluid (CSF) culture was ordered, and the patient was administered one gram of acyclovir intravenously and admitted to the hospital. On hospital day one CSF culture demonstrated VZV via polymerase chain reaction (PCR). The patient also underwent brain magnetic resonance imaging (MRI) on hospital day two, which showed moderate chronic microvascular ischemia and abnormal appearance of the distal left vertebral artery. Infectious disease, neurology and hospital medicine teams all evaluated the patient and agreed with the diagnosis of VZV encephalitis in the setting of recent shingles, CSF findings, and patient presentation. The patient was administered a two-week course of acyclovir with improvement of his hallucinations and presenting symptoms prior to discharge on hospital day four. DISCUSSION VZV affects approximately 30% of people in the United States during their lifetime. 1 Primary infection causes chickenpox or varicella. The virus is never fully eradicated from the body, however, as it travels and lies dormant in the cranial, dorsal root, or autonomic ganglion. 2 Secondary VZV skin eruption demonstrates a characteristic unilateral, vesicular, and painful eruption that follows a distinct dermatomal distribution. The typical pain pattern of the virus is caused by increased excitability of central nociceptors in the spinal cord causing inflammation and disruption to the nerve cells, making them more sensitive to painful stimuli. 3 VZV can also cause many different central nervous system (CNS) pathologies if the infection invades the spinal cord or cerebral arteries, including cerebellar ataxia, arteritis, myelitis, meningitis, and encephalitis. CNS infection can occur with primary or secondary reactivation of the virus. Two main risk factors increase the risk for VZV, including age greater than 50 years old and immunocompromise due to reduced T cell-mediated immunity. 4 Transplant patients are at increased risk compared to the general public with an incidence rate of 17:1000. 5 The patient in this case study had both of these main risk factors. VZV encephalitis causes a headache, fever, vomiting, and altered level of consciousness or even seizures. The patient in this case presented with vomiting, mental status changes, and hallucinations. These symptoms can be seen more commonly as side effects due to inappropriately renal-dosed valacyclovir. VZV encephalitis mortality rate for immunocompetent patients is approximately 15% and almost 100% in an immunosuppressed patient, especially if both the liver and lung are infected. 1,6 VZV encephalitis CSF analysis typically demonstrates lymphocytic pleocytosis and elevation of protein both of which occurred in this case. Positive PCR testing in CSF confirms VZV. 
7 CSF anti-VZV antibodies can be performed but cannot be used alone as means for diagnosis of VZV-related neurological conditions. 2,8 Common findings on brain CT specific for VZV encephalitis are a hypodensity in the temporal lobes with possible frontal lobe involvement. The basal ganglia are commonly spared. For MRI, the common findings for VZV encephalitis are edematous changes with hyperdensity in the temporal lobes and inferior frontal lobes with the basal ganglia being spared. 9,10 Treatment of VZV encephalitis is intravenous (IV) acyclovir for seven days in the immunocompetent patient
1,608.6
2019-10-14T00:00:00.000
[ "Medicine", "Biology" ]
Migratory behaviour and survival of Great Egrets after range expansion in Central Europe Great Egret Ardea alba is one of few Western Palearctic species that underwent a rapid range expansion in the recent decades. Originally breeding in central and eastern Europe, the species has spread in northern (up to the Baltic coast) and western (up to the western France) directions and established viable breeding populations throughout almost entire continent. We monitored one of the first Great Egrets colonies established in Poland to infer migratory patterns and survival rates directly after range expansion. For this purpose, we collected resightings from over 200 Great Egret chicks marked between 2002–2017 in central Poland. Direction of migration was non-random, as birds moved almost exclusively into the western direction. Wintering grounds were located mainly in the western Europe (Germany to France) within 800–950 km from the breeding colony. First-year birds migrated farther than adults. We found some, although relatively weak, support for age-dependent survival of Great Egrets and under the best-fitted capture-recapture model, the estimated annual survival rate of adults was nearly twice higher than for first-year birds (φad = 0.85 ± 0.05 vs. φfy = 0.48 ± 0.15). Annual survival rate under the constant model (no age-related variation) was estimated at φ = 0.81 ± 0.05. Our results suggest that Great Egrets rapidly adapted to novel ecological and environmental conditions during range expansion. We suggest that high survival rate of birds from central Poland and their western direction of migration may facilitate further colonization processes in western Europe. INTRODUCTION Many animal species undergo rapid changes in their biogeographical distribution, which may be driven by a variety of mechanisms (Newton, 1998;Pigot, Owens & Orme, 2010;Bradshaw et al., 2017). Some of these mechanisms act on the population level, resulting from changes in basic demographic parameters such as reproduction, survival or recruitment age (Duncan, Blackburn & Veltman, 1999;Menu, Gauthier & Reed, 2002). Others are related to ecological traits of the species, e.g., dispersal level, behavioural plasticity or tolerance toward human presence (Blondel, Chessel & Frochot, 1988;Devictor, Julliard & Jiguet, 2008;Cornelius et al., 2017). Finally, human activity and its consequences, such as climate change, habitat loss or alterations in agricultural practices, are the major determinants of species distribution (Chamberlain et al., 2000;Melles et al., 2011;Virkkala & Lehikoinen, 2017). Individuals that colonize novel habitats face wide range of challenges, as they lack knowledge about local food resources, predation or human disturbance level (West-Eberhard, 2003). Insufficient information about local environment can reduce their survival, breeding success or recruitment rate (Duckworth, 2008). Exploration of novel areas is often ephemeral and does not necessarily lead to the establishment of stable populations, but it can also lead to the extension of original distribution range (Sax, Stachowicz & Gaines, 2005;Kokko & López-Sepulcre, 2006). The mechanisms of range expansions are usually complex and case-specific, as they can be driven by a multitude of environmental, ecological and demographic factors (Blackburn, Lockwood & Cassey, 2009;Duckworth & Badyaev, 2007). 
Great Egret Ardea alba provides a noticeable example of recent breeding range expansion among the western Palearctic birds, but the causes of this process and its consequences for newly established populations have received little scientific attention (Ławicki, 2014). Great Egret is a relatively common species with a worldwide distribution (Del Hoyo, Elliott & Sargatal, 1992) and the global population size estimated at 0.6-2.2 mln individuals (BirdLife International, 2019). In the middle of 20th century its breeding colonies were found in central and eastern part of Europe (Bauer & Glutz von Blotzheim, 1966), mainly in the eastern Ukraine close to Black Sea, and along river Danube in Hungary, Bulgaria and Romania (Hagemeijer & Blair, 1997). Large colonies were also located on Lake Neusiedl in Austria and at Volga river delta in Russia (Bauer & Glutz von Blotzheim, 1966). Significant increase in the European population size and expansion to the west and north of the continent was recorded at the end of 20th century. In consequence, in the 21st century, the Great Egret was listed for the first time as a breeding species in thirteen new European countries (Ławicki, 2014). Currently, breeding populations from Belarus, France, Netherlands, Poland and Latvia are estimated to exceed 100 pairs in each country. The expansion continues and is reflected not only by the growing number of breeding pairs, but also by an establishment of stable wintering populations in the newly colonized parts of the range, even in the harsh climate of northern Europe (Ławicki, 2014). The aim of the study was to examine migratory patterns and survival of Great Egrets fledged in a breeding colony established after the range expansion in central Poland. The very first breeding attempt of Great Egrets in Poland was documented in 1863, but this was probably an accidental event not followed by the regular presence of breeders (Tomiałojć, 1972) and breeding of the species has not been observed for the next hundred of years. A rapid increase in the number of observations of non-breeding individuals started from the middle of the 20th century, and the second breeding event was recorded in 1997, when three nests were found at Biebrza Marshes (Pugacewicz & Kowalski, 1997). At the beginning of the 21st century, nesting Great Egrets were recorded at eleven locations, but most of these sites were ephemeral and birds did not breed there regularly. By 2010, only two permanent colonies occurred in Poland, one at Biebrza Marshes and the second at the Jeziorsko reservoir (Janiszewski, 2009). We monitored the fates of birds fledged in the latter colony since it has been established in 2002. MATERIALS & METHODS The study was performed at Jeziorsko reservoir, central Poland (51 • 40 N, 18 • 40 E). Every spring in 2002-2017 all areas suitable for Great Egrets were visited to find the exact location of the breeding colony. In 2010, 2014, and 2016 breeding of Great Egrets on the reservoir was not confirmed, possibly because we did not find the location of the colony. In the remaining years, the colony was visited regularly from May to July in order to ring chicks, but we avoided visits at the early reproductive stages (laying and incubation) to minimize disturbance. In total, we ringed 216 chicks from 82 nests. Each chick was marked with metal and plastic ring; the latter was put on tibia to increase detectability in the field. Bill and tarsus length were measured to estimate age of each chicks. 
Catching, ringing, and handling birds was performed with permission from the Polish Academy of Sciences, with the approval of the Ministry of Environment in Poland and General Environmental Protection Directorate in Poland (DZP-WG.6401.03.2.2018.jro). Data on resightings and recoveries were obtained from the database of Polish Ringing Centre. Until 31.12.2019, we collected 110 resightings from 51 individuals (1.5 observation per individual, range: 1-17). We also obtained five ring recoveries from dead birds. Overall, 30% of resightings were collected within one year period from ringing date. Polish Ringing Centre obtains resighting data from a wide range of professional and unprofessional observers, as all the information is collected using a website server open to the public (http://www.stornit.gda.pl). All Great Egrets from our study population were marked with large plastic leg rings that increase probability of resightings by unprofessional birdwatchers and nature photographers, who often visit areas attractive for waterbirds. Thus, our resighting data was largely independent from the activities of professional ringing schemes across Europe and resighting effort should be relatively even across the most of western and central European countries. Taking all this into account, we did not expect any major geographical biases in our data. We used geographical coordinates of resightings and recoveries to calculate the direction and distance of migration using a loxodromic formula (Imboden & Imboden, 1972), where north was referred to as 0 • . All observations were divided into three stages of annual cycle: wintering period (December-February), migrations (March-April for spring migration, and August-November for autumn migration), and breeding season (May-July). For each stage we calculated the mean gravity point expressed as the mean of the coordinates of all observation points, as in other studies (Bairlein, 2001;Remisiewicz, 2002). For individuals observed multiple times at the same location we used only one resighting per month. Also, resightings within a radius of 10 km in one month where treated as a single observation. After this treatment, the dataset consisted of 99 observation points. These data were used to calculate mean ± SD for the angle of migration using circular statistics in Oriana 2.0 (Kovach Computing Services, Anglesey, Wales) software. Differences in the angle of migration between successive stages of life cycle were tested using Watson-Williams test (Batschelet, 1981). The mean distance of recoveries of egrets observed in different months was compared by ANOVA and post-hoc Tukey's test (Zar, 1996). To analyse survival we used capture-recapture models implemented in Mark software (White & Burnham, 1999). We estimated two population parameters: survival probability (ϕ) and resighting probability (p) using Cormack-Jolly-Seber (CJS) models for live recaptures. We grouped all resightings into 15 encounter occasions for each bird (one year duration, 2004-2018), where the first encounter occasion after ringing started from the beginning of the first breeding season (beginning of May). First, we tested goodness-of-fit of our data to the fully time-dependent CJS model using RELEASE test 2 + test 3 approach and we found no evidence for the lack of fit (χ 2 = 5.04, df = 15, p = 0.99). 
Second, we fitted constant and time-dependent (between-year variation) models for both parameters (ϕ and p) and we also tested for the effects of age (first-year versus adult) and hatching date on survival probability, but the latter effect was fitted only for the first-year birds in the age-dependent model. The Akaike Information Criterion adjusted for small sample size (AIC C ) was used to compare relative fit of the models and the lowest AIC C value indicated the most parsimonious model. The models were also compared with Akaike weights, which are interpreted as the weights of evidence in favour of a given model against all other fitted models. All values are presented as means +/-SD. Wintering areas and migration Great Egrets from our study colony spent winter mainly in the western Europe (Fig. 1). Wintering locations were scattered through France (11%), Netherlands and Belgium (72%) and northern/central Germany (16%). Egrets migrated mainly in the western direction from their breeding colony, and their angles of migration were not randomly distributed (mean = 268.14 • ± 12.1 • ; Rayleigh test: Z = 13.39, p < 0.05). The mean distance from the breeding colony to the wintering sites was 883 ± 200 km, and most (71%) winter resightings were from a distance of 800-950 km. The farthest reported wintering location was 1414 km away from the colony, when a first-year bird was observed in western France, in Maine-et-Loire region (Fig. 1). Analysis of migratory behaviour during the autumn period revealed a varying rate of migration in the successive months (F 5,81 = 9.55, p < 0.001). In August and September birds stayed in a close proximity to the breeding colony (on average closer than 350 km; Figs. 1 and 2), while the migration distance increased rapidly and significantly in October (Tukey's test: all p < 0.05). In fact, many birds probably reached their wintering sites in October, as the mean migration distance did not increase significantly in the following months (Tukey's test: all p > 0.71; Fig. 2). Migratory distance differed significantly between first-year and adult birds (F 1,81 = 6.78, p = 0.011, Fig. 3A), as the adults were observed closer to the breeding colony, irrespectively of the month (F 5,76 = 1.57, p = 0.18). The mean angle of migration during the autumn period was 264.63 • ± 27.9 • , similar as for the angle of migration for wintering resightings (Watson-Williams test: F = 0.193, p = 0.989), which suggested that birds moved directly from breeding to wintering grounds. In fact, only one foreign resighting was reported east of the breeding colony, which was a third-year bird found dead in July at fish pond complex in Belarus, 440 km from the ringing site. Also, the mean angle of migration during autumn period did not differ between adult and first-year birds (Watson-Williams test: F = 0.019, p = 0.89, Fig. 3B), which suggested that both age groups migrate in similar direction. Resightings from spring migration period were scarce (n = 7) and too few for quantitative analyses. The earliest birds were recorded at the breeding grounds at the end of March, but others stayed at the wintering areas until late April. The marked egrets were not observed in breeding colonies other than the natal one. Six birds that fledged at Jeziorsko were reported from this site as adults, which suggested strong philopatry. Survival estimates To estimate survival rate of Great Egrets, we fitted six capture-recapture models (Table 1). 
Model selection provided support for age-related variation in survival (Table 1), where annual survival rate of adults was nearly twice higher than the estimate for first-year birds (ϕ ad = 0.85 ± 0.05 vs. ϕ fy = 0.48 ± 0.15). Resighting probability for both age classes was estimated at p = 0.10 ± 0.03. The effect of age was, however, relatively weak, as the model with constant survival rate across age classes had only slightly worse fit ( AIC C = 0.31; Table 1). Annual survival rate under this model was estimated at ϕ = 0.81 ± 0.05, whereas resighting probability was p = 0.06 ± 0.01. Both models only slightly differed in their relative importance, as measured with Akaike weights (0.44 vs. 0.37; Table 1). We found support neither for the effect of hatching date on survival of first-year birds nor for inter-annual variation in survival rates (Table 1). Figure 2 Migration distance of Great Egrets from central Poland during autumn and winter period. Means ± SE are presented. Full-size DOI: 10.7717/peerj.9002/ fig-2 DISCUSSION The results of our study provide one of the first descriptions of migratory patterns and survival rates of Great Egrets following their recent range expansion in Europe. Resightings of marked birds from the breeding colony at Jeziorsko reservoir, central Poland, indicate that they choose western direction of migration. During winter most birds were observed in Netherlands and Belgium, but certain proportion of individuals stayed closer to breeding grounds, wintering in central and northern Germany. However, we also recorded examples of long migratory distances exceeding 1,400 km, when birds wintered in western France. A tendency of birds from a newly established Polish population to winter in western Europe is reflected by a recent development of stable wintering population of Great Egrets in this part of the continent (Ławicki, 2014). Interestingly, we did not observe movements into southern and eastern direction, where core breeding and wintering areas of the species are located in Europe (Ławicki, 2014). There are no observations of our birds in Danube river valley or in Ukraine. River Danube valley and its estuary was listed as a breeding site for Great Egrets in 19th century (Bauer & Glutz von Blotzheim, 1966) and even during the period of massive egret persecution in Europe this area held relatively large number of birds. Although paucity of ringing data in colonies from the core European range hampers precise identification of migration patterns in these populations, extensive wintering of great egrets in the Mediterranean region, Balkans, and Turkey suggest that many birds from regular breeding grounds head south for winter. Similar pattern is found in the central European populations of the sister species, the Grey Heron Ardea cinerea, where southern or south-western migration direction seems to prevail. For example, birds ringed in Czech Republic spend winter mainly in Hungary, Austria, Switzerland and Italy (Cepák et al., 2008), whereas birds from Poland winter mainly in Mediterranean countries: Spain, France and Italy(Manikowska-Ślepowrońska, Mokwa & Jakubas, 2018). Our sightings of Great Egrets suggest that northward range expansion of this species is associated with serious alteration in migration patterns and location of wintering grounds. Interestingly, Great Egrets did not show intensive post-breeding dispersal activity. 
The mean distance of migration was relatively short during the first months of post-breeding period and long-distance migration started in October. During this month we noticed a rapid decline in the number of resightings collected in close proximity to the breeding colony, while the number of long-distance resightings increased. Probably, warm autumns during last decades in Central Europe allow birds to stay close to the breeding sites and maintain good physical condition for a relatively long time. It may be particularly advantageous for egrets to delay migration until October, when majority of carp fish farms move fish to smaller ponds before winter (Turkowski & Lirski, 2010). This procedure provides an easy access to small fish for many wild birds, mainly gulls and herons. Also, water level at dam reservoirs (including Jeziorsko) usually drops down in this period, producing large areas of shallow waters rich in fry. In consequence, egrets can use rich food resources and do not have to start their autumn migration until frosts reduce their access to attractive foraging sites. We were not able to directly test for the relationship between migration behaviour and carp production process, but locations of autumn egret resightings seem to support this scenario. 44% of observations collected from August-October came from fish farms or dam reservoirs, where our marked birds were observed in the mixed flocks of herons and gulls that often preyed on small fish. Our best-fitted capture-recapture model provided support for lower survival rate of first-year birds, when compared with adults, but the constant model with no agerelated variation in survival had only slightly worse fit and, thus, both models were technically non-distinguishable. While the best-fitted model indicated considerable differences in survival between both age classes (ϕ ad = 0.85 vs. ϕ fy = 0.48) and SDs for the estimates were relatively narrow (see the results section), similar fit of the constant model could be due to low sample size of first-year birds and, consequently, insufficient statistical power to convincingly demonstrate any age-related variation. Survival rate under the constant model was estimated at ϕ = 0.81, so our capturerecapture analysis provided strong and consistent evidence for high survival rates of adult egrets, while the evidence for lower survival of first-year individuals must be treated with caution, taking into consideration limitations of our data. Nevertheless, age-related differences in survival, where juvenile individuals show higher mortality than adults, is a widespread phenomenon in birds (Clark & Martin, 2007;Guillemain et al., 2010), and we suggest it is a likely scenario in our study population. In general, juveniles show weaker competitive ability, low level of predation avoidance or poor foraging efficiency in comparison with more experienced adult individuals (Anders et al., 1997;Calvert, Walde & Taylor, 2009;Robinson et al., 2004;Woodrey, 2000). For example, survival of adult Little Egrets Egretta garzetta from breeding colony in Camarque was estimated at 74%, whereas in juvenile birds it varied between 6.5% and 55% in different seasons (Hafner et al., 1998). Survival rate of adult Reddish Egret Egretta rufescens was 99% per each month of the breeding period and 94% per non-breeding month, resulting in the annual survival rate of 73% (Koczur, Ballard & Green, 2017). 
A study of Great Egrets in Florida revealed slightly higher survival rate of first-year birds (ranging from 52 to 66%) than our study (48%) (Sepúlveda et al., 1999). First-year Grey Herons from Great Britain showed survival probability between 25% and 72%, which was primarily dependent on winter severity (North, 1979). In populations subjected to high level of shooting pressure survival rate of first-year birds was 33% in Grey Heron (Scandinavia) and 28.9% in Great Blue Heron Ardea herodias (the United States) (Olsson, 1958;Owen, 1959). Adult survival in both populations was higher -76.3% and 75.5%, respectively. Negative impact of shooting activity at fish farms is probably low in Poland, as Great Egret is protected by law and illegal shooting of this species is probably marginal. Our study indicates that survival of Great Egrets (both immatures and adults) from a newly colonized areas in Central Europe is relatively high and comparable to survival rates of other ardeid species from their core populations. This may suggest that Great Egrets rapidly adapted to novel ecological and environmental conditions and that the populations established after northward range expansion are, at least to certain extent, self-sustainable. This hypothesis is supported by several resightings of birds fledged at Jeziorsko reservoir, which returned to their natal colony to breed. In the future, high survival rate of birds from Central Europe may also help to produce a surplus of new recruits that will participate in the colonization processes of Western Europe. CONCLUSIONS As far as we are aware, our study provided the first information on migratory behaviour and survival rates in a newly established Great Egret population in central Europe. We found that main wintering areas of our study population were located in the western Europe, which in combination with relatively high survival rate, can promote further expansion of Great Egrets towards western part of the continent. We believe that our study improves the understanding of ecological mechanisms associated with the processes of range expansion, and we plea for a joint effort in large-scale ecological monitoring of avian populations that expand their range.
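To make the migration-direction calculations described in the Methods more concrete, the sketch below implements the standard rhumb-line (loxodromic) bearing and distance between two coordinates, together with a simple circular mean of directions. It is a minimal illustration under the usual spherical-Earth rhumb-line formulas, not the Oriana/ANOVA workflow actually used in the study; the resighting coordinates and the example angles are hypothetical.

```python
import math

R_EARTH_KM = 6371.0

def rhumb_bearing_distance(lat1, lon1, lat2, lon2):
    """Loxodromic (rhumb-line) bearing (degrees from north) and distance (km)
    between two points given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlam = math.radians(lon2 - lon1)
    if abs(dlam) > math.pi:                      # keep the longitude difference in (-pi, pi]
        dlam -= math.copysign(2.0 * math.pi, dlam)
    # difference of meridional parts (Mercator-projected latitude)
    dpsi = math.log(math.tan(math.pi / 4 + phi2 / 2) / math.tan(math.pi / 4 + phi1 / 2))
    bearing = math.degrees(math.atan2(dlam, dpsi)) % 360.0
    q = dphi / dpsi if abs(dpsi) > 1e-12 else math.cos(phi1)
    distance = math.hypot(dphi, q * dlam) * R_EARTH_KM
    return bearing, distance

def circular_mean(angles_deg):
    """Mean direction (degrees) and mean resultant length of a set of directions."""
    s = sum(math.sin(math.radians(a)) for a in angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg)
    n = len(angles_deg)
    return math.degrees(math.atan2(s, c)) % 360.0, math.hypot(s, c) / n

if __name__ == "__main__":
    colony = (51.667, 18.667)      # Jeziorsko reservoir (approximate)
    resight = (51.05, 3.73)        # hypothetical winter resighting in Belgium
    b, d = rhumb_bearing_distance(*colony, *resight)
    print(f"bearing {b:.1f} deg, distance {d:.0f} km")
    print(circular_mean([268.0, 255.0, 280.0, 262.0]))
```

Similarly, the AICc values and Akaike weights used to compare the capture-recapture models can be obtained from each model's maximised log-likelihood and parameter count. The sketch below shows the usual formulas with placeholder inputs; the actual values come from Program MARK and are summarised in Table 1.

```python
import math

def aicc(log_lik, k, n):
    """Akaike Information Criterion with the small-sample correction (AICc)."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Relative weight of evidence for each model, from AICc differences."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

if __name__ == "__main__":
    # Placeholder (log-likelihood, parameter count) pairs for three CJS models and a
    # notional effective sample size; these are NOT the values behind Table 1.
    models = {"phi(age) p(.)": (-152.1, 3), "phi(.) p(.)": (-153.3, 2), "phi(t) p(.)": (-148.9, 15)}
    n = 216
    names = list(models)
    scores = [aicc(ll, k, n) for ll, k in models.values()]
    weights = akaike_weights(scores)
    for name, score, w in sorted(zip(names, scores, weights), key=lambda t: t[1]):
        print(f"{name:14s}  AICc = {score:6.1f}  weight = {w:.2f}")
```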
4,957.4
2020-04-30T00:00:00.000
[ "Environmental Science", "Biology" ]
Stochastic Blockmodeling of the Modules and Core of the Caenorhabditis elegans Connectome Recently, there has been much interest in the community structure or mesoscale organization of complex networks. This structure is characterised either as a set of sparsely inter-connected modules or as a highly connected core with a sparsely connected periphery. However, it is often difficult to disambiguate these two types of mesoscale structure or, indeed, to summarise the full network in terms of the relationships between its mesoscale constituents. Here, we estimate a community structure with a stochastic blockmodel approach, the Erdős-Rényi Mixture Model, and compare it to the much more widely used deterministic methods, such as the Louvain and Spectral algorithms. We used the Caenorhabditis elegans (C. elegans) nervous system (connectome) as a model system in which biological knowledge about each node or neuron can be used to validate the functional relevance of the communities obtained. The deterministic algorithms derived communities with 4–5 modules, defined by sparse inter-connectivity between all modules. In contrast, the stochastic Erdős-Rényi Mixture Model estimated a community with 9 blocks or groups which comprised a similar set of modules but also included a clearly defined core, made of 2 small groups. We show that the “core-in-modules” decomposition of the worm brain network, estimated by the Erdős-Rényi Mixture Model, is more compatible with prior biological knowledge about the C. elegans nervous system than the purely modular decomposition defined deterministically. We also show that the blockmodel can be used both to generate stochastic realisations (simulations) of the biological connectome, and to compress the network into a small number of super-nodes and their connectivity. We expect that the Erdős-Rényi Mixture Model may be useful for investigating the complex community structures in other (nervous) systems. Supplementary Figures: membership structure of the neurons in the Spectral fit; neurons are colour-coded according to their ganglion type. Erdős-Rényi Mixture Model For comprehensiveness, we give a thorough review of the ERMM, proposed by Daudin, Picard and Robin [2], and we offer detailed and more complete proofs than found in the original references. We define G to be a simple random graph which is fully specified by the binary and symmetric adjacency matrix X = ((X_{ij}))_{1 \le i,j \le n}. This matrix has several obvious characteristics: the first is that the principal diagonal is 0, since the graph is simple, and the second is that the number of data points in X is given by n(n-1)/2, which is just the count of entries in the upper or lower triangular matrix. In order to describe the ERMM, we first concentrate on the assumptions about the nodes (vertices). For the graph G, the set of all vertices, labelled as \{V_i\}_{i = 1, \ldots, n}, is assumed to be divided into Q unknown blocks, where the membership structure of each vertex is determined by a 1 \times Q dimensional vector Z_i = (Z_{i1}, \ldots, Z_{iQ}), i = 1, \ldots, n. In particular, the elements of Z_i are the mutually independent latent variables Z_{iq} which label vertices according to their block membership; thus we have

Z_{iq} = 1 \text{ if vertex } V_i \text{ belongs to block } q, \qquad Z_{iq} = 0 \text{ otherwise.}

Furthermore, for a division of G into Q blocks, Z_i is assumed to follow the single-trial multinomial (or categorical) distribution

Z_i \sim \mathcal{M}(1, \alpha),

where the parameter \alpha is the Q \times 1 vector of the probabilities \alpha = (\alpha_1, \ldots, \alpha_Q). 
Subsequently, the probability that a randomly chosen vertex in a network is located in the q-th block is given as

P(Z_{iq} = 1) = \alpha_q,

with the constraint that \sum_{q=1}^{Q} \alpha_q = 1. The immediate interpretation of this assumption is that the vertex belongs to one group only and, in the common parlance, this is known as hard partitioning. To complete the description of the ERMM, we focus next on the assumptions about the edges. For this, the ERMM specifies that, given the block assignments of the vertices, the elements of X are conditionally independent Bernoulli random variables with rates given by their corresponding elements in the connectivity matrix \pi = ((\pi_{ql}))_{1 \le q,l \le Q}. In other words, if a vertex V_i belongs to a block q and a vertex V_j belongs to block l, then

X_{ij} \mid \{Z_{iq} = 1, Z_{jl} = 1\} \sim \mathcal{B}(\pi_{ql}),

or, for the vertices V_i and V_j located in the same block,

X_{ij} \mid \{Z_{iq} = 1, Z_{jq} = 1\} \sim \mathcal{B}(\pi_{qq}).

For the subsequent proofs, it is convenient to express the elements of the connectivity matrix \pi as the conditional probabilities \pi_{ql} = P(X_{ij} = 1 \mid Z_{iq} = 1, Z_{jl} = 1). Following the traditional notation of Paul Erdős and Alfréd Rényi, we can define the Erdős-Rényi Mixture Model as G = G(n, \pi) where, for a fixed Q, we have Q(Q+1)/2 mini Erdős-Rényi models (ERMs), posed not only on the blocks but also on the relationships between the blocks. Thus, just as in the ordinary ERM, we can obtain the distribution of degrees, and this is summarised in the following proposition.

Proposition: In an Erdős-Rényi Mixture Model G = G(n, \pi), given the class membership of a vertex, the conditional distribution of the degree of this vertex \rho(V_i) is Binomial (approximately Poisson),

\rho(V_i) \mid Z_{iq} = 1 \sim \mathrm{Bin}(n-1, \bar{\pi}_q) \approx \mathcal{P}(\lambda_q),

where \bar{\pi}_q = \sum_{l=1}^{Q} \alpha_l \pi_{ql} and \lambda_q = (n-1)\bar{\pi}_q.

Proof: We consider the random variable \rho(V_i) = \sum_{j=1}^{n} X_{ij}. The value of this variable increases only when X_{ij} = 1, and because of this we need to consider the following probability:

P(X_{ij} = 1 \mid Z_{iq} = 1) = \sum_{l=1}^{Q} P(X_{ij} = 1 \mid Z_{iq} = 1, Z_{jl} = 1)\, P(Z_{jl} = 1) = \sum_{l=1}^{Q} \alpha_l \pi_{ql} = \bar{\pi}_q.

Furthermore, ((X_{ij}))_{1 \le i \ne j \le n} are conditionally independent, given the classes of vertices V_i and V_j, allowing us to conclude \rho(V_i) \mid Z_{iq} = 1 \sim \mathrm{Bin}(n-1, \bar{\pi}_q). As the Binomial distribution can be approximated by the Poisson, we get

\rho(V_i) \mid Z_{iq} = 1 \approx \mathcal{P}(\lambda_q), \qquad \lambda_q = (n-1)\bar{\pi}_q.

With this, the distribution of degrees is then defined as a mixture of Poisson distributions such that

P(\rho(V_i) = k) = \sum_{q=1}^{Q} \alpha_q \, e^{-\lambda_q} \lambda_q^{k} / k!.

Maximising the likelihood with the variational approach. For a fixed Q and the parameters \psi = \{\alpha, \pi\}, the complete data log likelihood is given as

\log L(x, z; \psi) = \sum_{i} \sum_{q} z_{iq} \log \alpha_q + \sum_{i < j} \sum_{q,l} z_{iq} z_{jl} \left[ x_{ij} \log \pi_{ql} + (1 - x_{ij}) \log(1 - \pi_{ql}) \right].   (10)

To verify this, we consider \log L(x, z; \psi) = \log L(z; \alpha) + \log L(x \mid z; \pi). As Z_i follows a multinomial distribution, its likelihood is given as

L(z; \alpha) = \prod_{i} \prod_{q} \alpha_q^{z_{iq}}.

Taking logarithms, we get \log L(z; \alpha) = \sum_{i} \sum_{q} z_{iq} \log \alpha_q. Furthermore, we have

\log L(x \mid z; \pi) = \sum_{i < j} \sum_{q,l} z_{iq} z_{jl} \left[ x_{ij} \log \pi_{ql} + (1 - x_{ij}) \log(1 - \pi_{ql}) \right],

and combining everything completes the verification. To estimate the model parameters, however, we need the likelihood of the observed data X, which is typically obtained by taking a sum over expression (10) with respect to all possible values of Z. Unfortunately, this sum is not tractable and the standard strategy, like the Expectation Maximisation (EM) algorithm, provides some reduction in the computational burden but imposes a drastic reduction in the size of networks that can be handled by the analysis. To resolve these issues, Daudin, Picard and Robin [2] proposed to use the variational approach [3,4], which requires that the distribution of Z is of the form

P(Z) = \prod_{i} P(Z_i = z_i; \tau_i),   (11)

where P(Z_i = z_i; \tau_i) is the multinomial distribution with parameter \tau_i = (\tau_{i1}, \ldots, \tau_{iQ}) and \tau_{iq} = P(Z_{iq} = 1 \mid X = x). The form of the joint distribution given in the expression (11) is suggested by the model assumption by which the latent variables are independent. 
In the context of the variational approximation, the goal is to maximise the following quantity

J(P(Z); \psi, \tau) = \log L(x; \psi) - KL\!\left[ P(Z; \tau) \,\|\, P(Z \mid X = x; \psi) \right],

where KL[\cdot \| \cdot] is the Kullback-Leibler divergence. This gives the following estimating equations.

Proposition: Given parameters \alpha and \pi, the optimal variational parameters \hat{\tau}_i = \arg\max_{\tau_i} J(P(Z); \psi, \tau) satisfy the following fixed point relation

\hat{\tau}_{iq} \propto \alpha_q \prod_{j \ne i} \prod_{l=1}^{Q} \left[ \pi_{ql}^{x_{ij}} (1 - \pi_{ql})^{1 - x_{ij}} \right]^{\hat{\tau}_{jl}}.

Proof: To show this, J(P(Z); \psi, \tau) is maximised with respect to the variational parameter \tau_i, subject to the constraint \sum_{q=1}^{Q} \tau_{iq} = 1; that is, the goal is to maximise the following quantity:

J(P(Z); \psi, \tau) + \sum_{i} \xi_i \left( \sum_{q} \tau_{iq} - 1 \right),   (14)

where \xi_i is the Lagrange multiplier and J(P(Z); \psi, \tau) is given as

J(P(Z); \psi, \tau) = \sum_{i} \sum_{q} \tau_{iq} \log \alpha_q - \sum_{i} \sum_{q} \tau_{iq} \log \tau_{iq} + \sum_{i < j} \sum_{q,l} \tau_{iq} \tau_{jl} \left[ x_{ij} \log \pi_{ql} + (1 - x_{ij}) \log(1 - \pi_{ql}) \right].

Substituting for J(P(Z); \psi, \tau) into equation (14), differentiating with respect to \tau_{iq} and setting this expression to zero, we get

\log \alpha_q - \log \tau_{iq} - 1 + \xi_i + \sum_{j \ne i} \sum_{l} \tau_{jl} \left[ x_{ij} \log \pi_{ql} + (1 - x_{ij}) \log(1 - \pi_{ql}) \right] = 0,

allowing us to conclude \hat{\tau}_{iq} \propto \alpha_q \prod_{j \ne i} \prod_{l} \left[ \pi_{ql}^{x_{ij}} (1 - \pi_{ql})^{1 - x_{ij}} \right]^{\hat{\tau}_{jl}}.

Proposition: Given the variational parameters \tau_i, the values of the parameters \alpha and \pi that maximise J(P(Z); \psi, \tau) are

\hat{\alpha}_q = \frac{1}{n} \sum_{i} \tau_{iq}, \qquad \hat{\pi}_{ql} = \frac{ \sum_{i \ne j} \tau_{iq} \tau_{jl} x_{ij} }{ \sum_{i \ne j} \tau_{iq} \tau_{jl} }.

Proof: Maximising with respect to \alpha, subject to the constraint \sum_{q=1}^{Q} \alpha_q = 1, gives \hat{\alpha}_q = \frac{1}{n} \sum_{i} \tau_{iq}. Similarly, maximising with respect to \pi gives \hat{\pi}_{ql} = \sum_{i \ne j} \tau_{iq} \tau_{jl} x_{ij} \big/ \sum_{i \ne j} \tau_{iq} \tau_{jl}.

Estimation of the number of blocks via the ICL criterion. The model selection is handled by the ICL criterion, which was proposed by Biernacki et al. [5]. The construction of the ICL criterion relies on the lemma which states that, if the prior parameter distributions for a model with Q blocks M_Q, p(\alpha \mid M_Q) and p(\pi \mid M_Q), are such that

p(\alpha, \pi \mid M_Q) = p(\alpha \mid M_Q)\, p(\pi \mid M_Q),

then \log L(x, z \mid M_Q) = \log L(x \mid z, M_Q) + \log L(z \mid M_Q).

Proposition: For a model M_Q with Q blocks, the ICL criterion is

ICL(M_Q) = \max_{\psi} \log L(x, \hat{z}; \psi, M_Q) - \frac{1}{2} \frac{Q(Q+1)}{2} \log \frac{n(n-1)}{2} - \frac{Q-1}{2} \log n,

where M_Q denotes a model with Q blocks and \hat{z} denotes its estimate such that the elements of \hat{z}_i are \hat{z}_{iq} = 1 if q = \arg\max_l \hat{\tau}_{il}, and 0 otherwise.

Proof: Considering \log L(x, z \mid M_Q) = \log L(x \mid z, M_Q) + \log L(z \mid M_Q), the first term is obtained by application of the large sample Laplace integral approximation (i.e., the Bayesian Information Criterion, BIC), so we have

\log L(x \mid z, M_Q) \approx \max_{\pi} \log L(x \mid z; \pi, M_Q) - \frac{1}{2} \frac{Q(Q+1)}{2} \log \frac{n(n-1)}{2}.

For the second term we use the Dirichlet prior, D(\delta), as its conjugate is the multinomial distribution, and we get

\log L(z \mid M_Q) = \log \Gamma(Q\delta) - Q \log \Gamma(\delta) + \sum_{q} \log \Gamma(n_q + \delta) - \log \Gamma(n + Q\delta).

Setting \delta = \frac{1}{2}, as it corresponds to the Jeffreys prior, and replacing z with its estimate \hat{z}, we get:

\log L(\hat{z} \mid M_Q) \approx \max_{\alpha} \log L(\hat{z}; \alpha, M_Q) - \frac{Q-1}{2} \log n.

The Clustering Coefficient. The probabilistic definition of the clustering coefficient as proposed by Daudin, Picard and Robin in [2] is given as follows.

Proposition: In the ERMM, the clustering coefficient is

\hat{C}_{DPR} = \frac{ \sum_{q,l,s} \hat{\alpha}_q \hat{\alpha}_l \hat{\alpha}_s \hat{\pi}_{ql} \hat{\pi}_{qs} \hat{\pi}_{ls} }{ \sum_{q,l,s} \hat{\alpha}_q \hat{\alpha}_l \hat{\alpha}_s \hat{\pi}_{ql} \hat{\pi}_{qs} }.   (28)

Proof: The clustering coefficient is the conditional probability C = P(X_{jk} = 1 \mid X_{ij} X_{ik} = 1), so that

C = \frac{ P(X_{ij} X_{ik} X_{jk} = 1) }{ P(X_{ij} X_{ik} = 1) } = \frac{ \sum_{q,l,s} \alpha_q \alpha_l \alpha_s \pi_{ql} \pi_{qs} \pi_{ls} }{ \sum_{q,l,s} \alpha_q \alpha_l \alpha_s \pi_{ql} \pi_{qs} },

where for the last step, we used conditional independence of X and the independence of Z.

Spectral Algorithm. The problem of community detection is usually defined as finding the partition of a network into communities of densely connected vertices while minimising the number of connections between the communities. The goodness of a graph partition is generally assessed with a quality function, the most frequently used version of which is known as modularity and was proposed by Newman and Girvan [6]. The idea behind the concept of modularity is that the communities are found by the comparison of the actual density of connections in the subgraphs and the density one would expect to find if the vertices were connected at random. This random version of the original graph, that acts as a reference point in the modularity function, is called a null model and, typically, it is tailored to preserve some of the features of the original graph like the same number of edges or the same degree distribution [7][8][9]. 
In practice, the standard null model of modularity preserves the degree distribution of the original graph by generating half-edges so that each vertex in a null model receives as many half-edges as its corresponding degree in the original graph. Thus, the probability of randomly picking V_i is expressed as a proportion of V_i's degree in the total sum of the degrees, that is, \rho(V_i)/2m. Furthermore, the probability that vertices V_i and V_j form a complete edge is given as \rho(V_i)\rho(V_j)/4m^2, while the expected count of edges considered for V_i and V_j is \rho(V_i)\rho(V_j)/2m := P_{ij}. According to this, the modularity is defined as

f_{mod} = \frac{1}{2m} \sum_{i,j} \left[ X_{ij} - P_{ij} \right] \delta(c_i, c_j),   (29)

where c_i and c_j denote the communities of vertices V_i and V_j, respectively, while \delta(c_i, c_j) = 1 if V_i and V_j are located in the same community, and 0 otherwise. The Spectral algorithm [10,11] optimises the modularity by utilising the eigenvalues and eigenvectors associated with the modularity matrix D, whose elements are

D_{ij} = X_{ij} - \frac{\rho(V_i)\,\rho(V_j)}{2m}.

Let s be an indicator vector which decomposes the nodes into 2 communities, with s_i = 1 if the vertex V_i is located in the first community and s_i = -1 if the vertex is located in the second community. This modifies the modularity function (29) as follows

f_{mod} = \frac{1}{4m} \sum_{i,j} D_{ij} s_i s_j = \frac{1}{4m} s^{T} D s.

Moreover, the vector s can be written as a linear combination of the normalised eigenvectors u_i associated with the matrix D, thus s = \sum_i a_i u_i and a_i = u_i^{T} s. Using this along with the fact that \beta_i is the eigenvalue of D corresponding to the eigenvector u_i, we get

f_{mod} = \frac{1}{4m} \sum_i a_i^2 \beta_i.

The idea is to look for the largest positive eigenvalue of D, and then group the vertices according to the elements of the corresponding eigenvector. The extension of the algorithm to more than two communities is reflected in the consideration of the additional contribution \Delta f_{mod} to the modularity after dividing a community g with size n_g into two communities,

\Delta f_{mod} = \frac{1}{2m} \left[ \frac{1}{2} \sum_{i,j \in g} D_{ij} (s_i s_j + 1) - \sum_{i,j \in g} D_{ij} \right] = \frac{1}{4m} s^{T} D^{(g)} s,

where for the last step, we note that, using the Kronecker delta notation \delta_{ij}, we can write

\sum_{i,j \in g} D_{ij} = \sum_{i,j \in g} \delta_{ij} \sum_{k \in g} D_{ik},

such that D^{(g)} is the n_g \times n_g matrix whose elements are D^{(g)}_{ij} = D_{ij} - \delta_{ij} \sum_{k \in g} D_{ik}. The algorithm stops when there are no more positive eigenvalues.
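To make the estimation procedure above concrete, the following sketch implements the variational E-step fixed point, the closed-form M-step, and the ICL criterion for an undirected, binary ERMM using numpy. It is an illustrative implementation of the formulas derived above, under simplifying assumptions (random initialisation, a fixed number of iterations, no restarts), and is not the code used for the C. elegans analysis; the toy two-block graph at the bottom is hypothetical.

```python
import numpy as np

def vem_ermm(X, Q, n_iter=200, seed=0):
    """Variational EM for the Erdos-Renyi Mixture Model (binary, undirected graph).

    X : (n, n) symmetric 0/1 adjacency matrix with zero diagonal.
    Returns (alpha, pi, tau) following the E-step fixed point and M-step derived above.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    tau = rng.dirichlet(np.ones(Q), size=n)        # soft block memberships
    eps = 1e-10
    for _ in range(n_iter):
        # M-step: alpha_q = mean_i tau_iq ; pi_ql = sum_{i!=j} tau_iq tau_jl x_ij / sum_{i!=j} tau_iq tau_jl
        alpha = np.clip(tau.mean(axis=0), eps, None)
        num = tau.T @ X @ tau
        den = tau.T @ (np.ones((n, n)) - np.eye(n)) @ tau
        pi = np.clip(num / np.maximum(den, eps), eps, 1 - eps)
        # E-step: log tau_iq = log alpha_q + sum_{j!=i,l} tau_jl [x_ij log pi_ql + (1-x_ij) log(1-pi_ql)]
        log_tau = np.log(alpha)[None, :] + X @ tau @ np.log(pi).T \
                  + ((1.0 - X - np.eye(n)) @ tau) @ np.log(1.0 - pi).T
        log_tau -= log_tau.max(axis=1, keepdims=True)
        tau = np.exp(log_tau)
        tau /= tau.sum(axis=1, keepdims=True)
    return alpha, pi, tau

def icl(X, alpha, pi, tau):
    """ICL: complete-data log likelihood at the hard assignment minus the two penalties
    (Q(Q+1)/4) log(n(n-1)/2) and ((Q-1)/2) log n."""
    n, Q = tau.shape
    z = np.zeros_like(tau)
    z[np.arange(n), tau.argmax(axis=1)] = 1.0
    mask = 1.0 - np.eye(n)
    ll = (z @ np.log(alpha)).sum()
    ll += 0.5 * np.sum(mask * (X * (z @ np.log(pi) @ z.T)
                               + (1.0 - X) * (z @ np.log(1.0 - pi) @ z.T)))
    return ll - 0.25 * Q * (Q + 1) * np.log(n * (n - 1) / 2.0) - 0.5 * (Q - 1) * np.log(n)

if __name__ == "__main__":
    # Toy two-block graph sampled from the generative model itself.
    rng = np.random.default_rng(1)
    z_true = np.repeat([0, 1], 30)
    P = np.array([[0.25, 0.02], [0.02, 0.25]])
    X = (rng.random((60, 60)) < P[np.ix_(z_true, z_true)]).astype(float)
    X = np.triu(X, 1)
    X = X + X.T
    for Q in (1, 2, 3):
        a, p, t = vem_ermm(X, Q)
        print(Q, round(icl(X, a, p, t), 1))
```

For comparison, a single bisection step of the Spectral (leading-eigenvector) algorithm can be written directly from the modularity matrix D:

```python
import numpy as np

def spectral_bisection(X):
    """One split using the eigenvector of the modularity matrix with the largest
    positive eigenvalue; returns None when no positive eigenvalue exists."""
    k = X.sum(axis=1)
    m = k.sum() / 2.0
    D = X - np.outer(k, k) / (2.0 * m)
    vals, vecs = np.linalg.eigh(D)
    if vals[-1] <= 0:
        return None                    # no positive eigenvalue: do not split
    s = np.where(vecs[:, -1] >= 0, 1.0, -1.0)
    return s, s @ D @ s / (4.0 * m)    # split and its modularity contribution
```

In a full implementation this step would be applied recursively to each resulting community, using the generalised matrix D^{(g)} defined above, until no split yields a positive eigenvalue.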
3,084.2
2014-07-02T00:00:00.000
[ "Computer Science" ]
Conceptual Design of a High-flux Multi-GeV Gamma-ray Spectrometer We present here a novel scheme for the high-resolution spectrometry of high-flux gamma-ray beams with energies per photon in the multi-GeV range. The spectrometer relies on the conversion of the gamma-ray photons into electron-positron pairs in a solid foil with high atomic number. The measured electron and positron spectra are then used to reconstruct the spectrum of the gamma-ray beam. The performance of the spectrometer has been numerically tested against the predicted photon spectra expected from non-linear Compton scattering in the proposed LUXE experiment, showing high fidelity in identifying distinctive features such as Compton edges and non-linearities. High-energy gamma-ray beams are of central interest for a wide range of physical subjects, and present appealing characteristics for a series of practical applications. For instance, a wide range of astrophysical phenomena generate gamma-ray beams with energies spanning from a few MeV up to several TeV 1 . The understanding of high-energy gamma-ray astronomy is indeed one of the main routes towards a detailed understanding of high-energy astrophysical phenomena. On a laboratory scale, brilliant sources of gamma-ray beams are an ideal tool to study nuclear phenomena (see, for instance, ref. 2 ) and to investigate fundamental quantum electrodynamic processes 3 . These sources are mainly produced by exploiting bremsstrahlung radiation resulting from the propagation of ultra-relativistic electron beams through a high-Z solid target (see, for instance, refs. [4][5][6] or via incoherent Compton scattering of an electron beam through the focus of an intense laser 7,8 . Other mechanisms, exploiting the near-term generation of multi-PW laser facilities, include direct laser irradiation of solids 9,10 , or electromagnetic cascades 11 . Conversion efficiencies from laser to gamma-ray photons exceeding 10% can be achieved with the aforementioned methods. High-power laser systems are also opening up the possibility of studying high-field quantum electrodynamics in a controlled laboratory environment. Exotic phenomena such as quantum radiation reaction 12,13 , stochastic photon emission 14 , and pair production and cascading in a laser field 15,16 are now experimentally accessible. Several large-scale facilities and experimental campaigns are currently being explored, including the LUXE experiment at the Eu-XFEL 17 and the E-320 experiment at FACET-II, following the first seminal experiment in the area carried out at SLAC 18,19 . These processes are accompanied by the emission of ultra-short, high-brilliance, and high-energy gamma-ray beams 7,8,12 . Measuring the spectrum of these photons is expected to provide precious information about the behaviour of particles interacting with ultra-high fields. Providing on-shot, detailed spectral measurements of high-brightness and high-energy gamma-ray beams is thus highly desirable for the progress of these research areas. Different systems have been proposed: methods based on pair production in a high-Z target 4,20,21 are able to detect high-energy photons but are currently not designed to work at a high flux. Similarly, methods based on measuring the transverse and longitudinal extent of cascading in a material are designed to work only at a single-photon level, or present limited energy resolution for high fluxes 22 . Compton-based spectrometers (such as the one in ref. 
23 ) do work at high fluxes but can only meaningfully measure spectra up to photon energies of a few tens of MeV. Cherenkov radiation is also used 24 but, again, it is best suited to perform single-particle detection. Large-scale detectors such as the EUROBALL cluster 25 and the AFRODITE germanium detector array 26 can also resolve up to 10-20 MeV but their significant size makes their implementation in many laboratories infeasible. In this paper, we report on a design of a compact gamma-ray spectrometer, which can provide live and non-invasive information on the absolutely calibrated spectrum of high-energy (scalable from hundreds of MeV to tens of GeV) and high-flux gamma-ray beams. In a nutshell, the photons are converted into electron-positron pairs during propagation through a thin high-Z solid target. The measured spectra of the pairs generated are then used to reconstruct the primary gamma-ray beam. The minimum number of photons realistically detectable is of the order of 10^5 photons/GeV/event and an energy resolution of the order of a few percent at 10 GeV can be achieved. The performance of the system is numerically tested for the expected Compton-scattered spectra from the LUXE experiment (see Fig. 7 in ref. 17 ). Interaction of multi-GeV photons with a high-Z material In this article, we will consider a solid target with a thickness that is much smaller than its radiation length. In this case, the main process via which a multi-GeV photon interacts with a material is pair production in the nuclear field, and multi-step cascades can be neglected. This is demonstrated by Fig. 1a, which shows the total attenuation and that due to pair production in the nuclear field of a photon through tungsten as a function of its energy 27 . Above 100 MeV, attenuation is entirely dominated by pair production. The total cross-section for pair production in the nuclear field can be expressed, in the ultra-relativistic approximation, as 28 :

\sigma \approx \alpha r_e^2 Z^2 \left[ \frac{28}{9} \ln\!\left( \frac{2 E_\gamma}{m_e c^2} \right) - \frac{218}{27} \right],   (1)

where \alpha \approx 1/137 is the fine-structure constant, Z is the element atomic number, r_e \approx 2.8 \times 10^{-13} cm is the classical electron radius, E_\gamma is the photon energy, and m_e c^2 is the rest energy of the electron. There is only a weak logarithmic dependence on the photon energy, implying approximately the same amount of electron-positron pairs generated regardless of photon energy. For GeV-scale photon energies and a tungsten nucleus, Eq. 1 predicts a cross section of the order of \sigma \approx 10^{-23} cm^2. For a 10 micron thick solid converter, this results in a conversion of photons into pairs of the order of 0.1%. Moreover, the emitted electrons and positrons will present an almost flat spectral distribution. This is elucidated by Fig. 1b, which shows the electron and positron spectra generated during the propagation of a pencil-like photon beam of different energies through 10 μm of tungsten. The data shown are obtained from Monte-Carlo simulations, using the code FLUKA 29 , where 10^7 mono-energetic photons of different energies were made to interact with a 10 μm thick tungsten foil. Based on these simulations, we can then extract a dependence of the spectral distribution of pairs as a function of photon energy, as shown in Fig. 1c. The number of electrons/positrons per incoming photon per GeV can be expressed as: N_e/GeV/primary \approx 1.7 \times 10^{-3} E[\mathrm{GeV}]^{-0.93}. 
As expected, the number of pairs per energy interval scales as the inverse of the photon energy, in quantitative agreement with the estimates from Eq. 1. Choosing a different material will not change the power law dependence on energy but only the multiplying coefficient (i.e., 1.7 \times 10^{-3} in this case of a 10 μm thick tungsten target). As described later, this dependence will be used to reconstruct the spectrum of the primary photon beam from the recorded spectra of electrons and positrons after the converter. As regards the spatial distribution of the generated pairs, one can assume, in an ultra-relativistic regime and for thin converters, that their scattering inside the material is negligible. The divergence of the pairs \theta_e at the exit of the converter target will thus be dominated by the initial divergence of the photon beam \theta_\gamma and the cone-angle of the pair-production process, which is of the order of the inverse of the Lorentz factor (\gamma_e) of the particle:

\theta_e \approx \theta_\gamma + \gamma_e^{-1}.   (2)

This relation is found to be in good agreement with the numerical simulations discussed below. Tracking and Signal-to-Noise considerations The main function of the spectrometer (sketched in Fig. 2) is thus to measure the spectrum of electron/positron pairs exiting the converter target, from which the spectrum of the primary gamma-ray photons can be reconstructed. To do this, a simple magnetic spectrometer consisting of a dipole magnet and two detector regions can be used. It is interesting to note that the pair production process effectively produces identical spectra of electrons and positrons. Measuring both simultaneously thus provides a useful consistency check of the system and, probably more importantly, allows one to efficiently identify noise sources in the system. Particular care must be taken in optimising the signal-to-noise ratio (S/N). The main sources of noise in the system can be identified as: events involving an interaction with any component of the spectrometer other than the converter (such as dipole magnets and collimators), off-axis photons, and low-energy electron and positron pairs exiting the converter. These lower energy electrons and positrons (sub-GeV) will exit with broad divergences, and could thus be redirected by the dipole magnet onto the detectors. If we assume that we are interested in photon beams with energies exceeding the GeV level, and that these beams will have relatively small divergences, i.e., of the order of a mrad, most of the GeV-scale electron and positron pairs will exit the converter with a similar divergence. We can then introduce high-Z, small-aperture, long collimators to kill off-axis particles and photons and select only the high-energy part of the electron-positron pairs generated. As an example, we show in Fig. 3 results from a FLUKA simulation of the propagation of a 15 GeV photon beam (10^7 primaries initialised in the simulation) through the proposed spectrometer design sketched in Fig. 2. For the rest of the article, we assume the whole system to be in vacuum, to significantly reduce the computational cost of the simulations. While we acknowledge that running the whole system in vacuum might represent a significant experimental challenge, it is a preferable option also from an experimental point of view, since the system would not be susceptible to the interaction of the pairs with air, which would add complication to the data analysis and create an additional source of noise. 
In these simulations, we assumed a magnetic field of B = 0.5 T and the geometrical quantities defined in Fig. 2: L_B = 1 m, L_S = 2 m, and L_D = 4 m. Two 50-cm-long collimators are assumed, one right after the converter and one a meter further downstream. They are both made of lead and have on-axis apertures with a diameter of 8 mm, corresponding to an angular acceptance of 4 mrad. As shown in Fig. 3, they are effective in minimising off-axis noise from the converter, with only two spikes of noise corresponding to low-energy particles hitting the dipole magnet frame (Fig. 3b). However, these spikes are outside the range of detection (dashed rectangles in Fig. 3) and can thus be ignored. A double-collimator system is to be preferred to a single collimator, even though this increases the overall size of the spectrometer. This is because internal reflection of particles within the aperture of a single collimator would still represent a significant source of noise, which is well mitigated by the second collimator. In Fig. 3c we show the transverse distribution of electrons, positrons, and photons at the back plane of the system (shown as a dashed rectangle in Fig. 2). As one can see, the spectrally dispersed positron and electron streaks are clearly detectable, with S/N > 10 and a signal of ≈10^−4-10^−3 particles/primary photon/cm^2. Reconstruction of the gamma-ray spectrum Once the electron and positron spectra are recorded, the curves in Fig. 1b,c can be used to reconstruct the spectrum of the primary photon beam. In a nutshell, the number of electrons and positrons at the highest recorded energy is measured, and the number of photons responsible for that population of particles is extracted. Then, the spectrum of the electrons and positrons generated by these highest-energy photons is subtracted from the original electron and positron spectra. The procedure is then repeated for progressively smaller photon energies. A critical quantity that has to be defined in this procedure is the size of the energy bin over which the estimation of the number of electrons and positrons is carried out. Intuitively, a small energy bin will result in higher energy resolution but will contain fewer particles. To identify the ideal energy bin size, one should then first consider the intrinsic energy resolution of the spectrometer. In the ultra-relativistic limit, this can be expressed as 30: where L_S, L_D, and L_B are geometrical quantities defined in Fig. 2, E_e is the particle energy, B is the magnetic field strength, and θ_e is the particle divergence at that energy. From Eq. 2, the divergence of the electrons and positrons is dictated by the divergence of the primary photon beam and the spreading induced by the pair-production process in the converter. For a 1 GeV photon, this spread is ≈1/γ_e ≤ 0.5 mrad (down to 50 μrad at 10 GeV). As an example we plot, in Fig. 4, the energy-dependent resolution of the spectrometer for the aforementioned parameters. Neglecting for now the pixel size of the detectors, an ideally collimated photon beam (no divergence) could in principle be spectrally resolved with a relative uncertainty in energy of ≤1%. This results from the divergence (θ_e in Eq. 3) induced by the pair-production process inside the converter. A more realistic photon beam divergence in the mrad range will result in spectral resolutions of the order of 10% at 10 GeV (1% at 1 GeV).
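The subtraction procedure just described can be written down compactly. The sketch below is a minimal Python illustration under the same idealisations used earlier (a flat per-photon pair spectrum with the fitted 1.7 × 10^−3 E^−0.93 height); the function and variable names are hypothetical, and a real implementation would also fold in the spectrometer dispersion and the detector response.

```python
import numpy as np

def reconstruct_photon_spectrum(pair_counts, bin_edges_gev, thickness_factor=1.7e-3):
    """Iteratively unfold the photon spectrum from a binned electron (or positron)
    spectrum, assuming each photon of energy E produces a flat pair spectrum of
    height thickness_factor * E**-0.93 pairs/GeV between 0 and E (thin converter)."""
    counts = pair_counts.astype(float).copy()
    widths = np.diff(bin_edges_gev)
    centers = 0.5 * (bin_edges_gev[:-1] + bin_edges_gev[1:])
    photons = np.zeros_like(counts)
    # Start from the highest-energy bin: only photons in that bin can populate it.
    for i in range(len(counts) - 1, -1, -1):
        E = centers[i]
        per_photon_in_bin = thickness_factor * E**-0.93 * widths[i]
        photons[i] = max(counts[i], 0.0) / per_photon_in_bin
        # Subtract the (flat) contribution of these photons from all lower bins.
        counts[:i] -= photons[i] * thickness_factor * E**-0.93 * widths[:i]
    return photons  # photons per bin

# Example with a 250 MeV binning between 1 and 10 GeV and fake pair counts:
edges = np.arange(1.0, 10.25, 0.25)
fake_pairs = np.random.poisson(1e3, size=len(edges) - 1).astype(float)
print(reconstruct_photon_spectrum(fake_pairs, edges)[:5])
```

In such a procedure, the bin width is bounded from below by the spectral resolution just estimated (roughly 1% at 1 GeV and 10% at 10 GeV for a mrad-divergence beam).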
It is thus not meaningful to choose energy bin sizes smaller than these quantities. In the example below, the bin size for the gamma-ray reconstruction is kept constant throughout the spectrum at 250 MeV. This is an idealised case to show the performance of the spectrometer, with the understanding that the energy binning size is strongly dependent on the specific setup to be adopted, as it is influenced, for instance, by the divergence of the photon beam to be measured and the physical size of the detectors' pixels. As an example of the effectiveness of the spectrometer, we show here the performance of the proposed design in measuring the structured spectra of photon beams expected from Compton scattering in the LUXE experiment (Fig. 7 in ref. 17), where the 17.5 GeV electron beam from the Eu-XFEL will be collided with a focussed laser pulse with a maximum dimensionless intensity a_0 of the order of 2. We choose two cases: non-linear (a_0 = 2) and linear (a_0 = 0.2) Compton scattering spectra (Fig. 7 of ref. 17 and red lines in Fig. 5b,d, respectively). Photon beams with such spectra are sent, using the Monte-Carlo code FLUKA, through the system sketched in Fig. 2, and the resulting electron and positron spectra at the detector plane are recorded. The reconstruction algorithm is then applied to these electron and positron spectra to retrieve the original gamma-ray spectrum. The results are shown in Fig. 5, where the electron and positron spectra obtained from FLUKA simulations are shown in Fig. 5a,c for the cases a_0 = 2 and a_0 = 0.2, respectively, and the corresponding predictions of the reconstruction algorithm are compared with the original gamma-ray spectra in Fig. 5b,d. In this example, the maximum divergence of the Compton-scattered photons is obtained in the non-linear regime (a_0 > 1) 7,8 and is of the order of θ_γ ≈ a_0 · m_e c^2/E_e ≈ 60 μrad, corresponding to an energy resolution at the percent level (Fig. 4). The algorithm yields an accuracy in photon yield of the order of 20% (see Fig. 6). These values are mostly due to the reconstruction algorithm and the size of the energy bin considered, and should be combined with the uncertainty arising from the electron and positron detection systems. However, it is clear from Fig. 5 that the system is able to precisely identify distinctive features in the spectra, such as the linear Compton edge and the different levels of perturbative non-linear contributions (labelled in Fig. 5b). It should be understood that additional aspects, specific to the particular setup adopted, will also factor into the energy resolution of the system. For instance, care has to be taken in choosing the electron and positron detectors, since the detector pixel size, together with the divergence of the photon beam to be spectrally resolved, will constrain the energy resolution and the amount of signal detected per pixel. From Fig. 3c, one can see that approximately 10^−4-10^−3 particles/primary photon/cm^2 will be incident on the detectors, implying 10^5-10^6 particles/cm^2 for a realistic primary photon beam containing 10^9 photons. If we assume the idealised case of a pencil-like photon beam, the spectrometer could in principle reach down to a resolution of the order of 1% (i.e., 100 MeV at 10 GeV, see Fig. 4).
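A rough feeling for how this resolution maps onto detector pixels can be obtained from a simplified thin-magnet dispersion estimate. The sketch below is not the expression of Eq. 3 or the full Fig. 2 geometry; it uses only the standard p[GeV/c] ≈ 0.3 B[T] R[m] relation and a single drift, so it is expected to agree with the quoted pixel sizes only to within a geometry-dependent factor of order unity.

```python
import numpy as np

# Simple dipole-spectrometer dispersion estimate (thin-magnet approximation):
# a particle of energy E (GeV) in field B (T) over magnet length L_B (m) is deflected
# by theta ~ 0.3 * B * L_B / E and lands at x ~ theta * (L_D + L_B / 2) on the detector.
def detector_position_m(e_gev, B=0.5, L_B=1.0, L_D=4.0):
    theta = 0.3 * B * L_B / e_gev
    return theta * (L_D + 0.5 * L_B)

# Pixel size along the dispersion axis needed to resolve a relative energy step dE/E:
def pixel_size_for_resolution(e_gev, rel_resolution, **geom):
    x1 = detector_position_m(e_gev, **geom)
    x2 = detector_position_m(e_gev * (1 + rel_resolution), **geom)
    return abs(x1 - x2)

for e in [1.0, 5.0, 10.0]:
    dx = pixel_size_for_resolution(e, 0.01)
    print(f"E = {e:4.1f} GeV: ~{detector_position_m(e)*100:.1f} cm from axis, "
          f"1% step ~ {dx*1e3:.2f} mm")
```

Even this crude estimate gives sub-millimetre position steps for a 1% energy step at 10 GeV, in line with the pixel sizes quoted for the detector considered next.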
To guarantee this energy resolution at 10 GeV, one would need a pixel size along the dispersion axis of the spectrometer of approximately 500 μm, easily attainable with modern scintillators (approximately 1.3 mm for the example of a 250 MeV energy binning). We can then assume a 20 cm × 2 cm detector with 500 μm × 1 cm pixels (800 pixels in total). In this case, the detector will receive a measurable quantity of the order of 10^4 particles per pixel. Due to the non-linear energy dispersion of the dipole magnet, it is however not necessary to keep a constant pixel size throughout the detector: larger pixels can be used at the low-energy end, thus reducing the overall number of pixels required. As a final remark, the spectrometer design is virtually transparent to the gamma-ray beam (≥99% of the photons propagate unperturbed through the system), allowing downstream profiling and calorimetry to be fielded simultaneously with the spectrometer (sketched in Fig. 2). Conclusions In conclusion, we report on a conceptual design for a gamma-ray spectrometer, specifically designed for high fluxes and high energies. The system exploits the approximately flat spectral distribution of the electron/positron pairs generated during the propagation of the gamma-ray beam through a thin high-Z converter. A possible design is presented for the LUXE experiment, showing the capability of spectrally resolving gamma-ray beams in an energy range between 1 and 10 GeV, with a signal-to-noise ratio exceeding 10, a spectral resolution of the order of a few percent, and an accuracy in predicting the gamma-ray yield of the order of 20%. It is proposed that similar setups could be used to spectrally resolve high-flux and high-energy gamma-ray beams in a compact configuration, yielding valuable information in high-energy physics and ultra-high-intensity laser experiments. Figure 6. Accuracy in yield of the spectrometer. (a) Relative difference between the input gamma-ray spectrum and the prediction of the reconstruction algorithm as a function of energy for the case shown in Fig. 5b and a constant energy bin size of 250 MeV. (b) The distribution of the residuals is reasonably approximated by a Gaussian distribution with a standard deviation of 23%.
4,442.2
2020-06-18T00:00:00.000
[ "Physics" ]
Resolution of the exponent puzzle for the Anderson transition in doped semiconductors The Anderson metal-insulator transition (MIT) is central to our understanding of the quantum mechanical nature of disordered materials. Despite extensive efforts by theory and experiment, there is still no agreement on the value of the critical exponent $\nu$ describing the universality of the transition --- the so-called"exponent puzzle". In this work, going beyond the standard Anderson model, we employ ab initio methods to study the MIT in a realistic model of a doped semiconductor. We use linear-scaling DFT to simulate prototypes of sulfur-doped silicon (Si:S). From these we build larger tight-binding models close to the critical concentration of the MIT. When the dopant concentration is increased, an impurity band forms and eventually delocalizes. We characterize the MIT via multifractal finite-size scaling, obtaining the phase diagram and estimates of $\nu$. Our results suggest an explanation of the long-standing exponent puzzle, which we link to the hybridization of conduction and impurity bands. The Anderson metal-insulator transition (MIT) is the paradigmatic quantum phase transition, resulting from spatial localization of the electronic wave function due to increasing disorder [1]. As for any such transition, universal critical exponents capture its underlying fundamental symmetries. This universality allows to disregard microscopic detail and the Anderson MIT is expected to share a single set of exponents. The last decade has witnessed many ground-breaking experiments designed to observe Anderson localization directly: with light [2][3][4][5][6][7][8][9][10][11], photonic crystals [9,12], ultrasound [13,14], matter waves [15], Bose-Einstein condensates [16] and ultracold matter [17,18]. The mobility edge [19], separating extended from localized states, was only measured for the first time in 2015 [20]. The hallmark of these experiments is the tunability of the experimental parameters and the ability to study systems where many-body interactions are absent or can be neglected. Under such controlled conditions, the observed exponential wave function decay, the existence of mobility edges and the critical properties of the transition [21,22] are in excellent agreement with the non-interacting Anderson model [1]. Furthermore, scaling at the transition [23] leads to highprecision estimates of the universal critical exponent ν from transport simulations (ν = 1.57 (1.55, 1.59) [24]) and wave function statistics (ν = 1.590(1.579, 1.602) [25]). Anderson's original challenge was to describe localization in doped semiconductors. For these ubiquitous materials, the existence of the MIT was confirmed indirectly by measuring the scaling of the conductance σ ∼ (n − n c ) ν when increasing the dopant concentration n beyond its critical value n c . However, a puzzling discrepancy remains: a careful analysis by Itoh et al. [26] highlights that the value of ν can change significantly with the control of dopant concentration around the transition point, the homogeneity of the doping, and the purity of the sample. Following Stupp et al. [27], they suggest that the intrinsic behaviour of an uncompensated semiconductor gives ν ≈ 0.5 [28], while any degree of compensation results in ν ≈ 1 [29]. Evidently, these values disagree with the aforementioned theoretical and experimental studies. 
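In transport experiments, ν is typically obtained by fitting the conductance scaling σ ∼ (n − n_c)^ν on the metallic side of the transition. The sketch below (Python/SciPy) illustrates such a fit on synthetic data; the numbers are made up for illustration and are not data from any of the cited experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

# sigma ~ sigma0 * (n - n_c)^nu on the metallic side of the transition
def sigma_model(n, sigma0, n_c, nu):
    return sigma0 * np.clip(n - n_c, 1e-12, None) ** nu

# Synthetic "measurements" (purely illustrative values):
rng = np.random.default_rng(0)
n = np.linspace(3.6, 6.0, 25)            # dopant concentration, in units of 1e20 cm^-3
true_params = (1.0, 3.5, 1.0)            # sigma0, n_c, nu
sigma = sigma_model(n, *true_params) * rng.normal(1.0, 0.05, n.size)

popt, pcov = curve_fit(sigma_model, n, sigma, p0=(1.0, 3.4, 0.8))
errs = np.sqrt(np.diag(pcov))
print(f"n_c = {popt[1]:.2f} +/- {errs[1]:.2f}, nu = {popt[2]:.2f} +/- {errs[2]:.2f}")
```

As the discussion above makes clear, in real samples the fitted ν is sensitive to the fitting window around n_c, the homogeneity of the doping, and the degree of compensation, which is precisely the origin of the puzzle.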
The inability to characterize the Anderson transition in terms of a single, universal value for ν is known as the "exponent puzzle" [27,30]. Most theoretical models that have been applied to this problem lack the ability to capture the full complexity of a semiconductor. The Anderson model, for example, ignores the detail of the crystal lattice and the electronic structure, and also simplifies the physics by ignoring many-body interactions and interactions between dopant and host material. These factors are known to change the universal behavior [31,32] and the value of ν, as shown in studies on correlated disorder [33][34][35] and hydrogenic impurities in an effective medium, where ν ≈ 1.3 [36,37]. Here we propose a fundamental shift from studying localization using highly-simplified tightbinding Anderson models, to atomistically correct ab initio simulations [38,39] of a doped semiconductor. We illustrate the power of our approach for sulfur-doped silicon, Si:S, where the MIT occurs for concentrations in the range 1.8-4.3 × 10 20 cm −3 [40]. We model the donor distribution in Si:S by randomly placing the impurities in the lattice [41]. While we concentrate on Si:S here, our method is straightforwardly applicable to other types of impurities (Si:P; Si:As; Ge:Sb), hole doping (Si:B) and co-doping (Si:P,B; Ge:Ga,As). With this approach we observe the formation of the impurity band (IB), upon increasing n, and its eventual merger with the conduction band (CB). States in the IB become delocalized, as measured directly via multifractal statistics of wave functions [42], and we observe and characterize the MIT. In Fig. 1 we plot how n c and ν vary for energies ε in the IB below the Fermi energy ε F . For ε ∼ ε F , the values are ν ∼ 0.5, while deeper in the IB the exponents increase to about ν ∼ 1, reaching values around 1.5. As we will show below, our simulations of an uncompensated semiconductor suggest that the reduction in ν at ε F is due to the hybridization of IB and CB. Deep in the IB the physics of the Anderson transition reemerges with ν reaching the range of its proposed universal value [24,37,43]. Experiments can readily access these higher values by moving ε F via compensation [26] -intentional or otherwise. ization [38] and discovery [39]. With the choice of Si:S, we can observe the transition in systems of up to 11 × 11 × 11 unit cells, i.e., 10648 atoms. These large system sizes can in principle be reached by linear-scaling DFT [44], but despite this, the necessity to average over many hundreds of disorder realizations, makes repeated DFT calculations impractical for our purposes [45]. We therefore devise a hybrid approach: linearscaling DFT calculations are performed, using the ONETEP code [46], on prototype systems of 8 × 8 × 8 diamond-cubic unit cells (4096 atoms), employing geometry optimization to allow for the lattice to accommodate single or multiple S impurities. We include nine in-situ-optimised non-orthogonal local orbitals for each site (in atomic Si, atomic orbitals are occupied up to level 3p; for better convergence we additionally consider the five 3d orbitals). When embedded in silicon, sulfur, like the other chalcogens, acts as a deep donor. Such defects have highly localized potentials that are well-described in a local orbital basis [47]. The resulting Hamiltonians and overlap matrices, represented in terms of the nonorthogonal local orbital basis, are used to construct three catalogs of local Hamiltonian blocks (cf. Fig. 2). 
For each system size L, concentration of impurities n, and disorder realization, we build the effective tight-binding Hamiltonians H and overlap matrices O from these catalogs (cf. Fig. 2) and solve the large generalized eigenvalue problem [48,49] Hψ_j = ε_j Oψ_j, j = 1, ..., 9L^3, (1) for eigenenergies ε_j and normalized eigenvectors ψ_j (for the largest systems, this results in tight-binding matrices of size 95832 × 95832). We average over up to 1000 different disorder realizations for each L and n (cf. Table I). Characterizing the IB and its DOS is interesting in view of its spin and charge transport properties [50,51]. We compute the density of states (DOS) of the IB from the ε_j's while changing the number of impurities N_S. We define ε_F as the midpoint between the highest occupied IB state at energy ε_IB and the lowest unoccupied CB state at ε_CB. To obtain the average DOS for given N_S and L, we shift the spectrum of each realization such that ε_F = 0. The DOS shown in Fig. 3 is calculated by summing over Gaussian distributions of standard deviation σ = 0.05 mHa = 1.36 meV centered on ε_j − ε_F. We find that the IB has a peak at ε − ε_F ∼ −0.1 eV and a tail extending towards the VB with increasing n. This agrees with known features of the IB in doped semiconductors [52]. We emphasize that Si:S is particularly interesting for intermediate-band photovoltaic devices, where the efficiency increases when deep IB states can capture low-energy photons [50]. In order to avoid electron-photon recombination, the IB states should be delocalized such that they can contribute to the photocurrent. The determination of n_c and the pronounced tail of the IB as presented in Fig. 3 therefore provide essential information for future device applications. CHARACTERIZING THE METAL-INSULATOR TRANSITION In the last decade, multifractal analysis [42,53,54] has become the method of choice to reliably and accurately extract the localization properties from wave functions [14,25,55]. In essence, it describes the scaling of various moments of the spatial distribution of |ψ_j|^2, which is encoded in the singularity strengths α_q. At criticality, the universality class of the transition determines the scaling of α_q with n and L. We capture this behavior using the well-established framework of finite-size scaling. Following [25], we assume that the data for each L meet at the critical point w = 0 with a value α_q^crit, and scale polynomially with ρL^(1/ν), where ρ(w) = w + Σ_{m=2}^{m_ρ} b_m w^m includes higher-order dependencies on the dimensionless concentration w = (n − n_c)/n_c. We hence fit the data against the scaling function α_q(n, L) of [56], with n_c, ν, α_q^crit, the a_i's, and the b_i's as fitted parameters, and m_L and m_ρ as expansion orders [57]. We illustrate the localization and scaling properties of the wave functions using the moments α_0 and α_1. Figure 1 shows the results of the fits as ε is varied, obtained from (2) (see Tables 1 and 2 [57]). They include systems with up to L^3 = 10648 atoms, varying n and the number of realizations for each L as given in Table I. Crucially, we only accept estimates of n_c and ν after consistently and rigorously checking their robustness against perturbations in n and their stability when increasing m_L and m_ρ [24,25,57]. Following this recipe, we identify the Anderson MIT and reconstruct the energy dependence of the mobility edge n_c(ε) in Si:S. It exhibits (i) a maximum close to ε_F and a decrease until ε − ε_F ≈ −0.09 eV.
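For small test systems, the generalized eigenvalue problem (1) and the Gaussian-broadened DOS can be prototyped directly with dense solvers. The sketch below (Python/SciPy) is illustrative only: the matrices here are random stand-ins rather than catalog-built Hamiltonians, and the production-scale problems require the dedicated large-scale solvers of [48,49]. In the actual analysis, each realization's spectrum is additionally shifted so that ε_F = 0 before averaging.

```python
import numpy as np
from scipy.linalg import eigh

def solve_and_dos(H, O, energies_ev, sigma_ev=0.00136):
    """Solve H psi = eps O psi and return a Gaussian-broadened DOS on an energy grid.
    The default sigma_ev = 1.36 meV matches the broadening quoted in the text."""
    eps, psi = eigh(H, O)                       # generalized, Hermitian eigenproblem
    dos = np.zeros_like(energies_ev)
    for e in eps:
        dos += np.exp(-0.5 * ((energies_ev - e) / sigma_ev) ** 2)
    return eps, dos / (sigma_ev * np.sqrt(2 * np.pi))

# Toy example with a random symmetric H and a positive-definite overlap O:
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
B = rng.normal(size=(200, 200))
O = B @ B.T + 200 * np.eye(200)                 # well-conditioned overlap matrix
grid = np.linspace(-30, 30, 500)
eps, dos = solve_and_dos(H, O, grid, sigma_ev=0.5)
print(eps[:3])
```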
(ii) For lower energies, n_c increases again and the mobility edge moves towards the tail of the IB (cf. Fig. 3). These findings suggest a natural split into two different regimes, as also seen in the energy dependence of ν. Values of ν in regime (i) increase continuously from ν ≈ 0.5 at ε_F to about ν ∼ 1. In regime (ii), we find a larger spread of values, 1 ≲ ν ≲ 1.5. This spread is consistent with the statistical uncertainty of each estimated ν in Fig. 1, which is dominated by the range of L and the ensemble size N (cf. Tab. I). However, the trend in ν observed in regime (i) requires a different explanation. HYBRIDIZATION OF IMPURITY AND CONDUCTION BAND STATES In Fig. 4, we present the distribution of states resolved in both energy ε and α_0. Perfectly extended states correspond to α_0 = 3, while increasing localization results in α_0 → ∞. The data for N_S = 40 (n = 4.9 × 10^20 cm^−3) show metallic states of the CB with α_0 ≈ 3 at ε ≈ ε_F. The IB is characterized by (i) a majority region of states with α_0 ≈ 3 (the metallic limit) close to ε_F. (Figure 4 caption: for N_S = 40 we show the density plot of the distribution, from blue for low to red for high density, see color scale, together with the contour lines enclosing 68% (white) and 95% (black) of the α_0's; for N_S = 100 we indicate the same contours (red, dashed); as in Fig. 3, the shading denotes the delocalized region, in the L → ∞ limit, according to Fig. 1.) This observation is intriguing when set against the simultaneous decrease in the value of ν at ε ∼ ε_F (cf. Fig. 1). Apparently the localization of the IB states is substantially modified by the presence of the states from the CB. In Fig. 5, we show the α_0 data for N = 4096 as a function of ε and n. For small impurity concentrations, the IB consists of localized states with some of the largest values of α_0 ∼ 3.6, while the CB contains delocalized states with α_0 ≳ 3. Upon increasing n, the IB develops and its states become more delocalized. Initially, this trend is most pronounced where the DOS of the IB is large (see Fig. 3), i.e. around ε − ε_F ∼ −0.12 eV. Simultaneously, states at the top of the IB exhibit α_0 values close to those denoting extended states in the CB, even before the band gap has fully closed. When reliable scaling is possible, we eventually see how the two mobility edges emerge. At the lower mobility edge, we find values of ν ∼ 1-1.5. At the upper mobility edge, we observe lower estimates for ν, coinciding with lower α_0 values at the transition due to the strong hybridization of IB and CB. Let us discuss how this observed hybridization and the resulting enhanced metallic behavior can affect the value of ν. The leading scaling behavior from (2) is α_0^crit − α_0 ∼ wL^(1/ν) for w > 0. A decrease in the effective α_0 yields an increase in α_0^crit − α_0, which is consistent with a reduced exponent ν, as observed in Fig. 1 for (ε − ε_F) ≳ −0.1 eV. An argument similar to the famous "Gang of Four" result [23] can be made directly for the transport experiments, where an increase in σ ∼ w^ν for 0 ≤ w ≪ 1, i.e. close to the critical point, is also consistent with a reduced ν. It has been shown [26] that in experiments a change from ν ≈ 0.5 to ν ∼ 1 can be induced by compensation.
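The leading-order form of this finite-size-scaling analysis can be illustrated with a short fit. The sketch below (Python/SciPy) keeps only the lowest-order term, α_q(n, L) ≈ α_q^crit + a·w·L^(1/ν), and fits synthetic data; the actual analysis uses the full polynomial expansions in ρ and L of [56,57] together with the stability checks described above, so this is a toy illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Leading-order finite-size scaling: alpha_0(n, L) = alpha_crit + a * w * L^(1/nu),
# with w = (n - n_c) / n_c (higher-order terms of the full expansion are dropped).
def alpha_fss(X, alpha_crit, a, n_c, nu):
    n, L = X
    w = (n - n_c) / n_c
    return alpha_crit + a * w * L ** (1.0 / nu)

# Synthetic data for a few system sizes (illustrative values only):
rng = np.random.default_rng(2)
L_vals = np.array([6, 8, 10, 11])
n_vals = np.linspace(3.0, 5.0, 15)
L_grid, n_grid = np.meshgrid(L_vals, n_vals)
true_params = (3.8, -0.3, 4.0, 1.0)               # alpha_crit, a, n_c, nu
alpha = alpha_fss((n_grid.ravel(), L_grid.ravel()), *true_params)
alpha += rng.normal(0, 0.01, alpha.size)

popt, _ = curve_fit(alpha_fss, (n_grid.ravel(), L_grid.ravel()), alpha,
                    p0=(3.7, -0.25, 3.9, 0.9))
print(f"n_c = {popt[2]:.2f}, nu = {popt[3]:.2f}")
```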
Taken together, compensation and band hybridization provide two important pieces to complete the "exponent puzzle": modelling the Anderson transition in doped semiconductors needs to include the CB (VB for hole-doped materials) together with the IB provided by the Anderson modelthe experiments obviously include both and hence find ν values which, depending on their state of compensation, are occasionally different from the predictions based solely on the Anderson model of the IB. How exactly the hybridization changes the value of ν, as well as whether the value of ν deep in the IB is different from the non-interacting predictions, remain challenges for future high-precision studies. Still, the approach we present here exploits and transfers the accuracy and versatility of modern ab initio simulations to the study of Anderson localization in doped semiconductors-at a fraction of the computational cost. Beyond bulk semiconductors, other disordered systems [59], 2D [60][61][62] and layered materials [63] are also well within reach, as is the investigation of the influence of many-body physics by, e.g., studying the interaction-enabled MIT in 2D Si:P [64,65]. We find that the critical concentration agrees quantitatively with a previous experiment in Si:S by Winkler et al. [40]. Our approach is hence capable of modeling fundamental physical phenomena while also mak-ing material-specific predictions. METHODS Our simulations use the ONETEP linear-scaling DFT package [46] with the PBE exchange-correlation functional [66]. We include nine orbitals of radius 10 Bohr radii on each site, and a psinc grid with an 800 eV plane-wave cutoff [67]. This gives an accuracy equivalent to plane-wave DFT for Si and other materials [68]. The first catalog of local Hamiltonian blocks describes the Si host material, i.e., a set of onsite energies and hopping terms, starting at a central Si atom and extending to 10 shells of Si neighbors. The second corresponds to the energies and hopping terms when the central atom is S, and the third catalog to pairs of neighboring S atoms. Here, we define a "neighbor" as being at most 4 shells apart. If two S atoms are 5 or more shells apart, each S atom is unaffected by the presence of the other [57]. The impurity distribution is generated by randomly substituting the impurity atoms onto lattice sites. This follows the experimental techniques used to achieve high S concentrations, combining ion implantation with nanosecond pulsed-laser melting and rapid resolidification [40]. The impurities are effectively trapped in the substitutional sites [41]. With φ α denoting the non-orthogonal local orbitals, we write the eigenvectors ψ j = α M α j φ α of Eq. (1) in a "site" basis by summing over the nine orbital coefficients of each site k, i.e. |Ψ j k | 2 = α∈k,β M α j O αβ M β j . We coarse grain Ψ by fixing a box size l < L and partitioning the domain in (L/l) 3 = λ −3 boxes. The amplitudes of the coarsegrained wave function µ are given by µ s = k∈B s |Ψ k | 2 , i.e. by summing all |Ψ k | 2 pertaining to the same box B s . After rescaling the amplitudes as log µ s / log λ, we compute their arithmetic mean α 0 = log µ s s / log λ and weighted mean α 1 = s µ s log µ s / log λ (proportional to the von Neumann entropy [69]). Finally, for each n and L we take the ensemble average α q (n, L), where q = 0 or 1. Further details on the DFT simulations and the scaling analysis can be found in [57].
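The box coarse-graining and the two singularity strengths defined in the Methods translate almost line-by-line into code. The sketch below assumes the per-site weights |Ψ_k|^2 have already been assembled on an L × L × L grid and normalized; in the actual analysis these weights are computed from the non-orthogonal eigenvectors and the overlap matrix as described above, and α_0, α_1 are then ensemble-averaged over disorder realizations.

```python
import numpy as np

def singularity_strengths(psi2, l):
    """Coarse-grain per-site weights |Psi_k|^2 (shape (L, L, L), summing to 1)
    into boxes of linear size l and return (alpha_0, alpha_1) for lambda = l / L."""
    L = psi2.shape[0]
    lam = l / L
    # Sum |Psi_k|^2 within each l x l x l box -> box weights mu_s.
    mu = psi2.reshape(L // l, l, L // l, l, L // l, l).sum(axis=(1, 3, 5)).ravel()
    log_lam = np.log(lam)
    alpha_0 = np.mean(np.log(mu)) / log_lam         # arithmetic mean of log mu_s
    alpha_1 = np.sum(mu * np.log(mu)) / log_lam     # weighted mean (~ von Neumann entropy)
    return alpha_0, alpha_1

# Toy example: a random, normalized weight distribution on a 12^3 grid.
rng = np.random.default_rng(3)
w = rng.random((12, 12, 12))
w /= w.sum()
print(singularity_strengths(w, l=3))
```

For an essentially uniform random distribution the sketch returns α_0 ≈ 3, the extended-state limit quoted above, while strongly peaked (localized) weight distributions push α_0 to larger values.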
4,153.8
2017-10-04T00:00:00.000
[ "Physics" ]
Fourth-Order Neutral Differential Equation: A Modified Approach to Optimizing Monotonic Properties : In this article, we investigate some qualitative properties of solutions to a class of functional differential equations with multi-delay. Using a modified approach, we first derive a number of optimized relations and inequalities that relate the solution x ( s ) to its corresponding function z ( s ) and its derivatives. After classifying the positive solutions, we follow the Riccati approach and principle of comparison, where fourth-order differential equations are compared with first-order differential equations to obtain conditions that exclude the positive solutions. Then, we introduce new oscillation conditions. With regard to previous relevant results, our results are an extension and complement to them. This work has theoretical significance in that it uncovers some new relationships that aid in developing the oscillation theory of higher-order equations in addition to the applied relevance of neutral differential equations. Introduction Diverse areas of pure and applied mathematics, physics, and engineering all involve the study of differential equations (DEs); these disciplines are all interested in the characteristics of various forms of DEs.Applied and pure mathematics emphasize the existence and uniqueness of solutions, as well as the precise justification of the methods for approximate solutions.Nearly every physical, technological, or biological process, including celestial motion, bridge design, and neuron interactions, is modeled in large part using DEs.DEs that are intended to address real-world issues may not always be directly solved; for instance, they may lack closed-form solutions.Alternative methods to this include approximating the results using numerical techniques [1]. Understanding these problems and events requires knowing how these equations are solved.However, DEs used to address real-world issues may not always be directly solvable, that is, they may not have closed-form solutions (see [2,3]).For this reason, the study of the qualitative theory, which is concerned with differential equation behavior through methods other than finding solutions, has been highly utilized.It evolved from Henri Poincaré's and Alexandre Lyapunov's works.Although there are relatively few DEs that can be solved directly, one can "solve" them in a qualitative sense by employing techniques from analysis and topology to learn more about their characteristics [4]. Fourth-order delay DEs are used to numerically depict biological, chemical, and physical phenomena.Problems of elasticity, the deformation of structures, or soil settlement are a few examples of these applications.A fourth-order oscillatory equation with delay can be used to simulate the oscillatory traction of a muscle, which occurs when the muscle is subjected to an inertial force [5]. 
In the theory of linear DEs, the oscillation theory has numerous significant applications.It can be used, for instance, to examine the stability of solutions to linear DEs.The theorem can be used, in particular, to demonstrate the lack of nontrivial solutions that converge to zero as time reaches infinity.In order to analyze eigenvalue problems, the oscillation theorem is very useful.The Schrodinger equation in quantum physics is one example of a DE whose eigenvalues and eigenfunctions can be studied using the theorem.The oscillation theorem can be used to estimate the number of eigenvalues present in a particular interval and to learn more about how eigenfunctions behave [6,7]. As a solution to (1), we represent a real-valued function x that is four times differentiable and satisfies (1) for all sufficiently large s.Our attention is restricted to those solutions of (1) that satisfy the condition sup{|x(s)| : s ≥ U} > 0, for all U ≥ s 0 .A solution x to (1) is referred to as oscillatory or non-oscillatory depending on whether it is essentially positive or negative.If all of the solutions to an equation oscillate, the equation is said to be oscillatory. In order to understand the asymptotic and oscillatory behavior of solutions to neutral DEs, it is crucial to understand the relationship between the solution x and its associated function z.The authors were able to determine a number of additional criteria that simplified and enhanced their previous research findings as a result of this relationship. We now list some of the relationships that have been inferred in the literature. For the second order, in the canonical case, the usual relationship is typically employed, and in the noncanonical case, the relationship is usually used (see [21,22]).Moaaz et al. [23] took into account the oscillatory behavior of r(s) z (s) where β is a quotient of odd positive integers and ∈ Z + .As an improvement to (3), they offered the following relationships: , for p > 1 and n ∈ Z + is even, and , for p < 1, and n ∈ Z + is odd, where τ [h] (s) = τ τ [h−1] (s) , for h = 1, 2, . . ., 2m. In their study of the equation (r(s)(z (s)) γ ) + q(s)x γ (σ(s)) = 0, Bohner et al. [25] created the new relation where where τ [0] (s) = s and τ [j] (s) = τ τ j−1 (s) for all j ∈ N, which is an improvement of (4).Also, they added additional oscillation criteria that, in essence, improve a number of pertinent criteria from the literature.In order to oscillate for the solutions to neutral nonlinear even-order DEs with variable coefficients of the form where f (x) is a continuous function, several sufficient conditions are found by Zhang et al. [26].Agarwal et al. [27] studied the oscillatory behavior of the equation and created criteria that improve the results published in the literature.Moaaz et al. [28] tested the oscillation of (r(s)(z (s)) γ ) + q(s)x γ (σ(s)) = 0. Using an iterative method, they were able to develop a new criterion for the nonexistence of the so-called Kneser solutions.Also, they employed a variety of techniques to find various criteria.Using the relation they improved (3).By using some inequalities and the Riccati transformation method, Muhib et al. [29] established some improved criteria for the equation without necessitating the existence of unknown functions; where β is a quotient of odd positive integers. and where dv, Q := min q j (s) for j = 1, 2, . . ., , . Then, (8) is oscillatory. 
They found new properties that enable them to use more effective terms.To obtain criteria that excluded the positive decreasing solutions, they used the general form of Riccati and the comparison approach. Lemma 1 ([31]).Assume that φ ∈ C n ([s 0 , ∞), (0, ∞)).If the derivative φ (n) (s) is eventually of one sign for all large s, then there exists an s x such that s x ≥ s 0 and an integer l, 0 ≤ l ≤ n, with n + l even for φ (n) (s) ≥ 0, or n + l odd for φ Lemma 2 ([32]).Let γ be a ratio of two odd positive integers.Then For multi-delay functional DEs, we refer to some qualitative aspects of solutions.We begin by deriving a set of optimized relations and inequalities that connect the solution x(s) to its corresponding function z(s) and its derivatives using a modified methodology.Once the positive solutions have been categorized, we use the Riccati technique and the comparison, where fourth-order DEs are compared with first-order DEs to create criteria that exclude the positive solutions.Then, we provide new oscillation conditions.The new results add to and complete the previous relevant results.In addition to the practical value of neutral DEs, this study has theoretical value in that it uncovers some novel relationships that help advance the oscillation theory of higher-order equations. In order to obtain the cases (1) and ( 2) for the function z(s) and its derivatives, we use Lemma 2.2.1 in [33]. Remark 1.By using the notation B 1 , we can identify the class of all eventually positive solutions whose corresponding functions satisfy Case (1). This implies z (s) Applying this information, we determine that Hence, This completes the proof. Proof.Assume that x ∈ B 1 and that p 0 < 1.We determine that As a result, (13) becomes which, when combined with (1), yields (19).Conversely, suppose that p 0 > 1.The definition of z(s) implies that , and so on.As a result, we arrive at Thus, are the result of the fact that s ≤ τ [−2i+1] (s), z (s) > 0 and (z(s)/π 2 (s)) ≤ 0. Inequality (20) then changes to as a result, which, when combined with (1), yields (19).This completes the proof.Now, using the Riccati approach, we obtain the following theorem: then B 1 = ∅. Integrating the aforementioned inequality from s 0 to s, we obtain Using Equation ( 21), we encounter a contradiction.This completes the proof. In the results that follow, the monotonic features of the solutions in class B 1 are enhanced, and better criteria are then reached to support the claim that B 1 = ∅, for which the following notations will be used: Lemma 7. Assume that x ∈ B 1 .Then, eventually, z (s) Proof.Assume that x ∈ B 1 .From (19), we obtain where ω(s) = r 1/γ (s)z (s).By integrating the aforementioned inequality from s 0 to s, we obtain From (18), we obtain Combining ( 26) and ( 27), we obtain Multiplying this inequality by we obtain z (s) Using this fact, we obtain This implies z (s) Hence, Now, the connection (13) becomes x(s) > p(σ(s), m)z(s). The proof is therefore complete. Now, using a comparison principle, we obtain the following theorem: for any m ≥ 0, then B 1 = ∅. Proof.Assume that x ∈ B 1 .From Lemma 7, we arrive at where ω(s) = r 1/γ (s)z (s), just as we had in the proof of Lemma 7. 
By integrating (31) twice from s 0 to s, we obtain By changing (32) into (25), we arrive to the conclusion that By setting Ω = r(s)(z (s)) γ , we can show that Ω is a positive solution to the inequality However, condition (30) confirms the oscillation of all solutions to (33), which is in disagreement with [35] (Theorem 2.1.1).The proof is therefore complete. Application in Oscillation Theory and Discussion Finding conditions that individually rule out each case of the derivatives of the solution is necessary to determine the oscillation criterion.In this theorem, the criterion for testing oscillation for (1) will be formed by combining the conditions that are obtained to rule out the existence of solutions that satisfy Case (1) with the conditions that are acquired in the literature to rule out Case (2) of the derivatives of the solution. Proof.Assume that x(s) is to be an eventually positive solution to (1).Lemma 3 has a solution that satisfies either of the possibilities of Case (1) or Case (2).B 1 = ∅ is obtained by applying Theorems 2-4.Case (2) is thus valid.We finally come to a contradiction with (10) in the exact same way as [29] (Theorem 2).This completes the proof. We define p It is easy to confirm that p(σ(s), m) = φ.Also, Using Theorem 2 and choosing λ(s) = s 4 , we have Then, B 1 = ∅ if Equation ( 35) is satisfied.Once again, applying Theorem 3, we find that Then, B 1 = ∅ if Equation ( 36) is satisfied.In addition to Theorem 4, we find that B Finally, by applying condition (10) of Theorem 1, we find that Thus, if the conditions ( 35) and (38) are satisfied, then (34) is oscillatory. On the other hand, using Corollary 2.1 in [36], Equation (39) is oscillatory if q 0 > 109.74.Therefore, our results provide a better criterion for oscillation.With regard to previous relevant results, our results are an improvement and a complement to them. Conclusions The classification of positive solutions according to the sign of their derivatives always comes first when examining oscillations for neutral delay DEs.The constraints that disallow each case of derivatives of the solution determine the oscillation criterion.In the oscillation theory of neutral DEs, the relationships between the solution and the corresponding function are crucial.By using the modified monotonic features of positive solutions, we strengthen these relationships.We then developed criteria to demonstrate that Category B 1 has no solutions based on these relationships.Then, to create a set of oscillation criteria, we brought together results from previous studies that had been published in the literature with new relationships and features.Finally, we provided an example and a comparison with previous work to emphasize the importance of the results.This comparison showed how our findings enhance and add to those in [36].Recent scientific work has focused heavily on the characteristics of the solution to fractional DEs.Applying our results to fractional DEs might thus be interesting.
2,912
2023-10-21T00:00:00.000
[ "Mathematics" ]
“Business performance assessment of small and medium-sized enterprises: Evidence from the Czech Republic” Business performance assessment is one of the basic tasks of management. Business performance can be assessed using a number of methods. The basic ones include financial analysis, Balanced Scorecard or Economic Value Added (EVA). The paper is focused on SME business performance assessment based on Economic Value Added, calculated using the INFA build-up model. According to this method, companies were divided into four categories. The first category included companies with a positive EVA value. The second category included companies with negative EVA, but with the eco- nomic result above the risk-free rate. The third category included companies with a positive economic result above the risk-free rate. The fourth category included compa- nies with a negative economic result. The model did not include companies with negative equity. The input represented 15 predictors based on their financial statements. The data were normalized and all extreme values, likely caused by a data rewriting error, were removed. Company performance is visualized by comparing Principal Component Analysis and Kohonen neural networks. Compared to similar research, the methods are compared using the data that analyzes the performance of companies. Both methods made it possible to visualize the given task. With regard to the purpose of facilitating the interpretation of the results, for the given case, the use of PC seems to be more appropriate. be very useful for the interpretation of the results supporting the decision-making process (Marakas, 1999). The objective is to compare the use of the PCA method and Kohonen neural networks (Matlabacademy, 2019) for the visualization purposes in the classification of businesses. A com-parison of these methods (Brosse et al., 2001) has already been ana-lyzed in technical fields (Blayo & Demartines, 1991). Newly, these methods are also used to analyze economic factors predicting the performance of small and medium-sized enterprises in the Czech Republic. Thus, the paper is aimed at assessing SMEs’ business performance based on Economic Value Added, calculated using the INFA build-up model. INTRODUCTION When running a business, it is often necessary to make decisions in very complex processes (Synek, 2011). Assessing the influence of individual predictors is often difficult and time-consuming, especially in the case of a dimensional decision problem when individual predictors influence the result (Oo & Thein, 2019). For business managers, mathematical models are often difficult to understand and interpret. Here, the visualization of data can be very useful for the interpretation of the results supporting the decision-making process (Marakas, 1999). The objective is to compare the use of the PCA method and Kohonen neural networks (Matlabacademy, 2019) for the visualization purposes in the classification of businesses. A comparison of these methods (Brosse et al., 2001) has already been analyzed in technical fields (Blayo & Demartines, 1991). Newly, these methods are also used to analyze economic factors predicting the performance of small and medium-sized enterprises in the Czech Republic. Thus, the paper is aimed at assessing SMEs' business performance based on Economic Value Added, calculated using the INFA build-up model. Business performance assessment There are several approaches to assessing business performance. 
The traditional approach deals with horizontal and vertical financial analysis (Vochozka, 2011), where it is possible to assess a wide range of indicators from activity to Return on Equity (ROE), which is the most frequently used one. ROE is calculated as follows: , where EAT is earning after tax, E is equity. The approach using horizontal and vertical analysis has a number of benefits and shortcomings. The main shortcomings include an independent view of individual indicators that can often be distorted by the character of the business or high degree of risk, which is not considered in the formula. Another possible approach is Value Based Management (Nývltová & Marinič, 2010). This method compares the overall benefit of the investment with its costs. The calculation is carried out using the following formula: where Rt is the total return to the shareholder; Pt + 1 is the value (price) of investment at the end of the period (given by the share price and number of shares); Pt is the value (price) of investment at the beginning of the period (given by the share price and number of shares); Dt + 1 is dividend yield. This method is the base of the approaches based on Market value added and Economic value added. Performance can be assessed in terms of economic value added (Neumaierová, 1998), whose results can be used for business management (Neumaierová, 2003). Moreover, EVA results can be used as an input for business valuation (Mařík, 2011), as an assessment of financial health of companies (Vrbka & Rowland, 2019) or as a motivation system for managers (Kislingerová, 2007). On the other hand, the performance can be assessed by Balanced Scorecard (Kaplan & Norton, 1992). Alternative possibilities of business performance assessment can include neural networks and others (Machová & Vochozka, 2019). This study also focuses on Economic Value Added, since it is a clearly measurable method that is also suitable for small and medium-sized enterprises. Economic Value Added (EVA) EVA is calculated using the build-up model as follows (Neumaierová, 1998). There is an alternative approach based on the CAPM method (Vochozka, 2011); however, it is not suitable for small and medium-sized enterprises. First of all, cost of equity is calculated: where A represents total assets, E is Equity, D is long-term liabilities, r d is cost of borrowed capital, and WACC -average weighted cost of capital. The average weighted cost of capital is calculated as follows: where r business is business risk, r FinStab is financial stability risk, r f is risk-free rate, and r LA is risk involved in capital structure. Economic value added is finally calculated as follows: where ROE is return on equity, and r e -alternative cost of equity. Principal component analysis Principal component analysis is a method that reduces the number of predictions. Generally, the reduction of decision problem to the key components is very important for the management, since it clarifies the decision process and makes the interpretation easier. The reduction is carried out by means of converting the original predictors, which are partly correlated into a new space with a reduced number of predictors (Shaw, 2003) that are independent of each other. Due to this fact, complex methods with reduced data can be applied, and new predictors can be used to visualize the task more easily. The method appeared at the beginning of the 20th century (Pearson, 1901), and was subsequently developed and named (Hotelling, 1933). 
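In modern practice this reduction is a few lines of code. The sketch below (Python/scikit-learn) mirrors the workflow described in this paper, standardizing the predictors, extracting the components, and inspecting the share of variance each one explains, but it runs on synthetic stand-in data rather than the actual financial statements; in the study itself the analysis was carried out in MATLAB, as described later.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for the normalized financial predictors (15 per company);
# in the study these come from the companies' financial statements.
rng = np.random.default_rng(4)
X = rng.normal(size=(4000, 15))

X_std = StandardScaler().fit_transform(X)        # normalization step
pca = PCA(n_components=5)
scores = pca.fit_transform(X_std)                # company coordinates in PC space

# Share of variance captured by each component (the "Pareto" view used in the paper):
explained = pca.explained_variance_ratio_ * 100
print(np.round(explained, 1), "cumulative:", np.round(explained.cumsum(), 1))
# scores[:, :2] can then be scatter-plotted and colored by EVA category.
```

With uncorrelated random inputs each component carries roughly the same variance; with real financial statements, whose items are strongly correlated, a few components dominate, which is exactly what makes the reduction useful for management.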
Its greater use is associated with the development of information technology, where visualization plays a significant role and is not time-consuming in terms of individual partial measurements. An important feature of this method is that each of the new components is a linear combination of the original predictors. This prevents the loss of the original data. The linear combination of the individual parts of the predictors into a component enables monitoring the variance of the relevant component. The greater the variance, the more important is the component for the prediction. Sorting the individual components by variance allows dividing the components into significant and less significant and determining the percentage importance of the component. Business management thus has the information on the parameter that influences decision making, as well as on the importance of this parameter. In practice, it is not possible to address absolutely all the facts in the micro and macro environment. For this reasons, various systems are used, such as ABC, where management is first and foremost committed to the most important components with the greatest impact on the result, and subsequently to other components. Figure 1 shows the principle of PCA. On the left side, there is a data set whose location in space is determined by two predictors. On the right side, there is a line that best defined the data in the given space. The slope of the line is determined on the basis of minimizing the distance between the individual dots from the line. The line in Figure 1 (the right side) is a new component (dimension) of PCA, which shows the greatest variance for the given task. Due to this, it is possible to redraw the task into a one-dimensional space ( Figure 2). This redrawing causes minimal distortion of the data compared to the situation when the data is entered in the x-or y-axis (in the previous case). Thus, it was possible to reduce the number of variables with a minimum loss of information. The PCA method also allows other dimensions to be calculated and data displayed with nearly zero loss of information. Kohonen networks Kohonen networks (Kohonen, 1982) are neural networks that learn without a teacher (Vojáček, 2006). The basic idea consists in the random arrangement of neurons in two-dimensional space (Kohonen, 1989). In the following steps, the individual neurons are moved to represent certain data clusters on the basis of the predictors (Buhmann & Kuhnel, 1992). Each predictor is connected with an individual neuron. The strength of the connection determines the position of a given neuron (Vondrák, 2000). The principle is shown in Figure 3. Figure 3 shows Kohonen network with nine neurons (3x3), and two inputs (predictors), which are connected with the individual neurons. METHODOLOGY AND DATA After generation from the Bisnode's Albertina database, the data set contained a total of 42,592 data rows. Each row contained the following information: 1. Identification of a company: name, company identification number, municipality, region, municipality size. 2. Information about a company: NACE, number of employees, code of NACE5A, M_NACE, OKEČ5A, year of financial statement. 3. Financial statements for the given year: balance sheet, profit and loss account, statement of cash flows. Preparation of data (MS EXCEL): 1. Calculation of EBIT (by adding taxes, interests and EAT). Figure 3. Kohonen network Source: Own processing according to Vondrák (2000). 
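The principle of Figure 3, a small grid of neurons whose weight vectors are iteratively pulled toward the data, can also be prototyped in a compact form before the full analysis described below. The sketch is a minimal NumPy self-organizing map for illustration only; the grid size, learning-rate, and radius schedules are arbitrary choices, not the configuration used in the study.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, radius0=2.0, seed=0):
    """Train a minimal Kohonen self-organizing map on row-wise data."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows, cols, data.shape[1]))
    # Grid coordinates of each neuron, used for the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    for step in range(n_steps):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the neuron whose weight vector is closest to the sample.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Learning rate and neighborhood radius decay over time.
        frac = 1.0 - step / n_steps
        lr, radius = lr0 * frac, max(radius0 * frac, 0.5)
        grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2 * radius**2))[..., None]
        weights += lr * h * (x - weights)        # pull the neighborhood toward the sample
    return weights

# Toy usage on standardized data (e.g., the PCA input from the previous sketch):
rng = np.random.default_rng(5)
som = train_som(rng.normal(size=(500, 15)))
print(som.shape)   # (5, 5, 15): one prototype vector per neuron
```

Each trained neuron ends up as a prototype vector, so nearby companies in predictor space map to the same or neighbouring nodes, which is the "representative element" role mentioned in the results below.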
The modification reduced the data set from 42,592 rows to 29,611 rows (in Table 2). The resulting data set also contains a complete financial statement with several calculated data stated above. The data can thus be considered predictors (more than 100). For these reasons, the resulting set of companies will be reduced to the main components, and in accordance with the Neumaiers' methodology (Ministry of Trade and Industry, 2019), a category of the companies will be determined following the scheme below. • Companies with positive profit and negative EVA value, but exceeding the risk-free rate rf -re > ROE > rf. • Companies with positive profit, where ROE does not achieve the risk-free ratere > rf > ROE > 0. • Companies with negative profit. The data will be involved in further analyses. The main predictors are as follows: • Total assets -CZK thousands. • Fixed intangible and tangible assets depreciation -CZK thousands. • Income tax on ordinary and extraordinary activity -CZK thousands. Displaying more than 20,000 companies was complicated because their high number caused the creation of continuous color clusters that covered less frequent clusters in groups. For these reasons, the number of companies was reduced to 4,000. The percentage of individual categories remained the same. The data was normalized for the methods, as otherwise, the methods would provide erroneous results (Abdi & Williams, 2010). The normalization was carried out using the "normalize" command. Furthermore, extreme values were removed using the "outliers" command. RESULTS AND DISCUSSION In the first phase, the PCA analysis was carried out. With the normalized data set, the following command was executed: By means of pareto (procenta) command, a new graph was generated ( Figure 4). Figure 4 clearly shows that it is possible to obtain about 80% of the information from the first three components, and more than 70% from the first two components. The remaining components are thus of relatively negligible importance. By means of the "biplot" command, it is possible to see how the individual parts participate in a given component. A positive value represents a positive correlation, while a negative value represents a negative correlation. The result is shown in Figure 5. Using the "gscatter" command, the position of companies in two-dimensional space can be visualized. In the next stage, Kohonen neural network with the dimensions of 5x5 was created. When creating this network, it is possible to see the mutual dependence of the individual predictors. The result can be seen in Figure 7. The more different color for each node, the less dependent the relevant pre-Source: Own processing. Particularly the right part of the graph shows that for certain clusters, Kohonen network al-lows creating a representative element (neuron), which will be in the relevant category by company performance. By means of the "gscatter" command, the position of companies in two-dimensional space can be visualized. Moreover, it is possible to distinguish the individual sets of companies from each other by means of color (see Figure 6). Figure 6 clearly shows 4 groups of companies created in the graph. These groups represent the category of a company according to the INFA methodology. Thus, to Source: Own processing. Figure 5. Individual parts involved in a component Source: Own processing. CONCLUSION This study calculated business performance based on Economic Value Added, which is the key information for business management. 
The calculation method was based on the INFA build-up model. On the basis of this value, performance of these enterprises was visualized using selected items of accounts. Despite its spatial complexity, it was possible to carry out the visualization so that it is useful for the management. Both methods can be used to visualize company performance. Performance visualization can facilitate the decision-making process in a number of cases (e.g. about the cooperation with a given company, equity investment, etc.). Both methods can provide analytic tools to identify which parameters were used to decide on the classification in a concrete group. With regard to the allowed extent of the paper, these analytic methods were presented and described from the perspective of the most important outcomes. The objective of the analysis was to simplify the decision-making process by means of visualization. In other words, the visualization was supposed to lead to a segmentation that could be easily interpreted. Given the purpose of the analysis, PCA seems to be a more effective visualization method in this particular case due to easier and more unambiguous identification of the classification of individual companies into sets that express company performance. It is also easier to understand and interpret a company's position in a given space. The limitation of the study is mainly in the input data, which is based on the obligation of companies in the Czech Republic to publish their financial statements. Nevertheless, the statements are subject to tax optimization, which can be quite easy to implement in the case of small enterprises. In other words, a category 3 or 4 company may, in fact, bring a sufficient return on resources for the owner. However, this return is not shown in the financial statement with respect to the tax deduction. Finally, it shall be mentioned that, despite the legal obligations, not all companies complete financial statements. This is especially true for companies in difficulty. This distorts a number of companies in individual categories.
3,533.6
2021-09-22T00:00:00.000
[ "Business", "Economics" ]
The Role of E-learning in Studying English as a Foreign Language in Saudi Arabia : Students ’ and Teachers ’ Perspectives Over the past few decades, there have been tremendous increase in technology advancement and the significance of this in the field of education cannot be overemphasised. The adoption and use of E-learning in studying EFL, in particular, is one such areas that has experienced such fast-paced development for some time now. As a result, the government all over the world are committing a lot of resources to keep up with this technology advancement. In this light, the government of Saudi Arabia through its Ministry of Education has recently made commitment, both as the practical and policy levels, with the hope to also benefit from using E-learning in studying EFL in Saudi Schools. However, little is known about the perception of students and teachers regarding the role of E-learning is studying EFL in the Saudi context. In an attempt to contribute to this research base, this paper draws on an empirical investigation using group interviews with students and teachers in order to gain insight into their perception about the role of E-learning in studying EFL in Saudi Arabia. The findings are presented and discussed in four thematic areas: promoting key learning skills, independent learning, flexible learning and interactive learning. The paper also highlights the limitations of the research and concludes by making a number of recommendations. Introduce the Problem As information technology rapidly develops and spreads, there is an increasing body of literature that emphasizes the importance of introducing E-learning to facilitate the studying of English as a foreign/second language (EFL/ESL) depending on country or context (Yang & Chen, 2007;Allam & Elyas, 2016).This is particularly the case in Saudi Arabia in recent years (Al-Hamidi, 2013).The term E-learning, although a contested concept, is defined throughout this paper as computer-enabled learning of EFL.Today, this typically involves the use of the internet as a medium for teaching and learning, either as a principal or supplementary educational resource.There is also ample evidences regarding the relative potential benefits of this type of technology use, for both students and teachers.For example, it is suggested that E-learning offers the option to remove the temporal and spatial restrictions that apply in traditional learning contexts (Smith, 2000).In addition, some E-learning applications permit students learning English to readily access beneficial language resources and communicate directly with native English speakers.Furthermore, students can study English listening, verbal communication, reading, and written communication skills in authentic contexts (Debski & Gruba, 1999;Yang & Chen, 2007;Al-Qahtani, 2016;Al-Hassan & Shukri, 2017).However, Westbrook (2006) has argued that incorporating E-learning into the studying of EFL is not delivering anticipated outcomes.Debski and Gruba (1999) also suggested that while the successful inclusion of E-learning for the teaching and learning of EFL is measurable, proper assessment methods that capture the perceptions of both students and teachers towards technology use still demand consideration.Yet, the Saudi government, through the Ministry of Education is hoping to benefit from E-learning, and has progressively encouraged its implementation for studying EFL, particularly in high schools; i.e. 
ages 15 to 18 (Al-Hamidi, 2013). Thus, it seems there is a pressing need to explore the prevailing beliefs and opinions of both students and teachers relative to E-learning adoption for educational purposes. The significance of this paper lies in the fact that although numerous studies have been designed to comprehend the views of both students and teachers regarding the successes and limitations of E-learning technology (Toni Mohr, Holtbrügge, & Berg, 2012), there is a dearth of such relevant studies on this subject matter in the Saudi Arabian context. This paper reports the results of a qualitative research study designed to explore teachers' and students' perceptions regarding the introduction of E-learning into the domain of studying EFL in Saudi Arabia. Literature Review Computer-based technological innovations have a long history of use in the teaching and learning of EFL (Davies, 2012a, 2012b). In a broader sense, language education has utilised computer-based technologies since the 1960s, when educational researchers first showed an interest in using their capabilities in instruction, following the development of commercial mainframe computers in the 1950s (Davies, 2012a). Over time, the popularity of technology adoption in the domain of education has increased, especially since the emergence of the World Wide Web. A significant component of this technological advancement is the development of the E-learning environment, which has been recognised as having transformative potential in terms of English language teaching and learning methodology (Hellebrandt, 1999). Specifically, students can use E-learning resources to acquire the four main English language skills (listening, speaking, reading, and writing) (Yang & Chen, 2007; Shuchi & Islam, 2016). This section elaborates on the affordances of E-learning today and discusses the implications for Saudi students, while also identifying the challenges associated with these aspects in order to balance the argument. One of the major limitations encountered with traditional face-to-face studying of EFL in Saudi Arabia is that students cannot be provided with an authentic English learning environment, as public life is primarily dominated by the Arabic language. Furthermore, class sizes are very large, meaning there are limited opportunities for individual students to contribute or communicate one-to-one with their teachers. E-learning, therefore, offers a platform on which students can develop their communication (speaking) abilities in English by engaging with other students in the virtual world (Yang & Chen, 2007). It is worth noting that these limitations are not peculiar to the Saudi context but affect most other countries where English is used as a foreign language (Yang & Chen, 2007).
In light of this, Lee (2002) conducted a pilot study using synchronous electronic chat together with task-based instructions, to enhance learners' communication skills.The outcome of that study suggested the combined use of online interaction and task-based instruction improves students' communication skills by creating a lively environment in which they can respond to conversations in real-time about topics relevant to their interests.Furthermore, Warschauer (1999) and Yang and Chen (2007) pointed out that the benefits of E-learning for developing speaking skills include the opportunity for more equal participation than supported during face-to-face interaction.In addition, that communication need not be confined to the local level, but can be easily and unprecedentedly extended to the international setting, opening up opportunities for learners to develop their cross-cultural knowledge (Al-saggaf, 2004).For example, teachers in Saudi Arabia can open up discussion groups for their students on any topic and invite participants from elsewhere to broaden and enrich the discussion, without the need to leave the country or physically mix in a sex-segregated environment.However, it is important to note here that, in Saudi Arabia, discussion platforms of this nature are considered more acceptable for university level students and less acceptable for use in relation to the school context (Madini & de Nooy, 2014). In terms of developing speaking skills, for those students who rarely have an opening to speak with native speakers, and for others who are shy, automatic speech recognition technology provides opportunities for them to practise speaking (Yang & Chen, 2007).As noted by other researchers, including (Chiu, Liou, & Yeh, 2007), the use of automatic speech recognition systems that allow students to engage in speech interactions with a computer is an advantage of E-learning.A web-based conversation environment called Candle Talk has also been developed to enable students to communicate with their computers interactively (Chiu et al., 2007).This software allows EFL learners to access explicit speech training programmes, thereby enhancing their oral skills.Additionally, the application of automatic speech recognition software as used by college freshmen can facilitate the teaching of oral communication.Importantly, the majority of students have welcomed instructional methods based on speech recognition software. Another value addition of E-learning is that it is useful as a tool for creating successful learning environments to motivate students and create meaningful and worthwhile learning activities and outcomes (Garrison, 2011;Yang & Chen, 2007).For instance, Garrison (2011) has argued that the text-based E-learning communication, generated by e-mail messages or discussion threads, has unique and valuable attributes that can facilitate critical discourse and reflection. 
Examining the significance of such text based tools, Al-Menei (2008) investigated the effectiveness of the computer-assisted English writing skills of Saudi students.His study demonstrated a significant improvement in the writing capabilities of Saudi EFL students when they had used computer-assisted programmes to correct their grammar and paragraph writing, as the E-learning setting provides ample time for students to reflect and focus.Farzi (2016) also observed that computers can be programmed to provide corrective instruction to identify any mistakes in writing.This arguably helps students to correct their mistakes, enriching their writing. Furthermore, E-learning provide unprecedented opportunities when developing their reading skills, due to the unrestricted availability of course materials online (Brandl, 2002).Online information enables students to overcome the confines of textbook based learning, by promoting access to knowledge at any time and from anywhere.Opportunities for listening to authentic language also abound online.Indeed, Romeo (2008) observed the importance of listening exercises to understand relative clauses and audio prompts available through online applications.He reports on evidence that suggests that when more syntactically complex clauses are used, learners alter their method of approach to learning and understanding. The E-learning interactions identified above do not only support the development of students' English language skills, but also foster students' interest and motivation in language learning in general.However, the benefits of an E-learning system cannot be maximized if students and teachers do not use it.The next section explains the methods used by the author to research the various benefits of E-learning mentioned above, from the perspective of teachers and students; while ensuring attention was also directed towards any disadvantages of E-learning that might emerge. Method To explore students' and teachers' perceptions of the role of E-learning in studying EFL in Saudi Arabia, a qualitative approach based on group interviews was used.Qualitative methods are well-established as in depth tools for exploring the perceptions of individuals and/or groups about particular phenomena including E-learning (Creswell, 2009).Therefore, this study employs a qualitative approach to gain insights into the meanings and interpretations the research participants ascribed to the role E-learning plays in studying EFL.Using a qualitative approach means participants' social reality can be conceived of as a constantly changing phenomenon with emergent properties (Bryman, 2004).Finally, understanding the construction of meaning was a central issue in this research and so the interview method assisted the researchers in learning how different individuals explain the role E-learning plays in studying EFL (Bogdan & Biklen, 1982;Creswell, 2009). 
Group interviews were used in this study and were preferred over other methods because they allowed rapid and cost-effective information gathering, the generation of new ideas, and the raising of issues and concerns that the researcher might not have encountered in individual interviews (Kumar, 1987; Ritchie & Lewis, 2003). The study adopted a purposive sampling strategy and sought voluntary participation (Jupp, 2006; Mann & Stewart, 2000). In total, 24 participants were selected from among the students and teachers, since the aim was to achieve 'depth' rather than 'breadth' (Blaxter, Hughes, & Tight, 2010; King & Horrocks, 2010). The sample distribution included 16 students and 8 teachers. The student participants were selected and grouped according to gender, English proficiency, and whether they had previous E-learning experience. Table 1 illustrates the distribution of the student participants. The decision to include these different criteria was made to ensure the selected sample was as diverse as possible within the defined population boundaries. According to Ritchie and Lewis (2003, p. 197), "diversity in group composition enriches the discussion, but there also needs to be some common ground between participants-based on how they relate to the research topic or their socio-demographic characteristics". In total there were four group interviews (lasting between 45 minutes and 1 hour): two groups of eight students and two groups of four teachers (males and females were interviewed separately in both cases, due to cultural and religious constraints on gender mixing). The rationale for involving both students and teachers (the primary users of E-learning in education) was to gain information about different experiences and ensure some diversity in the participants' characteristics. Collectively, the sample size and distribution helped the authors to provide the diversity required to explore the topic and meet the aim of the study, as stated in the introduction. The data collected during the group interviews were transcribed manually and then analysed thematically. This involved identifying, examining and interpreting themes in textual data and then asking how these themes helped address the research aim. The steps involved gaining familiarity with the data; generating initial codes; searching for themes; reviewing and naming themes; and conducting the analysis (Braun & Clarke, 2006). The group analysis involved treating the data produced by the group as a whole, rather than focusing on individual contributions (Ritchie & Lewis, 2003). Therefore, the groups were the units of analysis and were treated in the same way as units of individual data. Group analysis was used in this particular study because it enabled the researchers to compare the differences and similarities between genders, as well as between teachers and students. The researcher chose thematic analysis as an analytical tool because it was seen as an ongoing, fluid, and cyclical procedure occurring throughout the data collection stage, as well as involving data entry and analysis phases (Bryman, 2004). Finally, ethical issues raised during the research were dealt with in the strictest confidence, and the data were anonymised in order to protect the identities of the respondents. The findings are presented below, detailing the role of E-learning as perceived by each group.
The Role of E-learning in Promoting Key Learning Skills (Listening, Speaking, Reading, and Written) Despite the many benefits provided by E-learning in English education, the study participants in general focused on the development of listening and speaking skills only.As previously mentioned in the literature review, both skills are subject to considerable limitations in the context of traditional face-to-face English education in Saudi Arabia.The participants therefore saw E-learning as a platform, through which students would be able to develop their speaking and listening in real-world situations.For example, the male group shared some of their comments regarding the usefulness of E-learning, focusing on listening skills.Moreover, according to the M.S. group: …Listening is one of the most important skills when learning English.You know, E-learning can provide audio and video to listen and watch as much as you can, which will improve students with weak listening skills and something like this e-learning should be used. The above quotation illustrates a significant point associated with learning language; i.e. that E-learning can promote the development of students' listening skills.The respondents' position was also that E-learning could be instrumental in developing their listening skills more fully, particularly as they feel the current system is not student centred.The textbooks provided do not seek to develop all areas of students' skills, resulting in weaknesses in these areas.Resources in Saudi Arabia are also woefully inadequate, video-based activities are largely inappropriate, and little attempt is made to teach following a learner-centred methodology.This highlights the space for the usefulness of E-learning in a context such as Saudi Arabia.Another quotation considered interesting by the M.S. group is as follows: There are many positive aspects I can think of, for example speaking, E-learning can maintain openness in communication that, if used in the right way, can be extended not only in the school community but all over the world.You know, speaking in English is one of the main skills, E-learning can support speaking skills when using chat rooms that are available in English.Since we are in a non-English speaking country, I see that E-learning will help to develop students' speaking skills. This group also focused on highlighting speaking as a particular skill, considered useful in E-learning.They specifically felt that students might find chat rooms beneficial for practicing their speaking skills, providing opportunities currently unavailable in the traditional teaching and learning context.However they cautioned against total dependence on chat rooms for E-learning, because people might then feel isolated from one another and their teachers.The outcome of this research supports research on communication by Warschauer (1996) who argues that, "… the benefits of communication are seen as many: feeling part of a community, developing thoughts and ideas, learning about people and cultures, and students' learning from each other" (p.39). In this research the respondents identified development in their speaking skills as a core affordance of E-learning. On the other hand, E-learning was also perceived as potentially inhibiting some aspects of English learning.For example, the F.T. group stated: …, it should not affect the other skills that students acquire from face-to-face teaching and learning, such as handwriting skills... 
Thus, a need for moderation emerged when interpreting this groups' perceptions of the usefulness of E-learning.That is, while the participants recognised the usefulness of E-learning (implicitly); they cautioned against potential negative impacts, such as students not developing handwriting appropriately. Overall, the analyses showed that E-learning helps to develop students' listening and speaking skills, which is important as proficiency in these areas is lacking in individuals who receive a traditional face-to-face English education in Saudi Arabia only.This is because, as mentioned above, Saudi Arabia does not offer the opportunity for students to be exposed to a natural English learning environment, as Arabic languages dominate public life. Interestingly, the groups were relatively silent on the other benefits of E-learning for promoting reading and writing skills.Possibly, this is because these areas are already well provided for by traditional methods of teaching and learning.However, research reported elsewhere asserts that mastery of reading and writing in English can be supported through E-learning (Al-Menei, 2008;Brandl, 2002). What was also clear from the responses of the teachers and students was that they spoke generally about E-learning and the use of technology, but did not elaborate on how it can be applied to the development of learning EFL.This means that both teachers and students emphasised the technology itself, rather than how this facilitates studying EFL.The reason for this could be that E-learning is a new development in Saudi Arabia, especially at the school level, and the research respondents may have lacked the awareness or knowledge of its utilities.This research outcome concurs with findings reported by Yang and Chen (2007), who also found that the majority of students who research technology-enhanced language learning, appear to place greater emphasis on the technology than on the language learning.What also appears to be missing is recognition of the fact that speaking and listening helps forge relationships (Purdy, 1997).This research outcome therefore calls for a need to develop teacher training in this area. The Role of E-learning in Promoting Independent Learning During the interviews, the potential for E-learning to foster independent learning came to the fore as a key advantage.For instance, one F.T. suggested that: I think if it is implemented in the right way, this will reduce the effort I do in the school, and this [E-learning] helps students to rely on themselves and have different learning styles of English. The above quotation from this teacher highlights the possibility that applying E-learning provides different learning styles.Kinsella (1995) defines 'styles' more generally as: "being an individual's natural, habitual, and preferred way of absorbing, processing, and retaining new information and skills" (p.171).Christison ( 2003) also acknowledges the numerous ways of characterising learning styles, including: cognitive style, sensory style, and personality styles.The identification of learning styles is however particularly useful when understood relative to the needs of this research, because as argued in the work of Oxford (2002), "when allowed to learn in their favourite way, unpressured by learning environment or other factors, students often use strategies that directly reflect their preferred learning" (p.127).One F.S. 
also made a useful comment in this regard, as follows: The one thing that stimulates me to use E-learning is the huge amount of information that is easily accessible on it. The above statement suggests that through E-learning a lot of information can be made readily available to students to enable learning regardless of context. Students do not have to be in the classroom to access relevant information, nor do they always require the presence of a teacher. This was the characteristic of the technology that respondents considered very user friendly. However, another student who had never used E-learning had the following to say: You know, I don't have any experience with E-learning, you know, I don't think it will be easy for me to use it. This student was clearly concerned that it would not be easy to use E-learning without any experience and with limited guidance. This highlights the importance of the teacher's role in supporting students when using E-learning tools. Moreover, the F.S. group commented that E-learning promotes independent learning by providing feedback. For example, they mentioned: … E-learning sometimes gives feedback immediately, which is sometimes helpful, especially when you don't need a teacher to know if your answer is correct or not. …, this really encourages us… This suggests that students consider it motivating that they can use E-learning technology to learn independently without their teachers watching or intruding. As stated, a key aspect of this is the opportunity to access instant feedback when using E-learning. Indeed, the capacity for computers to provide instant and individualised feedback has long been recognised by educators as beneficial to the learning process, including foreign language educators (Salaberry, 2001; Alrabai, 2017). Additionally, findings from research by Ghanizadeh, Razavi and Jahedizadeh (2015) demonstrate that modern technologies improve the quality of input, the authenticity of communication, and the relevance of feedback. All of the above corroborates the position of the F.S. group. The Role of E-learning in Promoting Flexible Learning Many of the participants stated that the flexibility that E-learning offers can promote English learning. For example, according to one of the F.T. group: Through [E-learning] students have access to the coursework 24 hours/day which gives them more flexibility on time to follow up what they missed in the classroom and I think that will help to improve their English. This statement contends that students who are willing and able to practice more will take the opportunities offered by E-learning tools and thereby effectively improve their English. Emphasis should therefore be placed on the flexibility provided by the chance to practice any time, rather than just the mere availability of the resources online. In relation to flexibility of place, the respondents stated: … They can have access to coursework from schools or home or wherever they have a computer and internet connection. Teachers also can have the same flexibility to monitor students' progress. The concept of flexibility mentioned above extends beyond students to include teachers. What this means is that with E-learning, both students and teachers can perform their duties from anywhere. A similar viewpoint was raised in the other groups. For instance, the F.S. group mentioned: We can use it [E-learning] any time which means we have flexibility to use E-learning to complete more exercises or to do homework.
The male group also provided insight into how they consider E-learning to be flexible to meet their teaching and learning needs.Quoting the M.S. group: Most of students spend time on the internet, you know; in the traditional classroom I have limited time to learn but with the use of E-learning I will have unlimited access to the lessons for learning in my free time.In our learning of English nowadays, we are restricted in learning only in the classroom, which means we don't have flexibility.I think E-learning will help us to overcome all geographical and spatial barriers for students to learn English and exchange knowledge. The above comment goes a step further than previous comments made by the participants, observing that E-learning removes geographical and spatial barriers for students.This points to the fact that utilising E-learning resources, students from more than one geographical setting can communicate easily.This functionality can then motivate students to engage with others irrespective of geography, which can then influence them to use the interactive affordances of E-learning. Additionally, this feature could broaden students' horizons both socially and culturally, through their interactions with the outside world and when reading for pleasure.According to Yang and Chen (2007), Internet technology has a global reach and can provide extensive international resources.Similarly, E-learning enables English students to access useful language learning resources and communicate directly with native English speakers.In the former, students are able to practice the application of information, while in the latter case, they can overcome the decontextualized nature of English language learning.Students can also learn listening, speaking, reading and writing English in an integrated form via E-learning.Finally, E-learning offers students the opportunity to broaden their international perspectives, and appreciate different cultures. The Role of E-learning in Promoting Interactive Learning The participants indicated that E-learning is an interactive tool allowing very effective communication between students or with their teachers.For instance, one teacher mentioned that E-learning can provide a means of communication between teachers and students outside the classroom, enabling them to augment everyday learning and teaching of English.The F.T. mentioned: …, the E-learning environments is different to traditional learning because E-learning can be a complete set of technology tools, which allow teachers and students to interact in a new style via the internet outside the classroom, to support daily learning and teaching of English… The above comment suggests that learning and teaching in English using E-learning can help develop an interactive relationship among the students themselves and between teachers and students.This could play an essential role in bolstering students learning English.This opinion was then echoed by other teachers in the same group, who also perceived E-learning as an interactive teaching tool.The views of the F.S. group on this subject are captured below: Our educational system now, it doesn't support interaction with students who come from different regions, while we in the school come from the same area, I think E-learning will help me to interact somehow with other students, even from different countries to practice my language. 
The implication here is that as a result of the interactive component of E-learning students from different countries can learn from one another.This was also suggested by Shumin (2002) who argued that, "because of the lack of opportunity in foreign language settings to interact with native speakers, the need for exposure to many kinds of scenes, situations, and accents as well as voices is particularly critical" (p.209).The view expressed here is in many ways similar to the research outcome by Yang and Chen (2007), discussed above in the section concerning The role of E-learning in promoting flexible learning.Students appear to have a more global perspective on English language learning.Indeed, some students in the F.S. group mentioned: I will then use it [E-learning], because it will add something new to learning English, which interaction with other students or teacher is more open, I meant, I can interact outside the school. The male gender group also suggested a similar view: …, the online interaction aspect that E-learning will provide to students and teachers is one of the most important advantages of E-learning, such as, marking, sending and receiving the homework.I think in this way, E-learning will increase the possibility of contact between students and teachers, this may include email, discussion boards and chat rooms.So, the students will have more time to participate and interact when learning English outside the classroom. The above two comments suggested that although they considered interactivity between teachers and students as something important when learning English, they identified that E-learning facilitates this better outside the classroom setting.For instance, in an E-learning environment, it seems essential to facilitate students' and teachers' proactive involvement for English learning and teaching through various forms of interaction, including online collaboration and the provision of instant feedback.Thus, effective collaboration between teachers and students are key to an effective teaching and learning process, both online and offline; a position that echoes existing research, highlighting the importance of a collaborative approach to E-learning.This is facilitated through increased contact between student-student and/or student-teacher.This conclusion is supported by findings in Chen's (2014) study, which found that, "the nature of interactivity and immediate feedback of a WBEL environment has a positive effect on the stimulation of students' interest and proficiency in English learning" (p.160).This research outcome demonstrates that knowledge is constructed as a result of constant negotiation between students and teachers.It also emphasises the belief that learning is a social communication process. 
Conclusions This paper aimed to explore the perceptions of students and teachers about the role of E-learning in studying EFL in Saudi Arabia. The research outcome has been quite revealing in a number of ways. In particular, special prominence was given to E-learning benefits in relation to individuals' speaking and listening skills. It was suggested by both students and teachers that using E-learning in studying EFL in Saudi Arabia provides opportunities for the development of students' speaking and listening skills, which might be lacking in the current curriculum. Although this was perceived to be a good thing, it was also observed that it might come at the cost of other skills, such as writing, reading and grammar, which neither teachers nor students raised. E-learning also allows learners to communicate with different people worldwide via chatrooms in a relatively easy, flexible and interactive manner. These key attributes (e.g. flexibility and interactivity) and consequences (e.g. usefulness) of using E-learning, when applied to learning EFL in Saudi Arabia, might result in successful implementation of such technology. Furthermore, special prominence was given to how E-learning promotes independent learning with less intrusion from teachers. However, it seems that both students and teachers were focusing more on the attributes of E-learning rather than on how it develops their use of EFL and how it can be integrated into the Saudi curriculum in order to augment results. In the current study, it is also noticeable that comparing the views of students and teachers demonstrated that the former seemed more informed than the latter about such technology. The implication is that teachers might lack the requisite knowledge to bring together the two pedagogies (traditional and E-learning). This suggests a clear need to offer training to teachers regarding how to apply such technology to the educational curriculum. The study therefore recommends Hampel and Stickler's (2005) seminal series of skills, ranging from technological to pedagogical, that teachers could be encouraged to acquire for effective teaching using E-learning. This is supported by Hung (2016), who contended that for users to use E-learning effectively they require skills such as the ability to identify resources for learning, select and implement learning strategies, monitor personal performance, and effectively apply skills and knowledge to reach learning objectives. More importantly, Lai, Yeung, and Hu (2016) have argued that teachers need to share strategies with their students about how to comprehend authentic materials and learn from them. They contend that doing so will help guide students to develop the skills and strategies they require to process authentic materials.
Herein, the research outcomes establish the importance of setting up realistic E-learning systems to meet students' and teachers' expectations and to promote the learning of EFL in Saudi Arabia. Such a system is necessary in order to develop key English learning skills (i.e. reading, writing, speaking and listening) in easy, interactive and flexible ways. The system should also be able to facilitate independent learning. In conclusion, this research is limited in a number of ways. For instance, the small sample size means that caution must be applied when interpreting the findings, as they might not be transferable to a larger population. The study was also conducted in Saudi Arabia, which is heavily influenced by social norms, meaning the views reported in the study might be both culture and context dependent. Table 1. Student participants according to proficiency, E-learning experience and gender. The teacher participants were divided according to gender and whether they had E-learning experience, as shown in Table 2 below. Table 2. Teacher participants according to experience and gender.
7,306.6
2018-04-15T00:00:00.000
[ "Education", "Computer Science", "Linguistics" ]
Evaluation of Streamwise Waveform on a High-Speed Water Jet by Detecting Trajectories of Two Refracted Laser Beams Free surface fluctuations on a high-speed water jet were measured by a laser beam refraction technique. This method can be used to obtain quantitative time-series data on local surface fluctuations. The developed system employs two pulsed laser diodes, and it uses a high-speed optical sensor to detect the instantaneous positions of the laser beams that are refracted at the free surface. Fluctuations in the slope angle are measured at two locations separated by 1.27 mm. The wave speed of each free surface wave, which is determined by the zero-upcrossing method, is experimentally evaluated by the cross-correlation method. A two-dimensional waveform is obtained by integrating the slope angle data. The local mean wavelength and mean wave steepness are evaluated for average jet velocities up to U = 10 m/s. Streamwise waveforms of the high-speed water jet at several locations exhibit appreciable asymmetry and have steep profiles. Introduction The material flux through a gas/liquid interface, the heat transfer rate, and the interfacial friction vary significantly depending on whether waves are generated on the free surface or not [1,2]. Consequently, much effort has been devoted to measuring free surface waves and to determining their statistical properties using electronic and optical measurement techniques. With the exception of flush-mounted probes embedded in channel walls (which can perform measurements only in a limited range of liquid depths), electric level gauges generally have intrusive electrodes [3,4]. Nonintrusive optical techniques are much more preferable since they do not penetrate the surface. Of these nonintrusive optical techniques, we have focused on time-series measurements of the local properties (i.e., absorption, reflection, and refraction) of a narrow laser beam. This approach can provide high-frequency data for slope angle and liquid depth at a fixed point on a free surface, making it more suitable for evaluating the statistical properties of free surface waves than conventional optical methods (e.g., colorimetry [5-7] and moiré topography [8,9]); most conventional methods can obtain only relative information about free surface fluctuations. Lilleleht and Hanratty [10] measured liquid height fluctuations from light intensity variations. They used a chopped light beam that passed through a stratified wavy flow containing methylene blue dye to evaluate the root-mean-square displacement and obtain frequency spectra. However, this light absorption method is susceptible to noise for long light paths. Furthermore, the ratio of fluid depth to wave height is generally limited. Hashimoto and Suzuki [11] obtained frequency spectra of a thin liquid film by detecting the displacement of reflected and refracted laser beams. Yoshino et al. [12] measured the wavy water surface on a rotating drum by following the laser beam reflected at the free surface. They used a one-dimensional (1D) photodiode array to measure the beam displacement and so could achieve high data acquisition rates of up to 20 kHz. However, to adapt this method for a general free surface, the 2D position of a light beam on a photodiode must be measured. Duke et al.
[13] used a 2D photodiode array as a light sensor and obtained slope angle fluctuation data from a stratified wavy free surface.In this measurement, the measurement range of the free surface slope angle is strongly restricted by the limited area of the optical receiver due to problems with high-speed scanning of photodiode arrays.Consequently, the response frequency of Duke's experiment was limited to 285 Hz. To overcome this limitation, we employ a high-speed single-plane photodiode as the light detector.Previously, we achieved a response rate of 33 kHz for free surface slope angle measurements, and we evaluated the spectral characteristics of free surface fluctuations [14,15] of a high-speed (up to 20 m/s) water jet.This kind of detector has recently been used to measure surface gradients.Savelsberg et al. [16,17] and Snouck et al. [18] measured 2D slope angle profiles on wavy free surfaces by single laser scanning at a scanning frequency of 2 kHz. In the present study, by extending the light sources, we measure fluctuations in the local slope angle at two locations on a wavy surface.By increasing the number of measurement locations, the wave velocity can be experimentally evaluated using the cross-correlation technique.Although we have evaluated waveforms from slope angle data in a previous study [14] using a single continuous-wave laser as the light source, information about the wave velocity was limited in a speculation by a linear stability theory concerning with the laminar shear layer underneath the free surface.Moreover, linear theory is not applicable when the average jet velocity exceeds ∼8 m/s since the transition from a laminar boundary layer to a turbulent boundary layer occurs at the nozzle exit in the experimental conditions we use [14,15].Therefore, this study evaluates the nonlinear and unpredictable free surface fluctuations for higher jet velocities (up to 10 m/s; cf.≤ 5 m/s in the previous study) by extending the experimental arrangement of the previous study [19]. Pulsed operation is required to prevent photocurrents overlapping when a single photodiode is used to detect multiple light beams.Data processing is used in this study to obtain the positions of the two pulsed laser beams on the diode.The wave velocity and spatial elevation of the free surface are determined for each wave period by applying the zero-upcrossing method to time-series slope angle data.The statistical properties of the mean wave steepness and the spatial waveform obtained by integrating the slope angle data are reported. Experiment Figure 1 shows a schematic diagram of the test section, which is made from transparent acrylic resin.The system uses water as the working fluid at room temperature and atmospheric pressure.A plane water jet is generated from a 2D convergent nozzle, and it flows along a flat horizontal wall.The nozzle exit height is 10 mm and its width is 100 mm.The jet width is fixed by the sidewalls and the jet free surface is open to the atmosphere to permit optical measurements and visual observations.The x-axis lies in the streamwise direction, the y-axis is in the spanwise direction, and the z-axis is vertical (see Figure 1). 
Figure 1 also shows the optical arrangement used to measure free surface waves. Two laser beams illuminate the water surface vertically. Two pulsed laser diodes (Premier LC, Global Laser, Gwent, UK; wavelength: 655 nm; output power: 1 mW; maximum pulse rate: 300 kHz) are employed as light sources. A focusing lens (focal length: 150 mm) and cylindrical collimator (diameter: 1 mm; length: 3 mm) reduce the beam diameter at the water surface to ≤24 μm. The two laser beams are separated by a distance L = 1.27 mm on the free surface in the streamwise direction. The distance L is adjusted to be smaller than the mean wavelength at the typical measurement locations to reduce deformation of free surface waves when they pass the two illuminated positions. The lasers are alternately pulsed at a frequency of 40 kHz. The output signal sampling rate was limited to 40 kHz in the present experiment. The pulse frequency is determined by the dynamic response of the optical sensor (see Section 4). Since the laser beam size on the jet free surface (24 μm) is much smaller than the typical mean wavelength (≥0.47 mm), the limited time response of the alternating illumination mainly restricts the measurable range of waves in the present experiment. Experiments were performed using cross-sectional average velocities at the nozzle exit of U = 6, 7, 8, 9 and 10 m/s. U is calculated from the flow rate, which was measured using an orifice flow meter (FLG-N, Nippon Flow Cell, Tokyo, Japan) upstream of the test section. The accuracy of the flow rate measurements was within ±2.0% (±0.2 m/s for the cross-sectional velocity). Optical measurements of the free surface were performed at 45 locations between x = 0.64 and 80.64 mm for each cross-sectional average velocity condition. The optical system and sensors were translated along the central axis of the test section by high-precision motion stages. The measurement location is defined as the point halfway between the two laser beams in the streamwise direction. The present approach evaluates the free surface slope angle from the refracted laser beam position on the diode. Figure 2 shows the relationship between the beam displacement r_x on the optical sensor and the local slope angle θ_x of the free surface. The beam is refracted by the water surface and passes through the water, the transparent back wall, and air before reaching the optical sensor (S1881, Hamamatsu Photonics, Shizuoka, Japan). The streamwise displacement r_x is related to θ_x by applying Snell's law at each refracting interface along this path (equations (1) and (2)), where n_w, n_b, and n_a are the refractive indices of water, acrylic resin, and air, respectively [13].
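As a rough illustration of how a surface tilt maps to a beam displacement on the sensor, the following sketch applies Snell's law at each interface along an assumed water/acrylic/air path. The layer thicknesses, the flat-interface simplification, and the angle conventions are illustrative assumptions, not the paper's exact expressions (1) and (2).

```python
# Illustrative ray-trace (assumed geometry): a vertical beam hits a water surface
# tilted by theta_x, then crosses water, an acrylic wall, and air to the sensor.
# Layer thicknesses and the flat-interface simplification are assumptions.
import numpy as np

n_a, n_w, n_b = 1.000, 1.333, 1.490    # refractive indices: air, water, acrylic
D_w, D_b, D_air = 0.010, 0.010, 0.050  # assumed path lengths in each layer [m]

def displacement(theta_x):
    """Streamwise beam displacement on the sensor for surface slope theta_x [rad]."""
    # Refraction at the tilted free surface (incidence angle = theta_x for a vertical beam)
    t_w = np.arcsin(n_a / n_w * np.sin(theta_x))
    # Beam direction relative to vertical inside the water
    a_w = theta_x - t_w
    # Refraction into the acrylic wall and back into air (flat, horizontal interfaces)
    a_b = np.arcsin(n_w / n_b * np.sin(a_w))
    a_air = np.arcsin(n_b / n_a * np.sin(a_b))
    return D_w * np.tan(a_w) + D_b * np.tan(a_b) + D_air * np.tan(a_air)

# Tabulating displacement(theta_x) over the expected slope range gives the
# calibration curve that is later inverted to recover theta_x from r_x.
for th in np.deg2rad([5, 15, 30]):
    print(f"theta_x = {np.rad2deg(th):4.1f} deg -> r_x = {displacement(th)*1e3:6.2f} mm")
```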
The instantaneous free surface slope angle θ x is calculated by substituting the detected displacement r x into (1) and ( 2).The solution for θ x in ( 1) and ( 2) can be obtained by an iterative numerical method; however, this requires a long data processing time.The relation between r x and θ x is determined by least-squares fitting ( 1) and ( 2) using a fifth-order polynomial, and it is used to calculate θ x .Figure 3 shows the fifth-order polynomial curve together with calibration data that was obtained experimentally by inclining a 0.15-mm-thick cover glass on the stationary water surface.The base data for least-squares fitting, which are the numerical solutions of ( 1) and ( 2), are not shown in Figure 3 because they coincide almost exactly with the solid line in the figure.The theoretical curve deviates by less than ±0.02 rad from the calibration data.We also confirmed that fluctuations in the jet thickness D w have only a small effect on the measured value of θ x .For example, a 10% (=1 mm) increase in the jet thickness (which is four times the typical mean wave height) shifts the theoretical curve by only 0.01 rad under the present experimental conditions (see the dotted line in Figure 3).The overall uncertainly of this measurement is evaluated to be less than ±0.03 rad.The maximum slope angle measurable by this method depends on the light path length, the refractive indices of the materials, and the optical sensor size; it was ±0.80 rad for the present geometry. The present detector can measure the 2D displacement (r x , r y ) of a laser beam.It is necessary to greatly reduce the probability of beams missing the detection area.However, since the jet velocity far exceeds the phase velocity of the waves in the present experimental conditions, the waves move very little in the spanwise direction during the short period when they pass through the measurement location.This makes it difficult to discuss the spanwise waveform from the displacement data of r y , whereas basic statistical parameters can be measured qualitatively (e.g., the rootmean-square deviation (RMSD) of r y in Figure 11 of [13]).Consequently, the present study focuses on evaluating the gradient in the streamwise direction of the jet free surface. 
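The fifth-order polynomial shortcut described above can be mimicked with a few lines of code. In this sketch the forward model generating the base data is a simplified single-layer stand-in, so the refractive index, effective path length, and fitted coefficients are assumptions for illustration only.

```python
# Sketch: least-squares fit of a fifth-order polynomial mapping displacement r_x
# back to slope angle theta_x, so theta_x can be evaluated without iterating
# the refraction equations at every sample.
import numpy as np

n_w, D_eff = 1.333, 0.06   # assumed refractive index and effective path length [m]

theta = np.linspace(-0.8, 0.8, 400)                           # rad, measurable range
r_x = D_eff * np.tan(theta - np.arcsin(np.sin(theta) / n_w))  # simplified forward model

coeffs = np.polyfit(r_x, theta, deg=5)    # fit theta_x as a 5th-order polynomial of r_x
theta_fit = np.polyval(coeffs, r_x)
print("max fit residual [rad]:", np.max(np.abs(theta_fit - theta)))
```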
Data Acquisition and Processing The optical sensor (S1881) used in this experiment consists of a single pin photodiode. It can detect the 2D displacement of a beam on its detection area. As shown in Figure 4, the beam induces photocurrent signals X_1 and X_2 in the streamwise direction and Y_1 and Y_2 in the spanwise direction by the photovoltaic effect. These signals are amplified by a high-response op-amp (T-IVA001B, Turtle Industry, Ibaraki, Japan) and are sampled independently. The instantaneous displacements r_x and r_y are calculated from the balance of the photocurrents, r_x = (L_x/2)(X_2 − X_1)/(X_1 + X_2) and r_y = (L_y/2)(Y_2 − Y_1)/(Y_1 + Y_2) (3), where L_x = L_y = 26 mm are the side lengths of the detection area (including the nonactive area) of the optical sensor. This single-photodiode sensor is suitable for the present measurements since it has a maximum response frequency of 300 kHz for a continuous beam. However, if two or more beams simultaneously illuminate its detection area, it detects only a single intermediate point due to overlapping of the induced photocurrents. Therefore, we turned on the two laser diode beams alternately to prevent them from simultaneously illuminating the detection area. Although this switching operation generates a delay between the data acquisition of the two laser diodes, the dominant time lag of the cross-correlation coefficient can be determined by comparing the acquisition time of each data point with the switching data for the diodes. Therefore, the signals that control the timing of the laser beams are recorded at a higher rate of 400 kHz simultaneously with the output voltage of the optical sensor. The cross-correlation coefficient R(τ) is calculated from the fluctuations of the slope angles θ_x1 and θ_x2, R(τ) = ⟨θ'_x1(t) θ'_x2(t + τ)⟩/(σ_x1 σ_x2) (4), where τ is the time lag due to the spatial separation between the measurement locations, the primes denote fluctuations about the mean, and σ_x1 and σ_x2 are the standard deviations of the two signals. The wave speed c_x is evaluated from the dominant time lag τ_a, which corresponds to the maximum cross-correlation coefficient, as c_x = L/τ_a. Since each wave on the free surface is considered to propagate at a different wave speed, τ_a can be calculated for individual waves, which are separated from the time-series data for θ_x1 using the zero-upcrossing method. The zero-upcrossing method is commonly used to determine the wave period in time-domain analysis of irregular or randomly fluctuating data [20]. The wave period is determined as the time interval between successive crossings of the mean level of the data in the upward direction (Section 4 gives an example of crossings in the measured slope angle data). The spatial waveform is reconstructed as follows. Denoting the free surface shape by z = η_x(x), the local slope angle θ_x in the streamwise direction can be written as tan θ_x = ∂η_x/∂x = (∂η_x/∂t)/(∂x/∂t) (5), where ∂x/∂t is the speed of the free surface wave that passes through the measurement point. If waves are assumed to propagate with frozen profiles when passing through the measurement point, the wave shape η_x is obtained by integrating (5) with respect to time, η_x = ∫_wave c_x tan θ_x dt (6), where the subscript wave denotes integration over an individual wave; the frozen profile is assumed to last for only a short period as the wave passes the measurement location. Thus, η_x is calculated by numerically integrating (6), substituting the evaluated wave speed c_x for individual wave periods. The wave height h is obtained from the interval between the maximum and minimum values of η_x. The wavelength λ is calculated as the product of c_x and each wave period for θ_x. Examples of estimated results for η_x are given below (see Figure 11) after first considering the threshold value used to confirm the validity of the frozen-profile assumption in the cross-correlation method.
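A minimal sketch of the position calculation, assuming the conventional duolateral formula for a single-plane position-sensitive photodiode; the electrode ordering and sign convention below are assumptions rather than the specific wiring of the device used in the experiment.

```python
# Sketch: recovering the 2-D beam position on a single-plane position-sensitive
# photodiode from its four photocurrents, using the conventional duolateral formula.
L_x = L_y = 26e-3   # side lengths of the detection area [m]

def beam_position(X1, X2, Y1, Y2):
    """Return (r_x, r_y) from the four amplified photocurrent signals."""
    r_x = 0.5 * L_x * (X2 - X1) / (X1 + X2)
    r_y = 0.5 * L_y * (Y2 - Y1) / (Y1 + Y2)
    return r_x, r_y

print(beam_position(1.0, 1.2, 0.9, 0.9))  # beam slightly displaced in +x, centred in y
```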
Results Figure 5 shows typical data for the beam displacements r_x and the signals for controlling the switching of the laser diodes. Although the practical sampling rate is restricted to 40 kHz, free surface fluctuations can be observed by monitoring the reliable data obtained from the two measurement locations. The data acquisition time for θ_x2 lags 0.0125 ms behind that for θ_x1 because of the switching. In each measurement, 39,321 reliable data points are obtained per laser diode in a sampling period of 0.98 s. The cross-correlation coefficient in (4) is calculated for each wave period extracted from the time-series data for θ_x using the zero-upcrossing method. Figure 6 shows the intervals between successive crossings of the average level in the upward direction. Crossings are indicated by the solid circles in the upper figure in Figure 6. The cross-correlation coefficients are calculated for individual sequences of data between successive crossings (e.g., wave 1, wave 2, wave 3, etc. in Figure 6). The wave speed c_x is evaluated from the dominant time lag τ_a corresponding to the maximum cross-correlation coefficient R_max. If the assumption of a frozen profile in (6) ceases to hold, the cross-correlation coefficient may decrease due to deformation of the waveform. Such deformed waves are eliminated when calculating statistical properties by applying a threshold; the target wave is considered to be either lost or deformed at a downstream measurement location when R_max is below the threshold value R_th. In the present study, threshold values R_th of 0.90, 0.95 and 0.98 were tested experimentally. The streamwise variation of the mean wave steepness is found to be almost independent of the tested threshold value, whereas the number of detected waves decreases with an increase in the threshold value. The results for R_th = 0.90 and 0.98 are reported in this paper. Moreover, waves with extremely high speeds or short periods cannot be captured due to the limited time response. It is difficult to evaluate the waveform when the wave period T (= λ/c_x) becomes smaller than the limit T_min = 0.075 ms, because fewer than three slope angle data points are obtained over the wave period. Only waves that satisfy the condition λ/c_x ≥ T_min can be detected in the present experiment.
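The per-wave processing chain (zero-upcrossing segmentation, lag search by cross-correlation, conversion to a wave speed, and slope integration) can be sketched as follows on a synthetic, purely illustrative signal; the wave frequency, amplitude, and assumed wave speed are not measured values, and the lag search shown is a simple Pearson-correlation scan rather than the exact implementation used in the experiment.

```python
# Sketch of the per-wave processing: segment the slope-angle record by zero
# upcrossings, estimate the downstream lag by cross-correlation, convert it to
# a wave speed, and integrate the slope to get the local waveform.
import numpy as np

fs = 40e3            # sampling rate of the reliable data [Hz]
L = 1.27e-3          # streamwise separation of the two beams [m]
R_th = 0.90          # cross-correlation threshold (one of the tested values)

t = np.arange(0, 0.01, 1 / fs)
c_true = 4.0                                               # assumed wave speed [m/s]
theta1 = 0.2 * np.sin(2 * np.pi * 800 * t)                 # upstream slope angle [rad]
theta2 = 0.2 * np.sin(2 * np.pi * 800 * (t - L / c_true))  # delayed downstream signal

def upcrossings(x):
    """Indices where the signal crosses its mean level in the upward direction."""
    x = x - x.mean()
    return np.where((x[:-1] < 0) & (x[1:] >= 0))[0]

idx = upcrossings(theta1)
for start, stop in zip(idx[:-1], idx[1:]):        # one wave period per iteration
    w1, w2 = theta1[start:stop], theta2[start:stop]
    best_r, best_k = -1.0, 0
    for k in range(1, len(w1) // 2):              # candidate sample lags
        r = np.corrcoef(w1[:-k], w2[k:])[0, 1]
        if r > best_r:
            best_r, best_k = r, k
    if best_r < R_th:                             # discard lost or deformed waves
        continue
    c_x = L * fs / best_k                         # wave speed for this wave
    eta = np.cumsum(np.tan(w1)) * c_x / fs        # frozen-profile waveform, eq. (6) style
    print(f"c_x = {c_x:5.2f} m/s, wave height = {eta.max() - eta.min():.2e} m")
```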
Figure 7 shows typical microflash (10 μs) pictures of the jet free surface. When U = 6 m/s, three distinct regions can be identified with respect to wave development in the flow direction. In the first region (0 mm < x ≤ 9 mm), the jet is smooth with almost no visible waves. This is followed by a second region (9 mm < x ≤ 15 mm) where there are 2D periodic waves with a dominant wavelength of 0.5 to 1.0 mm and wave amplitudes that increase with increasing distance x from the nozzle exit. Finally, the 2D structure of the waves decays into less regular three-dimensional (3D) wave patterns. The smooth region is characterized by intermittent time-series slope angle data [15]. The 2D wave region is distinguished by a clear peak in the power spectral density of the slope angle fluctuations [14]. These regions are indicated in Figure 7. The smooth and 2D wave regions become shorter and eventually disappear as the jet velocity is increased. When the jet velocity exceeds ∼8 m/s, capillary waves appear on the free surface of the jet immediately downstream from the nozzle exit. Linear instability analysis [21,22] indicates that periodic wave generation is related to shear-mode instabilities under the free surface. The relaxation process in the separated nozzle-wall boundary layer may be related to the development of the 2D wave region. However, linear analysis is applicable only to the initial growth of perturbations, during which the velocity gradient below the free surface is relaxed. Linear analysis applies only to the initial jet region x ≈ 1.1 D_e (where D_e is the nozzle exit height) for U ≤ 8 m/s [14]. Linear theory is inapplicable at higher velocities (U > 8 m/s) because the nozzle-exit boundary layer will exhibit transient or turbulent properties. Therefore, highly nonlinear and irregular free surface waves that develop downstream of the linear amplification region near the nozzle exit, or that develop on the turbulent high-speed jet, cannot be characterized by the theoretical prediction. Figure 8 shows the streamwise variation of the number of identified waves, N, using the present optical technique. N represents the number of θ_x data sets that have a cross-correlation coefficient greater than R_th and a wave period greater than T_min. Since the free surface is accelerated after exiting the nozzle and capillary waves are initially generated and grow, N increases rapidly near the nozzle exit. N varies due to the local variation in the wave period and wave velocity. It reaches a steady state downstream of the point where the free surface of the jet becomes stable. At all the tested velocities, increasing the threshold R_th drastically reduces N. Waves with extremely high speeds or short periods also cannot be captured due to the limited time response. However, the sampling frequency could be improved by developing a detector that has a higher dynamic response rate for discontinuous beams. The required time response can also be relaxed by increasing the distance L between the two lasers, although this would generate large uncertainties in the evaluated wave velocity.
Figure 9 shows a plot of the mean wavelength λ_ave (an ensemble average of N sets of wavelength data) against the distance x from the nozzle exit for different jet velocities. The dotted line indicates the RMSD of the wavelength data for R_th = 0.90. λ_ave increases with increasing x for all the velocities, and it decreases with increasing average jet velocity. For comparison, the wavelength evaluated from the luminance profile of a still photograph is plotted; 100 profiles of streamwise luminance along the center axis of the test section were extracted from photographs (see Figure 7). Figure 11 shows the reconstructed streamwise waveforms; the coordinate ξ corresponds to the distance traveled in the streamwise direction during the sampling period dt with velocity c_x. In Figure 11, the plots are shifted by −λ/2 in the transverse direction to center the waveforms at ξ = 0. It should be noted that the data in Figure 11 do not follow the same wave propagating on the free surface. It can be observed that capillary waves are generated and grow with increasing distance from the nozzle exit. The waves are steepest at x ≈ 20 mm; they relax into moderate waveforms with a further increase in x. To emphasize the nonlinear and asymmetric form of the observed waves, the solid line indicates the third-order approximation of Stokes waves [24]; it is given by kη = ka cos kξ + (1/2)(ka)^2 cos 2kξ + (3/8)(ka)^3 cos 3kξ (7), where the corresponding experimental value of ka is substituted into this equation for each wave. Some of the measured waveforms agree with the profile for Stokes waves at downstream locations where the mean wave steepness is below the limit for a Stokes wave. However, highly deformed waveforms have been observed at the higher velocities using the present optical measurement technique. Conclusion Optical measurements of free surface waves on a water jet were performed for average jet velocities of U ≤ 10 m/s. The present technique employs an optical sensor with one photodiode to detect the displacements of two pulsed laser beams refracted at two locations separated by 1.27 mm on the free surface of the jet. Time-series slope angle data are obtained at a rate of 40 kHz. The wave speed is evaluated from the cross-correlation coefficient for each wave. The shape of the free surface wave is evaluated by integrating the slope angle data. The mean wavelength obtained by this technique is slightly greater than that obtained by photographic measurements because of the limited temporal response of the system that we used. However, the results demonstrate that this technique is capable of observing the linearly unpredictable free surface through variations in the mean wave steepness and the waveform. Figure 1: Schematic diagram of test section and optical system. Figure 2: Schematic diagram showing relationship between free surface slope angle and displacement on optical sensor.
Figure 5 shows typical data for the beam displacements r_x and the signals for controlling the switching of the laser diodes. Figure 5(a) shows examples of time intervals that include two wave periods. The shorter interval between 0.4000 and 0.5000 ms is magnified in Figure 5(b). The control signal voltage (shown in the lower figures in Figures 5(a) and 5(b)) is positive when the upstream laser (LD1) is switched on and the downstream laser (LD2) is switched off. The streamwise displacements of the two lasers are intermingled in the raw data, as shown by the open circles in the upper figures of Figures 5(a) and 5(b). Moreover, transition signals appear when the two lasers are alternately illuminated. The transitional motion is bounded by the horizontal arrows between 0.4500 and 0.5000 ms in Figure 5(b). This signal is generated by the intermediate output of the optical sensor while the laser diodes alternate. It has a much shorter period than the waves. Similar signals were observed in the preliminary test when a stationary, non-fluctuating water surface was measured. This transient output is considered to arise from jumping of the beam positions or variation in the beam intensity during switching. Consequently, only signals immediately prior to switching are used for statistical analysis. These reliable data are indicated by the red and blue solid circles in Figures 5(a) and 5(b). The acquisition frequency for the reliable data was chosen such that the output signal from the present sensor reached a plateau immediately prior to switching in the stationary water surface test. The relation between the control signal for the laser diodes and the reliable data obtained by the optical sensor is indicated by the vertical arrows at 0.4250 ms (LD1) and 0.4375 ms (LD2) in Figure 5(b). Removing the transitional data restricts the practical sampling rate to 40 kHz. Figure 5: Typical streamwise displacement data obtained by two lasers. Figure 6: Typical slope angle data and crossings detected by zero-upcrossing method. Figure 7: Structures on free surface of jet for U = 6 and 10 m/s.
5,675.6
2011-04-05T00:00:00.000
[ "Physics" ]
Spontaneous Symmetry Breaking in Hyperbolic Field Theory We study a non-compact analog to the U(1) symmetry group. The Higgs potential is obtained as a transversal section of the λϕ⁴ potential possessing symmetry under the action of this group. Both the spontaneous symmetry breaking and the uniform exact symmetry scenarios are obtained as particular cases. We then study an extension of the U(1) group by taking the direct product of it with this non-compact generator. In particular, we obtain an expression for the mass terms of the potential after spontaneous symmetry breaking.

Introduction

Hyperbolic numbers appeared in the literature for the first time in an article by James Cockle in 1848 [1]. Despite this long history, applications of them in physical contexts were not developed until recent years. The progress made, however, has shown (see [2]) that many structures used in physics, usually built in terms of complex numbers, admit definitions in terms of hyperbolic numbers. In particular, studies of relativistic quantum mechanics and relativistic wave equations [3], [4], and even a formulation of a gravito-electromagnetism theory [5], have been carried out systematically. The present work is based on the more detailed study [6].

Hyperbolic numbers can be introduced rigorously as one of the three possible (up to isomorphism) real commutative unital algebras [2]. More general hypercomplex systems are defined as finite-dimensional (not necessarily commutative) real unital algebras [7]. In this work we do not take this approach; we make practical definitions and present only the properties that will be needed.

In section 2 we introduce the hyperbolic number system, the usual arithmetic operations are presented, and the hyperbolic phases, or "versors", are defined. In section 3 we study the potential of the λϕ⁴ theory over the hyperbolic numbers and its respective spontaneous symmetry breaking (SSB) realization. In section 4 we extend the number system of hyperbolic numbers by combining them with the usual complex numbers, thus obtaining the bicomplex number system. Finally, in section 5 we present the case of the bicomplex λϕ⁴ and one of its possible SSB scenarios. Conclusions are presented in section 6.

Hyperbolic numbers

Complex numbers can be constructed heuristically by taking all the formal combinations x + iy, where i, called the imaginary unit, is such that i² = −1, and x, y ∈ R, but obviously i ∉ R. In a similar way we define hyperbolic numbers as the set of all numbers of the form x + jy, with x, y ∈ R, where j is a new quantity, called the hyperbolic imaginary unit, with the property j² = +1, but j ∉ R; in particular, j ≠ ±1. We will denote this set as D, i.e., D ≡ {x + jy | x, y ∈ R}. We define sum, subtraction and multiplication in the natural way: for any z₁, z₂ ∈ D, with z₁ = x₁ + jy₁ and z₂ = x₂ + jy₂, we have z₁ ± z₂ = (x₁ ± x₂) + j(y₁ ± y₂) and z₁z₂ = (x₁x₂ + y₁y₂) + j(x₁y₂ + x₂y₁).

Analogously to the usual complex case, we define a hyperbolic conjugation for any z = x + jy ∈ D, z̄ ≡ x − jy. The modulus or norm is |z|² ≡ zz̄ = x² − y². Note that this "norm" is not positive definite; in particular |z|² = 0 whenever y = ±x; these numbers of the form z = x ± jx are called null hyperbolic numbers. The (multiplicative) inverse of z is denoted by z⁻¹ and defined as z⁻¹ ≡ z̄/|z|². We can see that null numbers do not possess an inverse.
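The arithmetic just defined is easy to mirror in code. The following minimal Python sketch implements the product, conjugate, norm and inverse exactly as above; it is only a convenience for experimenting with hyperbolic numbers, not part of the original paper.

    from dataclasses import dataclass

    @dataclass
    class Hyperbolic:
        x: float  # real part
        y: float  # hyperbolic part (coefficient of j, with j**2 = +1)

        def __add__(self, o):
            return Hyperbolic(self.x + o.x, self.y + o.y)

        def __mul__(self, o):
            # (x1 + j y1)(x2 + j y2) = (x1 x2 + y1 y2) + j (x1 y2 + x2 y1)
            return Hyperbolic(self.x * o.x + self.y * o.y,
                              self.x * o.y + self.y * o.x)

        def conj(self):
            return Hyperbolic(self.x, -self.y)

        def norm2(self):
            # z * conj(z) = x**2 - y**2 (not positive definite)
            return self.x**2 - self.y**2

        def inverse(self):
            n = self.norm2()
            if n == 0:
                raise ZeroDivisionError("null hyperbolic numbers have no inverse")
            return Hyperbolic(self.x / n, -self.y / n)

    z = Hyperbolic(3.0, 1.0)
    print(z * z.inverse())               # Hyperbolic(x=1.0, y=0.0)
    print(Hyperbolic(2.0, 2.0).norm2())  # 0 -> a null number, hence no inverse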
Finally, we define hyperbolic phases or "versors" via the hyperbolic analog of the Euler formula: e^(jθ) ≡ cosh θ + j sinh θ.

The hyperbolic λφ⁴ model

In this section we analyze the potential of a massive self-interacting hyperbolic scalar field. We propose a Lagrangian of the λφ⁴ type, whose potential is invariant under rescaling by hyperbolic phases, i.e., under the transformations φ → e^(jθ)φ, φ̄ → e^(−jθ)φ̄. The graph of the potential (as a function of the real and hyperbolic parts of φ, denoted by Re(φ) and Hy(φ), respectively) is shown in figure 1. The solid black lines correspond to the SSB (for Hy(φ) = 0) and non-SSB (for Re(φ) = 0) scenarios of a real scalar field. One interesting thing to notice is that the full hyperbolic potential always presents SSB, independent of the sign of the (squared) mass parameter. This can be seen in figure 2, where the potential is plotted for both a positive and a negative mass term. Both cases are qualitatively equal; the only difference is a 90-degree rotation in field space, i.e., the change from a real to a tachyonic mass amounts to the discrete transformation that exchanges the real and hyperbolic parts of the field.

We now proceed to make the SSB of this potential explicit. First we write it in terms of φ₁ and φ₂. The extrema of this function are located at φ₁ = φ₂ = 0 and on the hyperbola φ₁² − φ₂² = K², with K fixed by the minimization of V. The former is a saddle point, which can be seen from the figure or by evaluating the trace of the Hessian matrix of V at φ₁ = φ₂ = 0, while the latter are the true minima of the potential. For m² < 0 the simplest choice is φ₁,min = K and φ₂,min = 0, so we define the shifted fields χ₁ = φ₁ − K and χ₂ = φ₂. In terms of these the potential can be rewritten, and we can see that χ₂ is now a Goldstone boson which has lost its mass, while χ₁ has doubled the value of its mass term. Both have quartic self-interactions and cubic and quartic interactions with each other. This is not very different from the usual complex case.

Bicomplex numbers

We now introduce the bicomplex number system, denoted as H, which is basically the direct product of complex and hyperbolic numbers. We define z ∈ H as a number of the form z = x + iy + jv + ijw, with x, y, v, w ∈ R. Addition, subtraction and multiplication of bicomplex numbers are defined in the natural way. Complex conjugation now acts on both i and j: z̄ ≡ x − iy − jv + ijw, so the first and last terms do not change sign. The "norm" of z is |z|² ≡ zz̄ = (x² + y² − v² − w²) + 2ij(xw − yv). Note that |z|² is, in general, not even a real number, but it is hermitian in the sense that it is invariant under bicomplex conjugation; we call numbers of this type (those of the form a + ijb) hybrid numbers. The set of all hybrid numbers has a couple of nice features: it is closed under sum and multiplication, and any number z = a + ijb always has a multiplicative inverse (as long as a² + b² ≠ 0). We can also define the bicomplex phases: e^(iα+jβ) ≡ e^(iα) e^(jβ) = cos α cosh β + i sin α cosh β + j cos α sinh β + ij sin α sinh β.

In general a bicomplex number has 4 "degrees of freedom"; however, in the following sections we will be interested in the simplest generalization of a complex field, which has only two. To reduce the number of components of the bicomplex field, then, we will assume the following relations of proportionality: x = βw, y = βv, where β ∈ R is a constant which in principle could take any value; we will nevertheless see below that in some cases we will have to restrict its range of values to a certain interval.
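The statement that χ₂ becomes massless while the χ₁ mass term doubles can be checked symbolically. The sketch below assumes a specific normalization of the hyperbolic potential, V = (m²/2)(φ₁² − φ₂²) + (λ/4)(φ₁² − φ₂²)², since the original expression is not reproduced here; with that assumption the quadratic terms after the shift come out as described.

    import sympy as sp

    chi1, chi2, m2, lam = sp.symbols("chi1 chi2 m2 lam", real=True)

    # Assumed normalization: V = (m2/2)*(phi1**2 - phi2**2) + (lam/4)*(phi1**2 - phi2**2)**2,
    # which depends only on the hyperbolic norm phi*conj(phi) = phi1**2 - phi2**2.
    K = sp.sqrt(-m2 / lam)        # vacuum value on the hyperbola (valid for m2 < 0, lam > 0)
    phi1, phi2 = K + chi1, chi2   # shifted fields

    rho = phi1**2 - phi2**2
    V = sp.expand(m2 / 2 * rho + lam / 4 * rho**2)

    coeffs = sp.Poly(V, chi1, chi2).as_dict()
    print(coeffs.get((2, 0), 0))  # chi1**2 coefficient -> -m2 (= +|m2|, doubled mass term)
    print(coeffs.get((0, 2), 0))  # chi2**2 coefficient -> 0   (Goldstone-like mode)
    print(coeffs.get((1, 0), 0))  # linear term         -> 0   (expansion around a minimum)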
Using (15) we can write the field in terms of just two independent components, and the norm takes a correspondingly restricted form. The relations (15) are the only ones consistent with both (circular and hyperbolic) invariances of this norm. For instance, the identifications x = βv and y = βw would be inconsistent with the circular symmetry.

The bicomplex λψ⁴ model

We now study a λψ⁴ Lagrangian in which a = ±1 is a constant that allows us to control the sign of the mass term, and ψ is a bicomplex field with the restrictions described in the previous section, i.e., it has the form (16). Also, to have a bounded potential (here by "bounded" we mean that both the real and the ij components of the potential are bounded), we have to assume that the mass and self-coupling parameters are hybrid, i.e., of the form λ = λ_R + ij λ_H, and similarly for the mass parameter. The potential (19) only depends on the combination ψ̄ψ, so it takes only hybrid values. The vacuum conditions are the usual ones: either the field sits at the origin or ψ̄ψ takes the value that extremizes the potential. The first option corresponds to the origin of field space and is not very interesting to us. The second can be rewritten in terms of the components of ψ. We can solve the resulting two equations for v₀ and w₀; however, from the analysis of the Hessian matrix of the potentials V_R and V_H we see that the points (v₀, w₀) determined by (25) and (26) are saddle points unless one of the expectation values v₀ or w₀ vanishes. We then assume that w₀ = 0 (the other choice, v₀ = 0, is completely analogous). In this work we will only analyze the case with the simplifying condition λ_R = λ_H ≡ λ. We also assume β² ≠ 1. With these assumptions (27) simplifies, and the resulting equation implies an inequality on β. In the following we assume a = +1, λ > 0, and −1 − √2 < β < −1 + √2, so that the above inequality holds. (Actually, from the analysis of the Hessian, the allowed interval for producing a stable vacuum is approximately β ∈ (−0.2, 0.2), which lies within the interval on which the condition (29) holds.)

The resulting potentials are plotted in figure 3. We can see that they have more or less the shape of the traditional Mexican hat potential; however, now the minima are just two points, located at the bottom of the red valleys. Once we choose a specific value for the vacuum and expand the Lagrangian around that point, the quadratic terms (i.e., the mass terms) show that the field w has lost both of its mass components, while v still has a mass in the hybrid sense, though its real part vanishes.

Conclusions

We have developed the λϕ⁴ theory for both hyperbolic and bicomplex fields. We have shown that the hyperbolic potential contains both the SSB and non-SSB scenarios of a single real scalar field. It is also qualitatively insensitive to the sign of the mass term, leading to a hyperbolic SSB scenario whether the mass is real or tachyonic. In the more general bicomplex case, we saw that in some cases the incorporation of the new imaginary unit j leads to a deformation of the Mexican hat potential, reducing its vacuum manifold to a set of only two points.
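The claim that ψ̄ψ takes only hybrid values can be verified componentwise. The sketch below encodes a bicomplex number by its (1, i, j, ij) components with the multiplication implied by i² = −1, j² = +1 and (ij)² = −1; the component ordering is a convention introduced here, not notation from the paper.

    import sympy as sp

    def bc_mul(a, b):
        # Product of bicomplex numbers given as (real, i, j, ij) component tuples.
        a0, a1, a2, a3 = a
        b0, b1, b2, b3 = b
        return (a0*b0 - a1*b1 + a2*b2 - a3*b3,   # 1  component (i^2 = -1, j^2 = +1, (ij)^2 = -1)
                a0*b1 + a1*b0 + a2*b3 + a3*b2,   # i  component (j*ij = i)
                a0*b2 + a2*b0 - a1*b3 - a3*b1,   # j  component (i*ij = -j)
                a0*b3 + a3*b0 + a1*b2 + a2*b1)   # ij component

    x, y, v, w = sp.symbols("x y v w", real=True)
    z    = (x,  y,  v,  w)   # z    = x + i y + j v + ij w
    zbar = (x, -y, -v,  w)   # zbar flips the signs of the i and j parts, keeps the ij part

    norm = tuple(sp.expand(c) for c in bc_mul(z, zbar))
    print(norm)
    # The i and j components come out identically zero: z*zbar is a hybrid number a + ij*b.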
2,555
2016-10-19T00:00:00.000
[ "Physics" ]
Standard Model Baryon Number Violation Seeded by Black Holes We show that black holes with a Schwarzschild radius of the order of the electroweak scale may act as seeds for baryon number violation within the Standard Model via sphaleron transitions. The corresponding rate is faster than the one in the pure vacuum, and baryon number violation around black holes can take place during the evolution of the universe after the electroweak phase transition. We show, however, that this does not pose any threat to a pre-existing baryon asymmetry in the universe.

Introduction. It is well known that, within the Standard Model (SM) of electroweak interactions, the baryon (B) and lepton (L) symmetries are accidental and it is not possible to violate their corresponding charges at any order of perturbation theory. Nevertheless, nonperturbative effects may give rise to processes which violate the baryon and lepton numbers. Indeed, the presence of the non-abelian group SU(2)_L within the SM gauge group implies that the ground state is the sum of an infinite number of vacua which are classically degenerate and have different baryon (and lepton) numbers. Static configurations, called sphalerons [1], corresponding to unstable solutions of the equations of motion and to saddle points of the energy functional, interpolate between two nearby vacua. The probability of baryon number violation occurring in the vacuum through sphaleron transitions is exponentially suppressed [2], Γ_B ∼ e^(−4π/α_W) ∼ e^(−150), where α_W = g₂²/4π is the SU(2)_L gauge coupling constant. Such an exponential factor is interpreted as the probability of making a transition from one classical vacuum to the closest one by quantum tunneling, going through a barrier of energy E_sph ∼ 10 TeV thanks to the formation of a sphaleron. In more extreme situations, like the primordial Universe, baryon and lepton number violating processes may however be faster through classical transitions induced by the high-temperature environment, and they play a significant role in the generation of the baryon asymmetry [3].

There are also arguments suggesting that all global symmetries, including the baryon one, are violated when including gravity [4]. In particular, no-hair theorems tell us that global charges are swallowed by Black Holes (BHs). Indeed, quanta with global charge may scatter off a BH, leaving behind a BH with a slightly larger mass but indeterminate global charge, as dictated by the no-hair theorem. At the level of effective field theory, one can imagine integrating out virtual BH states of mass M_BH arising from quantum gravity, leading to higher-dimensional baryon number violating operators suppressed by powers of M_BH, where M_BH might be as small as the Planck mass M_Pl.

What about baryon number violation induced by sphaleron transitions in the presence of BHs? In general, tunneling processes may be catalysed by the presence of impurities.
A BH is a gravitational impurity, and indeed it has been shown that BHs can trigger electroweak SM vacuum instability in their vicinity, both at zero temperature [5][6][7][8] and in the early universe [9][10][11][12][13][14], as well as baryon number violation through interactions with skyrmions [15,16]. Since we are dealing with SM sphaleron configurations, a simple estimate tells us that the typical Schwarzschild radius of a BH able to alter the rate of baryon number violation is of the order of the inverse electroweak scale, r_S ∼ 1/v, where G = 1/M_Pl² and v = 246 GeV is the Vacuum Expectation Value (VEV) of the Higgs field. This leads to BH masses in the ballpark of 10⁻²² solar masses, i.e., to BHs which evaporate with a typical lifetime of O(1) yr and which might have been present during the evolution of the Universe. We are going to show that baryon number violation through sphaleron transitions in the presence of such BHs can be faster than in the pure vacuum, and we will offer as well some considerations about what may happen should these tiny BHs be present during the evolution of the universe.

Baryon number violation seeded by BHs. To study the influence of BHs on the sphaleron transitions we start from the action of the Higgs doublet field φ along with an SU(2)_L gauge field W^a_μ (including the abelian hypercharge group U(1)_Y does not change our results) in a curved spacetime, where V(φ) is the Higgs potential and we have added the Gibbons-Hawking-York boundary term as we deal with a spacetime manifold M with a BH horizon. The spacetime geometry around the BH can be taken static and spherically symmetric, such that its metric takes a Schwarzschild-like form where A(r) vanishes at the horizon. A suitable ansatz is adopted for the gauge and Higgs fields. Since we are ultimately interested in the energy functional, we perform an analytical continuation of the action to the Euclidean metric with t = iτ, taking τ to be periodic with period 1/T (to be identified with the relevant temperature of the system). By setting ξ = g₂vr and expanding the mass with respect to its value at the horizon we can write the equations of motion. Here λ is the quartic coupling of the Higgs. The second term in the Higgs potential is due to the vacuum polarization effect of the Hawking radiation, originating at one loop from the interactions of the Higgs with the other SM particles in the vicinity of the horizon of the BH [17]. This term is very similar to the finite-temperature correction to the mass squared of the Higgs, ∼ T²h², in a plasma at finite temperature T. The key difference is that the effective temperature depends on the distance from the horizon [18,19] (ξ_S ≡ g₂vr_S being the dimensionless BH horizon), so that, close to the horizon, the correction to the potential acquires the familiar form with T replaced by the Hawking temperature T_H. We adopt here the Unruh vacuum [21] as the most appropriate vacuum for our physical situation. Indeed, in the following we will consider the case in which the temperature of the universe is different from the Hawking temperature. As such, the Hartle-Hawking vacuum [22] is not the proper one, as it assumes full and static thermal equilibrium with the surrounding plasma. The effective coupling λ̃ is given by an expression computed in terms of g₂, g₁ (the gauge coupling of the U(1)_Y group), and the top Yukawa coupling y_t, all evaluated at the electroweak scale [23].
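The ballpark quoted above can be reproduced with a few lines of arithmetic. The sketch below assumes r_S = 2GM_BH ∼ 1/v and uses round numbers for the Planck and solar masses; it is an order-of-magnitude check only, with no claim about the precise prefactors used in the paper.

    import math

    M_PL  = 1.22e19     # Planck mass, GeV
    V_EW  = 246.0       # Higgs VEV, GeV
    M_SUN = 1.12e57     # solar mass expressed in GeV

    # r_S = 2*G*M_BH ~ 1/v  =>  M_BH ~ M_Pl**2 / (2*v)
    m_bh = M_PL**2 / (2.0 * V_EW)
    print(f"M_BH ~ {m_bh:.2e} GeV ~ {m_bh / M_SUN:.1e} solar masses")

    # Hawking temperature T_H = 1/(8*pi*G*M_BH) = M_Pl**2 / (8*pi*M_BH)
    t_h = M_PL**2 / (8.0 * math.pi * m_bh)
    print(f"T_H ~ {t_h:.0f} GeV")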
Since α ≪ 1, and given that the metric has to approach the Minkowski spacetime at infinity, the leading-order solution of Eq. (10) gives δ ≈ 0. The equations for the gauge and Higgs fields then simplify. In order to solve the equations of motion, we have to impose proper boundary conditions. At infinity the metric has to approach the Minkowski spacetime and the fields have to be in their true vacuum. At the BH horizon, ξ → ξ_S, one can impose boundary conditions setting the fields in the false vacuum. The numerical solutions of the equations of motion can be found in the left panel of Fig. 1 for different rescaled BH horizons. For small enough BHs, there exists a critical radius below which the vacuum polarization effect induced by the Hawking radiation leads to the restoration of the symmetry close to the horizon, nevertheless allowing for a sphaleron solution interpolating between the unbroken and broken phases. The characteristic mass contribution at infinity has to be thought of as the sphaleron energy in the presence of a BH. Indeed, in the limit of flat spacetime with no BH (r_S ≪ 1/g₂v), we recover the standard result for the current physical mass of the Higgs (i.e., for λ̃ ≃ 0.3); see the right panel of Fig. 1. Notice that the effect of the vacuum polarization in the Higgs potential, in the limit of tiny BH masses, is minor because the radius of the sphaleron configuration is located away from the Schwarzschild radius. For small BH masses the sphaleron radius is large compared to the Schwarzschild radius and its energy is only slightly perturbed compared to the vacuum solution. As the seed BH masses increase, the sphaleron radius approaches the horizon and the BH helps catalyse the sphaleron transitions. For larger BH masses, it is energetically more costly to generate the sphaleron solution, as its characteristic size is required to be larger than the BH horizon and therefore larger than ∼ 1/g₂v. Notice also that the minimum BH mass is consistent with the estimate (3).

Rate of baryon number violation seeded by BHs. How fast can the baryon number violation take place in the vicinity of a BH? The vacuum decay rate takes the standard form [24], with a prefactor set by the size of the sphaleron configuration. For large BH masses, this size turns out to be comparable to the Schwarzschild radius r_S, while for small BH masses it is of the order of 1/g₂v. The dimensionless term B/2π comes from the normalization of the zero mode associated with time translation symmetry, in terms of the exponent B, which is given by the difference between the Euclidean action of the bounce solution and that of the configuration before the transition. For a static solution this coincides with the difference between the BH areas at the horizon and at infinity. As one can easily check, for the static solution the bulk part of the energy functional vanishes due to the Hamiltonian constraint, while the boundary terms give the BH Bekenstein-Hawking entropy at the BH horizon [24]. A nice and useful interpretation of this formula may be obtained by expanding at leading order M_∞ = M_BH + δM_∞ (we have checked this approximation to be valid in the BH mass range we are concerned with). One obtains B ≃ δM_∞/T_H, where in the last passage we have recognised the BH temperature T_H = 1/(8πGM_BH). The expression (23) tells us that the exponential factor exp(−B) for the sphaleron transition may be thought of as the standard Boltzmann suppression factor in a thermal environment where the temperature of the system is indeed the Hawking temperature.
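Taking the Boltzmann-like reading B ≃ δM_∞/T_H at face value, and approximating δM_∞ by a constant sphaleron energy of roughly 9 TeV (an assumption made only for this estimate; in the paper the sphaleron energy itself depends on the BH mass), one can see how the exponent falls with decreasing BH mass and reaches order unity near 10⁻²⁴ M☉, in line with the values quoted in the following paragraph.

    import math

    M_PL, M_SUN = 1.22e19, 1.12e57   # Planck mass and solar mass, in GeV
    E_SPH = 9.0e3                    # sphaleron energy, GeV (assumed constant here)

    def hawking_temperature(m_bh):
        # T_H = 1/(8*pi*G*M) = M_Pl**2 / (8*pi*M), everything in GeV
        return M_PL**2 / (8.0 * math.pi * m_bh)

    for frac in (1e-22, 1e-23, 1e-24):
        t_h = hawking_temperature(frac * M_SUN)
        b = E_SPH / t_h              # naive Boltzmann-like bounce exponent
        print(f"M_BH = {frac:.0e} M_sun: T_H ~ {t_h:7.1f} GeV, B ~ {b:6.1f} (vacuum value ~ 150)")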
This interpretation also allows one to smoothly interpolate between the zero- and finite-temperature limits. In the case in which the BH is immersed in a plasma at finite temperature T, as in the case of Primordial BHs (PBHs), the sphaleron baryon number violating rate is expected to go as exp(−E_sph(M_BH)/T) for T ≳ T_H, matching the exponential factor (21) at T ∼ T_H. In this limit one must use the VEV of the Higgs field at finite temperature, v(T); see Refs. [25,26]. In Fig. 2 we have plotted, on the left panel, the bounce factor B as a function of the BH mass and for two choices of the plasma temperature. One can see that, for small enough BH masses, the bounce B can be much smaller than the vacuum bounce 4π/α_W ∼ 150, reaching values of order unity for masses M_BH ∼ 10⁻²⁴ M☉, below which the validity of the computation breaks down. This happens in the region where the expression (23) applies. Moreover, as the BH mass increases, the finite temperature of the thermal bath dominates over the Hawking temperature, leading to a suppression of the bounce exponent.

Some further considerations. PBHs with masses of the order of 10⁻²² M☉ may have populated the early Universe, albeit with an abundance, normalized to the dark matter one, of f_PBH = Ω_PBH/Ω_DM ≲ 10⁻⁴ to avoid bounds from Big Bang nucleosynthesis [27,28]. If they are formed by the collapse of large overdensities within a horizon volume, their formation temperature is T_f ∼ 10¹⁰ GeV [29], while their Hawking temperature is given in Eq. (13). The light PBHs we are concerned with are always born with a Hawking temperature which is smaller than the plasma temperature. The rate of evaporation for the masses under consideration is quite small, Γ_H ∼ 4·10⁻³³ GeV (10⁻²² M☉/M_BH)³ [30]. To a first approximation, one may therefore treat the PBH masses as constant in time for our considerations. A comparison between the baryon number violation rate and the evaporation rate, both in terms of the Hubble rate, can be found in the right panel of Fig. 2. The evaporation rate becomes relevant only for BH masses smaller than 10⁻²⁸ M☉, for which evaporation is effective at temperatures around 100 GeV.

Now, at very high temperatures thermal fluctuations induce unsuppressed baryon number violation through sphaleron transitions until the electroweak phase transition takes place [3]. In the SM this happens at T_EW ≃ 163 GeV for the current mass of the Higgs. At smaller temperatures and away from the PBHs, the sphalerons are inactive and baryon number violation is suppressed by the exponential exp(−E_sph(0)/T). However, even after the electroweak phase transition, baryon number violation can take place at a rate faster than the rate of expansion of the universe around the PBHs; see the right panel of Fig. 2, where for each BH mass we have taken the maximum between the plasma temperature and the Hawking temperature to evaluate the suppression factor. Does this represent a threat for the scenarios where the baryon asymmetry of the universe is generated before or at the electroweak phase transition? At the time of formation, the fraction of PBHs per horizon is given by β [29]. Big Bang nucleosynthesis bounds limit the PBH mass fraction at formation to β(T_f) ≲ 10⁻²³ for the range of masses of interest [27]. The number N of causally independent regions at a time during the radiation-dominated era with temperature T and currently within our horizon is given by N ∼ 10³⁴ (T/GeV)³.
This means that the number density of PBHs at a given temperature T, normalized to the photon number density n_γ, is approximately given by an expression in which we have introduced the baryon asymmetry η = n_b/n_γ, normalised to its currently constrained value [25]. Luckily, the PBH density is too small to have any impact on the pre-existing baryon asymmetry. We believe that this conclusion would hardly be changed by envisaging other scenarios of PBH formation in the early universe.

Conclusions. We have studied the violation of the baryon number within the SM induced by sphaleron transitions around a BH. Our findings indicate that the bounce for such transitions may be much smaller than the one in the absence of BHs if their Schwarzschild radius is of the order of the electroweak scale. Around PBHs the violation of the baryon number takes place at temperatures below the electroweak phase transition. However, our findings indicate that the baryon asymmetry of the universe is unlikely to be wiped out by the presence of PBHs acting as seeds of the sphaleron transitions.
3,638.6
2021-02-15T00:00:00.000
[ "Physics" ]
Applying GIS in Blue-Green Infrastructure Design in Urban Areas for Better Life Quality and Climate Resilience: The expansion of urban centers and peri-urban zones significantly impacts both the natural world and human well-being, leading to issues such as increased air pollution, the formation of urban heat islands, and challenges in water management. The concept of multifunctional greening serves as a cornerstone, emphasizing the interconnectedness of ecological, social, and health-related factors. This study aimed to identify potential locations for three specific types of blue-green infrastructure (BGI): bioswales, infiltration trenches, and green bus stops. Leveraging geospatial datasets, Geographic Information System (GIS) technology, and remote sensing methodologies, this study conducted a comprehensive analysis and modeling of spatial information. Initial cartographic representations were developed to identify specific locations within Olsztyn, a city in Poland, deemed appropriate for the implementation of the designated BGI components. Following this, these models were combined with two additional models created by the researchers: a surface urban heat island (SUHI) model and a demographic model that outlined the age structure of the city's population. This synergistic approach resulted in the development of a detailed map, which identified potential locations for the implementation of blue-green infrastructure. This was achieved by utilizing vector data acquired with a precision of 1 m. The high level of detail on the map allows for an extremely accurate representation of geographical features and infrastructure layouts, which are essential for precise planning and implementation. This infrastructure is identified as a key strategy for strengthening ecosystem resilience, improving urban livability, and promoting public health and well-being.

Introduction

The important and urgent need to cope with climate change, coupled with the evolution of urban and peri-urban landscapes, emphasizes the need to seek solutions focused on mitigating and, in the longer term, counteracting its effects. Consequently, methods for monitoring the environmental impact of human activities during ongoing urbanization processes are increasingly important [1][2][3][4][5][6][7]. Multifunctional greening is fundamental to providing a wide range of ecosystem goods and services that benefit the urban population. The motivation for undertaking this research was the prevailing opinion in the literature that few methods and approaches exist for quantifying and monitoring multifunctional greening at different spatial scales, including with the use of GIS [8][9][10]. Many of the urban areas that constitute elements of blue-green infrastructure are undergoing development and thus retain only a minimal amount of green space, whose type and area are mainly the result of existing urban planning regulations rather than of the need to minimize future problems [11][12][13][14][15][16]. Currently, geospatial data are among the most informative types of data and thus a key element for broadening the available information about the state of space. Ongoing research on blue-green infrastructure provides examples of the use of GIS tools in spatial analysis [17][18][19][20].
The integration of blue-green infrastructure in urban areas is proposed as a strategy to lessen the impact of escalating temperatures from climate change on health outcomes [11,[21][22][23][24][25][26][27].Highlighting the wealth of research confirmed the positive effects of BGI; the focus has been directed toward elements seamlessly connecting water and natural features, specifically bioswales, infiltration ditches, and green bus stops [28][29][30]. The BGI encompasses design and spatial solutions rooted in the intrinsic qualities of a specific location, often referred to as nature-based solutions (NBSs) [31].The elements of BGI are designed to create a cohesive system where services and spatial configurations enhance and support each other reciprocally [18].In urban environments, BGI serves multiple purposes concurrently.It can store and purify water, enhance the visual appeal of an area [32], absorb carbon dioxide, mitigate air pollution, and counteract the urban heat island effect by regulating the air temperature.Furthermore, these elements provide habitats for urban plants and wildlife, fostering ecological continuity and contributing to environmental education.When properly designed, they mitigate excessive surface runoff and the risk of flooding [33][34][35].BGI contributes to improving mental health and positively influences the well-being of city dwellers by mitigating the negative impacts of climate change with the provision of cohesive ecosystem services [31,36,37]. The technology under discussion ensures the effective collection, storage, and handling of spatial data, enabling the processing and examination of these data to generate novel, dependable spatial information, namely, details associated with a specific spatial location.Some research in this arena focuses on employing the GIS platform to recognize established blue-green infrastructure (BGI) [38,39] as well as whether there are enough BGI elements to ensure at an appropriate level the quality of life of the population and whether this is in line with current legal regulations [15,[40][41][42].According to the literature, there are several difficulties in measuring quality of life, mainly due to the many interpretations of the concept and the lack of a complete, agreed definition [3,43,44].This paper assumes that the concept of quality of life is defined by a model that combines objective and subjective indicators, encompasses different domains of life, and integrates both personal autonomy and individual preferences with actual assessments.This approach suggests that a statistical measurement of quality of life should consider the multifaceted nature of the concept.It proposes that such measurements should include both objective conditions of an individual's life and their subjective experiences or subjective well-being.In conclusion, the term living conditions encompasses a multitude of factors, including material conditions, health, education, economic activity, leisure activities, social relations, personal security, and the quality of the natural environment at the place of living.The multifaceted domains addressed by BGI, as previously mentioned, effectively translate into both objective improvements in living conditions and subjective perceptions of well-being.A holistic approach that takes into account the optimal realization of BGI is in line with the emphasis in the literature on including both tangible and intangible aspects of life in quality of life assessments.Consequently, BGI not only supports the 
technical functioning of a city but also enriches the lives of its inhabitants by creating healthier, more sustainable, and enjoyable living environments. The referenced studies demonstrate that Geographic Information System (GIS) tools are crucial for identifying areas in need of further development to ensure sustainability and for effectively selecting components of blue-green infrastructure.Additionally, substantial research focuses on leveraging remote sensing data to analyze land cover change dynamics, especially in scenarios characterized by swift and significant urbanization [45][46][47].Nevertheless, a unifying theme emerges from these studies-the automation of conducted analyses and the capacity to execute them over expansive geographical areas, objectives that this study also espouses.It is also noteworthy that spatial analyses developed using GIS permit the precise identification of phenomena and processes occurring within the context of dynamic suburban development [48][49][50][51][52][53]. The form of urban spaces, as well as areas under pressure from urbanization processes, have been permanently shaped over the years.Thus, it should be noted that it is much easier to design new elements of blue-green infrastructure in open and nature-rich areas than in an area where everything is already occupied, and such spaces are difficult to find.Given the location of the research area, this study aligns with the scientific article [54] in examining information relevant to BGI from the standpoint of its inception, particularly in zones where recreation and tourism are of central importance.The feasibility of performing analyses and spatial data modeling was validated with a GIS, which also facilitated the evaluation of how practicable it was to implement three selected BGI components.Consequently, a map was produced showcasing potential sites for three specified types of blue-green infrastructure within the urban setting, encompassing bioswales, infiltration trenches, and green bus stops. This research methodology utilized geospatial data, including remote sensing data supported by GIS technology, to identify and optimize the location selection process for BGI (blue-green infrastructure) components.The results obtained confirm the validity of the use of specific spatial datasets as well as the applied spatial analysis methods in improving the location optimization process for different BGI components. Multifunctional Greening and Blue-Green Infrastructural Components The implementation analyses of BGI are intended to assist planners and landscape architects in integrating nature-based solutions into urban planning, serving as either an alternative or a complement to traditional infrastructure.These activities, based on the principles of sustainable development, aim to improve the quality of life and protect the environment.It is also essential to conduct research on effective land management, which is crucial for sustainable urban development and the development of rural areas [55,56]. These ecosystems offer critical services, essential for tackling climate change through mitigation and adaptation efforts, notably including [57,58] cooling and insulation; the absorption of CO2; the utilization of low-carbon materials; and the promotion and implementation of sustainable development goals (SDGs) [59].It also aims at increasing the continuity of natural areas within cities, supporting their ecological role and substantially increasing urban biodiversity [60]. 
Subsequently, the specific BGI components that are most suitable for the optimal use of urbanized areas are delineated, either independently or in combination.The utilization of BGI elements permits the enhancement of environmental and thermal comfort, whereas the deployment of multiple elements would have a more synergistic impact on the urban environment [4].The integration of the elements collectively or individually is contingent upon the specific urban environmental objectives, the availability of space, and the specific climatic conditions of the area.The combined use of the BGI elements analyzed ensures the adequate irrigation of the vegetation and, simultaneously, reduces excessive evaporation through appropriate shading [61].Concurrently, each of these elements contributes to the improvement in water quality through filtration, which has a positive impact on soil quality.Furthermore, the absorption of water directly into the ground serves to reduce the load on urban drainage systems, preventing the flooding of urban areas.At the same time, the introduction of appropriate vegetation and a shading function significantly enhances urban aesthetics, making it more welcoming and attractive for residents. Table 1 presents the concept and configuration, highlighting the spatial attributes favorable to their installation and offering insights into the proposed locations for the various BGI components [57,[62][63][64][65][66].Concurrently, this study aims to collate a set of spatial features (geodata), which represents a fundamental preliminary step for subsequent analyses to identify optimal locations for these components [57].In the context of this study, the researchers examined the spatial attributes identified in the referenced literature as key to identifying optimal locations for BGI elements [57,62,[64][65][66][67].The analysis conducted unveiled that, with regard to specific features, publicly accessible spatial datasets exhibited variations in accessibility and quality.This allowed for the identification of parameters from the aforementioned table that could contribute to establishing a composite indicator for determining the optimal locations of individual BGI components.It is notable that, in the analysis, the decision was taken to primarily rely on publicly available registries that are distinguished by high quality, timeliness, and standardization.Consequently, the chosen attributes used in the spatial analyses enabled the precise identification of locations and, more critically, ensured that these analyses could be replicated in future years. 
Geodata Analysis for Optimum BGI Locations

This study focuses on the city of Olsztyn, covering an area of 88.3 km². Forests constitute 21.3% of this area, with the City Forest alone comprising 18 km². The majority of the 5.6 km² of urban green spaces in Olsztyn are parks and cemeteries. The city also boasts eleven lakes and five smaller water bodies, predominantly located in the western part. Four rivers, the Łyna, Wadąg, Kortówka, and Skanda, flow through the city. With a population exceeding 172,000, Olsztyn provides each resident with an average of 139 m² of urban forest, 3.5 m² of parkland, and 43.5 m² of water within the city limits. Despite abundant natural resources, the increasing prevalence of residential and service-oriented development poses a threat to green spaces, often leading to their usurpation or degradation. This trend is confirmed by the heat island map and analysis conducted for this study. The necessity to identify optimal sites for blue-green infrastructure (BGI) deployment is highlighted, with the aim of enhancing urban sustainability. The developed method assists in selecting sites with favorable spatial characteristics for BGI, ensuring their viability and effective integration. The optimal BGI location map is instrumental in the decision-making process, revealing potential sites that traditional methods might overlook.

The geoinformatic analysis was performed to develop a cartographic model of the optimal location of BGI following the scheme presented in Figure 1. The spatial data were categorized into three sets (Figure 1):

• Group A refers to forms of land use that preclude the optimal location of the BGI elements analyzed. Surface waters, buildings, roads, communication zones, railways, cemeteries, sports facilities, and forests were included in this category (Figure 2).

• Group B relates to elements that determine the optimal placement of the analyzed types of BGI. The geospatial features listed in Table 1, together with the datasets that allow for their identification and that are conducive to the occurrence of bioswales, infiltration trenches, and green bus stops, are discussed later in this paper.
• Group C comprises spatial datasets that allow for the identification of surface urban heat islands (SUHIs) in the analyzed urban area and for the analysis of the distribution of residents' places of residence, taking into account their age (DM, the Demography Model). The BDOT10k and DTM datasets were used for this analysis. The BDOT10k database, a vector-based repository, contains details of the location and characteristics of topographic features, facilitating the production of 1:10,000 scale maps. The DTM dataset, on the other hand, is a point model of terrain elevation created using aerial laser scanning (ALS). This research used a grid model with a 1 m × 1 m mesh in the PL-KRON86-NH height reference frame to represent the terrain elevation. These datasets were chosen for their clarity, ease of acquisition, and completeness in relation to the area analyzed. Concurrently, this study proposed the utilization of vector data from OpenStreetMap as a supplement to the topographic data, which may be more up to date in certain urban areas. However, it should be emphasized that currently the BDOT10k dataset is the main source of complete and high-quality data at a national level. The use of these different datasets and their analysis with specialized tools allowed for the identification of methods to determine the optimal locations for the analyzed BGI elements, taking into account the specificities presented in Table 1.

MBGI for Bioswale, Infiltration Trench, and Green Bus Stops

The first BGI element subjected to a geoinformatic analysis was the bioswale. In examining this BGI component, the spatial analysis considered factors such as areas with slopes of less than 5%, flood-prone areas, the minimum and maximum catchment sizes, and the maximum planned width of the bioswale. In addition, using the favorable characteristics analyzed, the best locations for the bioswale were identified near roads, footpaths, cycle paths, car parks, and public areas. These elements were identified as spatial characteristics that would support the integration of the specified BGI component.
The analyzed facilities required analyses of the downslopes to be first carried out.In accordance with the adopted assumptions concerning BGI (Table 1), areas with slopes of less than 5% were sought.Next, the BDOT10k dataset was used to remove surface water and woodland areas, as well as building and road areas, from areas considered to have the required slope.At the same time, buffer zones of 30 m for bicycle paths and existing roads were established, which would enable the indication of areas with slopes of up to 5% that should be considered in the analysis as the preferred areas for bioswales.The next step assumed, based on the SCALGO and DTM application algorithms, the development of runoffs in vector form for the area under analysis.It was then assumed that the BGI component, i.e., the bioswale, should be established to maximize its function on the existing runoff lines.At the same time, the maximum width of the trench was adopted to be 5 m in accordance with the above findings (Table 1).This was followed by the identification of common areas (intersect) for specific trenches with a width of 5 m and the areas located in the vicinity of roads and paths as well as squares and car parks.In consideration of the fact that bioswales are also designed to impede surface runoff, the analysis included areas that had been flooded with a minimum of 10 mm of rainfall.The analysis, extended to include the possibility of considering the function of minimizing the risk of flooding in the city area, allowed for the location for the BGI component concerned to be made more specific.Finally, based on the analysis concerned, an MBGI cartographic model for bioswales was developed, which helped identify 1615 optimum locations (Figure 3). Another component of blue-green infrastructure that was analyzed in this study was the infiltration trench.Due to the similar nature, it was adopted that the spatial analysis conducted could also be applied to determine the optimum location of rain gardens in a container (bioretention planter).The evaluation of the spatial characteristics of this BGI component took into account factors such as the largest catchment areas, the planned maximum width of the component, and the local drainage systems.At the same time, the favorable characteristics identified in the analysis were used to identify prime locations in the vicinity of playgrounds, sports complexes, recreational areas or open communal spaces, and parking facilities.These elements were recognized as spatial attributes that facilitate the integration of the chosen BGI component. 
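The vector operations used for the bioswale analysis above (and, with different widths and target layers, for the infiltration trench analysis that follows) can be expressed compactly in a GIS scripting environment. The sketch below is a simplified geopandas version with hypothetical layer names and file paths; the actual study combined BDOT10k features, DTM-derived slopes and SCALGO runoff data in dedicated GIS software.

    import geopandas as gpd

    # Hypothetical layer names and paths; all layers are assumed to share a metric national CRS.
    slope_ok = gpd.read_file("slope_below_5pct.gpkg")     # polygons with slope < 5%
    roads    = gpd.read_file("bdot10k_roads.gpkg")
    paths    = gpd.read_file("bdot10k_paths_parkings.gpkg")
    runoff   = gpd.read_file("scalgo_runoff_lines.gpkg")  # surface-runoff lines
    excluded = gpd.read_file("group_a_exclusions.gpkg")   # water, buildings, forests, ...

    # 30 m corridors around roads, paths and car parks (preferred bioswale surroundings).
    corridor = gpd.GeoDataFrame(
        geometry=[roads.buffer(30).unary_union, paths.buffer(30).unary_union],
        crs=roads.crs,
    ).dissolve()

    # Bioswale candidates: 5 m wide strips along the runoff lines (2.5 m on each side) ...
    strips = gpd.GeoDataFrame(geometry=runoff.buffer(2.5))

    # ... restricted to gentle slopes and preferred corridors, minus Group A exclusions.
    candidates = gpd.overlay(strips, slope_ok, how="intersection")
    candidates = gpd.overlay(candidates, corridor, how="intersection")
    candidates = gpd.overlay(candidates, excluded.dissolve(), how="difference")

    candidates.to_file("bioswale_candidates.gpkg", driver="GPKG")
    print(f"{len(candidates)} candidate bioswale strips")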
The first step assumed the determination of a drainage basin with a maximum overall area of 5 ha within the area of the city under analysis. As regards the infiltration trench, as was done for the bioswale, the basic analysis was to determine the runoff lines over the entire area under analysis. Next, the drainage lines that passed through areas covered by surface water and forest, as well as areas where buildings and roads were located, were removed from further analysis using the BDOT10k data. At the same time, the maximum width of the infiltration trench was adopted as 2.5 m in accordance with the adopted findings (Table 1). This was followed by the indication of common areas (intersect) for trenches with a preset width and recreational and sports areas, squares, and car parks, which were extracted from the BDOT10k database. As a result of the analyses, 62 locations for the BGI components, i.e., infiltration trenches, were identified and then juxtaposed with the layer representing a drainage basin with a maximum area of 5 ha. Finally, 39 optimum locations were selected (Figure 4), which enabled the generation of an MBGI cartographic model of the locations of infiltration trenches in the Olsztyn city area.

The last blue-green infrastructure component analyzed in this study was the green bus stop. In examining the spatial aspects of this BGI feature, considerations included the location of public transport stops, the maximum size of catchment areas, and the flood risk zones within the area. These considerations were identified as spatial attributes favorable to the deployment of the selected BGI feature.

In order to inventory bus stops in the city area, the BDOT10k and OSM databases were used. However, due to the lack of data or the incorrect location of facilities, there was a need for verification and updating, which was carried out using an up-to-date orthophotomap. The above-mentioned measures resulted in the establishment of a database containing information on the location of 369 bus stops. According to the adopted assumptions (Table 1), in relation to the bus stops, the area under analysis covered a maximum of 60 m². The data compiled in the SCALGO application, representing areas that are subject to flooding following significant daily rainfall of at least 10 mm, were then used. When implementing the additional function of the green bus stop (Table 1), which concerns the reduction in the risk of local floods and the overloading of the storm drainage system, the bus stops whose drainage basins overlapped spatially with flooded areas were selected. The analysis resulted in the selection of 39 bus stops, which represented a cartographic model (MBGI) of the optimum locations of green bus stops (Figure 5).

Compilations of Surface Urban Heat Island (SUHI) Model and Demography Model (DM) Applied to the Study Area

Urban heat islands were identified by measuring the land surface temperature (LST) using data from the Landsat 8 mission. Following the recommendations of the United States Geological Survey (USGS), thermal infrared sensor (TIRS) data were used with a spatial resolution of 30 m for channel 10 [70].
The chosen methodology, necessary for accurate measurements, included atmospheric correction and consideration of the different emissivity according to the nature of the surface. The single-channel algorithm method [71] was used, which includes atmospheric functions, atmospheric correction, and a threshold method for the normalized difference vegetation index. This method actively incorporates emissivity thresholds using the NDVI in the measurement process [72]. Given its proven effectiveness in numerous studies, especially on Landsat-8 OLI/TIRS mission data, the methodology described in this study was considered suitable [71,[73][74][75][76]. It is worth noting that, of the currently developed methods, the one by Sobrino et al. [77] stands out for having the most accurate results.

In view of the accuracy and trueness of the analyses performed, historical meteorological data and a maximum cloud cover of 12% were taken into account in the basic conditions for the selection of the available data. The analysis covered the summer periods from the beginning of June to the end of August of the years 2021-2022. Based on the indicators analyzed as part of the procedure for calculating the actual temperature of the area under study, the datasets developed for six days, i.e., 17 June 2021, 12 July 2021, 13 August 2021, 12 June 2022, 28 June 2022, and 14 July 2022, were analyzed (USGS, no date). The next step was to compare the maximum temperatures on these days based on archival data [78].

Based on the satellite observations over the examined area, the highest temperature noted at 10:00 was chosen for comparison. Historical meteorological records revealed that the peak temperature, reaching 30.9 °C, occurred on 28 June 2022 [78,79]. This resulted in the data collected on that particular day being selected for detailed analysis (Figure 6). In order to analyze the age structure within the study area, demographic tables from the Department of the Olsztyn City Council were used. These data were compared with the boundaries of the residential areas after they had been collected and collated. This approach made it possible to segment the population into defined age groups, 0-10, 11-65, and over 65, across different residential areas. The results of this segmentation are shown in Figure 7, which highlights the regions with a higher concentration of people aged 65 and over, who are more susceptible to the adverse effects of elevated temperatures, particularly in the summer months.
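A stripped-down version of the emissivity and LST step can be sketched as follows. The NDVI thresholds, emissivity values and the single-band Planck correction used here are common textbook choices and stand in for the full single-channel algorithm, which additionally requires the atmospheric functions mentioned above; all input arrays are synthetic.

    import numpy as np

    # Synthetic reflectance and brightness-temperature arrays; in the study these come
    # from Landsat 8 bands 4 (red), 5 (NIR) and 10 (TIRS brightness temperature, K).
    red = np.array([[0.12, 0.10], [0.08, 0.20]])
    nir = np.array([[0.30, 0.35], [0.40, 0.22]])
    bt  = np.array([[303.0, 301.5], [299.8, 305.2]])

    ndvi = (nir - red) / (nir + red)

    # Emissivity from NDVI thresholds (assumed values: bare soil 0.966, full vegetation 0.99).
    ndvi_soil, ndvi_veg = 0.2, 0.5
    pv = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0) ** 2  # vegetation proportion
    emissivity = 0.966 + (0.99 - 0.966) * pv

    # Simplified single-band correction of brightness temperature to LST,
    # with band-10 effective wavelength 10.895 um and rho = h*c/k_B.
    wavelength_m = 10.895e-6
    rho = 1.438e-2   # m*K
    lst = bt / (1.0 + (wavelength_m * bt / rho) * np.log(emissivity))
    print(np.round(lst - 273.15, 2))   # land surface temperature in deg C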
Results

In the context of the factors considered when determining the optimal locations of the specific BGI elements, the researchers emphasized the possibility of utilizing diverse data sources, including digital terrain models (DTMs), a topographic database (BDOT10k), and hydrological data. The utilization of the digital terrain model allowed information about the terrain's morphology to be obtained, which is crucial for analyzing terrain slopes, groundwater levels, and surface water runoff directions. On the other hand, the BDOT10k topographic database enabled the identification of both preferred areas suitable for BGI components, such as parks, areas along roads, and parking lots, and unsuitable areas, encompassing forests and areas occupied by existing buildings or infrastructure. The last analyzed aspect was related to the hydrological data and analyses, which resulted in the creation of watershed and surface runoff maps. Determining the flow directions of water and identifying areas prone to flooding supported the selection of the BGI component locations and contributed to flood risk reduction and the mitigation of the negative impact of extreme rainfall events.

In the following step of this investigation, comparisons were made between the MBGI cartographic models, depicting the best locations for the blue-green infrastructure components, and the findings from both the surface urban heat island (SUHI) analysis and the demographic model (DM), as presented in Figure 8. The integration of the different mapping models involved aligning the individual datasets containing information on optimal BGI locations with the temperature data, the age structure of the population, and the geospatial factors that preclude the location of the individual BGI elements. By extending the focus to health challenges, this phase aimed to propose targeted solutions in specific zones of the urban study area. As a result, the MBGI mapping models developed became a valuable tool for visualizing, analyzing, and selecting the optimal locations for the BGI components. The aim was to maximize their positive impact on temperature reduction, which translates into more favorable living conditions in the city.

It is important to note that the impact of BGI on urban microclimates may vary depending on the specific implementation and environmental context [80]. Variability due to geographical and climatic factors should inform the design and implementation of BGI elements, taking into account local conditions. Climatic differences, such as the increased cooling and humidification effects of water bodies in hot and dry climates compared to temperate regions, are important to consider. Similarly, vertical greening may be more effective in urban areas with high solar exposure. The structure and layout of a city also have a significant impact on the effectiveness of BGI. In densely built cities, small enclaves of green infrastructure can provide the necessary relief from heat, while in more dispersed cities, larger and more continuous green spaces may be needed to achieve similar effects. The cultural and social context also plays a role in influencing the use and maintenance of BGI. The utilization of public green spaces exhibits considerable variation between continents, contingent upon the extent of public involvement and municipal support. Economic factors, such as the financial resources available to implement and maintain BGI, also influence its quality, scope, and effectiveness. Cities with greater financial resources are
able to implement more comprehensive and technologically advanced BGI systems.Consequently, city planners and city managers should adapt BGI strategies to specific local conditions.It is of the utmost importance to conduct comprehensive climate surveys prior to the implementation of BGI, to analyze community engagement, and to continuously monitor and adjust BGI projects based on their performance and community feedback in order to identify key conditions and optimize projects.Minimizing the negative effects of high temperatures is directly linked to reducing the risk of escalating hazards such as hyperthermia, dehydration, increased chronic diseases, social isolation, and sleep disturbances in vulnerable people, particularly the elderly [81][82][83][84]. Thus, the conducted analyses and developed cartographic models can significantly contribute to health prevention strategies and risk management related to extreme temperatures among the elderly population residing in urban areas.The process of developing MBGI cartographic models represented a pivotal stage in the research, aiming to integrate diverse data and thereby incorporate various factors for identifying optimal locations for BGI components.These developed cartographic models hold practical significance, particularly for urban planning and making informed decisions regarding sustainable city development and improving residents' quality of life.According to the outcomes of this research, proactive measures could be utilized to protect residents during extreme climate change impacts in crisis management, as well as by enhancing air quality, reducing heat islands, and providing suitable recreational spaces that positively impact residents' health.Furthermore, with regard to increasing biodiversity, the analyses pertaining to blue-green infrastructure locations could facilitate the creation of new habitats for various animal and plant species, the preservation of ecological corridors, improvement in water quality, reduction in spatial fragmentation, and the protection of endangered species. Considering such a multitude of aspects in pinpointing optimal BGI locations allows for the optimal allocation of financial resources and the pursuit of a sustainable urban policy concerning infrastructure and the environment.Moreover, the adaptability and applicability of the models in various contexts and urban areas underscore the universality and practical value of the conducted research. 
Discussion It was stressed that data quality is of paramount significance within the analysis and that data quality is a crucial determinant in this study.Lessons learned from the BDOT10k data revealed prevalent challenges related to data completeness and timeliness within the dataset under review.Despite updating the topographic database every two years, the dynamic nature of spatial development meant that not all components were adequately represented in the dataset.The resolution of issues pertaining to the completeness and timeliness of the topographic database data necessitates a multifaceted approach.Due to financial constraints that preclude more frequent updates of the BDOT10k database, it is prudent to consider the integration of supplementary data sources, particularly those of open access, such as OpenStreetMap.Furthermore, a more intensive utilization of satellite data and remote sensing, in conjunction with machine learning and artificial intelligence techniques, could facilitate the ongoing detection of land use change and data updating.It is also of interest to obtain data for surveys from social networks or via mobile applications and online platforms, where residents can report changes and create spatial data.This solution would allow for the provision of up-to-date and realistic data, which, once verified, would significantly increase the usability and accuracy of topographic data. The biggest challenge in utilizing the BDOT10k data is the aspect of changes occurring between subsequent updates, leading to discrepancies between the dataset and the actual spatial conditions.As a result, inaccuracies in representing new roads, buildings, or changes in land use may arise, impacting the quality of conducted analyses and potentially leading to erroneous conclusions.It should also be noted that the updating of the topographic dataset relies heavily on reference databases; however, in cases where data are missing from these references, updates are based on the visual identification of changes using orthophotomaps.Consequently, this could lead to the incorrect identification of urban development elements by the person updating the database.Nevertheless, it is important to highlight that urban centers, particularly provincial ones, are updated more frequently than every two years.As a consequence, frequent updates lead to the discovery and rectification of errors within the dataset, thus improving the overall quality of the dataset. 
Given the availability of the BDOT10k data in the vector format, there is an opportunity to merge it with additional datasets. Integration provides an opportunity to improve the quality control of individual datasets while promoting a higher degree of timeliness within the combined dataset. As part of the research methodology, it was decided to include OSM data within the BDOT10k dataset. The decentralized data collection process driven by volunteer communities through the OSM database played a significant role in this decision. Volunteers often reside in specific regions and regularly update data based on their accurate knowledge of the updated area. This implies that data updates occur with greater frequency and reflect actual, real-time changes. Consequently, for specific components, such as drainage facilities, support was derived from the OSM database, which does not necessitate an administrative mode of data update. This means that access to up-to-date data is facilitated in urbanized areas. Certainly, as was the case with the location of the existing bus stops, the dataset used also required visual verification and supplementation of data based on the current orthophotomap. An example of errors that had to be corrected for the purposes of this study was the lack of road continuity (Figure 9).

Considering the advantages as well as the limitations of each dataset, the decision was made to rely on data integration. On the one hand, BDOT10k, based on reference sources, was used, which allowed for the identification and reliability of technical infrastructure elements. On the other hand, the speed of OSM updates favored the identification of new elements that emerged between the BDOT10k update cycles. The integration of data from disparate sources, as exemplified by the present study, facilitated cross-comparison and cross-validation, thereby enhancing the precision of the analysis. The utilization of multiple data sources not only compensates for individual dataset gaps but also contributes to a more comprehensive and reliable understanding of the studied phenomena.

In this study, the relevant emission coefficients were identified, and atmospheric corrections were applied to obtain a plausible SUHI model. Unfortunately, one of the unavoidable limitations of this study was the data acquisition time. A review of the literature indicates that the SUHI phenomenon intensifies during the nighttime hours, while satellite data in Poland are recorded during the daytime, specifically between 09:30 and 10:00 for the selected area. Nevertheless, it was determined that, despite these challenges, the data can still indicate optimal locations for blue-green infrastructure (BGI).
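As a rough, assumption-based illustration of how such a daytime land surface temperature (LST) product can be reduced to a binary SUHI hotspot layer for later overlay, the sketch below flags cells that are more than one standard deviation warmer than the scene mean; the file name and the one-sigma threshold are illustrative choices, not values taken from this study.

import numpy as np
import rasterio

# Hypothetical emissivity- and atmosphere-corrected LST raster in degrees Celsius.
with rasterio.open("lst_corrected.tif") as src:
    lst = src.read(1).astype("float32")
    profile = src.profile
    nodata = src.nodata

valid = np.isfinite(lst) if nodata is None else (lst != nodata)
mean, std = lst[valid].mean(), lst[valid].std()

# Flag cells more than one standard deviation warmer than the scene mean.
hotspots = (valid & (lst > mean + std)).astype("uint8")

profile.update(dtype="uint8", count=1, nodata=0)
with rasterio.open("suhi_hotspots.tif", "w", **profile) as dst:
    dst.write(hotspots, 1)

Any threshold of this kind would need to be calibrated against the local climate surveys discussed above, and a nighttime product would be preferable where available.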
At the same time, the research assumed the use of demographic data, which, having been obtained from official registers, required additional processing, including ordering and aggregation. A significant challenge was the absence of a clearly defined location, which was addressed by generalizing the data on the age structure of residents to districts. Acquiring these data was extremely time-consuming, and the presented analysis allows areas to be indicated by relating the extent of a residential community to the location of the BGI components. Moreover, given that the implementation of BGI-related facilities is largely overseen by city management policies, it is deemed sufficient to narrow down the location of potential investments. Among the strengths of the applied methodology, it is worth mentioning the use of geoinformation systems, which enable the integration and comprehensive, precise analysis of spatial data from various sources. It is also important to highlight the interdisciplinary approach employed in the design of solutions for individual blue-green infrastructure elements. This approach encompasses a number of different disciplines, including geodesy, demography, hydrology, and geography. Being application-oriented, the developed models, supported by appropriate visualization, can serve as a key tool for decision-makers and planners, helping to create solutions that will contribute to improving residents' quality of life and to strengthening individual settlement units against the consequences of climate change. It is postulated that the models developed may serve as a foundation for further analysis, taking into account additional spatial features pertinent to the location of BGI. Important elements from the point of view of the analysis could also be the soil conditions, the infiltration capacity of the ground, the depth of the groundwater level, and the ventilation model of the city. However, these data were excluded from the analyses due to their unavailability, incompleteness, and untimeliness. The utilization of open data exchange formats within geoinformation systems is expected to significantly facilitate this task.
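As a small illustration of how open exchange formats lower this barrier, the following sketch (file names are invented; this is not part of the study's workflow) reads a hypothetical GML delivery with GeoPandas, which relies on the GDAL/OGR GML driver, and converts it to a GeoPackage for further processing in open-source GIS tools.

import geopandas as gpd

# Read a GML delivery; the OGR GML driver resolves the feature schema in most cases.
layer = gpd.read_file("bdot10k_delivery.gml")

# Quick inspection before conversion.
print(layer.crs, len(layer), list(layer.columns)[:5])

# Convert to GeoPackage so the layer can be used directly in QGIS or further scripts.
layer.to_file("bdot10k_delivery.gpkg", driver="GPKG")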
In the context of good practices for the implementation of open data initiatives, and especially for the promotion of local sustainability, the issues of interoperability and standardization of data exchange formats are crucial [85][86][87][88][89][90]. The solution is to introduce universal standards, open-source software, and appropriate regulations that legitimize the solutions adopted [91]. In the context of data exchange, the key format at present is GML, which is gradually being implemented as the primary spatial data format. Nevertheless, a considerable quantity of data remains in analogue form, necessitating its conversion to the vector format for exchange. Exchange, integration, and verification through the merging of multiple spatial datasets are further facilitated by the concept of sharing data through web services, such as WMS, WMTS, and WFS. This technology enables wide interoperability between different systems and organizations utilizing geographic information, and the use of these standards ensures independence from specific software. Related to this is the popularization of GIS applications available under open licenses, such as QGIS, SAGA GIS, and gvSIG, as well as ETL (Extract, Transform, Load) software used in data processing, which allows data to be efficiently extracted from various sources, transformed to fit business needs, and loaded into target systems such as databases or data warehouses. The realization of these goals will not be possible without the active cooperation of government and commercial entities. In order to achieve this, it is necessary to create appropriate regulations to guide the creation, use, and sharing of open spatial datasets. The implementation of such regulations will also facilitate the attraction of financial support, which is crucial for the maintenance of infrastructure, data management, and distribution. Furthermore, public institutions should encourage the use and creation of open datasets by organizations, particularly non-profit organizations, through concerted action.

One of the most crucial and practical aspects of BGI localization is the economic cost associated with implementing blue-green infrastructure (BGI) projects, as well as the potential sources of funding for these projects. However, these issues are not addressed in this article, as they constitute a broad and separate research problem. The cost of constructing BGI depends largely on the quality and quantity of its elements and the characteristics of the space in which they are to be located. This implies that costs may vary considerably from one instance to another. Consequently, it is crucial to highlight that the research conducted and the methodology developed serve to identify the most suitable locations and to inform subsequent activities, including their implementation.

Conclusions

This research has shown the remarkable effectiveness of geoinformation systems, ranging from data collection and integration to spatial analysis, in the process of indicating the optimal locations of selected blue-green infrastructure (BGI) elements. The presented approach, which exploits the availability of spatial data through the use of appropriate formats and services and is based on publicly available open-source GIS solutions, allows for significant automation of the performed analyses. The method used enables the precise identification of areas that exhibit an appropriate set of spatial features, which are optimal locations for the arrangement of the analyzed BGI elements.
However, in view of the possibility of errors in the data obtained, it is essential that those responsible for selecting the optimal BGI sites undertake verification based on other data sources and appraise the suitability of the selected locations, including through on-site inspection. This approach offers additional control and enables an improvement in the quality of the geoinformation analyses carried out. Cartographic models, including the SUHI model and the DM, were developed for this study as filters that enrich the information layer and thus support decision-making regarding the location of blue-green infrastructure (BGI) elements (a minimal overlay sketch illustrating this filtering idea follows the list below). Interest in the surveyed aspect of spatial management, which includes improving the living conditions of residents as well as increasing the quantity and diversity of urban vegetation complexes, was also driven by the following aspects:

• The benefits of reducing pollution, improving air quality in urban areas, and thereby improving the health and well-being of residents. In addition, locating BGI elements within the city allows for an increase in biodiversity, thus providing habitats for a variety of plant and animal species. Water conservation is also an important element in relation to the environment, as blue-green infrastructure helps to filter rainwater.

• Improving the aesthetics of the urban landscape by giving the urban space a unique character. Through the creation of new parks, ponds, and accompanying architectural elements, as well as elements creating urban vegetation, the aesthetics of streets and public spaces are improved, offering residents a place for relaxation, recreation, and social integration.

• Elements of blue-green infrastructure allow resilience to be built against the effects of climate change, in particular by providing natural and effective thermal insulation as well as rainwater retention. Green spaces within the city allow for absorption in the event of precipitation, reducing the risk of flooding, as well as the retention of water in the ground in the event of extremely high temperatures.
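The sketch below illustrates the filtering idea referred to above; the array names, the rasterized inputs, and the simple scoring rule are assumptions made for the example rather than the cartographic modelling actually performed in the study.

import numpy as np

# Hypothetical model outputs rasterized to a common grid (1 = condition met).
suhi_hot = np.load("suhi_hotspots.npy")       # elevated daytime heat stress
dm_priority = np.load("dm_elderly_share.npy") # high share of elderly residents
excluded = np.load("bdot10k_unsuitable.npy")  # areas where BGI cannot be located

# Each cartographic model acts as a filter: candidate cells must pass every condition.
candidates = (suhi_hot == 1) & (dm_priority == 1) & (excluded == 0)

# Alternatively, an additive score can rank cells instead of applying a hard cut.
score = np.where(excluded == 1, 0, suhi_hot + dm_priority)

np.save("bgi_candidate_cells.npy", candidates.astype("uint8"))
print("candidate cells:", int(candidates.sum()))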
In future research, it would be advisable to consider the use of alternative sets of spatial data to enhance the quality and accuracy of the data [20,46,91,92]. The integration of diverse spatial datasets and advanced data acquisition technologies significantly strengthens the reliability of spatial analyses. Data from public government sources that provide statistics on land use and regulatory-driven changes are crucial for informed and effective spatial planning. In addition, observations of actual user behavior on social media platforms make it possible to assess reactions to changes and to better understand the direction and dynamics of community action in urban environments. The advancement of remote sensing technologies, including drones (UAVs) and LiDAR systems, enables the acquisition of high-resolution data, which are essential for the accurate and ongoing analysis of physical changes in space [93]. Satellite data, with their ability to cover large areas, play a pivotal role in environmental and urban monitoring, supporting large-scale strategic planning and providing valuable data for historical analysis [94]. The integration of these data sources and the use of modern technology enable planners and analysts to develop comprehensive, multidimensional models. The models presented support effective data-driven decision-making in urban development and other spatial analyses [95]. With such tools, it is possible not only to plan accurately but also to adapt quickly to changing conditions and societal needs, resulting in an increased quality of life for residents and sustainable spatial development. Furthermore, machine learning techniques and artificial intelligence algorithms could be employed to automate the process of data updates and validation, thereby further enhancing the reliability of the analysis.

The selection of locations for BGI elements should be complemented by taking into account the opinions and needs of the community during the design process of these spaces. Such activities can influence the successful implementation and long-term maintenance of BGI elements. Socially desirable facilities and a sense of collective action positively influence the acceptance of such concepts. Therefore, proposals for the selection of spaces for such investments should be based on the developed map of the location potential of BGI elements. This should be followed by a field visit and public consultations to justify and strengthen the design and implementation activities.
In accordance with the above, the study presented here constitutes the initial stage of an analysis aimed at developing a matrix of optimal locational features for the deployment of blue-green infrastructure elements. This research will facilitate a more precise assessment and selection of optimal locations, taking into account a range of social, spatial, and environmental factors. Space management in this respect is a key element of adaptation strategies to today's challenges, allowing for the minimization of the negative effects of both urbanization processes and climate change. Refining and expanding this methodology is essential for the successful design and execution of blue-green infrastructure. The process involves the technical aspects of urban planning and a holistic approach to creating spaces that serve environmental and social purposes. By prioritizing the development of these methodologies, urban planners and developers can ensure that blue-green infrastructure is not only effective in managing natural resources, such as water and green spaces, but also in creating environments that are inherently more welcoming and harmonious for their inhabitants.

Figure 1. Diagram for the optimization of a blue-green infrastructure (BGI) location.

Figure 2. Data used for identifying areas unsuitable for BGI based on the topographic database BDOT10k. Utilizing a geoinformation application enhances the precision of analyzing the phenomena and processes taking place in space. To ensure the integrity of the data, it was decided to use spatial datasets maintained by the National Land Surveying and Cartographic Resource. The data and their sources used in the analyses are listed below:
• Hydrological corrections from SCALGO LIVE [69] (date acquired: 10 March 2023);
• Digital Terrain Model (DTM) from GUGiK [68], the Head Office of Geodesy and Cartography (date acquired: 10 March 2023);
• Topographic Reference Database (BDOT10k) surface water from GUGiK [68], the Head Office of Geodesy and Cartography (date acquired: 15 October 2020);
• Topographic Reference Database (BDOT10k) from GUGiK [68], the Head Office of Geodesy and Cartography (date acquired: 10 March 2023);
• OpenStreetMap (OSM) (date acquired: 10 March 2023).

Figure 3. A map model (MBGI) showing the optimal location of the bioswale for the city of Olsztyn.

Figure 4. A map model (MBGI) showing the optimal location of the infiltration trench for the city of Olsztyn.

Figure 5. A map model (MBGI) showing the optimal location of the bus stops for the city of Olsztyn.

Figure 7. DM model for the city of Olsztyn: population in residential communities.

Figure 8. The results of the analysis conducted to determine the optimal locations for the selected BGI features.

Figure 9. A part of the Olsztyn area near Lake Track: a visualization of the errors in the acquired data contained in the BDOT10k database.
10,064.6
2024-06-18T00:00:00.000
[ "Environmental Science", "Geography", "Engineering" ]
“Methodological approaches to investment property valuation”

ARTICLE INFO: Olena Fomina, Olena Moshkovska, Olena Prokopova, Nataliya Nikolenko and Svitlana Slomchynska (2018). Methodological approaches to investment property valuation. Investment Management and Financial Innovations, 15(4), 367-381. doi:10.21511/imfi.15(4).2018.30
DOI: http://dx.doi.org/10.21511/imfi.15(4).2018.30
RELEASED ON: Wednesday, 26 December 2018
RECEIVED ON: Wednesday, 07 November 2018
ACCEPTED ON: Wednesday, 19 December 2018

INTRODUCTION

Continuous development of up-to-date technologies stipulates the need to seek opportunities for effective disposal of real estate items in order to retain competitive market positions. Separation of items suitable for restoration, retrofitting and further use as investment properties (administrative, retail, warehousing, etc.) is considered one of the available tools for this. Proper formation of the funding sources and of the fields of investment property use, as well as scrupulous implementation of the methodological approaches to item recognition and valuation, will have a positive effect on recovery processes in the establishment, provide additional gains in earnings, improve the informational support required for making managerial decisions, and contribute to competitive growth in domestic and international markets. However, the controversial essentials of investment property identification as an asset, as well as the lack of a well-structured and clear algorithm for fair value measurement, are considered the key challenges preventing improvement and effective management of investment property items. Therefore, the tasks of working out scientifically grounded approaches to the interpretation of investment property, singling out the criteria for its identification as set forth by the IFRS requirements, and improving the methodological approaches to fair value measurement have now become of great importance.

LITERATURE REVIEW

Global trends in the investment property markets were studied by a number of leading scientific professionals, who in particular dealt with the impact of risk assessment on real estate value management (Baum & Hartzell, 2012). The purpose of the authors' (Baum & Hartzell, 2012; Ball, Lizieri, & MacGregor, 2012) academic study was to disclose the conceptual framework of immovable property as an investment item and to identify the risks by which real estate valuation is affected, thus improving the quality and fairness of the information required for making effective managerial decisions on potential asset management options. In consideration of the authors' significant contribution to the development of the theoretical basis, a lack of practical guidelines for improving the real estate item valuation and management processes should be noted. The effect of tax authorities and the national taxation system on commercial property and land valuation was studied by T. Boyd and S. Boyd (2012) and by Liapis, Kantianis, and Galanos (2014). The authors used mathematical models, whereby they acknowledge the material effect that national monetary, credit and fiscal policy has on the net current cost of investment in commercial property and in land as an investment.
The process of planning within the system of making managerial decisions on the fields of investment property use was reviewed by Jackson and Watkins (2011) through the example of assets held by British companies. The authors developed a six-level model of managerial decision-making, where the key element is the strategic planning of further asset management, including assessment of the political environment and development of relations with local governments. Such authors as Vakhrushyna and Borodin (2012), Druzhylovska (2014), Ilysheva and Neverova (2010) and Mirzoian (2015) were engaged in studies of the theoretical and practical problems regarding recognition and implementation of investment property valuation methods, and in assessment of the effect that the financial crisis and institutional interrelationships have on the formation of investment property valuation policies. The authors investigated the first-priority problems of investment property valuation through the example of international accounting, in particular that of the Russian Federation. Consequently, a set of recommendations was worked out with regard to improvement of the investment property valuation and accounting procedures, in particular specification of the terms and definitions and measurement of the initial value of the said assets in consideration of their sources of origin, for establishments of whatever industry affiliation and ownership structure. The authors laid an emphasis on the key problems of the applicable Russian investment property valuation and accounting rules and standards:

• measurement of liabilities at the time of their recognition is not considered for valuation;

• no discounting is applied for valuation of investment property taken on lease.

The scope of studies included working out a methodology for measuring the market value of real estate items (in consideration of quantitative adjustment methods implemented, expert appraisals and sales analysis) and proposals for improvement of the methodological approaches to property valuation in terms of commodity-money relations. However, the authors' developments were mainly dedicated to application of the investment property valuation methods according to the national accounting standards, with due consideration of the implementation of the international standards into the Russian accounting system. Related problems were also examined in other national contexts (Chyzhevska, 2011). The author's proposals were focused on the collision of the statutory regulation of investment property valuation, accounting and management, and the necessity of harmonization of the provisions of the national and international accounting and financial reporting standards. Some authors (Shevchenko, 2015) reviewed the organizational and methodological guidelines for investment property valuation based upon a subjective approach (where the investment property value measurement is done by internal specialists (accounting valuation), qualified assessors (independent valuation) or as ordered by the court (expert valuation)) and an objective approach (which is based upon division of items into balance, out-of-balance and off-balance ones).
Bondar and Voinarenko (2009) reviewed the substantiation of options for applying the methodological approaches to investment property item valuation, in consideration of the advantages and disadvantages of each method, based upon the formation of fair and relevant information to be reported as of the balance sheet date, as well as the procedure for asset identification through following a concept of baseline and derived estimates. At the same time, the challenging issues of practical implementation of investment property valuation and accounting systems by Ukrainian companies, in particular those obliged to prepare financial statements according to the requirements of the international standards while simultaneously meeting the imperative provisions of the applicable national laws and regulations, still remain neglected.

Purpose of the study

The purpose of this study is to substantiate the methodological approaches to working out practical guidelines for accounting measurement and management of investment properties against a background of the convergence of the international financial reporting standards.

Study methods used

The following methods were applied for the study of the theoretical basis and methodological approaches to investment property valuation: theoretical generalization and comparative methods (applied for determination of the micro- and macroeconomic role of investment property), computational-analytical and graphical methods (applied for making tables and plotting figures, performing computations and reporting the study results), as well as analysis and synthesis methods (applied to reveal the main weaknesses of disclosure of information on investment properties and their fair value in the financial reports of Ukrainian, Russian and European companies). Special attention was paid in the article to harmonization of the investment property fair value measurement algorithm according to the IFRS and IAS requirements, with due consideration of economic and mathematical methods. An appropriate system of comprehensive indicators and criteria for assessing the efficiency of investment property management was developed through generalization and systematization of the results obtained.

MAIN RESULTS OF THE STUDY

Notwithstanding the geopolitical uncertainty and the slowdown of the global economic cycle, recent analytical studies are indicative of unprecedented growth of real estate investments in 2017: by 18%, i.e. up to USD 1.62 billion (as compared with USD 1.43 billion in 2016). The said indicator has continued to grow in 2018 (USD 1.43 billion by the end of the third quarter). Asian investors have played the determinative role and become a sectoral driver, as funds incoming from that region made up over a half of all capital attracted and 46% of the international investments. Although the USA, China, Great Britain, Germany and Japan still remain among the leading investment-attractive countries, intensification of international investment activities has also been noted for the Ukrainian real estate market. These investments have increased by 54%, up to USD 280 million. At the same time, the global rate of return on investments into domestic property made up 12.25% for office property (mean value for Europe 4.4%), 9.5% for retail property (mean value for Europe 3.25%) and 13.25% for warehousing and logistics property (mean value for Europe 5.9%).
Trends of economically conditioned growth and decline in the rate of return on property investments, in terms of trend line plotting (geometric display of the mean value: y = 0.1068e^(0.0392x); y = 0.0327e^(0.1467x)), in consideration of the approximation validity (R^2) in the international context, are shown in Figure 1 based upon changes in such rates of return in 2017. Cushman and Wakefield's quarterly European Fair Value Index, which analyzes 123 European office, retail and logistics markets, continued its downward trend in Q4 2017 to reach a level last recorded in Q1 2006. This reflects both the advanced stage of the property cycle and the availability of fewer attractive prime (high-quality) opportunities. In Q4 2017, just 19% of the index was classified as 'underpriced'. Logistics remains the most attractive sector, with 39% of the markets classified as 'underpriced', and only two as 'fully priced'. Moscow remains at the top of the underpriced European markets table, ranked first and third for its retail and office sectors, respectively. Budapest (retail market) was second, with Budapest (logistics) and Dublin (logistics) completing the top five. The top five 'fully priced' shortlisted markets include Istanbul, Wien and Oslo (office property) and Milano and Rome (retail trade). Ukrainian cities were not directly considered for the purpose of the study because of their insignificant cross-section in terms of market trend formation. The Ukrainian real estate market is classified by the general fair value index as 'underpriced'. Such a situation in the domestic market is conditioned by a series of destabilizing factors that have an adverse effect on market performance and slow the sound structural transformations required to increase its profitability. Political turbulence against the background of the upcoming presidential and parliamentary elections, battle actions in the East of the country, scheduled repayments of the government debt to international creditors in 2019-2020, as well as a consistently high level of corruption are recognized as material risks for the further activation of real estate market transactions. The real estate market situation is one of the key indicators by which the level of social and economic development of Ukraine is defined, based upon its close relationship with the other real sectors of the economy. The average property share of the Ukrainian GDP in 2017 made up only 2%, with a money multiplier of UAH 6.76 (EUR 0.19). It shall be noted for reference that the average property share of GDP in the key European markets makes up: 9.8% (Germany), 9% (Poland and Austria), 10.9% (Finland), 11.4% (France) and 12.5% (Italy) (see Figure 2). Due to the reduction of investment risks, the relative stabilization of the national currency and economy, and an increase in the number of companies investing in property for the purpose of accumulated capital investment and/or placement of their own operating business, growth of investors' interest in commercial properties was evidenced in 2017. Key property market players are privately held Ukrainian investment companies, national logistics companies, large-scale retailers, as well as local, foreign and international investors. Secondary investment transactions in the Ukrainian property market amounted to USD 137 million in 2017, which is 56% up compared with 2016.
According to expert estimates of transactions in the commercial property market, anticipated investment volumes are in the range of USD 200-350 million. Therefore, commercial property remains the most attractive and profitable investment asset, especially with regard to high-quality items. The specific functioning of the domestic investment property market is significantly influenced by its specific evolutionary development. The process of privatization of state-owned properties, begun in the 1990s, laid the foundation for the formation of the modern investment property market. Reassignment of the rights to and in the state-owned properties triggered the growth of the market. Taking into consideration the rather short period of time during which the investment property market has been functioning autonomously since Ukraine became independent, most properties are characterized by poor quality, non-compliance with modern construction and building standards, high deterioration and obsolescence of the infrastructure, territorial disproportion, an absence of uniform approaches to property valuation and an unavailability of market information whatsoever. Moreover, there is uneven development of certain property segments. The high rate of return on both individual segments and the entire investment property market of Ukraine is accompanied by major risks connected not only with economic and political factors but, first and foremost, with conflicting legislative provisions regulating market rules; a lack of harmonized methodology for accounting, measurement and management of investment property items in line with the international economic environment; and the requirements of existing and potential investors for improvement of the transparency of companies' financial reporting in order to align the asymmetry of information available in the global property markets that grew after the global crisis in 2007-2009. The purpose of global convergence of the accounting systems is to provide transparent accounting and reporting of the actual economic situation by the companies, thus assisting in making effective managerial decisions based upon sound and true information. Considering the modern trends and potential growth of the Ukrainian investment property market, its attractiveness for foreign investors and the global convergence of currently prevailing occupational standard systems against the background of the need for improvement of the national legislative environment, the key issues that arise concern identifying investment property as an individual item, and the recognition, measurement, accounting and strategic management of investment property items. The high rate of return on property investments provides for increasing capital investment volumes and the number of investment entities, as establishments count not only on earning profit from lease but also on an increase in the market value of the investment properties. This is a reason for the interest in segregating investment property within the asset account for the purpose of determining effective alternative options for the management of real estate items. At the same time, some issues regarding the specific identification and recognition of investment property as an accounting item still remain unsolved and subject to debate.
An ambiguity of the identification essentials is one of the key challenges preventing improvement and development of the investment property accounting system, as well as efficient asset use (allocation). Therefore, critical tasks arise, whereby it is assumed to work out scientifically grounded approaches to the interpretation of investment property and to build a hierarchy of its identification criteria in consideration of the recommendations provided in the international standards. The rules of investment property recognition and valuation, as well as its reporting in accounts in terms of the international accounting system, are regulated by the requirements of IAS 40: Investment Property. Since the development and adoption in 2003 of the International Accounting Standard IAS 40, different countries have been implementing appropriate national investment property accounting standards, either directly or through the introduction of specific IAS 40 requirements into their national standards (Table 1). Most of the different investment property accounting standards simply repeat, either in whole or in part, the IAS 40 text with due consideration of the national accounting practice, traditions and institutional factors in the context of the global standardized accounting mode (Fearnley & Gray, 2015). However, there are key weaknesses of IAS 40 that aggravate implementation of the standard, namely the extremely loose adaptation and application of the accounting principles, as well as an insufficient description of certain accounting approaches. This applies especially to countries where the accounting system is currently subject to a process of liberalization, retreating from a command-and-administrative management system and being reformed in line with market relationship requirements. In order to avoid the adverse effect of subjective professional opinions with regard to recognition and valuation of investment properties in the accounting systems of the developing countries, it is necessary to provide details and specify the formalization component of property accounting. According to IAS 40: Investment Property, investment property is property (a land plot or a building, or any part or combination thereof) held (by the owner or by the tenant under a contract of financial lease) with the purpose of earning rental fees or increasing the cost of capital, or both (para. 5). The key identification criteria defined according to IAS 40 include:

• the probability of getting economic benefits in the form of rental fees and/or an increase of own capital;

• the fairness of asset recognition.

At this stage of identification, it is reasonable to define the hierarchical subordination of the aforesaid criteria. Implementation of the accounting principle based upon common monetary measurement requires preliminary measurement of any item's value for the purpose of further generalization of transactions therewith in the financial statements of the company. Therefore, it is reasonable to determine the probability and ways of getting economic benefits from its use only after measurement of its value (para. 16). At the same time, the new IFRS 16: Leases, effective as of January 1, 2019, will supplement the criteria for recognition of investment property transactions as leases or as those containing a lease component, in particular:

• asset identification;

• getting economic benefits;

• the right to resolve on the way in which the asset is to be used.
Identification of the asset is done through its specification in the contract of lease. Moreover, any part of the asset may be identified, should it be possible to determine its physical parameters or 'cross section' as a part of the property item (IFRS 16, Section B20-13). Getting economic benefits does assume the right to obtain substantially all of the economic benefits from use of the identified asset during the entire period of its use (IFRS 16, Section B21-23). The right to resolve on the way in which the asset is to be used does assume the company's (tenant's) right to set forth the ways and purpose of use of the asset during the entire period of its operation; the tenant shall also be entitled to manage and dispose of the asset during the entire period of use; however, no right to amend item operation rules set forth in advance shall be vested in the tenant (IFRS 16, Section B24-27). Therefore, the right of use, and not the right of possession or financial lease, is laid as the foundation for recognition of the property item as an asset (IFRS 16, Section B9). The new IFRS 16 requirements are of great importance for companies making investments into property on a leasehold basis. Such a practice is particularly prevalent in Great Britain and Hong Kong. If a company was previously entitled to resolve independently whether or not to recognize properties obtained on an operating leasehold basis as a part of the investment property, it is now obliged to report them as investment property, provided they comply with the other recognition criteria. Following publication of the draft version of IFRS 16, a number of scientific professionals began to investigate the effect of operating lease capitalization upon the financial performance of companies of whatever industry affiliation existing in the international and local markets. A significant estimate of the authors is that operating lease capitalization will have a moderate effect on companies' financial performance. The line of reasoning for implementation of the new rules of lease is based upon the active use of the off-balance financing model today. Therefore, investors and bond rating agencies have to make allowances for the operating lease liabilities (a mean ratio of 8 shall be applied to the lease costs). According to the study performed by the IASB, such allowances are recognized to be of a rather general character and therefore lead to undervaluation or overvaluation of different companies' debts. However, recognition of the total leases on the balance sheet will improve accuracy and enable simplification of the measurement. According to the IASB data, the total amount of liabilities not recognized under contracts of operating lease makes up today USD 2.2 billion. Adoption of IFRS 16 will have a significant effect upon reported financial indicators. The tenant will have a gain in assets; however, at the same time, its debt liabilities will grow as well; the total costs of lease will be higher in the initial lease period, even if the rental fees are regularly paid. Apart from the increase of EBITDA, implementation of IFRS 16 will also lead to a corresponding increase of the net debt (Table 2). In order to bring the lease-related provisions of IAS 40 and IFRS 16 into conformity in so far as it regards identification and recognition of investment property, the criterion of holding property on operating lease shall be removed. Implementation of IFRS 16 has made it possible to choose the basis for valuation.
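As a purely illustrative sketch of the rule-of-thumb adjustment mentioned above, with invented figures rather than data from the IASB study, the snippet below approximates the lease liability that comes onto the balance sheet under IFRS 16 by multiplying the annual operating lease expense by a factor of about eight.

# Hypothetical tenant figures (million USD); the factor of 8 is the mean ratio noted above.
annual_lease_expense = 1.2
capitalization_factor = 8

estimated_lease_liability = annual_lease_expense * capitalization_factor

# Under IFRS 16 a right-of-use asset and a lease liability of roughly this size are
# recognized, so reported assets and net debt both rise, while the operating lease
# expense is replaced by depreciation and interest, which lifts EBITDA.
print(f"Estimated capitalized lease liability: {estimated_lease_liability:.1f} million USD")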
When making a decision on recognition of investment property held on operating lease as an asset, the company previously had to apply the fair value measurement model to all investment property items. Now the company may independently choose either the fair value or the initial value measurement model, depending on its approved accounting policy. It shall be noted that the fair value measurement method still remains under discussion today. Modern foreign authors pay the utmost attention to issues of the substance of and grounds for application of the fair value measurement method, the genesis of this method's evolution in the accounting systems of different countries, new aspects of fair-value-based accounting as provided for in IFRS 13, as well as critical analysis of the key provisions of the said standard. Among other things, most authors report the need for application of fair value measurement of the company's assets and liabilities in order to provide for the development of proper and fair information on their actual financial health and the resources available. However, it must be said that the reliability of the obtained data is low, which is conditioned by a lack of common, generally accepted approaches to fair value measurement and the measurement method (Nellessen & Zuelch, 2011). Some researchers have traced the evolutionary effect that financial capital has had upon commercial property valuation in the UK by preparing a historiography of investment cost measurement beginning from 1960. In reasoning the need for application of the fair value measurement method, the authors, inter alia, bring forward the argument that company directors and investors are highly concerned about getting fair information on the actual value of the item, which complies with the existing market indicators. The appointment of qualified assessors will help to avoid misrepresentation of information on the actual value of the item (Yamamoto, 2014; Taplin, Yuan, & Brown, 2014), while a conservative accounting system, if still used, will make it possible for companies to have more accurate estimates of future cash flows (Bandyopadhyay, Chen, & Wolfe, 2017). Without diminishing the importance of the authors' contribution to the investigation of the range of valuation-related problems, as well as their effect upon the financial results of companies from all over the world, it shall be said that there is an absence of a common harmonized methodology for investment property valuation and of clear regulations for step-by-step implementation of methods and approaches thereto. The issue of practical implementation of the said method in countries where no active investment property markets exist still remains pending. It should be stressed within the given context that fair value often serves as a tool for speculation. With a view to the convergence of IFRS and GAAP, harmonization of approaches and development of a common algorithm for fair value measurement, the International Accounting Standards Board adopted in 2011 IFRS 13: Fair Value Measurement, where the conceptual basis of the fair value measurement method is given. IFRS 13 states that fair value is a market-based, rather than entity-specific, measurement. The objective of fair value measurement is to estimate the price at which an orderly transaction to sell an asset or to transfer a liability would take place between market participants at the measurement date under current market conditions (i.e.
an exit price that a market participant holding the asset or owing the liability would consider fair at the measurement date). The international standards, whereby the accounting approach to company assets is set forth, define fair value as the amount for which an asset would be exchanged between knowledgeable, willing and independent parties in an arm's length transaction (IAS 2, para. 6; IAS 16, para. 6; IAS 38, para. 8; IAS 40, para. 5; and IAS 41, para. 8). With reference to the guidelines provided in IFRS 13, as well as the specific features of investment property as an accountable item, we hereby propose to apply the following steps for investment property fair value measurement (see Figure 3). Following the study of financial statements of Ukrainian companies, it has been found that no information on investment property and its fair value was disclosed in the Notes to the financial statements as set forth by the requirements of IAS 40 (para. 75, para. 78, and para. 79). Such a situation is typical of most companies in the ex-USSR countries with the corresponding level of accounting system development (Aletkin, Samitova, & Kulikova, 2014). Consequently, the financial statements of the said companies may not be classified as made in compliance with the IFRS requirements. In consideration of the absence of an IFRS-recommended standard form of the notes to financial statements, it is advisable to develop a common form for informational reporting of changes in the investment property value during the accounting period (Table 3). The proposed form will provide for harmonized reporting of the narrative and financial information on the carrying value of the investment property as of the opening and closing day of the accounting period, measured with alternative methods. The obtained information on value, income earned from lease and/or changes in value, as well as operating expenses borne, makes it possible to work out a system of harmonized indicators of investment property management, with key flags set for efficiency assessment of each group of indicators with due consideration of their effect upon reduction of expenses, growth in income and change of the investment property item value. It is advisable to divide the system of harmonized indicators, which is determinative for development of the company strategy for the further management of the investment property items, into four groups:

• financial results (key criteria: growth in income from property use and reduction of the operating costs);

• client portfolio (key criteria: strengthening of the reputational component of the company, including the level of reliability and significance for the national economy, widening the range and quality of additional services, as well as customer support);

• upgrades and innovations (key criteria: improvement of the management accounting instruments, intensification and diversification of investment property use and disposal, prompt implementation of innovative developments);

• training and growth (key criteria: granting access to information sources for the purpose of getting and systematizing required information, working out a performance efficiency motivation system, as well as employee training in modern management approaches) (see Figure 4).

The proposed indicators are not static, but variable, and vary because of either external or internal factors.
The authors propose to assess their efficiency through determination of the indicators' effect on the investment value of the property based upon a formula in which:

• PV_n is the investment value of the property;
• VW_c is the weight of the implemented criteria of investment property management in the profits gained from the investment property use;
• VW_i is the weight of the i-th investment property item in the total value of the investment asset;
• LR_j is the new lease rate after implementation of the j-th criterion;
• NA_j is the space rented for implementation of the j-th criterion of investment property management;
• OR is the occupancy rate of the item during the period under review;
• k is the net profit ratio as a part of the total earnings from lease;
• E_j is the cost of implementation of the j-th criterion of investment property management;
• r is the discounted cash flow rate;
• t is the number of the year covered by the projection period.

The most widely used method for making a grounded conclusion on the expediency of investments under current conditions is the cash flow discounting method, the concept of which is based upon considering changes in the value of the money used through exposure to the factors listed. There are no harmonized methods for discount-rate-based measurement of the investment value of property available today which would satisfy the demands of financial analysts and would not come under criticism. Ukraine is known for rather troublesome application of the discount rate, with a glance to statutory discontinuities, economic and political turbulence, as well as inflation fluctuations. Determination of the projected period is considered to be of critical importance, as it has an effect upon the fairness of the obtained data. Considering the permanent inflation and exchange fluctuations, legislative modifications and changes of the state fiscal policies typical of emerging countries, it is advisable to use a four-year projection period in order to provide for fairness of estimates and to reduce uncertainties at the time of the general risk impact assessment. The ultimate calculation results will enable determination of the effect of each implemented criterion on the formation of the investment value of the property due to growth in income from lease conditioned by a lease rate increase and a change of the vacancy rate of each individual property item, net of the costs of implementation of the entire harmonized indicator system during the projected period.

CONCLUSION

Global business and capital integration provides for stricter requirements as to the quality, completeness, fairness, timeliness and correlation of information sources, thus contributing to the need for rejection of the traditional measurement of assets at their initial value, taking into account their usability impairment, and therefore to the development of methodological approaches to investment property measurement at fair value. Following the modifications in the investment property value measurement principles (the change to the fair value measurement method), the authors have built a hierarchy of the investment property recognition criteria, which makes it possible to identify investment property properly as an accounting item and a civil-law relation matter, as well as to improve the quality and fairness of the information data used for reporting non-current assets in the financial statements.
Analysis of the conceptual approaches to investment property fair value measurement and their due systematization have enabled the authors to develop an appropriate methodology for measuring the fair value of properties, thus providing for estimates of trends in the change of the market value of such properties and of the cash flows from transactions therewith, as well as for building approximations of their curves in the international context with a view to the investment property market stagnation.
7,219.8
2018-12-26T00:00:00.000
[ "Business", "Economics" ]
Antifungal and prophylactic activity of pumpkin (Cucurbita moschata) extract against Aspergillus flavus and aflatoxin B1

Amna

The antifungal effect of pumpkin (Cucurbita moschata) fruit aqueous extract against Aspergillus flavus was investigated in vitro. The results showed that incubation of different concentrations of the pumpkin aqueous extract (0.5 to 2%) with a living mass of the fungus had a suppressive effect on fungal growth after 6 days compared with the untreated control sample. However, the 2% pumpkin aqueous extract was the most effective in inhibiting fungal growth. The protective efficacy of pumpkin fruit aqueous extract against renal damage induced by either A. flavus infection or aflatoxin B1 (AFB1) toxicity in rats was also investigated. A. flavus and AFB1 were administered intraperitoneally (0.1 ml/100 g of body weight) for 15 consecutive days. The results revealed that infection of rats with A. flavus or intoxication with AFB1 significantly induced renal damage, as indicated by markedly increased levels of serum urea, uric acid and creatinine as well as by the histopathological pictures, compared with normal healthy rats. Oral co-administration of the aqueous extract of pumpkin fruits (1.0 mg/kg of body weight) to either rat group infected with A. flavus or intoxicated with AFB1 for 20 consecutive days effectively normalized the serum kidney function biomarkers, which was confirmed by histomorphologic pictures showing normal histological structure.

INTRODUCTION

Fungal infections are mainly caused by opportunistic fungi and are usually associated with immunosuppression (Shoham and Levitz, 2005). Aflatoxins (AFs) are highly toxic secondary fungal metabolites produced by species of Aspergillus, especially Aspergillus flavus and Aspergillus parasiticus. These fungi can grow on a wide variety of foods and feeds under favorable temperature and humidity (Giray et al., 2007). There are four naturally occurring aflatoxins, the most toxic being aflatoxin B1 (AFB1), and three structurally similar compounds, namely aflatoxin B2 (AFB2), aflatoxin G1 (AFG1) and aflatoxin G2 (AFG2). Aflatoxins not only contaminate our foodstuffs, but are also found in edible tissues, milk and eggs after consumption of contaminated feed by farm animals (Fink-Gremmels, 1999; Bennett and Klich, 2003; Aycicek et al., 2005). Aflatoxins are well known to be potent mutagenic, carcinogenic, teratogenic and immunosuppressive agents; they also inhibit several metabolic systems, causing liver, kidney and heart damage (Wogan, 1999; Bintvihok, 2002; Wangikar et al., 2005). These toxins have been incriminated as the cause of high mortality in livestock and some cases of death in human beings (Salunkhe et al., 1987).
The mechanism of the AFB1 toxic effect has been extensively studied. It has been shown that AFB1 is activated by the hepatic cytochrome P450 enzyme system to produce a highly reactive intermediate, AFB1-8,9-epoxide, which subsequently binds to nucleophilic sites in DNA, and the major adduct, 8,9-dihydro-8-(N7-guanyl)-9-hydroxy-AFB1 (AFB1-N7-Gua), is formed (Sharma and Farmer, 2004). The formation of AFB1-DNA adducts is regarded as a critical step in the initiation of AFB1-induced carcinogenesis (Preston and Williams, 2005). Although the mechanism underlying the toxicity of aflatoxins is not fully understood, several reports suggest that toxicity may ensue through the generation of intracellular reactive oxygen species (ROS) such as superoxide anion, hydroxyl radical and hydrogen peroxide (H2O2) during the metabolic processing of AFB1 by cytochrome P450 in the liver (Towner et al., 2003; Naaz et al., 2007; Shi et al., 2012). These species may attack soluble cell compounds as well as membranes, eventually leading to the impairment of cell functioning and cytolysis (Berg et al., 2004).

The use of synthetic drugs as antimicrobials has been greatly effective, but their application has led to a number of ecological and medical problems due to residual toxicity, carcinogenicity, teratogenicity, hormonal imbalance, spermatotoxicity, etc. (Pandey, 2003). Natural products and their active principles as sources for new drug discovery and the treatment of diseases have attracted attention in recent years. Herbs are generally considered safe and have proved to be effective against various human ailments. Their medicinal use has been gradually increasing in developed countries. Thus, natural substances that can prevent AFB1 toxicity would be helpful to human and animal health, at minimal cost, in foods and feed. Traditional medicinal plants were applied by some authors for their antifungal, anti-aflatoxin and antioxidant activity (Joseph et al., 2005; Kumar et al., 2007).

Pumpkin, belonging to the genus Cucurbita and family Cucurbitaceae, frequently refers to any one of the species Cucurbita pepo, Cucurbita mixta, Cucurbita maxima, and Cucurbita moschata (Itis.gov, 2009). Pumpkin has been reported to have medicinal properties. C. moschata was reported to contain antioxidant components, including vitamin A, vitamin E, carotenes, xanthophylls and phenolic compounds, which have a principal role in protecting against oxidative tissue damage (Chanwitheesuk et al., 2005). Tetrasaccharide glyceroglycolipids were obtained from Cucurbita moschata and showed significant glucose-lowering effects in streptozotocin- and high-fat-diet-induced diabetes in mice (Jiang and Du, 2011). A water-soluble extract, named PG105, prepared from stem parts of C. moschata, shows potent anti-obesity activity in a high-fat-diet-induced obesity mouse model (Choi et al., 2007). The plant is also used as a laxative and in the treatment of headaches, heart disease, high blood cholesterol and high blood pressure (AL-Sayed, 2007). The aim of this study was to evaluate the antimicrobial impact of C. moschata fruit aqueous extract against toxigenic A. flavus and also to investigate the in vivo protective effect of the extract against A. flavus infection and AFB1 toxicity-induced renal dysfunction and histopathological structural changes in rats.

Preparation of aflatoxin B1 standard (AFB1)

AFB1 was obtained from Sigma-Aldrich (St.
Louis, MO, USA). A concentration of 2 mg/ml of AFB1 was prepared using dimethyl sulphoxide. AFB1 was administered to experimental animals intraperitoneally using a dosage of 0.2 mg/kg of body weight (Ha et al., 1999). Organism under study The A. flavus toxigenic strain was obtained from the MERCIN unit, College of Agriculture, Ein-Shams University, Cairo, Egypt. Preparation of Aspergillus flavus cultures Fifty ml of Sabouraud dextrose agar were added to sterile Petri dishes and 0.1 ml of spore suspension was inoculated. The inoculated media were incubated at 25 ± 2°C for 6 days. The effect of different concentrations of pumpkin fruit aqueous extract on Aspergillus flavus biomass The effect of pumpkin fruit aqueous extract on the mycelial growth of the experimental fungus was studied in liquid Sabouraud dextrose medium. Exactly 1 ml of each of the different concentrations (0.5, 1.0, 1.5 and 2.0%) of pumpkin fruit aqueous extract was added to flasks containing 50 ml of the liquid medium. Control flasks with no added pumpkin fruit aqueous extract were also prepared. All prepared media were sterilized by bacterial filtration and then inoculated with 5 mm diameter disks of A. flavus. Cultures were filtered after incubation at 25 ± 2°C for 3 and 6 days. Plant specimens Pumpkin (Cucurbita moschata) fruits were purchased from the local market in Jeddah, Saudi Arabia during summer 2011. Preparation of plant aqueous extract The fruit extract was prepared according to the method of Xia and Wang (2006b). One hundred grams of dried fruits were mixed with 1000 ml of distilled water and the mixture was boiled at 100°C under reflux for 30 min. The decoction obtained was centrifuged, filtered, frozen at -20°C, and then lyophilized. The lyophilized plant extract obtained was either dissolved in water before oral administration to rats or sterilized by bacterial filtration for the antifungal studies. Animals and treatments The animal experiment was performed in accordance with the legal ethical guidelines of the Medical Ethical Committee of King Abdelaziz University, Jeddah, KSA. Sixty healthy female albino rats (150 to …) were used. AFB1 was dissolved in dimethyl sulphoxide and administered intraperitoneally (0.1 mg/100 g body weight) to the rats of the AFB1-intoxicated groups. A. flavus spore suspension was injected intraperitoneally (0.2 mg/kg) into the A. flavus-infected rat groups. AFB1 and A. flavus were administered to rats for 15 consecutive days. Pumpkin fruit aqueous extract was administered orally (1.0 mg/kg of body weight) to the rat groups treated with AFB1 or infected with A. flavus for 20 consecutive days. After the experimental period (20 days), blood samples were collected from each animal in all groups into sterilized tubes for serum separation. Serum was separated by centrifugation at 3000 rpm for 10 min and used for biochemical serum analysis of kidney function. After blood collection, rats of each group were sacrificed under ether anesthesia and kidney samples were collected for histopathological examination. Histopathological examination Small pieces of kidney were fixed in 10% formalin, embedded in paraffin, sectioned at 5 to 6 μm thickness, and mounted on glass microscope slides using standard histopathological techniques. The sections were stained with hematoxylin-eosin and examined by light microscopy. Statistical analysis Data were analyzed by comparing values for the different treatment groups with the values for individual controls. Results are expressed as mean ± S.D.
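As an illustration only (not the authors' analysis script), the group-versus-control comparison described in this section could be expressed along the following lines; the group labels and values are hypothetical placeholders, and the procedure follows the one-way ANOVA with Bonferroni correction specified below.

```python
# Hedged sketch: one-way ANOVA with Bonferroni-corrected pairwise comparisons,
# mirroring the analysis described in the Statistical analysis section.
# Group names and values are hypothetical placeholders, not the study data.
from itertools import combinations
from scipy import stats

groups = {
    "G1_control":  [32.1, 30.5, 31.8, 29.9, 33.0],   # e.g. serum urea values
    "G3_A_flavus": [58.4, 61.2, 57.9, 60.3, 59.1],
    "G5_AFB1":     [66.7, 64.9, 68.2, 65.5, 67.0],
}

# Overall one-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Bonferroni-corrected pairwise t-tests (significance threshold P < 0.05)
pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_corr = min(p * len(pairs), 1.0)  # Bonferroni correction
    print(f"{a} vs {b}: corrected p = {p_corr:.4g}, "
          f"significant = {p_corr < 0.05}")
```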
The significant differences among values were analyzed using analysis of variance (one-way ANOVA) followed by a Bonferroni post-ANOVA test. Results were considered significant at P < 0.05 (Hill, 1971). RESULTS The effect of different concentrations of pumpkin fruit aqueous extract on the vitality of the biomass of A. flavus is illustrated in Table 1. The result showed that incubation of different concentrations of pumpkin aqueous extract (0.5 to 2%) with the living mass of the fungus had a suppressive effect on fungal growth after 6 days compared with the untreated control sample. The 2% plant extract was the most effective in inhibiting fungal growth. The protective role of pumpkin fruit aqueous extract on serum kidney function biomarkers against A. flavus infection and AFB1 toxicity in rats is depicted in Table 2. The result showed that infection of rats with A. flavus (G3) or treatment with AFB1 (G5) for 15 consecutive days dramatically caused renal tissue damage, as indicated by marked increases in serum urea, uric acid and creatinine compared with normal rats (P ≤ 0.001). Oral co-administration of pumpkin fruit aqueous extract to these groups effectively protected the kidney from the damaging effect of either A. flavus (G4) or AFB1 (G6), as documented by the normalization of the studied kidney function biomarkers. Histopathological observation supported the biochemical result. The kidney section of a rat infected with A. flavus indicated its deleterious impact on the rat kidney, as observed by the damaged architecture of the proximal tubular epithelia, such as cell swelling and lysis, cytoplasm vacuolation, nuclear membrane breakdown, cell shrinkage, nuclear condensation, and necrosis of some glomeruli and tubules (Figure 1c). Co-administration of pumpkin fruit aqueous extract to rats infected with A. flavus showed a more or less normal histological structure of the kidney (Figure 1d). Injection of rats with AFB1 caused severe damage to the histomorphological picture of rat kidneys (Figure 1e and f), as observed by marked dilatation and congestion of renal blood vessels (Figure 1e), degeneration of most renal tubules, swelling and lysis of cells, cytoplasm vacuolation and nuclear pyknosis. Co-administration of pumpkin fruit aqueous extract to AFB1-intoxicated rats (Figure 1g) showed a more or less normal histological structure of the kidney. Administration of pumpkin fruit aqueous extract to normal rats showed no side effects on rat kidneys, as indicated by normal kidney function and supported by the histomorphologic picture of rat kidneys, which showed normal histological structure. DISCUSSION Plants have been used in medicine for a long period of time, since they are easy to obtain and are used in the treatment of various diseases (Sardi et al., 2011). Pumpkin contains phosphorus, iron, zinc, vitamin E and the vitamin B complex, and is very poor in sodium (Shaaban, 2005). In the search for new antifungals, an ideal agent must have a broad spectrum of fungicidal activity without causing toxicity to the host (Carrillo-Muñoz et al., 2006). The treatment of fungal infections is not always effective because of drug resistance, in addition to the high toxicity of some drugs for human cells. For this reason, there is a continuing search for new drugs which are more potent antifungals, but safer, than existing drugs (Fenner et al., 2006). The antifungal activity of aqueous extract of pumpkin fruits against A.
flavus was studied. The result revealed that different concentrations of pumpkin aqueous extract (0.5 to 2%) showed an inhibitory effect on the growth of the fungal living mass after 6 days compared with the untreated control. The 2% plant extract was the most effective in inhibiting fungal growth. This is consistent with the results of Saddiq (2010), who reported a high ability of pumpkin fruit and seed alcohol extracts to inhibit the growth of the pathogenic bacteria Staphylococcus aureus and Escherichia coli as well as the toxigenic fungus A. flavus. This result is also supported by a previous study by Wang and Ng (2003), who isolated an antifungal peptide, named cucurmoschin, from C. moschata seeds, which has abundant arginine, glutamate and glycine residues. Cucurmoschin inhibited mycelial growth in the fungi Botrytis cinerea, Fusarium oxysporum and Mycosphaerella oxysporum. The authors also stated that cucurmoschin showed a translation-inhibitory activity (IC50 = 1.2 µM) which was more effective than that of some antifungal proteins (Lam et al., 2000; Wang and Ng, 2000, 2002) and of the antifungal peptides from red bean, pinto bean (Ye and Ng, 2001) and chickpea (Ye et al., 2002). Ribosome-inactivating proteins inhibit translation in rabbit reticulocyte lysate with a much higher potency (IC50 in pM concentrations) (Barbieri et al., 1993) and they also inactivate fungal ribosomes (Roberts and Selitrennikoff, 1986). The protective role of aqueous extract of pumpkin fruits on serum kidney function biomarkers against A. flavus infection and AFB1 toxicity in rats was investigated. The result showed that either infection of rats with A. flavus or intoxication with AFB1 dramatically induced nephrotoxicity in rats, as demonstrated by the significantly increased levels of serum urea, uric acid and creatinine. The alteration in these kidney function biomarkers was more evident in rats intoxicated with AFB1. The reno-toxic effect of A. flavus or AFB1 was further confirmed by the severely damaged renal tissue, as shown in the histological analysis. The histomorphological picture of rat kidneys infected with A. flavus showed damage to the renal proximal tubular epithelia, such as cell swelling and lysis, cytoplasm vacuolation, nuclear membrane breakdown, cell shrinkage, nuclear condensation, and necrosis of some glomeruli and tubules. Injection of rats with AFB1 caused severe damage to the histomorphological picture of the rat kidney, indicated by vacuolar degeneration and necrosis of the renal tubular cells. Our results are similar to those of other studies that indicated impairment of kidney functions and abnormal pathological changes, with a severe inflammatory response of the kidney, in animals intoxicated with AFB1 (Valdivia et al., 2001; Salim et al., 2011). Although the mechanism underlying the toxicity of aflatoxins is not fully understood, several reports suggest that toxicity may ensue through the generation of intracellular reactive oxygen species (ROS) like superoxide anion, hydroxyl radical and hydrogen peroxide (H2O2) during the metabolic processing of AFB1 by cytochrome P450 in the liver (Towner et al., 2003). These species may attack soluble cell compounds as well as membranes, eventually leading to the impairment of cell functioning and cytolysis (Berg et al., 2004). Souza et al. (1999) reported that oxidative stress is the principal manifestation of AFB1-induced toxicity, which could be mitigated by antioxidants. Administration of aqueous extract of pumpkin fruits to rats infected with A.
flavus or intoxicated with AFB1 markedly protected the kidney from this deleterious impact on kidney tissue, as indicated by the normalization of kidney function biomarkers in rats administered A. flavus or AFB1 simultaneously with the plant extract. The beneficial effect of this extract on kidney function biomarkers was supported by the histomorphologic picture of the kidney, which showed a more or less normal histological structure. The ameliorative effect of the plant extract on kidney dysfunction and its histopathological picture induced by fungal infection or AFB1 may be attributed to its antimicrobial effect or its potential antioxidant action. Pumpkin was reported to have antioxidant components including vitamin A, vitamin E, carotenes, xanthophylls and phenolic compounds, which have the principal role in protecting against oxidative tissue damage (Chanwitheesuk et al., 2005; Saddiq, 2010). Evaluation of the adverse effects of natural products accepted as remedies is important in implementing safety measures for public health. The present work showed that the administered dose of C. moschata fruit aqueous extract (1.0 mg/kg of body weight) given to normal healthy rats had no adverse effects, as indicated by the normal levels of the tested serum renal function biomarkers in comparison with normal untreated rats. This result was confirmed by a normal renal histomorphological picture. In conclusion, the aqueous extract of pumpkin fruits has a protective role against A. flavus- or AFB1-induced renal damage, which may be related to the antioxidant constituents of the plant extract. Figure 1. Sections of rat kidneys of different experimental groups, (a) kidney of normal healthy rats showing normal histological structure of renal parenchyma, (b) kidney section of a rat ingested with pumpkin fruit aqueous extract showing normal histological structure of renal parenchyma, (c) kidney section of a rat infected with A. flavus showing various degrees of damage to the architecture of proximal tubular epithelia, such as cell swelling and lysis, cytoplasm vacuolation (arrow head), nuclear membrane breakdown, cell shrinkage, nuclear condensation, and necrosis of some glomeruli and tubules (thin arrows), (d) section of rat kidney infected with A.
flavus and co-administered with pumpkin fruit aqueous extract showing more or less normal histological structure of kidney, (e and f) sections of rat kidney intoxicated with AFB1 showing severe damage to the renal architecture such as marked dilatation and congestion of renal blood vessel (e, arrow), degeneration of most renal tubules (arrow heads), swelling and lysis of cells, cytoplasm vacuolation (stars) and nuclear pyknosis (f, arrows), (g) section of rat kidney intoxicated with AFB1 and co-administered with pumpkin fruit aqueous extract showing more or less normal histological structure of kidney. Table 1 . Effect of various concentrations of pumpkin fruit aqueous extract on bio mass of Aspergillus flavus grown on liquid media after 6 days (mg ± SE). light/dark cycle.The animals were provided with commercial rat pellet diet and deionized water ad libitum.After one week acclimation, the rats were randomly divided into 6 groups (each of 10 rats) as follow: G1: Normal healthy rats, G2: Normal rats ingested with pumpkin fruit aqueous extract, G3: Rats infected with A. flavus, G4: Rats infected with A. flavus and co-administered with pumpkin fruit aqueous extract, G5: Rats intoxicated with AFB1, G6: Rats intoxicated with AFB1 and co-administered with pumpkin fruit aqueous extract. Table 2 . Levels of serum kidney function biomarkers (mean ± SE) in different experimental animals. b Different letters in the same column are significantly different P ≤ 0.001.
4,328.8
2012-10-27T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Relaxation and edge reconstruction in integer quantum Hall systems The interplay between the confinement potential and electron-electron interactions causes reconstructions of Quantum Hall edges. We study the consequences of this edge reconstruction for the relaxation of hot electrons injected into integer quantum Hall edge states. In translationally invariant edges, the relaxation of hot electrons is governed by three-body collisions which are sensitive to the electron dispersion and thus to reconstruction effects. We show that the relaxation rates are significantly altered in different reconstruction scenarios. I. INTRODUCTION The kinetic properties of one-dimensional (1D) quantum systems are an active area of current research. 1,2 What makes the field exciting is that many-particle physics is drastically different in one spatial dimension. This difference is already evidenced in basic nonequilibrium properties such as the microscopic mechanisms of relaxation. Within the scope of Fermi-liquid theory, relaxation processes in higher dimensions proceed by pair collisions of electrons which provide an efficient mechanism for relaxation of initial nonequilibrium states. In contrast, conservation of energy and momentum strongly restricts scattering in one spatial dimension so that pair collisions necessarily result in zeromomentum exchange or an interchange of the momenta of the colliding particles. Neither process causes relaxation. This poses the fundamental question of the microscopic origin of relaxation in 1D systems. Notably, the absence of relaxation by pair collisions, which holds regardless of the strength of interaction, has received experimental support. 3 The question of equilibration emerges in a diverse set of 1D many-body systems. These include energy and momentum-resolved tunneling experiments with nanoscale quantum wires, 3,4 quench dynamics of cold atomic gases, 5,6 as well as energy-spectroscopy experiments on quantum Hall edge states driven out of equilibrium. [7][8][9][10] The present paper is motivated by the latter experiments which are carried out in a highmobility two-dimensional electron system at Landau level filling factor ν = 2. This system hosts two copropagating edge states which can be driven out of equilibrium by inter-edge tunneling in the vicinity of quantum point contacts. This generates a nonequilibrium distribution of electron energy (in the sense of electronic edge transport) downstream from the contact which is monitored as a function of the propagation distance by means of a quantum-dot-based energy spectrometer. The experiments show that the initial nonequilibrium distribution relaxes to a stationary form which is close to the ther-mal distribution but with an effective temperature and chemical potential. Edge-state equilibration was also probed in experiments at Landau-level filling factor ν = 1. 11 Heat is carried unidirectionally by the single chiral edge mode as confirmed by thermopower measurements along the edge. These experiments found that hot electrons injected locally into the edge cool down while propagating along the edge. It is worth emphasizing that the standard chiral-Luttinger-liquid model for quantum Hall edge states 12 does not account for equilibration effects. Indeed, this model is exactly solvable and as usual, its integrability is an obstacle to thermalization. In early works 13 this apparent difficulty was overcome by assuming a disordered edge where impurity mediated scattering allows for interchannel equilibration. 
These experimental discoveries led to a flurry of theoretical activity. We briefly summarize these contributions and place our work into their context. Two initial publications 14,15 used entirely different concepts. Ref. 14 was based on a Boltzmann kinetic equation for a disordered edge. Since translation invariance is broken, momentum is no longer a good quantum number and relaxation becomes possible even by two-particle collisions. Ref. 15 adopted a bosonization approach and combined it with a phenomenological model for the plasmon distribution generated at the quantum point contact. Within this model, thermalization was interpreted as a consequence of plasmon dispersion which causes the electron wave packets to broaden as they propagate with different group velocities. This picture was elegantly elaborated and extended in Refs. [16][17][18]. A third mechanism was proposed in the context of electronic Mach-Zehnder interferometers 19,20 based on electron-plasmon scattering. 21 This mechanism relies on scattering of high-energy electrons by low-energy plasmons enabled by the curvature of the fermionic spectrum. Despite the insight provided by these theories, important issues need to be sorted out. First, these works do not give a definitive answer whether relaxation is possible in translationally-invariant clean edges. Specifically, the dispersion of plasmon modes may lead to a steady state but does not constitute true relaxation as the energy in each plasmon mode is conserved. Second, the edge of quantum Hall systems can be reconstructed due to Coulomb interactions. The precise nature of reconstruction depends on the steepness of the confinement potential, ranging from no reconstruction for very sharp confinement potentials 22 to alternating compressible and incompressible stripes for very smooth edges. 23 Indeed, experiments 24,25 point towards an important role of reconstruction effects in energy transfer along the edge. The purpose of the present study is to address these issues within minimal models of unreconstructed and reconstructed edges. Specifically, we consider energy relaxation of a hot particle injected into translationally invariant quantum Hall edges at Landau level filling factors ν = 1, 2. With the assumption that the velocity v 1 of the injected particle differs sufficiently from the Fermi velocity v F , we treat the Coulomb interaction perturbatively. 26,27 In this limit, relaxation processes are dominated by three-body collisions which depend sensitively on the electron dispersion and hence on the edge reconstruction. We begin with a discussion of energy relaxation for the unreconstructed edge in Sec. II. We then discuss two simple models of reconstructed quantum Hall edges. In Sec. III, we discuss relaxation processes for a spin-reconstructed edge for filling factor ν = 2. In Sec. IV, we turn to a minimal model of charge reconstruction of a ν = 1 edge which provides the simplest realization of counter-propagating edge modes. We conclude in Sec. V. II. UNRECONSTRUCTED EDGE A confinement potential V c (x) that is sharp on the scale of the Coulomb interaction (i.e., V c e 2 /(κl 2 B ), where κ is the dielectric constant and l B denotes the magnetic length) remains stable against interactioninduced reconstructions and the electron dispersion ε(k) can be obtained approximately from the noninteracting Schrödinger equation. 22 A generic electronic dispersion of an unreconstructed edge is sketched in Fig. 
1a, exhibiting a confinement-induced bending of the Landau levels near the edge of the sample. In the limit of high magnetic fields (V c ω c /l B ), the electron states near the edge can be described by the lowest Landau level wave functions in the Landau gauge. Here, k = X/l 2 B and L denotes the length of the sample edge (taken along the y direction). The defining feature of the unreconstructed edge is the sharp zero-temperature occupation function ν σ (X) = Θ(−X) of Landau level states with guiding center X when the Zeeman splitting ε Z is negligible [see Fig. 1b]. The single particle dispersion near the Fermi energy (corresponding to momentum k F ) is controlled by the confinement potential and can be approximated as (2) The dispersion is parametrized through the edge velocity v F = V c l 2 B at the Fermi energy and the curvature 1/m c = V c l 4 B . Note that these parameters become maximal for an infinitely sharp edge for which v F ∼ ω c l B and 1/m c ∼ 1/m. 22 Note however, that a description in terms of the wave functions in Eq. (1) is no longer valid in this extreme limit. The finite curvature of the dispersion implies that at least three particles are required for an energy and momentum conserving relaxation process. Relaxation of a high-energy electron (labeled by i = 1 in Fig. 1a) is possible by scattering two electrons (labeled i = 2, 3 in Fig. 1a) near the Fermi energy. Indeed, due to the curvature of the dispersion near the Fermi energy, exciting electron i = 2 from the Fermi energy requires more energy than scattering electron i = 3 deeper into the Fermi sea. Clearly, this relaxation process relies on finite temperature and typical energy transfers for electrons i = 2, 3 at the Fermi energy are of the order of T . Quantitatively, this process can relax the hot particle with excess energy is the momentum transferred to particle i in the collision. Note that q 1 q 3 so that relaxation occurs in many small steps v F q 1 ∼ T 2 /ε. For Landau-level filling factor ν = 2, these considerations apply when the Zeeman splitting is small compared to temperature. In the opposite limit ε Z T , the curvature of the dispersion implies that the Fermi momenta and hence the Fermi velocities differ for the two spin directions. In this case, relaxation is dominated by processes in which the electrons i = 1 and i = 2 have opposite spins, and thus different Fermi momenta k F j and Fermi velocities v j with j = 1, 2. To include a finite Zeeman splitting at Landau level filling factor ν = 2 as well as for later convenience, it is thus beneficial to consider a modified dispersion which is linearized in the vicinity of each of the three particles, including the hot particle with velocity v 1 and momentum k 1 . This captures the behavior in the regime of strong Zeeman splitting ε Z T on which we will focus in the following. Nevertheless, we can also recover the results for the quadratic dispersion and weak Zeeman splitting ε Z T by identifying v 2 − v 3 with the typical velocity difference T /(v F m c ) due to the curvature of the dispersion. Using the dispersion in Eq. (3), energy and momentum conservation leads to is controlled by the Zeeman splitting which we assume to be small compared to the excitation energy ε such that A. Three-body scattering formalism Energy relaxation by processes of the kind shown in Fig. 1a was already discussed in the context of quantum wires in Ref. 26. 
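For orientation, a minimal kinematic sketch (an illustration based only on linearizing the dispersion around each particle's momentum, not a reproduction of the paper's numbered equations): writing ε_i(k_i + q_i) ≈ ε_i(k_i) + v_i q_i for the three colliding electrons, conservation of momentum and energy constrains the momentum transfers as

```latex
q_1 + q_2 + q_3 = 0, \qquad
v_1 q_1 + v_2 q_2 + v_3 q_3 = 0
\quad\Longrightarrow\quad
q_1 = \frac{v_2 - v_3}{v_1 - v_2}\, q_3 .
```

Since the injected particle is fast (v_1 − v_2 large) while v_2 − v_3 is set by temperature or Zeeman splitting, this immediately gives q_1 ≪ q_3, consistent with the relaxation proceeding in many small steps of order v_F q_1 ∼ T²/ε noted above.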
While our calculation here follows the same outline, there are characteristic differences related to the nature of the interaction matrix elements. The energy relaxation rate via three-body collisions is again given by (5) where n i is the Fermi-Dirac distribution function at k i . The factor involving q 1 weights the out-scattering rate with the relative relaxed energy, accounting for the fact that the hot particle relaxes only a fraction of its energy in a single collision. The three-body matrix element can be evaluated by the generalized golden rule Here, G 0 is the free Green's function, is the generic two-body interaction potential, and the subscript c emphasizes that only connected processes contribute which involve all three particles. The calculation for quantum Hall edges differs from that for quantum wires in the form of the Coulomb matrix element V q (k 1 −k 2 ) which now has to be evaluated using the Landau level wave functions in Eq. (1). For quantum Hall systems, the Coulomb matrix element is exponentially suppressed by a factor of exp(−q 2 l 2 B /2) for large momentum transfers. This is especially relevant because large momentum transfers yield the leading contribution to relaxation in quantum wires. 26 Moreover, V q (k 1 −k 2 ) does not only depend on the momentum transfer but also on the initial momentum difference which controls the distance between the guiding centers of the interacting electrons. Focusing on the remaining low momentum transfer processes (q 1/l B ), one obtains (see Appendix-A for details) with the understanding that at small q, the matrix elements will be eventually cut off by a large length scale λ l B which is given by the distance to a screening gate. For k 1 − k 2 1/l B the Coulomb matrix element is that of a quantum wire of width l B . For k 1 − k 2 1/l B , the interaction is that of electrons in two quantum wires separated by a distance of (k 1 − k 2 + q)l 2 B which equals the average of the guiding center distances of the electrons before and after the collision. With the absence of large momentum transfer processes the three-body scattering is dominated by the direct matrix element. The importance of the (k 1 − k 2 ) dependence of the Coulomb matrix element can be seen from the fact that the linearized dispersion of Eq. (3) leads to a vanishing direct matrix element for a quantumwire-like Coulomb interaction V q (0) (see Appendix B). In contrast, when reiterating the derivation 28 of the direct matrix element T 123 1 2 3 including the dependence of V q (k 1 − k 2 ) on the initial momenta, the result does not vanish and takes the form Here we used k 1 − k 2 ≈ k 1 − k F 2 = ∆k. This expression is applicable under the assumption that all initial momentum-differences are large compared to 1/l B to also suit the reconstruction effects that will be discussed later. For the unreconstructed edge, it is however more reasonable to assume k 2 − k 3 1/l B (for typical ε Z , T e 2 /κl B ) in which case the last term of Eq. (8), involving V q1 , does not show up (see Appendix B). B. Results for the unreconstructed edge For the unreconstructed edge, the momentum and velocity differences are linked by the curvature of the confinement potential via v 2 − v 3 = (k 2 − k 3 )/m c and v 1 − v 2 = ∆k/m c . The direct matrix element then takes the form Since for large Zeeman energy the particles at k 2 and k 3 have opposite spins, there is no exchange contribution (remember that exchange is appreciable for small momentum transfers only) and Eq. 
(9) fully determines the three-body matrix element. The corresponding energy relaxation rate can then be obtained by power counting which yields Here and we also used ∆k = m c (v 1 − v 2 ) = ε/v 1 . In obtaining Eq. (10), a factor of L/(v 1 − v 2 ) emerges from eliminating the energy δ-function in Eq. (5), each summation over the remaining k 2 , k 3 , q 3 contributes a phase space factor of ∼ T /v 2 and the weighting factor v 1 q 1 /ε takes the form Finally we have to account for the competition between excitation (q 1 > 0) and relaxation (q 1 < 0) of the hot particle. The latter is slightly favored because the momentum transfer working against the Fermi distribution is reduced by a fraction Equation (10), valid at ε m c v 1 e 2 /κ and ε Z T implies that the relaxation rate is strongly temperature dependent and can be enhanced by increasing the magnetic field. As mentioned above, the relaxation rate in the opposite limit of weak Zeeman splitting ε Z T can be obtained up to prefactors by replacing (v 2 −v 3 ) ∼ T /(v 2 m c ). Note that this regime allows for a low momentum transfer exchange term T 123 1 3 2 because the particles 2 and 3 are no longer necessarily of opposite spin. T 123 1 3 2 can then be obtained from Eq. (9) by replacing q 3 → k 2 − k 3 , which does not change the power counting argument. It is therefore possible to combine both cases by setting v 2 − v 3 = max{ε Z , T }/(v 2 m c ). In the case v 1 ≈ v 2 = V c l 2 B it is then possible to rewrite Eq. (10) as which applies in the regime V c l 2 B ε(V c l 2 B /V c ) e 2 /κ. For the later comparison of the relaxation rates before and after edge reconstruction it will be useful to consider the unreconstructed case as the v 1 v 2 ∼ e 2 /κ limit of Eq. (10) [which can be applied for ε (e 2 /κl B ) 2 /(V c l 2 B )]. Formally, this regime leaves the condition of applicability for the Taylor expansion of the confinement potential that defines 1/m c = V c (µ)l 4 B and would lead to another inverse mass V c (µ+ε)l 4 B for curvature effects at energies of the order of ε. Distinguishing these different masses does however not lead to qualitative changes of the results and for brevity of the presentation we assume a quadratic confinement potential where we used that in this regime m c v 2 1 ∼ . The crossover between Eqs. (11) and (12) can be obtained at their limits of applicability by setting ε = e 2 V c /(κl 2 B V c ) and V c = e 2 /(κl 2 B ). Note that for a spin polarized edge, Eqs. (10)- (12) only apply if the Coulomb interaction is not screened for momenta of the order of T /v 2 . For a screened short range interaction (T /v 2 1/λ), the Pauli principle then leads to a suppression of the energy relaxation rate by an additional factor of (T λ/v 2 ) 4 1. 27 III. SPIN RECONSTRUCTION Edge reconstruction in quantum Hall systems results from the competition between the Coulomb interaction and the confinement potential. Spin reconstruction at ν = 2 takes place when the confinement potential V c varies sufficiently slowly so that V c < e 2 /κl 2 B and can be understood at the level of the Hartree-Fock approximation. [29][30][31][32] Once the slope of the confinement potential becomes weaker than that of the repulsive Hartree potential V H , it is favorable to deposit charges outside the edge. This can be done without paying exchange energy by a relative shift of the Fermi momenta of spin up and spin down particles, as depicted in Fig. 2. 
In the absence of a Zeeman splitting, ε Z = 0, this is a second order phase transition with spontaneous breaking of the spin symmetry. Then, the distance of the two Fermi momenta varies as k F 2 −k F 3 ∝ (|V H |−V c ) 1/2 , eventually saturating at ∼ 1/l B . 29 For finite Zeeman splitting ε Z , the spin symmetry is lifted by the Zeeman field and the transition is smeared on the scale of k F 2 − k F 3 ∼ ε Z /v 2 . Spin reconstruction leads to characteristic changes in the single particle dispersion that develops an "eye structure"[cf. Fig. 2a]. Important for the relaxation dynamics is the increase of v 2 − v 3 = (k F 2 − k F 3 )/m c , which enhances the typical energy transferred per step of relaxation [cf. Eq. (4)]. For truly long range interactions, the Hartree-Fock approximation predicts a logarithmic singularity ∼ e 2 /κ ln(|k − k F |l B ) of the particle velocity at the Fermi energy, which is however cut off in the presence of screening, say by a nearby gate electrode. The Fermi velocity is thus still of the order of v 2 , v 3 ∼ e 2 /κ for typical choices of the screening length. Even with spin reconstruction, the relaxation of hot particles can be described within the model dispersion of Eq. (3). We consider the case where the hot particle (not shown in Fig. 2) is injected well outside the energy window e 2 /(κl B ) of the reconstructed region. This is compatible with the condition for the validity of a perturbative expansion, which reduces to v 1 v 2 for the case of the Fermi velocity determined by the interaction. The energy relaxation rate 1/τ (s) E can now be derived in the same way as for the unreconstructed edge and consequently, Eq. (10) also applies to spin reconstructed edges. The crucial difference is that the velocity difference v 2 −v 3 is now strongly enhanced by the spin reconstruction, taking values up to v 2 −v 3 ∼ 1/(m c l B ). Comparing the rates before [v 2 − v 3 ∼ max{ε Z , T }/(m c v 2 )] and well after spin reconstruction, we find an enhancement of the relaxation rates given by (13) IV. CHARGE RECONSTRUCTION For confinement potentials that vary even more smoothly, changing by e 2 /κl B over a region w > l B , charge reconstruction may occur such that part of the electrons at the edge are pushed away from the bulk by a length of the order of l B . 31,32 It leads to a nonmonotonic behavior of the dispersion with momentum and the creation of two additional counter-propagating 33 edge modes, as depicted in Fig. 3. A minimal model for charge reconstruction considers filling factor ν = 1 within the Hartree-Fock approximation. 31 It is convenient to formally model the confinement potential by a positive background charge which is distributed spatially as if it was occupying lowest Landau level wave functions ψ X with occupation numbers ν c (X) = Θ(−X). The advantage of this model is that such a confinement potential exactly cancels the Hartree potential of the electrons for an unreconstructed edge. In this case, the electron occupation of the unreconstructed edge is stabilized by the (attractive) exchange potential. The reconstruction transition can then be modeled by changing the abrupt drop of ν c (X) into a linear decrease over a length w. For the unreconstructed electron occupations, this leads to negative (at X < 0) and positive (at X > 0) excess charges, causing a dipole field that favors separating electrons from the bulk. Once this dipole field overcomes the exchange potential, a charge reconstruction transition takes place. 
Within the Hartree-Fock approximation, this happens for w ∼ 8l B . Due to the particle-hole symmetric choice of the confinement potential around X = 0, the width and the distance of the additional stripe from the bulk electron droplet both take the same value b. Moreover, the transition is of first order in the sense that b changes abruptly at the transition from zero to a value of the order of l B . Note that the same mechanism induces new (weaker) effective dipole fields at each of the three Fermi points as the edge becomes yet smoother. Thus, increasing w even further causes additional stripes to appear, eventually approaching the limit of a compressible edge which is expected for w l B . 23 In the following we will focus on w > ∼ l B , remaining well outside the compressible limit. Energy relaxation in the charge reconstructed case can also be captured by the dispersion (3) when setting v 3 < 0 and choosing the particle i = 2 to lie in one of the co-propagating branches. 34 The three Fermi velocities of the charge reconstructed edge are essentially determined by the variation of the exchange potential, which is short ranged such that b > ∼ l B already approximates the bulk edge (b → ∞) behavior. Consequently, the magnitudes of the Fermi velocities are equal to that of the unreconstructed edge and ∼ e 2 /κ. In line with the discussions above, we consider the relaxation of a hot particle injected well outside the reconstructed region with v 1 e 2 /κ. The nonmonotonic behavior of the dispersion introduces a new relaxation process which relaxes the hot particle by exciting two counter-propagating electron-hole pairs [see Fig. 3]. This eliminates the restriction that the energy transfers at the Fermi energy cannot exceed the temperature and makes the relaxation process similar to that for non-chiral quantum wires. 26 Unlike for quantum wires, however, the momentum transfers at the Fermi en-ergy of the co-and counter-propagating branch are of the same order. The three-body matrix element of Eq. (8) still applies in the presence of charge reconstruction because its derivation did not require a specific sign of v 3 . Note however that for the charge reconstructed case v 2 −v 3 ∼ v 2 ∼ e 2 /κ and is therefore not connected to k F 2 − k F 3 ∼ 1/l B by the curvature of the confinement potential. Assuming that there are no substantial curvature effects on the scale of the reconstructed region V c l 2 B e 2 /(κl B ) the last term of Eq. (8) dominates the three-body matrix element and Eq. (9) modifies to The crucial difference for the energy relaxation rates compared to the unreconstructed case arises from the large allowed momentum q 3 ∼ 1/l B , which is limited only by the size of the reconstructed region for which the linearized dispersion applies. This increases both the momentum phase space to (L/l B ) 3 and the typical relaxed momentum to Moreover, excitation and relaxation processes no longer need to be balanced when e 2 /κl B T , and we find which applies for ε (e 2 /κl B ) 2 /(V l 2 B ) and allows for relaxation even at T = 0. Equation (15) implies that the increased phase space and the energy relaxation step size leads to a dramatic enhancement of the relaxation rate compared to the unreconstructed case [see Eq. (12)] as where we used the limit T ε Z . V. CONCLUSIONS We studied three-body processes as an intrinsic mechanism for relaxation of hot electrons in clean integer quantum Hall edges at Landau level filling factors ν = 1 and ν = 2. 
These processes rely crucially on the form of the electron dispersion and are thus susceptible to edge reconstruction effects. For an unreconstructed edge, energy relaxation requires a finite temperature which determines the phase space for the relaxation processes. The energy given up by the hot electron in a single three-body collision is controlled by curvature effects on the scale of temperature or Zeeman energy so that the relaxation rate can be tuned by a magnetic field once ε Z T . While unreconstructed edges are expected for steep confinement potentials, smoother confinement potentials with V c < ∼ e 2 /(κl 2 B ) may lead to an interaction-induced spin reconstruction, which causes a relative shift of the Fermi momenta of the two spin species by ∼ 1/l B . The three-body processes are then controlled by curvature effects on the scale of the interaction energy e 2 /(κl B ) which causes a strong increase of the relaxation rate [see Eq. (13)]. Even softer confinement may cause charge reconstruction which introduces additional co-and counterpropagating edge modes. The presence of counterpropagating modes allows for relaxation even at T = 0. Consequently, the phase space for three-body collisions is no longer controlled by temperature but by the size of the reconstructed region ∼ e 2 /(κl B ) which ensues an additional dramatic enhancement of the relaxation rate [see Eq. (16)]. Experimental studies of interaction-induced reconstruction transitions in high magnetic fields have been performed. 24 Our study suggests that it would be rewarding to experimentally investigate relaxation processes in such systems. Within this section we provide all essential details needed for the derivation of Eq. (7) presented in the main text. We assume that the edge is smooth enough that we can approximate the electron wave functions by those of the bulk. We start from the interaction matrix element in real space (A1) In the following we will measure all lengths scales in units of magnetic length l B . In this units the guiding center coordinate directly translates to momenta. With the lowest Landau level wave functions of Eq. (1) we then find where we used the screened Coulomb potential which carries an extra factor e − √ ∆x 2 +∆y 2 /λ with λ being the distance to a screening gate. The integration over ∆y gives 2K 0 (|∆xdX|) in the case when dX 1/λ, where K 0 is the Bessel function of imaginary argument. If, however, dX 1/λ the integral is cut off and the result changes to 2K 0 (|∆x/λ|). We will derive results for the dX 1/λ case and keep in mind appropriate changes for the other limit. After y integration, that gives a factor of L we obtain the intermediate step Performing now the Gaussian integral over x, that gives a factor of π/2, followed by using the Landau gauge to replace guiding center coordinates by momenta one arrives at where we used short-hand notation X − X = kl 2 B = k (with l B = 1). Note that V q (k) = V q (−k − 2q) and is therefore not symmetric, which plays an important role. We see immediately that scattering processes with large momentum transfer q 1 are exponentially suppressed. We therefore concentrate on the opposite limit of q 1 when the exponential prefactor e −q 2 /2 can be set to unity. Let us study limiting cases of Eq. (A4). In the case when k 1 one can approximate the exponential under the integral by the delta-function √ 2πδ(k + q + ξ), and thus obtains Using the asymptotic form of the Bessel function and restoring units of l B one recovers the second limit in Eq. (7). 
In the other limiting case when k 1 one can approximate the exponential under the integral of Eq. (A4) by e −ξ 2 /2 and then complete integration exactly with the result With the logarithmic accuracy at small q this translates into the first limit of Eq. (7). Appendix B: Calculation of the three-body matrix element T 123 1 2 3 In general the three-particle scattering amplitude 1 2 3 |V G 0 V |123 c contains six terms: one direct and five exchange contributions. 28 As explained in the text we need only the former one which reads explicitly 28 where the spin structure is δ Σ,Σ = δ σ1,σ 1 δ σ2,σ 2 δ σ3,σ 3 and the Coulomb matrix element V q (k) was derived in the preceding section. Now using the dispersion relation from Eq. (3), and constrain on momentum transfers from Eq. (4), imposed by the conservation laws, one can simplify T 123 1 2 3 to where we used the property V q (k) = V q (−k − 2q). It is important to stress that the above expression would vanish by ignoring the dependence of the Coulomb matrix element on initial momenta, namely for V q (k) = V q . To proceed further we make use of the assumption that injected particle is of high energy, such that v 1 v 2,3 and k 1 k 2, 3 . In this case we expand V qi (∆k + q i ) in q i . For the interaction V q (k) = − 2e 2 κ ln(|kq|l 2 B ) we obtain after the expansion . (B3) Note that if we are in the regime when k 2 − k 3 l −1 B we have to use the interaction potential V q (k) = − 2e 2 κ ln(|q|l B ), which has a vanishing derivative with respect to k. This can be accounted for by removing the two terms with V q1 (. . .) in the above formula for T 123 1 2 3 . Finally, to leading logarithmic order we can set V q1 (k 1 − k 2 ) = V q1 (k 1 − k 3 ) as well as V q2 (k 2 − k 1 ) = V q3 (k 3 −k 1 ) = V q3 (k 1 −k 2 ) and V q3 (k 3 −k 2 ) = V q2 (k 2 −k 3 ) to obtain Eq. (8) since the spin summation is equal to unity.
7,492.8
2012-06-01T00:00:00.000
[ "Physics" ]
Cognitive load assessment based on VR eye-tracking and biosensors In this paper I present the status of my doctoral research project, a general overview of the research topic and future developments. The main research focus will be to study and develop an extended reality solution for cognitive load assessment in adaptive virtual environments, based on eye tracking and bio-signals. The main objective is to respond to the need for healthcare and training becoming more personalized and location- and time-independent. The end goal is to establish a framework that serves as a quantitative basis for adaptive rehabilitation and training by pushing cognitive load assessment towards ubiquitous computing through immersive technologies. With the growing interest in virtual reality (VR) applications in the medical field there is a greater emphasis on medicine and digital therapies becoming more personalized and tailored for individual patient needs.From this PhD research project's perspective, of interest is the use of VR in conjunction with digital biomarkers such as eye-tracking and biosensors for cognitive load assessment.While inside a virtual experience, the mental focus seems to lie on the elements of the digital environment and, as such, measurable cognitive parameters are fully controllable by the applications algorithms.In cognitive load assessment, particularly neurodegenerative disorders, current immersive digital experiences have the main advantage of providing an alternative screening modality for cognition.The main disadvantage is that they are only based on VR environments alone, not using integrated approaches such as VR eye-tracking pair or bio-signal combinations.To tackle this disadvantage and to further enhance location-and time-independent nature of immersive technologies, the PhD research aims to answer the following questions: • How can immersive technologies be improved by eye tracking and biosensors?• How can cognitive load assessment be used as feedback for adaptive virtual environments?• What is the impact of such a cognitive load based adaptive environment on training and rehabilitation? The result provided by this PhD research will enable an integrated approach towards adaptive environments based on physiological data.This strategy of applying a unified optimized methodology holds for the integration of feedback mechanisms to create novel immersive environments, increasing the chances of a positive outcome. RELATED WORK Several studies have shown that VR technologies can be implemented in clinical settings and trials targeting cognitive assessments.Cognitive load assessment using VR environments is being used for the assessment of mood disorders [1], psychosis [1], schizophrenia [2], and neurodegenerative disorders [4] such as Parkinson's disease [3] [7] or dementia [8].These findings prove that immersive and especially VR experiences are well-fitting applications for adapting to a user's cognitive metrics. 
The current state of the art regarding VR-based cognitive load assessment involves the use of CAVIR (Cognitive Assessment in Virtual Reality) [1] [5], an immersive VR cognitive assessment test of everyday life functions performed in a virtual kitchen, AGT (Art Gallery Test) [6], a VR-based cognitive assessment based on visual search in an art gallery scenario, the Virtual Reality Functional Capacity Assessment Tool (VRFCAT-SL) [7], real-world tasks implemented in a virtual environment, the Box and Blocks Test [3] [8], a fully immersive VR-based version of the Box and Blocks test for upper limb function in Parkinson's, the Virtual multiple errands test (VMET) [9], an immersive exploration-based assessment placed in a virtual supermarket for executive function deficits, the University of California Performance-based Skills Assessment (UPSA) [7], the Complex Task Performance Assessment (CTPA) [7], or the Functional Assessment Short Test (FAST) [1]. PROBLEM STATEMENT AND HYPOTHESIS The PhD study will focus on developing an adaptive system that makes use of both virtual reality and digital biomarkers for cognitive assessment. The main hypothesis is that by integrating eye tracking, biosignals and virtual reality into a multimodal system, and by using the user's cognitive assessment as feedback, a major step forward for adaptive immersive technologies can be achieved. Combining immersive environments with eye tracking and sensor fusion for evaluation purposes is a novel trend that has only emerged in the last five years, with the increase in VR devices that have integrated eye-tracking capabilities [10]. The PhD research proposes to collect and analyze digital biomarkers by means of wearable devices and eye tracking, and to use the result as feedback for adapting training and rehabilitation to the user during the VR session. The detection of stress levels and fatigue and the estimation of cognitive load using bio-signals from wearable physiological sensors is an established area of research. The most studied data modalities taken into consideration for the PhD are skin conductance and temperature, heart rate and heart rate variability, and photoplethysmography [11]. Also, stress indicators will be determined from eye-tracking data, including blink rate, pupil dilation, fixations, and saccades [12]. The proposed research will be conducted in two use cases: rehabilitation in neurodegenerative diseases, and adaptive training for healthcare providers. The main motivation for selecting these use cases is the current need for the medical field to become personalized and location-independent. In both use cases the immersive environment will be adaptive by using a cognitive load–digital biomarker pair as feedback.
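To make the intended data fusion more concrete, a minimal sketch of how such modalities might be combined into a single cognitive-load index is given below. This is an illustrative outline only, not the project's implementation: the feature set, weights, baselines and signs are assumptions, and a real system would calibrate them per user and validate them against ground-truth workload measures.

```python
# Illustrative sketch: fuse physiological and eye-tracking features into a
# normalized cognitive-load index. All weights and baselines are hypothetical.
from dataclasses import dataclass

@dataclass
class BiomarkerSample:
    heart_rate: float        # beats per minute
    hrv_rmssd: float         # heart rate variability (RMSSD, ms)
    gsr: float               # galvanic skin response (microsiemens)
    pupil_diameter: float    # millimetres
    blink_rate: float        # blinks per minute

def zscore(value: float, baseline_mean: float, baseline_std: float) -> float:
    """Normalize a feature against a resting-state baseline."""
    return (value - baseline_mean) / baseline_std if baseline_std else 0.0

def cognitive_load_index(s: BiomarkerSample, baseline: dict) -> float:
    """Weighted combination of baseline-normalized features; higher HR, GSR and
    pupil dilation and lower HRV/blink rate are treated here as load markers."""
    features = {
        "heart_rate":      zscore(s.heart_rate, *baseline["heart_rate"]),
        "hrv_rmssd":      -zscore(s.hrv_rmssd, *baseline["hrv_rmssd"]),     # inverted
        "gsr":             zscore(s.gsr, *baseline["gsr"]),
        "pupil_diameter":  zscore(s.pupil_diameter, *baseline["pupil_diameter"]),
        "blink_rate":     -zscore(s.blink_rate, *baseline["blink_rate"]),   # inverted
    }
    weights = {"heart_rate": 0.2, "hrv_rmssd": 0.25, "gsr": 0.25,
               "pupil_diameter": 0.2, "blink_rate": 0.1}                    # assumed
    return sum(weights[k] * v for k, v in features.items())

# Example use with made-up baseline (mean, std) pairs and a single sample:
baseline = {"heart_rate": (70, 8), "hrv_rmssd": (45, 12), "gsr": (2.0, 0.5),
            "pupil_diameter": (3.5, 0.4), "blink_rate": (17, 5)}
sample = BiomarkerSample(heart_rate=86, hrv_rmssd=30, gsr=3.1,
                         pupil_diameter=4.2, blink_rate=11)
print(f"cognitive load index: {cognitive_load_index(sample, baseline):.2f}")
```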
CURRENT STATUS AND FUTURE WORK 4.1 Current Status I am currently working on developing a fully functional data acquisition framework. Up until this point I have managed to successfully integrate VR environments with eye tracking and biosensors, and to perform several tests. The test scenario involves the use of a virtual garage in which the test subject must find five objects. Once an object is found, the subject must focus their gaze on that specific object for 3-5 seconds for that object to be considered "found" (a minimal sketch of this gaze-dwell logic is given below). Also, during the session, a distraction occurs at a random moment in time, to better simulate real-life conditions. The distraction involves one of the objects inside the garage falling, followed by a loud noise. The main purpose of this entire setup is to evaluate the cognitive load during a visual search and focus task, and to evaluate the cognitive response of the subject during a distraction. It is worth mentioning that during this immersive experience the subject's heart rate and galvanic skin response are measured using wearable devices, while the eye-tracking parameters are measured with the built-in eye tracker of the VR headset. The initial results point to the fact that, while inside an immersive environment, there is a link between changes in cognitive load, heart rate and galvanic skin response. Figure 1 presents the evolution of the digital biomarkers over time during a virtual reality session, while Figure 2 displays the virtual environment. In Figure 1 the red line highlights the moment when the distraction occurred. As can be seen, after the distraction the cognitive load (yellow graph), the galvanic skin response and the heart rate increase. It is also worth noting that each peak on the cognitive load graph before the red line corresponds to the test subject finding an object and focusing on that object; in these instances the galvanic skin response and heart rate increase as well. In Figure 2 the virtual environment is displayed. The interaction between the user and the virtual setting is done through gaze, represented as a white circle, by highlighting the objects that need to be found. Future Work Future work will involve building an updated framework that will enable real-time sensor data acquisition and processing; currently, sensor data are processed at the end of the session. Future work will also include a gamified approach towards training and rehabilitation using cognitive load as feedback. One research idea is to have multiple visual search and focus levels inside the same virtual environment with various stages of difficulty, with access to higher levels and the subsequent update of the virtual scene enabled by the cognitive load assessment of the subject at the end of each session. BROADER IMPACT The core mission of the PhD is to perform research in the field of adaptive technologies for healthcare with the purpose of developing immersive environments tailored to user needs. By the end of my PhD studies, in the second half of 2027, the proposed system is to be delivered and functional. The integration of immersive technologies and digital biomarkers, together with a gamified approach towards clinical rehabilitation and training based on user cognitive load, represents a unique and novel aspect of the PhD research. 1 INTRODUCTION I am doing my PhD at NOVA University Lisbon, Portugal, under the supervision of Prof.
Rui Neves Madeira, in his research group at NOVA LINCS, a research lab focused on Computer Science. I am currently in the second year of my PhD, having started in March 2023, and I aim to finish it in 2027. In Portugal, the PhD thesis usually consists of 4 or more publications in journals or conferences and then a summary linking them all together. The research component of my PhD is carried out at the Center for Digital Health and Social Innovation of the St. Pölten University of Applied Sciences (STPUAS), where I am employed as a Junior Researcher. At STPUAS I work closely with Dr. Vanessa Leung, my secondary PhD supervisor, in EyeQTrack, an Austrian-funded project to develop innovative solutions in adaptive XR training and rehabilitation. The joint supervision of my doctoral studies was made possible by the European University Alliance E³UDRES², to which both Prof. Madeira and Dr. Leung belong. Trial-based studies of the current state-of-the-art solutions have shown that VR tests for cognitive load assessment display the same statistical relevance as well-established tests such as the Montreal Cognitive Assessment test (MoCA) [5] [6], the Abbreviated Mental Test (AMT) [8], and the Mini-Mental State Examination (MMSE). Figure 1: Digital biomarkers during a virtual reality setting.
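Returning to the garage test scenario, a minimal sketch of the gaze-dwell selection logic referenced above might look as follows. This is an illustrative outline, not the project's actual code: the dwell threshold, frame loop and object identifiers are assumptions.

```python
# Hypothetical sketch of dwell-based object selection: an object counts as
# "found" once gaze has rested on it continuously for a threshold duration.
DWELL_THRESHOLD_S = 3.0   # assumed; the study uses 3-5 s

class DwellSelector:
    def __init__(self, target_ids, threshold=DWELL_THRESHOLD_S):
        self.remaining = set(target_ids)
        self.found = []
        self.threshold = threshold
        self._current = None    # object currently gazed at
        self._dwell = 0.0       # accumulated continuous dwell time (s)

    def update(self, gazed_object_id, dt):
        """Call once per frame with the gazed object (or None) and frame time dt."""
        if gazed_object_id != self._current:
            self._current = gazed_object_id
            self._dwell = 0.0
        elif gazed_object_id in self.remaining:
            self._dwell += dt
            if self._dwell >= self.threshold:
                self.remaining.discard(gazed_object_id)
                self.found.append(gazed_object_id)
                self._dwell = 0.0
        return len(self.remaining) == 0   # True once all objects are found

# Example: simulate 4 seconds of gaze on "toolbox" at 60 frames per second
selector = DwellSelector({"toolbox", "tyre", "wrench", "helmet", "canister"})
for _ in range(240):
    selector.update("toolbox", dt=1 / 60)
print(selector.found)   # ['toolbox']
```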
2,184
2023-12-03T00:00:00.000
[ "Computer Science", "Medicine", "Engineering" ]
REACTIVE BIMOLECULAR COLLISIONS STUDIED WITH COMBINED PULSED LASERS AND PULSED, CROSSED, SUPERSONIC MOLECULAR BEAMS Pulsed, supersonic molecular beams and pulsed lasers are particularly well matched tools when combined in molecular reaction dynamics studies. Salient features of an experiment using two pulsed molecular beam sources, a pulsed ultra-violet laser for creating reactive atoms by laser ablation and a pulsed dye laser for performing laser-induced fluorescence detection of the products are described. Differences with steady-state molecular beam experiments are outlined with respect to the following points: facility of inverting the data, possibility of obtaining high signal-to-background ratios and wide ranges of collision energy. These points are illustrated with some results concerning reactions of ground-state carbon atoms, C(³P). [2][3] In particular, the most severe limitation encountered in crossed, continuous molecular beam scattering studies, the signal versus background problem, was virtually eliminated due to higher instantaneous molecular beam intensities, which are not limited by differential pumping requirements, and time-of-flight discrimination of the background signal. Another major advantage of pulsed molecular beam sources suggested at that time, which became fully justified afterwards, was their facile interfacing with powerful pulsed ultra-violet lasers for generating high fluxes of reactive chemical species. By laser photolysis of a stable precursor molecule, or by laser ablation of a solid target, many interesting reactive species, including free radicals and atoms, could be produced. Moreover, by focussing the laser beam in the hydrodynamic region or in the free molecular region of the pulsed molecular flow, or by varying the delay between the pulsed valve and the pulsed laser triggering, various degrees of cooling of the translational and internal degrees of freedom of the reactive species of interest could be obtained. 4 Finally, other exotic species could be synthesised by laser-induced chemical processes resulting from interactions of photolysed or photo-ablated products with reactive gases in the hydrodynamic region. The highly efficient pulsed valve-pulsed ultra-violet laser combination launched reactive scattering experiments which could hardly have been performed by using the conventional continuous molecular beam approach. 1a It is noteworthy that successful studies have since been reported by other groups. State-resolved differential cross sections have thus been obtained for the reaction D + H2 → HD + H, the high potential energy barrier being overcome by the kinetic energy mainly due to the fast D atoms (2.2 eV in the laboratory frame) generated by ArF photolysis of D2S in the collisionless region. 13
The excitation function and the product state distribution have also been found for another isotopic exchange reaction, CH + D2 → CD + HD, the supercooled CH(X2Π1/2, v″ = 0, N″ = 1) radicals being created by ArF laser-induced chemistry in the hydrodynamic region of expanding CH3I/Xe/H2 mixtures. CROSSED BEAM EXPERIMENTS In our molecular beam experiments, reactions between ground-state atoms of more or less refractory elements and small oxidant molecules have been studied in the single-collision regime, by crossing collimated, pulsed, supersonic molecular beams of short duration at right angles. Product quantum-state distributions and their collision energy dependence could be obtained by using pulsed laser-induced fluorescence (LIF) at the crossing point and by varying the velocity of the refractory atom beam. Such data can provide information about the nature of the potential energy hypersurface (PES) connecting reactants and products. The present paper aims at describing some salient differences with the steady-state approach to such experiments. The first consists of the production of intense beams of atoms. The second concerns data reduction to extract quantitative information from the spectra, which requires the solution of the problem of density-flux transformation. The last two features discussed hereafter result from the characteristics of our pulsed atom beam source, i.e. the significant increase in the signal-to-background ratio, and the ability to scan a wide range of relative translational energy. Pulsed Atom Beam Source A schematic view of the experiment is given in Figure 1. As the apparatus has been the object of a recent paper describing all the experimental details [5], this section only relates the key points of the metal-atom beam source design. The idea of the pulsed supersonic metal-atom beam came from the spectroscopy of supersonic metal clusters [17]. Figure 1: Schematic (cutaway through the molecular beam axis plane) of the experiment. The whole assembly is inside the vacuum chamber. PV1 and PV2: pulsed valves for the oxidiser beam and the metal-atom beam; ALB: ablation laser beam; DLB: dye laser beam. Typical density contours (FWHM) of the two molecular beams in the vicinity of the scattering centre at the probing time are shown. Laser ablation of a solid metal inside the throat of a pulsed nozzle using a pulsed laser (generally a doubled Nd:YAG) was found so efficient and so promising that it literally induced an explosion of the metal cluster field. However, our own goal being to produce atom beams if possible free of any aggregates, the supersonic metal cluster source was redesigned for the production of monoatomic species. In the supersonic metal cluster experiments cited above, optimum cluster growth for various elements such as Cu, Mo, W or Nb was found to occur by focussing 7-15 mJ in 6 ns pulses of 532 nm radiation (2.33 eV or 225 kJ mol-1) onto a 1.5 mm spot on the target, resulting in fluences of 0.4-0.85 × 10^4 J m-2. Our own approach was to use higher photon energy together with higher fluence in order to induce complete dissociation of all the fragments ablated from the solid by multiphoton absorption.
This was first achieved with spatially filtered radiation of a KrF laser operated with unstable resonator optics. Pulses of 248 nm radiation (5 eV, or 482 kJ mol-1) with the following characteristics: 7 mJ of energy within 1.2 mrad and 15 ns full width at half maximum (FWHM), could be focussed onto the target as 0.4 mm spots, resulting in a fluence of 5.5 × 10^4 J m-2. Such a fluence, the highest that could be obtained with the type of laser used (Lambda Physik EMG 101 E with EMG 70 unstable resonator), was very satisfactory for Al atom experiments but insufficient for some of the C atom studies. Indeed, unlike most of the other refractory elements, carbon gives small clusters having strong chemical bonds (D0(C2) = 6.21 eV, D0(C3) = 7.31 eV) which perturb the system for two reasons. Firstly, C2 is an extremely reactive radical which could yield the same LIF-detected product: a case study is the C + NO → CN + O system, with the reaction C2 + NO → CN + CO occurring simultaneously. Secondly, C3 exhibits intense absorption bands (A1Πu-X1Σg+ transitions), extending throughout the ultra-violet and visible regions due to an unusually low bending mode (63 cm-1), which could blur out the LIF spectra of the scattered product. C2 and C3 densities in the carbon beam could be efficiently lowered in experiments performed at up to ca. 20 × 10^4 J m-2 fluence using 7 ns pulses of 266 nm radiation delivered by a quadrupled Nd:YAG laser (Quantel SA YG585 with a temperature phase-matched quadrupler). Following quasi-complete dissociation of the ablated species into atoms, cluster growth was limited by minimising three-body recombination reactions. These collision processes occur in the sonic extension channel, between the vaporising point and the vacuum, and afterwards in the hydrodynamic region of the expansion of the carrier gas in vacuum. Cluster growth is therefore favoured by increasing the gas load of the pulsed valve and the extension-channel length. The supersonic cluster experiments at Rice were performed with a magnetically operated double-solenoid pulsed valve giving, under typical conditions, a total gas output of ca. 70 mm3 per pulse (normal pressure and temperature conditions) and a sonic channel length extending up to 30 mm after the vaporising point. In our metal-beam source, the Gentry and Giese pulsed valve model used (Beam-Dynamics VCD-1) gave a gas load roughly 20 times smaller than this value, and the sonic channel length was reduced to 3 mm. These characteristics were nonetheless found to ensure efficient electronic quenching of the atom metastable states, reducing them to negligible amounts compared to the atom ground-state concentration [5]. Any ions produced were easily removed from the beam with an electrostatic field.
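As a quick numerical check of the fluences quoted in this section, a minimal sketch (Python) dividing the pulse energy by a circular focal-spot area reproduces the stated values:

```python
import math

def fluence_J_per_m2(pulse_energy_mJ, spot_diameter_mm):
    """Fluence = pulse energy / focal-spot area, assuming a circular spot."""
    energy_J = pulse_energy_mJ * 1e-3
    radius_m = 0.5 * spot_diameter_mm * 1e-3
    return energy_J / (math.pi * radius_m ** 2)

print(f"{fluence_J_per_m2(7, 0.4):.2e}")    # KrF case: ~5.6e4 J/m^2, matching the quoted 5.5e4
print(f"{fluence_J_per_m2(7, 1.5):.2e}")    # cluster-source lower bound: ~0.4e4 J/m^2
print(f"{fluence_J_per_m2(15, 1.5):.2e}")   # cluster-source upper bound: ~0.85e4 J/m^2
```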
The Laboratory to Centre of Mass Transformation In crossed molecular beam experiments, the data obtained in the laboratory must be transposed into the centre-of-mass (CM) system before any interpretation of the results is made. This is common practice in reactive scattering experiments using an ionisation mass detector scanning the range of accessible laboratory angles: in general the laboratory intensity distribution is strongly weighted by those contributions which have low relative velocities. When LIF is performed at the beam crossing point, it yields laboratory quantum-state product densities which must also be transformed into fluxes in the CM frame. A simple density-to-flux conversion taking the recoil velocity vectors into account was introduced with the pioneering experiments in this field. Its application, however, requires the angular distribution to be known, which is generally not the case. The simple transformation invoked above is also constrained to a limited domain of validity. It cannot be applied when performing experiments with pulsed beams of short physical length because a steady state is not reached when the LIF detection process is triggered. A misuse of this conversion also arises when experiments have been done with a laser beam not irradiating all the reaction zone but only its central part, which is very common in practice. In replacement, a mathematical model has been developed in our laboratory for reactive atom-diatomic systems. Product densities within the laser-irradiated volume are calculated from the outcome of all the reactive events taking place in the beam overlap region up to the detection time [19]. The model shows that the LIF detection efficiency can become extremely dependent not only on the internal energy state but also on the CM scattering angle. Moreover, changing the geometric arrangement of the experiment without changing the collision energy can drastically shift the maxima and minima of detection efficiency towards other values of the internal energy and recoil angle; in other words, for sharply peaked scattering angle distributions, the apparent rovibrational distributions of the excitation spectra can look totally different. Typical results for the reaction C(3PJ) + NO(X2Πr) → CN(X2Σ+) + O(3PJ), Δε0 = −1.27 eV, are displayed in the axonometric plots of Figure 2, which give the conversion function or LIF detection efficiency of the CN radical as a function of the CM recoil energy and the CM scattering angle. For example, point A on Figure 2a corresponds to radicals with no internal energy scattered forwards, point B to radicals at the excitation limit (almost no recoil energy) scattered backwards. The product flux in any given internal energy state can be obtained when dividing the intensity of a corresponding LIF rovibrational line by the integral of the conversion function over the scattering angle distribution. Effects of beam collimation and pulse duration are clearly seen on these plots. It is worth noting that operating conditions can be found where the LIF detection efficiency is almost independent of the scattering angle (Figure 2b), simply by selecting the diameters of the collimators, which is of great interest when the angular distribution is unknown. Gross features of the scattering can also be deduced by performing experiments with different collimators (Figures 2a and 2b), the LIF signal behaving differently in the case of forward, backward or symmetric scattering. While these possibilities exist with pulsed, supersonic beam sources
because the size and location of skimmers, which essentially act as collimators, are not critical parameters due to the absence of shock structure [4], they are, however, unrealistic in continuous beam experiments. The latter experiments can also have the drawback of giving very peculiar figures of the LIF detection efficiency (Figure 2c), unsuitable for correct determination of the internal energy partitioning. Such a very unfavourable case as in Figure 2c cannot be found using even mildly collimated pulsed beams of short duration because the overlap (both in space and time) of the beams away from the scattering centre is limited. Figure 2: Conversion function or LIF detection efficiency Y(ε'tr, θ) (z axis) versus CM recoil energy ε'tr (y axis) and CM scattering angle θ (x axis) of the CN(X2Σ+) product from the C(3PJ) + NO(X2Πr) reaction at εtr = 0.234 eV (beam velocities: vC = 2140 m s-1 and vNO = 830 m s-1), in the case of a laser beam irradiating the central part of the collision volume. a: pulse durations (FWHM) δtC = 4.3 µs and δtNO = 35 µs, beam diameters (FWHM) at the collision zone dC = 17 mm and dNO = 14 mm; b: same as a except dNO = 6 mm; c: δtC and δtNO in the continuous-beam limit, dC = 17 mm and dNO = 6 mm. Obtaining High Signal-to-Background Ratios The signal-to-background ratio problem, which remains the major limitation in molecular beam scattering experiments, is well illustrated with the case of the reaction C(3PJ) + N2O(X1Σ+) → CN(X2Σ+) + NO(X2Πr), Δε0 = −2.78 eV. Carbon is one of the most refractory elements and obtaining a carbon beam with a metal-oven effusive source can only be achieved by operating the oven at an extremely high temperature (up to 3500 K) [2]. At such a temperature, the effusive carbon beam contains roughly equal amounts of C, C2 and C3 species. On the one hand, complete time-of-flight discrimination of these species, which have different velocity distributions, by using a multiple chopper-disk velocity selector is hardly conceivable without losing most of the C flux. On the other hand, leaving a small fraction of the highly rovibrationally excited C3 radical in the C(3PJ) beam would result in a background LIF signal throughout the whole spectral region of the CN violet system. Such an effusive source is therefore poorly suited to reaction dynamics studies using LIF detection [7]. This problem was finally solved, as stressed above, by increasing the fluence used for the laser ablation of graphite. The CN(X2Σ+) excitation spectrum of Figure 3a, obtained this time at larger nozzle-to-crossing-point distances and with beams collimated to 10° FWHM, has a larger signal-to-noise ratio and therefore contains dynamical information of higher quality. The high rovibrational excitation results in prominent bandheads up to v″ = 6. The (6-7) band is not clearly observed although it should also exhibit a head in the vicinity of the (5-6) one. However, due to the spectral congestion in this region, it is difficult to ascertain whether the rovibrational manifold is populated above v″ = 6 or not. Synthetic spectra [12] in fair agreement with the experimental one could be obtained by including a vibrational distribution only up to v″ = 6 (Figures 3b and 3c).
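The collision energy quoted in the Figure 2 caption can be checked with a small sketch (Python): for beams crossing at right angles, and neglecting the velocity spreads, the relative speed is simply the quadrature sum of the two beam velocities.

```python
import math

AMU = 1.66054e-27   # kg
EV = 1.60218e-19    # J

def collision_energy_eV(v1, v2, m1_amu, m2_amu):
    """Relative translational energy for two beams crossing at right angles.
    v1, v2 are the beam velocities (m/s); velocity spreads are neglected."""
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * AMU    # reduced mass
    v_rel_sq = v1 ** 2 + v2 ** 2                      # perpendicular beams: quadrature sum
    return 0.5 * mu * v_rel_sq / EV

# Figure 2 conditions: C (12 amu) at 2140 m/s crossed with NO (30 amu) at 830 m/s
print(f"{collision_energy_eV(2140.0, 830.0, 12.0, 30.0):.3f} eV")   # ~0.234 eV, as quoted
```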
It must be stressed at this level that the relative translational energy for a given CN quantum state is not as well defined as in the C + NO → CN + O reaction. It can, rather, take any discrete value allowed by the energy conservation law, depending upon the partitioning of the remaining available energy between fragment relative translation and NO internal degrees of freedom. As a consequence, some assumptions are needed to compute the CN LIF detection efficiency. Synthetic spectra have thus been computed assuming either similar internal energy distributions for CN and NO, or no NO internal energy at all (Figures 3b and 3c, respectively). The following dynamical trends, independent of the hypotheses, can be derived from the inspection of the spectra of Figure 3. The rotational excitation is high: the distributions are definitely non-Boltzmann; however, the mean energies correspond to "rotational temperatures" as high as 11 000 K for CN (v″ = 1). The CN vibrational distribution is bell-shaped, peaking at v″ = 3 and exhibiting a sharp drop for v″ > 6. This apparent excitation limit, roughly at half the total energy available to the products, strongly suggests that an important fraction of the reaction energy is deposited in NO. Such behaviour is consistent with the existence of a deep well in the PES [1], which may favour a balanced energy partitioning between two vibrating rotators having almost identical vibrational and rotational constants. Figure 3: LIF spectrum of the CN(X2Σ+) product from the C(3PJ) + N2O(X1Σ+) reaction at 0.28 eV relative translational energy. CN(B2Σ+-X2Σ+) transition, Δv sequence; v′-v″: vibronic assignment. a: experimental spectrum; b: synthetic spectrum computed when assuming similar internal distributions for CN and NO, with the following relative vibrational populations: Nv″ = 0.26, 0.67, 1.00, 0.66, 0.57, 0.28 for v″ = 1-6, respectively; c: synthetic spectrum computed when assuming no NO internal energy, with Nv″ = 0.21, 0.60, 1.00, 0.75, 0.74, 0.27 for v″ = 1-6, respectively. The Variation of the Relative Translational Energy The possibility of scanning the collision energy over a wide range of relative translational energies is another asset of a crossed pulsed supersonic beam apparatus. A study of the reaction Mg(1S0) + N2O(X1Σ+) → MgO(X1Σ+) + N2(X1Σg+), Δε0 = −1.5 eV, presently in progress illustrates its use. Previous works by several groups have always remained at a qualitative level. The reaction was not observed with a Broida-type apparatus at 300 K [22]. LIF detection of MgO(X1Σ+) failed at a first attempt in beam-gas configuration with a magnesium-oven effusive source operated at 1400 K and the scattering gas at 300 K, but succeeded [24]. Successful LIF detection was also reported in a flow experiment at 520 K, but no spectrum was published [25]. Finally, MgO(X1Σ+) LIF was obtained with high signal-to-noise ratio by mixing Mg vapour from an oven at 1100 K with N2O preheated to this temperature [26]. Clearly, the apparent increase of reactivity with collision energy of the Mg(1S0) + N2O system can be ascribed to the presence of a high potential energy barrier in the entrance channel of the ground-state PES, which has been predicted by ab initio calculations [27]. An excitation spectrum of the MgO(X1Σ+) product taken at Etr = 0.9 eV is presented in Figure 4. Figure 4: LIF excitation spectrum of MgO(X1Σ+); 0-0, 2-2 and 3-3 bandheads near 500, 498 and 496 nm.
Several experiments were performed at a set of lower values of Etr, obtained by varying the velocity of the Mg beam while keeping constant conditions on the N2O beam, and recording alternately the LIF intensities of the MgO (0-0) bandhead and of the Mg(1P1-1S0) resonance transition at 285.21 nm. The variation of the relative reactive cross-section with collision energy could thus be extracted under the assumption that the internal energy partitioning and the angular distribution of the MgO product did not change markedly in this energy range. Figure 5: Relative reactive cross-section of the Mg(1S0) + N2O → MgO(X1Σ+) + N2(X1Σg+) product channel, as a function of the relative translational energy. First, quantitative
4,202.8
1990-01-01T00:00:00.000
[ "Chemistry", "Physics" ]
A Feasibility Study of Transformer Winding Temperature and Strain Detection Based on Distributed Optical Fibre Sensors The temperature distribution and deformation of transformer windings cannot be measured in a distributed manner by traditional methods, and failure location cannot be performed. To solve these problems, we present a detection method for transformer winding temperature and strain based on distributed optical fibre sensing. An optical fibre composite winding model is designed, and simulated winding temperature rise tests and local deformation tests are used to distinguish the measured winding temperature and strain curves. The test results show that the distributed optical fibre can transmit the wire strain efficiently. Optical fibres, in the process of winding, carry a certain pre-stress. Using the Brillouin–Raman joint measuring method, one can effectively extract the optical fibre temperature and strain information and measure the temperature and strain distribution curves along the winding length, with a temperature measurement precision of ±2 °C and a strain detection accuracy of ±50 με. The system can carry out local hot-spot and deformation localisation, providing new ideas for transformer winding state monitoring technology. Introduction Power transformers play an important role in power systems and their safe operation has a direct influence on the reliability and safety of the power supply. Statistics show that insulation damage is the main cause of transformer failure, with the winding being the part with the highest failure rate. Accurate and real-time checks of the winding temperature and deformation are therefore significant for the safe operation of transformers. Currently, there are several ways to check transformer temperature, such as top-oil temperature measuring methods [1][2][3], fluorescence optical fibre temperature measuring methods [4,5], fibre Bragg grating methods, etc. [6][7][8][9]. The measuring scope of the top-oil temperature measuring method is small due to its low measurement accuracy; fluorescence optical fibre temperature measuring offers a higher measuring accuracy, but it is a point-type measurement and is still limited in scope, since the number of sensors has to be increased to measure different parts; the fibre Bragg grating method is likewise a point-type measurement. Detection Principle The incident pulse light will experience Rayleigh scattering, Brillouin scattering, and Raman scattering, as shown in Figure 1. Raman scattering is only sensitive to temperature and it is divided into Stokes and anti-Stokes scattered light. Anti-Stokes scattered light is sensitive to temperature while Stokes light is less affected by temperature, and the ratio of the two scattered-light intensities depends directly on temperature [23]: Ias/Is = (λs/λas)^4 · exp(−hcν/(kT)), where Ias is the anti-Stokes light intensity; Is is the Stokes light intensity; λs and λas are the Stokes and anti-Stokes optical wavelengths, respectively; c is the velocity of light in vacuo; h is the Planck coefficient; T is the temperature; k is the Boltzmann constant; and ν is the Raman offset. The temperature at a measuring point can thus be obtained by measuring and calculating the intensity ratio between the Stokes and anti-Stokes light.
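A minimal sketch of that inversion is given below (Python), assuming the standard Stokes/anti-Stokes ratio relation written above; the pump wavelength and Raman offset used in the round-trip check are illustrative placeholders, not values taken from this paper.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def raman_ratio(T, lam_s_nm, lam_as_nm, shift_per_cm):
    """Anti-Stokes/Stokes intensity ratio for a Raman offset given in cm^-1."""
    nu = shift_per_cm * 100.0                       # cm^-1 -> m^-1
    return (lam_s_nm / lam_as_nm) ** 4 * math.exp(-h * c * nu / (k * T))

def raman_temperature(R, lam_s_nm, lam_as_nm, shift_per_cm):
    """Invert the ratio relation for the absolute temperature T."""
    nu = shift_per_cm * 100.0
    return h * c * nu / (k * (4.0 * math.log(lam_s_nm / lam_as_nm) - math.log(R)))

# round-trip check with placeholder values (1550 nm pump, ~440 cm^-1 Raman offset)
R = raman_ratio(300.0, 1663.0, 1450.0, 440.0)
print(raman_temperature(R, 1663.0, 1450.0, 440.0))   # -> 300.0 K
```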
The frequency shift of Brillouin scattering is related to the speed of sound in the optical fibre. The sound velocity can be affected by the thermo-optic effect and the elasto-optic effect, which are related to the refractive index, Young's modulus, Poisson's ratio, and the density of the optical fibre material, so the temperature and strain in the optical fibre can both lead to changes of the Brillouin frequency shift and intensity. This is expressed by the good linearity between the axial strain and temperature and the Brillouin frequency shift of the optical fibre, i.e., υB(T,ε) = υB0(T0,ε0) + CυT·ΔT + Cυε·Δε, where υB(T,ε) is the Brillouin frequency shift of the optical fibre at temperature T and strain ε; υB0(T0,ε0) is the Brillouin frequency shift of the optical fibre under the initial temperature T0 and initial strain ε0; CυT and Cυε are the temperature and strain response coefficients of the Brillouin frequency shift; and ΔT and Δε are the changes relative to the initial temperature and initial strain. When the temperature and strain of the optical fibre are measured with Brillouin scattering, an effective distinction between the temperature and strain sensing information must be made. Here, a Brillouin-Raman joint measuring method is adopted: a strain-sensing optical fibre and a temperature-sensing optical fibre of the same length are laid in the same thermal environment, and the accurate temperature change ΔT and strain change Δε are obtained by solving Formulas (1)-(3), with ΔT = TR − TR0, where TR and TR0 are the target temperature and the initial temperature obtained by Raman scattering measurements, respectively.
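Under the linear model above, once the Raman channel fixes ΔT, the strain follows from the residual Brillouin shift. A minimal sketch (Python); the default coefficients are the calibration values quoted later in the text (1.32 MHz/°C and 0.0528 MHz/με), and the example numbers are hypothetical.

```python
def joint_temperature_strain(delta_nu_B_MHz, T_R, T_R0, C_T=1.32, C_eps=0.0528):
    """Separate temperature and strain from a measured Brillouin frequency-shift change.

    delta_nu_B_MHz : change of the Brillouin frequency shift relative to the initial state (MHz)
    T_R, T_R0      : current and initial temperatures from the Raman (ROTDR) channel (degC)
    C_T            : temperature coefficient of the Brillouin shift (MHz per degC)
    C_eps          : strain coefficient of the Brillouin shift (MHz per microstrain)
    """
    dT = T_R - T_R0                                   # temperature change fixed by the Raman channel
    d_eps = (delta_nu_B_MHz - C_T * dT) / C_eps       # remaining shift attributed to strain
    return dT, d_eps

# hypothetical example: an 80 MHz shift while the Raman channel reports a 20 degC rise
print(joint_temperature_strain(80.0, 45.0, 25.0))     # -> (20.0, ~1015 microstrain)
```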
The Design of Optical Fibre Composite Wire One kind of optical fibre composite wire structure is designed in this paper. A groove is cut in the centre of the wide edge of the wire, and a single-mode fibre (SMF) and a multi-mode fibre (MMF) are put in the groove (Figure 2). The SMF is used as the strain-sensing optical fibre and the MMF is used as the temperature-sensing optical fibre. The fibres are fixed in the groove by cured insulating paint to ensure that the optical fibre and wire deform synchronously and stay at the same temperature. Since the optical fibre diameter is small and the area of the wire grooving is less than 2% of the cross-sectional area, the effect on the current-carrying capacity and mechanical strength of the wire is negligible. Apart from the winding, inside the transformer there are mechanical parts such as the iron core, yoke and clamping piece, as well as interference factors such as transformer oil flow and machine vibration. Traditional electrical measurements are usually affected by the factors mentioned above, hence their lower measuring accuracy; however, the inspection frequency of optical fibre measurements is usually above 10 GHz and is unaffected by the machine vibration signal. Meanwhile, the strain-sensing optical fibre is integrated with the wire and deforms synchronously with it, so the measured strain curve is related only to the deformation of the wire itself. To ensure stable operation of the optical fibre at the high temperatures inside the transformer, a high-temperature-resistant optical fibre coated with polyimide is used, as it is stable at T > 200 °C.
Figure 2: Structure of the optical fibre composite wire (wire, groove with SMF and MMF fixed by insulation paint, insulation paper). The Simulation of Inter-Turn Electric Field Since winding deformation mostly occurs in low-voltage windings, the wire used in the low-voltage winding of an SFSZ7-31,500/110 kV transformer is modelled to establish a 2-D grooved-wire model. The voltage grade of the low-voltage winding is 10.5 kV. The wire is 2 mm wide and 6 mm high, the groove is 0.3 mm in both width and depth, and the wide surface is covered with a 0.2-mm thickness of insulation paper. The optical fibre has a double-layer structure with a 0.125 mm fibre core diameter and a 0.25 mm coating layer diameter. For the 10.5 kV low-voltage winding, the potential difference across two adjacent turns is about 40 V. The relative dielectric constant of each material is listed in Table 1. The inter-turn electric field of the wire after optical fibre installation is shown in Figure 3. It is shown that the maximum electric field intensity after the wire grooving is located at the round corner of the groove (644.59 V/mm), which is 119.6% higher than the inter-turn electric field intensity. The average value of the electric field intensity in the groove is 40% of the inter-turn electric field intensity. This is far from enough to affect the insulation performance of the oil paper. Figure 3: The inter-turn electric field of the optical fibre composite wire. The Power Frequency Resistance Test of the Groove Wire To check the influence of actual wire grooving on insulation, an inter-turn power frequency voltage breakdown test is conducted on the wire before and after grooving. Three layers of insulation paper are added between the wires (Figure 4). The power frequency voltage breakdown test is conducted on the wire before and after grooving and the mean value of ten test results is taken. The test shows that the mean values of the power frequency breakdown voltage before and after wire grooving are 6.82 kV and 6.75 kV, respectively; the breakdown positions are all at the edge of the wire and there is no breakdown at the groove. Therefore, it can be considered that the grooving of the wide side of the wire has no influence on the winding insulation performance. The Theoretical Calculation of Embedded Optical Fibre Strain Transfer Since the current direction in the high- and low-voltage windings is opposite, when the transformer is subject to a short-circuit electromotive force the axial short-circuit forces between the two windings are mutually repulsive [24]. The low-voltage winding is subject to an inward compressive stress around its whole circumference. Since the winding is often wound on the stays, the wire between two adjacent stays will also develop bending stress under the action of the axial short-circuit force. When the fibre deforms synchronously with the wire, the fibre will be subjected to the combined action of compressive and bending stress. There has been some research into the strain transfer produced by a uniform axial force applied to an embedded fibre and matrix [25]: when the distance from the end is more than 0.0125 m, the optical fibre strain transfer coefficient is 1. According to the theory of material mechanics, since the diameter of the transformer winding is much greater than the wire width and thickness, the wire can be taken as a small-curvature beam to calculate the normal stress in bending [26]. Now the bending strain transfer coefficient is calculated and the following assumptions are made: 1. Each interface of the sensor is always tightly connected under bending. 2. The materials of the different layers are all isotropic, linear elastic bodies. 3. The optical fibre centroid coincides with that of the glue layer. 4. Both the wire and the groove have no rounded corners. When the wire is subject to the axial short-circuit action, the parameters of the embedded optical fibre wire are shown in Figure 5. The intersection of the axis of symmetry and the neutral-axis layer of the model section is taken as the origin of the coordinates. To simplify the calculation, the wire section is divided into five areas and y1, y2, y3, y4, and y5 are the distances from the centroid of each part to the wire bottom, thus y2 = y3 = y4 = y5.
Accordingly, for bending of the composite section, the stress in each area follows the linear strain distribution about the neutral layer: σi = Ei·y/ρ (i = 1, 2, …, 5), where σi is the stress in the corresponding numbered area; Ei is the elastic modulus of the corresponding area (obviously E1 = E2 = E5); ρ is the radius of curvature of the optical fibre composite wire; and y is the distance between any layer and the neutral-axis layer. In combination with the definition of the static bending moment, the distance yc from the neutral layer of the optical fibre composite wire to the bottom is yc = Σ(Ei·Ai·yi)/Σ(Ei·Ai), where Ai is the cross-sectional area of each part. The strain in any layer of the model then follows, where Ii (i = 1, 2, …, 5) is the second moment of area of each part about the x-axis. According to Equations (6) and (7), the strain at the optical fibre centre and the strain at the wire surface can be obtained, where ym is the distance from the wire surface to the bottom. As indicated, the optical fibre strain is related to the distance from the optical fibre centroid to the neutral layer. When the optical fibre is closer to the wire surface, the strain increases and the detection sensitivity to wire deformation is higher.
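To make the geometry concrete, a small sketch of the modulus-weighted (transformed-section) calculation is given below. It assumes pure bending, so the strain at a point scales with its distance from the neutral layer; the dimensions echo the copper test bar described later, but the elastic moduli and the resulting numbers are illustrative assumptions, not the paper's data.

```python
def neutral_axis(parts):
    """Modulus-weighted centroid of a composite cross-section.
    parts: list of (E_i [Pa], A_i [m^2], y_i [m]), with y_i measured from the wire bottom."""
    return sum(E * A * y for E, A, y in parts) / sum(E * A for E, A, y in parts)

def bending_strain_ratio(y_fibre, y_surface, parts):
    """Ratio of the bending strain at the fibre centroid to that at the wire surface.
    Under pure bending the strain at a point is proportional to its distance from the neutral layer."""
    y_c = neutral_axis(parts)
    return (y_fibre - y_c) / (y_surface - y_c)

# Illustrative numbers only (a 30 mm x 3 mm copper bar with a 0.3 mm x 0.3 mm groove at the top);
# the moduli are assumed, not taken from the paper.
E_CU, E_GLUE = 110e9, 3e9
A_full, y_full = 30e-3 * 3e-3, 1.5e-3            # full rectangle
A_groove, y_groove = 0.3e-3 * 0.3e-3, 2.85e-3    # groove region, centred 0.15 mm below the top face
copper = (E_CU, A_full - A_groove,
          (A_full * y_full - A_groove * y_groove) / (A_full - A_groove))
fibre_glue = (E_GLUE, A_groove, y_groove)        # groove filled with glue and fibre
print(bending_strain_ratio(2.85e-3, 3.0e-3, [copper, fibre_glue]))  # ~0.90
```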
Since BOTDR has a certain spatial resolution, the strain measured over one spatial-resolution length is the average strain along that gauge length; the average value of the strain transfer rate over the perceived length is therefore used. Strain Transfer Test A groove with a 0.3 mm side length is cut on the wide side of a straight copper bar, 1 m long, 30 mm wide, and 3 mm thick, according to the method above, and insulating paint is applied to fix the distributed optical fibre in the groove. Both ends of the copper bar are fixed. A vertical downward force is applied at the centre in 10 N increments over four steps, each with an accuracy of 0.01 N (Figure 6). The Brillouin optical time-domain reflectometer (BOTDR) system is used to measure the strain at a constant room temperature. The measurement spatial resolution is 2 m and the sampling resolution is 0.2 m. Finally, the strain value corresponding to the wire surface at the maximum strain of the optical fibre is taken to calculate the strain transfer coefficient of the fibre at different stresses (Figure 7). The range of the optical fibre strain sensor is set to be 48~49 m. As shown in Figure 7, since a BOTDR system is limited by its spatial resolution, the strain measured by the optical fibre is actually the comprehensive strain within the spatial-resolution length with the measuring point as the start point [27]. The strain on the wire surface can be transferred to the fibre when the test fibre deforms synchronously with the wire, so the strain curve detected by the optical fibre can reflect the strain state of the winding. The strain variation region is 45~51 m; the pulse width of BOTDR is 20 ns and the spatial resolution is 2 m, so there is a strain transition distance of approximately 2 m at both ends of the 1 m sensing fibre. Meanwhile, although the strain measured in the optical fibre is smaller than the actual strain, the strain transfer rate increases with increasing stress and is asymptotic to its theoretical value. Building the Test Platform A spiral winding model is made according to the size of a low-voltage winding of an SFSZ7-31,500/110 kV transformer. For convenient deformation setting, eight wires are wound in parallel and the outermost wire is replaced by an optical fibre composite wire made according to the above method. Finally, a winding model with an outer diameter of 700 mm, composed of 40 cakes with a total length of about 90 m, is made (Figure 8a). To eliminate the measurement error caused by the head-end blind area and tail-end reflection, a 20 m optical fibre pigtail is connected to the head and tail ends of the model. To simulate the uneven distribution of the winding temperature and local overheating in a real transformer, one resistance wire is wound in parallel and attached to the outermost wires of the No. 10-12 and No. 30-32 cakes, over about 20 m in total length. A thermocouple is used to measure the wire surface temperature for comparative measurement (Figure 8b). BOTDR technology uses a single-mode fibre as the sensing element. Due to differences in the optical fibre material and process, the performance parameters of single-mode tightly buffered optical fibres from different manufacturers, modes, and sheathing materials are different.
Therefore, temperature calibration and strain calibration tests have to be conducted. Multiple calibration tests have been conducted for the single-mode optical fibre used here: the temperature and strain coefficients obtained are 1.32 MHz/°C and 0.0528 MHz/με, respectively. When making the coils, the wire and optical fibre are influenced by positional changes and pulling. In order to ensure that the optical fibre is not damaged during winding, BOTDR is used to monitor the optical fibre strain. Raman optical time-domain reflectometry (ROTDR) is used to measure the temperature curve along the winding wire, and temperature compensation is applied according to Equation (4). The BOTDR and ROTDR instruments are produced by WEIHAI BEIYANG OPTOELECTRONIC INFO-TECH CO., LTD. The settings of the instrument parameters are shown in Table 2. The temperature-sensing length of the optical fibre is set to 100 m, and the ROTDR system measurement error is ±1 °C in the range of 20-90 °C, as shown in Figure 9a. The optical fibre strain sensor range is set to 105~120 m, and the measurement error of BOTDR is ±50 με in the range of 0~5000 με, as shown in Figure 9b. Figure 10 shows that during the winding of the coil the strain change is smaller than 1400 με, which is far smaller than the range of the optical fibre strain measurement. This suggests that the distributed optical fibre sensor maintained its good strain monitoring performance. Temperature Rise Test The voltage of the resistance wires on the wires of the No. 10-12 cakes and the No. 30-32 cakes is increased with a voltage regulator so as to raise the winding temperature to 40 °C and 60 °C, and the temperature of the corresponding wire is measured by a thermocouple as a reference. The Brillouin frequency shift curve measured by the BOTDR system after the winding temperature increase is shown in Figure 11a. Figure 11b shows the optical fibre temperature and strain curves after temperature compensation with the Raman temperature measuring technology. To analyse the measurement results, the actual winding temperature rise and actual measurement position, the mean temperature measured by the distributed optical fibres for the temperature rise, and the mean temperature measured by the standard thermocouple are compared (Table 3). The measured position of the temperature change is slightly larger than the actual position because the spatial resolution of the ROTDR system is 2 m and the measured data are the mean temperatures within that spatial resolution. Therefore, there is a temperature response transition distance of 2 m at the sudden temperature change position. The discrepancy between the temperature rise measured by the distributed optical fibre sensing and the value measured by the standard thermocouple is <±2 °C, and the system is able to locate the point where the winding temperature changes. Meanwhile, the system response time is about 2 to 10 s, so it responds quickly to winding temperature changes and can reflect the real-time winding temperature distribution. As shown in Figure 11, the frequency shift curve measured by the BOTDR system before temperature compensation changes greatly, showing its high measurement sensitivity to the temperature change in the optical fibre. After temperature compensation with the Raman temperature measuring system, the winding strain curve is the same as the original strain curve: the correlation coefficient reaches 0.999 and the strain error is less than 50 με. Winding Temperature Rise and Deformation Test When the transformer encounters a short-circuit failure, the wire will experience sudden temperature changes due to the thermal effect of the short-circuit current, while the winding is deformed by the action of the short-circuit electromotive forces. Therefore, the voltage of the resistance wires on the wires of the No. 30-32 cakes is increased so as to raise the wire temperature to 40 °C, and radial bulging deformation is set on the wire between two adjacent stays of the No. 30-34 cakes. Figure 12a shows the frequency shift measurements before and after temperature compensation in the BOTDR system. The comparisons between the ROTDR-measured temperature curve, the compensated strain measuring curve, and the original curves are shown in Figure 12b. As shown in Figure 12, the frequency shift curve measured by the BOTDR system is affected by both temperature and strain; after the temperature curve is obtained from the ROTDR system measurements, the real strain curve is found. As shown in Table 4, the winding deformation measured by the BOTDR system is greater than that of the temperature rise, which is consistent with the actual test settings. However, since the spatial resolution of the BOTDR system is set to 2 m, the extent of the measured sudden strain change is larger than the actual setting. Besides, since the measured optical fibre strain is the average strain within the spatial resolution, the measured optical fibre strain is much lower than the actual wire strain.
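The last point can be illustrated with a toy model of the gauge-length averaging: a deformed section shorter than the spatial resolution is reported as a lower, broadened strain feature. A minimal sketch (Python/NumPy; the strain profile is invented purely for illustration):

```python
import numpy as np

def botdr_readout(true_strain, dz_m, spatial_resolution_m):
    """Crude model of BOTDR gauge-length averaging: the value reported at each point
    is the mean of the true strain over one spatial-resolution length starting there."""
    n = max(1, int(round(spatial_resolution_m / dz_m)))
    padded = np.concatenate([true_strain, np.zeros(n - 1)])
    return np.array([padded[i:i + n].mean() for i in range(len(true_strain))])

z = np.arange(0.0, 20.0, 0.2)                          # 0.2 m sampling, as in Table 2
true = np.where((z >= 9.0) & (z < 10.0), 500.0, 0.0)   # 1 m deformed section, 500 microstrain (illustrative)
measured = botdr_readout(true, 0.2, 2.0)
print(measured.max())   # 250.0 -- averaging over 2 m halves the apparent peak of a 1 m feature
```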
In the next step, the relationship between the system spatial resolution settings and the measurement location accuracy will be studied so as to locate positions undergoing deformation more accurately.

Conclusions

A distributed optical fibre detection method for the temperature and strain of transformer windings based on Brillouin-Raman joint measurement was presented, an optical fibre composite transformer winding was designed and fabricated, and temperature rise and deformation measurement tests were conducted. The following conclusions may be drawn:
a. The wire grooving slightly increased the maximum inter-turn field strength, but not enough to affect the inter-turn insulation strength of the wire; the embedded optical fibre was able to transfer the wire strain and temperature, and the change in strain measured by BOTDR reflected the change of state of the winding.
b. The optical fibre is subjected to tensile and compressive forces during wire winding, and a certain pre-stress developed after the wire was wound. The optical fibre should be laid on the pressure-withstanding side as far as possible to avoid bending of the optical fibre.
c. Brillouin-Raman joint measurement was able to distinguish the optical fibre temperature and strain information, monitor the winding temperature and strain distribution in real time, and locate hot spots and deformation positions. At the same time, the strong insulation properties of the optical fibre determine its potential for on-line monitoring of the transformer state and could overcome the shortcomings of traditional detection methods.
d. In future research, the correlation between changes in the optical fibre strain curve and the deformation and type of the winding wire, together with pattern recognition, will be studied so as to provide more accurate state information for engineers responsible for transformer maintenance.
9,944.2
2018-11-01T00:00:00.000
[ "Physics" ]
Ab initio predictions link the neutron skin of 208Pb to nuclear forces Heavy atomic nuclei have an excess of neutrons over protons, which leads to the formation of a neutron skin whose thickness is sensitive to details of the nuclear force. This links atomic nuclei to properties of neutron stars, thereby relating objects that differ in size by orders of magnitude. The nucleus 208Pb is of particular interest because it exhibits a simple structure and is experimentally accessible. However, computing such a heavy nucleus has been out of reach for ab initio theory. By combining advances in quantum many-body methods, statistical tools and emulator technology, we make quantitative predictions for the properties of 208Pb starting from nuclear forces that are consistent with symmetries of low-energy quantum chromodynamics. We explore 10^9 different nuclear force parameterizations via history matching, confront them with data in select light nuclei and arrive at an importance-weighted ensemble of interactions. We accurately reproduce bulk properties of 208Pb and determine the neutron skin thickness, which is smaller and more precise than a recent extraction from parity-violating electron scattering but in agreement with other experimental probes. This work demonstrates how realistic two- and three-nucleon forces act in a heavy nucleus and allows us to make quantitative predictions across the nuclear landscape.
Neutron stars are extreme astrophysical objects whose interiors may contain exotic new forms of matter. The structure and size of neutron stars are linked to the thickness of neutron skins in atomic nuclei via the neutron-matter equation of state [1][2][3]. The nucleus 208Pb is an attractive target for exploring this link in both experimental 4,5 and theoretical 2,6,7 studies, due to the large excess of neutrons and its simple structure. Mean-field calculations predict a wide range for R_skin(208Pb) because the isovector parts of nuclear energy density functionals are not well constrained by binding energies and charge radii 2,[7][8][9]. Additional constraints may be obtained 10 by including the electric dipole polarisability of 208Pb, though this comes with a model dependence 11 which is difficult to quantify. In general, estimation of systematic theoretical uncertainties is a challenge for mean-field theory. In contrast, precise ab initio computations, which provide a path to comprehensive uncertainty estimation, have been accomplished for the neutron-matter equation of state [12][13][14] and the neutron skin in the medium-mass nucleus 48Ca 15. But up to now, treating 208Pb within the same framework was out of reach. Due to breakthrough developments in quantum many-body methods, such computations are now becoming feasible for heavy nuclei [16][17][18][19]. The ab initio computation of 208Pb we report here represents a significant step in mass number from the previously computed tin isotopes 16,17, as illustrated in Figure 1. The complementary statistical analysis in this work is enabled by emulators (for mass number A ≤ 16) which mimic the outputs of many-body solvers, but are orders of magnitude faster. In this paper we develop a unified ab initio framework to link the physics of nucleon-nucleon scattering and few-nucleon systems to properties of medium- and heavy-mass nuclei up to 208Pb, and ultimately to the nuclear matter equation of state near saturation density.

Linking models to reality

Our approach to constructing nuclear interactions is based on chiral effective field theory (EFT) [21][22][23]. In this theory the long-range part of the strong nuclear force is known and stems from pion exchanges, while the unknown short-range contributions are represented as contact interactions; we also include the ∆ isobar degree of freedom 24. At next-to-next-to-leading order in Weinberg's power counting, the four pion-nucleon low-energy constants (LECs) are tightly fixed from pion-nucleon scattering data 25. The 13 additional LECs in the nuclear potential must be constrained from data. We use history matching 26,27 to explore the modeling capabilities of ab initio methods by identifying a non-implausible region in the vast parameter space of LECs, for which the model output yields acceptable agreement with selected low-energy experimental data, here denoted history-matching observables. The key to efficiently analyze this high-dimensional parameter space
is the use of emulators based on eigenvector continuation [28][29][30] that accurately mimic the outputs of the ab initio methods at several orders of magnitude lower computational cost. We consider the following history-matching observables: nucleon-nucleon scattering phase shifts up to an energy of 200 MeV; the energy, radius, and quadrupole moment of 2H; and the energies and radii of 3H, 4He, and 16O. We perform five waves of this global parameter search (see Extended Data Figures 1 and 2), sequentially ruling out implausible LECs that yield model predictions too far from experimental data. For this purpose we use an implausibility measure (see Methods) that links our model predictions and experimental observations as z = M(θ) + ε_exp + ε_em + ε_method + ε_model (Eq. (1)). Here, experimental observations, z, are related to emulated ab initio predictions M(θ) via the random variables ε_exp, ε_em, ε_method, ε_model that represent experimental uncertainties, emulator precision, method approximation errors, and the model discrepancy due to the EFT truncation at next-to-next-to-leading order, respectively. The parameter vector θ corresponds to the 17 LECs at this order. The method error represents, e.g., model-space truncations and other approximations in the employed ab initio many-body solvers. The model discrepancy ε_model can be probabilistically specified since we assume to operate with an order-by-order improvable EFT description of the nuclear interaction (see Methods for details). The final result of the five history-matching waves is a set of 34 non-implausible samples in the 17-dimensional parameter space of the LECs. We then perform ab initio calculations for nuclear observables in 48Ca and 208Pb, as well as for properties of infinite nuclear matter.

Ab initio computations of 208Pb

We employ the coupled-cluster (CC) 12,31,32, the in-medium similarity renormalization group (IMSRG) 33 and many-body perturbation theory (MBPT) methods to approximately solve the Schrödinger equation and obtain the ground-state energy and nucleon densities of 48Ca and 208Pb. We analyze the model-space convergence and use the differences between CC, IMSRG and MBPT results to estimate the method approximation errors, see Methods and Extended Data Figures 3 and 4. The computational cost of these methods scales (only) polynomially with increasing numbers of nucleons and single-particle orbitals. The main challenge in computing 208Pb is the vast number of matrix elements of the three-nucleon force which must be handled. We overcome this limitation by using a recently introduced storage scheme in which we only store linear combinations of matrix elements directly entering the normal-ordered two-body approximation 19 (see Methods for details). Our ab initio predictions for finite nuclei are summarized in Figure 2. The statistical approach that leads to these results is composed of three stages. First, history matching identified a set of 34 non-implausible interaction parametrizations. Second, model calibration is performed by weighting these parametrizations (serving as prior samples) using a likelihood measure according to the principles of sampling/importance resampling 37. This yields 34 weighted samples from the LEC posterior probability density function, see Extended Data Figure 5. Specifically, we assume independent EFT and many-body method errors and construct a normally distributed data likelihood encompassing the ground-state energy per nucleon E/A and point-proton radius R_p for 48Ca, and the energy E_2+ of its first excited 2+ state. Our final predictions are therefore conditional on this calibration data.
We have tested the sensitivity of the final results to the likelihood definition by repeating the calibration with a non-diagonal covariance matrix or a Student-t distribution with heavier tails, finding small (~1%) differences in the predicted credible regions. The EFT truncation errors are quantified by studying ab initio predictions at different orders in the power counting for 48Ca and infinite nuclear matter. We validate our ab initio model and error assignments by computing the posterior predictive distributions, including all relevant sources of uncertainty, for both the replicated calibration data (blue colour) and the history-matching observables (green colour), see Figure 2. (Figure 2 caption, in part: see Extended Data Table 1 for the numerical specification of experimental data (z), errors (σ_i), medians (white circles) and 68% credibility regions (thick bars); the percentage ratios σ_tot/z of the theory-dominated total uncertainty to the experimental value are given in the right margin; the prediction for R_skin(208Pb) in the bottom panel is shown on an absolute scale and compared to experimental results using electroweak 5 (purple), hadronic 34,35 (red), electromagnetic 4 (green), and gravitational-wave 36 (blue) probes, from top to bottom; see Extended Data Figure 7b for details.) Finally, having built confidence in our ab initio model and underlying assumptions, we predict R_skin(208Pb), E/A and R_p for 208Pb, and α_D for 48Ca and 208Pb, as well as nuclear matter properties, by employing importance resampling 37. The corresponding posterior predictive distributions are shown in the lower panels of Figure 2 (pink colour). Our prediction R_skin(208Pb) = 0.14-0.20 fm exhibits a mild tension with the value extracted from the recent parity-violating electron scattering experiment PREX 5 but is consistent with the skin thickness extracted from elastic proton scattering 35, antiprotonic atoms 34 and coherent pion photoproduction 4, as well as constraints from gravitational waves from merging neutron stars 36. We also compute the weak form factor F_w(Q²) at momentum transfer Q_PREX = 0.3978(16) fm^-1, which is more directly related to the parity-violating asymmetry measured in the PREX experiment. We observe a strong correlation with the more precisely measured electric charge form factor F_ch(Q²), as shown in Figure 3b. While we have not quantified the EFT and method errors for these observables, we find a small variance among the 34 non-implausible predictions for the difference F_w(Q²) - F_ch(Q²) for both 48Ca and 208Pb, as shown in Figure 3c.

Ab initio computations of infinite nuclear matter

We also make predictions for nuclear matter properties by employing the CC method on a momentum-space lattice 38 with a Bayesian machine-learning error model to quantify the uncertainties from the EFT truncation 14 and the CC method (see Methods and Extended Data Figure 6 for details). The observables we compute are the saturation density ρ_0, the energy per nucleon of symmetric nuclear matter E_0/A, its compressibility K, the symmetry energy S (i.e., the difference between the energy per nucleon of neutron matter and symmetric nuclear matter), and its slope L.
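For reference, the text defines the symmetry energy S only in words and does not spell out L or K; the standard saturation-point definitions (added here for the reader, not quoted from the paper) are

\[
S(\rho) = \frac{E}{N}(\rho) - \frac{E}{A}(\rho), \qquad
L = 3\rho_0 \left.\frac{\partial S(\rho)}{\partial \rho}\right|_{\rho_0}, \qquad
K = 9\rho_0^{2} \left.\frac{\partial^{2}\,(E/A)(\rho)}{\partial \rho^{2}}\right|_{\rho_0}.
\]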
The posterior predictive distributions for these observables are shown in Figure 3a. These distributions include samples from the relevant method and model error terms. Overall, we reveal relevant correlations among observables, previously indicated in mean-field models, and find good agreement with empirical bounds 39. The last row shows the resulting correlations with R_skin(208Pb) in our ab initio framework. In particular, we find essentially the same correlation between R_skin(208Pb) and L as observed in mean-field models (see Extended Data Figure 7b).

Discussion

The predicted range of the 208Pb neutron skin thickness (see Extended Data Table 2) is consistent with several extractions 4,40,41, each of which involves some model dependence, and in mild tension (approximately 1.5σ) with the recent PREX result 5. Ab initio computations yield a thin skin and a narrow range because the isovector physics is constrained by scattering data 8,13,42. A thin skin was also predicted in 48Ca 15. We find that both R_skin(208Pb) = 0.14-0.20 fm and the slope parameter L = 37-66 MeV are strongly correlated with scattering in the 1S0 partial wave for laboratory energies around 50 MeV (the strongest two-neutron channel allowed by the Pauli principle, with the energy naively corresponding to the Fermi energy of neutron matter at 0.8ρ_0), see Extended Data Figure 7a. It is possible, analogous to findings in mean-field theory 1,43, to increase L beyond the range predicted in this work by tuning a contact in the 1S0 partial wave and simultaneously readjusting the three-body contact to maintain realistic nuclear saturation. But this large slope L and increased R_skin come at the cost of degraded 1S0 scattering phase shifts, well beyond the expected corrections from higher-order terms (see Extended Data Figure 8). The large range of L and R_skin obtained in mean-field theory is a consequence of scattering data not being incorporated. It will be important to confront our predictions with more precise experimental measurements 44,45. If the tension between scattering data and neutron skins persists, it will represent a serious challenge to our ab initio description of nuclear physics. Our work demonstrates that ab initio approaches using nuclear forces from chiral EFT can consistently describe data from nucleon-nucleon scattering, few-body systems, and heavy nuclei within the estimated theoretical uncertainties. Information contained in nucleon-nucleon scattering significantly constrains the properties of neutron matter; this same information constrains neutron skins, which provide a non-trivial empirical check on the reliability of ab initio predictions for the neutron-matter equation of state. Moving forward, it will be important to extend these calculations to higher orders in the effective field theory, both to further validate the error model and to improve precision, and to push the cutoff to higher values to confirm regulator independence. The framework presented in this work will enable predictions with quantified uncertainties across the nuclear chart, advancing toward the goal of a single unified framework for describing low-energy nuclear physics.
METHODS

Hamiltonian and model space. The many-body approaches used in this work [CC, IMSRG, and many-body perturbation theory (MBPT)] start from the intrinsic Hamiltonian H = T_kin - T_CoM + V_NN + V_3N. Here T_kin is the kinetic energy, T_CoM is the kinetic energy of the centre of mass, V_NN is the nucleon-nucleon interaction, and V_3N is the three-nucleon interaction. In order to facilitate the convergence of heavy nuclei, the interactions employed in this work used a non-local regulator with a cutoff Λ = 394 MeV/c. Results should be independent of this choice, up to higher-order corrections, provided renormalization-group invariance of the EFT. However, increasing the momentum scale of the cutoff leads to harder interactions, considerably enlarging the required computational effort. We represent the 34 non-implausible interactions that resulted from the history-matching analysis in the Hartree-Fock basis in a model space of up to 15 major harmonic oscillator shells (e = 2n + l ≤ e_max = 14, where n and l denote the radial and orbital angular momentum quantum numbers, respectively) with oscillator frequency ħω = 10 MeV. Due to storage limitations, the three-nucleon force had an additional energy cut given by e_1 + e_2 + e_3 ≤ E_3max = 28. After obtaining the Hartree-Fock basis for each of the 34 non-implausible interactions, we capture 3N-force effects via the normal-ordered two-body approximation before proceeding with the CC, IMSRG and MBPT calculations 46,47. The convergence behaviour in e_max and E_3max is illustrated in Extended Data Figure 3. In that figure, we use an interaction with a high likelihood that generates a large correlation energy. Thus, its convergence behaviour represents the worst case among the 34 non-implausible interactions. The model-space-converged results are investigated with E_3max → 3e_max and e_max → ∞ extrapolations. Asymptotic forms involving the quantity 2(e_max + 7/2)b, where b is the harmonic-oscillator length and the c_i and d_i are fitting parameters, are used for the E_3max 19 and e_max 48,49 extrapolations, respectively. Through the extrapolations, the ground-state energies computed with e_max = 14 and E_3max = 28 are shifted by -75 ± 60 MeV. Likewise, the extrapolations of proton and neutron radii with the functional form given in Refs. 19,48,49 yield a small +0.005 ± 0.010 fm shift of the neutron skin thickness.

In-medium similarity renormalization group calculations. The IMSRG calculations 33,50 were performed at the IMSRG(2) level, using the Magnus formulation 51. Operators for the point-proton and point-neutron radii, form factors, and the electric dipole operator were consistently transformed. The dipole polarizability α_D was computed using the equations-of-motion method truncated at the 2-particle-2-hole level (the EOM-IMSRG(2,2) approximation 52) and the Lanczos continued-fraction method 53. We compute the weak and charge form factors using the parameterization presented in Ref. 54, though the form given in Ref. 55 yields nearly identical results.

Many-body perturbation theory calculations. MBPT calculations for 208Pb were performed in the Hartree-Fock basis to third order for the energies, and to second order for radii.

Coupled-cluster calculations. The CC calculations of 208Pb were truncated at the singles-and-doubles excitation level, known as the CCSD approximation 12,31,32. We estimated the contribution from triples excitations to the ground-state energy of 208Pb as 10% of the CCSD correlation energy (which is a reliable estimate for closed-shell systems 32).
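The model-space extrapolations described above can be illustrated with a generic exponential fit to ground-state energies at increasing e_max; this is only a sketch with made-up numbers, and the simple form E(e_max) = E_inf + a·exp(-b·e_max) stands in for the oscillator-length-dependent asymptotic forms of Refs. 19,48,49.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptote(emax, e_inf, a, b):
    """Generic exponential approach to the infinite-model-space limit."""
    return e_inf + a * np.exp(-b * emax)

# Illustrative (made-up) ground-state energies for increasing model spaces, in MeV.
emax = np.array([8.0, 10.0, 12.0, 14.0])
energies = np.array([-1588.0, -1637.0, -1657.0, -1665.0])

popt, pcov = curve_fit(asymptote, emax, energies, p0=(-1670.0, 3000.0, 0.4))
e_inf, e_inf_err = popt[0], np.sqrt(pcov[0, 0])
print(f"Extrapolated energy: {e_inf:.1f} +/- {e_inf_err:.1f} MeV")
```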
Extended Data Figure 4 compares the different manybody approaches used in this work, i.e.CC, IMSRG, MBPT, and allows us to estimate the uncertainties related to our many-body approach in computing the ground-state observables for 208 Pb.The point proton and neutron radii are computed as ground-state expectation values (see e.g.Ref. 15 for details).For 48 Ca we used a Hartree-Fock basis consisting of 15 major oscillator shells with an oscillator spacing of ω = 16 MeV, while for 3N forces we used E 3max = 16, which is sufficiently large to obtain converged results in this mass region.Here we computed the ground-state energy using the Λ-CCSD(T) approximation 56 which include perturbative triples corrections.The 2 + excited state in 48 Ca was computed using the equation-of-motion CCSD approach 57 , and we estimated a −1 MeV shift from triples excitations based on EOM-CCSD(T) calculations of 48 Ca and 78 Ni using similar interactions 58 . For the history-matching analysis we used an emulator for the 16 O ground-state energy and charge radius that was constructed using the recently developed subspace coupled-cluster method 30 .For higher precision in the emulator we went beyond the SP-CCSD approximation used in Ref. 30 and included leading-order triples excitations via the CCSDT-3 method 59 .The CCSDT-3 ground-state training vectors for 16 O were obtained starting from the Hartree-Fock basis of the recently developed chiral interaction ∆NNLO GO (394) of Ref. 60 in a model-space consisting of 11 major harmonic oscillator shells with the oscillator frequency ω = 16 MeV, and E 3max = 14.The emulator used in the history matching was constructed by selecting 68 different training points in the 17-dimensional space of LECs using a spacefilling Latin hypercube design in a 10% variation around the ∆NNLO GO (394) LECs.At each training point we then performed a CCSDT-3 calculation in order to obtain the training vectors for which we then construct the sub-space projected norm and Hamiltonian matrices.Once the SP-CCSDT-3 matrices are constructed we may obtain the ground-state energy and charge radii for any target values of the LECs by diagonalizing a 68 by 68 generalized eigenvalue problem (see Ref. 30 for more details).We checked the accuracy of the emulator by cross-validation against full-space CCSDT-3 calculations as demonstrated in Extended Data Figure 4a and found a relative error that was smaller than 0.2%. The nuclear matter equation of state and saturation properties are computed with the CCD(T) approximation which includes doubles excitations and perturbative triples corrections.The three-nucleon forces are considered beyond the normal-ordered two-body approximation by including the residual three-nucleon force contribution in the triples.The calculations are performed on a cubic lattice in momentum space with periodic boundary conditions.The model space is constructed with (2n max +1) 3 momentum points, and we use n max = 4(3) for pure neutron matter (symmetric nuclear matter) and obtain converged results.We perform calculations for systems of 66 neutrons (132 nucleons) for pure neutron matter (symmetric nuclear matter) since results obtained with those particle numbers exhibit small finite size effects 38 . Iterative history matching.In this work we use an iterative approach known as history matching 26,27 in which the model, solved at different fidelities, is confronted with experimental data z using Eq. ( 1).Obviously, we do not know the exact values of the errors in Eq. 
(1), hence we represent them as random variables and specify reasonable forms for their statistical distributions, in alignment with the Bayesian paradigm. For many-body systems we employ quantified method and (A = 16) emulator errors as discussed above and summarized in Extended Data Table 1. For A ≤ 4 nuclei we use the no-core shell model in Jacobi coordinates 61 and eigenvector continuation emulators 29. The associated method and emulator errors are very small. Probabilistic attributes of the model discrepancy terms are assigned based on the expected EFT convergence pattern 62,63. For the history-matching observables considered here we use point estimates of model errors from Ref. 64.

The aim of history matching is to estimate the set Q(z) of parameterizations θ for which the evaluation of a model M(θ) yields an acceptable, or at least not implausible, match to a set of observations z. History matching has been employed in various studies involving complex computer models [65][66][67][68] ranging, e.g., from climate modeling 69,70 to systems biology 71. We introduce the individual implausibility measure, a function over the input parameter space that quantifies the (mis-)match between our (emulated) model output M_i(θ) and the observation z_i for an observable in the target set Z, and we mainly employ a maximum implausibility measure as the restricting quantity. Specifically, we consider a particular value of θ as implausible if the maximum implausibility exceeds the cutoff c_I ≡ 3.0, appealing to Pukelsheim's three-sigma rule 72. In accordance with the assumptions leading to Eq. (1), the variance in the denominator of Eq. (3) is a sum of independent squared errors. Generalizations of these assumptions are straightforward if additional information on error covariances or possible inaccuracies in our error model were to become available. An important strength of history matching is that we can proceed iteratively, excluding regions of input space by imposing cutoffs on implausibility measures that can include additional observables z_i and corresponding model outputs M_i, with possibly refined emulators as the parameter volume is reduced. The history-matching process is designed to be independent of the order in which observables are included, as discussed in Ref. 67. This is an important feature as it allows for efficient choices regarding such orderings. The iterative history matching proceeds in waves according to a straightforward strategy that can be summarized as follows:
1. At wave j: evaluate a set of model runs over the current non-implausible volume Q_j using a space-filling design of sample values for the parameter inputs θ. Choose a rejection strategy based on implausibility measures for a set Z_j of informative observables.
2. Construct or refine emulators for the model predictions across Q_j.
3. Calculate the implausibility measures over Q_j using the emulators and impose implausibility cutoffs. This defines a new, smaller non-implausible volume Q_{j+1}, which should satisfy Q(z) ⊆ Q_{j+1} ⊆ Q_j.
4. Unless (a) computational resources are exhausted, or (b) all considered points in the parameter space are deemed implausible, we may include additional informative observables in the considered set Z_{j+1} and return to step 1.
If 4(a) is true, we generate a number of acceptable runs from the final non-implausible volume Q_final, sampled according to scientific need.
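Based on the description above (a mismatch between model output and observation normalized by the sum of the independent squared errors, with a three-sigma cutoff), the implausibility measure and cutoff referred to as Eqs. (3) and (4) presumably take a form along the lines of

\[
I_i(\theta) \;=\; \frac{\lvert z_i - M_i(\theta) \rvert}
{\sqrt{\operatorname{Var}(\varepsilon_{\mathrm{exp}}) + \operatorname{Var}(\varepsilon_{\mathrm{em}})
 + \operatorname{Var}(\varepsilon_{\mathrm{method}}) + \operatorname{Var}(\varepsilon_{\mathrm{model}})}},
\qquad
I_M(\theta) \;\equiv\; \max_{z_i \in Z} I_i(\theta) \;>\; c_I,
\]

with c_I = 3.0, so that a parameter vector θ is discarded when its maximum implausibility over the target set exceeds the cutoff.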
The ab initio model for the observables we consider comprises at most 17 parameters; four subleading pionnucleon couplings, 11 nucleon-nucleon contact couplings, and two short-ranged three-nucleon couplings.To identify a set of non-implausible parameter samples we performed iterative history matching in four waves using observables and implausibility measures as summarized in Extended Data Figure 1b.For each wave we employ a sufficiently dense Latin hypercube set of several million candidate parameter samples.For the model evaluations we utilized fast computations of neutron-proton scattering phase shifts and efficient emulators for the few-and many-body history-matching observables.See Extended Data Table 1 and Extended Data Figure 2 for the list of history-matching observables and information on the errors that enter the implausibility measure (3).The input volume for wave 1 incorporates the naturalness expectation for LECs, but still includes large ranges for the relevant parameters as indicated by the panel ranges in Extended Data Figure 1a.In all four waves the input volume for c 1,2,3,4 is a four-dimensional hypercube mapped onto the multivariate Gaussian probability density function (PDF) resulting from a Roy-Steiner analysis of pion-nucleon scattering data 73 .In wave 1 and wave 2 we sampled all relevant parameter directions for the set of included two-nucleon observables.In wave 3, the 3 H and 4 He observables were added such that the three-nucleon force parameters c D and c E can also be constrained.Since these observables are known to be rather insensitive to the four model parameters acting solely in P −waves, we ignored this subset of the inputs and compensated by slightly enlarging the corresponding method errors.This is a well known emulation procedure called inactive parameter identification 26 .For wave 4 we considered all 17 model parameters and added the ground-state energy and radius of 16 O to the set Z 4 and emulated the model outputs for 5 × 10 8 parameter samples.By including oxygen data we explore the modeling capabilities of our ab initio approach.Extended Data Figure 1a summarizes the sequential non-implausible volume reduction, wave-by-wave, and indicates the set of 4,337 non-implausible samples after the fourth wave.We note that the use of history matching would in principle allow a detailed study of the information content of various observables in heavy-mass nuclei.Such an analysis, however, requires an extensive set of reliable emulators and is beyond the scope of the present work.The volume reduction is determined by the maximum implausibility cutoff (4) with additional confirmation from the optical depths (which indicate the density of non-implausible samples; see Eqs. ( 25) and ( 26) in Ref. 71 ).The nonimplausible samples summarise the parameter region of interest, and can directly aid insight regarding interdependencies between parameters induced by the match to observed data.This region is also where we would expect the posterior distribution to reside and we note that our history-matching procedure has allowed us to reduce its size by more than seven orders of magnitude compared to the prior volume (see Extended Data Figure 1b). 
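A single history-matching wave, as described above, can be sketched as follows. This is an illustrative toy: a uniform random design stands in for the Latin hypercube, the emulator is a two-observable placeholder rather than the eigenvector-continuation emulators of the paper, and the targets and variances are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS, N_SAMPLES, C_I = 17, 100_000, 3.0

def emulator(theta):
    """Placeholder for the fast emulators: maps LEC samples of shape (n, N_PARAMS)
    to two fictitious observables."""
    return np.stack([theta[:, 0] + 0.5 * theta[:, 1],
                     theta[:, 2] ** 2 - theta[:, 3]], axis=1)

# Space-filling design over the current non-implausible volume (here a unit hypercube).
theta = rng.uniform(-1.0, 1.0, size=(N_SAMPLES, N_PARAMS))

z = np.array([0.3, 0.1])            # fictitious experimental targets
var_total = np.array([0.05, 0.08])  # sum of exp., emulator, method and model variances

impl = np.abs(emulator(theta) - z) / np.sqrt(var_total)  # I_i(theta) for each observable
keep = impl.max(axis=1) <= C_I                           # maximum-implausibility cutoff
print(f"{keep.mean():.1%} of the samples survive this wave")
nonimplausible = theta[keep]                             # input volume for the next wave
```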
As a final step, we confront the set of non-implausible samples from wave 4 with neutron-proton scattering phase shifts, such that our final set of non-implausible samples has been matched with all history-matching observables. For this final implausibility check we employ a slightly less strict cutoff and allow the first, second and third maxima of I_i(θ) (for z_i ∈ Z_final) to be 5.0, 4.0, and 3.0, respectively, accommodating the more extreme maxima we may anticipate when considering a significantly larger number of observables. The end result is a set of 34 non-implausible samples that we use for predicting 48Ca and 208Pb observables, as well as the equation of state of both symmetric nuclear matter and pure neutron matter.

Posterior predictive distributions. The 34 non-implausible samples from the final history-matching wave are used to compute energies, radii of proton and neutron distributions, and electric dipole polarizabilities (α_D) for 48Ca and 208Pb. They are also used to compute the electric and weak charge form factors for the same nuclei at a relevant momentum transfer, and the energy per particle of infinite nuclear matter at various densities to extract key properties of the nuclear equation of state (see below). These results are shown as blue circles in Figure 3. In order to make quantitative predictions, with a statistical interpretation, for R_skin(208Pb) and other observables, we use the same 34 parameter sets to extract representative samples from the posterior PDF p(θ|D_cal). Bulk properties (energies and charge radii) of 48Ca, together with the structure-sensitive 2+ excited-state energy of 48Ca, are used to define the calibration data set D_cal. The IMSRG and CC convergence studies make it possible to quantify the method errors. These are summarized in Extended Data Table 1. The EFT truncation errors are quantified by adopting the EFT convergence model 74,75 for observable y, namely y = y_ref (Σ_{i=0..k} c_i Q^i + Σ_{i=k+1..∞} c_i Q^i) (Eq. (5)), with observable coefficients c_i that are expected to be of natural size, and the expansion parameter Q = 0.42 following our Bayesian error model for nuclear matter at the relevant density (see below). The first sum in the parenthesis is the model prediction y_k(θ) of observable y at truncation order k in the chiral expansion. The second sum then represents the model error, as it includes the terms that are not explicitly included. We can quantify the magnitude of these terms by learning about the distribution of the c_i, which we assume is described by a single normal distribution per observable type with zero mean and a variance parameter c̄². We employ the nuclear-matter error analysis for the energy per particle of symmetric nuclear matter (described below) to provide the model error for E/A in 48Ca and 208Pb. For radii and electric dipole polarizabilities we employ the next-to-leading order and next-to-next-to-leading order interactions of Ref. 60 and compute these observables at both orders for various Ca, Ni, and Sn isotopes. The reference values y_ref are set to r_0·A^(1/3) for radii and to the experimental value for α_D. From these data we extract c̄² and perform the geometric sum of the second term in Eq. (5). The resulting standard deviations for model errors are summarized in Extended Data Table 1.
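Under the convergence model of Eq. (5), with the omitted coefficients c_i drawn independently from a normal distribution of variance c̄², summing the neglected orders geometrically gives a truncation-error variance of roughly

\[
\operatorname{Var}\!\left[\varepsilon_{\mathrm{model}}\right]
\;\approx\; y_{\mathrm{ref}}^{2}\,\bar{c}^{2} \sum_{i=k+1}^{\infty} Q^{2i}
\;=\; y_{\mathrm{ref}}^{2}\,\bar{c}^{2}\,\frac{Q^{2(k+1)}}{1-Q^{2}},
\]

which, with Q = 0.42 and k the chiral truncation order, is the kind of quantity summarized as the model error in Extended Data Table 1.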
At this stage we can approximately extract samples from the parameter posterior p(θ|D cal ) by employing the established method of sampling/importance resampling 37,76 .We assume a uniform prior probability for the non-implausible samples and we introduce a normallydistributed likelihood, L(D cal |θ), assuming independent experimental, method, and model errors.The prior for c 1,2,3,4 is the multivariate Gaussian resulting from a Roy-Steiner analysis of πN scattering data 73 .Defining importance weights we draw samples θ * from the discrete distribution {θ 1 , . . ., θ n } with probability mass q i on θ i .These samples are then approximately distributed according to the parameter posterior that we are seeking 37,76 . Although we are operating with a finite number of n = 34 representative samples from the parameter PDF, it is reassuring that about half of them are within a factor two from the most probable one in terms of the importance weight, see Extended Data Figure 5. Consequently, our final predictions will not be dominated by a very small number of interactions.In addition, as we do not anticipate the parameter PDF to be of a particularly complex shape, based on the results of the history match, consideration of the various error structures in the analysis, and on the posterior predictive distributions (PPDs) shown in Figure 3, and as we are mainly interested in examining such lower 1-or 2-dimensional PPDs, this sample size was deemed sufficient and the corresponding sampling error assumed subdominant.We use these samples to draw corresponding samples from This PPD is the set of all model predictions computed over likely values of the parameters, i.e., drawing from the posterior PDF for θ.The full PPD is then defined, in analogy with Eq. ( 7), as the set evaluation of y which is the sum where we assume method and model errors to be independent of the parameters.In practice, we produce 10 4 samples from this full PPD for y by resampling the 34 samples of the model PPD (7) according to their importance weights, and adding samples from the error terms in (8).We perform model checking by comparing this final PPD with the data used in the iterative historymatching step, and in the likelihood calibration.In addition, we find that our predictions for the measured electric dipole polarizabilities of 48 Ca and 208 Pb as well as bulk properties of 208 Pb serve as a validation of the reliability of our analysis and assigned errors.See Figure 2 and Extended Data Table 1. In addition, we explored the sensitivity of our results to modifications of the likelihood definition.Specifically, we used a student-t distribution (ν = 5) to see the effects of allowing heavier tails, and we introduced an error covariance matrix to study the effect of possible correlations (with ρ ≈ 0.7) between the errors in binding energy and radius of 48 Ca.In the end, the differences in the extracted credibility regions was ∼ 1% and we therefore present only results obtained with the uncorrelated, multivariate normal distribution. 
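The sampling/importance-resampling step described above amounts to weighting the 34 non-implausible samples by their likelihood and drawing from them with replacement. Below is a minimal sketch; the Gaussian likelihood, the calibration values and the error sizes are placeholders standing in for the emulated/computed 48Ca observables and the combined error model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder predictions of the calibration observables, e.g. [E/A(48Ca), R_p(48Ca), E_2+(48Ca)],
# for each of the 34 non-implausible parameter sets (rows). Values are rough, illustrative scales.
predictions = rng.normal([-8.6, 3.45, 3.8], [0.1, 0.03, 0.3], size=(34, 3))
z_cal = np.array([-8.67, 3.47, 3.83])   # calibration data (illustrative numbers)
sigma = np.array([0.1, 0.03, 0.3])      # combined experimental + method + model errors (assumed)

# Unnormalized Gaussian log-likelihood and the importance weights q_i.
log_like = -0.5 * np.sum(((predictions - z_cal) / sigma) ** 2, axis=1)
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# Resample parameter-set indices with replacement according to the importance weights.
resampled = rng.choice(34, size=10_000, replace=True, p=weights)
print(np.bincount(resampled, minlength=34))
```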
Our final predictions for R skin ( 208 Pb), R skin ( 48 Ca) and for nuclear matter properties are presented in Figure 3 and Extended Data Table 2.For these observables we use the Bayesian machine learning error model described below to assign relevant correlations between equationof-state observables.For model errors in R skin ( 208 Pb) and L we use a correlation coefficient of ρ = 0.9 as motivated by the strong correlation between the observables computed with the 34 non-implausible samples.It should be noted that S, L, and K are computed at the specific saturation density of the corresponding non-implausible interaction. Bayesian machine learning error model.Similar to Eq. ( 1) the predicted nuclear matter observables can be written as: where y k (ρ) is the CC prediction using our EFT model truncated at order k, ε k (ρ) is the EFT truncation (model) error, and ε method (ρ) is the CC method error.In this work we apply a Bayesian machine learning error model 14 to quantify the density dependence of both method and truncation errors.The error model is based on multitask Gaussian processes that learn both the size and the correlations of the target errors from given prior information.Following a physically-motivated Gaussian process (GP) model 14 , the EFT truncation errors ε k at given density ρ are distributed as: with Here k = 3 for the ∆NNLO(394) EFT model used in this work, while c2 , l and Q are hyperparameters corresponding to the variance, the correlation length, and the expansion parameter.Finally, we choose the reference scale y ref to be the EFT leading-order prediction.The mean of the Gaussian process is set to be zero since the order-by-order truncation error can either be positive or negative and the correlation function r(ρ, ρ ; l) in ( 11) is the Gaussian radial basis function. We employ Bayesian inference to optimize the Gaussian process hyperparameters using order-by-order predictions of the equation of state for both pure neutron matter and symmetric nuclear matter with the ∆-full interactions from Ref. 64 .In this work, we find cPNM = 1.00 and l PNM = 0.92 fm −1 for pure neutron matter and cSNM = 1.55 and l SNM = 0.48 fm −1 for symmetric nuclear matter. The above Gaussian processes only describe the correlated structure of truncation errors for one type of nucleonic matter.In addition, the correlation between pure neutron matter and symmetric nuclear matter is crucial for correctly assigning errors to observables that involve both E/N and E/A (such as the symmetry energy S).For this purpose we use a multitask Gaussian process that simultaneously describes truncation errors of pure neutron matter and symmetric nuclear matter according to: where K 11 and K 22 are the covariance matrices generated from the kernel function c2 R ε k (ρ, ρ ; l) for pure neutron matter and symmetric nuclear matter, respectively, while K 12 (K 21 ) is the cross-covariance as in Ref. 77 . 
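A single-task version of the Gaussian-process truncation-error model sketched in words above might look as follows. The kernel form, in particular the Q^(2(k+1))/(1-Q²) prefactor and the use of the leading-order prediction as y_ref, is an assumption consistent with the description rather than the paper's exact Eqs. (10)-(12), and the hyperparameter values are the quoted pure-neutron-matter ones.

```python
import numpy as np

def truncation_error_cov(rho, y_ref, cbar=1.0, ell=0.92, Q=0.42, k=3):
    """Covariance of the EFT truncation error over a density grid rho, assuming
    cov = cbar^2 * y_ref(rho) y_ref(rho') * Q^(2(k+1)) / (1 - Q^2) * RBF(rho, rho'; ell)."""
    rho = np.asarray(rho, dtype=float)
    rbf = np.exp(-0.5 * ((rho[:, None] - rho[None, :]) / ell) ** 2)  # Gaussian radial basis function
    scale = np.outer(y_ref, y_ref) * cbar**2 * Q ** (2 * (k + 1)) / (1.0 - Q**2)
    return scale * rbf

# Example: draw three correlated truncation-error curves on a density grid (made-up y_ref).
rho = np.linspace(0.08, 0.20, 13)
y_ref = 15.0 * (rho / 0.16)     # placeholder leading-order energy-per-particle scale, MeV
cov = truncation_error_cov(rho, y_ref) + 1e-12 * np.eye(rho.size)   # jitter for numerical stability
draws = np.random.default_rng(2).multivariate_normal(np.zeros(rho.size), cov, size=3)
print(draws.shape)
```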
Regarding the CC method error, different sources of uncertainty should be considered.The truncation of the cluster operator and the finite-size effect are the main ones and the total method error is then ε method = ε cc + ε fs .Following the Bayesian error model we have a general expression for the method error: with R me (ρ, ρ ; l me ) = y me,ref (ρ)y me,ref (ρ )r(ρ, ρ ; l me ).( 14) Here the subscript "me" stands for either the cluster operator truncation "cc" or finite-size effect "fs" method error.For the cluster operator truncation errors ε cc the reference scale y me,ref is taken to be the CCD(T) correlation energy.The Gaussian processes are then optimized with data from different interactions by assuming that the energy difference between CCD and CCD(T) can be used as an approximation of the cluster operator truncation error.The correlation lengths learned from the training data are l me,PNM = 0.81 fm −1 for pure neutron matter and l me,SNM = 0.34 fm −1 for symmetric nuclear matter.Based on the convergence study we take ±10% of the correlation energy as the 95% credible interval which gives cme = 0.05 for ε cc .As for the finite-size effect ε fs , the reference scale is taken to be the CCD(T) ground-state energy.Then following Ref. 38, we use ±0.5% (±4%) of the ground-state energy of the pure neutron matter (the symmetric nuclear matter) as a conservative estimation of the finite-size effect (95% credible interval) when using periodic boundary conditions with 66 neutrons (132 nucleons) around the saturation point.This leads to cme,PNM = 0.0025 and cme,SNM = 0.02 for ε fs .The finite-size effects of different densities are clearly correlated while there are insufficient data to learn its correlation structure.Here we simply used 0.81 fm −1 (0.34 fm −1 ) as the correlation length for pure neutron matter (symmetric nuclear matter) and assume zero correlation between pure neutron matter and symmetric nuclear matter. Once the model and method errors are determined, it is straightforward to sample these errors from the corresponding covariance matrix and produce the equationof-state predictions using Eq. ( 9) for any given interaction.This sampling procedure is crucial for generating the posterior predictive distribution of nuclear matter observables shown in Figure 3a.CCD(T) calculations for nuclear-matter equation of state and the corresponding 2σ credible interval for method and model errors are illustrated in Extended Data Figure 6.The sampling procedure is made explicit with three randomly sampled equation-of-state predictions.Note that even though the sampled errors for one given density appear to be random, the multitask Gaussian processes will guarantee that the sampled equation of state of nuclear matter are smooth and properly correlated with each other.Finally, the proportion of the parameter space deemed non implausible is listed in the last column.Note that no additional reduction of the non-implausible domain is achieved in the fourth and final waves, in which 16 O observables are included, but that parameter correlations are enhanced.Histogram of importance weights for the 34 non-implausible interaction samples.These are obtained from likelihood calibration as defined in Eq. 
(6).shown along with the corresponding method error (blue shade) and EFT truncation error (green shade) for one representative interaction.Errors are correlated as a function of density ρ and the dashed orange, green and purple curves illustrate predictions with randomly sampled method and model errors drawn from the respective multitask Gaussian processes.Correlations extend between pure neutron matter (E/N ) and symmetric nuclear matter (E/A) energies per particle which is represented here by curves in the same colour., p34 , and GW170817 36 .All these results involve modeling input as the neutron skin thickness cannot be measured directly.The quoted experimental error bars include statistical and some systematical uncertainties except for Ref. 34 that is statistical only and the GW170817 constraint which is a 90 % upper bound from relativistic mean-field modeling of the tidal polarizability extracted in Ref. 36 . Extended Data Table 2 | Predictions for the nuclear equation of state at saturation density and for neutron skins.Medians and 68%, 90% credible regions (CR) for the final PPD including samples from the error models (see also Figure 3 and text for details).The saturation density, ρ 0 , is in (fm −3 ), the neutron skin thickness, R skin ( 208 Pb) and R skin ( 48 Ca), in (fm), while the saturation energy per particle (E 0 /A), the symmetry energy (S), its slope (L), and incompressibility (K) at saturation density are all in (MeV).Empirical regions shown in Figure 3 are E 0 /A = −16.0± 0.5, ρ 0 = 0.16 ± 0.01, S = 31 ± 1, L = 50 ± 10 and K = 240 ± 20 from Refs. 39,88,89 4), Extended Data Table 1 and Extended Data Figure 2. b, Illustration of the freedom in Skyrme parametrizations to adjust L while preserving ρ 0 and E 0 /A.The parameters x 0 , t 0 , x 3 , t 3 correspond to the functional form given in e.g. 90 .The black circles correspond to different parameter sets, while the red line indicates the result of starting with the SKX interaction and modifying the x 0 , x 3 parameters while maintaining the binding energy per nucleon E/A of 208 Pb.The right column also shows the 208 Pb point-proton and point-neutron radii (R p and R n , respectively) and neutron skin thickness R skin for different parametrisations.The gray bands indicate a linear fit to the black points with r 2 the coefficient of determination.Skyrme parameter sets included are SKX, SKXCSB 9 , SKI, SKII 91 , SKIII-VI 92 , SKa, SKb 93 , SKI2, SKI5 94 , SKT4, SKT6 95 , SKP 96 , SGI, SGII 97 , MSKA 98 , SKO 99 , SKM * 100 . Figure 1 | Figure 1 | Trend of realistic ab initio computations for the nuclear A-body problem.The bars highlight years of first realistic computations of doubly magic nuclei.The height of each bar corresponds to the mass number A divided by the logarithm of the total compute power R TOP500 (in flop/second) of the pertinent TOP500 list20 .This ratio would be approximately constant if progress were solely due to exponentially increasing computing power.However, algorithms which instead scale polynomially in A have greatly increased the reach. 
Figure 3 | Figure3| Posterior predictive distribution for R skin ( 208 Pb) and nuclear matter at saturation density.a, Predictions for the saturation energy per particle E 0 /A and density ρ 0 of symmetric nuclear matter, its compressibility K, the symmetry energy S, and its slope L are correlated with the those for R skin ( 208 Pb).The bivariate distribution include 68% and 90% credible regions (black lines) and a scatter plot of the predictions with the 34 non-implausible samples before error sampling.Empirical nuclear-matter properties are indicated by purple bands (see Extended Data Table2).b, Predictions with the 34 non-implausible samples for the electric F ch versus weak F w charge form factors for 208 Pb at the momentum transfer considered in the PREX experiment5 .c, Difference between electric and weak charge form factors for48 Ca and 208 Pb at the momentum transfers Q CREX = 0.873 fm −1 and Q PREX = 0.3978 fm −1 that are relevant for the CREX and PREX experiments, respectively.Experimental data (purple bands) in panels b and c are from Ref.5 , the size of the markers indicate the importance weight, and blue lines correspond to weighted means. 1 | History-matching waves.a, The initial parameter domain used at the start of history-matching wave 1 is represented by the axes limits for all panels.This domain is iteratively reduced and the input volumes of waves 2, 3, and 4 are indicated by green/dash-dotted, blue/dashed, black/solid rectangles.The logarithm of the optical depths log 10 ρ (indicating the density of non-implausible samples in the final wave) are shown in red with darker regions corresponding to a denser distribution of non-implausible samples.b, Four waves of history matching were used in this work plus a fifth one to refine the final set of non-implausible samples.The neutron-proton scattering targets correspond to phase shifts at six energies (T lab = 1, 5, 25, 50, 100, 200 MeV) per partial wave: 1 S 0 , 3 S 1 , 1 P 1 , 3 P 0 , 3 P 1 , 3 P 2 .The A = 2 observables are E( 2 H), R p ( 2 H), Q( 2 H), while A = 3, 4 are E( 3 H), E( 4 He), R p ( 4 He).Finally, A = 16 targets are E( 16 O), R p ( 16 O).The number of active input parameters is indicated in the fourth column.The number of inputs sets being explored, and the fraction of non-implausible samples that survive the imposed implausibility cutoff(s) are shown in the fifth and sixth columns, respectively. Extended Data Figure 2 | 14 Extended Data Figure 3 | Neutron-proton scattering phase shifts.34 interaction samples survive the final implausibility cutoff with respect to neutron-proton phase shifts δ in S and P waves up to 200 MeV.The red circles are from the Granada phase shift analysis79 , while the 2σ error bars are dominated by the estimated EFT truncation errors64 .Convergence of energy and radius observables of 208 Pb with the e max and E 3max truncations.a, Ground state energy as a function of E 3max .The dashed lines indicate a Gaussian fit.b, Ground state energy (extrapolated in E 3max as a function of e max .The smaller error bar on the adopted value indicate the error due to model space extrapolation, and the larger error bar also includes the method uncertainty.c, Neutron skin as a function of E 3max .d, Neutron radius as a function of oscillator basis frequency ω for a series of e max cuts. 
Extended Data Figure 4 | Precision of sub-space coupled-cluster emulator and many-body solvers. a, Cross-validation of the SP-CCSDT-3 emulator for the ground-state energy of 16O. Results from full computations using CCSDT-3 are compared with emulator predictions for 50 samples from the 17-dimensional space of LECs. The standard deviation of the residuals ∆E_SPCC−CC is 0.19 MeV. b,c,d, Differences between IMSRG and CC results versus differences between MBPT and CC results for the ground-state energy per nucleon ∆E/A (panel b), the point-proton radius ∆R_p (panel c), and the neutron skin ∆R_skin (panel d) of 208Pb using the 34 non-implausible interactions obtained from history matching (see text for more details). The CC results for the ground-state energy include approximate triples corrections.
Extended Data Figure 5 | Importance weights.
Extended Data Figure 7 | Correlation of R_skin(208Pb) with scattering data and L. a, Correlation of computed R_skin(208Pb) with the proton-neutron 1S0 phase shift δ(1S0) at a laboratory energy of 50 MeV, shown in blue. The error bars represent method and model (EFT) uncertainties. The green band indicates the experimental phase shift 79, while the purple line (band) indicates the mean result (one-sigma error) of the PREX experiment 5. The dashed line indicates the linear trend of the ab initio points with r² the coefficient of determination. b, Correlation of the neutron skin R_skin(208Pb) vs the slope of the symmetry energy L. Relativistic and non-relativistic mean-field calculations are indicated with open symbols 87, while ab initio results using the 34 non-implausible samples are indicated with filled circles. Experimental extractions of R_skin(208Pb) shown in the figure are from PREX 5, MAMI 4, RCNP 64.
Extended Data Figure 8 | Parameter sensitivities in ab initio models and Skyrme parametrizations. a, Tuning the C_1S0 LEC in our ab initio model to adjust the energy slope parameter L while compensating with the three-nucleon contact c_E to maintain the saturation density ρ_0 and energy per nucleon E_0/A of symmetric nuclear matter. The green pentagons correspond to results with one of the 34 interaction samples while the black squares indicate the results after tuning the C_1S0 and c_E of that interaction. The right column shows the scattering phase shift δ in the 1S0 channel at 50 MeV, the ground-state energies of 3H and 16O and the point-proton radius R_p in 16O. The red diamonds and the dashed lines indicate the experimental values of target observables and the red bands indicate the corresponding c_I = 3 non-implausible regions, see Eq. (4).
11,473
2021-12-02T00:00:00.000
[ "Physics" ]