Corporate social responsibility vs. financial interests: the case of responsible gambling programs
Corporate social responsibility (CSR) is supposed to play an important part in public health. Critics argue that opposing financial interests can prevent companies from implementing effective CSR programs. We shed light on this discussion by analyzing the CSR programs of gambling operators. Two data sets are used: (1) seven responsible gambling (RG) programs of German slot machine hall operators and (2) a survey carried out among 512 problem gamblers in treatment who play primarily in slot machine halls. Results show that the RG programs list mostly mandatory measures, with one major exception: approaching possible problem gamblers with the intention to help them. However, operators' staff approach only 1% of problem gamblers. We argue that the observed ineffective implementation of voluntary CSR measures is grounded in the strong financial incentive of operators to serve precisely the group they should stop from playing: problem gamblers. We conclude that financial interests reduce the effectiveness of CSR.
Introduction
Voluntary self-regulatory measures often come under criticism for being implemented only to deter stronger mandatory regulation. In this article, we examine corporate social responsibility in an area that has been a controversial business for centuries and is today a concern for public health: gambling (Leung and Snell 2017). While gambling yields pleasure to some consumers, it can harm pathological gamblers and their environment. Because gambling is often considered a demerit good requiring different treatment from other goods, regulators regularly demand strong corporate social responsibility (CSR) from gambling operators.
Alcohol experts have voiced similar concerns about the self-regulatory measures and responsible drinking practices of the alcohol industry (Savell et al. 2016). Because of the industry's profitability objectives, some studies argue that the alcohol industry deliberately produces responsible drinking campaigns for marketing purposes (Barry and Goodson 2010). These campaigns not only fail to achieve significant behavioral change among drinkers but can even be considered harmful, as they reinforce current drinking behaviors (Pettigrew et al. 2016).
In 2012, a similar critique was raised in Germany when it became mandatory by law for private operators of slot machines to have a responsible gambling (RG) program, while the operators were given broad scope in drafting these programs. While some measures within these programs were mandatory, the idea was that operators would also implement voluntary measures. Critics raised the suspicion that such programs not only lack effective voluntary measures, but that even the mandatory ones might not be fully complied with because of the lack of sanctions. According to this critique, responsible gambling programs are an ineffective player protection measure that not only produces costs but might even hinder the implementation of more effective measures.
In this article, we investigate whether this critique is correct by (1) analyzing seven responsible gambling programs to determine if they contain voluntary measures and whether these measures can potentially yield effective player protection and (2) surveying 512 gamblers in treatment, who mainly gamble at slot machine halls, to discern whether the measures within the programs are put into practice.
Literature Review and Hypotheses
The primary goal of a private business, including gambling operators, is to maximize profits. Operators pursue this goal under regulatory constraints. However, the behavior of companies often has consequences that are external to their profit function and negatively affect the public, including health effects. In an attempt to deal with these consequences and achieve a balance among economic, environmental, and social imperatives, firms engage in corporate social responsibility (CSR) efforts (Swathi 2018). CSR has been defined as "a concept whereby companies integrate social and environmental concerns in the business operation and their interactions with their stakeholders on a voluntary basis" (The Commission of the European Communities 2001, p. 6). Cai et al. (2012) posed the question: Can firms in controversial industries be socially responsible while producing products harmful to human beings, society, or the environment? Examining the CSR engagement of firms supplying goods with negative health effects (e.g., tobacco, gambling, alcohol), their study investigated the relation between a firm's choice of CSR activities and its market value, finding that, after controlling for various firm characteristics, CSR engagement was positively associated with firm value, consistent with a value-enhancement hypothesis. In essence, this shows that businesses in controversial industries utilizing CSR as a means to improve transparency, strategies, and philanthropy also enhance firm value (Cai et al. 2012). This finding suggests that businesses have an intrinsic interest in engaging in CSR. But is that also true when CSR opposes financial interests, for example, when a firm sells addictive goods and is supposed to block access for some of its customers?
CSR in the alcohol market
The discussion on CSR in markets for demerit goods with negative effects on adult public health focuses mostly on alcohol. Under the general framework of corporate social responsibility, the alcohol industry has increased its 'responsible drinking' prevention activities. Most of these have been described as instrumental to the industry's economic interest (Babor and Robaina 2013) and designed to maximize long-term profits (Garriga and Melé 2004). The very term "responsible drinking" has been challenged by researchers, who consider such messaging to be intentionally ambiguous and potentially part of strategies to protect the industry's interests (Hessari and Petticrew 2017).
A content analysis study concluded that 'responsible drinking' campaigns strategically allow the industry to blur the presentation of health information and sometimes undermine official government advice on alcohol harm (Jones et al. 2017). Viewers have been shown to unequivocally interpret government-developed campaigns as warnings against the harmful consumption of alcohol, while industry messages are associated with a range of interpretations and are sometimes understood as encouragement to consume more alcohol (Jones et al. 2017). Thus, these initiatives serve instead to polish the image of the industry while being ineffective in moderating consumption (Jones et al. 2017).
Studies also indicate that the most cost-effective interventions are those, often neglected, that focus on total populations by controlling the availability, affordability, and marketing of alcohol, as well as drinking and driving (Casswell and Thamarangsi 2009; Room et al. 2005). By contrast, campaigns targeting individual behavioral change have not yet been proven to be significantly effective for harm reduction. On the contrary, they can sometimes prove counterproductive: most ads tend to focus on short-term harms, while the most difficult alcohol-related problems are associated with long-term use, again emphasizing individual risk management and responsibilization (Dunstone et al. 2017).
Responsible gambling, CSR, and public health
By the late 1990s, gambling expansion was established as a public health issue (Korn and Shaffer 1999), drawing attention to the need to address problem gambling and reduce associated harms to players and the community. In turn, governments and organizations providing gambling services, particularly electronic gambling machines, felt pressure to demonstrate their commitment to CSR and adopt responsible gambling (RG) policies (Hing 2002; Hing 2010; Reith 2008).
Calls to understand CSR in dangerous consumption industries, such as pharmaceuticals, tobacco, alcohol, and gambling, have been issued (Devinney 2009; Leung and Snell 2017). As reviewed by Leung and Snell (2017), CSR in the gambling industry has received scholarly attention, with studies investigating the impact of CSR on financial performance, consumers' perceptions of gambling, and casino employees' perceptions of RG. However, whether these RG programs achieve the positive effect on player protection and public health that they were designed for has not been sufficiently tested.
Since 2004, the construction of RG has primarily been associated with a series of position papers referred to as the Reno Model I-V (Blaszczynski et al. 2004; Blaszczynski et al. 2008; Blaszczynski et al. 2011; Collins et al. 2015; Ladouceur et al. 2017). Responsible gambling, as defined by the Reno model authors Blaszczynski et al. (2004), comprises policies and practices designed to prevent and reduce potential harms associated with gambling, incorporating a diverse range of interventions designed to promote consumer protection, community/consumer awareness and education, and access to efficacious treatment. In this light, the policies of gaming corporations and associated governmental bodies rest upon an individual's right to informed freedom of choice.
To classify RG programs as supporting public health, there is an expectation that these codes will exceed the mandatory measures demanded by law and include additional voluntary measures. Mandatory measures for gambling halls in Germany are, most notably, the legal age of 18 years and the prohibition of smoking and alcohol in slot machine halls (but not at machines in bars). The German gambling treaty of 2012 states that gambling operators are obliged to "encourage gamblers to play responsibly and prevent the emergence of gambling addiction" (§6 gambling treaty). This rather general rule is made more concrete in the appendix of the treaty, which defines the following mandatory measures for RG programs: (1) appointment of a director for the development of the RG program; (2) training of staff to detect potential problem gambling behavior; (3) regular documentation and reporting on the effects of the RG programs; (4) provision of information to players about, for example, chances of winning, a self-test for gambling problems, and a hotline for gambling problems; (5) a prohibition against sharing revenues with senior staff; and (6) a prohibition against any of the staff participating in gambling.
The combined requirements to train staff and to encourage players to gamble responsibly could be understood as operators having a duty to proactively approach potential problem gamblers with the intention to help them by, for example, referring them to treatment. While such an approach is a promising prevention effort, we do not see it as a mandatory rule but rather as a voluntary measure, for two reasons: (1) neither the gambling treaty nor its appendix explicitly states that the encouragement to play responsibly has to involve a proactive personal approach by staff members or a referral to treatment; (2) if mandatory, such an obligation would induce a liability of the operators to compensate pathological gamblers for damages in case of non-compliance, a liability that has not been recognized by courts. Still, we acknowledge that interpreting the approaching of potential problem gamblers as a mandatory measure is tenable. In that case, a violation of the rule would be more severe, since it would be mandatory rather than voluntary.
Revenue sharing with problem gamblers
Spending on gambling is highly concentrated in a small group of high-intensity gamblers (Fiedler et al. 2019). For example, 80% of revenue from fixed-odds sports betting is generated by 5.7% of high-intensity gamblers (Tom et al. 2014). In poker, revenue is even more concentrated: 1% of gamblers account for 60% of operators' revenue, 5% account for 83%, and the top 10% of players deliver 91% of operators' income (Fiedler 2012, p. 17). The dose-response relationship suggests that gambling problems and the amount of money spent are positively correlated (Currie et al. 2009; Brosowski et al. 2015) and hence that problem gamblers account for a relatively large proportion of spending.
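As an illustration of how such concentration figures are derived from player-level records, the following sketch computes top-x% revenue shares from a hypothetical, heavily skewed spending distribution; the data are simulated and not taken from the cited studies.

```python
import numpy as np

def top_share(spending, top_fraction):
    """Share of total spending contributed by the top `top_fraction` of spenders."""
    s = np.sort(np.asarray(spending))[::-1]   # highest spenders first
    k = max(1, round(top_fraction * len(s)))  # size of the top group
    return s[:k].sum() / s.sum()

# Hypothetical, heavily right-skewed spending distribution (log-normal).
rng = np.random.default_rng(42)
spending = rng.lognormal(mean=3.0, sigma=2.0, size=10_000)

for f in (0.01, 0.05, 0.10):
    print(f"Top {f:.0%} of players contribute {top_share(spending, f):.0%} of revenue")
```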
A number of studies provide evidence on the diverging spending habits of recreational and problem gamblers (Smith and Wynne 2002; Wiebe et al. 2006; Volberg and Bernhard 2006; Fiedler et al. 2019). The gambling report by the Australian Productivity Commission explores asymmetries in gambling expenses in even greater depth and concludes that addicted slot machine gamblers play more often, play in longer sessions, and wager more per time unit (Productivity Commission 2010). The share of gambling revenue derived from specific game forms is not well documented, but the existing literature suggests that slot machines account for a rather large share (Williams and Wood 2007). Fiedler (2016, p. 360) estimates the share of gambling hall revenues from pathological gamblers to lie between 67% and 77%. These findings show that slot machine operators have a strong financial incentive to serve problem gamblers. If they engaged in perfect prevention and excluded all problem gamblers as clients, they would lose a very large share of their revenues.
Hypotheses
In this article, we operationalize CSR as the introduction of effective RG measures that either prevent problem gambling or reduce existing harms to problem gamblers. Against the backdrop of the concentration of revenues in problem gamblers and the financial incentives of slot machine operators, we hypothesize that the voluntary RG measures in RG programs are limited in number and extent of implementation. We operationalize this in two working hypotheses:

Hypothesis HA: On paper, responsible gambling programs contain mainly mandatory measures, whereas additional effective voluntary measures are absent.

Hypothesis HB: In practice, voluntary measures of responsible gambling programs are not sufficiently implemented.
Methods
Our study contains two data sets: (1) the contents of the responsible gambling programs of seven German slot machine operators and (2) survey data of gamblers in treatment. The first data set was analyzed through a manual content analysis to test HA, and the second data set was used to test HB with SPSS and descriptive statistics.
Data set 1: Responsible gambling programs
The first data set consists of seven RG programs of the following companies:
1. Löwen Play
2. Crown Automaten
3. Schmidt Gruppe
4. AMA Arbeitsausschuss Münzautomaten
5. MoHR Spielhallen
6. Rösner Automaten GmbH
7. Vulkan Stern

These were all RG programs of slot machine hall operators available at the time of the analysis. Each of these documents was systematically analyzed in detail to qualitatively identify RG measures aimed at protecting gamblers. We classified each measure as either mandatory or voluntary based on current German legislation. This data set was used to test hypothesis HA by investigating whether the prevention measures outlined above are present in the respective RG programs.
Data set 2: Survey of gamblers in treatment
In 2014, a survey was conducted with 705 gamblers in treatment centers across Germany. The target group was problem gamblers, who are targeted by the measures of RG programs, particularly the voluntary measure of being approached by staff and referred to treatment. To reach this sample, a collaboration with the German treatment network was established, which distributed the questionnaire to treatment centers in 15 of the 16 German federal states. Each respondent was approached by the treatment center and gave their consent to answer the questionnaire and participate in the study. The final sample consisted of 655 gamblers who had actively participated in gambling over the past 12 months, and thus after the new law requiring RG programs was in effect; the remaining 50 abstinent gamblers were excluded from the analysis. The data set was then further reduced to the 512 gamblers who indicated that they primarily played in slot machine halls. Gamblers using other gambling forms were too few to derive statistically meaningful results for the RG programs of operators of other gambling forms.
Each respondent answered the CCCC test, a brief screen for problem gambling adapted from the CAGE alcoholism diagnostic screen (Mayfield et al. 1974). A preliminary diagnosis of pathological gambling is made if participants answer two or more questions positively (Petry 2003). Of the 512 primarily slot machine gamblers, 91.4% scored positively on two or more items. This data set was used to test hypothesis HB.
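A minimal sketch of how such a cutoff-based screen is scored follows; the item keys are placeholders, as the exact CCCC item wording is not reproduced in this article.

```python
CUTOFF = 2  # two or more positive answers yield a preliminary diagnosis (Petry 2003)

def preliminary_diagnosis(answers: dict) -> bool:
    """True if the number of positively answered screen items reaches the cutoff."""
    return sum(bool(v) for v in answers.values()) >= CUTOFF

# Placeholder item keys; the actual CCCC items are not reproduced here.
respondent = {"item_1": True, "item_2": False, "item_3": True, "item_4": False}
print(preliminary_diagnosis(respondent))  # True: two positive items
```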
Analysis of responsible gambling programs
To test Hypothesis HA, we analyzed the content of all seven RG programs regarding the player protection measures mentioned, distinguishing between voluntary and mandatory measures. All RG programs contain the mandatory RG measures. Some voluntary measures were also found in more than one program: "continually advancing the RG program" (6 out of 7), "supporting initiatives to protect players" (5 out of 7), "appointing a commissioner for prevention efforts" (4 out of 7), "training staff without customer contact" (2 out of 7), and "establishing a culture of player protection" (2 out of 7). All these measures are indirect in the sense that they do not have an effect on player protection per se but only through additional actions that might be induced by the measures, for example, a commissioner developing a new prevention measure. Only two voluntary direct measures were found, each in only one program. One relates to the enforcement of the program itself by "regular controls and mystery shopping." The other program mentioned denying entrance to a slot machine hall to people under the age of 21. Further voluntary measures were not found, including measures that could have been expected, for example, a (self-)limitation system for players, a (self-)exclusion system for players, or a reduction of the maximum playing speed. While we found mentions of banning alcohol, these referred only to slot machine halls, where alcohol is prohibited by law, and not to bars, where a ban could have been an effective voluntary measure; it could thus not be classified as a voluntary measure.
Most importantly, in all seven programs we found the mandatory measure to train staff, together with the further measures of approaching potential problem gamblers (7 out of 7) and referring them to the help system (6 out of 7). We thus have mixed results for Hypothesis HA: only a few voluntary measures that could have a direct effect on player protection were found in the RG programs, while some player protection measures that could have been expected were missing. However, all RG programs contain the potentially very useful measures of approaching problem gamblers and referring them to treatment.
Whether these measures are effectively implemented is tested in another data set.
Survey of gamblers in treatment
To test Hypothesis HB, gamblers in treatment who prefer to gamble at slot machines in gambling halls were asked in a survey (1) whether staff noticed when they had significant losses (interpreted as an indicator of gambling problems) and (2) whether they had the impression that staff noticed their gambling problems. The critical questions concerned the reaction of staff members upon seeing significant losses or gambling problems: did they approach gamblers to discourage gambling, as the RG programs intend them to do? Or were gamblers instead encouraged to continue gambling?
Of the 512 slot machine gamblers, 43.6% (n = 223) felt that their significant gambling losses were noticed by staff (see Fig. 1), while 41.7% did not perceive that staff were aware of their losses, and 14.1% reported being unsure. Furthermore, of the gamblers who felt that staff took notice of their large losses, 43.5% (n = 97) reported not being approached, i.e., staff neither encouraged nor discouraged them from continuing to gamble. Only 3.1% (n = 7) of these gamblers were strongly discouraged from continuing to gamble, and 9.9% (n = 22) were lightly discouraged. Most notably, 43.5% (n = 97) reported being either lightly or strongly encouraged to continue playing.
With respect to gambling problems noticed by staff (see Fig. 2), the survey results indicate that 36.9% (n = 189) of slot machine gamblers felt that staff members noticed their gambling problems. However, staff approached only 1% (n = 5) of problem gamblers and referred them to the help system (e.g., treatment, hotline), as outlined as a voluntary measure in the responsible gambling programs. This confirms hypothesis HB: voluntary responsible gambling measures are not sufficiently implemented.
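The reported percentages follow directly from the underlying counts. A minimal sketch reproducing the shares for the 223 gamblers whose losses were noticed, using the counts stated above:

```python
import pandas as pd

# Staff reactions among the 223 gamblers whose losses were noticed (counts from the text).
reactions = pd.Series({
    "not approached": 97,
    "strongly discouraged": 7,
    "lightly discouraged": 22,
    "encouraged to continue": 97,
})
shares = (reactions / reactions.sum() * 100).round(1)
print(shares)  # 43.5, 3.1, 9.9, 43.5 -- matching the percentages reported above
```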
Discussion
Social benefits of gambling arise through the pleasure of recreational gambling, revenue to operators, and the creation of jobs and taxes (Fiedler 2016). Conversely, gambling creates social costs in the form of problem gamblers and the harm they impose upon themselves, their environment, and society overall, through reduced quality of life, reduced income and productivity, treatment costs, and follow-up costs of delinquency (Fiedler 2016). Effective RG efforts reduce problem gambling and the associated costs and can thus be seen as CSR that benefits public health.
We argue that a gambling operator concerned about public health should implement an RG program that exceeds the mandatory measures and includes voluntary measures that promise a substantial effect on reducing problem gambling and its consequences. The content analysis of the RG programs showed mixed results. Most programs list the mandatory measures, while the voluntary measures mentioned are largely indirect, such as appointing a commissioner for the program, and the expected direct measures such as (self-)limitation or (self-)exclusion systems are missing. This non-finding can be interpreted as either a lack of knowledge on behalf of operators about these measures or a lack of interest in integrating them.
The one major exception was the voluntary measure to train staff to detect problem gamblers and refer them to help services and treatment. However, while this measure was mentioned in all RG programs, data from a survey among 512 gamblers in treatment centers who had primarily played slot machines during the last 12 months clearly showed noncompliance with this voluntary measure: only 1% of problem gamblers were actually approached with the intention to refer them to the help system. While we acknowledge the possibility that operators might simply lack expertise in the implementation of RG programs or provided inadequate staff training, we deem it more likely that the operators lack interest in effective RG efforts.

Fig. 1 Staff reaction to large gambling losses. Out of 512 gamblers in treatment, 223 had the impression that staff members noticed when they had large gambling losses. Among them, 29 gamblers felt discouraged and 97 encouraged to continue gambling.
We argue that slot machine hall operators lack interest in effective CSR because of financial incentives. As shown in the literature review, evidence clearly suggests that a large portion of revenues from slot machines comes from problem gamblers and that problem gamblers are the best customers for slot machine operators. With the majority of revenue coming from exactly the group of customers that should be stopped from gambling, it is obvious that effective RG measures are in opposition to the financial interests of operators. In fact, there is a strong financial incentive for gambling operators not to voluntarily implement effective prevention measures as the operator would otherwise lose the revenue share from problem gamblers, who contribute more than half of their total revenue. In a competitive market like the German slot machine business, a different operator might choose not to implement such effective RG measures and benefit at the expense of those operators who act in compliance with public health. It could thus be financially too detrimental for an operator to implement effective voluntary RG measures. This explanation could be generalized to the hypothesis that corporate responsibility as a set of rules does not work when it is opposed to financial incentives.
With respect to RG, this logic implies that one of the most important tasks is to remove the conflict between operators' financial interests and helping problem gamblers. A solution could be to remove the responsibility for preventing problem gambling from the operators, instead giving full authority to an external decision maker, for example, the Ministry of Health. The operators would then adhere to mandatory rules set by the external decision maker.
Limitations
A few limitations need to be considered when interpreting the results. First, the sample of gamblers in treatment is not fully representative of all gamblers who should be approached according to RG programs; staff should approach not only problem gamblers in treatment but all problem gamblers, as well as potential problem gamblers. It could be argued that gamblers in treatment might show (or have shown) more signs of problems than problem gamblers not in treatment, or at least more than potential problem gamblers, and would then be easier for staff to identify. Therefore, the results for the approached gamblers could be an overestimation, and the share of all problem gamblers who are actually approached could be even lower.
Second, we note that there is a level of discretion in the implementation of RG programs and staff training. For example, if an employee does not approach a potential problem gambler, the reason could be that they do not perceive the gambler's behavior as potentially problematic, or that they perceive the behavior as problematic but actively decide not to approach the gambler. Hence, approaching a gambler depends on a subjective component related to the staff as well as their willingness to approach gamblers. The particular reasons for not approaching a gambler could be manifold, e.g., pure laziness, being too busy, or being implicitly told not to approach customers and disturb their playing rhythm.

Fig. 2 Staff reaction to gambling problems. Of 512 gamblers in treatment, 189 had the impression that staff members noticed their gambling problems. Eight gamblers were actually approached by staff and six gamblers actually helped.
Finally, the results are based on a survey from 2014, while the mandatory RG programs only came into effect as of July 1, 2012. It is possible that the interviewed gamblers were responding based on their experience before the introduction of these programs; in that case, the reported approaches would likely be an underestimation. But this is rather unlikely, since only respondents who had gambled primarily in slot machine halls in the past 12 months were selected for the study. It could further be the case that some gamblers played in slot machine halls operated by companies that were not part of the analysis and that do not require their staff to approach potential problem gamblers. However, this can only represent a very small group, since all major operators were included in our study.
Conclusion
We examined the clash between financial interests and corporate social responsibility in the gambling market using the example of seven German slot machine operators and their RG programs. We analyzed the programs qualitatively and observed that they mainly include measures that are mandatory by law and only one voluntary measure deemed an effective tool to prevent gambling addiction: staff approaching potential problem gamblers with the intention to reduce their gambling exposure or refer them to the help system. Other voluntary measures, such as (self-)limitation and (self-)exclusion systems or a reduction in playing speed, were absent.
Furthermore, we analyzed the implementation of the voluntary measure of approaching potential problem gamblers using a survey of 705 gamblers in treatment. Of the 512 past-year gamblers who indicated gambling primarily in slot machine halls, only 29 were approached by staff members with the intention to discourage further gambling after significant losses had been noticed, while 97 gamblers were actually encouraged to continue gambling. In total, only five gamblers were approached with the intention to refer them to treatment after their gambling problems had been witnessed. We interpret this as a clear violation of the alleged goals of the RG programs. Together with the absence of other potentially effective voluntary measures in these programs, we conclude that the examined gambling operators do not surpass the level of player protection that is required and enforced by law. This finding would be even stronger if the measure of approaching potential problem gamblers were seen not as voluntary but as mandatory; in that case, operators would not only violate their own codes of conduct but show an even lower level of player protection than demanded by law.
We argue that this finding can be explained by the opposing interests of player protection and the financial incentive of serving problem gamblers, a significant customer base with high spending that accounts for a large share of the total revenue of slot machine halls. This result suggests that corporate responsibility is ineffective when it is opposed to financial incentives. We thus argue that businesses only act voluntarily in the interest of public health if doing so either generates more profits (e.g., it is rewarded by consumers) or at least does not harm profitability. Therefore, whenever public health is opposed to financial interests, enforced mandatory rules are the standard that can be expected in a competitive market, but nothing more. If a business upholds higher standards that harm its bottom line, its profits will decrease, and the business will lose out to competitors and eventually cease to exist. Corporate social responsibility thus works best when business interests are not at stake; otherwise, mandatory rules with enforcement are needed. Optimally, such mandatory rules will align financial interests with public health.
Acknowledgements Ingo Fiedler received funding from Hamburg's Ministry of the Interior to conduct this study (no grant number).
Funding Information Open Access funding provided by Projekt DEAL.
Compliance with ethical standards
Conflict of interest Ingo Fiedler, Sylvia Kairouz, and Jennifer Reynolds declare that they have no conflict of interest.
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Business",
"Economics"
] |
Use of a Distributed Micro-sensor System for Monitoring the Indoor Particulate Matter Concentration in the Atmosphere of Ferroalloy Production Plants
Airborne particulate matter (PM) is a concern for both occupational health and the environment, and, in the ferroalloy industry, the level of such particles in the air can be considerable. Small, low-cost sensors for measuring PM have generated interest in recent years, as they enable widespread monitoring of PM levels in the environment. However, such sensors have not yet been sufficiently tested under conditions relevant for the indoor environment of the metallurgical industry. This study aims to bridge this gap by benchmarking the commercial, low-cost Nova PM SDS011 particle sensor in two different ferroalloy plants. Benchmarking was performed against the Fidas 200S, which has been suitability-tested and certified according to the latest EU requirements (EN 15267, EN 16450). Twelve Nova sensors were tested over 3 months at a silicomanganese alloy (SiMn) plant, and 35 sensors were tested during 1 month at a silicon (Si) plant. The results showed that the low-cost Nova sensors exhibited all the same trends and peaks in terms of PM concentration, but measured lower dust concentrations than the Fidas 200S. The difference was larger at the silicon plant, which is in line with expectations, due to the size and mass fractions of particles in Si dust compared to SiMn dust, and to the larger measurement range of the Fidas, which measures down to 180 nm compared to the Nova's 300 nm. Despite the difference in absolute values, the Nova sensors were found to provide data suitable for comparing dust levels over time for different processes, at different locations, and under different operational conditions.
INTRODUCTION
Airborne particulate matter (PM) is considered a concern for both occupational health and the environment. The effects of PM on human health have been found to include asthma, lung cancer, and cardiovascular diseases, 1,2 the risk and prognosis of which relate to the size, composition, and properties of the particles. The smaller the particles, the further into the human system they can penetrate: into the bronchi for PM up to 10 µm (PM10), the alveoli for PM up to 2.5 µm (PM2.5), and even through the lungs and into the circulatory system for ultrafine particles below 0.1 µm (PM0.1). [3][4][5] In metallurgical plants producing silicon and silicomanganese alloys, the level of PM can be considerable. This PM is formed both mechanically, through fines generation during raw material handling, and thermally, through reduction and oxidation of raw materials and products. Thermally-generated SiMn fumes formed by oxidation of liquid (Si) and evaporated (Mn) metals consist mainly of Si, Mn, and O, forming various complex oxides. Secondary elements include Mg, Ca, Al, and K, and trace elements include Na, Fe, Zn, Cu, and Cl. 6 The main components of these fumes, SiO2 in Si plants and MnO in SiMn plants, have molar masses of 60.08 g/mol and 70.94 g/mol, and densities of 2.096 g/cm³ and 5.43 g/cm³, respectively.
The industrial average aerodynamic diameter of these fume particles, as recorded by an electrical low pressure impactor, is on average approximately 100 nm, 6 while scanning electron microscopy (SEM) analysis of fumes generated experimentally on a laboratory scale by Ma et al. 7 shows that the majority of protoparticles (the singular particles, in this case mostly spheres, defined before agglomeration and clustering) have a diameter between 50 and 200 nm, although fume particles generated at higher temperatures are notably smaller. However, for agglomerate size fractions measured through laser diffraction on the same dust, the majority of particulates have a diameter in the range of 500-2000 nm and are also less influenced by temperature. Thermally-generated Si fumes formed by the oxidation of liquid Si consist mainly of Si and O, forming silica dust. 8 Average protoparticle sizes range from 66 to 91 nm. 8 These particles also agglomerate after formation, leading to much larger size fractions as measured through laser diffraction. Figure 1a and b shows SEM imagery of the thermally-generated fume particles from SiMn and Si production, respectively. 7,8

According to current EU regulations, exposure to PM10 in ambient air should be limited to a maximum of 50 µg/m³ averaged over a 24-h period, with a maximum of 35 permitted exceedances per year. The yearly average is limited to 40 µg/m³ for PM10 and 25 µg/m³ for PM2.5. Specific limits for other pollutants, such as SO2, NO2, Pb, As, Cd, Ni, and PAH, are also included in these regulations. 9 While dust analysis and distinguishing between different particles should be considered, these are not the focus of this study.

Workplace PM exposure is often monitored by personal portable devices, while ambient plant PM levels are often measured using one or more fixed measurement stations that measure for long periods at a time. These stations are expensive to set up, which limits the number of spatial measurement points that can realistically be achieved. The use of less-expensive, portable setups would circumvent this issue and allow for a much higher spatial resolution, which can be of particular use in the extremely varied environment of a metal production plant. A better spatial resolution allows for tracking the flow of particles in the plant and can serve as a tool for evaluating measures taken to reduce and capture PM emissions.
There are several categories of low-cost microsensors available that can provide better spatial resolution at a much lower price, down to less than 0.1% of the price of a state-of-the-art dust sensor. In the current study, however, the aim was to investigate and benchmark the performance, in terms of precision and reliability, of a specific low-cost sensor, the Nova PM SDS011 (hereafter, Nova), in two different metallurgical plant environments. The Nova sensor, a nephelometer developed for low-cost fume monitoring, was benchmarked against the Fidas 200S (hereafter, Fidas), a state-of-the-art optical particle counter (OPC).
Small particles scatter light of varying wavelengths given their size and optical properties, and this scattered light can be measured and correlated to a PM concentration. In the case of nephelometers, particles are measured as an ensemble, and the scattered light is measured across a wide range of angles. The total scattering amplitude is correlated to a calibrated mass measurement, such as from a filter sampler. 10 OPCs work in a very similar way to nephelometers, but, instead of measuring a number of particles in an ensemble, they measure the light scattered by individual particles, and assign each pulse to a size bin based on its intensity. The optical properties, such as refraction index and particle shape, of the measured particles are of significant importance to the scattering of light, and, as such, it is equally important for nephelometers and OPCs to calibrate with the correct dust to achieve a high accuracy.
The output given by the Nova sensor is the concentration of particles with aerodynamic diameter < 2.5 µm (PM2.5) and < 10 µm (PM10), which are extrapolated through calibration values from the two size bins actually measured by the sensor, 0.3-0.8 µm and 0.7-1.7 µm, respectively. The Nova sensor does not yet have any EU certifications. The Fidas 200S measures individual particles with aerodynamic diameters in the range of 0.18-100 µm, with the output coming in the form of the concentration of particles below certain sizes: 1, 2.5, 4, and 10 µm (PM1, PM2.5, PM4, and PM10). Precise optics, high light output from the polychromatic LED used, and powerful signal processing using logarithmic A/D conversion allow the Fidas to detect particles down to 180 nm in diameter, 11 and, as it dries the sample fumes before they reach the sensor, it is better suited for measurements in high relative humidity environments. The Fidas is approved for simultaneous monitoring of PM10 and PM2.5 according to standards VDI 4202-1, VDI 4203-3, EN 12341, EN 14907, EN 16450, and the EU Equivalence Guide GDE, and certified in compliance with standards EN 15267-1 and -2, 11 which specify maximum permissible measurement uncertainties and testing requirements. 9 The quality of the components used is also a potentially important factor with regard to measurement stability and sensor lifetime. The central technical parameters of the two sensors are described in Table I.
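How a two-bin sensor output is extrapolated to PM2.5 and PM10 estimates can be sketched as follows. The calibration factors below are purely hypothetical placeholders, since the Nova's internal factory calibration is not published; the sketch only illustrates the principle of fixed scaling from measured bins.

```python
# Sketch of extrapolating PM2.5/PM10 mass concentrations from two measured
# size bins, as low-cost sensors like the Nova do internally. The calibration
# factors are hypothetical placeholders; the manufacturer's values are not public.
CAL_PM25 = 1.8  # hypothetical: mass in the 0.3-0.8 um bin -> total PM2.5
CAL_PM10 = 2.5  # hypothetical: mass in the 0.7-1.7 um bin -> coarse contribution

def extrapolate_pm(bin_small_ugm3: float, bin_large_ugm3: float) -> tuple:
    """Scale the two directly measured bins up to PM2.5 and PM10 estimates.

    The fixed scaling assumes a particular particle size distribution; if the
    real aerosol has fewer coarse particles than the calibration aerosol, PM10
    is overestimated -- the behavior later observed for the Si fumes.
    """
    pm25 = CAL_PM25 * bin_small_ugm3
    pm10 = pm25 + CAL_PM10 * bin_large_ugm3
    return pm25, pm10

print(extrapolate_pm(bin_small_ugm3=40.0, bin_large_ugm3=15.0))
```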
The Nova sensor has been the subject of several studies in varied settings. Genikomsakis et al. performed mobile field testing, comparing the Nova with an AP-370 by HORIBA suitable for continuous air pollution measurements, on an electric bike in the city of Mons, Belgium. PM values ranged from 0 to 5 µg/m³, with the resulting R² values ranging from 0.93 to 0.95 after taking temperature and relative humidity into account. 13 Badura et al. compared a group of three copies of the Nova sensor, together with groups of three other similarly low-cost systems, in a common box under the same measurement conditions over half a year near a park and a residential area in Wroclaw, Poland. The Nova was found to be one of the most precise in terms of reproducibility between units, and also when compared to the control unit, with an R² value of 0.82 using 15-min averages, but it was found to be sensitive to high relative humidities (RH > 80%). 14 Liu et al. tested the Nova sensor by co-locating three of the sensors at an official air quality monitoring station equipped with reference-equivalent instrumentation in Oslo, Norway, over a 4-month period, found inter-sensor correlation R values higher than 0.97, and confirmed the sensor's susceptibility to high relative humidity. They concluded that, when used correctly, the Nova sensor could have significant potential for implementing dense monitoring networks in areas with relative humidities below 80%. 5 In industrial settings, less work has been carried out to test these sensors, but the Nova sensors were found by the current authors to provide useful data in the aluminum industry, where the value of having multiple groups spaced out was shown. 15 When compared to similar low-cost sensors, the Nova has been shown to be among the best in several studies, 16,17 but, as mentioned, it is less reliable at higher humidities, which was further investigated by Jayaratne et al. along with other sensors, several of which showed an increase in measured PM level above a relative humidity of 75%. 18

This work examines how well the Nova sensor compares to the Fidas when measuring the PM concentration in two different metallurgical plants. The first measurement campaign was performed at a silicomanganese (SiMn) plant, and the second at a plant producing metallurgical-grade silicon (MG-Si). The thermally and mechanically produced fumes formed during the metallurgical processes at these plants, as outlined above, vary greatly, particularly in regard to size fractions, which is believed to affect the measurements. An additional objective is to study the long-term performance of the Nova sensors in high dust level environments.
INDUSTRIAL MEASUREMENTS
The complete setup for the Nova sensor system included the Nova PM SDS011 sensor connected to a microchip, together with a temperature and humidity sensor, 19 placed in a closed box, as shown in Fig. 2. The system was powered with 5 V, 1 A of electricity from an external power source, and, while the casing protects the components to a degree, the model was not airtight. During the measurement periods described in this work, locations were chosen based on the space available at the plant, particularly in regard to the large Fidas sensor, so as not to risk equipment failure due to heat, and so that the measurement equipment would not interfere with the running of the plants. While available areas with varying fume loads and fumes originating from different processes were found, continued studies are expected to include measurements much closer to the relevant areas, such as near the tapping zone, given adequate protection against heat radiation in newer sensor setups.
Silicomanganese (SiMn) Plant
At the SiMn plant, the measurement period was divided into two parts. The first, extended period lasted for more than 2 months with only the Nova sensors, while the subsequent calibration period lasted almost 24 h, during which the Nova sensors were placed close to the Fidas. For both periods, twelve Nova sensors were divided into three groups of four sensors stacked on top of each other. In both periods, the sensors were placed in a hallway adjacent to the metal tapping hall, with one wall section being an opening towards the furnace hall and another being the outer walls of the furnace itself. Figure 3 shows the approximate sensor locations for the measurement periods, where the ceiling height was 6.45 m and the entire section leading out to the smelting hall was open, allowing a free flow of fumes into the measurement area. During the extended period, four Nova sensors were placed at each of points 1, 2, and 3, roughly 1.5 m above the floor along the wall section. During the calibration period, all twelve Nova sensors were placed together at point 3, with the Fidas sensor placed with its fume intake approximately 30 cm from the Nova sensors.
Silicon (MG-Si) Plant
At the MG-Si plant, there was one measurement period of close to 1 month, with 35 Nova sensors placed in vertical groups of 5 near the inlet for a Fidas sensor for the full duration. The sensors were placed on a mezzanine floor above the furnace body, where the electrode feeding takes place, inside the hall in which tapping is performed. Figure 4 shows the approximate location of the sensor group along with the relevant process locations. All 35 Nova sensors were placed with their fume inlets within 20 cm of the Fidas' fume inlet. There is a fume hood designed to capture most of the tapping fumes, and there are also several layers of partial flooring between the tapping and stoking areas and the sensors. Fumes and smoke not captured by the fume hood will eventually flow up along the sides of the furnace and reach the sensors, and fumes that gather below the roof will also be picked up by the sensors, which are only a couple meters below.
The fumes measured at the silicon plant are assumed to be mostly thermally-generated oxides originating from the Si melt during the tapping process and during other periods in which molten Si is in contact with the open air.

Figures 5(a, b), 6(a, b), 7(a, b), and 8(a, b) show the PM10-PM2.5 and PM2.5 values measured by the Nova sensors during the extended periods of the two measurement campaigns, as well as during a shorter period together with the Fidas data, in addition to the diurnal patterns for PM10.
SiMn-Plant
The long-term measurements at the SiMn plant show erratic day-to-day changes in fume levels, but from the diurnal pattern it can be seen that the PM levels are generally lower in the evening and night and at a maximum around noon. Another clear pattern is the bi-hourly peaks, which likely correlate to process routines such as tapping, casting, product transportation, and stoking that are relatively stable on a day-to-day basis. It is also easy to detect differences between days and periods, which could be correlated to changes in the weather, internal processes, routines, or events. For instance, the first week, as well as daytime on day 12, shows a clearly higher PM level compared to the latter half of the period shown in Fig. 5a.

When comparing the Nova and Fidas measurements, it can be noted that the two sensors pick up on most of the same peaks and changes in dust levels, both for PM2.5 and PM10-PM2.5. While there are some peaks where the difference is large, the trends are similar for most of the period. The difference seems to be larger for PM2.5, which is natural due to the lower minimum measurement boundary of the Fidas compared to the Nova. The level of the larger fumes (PM10-PM2.5) is generally slightly higher than the level of the smaller fumes (PM2.5) on average, but there are notable spikes with a higher level of fine dust, which could relate to specific process or workplace events producing and/or dispersing more fine particles.
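A diurnal pattern such as the one described here can be extracted from the raw time series by averaging over the hour of the day. A minimal pandas sketch with simulated data (variable names and the synthetic noon-peaked series are ours, not plant data):

```python
import numpy as np
import pandas as pd

# Hypothetical one-week PM10 series at 1-min resolution with a noon maximum.
idx = pd.date_range("2021-03-01", periods=7 * 24 * 60, freq="min")
rng = np.random.default_rng(0)
pm10 = pd.Series(
    60 + 25 * np.sin(2 * np.pi * (idx.hour - 6) / 24) + rng.normal(0, 5, len(idx)),
    index=idx, name="pm10_ugm3",
)

# Diurnal pattern: mean concentration for each hour of day across all days.
diurnal = pm10.groupby(pm10.index.hour).mean()
print(diurnal.round(1))  # peaks around noon, minimum at night
```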
Si-Plant
For the long-term Nova measurements at the Si plant, shown in Fig. 6a, one can see that the fumes have an overall much larger fraction of PM2.5 compared to PM10-PM2.5, while the changes in PM levels are slightly less varied than at the SiMn plant, with the average number of significant peaks per day being around one for this period. The diurnal pattern shows the opposite trend compared to the SiMn plant, with higher values in the morning and the lowest values around noon. The difference between the peak values and the baseline fume level is higher at the Si plant, which might stem from the closer proximity to the fume source at this location.
Also at the Si plant, it can be noted that the two sensor types pick up on most of the same peaks and changes in dust levels, both for PM2.5 and PM10-PM2.5. Here, a notable difference in fume levels is apparent, with the difference being largest for PM2.5, which was also the dust fraction with the most variation over time. The larger difference in especially the PM2.5 measurements at the Si plant is believed to be due to better calibration of the Fidas, which was calibrated towards SiO2 and should therefore make quite accurate assumptions for particle density and, to some degree, the optical properties.
The largest fraction measured by the Fidas for most of the period is PM1, as can be seen in Fig. 9b. The fraction of PM10-PM2.5 is quite low for Si, with the exception of a few clear spikes.
When considering the dust level differences between Si and SiMn fumes, it is important to note the difference in density between the fumes, as the measuring equipment has to calculate the mass of the detected fumes to provide the standard units for PM (µg/m³). For the Si fumes, a typical density used is that of amorphous Si (2.2 g/cm³). 6 For the SiMn fumes, a typical density model would be to assume pure MnO (5.37 g/cm³), which is most prominent in the SiMn fumes, almost to the exclusion of other elements, when generated from SiMn melts below 1500 °C. 7 With the sensors not being calibrated for the specific dusts and their densities, it can be assumed that there will be a similar discrepancy in the measured fume mass per volume as there is a difference in fume density. In this case, the density of the lighter fumes (Si) is less than half the density of the heavier fumes (SiMn), which speaks to the necessity of calibrating for the correct fumes when using PM sensors to avoid inaccurate data.
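A first-order post-processing correction for this density mismatch is to scale the reported mass concentration by the ratio of the assumed true particle density to the density underlying the sensor's calibration. A sketch using the densities quoted above; the calibration density is a placeholder, since the Nova's factory value is unknown:

```python
# First-order density correction for optically derived PM mass concentrations.
# The sensor converts scattering to mass assuming a calibration density; if the
# real particles are denser, the true mass concentration is proportionally higher.
ASSUMED_CAL_DENSITY = 1.65  # g/cm^3, placeholder: the Nova's factory value is not published

DUST_DENSITY = {
    "Si fume (amorphous Si model)": 2.2,    # g/cm^3, value used in the text
    "SiMn fume (pure MnO model)": 5.37,     # g/cm^3, value used in the text
}

def density_corrected(pm_ugm3: float, true_density: float,
                      cal_density: float = ASSUMED_CAL_DENSITY) -> float:
    """Rescale a reported mass concentration to the assumed true particle density."""
    return pm_ugm3 * true_density / cal_density

for dust, rho in DUST_DENSITY.items():
    print(f"{dust}: 100 ug/m3 reported -> {density_corrected(100, rho):.0f} ug/m3 corrected")
```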
The calibration of low-cost sensors would require a carefully designed setup, but, once in place, the time and cost of each calibration should be low. Having multiple sensors in each group makes noticing outliers and sensor drift much easier compared to having a single sensor, and, while significant drift was not seen across the sensors during the 1-2 months they were tested in this work, re-calibration is expected to be necessary at least once during the Nova sensor's 1-year lifespan.

As can be seen in Fig. 9(a and b), the largest fraction of PM measured by the Fidas is almost always PM1 for Si fumes, and usually by a large margin. For SiMn, the largest fraction varies between PM10-PM4 and PM1. This seems in line with the deviation found between the Nova and the Fidas, and how it differs from SiMn to Si fumes, as the Fidas measures particles in the range of 180-300 nm, which the Nova does not. It seems evident that the Si fumes contain more of the smaller agglomerates, which in turn leads to a larger deviation between the Nova and the Fidas for Si fumes. The variations in size fractions over time can be related to events and activities in the vicinity of the sensors, as different fume sources are likely to produce different fumes.

Quantitative Measurement Differences Between the Nova and Fidas Sensors

From the comparisons between the Nova and the Fidas shown in Fig. 10, the difference between the SiMn and Si fumes in regard to the fraction of larger particles measured by the Nova compared to the Fidas becomes apparent. For the Si fumes, the PM10-PM2.5/PM2.5 relationship between the two sensors shows that the Nova, while heavily underestimating the fume concentrations in general, as seen from the PM2.5 comparisons, actually overestimates the fraction of larger particles. This overestimation is likely due to the lower average particle size of the fumes at the Si plant, as can be seen in Fig. 6(a). Because the Nova cannot effectively "see" particles above 1.7 µm in diameter, and instead estimates PM10 values based on its calibration settings, as previously mentioned, it overestimates the concentration of larger particles when, in reality, the fraction of large particles is much smaller than that which it was calibrated for.
For the SiMn fumes, both ratios are much closer to one, showing that there was a smaller difference between the two sensor types than for the Si fumes. The Fidas also measures higher SiMn fume concentrations, but this could be due both to the difference in calibration, with larger measurements on the low end, and to the distance from the wall. For the Si fumes in particular, this points to the need for better calibration of the Nova sensor.
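The linearities quoted in this work (close to 1:1 at the SiMn plant, around 1:5 for PM2.5 at the Si plant) can be summarized by a least-squares slope through the origin over paired readings from a co-location period. A sketch with simulated data standing in for the actual paired averages:

```python
import numpy as np

def comparison_slope(nova: np.ndarray, fidas: np.ndarray) -> float:
    """Least-squares slope through the origin: fidas ~ slope * nova.

    A slope near 1 indicates agreement in absolute level; a slope near 5
    corresponds to the Fidas reading about five times higher, as seen
    for PM2.5 at the Si plant.
    """
    return float(np.dot(nova, fidas) / np.dot(nova, nova))

# Hypothetical paired 15-min averages (ug/m^3) from a co-location period.
rng = np.random.default_rng(1)
nova = rng.uniform(10, 80, size=100)
fidas = 5.0 * nova + rng.normal(0, 10, size=100)
print(f"slope = {comparison_slope(nova, fidas):.2f}")  # ~5, Si-plant-like behavior
```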
The heavier nature of Mn fumes is likely to make the measured concentration higher in the SiMn plant compared to the Si plant when no density calibration or post-processing has been performed, and this should be taken into account when reading the data in this work. It is not known to which density the Nova was calibrated when delivered, but the Fidas was calibrated for SiO2, and a difference in calibration could explain some of the differences between the measurements made by the sensors.

Deviations Between Individual and Groups of Nova Sensors

Figure 11a and b shows the relative deviation from the mean for the Nova sensors within one of the groups at the Si and SiMn plants, respectively, together with their 95% confidence intervals. The limited time period used for the deviation graphs is due to the failure of several sensors at the Si plant, as discussed further in the "Sensor Reliability" section, and the data from the SiMn plant were limited to a similar time period to allow for easier comparison. Due to the spacing in the placement of the three groups used at the SiMn plant, the relative deviation between the groups is not relevant here, but a figure showing the relative deviation between the groups from the Si plant is shown in supplementary Fig. S1 (refer to the online supplementary material). The data in Fig. 11a and b are presented to show how the deviation in measurements for the sensors in each group, or between the different groups, changes over time. Stable deviation curves relate to a systematic difference between the sensors that can be compensated for or mostly eliminated through calibration.
Only one group from each plant is used here to show the trend, but the remaining groups showed similar trends over time. Thus, from Fig. 11(a and b), showing the relative deviation within and between the sensor groups, one can infer that, over time, the individual sensors within each group tend to have a relatively stable deviation from the mean value, barring changes caused by the loss of sensors. Full 2-month comparisons of relative deviations at the SiMn plant showed a similar pattern over time as the 20-h period. For most of the sensors, the individual variation of the deviation is within the 15% relative deviation level stated by the manufacturer. 12 It is slightly larger than the maximum internal deviation of 5% between three Nova sensors found by Liu et al., 1 but the PM concentrations during those measurements were around ten times lower than in this study. That the internal deviation is to a degree stable over time is very useful information, as it implies that a large part of the deviation between the sensors could be corrected through simple calibration, where each sensor's measurements are multiplied by a correction factor or equation, in line with conclusions from similar work testing the viability of low-cost sensors. 20 It can be noted that the deviation is lower for the SiMn measurements, both in total relative deviation and in the variations of that value for each sensor. This could be due to the larger fraction of smaller particles in the Si fumes, as the fraction of particles in the size range where it is uncertain whether they would be detected by the Nova sensor would be much larger.
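Both the relative deviation from the group mean and the single-factor correction suggested above are straightforward to compute. A sketch with simulated group data; the matrix layout, the per-sensor gains, and the choice of the group mean as reference are our assumptions, not details from the study:

```python
import numpy as np

# Rows: time steps; columns: the co-located sensors of one group (hypothetical data
# with a stable multiplicative bias per sensor, as suggested by the deviation curves).
rng = np.random.default_rng(2)
true_pm = rng.uniform(20, 120, size=(500, 1))
sensor_gain = np.array([0.85, 0.95, 1.05, 1.10, 1.02])
readings = true_pm * sensor_gain + rng.normal(0, 3, size=(500, 5))

# Relative deviation of each sensor from the group mean at every time step.
group_mean = readings.mean(axis=1, keepdims=True)
rel_dev = (readings - group_mean) / group_mean
print("mean relative deviation:", rel_dev.mean(axis=0).round(3))

# One multiplicative correction factor per sensor from a calibration period
# (here against the group mean; a reference instrument could be used instead).
correction = group_mean.mean() / readings.mean(axis=0)
corrected = readings * correction
corrected_mean = corrected.mean(axis=1, keepdims=True)
rel_dev_after = (corrected - corrected_mean) / corrected_mean
print("after correction:", rel_dev_after.mean(axis=0).round(3))  # close to zero
```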
Sensor Reliability
Most of the sensors had already been used for several months in different campaigns before the campaign at the Si plant, and, as there was a consistently high concentration of dust in the areas where the sensors were placed in both plants, sensor failures were expected to some degree during both campaigns, particularly at the Si plant. Six sensors were removed from the pool of 35 after the measurement period at the Si plant, as the data they provided became erroneous rather than stopping completely, leaving a pool of at most 29 sensors. During the measurement period, more of the sensors cut out at some point, but restarting brought some of them back up. Of the twelve sensors in the SiMn campaign, three did not deliver measurements continuously through the entire measurement period, while at the Si plant almost all of the sensors cut out at some point. Over the entire measurement period, the mean up-time of the sensors was 21.7% at the Si plant and almost 100% at the SiMn plant. This limited the accessible data for the Si campaign, but, due to the many sensors placed during the campaign, the amount of data available is still considered sufficient for analysis, particularly using the periods and groups in which a larger fraction of the sensors were active. The highest measured relative humidities were below 40% at the SiMn plant and below 45% at the Si plant, significantly lower than the boundary of around 80% at which condensation inflates the PM readings of the Nova; humidity is therefore not considered to have influenced the PM readings in this work.
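The up-time figures quoted here can be computed by comparing a sensor's log against the grid of expected reporting slots. The sketch below assumes a one-minute logging interval, which is illustrative rather than a detail taken from the paper, and uses a synthetic log.

```python
# Illustrative sketch of how the mean up-time fractions quoted above could
# be computed: reindex a sensor's log onto the full grid of expected
# reporting slots and count the fraction that contain data.
import numpy as np
import pandas as pd

full_index = pd.date_range("2021-01-01", periods=10_000, freq="1min")
reported = np.random.default_rng(1).random(len(full_index)) < 0.22  # dropouts
log = pd.Series(25.0, index=full_index[reported])  # sparse PM2.5 readings

def uptime_fraction(log: pd.Series, full_index: pd.DatetimeIndex) -> float:
    """Fraction of expected reporting slots in which the sensor delivered data."""
    return log.reindex(full_index).notna().mean()

print(f"up-time: {uptime_fraction(log, full_index):.1%}")   # about 22%
```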
While the higher degree of failure during the Si campaign could simply be due to wear over time, it is also possible that the Si fumes affected the electronics to a higher degree than the dust from SiMn, leading to a faster decay in functionality. This is supported by the fact that the campaign at the SiMn plant lasted for more than 3 months and that, at the end of the campaign, all 12 sensors were functional after being reset and having their systems blown clear of excess dust. The sensors yielding erroneous data instead of stopping completely at the Si plant also support this theory, as previous campaigns showed no similar signs of malfunctioning beyond a complete stop in the flow of measurements. In that case, the problem should be solvable by using an airtight case for the sensor systems. In some cases, blowing through the system to clear it out was enough for the sensor to start working again, and in some cases just restarting the system worked, but for other sensors it did not; in those cases, replacement of malfunctioning parts or wires would most likely be necessary to get the sensor up and running again.
CONCLUSION
A low-cost PM sensor for PM2.5 and PM10, the Nova PM SDS011, was tested and benchmarked against the Fidas 200S sensor during two measurement campaigns at a SiMn and a MG-Si production plant, where 12 and 35 Nova sensors in groups of 4 and 5 were used, respectively. The long-term data (around 1 month) for the Nova sensors were studied with regard to the deviation within each group and the differences between the two plants. Short-term data (around 24 h) from both sensor types were studied to compare the deviation between the sensors for both PM10-PM2.5 and PM2.5. More detailed size-fraction comparisons were compiled from the Fidas data, highlighting the difference between the SiMn and Si fumes.
The main conclusions inferred within each category discussed above are:

- For measurements in both the SiMn and MG-Si production plants, the Nova sensors picked up almost all the same peaks as the Fidas sensor, and increases and decreases in fume levels were captured similarly by both sensor systems. Over time, clear diurnal patterns emerged, which show when the greatest amount of fuming occurs during each day.
- The PM2.5 comparison between the Nova and the Fidas at the SiMn plant showed a small spread and a linearity close to 1:1, with a small deviation towards the Nova measuring less than the Fidas. The same comparison at the Si plant showed more spread and a linearity of around 1:5, with the Fidas measuring overall around five times higher values for PM2.5. This is believed to be due in part to the Fidas being well calibrated for SiO2 and able to detect particles in the size range of 180-300 nm, which the Nova could not. In addition, the Nova factory calibration is possibly not adequate to accurately measure the SiO2 fumes, although repeated attempts at getting these details from the manufacturer were unsuccessful.
- The relationship between the Nova and the Fidas for larger particles (PM10-PM2.5) divided by smaller particles (PM2.5) is strongly clustered and shows a linearity close to 1:1 for the measurements at the SiMn plant. For the measurements at the Si plant, this relationship is more spread, with the Nova sensors measuring on average a much higher fraction of larger particles. This is believed to be due to the Nova overestimating the fraction of the larger particles (>1.7 µm) that it cannot measure directly, which becomes prevalent with the overall low concentration of larger particles in the Si fumes.
- Deviation within each group of Nova sensors, and between groups, for both the SiMn and Si campaigns showed a relatively stable deviation from the mean value. Given a stable deviation over time, it would be possible to compensate for the internal deviation of the Nova sensors through a calibration period, to obtain a much lower spread of measurements. For most groups, the spread was within ±20% relative deviation, close to the 15% relative deviation level provided by the manufacturer.
- For future industrial measurement campaigns, an improved and preferably airtight casing for the Nova system is considered important to improve the sensor lifetime. Using 4-5 sensors in each group, to have room for 1-2 failures before service and potential replacements are needed, is believed to provide sufficient lifetime for the system as a whole without causing unnecessary expense.
"Environmental Science",
"Engineering"
] |
Mitochondrial retrograde signaling connects respiratory capacity to thermogenic gene expression
Mitochondrial respiration plays a crucial role in determining the metabolic state of brown adipose tissue (BAT), due to its direct roles in thermogenesis, as well as through additional mechanisms. Here, we show that respiration-dependent retrograde signaling from mitochondria to nucleus contributes to genetic and metabolic reprogramming of BAT. In mouse BAT, ablation of LRPPRC (LRP130), a potent regulator of mitochondrial transcription and respiratory capacity, triggers down-regulation of thermogenic genes, promoting a storage phenotype in BAT. This retrograde regulation functions by inhibiting the recruitment of PPARγ to the regulatory elements of thermogenic genes. Reducing cytosolic Ca2+ reverses the attenuation of thermogenic genes in brown adipocytes with impaired respiratory capacity, while induction of cytosolic Ca2+ is sufficient to attenuate thermogenic gene expression, indicating that cytosolic Ca2+ mediates mitochondria-nucleus crosstalk. Our findings suggest respiratory capacity governs thermogenic gene expression and BAT function via mitochondria-nucleus communication, which in turn leads to either a thermogenic or storage mode.
and decreased expression of thermogenic genes including Ucp1 and Cidea 7. However, gene expression profiling in BAT with impaired respiratory capacity is incomplete, and the molecular mechanism by which mitochondria exert transcriptional control over those nuclear genes remains to be addressed.
In the present study, we explored the role of respiratory capacity in thermogenic gene expression by manipulating LRPPRC in an adipose-specific manner and by treating brown adipocytes with an inhibitor of mitochondrial respiration. We find that impaired respiratory capacity triggers a retrograde signaling pathway that represses thermogenic and oxidative genes, favoring decreased fuel oxidation and energy storage. Furthermore, we provide evidence that this information is transmitted via Ca 2+ -mediated mitochondrial retrograde signaling, which ultimately controls whether BAT participates in thermogenesis or energy storage.
Results
LRPPRC fat-specific knockout (FKO) mice exhibit impaired respiratory capacity in BAT. To examine whether respiratory capacity controls BAT gene expression in vivo, we generated fat-specific LRPPRC knockout mice (hereafter, FKO mice) by crossing LRPPRC floxed mice with Adiponectin-Cre mice. mRNA and protein levels of LRPPRC were reduced by >90% in BAT from FKO mice (Fig. 1a,c). Compared to WT mice (LRPPRC fl/fl), expression of mitochondrial-encoded ETC genes was globally reduced and COXI protein levels were also decreased (Fig. 1b,c). Interestingly, several nuclear-encoded subunits were also reduced at both the mRNA and protein level (Fig. 1c,d). Abrogated expression of the ETC subunits resulted in impaired activities of respiratory complexes (Fig. 1e). Electron microscopy and image analysis revealed that WT mitochondria exhibited tightly packed lamellar cristae, whereas FKO mitochondria displayed dysmorphic cristae architecture alongside a reduced number of cristae (Fig. 1f,g). This is in agreement with the previous observation that heart-specific loss of LRPPRC leads to disorganized cristae 13. Deficits in respiratory capacity were unlikely due to large changes in mitochondrial biogenesis, since markers of mitochondrial mass (VDAC, citrate synthase and mtDNA) were unchanged (Fig. 1c,h). Furthermore, lactate levels were increased 1.8-fold in FKO mice (Fig. 1i), consistent with previous studies demonstrating that pharmacological inhibition of the ETC causes increased lactate production due to increased glycolysis 14.

Impaired respiratory capacity attenuates thermogenic gene expression. Having established a model of deficient respiratory capacity in BAT, we assessed BAT function and gene expression. On gross examination, BAT from FKO mice housed at room temperature (22 °C at our facility) was pale and enlarged (Supplementary Fig. 1a, upper). Increased lipid deposition with unilocular droplets was apparent in histological sections, an appearance associated with reduced respiratory activity (Supplementary Fig. 1a, lower). Although Ucp1 mRNA levels were decreased, we observed that UCP1 protein was stabilized in FKO mice housed at room temperature (Supplementary Fig. 1b,c). 22 °C is a mild cold stressor to mice, and such stabilization of UCP1 protein in cooler environments has been reported 15. Upon acute cold exposure, these mice were not cold sensitive in spite of impaired respiratory capacity (Supplementary Fig. 1d). Although not formally assessed, augmented shivering thermogenesis due to housing under mild cold stress may compensate for UCP1-mediated non-shivering thermogenesis, enabling effective defense against cold. Cold also stimulates β-adrenergic signaling 3. Since β-adrenergic signaling is a key regulator of both thermogenic and respiratory programs 16,17, we sought to determine whether impaired respiratory capacity per se affects BAT function and gene expression under circumstances devoid of β-adrenergic stimulation. To do so, mice were acclimated at thermoneutrality (30 °C) for 4 weeks, a timeframe sufficient to offset the impacts of thermal stress. Even at thermoneutrality, FKO mice maintained larger lipid droplets in BAT (Fig. 2a). Like FKO mice housed at room temperature, thermoneutral-acclimated FKO mice displayed robust depletion of LRPPRC and severe reduction in levels of COXI and nuclear-encoded respiratory subunits, while VDAC was unchanged and CS was slightly reduced (Fig. 2b). In these mice, expression of thermogenic genes was severely decreased (Fig. 2c).
Notably, both Ucp1 mRNA and protein levels were severely reduced (Fig. 2c,d), and mice were exquisitely sensitive to cold stress (Fig. 2e).
We next assessed expression of genes that regulate fatty acid oxidation (FAO), adipogenesis, lipogenesis and mitochondrial biogenesis. Interestingly, FAO genes were globally reduced (Fig. 2f). Alongside decreased nuclear-encoded ETC genes, down-regulation of the FAO genes may favor transitioning of BAT into an energy-storing mode. In contrast, Pparg, a master regulator of adipogenesis, and its target lipogenic genes were unaltered or upregulated (Fig. 2g). Ppargc1b and Erra (Esrra) mRNA levels were reduced but not Ppargc1a mRNA (Fig. 2h). Although these genes are involved in mitochondrial biogenesis, as stated earlier, markers of mitochondrial mass were unchanged, suggesting that alterations in various gene programs were not simply the result of reduced mitochondrial biogenesis. Mice housed at room temperature showed almost identical expression patterns of the aforementioned genes (Supplementary Fig. 1b,e,f,g), suggesting that mitochondrial retrograde signaling acts independently of β-adrenergic signaling. Furthermore, in support of normal β-adrenergic signaling in FKO mice living at thermoneutrality, the relative fold change for induction of Ppargc1a and Ucp1 following a cold stress was comparable to control mice (Supplementary Fig. 2a,b). If some brown adipocytes still contained residual LRPPRC, possibly due to inefficient recombination, one would predict a normal fold change of gene induction following cold exposure. To exclude this possibility, we measured phosphorylated PKA, which is activated by β-adrenergic signaling. In BAT, pPKA was unchanged in FKO mice living at room temperature, further supporting that the β-adrenergic signaling pathway was not altered (Supplementary Fig. 2c,d).
In summary, these data indicate that impaired respiratory capacity triggers a retrograde signaling pathway that represses thermogenic and oxidative genes, favoring decreased fuel oxidation and thus energy storage. This may explain why lipid accumulation was increased in LRPPRC-deficient BAT.

Impaired respiratory capacity interferes with the recruitment of PPARγ to thermogenic gene promoters. We were interested in the transcriptional basis by which deficits in respiratory capacity affect thermogenic gene expression. PPARγ governs many aspects of brown fat development and maintenance 18,19. Protein levels of PPARγ and coactivators including SRC1 and PGC-1α, however, were unchanged in FKO mice (Fig. 3a). Even so, PPARγ has been shown to exhibit promoter specificity under certain metabolic conditions 20. We therefore queried whether the recruitment of PPARγ to various transcriptional regulatory units was altered, using ChIP assays (Fig. 3b). As shown in Fig. 3c, the recruitment of PPARγ to the enhancer region of Ucp1 and the promoters of other thermogenic genes was reduced in FKO mice. Because PPARγ is required for the expression of these genes 21,22, reduced recruitment to these regulatory regions might explain their reduced transcription. However, recruitment of PPARγ to the promoters of lipogenic genes was unchanged, with some minimally affected in FKO mice (Fig. 3d), a finding consistent with intact lipogenic gene expression (Fig. 2g). These data suggest that mitochondrial retrograde signaling influences promoter-specific recruitment of PPARγ, a metabolic switch that governs whether BAT adopts a thermogenic or storage phenotype.
Cytosolic calcium may mediate retrograde signals from mitochondria to nucleus. To further identify the mechanism by which thermogenic gene expression is attenuated, we established two different cell culture models with impaired respiratory capacity: genetic impairment of respiratory capacity via LRPPRC knockdown and pharmacological inhibition of a respiratory complex. Similar to our previous observation 7 and to FKO mice, brown adipocytes with LRPPRC knockdown exhibited decreases in ETC subunits and UCP1 (Supplementary Fig. 3). As a pharmacological model, we treated brown adipocytes with Antimycin A (AA), an inhibitor of complex III. Since electrons entering from complexes I and II converge at complex III, inhibiting complex III blocks the entire electron transit, which impairs respiratory capacity. As in BAT from FKO mice, lactate levels were increased in AA-treated brown adipocytes (Fig. 4a). Notably, AA treatment resulted in reduced mRNA levels of most of the thermogenic genes tested (Fig. 4b) but had a minimal effect on lipogenic genes, with some genes upregulated (Fig. 4c). UCP1 protein was also reduced, whereas PPARγ and the coactivators SRC-1 and PGC-1α were unaltered (Fig. 4d). Overall, both genetic and pharmacological impairment of the ETC in cell culture recapitulate the findings from FKO mice, providing in vitro models to study the downstream signaling pathway. These data also suggest that the effects of impaired respiratory capacity on thermogenic gene expression are cell autonomous.

Several studies have shown that ETC dysfunction leads to increased cytosolic Ca2+ levels 23-25. These studies demonstrated that Cathepsin L (Ctsl) was induced in a Ca2+-dependent manner. First, as an indirect measure of altered cytosolic Ca2+ levels, we quantified Ctsl mRNA in BAT from LRPPRC FKO mice and in AA-treated cells. Ctsl gene expression was induced in FKO mice housed at 22 °C and 30 °C (Fig. 5a) and in AA-treated cells (Fig. 5b), indicating elevated levels of cytosolic Ca2+. Next, we quantified steady-state levels of cytosolic Ca2+ and observed significant increases in LRPPRC knockdown cells (Fig. 5c) and AA-treated cells (Fig. 5d). We then examined whether reduction of free cytosolic Ca2+ levels can rescue the repression of thermogenic genes. Ca2+-free media rescued the reduced thermogenic gene expression in LRPPRC knockdown cells, the effect being partial for some genes (Fig. 5e). Ca2+-free media had no substantial impact on lipogenic genes (Fig. 5f). In addition, with minimal effects on lipogenic genes, BAPTA-AM, a cell-permeable form of the Ca2+ chelator BAPTA, partially rescued the decreases in thermogenic genes (Fig. 5g,h), suggesting that multiple mediators are involved in thermogenic gene regulation in both models. BAPTA was also able to partially reverse the AA-dependent induction of Ctsl (Fig. 5g), indicating that BAPTA effectively blocked Ca2+-dependent alterations in gene expression. In summary, under impaired respiratory capacity, mitochondrial-nuclear crosstalk is likely multi-factorial, relying in part on cytosolic Ca2+.
Finally, we tested whether increasing cytosolic Ca2+ mimics the effects of impaired respiratory capacity, to some extent, by silencing the sarco/endoplasmic reticulum (SR/ER) Ca2+-ATPase (SERCA) in brown adipocytes. Since SERCA transports Ca2+ from the cytosol into the SR/ER at the expense of ATP, cytosolic Ca2+ is expected to be increased in SERCA-deficient cells. Although three paralogous genes encode SERCA (Atp2a1, Atp2a2 and Atp2a3), mouse brown adipocytes express only Atp2a2 and Atp2a3, with the latter being induced upon differentiation (data not shown). Immunoblot analysis confirmed protein expression of ATP2A2 (SERCA2) in cultured brown adipocytes, but ATP2A3 (SERCA3) was undetectable with our immunoblotting methods in the same cells (Supplementary Fig. 4). Even if low levels of ATP2A3 protein are expressed, ATP2A3 has unusually low Ca2+ affinity, rendering it essentially inactive at normal intracellular Ca2+ concentrations (≤0.1 μM) 26. Therefore, we chose to silence a single isoform: ATP2A2. Two different sequences of ATP2A2 shRNA yielded moderate to severe silencing (Fig. 6a). Oil red O staining and unaltered PPARγ protein indicated no apparent effect of ATP2A2 knockdown on differentiation (Fig. 6a). Cytosolic Ca2+ was increased in cells with severe knockdown but not with modest knockdown (Fig. 6b); we speculate that any change caused by moderate knockdown of ATP2A2 falls outside the detection range of the method. Alternatively, although this work focused on Ca2+ concentration, Ca2+ signaling is also transduced via cytosolic Ca2+ oscillations 27. In cells with ATP2A2 (SERCA2) haploinsufficiency, there were no significant changes in baseline cytosolic Ca2+ levels 28; instead, the Ca2+ oscillatory pattern was altered 28. This may explain why modest knockdown of ATP2A2 still had an impact on thermogenic genes, and potential changes in Ca2+ oscillation may be another important factor, along with increased cytosolic Ca2+ levels, in LRPPRC-deficient cells, AA-treated cells and cells with severe knockdown of ATP2A2. Nonetheless, Ucp1 mRNA and protein were reduced in proportion to the extent of ATP2A2 knockdown (Fig. 6a,c). Levels of other thermogenic genes were similarly decreased (Fig. 6c), whereas lipogenic genes were marginally affected (Fig. 6d) and genes regulating mitochondrial biogenesis exhibited minimal changes (Fig. 6e). Finally, Lrpprc and mitochondrial-encoded respiratory genes were unaltered, indicating that SERCA knockdown affects Ca2+ trafficking through a distinct mechanism (Fig. 6e,f). Together, these data support that cytosolic Ca2+ is one of the second messengers for mitochondrial retrograde signaling in brown adipocytes.
Discussion
In this study, we tested whether reducing respiratory capacity in mouse BAT affects thermogenic gene expression and BAT function. We modeled impaired respiratory capacity by ablating LRPPRC in an adipose-specific manner. Impaired respiratory capacity activated a retrograde signaling pathway that attenuates thermogenic and oxidative gene expression. The transcriptional basis for this repression was the reduced recruitment of PPARγ to the promoters of those genes. Using shRNA against LRPPRC, an inhibitor of a respiratory complex and shRNA against the SERCA pump in cultured brown adipocytes, and conversely means of reducing cytosolic Ca2+ under respiration-impaired conditions, we also showed that Ca2+ mediates the crosstalk between mitochondria and nucleus. Overall, our work illustrates an adaptive coordination of respiratory capacity with the expression of BAT-enriched thermogenic genes.
Mitochondrial retrograde signaling is triggered by various mitochondrial stresses 29-31. This signaling pathway affects nuclear gene expression, resulting in a multitude of cellular adaptive responses 29-31. Our data highlight an adaptive response of brown adipocytes to impaired respiratory capacity, which is an unfavorable condition for thermogenesis. In contrast, adipose-specific loss of TFAM, an activator of mitochondrial transcription and positive regulator of mtDNA replication, produced no such defects in BAT 32,33. Despite the reduced expression of mitochondrial-encoded ETC genes, oxygen consumption, FAO and citrate synthase activity were paradoxically increased in TFAM-deficient BAT 32,33, which confounds interpretation. Importantly, the phenotypic similarities between LRPPRC ablation and pharmacological inhibition of a respiratory complex exclude pleiotropic effects of LRPPRC loss. Moreover, impaired respiratory capacity was associated with remodeling of the oxidative program in BAT. In LRPPRC FKO mice, gene programs involved in mitochondrial respiration (nuclear-encoded) and fatty acid oxidation were impaired. Some of the changes might be explained by reduced expression of Ppargc1b and Erra (Esrra), both of which govern mitochondrial biogenesis and fatty acid oxidation 17. It is noteworthy that LRPPRC depletion in hepatocytes had no overt effect on the oxidative gene program 34. This differential gene regulation suggests tissue specificity of mitochondrial retrograde signaling. Attenuated expression of genes involved in fatty acid oxidation was not simply due to decreased mitochondrial content. Interestingly, impaired respiratory capacity in BAT was not associated with a compensatory increase in mitochondrial content. This is in contrast to inhibition of oxidative phosphorylation (OXPHOS) in skeletal muscle, where induction of PGC-1 coactivators promotes mitochondrial biogenesis to presumably compensate for OXPHOS deficits 35. This dichotomy is interesting, as it indicates that the status of respiratory capacity in BAT globally determines adipocyte function: storage versus heat dissipation. With normal respiratory capacity, BAT is committed to function as a thermogenic organ, and thermogenic and oxidative gene expression is maintained. However, upon impaired respiratory capacity, a condition unfavorable for thermogenesis, thermogenic and oxidative gene expression is suppressed. Concurrently, glycolysis is increased, which can supplement ATP and enhance de novo lipogenesis 36. These mechanisms converge to reprogram BAT into a storage mode.
One could intuitively believe that coordinated regulation of thermogenic and lipogenic programs is required to ensure a continuous fuel supply for thermogenesis in BAT. Indeed, cold promotes the lipogenic function of BAT by inducing certain lipogenic genes and enhancing lipogenic flux to replenish lipid pools that are rapidly consumed during thermogenesis 37,38. However, under our experimental conditions for mice and cultured brown adipocytes, which involve mild cold stress or no apparent cold challenge, we do not observe such coordinated regulation of thermogenic and lipogenic functions in BAT. Our work therefore suggests that the respiratory capacity-Ca2+ axis is linked to 'basal' expression of thermogenic genes but not lipogenic genes. In addition, many thermogenic and lipogenic genes are controlled by PPARγ, with the thermogenic genes generally coactivator-requiring and the lipogenic genes coactivator-independent. The functions of PPARγ coactivators may be specifically impaired in BAT with impaired respiratory capacity, which could lead to independent regulation of thermogenic and lipogenic genes, at least at the basal state.
While we have shown that reduced recruitment of PPARγ to the promoters of thermogenic genes may be responsible for their attenuated expression, precisely how PPARγ is dislodged from those promoters remains unknown. As stated above, a coactivator complex consisting of PGC-1α, SRC-1/3 and other general coactivators is necessary for PPARγ-dependent thermogenic gene expression but is dispensable for expression of lipogenic genes 39,40 . Based on our findings of attenuated (coactivator-dependent) thermogenic gene expression but intact (coactivator-independent) lipogenic gene expression, disrupted coactivator complex may be a potential mechanism. SRC-1 and SRC-3 are shown to be jointly required for recruitment of PPARγ to a PPRE site on the Ucp1 enhancer in BAT but not to lipogenic gene promoters 39 . One possibility is that impaired respiratory capacity interferes with the function of SRC family as a PPARγ coactivator, leading to diminished docking of PPARγ on the thermogenic promoters. In addition, given a model in which binding of PGC-1α to PPARγ promotes recruitment of SRC-1 and CBP/p300 41 , abrogation of physical interaction between PPARγ and PGC-1α could indirectly hinder PPARγ docking by sequestering SRC-1 and possibly SRC-3 from PPARγ coactivator complexes.
We provide evidence of cytosolic Ca2+ as a signal that may mediate mitochondria-nucleus crosstalk triggered by impaired respiratory capacity in BAT. It has been reported that cytosolic Ca2+ is increased by certain mitochondrial stresses, including depletion of mtDNA and inhibition of the respiratory chain, in various cell types, ultimately affecting nuclear gene expression 23,25,42,43. Defective respiratory chain function is also associated with deranged mitochondrial Ca2+ handling 44,45. We speculate that impaired respiratory capacity impairs Ca2+ buffering by mitochondria, leading to increased cytosolic Ca2+. In brown adipocytes with genetically or pharmacologically impaired respiratory capacity, increased cytosolic Ca2+ was at least in part responsible for the repressed thermogenic genes. To our knowledge, this is the first report describing Ca2+-mediated mitochondrial retrograde signaling in BAT. In contrast, Ca2+ is also known to positively regulate BAT thermogenesis. Unlike our model, β3-adrenergic stimulation of brown adipocytes led to a rise in intracellular Ca2+ evoked from mitochondria, the ER and entry across the plasma membrane 46. It has been suggested that Ca2+ influx mediated by TRPV2, a Ca2+-permeable non-selective cation channel, is required for isoproterenol-induced expression of Ppargc1a and Ucp1 in brown adipocytes 47. Moreover, activation of TRPM8, a cold-sensing non-selective cation channel, induced UCP1 expression through Ca2+-mediated PKA phosphorylation in brown adipocytes 48. This discrepancy suggests that mitochondrial retrograde signaling involves a Ca2+ signaling pathway distinct from the one in stimulated brown adipocytes. Furthermore, the source of Ca2+ could be an important determinant of the responses in brown adipocytes. For example, neurons respond distinctly to different sources of Ca2+ influx 49. This 'source-specificity hypothesis' may explain why the outcomes of the Ca2+ signaling pathways activated by β3-adrenergic stimulation (Ca2+ from the extracellular space, ER and mitochondria) and by mitochondrial retrograde signaling (Ca2+ from mitochondria) are different. Although unclear at present, investigating how Ca2+ influences PPARγ and possibly its coactivator complex may help elucidate the distinct mechanism, which could prove of therapeutic utility.
In summary, our study demonstrates that BAT coordinates its respiratory status with the expression of thermogenic and oxidative genes through retrograde signaling to determine its metabolic commitment. When respiratory capacity is impaired, BAT adopts a storage phenotype by turning off thermogenic genes and down-regulating genes involved in fuel oxidation. Our work may provide an important framework for future research on mitochondrial control of the thermogenic gene pathway and energy dissipation.
Methods
Animals. Lrpprc flox/flox mice were generated as previously described 50.

Cold exposure. Mice were acclimatized at 30 °C for 4 weeks. The mice were then housed individually and acutely exposed to cold (4 °C). Rectal temperature was measured hourly using a digital thermometer (MicroTherma 2T, Thermoworks) and a rectal probe (RET-3, Physitemp) for up to 8 hours. The end point was a 10 °C drop in body temperature (to approximately 27 °C), at which point mice were immediately euthanized.
Histology. For hematoxylin and eosin (H&E) staining, brown adipose tissue was collected, washed in ice-cold PBS and fixed in 4% paraformaldehyde with gentle shaking at 4 °C overnight. Subsequent procedures were performed by UMass morphology core facility.
Transmission electron microscopy (TEM). BAT was dissected and chopped finely in PBS, followed by overnight fixation in 0.1 M cacodylate buffer (pH 7.2) containing 2.5 M glutaraldehyde. Sample preparation and image acquisition were performed by UMMS core electron microscopy facility using FEI Tecnai Spirit 12 TEM.
Reverse transcription-quantitative PCR (RT-qPCR). Total RNA was isolated from cell culture using Trizol according to the manufacturer's instructions (Invitrogen). For mouse adipose tissue, the aqueous phase prepared from Trizol extraction was subjected to acidic phenol extraction (pH 4.4) to remove residual lipid, followed by purification using RNeasy (Qiagen) or GeneJET RNA columns (Thermo Scientific). cDNA was synthesized from 0.5-1 μg RNA using MultiScribe reverse transcriptase (Applied Biosystems). Quantitative PCR was performed using Fast SYBR Green Master Mix (Applied Biosystems) on a 7500 FAST Real-Time PCR system (Applied Biosystems). For normalization, several widely used internal control genes were tested in all experimental groups and the most stable one was selected. Relative gene expression was calculated by the comparative CT method. The coefficient of variation for the reference genes was less than 1% across samples. Primers are listed in Table S1.
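For readers unfamiliar with the comparative CT method, the calculation reduces to the 2^-ΔΔCT rule sketched below. This is a generic illustration, not code used in the study; the CT values and gene labels are invented.

```python
# Generic sketch of the comparative CT (2^-ddCT) calculation; the CT
# values and gene labels below are invented for illustration only.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene versus the control group."""
    d_ct_sample = ct_target - ct_ref             # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a hypothetical Ucp1 measurement in FKO vs. WT BAT.
fold = relative_expression(ct_target=26.0, ct_ref=18.0,
                           ct_target_ctrl=23.5, ct_ref_ctrl=18.1)
print(f"relative Ucp1 expression (FKO/WT): {fold:.2f}")   # < 1: reduced
```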
Quantification of mitochondrial DNA content. Approximately 5-10 mg of frozen brown fat was lysed in 300 μL tissue lysis buffer (50 mM Tris-Cl pH 7.5, 50 mM EDTA pH 8.0, 100 mM NaCl, 1% Triton X-100, 5 mM DTT and 100 mg/ml proteinase K) at 56 °C for 6 hours. DNA isolation and quantitative PCR were performed as previously described 50.

Immunoblotting. Approximately 10 mg of frozen brown fat or 50 mg of frozen inguinal white fat was placed in ice-cold RIPA buffer supplemented with protease inhibitor cocktail (Sigma), phosphatase inhibitor cocktail (Sigma) and sodium β-glycerophosphate. The tissue was then homogenized using a bead mill homogenizer (Qiagen TissueLyser). The lysates were vortexed vigorously for 5 seconds, incubated on ice for 10 minutes and cleared by centrifugation at 13,200 rpm for 15 minutes at 4 °C. Lysates from cell culture were prepared as above without using the TissueLyser. Protein concentration was determined using a BCA kit (Pierce). The indicated amounts of protein were separated on a polyacrylamide gel and blotted onto a PVDF membrane. The membrane was blocked in 5% non-fat milk in TBS-Tween, followed by incubation with primary antibodies directed against the proteins of interest and HRP-conjugated secondary antibodies. The protein signals were visualized with an Amersham ECL kit (GE Healthcare) or WestPico ECL kit (Thermo Scientific) and digitally recorded using an Amersham Imager 600 (GE Healthcare). The antibodies used are as follows: LRPPRC (produced in-house using mice); UCP1 (U6382, Sigma); PPARγ (sc-7273, Santa Cruz); COXI (ab14705, Abcam); COXVa (ab110262, Abcam); NDUFS3 (ab110246, Abcam); Citrate Synthase (GTX110624, GeneTex); VDAC (4866, Cell Signaling); GAPDH (sc-25778, Santa Cruz); ATP2A2 (sc-8095, Santa Cruz); ATP2A3 (sc-81759, Santa Cruz).
Lactate measurement. Lactate levels were measured in homogenates prepared from approximately 5-10 mg of BAT using Lactate Assay kit II (Biovision). For AA-treated cells, lactate secretion was measured in culture medium using the same kit.
Complex activity and citrate synthase activity. Complex activity and citrate synthase activity were measured in BAT homogenates as previously described 50,51.

Chromatin immunoprecipitation (ChIP). Interscapular brown fat was collected, washed with ice-cold PBS and finely minced. Minced tissue was cross-linked in 10 volumes of PBS containing 1% paraformaldehyde for 10 minutes at room temperature on a rotator. Cross-linking was quenched by adding glycine to a final concentration of 125 mM. The samples were then dounced on ice 10 times and washed twice with ice-cold PBS. Disaggregated tissue was placed in 1 ml of RSB buffer (3 mM MgCl2, 10 mM NaCl, 10 mM Tris-Cl pH 7.4, 0.1% NP-40 and protease inhibitor cocktail [Sigma]), dounced on ice 30 times, incubated on ice for 5 minutes and filtered through a 100 μm cell strainer. The homogenate was centrifuged and the pellet was resuspended in nuclei lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris-Cl pH 8.1 and protease inhibitor cocktail). The chromatin was subjected to three sonication cycles (a cycle of 10 minutes with a duty of 30 seconds on/30 seconds off) using a Diagenode Bioruptor. The samples were cleared by centrifugation, diluted in ChIP dilution buffer (1% Triton X-100, 2 mM EDTA, 150 mM NaCl, 20 mM Tris-Cl pH 8.0 and protease inhibitor cocktail) and incubated overnight at 4 °C with 2 μg of anti-PPARγ antibody (sc-7196, Santa Cruz). Immunocomplexes were recovered with protein A/G beads (Pierce) and the eluted DNA was further purified using the QIAquick gel extraction kit (Qiagen). Quantitative real-time PCR was performed using specific primers for the indicated gene promoters. Primers are listed in Table S2.
Cell culture. Primary stromal vascular fraction containing preadipocytes was isolated from the interscapular depot of P0-P2 newborn Swiss Webster mice (Taconic Biosciences) as previously described 7. Prior to induction of differentiation, primary brown preadipocytes were grown to >90% confluence in DMEM (Corning) containing 20% FBS, 20 mM HEPES and 1 mM sodium pyruvate. Immortalized brown preadipocytes were grown under the same conditions except that 10% FBS was used. For adipocyte differentiation, confluent cells were exposed to DMEM containing 0.5 μM dexamethasone, 125 μM indomethacin, 0.5 mM isobutylmethylxanthine, 20 nM insulin, 1 nM T3 and 10% FBS for 2 days, after which the medium was switched to DMEM containing 20 nM insulin, 1 nM T3 and 10% FBS and replenished every 2 days. On day 6 post differentiation, cells were treated as indicated. At least three independent experiments were performed.

Lentiviral transduction. shRNA oligomers were annealed and cloned into the pLKO.1-hygro lentiviral vector as described in the protocol available from Addgene. The 21-bp sense sequences for shLRPPRC and shATP2A2 are as follows: shLRPPRC: 5′-TGAAGCTAGATGACCTGTTTC-3′; shATP2A2 #1: 5′-GGCGAGAGTTTGATGAATTAA-3′; shATP2A2 #2: 5′-TGACTCTGCTTTGGATTATAA-3′; shScramble (shScr): negative control, Addgene #1864. To produce lentiviruses, HEK-293T cells were transfected with the pLKO.1-hygro construct, psPAX2 and pMD2.G using Lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. The medium was replaced after 16-20 hours of incubation with the DNA:Lipofectamine mixture. At 48 hours post transfection, the medium was harvested and passed through a 0.45 μm filter (Thermo Scientific). The medium was diluted 2-fold in fresh medium and added to subconfluent immortalized brown preadipocytes plated in a 12-well plate with 4 μg/ml polybrene. After overnight incubation, cells were replenished with fresh medium and incubated for an additional 24-30 hours. Cells were then trypsinized and seeded in a 100 mm dish in the presence of 400 μg/ml hygromycin, after which the medium was replaced every 48 hours. At day 5 post selection, hygromycin was removed and cells were used for differentiation.

Calcium measurement. Preadipocytes were plated and differentiated in a 96-well clear-bottom black plate (Costar). Fully differentiated cells (day 6) were washed with 150 μl HBSS (Gibco 14175-095) supplemented with 1.8 mM CaCl2, 0.8 mM MgSO4, 1 nM T3 and 20 nM insulin. Cells were then loaded with 4 μM Fluo 4-AM (Invitrogen) in 100 μl HBSS for 1 hour at 37 °C (30 °C for LRPPRC knockdown cells and ATP2A2 knockdown cells), followed by two washes with 150 μl HBSS. Fluorescence was measured at 485/520 nm in 100 μl HBSS using a microplate reader (POLARstar Omega, BMG LABTECH). Three independent experiments were performed and each experiment included biological duplicates.
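The plate-reader readout described above yields raw fluorescence per well; one common way to express steady-state cytosolic Ca2+ is as background-subtracted fluorescence relative to control wells. The sketch below illustrates that arithmetic only; the values, well layout and background handling are invented, not taken from the study.

```python
# Illustration of normalizing background-subtracted Fluo-4 fluorescence
# to shScr control wells; every number here is invented.
import numpy as np

background = 120.0                        # unloaded-well fluorescence
shscr = np.array([980.0, 1010.0])         # control duplicates
sh_lrpprc = np.array([1460.0, 1520.0])    # LRPPRC-knockdown duplicates

def relative_fluorescence(wells, control):
    """Mean background-subtracted signal relative to the control wells."""
    return (wells - background).mean() / (control - background).mean()

print(f"cytosolic Ca2+ (relative to shScr): "
      f"{relative_fluorescence(sh_lrpprc, shscr):.2f}")
```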
Statistics. Statistical analyses were performed using GraphPad Prism 6 (ver. 6.07). The statistical tests used are specified in the figure legends. Statistical significance was defined as P < 0.05. For not-significant (ns) comparisons, the exact P-value is shown when it falls below 0.07.
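As a generic illustration of the two-tailed unpaired Student's t-test named in the legends, the same comparison can be reproduced outside Prism, for example with SciPy; the sample values here are invented placeholders.

```python
# Reproducing a two-tailed unpaired Student's t-test outside Prism, using
# SciPy; the group values are invented placeholders.
from scipy import stats

wt = [1.0, 1.2, 0.9]
fko = [1.7, 1.9, 1.8]
t, p = stats.ttest_ind(wt, fko)   # unpaired, two-tailed, equal variances
print(f"t = {t:.2f}, P = {p:.4f}, significant: {p < 0.05}")
```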
"Biology",
"Medicine"
] |
Generating Convergent Laguerre-Gaussian Beams Based on an Arrayed Convex Spiral Phaser Fabricated by 3D Printing
A convex spiral phaser array (CSPA) is designed and fabricated to generate typical convergent Laguerre-Gaussian (LG) beams. A type of 3D printing technology based on the two-photon absorption effect is used to make the CSPAs with different featured sizes, which present structural integrity and a fabricating accuracy of ~200 nm according to the surface topography measurements. The light field vortex characteristics of the CSPAs are evaluated by illuminating them with lasers of different central wavelengths, namely 450 nm, 530 nm and 650 nm. It should be noted that the arrayed light fields out of the CSPA all change from a clockwise vortex orientation to a circular distribution at the focal plane and then to a counterclockwise vortex orientation. The circular light field is distributed 380-400 μm away from the CSPA, which is close to the designed focal plane of 370 μm. Convergent LG beams can be effectively shaped by the CSPAs produced.
Introduction
As is known, vortex light beams have demonstrated potential applications such as quantum communication [1-5], trapping and manipulation of particles [6-9] and biomedicine [10-12], due to their unique orbital angular momentum [13-17]. In particular, the light fields of Laguerre-Gaussian (LG) beams [18-21] contain the key Laguerre polynomials [22-25], which have the property of orthogonal normalization, so LG beams can be used to form a complicated optical mode. Generally, any vortex beam can be considered a linear combination of LG beams. Therefore, research on LG beams has become a hot topic, currently focused mainly on vortex beam characteristics. Because the beam size gradually widens with propagation distance, however, the application of such beams has still been greatly restricted.
We construct a kind of convex spiral phaser array for generating convergent LG beams based on the spiral phase plate (SPP) [26-29]. The fabrication of the micro-structures is completed using 3D printing technology [30-33]. According to the surface topography measurement charts, the micro-structures present an ideal manufacturing accuracy and the required structural integrity.
According to the measurements of their surface roughness, the processing accuracy is already about 200 nm. During the optical measurements, three lasers with different central wavelengths, namely 450 nm, 530 nm and 650 nm, are used. The light intensity distributions corresponding to the different lasers have the following characteristics in common: (1) the convergent LG beams are successfully generated and converge at a distance of about 380-400 μm, which is close to the expected focal plane at ~370 μm; (2) the light intensity distribution exhibits a clockwise vortex orientation before the focal plane of the CSPA and changes to a counterclockwise vortex orientation after the focal plane, which means a vortex reversal at the focal plane of the CSPA; (3) the light intensity is significantly low in the far field. The difference is that the number of spiral lobes varies with wavelength but matches the topological charge (TC) [34-38] corresponding to the light wavelength. It should be noted that the method of generating convergent LG beams achieved here opens a possibility for efficient long-distance propagation.
Design and Fabrication
In general, the method of combining an SPP and a convex lens can be used to generate convergent LG beams, but such a poorly integrated system limits application in imaging micro-systems, which inspired us to integrate a miniaturized convex lens and an SPP into a convex spiral phaser. The "Solidworks" software is utilized to draw a schematic diagram of the SPP and the convex spiral phaser, as shown in Figure 1. Comparing Figure 1a,b, it can be found that the top surface of the SPP is formed by a straight line spiraling around a central axis, whereas the top surface of the convex spiral phaser is formed by a circular arc spiraling around the central axis. The idea is to achieve full beam convergence through this structure, as with a convex lens.

The main design parameters are shown in Figure 2. The phase step height H is given by

H = lλ/(n − n0), (1)

where n0 represents the refractive index of the surrounding medium, n the refractive index of the material constructing the structure mentioned above, λ the wavelength of the incident light beams, and l the TC. Among them, l is generally an integer. If, for a given wavelength, H does not correspond to an integer l, the phase of the incident beams at each phase step will be discontinuous, destroying the circular intensity distribution of the transmitted light [39-42]. We set the central wavelength at ~650 nm, where the refractive index of Nanoscribe IP-Dip is 1.545, and take l = 5; the parameter H = 5.963 μm can then be calculated. The values of the key parameters are shown in Table 1. From the parameter table, it can be seen that our processing accuracy is at the sub-micron level. In the past, due to technological limitations, it was difficult to make a spiral phase plate with a relatively smooth surface, and generally a multi-level stepped spiral phase plate [43-45] was used instead. With the rapid advancement of 3D printing technology, it has become possible to shape a very smooth spiral phase plate with 3D manufacturing equipment, according to the processing principle based on the two-photon absorption effect.
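A minimal numerical check of formula (1), together with an illustrative height map of one phaser element, is sketched below. The ~370 μm focal length is the design value quoted later in the text; the element radius of 12.5 μm (half the 25 μm period) and the thin-lens spherical-cap profile are our own assumptions for illustration, not parameters taken from the paper.

```python
# Numerical check of formula (1) and an illustrative height map of one
# phaser element. Assumptions (not from the paper): element radius
# R = 12.5 um (half the 25 um period) and a thin plano-convex spherical
# cap for the lens part, with the ~370 um design focal length.
import numpy as np

n, n0 = 1.545, 1.0          # Nanoscribe IP-Dip vs. air
lam, l = 0.650, 5           # design wavelength (um) and topological charge
H = l * lam / (n - n0)
print(f"phase step height H = {H:.3f} um")   # -> 5.963 um, as in the text

R, f = 12.5, 370.0                    # element radius and focal length, um
r = np.linspace(0.0, R, 128)
theta = np.linspace(0.0, 2.0 * np.pi, 256)
rr, tt = np.meshgrid(r, theta)

z_spp = H * tt / (2.0 * np.pi)        # spiral ramp running from 0 to H
roc = (n - n0) * f                    # plano-convex thin lens: f = ROC/(n - n0)
sag = roc - np.sqrt(roc**2 - rr**2)   # spherical-cap sag, zero on axis
z = z_spp + (sag.max() - sag)         # convex profile, thickest on axis
print(f"total height range: {z.min():.2f} to {z.max():.2f} um")
```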
In the actual processing, the photosensitive material Nanoscribe IP-Dip is used, with a laser fabrication wavelength of ~780 nm, a pulse width of 120 fs, a repetition frequency of 80 MHz, a peak power of 6 kW, a horizontal resolution of ~200 nm and a longitudinal resolution of ~1 μm. First, the photosensitive material is spin-coated on a glass substrate and placed on a precision moving platform, and a focused laser beam is used to expose it locally. Generally, two-photon absorption only occurs in a limited three-dimensional area near the focus of the 3D printer, so the complete three-dimensional structure can be exposed by moving the platform; the unexposed areas are then dissolved in acetone, and finally the residual solution is air-dried with nitrogen. 5 × 5 CSPAs with periods of 25 μm and 30 μm are successfully fabricated. The manufacturing equipment (high-speed femtosecond laser three-dimensional direct writing system 1.0-HUST) and a fabricated sample are shown in Figure 3. From Figure 3b, we can see that, because the material used for fabricating the sample is transparent in the visible wavelength region, the surface morphology of the sample cannot be directly observed. So, a laser confocal microscope is utilized to present the detailed structural character of the fabricated sample.
Topography Measurement
First, a VK-X200K laser confocal microscope (Keyence Corporation, Japan) is utilized to perform the surface topography measurement. The overall morphology of the CSPA sample is shown in Figure 4. As shown, an initial test area was fabricated for equipment debugging before actually processing the CSPA samples. It can be seen that the overall morphology, including both the samples with different periods of 25 µm and 30 µm and a test area with an unfilled small corner, is roughly complete. The CSPA with a period of 30 µm has a small stain attached near its bottom edge.

Next, a high-power objective lens is used to observe the details of the fabricated CSPAs. Figures 5 and 6 are morphological diagrams of the CSPAs with periods of 25 µm and 30 µm, respectively. As shown in Figure 5, although some concave and convex burrs exist over a small region, the overall structure presents a good surface morphology and a high shaping precision of the phase step. According to the measurements, the structural error between the fabricated CSPA and the designed structure is within 5%, an ideal fabrication result.

It can be seen from Figure 6 that the structural error of this CSPA is also within 5%. However, several small concave and convex burrs on the surface of the structure, indicated by areas B and C and clearer than those in Figure 5, can be observed. This is mainly because the photoinitiator molecules are excited by the incident beams to generate free radicals, which then undergo a chain reaction; the monomer molecules polymerize to form high polymers. When the molecular weight in the polymer network reaches a critical value, the polymer will no longer be dissolved by subsequent immersion in acetone solution. The concave area-B appears because the molecular weight of the polymer formed by the chain reaction was insufficient, leading to dissolution when soaked in acetone and thereby forming a small pit. The convex area-C appears because the chain reaction occurred over a relatively large area, so that the surrounding monomer molecules also formed a polymer.

As shown, the height curve of area-A fluctuates up and down, which can be attributed to the machining accuracy limitation. Figure 7 is therefore an enlarged view of the overall contour of area-A. The roughness of area-A is quantitatively measured with the VK analysis software, as shown in Figure 8, and the roughness parameters are listed in Table 2. The maximum roughness is approximately 190 nm, which is still less than the 3D printing accuracy of 200 nm. So, the surface of the manufactured sample is not perfectly smooth and presents a stepped profile, which is a typical diffraction phase outline.
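The roughness figures reported from the VK software boil down to simple statistics of the measured height profile. A sketch of the two most common ones follows; the profile is synthetic and only stands in for an exported line scan.

```python
# Illustrative computation of two common roughness figures, the
# arithmetic-mean roughness Ra and the peak-to-valley height, from a
# sampled height profile; the profile here is synthetic, standing in for
# a line scan exported from the confocal microscope.
import numpy as np

z = np.random.default_rng(2).normal(0.0, 0.05, 2048)  # heights in um

z0 = z - z.mean()                       # remove the mean line
ra = np.abs(z0).mean()                  # arithmetic-mean roughness Ra
pv = z0.max() - z0.min()                # maximum peak-to-valley height
print(f"Ra = {ra * 1000:.0f} nm, peak-to-valley = {pv * 1000:.0f} nm")
```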
Experimental Measurement
We set up a measurement system for acquiring the common optical characteristics of the CSPAs. As shown in Figure 9, the laser beam first passes through a beam expander to form a uniform planewave and is then incident upon the CSPA under test. After exiting the CSPA, a vortex beam is formed. Finally, the shaped light intensity distribution is recorded by a beam profiler. The detailed measuring operations are as follows.
First, a red laser beam with a wavelength range of 635-671 nm is used to evaluate the fabricated CSPA. The light intensity distributions around the focal plane of the CSPA at different distances are shown in Figure 10. The light field exiting the CSPA initially exhibits a clockwise vortex distribution, and the vortex-like spot gradually contracts as the distance increases. At a distance of ~300 µm, the light intensity distribution matches the morphology of the phaser. When the distance is further increased to ~400 µm, the light intensity distribution becomes a micro-ring-shaped bright spot, the typical intensity distribution of an LG beam. As the distance increases further, the exiting light field becomes a counterclockwise vortex distribution, opposite to the initial vortex direction, and this counterclockwise vortex then spreads out gradually.
We then magnify the field intensity maps of the clockwise vortex distribution, the annular field intensity distribution, the counterclockwise vortex distribution, and the far-field distribution. The three-dimensional field intensity map obtained in a subsequent measurement does not correspond exactly to the two-dimensional field intensity map; there are subtle differences. Since Figure 10 shows that the two periodic structures evolve in essentially the same way, we present only the three-dimensional field intensity distribution of the structure with a period of 25 µm.
From Figure 11, we can see that before the focal plane the three-dimensional field intensity in the area corresponding to the structure is concave, with the field intensity significantly lower than in the surrounding area; the energy of the incident light field is mainly distributed in the blank areas between the structures. When the distance increases to 400 µm, the field intensity of the structure area is significantly higher than that of the surrounding area, and the intensity distribution presents a bright ring shape, the obvious signature of a vortex beam. As the distance increases further, the ring-shaped bright spots gradually spread out and the incident light energy redistributes into the blank areas between the structures. When the distance reaches 2500 µm, the overall field intensity of the array area is significantly lower than that of the surroundings, with the energy concentrated in the area outside the array structure. The measured radius of the bright ring is 10.75 µm. Scans and enlarged views of the annular LG beam at the focus are shown in Figure 11e-h; from the noise level outside the ring structure, the signal-to-noise ratios (SNRs) are 87.5 dB, 81.3 dB, 63.3 dB, and 64.2 dB, respectively. Since the SNRs are all above 60 dB, the measured light field distributions effectively characterize the vortex light field in the plane. Later, we replaced the red light source with a green and a blue light source with center wavelengths approximately in the ranges 501-561 nm and 430-473 nm, respectively. The light intensity distributions at different distances are shown in Figure 12. After passing through the convex spiral phaser, the field intensity distributions of the green and blue beams evolve in the same way as that of the red beam: the light intensity distribution changes from a clockwise vortex to a circular distribution at the focal plane and then to a counterclockwise vortex. In the far field, the light intensity of the area corresponding to the CSPA is likewise significantly lower. The number of lobes of the vortex field equals the TC corresponding to each wavelength.
Similarly, in Figure 13 we show the three-dimensional field intensity distribution of an array with a period of 25 µm under illumination by light sources with center wavelengths of 530 nm and 450 nm, respectively. The field intensity evolves in the same way as for the red beam: the incident light energy before and after the focal plane is mainly concentrated in the blank areas between the structures; at the focal plane, the field intensity presents the bright ring distribution of a typical vortex beam, located in the area of the structure; and in the far field, the field intensity over the array structure is significantly lower than that of the surrounding unstructured area. The bright-ring radii for incident light with central wavelengths of 530 nm and 450 nm are 11.25 µm and 12.58 µm, respectively.
As shown, the light intensity of all three light sources at the focal plane presents a ring shape, and the number of spiral lobes matches the TC. However, neither the ring distribution nor the spiral lobe shape is ideal, mainly for the following reasons. First, each light source is broad-spectrum rather than an ideal single-wavelength source, so beams of different wavelengths superimpose on the plane after passing through the CSPA, forming ghost noise. Second, sidelobe noise arises from interference between the central singular point and the edge of the aperture. In addition, the intensity difference between the inside and outside of the aperture edge leads to straight-edge diffraction, which intensifies the diffraction noise.
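The SNR values quoted earlier can be reproduced from raw beam-profiler frames by comparing the mean intensity on the bright ring with the fluctuation of the background outside it. A minimal sketch follows; the masks, the synthetic frame, and the 20·log10 amplitude convention are assumptions, since the paper does not state its exact SNR procedure.

    import numpy as np

    def snr_db(image, ring_mask, background_mask):
        # Mean intensity on the bright ring vs. RMS fluctuation outside it.
        signal = image[ring_mask].mean()
        noise = image[background_mask].std()
        # 20*log10 (amplitude convention) is an assumption; a power-ratio
        # convention would use 10*log10 instead.
        return 20.0 * np.log10(signal / noise)

    # Toy usage with a synthetic profiler frame containing a bright ring
    img = np.random.rand(256, 256)
    yy, xx = np.mgrid[:256, :256]
    r = np.hypot(yy - 128, xx - 128)
    ring = np.abs(r - 40) < 2
    print(snr_db(img + 10.0 * ring, ring, r > 80))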
Discussion
A 3D printing method is successfully used to fabricate CSPAs in this paper. Its theoretical processing accuracy is ~150 nm, and the actual processing accuracy is ~200 nm according to practical measurements of the surface roughness. Because of this machining accuracy, the final stepped surface is not ideally smooth, but it already satisfies the requirements for generating the needed vortex light field. Considering that focused ion beam (FIB) processing can reach an accuracy of ~4 nm, the CSPAs could be expected to present a very smooth and complete surface if fabricated by FIB. However, for a processing area of 150 × 150 µm or larger and an etching depth of more than ~6 µm, FIB processing would be time-consuming and expensive. The 3D printing technology thus achieves a better balance between processing accuracy and fabrication cost for manufacturing large-area micro-optical devices.
It should be noted that, from the light field characteristics and the light intensity distributions shown in Figures 9 and 11, secondary bright rings exist at the focal plane in addition to the main bright rings. From the perspective of sidelobe suppression, the main bright rings are created by the diffraction of incident light passing through the outer annular area of the CSPA, while the ineffective secondary bright rings, or sidelobes, are caused by the diffraction of the incident beam passing through the small central region of each vortex phaser of the CSPA. Therefore, if an annular spiral phaser with an appropriate width is used, the sidelobes of the vortex beam should be eliminated. In follow-up work, we plan to hollow out the small central region of each vortex phaser of the CSPA in order to obtain the best annular light field.
Next we discuss the relationship between the bright-ring radius and the wavelength of the incident light beam. In formula (2), r is the radius of the bright ring, l the TC, w0 the beam waist radius of the Gaussian beam, z the transmission distance, and k the wave number [46].
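Based on the variables just listed and the standard expression for the radius of peak intensity of an LG beam, formula (2) plausibly takes the form (a reconstruction under that assumption, not the paper's verbatim equation):

    r(z) = \sqrt{l/2}\; w_0 \sqrt{1 + \left( \frac{2z}{k w_0^{2}} \right)^{2}}    (2)

that is, r = \sqrt{l/2}\, w(z), where w(z) is the Gaussian beam radius at propagation distance z.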
Considering the safe optical power range that the detector can withstand and the convenience of matching the light source to the structured region, a beam expander is placed behind the laser, so a planewave source is used in the experimental measurement. Since a planewave is used, there is no corresponding Gaussian-beam parameter, so an exact relationship between the wavelength and the bright-ring radius cannot be drawn from the above formula. According to the measurement results, the bright-ring radii for the red, green, and blue beams are 10.75 µm, 11.25 µm, and 12.58 µm, respectively. We plot the relationship between the central wavelength of the incident beam and the radius of the bright ring in Figure 14. The figure shows that as the wavelength increases, the radius of the bright ring decreases, with the decreasing trend gradually flattening. This relation curve makes the variation of the bright-ring radius with the central wavelength clear, which is convenient for predicting the bright-ring radius of incident beams over a wider wavelength range and points a good direction for in-depth exploration of the vortex optical field distribution.
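The trend in Figure 14 can be sketched directly from the three measured radii; a minimal Python example follows (the 653 nm red center wavelength and the quadratic trend line are assumptions used only for visualization):

    import numpy as np
    import matplotlib.pyplot as plt

    # Measured bright-ring radii from the text; 653 nm is the assumed
    # center of the red source's 635-671 nm range.
    wavelength_nm = np.array([450.0, 530.0, 653.0])
    radius_um = np.array([12.58, 11.25, 10.75])

    # Quadratic trend through the three points -- pure interpolation,
    # not a physical model.
    coeffs = np.polyfit(wavelength_nm, radius_um, 2)
    lam = np.linspace(430, 680, 200)

    plt.plot(lam, np.polyval(coeffs, lam), label="interpolated trend")
    plt.plot(wavelength_nm, radius_um, "o", label="measured")
    plt.xlabel("central wavelength (nm)")
    plt.ylabel("bright-ring radius (um)")
    plt.legend()
    plt.show()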
Conclusions
In this paper, a new type of CSPA for generating convergent LG beams is proposed. A typical 3D printing technology is used to successfully produce the CSPAs, with a processing accuracy of ~200 nm according to measurements of the surface roughness; the fabricated CSPAs present the required appearance. Optical testing with three lasers of different central wavelengths demonstrates that the obtained light fields are essentially the same: the light intensity distribution changes from a clockwise vortex to a circular light field at the focus and then to a counterclockwise vortex, indicating a clear reversal of the vortex rotation, while in the far field the vortex light field spreads gradually. The CSPA can thus successfully generate convergent LG beams, laying a foundation for practical applications of LG beams over long distances.
MTA1 Coregulation of Transglutaminase 2 Expression and Function during Inflammatory Response
Although both metastatic tumor antigen 1 (MTA1), a master chromatin modifier, and transglutaminase 2 (TG2), a multifunctional enzyme, are known to be activated during inflammation, it remains unknown whether these molecules regulate inflammatory response in a coordinated manner. Here we investigated the role of MTA1 in the regulation of TG2 expression in bacterial lipopolysaccharide (LPS)-stimulated mammalian cells. While studying the impact of MTA1 status on global gene expression, we unexpectedly discovered that MTA1 depletion impairs the basal as well as the LPS-induced expression of TG2 in multiple experimental systems. We found that TG2 is a chromatin target of MTA1 and of NF-κB signaling in LPS-stimulated cells. In addition, LPS-mediated stimulation of TG2 expression is accompanied by the enhanced recruitment of MTA1, p65RelA, and RNA polymerase II to the NF-κB consensus sites in the TG2 promoter. Interestingly, both the recruitment of p65 and TG2 expression are effectively blocked by a pharmacological inhibitor of the NF-κB pathway. These findings reveal an obligatory coregulatory role of MTA1 in the regulation of TG2 expression and suggest that the MTA1-TG2 pathway contributes, at least in part, to LPS modulation of NF-κB signaling in stimulated macrophages.
Inflammation is an adaptive immune response triggered by the body against detrimental stimuli and conditions such as microbial infection and tissue injury (1, 2). Inflammation is usually a healing response, but it becomes detrimental if targeted destruction and assisted repair are not properly activated (3). Primarily, macrophages and mast cells recognize the infection and produce a wide variety of inflammatory mediators such as chemokines, cytokines, etc., all contributing to the elicitation of an inflammatory response (1). The inflammatory response is characterized by coordinated regulation of signaling pathways that regulate the expression of both the pro-inflammatory and the anti-inflammatory cytokines, including IL-1, IL-6, TNF-α, receptor activator of NF-κB ligand (RANKL), etc. (4). The inability of the host to regulate the inflammatory response results in sepsis, organ dysfunction, and even death (5). These inflammatory cytokines are under the tight control of the master gene transcriptional factor NF-κB in promoting inflammation and, in turn, innate immunity (6). Furthermore, transcriptional control of such NF-κB genomic targets is also under the tight control of nucleosome-remodeling coregulators and complexes, leading to either the stimulation or the repression of gene transcription at the molecular level (7-10).
In recent times, metastatic tumor antigen 1 (MTA1) has been recognized as one of the major coregulators in mammalian cells. MTA1 is a ubiquitously expressed chromatin modifier, having an integral role in nucleosome-remodeling and histone deacetylase (NuRD) complexes (11). MTA1 is up-regulated in a wide variety of human tumors and has been shown to play a role in tumorigenesis (11-14). MTA1 regulates transcription of its targets by modifying the acetylation status of the target chromatin and cofactor accessibility to the target DNA. Recent work from this laboratory has shown that MTA1 plays a key role in inflammatory responses both as a target and as a component of the NF-κB signaling by regulating a subset of lipopolysaccharide (LPS)-induced proinflammatory cytokines (8) or by directly regulating MyD88, a proximal component of NF-κB signaling (15). In addition to these functions, MTA1 also plays an essential role in Hepatitis B Virus X Protein stimulation of NF-κB signaling and in the expression of NF-κB target gene products with functions in inflammation and tumorigenesis (16).
In addition, MTA1 is a newly recognized regulator of inflammation, and it is in turn regulated by a number of genes including transglutaminase 2 (TG2) (17). TG2 is a multifunctional enzyme involved in several cellular functions such as apoptosis (18), signaling (19), signal transduction (20), cytoskeleton rearrangements and extracellular matrix stabilization (17), and wound healing (21). Aberrant activation and functions of TG2 have been linked with a variety of inflammatory diseases that include celiac disease, diabetes, multiple sclerosis, rheumatoid arthritis, and sepsis (5, 22). Results from a mouse model system revealed that TG2 is also involved in NF-κB activation, which induces the transcription of proinflammatory cytokines, causing continuous activation of the inflammatory process and contributing to the development of sepsis, whereas depletion of TG2 confers partial resistance to sepsis (5). Apart from its role in inflammation, elevated levels of TG2 are associated with many types of cancers (17, 23-25), a property shared with MTA1. In addition, increased expression of TG2 in cancer cells leads to increased drug resistance, metastasis, and poor patient survival (23, 24, 26), properties also shared with MTA1. Although TG2 expression parallels that of MTA1 during inflammation, it remains unclear whether these molecules are trans-regulated by inflammation. Here we report that MTA1 is an obligatory coregulator of TG2 expression and that the MTA1-TG2 pathway plays a mechanistic role, at least in part, in bacterial LPS modulation of NF-κB signaling in stimulated macrophages.
EXPERIMENTAL PROCEDURES
Cell Culture, Antibodies, and Reagents-All cells used in this study were cultured in Dulbecco's modified Eagle's medium/F12 medium supplemented with 10% fetal bovine serum. Raw264.7 and MCF7 cells were obtained from American Type Culture Collection, whereas HC11-pcDNA and HC11-MTA1 cells and MTA1+/+ and MTA1−/− mouse embryonic fibroblasts (MEFs) have been described in our earlier studies (27). Transglutaminase 2 (catalogue number 3557) and NF-κB p65 (catalogue number sc-372) antibodies were purchased from Cell Signaling Technology and Santa Cruz Biotechnology, respectively. Antibodies against MTA1 (catalogue number A300-280A) and RNA polymerase II (pol II) (catalogue number A300-653A) were purchased from Bethyl Laboratories, whereas normal mouse IgG, rabbit IgG, and antibodies against vinculin were from Sigma. Bacterial LPS was purchased from Sigma. Whenever needed, LPS was used at the concentration of 1 µg/ml of the medium. MTA1 siRNA (catalogue number M-004127-01) and NF-κB p65 siRNA (catalogue number sc-29411) were purchased from Dharmacon RNAi Technologies (Lafayette, CO) and Santa Cruz Biotechnology, respectively.
Microarray Expression Profiling and Analysis-The microarray expression profiling and analysis were carried out as described elsewhere (28). The RNA isolated by the TRIzol method was further checked for purity by analyzing on a 2100 Bioanalyzer (Agilent Technologies). From 2 µg of the total purified RNA, rRNA reduction was performed using the RiboMinus transcriptome isolation kit (Invitrogen). Reduced rRNA was then labeled using the GeneChip WT cDNA synthesis/amplification kit and hybridized on a GeneChip Mouse Exon 1.0 ST array. Scanning of the hybridized arrays was carried out using an Affymetrix GeneChip scanner 3000 7G. The obtained microarray data were analyzed using GeneSpring GX10.0.2. Gene ontology (GO) analysis was performed on statistically significant samples using GeneSpring GX10.0.2.
Quantitative Real Time PCR (qPCR) and RT-PCR Analysis-For qPCR and RT-PCR analysis, the total RNA was isolated by using TRIzol reagent (Invitrogen), and first-strand cDNA synthesis was carried out with SuperScript II reverse transcriptase (Invitrogen) using 2 µg of total RNA and oligo(dT) primer. cDNA from macrophages was synthesized using the FastLane cell cDNA synthesis kit (Qiagen). qPCR and RT-PCR were performed using gene-specific primers listed in supplemental Table 1. qPCR analysis was carried out using a 7900HT sequence detection system (Applied Biosystems). The levels of mRNA of all the genes were normalized to that of β-actin mRNA.
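The text states only that target mRNA levels were normalized to β-actin; a minimal sketch of the commonly used 2^-ΔΔCt quantification is given below, with hypothetical Ct values (the quantification model itself is an assumption, not stated in the Experimental Procedures).

    def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
        # 2^-ddCt fold change of a target mRNA vs. an untreated control,
        # normalized to beta-actin (assumed model).
        d_ct_sample = ct_target - ct_actin
        d_ct_control = ct_target_ctrl - ct_actin_ctrl
        return 2.0 ** -(d_ct_sample - d_ct_control)

    # Hypothetical Ct values for TG2 in LPS-treated vs. control cells
    print(relative_expression(22.1, 16.0, 24.5, 16.1))  # ~4.9-fold induction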
Cloning of Murine TG2 Promoter-Murine TG2 promoter was PCR-amplified from mouse genomic DNA and cloned into pGL3 basic vector using the In-Fusion 2.0 dry-down PCR cloning kit (Clontech). The PCR amplification was carried out using the primers listed in supplemental Table 2.
Isolation of Peritoneal Macrophages-Peritoneal macrophages were isolated as described elsewhere (8). After LPS treatment, peritoneal lavage was done with 10 ml of sterile ice-cold PBS, and the peritoneal lavage fluid was collected. The cells were washed and resuspended in Dulbecco's modified Eagle's medium/F12 medium supplemented with 10% fetal bovine serum, cultured overnight, and then washed to remove nonadherent cells.
siRNA Transfection-siRNA against MTA1 and negative control siRNA were purchased from Dharmacon. Raw or MCF-7 cells were seeded at 40% density in 6-well plates the day before transfection, and siRNA transfections were performed with Oligofectamine reagent (Invitrogen) according to the manufacturer's instructions. After 48 h of transfection, cells were harvested for Western blot analysis or used for either confocal or reporter assays.
Reporter Assays-TG2 promoter assay was performed according to the manufacturer's instructions (Promega), and the results were normalized against the β-galactosidase activity, an internal control. Some assays were performed in the presence of control siRNA or MTA1 siRNA as described previously (8).
Confocal Analysis-After transfecting the MCF-7 cells with MTA1 siRNA, MTA1 and TG2 expression was determined by indirect immunofluorescence. The cells were grown on sterile glass coverslips, fixed in 4% paraformaldehyde, permeabilized in 0.1% Triton X-100, and blocked in 10% normal goat serum in PBS. Cells were incubated with MTA1 and TG2 antibodies, washed three times in PBS, and then incubated with secondary antibodies (TG2, goat anti-rabbit; MTA1, anti-mouse) conjugated with Alexa Fluor 488 (for TG2) and Alexa Fluor 555 (for MTA1) from Molecular Probes (Eugene, OR). DAPI (Molecular Probes) was used as a nuclear stain. Microscopic analysis was performed using an Olympus FV300 laser-scanning confocal microscope (Olympus America Inc., Melville, NY) using sequential laser excitation to minimize fluorescence emission bleed-through.
Chromatin Immunoprecipitation (ChIP) and Western Blot Analysis-ChIP analysis using p65, MTA1, and RNA pol II antibodies and Western blotting were carried out following the methods described previously (8). The primers used are listed in supplemental Table 3.
Electrophoretic Mobility Shift Assay-For electrophoretic mobility shift assay (EMSA), nuclear extracts were prepared using a Nonidet P-40 lysis method (16). EMSA for NF-κB DNA binding was performed using the annealed and [γ-32P]ATP end-labeled oligonucleotides in a 20-µl reaction mixture for 15 min at 20°C. Samples were run on a nondenaturing 5% polyacrylamide gel and imaged by autoradiography. Specific competitions were performed by adding a 100 µM excess of competitor to the incubation mixture, and supershift EMSAs were performed by adding 1.5 µl of either the NF-κB p65 (Santa Cruz Biotechnology 286-H), TG2 (catalogue number 3557), or MTA1 antibodies. The oligonucleotides used are listed in supplemental Table 4.
Statistical Analysis and Reproducibility-The results are given as the means ± S.E. Statistical analysis of the data was performed by using Student's t test.
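For reference, the two-sample Student's t test used throughout can be reproduced in a few lines; the triplicate values below are hypothetical, not data from the paper.

    from scipy import stats

    control = [1.00, 0.92, 1.08]   # e.g., normalized TG2 promoter activity
    lps     = [2.10, 1.95, 2.30]

    t, p = stats.ttest_ind(control, lps)
    print(f"t = {t:.2f}, p = {p:.4f}")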
RESULTS AND DISCUSSION
MTA1 Regulates TG2 Expression-From an ongoing separate microarray analysis, we unexpectedly identified decreased expression of TG2 mRNA in MTA1−/− MEFs as compared with the wild-type MEFs (Fig. 1A). Further, we found that depletion of endogenous MTA1 also reduces the levels of TG2 mRNA and protein (Fig. 1, B and C). In addition, MTA1 overexpression in MEFs and murine mammary HC11 cells resulted in a stimulated expression of TG2 mRNA and protein (Fig. 1, C and D), suggesting that MTA1 affects TG2 expression. As MTA1 is positively regulated by the expression of TG2, we next tested this hypothesis by analyzing the levels of MTA1 and TG2 in a widely studied experimental model of breast cancer progression involving isogenic MCF-10A cells (nonmalignant), MCF-10AT cells (weakly tumorigenic), MCF-10CA1D cells (undifferentiated metastatic), and MCF-10DCIS cells (highly proliferative, aggressive, and invasive) (29). The levels of both MTA1 and TG2 were progressively up-regulated from noninvasive MCF-10A to highly invasive MCF-10DCIS cells (Fig. 1E). In addition, MTA1 silencing in MCF-7 cells also down-regulated the levels of TG2 mRNA (Fig. 1F) and protein (Fig. 1H); effective knockdown of MTA1 protein in MCF-7 cells by MTA1 siRNA is shown in Fig. 1G. Taken together, these findings suggest that MTA1 regulates the expression of TG2.
Mechanistic Role of MTA1 in LPS Induction of TG2 Expression-Aberrant TG2 expression and functions have been reported in inflammatory diseases and cancer (5, 22). Having previously demonstrated that MTA1, as a component of NF-κB signaling, is involved in the regulation of inflammatory responses (8, 15) and in TG2 expression (this study), we next wished to investigate the role of MTA1 during LPS-mediated expression of TG2. As LPS induces the expression of MTA1 in Raw264.7 macrophage (Raw) cells (8, 15), we assessed the levels of TG2 in LPS-stimulated Raw cells. We found that both MTA1 expression and TG2 expression were co-induced by LPS (Fig. 2A). Further, an experimental reduction of endogenous MTA1 in the Raw cells by specific siRNA compromised the ability of LPS to induce the expression of TG2 mRNA (Fig. 2B). As the results of siRNA-mediated knockdown studies depend on the extent of target knockdown, we next validated these findings in genetically MTA1-depleted MEFs and cultured peritoneal macrophages from wild-type and MTA1−/− mice (8) and investigated the effect of LPS on TG2 expression. We found that MTA1 deficiency substantially compromised the ability of LPS to induce TG2 mRNA in MTA1−/− macrophages (Fig. 2C) and TG2 protein in MTA1−/− MEFs (Fig. 2D). These findings suggest that MTA1 is a required cellular coregulator for LPS induction of TG2.
MTA1 Stimulates TG2 Transcription-To understand the basis of MTA1 regulation of TG2, we cloned the TG2 promoter into a pGL3-basic-luciferase reporter system. We observed that LPS is a potent inducer of TG2 transcription and that reducing the levels of endogenous MTA1, either by MTA1 siRNA (Fig. 3A) or in MTA1−/− MEFs (Fig. 3B), compromised TG2 transcription as well as the ability of LPS to induce TG2 promoter activity. In addition, we observed an increased TG2 promoter activity upon MTA1 overexpression (Fig. 3C). These observations suggest that MTA1 regulates TG2 expression at the transcriptional level.
To gain deeper insight into the molecular mechanism underlying the observed MTA1 regulation of TG2 expression, we carried out a detailed ChIP analysis in Raw cells treated with or without LPS and mapped the recruitment of MTA1 onto three regions of the TG2 promoter, at −320 to −491, −630 to −849, and −1878 to −2139 (Fig. 4, A and B). However, we observed enhanced recruitment of MTA1 in response to LPS stimulation only onto the −630 to −849 region of the TG2 promoter (Fig. 4B), indicating a role for this region in LPS regulation of TG2. In addition, we noticed the recruitment of the MTA1-pol II coactivator complex only to this region (−630 to −849) under both basal conditions and LPS stimulation. Together, these results suggest the involvement of MTA1 in regulating TG2 transcription by targeting a specific region of TG2 chromatin in LPS-stimulated macrophages.

FIGURE 4. A, to carry out the ChIP-based promoter walk, the murine TG2 promoter was divided into five regions; the double-headed arrow represents the MTA1 binding region on the murine TG2 promoter. B, recruitment of MTA1 to TG2 chromatin in Raw cells treated with or without LPS for 1 h. Raw cells treated with 1% formaldehyde to cross-link the histones to DNA were lysed by sonication and immunoprecipitated by either anti-MTA1 antibody or IgG antibody. The immunoprecipitates (IP) were collected by adding beads; the beads were washed, DNA was eluted from the beads, and the purified DNA was subjected to PCR. C, double ChIP analysis of recruitment of the MTA1-pol II complex onto the TG2 chromatin (−630 to −849) in Raw cells treated with LPS for 1 h. The first ChIP was carried out with anti-MTA1 antibody, followed by a second ChIP with anti-RNA polymerase II antibody. qPCR analysis was also performed from the same ChIP eluates.
TG2 Is an NF-κB-regulated Gene-As MTA1 modulates NF-κB signaling (8, 15, 16) as well as TG2 transcription in LPS-stimulated Raw cells (this study), and because LPS is a potent inducer of NF-κB (30), we next focused on the mechanism of LPS regulation of TG2 expression. We found that parthenolide, a pharmacological inhibitor of NF-κB (31), attenuated both basal and LPS-stimulated TG2 protein, mRNA expression, and promoter activity (Fig. 5, A-C) in the Raw cells, suggesting that LPS may regulate TG2 expression via NF-κB signaling. In this context, transient expression of p65RelA increased the TG2 promoter activity under basal as well as LPS-stimulated conditions in the Raw cells (Fig. 5D), whereas depletion of NF-κB p65 resulted in decreased TG2 transcription (Fig. 5E), suggesting that TG2 is an NF-κB target gene.
To understand the molecular details of NF-κB regulation of TG2, we conducted a ChIP-based promoter walk with p65RelA antibody in the Raw cells with or without LPS treatment. We observed the recruitment of p65RelA onto the −630 to −839 region of the TG2 promoter, and this was further enhanced in the presence of LPS (Fig. 6A). We also observed that p65-MTA1 and p65-pol II complexes are corecruited to the same region of the TG2 chromatin (Fig. 6, A and B). Importantly, parthenolide effectively blocked the recruitment of p65RelA to this region of the TG2 promoter (Fig. 6C). To support these results, we showed that p65 could be coimmunoprecipitated along with MTA1 in both basal and LPS-stimulated Raw cells, with an increased association of p65 in LPS-stimulated cells as compared with the level in control cells (Fig. 6D). Further, the interaction of MTA1 and p65 was also validated in vitro (Fig. 6E) using 35S-labeled MTA1 protein and a glutathione S-transferase-NF-κB p65 fusion protein. These results support the notion of a direct interaction of MTA1 with NF-κB p65. Collectively, these results suggest that p65, MTA1, and pol II may exist in the same complex under physiological conditions.
Scanning of the MTA1-targeted region of the TG2 promoter (−630 to −839) for candidate transcription factors using the ALGGEN-PROMO software revealed the presence of only one potential NF-κB site (GGGAATTATC, −758 to −749). To demonstrate the direct binding of p65RelA to the TG2 promoter, we next performed an EMSA analysis using oligonucleotides encompassing this site (both wild-type and mutant NF-κB) and nuclear extracts from LPS-stimulated Raw cells (Fig. 6F). We found the appearance of a distinct protein-DNA complex in the LPS-stimulated condition (Fig. 6F, lane 3). The specificity of this protein-DNA complex was verified by supershift analysis using p65RelA or MTA1 antibodies. We noticed supershifts with p65 or MTA1 antibodies but not with the control IgG antibody (Fig. 6F, lanes 4-6). No protein-DNA complex was observed with the oligonucleotides carrying the mutant NF-κB sequence (Fig. 6F, lanes 9-14). These results suggest that LPS stimulates TG2 transcription via direct recruitment of the p65-MTA1 complex onto the TG2 promoter and that TG2 is an NF-κB target. A schematic representation of the recruitment of p65 and MTA1 to the TG2 promoter is shown in Fig. 6G. Together, these findings reveal an inherent role of MTA1 in the regulation of TG2 expression, which, at least in part, may constitute one of the mechanisms involved in the modulation of LPS-induced NF-κB signaling by MTA1 in stimulated macrophages (Fig. 7).
In brief, our finding of MTA1 regulation of TG2 expression introduces a new regulatory player in NF-κB signaling during the induction of proinflammatory cytokines and innate immunity. Several reports have documented a positive correlation between the expression of NF-κB and TG2 (32, 33) and between the expression of NF-κB and MTA1 (8, 16, 34), suggesting the existence of a positive regulatory mechanism among these genes. Increased TG2 activity triggers NF-κB activation by inducing the polymerization of IκBα rather than stimulating IκBα kinase (35). This polymerization of IκBα results in a direct activation of NF-κB, resulting in constitutive expression of various target genes involved in inflammation (5, 35). During sepsis mediated by LPS, NF-κB is activated, leading to the induction of cytokines and inflammatory mediators. The absence of TG2 could be an advantage during endotoxic shock because this deficiency appears to be associated with an NF-κB activation that is transient, thus allowing the restoration of immunological equilibrium. In this context, studies from TG2−/− knock-out mice revealed that TG2 offers protection against liver injuries caused by CCl4 (36, 37). Although many reports have shown an association between the levels of TG2 and inflammatory diseases, its regulation and its role in the inflammatory process remain poorly understood. In this context, our present study demonstrates the involvement of MTA1 in the modulation of NF-κB signaling leading to TG2 transcription and proinflammatory cytokines in LPS-stimulated cells. These findings suggest that TG2 is a target of MTA1 and that its transcription is positively regulated through direct binding of the MTA1, p65RelA, and pol II complex to the TG2 promoter (Figs. 4 and 6). We propose that MTA1 offers protection against invading pathogens either directly, by regulating the strength of NF-κB signaling, and/or by regulating its target genes such as TG2.
Transforming Prosthodontics and oral implantology using robotics and artificial intelligence
The current review focuses on how artificial intelligence (AI) and robotics can be applied to the field of Prosthodontics and oral implantology. The classification and methodologies of AI and the applications of AI and robotics in various aspects of Prosthodontics are summarized. The role of AI in dentistry has expanded considerably: it now supports data management, diagnosis, treatment planning, and administrative tasks. It has widespread applications in Prosthodontics owing to its immense diagnostic capability and possible therapeutic application. AI and robotics are next-generation technologies that are opening new avenues of growth and exploration for Prosthodontics. The current surge in digital human-centered automation has greatly benefited the dental field as it transforms towards a new robotic, machine learning, and artificial intelligence era. The application of robotics and AI in the dental field aims to improve dependability, accuracy, precision, and efficiency by enabling the widespread adoption of cutting-edge dental technologies in the future. Hence, the objective of the current review is to present literature relevant to the applications of robotics and AI in diagnosis, clinical decision-making, and the prediction of successful treatment in Prosthodontics and oral implantology.
Introduction
There is rapid change happening in the field of Prosthodontics, with newer inventions, concepts, technologies, and materials. Incorporation of these changes into practice, education, and research has allowed Prosthodontics to evolve in response to changing needs. Among the recent developments has been the introduction of artificial intelligence (AI) and robots in Prosthodontics. The idea of robotics was first applied in 1969. Technological advancements in robotics and artificial intelligence now allow the automation of laborious tasks. Robotics has therefore entered the operating room, and its uses are evolving continuously (1, 2). Training robots were introduced in dentistry to give students experience with humanlike machines, which proved much better than the previously used dummies. The first artificially intelligent robot created was called Showa Hanako and could mimic human gestures and reactions.
Artificial intelligence is the branch of Computer Science that creates smart machines capable of carrying out human tasks. It is the ability of a computer or computer-controlled robot to perform human activities. AI has allowed robots to mimic human cognitive functions such as perceiving, navigating, learning from experience (past data), and making decisions. These robots can even make decisions in ambiguous situations that are too complex for humans to handle. Robots learn all these human processes using Machine Learning (ML), which is itself a part of artificial intelligence. AI has given robots computer vision to perform these human tasks more efficiently than humans (3, 4).
Digital dental techniques are being standardized constantly and are part of regular treatment plans. Specifically, the use of CAD/CAM (computer-aided design and computer-aided manufacturing) has become routine practice in clinical and laboratory settings. Dental digitization is still evolving, and new developments such as the use of artificial intelligence are starting to emerge (5, 6). A few decades ago, AI was in its stage of inception, but at present AI is used as an adjuvant aid in many fields, including healthcare and dentistry. This article explains in detail how robotics using AI is changing the face of Prosthodontics and Implantology.
Classification of AI and methodologies
There are two types of AI: strong AI and weak AI. Strong AI refers to AI with human-like intellect and capabilities; it performs with the same flexibility and reactivity as people. The goal of strong AI is to create a multitasking algorithm that can make decisions in multiple domains. Weak AI relies on applying a program designed to accomplish a single or narrow set of tasks (7, 8) (Figure 1).
ML is a subclass of weak AI. Machine learning methods enable computers to identify patterns. A machine learning system is capable of making precise predictions on fresh data after it has been trained on preexisting datasets. For the algorithm to identify the required pattern, the training datasets frequently need to be simplified, and the precision with which these interventions are executed determines how well ML systems operate (6, 9).
A popular type of machine learning at the moment is deep learning (DL). Deep learning refers to a "set of computational models composed of multiple layers of data processing, which make it possible to learn by representing these data through several levels of abstraction" (6, 9). Deep neural networks (DNNs) require more data than classical machine learning, since they independently learn from and organize the training data set in a hierarchical fashion. The three main types of deep learning systems are artificial neural networks (ANN), convolutional neural networks (CNN), and generative adversarial networks (GAN). The primary applications of all three models are in image generation and recognition (6, 9).
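To make the CNN idea concrete, a minimal image classifier of the kind used on dental radiographs is sketched below in PyTorch; the architecture, input size, and class count are illustrative assumptions, not the model of any study cited in this review.

    import torch
    import torch.nn as nn

    class DentalCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            # Two convolution/pooling stages extract image features
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # A linear head maps the flattened features to class scores
            self.classifier = nn.Linear(32 * 32 * 32, n_classes)

        def forward(self, x):  # x: (batch, 1, 128, 128) grayscale radiographs
            return self.classifier(self.features(x).flatten(1))

    logits = DentalCNN()(torch.randn(2, 1, 128, 128))  # -> shape (2, 4)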
Artificial neural networks mimic the networks of neurons seen in the human brain to understand information and make judgments. Lotfi Zadeh created fuzzy logic (FL) in 1965. Humans struggle to find solutions to the many nebulous challenges they encounter in their daily lives, and fuzzy logic is helpful for solving such fuzzy problems. By enabling the use of more sophisticated decision-tree processing and improved integration with rules-based programming, FL serves as a foundation for artificial intelligence. FL mimics how humans make decisions, which entails considering options in between the digital values YES and NO. It generates workable answers to vague and incompletely specified challenges (3, 4, 10).
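A toy illustration of the "between YES and NO" idea: fuzzy membership functions assign partial degrees of truth rather than binary labels. The shade categories and breakpoints below are invented for illustration.

    def tri(x, a, b, c):
        # Triangular membership function peaking at b
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    L_star = 72.0  # a measured tooth lightness value (hypothetical)
    print("light :", round(tri(L_star, 65, 80, 95), 2))   # 0.47
    print("medium:", round(tri(L_star, 50, 65, 80), 2))   # 0.53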
Data mining in robotics
Data is the core of robotics. Data is a storehouse of knowledge and insights that can be processed using ML to extract relevant information, and it can be utilized in the medical field to diagnose patients' conditions. Robots handle a lot of data to carry out specified activities, from data collection to the production of significant results (internal data). Data is accessible in a variety of formats, so handling and organizing all this data is crucial. Data mining refers to the entire process of gathering data, cleaning it, and using it to produce insightful analyses and forecasts (4).
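A minimal end-to-end sketch of this pipeline, gathering, cleaning, and mining tabular patient data, is shown below with scikit-learn; the file name, columns, and prediction task are hypothetical placeholders, not a real dataset.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # "patients.csv" and its columns are placeholders, not a real dataset.
    df = pd.read_csv("patients.csv").dropna()            # gather + clean
    X = df[["age", "bone_density", "smoker"]]
    y = df["implant_success"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))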
Applications of AI and robotics in Prosthodontics and dental implantology
Prosthodontics mainly focuses on the treatment of completely and partially edentulous conditions through the fabrication of removable and fixed dental prostheses or implants and the construction of maxillofacial prostheses.
AI assisted diagnosis
AI application for accurate diagnosis in Prosthodontics is based on AI-driven imaging analysis. Intraoral scanners and CBCT scans generate large amounts of digital data, from which AI algorithms can extract valuable information and assist in diagnosis. ML makes these functions possible by teaching computers rules based on data and identifying inherent statistical patterns and structures in the data, thus helping to analyze any anomalies in the tooth structure (11-13).
AI assisted treatment planning
AI-assisted treatment planning algorithms play a vital role in simplifying and optimizing the treatment planning process. These algorithms create personalized treatment regimens by analyzing patient data, such as clinical records, diagnostic pictures, and patient-specific characteristics, using artificial intelligence approaches. By utilizing machine learning and data mining, AI algorithms can identify patterns and correlations in large datasets to establish the best course of action for each patient. To create individualized treatment plans and optimize treatment outcomes, factors such as the patient's current state of oral health, aesthetic preferences, functional requirements, and anatomical considerations are taken into account (11).
AI supported advances in CAD/CAM
AI-enabled CAD software has revolutionized prosthodontic design by incorporating AI algorithms to improve the precision, effectiveness, and personalization of dental prosthesis design. AI algorithms can be used to evaluate intraoral scans, digital impressions, and 3D models to design precisely fitting dental prostheses. The overall effectiveness and precision of prosthodontic design procedures are enhanced by CAD software with AI capabilities, which automates some design tasks and makes recommendations for changes based on historical data and anatomical considerations. The designing process can be optimized using ML algorithms that consider factors such as occlusion, aesthetics, and functional requirements (12, 14). AI and CAD/CAM have improved the quality of removable dentures while optimizing laboratory procedures. Reducing or eliminating manual laboratory procedures allows the dentist and dental technician to ensure the accuracy and repeatability of the prosthesis, which reduces the overall time required for rehabilitation. AI is used to assist in creating the best and most aesthetic prosthesis for patients while considering variables like ethnicity, facial proportions, anthropological calculations, and the patient's demands. CAD/CAM technology, subtractive milling, and additive manufacturing methods such as 3D printing are used to fabricate the prosthesis (15). Henriette Lerner et al. introduced an AI approach to lessen errors when cementing CAD/CAM crowns. This artificial intelligence model was intended to support the production of fixed implant prostheses with monolithic zirconia crowns, and it helped to identify the abutment's subgingival margins (16).
AI and Removable Prosthodontics
AI can be applied with 3D printing technology to fabricate removable partial dentures (RPDs). AI algorithms can also help establish a customized approach to RPD design by analyzing patient data to produce a design specific to each patient's needs, preferences, and anatomy. AI can further be used to evaluate the fit and function of RPDs (17). One study developed an AI model based on a CNN for the classification of dental arches to assist in the fabrication of dentures (18).
AI and Fixed Prosthodontics
The precision and effectiveness of tooth preparation are increased with the application of AI. AI algorithms can learn from and analyze a vast database of effective crown designs, providing insights into the best contour and extension of the finish line in prepared teeth. AI can automate the extraction of marginal lines with precision, thereby assisting in tooth margin preparation, which generally requires advanced technical skills. One study used a convolutional neural network model called Sparse Octree to extract accurate margins (19). Studies have reported AI algorithms for ceramic shade selection including fuzzy logic, back-propagation neural networks, CNNs, ANNs, and support vector machine algorithms (20).
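As one concrete example of the shade-selection algorithms listed above, a support vector machine can map colorimetric measurements to shade-guide classes; the CIELAB values and labels below are illustrative, not clinical data.

    from sklearn.svm import SVC

    # Toy training set: (L*, a*, b*) color measurements -> shade labels
    X = [[78, 1.5, 16], [74, 2.0, 19], [70, 2.8, 22], [66, 3.5, 25]]
    y = ["A1", "A2", "A3", "A3.5"]

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(clf.predict([[72, 2.4, 20]]))  # predicted shade for a new measurement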
AI and Maxillofacial Prosthesis
Prosthetic devices powered by AI can assist patients with maxillofacial anomalies or injuries in restoring both esthetics and function by mimicking human neurons through CNNs. AI-powered artificial eyes, for example, can let patients see without surgery, while voice-activated smart reading glasses can help the visually impaired read text and recognize faces (21). AI has also been used in tissue engineering to create skin replacements for wound healing. Artificial skin grafts serve as long-term skin replacements or transient wound dressings; they are primarily used to supply oxygen, avoid dehydration, encourage healing, and guard against infections (22).
AI and Implant Prosthodontics
CNN models can be used to classify implants using periapical and panoramic radiographs. Bone quality and quantity can be assessed by AI algorithms using CBCT images, allowing clinicians to measure the amount of bone loss more accurately and detect areas of potential bone loss (6, 23). A 3D model of the patient's jawbone can be created by AI from CBCT images, which helps in implant design and placement. This can assist in determining the ideal site and angle for implant placement, thereby improving the success rate (24).
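A crude version of the bone-quantity assessment described here is intensity thresholding of the CBCT volume; the sketch below uses a synthetic volume, and the threshold and voxel size are assumptions (clinical pipelines use learned models rather than one global cutoff).

    import numpy as np

    volume = np.random.randint(-1000, 2000, size=(64, 64, 64))  # stand-in HU volume
    BONE_HU = 400                        # assumed bone intensity cutoff
    VOXEL_MM = 0.3                       # assumed isotropic voxel edge length

    bone_mask = volume >= BONE_HU        # segment candidate bone voxels
    bone_volume_mm3 = bone_mask.sum() * VOXEL_MM ** 3
    print(f"estimated bone volume: {bone_volume_mm3:.1f} mm^3")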
Digital planning software can be used by clinicians to generate a virtual surgical guide to assist with implant placement during surgery. A physical surgical guide fabricated by rapid prototyping can be utilized during surgery to ensure precise implant placement (17, 25). AI in implant surgical planning enables precision and effectiveness by supporting the creation of personalized treatment plans for individual patients through the combination of multiple scans and data sources. This can enhance patient outcomes and help guarantee the success of implant surgery (17, 26).
AI and digital smile designing
AI-driven smile design software can be used for smile designing. These programs are highly accurate, generate natural simulated images, facilitate patient engagement and case acceptance, enhance patient satisfaction, and improve connection and efficiency in teamwork. Using complex algorithms, the software analyzes the patient's facial features, such as facial symmetry, lip line, and tooth size and form. A virtual patient is created from several images and the intraoral and CBCT scans entered into the program. The ability to simulate, customize, and test the results allows both the patient and the dentist to completely understand the situation before proceeding with planned dentition modifications (27, 28).
Application of robotics

Tooth arrangement robots for complete dentures
The most important step in the fabrication of conventional complete dentures is the arrangement of teeth in centric occlusion, which is effectively done by a specialized dentist or a skilled dental technician. The conventional way of fabricating dentures has been superseded by robotic fabrication. Complete dentures vary greatly in terms of tooth size, dental arch curvature, and the placement and alignment of individual teeth relative to one another; the advantage of robots is the flexibility with which they can be configured to fabricate such dentures (29). A single-manipulator robotic system with 6 DOF (degrees of freedom), built on CRS robots manufactured in Canada, is used for the fabrication of complete denture prostheses. The robot's three-dimensional virtual tooth arrangement software uses the patient's mandibular arch parameters together with the patient's medical history data and expert-drawn jaw and dental arch curves. Following the three-dimensional display of the proposed dentitions, an interactive virtual observation environment is offered, allowing users to modify each tooth's position. The system is based on a photosensitive substance that hardens when illuminated, holding selected standard teeth fixed in a standard position. Nevertheless, it was discovered that artificial teeth were difficult for the system to grip and manipulate precisely, so advanced robotic systems with more DOF were developed (30, 31).
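Conceptually, arranging a tooth along the arch reduces to computing a 6-DOF pose (position plus orientation) on the arch curve and handing it to the manipulator. A minimal sketch follows; the parabolic arch model and its dimensions are assumptions, not the actual software described above.

    import numpy as np

    def arch_point(t, width=50.0, depth=40.0):
        # Parabolic arch model: t in [-1, 1] sweeps from one molar to the other
        return np.array([t * width / 2, depth * (1 - t**2), 0.0])

    def tooth_pose(t):
        p = arch_point(t)
        tangent = arch_point(t + 1e-3) - arch_point(t - 1e-3)
        yaw = np.arctan2(tangent[1], tangent[0])   # align the tooth with the arch
        T = np.eye(4)                               # 4x4 homogeneous transform
        T[:3, :3] = [[np.cos(yaw), -np.sin(yaw), 0.0],
                     [np.sin(yaw),  np.cos(yaw), 0.0],
                     [0.0,          0.0,         1.0]]
        T[:3, 3] = p
        return T

    print(tooth_pose(0.25))  # pose to hand to the 6-DOF manipulator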
Tooth preparation robot
Yuan et al. presented a robotic system for tooth preparation that included the following hardware components: (a) a dental fixture that joins the robotic tool to the target tooth and shields the neighboring teeth from the laser's cutting; (b) a 6-degree-of-freedom robotic arm; (c) an efficient low-heat laser for hard tissue preparation; (d) CAD/CAM software to create the target form for tooth preparation and to produce a 3D motion path for the laser; and (e) an intraoral 3D scanning device to acquire 3D data about the target tooth, neighboring teeth, and the subject's dental fixture (32). The outcomes showed improved average repeatability of the robotic system compared with the clinician's crown preparation (33).
Robots in Dental Implantology
Robotic implant systems typically need real-time surgical tracking for accurate implant placement. The first robot-guided dental implant placement was presented by Boesecke et al. in 2002. The robot system had a working region of 70 cm, could execute drilling guidance for implant osteotomy, and placed 48 dental implants within 1-2 mm of the apical border. In addition, a 3-DOF robotic system with a stereo camera was developed. This system could recognize and control the dental handpiece, ensuring that implants were placed in accordance with the preoperative protocol; the computer automatically applied the planned surgical process to guarantee the right cutting site and the right amount of force (1, 2). The first computerized navigation robotic system with FDA approval to improve the clinical accuracy of dental implant surgery is YOMI (Neocis, Miami, FL, USA), introduced in 2017. It is a computerized navigation device designed to assist dental implant surgery in both the pre-operative and intra-operative phases. The navigation system uses vibrational feedback to prepare the dental implant osteotomy with excellent predictability and precision. By offering physical guidance for the drill's depth, position, and orientation, YOMI eliminates the need for a customized surgical guide and limits deviation of the operator's hand. Zhao unveiled the first automated implant placement technology in 2017; with it, surgical procedures can be adjusted automatically with a high degree of autonomy, and treatments can be carried out without a dentist's intervention. Nevertheless, few validation data are available on the robot's intelligent decision-making or on the viability and dependability of its implant placement. An autonomous dental implant robot was also developed in 2017 by the Fourth Military Medical University Hospital (Xi'an, China) and Beijing University, which prevented errors during surgery and addressed the shortage of competent dentists. Robotically manufactured surgical guides are less costly, less intrusive, and less likely to result in human error during clinical procedures; they also provide complete control over the trajectory of the implant (1, 2, 34).
An innovative development in oral healthcare is the integration of robotics and AI into implant dentistry. Robots equipped with precise algorithms can optimize implant placement, thereby enhancing the longevity and effectiveness of the implant prosthesis. AI complements this by analyzing large datasets, facilitating evidence-based, customized treatment plans, and offering crucial support for everything from surgical feedback to diagnostics. AI algorithms can identify patterns, foresee potential complications, and suggest the most accurate implant designs, optimizing the overall treatment process. Robotic systems have been used to perform complex implant surgeries with high precision, thereby increasing patient satisfaction and implant survival rates. AI algorithms have made it easier to accurately determine the quantity and quality of bone, which has improved implant placement and decreased the chance of implant failure. These developments have made it possible to reduce surgical time, achieve better esthetic results, and provide greater patient comfort (1).
Robots in Gnathology
Waseda-Yamanashi developed a master-slave system to facilitate mouth-opening training. Jaw movements can be replicated by robotic articulators with or without the assistance of jaw-movement tracking sensors. In one experiment, a full veneer crown was created on the working cast of a single patient using a robotic articulator. The most recent development is a commercially available robot called the "Bionic Jaw Motion" (Bionic Technology, Vercelli, Italy). The robot combines a robotic articulator with a movement analyzer (a high-speed camera) to replicate mandibular kinematics (35).
Robots for treating TMJ disorders
The Waseda Asahi Oral-rehabilitation robot No. 1 (WAO-1) is an oral rehabilitation robot developed by Waseda University in Japan. Comprising a headrest-equipped body, two arms with six degrees of freedom and plungers, a control box, a computer, and an automated massage-trajectory generation system with virtual compliance control, the robotic apparatus massages the patient's face by pressing or rubbing a plunger whose action is automatically controlled by a computer. The WAO-1 is the first robot to massage a patient's facial tissues, masticatory muscles (masseter and temporalis), and oral structures such as the parotid gland and duct. It can therefore be used to treat dry mouth and TMJ disorders (36).
Speech robots
This robot was developed in 2005 at a university in Canada. Two 3-DOF parallel manipulators drive the two TMJs, one at each end of the jaw. The purpose is to examine how jaw movements affect our ability to receive and understand facial communication (36).
Limitations of AI and robotics
Integration of AI in prosthodontics raises concerns about patient privacy and confidentiality, as well as the ethical implications of AI-generated diagnoses or treatment plans. AI may have biases built into its programming that produce unfair or discriminatory results. Concerns are also raised about the potential loss of jobs for dental technicians as AI takes over their tasks (37). When using AI to solve problems, a thorough algorithm with multiple applications is required to answer a particular question. Artificial intelligence cannot provide direct interpretation; instead, errors in algorithms may lead to misunderstandings. If the diagnostic task becomes overly reliant on the AI system, there is also a growing question of liability (38). Practitioners may have doubts about the workings of AI software they do not understand, which may compromise its clinical significance. Clinicians should always exercise caution and vigilance when evaluating data provided by AI. Another crucial factor in dentistry is data misuse and AI security concerns. Entrusting a computer with decisions about human health, and relying only on the machine to make decisions about health care services, is a critical concern (37).
Robots have proven to be efficient, accurate, and repeatable. Nonetheless, the introduction of this novel technology into the existing clinical setup may face several obstacles of varying nature. One obstacle dentists face is that the applications are very expensive. The robotic systems are complex, and specialized knowledge and expertise are needed to ensure optimal performance (1). Dentists may lack the knowledge and expertise to program and control the robotic systems. Furthermore, acceptance of and compliance with an unfamiliar technology, by both dentists and patients, may be another crucial factor (4). Research in the field of robotic dentistry is limited due to the lack of accessible systems and the lack of expertise to program and regulate robotic systems. Research in robotics relies on effective collaboration between engineers and dentists (39).
Conclusion
AI and robotics are emerging fields in dentistry. Dentistry is progressing toward a new era of robotically assisted and data-driven care. Robotic devices and 3D monitoring will become routine in the dental setup and will continue to enhance patient care as the technology is incorporated into clinical practice. Nevertheless, the most recent advances in AI and robotics have not yet been fully integrated into dental research, nor have they reached a point of technological readiness and economic viability where they can be commercialized.
FIGURE 1. Schematic representation of types of artificial intelligence.
News and Public Opinion Multioutput IoT Intelligent Modeling and Popularity Big Data Analysis and Prediction
Based on a multioutput Internet of Things architecture for news and public opinion, this article analyzes and predicts popularity with big data. First, the model adopts a three-tier architecture in which the bottom layer is the data layer. It is mainly responsible for collecting terminal sensor data from the Internet of Things and uses intelligent big data as the data warehouse. Second, the computing layer above the data layer mainly provides the computing framework. Using an open-source SQL query engine, a cluster environment based on in-memory computing is constructed to parallelize data computation. It is used for interactive operations between the system and users: it receives the query requests submitted by the client browser, forwards them to the server cluster for execution, and displays the results to the user in the browser. After that, combined with the needs of developing and applying news and public opinion big data, the data collection process was analyzed and designed, and a distributed data collection architecture was built. The intelligent Internet of Things was adopted for data storage, and the data storage structure was analyzed and designed; a duplicate-check algorithm is used to avoid repeatedly storing the same page data. At the same time, based on an analysis of the business needs of the news and public opinion information platform, the overall functional structure of the platform was designed, and the database and platform interface were designed in detail. The simulation results show that the model realizes statistical queries of the collected sensor alarm information and historical data in the user system, combines historical operating data to analyze the relationship between the supply/return water temperature of the heat exchange station and the outdoor temperature, and realizes chart visualization of the data analysis.
Introduction
With the development of modern information technology, emerging technologies such as the Internet, cloud computing, and big data have been widely used in various fields of the social economy. The sensor transmission speed of the Internet of Things is very fast, and the application of a large number of sensor devices will inevitably lead to a substantial increase in the output of news and public opinion data. The Internet of Things and big data are closely related, and the data generated by sensors can also be processed by a big data platform [1,2]. News and public opinion Internet of Things big data differ from Internet big data [3]. In addition to the general characteristics of big data, they also exhibit strong relevance and temporal ordering. Therefore, traditional Internet big data processing methods are not fully applicable. New solutions need to be designed specifically to properly analyze Internet of Things data and extract more important information from Internet of Things monitoring equipment [3][4][5].
To improve the efficiency of analyzing news and public opinion IoT data, it is necessary to implement a low-latency query system based on the Hadoop data warehouse that can handle a large number of concurrent query requests. A single query can then return results faster, thereby improving the efficiency of data processing and analysis [6]. With the increasing maturity of wireless sensor network technology, the combination of RFID, other sensors, and wireless communication technology can realize information exchange between objects, as well as long-distance monitoring and management when connected to the Internet. The internet realizes human-to-human interaction with people as the object; the Internet of Things will further realize interaction between things, realizing the concept of ubiquitous networks. Using distributed query strategies and memory-based computing methods, the main purpose is to quickly query information in massive amounts of data, provide timely feedback to users, and improve query efficiency. For massive data, traditional processing methods struggle to meet real-time requirements, and distributed technology has become a research hotspot in the field of big data [7][8][9]. Based on this, the system takes news and public opinion as the research object and uses computer network and communication technology under the framework of the Internet of Things to store news and public opinion-related information and deliver it to users in a timely and accurate manner, realizing a network for news and public opinion detection, management, and marketing. Traditional news and public opinion detection technology has been improved to increase the efficiency of news and public opinion detection, personnel management, and business decision-making. In the process of platform construction, a data storage architecture based on MySQL and the intelligent Internet of Things was designed; the SpringMVC development framework was adopted, Spring Framework was used as the core container, and Dubbo was used as the distributed architecture of the entire platform. The platform is powerful, safe, and scalable, and can handle high concurrency and massive data storage. Intelligent big data is chosen as the data storage tool, and a news and public opinion IoT big data query system is created based on an open-source in-memory computing engine; data mining algorithms are used to analyze the historical operating data of the heat exchange station in news and public opinion detection.
Related Work
With the continuous increase in the popularity of big data research, the application fields of big data are becoming more and more extensive. Domestic internet companies use big data technologies, such as Hadoop, to handle PB-level data problems in data storage, data mining, and high concurrency, using Hadoop-based architectures for data collection, data analysis, and so on; major universities are also conducting academic research and applications in the big data environment. More generally, big data has been widely applied in internet finance, healthcare, transportation, communication operations, and other fields [10][11][12].
Losavio et al. [13] proposed a news and public opinion big data acquisition and analysis platform (IBDP) that integrates HDFS, Spark, intelligent big data, HBase, Flume, Sqoop, OpenStack, etc., suitable for the acquisition and analysis of news and public opinion data. Yigitcanlar et al. [14] proposed and developed a smart city system based on the Internet of Things using the Hadoop ecosystem and big data analysis technology, combining Spark over Hadoop to achieve efficient big data processing. Hossein Motlagh et al. [15] proposed using the Hadoop software environment, including data collection, data storage, data normalization and analysis, and data visualization components, to realize the parallel processing of large heterogeneous data for IoT network security monitoring. Scholars have developed a farmland observation data management system based on the integration of wireless sensor networks that realizes the automatic acquisition of a large amount of news and public opinion data; the system has been applied in cities for related processing and analysis functions. Chin et al. [16] studied news and public opinion big data from the perspectives of its connotation, acquisition, and status quo, and combined current internet technology and big data technology to provide an outlook on news and public opinion big data.
Van Deursen and Mossberger [17] elaborated on the four key technologies of big data research: data collection and preprocessing, data storage, data analysis and mining, and data presentation and application, and gave architecture diagrams for big data collection, the data warehouse, and the parallel storage architecture. In addition, they introduced the high-availability technology of mass storage systems, parallel computing, real-time computing, streaming computing, deep learning, data privacy protection technology, and other related technologies, and provided reference learning cases. According to the characteristics of massive data, researchers have designed the system structure of a massive data management system based on Hadoop and introduced its distributed storage and distributed computing in detail, which has strong reference value [18]. At present, there are many types of enterprise information management systems, and each system is managed independently, resulting in wasted resources and poor system scalability. Against this backdrop, Lu designed a hierarchical storage system using key Hadoop technologies, including a data mining function and a data migration module, to provide a data mining and data migration system based on the Hadoop architecture. This shows that with the continuous update of internet technology, the complexity of news and public opinion management information systems is also increasing. However, few such systems exploit the full capabilities of the internet and emerging sources of accurate news and public opinion, and the requirements for accurate news and public opinion make these systems more complex to implement than traditional management information systems. Some studies are based on the need to identify accurate news and public opinion, realizing news and public opinion management by evaluating modern networks [19][20][21].
Intelligent IoT Hierarchical Nesting
At the intelligent IoT level, data analysis needs to move a subset of the data to the data warehouse, and the speed of data analysis in Hadoop is very slow. However, with the development of SQL query engines, big data technology can already be used in business analysis scenarios. By building a data model in Hadoop or other databases, the large-scale historical data accumulated over long periods are used in the big data processing system for information mining [22]. Figure 1 shows the hierarchical topology of the intelligent Internet of Things. The data collected by the sensors play an important role in detection monitoring and data mining analysis. In addition to real-time monitoring of data generated by IoT terminal devices, it is also necessary to store the historical data accumulated in the process of news and public opinion detection and to provide real-time statistical query analysis and the generation of data reports. In the data monitoring system of IoT devices, the sensor devices transmit the monitoring data to the data processing platform using various transmission methods, such as HTTP, TCP, and MQTT, and the data are stored in MySQL after parsing to provide real-time query support. The massive historical data adopt the intelligent big data storage warehouse to provide large-scale data support for the system's data mining and analysis.
When performing statistical queries and data mining analysis on historical data, a query engine based on in-memory computation is used to improve the speed of system queries and analysis. At the application layer, the output of the big data platform layer is used for chart display, and a web server platform is built to provide a visual interface for data display and analysis. The processing of data collected by the terminal equipment includes real-time monitoring, statistical query analysis, and data mining analysis.
The terminal collection system consists of wireless IoT sensing devices, gateways, and data storage servers. The terminal devices regularly transmit data to the gateway through a network protocol. The collected data are parsed by the gateway and stored in a unified format.
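To make this collection flow concrete, the following minimal Python sketch illustrates a gateway that subscribes to sensor readings over MQTT, parses them into a unified format, and hands them to a storage backend. It assumes the paho-mqtt client library; the broker address, topic layout, payload format, and store_reading() are hypothetical placeholders, not the paper's actual implementation.

```python
import json
import paho.mqtt.client as mqtt

def store_reading(reading: dict) -> None:
    # Placeholder: in the described architecture this would insert into
    # MySQL for real-time queries and append to the big data warehouse
    # for historical analysis.
    print("stored:", reading)

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)           # e.g. {"temp": 21.5, "ts": 1}
    reading = {                                 # unified storage format
        "sensor_id": msg.topic.split("/")[-1],
        "value": payload["temp"],
        "ts": payload["ts"],
    }
    store_reading(reading)

client = mqtt.Client()                          # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("gateway.local", 1883)           # hypothetical broker address
client.subscribe("sensors/+")                   # one topic per terminal device
client.loop_forever()
```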
Multioutput Process of News and Public Opinion.
News and public opinion have different output data nodes, and the collected data will also differ to varying degrees. It is necessary to unify the data access methods of all IoT application terminals and the storage standards for different forms of data at different nodes, and to unify the storage and management of multisource data. At the same time, it is necessary to provide hybrid computing capabilities for multisource data to improve the efficiency of multisource data management and analysis. The web server receives the request submitted by the client browser and sends the query task to the Presto computing cluster; finally, it returns the result of the query execution to the user and displays it in the client browser as a web page, providing a friendly visual interface for the data analyst. The front end is mainly implemented with Ajax, JavaScript, JSP, CSS, HTML, and other technologies, and ECharts is used for chart visualization. Figure 2 is a multioutput fan chart of news and public opinion.
When the coordinator node of the ZigBee network receives a single-point (unicast) data packet from itself or other devices, the network layer will continue to pass it on according to the rules configured in advance. If the target node is a neighbor, the data packet is transmitted directly to the target node. If the target node is not a neighboring node, the coordinator node will search the routing table for a record that matches the destination address of the data to be transmitted. If there is a match, the data are transmitted to the next hop in the record; otherwise, the router initiates a path search. When a node device receives an RREQ data packet, it forwards the packet in turn and adds the latest connection cost value. In this way, the RREQ data packet traverses all the connections it passes through and carries the sum of all connection costs until it successfully reaches the destination device node. The routing node will select the one with the lowest connection cost among the received RREQ packets and send a route reply packet, RREP, to the source device. The RREP packet is a unicast packet that returns to the requesting node device along the path the RREQ packet came from. After the path is established, data packets can be sent and received along it. If a link on the path later fails, the node will send an RERR packet to all node devices waiting to receive data from this node to mark the path as invalid. Each node also updates its routing table according to the received data packets.
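The forwarding decision described above can be summarized in a short sketch. The Node class below is purely illustrative (it is not the ZigBee stack API); it shows the three cases: direct delivery to a neighbor, relay via the routing table, and route discovery for unknown destinations.

```python
class Node:
    def __init__(self, name, neighbors):
        self.name = name
        self.neighbors = set(neighbors)  # directly reachable devices
        self.routing_table = {}          # dest -> next hop, learned via RREQ/RREP

    def forward(self, packet, dest):
        if dest in self.neighbors:
            return f"deliver {packet} directly to {dest}"
        if dest in self.routing_table:
            return f"relay {packet} via next hop {self.routing_table[dest]}"
        # Unknown destination: flood an RREQ that accumulates link costs;
        # the lowest-cost RREQ is answered with a unicast RREP that installs
        # the route hop by hop. A later link failure triggers an RERR that
        # invalidates the path and prompts routing-table updates.
        return f"initiate route discovery (RREQ) for {dest}"

coordinator = Node("C", neighbors={"R1", "R2"})
coordinator.routing_table["E5"] = "R1"
print(coordinator.forward("pkt-7", "R2"))  # neighbor: direct delivery
print(coordinator.forward("pkt-8", "E5"))  # known route: via R1
print(coordinator.forward("pkt-9", "E9"))  # unknown: path search
```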
Popularity Demand Analysis.
In the popularity computing structure, the data stream is composed of countless tuples. A tuple is the smallest unit of data and contains many key-value pairs. The spout is the entrance for the data source. It provides many simple API interfaces, including sensor output API interfaces, website hits, messages from social networking sites, and logs of various applications. It converts the received data source into a tuple data stream and transmits the data to the next specified component. The main purpose of writing a spout program is to obtain data from the data source. A bolt is a function in the Storm program. It is responsible for computing and processing the input data stream. Its main functions include data filtering, data fusion, data calculation and processing, and writing data to or reading data from the database. The terminal sensor device sends data to a unified data processing platform through the gateway so that each device can share and exchange data seamlessly. At the same time, the platform integrates the processing of real-time data and historical records and provides a unified operating environment for the IoT application platform. Demand analysis and solutions are integrated across a variety of application scenarios so that the system can meet users' various needs. Figure 3 shows the popularity requirement scalability architecture. NewSQL mainly refers to improved SQL databases with scalability and superior performance. Since it is an improvement and innovation of SQL technology, NewSQL still has the functions of traditional SQL, supports SQL queries, and meets the transactional and consistency requirements of database queries. At the same time, the improved NewSQL database also has scalability and flexibility similar to NoSQL databases. HDFS supports redundant backup storage of data blocks. As HDFS is built from relatively low-cost commodity machines that are not highly reliable, it is designed to be highly fault-tolerant: it is able to detect and respond to the failure of each machine node in time and ensure the stability of the system. For this part of the data source, a variety of data acquisition interfaces need to be developed in the system, including the RS485 interface, Ethernet interface, AD conversion interface, and 24 V switch detection interface. By deploying data to HDFS, the system can support large-capacity, high-concurrency, and high-throughput big data computing tasks while keeping the file system consistent across nodes. An HDFS cluster must contain a NameNode (master node) and multiple DataNodes (slave nodes). Each slave node in the HDFS system is an ordinary or cheap computer. The NameNode provides a naming service for each storage unit in the HDFS system, records and maintains the mapping information of the entire system's data blocks, and receives requests to access HDFS from the corresponding clients. DataNodes are mainly used to perform specific tasks, such as storing file blocks scheduled by the client and the NameNode. At the same time, HDFS has a client interface that interacts with the outside world. It mainly implements external access requests to HDFS, including interacting with the NameNode to read file storage information, interacting with the DataNodes to read the data in the HDFS system, and so on.
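The spout and bolt roles described at the beginning of this passage can be mirrored in plain Python (Storm itself runs on the JVM, so this is only a conceptual analogue of the dataflow, with made-up sample data):

```python
def sensor_spout():
    """Spout: turns a data source into a stream of tuples."""
    raw = [("s1", 21.5), ("s2", -999.0), ("s1", 22.1)]  # stand-in source
    for sensor_id, value in raw:
        yield (sensor_id, value)        # the tuple is the smallest data unit

def filter_bolt(stream):
    """Bolt: filters invalid readings out of the input stream."""
    for sensor_id, value in stream:
        if value > -100:                # drop sentinel/error values
            yield (sensor_id, value)

def aggregate_bolt(stream):
    """Bolt: computes an aggregate that would be written to a database."""
    totals = {}
    for sensor_id, value in stream:
        totals[sensor_id] = totals.get(sensor_id, 0.0) + value
    return totals

print(aggregate_bolt(filter_bolt(sensor_spout())))  # {'s1': 43.6}, up to float rounding
```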
Design of Big Data Forecasting Function.
The Hadoop system has many components, including HDFS, MapReduce, HBase, smart big data, Sqoop, Flume, Zookeeper, etc. It runs efficiently on the Linux platform, supports multiple programming languages, and has high reliability and fault tolerance. It can process data reliably and efficiently and has relatively good scalability. It is used to store and process large amounts of data for analysis, and thus Hadoop has become a very popular solution. When the Hadoop cluster's capacity becomes insufficient, it is usually expanded by adding new computers or storage devices. The Hadoop ecosystem performs distributed computing well, and users can develop applications without knowing the details of the underlying storage. The data collection terminal mainly relies on wireless network sensor equipment for real-time data collection. A wireless node performs data collection after registering in the network. At the same time, the data are sent to the gateway through the MQTT protocol and the Modbus bus at regular intervals, and the data of multiple nodes are aggregated. The collected data are preprocessed and sent to the relational database server in a unified format, and the application server processes the data. Figure 4 shows the distribution of prediction accuracy of public opinion big data.
In streaming computing, data continuously flow into the system. The streaming computing system analyzes and computes the continuous data in real time in memory, and then feeds the results back to the user in real time or stores them for subsequent queries. Traditional streaming computing systems are mostly designed around event mechanisms, and the amount of data they can process is limited. However, newer streaming computing technologies, such as S4, Storm, and Spark, are mainly oriented toward stream processing. The NameNode enables clients to quickly access the required blocks for regular operations, adopts block placement strategies and replication mechanisms to ensure data availability and durability, and allocates new block locations while maintaining the load balance of the cluster. The main task of the DataNode is to store data. When new data are written to HDFS, they are split into blocks of a fixed, preconfigured size, stored in DataNodes, replicated a fixed, preconfigured number of times, and stored on different nodes. The DataNodes and the NameNode generally communicate through a heartbeat message mechanism every few seconds so that the NameNode knows which nodes are unavailable. The client communicates directly with the DataNodes when accessing data: when the client forwards a request to HDFS, the NameNode first sends back the location of the blocks required by the client after verifying the relevant permissions, and then the client directly instructs the DataNodes to serve the required blocks.
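A toy illustration of the write path just described: a file is cut into fixed-size blocks and each block is replicated to several DataNodes, with the NameNode recording the mapping. Sizes are scaled down for readability (real HDFS blocks default to 128 MB), and the round-robin placement below is only a stand-in for HDFS's rack-aware policy:

```python
BLOCK_SIZE = 4                      # bytes here; ~128 MB in a real cluster
REPLICATION = 3                     # HDFS's default replication factor
DATANODES = ["dn1", "dn2", "dn3", "dn4"]

def put_file(data: bytes) -> dict:
    """Split data into blocks and assign replicas, as the NameNode would."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    placement = {}                  # block id -> list of DataNode locations
    for b, _ in enumerate(blocks):
        placement[f"blk_{b}"] = [DATANODES[(b + r) % len(DATANODES)]
                                 for r in range(REPLICATION)]
    return placement

print(put_file(b"news-and-opinion-data"))
```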
Smart IoT Big Data Query.
This paper uses the ATMEGA32-16AU single-chip microcomputer from Atmel, a low-power, high-performance 8-bit AVR microprocessor with a RISC architecture. The single-chip core contains a rich instruction set and up to 32 general-purpose registers, with 32K bytes of in-system programmable flash, 2K bytes of SRAM, 1K bytes of EEPROM, and 32 general-purpose I/O interfaces to meet the needs of the system. The I/O interfaces of the ATMEGA32 single-chip computer can be accessed through the IN and OUT instructions to transfer data between the 32 general-purpose registers and the I/O space. Registers at addresses 0x00 to 0x1F can be directly addressed by the CBI and SBI instructions, and the value of a given bit at an address can be tested by SBIS and SBIC. Figure 5 shows the distribution of news and public opinion data transmission query speed. When querying, the embedded device sends a query command to the server. The server queries the database according to the command type and displays the data result on the embedded device via the coordinator, routing node, and terminal node. The entire architecture of the platform is mainly based on MySQL and MongoDB databases, with Spring Framework as the core container, combined with Zookeeper as the registration center and Apache Shiro as the authorization layer. MyBatis is used for persistence in the data access layer, and Redis is used as the cache database to improve database access speed. In the data analysis layer, the Flume component can feed large amounts of heterogeneous data into the HDFS storage system and supply them to the Hadoop offline batch processing system for analysis and processing, with the processing results written into the corresponding database. The Kafka component obtains the sensor network data stream and provides it to the Storm real-time data processing system for analysis. The Storm system processes the analyzed data and writes them into the corresponding database using the bolt component. Finally, there is the application layer. This layer queries and reads the data in the database according to the requirements of different applications, or reads Storm's processing and analysis results in real time.
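The role of the Redis cache layer mentioned above follows the common cache-aside pattern, sketched below with the redis-py client. The connection details, key scheme, and query_mysql() stand-in are hypothetical:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_mysql(sensor_id: str) -> str:
    return "42.0"                      # placeholder for the real MySQL lookup

def latest_reading(sensor_id: str) -> str:
    key = f"latest:{sensor_id}"
    cached = r.get(key)
    if cached is not None:             # cache hit: no database round trip
        return cached
    value = query_mysql(sensor_id)     # cache miss: fall back to MySQL
    r.setex(key, 30, value)            # cache for 30 s to bound staleness
    return value

print(latest_reading("s1"))
```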
Multioutput Platform Simulation of News and Public Opinion
When researching the news and public opinion information platform based on big data, the paper chooses a data storage mode combining MySQL and the intelligent Internet of Things. MySQL is mainly used to store traditional business data, such as the user information table, permission table, payment information table, agricultural material shop table, order information table, customer service information table, and agricultural material evaluation information table. However, the platform collects a large amount of user behavior data of various types, and relational databases such as MySQL can no longer meet the requirements. For alarm data and cloth-length data with high real-time requirements, it is necessary to increase the frequency of data collection. The intelligent Internet of Things stores data in the BSON structure, which has obvious advantages for mass data storage; hence, the authors choose the intelligent Internet of Things to store the user behavior data. At the same time, the business data in MySQL are backed up in the intelligent Internet of Things. The data acquisition module uses various types of high-precision sensors and converts the collected analog signals into digital signals that can be identified and processed by the chip. For communication, the thesis uses ZigBee technology for short-distance wireless transmission and the GPRS and GSM communication networks for long-distance communication. The processing control module uses a single-chip microcomputer for processing information, a relay for control operations, a memory chip, etc., assembled on an integrated circuit board. Figure 6 shows the distribution of multiple output signals of news and public opinion.
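The BSON-based store described above behaves like a document database; the sketch below assumes MongoDB with the pymongo client. Database, collection, and field names are illustrative only:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["opinion_platform"]

# A flexible-schema user-behavior event that fits BSON documents better
# than fixed relational tables:
db.user_behavior.insert_one({
    "user_id": 1001,
    "action": "view",
    "item": "news/heating-report",
    "context": {"device": "mobile", "dwell_s": 34},
})

# Backing up a relational business row into the document store, as the
# platform does for its MySQL business data:
row = {"order_id": 1, "amount": 9.9}            # stand-in for a MySQL read
db.orders_backup.replace_one({"order_id": 1}, row, upsert=True)
```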
As the big data system mostly uses NoSQL database technology, a comparative study of this type of database technology is carried out so that a suitable database system can be selected according to the needs.
There are many ways to classify NoSQL databases, and the various classifications may overlap. Databases can be compared according to four types of data models, namely the key-value model, column model, document model, and graph model; the specific relations among the four types of data models are shown in Figure 6. It can further be concluded that an increase in the amount of computed data does not weaken Storm's computing power, which indicates that the data are effectively buffered by the Kafka component, ensuring the smooth and efficient operation of the Storm system. Figure 7 shows the distribution of news and public opinion calculation data.
When a device that has already joined a network receives this primitive and the RejoinNetwork parameter is set to 0x00, the NLME will issue an NLME-JOIN.confirm primitive with its status parameter set to INVALID_REQUEST. When a device that has not yet joined any network receives this primitive and the RejoinNetwork parameter is set to 0x01, the device will try to join the network specified by ExtendedPANId, and the NLME will then issue an MLME-ASSOCIATE.request. The CoordAddress parameter is set to an address determined according to the situation of the router. This primitive defines the initialization of the upper-layer device, allowing it to start a new ZigBee network with itself as the coordinator.
The BeaconOrder parameter in the code represents the order of the network beacon formed by the upper layer, and SuperframeOrder represents the order of the network superframe formed by the upper layer. If the BatteryLifeExtension parameter is TRUE, the NLME will request that the coordinator support the battery life extension mode; otherwise, the NLME will request that the coordinator not support it.
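For illustration, these formation parameters can be collected in a simple structure. The dataclass below is only a didactic representation; the actual NLME primitives are defined by the ZigBee specification, not by this code:

```python
from dataclasses import dataclass

@dataclass
class NetworkFormationParams:
    beacon_order: int              # order of the network beacon (upper layer)
    superframe_order: int          # order of the network superframe
    battery_life_extension: bool   # True: request coordinator BLE support

params = NetworkFormationParams(beacon_order=6, superframe_order=6,
                                battery_life_extension=False)
print(params)
```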
The dynamic data sources of the warp knitting machine are diverse, including the machine's warp let-off PLC, electric meters, and various sensors. If a semifunctional node receives this primitive, the NLME will send out an NLME-NETWORK-FORMATION.confirm primitive whose status parameter is set to INVALID_REQUEST; the network cannot be established in this case. If a coordinator-capable node receives this primitive, the device will be initialized as a coordinator. The wireless module is connected to the server through an RS232-level interface, and the serial port converter is connected to the host computer. The serial port converter adopts the six-in-one multifunction serial port module CP2102 from Technology Co., Ltd.; its USB, TTL, RS232, and RS485 levels can be switched with a switch to realize six serial conversion functions: USB to TTL, USB to 232, USB to 485, TTL to 232, TTL to 485, and 232 to 485. Figure 8 shows the conversion efficiency distribution of big data wireless modules.
Example Application
GPRS specifies four channel coding schemes, namely CS-1, CS-2, CS-3, and CS-4, with corresponding data rates of 9.05 kbps, 13.4 kbps, 15.6 kbps, and 21.4 kbps. The achievable rate depends on the wireless environment, and the CS-4 coding scheme has the highest carrier-to-interference (C/I) requirements. At present, GPRS has developed to support multi-coding and multislot transmission, and its maximum speed can reach 171 kbps.
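The quoted maximum of about 171 kbps is consistent with CS-4 coding applied on all eight TDMA timeslots, as the short check below shows:

```python
coding_rates_kbps = {"CS-1": 9.05, "CS-2": 13.4, "CS-3": 15.6, "CS-4": 21.4}

timeslots = 8                               # full multislot allocation
peak = timeslots * coding_rates_kbps["CS-4"]
print(f"peak GPRS rate: {peak:.1f} kbps")   # 171.2 kbps
```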
The system software structure consists of a data source, query engine, and application server. The specific structure of the data query framework is shown in the text, where each component can have multiple instances in the system. MySQL and smart big data serve as the connected data sources: MySQL is mainly used as a real-time query database, while smart big data serves as the data warehouse storing historical data. In the cluster, the query plan is executed by Presto. First, the data are converted from serial port data into IP data and then sent out through the GPRS transmission system. The processing control module sends data to the GPRS module through the RS232 serial port. The packet data are encapsulated by an SGSN and, after encapsulation, communicate with the gateway support node GGSN via the GPRS backbone network. Figure 9 shows the distribution of news and public opinion serial port data acceptance rates.
In this experiment, a GPRS long-distance wireless transmission module is selected. The module uses a design with a built-in protocol stack; there is no external chip, and hence the module has higher stability. It supports up to 4 network connections and can send the data received on the serial port to 4 servers at the same time. The module has a keep-alive mechanism to ensure that the network connection is not dropped when there are no heartbeat packets, and it supports remote configuration of parameters via AT commands sent by SMS. A commonly used method of measuring the distance between two nodes is based on the difference in arrival times. The sending node sends two signals with different propagation speeds at the same time; the receiving node calculates the difference between the arrival times of the two signals and then combines this difference with their propagation speeds to calculate the distance between the two nodes. When querying the same data table storing the data records, the system's time consumption increases with the amount of queried data. When the number of stored records differs but the number of queried records is the same, the system response time increases with the total number of storage tables.
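The two-signal ranging method reduces to a one-line formula: if the signals travel at speeds v_fast and v_slow and arrive dt seconds apart, then dt = d/v_slow - d/v_fast, so d = dt * v_fast * v_slow / (v_fast - v_slow). The sketch below uses illustrative speeds (radio plus ultrasound); the actual hardware in the experiment may differ:

```python
def tdoa_distance(dt_s: float, v_fast: float = 3.0e8,
                  v_slow: float = 343.0) -> float:
    """Distance from the arrival-time gap of two co-emitted signals."""
    return dt_s * v_fast * v_slow / (v_fast - v_slow)

# Since the radio signal arrives almost instantly, d is approximately
# dt * v_slow: a 29.2 ms gap corresponds to roughly 10 m.
print(f"{tdoa_distance(0.0292):.2f} m")   # ~10.02 m
```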
Conclusion
The article analyzes the specific sources of news and public opinion big data, the collection methods for these data sources, and various database storage technologies and, combining wireless sensor network technology, open-source big data processing technology, and distributed data storage technology, proposes solutions for news and public opinion big data. The article mainly works from the two aspects of acquisition and storage. In terms of data acquisition, application research on the sensor data acquisition network is carried out first; then the sensor data acquisition system is designed and implemented; finally, the system's application to news data acquisition and processing is tested experimentally. In terms of data storage, the article mainly compares database technologies for big data applications, designs the structure of the HBase data storage system as applied to news and public opinion sensor data storage, and conducts experimental tests of its storage performance. The work analyzed the actual needs of the project and the characteristics of news and public opinion Internet of Things big data and, combined with existing big data processing technology, designed a system that can be used for the quick query and analysis of news and public opinion Internet of Things big data.
The system is implemented and tested on the Presto framework, achieving centralized storage and rapid querying of massive news and public opinion detection data, as well as fit analysis and visual display based on historical data.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
There are no conflicts of interest in this article.
"Computer Science"
] |
Between Nonlinearities, Complexity, and Noises: An Application on Portfolio Selection Using Kernel Principal Component Analysis
This paper discusses the effects of introducing nonlinear interactions and noise-filtering to the covariance matrix used in Markowitz’s portfolio allocation model, evaluating the techniques’ performance on daily data from seven financial markets between January 2000 and August 2018. We estimated the covariance matrix by applying Kernel functions, and applied filtering following the theoretical distribution of the eigenvalues based on the Random Matrix Theory. The results were compared with the traditional linear Pearson estimator and robust estimation methods for covariance matrices. The results showed that noise-filtering yielded portfolios with significantly larger risk-adjusted profitability than its non-filtered counterpart for almost half of the tested cases. Moreover, we analyzed the improvements and setbacks of the nonlinear approaches over linear ones, discussing in which circumstances the additional complexity of nonlinear features seemed to predominantly add more noise or predictive performance.
Introduction
Finance can be defined as the research field that studies the management of value: for an arbitrary investor operating in the financial market, the value of the assets he/she chooses can be measured in terms of how profitable or risky they are. While individuals tend to pursue potentially larger return rates, the most profitable options often bring along higher levels of uncertainty as well, so that the risk-return relationship induces a trade-off over the preferences of economic agents, making them seek a combination of assets that offers maximum profitability as well as minimum risk: an efficient allocation of the resources that generates the most payoff/reward/value.
As pointed out in Miller [1], one of the main milestones in the history of finance was the mean-variance model of Nobel Prize laureate Harry Markowitz, a work regarded as the genesis of the so-called "Modern Portfolio Theory", in which the optimal portfolio choice was presented as the solution of a simple, constrained optimization problem. Furthermore, Markowitz [2]'s model shows the circumstances in which the levels of risk can be diminished through diversification, as well as the limits of this artifice, represented by a risk that investors can do nothing about and therefore must take when investing in the financial market.
While the relevance of Markowitz [2]'s work is unanimously praised, the best way to estimate its inputs - a vector of expected returns and a covariance matrix - is far from reaching a consensus. While the standard estimators are easy to obtain, recent works like Pavlidis et al. [3] and Hsu et al. [4] argue in favor of introducing nonlinear features to boost predictive power for financial variables over traditional parametric econometric methods, and show how novel approaches, such as machine-learning methods, can contribute to better forecasting performance. Additionally, many studies worldwide have found empirical evidence from real-world financial data that the underlying patterns of financial covariance matrices follow some stylized facts regarding the large proportion of "noise" in comparison to actually useful information, implying that the complexity of the portfolio choice problem could be largely reduced, possibly leading to more parsimonious models that provide better forecasts.
This paper focused on those questions, investigating whether the use of a nonlinear and nonparametric covariance matrix or the application of noise-filtering techniques can indeed help a financial investor to build better portfolios in terms of cumulative return and risk-adjusted measures, namely Sharpe and Sortino ratios. Moreover, we analyzed various robust methods for estimating the covariance matrix, and whether nonlinearities and noise-filtering managed to bring improvements to the portfolios' performance, which can be useful to the construction of portfolio-building strategies for financial investors. We tested different markets and compared the results, and discussed to which extent the portfolio allocation was done better using Kernel functions and "clean" covariance matrices.
The paper is structured as follows: Section 2 presents the foundations of risk diversification via portfolios, discussing the issues regarding high dimensionality in financial data, motivating the use of high-frequency data, as well as nonlinear predictors, regularization techniques, and the Random Matrix Theory. Section 3 describes the Markowitz [2] portfolio selection model, robust estimators for the covariance matrix, and the Principal Component Analysis for both linear and Kernel covariance matrices. Section 4 provides details on the empirical analysis and describes the collected data and chosen time periods, as well as the performance metrics and statistical tests for the evaluation of the portfolio allocations. Section 5 presents the performance of the obtained portfolios and discusses their implication in view of the financial theory. Finally, Section 6 presents the paper's conclusions, potential limitations to the proposed methods, and recommendations for future developments.
Portfolio Selection and Risk Management
In financial contexts, "risk" refers to the likelihood of an investment yielding a return different from the expected one [5]; thus, in a broad sense, risk does not only concern unfavorable outcomes (downside risk), but includes upside risk as well. Any fluctuation from the expected value of the return of a financial asset is viewed as a source of uncertainty, or "volatility", as it is more often called in finance.
A rational investor would seek to optimize his interests at all times, which can be expressed in terms of maximization of his expected return and minimization of his risk. Given that future returns are a random variable, there are many possible measures for its volatility; however, the most common measure for risk is the variance operator (second moment), as used in Markowitz [2]'s Modern Portfolio Theory seminal work, while expected return is measured by the first moment. This is equivalent to assuming that all financial agents follow a mean-variance preference, which is grounded in the microeconomic theory and has implications in the derivation of many important models in finance and asset pricing, such as the CAPM model [6][7][8], for instance.
The assumption of rationality implies that an "efficient" portfolio allocation is a choice of weights w over the assets available in the market such that the investor cannot increase his expected return without taking more risk, or, alternatively, cannot decrease his portfolio volatility without accepting a lower expected return. The curve of the efficient portfolio allocations in the risk versus expected return plane is known as the "efficient frontier". As shown in Markowitz [2], in order to achieve an efficient portfolio, the investor should diversify his/her choices, picking assets with minimal association (measured by covariances), such that the joint risks of the picked assets tend to cancel each other.
Therefore, for a set of assets with identical values of expected return $\mu$ and variance $\sigma^2$, choosing a convex combination of many of them will yield a portfolio with volatility smaller than $\sigma^2$, unless all chosen assets are perfectly correlated. Such effects of diversification can be seen statistically from the variance of a weighted sum of $p$ random variables,

$$\operatorname{Var}\Big(\sum_{i=1}^{p} w_i x_i\Big) = \sum_{i=1}^{p} w_i^2\,\operatorname{Var}(x_i) + \sum_{i \neq j} w_i w_j\,\operatorname{Cov}(x_i, x_j),$$

subject to the budget constraint $\sum_{i=1}^{p} w_i = 1$ (negative-valued weights represent short selling): the volatility of a generic portfolio $w_1 x_1 + w_2 x_2 + \dots + w_p x_p$ of same-risk assets will always diminish with diversification.
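A quick numerical illustration of this formula (not from the paper): for p assets with identical variance sigma^2, pairwise correlation rho, and equal weights w_i = 1/p, the portfolio variance is sigma^2 * (1/p + rho * (p - 1)/p), which decays toward the non-diversifiable floor rho * sigma^2 as p grows, anticipating the systematic/idiosyncratic decomposition discussed next:

```python
def portfolio_variance(p: int, sigma2: float = 0.04, rho: float = 0.3) -> float:
    """Equal-weight portfolio of p same-risk, equally correlated assets."""
    return sigma2 * (1.0 / p + rho * (p - 1) / p)

for p in (1, 2, 10, 100, 1000):
    print(f"p={p:5d}  variance={portfolio_variance(p):.5f}")
# Variance falls from 0.04000 toward 0.01200 = rho * sigma2: the
# idiosyncratic part vanishes, the systematic part cannot be diversified.
```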
The component of risk which can be diversified, corresponding to the joint volatility between the chosen assets, is known as "idiosyncratic risk", while the non-diversifiable component, which represents the uncertainties associated with the financial market itself, is known as "systematic risk" or "market risk". The idiosyncratic risk is specific to a company, industry, market, economy, or country, meaning it can be eliminated by simply investing in different assets (diversification) that will not all be affected in the same way by market events. On the other hand, the market risk is associated with factors that affect all companies' assets, such as macroeconomic indicators and political scenarios; it is thus not specific to a particular company or industry and cannot be eliminated or reduced through diversification.
Although there are many influential portfolio selection models that arose after Markowitz's classic work, such as the Treynor-Black model [9], the Black-Litterman model [10], as well as advances in the so-called "Post-Modern Portfolio Theory" [11,12] and machine-learning techniques [13][14][15], Markowitz [2] remains one of the most influential works in finance and is still widely used as a benchmark for alternative portfolio selection models, due to its mathematical simplicity (it uses only a vector of expected returns and a covariance matrix as inputs) and ease of interpretation. Therefore, we used this model as a baseline to explore the potential improvements that arise with the introduction of nonlinear interactions and covariance matrix filtering through the Random Matrix Theory.
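For reference, the baseline is straightforward to implement. The sketch below computes the global minimum-variance portfolio w = Sigma^{-1} 1 / (1' Sigma^{-1} 1) from the plain linear (Pearson) sample covariance; the returns are simulated, so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 4))   # 500 days, 4 assets

sigma = np.cov(returns, rowvar=False)               # linear estimator
ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)                    # Sigma^{-1} 1
w /= w.sum()                                        # enforce sum(w) = 1

print("weights:", np.round(w, 3))
print("portfolio variance:", float(w @ sigma @ w))
```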
Nonlinearities and Machine Learning in Financial Applications
Buonocore et al. [16] present two key elements that define the complexity of financial time series: the multi-scaling property, which refers to the dynamics of the series over time, and the structure of cross-dependence between time series, which reflects the interactions among the various financial assets and economic agents. In a financial context, one can view these two complexity elements as systematic risk and idiosyncratic risk, respectively: precisely the two sources of risk that motivate risk diversification via portfolio allocation, as discussed by the Modern Portfolio Theory.
It is well-known that systematic risk cannot be diversified. So, in terms of risk management and portfolio selection, the main issue is to pick assets with minimal idiosyncratic risk, which in turn, naturally, demands a good estimation for the cross-interaction between the assets available in the market, namely the covariance between them.
The non-stationarity of financial time-series is a stylized fact which is well-known by scholars and market practitioners, and this property has relevant implications in forecasting and identifying patterns in financial analysis. Specifically concerning portfolio selection, the non-stationary behavior of stock prices can induce major drawbacks when using the standard linear Pearson correlation estimator in calculating the covariances matrix. Livan et al. [17] provides empirical evidence of the limitations of the traditional linear approach established in Markowitz [2], pointing out that the linear estimator fails to accurately capture the market's dynamics over time, an issue that is not efficiently solved by simply using a longer historical series. The sensitivity of Markowitz [2]'s model to its inputs is also discussed in Chen and Zhou [18], which incorporates the third and fourth moments (skewness and kurtosis) as additional sources of uncertainty over the variance. Using multi-objective particle swarm optimization, robust efficient portfolios were obtained and shown to improve the expected return in comparison to the traditional mean-variance approach. The relative attractiveness of different robust efficient solutions to different market settings (bullish, steady, and bearish) was also discussed.
Concerning the dynamical behavior of financial systems, Bonanno et al. [19] proposed a generalization of the Heston model [20], which is defined by two coupled stochastic differential equations (SDEs) representing the log of the price levels and the volatility of financial stocks, and provided a solution for option pricing that incorporated improvements over the classical Black-Scholes model [21] regarding financial stylized facts, such as the skewness of the returns and the excess kurtosis. The extension proposed by Bonanno et al. [19] was the introduction of a random walk with cubic nonlinearity to replace the log-price SDE of Heston's model. Furthermore, the authors analyzed the statistical properties of escape time as a measure of the stabilizing effect of the noise in the market dynamics. Applying this extended model, Spagnolo and Valenti [22] tested for daily data of 1071 stocks traded at the New York Stock Exchange between 1987 and 1998, finding that the nonlinear Heston model approximates the probability density distribution of escape times better than the basic geometric Brownian motion model and two well-known volatility models, namely GARCH [23] and the original Heston model [20]. In this way, the introduction of a nonlinear term allowed for a better understanding of a measure of market instability, capturing embedded relationships that linear estimators fail to consider. Similarly, linear estimators for covariance ignore potential associations in higher-dimensional interactions, such that even assets with zero covariance may actually have a very heavy dependence on nonlinear domains.
As discussed in Kühn and Neu [24], the states of a market can be viewed as attractors resulting from the dynamics of nonlinear interactions between the financial variables, such that the introduction of nonlinearities also has potential implications for financial applications, such as risk management and derivatives pricing. For instance, Valenti et al. [25] pointed out that volatility is a monotonic indicator of financial risk, while many large oscillations in a financial market (both upwards and downwards) are preceded by long periods of relatively small levels of volatility in the assets' returns (the so-called "volatility clustering"). In this sense, the authors proposed the mean first hitting time (defined as the average time until a stock return undergoes a large variation-positive or negative-for the first time) as an indicator of price stability. In contrast with volatility, this measure of stability displays nonmonotonic behavior that exhibits a pattern resembling the Noise Enhanced Stability (NES) phenomenon, observed in a broad class of systems [26][27][28]. Therefore, using the conventional volatility as a measure of risk can lead to its underestimation, which in turn can lead to bad allocations of resources or bad financial managerial decisions.
In light of evidence that not all of the noisy information in the covariance matrix is due to its non-stationary behavior [29], many machine-learning methods, such as Support Vector Machines [30], Gaussian processes [31], and deep learning [32], have been discussed in the literature, showing that the introduction of nonlinearities can provide a better picture of the complex cross-interactions between the variables and generate better predictions and strategies for the financial markets. Similarly, Almahdi and Yang [33] proposed a portfolio trading algorithm using recurrent reinforcement learning, using the expected maximum drawdown as a downside risk measure and testing for different sets of transaction costs. The authors also proposed an adaptive rebalancing extension, reported to have a quicker reaction to transaction cost variations and which managed to outperform hedge fund benchmarks.
Paiva et al. [34] proposed a fusion approach of a Support Vector Machine and the mean-variance optimization for portfolio selection, testing for data from the Brazilian market and analyzing the effects of brokerage and transactions costs. Petropoulos et al. [35] applied five machine learning algorithms (Support Vector Machine, Random Forest, Deep Artificial Neural Networks, Bayesian Autoregressive Trees, and Naïve Bayes) to build a model for FOREX portfolio management, combining the aforementioned methods in a stacked generalization system. Testing for data from 2001 to 2015 of ten currency pairs, the authors reported the superiority of machine learning models in terms of out-of-sample profitability. Moreover, the paper discussed potential correlations between the individual machine learning models, providing insights concerning their combination to boost the overall predictive power. Chen et al. [36] generalized the idea of diversifying for individual assets for investment and proposed a framework to construct portfolios of investment strategies instead. The authors used genetic algorithms to find the optimal allocation of capital into different strategies. For an overview of the applications of machine learning techniques in portfolio management contexts, see Pareek and Thakkar [37].
Regarding portfolio selection, Chicheportiche and Bouchaud [38] developed a nested factor multivariate model to capture the nonlinear interactions in stock returns, as well as the well-known stylized facts and empirically detected copula structures. Testing for the S&P 500 index for three time periods (before, during, and after the financial crisis), the paper showed that the optimal portfolio constructed by the developed model had a significantly lower out-of-sample risk than the one built using linear Principal Component Analysis, whilst the in-sample risk was practically the same; this is positive evidence towards the introduction of nonlinearities in portfolio selection and asset allocation models. Montenegro and Albuquerque [39] applied a local Gaussian correlation to model the nonlinear dependence structure of the dynamic relationship between the assets. Using a subset of companies from the S&P 500 Index between 1992 and 2015, the portfolio generated by the nonlinear approach managed to outperform the Markowitz [2] model in more than 60% of the validation bootstrap samples. In regard to the effects of dimensionality reduction on the performance of portfolios generated from mean-variance optimization, Tayalı and Tolun [40] applied Non-negative Matrix Factorization (NMF) and Non-negative Principal Components Analysis (NPCA) for data from three indexes of the Istanbul Stock Market. Optimal portfolios were constructed based on Markowitz [2]'s mean-variance model. Performing backtesting for 300 tangency portfolios (maximum Sharpe Ratio), the authors showed that the portfolios' efficiency was improved in both the NMF and NPCA approaches over the unreduced covariance matrix.
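As a generic sketch of the nonlinear ingredient used throughout these studies (the exact kernel-covariance construction of this paper is specified in its methodology section), Kernel PCA can extract a few nonlinear factors from a matrix of returns; the data, kernel choice, and component count below are arbitrary:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, size=(500, 10))    # 500 days, 10 assets

kpca = KernelPCA(n_components=3, kernel="rbf")   # nonlinear interactions
factors = kpca.fit_transform(returns)            # latent factor series

print(factors.shape)                             # (500, 3)
```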
Musmeci et al. [41] incorporated a metric of persistence in the correlation structure between financial assets and argued that such persistence can be useful for anticipating variations in market volatility and adapting quickly to them. Testing for daily prices of US and UK stocks between 1997 and 2013, the correlation structure persistence model yielded better forecasts than predictors based exclusively on past volatility. Moreover, the paper discusses the effect of the "curse of dimensionality" that arises in financial data when a large number of assets is considered, an issue that traditional econometric methods often fail to deal with. In this regard, Hsu et al. [4] argue in favor of the use of nonparametric approaches and machine learning methods in traditional financial economics problems, given their better empirical predictive power, as well as providing a broader view of well-established research topics in the finance agenda beyond classic econometrics.
Regularization, Noise Filtering, and Random Matrix Theory
A major difficulty in introducing nonlinearities is keeping them under control, as they tend to boost the model's complexity significantly, both in terms of theoretical implications and of the computational power needed to actually perform the calculations. Besides often being difficult to interpret, nonlinear interactions may bring with them, alongside potentially better explanatory power, a large amount of noisy information: an increase in complexity that is not compensated by better forecasts or theoretical insights, but that instead "pollutes" the model with potentially useless data.
Bearing in mind this difficulty, regularization is essential to cope with the complexity that comes along with high dimensionality and nonlinear interactions, especially in financial applications, in which the data-generating processes tend to be highly chaotic. While it is important to introduce new sources of potentially useful information by boosting the model's complexity, being able to filter that information, discard the noise, and retain only the "good" information is a big and relevant challenge. Studies like Massara et al. [42] discuss the importance of scalability and information filtering in light of the advent of the "Big Data Era", in which the boost in data availability and abundance has created the need to use those data efficiently and filter out redundant ones.
Barfuss et al. [43] emphasized the need for parsimonious models, using information filtering networks to build sparse-structure models that showed similar predictive performance but much smaller computational processing time compared to a state-of-the-art sparse graphical model baseline. Similarly, Torun et al. [44] discussed the eigenfiltering of measurement noise for hedged portfolios, showing that empirically estimated financial correlation matrices contain high levels of intrinsic noise, and proposed several methods for filtering it in risk engineering applications.
In financial contexts, Ban et al. [45] discussed the effects of performance-based regularization in portfolio optimization for mean-variance and mean-conditional Value-at-Risk problems, showing evidence of its superiority over traditional optimization and regularization methods in terms of diminishing the estimation error and shrinking the model's overall complexity.
Concerning the effects of high dimensionality in finance, Kozak et al. [46] tested many well-established asset pricing factor models (including the CAPM and the Fama-French five-factor model), introducing nonlinear interactions between 50 anomaly characteristics and 80 financial ratios up to the third power (i.e., all cross-interactions between the features of first, second, and third degrees were included as predictors, totaling models with 1375 and 3400 candidate factors, respectively). In order to shrink the complexity of the model's high dimensionality, the authors applied dimensionality reduction and regularization techniques with ℓ1 and ℓ2 penalties to increase the model's sparsity. The results showed that a very small number of principal components was able to capture almost all of the out-of-sample explanatory power, resulting in a much more parsimonious and easy-to-interpret model; moreover, introducing an additional regularized principal component did not hinder the model's sparsity, but did not improve predictive performance either.
Depending on the "noisiness" of the data, the estimation of the covariances can be severely hindered, potentially leading to bad portfolio allocation decisions: if the covariances are overestimated, the investor may give up less risky asset combinations or accept a lower expected profitability; if the covariances are underestimated, the investor bears a higher risk than the level they were willing to accept, and their portfolio choice may be non-optimal in terms of risk and return. Livan et al. [17] discussed the impact of measurement noise on correlation estimates and the desirability of filtering and regularization techniques to reduce the noise in empirically observed correlation matrices.
A popular approach for the noise elimination of financial correlation matrices is Random Matrix Theory, which studies the properties of random variables in matrix form, in particular the density and behavior of their eigenvalues. Its applications span many fields, such as statistical physics, dynamical systems, optimal control, and multivariate analysis.
Regarding applications in quantitative finance, Laloux et al. [47] compared the empirical eigenvalue density of major stock market data with its theoretical prediction under the assumption that the covariance matrix is random and follows a Wishart distribution (if a vector of random variables follows a multivariate Gaussian distribution, then its sample covariance matrix follows a Wishart distribution [48]). The results showed that over 94% of the eigenvalues fell within the theoretical bounds (defined in Edelman [48]), implying that less than 6% of the eigenvalues contain actually useful information; moreover, the largest eigenvalue is significantly higher than the theoretical upper bound, which is evidence that the covariance matrix estimated via Markowitz is composed of a few very informative principal components and many low-valued eigenvalues dominated by noise. Nobi et al. [49] tested daily data of 20 global financial indexes from 2006 to 2011 and also found that most eigenvalues fell into the theoretical range, suggesting a strong presence of noise and few eigenvectors carrying highly relevant information; this effect was even more prominent during a financial crisis. Although studies like El Alaoui [50] found a larger percentage of informative eigenvalues, the reported results show that the wide majority of principal components is still dominated by noisy information.
Plerou et al. [51] found similar results, concluding that the top eigenvalues of the covariance matrices were stable in time and that the distribution of their eigenvector components displayed systematic deviations from the thresholds predicted by Random Matrix Theory. Furthermore, the paper pointed out that the top eigenvalues corresponded to an influence common to all stocks, representing the market's systematic risk, and that their respective eigenvectors showed a prominent presence of central business sectors.
Sensoy et al. [52] tested 87 benchmark financial indexes between 2009 and 2012 and also observed that the largest eigenvalue was more than 14 times larger than the Random Matrix Theory upper bound, while less than 7% of the eigenvalues exceeded this threshold. Moreover, the paper identified "central" elements that define the "global financial market" and analyzed the effects of the 2008 financial crisis on its volatility and correlation levels, concluding that the global market's dependence level generally increased after the crisis, making diversification less effective. Many other studies identified similar patterns in different financial markets and time periods [53,54], evidencing the high levels of noise in correlation matrices and the relevance of filtering such noise for financial analysis. The effects of covariance matrix cleaning using Random Matrix Theory in an emerging market were discussed in Eterovic and Eterovic [55], who analyzed 83 stocks from the Chilean financial market between 2000 and 2011 and found that the efficiency of portfolios generated using Markowitz [2]'s model was largely improved.
Analogously, Eterovic [56] analyzed the effects of covariance matrix filtering through Random Matrix Theory using data from the stocks of the FTSE 100 Index between 2000 and 2012, confirming the distribution pattern of the eigenvalues of the covariance matrix, with the majority of principal components inside the bounds of the Marčenko-Pastur distribution while the top eigenvalue was much larger than the remaining ones; in particular, the discrepancy of the top eigenvalue was even larger during the crisis period. Moreover, Eterovic [56] also found that the performance improvement of portfolios generated by a filtered covariance matrix over a non-filtered one was strongly significant, evidencing the ability of the filtered covariance matrix to adapt to sudden volatility peaks.
Bouchaud and Potters [57] summarized the potential applications of Random Matrix Theory in financial problems, focusing on the cleaning of financial correlation matrices and the asymptotic behavior of their eigenvalues, whose density was enunciated in Marčenko and Pastur [58], especially the largest one, which is described by the Tracy-Widom distribution [59]. The paper presents an empirical application using daily data of US stocks between 1993 and 2008, observing the correlation matrix of the 500 most liquid stocks in a sliding window of 1000 days at intervals of 100 days, yielding 26 sample eigenvalue distributions. On average, the largest eigenvalue represents 21% of the sum of all eigenvalues, a stylized fact regarding the spectral properties of financial correlation matrices, as discussed in Akemann et al. [60]. Similar results were found in Conlon et al. [61], which analyzed the effects of "cleaning" the covariance matrix on better predictions of portfolio risk, which may aid investors in picking the best combination of hedge funds to avoid risk.
In financial applications, the covariance matrix is also important in multi-stage optimization problems, whose dimensionality often grows exponentially as the number of stages, financial assets, or risk factors increases, thus demanding approximations using simulated scenarios to circumvent the curse of dimensionality [62]. In this framework, an important requirement for the simulated scenarios is the absence of arbitrage opportunities, a condition which can be enforced through resampling or by increasing the number of scenarios [63]. Alternatively, Ref. [64] defined three classes of arbitrage propensity and suggested a transformation of the covariance matrix's Cholesky decomposition that avoids the possibility of arbitrage in scenarios where it could theoretically exist. In this way, applying Random Matrix Theory to this method can improve the simulated scenarios in stochastic optimization problems and consequently improve the quality of risk measurement and asset allocation decision-making.
Burda et al. [65] provided a mathematical derivation of the relationship between the sample correlation matrix calculated with conventional Pearson estimates and its population counterpart, discussing how the dependency structure of the spectral moments can be used to filter out the noisy eigenvalues of the correlation matrix's spectrum. In fact, a reasonable choice of a 500 × 500 covariance matrix (such as using S&P 500 data for portfolio selection) induces a very high level of noise on top of the signal coming from the eigenvalues of the population covariance matrix. Laloux et al. [66] used daily data of the S&P 500 between 1991 and 1996 and found that the covariance matrix estimated by the classical Markowitz model strongly underestimates portfolio risk in a second time period (approximately three times lower than the actual values), a difference that is significantly smaller for a cleaned correlation matrix, evidencing the high level of noise and the instability of the market dependency structure over time.
In view of the importance of controlling the complexity introduced alongside nonlinearities, in this paper we sought to verify whether the stylized behavior of the top eigenvalues persists after introducing nonlinearities into the covariance matrix, as well as the effect of cleaning the matrix's noise on portfolio profitability and consistency over time, in order to obtain insights into the cost-benefit relationship between using higher degrees of nonlinearity to estimate the covariance between financial assets and the out-of-sample performance of the resulting portfolios.
Mean-Variance Portfolio Optimization
Let $a_1, a_2, \dots, a_p$ be the $p$ available financial assets and $r_{a_i}$ the return vector of the $i$-th asset $a_i$, where the expected return vector and the covariance matrix are defined, respectively, as $\mu = (E[r_{a_1}], \dots, E[r_{a_p}])$ and $\Sigma = \operatorname{Cov}(r_{a_1}, \dots, r_{a_p})$. Markowitz [2]'s mean-variance portfolio optimization is basically a quadratic programming constrained optimization problem whose optimal solution $w = (w_1, \dots, w_p)$ represents the weights allocated to each of the $p$ assets, such that the portfolio is $P = w_1 a_1 + w_2 a_2 + \dots + w_p a_p$. Algebraically, the expected return and the variance of the resulting portfolio $P$ are
$$E[P] = w^{\top}\mu, \qquad \sigma_P^2 = w^{\top}\Sigma\, w.$$
With a constraint disallowing short selling, the quadratic optimization problem is defined as
$$\text{Minimize: } w^{\top}\Sigma\, w \quad \text{subject to} \quad w^{\top}\mu = R, \quad \sum_{i=1}^{p} w_i = 1, \quad w_i \ge 0,$$
which yields the weights giving the least risky portfolio with expected return equal to $R$; that is, the portfolio $P$ that lies on the efficient frontier for $E[P] = R$. The dual form of this problem has an analogous interpretation: instead of minimizing the risk at a given level of expected return, it maximizes the expected return at a given level of tolerated risk. Markowitz [2]'s model is very intuitive, easy to interpret, and enjoys huge popularity to this very day, making it one of the main baseline models for portfolio selection. Moreover, it has only two inputs, which are fairly easy to estimate. Nevertheless, there are many different ways of doing so, which has motivated many studies to propose alternative ways of estimating those inputs in search of potentially better portfolios. The famous Black and Litterman [10] model, for example, estimates the expected return vector by combining market equilibrium with the expectations of the investors operating in that market. In this paper, we focus on alternative ways to estimate the covariance matrix, and on whether features like nonlinearities (Kernel functions) and noise filtering (Random Matrix Theory) can generate more profitable portfolio allocations.
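To make the optimization concrete, the following minimal sketch solves the no-short-selling problem above with an off-the-shelf quadratic solver; the synthetic returns, the target return R, and all variable names are illustrative assumptions, not the paper's actual data or code.

```python
# Minimal sketch of Markowitz's constrained mean-variance problem (SLSQP).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(1000, 5))   # T x p synthetic daily returns
mu = returns.mean(axis=0)
sigma = np.cov(returns, rowvar=False)

R = 0.5 * (mu.mean() + mu.max())   # a feasible target expected return
p = len(mu)

constraints = [
    {"type": "eq", "fun": lambda w: w @ mu - R},     # E[P] = R
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},  # fully invested
]
bounds = [(0.0, 1.0)] * p                            # no short selling: w_i >= 0

res = minimize(lambda w: w @ sigma @ w,              # minimize w' Σ w
               np.full(p, 1.0 / p), method="SLSQP",
               bounds=bounds, constraints=constraints)
print("optimal weights:", res.x.round(4))
print("portfolio variance:", res.fun)
```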
Covariance Matrices
While Pearson's covariance estimator is consistent, studies like Huo et al. [67] pointed out that its estimates can be heavily influenced by outliers, which in turn leads to potentially suboptimal portfolio allocations. In this regard, the authors analyzed the effect of introducing robust estimation of covariance matrices, with empirical experiments showing that robust covariance matrices generated portfolios with higher profitability. Zhu et al. [68] found similar results, proposing a high-dimensional covariance estimator that is less prone to outliers and leads to more well-diversified portfolios, often with a higher alpha.
Bearing in mind these findings, we also applied KPCA and noise filtering to several robust covariance estimators, in order to further investigate the effect of introducing nonlinearities and eliminating noisy eigenvalues on portfolio performance. Furthermore, we intended to check the relative effects of these modifications on Pearson and robust covariance matrices, and whether robust estimators remained superior under such conditions.
In addition to the Pearson covariance matrix $\Sigma = \frac{1}{T} X^{\top} X$, where the $i$-th column $x_i$ of $X$ is the return vector (centered at zero) of the $i$-th asset and $T$ is the number of in-sample time periods, in this paper we considered three robust covariance estimators: the minimum covariance determinant (henceforth MCD) method [69], as estimated by the FASTMCD algorithm [70]; the Reweighted MCD (RMCD), following [71]'s algorithm; and the Orthogonalized Gnanadesikan-Kettenring (henceforth OGK) pairwise estimator [72], following the algorithm of [73].
The MCD method seeks the subset of observations whose sample covariance has minimum determinant, making the estimator less sensitive to non-persistent extreme events, such as an abrupt oscillation of price levels that quickly returns to normal. Cator and Lopuhaä [74] demonstrated statistical properties of this estimator, such as consistency and asymptotic convergence to the Gaussian distribution. The Reweighted MCD estimator follows a similar idea, assigning weights to each observation and computing the covariance estimates from the observations within a confidence interval, making the estimates even less sensitive to outliers and noisy datasets, as well as boosting the finite-sample efficiency of the estimator, as discussed in Croux and Haesbroeck [75]. Finally, the OGK approach takes univariate robust estimators of location and scale, constructs a covariance matrix based on those estimates, and replaces the eigenvalues of that matrix with "robust variances", which are updated sequentially with weights based on a confidence-interval cutoff.
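As a hedged illustration of why robust estimators matter, the sketch below contrasts the Pearson sample covariance with scikit-learn's FASTMCD-based MinCovDet on data contaminated with a brief burst of outliers; the contamination scheme and all parameter values are invented for the example.

```python
# Compare the Pearson covariance with a robust MCD estimate under outliers.
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)
X[:10] *= 15  # a brief burst of extreme observations, as after a price shock

pearson = EmpiricalCovariance().fit(X)
mcd = MinCovDet(support_fraction=0.75, random_state=1).fit(X)

print("Pearson:\n", pearson.covariance_.round(2))   # inflated by the outliers
print("MCD:\n", mcd.covariance_.round(2))           # close to the true matrix
```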
Principal Component Analysis
Principal component analysis (henceforth PCA) is a technique for dimensionality reduction introduced by [76] which seeks to extract the important information from the data and express it as a set of new orthogonal variables called principal components, given that the independent variables of a dataset are generally correlated in some way. Each principal component is a linear combination of the original variables, in which the coefficients reflect the importance of each variable to the component. By definition, the sum of all eigenvalues equals the total variance, as the eigenvalues represent amounts of observed information; each eigenvalue therefore represents the variance explained by the $i$-th principal component $PC_i$, such that its value reflects the proportion of information retained in the respective eigenvector, and thus the eigenvalues are used to determine how many factors should be retained.
In a scenario with $p$ independent variables, if the eigenvalue distribution is assumed to be uniform, each eigenvalue contributes $1/p$ of the model's overall explanatory power. Therefore, taking a number $k < p$ of principal components able to explain more than $k/p$ of the total variance can be regarded as a "gain" in terms of useful information retention and noise elimination. In the portfolio selection context, Kim and Jeong [77] used PCA to decompose the correlation matrix of 135 stocks traded on the New York Stock Exchange (NYSE). Typically, the largest eigenvalue is considered to represent a market-wide effect that influences all stocks [78-81].
Consider $\Sigma$ as the covariance matrix associated with the random vector $X = [X_1, X_2, \dots, X_p]$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p \ge 0$, where the rotation of the axes in $\mathbb{R}^p$ yields the linear combinations
$$Y_i = Q_i^{\top} X = q_{i1} X_1 + q_{i2} X_2 + \dots + q_{ip} X_p, \quad i = 1, \dots, p,$$
where the $Q_i$ are the eigenvectors of $\Sigma$. Thus, the first principal component $Y_1$ is the projection in the direction in which the variance of the projection is maximized, and we obtain $Y_1, Y_2, \dots, Y_p$ as orthonormal projections with maximum variability.
To obtain the associated eigenvectors, we solved $\det(\Sigma - \lambda I) = 0$ to obtain the diagonal matrix composed of the eigenvalues. The variance of the $i$-th principal component of $\Sigma$ is equal to its $i$-th eigenvalue $\lambda_i$. By construction, the principal components are pairwise orthogonal; that is, the covariance between the eigenvectors is $\operatorname{cov}(Q_i X, Q_j X) = 0$ for $i \ne j$. Algebraically, the $i$-th principal component $Y_i$ can be obtained by solving the following maximization for $a_i$ [82]:
$$\max_{a_i} \operatorname{Var}(a_i^{\top} X) \quad \text{subject to} \quad a_i^{\top} a_i = 1, \quad a_i^{\top} a_j = 0 \;\; \forall j < i.$$
In the field of dimensionality reduction, entropy-based distance metrics have also been investigated: [83] developed kernel entropy component analysis (KECA) for data transformation and dimensionality reduction, an entropy-based extension of kernel PCA, and [84] shows that, in a face recognition application based on Rényi entropy components, certain eigenvalues and their corresponding eigenvectors contribute more to the entropy estimate than others, since the terms depend on different eigenvalues and eigenvectors.
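The eigendecomposition underlying PCA can be sketched in a few lines; here the synthetic data, the induced common factor, and all names are illustrative, and the point is simply reading off each eigenvalue's share of the total variance.

```python
# PCA via eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))
X[:, 1:] += 0.8 * X[:, [0]]                   # induce a "market-like" common factor
sigma = np.cov(X, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(sigma)      # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # sort descending

explained = eigvals / eigvals.sum()           # each eigenvalue's variance share
print("variance explained per component:", explained.round(3))
print("first PC loadings:", eigvecs[:, 0].round(3))
```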
Kernel Principal Component Analysis and Random Matrix Theory
Let $X$ be a $T \times p$ matrix, with $T$ the number of observations and $p$ the number of variables, and let $\Sigma$ be the $p \times p$ covariance matrix. The spectral decomposition of $\Sigma$ is given by
$$\Sigma = Q \Lambda Q^{-1},$$
with $\lambda_i \ge 0$ the eigenvalues (the diagonal entries of $\Lambda$) and $Q$ the matrix of eigenvectors.
If the entries of the matrix $X$ are normalized random values generated by a Gaussian distribution, then as $T \to \infty$ and $p \to \infty$ with $\Psi = T/p \ge 1$, the eigenvalues of the matrix $\Sigma$ follow the probability density function [61]
$$\rho(\lambda) = \frac{\Psi}{2\pi} \frac{\sqrt{(\lambda_{\max} - \lambda)(\lambda - \lambda_{\min})}}{\lambda}, \qquad (3)$$
where $\lambda_{\max}$ and $\lambda_{\min}$ are the bounds given by
$$\lambda_{\max,\min} = \left(1 \pm \sqrt{1/\Psi}\right)^{2}. \qquad (4)$$
This result basically states that the eigenvalues of a purely random matrix tend to fall inside the theoretical boundaries of distribution (3); thus, eigenvalues larger than the upper bound are expected to contain useful information about an arbitrary matrix, while the noisy information is dispersed among the other eigenvalues, whose behavior resembles that of the eigenvalues of a matrix containing no information whatsoever.
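A quick numerical check of Equation (4) is sketched below: for a pure-noise Gaussian data matrix, essentially no eigenvalue of the sample correlation matrix should escape the Marčenko-Pastur band (up to finite-size effects). The dimensions T and p are arbitrary choices for the illustration.

```python
# Count sample eigenvalues escaping the Marčenko-Pastur noise band.
import numpy as np

T, p = 2000, 200                      # observations, assets (Ψ = T/p = 10)
psi = T / p
lam_min = (1 - np.sqrt(1 / psi)) ** 2
lam_max = (1 + np.sqrt(1 / psi)) ** 2

rng = np.random.default_rng(3)
X = rng.normal(size=(T, p))           # i.i.d. Gaussian returns: no signal at all
corr = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

informative = eigvals[eigvals > lam_max]
print(f"noise band: [{lam_min:.3f}, {lam_max:.3f}]")
print(f"eigenvalues above the band: {informative.size} of {p}")
```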
There are many applications of Random Matrix Theory (RMT) in the financial context. Ref. [85] used RMT to filter noise from the data before modeling the covariance matrix of assets in Asset Pricing Theory models with a Bayesian approach; the posterior distribution was fitted to a Wishart distribution using MCMC methods.
The noise-filtering procedures that RMT proposes for dispersion matrices require careful use in a financial context. The reason lies in the "stylized facts" present in this type of data, such as the logarithmic transformations applied in the attempt to obtain symmetric return distributions and the presence of extreme values. The work of [86] deals with these problems and uses Tyler's robust M-estimator [87] to estimate the dispersion matrix, and then identifies the non-random part carrying the relevant information via RMT using the bounds of [58].
The covariance matrix $\Sigma$ can be factored as
$$\Sigma = Q \Lambda Q^{-1}, \qquad (5)$$
where $\Lambda$ is a diagonal matrix composed of the $p$ eigenvalues $\lambda_i \ge 0$, $i = 1, 2, \dots, p$, and each of the $p$ columns $q_i$ of $Q$ is the eigenvector associated with the $i$-th eigenvalue $\lambda_i$. The idea is to perform the decomposition of $\Sigma$ following Equation (5), filter out the eigenvalues that fall inside the boundaries postulated in Equation (4), reconstruct $\Sigma$ by multiplying the filtered eigenvalue matrix back with the eigenvector matrices, and then use the filtered matrix as input to Markowitz [2]'s model. Eigenvalues smaller than the upper bound of Equation (4) were considered "noisy", while eigenvalues larger than the upper bound were considered "non-noisy". For the eigenvalue matrix filtering, we kept all non-noisy eigenvalues and replaced the remaining noisy ones with their average, in order to preserve stability (positive-definiteness) and keep the matrix's trace fixed, following Sharifi et al. [88] and Conlon et al. [61].
For the eigenvalue matrix filtering, we maintained all non-noisy eigenvalues in $\Lambda$ and replaced the remaining noisy ones $\lambda_i^{noise}$ by their average $\bar{\lambda}^{noise}$:
$$\bar{\lambda}^{noise} = \frac{1}{n_{noise}} \sum_{i=1}^{n_{noise}} \lambda_i^{noise}. \qquad (6)$$
After the filtering process, we multiplied the filtered eigenvalue matrix back to yield the "clean" covariance matrix
$$\Sigma^{*} = Q \Lambda^{*} Q^{-1}, \qquad (7)$$
where $\Lambda^{*}$ is a diagonal matrix composed of the cleaned eigenvalues. The nonlinear estimation of the covariance matrix was achieved by means of a Kernel function, defined as
$$\kappa(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle, \qquad (8)$$
where $\phi : \mathbb{R}^p \to \mathbb{R}^q$, $p < q$, maps the original data to a higher-dimensional space, which can even be infinite-dimensional; the use of the Kernel function avoids the need to explicitly compute the functional form of $\phi(x)$, since $\kappa$ computes the inner product of $\phi$ directly. This is known as the kernel trick. The Kernel function can thus circumvent the problem of high dimensionality induced by $\phi(x)$ without requiring its explicit functional form; instead, all nonlinear interactions between the original variables are synthesized into a real scalar. Since the inner product is a similarity measure in Hilbert spaces, the Kernel function can be seen as a way to measure the "margin" between classes in high-dimensional (or even infinite-dimensional) spaces.
Applying the Kernel function to the linearly estimated covariance structure allows one to introduce a large number of nonlinear interactions in the original data and transforms $\Sigma$ into a Kernel covariance matrix:
$$\Sigma_{\kappa} = \left[\kappa(x_i, x_j)\right]_{i,j = 1, \dots, p}. \qquad (9)$$
In this paper, we tested the polynomial and Gaussian Kernels as $\kappa$; both are widely used functions in the machine learning literature. The polynomial Kernel,
$$\kappa(x_i, x_j) = \left(x_i^{\top} x_j + d\right)^{q}, \qquad (10)$$
has a concise functional form and is able to incorporate all cross-interactions between the explanatory variables that generate monomials of degree less than or equal to a predefined $q$. This paper considered polynomial Kernels of degrees 2, 3, and 4 ($q = 2, 3, 4$). Note that the polynomial Kernel with $q = 1$ and $d = 0$ precisely yields the Pearson linear covariance matrix, so the polynomial Kernel covariance matrix is indeed a generalization of the former. The Gaussian Kernel is the generalization of the polynomial Kernel for $q \to \infty$, and is one of the most widely used Kernels in the machine learning literature; it enjoys huge popularity in various fields, since it induces an infinite-dimensional feature space while depending on only one scattering parameter $\sigma$. The expression of the Gaussian Kernel is given by
$$\kappa(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^{2}}{2\sigma^{2}}\right). \qquad (11)$$
The Kernel Principal Component Analysis (henceforth KPCA) is an extension of linear PCA applied to the Kernel covariance matrix: the diagonalization problem returns linear combinations in the Kernel function's feature space $\mathbb{R}^q$ instead of the original input space $\mathbb{R}^p$ with the original variables. By performing the spectral decomposition of the Kernel covariance matrix,
$$\Sigma_{\kappa} = Q_{\kappa} \Lambda_{\kappa} Q_{\kappa}^{-1}, \qquad (12)$$
and extracting the largest eigenvalues of the Kernel covariance eigenvalue matrix $\Lambda_{\kappa}$, we obtained the filtered Kernel covariance eigenvalue matrix $\Lambda_{\kappa}^{*}$, which was then used to reconstruct the filtered Kernel covariance matrix
$$\Sigma_{\kappa}^{*} = Q_{\kappa} \Lambda_{\kappa}^{*} Q_{\kappa}^{-1}. \qquad (13)$$
Finally, $\Sigma_{\kappa}^{*}$ was used as input for the Markowitz portfolio optimization model, and the resulting portfolio's profitability was compared to the portfolios generated by the linear covariance matrix and the other robust estimation methods mentioned above, as well as their filtered counterparts. The analysis was repeated for data from seven different markets, and the results are discussed in Section 5.
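The sketch below builds a Kernel covariance matrix by evaluating κ between each pair of centered return series, one possible reading of Equation (9). The 1/T scaling inside the polynomial kernel is our own assumption, added so that q = 1, d = 0 reproduces the Pearson covariance exactly as the text notes; the data and bandwidth are illustrative.

```python
# Kernel covariance matrices from pairs of centered return series (a sketch).
import numpy as np

def kernel_cov(returns, kernel):
    """Σ_κ with entries κ(x_i, x_j) between pairs of centered return series."""
    X = (returns - returns.mean(axis=0)).T      # one row per asset
    p = X.shape[0]
    return np.array([[kernel(X[i], X[j]) for j in range(p)] for i in range(p)])

def polynomial(q=2, d=0.0, T=1):
    # assumed 1/T scaling: q=1, d=0 then recovers the Pearson covariance
    return lambda xi, xj: (xi @ xj / T + d) ** q

def gaussian(s=1.0):
    return lambda xi, xj: np.exp(-np.sum((xi - xj) ** 2) / (2 * s ** 2))

rng = np.random.default_rng(4)
R = rng.normal(size=(500, 4))                   # T x p synthetic returns
T = R.shape[0]
print(kernel_cov(R, polynomial(q=3, T=T)).round(3))
print(kernel_cov(R, gaussian(s=20.0)).round(3))
```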
The pseudocode of our proposed approach is as follows:
1. Estimate $\Sigma$ from the training set data;
2. Perform the spectral decomposition of $\Sigma$: $\Sigma = Q \Lambda Q^{-1}$;
3. From the eigenvalue matrix $\Lambda$, identify the noisy eigenvalues $\lambda_i^{noise}$ based on the Random Matrix Theory upper bound;
4. Replace all noisy eigenvalues by their average $\bar{\lambda}^{noise}$ to obtain the filtered eigenvalue matrix $\Lambda^{*}$;
5. Build the filtered covariance matrix $Q \Lambda^{*} Q^{-1}$;
6. Use the filtered covariance matrix as input for the Markowitz model and obtain the optimal portfolio weights from the in-sample data;
7. Apply the in-sample optimal portfolio weights to the out-of-sample data and compute the performance measures.
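A minimal Python rendering of steps 1-5 follows, under the assumption that the Marčenko-Pastur bound is applied to the correlation matrix of the returns and the filtered matrix is rescaled back to a covariance; the synthetic data and variable names are illustrative. The resulting matrix would then feed the optimizer sketched earlier (steps 6-7).

```python
# Steps 1-5: estimate, decompose, flag noisy eigenvalues, average, rebuild.
import numpy as np

def rmt_filter(returns):
    """Return an RMT noise-filtered covariance matrix and the noise count."""
    T, p = returns.shape
    lam_max = (1 + np.sqrt(p / T)) ** 2          # MP upper bound for Ψ = T/p
    corr = np.corrcoef(returns, rowvar=False)
    eigvals, Q = np.linalg.eigh(corr)            # step 2: Σ = Q Λ Q^{-1}
    noisy = eigvals <= lam_max                   # step 3: flag noisy λ_i
    eigvals[noisy] = eigvals[noisy].mean()       # step 4: replace by average
    corr_star = Q @ np.diag(eigvals) @ Q.T       # step 5: rebuild the matrix
    s = returns.std(axis=0, ddof=1)
    return corr_star * np.outer(s, s), int(noisy.sum())

rng = np.random.default_rng(5)
returns = rng.normal(0, 0.01, size=(1000, 50))
sigma_star, n_noisy = rmt_filter(returns)
print(f"{n_noisy} of 50 eigenvalues were treated as noise")
```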
Performance Measures
The trade-off between risk and return has long been well-known in the finance literature: a higher expected return generally implies a greater level of risk, which motivates the use of risk-adjusted performance measures. It is therefore not sufficient to judge a portfolio's attractiveness only by the cumulative returns it offers; instead, one must ask whether the return compensates for the level of risk to which the allocation exposes the investor. The Sharpe ratio [89] provides a simple way to do so.
Let $P$ be a portfolio composed of a linear combination of assets with return vector $r_t$ at time $t$, with $w$ the weight vector of $P$ and $r_{f,t}$ the risk-free rate at time $t$. We define the mean excess return of $P$ over the risk-free asset along the $N$ out-of-sample time periods as
$$\bar{r}_P = \frac{1}{N} \sum_{t=1}^{N} \left( w^{\top} r_t - r_{f,t} \right), \qquad (14)$$
and the sample standard deviation of portfolio $P$ as
$$\sigma_P = \sqrt{\frac{1}{N-1} \sum_{t=1}^{N} \left( w^{\top} r_t - r_{f,t} - \bar{r}_P \right)^{2}}. \qquad (15)$$
The Sharpe ratio of portfolio $P$ is then given by
$$\mathrm{Sharpe}_P = \frac{\bar{r}_P}{\sigma_P}. \qquad (16)$$
While the Sharpe ratio gives a risk-adjusted performance measure for a portfolio and allows direct comparison between different allocations, it has the limitation of penalizing both upside and downside risk; that is, the uncertainty of profits is penalized in the Sharpe ratio expression even though it is positive for an investor. As discussed in works like Patton and Sheppard [90] and Farago and Tédongap [91], the decomposition of risk into "good variance" and "bad variance" can provide better asset allocation and volatility estimation, thus leading to better investment and risk management decisions. Therefore, instead of using the conventional standard deviation, which accounts for both sides of the variance, Sortino and Price [92] proposed an alternative performance measure that became known as the Sortino ratio, which balances the mean excess return only by the downside deviation. The Sortino ratio for portfolio $P$ is given by
$$\mathrm{Sortino}_P = \frac{\bar{r}_P}{\sigma_P^{-}}, \qquad (17)$$
where $\sigma_P^{-}$ is the downside deviation
$$\sigma_P^{-} = \sqrt{\frac{1}{N-1} \sum_{t=1}^{N} \min\left( w^{\top} r_t - r_{f,t} - \bar{r}_P,\; 0 \right)^{2}}. \qquad (18)$$
Note that the downside deviation represents the standard deviation of negative portfolio returns, thus measuring only the "bad" side of volatility; periods in which the portfolio yields a return better than the mean excess return over the risk-free asset do not contribute to the Sortino ratio's denominator.
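The two ratios are straightforward to compute once the excess-return series is in hand; the sketch below follows Equations (14)-(18) as reconstructed above, on a synthetic excess-return series standing in for out-of-sample data.

```python
# Sharpe and Sortino ratios of a portfolio's excess-return series.
import numpy as np

def sharpe(excess):
    """Mean excess return divided by the full sample standard deviation."""
    return excess.mean() / excess.std(ddof=1)

def sortino(excess):
    """Mean excess return divided by the downside deviation only."""
    downside = np.minimum(excess - excess.mean(), 0.0)
    return excess.mean() / np.sqrt((downside ** 2).sum() / (len(excess) - 1))

rng = np.random.default_rng(6)
excess = rng.normal(0.0004, 0.01, size=727)   # synthetic daily excess returns
print(f"Sharpe: {sharpe(excess):.4f}, Sortino: {sortino(excess):.4f}")
```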
Furthermore, we tested the statistical significance of the improvement that covariance matrix filtering brings to portfolio performance. That is, instead of just comparing the values of the ratios, we tested the extent to which the superiority of the noise-filtering approach was statistically significant. For each model and each analyzed market, we compared the Sharpe and Sortino ratios of the non-filtered covariance matrices with their respective filtered counterparts using Student's t tests. The null and alternative hypotheses are defined as
$$H_0: \mathrm{Sharpe}_{filtered} \le \mathrm{Sharpe}_{non\text{-}filtered} \quad vs. \quad H_1: \mathrm{Sharpe}_{filtered} > \mathrm{Sharpe}_{non\text{-}filtered}, \qquad (19)$$
$$H_0: \mathrm{Sortino}_{filtered} \le \mathrm{Sortino}_{non\text{-}filtered} \quad vs. \quad H_1: \mathrm{Sortino}_{filtered} > \mathrm{Sortino}_{non\text{-}filtered}. \qquad (20)$$
Rejection of both null hypotheses implies that the Sharpe/Sortino ratio of the portfolio generated using the filtered covariance matrix is statistically larger than that of the portfolio yielded by the non-filtered matrix. The p-values for the hypothesis tests are displayed in Tables 1-7.
Data
For the empirical application, we used data from seven markets, namely the United States, United Kingdom, France, Germany, Japan, China, and Brazil; the financial indexes chosen to represent each market were, respectively, NASDAQ-100, FTSE 100, CAC 40, DAX-30, NIKKEI 225, SSE 180, and Bovespa. We collected the daily returns of the financial assets composing those indexes for all time periods between 1 January 2000 and 16 August 2018, totaling 4858 observations for each asset. The data was collected from the Bloomberg terminal. The daily excess market return over the risk-free rate was collected from Kenneth R. French's data library (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html).
We split the datasets into two mutually exclusive subsets: we allocated the observations between 1 January 2000 and 3 November 2015 (85% of the whole dataset, 4131 observations) for the training (in-sample) subset and the observations between 4 November 2015 and 16 August 2018 (the remaining 15%, 727 observations) for the test (out-of-sample) subset. For each financial market and each covariance matrix method, we estimated the optimal portfolio for the training subset and applied the optimal weights for the test subset data. The cumulative return of the portfolio in the out-of-sample periods, their Sharpe and Sortino ratios, information regarding the non-noisy eigenvalues and p-values of tests (19) and (20) are displayed in Tables 1-7.
Results and Discussion
The cumulative returns and risk-adjusted performance metrics are presented in Tables 1-7, together with information regarding the non-noisy eigenvalues and the p-values of the hypothesis tests; Figures 1-7 show the improvement of the filtered covariance matrices over their non-filtered counterparts for each market and estimation method. Tables 1-7 report the summary results for the assets of the NASDAQ-100, FTSE 100, CAC 40, DAX-30, NIKKEI 225, SSE 180, and Bovespa indexes, respectively, with the following columns: CR is the cumulative return of the optimal portfolio in the out-of-sample period; λ* is the number of non-noisy eigenvalues of the respective covariance matrix; λ* variance (%) is the percentage of variance explained by the non-noisy eigenvalues; λ_top is the value of the top eigenvalue; λ_top variance (%) is the percentage of variance for which the top eigenvalue is responsible; p_Sharpe is the p-value of hypothesis test (19); and p_Sortino is the p-value of hypothesis test (20). Figures 1-7 show the cumulative return improvement of the noise-filtered covariance matrices over the non-filtered ones for the assets of each of these indexes during the out-of-sample period.
For the non-filtered covariance matrices, the overall performance of the linear Pearson estimates was better than that of the robust estimation methods in most markets, although it was outperformed by all three robust methods (MCD, RMCD, and OGK) for the CAC and SSE indexes. In comparison to the nonlinear covariance matrices induced by the application of Kernel functions, the linear approaches performed better in four of the seven analyzed markets (CAC, DAX, NIKKEI, and SSE), although in the other three markets the nonlinear models performed better by a fairly large margin. Among the robust estimators, the performance results were similar, slightly favoring the OGK approach. Among the nonlinear models, the Gaussian Kernel generally performed worse than the polynomial Kernels, an expected result, since the Gaussian Kernel incorporates polynomial interactions of effectively infinite degree, which naturally inserts a large amount of noisy information. The only market where the Gaussian Kernel performed notably better was the Brazilian one, which is considered an "emerging economy" and a less efficient market compared to the United States or Europe; even though Brazil is the leading market in Latin America, its liquidity, transaction flows, and informational efficiency are considerably smaller than those of the major financial markets (for a broad discussion of the dynamics of financial markets in emerging economies, see Karolyi [93]). Therefore, it is to be expected that such a market contains more noise, so that a function incorporating a wider range of nonlinear interactions tends to perform better.
As for the filtered covariance matrices, the Pearson estimator and the robust estimators showed similar results, with no major overall differences in profitability or risk-adjusted measures: Pearson performed worse than MCD, RMCD, and OGK for NASDAQ and better for FTSE and DAX. In comparison to MCD and OGK, the RMCD showed slightly better performance. As in the non-filtered cases, the polynomial Kernels yielded generally better portfolios in most markets. Concerning the Gaussian Kernel, even though its filtered covariance matrix performed particularly well for FTSE and Bovespa, it showed very poor results for the German and Chinese markets, suggesting that an excessive introduction of nonlinearities may bring more costs than improvements. Nevertheless, during the out-of-sample periods, the British and Brazilian markets underwent exogenous events, namely the effects of the "Brexit" referendum for the United Kingdom and the advancement of the "Car Wash" (Lava Jato) operation in Brazil, which led to events such as the arrest of Eduardo Cunha (former President of the Chamber of Deputies) in October 2016 and of Luiz Inácio Lula da Silva (former President of Brazil) in April 2018, that may have affected their respective systematic levels of risk and profitability, potentially compromising the market as a whole. In this sense, the fact that the Gaussian Kernel-filtered covariance matrices in those markets performed better than the polynomial Kernels is evidence that the additional "complexity" in those markets may demand the introduction of more complex nonlinear interactions to achieve good portfolio allocations. These results are also consistent with the findings of Sandoval Jr et al. [94], who pointed out that covariance matrix cleaning may actually lead to worse portfolio performances in periods of high volatility.
Regarding the principal components of the covariance matrices and the dominance of the top eigenvalue discussed in the literature, the results showed that, for all models and markets, the first eigenvalue of the covariance matrix was much larger than the theoretical bound λ_max, which is consistent with the stylized facts discussed in Section 2. Moreover, in the vast majority of cases (44 out of 54), the single top eigenvalue λ_top accounted for more than 25% of the total variance. This result is consistent with the findings of the previous works cited in the literature review (Sensoy et al. [52] and others): the fact that a single principal component concentrates over 25% of the information is evidence that it captures the systematic risk, the very slice of risk that cannot be diversified away; in other words, the share of risk that persists regardless of the weight allocation. The pattern persisted for the eigenvalues above the upper bound of Equation (4): in more than half of the cases (31 out of 54), the "non-noisy" eigenvalues represented more than half of the total variance. The concentration of information in non-noisy eigenvalues was weaker for the polynomial Kernels than for the linear covariance matrices, while for the Gaussian Kernel the percentage of variance retained was even larger: around 70% of the total variance for all seven markets.
Finally, the columns p_Sharpe and p_Sortino show the statistical significance of the improvement in Sharpe and Sortino ratios brought about by the introduction of noise filtering based on Random Matrix Theory. The results indicate that, while in some cases the noise filtering worked very well, in other cases it actually worsened the portfolio's performance. Therefore, there is evidence that better portfolios can be achieved by eliminating the "noisy eigenvalues", but the upper bound given by Equation (4) may be classifying actually informative principal components as "noise". Especially for the Kernel covariance matrices, the effect of the eigenvalue cleaning seemed unstable, working well in some cases and very badly in others, suggesting that the eigenvalues of nonlinear covariance matrices follow a different dynamic than those of linear ones, and that information considered "noise" in linear estimates can actually be informative in nonlinear domains. At the usual 95% confidence level, evidence of statistical superiority of the filtered covariance matrices was present in 25 out of 54 cases for the Sharpe ratio (rejection of the null hypothesis in (19)) and 26 out of 54 for the Sortino ratio (rejection of the null hypothesis in (20)). The markets in which the most models showed significant improvement using Random Matrix Theory were the French and German ones; on the other hand, for a less efficient financial market like the Brazilian one, the elimination of noisy eigenvalues again yielded the worst performances (the profitability of all portfolios actually dropped), consistent with the findings of Sandoval Jr et al. [94].
Conclusions
In this paper, the effectiveness of introducing nonlinear interactions into covariance matrix estimation, and of filtering its noise using Random Matrix Theory, was tested with daily data from seven different financial markets. We tested eight estimators for the covariance matrix and evaluated the statistical significance of the noise-filtering improvement on portfolio performance. While the cleaning of noisy eigenvalues did not show significant improvements in every analyzed market, the out-of-sample Sharpe and Sortino ratios of the portfolios were significantly improved in almost half of all tested cases. The findings of this paper can aid the investment decisions of scholars and financial market participants, providing both theoretical and empirical tools for the construction of more profitable and less risky trading strategies, as well as exposing potential weaknesses of traditional linear methods of covariance estimation.
We also tested the introduction of different degrees of nonlinearity into the covariance matrices by means of Kernel functions, with mixed results: while in some cases the Kernel approach achieved better results, in others it yielded much worse performance, indicating that the use of Kernels represents a substantial increase in model complexity that is not always compensated by better asset allocations, even when part of that additional complexity is filtered out. This implies that the noise introduced by nonlinear features can outweigh the additional predictive power they add to the Markowitz model. To investigate this result further, future developments include testing Kernel functions other than the polynomial and Gaussian, to see whether alternative frameworks of nonlinear dependence show better results. For instance, different classes of Kernel functions [95] may fit the financial markets' stylized facts better and reveal underlying patterns based on the Kernel's definition. Tuning the hyperparameters of each Kernel can also influence the model's performance decisively.
Although the past performance of a financial asset does not determine its future performance, in this paper we kept in the dataset only the assets that composed the seven financial indexes during the whole period between 2000 and 2018, thus not accounting for the possible survivorship bias in the choice of assets, which can affect the model's implications [96]. As future work, the differences between the "surviving" assets and the others can be analyzed as well. Other potential improvements include replicating the analysis for other financial indexes, markets, and time periods, incorporating transaction costs, and comparing with portfolio selection models other than Markowitz's. This paper focused on the introduction of nonlinear interactions into covariance matrix estimation; thus, a limitation was the choice of filtering methods, as the replacement procedure we adopted is not the only one the Random Matrix Theory literature recommends. Alternative filtering methods documented in studies like Guhr and Kälber [97] and Daly et al. [98], such as exponential weighting and Krzanowski stability maximization, may allow better modeling of the underlying patterns of financial covariance structures and lead to better portfolio allocations, so applying those methods and comparing them with ours can be a subject of future research in this agenda.
"Mathematics",
"Business",
"Economics"
] |
A Road Surface Identification Method with Improved Early Detection Performance Using Ultrasonic Sensors
Currently, the number of elderly people in the world is increasing. As a result, the number of accidents in which elderly people fall while moving is also increasing, and mobility support systems are needed for them to move safely. Therefore, a system was developed to support wheelchair mobility by identifying the type of road surface ahead. The system uses ultrasonic sensors attached to the wheelchair to identify the road surface. A method for identifying four types of road surfaces using Support Vector Machines (SVM) was previously proposed as the road surface identification component of the mobility support system. However, the previous study verified only the case in which the measured road surface did not change, making early identification impossible when the road surface changed during measurement. In this paper, a new road surface identification method using ultrasonic sensors is proposed. The proposed method makes it possible to identify the boundary of a road surface when it changes and improves early detection performance. To verify the performance of early road-boundary identification, two road surfaces with different roughness were measured in succession. As a result, the proposed method was able to identify the change before the wheelchair entered the road boundary. This confirms the effectiveness of a road surface identification method that takes the time series into account when obtaining samples.
Introduction
The number of elderly people in the world is increasing, and it is predicted that this trend will continue for several decades (1). In particular, the increasing share of elderly people in developed countries has become a social problem, bringing a declining workforce, a shortage of caregivers, and economic pressure from the burden of medical costs. Accidents involving falls during mobility are increasing as the population ages. Mobility aids such as wheelchairs, canes, and walkers are used to help elderly people move and prevent falls, improving the safety of their daily lives (2). On the other hand, elderly people acting alone are at high risk of falling even when using mobility aids, so further improving their safety requires the development of mobility support systems. Currently, there are several mobility support systems that use sensors such as ultrasonic sensors, cameras, and lasers (3,4). Among these, ultrasonic sensors have the advantage of being compact and having low computational cost. Studies have been conducted to detect steps and obstacles based on the distance information obtained by ultrasonic sensors (5)-(8). However, falls are caused not only by steps and obstacles, but also by uneven surfaces such as grass and gravel, and it is important to identify these surfaces in order to prevent falls. In a previous study, a road surface identification method was proposed based on the fact that the ultrasonic reflection intensity depends on the road surface roughness (9): reflected waves are stronger on smooth surfaces and weaker on rough surfaces, and this change is used to identify the road surface. Nevertheless, the previous method could not identify the road surface early when it changed during measurement, because the road surface cannot be identified until measurement of the full sample size is completed, and an additional sample of the same size had to be measured for the next identification. In this study, in order to improve the early road surface identification performance of the mobility support system, we build a classifier using SVM, and samples are obtained by sliding the window one measurement at a time along the time series to verify early identification performance when the road surface changes. This paper is organized as follows. Section I gives a general introduction. Section II explains the road surface identification method using the ultrasonic sensor. Sections III and IV describe the experimental setup and the verification. Section V concludes the paper.
Measurement method
This section describes the method of road surface identification using ultrasonic sensors. Fig. 1 shows a simplified diagram of the reflected wave measurement device. The device consists of two ultrasonic sensors (UT1612MPR/UR1612MPR), transmitter/receiver circuits, a microcomputer, and a Bluetooth module. The transmitting sensor emits a 40 kHz signal sent from the microcomputer and amplified by a transistor. The ultrasonic wave reflected from the target surface is then received by the other sensor, and the reflected wave is measured after amplification, half-wave rectification, and envelope detection. The voltage-time integral is the integral of the reflected wave over time (10).

The reflected wave depends on three factors (11). The first is the distance between the sensor and the road surface: since ultrasonic waves are sound, they attenuate as they propagate through the medium, and since they propagate radially, the longer the distance, the weaker the reflected wave. The second is the incident angle of the ultrasonic wave on the road surface: as this angle increases, the strength of the reflected wave that can be received decreases, so the smaller the angle, the stronger the reflected wave. The third is the roughness of the road surface: on rough surfaces, ultrasonic waves are scattered, so the reflected wave received by the sensor becomes weaker. Moreover, on rough surfaces the reflected waves change because the surface shape varies greatly from place to place.

Fig. 2 shows the measurement of reflected waves for different roughness and locations. The black arrows show the strength of the reflected wave at each measurement location, and the green arrows show the reflection of the ultrasonic waves. On smooth surfaces, the reflected wave does not vary when the measurement location changes. On rough surfaces, by contrast, the reflected waves vary between measurement locations, because the incident and reflection angles of the ultrasonic waves change irregularly with the unevenness of the road surface. It is therefore impossible to identify rough road surfaces by measuring the reflected wave at a single location; the variation of the reflected wave must be used as a parameter to identify the road surface roughness. The average and standard deviation of the voltage-time integrals were used to measure this variation.

In this study, the voltage-time integrals are measured continuously while moving over the road surface. The measurement result is divided into multiple divisions according to the number of samples, and the standardized average and standard deviation of each division are used as feature values to build SVM classifiers. Fig. 3 shows the method used to obtain the samples for SVM; the graph shows the obtained voltage-time integrals in chronological order. In the previous method, the voltage-time integrals were collected up to the determined sample size, after which the average and standard deviation were calculated, and this sample-obtaining process was repeated. The time needed to obtain a sample is the time it takes to measure voltage-time integrals up to the sample size; in other words, early identification of a road surface change is impossible because of the long time interval between samples. Increasing the sample size also increases the correctness of the road surface identification (12).
In contrast, the sample size needs to be decreased to identify the road surface early. To solve this problem, the proposed method changes the way samples are obtained. As in the previous method, the first sample is obtained by collecting voltage-time integrals up to the determined sample size. After that, the process of obtaining the next sample is repeated by sliding the obtainment range of the sample by one voltage-time integral. Since this method shortens the interval between samples, road surface changes can be identified early. If the sample size is reduced too much, however, the road surface boundary cannot be identified accurately, because a single data point then has a greater influence on the average and standard deviation. The effectiveness of the sample size was therefore also verified in the experiment.
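A sketch of the sliding-window sample construction described above: each sample is the mean and standard deviation of the last N voltage-time integrals, and the window advances one measurement at a time. The synthetic "smooth" and "rough" signals and all names are illustrative stand-ins for real sensor data.

```python
# Sliding-window mean/std features from a stream of voltage-time integrals.
import numpy as np

def sliding_features(integrals, n=200):
    """Return (mean, std) feature pairs for each window of size n."""
    feats = []
    for start in range(len(integrals) - n + 1):
        window = integrals[start:start + n]      # window slides by one integral
        feats.append((window.mean(), window.std(ddof=1)))
    return np.array(feats)

rng = np.random.default_rng(7)
smooth = rng.normal(5.0, 0.2, 400)   # strong, stable reflections (e.g., floor)
rough = rng.normal(2.0, 0.8, 400)    # weak, varying reflections (e.g., gravel)
signal = np.concatenate([smooth, rough])

features = sliding_features(signal, n=200)
print(features.shape)                # one sample per one-step slide
```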
SVM
The measured voltage-time integrals are classified by SVM, a supervised learning algorithm designed to solve binary classification problems (13). It determines the hyperplane that maximizes the margin, i.e., the distance between the nearest data points and the hyperplane. In the experiment, four different regularization parameters, C = {0.1, 1, 10, 100}, were tested with a soft-margin SVM. A one-versus-one classifier was built to perform multi-class classification. The data were divided into 70% training data and 30% test data, and the accuracy score was verified for each parameter C. The correct answer is the label attached to each measurement.
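The classifier setup can be sketched as follows with scikit-learn, under the assumptions stated in the text (soft-margin SVM, C in {0.1, 1, 10, 100}, 70/30 split, one-versus-one multi-class scheme); the two-dimensional feature samples here are synthetic stand-ins for the measured mean/standard-deviation pairs, and the RBF kernel is scikit-learn's default rather than a choice documented in the paper.

```python
# Four-class soft-margin SVM over synthetic (mean, std) feature samples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (1, 2, 3, 4)])
y = np.repeat([0, 1, 2, 3], 200)     # floor, mat, artificial turf, gravel

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)  # standardize the mean/std features

for C in (0.1, 1, 10, 100):
    clf = SVC(C=C, decision_function_shape="ovo")
    clf.fit(scaler.transform(X_tr), y_tr)
    print(f"C={C}: accuracy {clf.score(scaler.transform(X_te), y_te):.3f}")
```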
Evaluation value
Vehicles do not stop immediately when the brakes are applied: there is an interval during which the brake is being operated but is not yet working. The distance the vehicle travels during this interval is called the free running distance, the distance from when the brakes begin to act until the vehicle comes to a stop is called the braking distance (14), and the sum of the two is called the stopping distance. Let $h$ s be the reaction time, $v$ km/h the velocity of the vehicle, and $\mu$ the coefficient of friction between the tires and the road; let $K$ m be the free running distance, $S$ m the braking distance, and $T$ m the stopping distance. The free running, braking, and stopping distances are given, respectively, by
$$K = \frac{v\, h}{3.6}, \qquad (1)$$
$$S = \frac{v^{2}}{254\, \mu}, \qquad (2)$$
$$T = K + S. \qquad (3)$$
The reaction time and the friction coefficient were set to $h = 1.0$ and $\mu = 0.7$, respectively, since the experimental environment was a dry road (15). We now define the moment at which the road surface is first identified as dangerous. The distance between the front wheels and the road surface boundary at that moment is called the identification distance. Let $t_{nd}$ s be the measurement time of the first voltage-time integral identified as a dangerous road surface, $d_c$ m the distance from the start of measurement to the boundary, $v_w$ m/s the average velocity of the wheelchair, and $d_i$ m the identification distance. The identification distance is then
$$d_i = d_c - v_w\, t_{nd}. \qquad (4)$$
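Equations (1)-(4), as reconstructed above, translate directly into code; the speeds and distances below are invented for illustration.

```python
# Stopping distance vs. identification distance, per Equations (1)-(4).
def stopping_distance(v_kmh, h=1.0, mu=0.7):
    free_running = v_kmh * h / 3.6        # K = v h / 3.6  [m]
    braking = v_kmh ** 2 / (254 * mu)     # S = v^2 / (254 μ)  [m]
    return free_running + braking         # T = K + S  [m]

def identification_distance(d_c, v_w, t_nd):
    return d_c - v_w * t_nd               # d_i = d_c - v_w t_nd  [m]

v = 4.0                                   # walking-speed wheelchair, km/h
print(f"stopping distance: {stopping_distance(v):.2f} m")
print(f"identification distance: {identification_distance(2.0, v / 3.6, 1.2):.2f} m")
```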
Setup
Experiments were conducted to verify the effectiveness of the road surface identification method with samples obtained by the proposed method. For this purpose, two experiments were carried out in sequence: the first verified the regularization parameters and sample size, and the second verified the early identification performance for road surface changes. The first experimental environment is described below. The measurement device was attached to a wheelchair, and the reflected waves were measured on four surfaces: floor, mat, artificial turf, and gravel. The device was mounted on the right armrest extension of the wheelchair in order to keep the measurement distance constant and limit the cause of change in the reflected waves to the road surface roughness. The incident angle of the ultrasonic waves was set to 20 deg.
The distance between the sensors and the road surface was 60 cm. Fig. 4 shows a simplified representation of the experimental environment, with four types of surfaces to be measured: floor, mat, artificial turf, and gravel. The measurement procedure was as follows. The wheelchair equipped with the measurement device was placed 30 cm in front of the start position shown in Fig. 4 and pushed in a straight line. Measurement started at the starting point and continued until the front wheels of the wheelchair reached the goal. This flow was taken as one set, and 30 sets of measurements were taken for each road surface, yielding labeled voltage-time integrals for each surface. All measurement sets for each label were ordered and sampled by sliding the window one value at a time for each sample size, with N = {100, 200, 300, 400}.

Fig. 4(b) shows the environment of the second experiment. The equipment and mounting of the measuring device were the same as in the first experiment. The target surfaces in Fig. 4 are four road surface transitions from safe to dangerous: the safe surfaces are the floor and the mat, and the dangerous surfaces are artificial turf and gravel. The two surface types were set up so as to keep the measurement range of the device the same. The number of sets was changed to 20, and the method of obtaining samples was the same as in the first experiment. The identification distance was calculated from equation (4) and compared with the stopping distance.

Tables 1 and 2 show the accuracy on the training and test data as the sample size and regularization parameter are varied. From these results, the sample size and regularization parameter of the classifier are considered. The classifier built with the proposed method has an accuracy of more than 90% for both training and test data for all sample sizes considered in this study, and the accuracy exceeded 95% when the sample size was 200 or larger. Therefore, the sample size used for early detection of road surface changes was set to 200. The accuracy scores changed by less than 1% when the regularization parameter was varied, so the four regularization parameters tested in this experiment are considered to have little effect on the classifier; the regularization parameter used for early detection was set to C = 1.

Fig. 6 and Fig. 7 show the transitions on the classifier when the road surface changes from floor to a dangerous surface and from mat to a dangerous surface, respectively. Table 3 shows the identification distance, the stopping distance, and the boundary distance for each road surface change. Fig. 6 and Fig. 7 show that the transitions at each road surface change are finely identified, and Table 3 shows that the road surface can be identified before the front wheels of the wheelchair enter the dangerous surface. On the other hand, when the stopping distance was taken into account, identification was not achieved within the stopping distance when the safe surface was the floor. Possible reasons are the difference in the voltage-time integrals and the shaking of the wheelchair. The differences in the voltage-time integrals are considered first.
The average voltage-time integral of the floor is much larger than that of the dangerous surfaces, and the standard deviation is much smaller for the floor. For these reasons, the floor and the dangerous road surfaces lie in opposite regions on the classifier, which delayed the identification of road surface changes. The shaking of the wheelchair is considered next. In the first experiment, the wheelchair was actually driven over all road surfaces, and it shook while measuring the voltage-time integrals on the gravel surface. In the second experiment, by contrast, the measurement distance was determined by considering the measurement range, so most of the measurements of the dangerous road surface were taken while the wheelchair was still on the safe road surface, resulting in less shaking during measurement. Consequently, the average for gravel differed greatly between the two experiments: the averages for the floor and gravel became close to each other, and it took longer to distinguish them.
Conclusions
In this paper, a road surface identification method using ultrasonic sensors was proposed. In this method, samples are obtained by sliding the sampling window one value at a time over the time series, which enables early detection of road surface changes. The accuracy of the proposed method was verified by experiments, and the results showed that the accuracy was high and that the number of samples that could be obtained increased. In addition, the method was able to identify road surface changes before the wheelchair entered the dangerous road surface. The method can be applied to mobility support systems for elderly people and visually impaired people, as well as to automobiles.
"Materials Science"
] |
In silico design of a promiscuous chimeric multi-epitope vaccine against Mycobacterium tuberculosis
Tuberculosis (TB) is a global health threat, killing approximately 1.5 million people each year. The eradication of Mycobacterium tuberculosis, the main causative agent of TB, is increasingly challenging due to the emergence of extensive drug-resistant strains. Vaccination is considered an effective way to protect the host from pathogens, but the only clinically approved TB vaccine, Bacillus Calmette-Guérin (BCG), has limited protection in adults. Multi-epitope vaccines have been found to enhance immunity to diseases by selectively combining epitopes from several candidate proteins. This study aimed to design a multi-epitope vaccine against TB using an immuno-informatics approach. Through functional enrichment, we identified eight proteins secreted by M. tuberculosis that are either required for pathogenesis, secreted into extracellular space, or both. We then analyzed the epitopes of these proteins and selected 16 helper T lymphocyte epitopes with interferon-γ inducing activity, 15 cytotoxic T lymphocyte epitopes, and 10 linear B-cell epitopes, and conjugated them with adjuvant and Pan HLA DR-binding epitope (PADRE) using appropriate linkers. Moreover, we predicted the tertiary structure of this vaccine, its potential interaction with Toll-Like Receptor-4 (TLR4), and the immune response it might elicit. The results showed that this vaccine had a strong affinity for TLR4, which could significantly stimulate CD4+ and CD8+ cells to secrete immune factors and B lymphocytes to secrete immunoglobulins, so as to obtain good humoral and cellular immunity. Overall, this multi-epitope protein was predicted to be stable, safe, highly antigenic, and highly immunogenic, which has the potential to serve as a global vaccine against TB.
Introduction
Tuberculosis (TB), a highly contagious disease caused by Mycobacterium tuberculosis, is ranked by the World Health Organization (WHO) as the top cause of death from a single infectious agent [1-3]. In 2021, the estimated number of TB deaths and new cases reached 1.6 million and 10.6 million, respectively [4]. Clinical treatment options for TB remain limited and rely mainly on combinations of multiple antimicrobial drugs. The chemotherapy cycle is very long, usually taking nine to twelve months or even longer [5], which increases the risk of drug-resistant mutations in M. tuberculosis [6,7]. In recent years, chemotherapy has become less effective because of the emergence and increasing proportion of multi-drug-resistant and extensively drug-resistant M. tuberculosis [6]. Preventing TB may therefore be more effective than treating it. Vaccination is well known to be an effective way to protect the host from pathogenic bacteria [8]. Currently, Bacillus Calmette-Guérin (BCG), developed over 100 years ago, is the only clinically approved TB vaccine [9]. Unfortunately, BCG protects only newborns and infants and is largely ineffective in adolescents and adults [2,10], even though WHO reports that 89% of TB cases in 2021 were in adults [4]. Therefore, there is an urgent need to develop a novel and effective anti-TB vaccine, especially for adolescents and adults.
TB vaccine development is complicated by multiple features of mycobacteria, such as latent infection, persistence, and immune evasion [11-13]. An ideal TB vaccine should target the proteins/pathways responsible for these properties in M. tuberculosis and be able to efficiently induce CD4+ and CD8+ T cell-mediated immune responses [14]. Moreover, an effective vaccine should also target the host's major histocompatibility complexes (MHC), which are highly polymorphic [15]. These characteristics place very high demands on the versatility of the vaccine, which obviously cannot be met by a single natural protein. The multi-epitope vaccine, a recombinant protein consisting of a series of discrete or overlapping epitopes (peptides) [16], is a novel type of vaccine candidate that may address the above issues. In recent years, multi-epitope vaccines have attracted much attention due to their higher immunity and lower allergenicity compared with conventional vaccines [17,18]. Multi-epitope vaccines have been designed against many pathogenic microorganisms, including Shigella spp. [19], foot-and-mouth disease virus [20], Helicobacter pylori [21,22], hepatitis B virus [23], Toxoplasma gondii [24], Leishmania infantum [25], Nipah virus [26], Onchocerca volvulus [27], Pseudomonas aeruginosa [28], and leukosis virus [29]. In particular, the emergence of the COVID-19 pandemic has accelerated the application of this technology [16,30-32]. As for TB, several multi-epitope vaccines have been designed to target active TB [33-39] and latent TB [40,41]. Among them, three vaccine candidates were designed in the form of DNA [34,36,40], and two of them incorporated epitopes into protein backbones to generate recombinant vaccines [34,36]. It should be noted that the candidate proteins for some of the above multi-epitope vaccines were selected at random, and the population coverage of these vaccines requires further study. Moreover, two multi-epitope TB vaccine candidates with broad population coverage were designed: one with epitopes selected from immunogenic exosomal vesicle proteins with pathogenic properties [39], and the other not focusing on candidate proteins but directly selecting highly conserved and experimentally validated epitopes from the Immune Epitope Database (IEDB) [38]. However, these candidate proteins lack functional enrichment, and the ability of the vaccine candidates to induce interferon-γ (IFN-γ) secretion remains to be improved.
A previous study deduced that rational optimization of epitopes can be achieved by combining MHC binding capacity with the epitope's ability to react with T cell receptors [42]. Furthermore, it predicted that vaccines with cytotoxic T lymphocyte (CTL) A1, A2, A3, A24, and B7 binding epitopes would cover nearly 100% of the major ethnic groups (Blacks, Asians, Hispanics, and Caucasians). However, no similar approach has so far been used to design a TB vaccine. In this study, we designed a highly promiscuous multi-epitope TB vaccine using various antigenic features of eight function-enriched proteins. The chimeric vaccine candidate possesses 15 CTL epitopes, 16 helper T lymphocyte (HTL) epitopes with IFN-γ-inducing properties, and 10 linear B-cell epitopes. Immuno-informatics analysis demonstrated that this vaccine candidate is 'all-encompassing', making it a potential cornerstone for achieving the 'End TB Strategy'.
Protein selection and sequence retrieval
To construct a multi-epitope vaccine against TB, we first selected proteins of the M. tuberculosis complex that are deposited in the IEDB database [43] and have validated MHC class I and II binding epitopes. Amino acid sequences (primary structure) of proteins from the M. tuberculosis H37Rv strain were obtained from the UniProt database [44]. Alignment-independent predictions of prospective antigens based on physicochemical properties were obtained from the VaxiJen 2.0 server [45], which performs auto and cross covariance (ACC) transformation of protein sequences into a uniform vector of major amino acid properties, with the antigenicity threshold set at 0.4 for bacterial proteins [45,46]. Functional annotation of proteins was assessed using the Database for Annotation, Visualization and Integrated Discovery (DAVID) 6.8 [47]. Secreted proteins were further enriched using two categories, extracellular space and pathogenesis, through the DAVID and BioCyc [48] databases, respectively. The proteome of Homo sapiens GRCh38.p13 was downloaded in FASTA format from the National Center for Biotechnology Information (NCBI) database [49], and BLASTp was used to predict homology (E-value = 1e-5) between the secreted proteins and H. sapiens proteins.
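A schematic sketch of this screening logic is given below; the record fields are hypothetical stand-ins for outputs collected from the named servers (VaxiJen scores, DAVID/BioCyc annotations, BLASTp hits), not an actual pipeline.

```python
# Hypothetical records: one dict per protein, filled from server outputs.
proteins = [
    {"id": "P9WNK5", "vaxijen": 0.55, "extracellular": True,
     "pathogenesis": True, "human_homolog": False},
    # ... one entry per IEDB-derived M. tuberculosis protein
]

candidates = [
    p for p in proteins
    if p["vaxijen"] >= 0.4                         # antigenic (VaxiJen threshold)
    and (p["extracellular"] or p["pathogenesis"])  # functional enrichment
    and not p["human_homolog"]                     # no BLASTp hit at E <= 1e-5
]
```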
T-cell epitope prediction
Prediction and selection of epitopes are crucial steps in the construction of multi-epitope vaccines. MHC I molecules bind short peptides (9-11 amino acids) because the peptide-binding cleft of MHC I, formed by a single α chain, is closed at both ends [50]. The freely accessible NetMHCpan-4.1 server [51] was used for CTL epitope prediction; it uses NNAlign_MA to generate percentage ranks (% rank) based on a combination of MHC I binding affinities and eluted ligands. The % rank of a query sequence is determined by comparing its prediction score to the distribution of prediction scores for the relevant MHC calculated over a set of randomly chosen native peptides. Epitopes with a % rank < 0.5% were considered strong binders, while epitopes with a % rank < 2% were considered weak binders [51]. Although up to 12 MHC class I supertypes can be predicted on the server, we used only A1, A2, A3, A24, and B7, because these five supertypes essentially cover 100% of the major human races [42]. We selected the strong binders, predicted their antigenicity using VaxiJen 2.0 [45], and then predicted class I immunogenicity using the Immune Epitope Database (IEDB) tool [52], which uses 3-fold cross-validation. Finally, we ranked the epitopes that were both antigenic and immunogenic by % rank and selected 15 low-scoring epitopes, three for each supertype and at least one for each candidate protein, except for candidate proteins lacking a strong CTL-binding epitope that was both antigenic and immunogenic. In addition, IC50 values for each CTL epitope were predicted with NetMHC-4.0 [53].
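The selection logic for the CTL epitopes can be summarized as in the following sketch; the record fields are hypothetical, and the additional constraint of covering every candidate protein is omitted for brevity.

```python
def select_ctl_epitopes(epitopes, per_supertype=3):
    """Keep strong binders (% rank < 0.5) that are both antigenic and
    immunogenic, then take the lowest-rank epitopes per supertype."""
    strong = [e for e in epitopes
              if e["rank_pct"] < 0.5 and e["antigenic"] and e["immunogenic"]]
    selected = []
    for supertype in ("A1", "A2", "A3", "A24", "B7"):
        pool = sorted((e for e in strong if e["supertype"] == supertype),
                      key=lambda e: e["rank_pct"])
        selected.extend(pool[:per_supertype])
    return selected
```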
Class II MHC molecules bind antigenic peptides, and the resulting complex can be recognized by HTLs. Typically, these antigenic peptides range from 12 to 20 amino acid residues in length, with peptides of 13-16 residues observed most frequently [54]. In fact, 15-mers are the most abundant MHC II epitopes for M. tuberculosis deposited in the IEDB. We therefore used NetMHCIIpan-4.0 [51,55] to predict the binding of 15-mer peptides to Human Leukocyte Antigen-DR (HLA-DR), HLA-DQ, HLA-DP, and H-2-I alleles. The prediction is also based on NNAlign_MA, with % rank < 2% and < 10% considered strong and weak binders, respectively [51]. We also predicted 15-mer IFN-γ-inducing epitopes for the candidate proteins using the IFNepitope server [56], which uses a support vector machine hybrid approach that allows virtual screening of IFN-γ-inducing peptides/epitopes in a peptide library consisting of IFN-γ-inducible and non-inducible MHC II binders that activate T-helper cells. We then predicted the antigenicity of the IFN-γ-inducing epitopes [45] and finally selected the 16 most promiscuous epitopes that were strong MHC II binders, IFN-γ inducing, and antigenic.
It is important to note that signal peptides were removed from candidate proteins prior to the epitope prediction. In this study, signal peptides were screened using SignalP 5.0 [57] and TargetP-2.0 [58].
Linear B-cell epitope prediction
Linear B-cell epitopes (16-mers) were predicted using ABCpred [59,60] with the default threshold of 0.51. To increase the reliability of the predictions, we also used BepiPred 2.0 [61] to predict linear B-cell epitopes. Epitopes obtained from these two tools were further subjected to antigenicity prediction using VaxiJen 2.0 [45]. Finally, we selected ten linear B-cell epitopes based on high ABCpred scores and antigenicity, with at least one epitope selected from each candidate protein.
Construction of the multi-epitope vaccine candidate with chimeric properties
The designed multi-epitope vaccine contains one HBHA (heparin-binding hemagglutinin) adjuvant, one Pan HLA DR-binding epitope (PADRE), 15 CTL epitopes, 16 HTL epitopes, 10 linear B-cell epitopes, and one His×6 tag (Fig. 3). Linkers were used to join the epitopes, prevent the production of junctional epitopes, and enhance the processing and presentation of individual epitopes in chimeric vaccines [62]. In this construct, the HBHA adjuvant (UniProt ID: P9WIP9) was placed at the N-terminus and linked to the downstream PADRE via an EAAAK linker. The HTL epitopes, joined by GPGPG linkers, were then linked to PADRE. The CTL epitopes, joined by AAY linkers, were connected to the HTL epitopes via a HEYGAEALERAG linker, which also joined the CTL epitope unit to the linear B-cell epitopes linked by KK linkers. Finally, a His×6 tag was attached to the C-terminus of the chimeric protein.
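The assembly order and linkers described above can be expressed compactly as below; this is an illustrative sketch, with the HBHA and PADRE sequences supplied by the caller rather than hard-coded (none of the names here come from the paper).

```python
EAAAK, GPGPG, AAY, PL, KK = "EAAAK", "GPGPG", "AAY", "HEYGAEALERAG", "KK"

def assemble_vaccine(hbha, padre, htl, ctl, bcell, his_tag="HHHHHH"):
    """Join the components in the order and with the linkers described
    in the text: HBHA-EAAAK-PADRE-(GPGPG HTLs)-(AAY CTLs)-(KK B cells)-His6."""
    return (hbha + EAAAK + padre
            + GPGPG + GPGPG.join(htl)   # PADRE -> HTL block, GPGPG linkers
            + PL + AAY.join(ctl)        # HTL -> CTL block via HEYGAEALERAG
            + PL + KK.join(bcell)       # CTL -> B-cell block via HEYGAEALERAG
            + his_tag)
```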
Antigenicity, allergenicity and physicochemical properties
The antigenicity of the multi-epitope vaccine and its eight component proteins was predicted by the VaxiJen 2.0 server [45], while the allergenicity of these proteins was predicted by the AllerTOP 2.0 server [63]. AllerTOP 2.0 uses amino acid E-descriptors, ACC transformation of protein sequences, and k-nearest neighbors (kNN) for allergen classification; the method achieved 85.3% accuracy with 5-fold cross-validation. Physicochemical properties such as half-life, isoelectric point, instability index, aliphatic index, and grand average of hydropathicity (GRAVY) of the multi-epitope vaccine were predicted with the ExPASy ProtParam server [64]. Further, the solubility of the multi-epitope vaccine peptide was assessed using the proteinSol (PROSO II) server [65], which is based on a classifier exploiting the subtle differences between known insoluble proteins from TargetDB and soluble proteins from both TargetDB and the PDB [66]. When evaluated with 10-fold cross-validation, it achieved 71.0% accuracy (area under the ROC curve = 0.785).
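For readers who want to reproduce part of this analysis locally, Biopython's ProtParam module computes several of the same quantities as the ExPASy server. This is a sketch, not the authors' procedure; the aliphatic index and half-life are not covered by this module.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def physicochemical_summary(sequence):
    """Compute ProtParam-style descriptors for a protein sequence."""
    pa = ProteinAnalysis(sequence)
    return {
        "molecular_weight_kDa": pa.molecular_weight() / 1000.0,
        "isoelectric_point": pa.isoelectric_point(),
        "instability_index": pa.instability_index(),  # > 40 => unstable
        "gravy": pa.gravy(),  # grand average of hydropathicity
    }
```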
Immune simulation
To characterize the immune response profile and immunogenicity of the vaccine, in silico immune simulations were performed using the C-ImmSim server [67]. C-ImmSim predicts immune interactions using position-specific scoring matrices derived from machine learning techniques for peptide prediction. It concurrently simulates three compartments representing three separate anatomical regions found in mammals: (i) the bone marrow, where hematopoietic stem cells were simulated to produce new lymphocytes and myeloid cells; (ii) the thymus, where naive T cells were selected to avoid autoimmunity; and (iii) the lymphatic organ such as lymph nodes. To effectively prime and boost the vaccine, we followed the approach of [68] where two injections were administered four weeks apart. All simulation parameters were set to default values, with time steps set to 10 and 94 (each time step is eight hours).
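A quick check shows that these time steps reproduce the stated four-week prime-boost interval:

```python
HOURS_PER_STEP = 8            # each C-ImmSim time step is eight hours
step1, step2 = 10, 94         # injection time steps used in the simulation

interval_hours = (step2 - step1) * HOURS_PER_STEP  # 672 h
interval_weeks = interval_hours / (24 * 7)
print(interval_weeks)         # -> 4.0, matching the four-week schedule
```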
Disordered region prediction
Intrinsically disordered regions (IDRs) are present in many proteins. Disordered regions were predicted using DISOPRED3 [69], which combines DISOPRED2 with two other machine-learning-based modules trained on large IDRs to identify disordered residues; disordered residues were then annotated as protein-binding using an additional SVM classifier [69].
Secondary and tertiary structure prediction
The secondary structure of the designed vaccine was predicted by the PSIPRED 4.0 server [70], which first uses PSI-BLAST to identify sequences closely related to the query protein. The tertiary structure of the vaccine was predicted using the Iterative Threading Assembly Refinement (I-TASSER) server [71]. There are four key steps in I-TASSER modeling: (a) threading template identification; (b) iterative structure assembly simulation; (c) model selection and refinement; and (d) structure-based functional annotation [72,73]. I-TASSER generated five models, which were screened using ProSA-web [74], and the model with the lowest Z-score was selected for further refinement. ProSA-web compares model scores against those of experimentally determined structures deposited in the PDB. A local quality score plot helps identify problematic areas in the model, and the same scores are represented by a color code on the 3D structure, which is useful for early structure determination and refinement.
Tertiary structure refinement
The "coarse" 3D model of the vaccine candidate obtained by I-TASSER was refined in two steps using two servers; first with ModRefiner [75] followed by GalaxyRefine [76]. ModRefiner uses Cα traces to affect the construction and refinement of proteins obtained by two-step atomic-level energy minimization. First, the Cα traces were used to construct the main chain, followed by refinement of side chain rotamers and backbone atoms using physics-and knowledge-based composite force fields. GalaxyRefine utilizes multiple templates to generate reliable core structures, while unreliable loops or terminals were generated by optimization-based modeling.
Tertiary structure validation
The refined structure of the vaccine candidate was validated by Ramachandran plots generated with PROCHECK [77] and MolProbity [78]. Ramachandran plots evaluate the backbone conformation of proteins by dividing amino acid residues into allowed and disallowed regions. PROCHECK uses stereochemistry to assess the overall quality of protein structures by comparing them to refined structures at the same resolution and then highlighting regions requiring further analysis. MolProbity validates local and global macromolecule (protein and nucleic acid) models by a mix of X-ray, NMR, computational, and cryo-EM criteria [79]; its power and sensitivity derive from optimized hydrogen placement and all-atom contact analysis, combined with updated covalent geometry and torsion angle criteria [80].
Discontinuous B-cell epitopes
Discontinuous B-cell epitopes in the native protein structure were predicted using ElliPro [81]. ElliPro implements three algorithms that approximate the protein shape as an ellipsoid, calculate the residue protrusion index (PI), and cluster neighboring residues based on their PI values. ElliPro assigns each output epitope a score defined as the average PI value of the epitope residues. An ellipsoid with a PI value of 0.9 contains 90% of the protein residues, while the remaining 10% lie outside the ellipsoid; for each epitope residue, the PI value is calculated from the centre of mass of the residues lying outside the largest possible ellipsoid.
Molecular docking of chimeric proteins
Molecular docking of the designed vaccine (ligand) with the Toll-Like Receptor-4 (TLR4) immune receptor (PDB ID: 3FXI) was performed using PatchDock [82], and the top 10 models were then refined using FireDock [83]. PatchDock represents the Connolly dot surface of the molecules as concave, convex, and flat patches, and the resulting models are scored based on geometric fit and atomic desolvation energy [82]. FireDock optimizes side-chain conformations and rigid-body orientation and outputs a 3D refined complex ranked by binding energy [83]. We selected the top FireDock model, ranked by global energy, as the docking complex. Finally, the binding energy and dissociation constant of the docking complex were predicted using the PRODIGY server [84].
Molecular dynamics simulation
Molecular dynamics simulations of the proteins were performed using the fast and freely accessible internal coordinates normal mode analysis server (iMODS) [85], applied to the consistent, optimal docking result obtained from the PatchDock-FireDock pipeline. In internal coordinates, normal mode analysis (NMA) captures the collective motions critical for macromolecular function. iMODS provides mechanisms for exploring these modes as vibration analyses, motion animations, and morphing trajectories, carried out almost interactively at different resolutions [85].
Reverse translation, codon optimization, and in silico cloning of the vaccine
To effectively express the vaccine candidate in Escherichia coli cells, cDNA was generated in silico through reverse translation and codon optimization using the Java Codon Adaptation Tool (JCat) [86]. Optimization involved (i) avoiding rho-independent transcriptional terminators, (ii) avoiding prokaryotic ribosome binding sites, (iii) avoiding cleavage sites of the restriction enzymes NcoI and XhoI, which serve as the N-terminal and C-terminal restriction sites for insertion of the cDNA template of the vaccine, and (iv) applying only partial optimization to allow site-directed mutagenesis. The Codon Adaptation Index (CAI) and GC content were used to assess the quality of the cDNA, with an opal stop codon (TGA) inserted after the His×6 tag. The optimized DNA fragment of the chimeric vaccine candidate was then integrated into the reverse strand of pET-28a(+) using the SnapGene tool [87].

Fig. 1 summarizes the overall concept of this multi-epitope vaccine design. Briefly, we analyzed the antigenicity of M. tuberculosis proteins obtained from the IEDB and selected eight antigenic candidates for multi-epitope vaccine construction, considering their functional classification and subcellular localization. We then predicted the antigenic epitopes of these eight proteins and selected 15 CTL epitopes, 16 HTL epitopes, and 10 B-cell epitopes, which were linked to the HBHA adjuvant and PADRE through suitable linkers. Furthermore, we predicted the antigenicity, allergenicity, physicochemical properties, and immunogenic profile of the designed multi-epitope vaccine. We also predicted its secondary and tertiary structures and analyzed its potential interaction with the TLR4 immune receptor.
Retrieval of M. tuberculosis proteins for multi-epitope vaccine construction
To construct a multi-epitope vaccine against TB, we first analyzed the antigenicity of proteins in M. tuberculosis H37Rv. A total of 492 proteins with validated immunological properties were obtained from the IEDB database, of which 402 (81.70%) were predicted to be antigenic. For functional characterization, 353 of the 492 proteins were enriched in the DAVID database, while 139 were not. Localization analysis showed that 97 of the 353 enriched proteins were localized in the extracellular compartment, the space outside the cell surface, which includes outer membrane proteins and secreted proteins. Among these 97 proteins, 49 are secreted proteins. Further enrichment of the 49 secreted proteins revealed that five were secreted into the extracellular space and five were required for pathogenesis (experimentally validated by BioCyc) [48]. Moreover, when these ten proteins were aligned against the total proteome of H. sapiens, two (Mpt83 and Mpt70) were highly similar to H. sapiens proteins. Therefore, the remaining eight proteins without similarity to H. sapiens proteins (Table 1) were taken as the final candidates for construction of the multi-epitope vaccine.
Prediction of T cell and B-cell epitopes of candidate proteins
Prior to epitope prediction, signal peptides were identified and deleted from the candidate proteins. A total of 623 CTL epitopes from the eight candidate proteins, including 135 strong binders (16 of A1, 40 of A2, 22 of A3, 19 of A24, and 38 of B7 supertypes of MHC class I), were predicted by the NetMHCpan-4.1 server (Fig. 2A, B). Among the 135 strong binders, 36 were both antigenic and immunogenic. We then selected 15 of these 36 CTL epitopes according to the following criteria: three epitopes per supertype and at least one per candidate protein (note: EsxA had no strong binder that was both antigenic and immunogenic) (Table 2). Moreover, we predicted the HTL epitopes (MHC class II epitopes). 15-mer epitopes for four allele groups (HLA-DR, HLA-DP, HLA-DQ, and H-2-I) with a percentage rank ≤ 2.0 were considered strong binders and ranked from the lowest to the highest score. The HLA-DR alleles had the highest number of MHC II binders (2990), while HLA-DP had the fewest (601) (Fig. 2C, D). It should be noted that the immune response in granulomatous disorders such as tuberculosis, leprosy, and sarcoidosis is dominated by IFN-γ-producing T-helper-1 (Th1) cells [97]. In addition, previous reports revealed that the expression of Th1 cytokines (IFN-γ and IL-2) is decreased in tuberculosis patients [98]. Therefore, to stimulate Th1 cells, we predicted 15-mer IFN-γ-inducing peptides for these proteins. A total of 534 epitopes with IFN-γ-inducing activity were predicted using NetMHCIIpan-4.0, of which 315 were antigenic. Of these 315 epitopes, 103 showed strong binding activity to at least one HLA allele. Polymorphism in HLA alleles results in allelic variation, leading to widely distinct peptide-binding specificities [55]. As a result, we selected the epitope with the strongest binding activity (the most promiscuous epitope) from each of the eight candidate proteins; the remaining 95 epitopes were then ranked by binding strength, and an additional eight epitopes were selected, with a maximum of two epitopes per candidate protein. In total, we selected 16 of the 103 HTL epitopes according to the following criteria: at least one and at most three epitopes per candidate protein, the strongest binding activity per candidate protein, and the highest antigenicity and IFN-γ-inducing activity. These were used to construct the multi-epitope vaccine (Table 2).
As for linear B-cell epitope prediction, two servers (BepiPred 2.0 and ABCpred) were used, and 166 epitopes were obtained. We predicted the antigenicity of the top five epitopes for each protein and then selected ten antigenic epitopes (16-mers, at least one from each candidate protein) (Table 2).
HBHA has previously been placed at the N-terminus of some multi-epitope vaccines as an adjuvant [105,107]. In addition to the adjuvant, PADRE, which is safe for humans, can significantly enhance vaccine immunogenicity because it binds with high affinity to multiple mouse and human MHC II alleles to induce Th cell-mediated responses [47,62]. HBHA and PADRE were connected by a helical EAAAK linker, which provides rigidity while separating the two protein components to enhance efficiency and reduce interference; a Glu-Lys salt bridge formed within the segments further stabilizes the linker. Since PADRE can enhance the activity of vaccine HTL epitopes [54], we placed the 16 HTL epitopes after PADRE. The GPGPG linker was used to link the HTL epitopes to each other and to PADRE, since it facilitates immune processing and epitope presentation [108,109]. Moreover, GPGPG induces HTL immune responses, which can disrupt junctional immunogenicity and restore the immunogenicity of individual epitopes after processing [108]. The 15 CTL epitopes were joined by AAY linkers, which help the epitopes form suitable sites for binding to the TAP transporter and enhance epitope presentation [47]. Further, the entire CTL epitope subunit was linked to the upstream HTL epitopes via the HEYGAEALERAG linker, which possesses five appropriate cleavage sites (A7-L8, A5-E6, Y3-G4, R10-A11, and L8-E9) essential for the eukaryotic proteasomal and lysosomal degradation systems. In eukaryotes, the proteasome and lysosome are the most important proteolytic machineries, utilizing the ubiquitin-proteasome system (UPS) and the autophagy pathway, respectively [110]. The HEYGAEALERAG linker was also used to link the CTL epitope subunit to the 10 downstream linear B-cell epitopes, which were linked together by KK linkers. According to previous studies, the KK linker prevents the induction of antibodies against the junctional amino acid sequence formed by the combination of two peptides, thereby facilitating the specific display of each peptide to antibodies [111]. Finally, a His×6 tag was attached to the C-terminus of the last linear B-cell epitope for subsequent purification and characterization of the vaccine (Fig. 3).
The final chimeric protein consists of 933 amino acid residues, starting with the HBHA adjuvant, followed by 41 epitopes, and ending with a His×6 tag (Fig. 3). Since this chimeric protein is composed of multi-epitope antigens from eight proteins, we named it MTBV8, the Multi-epitope TB Vaccine derived from 8 candidate proteins.
Analysis of the antigenicity, allergenicity, and physiochemical parameters of MTBV8
Previous studies have demonstrated that antigenicity is required for human vaccines to elicit a humoral immune response, leading to the generation of memory cells directed against epitopes of the infectious agent [112]. Therefore, we predicted the antigenicity of the multi-epitope vaccine MTBV8 using the VaxiJen 2.0 server and obtained an antigenicity score of 0.97, well above the threshold (0.4) [45,46] for a bacterial protein to be considered antigenic. Notably, MTBV8 was more antigenic than any of its component proteins (Fig. 4), suggesting that it could effectively stimulate host immune responses.
Moreover, many microbial macromolecules have been reported to have the potential to induce hypersensitivity reactions in humans [113]. To analyze the hypersensitivity risk of MTBV8 and its eight constituent proteins, we predicted their allergenicity using the AllerTOP 2.0 server. Two of the eight component proteins (LprA and EspC) were predicted to be allergens (Table 1). Notably, although the designed multi-epitope vaccine (MTBV8) contains ten epitopes obtained from these two proteins, it was predicted to be non-allergenic, indicating that the vaccine is safe.
To further characterize MTBV8, we analyzed its physicochemical properties with the ExPASy ProtParam server. The recombinant protein has a molecular weight of 94.84 kDa and an isoelectric point of 8.91. The protein was considered stable, with a calculated instability index (II) of 26.44 (proteins with an instability index > 40 are considered unstable) [64]. The estimated half-life is 30 h in mammalian reticulocytes (in vitro) and more than 20 h in yeast (in vivo). Finally, the predicted solubility value for MTBV8 (0.54) was greater than the average of the population dataset (0.45), indicating that the vaccine candidate is more soluble than half of the E. coli proteins [114]. Together, these results indicate that MTBV8 has favorable physicochemical properties and is suitable for use as a vaccine.
Immunogenic profile of MTBV8
To assess the immunogenic profile of MTBV8, we analyzed the immune responses in silico via the C-ImmSim server. As shown in Fig. 5A, the IgM and IgG1 titers were substantially increased after secondary immune stimulation by MTBV8. Moreover, the level of active B cells increased and remained high after each immunization (Fig. 5B). Even for B cells in plasma, the isotypes IgM and IgG1 remained high after immunization (Supplementary Fig. 1A, B). As for the effect on T cells, the levels of Th memory cells (y2) (Supplementary Fig. 1C) and active Th cells (Fig. 5C) increased rapidly after primary immunization and were significantly boosted after secondary immunization. Regulatory T cells (primarily active cells) were stimulated upon initial immunization and then declined rapidly (Supplementary Fig. 1D). Interestingly, the level of active cytotoxic T (Tc) cells increased rapidly after primary immunization, remained high through the second immunization, and then declined steadily, whereas resting Tc cells showed the reverse trend (Fig. 5D). In contrast, the levels of anergic (y2) Tc cells (Fig. 5D) and memory Tc cells (Supplementary Fig. 1E) remained unchanged upon immunization. In addition, the levels of the immune factors IFN-γ and IL-2 secreted by T cells, which are critical for the immune response against M. tuberculosis [115], also increased significantly after MTBV8 immunization (Fig. 5E).
Furthermore, we predicted the effect of MTBV8 vaccination on innate immune cell populations (Fig. 5F and Supplementary Fig. 1F-H). Presenting-2 cells among DCs and macrophages increased rapidly upon the first immunization, whereas only a small increase was observed after the second immunization (Fig. 5F and Supplementary Fig. 1G). For macrophages, the first immunization produced a simultaneous increase in active macrophages and decrease in resting macrophages; a few weeks after the second immunization, active macrophages decreased rapidly while resting macrophages increased rapidly (Supplementary Fig. 1G). Notably, natural killer cell and active epithelial cell populations remained fairly constant upon immunization (Supplementary Fig. 1F, H). Taken together, these predictions suggest that MTBV8 can potently stimulate both innate and adaptive immune responses, making it a potentially effective vaccine candidate.
Predicted secondary and tertiary structures of MTBV8
To better characterize MTBV8, we predicted its secondary and tertiary structures, followed by refining its tertiary structure. The secondary structure of MTBV8 consisted of 42.12% helices, 11.79% strands, and 46.09% coils (Supplementary Fig. 2). According to DISOPRED3 predictions, 27.11% of the amino acid residues were disordered.
The tertiary structure of MTBV8 was predicted by the I-TASSER server, and five models with good C-scores were obtained. The C-score ranges from −5.0 to 2.0 and is proportional to the reliability of the prediction. These models were subjected to ProSA-web analysis, and model 4 was selected based on the following characteristics: C-score = −3.8, Z-score = −4.4. Ramachandran plot analysis of this tertiary structure showed that less than 70.0% of the residues were in preferred regions; thus, further refinement of the tertiary structure was required.
The tertiary structure obtained from I-TASSER was refined with the ModRefiner server, followed by the GalaxyRefine server, which generated five models. Among the five refined models, model 2 had the best structure considering the following parameters: global distance test-high accuracy (0.89), RMSD (0.58), MolProbity score (2.31), clash score (14.8), poor rotamers (0.80), and Ramachandran plot score (86.5%) (Fig. 6A). Ramachandran plot analysis of this structure using PROCHECK revealed that, of the 753 non-glycine, non-proline, and non-terminal residues, 78.9% were in the most favored region [A, B, L], 17.3% were in the additionally allowed region [a, b, l, p], and 1.5% were in the generously allowed region [~a, ~b, ~l, ~p], with only 2.4% in the disallowed region (Fig. 6B). When the Ramachandran plot of the refined structure was analyzed with MolProbity, 86.5% of the amino acid residues were located in the favored region, the same value as obtained from the GalaxyRefine server (Fig. 6C). Furthermore, 98.0% of the residues were in the allowed region, while 19 residues were (phi, psi) outliers (Fig. 6C). Although the refined model has more residues in the favored region than the model generated by I-TASSER, it required further validation.
Model validation is a key step in the model building process, as it can identify potential errors in predicted 3D models [116]. To validate the refined model of MTBV8, ProSA-web and ERRAT analyses were performed. ProSA-web analysis showed a Z-score of −5.76 for MTBV8 (Fig. 6D), slightly outside the range of scores experimentally determined for proteins of similar size. However, ERRAT analysis gave an overall quality factor of 70.94 for refined MTBV8. Since an ERRAT score greater than 50 represents a good-quality model [117], a score of 70.94 gave us high confidence in the modeled structure for subsequent analyses.
For discontinuous conformational B-cell epitopes, we predicted 10 of them, with scores ranging from 0.85 to 0.53 (Supplementary Table 1).
Molecular docking of MTBV8 with immune cell receptors
The strong affinity of a vaccine for immune cell receptors produces a stable immune response [118]. To gain insight into the potential interactions between MTBV8 and immune cell receptors, we carried out molecular docking between MTBV8 and the immune receptor TLR4 (Fig. 7A and B). Previous studies have shown that TLR4 ligands can activate DCs, which in turn activate naive T cells, effectively polarize T cells (CD4+ and CD8+ cells) to secrete IFN-γ, and induce T cell-mediated cytotoxicity [106]. These processes lead to an increase in the pool of effector memory cells [119]. Since the interaction of TLR4 with antigenic peptides leads to responses against TB infection, we focused on the interactions between MTBV8 and TLR4 (Fig. 7A). Molecular docking results from FireDock showed that the most favored refined model had a global energy of −40.28, an attractive van der Waals energy (aVdW) of −19.86, a repulsive van der Waals energy (rVdW) of 2.19, an atomic contact energy of 4.70, and a hydrogen bond (HB) contribution of −1.03 (Fig. 7B). Additionally, PRODIGY predicted a binding energy of −8.6 kcal/mol between TLR4 and MTBV8. These results indicate that MTBV8 has a strong affinity for TLR4, enabling it to generate a stable immune response.
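For intuition, the predicted binding energy can be converted into an approximate dissociation constant via ΔG = RT·ln(Kd); the 25 °C temperature assumed below is ours, not stated in the text.

```python
import math

R = 1.987e-3   # gas constant, kcal / (mol * K)
T = 298.15     # 25 degC (assumed temperature)
dG = -8.6      # PRODIGY-predicted binding energy, kcal/mol

Kd = math.exp(dG / (R * T))   # from dG = R * T * ln(Kd)
print(f"{Kd:.1e} M")          # ~5e-7 M, i.e. sub-micromolar affinity
```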
Molecular dynamics simulation
Molecular dynamics simulation was used to analyze the motion of atoms in the vaccine [120]. Main-chain deformability reveals the extent to which a molecule can deform at each constituent residue; regions of high deformability indicate the positions of chain "hinges" (Fig. 7C). The experimental B-factor was obtained from the corresponding PDB entry, and the calculated B-factor was obtained by multiplying the NMA mobility by 8π² (Fig. 7D). The eigenvalue assigned to each normal mode is a measure of motion stiffness and is directly related to the energy required to deform the structure; the lower the eigenvalue, the easier the deformation (Fig. 7E). The variance of each normal mode is inversely proportional to its eigenvalue; individual variances are colored in brown and cumulative variances in green (Fig. 7F). The covariance matrix shows the coupling between pairs of residues, displaying correlated (red), uncorrelated (white), and anti-correlated (blue) motions (Fig. 7G). The elastic network represents pairs of atoms connected by springs, with each dot representing a spring between the corresponding pair of atoms; darker gray dots indicate stiffer springs and vice versa (Fig. 7H). Molecular dynamics simulations using the iMODS server suggested that the docking complex between MTBV8 and TLR4 is stable.
In silico optimization and cloning of MTBV8
Protein synthesis is a prerequisite for protein activity [121]. In line with this, increased transcriptional and translational efficiencies of the vaccine are necessary for its overexpression in E. coli cells via the self-replicating plasmid pET-28a(+) [103]. Codon optimization was thereby achieved, yielding a Codon Adaptation Index (CAI) of 0.99 and a GC content of 54.81%, within the 30-70% range generally considered favorable for protein expression in the host. The DNA fragment of mtbv8 could be cloned into the pET-28a(+) vector between the NcoI and XhoI restriction sites using SnapGene to generate the recombinant plasmid pET-28a(+)-mtbv8 (Supplementary Fig. 3).
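Two of these optimization checks are easy to express directly; the sketch below computes GC content and screens for internal NcoI/XhoI sites (recognition sequences CCATGG and CTCGAG). It is illustrative rather than the JCat procedure.

```python
def gc_content(dna):
    """Percentage of G and C bases in a DNA sequence."""
    dna = dna.upper()
    return 100.0 * (dna.count("G") + dna.count("C")) / len(dna)

def has_internal_sites(dna):
    """True if NcoI (CCATGG) or XhoI (CTCGAG) would cut inside the insert;
    these enzymes must cut only at the terminal cloning sites."""
    return any(site in dna.upper() for site in ("CCATGG", "CTCGAG"))
```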
The emergence of multi-epitope vaccines against tuberculosis
Vaccination is the most reliable way to fight tuberculosis infection. For many years, scientists have identified numerous promising vaccine candidates that could replace BCG, the only approved TB vaccine [8,100,122-125]. Recently, recombinant TB vaccines have received increasing attention, and several have demonstrated efficacy in preclinical and clinical trials. These include ID93/GLA-SE, consisting of the four candidates Rv2608 (PE/PPE family), Rv1813 (expressed under stress/hypoxia), ESXV, and ESXW, which showed promise in preclinical mouse trials [122]. In clinical trials, GamTBvac, obtained by fusing Ag85a and ESAT6-CFP10, was successful in phase I [125], while M72, consisting of two candidates [Mtb39A (PPE18) and Mtb32A], showed significant potency in phase II [123]. The successful trials of these recombinant vaccines indicate the potential of epitope-based vaccines, which simply incorporate the immunological properties of different candidate proteins into a synthetic protein [124]. Notably, several TB subunit vaccines have been derived from a large number of vaccine candidates and have emerged as attractive candidates in animal model studies [100]. For instance, MP3RT, a multi-epitope peptide TB vaccine candidate consisting of six immunogenic HTL peptides, induced significantly higher levels of IFN-γ and CD3+IFN-γ+ T lymphocytes and lower colony-forming units (CFUs) in the lungs and spleens of humanized mice than of wild-type mice [8]. The search for greater potency in these vaccines has led to an enrichment of potential protein candidates. Proteins secreted by M. tuberculosis were also considered in the design of MTBV8 (Table 1) because they are important for TB pathogenesis and virulence [126]. In addition, some proteins secreted to the outer layers of M. tuberculosis cells can serve as useful antigens; for instance, EsxA and EsxB have been purified from the capsule of M. tuberculosis [127], retention of EsxA in the capsule has been implicated in cytotoxicity [128], and EsxA is required for bacterial cell wall integrity [129]. Overall, secreted proteins have sufficiently exposed surfaces that they are targets of host immune responses [130,131].
Potential stimulation of the innate immune system by MTBV8
Screening for epitopes in candidate proteins involves activation of the innate immune system followed by antigen recognition by lymphocytes, because both are required for subsequent activation of the adaptive immune system [50]. For the innate response, the formation of a stable complex between MTBV8 (aided by the TLR4 stimulator HBHA as adjuvant) and TLR4 on DCs (Fig. 5F) is important, because TLR4 ligands can activate DCs, and activated DCs can in turn activate naive T cells, effectively polarize CD4+ and CD8+ T cells to secrete IFN-γ, and induce T cell-mediated cytotoxicity [106], subsequently increasing the pool of effector memory cells [119]. This complex likely contributed to the increase in the memory cell (y2) populations of B lymphocytes (Supplementary Fig. 1A) and Th lymphocytes (Supplementary Fig. 1C) relative to non-memory cells. Immune stimulation by MTBV8 is comparable to that obtained with other multi-epitope vaccines against TB [33,37,39] and other diseases [27,62,107,124,132], hinting at the consistency of the multi-epitope vaccine mechanism.
Potential stimulation of adaptive immune system by MTBV8
B and T lymphocytes are the primary effector cells that coordinate adaptive immune responses through humoral or cell-mediated immunity by recognizing parts of the invading pathogen, referred to as antigens [50]. To induce a humoral adaptive immune response, we incorporated high-scoring B-cell epitopes (Table 2) for recognition and destruction by antibodies expressed and secreted by B lymphocytes [50,133]. Vaccine stimulation of active B lymphocytes (Fig. 5B), mostly IgM- and IgG1-secreting B cells (Supplementary Fig. 1A), may result in abundant secretion of IgM and IgG1 (Fig. 5A). The high antigenicity of the vaccine (Fig. 4) may have important implications for enhancing antibody responses (Fig. 5A). Cell-mediated immune responses, in turn, depend strongly on the ability of T cells to recognize antigens [134]. To achieve this, we selected strongly antigenic and strongly MHC-binding CTL and HTL epitopes (Table 2), which enhance peptide:MHC complex (pMHC) stability, the key factor controlling MHC peptide immunogenicity [135] and a contributor to the priming of CD8+ and CD4+ T cells by DCs, leading to T lymphocyte responses (Fig. 5C, Supplementary Fig. 1C) [136]. The incorporation of appropriate linkers and synthetic oligopeptides (PADRE and HEYGAEALERAG) and the precise positioning of each component could facilitate epitope processing and presentation (Fig. 3). Due to allelic variation among humans, we selected CTL epitopes based on recognition by the MHC I alleles of the major human races (Table 2), and promiscuous HTL epitopes were selected after screening for IFN-γ-inducing properties and antigenicity.
The development prospects of MTBV8
This in silico vaccine design serves as a relevant indicator in the screening phase for the synthesis and efficacy of candidate vaccines. For synthesis, our predictions revealed that MTBV8 can be produced in the E. coli system; however, other systems such as insect and mammalian cells could also be considered if E. coli-based systems present difficulties. Wet-laboratory validation is the next stage in confirming vaccine candidates after in silico studies; this route has been used effectively for several vaccine candidates that were first designed in silico and later proved effective in the experimental phase [137].
Conclusion
A key support of the WHO Sustainable Development Goal of complete TB eradication is access to a potent vaccine to prevent M. tuberculosis infection. Previous studies have shown that chimeric multi-epitope vaccines can exploit the immune/antigenic properties of selected proteins to enhance immune efficacy. In this study, we used an in silico approach to design a chimeric vaccine (MTBV8) by integrating 41 promiscuous epitopes derived from eight antigenic proteins secreted by M. tuberculosis H37Rv. MTBV8 was predicted to be stable, soluble, safe, highly antigenic, and highly immunogenic. Immuno-informatics analysis showed that MTBV8 exhibited good affinity for the major immune cell receptor TLR4. Importantly, immunization with MTBV8 could significantly increase the levels of B cells, T cells, and innate immune cell populations and stimulate the production of immunoglobulins (IgG1, IgM, etc.) as well as immune factors (IFN-γ, IL-2, etc.). Taken together, the multi-epitope vaccine MTBV8 is a relevant reference point in the design of anti-TB vaccines and may become a potential cornerstone in realizing the 'End TB Strategy'.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Biology"
] |
Site-specific Binding Affinities within the H2B Tail Domain Indicate Specific Effects of Lysine Acetylation
Acetylation of specific lysines within the core histone tail domains plays a critical role in regulating chromatin-based activities. However, the structures and interactions of the tail domains and the molecular mechanisms by which acetylation directly alters chromatin structures are not well understood. To address these issues we developed a chemical method to quantitatively determine binding affinities of specific regions within the individual tail domains in model chromatin complexes. Examinations of specific sites within the H2B tail domain indicate that this tail contains distinct structural elements and binds within nucleosomes with affinities that would reduce the activity of tail-binding proteins 10–50-fold from that deduced from peptide binding studies. Moreover, we find that mutations mimicking lysine acetylation do not cause a global weakening of tail-DNA interactions but rather the results suggest that acetylation leads to a much more subtle and specific alteration in tail interactions than has been assumed. In addition, we provide evidence that acetylation at specific sites in the tail is not additive with several events resulting in similar, localized changes in tail binding.
Nucleosomes are the fundamental repeating subunits of eukaryotic chromatin, comprising about 200 bp of DNA each, 147 bp of which are wrapped about 1.65 times around an octamer of the four core histone proteins (1,2). Strings of nucleosomes are assembled into secondary structures such as the 30 nm diameter chromatin fiber and the higher order tertiary structures perhaps exemplified by the ~400 nm chromonema fibers (3-6). The organization of nucleosomes within secondary and tertiary chromatin structures and the molecular interactions responsible for their formation are poorly understood (4,7,8).

Approximately 75% of the mass of each of the core histones is organized into a largely α-helical domain that is assembled into the protein spool onto which the DNA is wrapped (1,9). The remaining mass is contained within the core histone tail domains, which project out from the interior of the nucleosome core and are accessible to both DNA and protein targets in chromatin (10-12). The tails are essential for formation of higher order secondary and tertiary chromatin structures and participate in short and long range inter-nucleosome interactions (5,7,13,14). Although the tails adopt random coil conformations when released from their binding sites in moderate ionic strength solutions (>0.4 M NaCl) (15-18) and are often referred to as "unstructured" domains, evidence suggests they adopt defined structures and make localized interactions when bound within chromatin (19-23). For example, a specific interaction between a region within the H4 tail domain and a charged surface formed by the H2A/H2B histone fold domains contributes to stability of the folded chromatin fiber (1,7,24). However, in general the structures and interactions of the tail domains are poorly understood.

The core histone tail domains play key roles in the epigenetic regulation of gene expression, and post-translational modifications such as acetylation of specific lysines within these domains are closely linked to transcriptionally active regions of the genome (25-27). Lysine acetylation can function as an epigenetic "mark" facilitating the recruitment of additional factors to specific loci to facilitate transcription (28,29). In addition, acetylation within the tail domains directly alters the accessibility of nucleosomal DNA and reduces the ability of nucleosome arrays to assemble into higher order structures (5,30-34). However, the mechanism by which lysine acetylation directly alters the structure and functionality of chromatin remains unclear. Acetylation results in neutralization of the positive charge on lysine, and thus it is often assumed that acetylation weakens histone-DNA interactions, resulting in a more open and transcriptionally permissible chromatin structure (35,36). However, recent evidence suggests that the effect of acetylation may be much more complex. For example, acetylation of lysine 16 within the H4 tail domain abrogates an interaction between the H4 tail and a surface of H2A/H2B, resulting in a reduction in the ability of nucleosome arrays to condense into higher order structures (24). Moreover, UV laser cross-linking and circular dichroism experiments with native nucleosomes indicate that acetylation does not lead to a detectable reduction in histone tail-DNA interactions (37) and may actually bring about an increase in the α-helical content of the tails (19). Finally, biophysical analyses suggest that the effect of acetylation on salt-dependent folding of nucleosome arrays is inconsistent with a simple charge neutralization mechanism (19,20,32,38).
To better understand the structures and interactions of the tail domains and the effects of post-translational modifications on these interactions, we developed a chemical reactivity assay to quantitatively evaluate binding of individual regions within the core histone tail domains. In this study, we focused on histone H2B tail-DNA interactions and the effect of acetylation on this domain.
Our results indicate that acetylation in the H2B tail domain does not cause a global weakening of tail interactions but rather induces specific alterations in tail structure.
EXPERIMENTAL PROCEDURES
Preparation of Core Histones-Coding sequences for H2B mutants were generated from the wild-type Xenopus H2B sequence using the Stratagene QuikChange kit and cloned into the pET3a expression plasmid. Bacterial cells expressing these proteins were resuspended in 10 mM EDTA, 0.5% Triton X-100 and incubated on ice for 30 min. The mixture was then acidified with HCl to a final concentration of 0.4 M, incubated on ice for 30 min, and centrifuged for 15 min at 13,000 rpm. The supernatant was neutralized with 10 M NaOH to pH 8.0, and the NaCl concentration was adjusted to 1 M. Recombinant Xenopus histone H2A was expressed separately in bacterial cells, and H2A/H2B dimers were purified as described (39). All buffers included 10 mM DTT. (H3/H4)₂ tetramers were purified from chicken red blood cell nuclei as described (39).
Reconstitution of Nucleosomes and Purification of Nucleosome Core Particles-Calf thymus DNA was sonicated into ~1-2-kb fragments. H2A/H2B, H3/H4, and calf thymus DNA were reconstituted by salt dialysis in the presence of 10 mM β-mercaptoethanol, then concentrated to 2.5 mg/ml and digested with micrococcal nuclease (70 units/ml) at 37°C for 10 min in 10 mM Tris, pH 8.0, 2 mM CaCl₂ to generate NCPs. The digestion was stopped by adding EDTA to 2.5 mM, and NCPs were purified on 5-20% sucrose gradients centrifuged at 4°C for 18 h at 34,000 rpm.
Reconstitution of Nucleosome Arrays and Purification of Mononucleosomes-The 208-12 DNA template, which contains 12 tandem 208-bp repeats of Lytechinus 5 S rDNA, was prepared and reconstituted as described (40). The molar ratio of histone octamer to 208-bp 5 S repeat DNA was maintained at 0.9 to produce subsaturated nucleosome arrays. 5 S mononucleosomes were purified on 5-20% sucrose gradients after digestion of the arrays with EcoRI (32).
Reaction of Nucleosomes and H2A/H2B Dimers with Fluorescein 5-Maleimide-NCPs containing H2B cysteine mutations were incubated in 10 mM DTT for 1-2 h at room temperature, and then DTT was removed by exhaustive buffer exchange into 10% glycerol/TE using an Amicon YM-10 concentrator. NCPs were rapidly frozen and stored at −80°C for extended periods. Samples of 0.8 μM NCPs at various NaCl concentrations were reacted with 5.6 μM FM for different times, and the reaction was then stopped by adding DTT to a final concentration of 5 mM. 5 S mononucleosomes and arrays were treated in the same way. Reactions of free H2A/H2B dimers with FM were performed by chemical quench-flow (KinTek) to examine reaction points on the millisecond time scale. Reactions were initiated by mixing equal volumes of dimers (1.6 μM) and FM (11.2 μM) for various times, and the reaction was then quenched with 4 volumes of 10 mM DTT. In selected cases urea was added to a final concentration of 5 M to induce unfolding of the dimers (41).
Determination of Reaction Rate Constants-The extent of FM conjugation with H2B was analyzed by separating samples on 15% SDS-polyacrylamide gels, followed by either whole-gel fluorimetry (GE Healthcare) or digital photography of the gels illuminated with 365 nm light on a UV light box. The images were analyzed using ImageQuant (GE Healthcare) software, and reaction rate constants and global fits were determined using GraphPad Prism 4 software. Band volumes were determined at appropriate time intervals (A_t), and the maximal extent of reaction (A_0) was determined from points in the curve where dA_t/dt = 0. k_NCP was taken from fits of the standard single-phase exponential equation, A_t = A_0(1 − e^(−kt)). Errors were propagated by the partial differential method from source values.
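As an illustration of this fitting step, the following sketch fits the single-phase exponential A_t = A_0(1 − e^(−kt)) to hypothetical band-volume data with SciPy in place of GraphPad Prism; the time points and volumes are invented for illustration and are not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-phase exponential used for the labeling kinetics: A_t = A_0 * (1 - e^(-k t))
def single_phase(t, a0, k):
    return a0 * (1.0 - np.exp(-k * t))

# Hypothetical band volumes (arbitrary units) at sampled reaction times (s)
t = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0])
volume = np.array([0.9, 1.7, 2.9, 4.4, 5.6, 6.1, 6.3])

(a0_fit, k_ncp), pcov = curve_fit(single_phase, t, volume, p0=(volume.max(), 0.01))
k_err = np.sqrt(pcov[1, 1])  # 1-sigma uncertainty on the fitted rate constant

print(f"A0 = {a0_fit:.2f}, k_NCP = {k_ncp:.4f} +/- {k_err:.4f} s^-1")
```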
Model, Assay, and Theory-The histone tail domains are dynamic structures in NCPs and exist in two states, either bound within the NCP (B) or unbound (U), related by a conformational equilibrium (see Ref. 42):

B ⇌ U,  K_eq = [U]/[B] = k_12/k_21

where k_12 and k_21 are the rate constants for tail release and rebinding. The unbound form reacts preferentially with FM, and the free H2A/H2B dimer and U react with FM with identical kinetics (see below):

U + FM → U-FM,  rate = k_3[FM] = k_3'

Given that the overall reaction of NCPs with FM can be described by a single rate constant k_NCP, two limiting cases arise (see Refs. 42, 43).

Case I-Slow conformational transition limit: if the B → U transition is much slower than the reaction of U with FM, every tail that releases is immediately labeled, so k_NCP ≈ k_12, which is zero order in [FM].

Case II-Rapid conformational pre-equilibrium limit, opposite to case I: if the B ⇌ U equilibrium is fast relative to labeling, k_NCP = k_3' × K_eq/(1 + K_eq). At very low salt concentrations K_eq should be less than 1, so k_NCP ≈ K_eq × k_3', and k_NCP is linearly proportional to [FM].

Thus the dependence of k_NCP on [FM] can be used to distinguish the two models: for case II this dependence is first order, whereas for case I it is zero order. Fig. 1E shows that the dependence of the overall rate constant k_NCP on [FM] at two different NaCl concentrations is first order, so case II, the rapid conformational pre-equilibrium limit, applies. Rearranging the case II expression then gives

K_eq = k_NCP/(k_3' − k_NCP)

As mentioned above and in the text, we assume that k_3', the intrinsic rate of reaction of the NCP U form with FM, is equivalent to the intrinsic rate of reaction of the corresponding free H2A/H2B dimer. Both the U form of NCPs and the free dimer have highly mobile tail domains (16,18). Indeed, we find that the rates of reaction of the free dimer and the NCP at elevated salt (0.5 M), where the tails are expected to be completely released from binding sites within nucleosomes (17), are equivalent (see text).
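A minimal numerical sketch of the case II analysis, assuming the rearranged relation K_eq = k_NCP/(k_3' − k_NCP) derived above and the partial-differential error propagation mentioned under "Determination of Reaction Rate Constants"; the rate values below are placeholders chosen to give a physiological-range K_eq of ~0.1.

```python
import numpy as np

def keq_from_rates(k_ncp, k3p, sd_k_ncp, sd_k3p):
    """K_eq = k_NCP / (k3' - k_NCP) in the rapid pre-equilibrium limit,
    with 1-sigma uncertainty propagated by the partial-differential method."""
    denom = k3p - k_ncp
    keq = k_ncp / denom
    dk_dncp = k3p / denom**2     # partial derivative of K_eq w.r.t. k_NCP
    dk_dk3p = -k_ncp / denom**2  # partial derivative of K_eq w.r.t. k3'
    sd_keq = np.hypot(dk_dncp * sd_k_ncp, dk_dk3p * sd_k3p)
    return keq, sd_keq

# Placeholder rates (s^-1): observed NCP rate and intrinsic unbound-tail rate
keq, sd = keq_from_rates(k_ncp=0.005, k3p=0.055, sd_k_ncp=0.0004, sd_k3p=0.003)
print(f"K_eq = {keq:.3f} +/- {sd:.3f}")
```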
RESULTS
Despite numerous studies of core histone tail domains, actual binding affinities of these domains in the context of nucleosomes have not been determined. We reasoned that evaluation of the binding parameters for several sites within a tail domain would reveal fundamental aspects of tail function and of post-translational modifications within these domains. Thus we developed an assay based on a prior observation that the reactivity of cysteines placed within the histone tail domains reflects the binding state of the tail: very low reactivity was observed in low salt (TE) conditions, whereas reactivity was greatly enhanced when the ionic strength was raised to levels expected to shift the equilibrium toward a state where the tails were dissociated from the nucleosome surface. We focused on the H2B tail domain for these initial studies, first examining the reactivity with FM of a cysteine substituted at the 6th position in the H2B tail domain (Fig. 1A, H2B S6C). Note that cysteine is nearly isosteric with serine, and thus no significant alteration of tail interactions is expected from this single substitution. Reaction of NCPs containing H2B S6C with FM resulted in specific modification of H2B (Fig. 1B) and exhibited first-order kinetics, with an apparent rate constant k_NCP that increased with increasing concentrations of NaCl in the reaction (Fig. 1C; see "Experimental Procedures"). A plot of k_NCP versus [NaCl] (Fig. 1D) shows the overall reaction rate increases up to about 500 mM NaCl, a salt concentration at which the tails are completely dissociated from the nucleosome surface (15,17).
However, k_NCP describes the overall reaction

Nuc + FM → Nuc-FM     (Reaction 1)

(where Nuc indicates nucleosome). Examination of the dependence of k_NCP on [FM] indicates that this reaction behaves according to a pre-equilibrium kinetic model (Fig. 1E; see "Experimental Procedures"). Thus, the tail-bound and tail-unbound states are in equilibrium, and it is possible to determine a conformational equilibrium constant, K_eq = [U]/[B], relating the concentrations of these species. However, calculation of K_eq requires determination of k_3, reflecting the intrinsic rate of reaction of the unbound tail with FM. We assumed that the intrinsic rate of reaction of FM with the tail-unbound (nucleosome) species is equivalent to the rate of reaction of free H2A/H2B S6C dimers with FM. In support of this assumption, we note that the rates of FM reaction with NCPs containing H2B S6C at ≥500 mM NaCl, where the tails are expected to be completely dissociated from the nucleosome surface, are equivalent to the rates observed with free H2A/H2B S6C dimers at the same salt concentration (results not shown). A plot of k_3' (see "Experimental Procedures") versus [NaCl], as determined by reaction of the free H2A/H2B S6C dimers with FM, shows that the intrinsic rate of the reaction is salt-dependent in a manner opposite to the salt dependence observed for k_NCP (Fig. 1F).
Repeating the assay at selected [NaCl] in the presence of 5 M urea indicates that the reaction rates for the free protein are not influenced by salt-dependent alterations in protein structure (Fig. 1F, diamonds).
Calculation of K_eq^S6C over a range of [NaCl] (Fig. 1, G and H) yields data that can be fit by a sigmoidal curve describing the release of the tail from the nucleosome surface, with the steepest dependence of K_eq^S6C on [NaCl] between 150 and 500 mM NaCl. K_eq^S6C values increase more than 2000-fold from 0 to 500 mM NaCl (0.00152 at 0 mM and ~3 at 500 mM NaCl). The data indicate that the region of the H2B N-terminal tail encompassing residue 6 binds very tightly within NCPs at very low ionic strengths. However, at physiological salt (~150 mM NaCl) K_eq^S6C is about 0.1, indicating that the end of the H2B tail is quite mobile, spending about 1/10 of the time dissociated from the nucleosome surface. Importantly, the data are consistent with previous quantitative determinations of the effect of salt on tail binding behavior (15,18,44-46), supporting the contention that we have determined bona fide K_eq values over this range of [NaCl] for the end of the H2B tail domain. We also note that the nucleosomes used in this analysis remained intact over the range 0-500 mM NaCl, as expected (Fig. 2 and results not shown) and as also indicated by the lack of labeling of the cysteine within H3 at position 110 (Fig. 1B).
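For readers who want to reproduce this type of salt-titration analysis, the sketch below fits a Boltzmann sigmoid (the functional form named in the Figure 1 legend and available in GraphPad Prism) to hypothetical K_eq-versus-[NaCl] points; the values are illustrative only, loosely patterned on the magnitudes quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, bottom, top, x_half, width):
    # Sigmoid rising from `bottom` to `top` with midpoint x_half and width `width`
    return bottom + (top - bottom) / (1.0 + np.exp((x_half - x) / width))

# Hypothetical K_eq values over a NaCl titration (mM); illustrative only
nacl = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 300.0, 400.0, 500.0])
keq = np.array([0.0015, 0.01, 0.04, 0.10, 0.35, 1.2, 2.3, 3.0])

popt, _ = curve_fit(boltzmann, nacl, keq, p0=(0.001, 3.0, 250.0, 60.0))
print("bottom, top, midpoint (mM), width:", np.round(popt, 3))
```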
Other Sites in the H2B Tail-We next determined K_eq values for two other sites within the ~27-amino acid H2B tail domain by substituting cysteine at position 14 or 21 (Fig. 1A). Interestingly, we find that these sites exhibit about a 10-fold greater binding affinity at low salt concentrations and ~5-fold tighter association within the nucleosome in the range of physiological ionic strengths (150 mM NaCl) compared with site 6 (Fig. 1H). The dependence of the conformational equilibrium constants on ionic strength at sites 14 and 21 also fits sigmoidal curves, as observed at site 6, with K_eq values for these two sites increasing more than 10,000-fold from 0 to ~500 mM NaCl (~0.0001 at 0 mM and ~1 at 500 mM NaCl). Moreover, the affinities of both sites are identical from 0 to ~350 mM [NaCl], suggesting that both sites are components of the same structural element in the H2B tail when bound to DNA, distinct from site 6.

[Displaced Figure 1 legend: panels B-H show SDS-PAGE/fluorography and Coomassie staining of labeling reactions; determination of k_NCP at various [NaCl] from first-order fits (r² ≥ 0.95); k_NCP versus [NaCl] for H2B S6C in NCPs, fitted to a Boltzmann sigmoidal curve (r² = 0.9931; error bars, two standard deviations from at least three determinations); the first-order dependence of k_NCP on [FM] at 0 and 150 mM NaCl; the intrinsic rate k_3' versus [NaCl], with and without 5 M urea (filled diamonds); and K_eq versus [NaCl] for H2B S6C, S14C, and T21C, expanded over 0-200 mM NaCl in panel H. At elevated [NaCl], k_NCP approaches k_3' and K_eq becomes indeterminate, so only the lower-[NaCl] data are plotted.]
Effect of Acetylation Mimics on Tail Binding Affinity-We next attempted to gain insight into the direct effect of acetylation on tail interactions by examining the binding behavior of H2B N-terminal tail domains containing Lys → Gln substitutions to mimic lysine acetylation. Glutamine has a neutral side chain that resembles acetylated lysine in charge and structure and can functionally replace this modification in vivo. For example, in yeast, H4 containing glutamine substitutions within the tail functions as acetylated H4 to prevent Sir3p binding and spreading of heterochromatin (47), and an H3-K56Q mutation partially suppresses the loss of acetylation of this residue in asf1 cells (48). Within the H2B N-terminal tail domain, six lysine residues have been reported to be acetylated in vivo: 5, 12, 15, 20, 24, and 27 (Fig. 1A) (2,49-51). We first examined the effect of single Lys → Gln substitutions at each of these positions on tail binding within NCPs containing the appropriate mutants of H2B S6C. Determination of K_eq over a range of ionic strengths showed that in general most single substitutions did not result in significant alterations in tail binding affinity (Fig. 3, A and B). Substitution of Lys-5, Lys-12, Lys-20, or Lys-27 with Gln resulted in no significant change in the range of 0 to ~300 mM NaCl, whereas substitution of Lys-15 or Lys-24 resulted in a very modest increase of about 1.2-fold in K_eq at 150 mM NaCl, with diminishing differences at lower NaCl concentrations.
Because multiple acetylation events often occur together on the same tail, we next examined multiple Lys → Gln substitutions in the H2B tail domain. Surprisingly, combinations of four or six Lys → Gln substitutions within the same tail (H2B S6C-K5Q/K12Q/K15Q/K20Q (H2B S6C4KQ) and H2B S6C-K5Q/K12Q/K15Q/K20Q/K24Q/K27Q (H2B S6C6KQ)) in NCPs did not result in further significant changes in the conformational equilibrium constant K_eq at any [NaCl] examined (Fig. 3, C and D). Because k_3', reflecting the intrinsic rate of cysteine reactivity, was nearly identical for all S6C glutamine substitution mutants tested (supplemental Fig. S1), we compared k_NCP data for each of these mutants directly (Fig. 3, E and F). This analysis allows comparison over a more extensive range of [NaCl] than is possible via calculation of K_eq (see "Experimental Procedures") and shows that k_NCP values for the wild-type H2B tail and the Lys → Gln mutant proteins are virtually identical over the range 0-1 M NaCl. These data indicate that mutations modeling acetylation do not cause a wholesale weakening of the binding affinity of the region of the H2B tail domain encompassing residue 6.
We next examined the effect of mutations modeling acetylation on site 14 within the H2B tail domain in NCPs. In contrast to the effect on K_eq^S6C, most single Lys → Gln mutations resulted in significant increases in K_eq at position 14. For example, substitution of lysine with glutamine at every position except residue 5 resulted in an ~10-fold increase in K_eq^S14C at ~50 mM NaCl and an ~2-3-fold increase in the range of 150 mM NaCl (Fig. 4, A and B). Moreover, we find that multiple Lys → Gln mutations caused increases in K_eq^S14C similar or equivalent to those caused by the single changes. For example, single Lys → Gln mutations at site 15, 24, or 27 gave the same K_eq values at physiological salt concentrations (150 mM NaCl) as observed with the 4KQ and 6KQ mutants (Fig. 4C), whereas substitutions at sites 12 and 20 resulted in alterations in K_eq^S14C that were not statistically different from the tails containing multiple mutations in this range. Interestingly, k_NCP for the native (unacetylated) H2B S14C tail attains a maximal value at ~500 mM NaCl, whereas a maximal value for the multiply "acetylated" tail is attained at ~150 mM NaCl (Fig. 4E). Similar results were obtained when comparing double mutants at sites 12 and 15 with the multiple-site mutants (Fig. 4D), suggesting the effects of acetylation in the region of residue 14 within the H2B tail domain are not additive. In general, mutations modeling acetylation cause about a 10-fold decrease in binding affinity at position 14 at low salt concentrations, about 3-fold at moderate salt (150 mM), and diminishing differences at higher [NaCl] as the tails become fully dissociated from the nucleosome surface.
The effects of mutations modeling acetylation at residue 21 (H2B T21C6KQ), a position near the histone fold domain, appear to be intermediate between those measured at positions 6 and 14. An H2B tail with six Lys → Gln mutations results in only a modest increase in K_eq^T21C, about 1.5-fold, at physiological ionic strengths (Fig. 5, A and B). Thus acetylation appears to have much greater effects on binding of the middle of the H2B tail domain than of the N- or C-terminal ends of this domain.
The native H2B tail contains nine lysines and no other positively charged residues. Surprisingly, as described above, there was no alteration in tail affinity as reflected by K_eq^S6C when six of the nine lysines were substituted with glutamine. Moreover, although six Lys → Gln substitutions induced significant changes in cysteine reactivities at positions 14 and 21, our analysis indicates that the H2B tail in these regions still exhibits salt-dependent binding behavior, with K_eq values approaching 2 × 10⁻⁴ at low salt (TE) conditions. Because it is assumed that the vast majority of binding free energy is contributed by electrostatic interactions of positively charged residues with the polyanionic backbone of DNA, we investigated this issue further by mutating all nine lysines to glutamine and measuring tail binding strength in NCPs. Surprisingly, we find that the tail still exhibits binding affinities at all three sites probed in the 9 Lys → Gln mutants similar to those found in the 6 Lys → Gln mutants (Fig. 6 and results not shown). Thus, tails completely lacking positively charged residues, containing only the single positive charge at the N terminus of the protein, still exhibited distinct salt-dependent binding behavior. These data suggest that constituents other than the positively charged lysine residues make significant ionic strength-dependent contributions to the binding free energy of the H2B tail domain (see below).
Linker DNA and Specific DNA Sequences Have No Effect on H2B Tail Binding Affinity-The addition of linker DNA to NCPs can lead to a relocation of histone tail-DNA interactions and increase histone tail-DNA cross-linking (22,52,53).
To determine whether the presence of linker DNA has any effect on the observed binding affinity of the H2B tail domain, we reconstituted nucleosome arrays using the Lytechinus variegatus 208-12 array DNA template (54) and our recombinant core histones, including the H2B cysteine substitution mutants. Arrays were reconstituted at a substoichiometric ratio of histone octamer to 208-bp repeat to eliminate any complication from NaCl-dependent folding of the array (5). Note also that monovalent salts do not induce significant condensation of H1-lacking nucleosome arrays (5,55). We found that K_eq values for the 6th position within the H2B tail measured within mononucleosomes and arrays containing H2B S6C were not significantly different from those found for NCPs (Fig. 7, A and B). Similar results were obtained for nucleosome arrays containing H2B S14C or H2B T21C (Fig. 7, C and D). However, we did detect a small but significant decrease in binding affinity at position 14 for NaCl concentrations ≥200 mM in the nucleosome array, which may be caused by relocation of the H2B tail domain from the nucleosome to linker DNA (52). These data also indicate that our results are not dependent on specific DNA sequences, as NCPs containing essentially random DNA sequences exhibited affinities identical to those measured with nucleosomes assembled on 5 S DNA sequences (Fig. 7A).
DISCUSSION
We have developed a quantitative method to measure conformational equilibrium constants reflecting the affinity with which core histone tail domains, and regions within these domains, bind within any model chromatin complex. This method can be applied over a range of conditions, including a wide range of ionic strengths. Using our method, we find a salt-dependent transition in H2B tail binding affinity that parallels previous general examinations of salt-dependent release of the core histone tails from the surface of nucleosomes (15-17). Indeed, plots of K_eq for the H2B tail domain generated by our method almost exactly parallel plots of bulk tail release obtained by NMR of stripped native chromatin (17). The data are also consistent with a more recent examination of salt-dependent tail release by small-angle x-ray scattering, which showed a significant increase in the form factor for nucleosome core particles, attributed to tail release, as NaCl concentrations were increased from 10 to 50 mM (45), although no further release was detected in the 50-200 mM range, perhaps because of compensating salt-dependent alterations in core structure; salt concentrations above 200 mM were not examined in that study. Thus our method can be used to quantitatively determine the binding state of histone tail domains in chromatin complexes.
Moreover, our results bear on the long-standing question of whether the core histone tail domains adopt defined secondary structures when bound within chromatin. The dependence of K_eq on ionic strength we detected (Fig. 1H) suggests that sites 14 and 21 are part of the same localized region of cooperative structure, separate and distinct from the region including site 6. Indeed, residues 10-21 in the H2B tail have been predicted to form an α-helix (56), as supported by spectroscopic evidence (19,20). Moreover, data from transglutaminase reactivity and fluorescence anisotropy assays indicate that two ionic strength-dependent transitions occur within the H2B tail domain, consistent with a cooperative release of the region of the tail encompassing residue 22 (46). Thus, despite the general view of the tails as unstructured domains, these data provide further support for the idea that the tails adopt defined structures and participate in localized interactions when bound within chromatin (38,57).

[Displaced Figure 4 legend: effects of single or multiple Lys → Gln mutations mimicking acetylation on K_eq for NCPs containing H2B S14C, plotted versus [NaCl] in comparison with native (S14C) H2B. A and B, single substitution mutants; C, multiple Lys → Gln substitutions; D, acetylation within the H2B tail domain is not additive; E, effect of six Lys → Gln substitutions on the apparent first-order rate constant k_NCP for modification at site 14.]

Although lysine acetylation is intimately linked to actively transcribed chromatin and is known to cause direct alterations in the physical behavior of the chromatin fiber (5), there is little understanding of the actual molecular mechanism(s) by which this modification alters tail-DNA interactions. Although it is often assumed that the basis of this effect is a general weakening of histone tail-DNA interactions, we found that mutations modeling single or multiple acetylation events do not result in a wholesale weakening or loss of tail binding affinity, consistent with earlier UV cross-linking experiments (37). Rather, our data suggest that acetylation causes distinct, localized changes in H2B tail interactions. For example, Lys → Gln mutations had no effect or only modest effects on K_eq^S6C and K_eq^T21C, representing interactions of both ends of the H2B tail, but did result in a significant weakening of interactions in the middle of the tail region, indicated by increases in K_eq^S14C. Indeed, our results suggest that at physiological ionic strengths acetylation operates like a "switch" such that in the unacetylated tail the region encompassing site 14 exhibits binding similar to site 21, whereas site 14 behaves like site 6 upon acetylation (Fig. 8). This switch may be related to acetylation-dependent alteration of the α-helical content of the tail domains, detected by spectroscopic investigations (19). Nevertheless, our data suggest that the direct effect of acetylation on chromatin structure derives from a much more subtle and specific alteration in tail interactions than has previously been assumed.
We find that the reaction of FM with cysteine residues within the H2B tail domain can be described by a pre-equilibrium kinetic model. Thus, k_21 > k_3', indicating that in regions encompassing sites 14 and 21, and in the range of physiological ionic strength, k_21 is more than ~1.7 s⁻¹ and, given K_eq (~0.03), k_12 is more than ~0.05 s⁻¹, corresponding to half-lives for the bound and unbound states of less than ~15 and ~0.4 s, respectively. Thus the tail equilibrates fairly rapidly in the context of the nucleosome, indicating there would be little or no kinetic restriction for factors designed to bind to specific sites within the tail domain. Importantly, however, our results imply that binding of such factors would be reduced 10-50-fold compared with binding affinities measured with free peptides representing the tail domains (58). Of course, such factors may also make productive interactions with sites elsewhere in the nucleosome, increasing affinities (59).

[Displaced Figure 6 legend: H2B tails lacking all positively charged residues exhibit salt-dependent binding within NCPs. A and B, K_eq versus [NaCl] for regions encompassing sites 6 and 14 within the H2B tail domain, respectively, for tails containing 0, 6, or 9 Lys → Gln substitutions.]
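The half-life figures quoted above follow directly from first-order kinetics, t_1/2 = ln 2 / k; a quick check using the stated lower bounds for k_12 and k_21:

```python
import math

k12 = 0.05  # s^-1, bound -> unbound transition (lower bound quoted in the text)
k21 = 1.7   # s^-1, unbound -> bound transition (lower bound quoted in the text)

print(f"K_eq = k12/k21 = {k12 / k21:.3f}")                     # ~0.03, as quoted
print(f"bound-state half-life   < {math.log(2) / k12:.1f} s")  # ~14 s
print(f"unbound-state half-life < {math.log(2) / k21:.2f} s")  # ~0.4 s
```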
Our work is in agreement with previous work by Dimitrov and co-workers (37), in which UV laser cross-linking revealed no difference in the extent of histone tail-DNA interactions in NCPs containing hypo- and hyper-acetylated nucleosomes. Indeed, we did not find a significant change in tail binding affinity when monitoring reactivity of a cysteine placed at residue 6 or 21, near either end of the tail domain. Nonetheless, we did detect an ~3-fold increase in K_eq at site 14 upon Lys → Gln substitution, indicating a loss of tail binding affinity in this central region. It is important to note, however, that the UV laser cross-linking of Dimitrov and co-workers (37) detected binding of bulk tails, whereas our experiment measures actual changes in affinities. Thus, although we detected a significant drop in affinity for the center of the tail domain upon introduction of mutations modeling acetylation, the actual fraction of tail domains in this region existing in the bound state would have decreased only from ~98 to ~93% in the range of 150 mM NaCl, with very little change in flanking regions. Assuming the effect of acetylation is similar in other tail domains, it is not surprising that UV laser cross-linking or other techniques sensitive only to the bound state would not detect an alteration in the fraction of tails bound to DNA because of acetylation.
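The percentages above follow from the relation between K_eq and the bound fraction, f_B = 1/(1 + K_eq), since K_eq = [U]/[B]; a quick check with approximate site-14 values consistent with the text (roughly 0.02 unmodified and 0.075 after the ~3-fold increase, both assumed here for illustration):

```python
def bound_fraction(keq):
    # With K_eq = [U]/[B], the fraction of tails in the bound state is 1/(1 + K_eq)
    return 1.0 / (1.0 + keq)

for label, keq in (("unmodified", 0.02), ("Lys->Gln, ~3-fold increase", 0.075)):
    print(f"{label}: {100 * bound_fraction(keq):.0f}% bound")
```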
Our data also indicate that acetylation events may not be additive in terms of the structural changes or alterations in tail binding induced by this modification. We observed that single Lys → Gln substitutions at lysine 15, 24, or 27 induced as much reduction in tail binding interactions as did multiple substitutions within the H2B tail domain. These results may provide a structural explanation for recent genetic results suggesting that three of four specific lysine acetylation events within the H4 tail domain are not unique in terms of their global effect on gene expression in yeast cells (60).
The salt dependence of histone tail binding observed here and in previous reports (15-18) indicates that a portion of the binding free energy of the H2B tail domain is provided by electrostatic interactions, presumably between basic residues and DNA. However, somewhat unexpectedly, our data indicate other significant contributions. For example, we find that loss of six of the nine positively charged lysines in the H2B tail did not lead to a complete abrogation of tail binding. Moreover, mutation of all nine lysines to glutamine did not result in further reductions in affinity at any of the sites probed (Fig. 6). Thus an H2B tail domain devoid of positively charged amino acid residues, except for the protonated N-terminal amino group, still exhibited tight binding to sites within NCPs. Moreover, there was remarkably little difference in tail binding affinity at all sites probed between tails with one or two Lys → Gln substitutions and those with nine Lys → Gln substitutions. Previous work with a free H4 tail domain showed that acetylation vastly reduced binding of an H4 tail peptide to DNA (15,36). Thus our results indicate that tail binding in the context of chromatin is significantly different from the binding of free peptides and that constituents other than lysine ε-amino groups participate in salt-sensitive interactions that contribute significantly to tail binding free energy. In addition, it is possible that tail-tail interactions compensate for loss of lysine charge and that glutamine amides (and the secondary amide groups of acetylated lysines) participate in productive (H-bonding) interactions within NCPs, as is typically observed in proteins. Indeed, lysine acetylation and glutamine substitution result in similar changes to K_eq measured in the H4 tail domain. Our results also suggest that unmodified lysine residues may not contribute as significantly to overall tail binding free energy as has been assumed, perhaps because of the severe entropic cost of restricting the conformationally flexible four-carbon side chain and ε-amino group to the vicinity of a negatively charged phosphate. In light of these questions, it will be interesting in future experiments to examine the effect of lysine-to-arginine substitutions within the H2B tail domain, because arginine has one less methylene unit and a more conformationally constrained charged moiety and thus might be expected to contribute more to the total binding free energy of the tail domain than lysine.
In summary, we have for the first time measured conformational equilibrium constants describing the binding affinity of a core histone tail domain. Measurements at specific sites within the tail domain over a range of salt concentrations indicate that the tail contains distinct structural domains and binds with sufficiently high affinity to cause significant reductions in the activity of tail-binding proteins relative to that deduced from peptide binding studies. Moreover, our results suggest that acetylation does not induce a general weakening of tail-DNA interactions, as is often assumed, but rather causes localized alterations in tail binding. These alterations may be related to specific structural changes in tail structure detected by spectroscopic techniques (19,57). We present evidence that site-specific acetylation events in the H2B tail domain are not additive and not unique with regard to the induced structural changes; many single events induce the same change in binding of the middle of the tail, whereas additional Lys → Gln changes do not induce a commensurate alteration in observed binding affinity. Our results with nucleosomes containing linker DNA and with nucleosome arrays suggest that the H2B tail binds with equivalent affinity regardless of the availability of linker DNA (52). It will be interesting to examine whether acetylation or the availability of linker DNA causes similar or distinct changes in the interactions of the other tail domains within the nucleosome, or whether acetylation within one tail affects binding of the other tails.
Field Traction Performance Test Analysis of Bionic Paddy Wheel and Vaned Wheel
In order to improve the traction performance of a micro-tiller wheel on paddy field soil, we extracted the surface curve of a cow's hoof and used the hoof as a bionic prototype to design a bionic paddy wheel. To verify the passability of the bionic paddy wheel in paddy soil, a wheel-soil test bench was built in an experimental field with a moisture content of 36%. The test results show that, under the same wheel load, the torque, drawbar pull, and slip ratio of the bionic paddy wheel and the conventional vaned wheel follow similar laws, while the torque and drawbar pull of the bionic paddy wheel are higher than those of the conventional vaned wheel at the same slip ratio. The maximum torque and drawbar pull of both wheels increase with increasing wheel load, and under the same wheel load the maximum torque and drawbar pull of the bionic paddy wheel are at least 22% higher than those of the conventional vaned wheel. Compared with the conventional vaned wheel, the bionic paddy wheel can therefore provide higher driving force and drawbar pull, which can improve the working efficiency of the vehicle in the paddy field.
Introduction
Jiangxi Province is an important commodity grain-producing area in China, known as the 'Jiangnan Granary'. In 2021, the province's grain area was 56.5926 million mu, total output was 21.923 billion kg, and annual output has remained above 21.5 billion kg for nine consecutive years. Jiangxi Province is located south of the Yangtze River at low latitude; its landforms are mainly mountains and hills, summarized in the saying 'six parts mountain, one part water, two parts field, and one part roads and homesteads', and it belongs to the typical red-and-yellow-soil low hills. The agricultural operating environment in these hilly and mountainous areas is complex, especially for cultivated grain land such as terraced rice fields, which are small (about 200 m²), widely separated, narrow, and long. Designing light, simple, advanced, and applicable equipment for hilly terraced rice operations has therefore become the preferred way to address this problem. Micro-tillage machines are lightweight, flexible to operate, highly adaptable, and cost-effective, and they are the main power machinery used for small-plot tillage. Combined with a rotary tiller, direct seeding machine, ditching machine, and other implements, they can carry out terraced rice tillage and orderly seedling planting. However, when the wheels of conventional micro-cultivators drive and operate on the surface of paddy fields, wheel slip and severe sinkage cause low adhesion and high driving resistance, leading to poor trafficability, low efficiency, high energy consumption, and even inability to drive. The design of a micro-cultivator wheel suitable for paddy field travel is therefore conducive to improving the working efficiency of the micro-cultivator and reducing wheel slip and driving resistance.
In recent years, in order to improve the trafficability of vehicles and robots on soft soil surfaces, scholars at home and abroad have studied the traction performance and sinkage mechanisms of different wheel-leg, walking-wheel, and convertible walking-wheel designs. Goldman et al. at the Georgia Institute of Technology designed a six-legged robot, SandBot, based on how lizards move across the desert, to improve its ability to cross soft sand [1]. Francisco et al. at the University of Surrey designed a walking wheel composed of five evenly distributed wheel legs; each leg carries a different type of foot-end structure to increase the contact area with the ground, giving strong passing ability [2]. The Whegs series of robots developed by Kathryn A. Daltorio et al. at Case Western Reserve University splits each wheel into a three-spoke walking wheel with cylindrical feet in contact with the ground, overcoming obstacles through changes in its flexible body [3]. Asguard, a five-spoke walking-wheel robot developed by the German Research Center for Artificial Intelligence, uses semi-cylindrical rubber feet to climb stairs [4]. Messor II, a wheel-legged robot developed by Dominik Belter et al. at Poznan University of Technology, has a hemispherical foot-end structure and can cross obstacles by establishing a foot-soil perception model [5]. The ANYmal quadruped robot was designed by Kolvenbach at ETH Zurich; to improve its stability, the foot structure is rectangular, and the robot can inspect underground pipe networks [6]. Chen pioneered research on the terramechanics of legged robots and put forward walking-wheel theories such as the 'half walking wheel', the 'convertible walking wheel', and the 'mechanically transmitted walking wheel' [7]. Professor Zhuang et al. analyzed in detail the mechanism by which camels cross sand and designed a camel-hoof-imitating tire to reduce ground contact pressure, restrain sand flow under the wheel, and improve propulsion and bearing capacity. Professor Gao Feng of Beijing University of Aeronautics and Astronautics proposed the concept of a deformable wheel on the basis of summarizing various walking-wheel motions; it can be converted between rimmed and rimless states to achieve both continuous rolling and walking [8]. Ding et al. [9] established a foot-ground interaction mechanical model based on terramechanics theory and verified the proposed model by experiments.
As a typical soft ground, paddy soil has high viscosity and relatively complex internal interactions. Its mechanical properties are sensitive to water content and compactness; even a slight change in water content leads to a large change in the mechanical properties of paddy soil. Natural selection allows some species to adapt to particular geographical environments. For example, camels and ostriches live in the desert [10,11]; their unique foot structures allow them to walk freely on sand. Cattle have long been used for paddy field work. The bottom of the cattle hoof is petal-shaped: the hoof enters the soil vertically as a whole with little resistance, and after entering the soil the claws separate, increasing the contact area and reducing the ground contact pressure [12-14].
Therefore, in this paper, the cow hoof was used as the bionic prototype. By analyzing the factors and structures responsible for the soil-gripping and anti-slip behavior of the cow hoof, reverse engineering software was used to extract the characteristic elements associated with high trafficability and low resistance, and these bionic elements were applied to the wheel design. A bionic paddy wheel suitable for soft ground such as paddy fields was obtained, and its traction performance in paddy field soil was compared with that of a conventional paddy wheel to verify the traction performance of the bionic paddy wheel.
Preparation of Bionic Structure
An adult healthy cattle hoof was selected as the research object, and a three-dimensional data point cloud of the whole hoof surface was captured with an ATOS Triple Scan three-dimensional optical scanner (precision 0.03 mm; GOM Metrology, Braunschweig, Germany). Stray points in the cloud were deleted, noise was removed, and defects were filled. After that, hole filling, feature removal, mesh doctoring, simplification, and relaxation were performed, and three-dimensional features such as the hoof flap gap and the dewclaw were removed, as shown in Figure 1a. According to the simplified morphology, the hoof is divided into two main parts, the buried surface and the unearthed surface. The bottom surface of the hoof was taken as a horizontal plane, and a three-dimensional coordinate system was established with the center of the maximum circumscribed circle of the hoof bottom as the origin, as shown in Figure 1b. The characteristic curve segment at the maximum outer diameter of the lower end of the hoof was extracted, and the Bionic Curve A and Bionic Curve B were selected. After scaling the characteristic curves, feature points 1 and 2 were connected by a straight line, as were feature points 3 and 4. The coordinate origin was then moved to the center of the closed curve, and the coordinate points of the curve segments were exported to MATLAB. The derived characteristic curve is symmetric about line A, as shown in Figure 1c. The bionic paddy wheel foot model was then established, as shown in Figure 1d.
The Bionic Curve A and Bionic Curve B in Figure 1c were curve-fitted, and the polynomial structure of the bionic curves is given in Formula (1). The coefficients of the two bionic curves are listed in Table 1; the fitting correlation coefficients are 0.998 and 0.999, with RMSE values of 0.174 and 2.51 × 10⁻³, respectively.

The raw material of the bionic structure is rubber block, which is easy to machine, has high compressive strength, resists deformation, and is lightweight and low-cost. With the wheel frame of the mini-cultivator as the carrier, eight L-shaped supporting plates were welded at equal intervals on the rim of the frame, mounting holes were machined in the supporting plates, and the bionic structures were bolted to the L-shaped supporting plates of the vaned wheel. Figure 2 shows the vaned wheel with its blade device and the bionic paddy wheel with the bionic structure. The maximum diameter and wheel width of the two wheels in Figure 2a,b are equal, 740 mm and 102 mm, respectively.
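As a rough sketch of the curve-fitting step behind Formula (1) and Table 1, the snippet below performs a polynomial fit in Python/NumPy rather than the authors' MATLAB workflow; the hoof-curve coordinates and the polynomial degree are placeholders, since neither is reproduced in the text, and the R² and RMSE computations mirror the statistics reported in Table 1.

```python
import numpy as np

# Placeholder (x, y) points standing in for the exported hoof characteristic curve
rng = np.random.default_rng(0)
x = np.linspace(-40.0, 40.0, 41)
y = 0.02 * x**2 - 1.5e-5 * x**4 + rng.normal(0.0, 0.1, x.size)

coeffs = np.polyfit(x, y, deg=4)   # assumed 4th-degree polynomial
y_fit = np.polyval(coeffs, x)

residuals = y - y_fit
rmse = np.sqrt(np.mean(residuals**2))
r2 = 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.3f}")
```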
Test Rig Construction
The test site was in the experimental field of Jiangxi Agricultural University. Steel pipe was used as the support frame of the track, and the slide rails were fixed at both ends by hinges so that their height and spacing could be adjusted. The total length and width of the track were 8.0 m and 1.2 m, respectively. The soil was first rotary tilled with a tillage device and then leveled with a scraper; the soil after scraping is shown in Figure 3a. A single-wheel test bench was installed on the horizontal track, comprising mainly a power device, a data acquisition device, and a load-applying device. Power is provided by a 1.5 kW servo motor, which transfers power to the wheel via a 1:50 reducer. The data acquisition device collects wheel torque, the actual and theoretical horizontal speeds of the wheel, and wheel sinkage, with system control and data acquisition implemented through Arduino 1.8. The load application device is a rocker-arm structure: the load on the wheel is changed by changing the counterweight at the rear of the rocker arm, and the drawbar pull is varied by dragging loads of different weights.
The soil water content was 36.98 g/(100 g), and its mechanical properties are shown in Table 2.
Experimental Design
The theoretical speed of the test wheel in this study was 0.36 m/s. The data acquisition device collects the actual and theoretical speeds of the wheel through encoders. The loads on the test wheels were 58.57 N, 84.57 N, and 107.4 N, controlled by adding or removing weights on the rocker arm. The drawbar load was varied by changing the number of steel tubes in the towed drum. At the beginning of the experiment, the soil was plowed by the tillage device and leveled with the scraper, and the tests were then carried out on the single-wheel test bench. Each wheel load was tested against five different drawbar loads, and each test was repeated twice.
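The slip ratio used throughout the results can be computed from the encoder readings as s = 1 − v_actual/v_theoretical; a minimal sketch with placeholder speeds (the 0.36 m/s theoretical speed is the value stated above):

```python
def slip_ratio(v_actual, v_theoretical=0.36):
    """Slip ratio s = 1 - v_actual / v_theoretical (speeds in m/s)."""
    return 1.0 - v_actual / v_theoretical

# Placeholder actual wheel speeds measured by the encoder (m/s)
for v in (0.30, 0.20, 0.10):
    print(f"v_actual = {v:.2f} m/s -> slip ratio = {slip_ratio(v):.2f}")
```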
Results and Discussion
Torque and drawbar pull are the main factors affecting a vehicle's traction and passability, and understanding their relationship with the slip ratio is helpful for studying the vehicle's passing performance. The bionic paddy wheel and the conventional vaned wheel of the tiller were tested in the field according to the method described above, and the torque and drawbar pull data obtained in the tests were compared, as shown in Figures 4 and 5.

Figure 4 shows the relationship between torque and slip ratio for the bionic paddy wheel and the conventional vaned wheel under different wheel loads. It can be seen from Figure 4 that the torque provided by the bionic paddy wheel is larger than that of the conventional vaned wheel under all wheel loads, and the larger torque can provide higher starting power. Under different wheel loads, the vaned wheel and the bionic paddy wheel follow a similar trend: torque increases with increasing slip ratio. When the wheel load is 84.57 N, the torque of the conventional vaned wheel begins to decrease once the slip ratio exceeds 0.6, which may be caused by experimental factors. At wheel loads of 58.57 N, 84.57 N, and 107.4 N, the maximum torque of the bionic paddy wheel is 38.63%, 31.39%, and 22.3% higher, respectively, than that of the conventional vaned wheel. The maximum torque of the bionic paddy wheel at a wheel load of 107.4 N is 50.8 N·m, which is 37.19% and 11.16% higher than at wheel loads of 58.57 N and 84.57 N, respectively.

Figure 5 shows the relationship between drawbar pull and slip ratio for the bionic paddy wheel and the conventional vaned wheel under different wheel loads. Under all wheel loads, the drawbar pull of both the vaned wheel and the bionic paddy wheel first increases and then levels off as the slip ratio increases. At wheel loads of 58.57 N, 84.57 N, and 107.4 N, the maximum drawbar pull of the bionic paddy wheel is 92.79 N, 105.85 N, and 117.845 N, which is 23.35%, 21.97%, and 22.3% higher, respectively, than that of the conventional vaned wheel. At a wheel load of 107.4 N, the maximum drawbar pull of the bionic paddy wheel is 26.99% and 11.33% higher than at loads of 58.57 N and 84.57 N, respectively. It can also be seen from Figure 5 that, under the same wheel load, the drawbar pull curve of the bionic paddy wheel always lies above that of the conventional vaned wheel, showing that the bionic paddy wheel provides greater hitch traction at any given slip ratio. At the same slip ratio, the drawbar pull of the bionic paddy wheel can be about 20% higher than that of the conventional vaned wheel, which means the bionic paddy wheel has better passing performance under the high-water-content conditions of paddy fields.
Hook Traction and Slip Ratio
The curves in Figure 5 are overlaid with fitting equations for the drawbar pull of the bionic paddy wheel and the conventional vaned wheel as a function of the slip ratio, fitted according to y = a × ln(x) + b, where x represents the slip ratio and y represents the drawbar pull of the wheel. Figure 5 shows that the fitted curve rises as the wheel load increases, so the coefficients a and b can be expressed as functions of the wheel load; this yields the relationship between hitch traction and slip ratio of the two wheels under different wheel loads, where x₁ represents the wheel load, x₂ the slip ratio, and y the drawbar pull.
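As a rough sketch of this two-stage fitting procedure (the slip/pull values below are placeholders, not the measured data from Figure 5), the logarithmic form can be fitted per wheel load and the resulting coefficients then regressed against the load:

```python
import numpy as np
from scipy.optimize import curve_fit

def drawbar_pull(x, a, b):
    """Fitted form used in the paper: y = a*ln(x) + b, with x the slip ratio."""
    return a * np.log(x) + b

# Placeholder (slip ratio, drawbar pull) points for one wheel load.
slip = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
pull = np.array([55.0, 74.0, 86.0, 94.0, 100.0, 105.0, 109.0])

(a, b), _ = curve_fit(drawbar_pull, slip, pull)
print(f"a = {a:.2f}, b = {b:.2f}")

# Repeating the fit at each wheel load gives a(load) and b(load); fitting
# those (e.g. linearly with np.polyfit) turns the coefficients into functions
# of the load, so y(load, slip) = a(load)*ln(slip) + b(load) covers all loads.
```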
Conclusions
(1) Under different loads on the wheel, the maximum torque and hook traction of both the bionic paddy wheel and the conventional vaned wheel increase with the load on the wheel. Under the same wheel load, the torque of the bionic paddy wheel increases with the slip ratio, and its drawbar pull increases gradually with the slip ratio and then tends to level off; the conventional vaned wheel shows the same behavior. The two wheels thus have similar patterns of change.
(2) The test results comparing the bionic paddy wheel and the conventional vaned wheel show that the torque and hook traction of the bionic paddy wheel are larger than those of the conventional vaned wheel at the same slip ratio. Under the same load on the wheel, the bionic paddy wheel has at least 22% higher torque and 22% higher traction than the conventional vaned wheel, indicating that the bionic paddy wheel can provide higher driving force and hook traction, and can thus improve the working efficiency of the vehicle in paddy fields. | 6,385.4 | 2022-11-03T00:00:00.000 | [
"Agricultural and Food Sciences",
"Engineering",
"Environmental Science"
] |
Inspection of ⁵⁶Fe γ-ray angular distributions as a function of incident neutron energy using optical model approaches
Neutron inelastic scattering cross sections measured directly through (n,n′) or deduced from γ-ray production cross sections following inelastic neutron scattering (n,n′γ) are a focus of basic and applied research at the University of Kentucky Accelerator Laboratory (www.pa.uky.edu/accelerator). For nuclear data applications, angle-integrated cross sections are desired over a wide range of fast neutron energies. Several days of experimental beam time are required for a data set at each incident neutron energy, which limits the number of angular distributions that can be measured in a reasonable amount of time. Approximations can be employed to generate cross sections with a higher energy resolution, since at 125° the a₂P₂ term of the Legendre expansion is identically zero and the a₄P₄ term is assumed to be very small. Provided this assumption is true, a single measurement at 125° would produce the γ-ray production cross section. This project tests these assumptions and energy dependences using the codes CINDY/SCAT and TALYS/ECIS06/SCAT. It is found that care must be taken when interpreting γ-ray excitation functions as cross sections when the incident neutron energy is < 1000 keV above threshold or before the onset of feeding.
Introduction
Neutron-induced reactions are the main research activity at the University of Kentucky Accelerator Laboratory (UKAL). Neutrons are produced by impinging a pulsed beam of protons or deuterons into a gas cell containing either tritium or deuterium. Samples are hung ∼ 7 cm from the gas cell. Scattered neutrons or γ rays are registered in appropriate detectors mounted inside collimated shielding on a moveable carriage. The time-of-flight (TOF) technique allows for measurement of the scattered neutron energies and for background suppression. Three types of measurements are typically performed and are illustrated in Fig. 1.
Inelastic neutron (n,n′) scattering angular distributions are measured for scattering angles of 30° to 155° at fixed incident neutron energies (E_n). There are numerous advantages to using this technique to obtain inelastic cross sections. The incident beam contains only a single neutron energy, and therefore the inelastic cross section to a given final state is measured directly without feeding complications. The neutron angular distribution contains details about the reaction mechanism not discoverable with angle-integrated techniques. There are several disadvantages to the (n,n′) technique. Usually only the first few excited states occur as isolated peaks in the spectrum, with the higher-lying final levels unresolved. The count rate is low for neutron detection with TOF, so individual spectra take 4-8 hours each. Three to four days are required to measure each angular distribution, and this limits the number that can be completed in the allotted beam time.
Gamma-ray (n,n′γ) angular distributions are measured for scattering angles of 30° to 155° at fixed incident neutron energies (E_n). There are also numerous advantages to using this technique to determine inelastic cross sections. The incident beam contains only a single neutron energy, and while the raw yield of a particular γ ray may be impacted by feeding from higher-lying states, the feeding contribution can be subtracted since it is directly measured. The energy resolution of HPGe detectors is excellent, so that cross sections can be determined to all excited final states. In addition, the γ-ray data contain nuclear structure information such as spins and parities, branching ratios, mixing ratios, lifetimes, and electromagnetic transition rates. Two to three days are required to measure a high-quality angular distribution, limiting the number that can be completed in the allotted beam time.
The third technique to determine inelastic cross sections is to measure the γ-ray excitation function. The detector is placed at 125° and spectra are taken as the incident neutron energy is stepped upward. Again, only one incident neutron energy is present and any feeding is observed and easy to subtract. The HPGe has excellent energy resolution and spectra are reasonably quick to obtain. Angle-integrated cross sections for the complete region of interest can be obtained with a week of beam time. This technique requires that the a₄P₄ term in each γ ray's angular distribution at 125° be negligible. This paper examines this assumption.
Calculations
Gamma-ray angular distributions involving multipolarities L = 1 and 2 are written as a Legendre polynomial expansion,

W(θ) = A₀[1 + a₂P₂(cos θ) + a₄P₄(cos θ)],

where the P_L are the Legendre polynomials. Higher-order terms do not occur for dipole and quadrupole transitions. The quantity 4πA₀ is the angle-integrated γ-ray production cross section when properly normalized. Table 1 presents the maximum allowable a₄ coefficients for various limits of error. In a situation where the discrepancy is ∼5%, the error is comparable to the uncertainties arising from multiple-scattering corrections or the cross-section normalization and will have an impact upon the accuracy of the reported cross section. If the discrepancy is ≤ 2%, the error is minor and does not significantly impact reported cross-section values.
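The special role of 125° follows from the zero of P₂: P₂(cos θ) = (3cos²θ − 1)/2 vanishes at cos θ = −1/√3, i.e. θ ≈ 125.26°. A minimal numerical check (illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial.legendre import legval

theta = np.radians(125.26)
x = np.cos(theta)
# Coefficient vectors pick out P2 and P4 individually.
P2 = legval(x, [0, 0, 1])
P4 = legval(x, [0, 0, 0, 0, 1])
print(f"P2(cos 125.26 deg) = {P2:+.5f}")   # ~0: the a2 term drops out
print(f"P4(cos 125.26 deg) = {P4:+.5f}")   # ~ -0.39: the a4 term survives

# With W(theta) = A0*(1 + a2*P2 + a4*P4), at 125.26 deg one has
# W/A0 - 1 = a4*P4, so ignoring a4 biases the inferred cross section
# by roughly 0.39*a4.
```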
The Legendre coefficients are derived with angular momentum algebra treatments such as that presented in Ferguson [1]. The a₂ and a₄ coefficients are controlled by the upper and lower state spins and the substate population distribution of the upper state. The substate population distribution is referred to as the alignment and is often considered to be a Gaussian distribution about m_J = 0, with spreading width σ. The upper level's alignment is determined by entrance and exit channel transmission coefficients (and angular momentum coupling).
In this investigation, we explore the σ spreading width and a₄ values using: 1) the der Mateosian and Sunyar [2] and Yamazaki [3] (MSY) Atomic Data and Nuclear Data Tables, 2) the Hauser-Feshbach code CINDY [4,5] with either an optical model parameter set or T_lj transmission coefficients from TALYS [6], and 3) experimental measurements of γ-ray angular distributions.
MSY tables
The tables of der Mateosian and Sunyar [2] and Yamazaki [3] (MSY) may be used to 'fit' γ-ray angular distributions if computer codes are not available to perform the angular momentum algebra. This is an empirical technique, in which decays from known levels are used to judge the appropriate value of σ. This σ is assumed valid for the alignment of nearby states. Table 2 provides an example for 2⁺ → 0⁺ decays. We see from Table 2 that the 5% discrepancy limit is reached when σ ∼ 1.1 and the 2% discrepancy limit is reached when σ ∼ 1.4.
CINDY
To understand when certain values of σ are expected in actual excitation function measurements, we utilize the code CINDY [4,5] to predict alignments and a₂ and a₄ coefficients for the first excited state of ⁵⁶Fe as a function of incident neutron energy. CINDY performs a Hauser-Feshbach reaction model calculation and handles the angular momentum algebra. One must provide either optical model potential parameters or transmission coefficients as a function of neutron energy. We have examined both approaches. Table 3 provides the alignment and Legendre coefficients obtained when a simplified version of the Koning & Delaroche [7] neutron optical model parameters are used. At a fundamental level, CINDY calculates transmission coefficients using the subroutine SCAT [8], the same base routine employed in TALYS/ECIS, and performs Moldauer fluctuation corrections. The optical model treatment in CINDY is not sophisticated by today's standards and its cross-section values are not the best, but the code does handle the angular momentum calculations rather well. The σ was estimated by comparing the m_J = 0 and m_J = 1 substate population probabilities using P(m_J) ∼ exp(−m_J²/2σ²). The substate population distributions are not perfectly Gaussian at > 2 MeV above threshold.
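The σ estimate described above follows directly from the Gaussian ansatz: with P(m_J) ∝ exp(−m_J²/2σ²), the ratio of the m_J = 0 and m_J = 1 populations gives σ = [2 ln(P₀/P₁)]^(−1/2). A minimal sketch (population values are illustrative, not CINDY output):

```python
import math

def sigma_from_populations(p0, p1):
    """Spreading width from the m_J = 0 and m_J = 1 substate populations,
    assuming P(m_J) ~ exp(-m_J**2 / (2 * sigma**2))."""
    return math.sqrt(1.0 / (2.0 * math.log(p0 / p1)))

# Illustrative populations: strong alignment -> small sigma
print(sigma_from_populations(0.60, 0.20))  # ~0.67
print(sigma_from_populations(0.40, 0.30))  # ~1.32, approaching the 2% limit
```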
In Table 3, we observe that even at the threshold for exciting the 2⁺₁ level, the substates are not fully aligned. The 5% discrepancy limit is reached at E_n ∼ 1.8 MeV, or ∼900 keV above the threshold for exciting the 2⁺₁. The 2% discrepancy limit is reached at E_n ∼ 5.0 MeV, or ∼4.0 MeV above the threshold. Table 4 provides the alignment and Legendre coefficients obtained from TALYS. TALYS is a full-featured code which uses a sophisticated optical model treatment, incorporates all open channels, and can apply numerous refinements to obtain higher-quality transmission coefficients. TALYS produces quality values for cross sections but does not calculate γ-ray angular distributions. Hence the T±_lj transmission coefficients generated by TALYS were inserted into CINDY to manage the angular momentum algebra.
From Table 4 we observe that the 5% discrepancy limit is reached at E_n ∼ 1.8 MeV, or ∼900 keV above the threshold for exciting the 2⁺₁. The 2% discrepancy limit is reached at E_n ∼ 2.8 MeV, or ∼2000 keV above the threshold. Tables 3 and 4 do not include the additional spreading of σ that would result from γ-ray feeding from higher-lying states. This additional effect becomes significant a couple of MeV above the 2⁺₁ level. Attempts to take this into account with calculations are too labor-intensive and would be too ambiguous because of uncertainties in the higher-state alignments and multipole mixing ratios for their decays. To gauge the experimental reality, we compile information from numerous 2⁺ᵢ → 0⁺ angular distribution measurements at the UKAL on many nuclei over the last 30 years. Figure 3 displays the measured a₄ coefficients as a function of energy above the threshold for the level. The trendline indicates the < 5% limit is reached about 1000 keV above threshold and the < 2% limit ∼2000 keV above threshold. This observation is consistent with our CINDY predictions for the ⁵⁶Fe 2⁺₁, and suggests the same will be true for all 2⁺ → 0⁺ transitions.
Summary
Transmission coefficients for l = 0, 1, 2 channels have significant differences depending on the reaction model treatment used. This variation impacts the overall scale of the cross sections but not so much the γ -ray angular distributions.
Ignoring the a₄P₄ term in the angular distribution creates serious discrepancies for 2⁺ᵢ → 0⁺ transitions before the onset of feeding. The a₄P₄ discrepancy is not as important for other spin states, J = 3, 4, ..., because the substate spreadings σ are wider and therefore large a₄ values tend not to occur.
Substate spreading differs slightly according to the choice of optical model treatment, but the impact upon the γ -ray angular distribution is not large.
Care must be taken when interpreting γ -ray excitation functions as cross sections when 1) the incident neutron energy is < 1000 keV above threshold or 2) before the onset of feeding. | 2,552.2 | 2017-09-01T00:00:00.000 | [
"Physics"
] |
Discrete symmetry tests using hyperon-antihyperon pairs
The BESIII experiment has accumulated millions of spin-entangled hyperon-antihyperon pairs from the decays of the J/ψ and ψ(2S) mesons produced in electron-positron collisions. Using multi-dimensional methods to analyze the angular distributions in the sequential decays, the hyperon and antihyperon decay amplitudes are studied and tests of the combined charge-conjugation-parity (CP) symmetry are performed. Implications of the recent results using J/ψ decays into ΛΛ̄ and Ξ⁻Ξ̄⁺ are presented. For the cascade decay chain, the exclusive measurement allows for three independent CP tests and the first direct determination of the weak phases.
Introduction
The standard model of elementary particles explains most subatomic phenomena, but it leaves open questions like the unification with the gravitational force and the origin of the observed dominance of matter over antimatter in the universe. Any candidate for the underlying fundamental theory has to address these questions. A clue to how to construct such a theory might be found by scrutinizing the cornerstone symmetries of the standard model. In addition to the natural symmetries like space-time translations and space rotations, which lead to four-momentum and angular momentum conservation, there are discrete symmetries related to parity (P), charge conjugation (C) and time reversal (T) transformations. Parity is the spatial inversion which reverses the directions of all particles. Charge conjugation is the swap between all particles and antiparticles in a system. Separately, the C and P symmetries are maximally violated in the weak interactions, but their combination, the CP symmetry, is nearly preserved, and the small observed violation is due to quantum interference effects only. Quantum interference can also enhance the signal of new interactions. For example, given the amplitude of a known process M = |M| exp(iδ) and a new-process amplitude m = |m| exp(iξ), the probability of the combined process is |M + m|² = |M|² + 2|M||m| cos(δ − ξ) + |m|². If |m|² ≪ |M|², the contribution of the new process is dominated by the interference term, which is linear in |m|.
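The linear sensitivity to a small amplitude can be made concrete with toy numbers (illustrative only):

```python
import cmath
import math

delta, xi = 0.5, 1.2                 # toy strong and new-physics phases
M = 1.0 * cmath.exp(1j * delta)      # known amplitude
m = 0.01 * cmath.exp(1j * xi)        # small new amplitude, |m|^2 << |M|^2

rate = abs(M + m) ** 2
interference = 2 * abs(M) * abs(m) * math.cos(delta - xi)
# |m|^2 alone shifts the rate by only 1e-4; the interference term
# contributes ~0.02*cos(-0.7) ~ 1.5e-2, i.e. linear in |m|.
print(rate, 1 + interference + abs(m) ** 2)   # the two expressions agree
```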
Here I will discuss direct CP violation, which requires the amplitudes for a process and the CP-transformed process to have different complex phases. For example, if the amplitude for a particle decay is A = |A| exp(iξ + iδ), then the amplitude for the CP-transformed process is Ā = |A| exp(−iξ + iδ), where ξ is the CP-odd phase that changes sign under the CP transformation. In order to observe an effect due to the CP-odd phases, an interference with other amplitudes contributing to the process is needed. Since the CP-odd phases are small for the weak transitions of strange quarks, a significant contribution might be due to a new physics (NP) effect. One process where direct CP violation was observed is the combined weak and strong decay of the K⁰ meson into a pion pair, K⁰ → π⁺π⁻. There are two final states that have distinct properties with respect to the strong interactions. The initial weak transitions can proceed to one of the two states and are represented by the amplitudes A₀ and A₂, where |A₂/A₀| ≈ 0.05. The strong interaction of the pions is described by two strong phases δ₀ and δ₂. The complete amplitude for the K⁰ → π⁺π⁻ process can then be written as

A = A₀ exp(iδ₀ + iξ₀) + A₂ exp(iδ₂ + iξ₂).

The amplitude Ā of the CP-transformed process K̄⁰ → π⁻π⁺ differs only by the signs of the CP-odd phases ξ₀ and ξ₂. The decay rates |A|² and |Ā|² can be different if CP is violated. The direct CP violation observable Re(ϵ′) is expressed in terms of these amplitudes [1,2]; its measured value is (3.7 ± 0.5) × 10⁻⁶ [3], corresponding to the weak-phase difference ξ₀ − ξ₂ ≈ 10⁻⁴ rad. In the standard model, the contributions to the CP-odd phases ξ₀,₂ are given by loops that involve all three quark generations, as shown in the diagrams in Fig. 1. An order-of-magnitude estimate for these contributions is determined from the Cabibbo-Kobayashi-Maskawa matrix [4,5] and can be expressed by the product of the Wolfenstein parameters [6] as λ⁴A²η ≈ −6 × 10⁻⁴ [3]. Recent standard model predictions for CP violation in kaon decays that include hadronic effects are given in the framework of low-energy effective field theory [7] and lattice quantum chromodynamics [8,9]. The agreement with the Re(ϵ′) measurement is satisfactory, but due to large uncertainties there is room for a new physics contribution.
Baryon two-body weak decays
A similar mechanism of direct CP violation is possible in the main decay modes of baryons with one or more strange quarks: hyperons. The spin of a baryon provides an additional degree of freedom that can be used for more sensitive CP tests. A quantum state of a spin-1/2 baryon B can be described by the density matrix in the Pauli basis,

ρ_B = ½ (σ₀ + P_B · σ),

where σ₀ is the 2 × 2 unit matrix and σ := (σ₁, σ₂, σ₃) represents the spin x, y, z projections in the baryon rest frame. The polarization vector P_B describes the preferred spin direction for an ensemble of B baryons. Consider the B → bπ transition between two spin-1/2 baryons with positive internal parities where a negative-parity pion is emitted. Examples of such processes are the decays Ξ⁻ → Λπ⁻ and Λ → pπ⁻. The baryon b can have the same or opposite spin direction as the baryon B, and angular momentum conservation requires an s- or p-wave for the b-π system. The parity of the final state is negative for the s-wave and positive for the p-wave. Both are allowed, since parity is not conserved in weak decays. The decay amplitude is given by the transition operator

T = S + P σ · n̂,

where n̂ is the direction of the b baryon in the B-baryon rest frame. The complex parameters S and P can be represented as S = |S| exp(iξ_S + iδ_S) and P = |P| exp(iξ_P + iδ_P). The strong interaction of the b-π system is given by the δ_P and δ_S phases, and the CP-odd phases are ξ_P and ξ_S. The amplitude ratios |P/S| for Λ → pπ⁻ and Ξ⁻ → Λπ⁻ are 0.442(4) and 0.188(2), respectively [10]. Since only the angular distributions will be discussed, the decay probability Γ ∝ |S|² + |P|² is normalized to |S|² + |P|² = 1. The real and imaginary parts of the amplitude interference term are

α = 2 Re(S*P),   β = 2 Im(S*P).

The real part, given by the α parameter, can be determined from the angular distribution of the baryon b when the baryon B has a known non-zero polarization, or by measuring the polarization of the daughter baryon. For example, the proton angular distribution in the Λ → pπ⁻ decay is given as

dN/dΩ ∝ 1 + α_Λ P_Λ · n̂,

where P_Λ is the Λ polarization vector. Measurement of the imaginary part of the interference term, given by the parameter β, requires that the polarization of both the baryon B and the daughter baryon be determined. For the decay Ξ⁻ → Λπ⁻, where the cascade is polarized, the β_Ξ parameter can be determined using the subsequent Λ → pπ⁻ decay, which acts as a Λ polarimeter. The relation between the initial Ξ⁻ polarization P_Ξ and the daughter Λ polarization P_Λ is given by the Lee-Yang formula [11],

P_Λ = [(α_Ξ + P_Ξ · n̂) n̂ + β_Ξ (P_Ξ × n̂) + γ_Ξ n̂ × (P_Ξ × n̂)] / (1 + α_Ξ P_Ξ · n̂),

where the parameter γ = |S|² − |P|². If the ẑ axis in the Ξ⁻ rest frame is defined along the n̂ direction, the formula implies that the Λ from the decay of an unpolarized Ξ⁻ has the longitudinal polarization component P^z_Λ = α_Ξ. Using a polar-angle parameterization such that β = √(1 − α²) sin ϕ and γ = √(1 − α²) cos ϕ, the parameter ϕ has the interpretation of the rotation angle between the Ξ and Λ polarization vectors in the transverse x-y plane.
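A small numeric sketch of these relations (the amplitude values are illustrative, not the fitted BESIII parameters):

```python
import cmath
import math

def decay_parameters(S, P):
    """alpha, beta, gamma from the s- and p-wave amplitudes, normalized so
    that |S|^2 + |P|^2 = 1 (only angular distributions are considered)."""
    norm = math.sqrt(abs(S) ** 2 + abs(P) ** 2)
    S, P = S / norm, P / norm
    alpha = 2 * (S.conjugate() * P).real
    beta = 2 * (S.conjugate() * P).imag
    gamma = abs(S) ** 2 - abs(P) ** 2
    phi = math.atan2(beta, gamma)   # beta = sqrt(1-a^2) sin(phi), gamma = sqrt(1-a^2) cos(phi)
    return alpha, beta, gamma, phi

# Illustrative amplitudes for a Xi- -> Lambda pi- like decay:
# |P/S| ~ 0.188 with toy strong/weak phases (not measured values).
S = cmath.exp(1j * 0.10)
P = 0.188 * cmath.exp(1j * 0.15)
print(decay_parameters(S, P))
# alpha^2 + beta^2 + gamma^2 = 1 holds by construction.
```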
CP violation in baryon decays
For the C-transformed baryon decay process B̄ → b̄π, the amplitude has the same form,

T̄ = S̄ + P̄ σ · n̂,

where the complex parameters S̄ and P̄ are obtained from S and P by reversing the signs of the weak CP-odd phases ξ_S and ξ_P. Since the product of the P-odd and P-even terms changes sign, the decay parameters are

ᾱ = −2 Re(S̄*P̄),   β̄ = −2 Im(S̄*P̄),

so that ᾱ = −α and β̄ = −β if CP is conserved. The weak-phase difference ξ_P − ξ_S can be determined using two independent experimental observables,

A_CP = (α + ᾱ)/(α − ᾱ)   and   B_CP = (β + β̄)/(α − ᾱ).
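A sketch of how these observables pick out the weak phase, using toy amplitudes and the sign conventions of the reconstruction above (an assumption, since the source's equations are elided):

```python
import cmath

def acp_bcp(mod_S, mod_P, delta_S, delta_P, xi_S, xi_P):
    """A_CP and B_CP from amplitude moduli, strong phases (delta) and weak
    CP-odd phases (xi); the antibaryon parameters follow from xi -> -xi
    together with an overall sign flip of alpha and beta."""
    def ab(sgn):
        S = mod_S * cmath.exp(1j * (sgn * xi_S + delta_S))
        P = mod_P * cmath.exp(1j * (sgn * xi_P + delta_P))
        n = abs(S) ** 2 + abs(P) ** 2
        z = S.conjugate() * P
        return 2 * z.real / n, 2 * z.imag / n
    alpha, beta = ab(+1)
    a_bar, b_bar = (-v for v in ab(-1))
    A_CP = (alpha + a_bar) / (alpha - a_bar)
    B_CP = (beta + b_bar) / (alpha - a_bar)
    return A_CP, B_CP

# Toy weak-phase difference xi_P - xi_S = 1e-3 rad, small strong phases:
# one recovers A_CP ~ -tan(dP-dS)*tan(xiP-xiS) and B_CP ~ tan(xiP-xiS).
print(acp_bcp(1.0, 0.188, 0.0, -0.04, 0.0, 1e-3))
```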
Spin entangled baryon-antibaryon systems
A novel method to study hyperon decays is to use the entangled baryon-antibaryon pairs from J/ψ resonance decays. In general, the spin state of a baryon-antibaryon system can be represented as

ρ_BB̄ = ¼ Σ_{μ,ν=0..3} C_μν σ_μ^(B) ⊗ σ_ν^(B̄),

where the Pauli operators σ_μ^(B) and σ_ν^(B̄) act in the rest frames of the baryon and antibaryon, respectively. The coefficients of the spin correlation-polarization matrix C_μν depend on the pair production process. Two reactions provide the best sensitivity for CP tests in hyperon decays: e⁺e⁻ → J/ψ → ΛΛ̄ [12,13] and e⁺e⁻ → J/ψ → ΞΞ̄ [14]. At the BESIII experiment, for every million produced J/ψ resonances, 320 and 56 fully reconstructed ΛΛ̄ and ΞΞ̄ pairs are selected, respectively. With 10¹⁰ J/ψ mesons available at BESIII, precision measurements of the hyperon decay parameters are possible. The elements of the C_μν matrix for the e⁺e⁻ → BB̄ processes are known functions of the baryon B production angle θ and depend on two parameters that need to be determined from data [15,16]. The baryons can be polarized in the direction perpendicular to the reaction plane, given by the ŷ unit vector; the polarization vector component P_y(θ) and the spin correlation terms C_ij(θ), i, j = x, y, z, are expressed in terms of these production parameters. The average baryon polarizations |P_y| in the e⁺e⁻ → J/ψ → ΛΛ̄ and e⁺e⁻ → J/ψ → ΞΞ̄ reactions are 18% and 23%, respectively. The two production-reaction parameters, the decay parameters α_Λ, α_Ξ and β_Ξ, as well as the CP-violating variables A^Λ_CP, A^Ξ_CP and B^Ξ_CP, can be determined using an unbinned maximum likelihood fit to the measured angular distributions. The statistical uncertainty of the observable A^Λ_CP in the process e⁺e⁻ → J/ψ → ΛΛ̄ is inversely proportional to the Λ polarization [10]. The weak-phase difference for the Ξ baryon decay is directly given by the measurement of the B^Ξ_CP observable in the process e⁺e⁻ → J/ψ → ΞΞ̄. The sensitivity of such a measurement depends on the average squared Ξ polarization and the ΞΞ̄ spin correlation terms [10].

Figure 2. Summary of the results for the weak phases (ξ_P − ξ_S)_Ξ and (ξ_P − ξ_S)_Λ. The present BESIII results are given by the blue rectangle [13,14]. The red horizontal band is the HyperCP result [17]. The brown rectangle is the projection of the statistical uncertainties for the complete BESIII data set.
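As a minimal illustration of this density-matrix construction (the C values below are toy numbers, not the measured production parameters):

```python
import numpy as np

# Pauli basis sigma_0..sigma_3 in each baryon's rest frame.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho_pair(C):
    """Spin density matrix of the baryon-antibaryon pair:
    rho = (1/4) * sum_{mu,nu} C[mu,nu] * sigma_mu (x) sigma_nu."""
    return 0.25 * sum(C[m, n] * np.kron(sigma[m], sigma[n])
                      for m in range(4) for n in range(4))

# Toy correlation/polarization matrix at one production angle:
C = np.zeros((4, 4))
C[0, 0] = 1.0                        # normalization
C[0, 2] = C[2, 0] = 0.2              # transverse polarization P_y
C[1, 1], C[2, 2], C[3, 3] = 0.4, 0.2, -0.5   # spin correlations (toy)

rho = rho_pair(C)
print(np.trace(rho).real)            # = 1 for a properly normalized state
```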
Weak phases
The goal of the CP tests is to determine the weak-phase differences for the two decays. The BESIII results, based on 1.3 × 10⁹ and 10¹⁰ J/ψ data for (ξ_P − ξ_S)_Ξ [14] and (ξ_P − ξ_S)_Λ [13], respectively, are presented in Fig. 2 by the blue rectangle. The red band represents the HyperCP experiment result on A^Λ_CP + A^Ξ_CP [17]. Interpretation of this measurement requires the value of tan(δ_P − δ_S)_Ξ, which is poorly known. The projection of the statistical uncertainties of the analysis using the full BESIII data set for e⁺e⁻ → J/ψ → ΞΞ̄ is given by the brown rectangle. The results should be compared to the standard model predictions of (ξ_P − ξ_S)_Λ = (−0.2 ± 2.2) × 10⁻⁴ and (ξ_P − ξ_S)_Ξ = (−2.1 ± 1.7) × 10⁻⁴. In the case of hyperon decays, the standard model contribution is dominated by the QCD-penguin contributions shown in Fig. 1(a). The results on the weak phases in kaon and hyperon decays can be combined in order to search for deviations from the standard model. The present kaon data imply the limits |ξ_P − ξ_S|^Λ_NP ≤ 5.3 × 10⁻³ and |ξ_P − ξ_S|^Ξ_NP ≤ 3.7 × 10⁻³ [18]. Clearly, hyperon CP-violation measurements with much improved precision will provide an independent constraint on the NP contributions in the strange-quark sector. However, a lot also remains to be done on the theory side, as the present predictions suffer from considerable uncertainties. It is hoped that lattice analyses [19] could help solve this problem in the future. | 3,101 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
A New Hybrid Fault Tolerance Approach for Internet of Things
According to the Distributed Management Task Force (DMTF), the management software in the Internet of things (IoT) should have five abilities: Fault Tolerance, Configuration, Accounting, Performance, and Security. Given the importance of IoT management and fault tolerance capacity, this paper introduces a new fault tolerance architecture. The proposed hybrid architecture uses all of the reactive and proactive policies simultaneously in its structure. Another objective of the current paper was to develop a measurement indicator to measure the fault tolerance capacity of different architectures. The CloudSim simulator has been used to evaluate and compare the proposed architecture. In addition to CloudSim, another simulator based on the Pegasus Workflow Management System (WMS) was implemented in order to validate the architecture proposed in this article. Finally, fuzzy inference systems were designed in a third step to model and evaluate the fault tolerance in various architectures. Based on the results, the positive effect of using various combined reactive and proactive policies in increasing the fault tolerance of the proposed architecture is prominently evident and confirmed.
Introduction
The evolution and transformation of the Internet have transitioned from the Internet of content to the Internet of services and the Internet of people, and today there is the Internet of things. The Internet of things, hereinafter referred to as IoT, consists of a series of smart sensors which, directly and without human intervention, work together in new kinds of applications. It is obvious that the management and traditional architecture of the Internet must change. For example, the address space must move from IPv4 to IPv6. The first and most important requirement for upgrading the management of the IoT is a layered and flexible architecture. Many models have been proposed for the IoT architecture. A basic model is a three-layered architecture, formed from the sensor, network, and process layers (see, e.g., [1]).
The explosive growth of smart objects and their dependence on wireless communication technologies make the Internet more vulnerable to faults. These faults may arise for different reasons, including the cyber-attacks referred to in [2], and they may pose daunting challenges to experts as it becomes more important to manage the participating components in the IoT. The Distributed Management Task Force, briefly referred to as DMTF, has announced that cloud management software should have the FCAPS abilities. The first character, F, stands for fault tolerance; in other words, the first feature of the management software should be fault tolerance. Other management capabilities can be considered if the fault tolerance feature is present; in its absence, the other features are not important and provide no management ability. Fault tolerance refers to providing an uninterrupted service, even in the presence of a fault in the system. It is quite clear that, if we want to review and implement management in the IoT layered architecture, the primary focus should be on its first feature, i.e., fault tolerance.
Different methods have been introduced to increase fault tolerance in the IoT. For example, in [3], cluster head (CH) nodes are virtualized, considering that CH nodes have the role of forwarding collected data in IoT applications; the tolerance of CH node failures in the network is thereby increased. Furthermore, in [4], the virtualization technique is used in wireless sensor networks due to the growth of IoT services. When a fault develops in wireless sensor network communications, it has a significant impact on the many virtual networks running IoT services. The framework proposed in [4] optimizes fault tolerance in virtualized wireless sensor networks, with a focus on heterogeneous networks for service-based IoT applications. Beyond such individual methods, the main techniques for increasing fault tolerance can be divided into three broad categories. The first group includes redundancy techniques, which can be implemented as hardware redundancy, software redundancy, or time redundancy. The second category includes load balancing techniques, which can also be implemented in hardware, in software, or in the network. Finally, the third group of techniques to increase the fault tolerance (FT) capability relates to the use of different policies, which depend on the environment in which they are implemented. For example, two types of policies are used in cloud computing environments: proactive policies and reactive policies. In proactive policies, the fault is not allowed to occur: the design should be such that the fault is forecast before it is created, so that it can be prevented. These policies therefore cover two phases, fault forecasting and fault prevention. In reactive policies, the fault tolerance operation happens after the occurrence of the fault, in the form of fault detection, fault isolation, fault diagnosis, and fault recovery. It should be noted that using each of these methods to increase the FT capability imposes overhead costs on the system, but the highest performance is achieved when the reactive and proactive policies are used and implemented simultaneously.
The aim of the present study was to evaluate the management of fault tolerance in the IoT communication platform, i.e., the second layer of its architecture. In this regard, the analysis carried out in [5] was used. Subsequently, a new architecture was proposed in which the maximum coverage of the various FT phases is provided. The simultaneous benefit from the reactive and proactive policies achieves the highest possible performance in the output.
The Internet has created extensive changes in industry and business models. The industrial Internet is the result of combining the Internet with big data, artificial intelligence, and the global economy. In the industrial Internet, a developed digital channel is responsible for delivering information based on the latest technologies, so that intelligent decisions can be made in real time to enhance efficiency. To this end, a real-world platform is needed to realize rapid integration and respond to the market as fast as possible. Given that the architecture proposed in this article is mainly applicable to real-time systems, it can support this rapid integration and market response while using resources efficiently. Hence, it can have a wide range of applications in industrial Internet platforms and IIoT architectures.
The contributions of this paper are as follows:
• We offer a hybrid modern architecture that simultaneously uses proactive and reactive policies.
• Our proposed architecture uses all three types of reactive policies at the same time.
• Both proactive policies are implemented at the same time in our proposed architecture structure.
• Maximum use of different fault detection/recovery methods is also considered in our proposed architecture.
• In addition to the simulations that were carried out, the design and implementation of scientific workflows in the past architectures and our proposed architecture are among the most important achievements and innovations of this study.
The remainder of the article is divided into eight sections. The related previous architectures are presented in Section 2. In Section 3, the proposed architecture is introduced and described. In Section 4, the CloudSim simulator is used to simulate the proposed architecture and compare its performance with previous architectures. In Section 5, scientific workflows are introduced and the proposed architecture is simulated. In Section 6, the evaluation systems in fuzzy logic are designed and implemented, and the analysis of the results is discussed. In Section 7, optimal decision-making is discussed. In Section 8, the conclusions and related ideas for future work are expressed.
Related Works
As expressed in [5], the fault-tolerant architectures of cloud computing are divided into two general categories according to the policies they use. The first group is formed by proactive architectures and the second group includes reactive architectures. As expressed in [6-8], the preemptive migration and self-healing policies are used in proactive architectures. The Checkpoint/Restart, Replication, and Job Migration policies are used in reactive architectures. The Map-Reduce and FT-cloud architectures are examples of proactive architectures. Map-Reduce, introduced in [9-13], uses both proactive policies in its structure. FT-cloud, described in [14], uses only the self-healing policy.
The FTWS, LLFT, FTM-2, FTM, MPI, Gossip, BFT-Cloud, Haproxy, PLR, AFTRC, Vega Warden, MagiCube, and Candy architectures, as presented in [15-28], are among the reactive architectures. Each of these architectures simultaneously uses one or two policies. Of course, the PLR, AFTRC, and FTM architectures, as presented in [15-17], use all three reactive policies. The difference between FTM and the other two architectures is that AFTRC and PLR are devoted to real-time systems, whereas FTM is not. The difference between PLR and AFTRC is that PLR has a lower wall-clock time than AFTRC. All architectures mentioned in [5] are described there in detail. As expressed in [5,16], the AFTRC architecture benefits from five modules in its internal structure: Recovery Cache (RC), Decision Maker (DM), Reliability Assessor (RA), Time Checker (TC), and Acceptance Test (AT). The AT module is responsible for checking the output value of the virtual machine. TC performs time checking, i.e., it checks the time required for generating the output of the virtual machine. The RA module evaluates the reliability of the output; in fact, it identifies the percentage of the output credit based on the two previous parameters, AT and TC. The DM module is responsible for determining and selecting the final output, and finally, the RC module stores the checkpoints in the case of operation replication. Figure 1 shows the structure of this architecture.
The two main phases of the FT capability are fault detection and fault recovery, each of which can be conducted in different ways. The Gossip architecture presented in [20], the FTM presented in [15], and the PLR presented in [17] use the self-detection method. This is the weakest method of fault detection, because a node must detect the fault itself. Since the PLR architecture uses only this method in the fault detection phase, it has a very weak fault detection capability, which is a great disadvantage of this architecture. Nevertheless, the FTM and Gossip architectures also use the other-detection and group-detection methods in addition to this one.
The majority of the architectures proposed in the FT field of cloud computing implement the other-detection method in their fault detection phase. It should be noted that the group-detection method alone has been used in architectures such as LLFT and BFT-Cloud, as presented in [19,22]. The dominant method in fault recovery is the system-recovery method, whereby recovery is performed at the overall system level. Again, it can be seen that in this phase the LLFT and Vega-Warden architectures presented in [22,26] perform fault recovery at the node level, which is a weaker method than the previous one. The weakest recovery method is the fault-mask mode, which seeks to exploit techniques for removing and covering the fault effect. The FTM and AFTRC architectures introduced in [15,18] use this technique together with other methods.
It is extremely important to note that all of the architectures studied so far are purely reactive or proactive. In addition, the architecture described in [27] is a hybrid one, termed VFT. The VFT architecture utilizes the Self-healing, Preemptive Migration, and Replication policies. In [28], it is stated that the architecture introduced in [27] simultaneously used both proactive and reactive policies. Figure 2 shows the structure of this architecture.
A computational algorithm, called the success rate (SR), is applied individually to each node in the VFT architecture. The algorithm decides based on two factors. The first factor is PR, which represents the performance rate. The second factor is the max success rate, which represents the maximum level considered for the success rate. If the SR of a node is less than or equal to zero, the load balancer does not assign tasks to that virtual machine in the next cycles.
The failure of a node can be decided according to the parameters SC and TDC, where SC stands for Status Checker and TDC stands for Task Deadline Checking. If all of the nodes fail, the FDM, which stands for Final Decision-Making, sends feedback to the fault handler in order to make it aware of this issue. The fault handler then detects and recovers the fault based on the techniques that are defined and implemented for fault detection and fault recovery. It is worth noting that, according to the scenario of the VFT architecture, the fault detection phase acts based on the other-detection method. The fault recovery method in the VFT architecture has also been implemented at the system level.
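A compact sketch of this success-rate bookkeeping and the resulting task-assignment decision (the update rule itself is not given in the text, so the increment/decrement below is one plausible reading; field names are illustrative):

```python
def update_success_rate(sr, pr, max_sr, succeeded):
    """Per-node SR bookkeeping as described for VFT: PR is the performance
    rate and max_sr caps the success rate. Assumed rule: SR rises by PR on
    success and falls by PR on failure, clipped at max_sr."""
    sr = sr + pr if succeeded else sr - pr
    return min(sr, max_sr)

def assign_task(nodes):
    """Skip nodes whose SR has dropped to zero or below, per VFT; if every
    node is ineligible, signal 'all nodes failed' so the FDM can notify
    the fault handler."""
    eligible = [n for n in nodes if n["sr"] > 0]
    if not eligible:
        return None                      # all nodes failed
    return max(eligible, key=lambda n: n["sr"])

nodes = [{"id": 0, "sr": 0.8}, {"id": 1, "sr": -0.1}, {"id": 2, "sr": 0.3}]
print(assign_task(nodes))                # node 0 is chosen; node 1 is skipped
```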
The approaches presented in [29-33] can be mentioned as instances of fault tolerance architectural models that have been built on other IoT architectural models. Fault tolerance implemented on the Internet of Military Things is investigated in [29]. In the presented model, called IoMT for short, an approach called MM, presented by Malek and Maeny, is used to facilitate fault detection; this method applies majority voting to duplicated inputs sent to a dual processor. In this architecture, the fault-mask technique is also used in the fault recovery phase. In addition, fault-tolerant routing for the IoT is the method proposed in [30]. Layered fault management, as a plan for end-to-end transfer, is introduced in [31]. In [32], another method is presented using the concept of virtual services; in that article, a genetic algorithm known as NSG-ii is used. Fault tolerance based on a middleware architecture is shown in [33]; in this method, redundancy is implemented at the service level. To conclude, the shortcomings of the existing solutions are as follows:
• The highest efficiency of a fault tolerance architecture is obtained through hybrid architectures. Among all of the investigated architectures, only VFT is a hybrid one, so the lack of hybrid architectures is extensively felt in this scope.
• The VFT architecture uses proactive policies in an integrated way. However, among the reactive policies it has only used the replication method, rather than all of them in an integrated way. Thus, it cannot achieve maximum reliability.
• Fault detection in that architecture is done via the other-detection method, which is an average method among the detection methods.
• The fault recovery method of VFT, the only previous hybrid architecture, operates at the system level only, so faults are not refined at the VM level. Moreover, the fault-mask method is not used in the fault recovery phase of the architecture.
Proposed Approach
The architecture proposed in this paper is called PRP-FRC. It is considered a hybrid architecture in terms of implementing the reactive and proactive policies. PRP-FRC is designed so that the proactive policies, including preemptive migration and self-healing, and the reactive policies, including Checkpoint/Restart, Job Migration, and Replication, can be implemented simultaneously. Figure 3 shows the proposed PRP-FRC architectural structure.
In the proposed PRP-FRC architecture, tasks are initially divided between physical nodes (the hosts) by the load balancer. Subsequently, they are divided between virtual nodes by the manager available in each node with the help of the mapping table. Output accuracy and the reliability of the output of a virtual machine are checked by the AT module. If the output validity of a virtual node is not confirmed by the AT, the task will be assigned to another virtual machine by the manager via the feedback path from the AT to the manager. The TC module makes decisions on the time validity of a virtual machine's generated output. In fact, the task of the TC is to check whether the virtual machine has produced an output within the time allowed for a response or has exceeded the defined period, producing a time departure.
The mentioned module is very critical, because the proposed PRP-FRC architecture, like the AFTRC architecture, is intended for real-time applications. The RA module decides on the reliability rate of a node based on the output values of the two previous parameters (AT and TC). In other words, if the percentage of node reliability is less than the limit defined in the RA module, then a new task will not be assigned to that virtual node in the next cycles. In the process of verifying the time validity of the virtual nodes, if none of the nodes has the desired output, the TC issues an "All Nodes Failed" message. In this case, the request for job migration is activated, the guest virtual node on another host is selected by the MPI architecture, and the job migration is done.
In the case where not all of the nodes have failed and only some of them have gained the reliability necessary for producing the trusted output, two modes can be implemented based on the decision of the DM (Decision Making) module. The first is that the task's restart operation is performed from the last checkpoint, based on the storage of checkpoints in the architectural structure. The second is that the job migrates to a virtual machine on another host, regardless of the checkpoint. The choice of method in this step depends on the policies defined in the DM module. Algorithm 1 shows the algorithm of the proposed PRP-FRC architecture.
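A compact sketch of this decision flow (the thresholds, dictionary fields, and VM interface below are illustrative assumptions, not the paper's Algorithm 1):

```python
def prp_frc_cycle(vms, task, t_min=1.0, t_max=7.0, r_min=0.5):
    """One scheduling cycle of the PRP-FRC flow: TC checks the 1-7 s
    response window, AT checks output validity, RA gates nodes on
    reliability, and DM acts on the trusted outputs."""
    trusted = []
    for vm in vms:
        out, elapsed = vm["run"](task)
        if not (t_min <= elapsed <= t_max):    # TC: time validity
            continue
        if out is None:                        # AT: output validity
            continue
        if vm["reliability"] >= r_min:         # RA: reliability gate
            trusted.append((vm, out))
    if not trusted:
        # TC issues "All Nodes Failed": job migration to another host.
        return ("job_migration_to_other_host", None)
    # DM: pick the most reliable trusted output; when only some nodes fail,
    # DM would instead choose checkpoint restart or job migration per policy.
    vm, out = max(trusted, key=lambda t: t[0]["reliability"])
    return ("trusted_output", out)

vms = [{"reliability": 0.9, "run": lambda t: ("ok", 3.2)},
       {"reliability": 0.3, "run": lambda t: ("ok", 0.4)}]  # too fast: TC rejects
print(prp_frc_cycle(vms, "cloudlet-1"))
```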
As a result, the following points can be made about how the proposed architecture covers the weaknesses of the previous ones:
• The proposed architecture is a hybrid architecture, so it is expected to have the highest efficiency.
• The proposed architecture uses all reactive and proactive policies in an integrated way, so it leads the system to the highest level of fault tolerance.
• The proposed architecture simultaneously utilizes the self-detection and other-detection methods in the fault detection phase.
• Unlike the previous architectures, including the hybrid VFT, the proposed architecture uses every fault refinement approach: it masks faults, refines faults at the VM level, and finally provides fault refinement at the system level.
Simulation of the Proposed Architecture in CloudSim
CloudSim is used to simulate the proposed architecture and compare it with previous architectures. This simulator can generate various reports on the energy consumption, cost, and execution time of each Cloudlet. Ease of implementation, simulation of different types of network topologies and architectures, the ability to define multiple DataCenters, ease of implementing different policies for allocating VMs and Hosts, and support for Space-Shared and Time-Shared scheduling are the benefits of this simulator.
The scenario implemented in this article was designed in the CloudSim simulator with a two-host DataCenter with different numbers of processors. Each host has four VMs with various specifications. The capabilities of the VMs are defined differently so that their different behaviors and functions when dealing with different versions of each Cloudlet can be examined. Tables 1 and 2 show the configuration specifications of the DataCenter and the VMs, respectively. Moreover, a number of Cloudlets were designed and implemented with different processing lengths. The purpose of this design was to use Cloudlets of different lengths to create different fault modes in the system. After designing the DataCenter, Hosts, VMs, and Cloudlets, and before simulation, it was necessary to extend CloudSim based on the proposed architecture, determining how computational resources such as RAM and especially PEs are distributed among requests. This scheduling can follow a Space-Shared, Time-Shared, or customized policy. In all simulations, we used all resources in a Space-Shared manner.
In the reliability calculation algorithms, the calculations are done in such a way that the effect of a fail or a success changes the reliability with a slower slope at each step. Additionally, a fail at the beginning of a VM's lifetime has a different effect from fails occurring after several requests. Obviously, the reliability at each stage of the simulation equals the average reliability of all of the VMs at that stage. The simulation results are shown in Tables 3-5 and Figure 4. These results demonstrate the positive effect of using all policies in a hybrid fashion in the proposed architecture. As can be understood from the DataCenter and VM configurations in Tables 1 and 2, the Cloudlets had a constant data size of 300, but the job lengths were varied so that different behaviors would appear when facing different faults. Regarding the time criterion, the acceptable execution time was considered to be between 1 and 7 s. Thus, if a VM produced an output in less than 1 s it would not be accepted, and if this time exceeded 7 s the output would be unacceptable as well. This time range is therefore considered in the first phase of analyzing the virtual machines' outputs. In the second phase, if the output was produced within the acceptable time range, the value of the output is validated to see whether the amount produced by the VM is acceptable or not.
Validation Platform
The reliability of a Host equals the average of the reliabilities of all of the VMs of that Host. Evidently, when all VMs have acceptable outputs, the total reliability increases; when all of the VMs fail, their total reliability naturally decreases. These are, respectively, the best and worst possible modes, corresponding to the maximum and minimum of the reliability. When a VM has some acceptable outputs but lacks the expected output at other times, its reliability lies between these extremes. On the other hand, as time passes, the more recent results become more important: our criterion is the current time and status of each VM. It is possible that a VM had an acceptable output before but lacks a proper output at present, or vice versa. Therefore, the numbers of successful and failed outputs, as well as the success-effect coefficient versus the fail-effect coefficient, are considered the effective parameters in the calculations.
For example, assume that the Success-Effect coefficient, denoted SE, equals 0.1; the Fail-Effect coefficient, denoted FE, equals 0.03; and the Importance Factor, denoted IF, equals 0.6, with a maximum possible value of 1. The numbers of successes and failures are counted by CSC and CFC, where CSC stands for Continuous Success Counter and CFC for Continuous Fail Counter, both initialized to zero. As a result, each VM's reliability equals the previous round's reliability, plus the previous round's reliability multiplied by CSC (or CFC), the success-effect (or fail-effect) coefficient, and the importance factor.
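One plausible reading of this update rule is sketched below. The counter handling, the sign used on failure, and the clamping of reliability to [0, 1] are our assumptions, since the prose leaves them ambiguous.

```python
# One plausible reading of the reliability update rule above; the
# counter handling, the sign on failure, and the clamping to [0, 1]
# are assumptions, since the prose leaves them ambiguous.

SE = 0.1   # Success-Effect coefficient
FE = 0.03  # Fail-Effect coefficient
IF = 0.6   # Importance Factor (maximum value 1)

class VmReliability:
    def __init__(self, initial=0.5):
        self.r = initial
        self.csc = 0  # Continuous Success Counter
        self.cfc = 0  # Continuous Fail Counter

    def update(self, success):
        if success:
            self.csc += 1
            self.cfc = 0
            self.r += self.r * self.csc * SE * IF  # grows with a streak
        else:
            self.cfc += 1
            self.csc = 0
            self.r -= self.r * self.cfc * FE * IF  # shrinks with a streak
        self.r = min(max(self.r, 0.0), 1.0)
        return self.r

# Per the text, the reliability reported at each simulation stage is
# the average reliability of all VMs at that stage:
def stage_reliability(vms):
    return sum(vm.r for vm in vms) / len(vms)
```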
Simulation Environment
Many scientific calculations use workflow technologies to manage complexity [6]. Workflow technology is used to schedule computing tasks on distributed resources and to manage task interdependence. The aim of this scheduling is to optimize the mapping between tasks and appropriate resources. One of the important factors that greatly influences the choice of a scheduling strategy is the dependence or independence among the tasks. Some tasks have to be done in succession; these are called dependent tasks. In contrast, tasks that can run simultaneously or in any order are called independent tasks. Scheduling dependent tasks is known as workflow scheduling.
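As a brief illustration of this distinction, a workflow can be represented as a directed acyclic graph whose dependent tasks must follow a topological order, while tasks with no path between them are independent and may run concurrently; the toy workflow below is hypothetical.

```python
# Toy workflow (hypothetical) represented as a DAG: dependent tasks
# must follow a topological order, while 'clean_a' and 'clean_b' have
# no path between them and are therefore independent, so a scheduler
# may run them concurrently. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
workflow = {
    "extract": set(),
    "clean_a": {"extract"},
    "clean_b": {"extract"},
    "aggregate": {"clean_a", "clean_b"},
}

order = list(TopologicalSorter(workflow).static_order())
print(order)  # one valid order: ['extract', 'clean_a', 'clean_b', 'aggregate']
```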
A simulated environment based on the Pegasus-WMS workflow management system has been implemented to validate the architecture proposed in this article (Pegasus-WMS). In Pegasus, workflows are described as Directed Acyclic Graphs (DAGs). In a DAG, each node represents one of the tasks, and the edges represent the interdependence between those tasks. Montage and CyberShake are among the most famous scientific workflows. Montage is applied for processing and transmitting images and has been used by NASA; Figure 5a shows the Montage workflow. Figure 5b shows the CyberShake workflow, which has been used to process the waves at the California Seismological Center. The Pegasus-WMS approach acts as a bridge between the field of science and the field of action, expressing the connection between them. In addition to executing a described abstract workflow, the Pegasus workflow management system has the ability to translate the tasks into jobs and subsequently attends to the running of those jobs. This workflow management system is capable of simultaneously executing data management and running the jobs. Additionally, it has the ability to monitor and track job execution; eventually, Pegasus can handle the jobs in case of failure. The mentioned actions are performed by the five Pegasus subsystems. Figure 6 shows the architecture of the Pegasus workflow management system. The first major Pegasus subsystem is the Mapper, which produces an executable workflow based on an abstract workflow provided by the user. The second subsystem is the local execution engine, which is responsible for submitting the jobs defined by the workflow; subsequently, the jobs' states are tracked and the readiness timing of running those jobs is determined. The next subsystem is the job scheduler, which is responsible for managing the scheduling of individual jobs and monitoring their execution on local or remote resources. The remote execution engine subsystem manages the execution of one or more tasks, based on the possible or probable structures of a sub-workflow, on one or more remote computing nodes. Finally, the monitoring component subsystem is responsible for monitoring the execution of a workflow at run time; analyzing the jobs in a workflow, and populating them in a workflow database, is the responsibility of this subsystem.
Basic structures, or main components, of a scientific workflow include process, pipeline, data distribution, data aggregation, and, finally, data redistribution. The VFT and AFTRC architectures are depicted in Figure 7a,b, respectively. The proposed PrP-FRC workflow architecture is presented in Figure 8. Clearly, the basic structures of process, data distribution, data aggregation, and pipeline have been exploited in the implementation of the workflows of the VFT and AFTRC architectures and the proposed PrP-FRC architecture.
The proposed architecture has been evaluated against the VFT and AFTRC architectures in terms of the run time of the relevant workflows. The experiments were performed using the above-described simulation environment. In the following subsections, the results of the simulations carried out are described in detail.
Simulation Results
Tables 6-8 present the results of simulating the AFTRC and VFT architectures and the proposed PrP-FRC architecture in the Pegasus-WMS simulator.
The proposed PrP-FRC architecture has been evaluated against the VFT and AFTRC architectures in terms of two criteria: average execution time and reliability.
Figure 9 shows the average execution time for each architectural workflow. The PrP-FRC architecture had the highest average execution time, since it implements all proactive and reactive policies. The AFTRC architecture had the lowest average execution time, as it is not a hybrid architecture but a purely reactive one.
On the other hand, as shown in Figure 10, the highest level of fault tolerance was provided by the proposed PrP-FRC architecture, with the VFT hybrid architecture in second place: its reliability was lower than that of PrP-FRC but higher than that of the AFTRC architecture. Additionally, Figure 11 illustrates the number of failed and succeeded tasks or jobs in one of the simulation rounds as an example.
Modelling and Fuzzy Analysis of Architectures
Our real world is a world of uncertainties and ambiguities. Given that fault tolerance is a qualitative parameter associated with uncertainty, fuzzy logic is used in modelling and analysing this important feature. Different methods have been presented in different papers for reliability analysis; one of these methods, as referred to in [35], is the modelling of fault tolerance based on fuzzy logic. The support of fuzzy systems for rapid pattern generation and incremental optimization increases the importance of the results. Furthermore, fuzzy-based frameworks for evaluating fault tolerance have been introduced for the various architectures of [36,37]. These frameworks analysed and measured the intended capability while considering various parameters, such as the methods used in the detection phases and fault refinements of the various designed fuzzy inference systems. A fuzzy evaluation is formed of four main parts: the fuzzifier, the defuzzifier, the fuzzy inference system (briefly referred to as FIS), and the fuzzy rule database.
The role of the fuzzifier in this system is to convert the input terms into a linguistic term set; this is done via the membership functions. The fuzzy inference engine uses the database of fuzzy rules to obtain the fuzzy output; the fuzzy rules are stored in a dedicated database that the inference engine exploits. Additionally, the defuzzifier converts the fuzzy output of the inference engine into a crisp value. An assessment of the architectures has been carried out on four separate fuzzy engines, termed FIS (1), FIS (2), FIS (3), and FIS (4). Figure 12 shows the inputs and outputs of the engines. All of the designed engines have one output, and the FIS (1), FIS (2), and FIS (3) engines have three inputs. In addition, the numbers of database rules and engine membership functions have similarities and differences. Trapezoidal membership functions have been used for designing the FIS (1) engine. Each of the three inputs of this engine is considered a three-level input, comprising low, medium, and high levels. Moreover, the output of this fuzzy evaluation engine has been designed on five levels; the labels of the linguistic variables in FIS (1) are very low, low, medium, high, and very high. The results of the assessments of each architecture by the FIS (1) engine are reported in Table 9. An important point is that the VFT architecture and the proposed PrP-FRC architecture, which are hybrid architectures, cannot be evaluated by this engine, because its first input is designed as a three-level input; the first input of this engine is related to the situation of the policies used in the architecture.
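For illustration, a trapezoidal membership function of the kind used in these engines, together with a three-level fuzzification of a normalized input, is sketched below; the breakpoints are arbitrary placeholder values, as the engines' actual membership-function parameters are not reproduced here.

```python
# Illustrative trapezoidal membership function and three-level
# fuzzification of a normalized input in [0, 1]. The breakpoints are
# arbitrary placeholder values; the engines' actual membership-function
# parameters are not reproduced here.

def trapmf(x, a, b, c, d):
    """Trapezoid: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Low / medium / high levels, as in the triple inputs of FIS (1).
LEVELS = {
    "low":    (-0.1, 0.0, 0.2, 0.4),
    "medium": ( 0.2, 0.4, 0.6, 0.8),
    "high":   ( 0.6, 0.8, 1.0, 1.1),
}

def fuzzify(x):
    return {name: trapmf(x, *params) for name, params in LEVELS.items()}

print(fuzzify(0.5))  # {'low': 0.0, 'medium': 1.0, 'high': 0.0}
```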
Trapezoidal membership functions have been used for designing the FIS (2) engine, like the FIS (1) engine. The main difference between the two engines lies in the number of linguistic variables of their first inputs; the first input is dedicated to the policies used in the architecture.
Discussion
Inside the DMTF, whose focus is on the FCAPS capabilities in management, there is another group, known as the Cloud commands. The main task of this group is the development of Service Measurement Index (SMI) technologies. The goal of the SMI is to evaluate and measure aspects of cloud performance in a standard form, and some methods have been established in this field. The purpose of this article was to present a new hybrid fault tolerance architecture. The simulation results in CloudSim and Pegasus-WMS, as well as the fuzzy modeling and evaluation, confirmed the increased fault tolerance and reliability achieved by the proposed architecture.
The VFT architecture is a hybrid architecture, but its biggest weakness appears when all of the nodes fail. In this case, the VFT architecture sends feedback to the fault handler so that an appropriate decision can be taken. The architecture should be designed and implemented such that the job is migrated to another host; but since there is no job migration policy in this architecture, internal feedback within the same host occurs, which reduces the recovery ability and, ultimately, the architecture's fault tolerance. Additionally, because the checkpoint/restart policy is not implemented in this architecture, whenever a task encounters even the smallest fault, the task is executed again in its entirety, which significantly affects overall performance. If this policy were used, execution of the task would resume from the last checkpoint and, consequently, the architecture's efficiency would increase remarkably.
The proposed PrP-FRC architecture seeks to implement all of the proactive and reactive policies simultaneously. Clearly, in this case, the significant weaknesses of the VFT architecture arising from the missing job migration and checkpoint/restart policies are absent from the proposed PrP-FRC architecture. The fault detection phase is carried out by the decision maker module using the other-detection method.
In the proposed architecture, two of the three fault detection methods have been implemented. The AT module, which investigates the output of each of the VMs, provides a self-detection method. The TC module, which directly supervises the time validity of the output generated by each of the VMs, provides the other-detection method in this architecture. Finally, owing to the roles played by the RA and DM modules in the PrP-FRC architecture, the group detection method has not been used in the fault detection phase.
In the fault recovery phase, the VFT architecture implements system recovery via the feedback at its final output. In the PrP-FRC architecture, feedback has been designed separately on both the final architecture output and the output of each VM. This feedback to the management system triggers recovery, which is carried out simultaneously on two levels: the node and the system.
The evaluations by the FIS (1) to FIS (4) fuzzy engines also showed that fault tolerance in the proposed PrP-FRC architecture increased by 16 to 25 percent compared with the AFTRC architecture, and by 25 to 58 percent compared with the VFT hybrid architecture, across the fuzzy evaluator engines.
Fault masking, which is one of the fault recovery methods, is implemented in the simple AFTRC architecture: the outputs of all the VMs are collected and the fault effects disappear. In the proposed PrP-FRC, by contrast, due to the feedback considered at the output of each VM, the node recovery method is implemented and the fault effects produced on the VMs are not masked. In addition, the AFTRC architecture does not implement the node recovery method; it is used only for masking fault effects. As a direction for future research, a method for incorporating the fault-masking strategy into the PrP-FRC architecture may be explored.
Conclusions
Presenting a new fault tolerance architecture that simultaneously uses proactive and reactive policies was the goal of this research. The proposed PrP-FRC architecture covers fault tolerance across five phases: fault forecasting, fault prevention, fault detection, fault isolation, and fault recovery. The main reason is that the architecture fully covers each of the five phases of fault tolerance by simultaneously using all of the proactive and reactive policies. Given the full implementation of all the policies in the proposed PrP-FRC architecture, it was expected to provide higher fault tolerance than previous architectures. The results of the simulations performed in CloudSim, together with the comparison and simulation of the proposed architecture's workflow, confirmed the increase in this capability. The execution time of the proposed PrP-FRC architecture showed no significant difference from previous architectures and increased only slightly. This feature highlights the strength of the proposed architecture.
The architecture proposed in this paper simultaneously uses the self-detection and other-detection methods in the fault detection phase. Additionally, it simultaneously uses the three methods of fault masking, node recovery, and system recovery in the fault recovery phase. If a group detection method were also used in the fault detection phase, PrP-FRC could be considered a complete architecture implementing all fault detection methods. Using and implementing this method for fault detection in the aforementioned architecture can be pursued as future work.
Figure 2. VFT architectural structure introduced in [27], which is the only hybrid architecture.
Figure 3. The hybrid architectural structure of the architecture proposed in this paper (PrP-FRC).
Figure 4. Comparison of reliability diagrams of the architectures.
Figure 9. Average execution time of the architectures.
Figure 10. Reliability rate for each architecture.
Figure 11. The number of jobs and tasks and their status.
Figure 12. (a) Inputs of the fuzzy engines; (b) outputs of the fuzzy engines.
Table 4. Results of simulating VFT in CloudSim.
Brain 3-Mercaptopyruvate Sulfurtransferase (3MST): Cellular Localization and Downregulation after Acute Stroke
3-Mercaptopyruvate sulfurtransferase (3MST) is an important enzyme for the synthesis of hydrogen sulfide (H2S) in the brain. We present here data that indicate an exclusive localization of 3MST in astrocytes. The regional distribution of 3MST activities is even and unremarkable. Following permanent middle cerebral artery occlusion (pMCAO), 3MST was down-regulated in both the cortex and striatum, but not in the corpus callosum. It appears that the down-regulation of astrocytic 3MST persisted in the presence of astrocytic proliferation due to gliosis. Our observations indicate that 3MST is probably not responsible for the increased production of H2S following pMCAO. Therefore, cystathionine β-synthase (CBS), the alternative H2S-producing enzyme in the CNS, remains a more likely potential therapeutic target than 3MST in the treatment of acute stroke through inhibition of H2S production.
Introduction
Ischemic stroke occurs when the blood supply to a particular area of the brain stops due to occlusion of a blood vessel. It has been reported that poor clinical outcome in acute stroke patients is strongly associated with high plasma homocysteine (Hcy) and cysteine (Cys) levels [1][2][3]. In animal studies, the administration of cysteine increased the infarct volume after experimental stroke induced by permanent middle cerebral artery occlusion (pMCAO), which could be attenuated by aminooxyacetic acid, an inhibitor of the enzyme cystathionine β-synthase (CBS). As CBS can produce hydrogen sulfide (H2S) using Cys and/or Hcy as substrates [4][5][6], these observations indicate that the Cys effect may be due to its conversion to H2S [3]. Moreover, administration of NaHS, an H2S donor, instead of Cys, similarly increased infarct volume after pMCAO [7]. H2S, although well known to be a toxic gas, is now recognized to be present in mammalian tissues and has important physiological functions, especially in the cardiovascular system and the central nervous system (CNS) [8,9]. It is an important neuromodulator which facilitates the induction of hippocampal long-term potentiation (LTP) by enhancing the activity of NMDA receptors in neurons and promotes the influx of Ca2+ into astrocytes (calcium wave) [4,10]. It is known that H2S may be produced by the action of 2 key enzymes in the brain, namely, the pyridoxal-5'-phosphate (PLP)-dependent CBS [4,11], and the PLP-independent 3-mercaptopyruvate sulfurtransferase (3MST). 3-Mercaptopyruvate (3-MP) is converted from cysteine by the action of cysteine aminotransferase (CAT) [12]. It has been reported that H2S produced by 3MST may be readily stored as bound sulfane sulfur, which in turn can rapidly release H2S on stimulation. Thus, cells expressing 3MST and CAT have an increased level of bound sulfane sulfur [12]. However, it is not known what changes occur in 3MST expression under ischemic conditions in the brain. As H2S is known to increase after stroke [7], we hypothesized that the expression of 3MST might increase if 3MST is the major source of H2S under such conditions. In this article, we report the regional distribution of 3MST activities and the cellular localization of 3MST, and its expression in the striatum and cortex before and after pMCAO.
Ethics Statement
All animal experimental procedures in this study were approved by the Institutional Animal Care and Use Committee of the National University of Singapore.
3MST assay
3MST activities in tissue homogenates were measured according to Westrop et al. [13] with significant modifications as follows. All incubations were performed in reaction tubes fitted with air-tight serum caps and plastic centre wells. The centre well contained a folded 2 cm x 2.5 cm filter paper (Whatman No. 1) wetted with 0.5 ml of 1% (w/v) zinc acetate in 12% NaOH for trapping evolved H2S. Brain homogenate (300 µl, 14.3% w/v) in 50 mM potassium phosphate buffer (pH 6.8) was mixed with 3-MP (2 mM, sodium salt, Sigma-Aldrich) and 2-mercaptoethanol (10 mM, Sigma-Aldrich), with or without 2-ketobutyric acid (40 mM), an uncompetitive inhibitor of 3MST [14,15], in the reaction tube in an ice bath. The total volume was 0.4 ml. The reaction tube was then flushed with N2 for 20 s and capped. The reaction was initiated by transferring the tube to a shaking bath at 37°C. After incubating for 90 min, the reaction was stopped by injecting trichloroacetic acid (0.5 ml, 50% w/v) through the serum cap. After 1 h of incubation at 37°C to allow complete trapping of H2S, the centre well was taken out. N,N-Dimethyl-p-phenylenediamine sulphate in 7.2 M HCl (0.5 ml, 20 mM) and FeCl3 in 1.2 M HCl (0.5 ml, 30 mM) were added, and the mixture was left in the dark at room temperature for 20 min. Finally, the absorbance at 670 nm was determined with a spectrophotometer (Epoch, BioTek). Blanks were obtained by replacing brain homogenate with buffer. A calibration curve was obtained using NaHS (0-1 mM).
pMCAO
Male Sprague Dawley rats (250-280 g) were randomly assigned to the pMCAO group or the sham control group. A subtemporal approach was used to induce permanent occlusion of the left middle cerebral artery (MCA) [7,16]. The rats were anaesthetized with ketamine (75 mg/kg i.p.) and xylazine (10 mg/kg i.p.). A craniectomy was extended dorsally up to the first major branch of the MCA. The dura was then opened with a bent 26-gauge needle and the arachnoid membrane was carefully removed. The MCA was cauterized with an electrocauterizer without damaging the brain surface and then cut. The site of the occlusion was between the inferior cerebral vein and the olfactory tract. The sham group was operated on in the same way as the experimental group but with the MCA left intact.
Western blot analysis
Brain tissues (cortex or striatum) were homogenized in cold lysis buffer [10 mM HEPES, pH 7.9, with 1.5 mM MgCl2 and 10 mM KCl, containing protease inhibitor cocktails (Roche Diagnostics GmbH, Mannheim, Germany)]. The lysates were centrifuged for 10 min at 14,000×g. The cytoplasmic fractions containing equal amounts of protein, as determined by the NanoDrop method (Thermo Scientific), were separated by 10% SDS/PAGE and transferred onto a nitrocellulose membrane (Amersham Biosciences, Buckinghamshire, UK). After incubation in 10% milk in TBST buffer (10 mM Tris-HCl, 120 mM NaCl, 0.1% Tween-20, pH 7.4) at room temperature for 1 h, the membranes were incubated with antibodies against 3MST (Sigma-Aldrich) at 4°C overnight, and then with 1:1000 dilutions of HRP-conjugated anti-rabbit IgG at room temperature for 1 h. Bands were visualized using a high-quality fluorescent camera (UVItec Cambridge). The density of the bands on the membranes was quantified by densitometry analysis of the scanned blots using UVIband software. All protein levels were normalized to the corresponding β-actin bands (Sigma-Aldrich). For Western blotting, the contralateral side of the pMCAO brain was used as control instead of sham animals in order to reduce the number of animals used to a minimum, as preliminary data showed no significant difference between the contralateral side and sham control.
Statistical analysis
All statistical analyses were performed using Excel 2013 (Microsoft, Redmond, WA) and SPSS 20 (IBM, Armonk, NY). Differences between two groups were analyzed with Student's t-test. Differences between three or more groups were analyzed with one-way analysis of variance (ANOVA). Post hoc multiple comparisons were made using the Bonferroni test.
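As an illustration of this workflow in code (the analyses here were run in Excel and SPSS), a minimal SciPy sketch with placeholder data might look as follows.

```python
# Placeholder data only: this mirrors the described workflow (t-test,
# one-way ANOVA, Bonferroni post hoc) in SciPy rather than Excel/SPSS.
import numpy as np
from scipy import stats

group_a = np.array([21.0, 24.5, 23.1])
group_b = np.array([25.2, 26.0, 24.8])
group_c = np.array([22.3, 21.9, 23.5])

# Two groups: Student's t-test.
t, p = stats.ttest_ind(group_a, group_b)

# Three or more groups: one-way ANOVA.
f, p_anova = stats.f_oneway(group_a, group_b, group_c)

# Bonferroni post hoc: multiply each pairwise p-value by the number of
# comparisons (three pairs here) and cap at 1.
pairs = [(group_a, group_b), (group_a, group_c), (group_b, group_c)]
p_bonf = [min(stats.ttest_ind(x, y).pvalue * len(pairs), 1.0)
          for x, y in pairs]
print(p_anova, p_bonf)
```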
Results
3MST activities were detected in all six brain regions examined, as shown in Figure 1. There appeared to be no significant differential distribution, with all regions exhibiting mean activities in the range of about 21 to 26 µmol/g tissue/h (Figure 1). In the presence of 2-ketobutyric acid, 3MST activities were almost completely inhibited, showing that no other enzymes contributed to the production of H2S under the assay conditions used.
Figure 1. 3MST activities were measured biochemically in tissue homogenates in the absence (solid bars) and presence of 2-ketobutyric acid (open bars) as described in Methods; N = 3.
The expression of 3MST in brain cells was demonstrated using immunohistochemical staining. Figures 2 and 3 show 3MST immunoreactivity in the cortex and striatum, respectively. By double immunostaining, 3MST was demonstrated to localize in astrocytes, as it colocalized with GFAP immunoreactivity. In contrast, 3MST immunoreactivity did not colocalize with NeuN or OX42, indicating that it was not expressed in neurons or microglia. At the subcellular level, 3MST immunoreactivity appears to be cytoplasmic and is present in both the processes and the cell soma of the astrocytes. Consistent with previous findings, when rats were subjected to pMCAO, the infarcted areas involved the cortex, striatum, corpus callosum, and hippocampus [7], as shown in Figure 4. When the expression of 3MST was investigated in the cortex and striatum after pMCAO by Western blot analysis, it was found that 3MST was significantly down-regulated by about 40-50% at 72 h in the cortex and at 24 h in the striatum. In contrast, expression of GFAP increased progressively from 8 h onward, reaching very high levels at 24 and 72 h after pMCAO (Figures 5A and 6A). While the Western blot analysis was performed on the whole cortex and striatum, immunohistochemical staining was studied in the peri-infarct areas as indicated in Figure 4. Immunostaining results are generally consistent with the Western blot analysis and clearly show that 3MST immunoreactivity remained colocalized with astrocytes under ischemic conditions, as shown in Figures 5-7.
Figure 4. The infarcted regions are not stained by TTC and thus appear white. Photomicrographs presented in Figures 5-7 are taken from the peri-infarct areas at the level of +1 mm from bregma, as indicated. a: cortex; b: corpus callosum; c: striatum.
At 24 h after pMCAO, it was observed that both 3MST and GFAP immunoreactivities were much reduced in both the cortex and striatum (Figures 5B and 6B), despite the increase in GFAP expression observed in Western blot analysis. This suggests that at this time point, astrocyte proliferation was still confined to the core infarct areas and had not reached the peri-infarct areas. At 72 h post-pMCAO, GFAP immunoreactivity was quite comparable to control, reflecting a significant level of astrocyte proliferation in the peri-infarct areas. However, 3MST immunoreactivity remained low at this time point, indicating that 3MST expression continued to be suppressed despite astrocyte proliferation.
In contrast to the cortex and striatum, immunostaining results show that 3MST expression did not appear to be affected in the corpus callosum at either 24 or 72 h after pMCAO (Figure 7). Therefore, it appears that the suppression of 3MST expression is not a general phenomenon occurring in all peri-infarct regions following pMCAO. However, the possibility of a change in 3MST expression in this region at a later time point cannot be ruled out.
Discussion
It has been known that H2S may be synthesized through the actions of three enzymes: CBS, 3MST, and cystathionine γ-lyase (CSE) [12,17]. CSE, while important in the cardiovascular system [8], is considered a minor contributor to H2S synthesis in the brain based on its low level of expression [4]. CBS can produce H2S either by hydrolyzing its substrate Cys or by condensation of Cys and Hcy; kinetic studies have shown that the latter appears more efficient [18]. On the other hand, 3MST produces H2S from 3-MP with pyruvate as the byproduct. 3-MP is produced from Cys by CAT. Therefore, the two pathways can be commonly regulated by Cys availability. However, the regulation of H2S synthesis in the brain is not well understood and has been reported to be closely associated with the intracellular level of Ca2+ [19].
Our finding of astrocytic expression of 3MST contradicts an earlier report that 3MST is localized to neurons in many areas of the mouse brain and spinal cord [12]. In the cortex, 3MST immunoreactivity was reportedly present in pyramidal neurons in layers II/III and V, and in layers I-VI of the neocortical areas [12]. Although colocalization with NeuN was not performed in the previous study [12], the immunohistochemical staining presented is convincing. It is difficult to know the cause of such a discrepancy, and any explanation can only be speculative. However, an astrocytic localization of 3MST seems consistent with the even and unremarkable regional variation of the in vitro 3MST activities (Figure 1).
We have previously reported that the H2S level in cortical tissues was increased 2-fold at 24 h after pMCAO. This increase was not associated with an upregulation of CBS expression [7]. Our present observation that 3MST is significantly downregulated under the same conditions indicates that 3MST is not likely to be responsible for the increased production of H2S under ischemic conditions. This is interesting, as CBS and 3MST are supposedly the predominant enzymes for H2S synthesis. However, we have now obtained preliminary data showing an upregulation of the truncated CBS (45 kDa) (unpublished), while full-length CBS (62 kDa) remained unchanged, as reported previously [7]. It has been reported that the truncation of CBS is associated with increased CBS activities [20]. Most recently, Perna et al. [21] reported that NaHS down-regulated both 3MST and CSE in cultured endothelial cells. Therefore, it is a distinct possibility that the observed down-regulation of 3MST is caused by a high level of H2S produced through CBS truncation and activation.
In conclusion, 3MST appears not to be instrumental in the acute increase in H2S production in the ischemic brain. As inhibition of H2S production may lead to a reduction in ischemic damage, CBS remains a more likely potential therapeutic target than 3MST in the treatment of acute stroke.
Beyond the species name: an analysis of publication trends and biases in taxonomic descriptions of rainfrogs (Amphibia, Strabomantidae, Pristimantis)
The rainfrogs of the genus Pristimantis are one of the most diverse groups of vertebrates, with outstanding reproductive modes and strategies driving their success in colonizing new habitats. The rate of Pristimantis species discovered annually has increased continuously during the last 50 years, establishing the remarkable diversity found in this genus. In this paper the specifics of publications describing new species in the group are examined, including authorship, author gender, year, language, journal, scientific collections, and other details. Detailed information on the descriptions of the 591 species of Pristimantis published to date (June 2022) was extracted and analyzed. John D. Lynch and William E. Duellman are the most prolific authors
Introduction
Pristimantis Jiménez de la Espada, 1870 is a clade of New-World direct-developing frogs belonging to the family Strabomantidae, order Anura, class Amphibia, phylum Chordata. It is the most speciose genus of terrestrial vertebrates with 591 described species to date (Hedges et al. 2008;Frost 2022). A greater focus on molecular, acoustic, and osteological techniques combined with a significant increase in sampling efforts has led to a rise in the number of newly described species in recent years, allowing further research in understanding their taxonomy and systematics (Padial et al. 2010;Hutter and Guayasamin 2015;Kaiser et al. 2015;González-Durán et al. 2017;Reyes-Puig et al. 2020).
The earliest description of this genus was the description of the genus type Pristimantis galdi (Jiménez de la Espada, 1870). The genus was later placed under the synonymy of Hylodes sensu lato by Boulenger (1882), then synonymized with Eleutherodactylus by Peters (1955). Cyclocephalus, Pseudohyla, and Trachyphrynus were also synonymized with Eleutherodactylus by Lynch (1968, 1971). Heinicke et al. (2007) removed Pristimantis from the synonymy under Eleutherodactylus, with support from molecular evidence. This large and phenotypically diverse genus has undergone subsequent molecular analyses confirming its monophyly and its status as a taxon closely related to Lynchius, Oreobates, and Phrynopus, with Yunganastes suggested as well (Pyron and Wiens 2011; Canedo and Haddad 2012; Pinto-Sánchez et al. 2012). Several discernible groups can be found within the genus, originally described based on extensive morphological data and revised in subsequent molecular analyses (Lynch and Duellman 1997; Hedges et al. 2008; Pinto-Sánchez et al. 2012; Padial et al. 2014; Mendoza et al. 2015). There are currently 13 recognized species groups, with P. conspicillatus as the largest with 36 species (Padial et al. 2014; González-Durán 2017; Reyes-Puig et al. 2020; Taucce et al. 2020). Several other species remain unassigned as they are demonstrably non-monophyletic. These species can maintain taxonomic value as they can be grouped among phenotypically similar species, thus revealing useful comparative information (Hedges et al. 2008; Padial et al. 2014).
Members of this genus are remarkable for laying eggs in terrestrial habitats, with the embryos developing directly into frogs, bypassing the aquatic stage of their lifecycle (Woolbright 1985;Duellman and Lehr 2009). Assemblages of Pristimantis species are common, as their morphology and therefore behavioral and ecological activities are remarkably similar (Arroyo et al. 2008). They are characterized as insectivorous generalists, choosing prey depending on availability and size (Lynch and Duellman 1997;Arroyo et al. 2008). Individuals are predominantly arboreal and nocturnal, commonly perching on leaves at heights below 200 cm. As their reproductive biology leads them away from congregating at ponds, a common strategy for males in this genus is to vocalize from the ground or a suitable perch in search of a mate (Lynch and Duellman 1997;Duellman and Lehr 2009).
The genus is widely distributed throughout the New World and considered the most extensive among Neotropical amphibians, with species found in tropical and subtropical forests in South America and up to lower Central America (Lynch and Duellman 1997; Pinto-Sánchez et al. 2012; Meza-Joya and Torres 2016; Armesto and Señaris 2017). The group shows particularly high levels of diversity and endemism in the Andes, predominantly at elevations above 2000 m along humid environments and surrounding lowlands and Chocoan South America (Heinicke et al. 2007; Meza-Joya and Torres 2016; Armesto and Señaris 2017; Reyes-Puig et al. 2020). The diversity of the genus is greater in Colombia, Ecuador, and Peru, and the piedmont, montane, and montane cloud forests of the western and eastern slopes of the three countries concentrate the highest levels of endemism (Hedges et al. 2008; Ron et al. 2022). The continuous increase in the number of species described within this genus suggests a much greater species richness than anticipated, with subtle differences in behavior and physiology reflecting distinct evolutionary pathways (de Queiroz 2005; Hutter and Guayasamin 2015).
The conditions leading to such a high diversification rate in Pristimantis are not completely understood. Families of direct-developing frogs diverged quickly during the early to middle Cenozoic, favoring a wide dispersion across a range of habitats in South America, leading to a rapid accumulation of population genetic isolation (Heinicke et al. 2007; Heinicke et al. 2009; Pröhl et al. 2010). An important focus of the radiation of the genus is found between 1000 and 3000 m in the northwestern Andes (Mendoza et al. 2015). The complex biogeographic dynamics in the area not only supported allopatric speciation, but also facilitated dispersion of lowland species during the Paleocene and Pliocene, leading to a speciation pattern particularly prone to originating cryptic and sibling lineages (Lynch and Duellman 1997; Mendoza et al. 2015).
In the last 20 years, the increase in species description rates in South America has shed light on the remarkable diversity and endemism of Pristimantis, while hinting at the complex patterns of speciation taking place. Given the cryptic nature of members of this genus, the work to discover and describe new species appears far from over. Categorizing these frogs and their characteristics in a taxonomic context presents several challenges, further obscured by the high degree of plasticity evidenced across the group.
Despite the fact that this genus is so diverse and that taxonomic and systematic research continues from year to year, information on publication patterns and trends is scarce. Issues such as gender and language biases in publications have not yet been explored. Within the biological sciences, and zoology in particular, studies in this branch have been characterized as male-dominated, imposing limitations on the professional development of many women (Slobodian 2021). Similarly, it has been shown that, for example, in ecology and zoology the proportion of principal investigators publishing with women is lower compared to the proportion publishing with men (Salerno et al. 2020). On the other hand, the language of publication continues to be dominated by English, although there have been approaches to improve the transmission of science so that it can be disseminated locally (Ramírez-Castañeda 2020). Latin American researchers not only tend to be under pressure to publish in English, but also to do so with colleagues from developed countries, issues that tend to be related to the number of citations an article can receive (Meneghini et al. 2008). Thus, in this article we aim to delve into the specifics of the biases and trends in the publication of descriptions of new species of Pristimantis by carrying out a detailed review of all description parameters, emphasizing location, authorship (including author gender), scientific collection, and language.
Materials and methods
We followed the proposal of Hedges et al. (2008) and Heinicke et al. (2018) for the family classification of Pristimantis. The updated list of species formally described to date was extracted from the Amphibian Species of the World web page of the American Museum of Natural History (Frost 2022). On 14 June 2022, the Pristimantis list contained 591 described species. Based on this list, we generated a detailed database for each species, extracting specific data available in the original description and on the aforementioned web page. We built a database with the following fields: unique species identifier, species, journal, year of publication, first author, authors, gender of the author, nationality of authors, corresponding author, genus, country of description, type locality, holotype, synonymy, species group (if applicable), distribution, language of the description, institutional affiliations, conservation status, and type of description. For more details see Suppl. material 1. We calculated the growth rate of global descriptions and that of the seven countries with the highest numbers of descriptions (i.e., Ecuador, Colombia, Peru, Venezuela, Brazil, Panamá, Bolivia). In order to better represent the different historical trends seen in Pristimantis descriptions, growth rates are divided into two periods: the first from 1958-1989, when descriptions begin to be more constant, and the second from 1990-2021. Due to a significant increase in descriptions at the time, the period ranging from 2010-2021 is also considered. This is mainly because we intend to identify a realistic description rate given the latest advances in the taxonomy and systematics of the genus, in addition to the fact that the number of researchers currently working on Pristimantis taxonomy is greater than in previous periods (Suppl. material 1). We did not consider the periods prior to 1958, as there is no significant growth in description rates observed during them.
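The exact growth-rate formula is not stated in the text; a minimal sketch consistent with using a period's initial and final species counts is the compound annual growth rate below, which is an assumption on our part, shown with hypothetical counts.

```python
# The exact growth-rate formula is not stated; this compound annual
# growth rate, computed from a period's initial and final counts, is
# one common choice and is an assumption on our part.

def annual_growth_rate(n_start, n_end, years):
    """Compound annual growth rate of described-species counts."""
    return (n_end / n_start) ** (1 / years) - 1

# Hypothetical counts: 10 species described by 1990, 40 by 2021.
print(f"{annual_growth_rate(10, 40, 2021 - 1990):.2%}")  # ~4.57% per year

# Caveat noted in the Results: with very low initial counts, doubling
# from one year to the next yields a deceptively high rate.
print(f"{annual_growth_rate(2, 4, 1):.2%}")  # 100.00% from just 2 species
```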
We carried out the search for publicly available information regarding the gender and nationality of authors through individual exploration enabled by the Google search engine. This search was based on exploration of the Google search results associated with the names of authors (as self-reported in publications describing Pristimantis species) throughout the web. Priority was given to information associated with institutional affiliation and research endeavors, cross-referenced with professional sites such as ResearchGate, LinkedIn, and Google Scholar. Information regarding the gender of authors was assessed based on a binary understanding of gender and assumed based on gender roles traditionally assigned to their names, physical appearance, and self-reported identity, when available. Information regarding conservation status was obtained from the IUCN Red List of Threatened Species (IUCN 2022). After finalizing data input, we cleaned and homogenized the database to prevent typographical errors as well as duplicate and empty cells. In order to describe the increase in women involved in Pristimantis descriptions over the years, we used generalized linear models (GLMs). These include a quasi-binomial error distribution, necessary given the nature of the data (i.e., dichotomous proportionality data and overdispersion). We defined the proportion of female authors as the response (dependent) variable and year as the predictor (independent) variable, in order to identify the relationship between the proportion of women featured in descriptions and time. We fitted a GLM for the total number of female authors and one specific to each of the five countries with the greatest diversity of Pristimantis (Ecuador, Colombia, Peru, Venezuela, and Brazil). The timeframe for this analysis is from 1970 onwards, corresponding to the period in which women begin to be active in describing new species of Pristimantis. All management, cleaning, and analysis of the database were performed in the statistical software R (R Core Team 2022), utilizing the "tidyverse", "forcats", and "gridExtra" packages.
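The models themselves were fitted in R with a quasi-binomial family; the following is a rough Python analogue (not the authors' code), approximating quasi-binomial dispersion in statsmodels by pairing the binomial family with a Pearson chi-square scale estimate, on simulated placeholder data.

```python
# Rough Python analogue (not the authors' R code) of a quasi-binomial
# GLM: statsmodels' binomial family with a Pearson chi-square scale
# estimate approximates quasi-binomial overdispersion. Data below are
# simulated placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
year = np.arange(1970, 2022)
prop_female = np.clip(0.02 + 0.004 * (year - 1970)
                      + rng.normal(0, 0.03, year.size), 0, 1)
n_authors = np.full(year.size, 12)  # authors per year (placeholder)

X = sm.add_constant(year - 1970)
model = sm.GLM(prop_female, X, family=sm.families.Binomial(),
               var_weights=n_authors)
result = model.fit(scale="X2")  # 'X2' = Pearson chi-square dispersion
print(result.params, result.bse)
```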
Results
Species descriptions of the group began in 1858 with Pristimantis conspicillatus (under the synonymy of Hylodes conspicillatus, Lithodytes conspicillatus, and later Eleutherodactylus conspicillatus). For a century the descriptions of this group remained relatively stable, with fewer than four species described on average every ten years (Fig. 1). From the 1960s, descriptions in the group increased significantly, with several peaks towards the 1970s and a particularly notable one towards 1980. The number of descriptions every five years increased in relation to the range between 1958-1970, with an average of 5-14 descriptions between 1978 and 1996. After this year, the most notable peaks in new descriptions were restricted to the years 1997, 1998, 1999, 2007, 2019, 2020, and 2021. The significant increase in the descriptions of new species in this group intensified after 1978, and the accumulation curve of descriptions seems to continue increasing up to the present date, with a total of 591 formally described species (Fig. 1B). The period with the highest annual rates of Pristimantis description is found between 1958-1989. For Ecuador, the period with the highest description rate lies between , for Colombia between 1958-1989, for Peru between 1990, and for Brazil and Bolivia a peak in the description rate is observed in the periods 2010-2021 and 1990-2021, respectively. However, it is important to consider that, in order to calculate the growth rate, the initial and final values of the numbers of described species are taken into account. With such a low number of total descriptions, if the species double from one year to the next, the rate is higher; however, the total number of species is much lower compared to that of other countries (Table 1).
The distribution of Pristimantis extends from Honduras to Peru; specifically, from Honduras east through Central America, through Colombia and Ecuador, to Peru, Bolivia, northern Argentina, and Amazonian and Atlantic Forest Brazil and the Guianas; Trinidad and Tobago; and Grenada, Lesser Antilles (Frost 2022). Ecuador is the country with the highest number of described species with 212 total descriptions, followed by Colombia (168 descriptions), Peru (107 descriptions), and Venezuela (56 descriptions). Other countries within the range of this genus report fewer than ten descriptions (Fig. 2A). From the 1960s onwards, descriptions occurred predominantly in Ecuador, Colombia, and Peru, with at least two peaks of descriptions higher than the average of ten new species described per year (Fig. 2B). Ecuador exhibits three important peaks in species description (i.e., 1979, 1980, and 2019), with more than 15 new species described in each of these years. In total, 320 researchers are formally recognized as authors and co-authors in publications describing new species of Pristimantis (Suppl. material 1). Almost half (46%) of them have described a single species, while another 46% have described between two and seven species, 6.5% of authors have described 8-30 species, and finally only 0.6% of authors have described more than 50 species (Fig. 3A). Lynch is the author with the highest count of newly described species, with 195 species descriptions to his name. He is followed by Duellman with 82, and E. Lehr and M. Yánez-Muñoz with 36 each. Details on the 20 authors with the highest count of newly described species are provided in Table 2. Across all scientific collections where the type material has been deposited, the four institutions with the highest number of holotypes are the Museum (Fig. 3B). The rest of the collections hold a lower proportion of specimens than those mentioned (Fig. 3B, Suppl. material 1).
We identified eight languages used for the description of new Pristimantis species (Fig. 4A, Table 3), the most common being English and Spanish. English has dominated species descriptions since the 1960s. The number of descriptions published in Spanish started to increase during the 1980s, though it remains lower than the number published in English (Fig. 4A, Table 3). In addition, researchers of more than 20 nationalities have participated as authors and co-authors in descriptions of new species of Pristimantis (Fig. 4B). Researchers from the USA significantly dominated, not only the total number of descriptions, but also the number of annual descriptions until 2002. Since then, the participation of Ecuadorian authors has increased in greater numbers compared with their Colombian, Peruvian, and Venezuelan peers (Fig. 4B). The greatest percentage of authors involved in descriptions of new Pristimantis species are of Ecuadorian nationality (20.9%), followed by US Americans (18.8%) and Colombians (15.3%) (Table 4).
From a gender perspective, 80% of the authors who have participated in the descriptions are male researchers and 20% female researchers (Figs 5, 6). In spite of an increase in the number of descriptions involving women starting in the 1950s, the participation of women in the description process continues to be considerably lower than that of their male counterparts (Fig. 5A). Ecuador is the country with the highest percentage of descriptions that incorporate female researchers, with almost 18.8%, while Colombia, Peru, Venezuela, and Brazil have lower percentages (Fig. 5B). The bias towards male authors is overwhelmingly disproportionate in terms of principal authorship (i.e., first author or corresponding author): while 50% of male authors featured as main authors, only 2% of female authors have held this role (Fig. 6, Table 5). We detected a significant slope between the proportion of female authors who participated in descriptions of all Pristimantis and the years (estimate = 0.03, SE = 0.01, t = 2.2, p = 0.02; Fig. 7). Of the four countries with the highest number of Pristimantis species descriptions, Ecuador was the only one with a significant positive slope between the proportion of female authors and time (estimate = 0.09, SE = 0.02, t = 3.8, p < 0.001), while, on the contrary, Colombia (estimate = 3.7 × 10-3, SE = 0.01, (Fig. 7). In relation to peer-reviewed journals in which the descriptions of new species of the genus have been published, Zootaxa and the Revista de la Academia Colombiana de Ciencias, Exactas, Físicas y Naturales have published the highest number of descriptions, with 20.6% of the total described species. Other journals, such as Herpetologica, ZooKeys, and the Miscellaneous Publications of the Museum of Natural History of the University of Kansas, have published 15.7% of the descriptions of Pristimantis (Fig. 8A). From the beginning of the descriptions of this diverse genus until the first decade of the 2000s, descriptions were based mainly on morphology. Phylogenetic analyses were gradually incorporated into descriptions over the last 12 years, leading to a significant reduction in morphology-only descriptions of new Pristimantis species (Fig. 8B). Regarding the conservation status of Pristimantis species, 24% are categorized by the IUCN Red List as Least Concern, 31% are threatened (i.e., CR, EN, or VU), and 36.8% are Not Evaluated or Data Deficient (Fig. 9A). The country with the highest number of Not Evaluated species is Ecuador (i.e., 39% of all Ecuadorian species). Peru and Venezuela host the highest percentages of species under the Data Deficient category, 31.8% and 42.8% of their total species, respectively. In Ecuador and Colombia, at least 30% of Pristimantis species are under some form of threat (Fig. 9B).
Most prolific authors and countries with the highest numbers of descriptions
The contributions made by Lynch and Duellman to the advancement of Pristimantis taxonomy and systematics since the 1970s are indisputable. Their most significant contributions are large compendiums that include analyses of distribution patterns, advances in systematics, and descriptions of several new species (Lynch 1979; Duellman 1980, 1997; Duellman and Pramuk 1999). Their work initially focused on the eastern slopes of Ecuador (Lynch 1979; Lynch and Duellman 1980), and in the 1990s their interest shifted to the Colombian and Peruvian foothills (Lynch 1998; Duellman and Pramuk 1999). The most representative works and the most productive years for descriptions of new species of this genus were:
• 1979, with "Leptodactylid frogs of the genus Eleutherodactylus from the Andes of southern Ecuador" by J. D. Lynch and the description of 16 new species;
• 1980, with "Eleutherodactylus of the Amazonian slopes of the Ecuadorian Andes (Anura: Leptodactylidae)" by Lynch and Duellman, which includes 12 new species;
• 1997, with "Frogs of the genus Eleutherodactylus in western Ecuador" by Lynch and Duellman;
• 1998, with "New frogs of the genus Eleutherodactylus from the eastern flank of the northern Cordillera Central of Colombia" by Lynch and Rueda-Almonacid and "New species of Eleutherodactylus from the Cordillera Occidental of western Colombia with a synopsis of the distributions of species in western Colombia" by Lynch, in which nine new species are described;
• 1999, with "Frogs of the genus Eleutherodactylus (Anura: Leptodactylidae) in the Andes of northern Peru" by Duellman and Pramuk, describing more than 15 species;
• 2007, with "Three new species of Pristimantis (Anura: Leptodactylidae) from the Cordillera de Huancabamba in northern Peru" and "New eleutherodactyline frogs (Leptodactylidae: Pristimantis, Phrynopus) from Peru", both by Lehr, with three and four new species, respectively;
• 2019, with "Systematics of Huicundomantis, a new subgenus of Pristimantis (Anura, Strabomantidae) with extraordinary cryptic diversity and eleven new species" by Páez and Ron, which added 11 new species to Ecuador almost 40 years after the comparably large joint contributions of Lynch and Duellman (Lynch 1979; Lynch and Duellman 1980);
• 2021, with several descriptions from different authors (see Suppl. material 1).
We assume that the COVID-19 pandemic had a negative effect on field trips in all countries actively working on Pristimantis taxonomy. In addition, pure research activities were probably reduced by the effect of the lockdowns, as has been observed in other lines of research (Donthu and Gustafsson 2020).
If current trends in the growth rate of annual Pristimantis descriptions are maintained, and taking into consideration the entire temporal history of descriptions in each country, the total number of described species is expected to increase over the next 10 years to ~777, with ~299 in Ecuador, ~217 in Colombia, ~153 in Peru, ~73 in Venezuela, ~22 in Brazil, ~11 in Panama, and ~6 in Bolivia.
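A minimal sketch of this kind of projection follows, assuming a simple linear extrapolation of each country's recent description rate; the starting totals and rates below are placeholder values, not the figures underlying the paper's projections (which use each country's full temporal history).

```python
# Sketch: 10-year projection by extrapolating recent description rates.
# All numbers are hypothetical illustration values.
recent_rate = {   # hypothetical mean new species per year (last decade)
    "Ecuador": 10.5, "Colombia": 6.0, "Peru": 5.5,
    "Venezuela": 1.5, "Brazil": 0.6, "Panama": 0.3, "Bolivia": 0.1,
}
described_now = {  # hypothetical current described-species counts
    "Ecuador": 194, "Colombia": 157, "Peru": 98,
    "Venezuela": 58, "Brazil": 16, "Panama": 8, "Bolivia": 5,
}

horizon = 10  # years
for country, rate in recent_rate.items():
    projected = described_now[country] + horizon * rate
    print(f"{country}: ~{projected:.0f} species expected")
```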
Why does Ecuador lead in the number of new species of Pristimantis described?
The contributions of Lynch and Duellman in defining the group as a diverse and highly endemic genus became extremely influential for local researchers, leading to further discoveries mainly in Ecuador, Colombia, and Peru (Lynch 1979; Duellman 1980, 1997). Despite being the smallest of these three countries by area, Ecuador has both the highest number of descriptions of new species of Pristimantis and the greatest richness of the genus, followed by Colombia and Peru, suggesting that the known diversity of the genus is underestimated in the latter countries (Frost 2022; Ron et al. 2022). Of the six annual peaks in Pristimantis descriptions, three took place in Ecuador, highlighting the commitment to taxonomic efforts in the region. In addition, Ecuador's academic engagement with amphibian taxonomy has been more noticeable than that of its neighboring countries: some Ecuadorian taxonomists pursue higher education abroad in the USA and return to the country to develop their lines of research (e.g., Santiago R. Ron, Juan Manuel Guayasamin, Luis Coloma). At the same time, younger generations of Ecuadorian taxonomists have contributed significantly to the taxonomy and systematics of the group despite not being trained abroad (e.g., Mario H. Yánez-Muñoz; Table 2). It is also important to consider that alliances among colleagues, researchers, and institutions have been crucial for the advancement of Pristimantis descriptions in Ecuador. As noted by Costello et al. (2013), South America appears to be experiencing a general increase in the number of taxonomists, which is related to the region being much more diverse than others. The Convention on Biological Diversity has proposed strategies to improve the productivity of taxonomy, including collaborations with both national and international researchers and the requirement to include national institutions when research is carried out by foreign researchers. Ecuador has been successful in this context, as evidenced by its productivity.
John D. Lynch, a US American herpetologist and taxonomist, is credited with the greatest number of described species: a total of 149. After working for 30 years at the University of Nebraska-Lincoln, he became associate professor and curator of herpetology at the Instituto de Ciencias Naturales de la Universidad Nacional de Colombia in 1997. William E. Duellman, a prominent US American zoologist who was Curator Emeritus of the Herpetology Division of the Natural History Museum of the University of Kansas (Coloma and Guayasamin 2022), is the second most productive taxonomist, with 82 formally described species. Next in the ranking are authors such as Edgar Lehr, M. H. Yánez-Muñoz, and Santiago R. Ron, who have taken up the mantle from Lynch and Duellman; together, the three are responsible for 15% of the total number of descriptions in the genus. Edgar Lehr is a herpetologist at Illinois Wesleyan University whose work has focused on amphibian systematics, mainly in Peru (Illinois Wesleyan University 2022). Mario H. Yánez-Muñoz of the Instituto Nacional de Biodiversidad and Santiago R. Ron, professor and curator of the Museum of Zoology QCAZ (Museo de Zoología de la Pontificia Universidad Católica del Ecuador), have focused on the taxonomy and systematics of the genus on the Andean slopes and in the lowlands of Ecuador for the last 15 years (INABIO 2022).
The first two positions among the most prolific taxonomists of Pristimantis are occupied by US American researchers, who together account for 46.8% of the known diversity of the genus. However, among the 20 authors with the most descriptions, 70% are Latin American authors following in the footsteps of Lynch and Duellman. The increase in Latin American researchers interested in the genus arises from the empowerment of local science and biodiversity research, driving further interest as research goals are met. Although the last few years have seen an increase in the number of researchers interested in describing new species of Pristimantis, most of them describe between one and four species, which reflects the number of authors participating in the publications (Suppl. material 1; e.g., Guayasamin et al. 2017).
Species description language
English and Spanish are the dominant languages for descriptions of Pristimantis species. From a temporal perspective, the body of work developed mainly by Lynch and Duellman (Lynch and Duellman 1980, 1997; Duellman and Pramuk 1999; Duellman and Lehr 2009) correlates with language, scientific collections, years of production, etc. Although the number of descriptions published in Spanish has steadily increased since 1980, publication trends have also encouraged an increase in English descriptions during the last two decades. The irony of this relationship is that most of the current researchers of the genus are Spanish-speaking Latin Americans, publishing in English mainly for other Spanish speakers. Even this article is an example of this conundrum. However, our aim is to document the specifics of descriptions in Pristimantis, highlighting patterns and trends in order to convey to a wider audience the information that lies behind a new description in such a diverse group. In addition, as suggested by Ramírez-Castañeda (2020), we include a full Spanish translation of this manuscript as Suppl. material 2.
The publication of results in English is directly related to pressures from academic institutions to publish in high-impact journals, where English is established as the official language of publication. Publishing in this language for non-English speakers can be time-consuming, demanding, and stressful, and some efforts have been made to understand this in the framework of Latin American researchers (Ramírez-Castañeda 2020). Despite English being the common language for communicating science, the incorporation of Spanish for taxonomic groups geographically distributed in Latin American countries could also be a valuable alternative for disseminating results. Therefore, mechanisms to promote the advancement of research in Pristimantis published in Spanish in specialized journals (taxonomy and systematics journals) (e.g., Fig. 7) are necessary.
Author nationalities
The number of Ecuadorian researchers describing new Pristimantis species during the last few years is proportionally higher than that of other nationalities (e.g., Yánez-Muñoz et al. 2010; Guayasamin et al. 2017; Páez and Ron 2019; Reyes-Puig et al. 2020; Ron et al. 2020). It is also worth mentioning that the number of authors per description is higher than in descriptions from the 1980s (e.g., Lynch and Duellman 1980). There are currently descriptions with up to nine authors (e.g., Ron et al. 2020; see Suppl. material 1); the average number of authors per species described in the 1980s was between 1 and 2, while during the last decade the average number of authors participating in descriptions has been between 5 and 6. In many of these multi-author descriptions, most individual authors have very low description rates, while the lead author is an experienced researcher in the taxonomy and systematics of the genus.
In this paper we also identify a gap in the presence of Pristimantis taxonomists in Peru, Venezuela, Brazil, Panama, and Bolivia compared to Ecuador and Colombia. The inclusion and empowerment of local scientists from these countries seems necessary to increase the study of rainfrog species in their territories, which has so far been led mainly by foreign authors (e.g., Duellman and Pramuk 1999; Catenazzi and Lehr 2018; Lehr et al. 2021) (details on authors, years, and descriptions can be downloaded from Suppl. material 1).
Natural history museum collections
Due to the extensive work of Lynch and Duellman, the museums related to their research currently hold the largest amount of Pristimantis type material (i.e., KU and ICN). The QCAZ, an institution with more than 40 years dedicated to safeguarding and researching Ecuadorian biodiversity, has positioned itself as an international benchmark in the management of and open access to scientific collections, not only for amphibians but also for other vertebrates. This institution hosted the holotypes of 44 species of Pristimantis as of June 2022 (Ron et al. 2022; Torres-Carvajal et al. 2022). Other South American institutions that preserve a large number of holotypes are the DHMECN (División de Herpetología del Instituto Nacional de Biodiversidad, Ecuador) and the MUSM (Museo de Historia Natural, Universidad Nacional Mayor de San Marcos). The DHMECN has focused on the taxonomy and systematics of Ecuadorian herpetofauna for the last 15 years, primarily on Pristimantis from the Andean slopes of Ecuador (INABIO 2022). The Herpetology Division of the MUSM has a long record of researching Peruvian herpetofauna since 1946; however, since 2007 its scientific production related to descriptions of new species has decreased (MUSM 2022). In addition, virtual access to its collections and databases is limited compared to that of the Ecuadorian collections (Ron et al. 2022).
Author gender
Historically, zoology has been a male-dominated field. The barriers that limited the number of women in scientific fields before the 20th century led to the study of animals being an almost exclusively male discipline (Slobodian 2021). Herpetology is no exception, although there are now far more women in the field than in the past (Rock et al. 2021). In general, the proportion of female authors is lower, at least in the actively publishing population. In the case of descriptions of new species of Pristimantis, the number of women actively working on this group is low (20% of the authors are women). Consequently, women are poorly represented as principal authors, a position considered to be the most important indicator of an author's role in the description process. Of a total of 66 female authors, only six have been principal authors. It is necessary to reflect on this pattern, as it is consistent with other studies on the underrepresentation of women in science (West et al. 2013; Fox et al. 2018). If current trends continue, female participation in Pristimantis descriptions would reach 50% over the next 25 years (by 2047). This would mean that within this period the proportion of women in the field could potentially equal that of men, considering only authorship and not principal authorship. Ecuador, being the country with the most researchers, also has the largest number of female researchers; however, many of them are thesis students of principal investigators, so their involvement in the taxonomy of the group is temporary. In this article we encourage the active inclusion of female researchers interested in taxonomy, promoting the advancement of the study of Pristimantis diversity. We also strongly encourage male researchers to open the doors of their laboratories to women, in roles not limited to field assistants or specific sections of descriptions, but also as leaders taking on critical positions in particular taxonomic works. The best way to promote this in the future will be through increasing the proportion of women in senior author positions.
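A minimal sketch of how such a trend and its forward projection can be computed follows, assuming an ordinary-least-squares fit of yearly authorship proportions against year; the input proportions are invented placeholders, so the fitted slope and parity year printed here are illustrative rather than the paper's values.

```python
# Sketch: OLS trend of the yearly proportion of female authors and the year
# at which the fitted line would reach parity (0.5). Data are illustrative.
import numpy as np
from scipy import stats

years = np.arange(1980, 2022)
rng = np.random.default_rng(7)
# hypothetical proportions drifting upward with noise
prop_female = np.clip(0.02 + 0.004 * (years - 1980)
                      + rng.normal(0, 0.02, years.size), 0, 1)

fit = stats.linregress(years, prop_female)
parity_year = (0.5 - fit.intercept) / fit.slope
print(f"slope = {fit.slope:.4f}/yr, SE = {fit.stderr:.4f}, p = {fit.pvalue:.2g}")
print(f"fitted line reaches 50% around {parity_year:.0f}")
```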
Peer-reviewed journals
The journals with the highest number of rainfrog descriptions are Zootaxa and the Revista de la Academia Colombiana de Ciencias Exactas, Físicas y Naturales. We can identify two critical issues: the first is the standing of taxonomy today (i.e., the undervaluation of this area of knowledge because it is considered a basic science), and the second is how local, free, open-access journals can climb to better academic positions. This question has no simple answer, since the rise of these journals depends entirely on metrics such as the impact factor (IF), an index that assesses the relative importance of a scientific journal in a particular field. The effect of the IF feeds an endless cycle of not citing journals that lack a medium-to-high IF. Publishing purely taxonomic articles is becoming increasingly difficult, both because of the decrease in the number of researchers interested in the field and because of the number of journals interested in this topic (Wägele et al. 2011). In 2020, Zootaxa was excluded from the Journal Citation Report (JCR) for 2019 (2020 release) by Clarivate Analytics due to excessive use of self-citations. Following a petition that included more than 3900 signatures from research biologists, Clarivate Analytics reversed the decision (Parise Pinto et al. 2021). This misunderstanding of and lack of knowledge about how taxonomy works raise concerns about the difficulty of publishing new species discoveries and about which journals can be chosen for such a task. Zootaxa shows its importance not only because it has included descriptions of more than 25% of the world's known biodiversity (Parise Pinto et al. 2021), but also because, in the case of Pristimantis, it is the main journal in which descriptions have been published.
Description type
It is important to note that, in the past, the vast majority of new species descriptions were based on morphological data (83.7% of total descriptions). During the last 20 years, however, the molecular revolution has transformed the description process, incorporating DNA sequences that allow the phylogenetic positioning of taxa. The continuous increase in available molecular techniques has lowered the costs of implementing them, making them more accessible. Consequently, sequencing has become a common tool for better understanding the ancestor-descendant relationships of living organisms (Malakhov 2013). However, it is worth considering that in the description of new species, sequencing should not replace a detailed taxonomic assessment with morphological characters that allow observers to clearly differentiate one species from another (Guerra-García et al. 2008). Positioning taxonomy as a necessary science to discover and describe biodiversity is essential in its own right. This is evident in the case of Pristimantis, as supported by this review, given the trend of increasing numbers of described species and descriptions of highly cryptic species (Padial and
Global threat categories
Finally, regarding the IUCN global conservation status of Pristimantis, we highlight that a substantial proportion of the species described to date (36.8%) fall into uncertainty categories such as Not Evaluated and Data Deficient. These categories reflect the need for a global evaluation of the species of this genus, on account of its high endemism and species richness; they also reflect the increasing rate of species descriptions. Conservation status assessments are generally carried out globally by the IUCN every four to five years (IUCN 2022), a period during which 10-15 new species of Pristimantis are likely to be described per year. Therefore, global red list assessments will always be one step behind in the evaluation of threat criteria. An alternative to address this could be local and regional red list assessments, which can be updated more readily and may include other variables for categorization (Ortega-Andrade et al. 2021). In addition, many of the new species of Pristimantis have restricted distribution ranges with populations facing various threats (e.g., Brito-Zapata et al. 2021), implying a higher degree of vulnerability. Nevertheless, if global assessments are not conducted for many of these species, the conservation priorities of the genus remain underestimated. In this work we use the categories proposed by the IUCN global red list (IUCN 2022), since the local and regional evaluations of each country are not updated and standardized and therefore cannot be compared. Ecuador has made the most recent effort toward a more complete and up-to-date cataloguing, establishing that 50% of Ecuadorian Pristimantis species are under some degree of threat (Ortega-Andrade et al. 2021). This estimate is considerably higher than the 33.5% reported by the IUCN, likely owing to the proportion of species that are not evaluated or lack sufficient information. All these aspects reinforce the importance of continuing to invest not only in the advancement of taxonomic research on the genus, but also in conservation strategies coordinated among the countries of the region.
"Biology"
] |
Phylogenetic Diversity of Archaea and the Archaeal Ammonia Monooxygenase Gene in Uranium Mining-Impacted Locations in Bulgaria
Uranium mining and milling activities adversely affect the microbial populations of impacted sites. The negative effects of uranium on soil bacteria and fungi are well studied, but little is known about the effects of radionuclides and heavy metals on archaea. The composition and diversity of the archaeal communities inhabiting the waste pile of the Sliven uranium mine and the soil of the Buhovo uranium mine were investigated using 16S rRNA gene retrieval. A total of 355 archaeal clones were selected, and their 16S rDNA inserts were analysed by restriction fragment length polymorphism (RFLP), discriminating 14 different RFLP types. All evaluated archaeal 16S rRNA gene sequences belong to the 1.1b/Nitrososphaera cluster of Crenarchaeota. The composition of the archaeal community is distinct for each site of interest and depends on environmental characteristics, including pollution levels. Since members of the 1.1b/Nitrososphaera cluster have been implicated in the nitrogen cycle, the archaeal communities from these sites were probed for the presence of the ammonia monooxygenase gene (amoA). Our data indicate that amoA gene sequences are distributed in a manner similar to the crenarchaeotal 16S rRNA gene sequences, suggesting that archaeal nitrification processes in uranium mining-impacted locations are controlled by the same key factors controlling archaeal diversity.
Introduction
Metagenomic studies have revealed that Archaea are widely distributed and likely play an important role in a variety of environmental processes, such as chemoautotrophic nitrification [1], carbon metabolism [2], and amino acid uptake [3,4]. The most abundant organisms among the archaeal phyla are Crenarchaeota and Euryarchaeota [2,5]. Crenarchaeota represent more than 75% of the archaeal populations in natural environments [6]. Certain crenarchaeotic groups are thought to be confined to specific environments; for example, group 1.1a consists mainly of aquatic organisms, while the members of group 1.1b are typical soil crenarchaeotes [7].
Worldwide mining and milling activities have introduced high levels of radionuclides and heavy metals (HMs) into soil and aquatic environments. The adverse effects of pollutants on Archaea are not well studied [8,9]. Moreover, only a few studies have investigated archaeal diversity in HM- [10,11] and uranium- (U-) contaminated environments [5,12-14]. Radeva and Selenska-Pobell [13] reported crenarchaeotic 16S rRNA gene sequences in U-contaminated soils of Saxony, Germany, belonging only to the 1.1b group of the phylum, while Reitz et al. [14] identified 1.1a, 1.3b, and SAGMCG-1 crenarchaeotic gene sequences from deeper U-polluted soil horizons. Porat et al. [5] investigated the diversity of archaeal communities from mercury- and U-contaminated freshwater stream sediments by pyrosequencing analysis. They found a higher abundance and diversity of Archaea in mercury- than in U-contaminated sites, where the archaeal sequences belonged to both the Crenarchaeota and Euryarchaeota phyla.
To date, little is known concerning the interactions between archaea and U or HMs. Kashefi et al. [15] reported that the hyperthermophilic crenarchaeote Pyrobaculum islandicum is able to reduce U(VI) to U(IV) under anaerobic conditions at 100 °C. Francis et al. [16] demonstrated that the halophilic euryarchaeote Halobacterium halobium accumulates high amounts of U(VI) as extracellular uranyl phosphate deposits; however, these two organisms are not found in U-contaminated substrata. Later, Reitz et al. [9,17] revealed the capacity of the acidothermophilic Sulfolobus acidocaldarius, an archaeon indigenous to U-contaminated soils and mine tailings, to accumulate intracellular U(VI).
Intensive U mining and milling in Bulgaria were performed between 1946 and 1990 and have caused significant soil and water pollution. U production was stopped by a government decree in 1992, and mines and tailings were technically liquidated and gradually remediated. Nevertheless, their surroundings are still highly contaminated, and further contamination from the compromised remediation of mines and tailings has been recorded.
The aim of this study was to investigate the diversity of the archaeal communities inhabiting environments impacted by U mining and milling activities and, in particular, to reveal the diversity of the archaeal amoA gene. Since U and HM contamination represent an old environmental burden, we expected that the composition and diversity of the archaeal and amoA communities would have stabilized under the selective power of both the contamination level and the environmental characteristics.
Sites and Sampling.
Two locations in Bulgaria were studied: the abandoned mining and milling complex "Buhovo" and the "Sliven" mine, both of which have been classified as areas of high radiological risk by the Bulgarian Agency for Radiobiology and Radioprotection. Samples from the Buhovo mining complex were collected in May 2003 at depths of 20 cm (BuhC) and 40 cm (BuhD). Samples labelled "Sliv" were collected in June 2004 from the "Sliven" mine waste pile at a depth of 40 cm. Five samples from each of BuhC, BuhD, and Sliv were collected under sterile conditions, transported at 4 °C, and stored at −20 °C until use.
Environmental Variables.
The organic matter content of each sample was determined by Tyurin's method, based on oxidation by potassium dichromate [35]. The pH was measured using a portable potentiometer (HANNA pH meter) after the soil samples had been suspended in distilled water (soil : liquid, 1 : 2.5). The concentrations of sulfates and nitrates were determined spectrophotometrically in a 0.1 M CaCl2 soil extract following the methods described by Bertolacini and Barney II [36] and Keeney and Nelson [37], respectively. The concentrations of HMs were measured using an ELAN 5000 Inductively Coupled Plasma Mass Spectrometer (Perkin Elmer, Shelton, CT, USA) in a 1 M HCl solution (1 : 20; soil : 1 M HCl). The results were calculated for oven-dried soil.
DNA Extraction.
Total DNA (>25 kb) was extracted from 3 g samples after direct lysis using the method described by Selenska-Pobell et al. [38], and the DNA subsamples (five per sampling site) were pooled into a representative average sample for further analysis.
PCR Amplification.
Archaeal 16S rRNA genes were amplified from the genomic DNA via seminested PCR using the archaea-specific primer 16S 21-40F (5′-TTCCGGTTGATCCYGCCGGA-3′) and the universal primer 16S 1492-1513R (5′-ACGGYTACCTTGTTACGACTT-3′). Each PCR reaction mixture (20 µL) contained 200 µM deoxynucleotide triphosphates, 1.25 mM MgCl2, 10 pmol of each DNA primer, 1-5 ng template DNA, and 1 U AmpliTaq Gold polymerase with the corresponding 10x buffer (Perkin Elmer, Foster City, CA, USA). The amplifications were performed with a "touchdown" PCR in a thermal cycler (Biometra, Göttingen, Germany). After an initial denaturation at 94 °C for 7 min, the annealing temperature was decreased from 59 to 55 °C over five cycles, followed by 25 cycles each with a profile of denaturation at 94 °C (60 sec), annealing at 55 °C (40 sec), and extension at 72 °C (90 sec). The amplification was completed by a final extension of 20 min at 72 °C. The diluted products of the first reaction were used as templates for the second round of PCR, in which the two archaea-specific primers 16S 21-40F and 16S 940-958R (5′-YCCGGCGTTGAMTCCAATT-3′) were applied [39]. An initial denaturation at 95 °C for 7 min was followed by 25 cycles, each consisting of denaturation at 94 °C (60 sec), annealing at 60 °C (60 sec), and polymerization at 72 °C (60 sec). The amplification was completed by a final extension of 10 min at 72 °C. This seminested PCR format was applied to obtain a sufficient amount of PCR product for the cloning procedure.
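For documentation purposes, the two-round touchdown programme above can be captured as a data structure, e.g., for a scripted thermocycler log; the sketch below is our own encoding of the published steps, and the field names are arbitrary choices rather than any instrument's API.

```python
# Our encoding of the seminested touchdown PCR programme described above.
# Field names are arbitrary; temperatures and times follow the text.
SEMINESTED_PCR = {
    "round_1": {
        "primers": ("16S 21-40F", "16S 1492-1513R"),
        "initial_denaturation": ("94C", "7min"),
        "touchdown": {"cycles": 5, "anneal_from_C": 59, "anneal_to_C": 55},
        "main_cycles": {"cycles": 25,
                        "steps": [("94C", "60s"), ("55C", "40s"), ("72C", "90s")]},
        "final_extension": ("72C", "20min"),
    },
    "round_2": {
        "primers": ("16S 21-40F", "16S 940-958R"),
        "template": "diluted round_1 product",
        "initial_denaturation": ("95C", "7min"),
        "main_cycles": {"cycles": 25,
                        "steps": [("94C", "60s"), ("60C", "60s"), ("72C", "60s")]},
        "final_extension": ("72C", "10min"),
    },
}
```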
16S rRNA Gene Clone Libraries.
One archaeal and one amoA gene clone library were constructed for each of BuhC, BuhD, and Sliv using the pooled products from the PCR reactions. The 16S rDNA amplicons from the five replicates were combined and cloned directly into Escherichia coli using a TOPO TA Cloning Kit (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. The archaeal 16S rRNA gene inserts and amoA gene inserts were subsequently amplified by PCR with the plasmid-specific vector primers M13 and M13rev and then digested (2 h, 37 °C) with the MspI and HaeIII restriction enzymes following the manufacturer's instructions (Thermo Fisher Scientific, USA). Restriction fragment length polymorphism (RFLP) patterns were visualized on 3.5% Small DNA Low Melt agarose gels (Biozym, Hessisch Oldendorf, Germany), and these data were used to group the clones into phylotypes. Representatives of the RFLP types were purified using an Edge BioSystems Quick-Step 2 PCR Purification Kit (MoBiTec, Göttingen, Germany) and then sequenced using the BigDye Termination v.3.1 Kit (Applied Biosystems) and an ABI PRISM 310 DNA sequencer (Applied Biosystems, Foster City, CA, USA). Archaeal 16S rRNA gene fragments were sequenced using the primers 16S 21-40F and 16S 940-958R, while amoA gene fragments were sequenced using the vector primer SP6.
Phylogenetic Analysis.
The sequences obtained were analysed and compared with those in the GenBank database using the BLAST server at the National Centre for Biotechnology Information (NCBI) (http://www.ncbi.nlm.nih.gov). The presence of chimeric sequences in the clone libraries was determined using the program CHIMERA CHECK, available from the Ribosomal Database Project II (release 11.0), and Bellerophon [41]. The sequences were aligned with those of the closest phylogenetic relatives using the Clustal W program [42]. Phylogenetic trees were constructed according to the neighbour-joining method using the BioEdit software package.
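As an illustration of the distance notion underlying both distance-based tree building and the OTU cutoffs defined below, here is a minimal Python sketch of an uncorrected p-distance between two pre-aligned sequences; the sequences are invented, and the study's actual pipeline used Clustal W, BioEdit, and MOTHUR rather than this toy code.

```python
# Toy p-distance between two aligned sequences (gap positions ignored);
# OTU membership at the 0.03/0.06/0.09 cutoffs can be read off directly.
def p_distance(a: str, b: str) -> float:
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

seq1 = "ACGTACGTACGTACGTACGT"   # invented example sequences
seq2 = "ACGTACGAACGTACGTACGA"
d = p_distance(seq1, seq2)
print(f"p-distance = {d:.2f}; same OTU at 0.03 cutoff? {d <= 0.03}")
```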
Data Analysis.
The results were statistically analysed with NCSS97 (NCSS, Kaysville, Utah), and average values are presented. The sampling efficiency and diversity within the archaeal clone libraries were estimated using the MOTHUR software program based on the furthest-neighbour algorithm, and the sequences were grouped into operational taxonomic units (OTUs) [43] at sequence similarity levels (SSLs) of BuhC ≥ 97% (0.03 distance), BuhD ≥ 94% (0.06 distance), and Sliv ≥ 91% (0.09 distance). For each sample, archaeal OTU richness (rarefaction curves, Chao 1, ACE) [44] and diversity (Shannon-Weiner index) [45] estimates were calculated. Statistical analysis of amoA OTUs was not carried out because of the low number of unique gene sequences identified in the BuhC, BuhD, and Sliv clone libraries. The level of pollution was expressed using a toxicity index (TI), calculated for each metal as TI_i = c_i / ED50_i and summed as TI_sum = Σ_i TI_i, where c_i is the concentration of metal i in the substratum (mg kg−1) and ED50_i is the total concentration of that metal causing a 50% reduction in microbial dehydrogenase activity (original ED50 values were taken from Welp [46]).
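A minimal Python sketch of these computations follows; the Shannon-Weiner and bias-corrected Chao 1 formulas are the standard ones, the TI follows the definition above, and the abundance counts and metal values are placeholders rather than the clone-library data.

```python
# Sketch: richness/diversity estimators and the toxicity index, assuming
# standard formulas; OTU counts and metal values below are illustrative.
import math

def shannon(counts):
    n = sum(counts)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)      # singleton OTUs
    f2 = sum(1 for c in counts if c == 2)      # doubleton OTUs
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def toxicity_index(conc_mg_kg, ed50_mg_kg):
    # TI_sum = sum over metals of c_i / ED50_i
    return sum(conc_mg_kg[m] / ed50_mg_kg[m] for m in conc_mg_kg)

otu_counts = [70, 30, 15, 8, 2, 1, 1, 1]        # hypothetical clones per OTU
metals = {"Cu": 300.0, "Zn": 450.0, "Pb": 120.0}
ed50 = {"Cu": 100.0, "Zn": 300.0, "Pb": 500.0}  # hypothetical ED50 values
print(f"H' = {shannon(otu_counts):.2f}, Chao1 = {chao1(otu_counts):.1f}, "
      f"TI_sum = {toxicity_index(metals, ed50):.2f}")
```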
Nucleotide Sequence Accession Numbers.
The sequences reported in this study were deposited in GenBank under the following accession numbers: FM897343 to FM897356 for partial archaeal 16S rRNA gene sequences and FM886822 to FM886831 for crenarchaeotic amoA gene sequences.
Environmental Variables.
Buhovo and Sliven samples differed in their geochemistry and in their levels of U and HM contamination. BuhC and BuhD were sampled (Chromic Cambisols) from different soil depths, while Sliv was sandy gravel material collected from a mine waste pile. The texture of BuhC (20 cm soil depth) was classified as sandy clay (35% silt and 54% clay), whereas BuhD (40 cm soil depth) was classified as clay (38% silt and 60% clay). The bulk density of the Buh soil varied with depth from 1.5-1.6 g cm−3 (20 cm) to 1.7-1.8 g cm−3 (40 cm). Soil porosity was 36-40% (20 cm) and 25-30% (40 cm) (personal communication). There are no data concerning the texture and geochemistry of the Sliv substratum, except the organic matter content (0.3%) and pH (7.5). The organic matter content of the Buh samples was 2.8% for BuhC and 1.6% for BuhD. The total amount of nitrogen decreased from 1.19 g kg−1 (20 cm) to 1.03 g kg−1 (40 cm), while the total amount of phosphorus did not differ significantly between the two soil layers: 0.53 g kg−1 (20 cm) and 0.51 g kg−1 (40 cm). The pH (H2O) of BuhC and BuhD was slightly acidic (6.9 and 6.6, respectively).
The main pollutants were Cu and Zn (BuhC, BuhD, and Sliv), U (BuhC and Sliv), Cr (BuhC and BuhD), As (BuhC and Sliv), Pb (Sliv), and sulfates (BuhD) (Table 1). All sites were highly contaminated, as shown by their individual TI values (heavy metals with TI > 1.0) and by TI_sum, which decreased as follows: Sliv (119.38) > BuhC (15.38) > BuhD (9.91). Moreover, the level of toxicity might actually be higher if the values took into account Mn (BuhC and BuhD) and U (BuhC and Sliv), since their concentrations were also high; however, TI_sum did not include these due to a lack of ED50 data.
Phylogenetic Diversity of Archaeal and amoA Gene Sequences.
A total of 355 archaeal clones (156 from BuhC, 128 from BuhD, and 71 from Sliv) and 229 amoA gene clones (107 from BuhC, 99 from BuhD, and 23 from Sliv) were selected, and their inserts were analysed by RFLP. The sequenced clones were grouped into 19 (archaeal) and 15 (amoA) OTUs, of which 14 and 10 OTUs, respectively, were unique. The rarefaction curves of the archaeal BuhC (3.99 ± 0.24 OTUs), BuhD (6.99 ± 0.07 OTUs), and Sliv (1.99 ± 0.06 OTUs) clone libraries were saturated, indicating that they completely covered the natural archaeal diversity of the samples and that the observed OTUs were a good representation of archaeal community richness (Figure 2). The estimates of archaeal richness (Chao 1, ACE) and diversity (Shannon-Weiner index) predicted the highest values of the indices in BuhD, followed by the BuhC and Sliv clone libraries (Table 2).
Archaeal Community Composition.
The 16S rRNA gene sequences identified in BuhC, BuhD, and Sliv belonged to the 1.1b/Nitrososphaera cluster of Crenarchaeota (Figure 3). Representatives of other crenarchaeotic clades or other archaeal phyla were not detected in this study. OTUs were defined at 3% (BuhC), 6% (BuhD), and 9% (Sliv) differences in 16S rRNA gene sequences. The crenarchaeotic sequences were grouped into two clusters (A and B; Figure 3). Cluster A comprised 16S rRNA gene sequences retrieved mainly from the highly polluted environments of Sliv and BuhC. Cluster B consisted of OTUs from the BuhC and BuhD libraries (226 of 227 clones). The latter cluster was separated into subcluster IB, generated by sequences of the BuhD clone library (36 of 37 clones), and subcluster IIB, which mainly consisted of clones belonging to the BuhC and BuhD libraries (190 of 196 clones).
Composition of the amoA Community.
Phylogenetic analysis of the 10 archaeal amoA OTUs revealed a high sequence identity (98-100%) with ammonia-oxidizing crenarchaeotes. Cluster I of the phylogenetic tree of amoA gene sequences was formed by two OTUs from Sliv, whereas clusters II and III were composed only of OTUs from the Buhovo soil environments (Figure 4). In total, all amoA OTUs were represented by a relatively small number of clones (1-15 clones), except BuhD-A-24 and its analogous OTU from BuhC, which consisted of 55 and 92 clones, respectively.
All retrieved archaeal amoA sequences matched uncultured crenarchaeotes. Protein sequences derived from the same samples were also analysed, and the data validated our DNA results (data not published). The protein sequences exhibited 96-100% similarity to the closest matching GenBank sequences retrieved from terrestrial, estuarine, and hot spring environments.
Discussion
The BuhC, BuhD, and Sliv archaeal communities appear to be composed solely of members of the soil-freshwater-subsurface group (1.1b) of Crenarchaeota, which was recently designated by Bartossek et al. [49] as the Nitrososphaera cluster. The presence of Crenarchaeota at these sites was not surprising, since these organisms are widespread [4,7,50], even in environments highly polluted with U and HMs [5,7,13,51]. The selection and propagation of only 1.1b Crenarchaeota in Buhovo and Sliven probably proceeded under the selective pressure of U and HM pollution. Supporting this notion, Geissler et al. [52], Reitz et al. [14], and Radeva et al. [53] reported a strong reduction in archaeal diversity and a shift from Crenarchaeota 1.1a to 1.1b in soil samples supplemented with uranyl nitrate. The adverse effects of U were also confirmed by Porat et al. [5], who found low archaeal diversity in U-/nitrate-contaminated sediments of the Oak Ridge stream (TN, USA).
The importance of the substratum and of the level of pollution for the pattern of crenarchaeotic distribution is evident from the archaeal phylogenetic tree (Figure 3), where the OTUs are grouped into one large cluster (B) based on 16S rRNA gene sequences from Buhovo soil (9 of 10 OTUs/226 of 227 clones) and another, smaller cluster (A) formed by OTUs from the most polluted environments, Sliv and BuhC (4 of 6 OTUs/114 of 128 clones). The two substrata studied (Buh soil and Sliv sandy gravel matter) share no common 16S rRNA gene sequences.
The Buh soil environments harbour more complex and more diverse archaeal communities: 84% of OTUs and 80% of archaeal clones are from Buh, which supports the data of Ochsenreiter et al. [7] indicating that the 1.1b crenarchaeotic clade is a typical "soil lineage." Archaeal diversity in Buh soil is relatively low, varying from 0.97 (BuhC) to 1.51 (BuhD), and is depth dependent. The archaeal communities of the two soil depths include both common (BuhC-Ar8, BuhC-Ar18, BuhC-Ar44, BuhC-Ar48, and BuhD-Ar111) and depth-specific 16S rRNA gene sequences, the latter represented by a small number of clones (1-15 clones). The dominant OTU BuhC-Ar8 is equally distributed across soil depths, comprising 45% and 48% of the clones retrieved from BuhC and BuhD, respectively. Moreover, it is closely affiliated (99% SSL) with the uncultured crenarchaeote Gitt-GR-74 (AJ535122), which was found in uranium mill tailings in Saxony, Germany [13].
A trend of depth dependency in archaeal distribution was also observed in other studies, which indicate that Crenarchaeota are more abundant in deeper soil layers [54-57] and that archaeal : bacterial ratios increase with soil depth [2]. In those studies, the increasing abundance of crenarchaeotes correlated with decreasing nutrient (organic carbon and inorganic nitrogen) and oxygen concentrations in deeper soil layers. In agreement with these findings, we can speculate that in BuhD the diversity of Crenarchaeota is favoured by the nutritional and oxygen status of this soil depth and by its low levels of U and HM pollution. The comparatively opposite conditions in the BuhC soil layer (higher organic matter content, higher aeration in the upper soil layer, and higher levels of U and HMs) limit its archaeal diversity mainly to three dominant OTUs (BuhC-Ar8, BuhC-Ar18, and BuhC-Ar48), which harboured 93% of the clones in the BuhC clone library.
The sandy gravel substratum of Sliv and its high level of pollution make this environment very unfavourable for archaeal proliferation. The inhabitants of Sliv are represented by two main OTUs (Sliv-Ar32 and Sliv-Ar22) that comprise 99% of the clones. All archaeal 16S rRNA gene sequences retrieved from Sliv correspond to uncultured crenarchaeotic matches, except Sliv-Ar32, which exhibits 99% similarity with Candidatus Nitrososphaera gargensis Ga9.2. According to Spang et al. [58], Ca. N. gargensis is well adapted to HM-contaminated environments and encodes a number of HM resistance genes that convey the genetic capacity to respond to environmental changes. The close similarity of Sliv-Ar32 to Gitt-GR sequences (99% SSL) recovered from U mill tailings in Germany also confirms the high tolerance of Sliv-Ar32 towards U and HM pollution. The other, more abundant OTU is Sliv-Ar22 (40 clones); its dominance in the Sliv clone library can be explained by both its tolerance of high pollution levels and the ability of the Sliv-Ar22 archaeon to colonize rocky substrata. This sequence exhibits high similarity to the uncultured crenarchaeote QA4 (99% SSL), which was recovered from quartz rocks located in the high-altitude tundra of Central Tibet [59].
The phylogenetic analysis of the archaeal amoA gene sequences retrieved from BuhC, BuhD, and Sliv reveals that the Crenarchaeota inhabiting these locations harbour ammonia oxidizers (Figure 4). The pattern of amoA gene sequence distribution is similar to that of the Crenarchaeota, with the smallest number of OTUs in the most unfavourable environment, Sliv (2 OTUs/23 clones), followed by the highly polluted BuhC (5 OTUs/107 clones) and the relatively lightly polluted BuhD (6 OTUs/99 clones). The high number of amoA OTUs in BuhD reflects the highest archaeal diversity at this depth and is due to favourable conditions (low organic matter, nitrogen, and oxygen content and a highly clayey soil texture) that stimulate not only archaeal diversity but also the diversity of ammonia-oxidizing archaea. To date, studies [33,60-63] investigating the environmental factors that shape amoA gene diversity in oceans, sediments, and soils have identified these parameters as key to the proliferation of ammonia-oxidizing archaea.
Forty-six percent of the archaeal amoA OTUs, comprising 73% of the clones retrieved in this study, affiliate with archaeal amoA gene sequences obtained from freshwater ecosystems [64,65] and wastewater treatment plants [66]. These belong to the "soil and other environments" cluster, as proposed by Prosser and Nicol [67]. The other amoA OTUs (all from BuhD and BuhC) exhibit gene sequences closely related to those retrieved from soil environments such as bulk [60] and arable (FN691264, HM803786) soils, grassland (HQ267736, EU671839), and semiarid soil (JQ638739), which also belong to the "soil and other environments" cluster [67].
BuhC and BuhD are very different environments with regard to soil texture, nutrients, oxygen (low soil porosity), and pollution status. Nevertheless, the two environments are inhabited by ammonia-oxidizing archaea sharing the same dominant amoA gene sequence: BuhD-A-24 comprised 23% (BuhD) and 41% (BuhC) of all retrieved amoA clones. It is likely that the exclusive domination of BuhD-A-24 at both Buhovo soil depths results from the adverse effects of pollution, which reduce archaeal amoA diversity and select for only a few resistant gene sequences. We did not detect novel archaeal amoA clusters that would indicate the existence of special U- and HM-resistant ammonia-oxidizing archaea at the sites studied. This underlines the widespread distribution of ammonia-oxidizing archaea and the capacity of some species to tolerate high levels of U and HMs.
Conclusions
Phylogenetic analysis revealed that all archaeal 16S rRNA gene sequences assessed in this study belong to the 1.1b/Nitrososphaera cluster of Crenarchaeota. The diversity of the crenarchaeotic communities inhabiting the three sites of interest was very low, especially in the highly U- and HM-polluted sandy gravel environment of the Sliv mine. The archaeal communities of the Buh and Sliv mines were distinct at each site and did not harbour common gene sequences. We did not detect novel crenarchaeotic or amoA gene clusters, indicating that the polluted environments of Buh and Sliv are inhabited by typical archaeal soil lineages. It is likely that these archaeal soil lineages were selected by the multifactorial nature of the local environment, resulting in the development of tolerance of the indigenous archaea to high U and HM pollution. The archaeal amoA gene sequences detected in BuhC, BuhD, and Sliv suggest that ammonia-oxidizing archaea participate in nitrogen cycling in environments highly polluted with U and HMs. This study will be helpful for understanding archaeal and ammonia-oxidizing archaeal diversity in soils polluted with U and HMs.
"Biology"
] |
Behavioral Analysis and Immunity Design of the RO-Based TRNG under Electromagnetic Interference
True random-number generators based on ring oscillators (RO-based TRNGs) are widely used in the field of information encryption because of their simple structure and compatibility with CMOS technology. However, radiated or conducted electromagnetic interference can dramatically deteriorate the randomness of the output bitstream of the RO-based TRNG, which poses a great threat to security. Traditional research focuses on the innovation of attack methods and the detection of circuit states; there is a lack of research on the interference mechanism and anti-interference countermeasures. In this paper, the response of the RO array to electromagnetic interference is analyzed, and the concept of synchronous locking is proposed to describe the locking scenario of multiple ROs. On the basis of synchronous locking, the RF immunity of the RO-based TRNG is modeled, which explains the degradation mechanism of bitstream randomness under RFI. Moreover, a design method of gate-delay differentiation is presented to improve the RF immunity of the RO-based TRNG at low cost. Both transistor-level simulation and board-level measurement prove the rationality of this scheme.
Introduction
Integrated circuits under electromagnetic interference (EMI) are widely studied. The subject can be divided into the propagation of electromagnetic-interference waves and the interaction between electromagnetic interference and integrated circuits. The mechanism of an integrated circuit's response to EMI can inspire the design of anti-interference circuits, which prevent interference damage. As typical integrated circuits, true random-number generators based on ring oscillators (RO-based TRNGs) are widely used to generate secret keys in the field of security encryption because of their simple circuit structure and compatibility with CMOS technology.
RO-based TRNGs were first proposed by Sunar et al. [1]. Their randomness is generated by sampling RO oscillating signals whose random jitter is caused by physical noise. First, multiple oscillating signals are generated by an RO array. An exclusive OR (XOR) operation is performed between every two oscillating signals, and the operations are repeated until there is only one output. Then, a D flip-flop is used to sample the XOR output, as shown in Figure 1a. Because jitter accumulates with time, the sampling signal f_s should be of a lower frequency. Wold et al. [2] proposed an improved RO-based TRNG: compared with the original structure, only a D flip-flop is added at the output end of each RO, as shown in Figure 1b. The output bitstream of this TRNG directly passes the randomness tests [3] without post-processing. Therefore, the improved structure was adopted in this paper to study electromagnetic interference.
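As a behavioural illustration of this sampling principle, the Python sketch below models each RO as an ideal square-wave source whose half-periods carry Gaussian jitter, samples each one with a D flip-flop at f_s, and XORs the sampled bits. All frequencies and the jitter magnitude are illustrative assumptions; no transistor-level effects are modelled.

```python
# Behavioural sketch of an RO-based TRNG: jittered ROs sampled by DFFs,
# outputs XORed into one bitstream. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
f_s = 1e6                                  # DFF sampling frequency (Hz)
ro_freqs = [505e6, 512e6, 498e6, 521e6]    # hypothetical free-running ROs
sigma_rel = 0.01                           # relative jitter per half-period
n_bits = 256

sampled = []
for f0 in ro_freqs:
    half = 1.0 / (2 * f0)
    # enough jittered half-periods to cover the whole sampling window
    n_edges = int(2 * f0 * n_bits / f_s * 1.1) + 10
    edges = np.cumsum(half * (1 + rng.normal(0, sigma_rel, n_edges)))
    t_samples = np.arange(1, n_bits + 1) / f_s
    # logic level = parity of the number of edges before each sample time
    sampled.append(np.searchsorted(edges, t_samples) & 1)

bits = np.bitwise_xor.reduce(sampled)
print("".join(map(str, bits[:64])))
```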
Published studies show that RO-based TRNGs are quite vulnerable to EMI: the randomness of the output bitstream is severely damaged when the ROs are locked and jitter is suppressed. Markettos and Moore [4] first injected a continuous wave into the power wire of an RO-based TRNG and reduced the keyspace of a secure microcontroller containing an RO-based TRNG from 2^64 to 3300. Bayon et al. [5,6] implemented a contactless electromagnetic-wave attack on an RO-based TRNG. Compared with the conducted injection-attack method in [4], it was not limited by the low-pass filtering effect of the power-supply pin, and the interference-frequency range was greatly extended. The RO-based TRNG in [6] was implemented on an FPGA chip containing up to 50 ROs, so the result is more universal and persuasive. Osuka et al. [7] coupled a sinusoidal EMI wave to a power cable through a current probe, and the power cable transmitted the interference into the chip. This enabled long-distance interference injection that destroys the randomness of an RO-based TRNG while leaving no evidence of invasion. These conducted and radiated interference methods can lock the ROs and destroy the entropy source; however, the injection-locking conditions and the mechanism of randomness degradation are not clear. Some anti-interference methods at the algorithm level have been proposed [8,9], but countermeasures at the circuit-design level are still lacking. The degradation of bitstream randomness is highly relevant to the status of the ROs. A ring oscillator (RO) is composed of an odd number of inverters, the minimal delay units in a digital circuit. It is widely used in noise-waveform detection [10] and on-chip process sensing [11] because of its high sensitivity. Studies on the injection locking of ROs under EMI already exist. The authors of [12-15] injected interference from the signal port or the tail-current port of an RO, which differs from the interference-injection scenario of the RO-based TRNG; the ROs in those papers were also not composed of complementary metal-oxide-semiconductor (CMOS) inverters. Mureddu et al. [16] built RO circuits on an FPGA board and injected interference coupled from a delay line, carrying out detailed experiments on different series of FPGAs. Tao Su et al. [17,18] directly injected interference into the power-supply port of CMOS inverter-based ROs and discovered and explained the injection-pulling and injection-locking phenomena. Among the above studies, the power-side injection method is closest to the real interference situation. The RO-based TRNG contains an array of ring oscillators, in which the RO stages may be the same or different; this requires us to focus on the locking behavior of the whole RO array.
This paper supplements and discusses aspects not considered in existing research and is arranged as follows. Section 2 introduces the experimental locking conclusions for an RO with EMI on the power supply and puts forward the concept of the locking region; the theory of synchronous locking is proposed to explain the overall behavior of a locked RO array. In Section 3, the degradation mechanism of the TRNG bitstream is analyzed and verified by simulation; to improve electromagnetic immunity, the electromagnetic immunity of the RO-based TRNG is modeled, and a design scheme, gate-delay differentiation, is proposed. Section 4 contains the PCB measurement, in which the gate-delay-differentiation scheme is verified to be effective. Section 5 concludes the paper.
Conical Locking Region
According to our previous research, there are two behaviors of an RO with EMI on the supply, as shown in Figure 2a: injection pulling [17] and injection locking [18]. In Figure 2b, injection pulling means that the oscillating frequency f_RO of the RO shifts towards a higher or lower frequency as the sinusoidal interference amplitude A_RFI increases; whether the shift is upward or downward depends on the value of f_RFI. In Figure 2c, injection locking means that the RO leaves the frequency-shifting state and f_RO is locked to a constant value; changing A_RFI no longer affects f_RO. f_RO_free and f_RO_locked denote the free-oscillating frequency and the locked frequency of the RO, respectively. With massive and detailed simulation and measurement verification, an empirical injection-locking condition was summarized [18], which can be expressed by the relationship between the interference period T_RFI and the average gate-delay τ_ave:

T_RFI = m · τ_ave, (1)

where m is a positive integer indicating the locking mode. The locking strength and locking range of the RO are largest in the mode where m equals two. In the following discussion we default to m = 2 because of its high representativeness; if the m = 2 mode has high immunity, modes with smaller locking ranges are more robust. The RO consists of an odd number N of inverter stages, so its oscillation period is

T_RO = 2 · N · τ_ave. (2)

According to Equations (1) and (2), the relationship between f_RFI and f_RO_locked when the RO is locked is deduced as

f_RO_locked = m · f_RFI / (2N). (3)

The larger f_RFI is, the larger f_RO_locked is. The response curves of the RO to A_RFI under different f_RFI are shown in Figure 3. As shown in Equation (3), the theoretical f_RO_locked can be calculated from f_RFI. When the offset of f_RO_locked from f_RO_free is small, the RO can be locked by small interference; the larger the offset, the larger the A_RFI needed to lock the RO. When even the maximal A_RFI cannot lock the RO, the response curve is determined only by the pulling effect. Therefore, we obtain a conical locking region from which we can infer the A_RFI and f_RFI needed to lock the ROs; any combination of A_RFI and f_RFI outside the region cannot lock the RO. To prove the existence of the conical locking region, four 41-stage ROs were built with the SMIC-65, SMIC-130, SMIC-180, and TSMC-180 nm process libraries, respectively. The response curves of f_RO to A_RFI under different interference frequencies f_RFI were simulated by HSPICE, and the locking regions were obtained, as shown in Figure 4. The f_RO_free values of the four ROs were 1085, 555, 259, and 257 MHz, respectively. The locking region was approximately conical. The locked frequency with A_RFI equal to 0.5 V and the interference frequency were recorded and converted into the average gate-delay and the interference period, as shown in Figure 5. The dotted line represents the empirical relationship in (1) with m equal to two. The ROs of different CMOS processes occupy different positions, with the advanced SMIC-65 nm process having the minimal average gate-delay. When RO locking occurs, the average gate-delays of all ROs increase linearly with the interference period, and the curves coincide with the dotted line. When an RO is released from the locking state, it leaves the linear relationship and is dominated by injection pulling, as shown by the SMIC-180 nm and TSMC-180 nm curves in Figure 5. The simulation results are consistent with the theoretical analysis.
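The numerical consistency of these relations can be checked directly; the short Python sketch below applies Equations (1)-(3) to the SMIC-65 nm example reported in the next section (f_RFI = 43,050 MHz, 41/43/45 stages, m = 2).

```python
# Check of the locking relations: f_RO_locked = m*f_RFI/(2N) and
# tau_ave = T_RO/(2N) = T_RFI/m. Uses the SMIC-65 nm example values.
f_rfi_mhz, m = 43050.0, 2
for n_stages in (41, 43, 45):
    f_locked_mhz = m * f_rfi_mhz / (2 * n_stages)        # Eq. (3)
    tau_ave_ps = 1e6 / (2 * n_stages * f_locked_mhz)     # from Eq. (2)
    print(f"N={n_stages}: f_locked = {f_locked_mhz:7.1f} MHz, "
          f"tau_ave = {tau_ave_ps:.2f} ps")
# tau_ave comes out as 11.61 ps for every stage count, i.e., locking is
# stage-independent and satisfies T_RFI = m * tau_ave (Eq. (1)).
```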
Synchronous Locking of the Ring Oscillator Array
In the section above, the empirical Equation (1) gives the interference condition needed to lock an RO. This locking condition differs from traditional LC-oscillator injection locking: it is an integer-multiple relationship between the period of the interference signal and the average gate-delay of the RO, whereas traditional LC locking is a harmonic relationship between frequencies. The injection-locking condition does not contain any RO stage information. This means that, as long as the gate-delay or the process, voltage, and temperature (PVT) conditions of the ROs are consistent, locking always occurs. This is a novel and interesting locking phenomenon.
To verify that locking is independent of stage count, 41-, 43-, and 45-stage ROs were implemented in each of the SMIC-65, SMIC-130, SMIC-180, and TSMC-180 nm processes and simulated by HSPICE. With an interference frequency of 43,050 MHz, the response curves of the ROs to A_RFI were obtained and are shown in Figure 6. The f_RO_locked and τ_ave values were recorded and are listed in Table 1; to emphasize the stage independence of RO locking, the τ_ave values are marked in red. For the SMIC-65 nm simulation case, the free-oscillating frequencies of the three ROs were 1084, 1034, and 988 MHz, which were locked to 1050, 1001, and 957 MHz, respectively. The calculated average gate-delays were all 11.61 ps, which satisfies Equation (1) with m equal to two. Therefore, the curves of oscillating frequency vs. interference amplitude in Figure 6(a1) can be transformed into the curves of average gate-delay vs. interference amplitude in Figure 6(a2), and all the curves coincide. The SMIC-130, SMIC-180, and TSMC-180 nm simulation results show the same characteristic: if the intrinsic gate-delays of the ROs are the same, they are pulled and locked to the same value. This locking mechanism is therefore independent of the RO stage count. Now consider an RO-array scenario in which the average gate-delay of each RO is the same. From the above analysis of stage-independent locking, as long as A_RFI can lock one RO of the array, it locks the other ROs regardless of stage discrepancy; in other words, synchronous locking occurs. For an RO array on an actual chip, the average gate-delay of each RO differs slightly due to fluctuations of the PVT conditions, but this subtle difference can be overcome by intentional EMI. Finally, under a suitable interference condition, all ROs in the array are synchronously locked.
As shown in Figure 7, 22,550 MHz sinusoidal interference was injected into the power-supply port of a 41-stage RO. To simulate the effect of process deviation, the widths of the NMOS and PMOS in the inverter were intentionally varied from the size of the SMIC-130 nm inverter standard cell INVX1 (w_n = 0.42 µm, w_p = 0.64 µm) up to w_n = 1.62 µm, w_p = 2.47 µm. The response curves for each size were recorded in simulation. Figure 7a shows that the free-oscillating frequencies of the ROs with different inverter sizes are slightly different. With increasing interference amplitude, all ROs can be locked to 550 MHz, which conforms to the conclusion of Equation (3). The more the oscillating frequency deviates from 550 MHz, the greater the interference amplitude required for locking. The response curves of average gate-delay to interference amplitude are shown in Figure 7b. The difference in gate-delay caused by the intentional process deviation is overcome by the electromagnetic interference, and the oscillator system is synchronously locked.
RO-Based TRNG Randomness-Degradation Mechanism
Both radiated and conducted interference can suppress the jitter of ROs and degrade the randomness of the RO-based TRNG. As shown in Figure 8, an uncertain logic value can be sampled when the oscillating signal of the RO jitters; otherwise, the D flip-flop samples a deterministic logic value, which means that the output is not random. With an RO array, the jitter range covers the whole sampling period [1], so good-quality random bitstreams can be generated by the TRNG. When the RO-based TRNG was disturbed by intentional EMI, the ROs were locked and the jitter was suppressed [7], as shown in Figure 8d. The randomness of the output bitstream therefore deteriorated.
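To make this mechanism concrete, here is a toy model of a single RO sampled by a D flip-flop; all numbers are illustrative and not taken from the paper. With period jitter the sampled bits are unpredictable, while with suppressed jitter the sampled sequence becomes (near-)periodic:

```python
import random

def trng_bits(n_bits, period_ps, jitter_ps, sample_ps=1000.0):
    """Toy model of one RO sampled by a D flip-flop: accumulate jittery
    half-periods and sample the RO level every `sample_ps` picoseconds."""
    bits, t_edge, level = [], 0.0, 0
    t_sample = sample_ps
    while len(bits) < n_bits:
        # next toggle of the RO output, with Gaussian period jitter
        t_edge += period_ps / 2 + random.gauss(0.0, jitter_ps)
        while t_sample < t_edge and len(bits) < n_bits:
            bits.append(level)
            t_sample += sample_ps
        level ^= 1
    return bits

free = trng_bits(4000, period_ps=950.0, jitter_ps=12.0)   # jittery RO
locked = trng_bits(4000, period_ps=950.0, jitter_ps=0.5)  # jitter suppressed
print("free:  ", "".join(map(str, free[:38])))
print("locked:", "".join(map(str, locked[:38])))  # repeating pattern visible
# The locked bitstream is nearly periodic; the free-running one is not.
```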
The reason for this disastrous result is that the injection-locking condition of an RO is independent of the stage count and related only to the gate-delay. Within the same CMOS process, the gate-delays are consistent, so all ROs must be synchronously locked once locking occurs. Even existing process deviation is overcome by the EMI to achieve synchronization, as shown in Section 2.2.
To prove the correctness of the above explanation, a specific RO-based TRNG circuit was built with an SMIC-130 nm process library and simulated in HSPICE; the array consisted of two ROs at each of 7, 9, 11, and 13 stages. To simulate the noise environment of an actual circuit, a −28 dBm Gaussian white-noise voltage was generated in MATLAB and superimposed onto the DC power supply of the circuit. The noise voltage was essential to make the oscillating signal jitter.
The simulated oscillation parameters are shown in Table 2: under EMI, the jitters of the four RO types were suppressed by 60%, 64%, 67%, and 70%, respectively. This is also visible in the period-distribution histograms in Figure 9b,d. Figure 9a,c shows the bitstreams output by the TRNG: black and white pixels represent zero and one, respectively, and the binary images are obtained by scanning the bitstream from left to right and from top to bottom. In the no-EMI case, the binary image is arranged in a disordered manner, which indicates a certain degree of randomness. When all ROs were locked by the EMI, the binary image tended to be completely black, and the occurrence probabilities of zero and one were not uniform. The bitstream could not even pass the first test of the NIST Statistical Test Suite [3], the monobit (frequency) test, which checks whether zeros and ones appear in roughly equal numbers.
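The monobit test itself is straightforward; the following is a direct implementation of the NIST SP 800-22 frequency (monobit) test:

```python
import math

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that zeros and ones are equally likely in the bitstream."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for one, -1 for zero
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2.0))

balanced = [i % 2 for i in range(10000)]    # equal zeros and ones
biased = [0] * 9000 + [1] * 1000            # mostly zeros, as when locked
print(monobit_p_value(balanced))            # ~1.0, passes (>= 0.01)
print(monobit_p_value(biased))              # ~0,   fails (< 0.01)
```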
Immunity Modeling and Gate-Delay Differentiation
To prevent the randomness of the RO-based TRNG from deteriorating sharply under synchronous locking, this paper proposes the design method of gate-delay differentiation: the average gate-delay of each RO in the TRNG is intentionally made different, so that the locking regions of the ROs are staggered with respect to each other. This avoids all ROs being locked simultaneously in most EMI situations.
An array with four ROs was considered, composed of inverters with different gate-delays. According to the conclusion of Section 2.1, each RO has a conical locking region. Due to the differences in gate-delay, the conical locking regions of the four ROs are staggered, as shown in Figure 10a. This scattered distribution benefits the EMI immunity of the RO-based TRNG (a toy numerical sketch of the staggering follows the list):
• To lock all four types of RO, the available f_RFI and A_RFI can only be selected in the red area, i.e., the overlap of the four conical locking regions. The required A_RFI must be greater than A_3, and the admissible range of f_RFI is very narrow;
• To lock three types of RO, the available f_RFI and A_RFI can only be selected in the yellow area. This requires A_RFI > A_2, and the available range of f_RFI is also small;
• To lock two types of RO, f_RFI and A_RFI can only be selected from the gray area. This requires A_RFI > A_1, and the selection range of f_RFI is large. However, with only two types of RO locked, the randomness does not degrade much;
• To lock one type of RO, f_RFI and A_RFI can only be selected from the light-green area, which is easy to achieve. The influence on randomness of locking a single RO type is negligible.
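The toy sketch below illustrates the staggering numerically. The locking rule and the sensitivity constant K are hypothetical stand-ins for the conical regions of Figure 10a, not values from the paper; the gate-delays reuse the differentiated values quoted later for the SMIC 65-nm case:

```python
# Toy model of staggered conical locking regions, in picosecond units.
# Hypothetical locking rule: RO type i (gate-delay tau_i) locks if
# 1/(m * f_rfi) falls within tau_i +/- K * A_rfi for some integer m.
K = 0.5  # ps of locking half-width per volt -- illustrative only

def locked_types(taus_ps, f_rfi_ghz, a_rfi_v, m_max=4):
    """Return the indices of RO types locked by interference (f_rfi, A_rfi)."""
    locked = []
    for i, tau in enumerate(taus_ps):
        for m in range(1, m_max + 1):
            tau_rfi = 1000.0 / (m * f_rfi_ghz)   # implied gate-delay in ps
            if abs(tau - tau_rfi) <= K * a_rfi_v:
                locked.append(i)
                break
    return locked

taus = [11.90, 12.25, 12.59, 12.94]   # differentiated gate-delays (ps)
for a in (0.05, 0.2, 0.5, 1.0, 3.0):
    print(a, "V ->", locked_types(taus, f_rfi_ghz=41.7, a_rfi_v=a))
# With small A_rfi only the nearest type locks; locking all four requires a
# much larger amplitude -- the staggering described in the bullet list above.
```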
In a traditional RO-based TRNG under EM attack, the gate-delays of the four RO types are the same, and there is no staggering between the conical locking regions, as shown in Figure 10b. Synchronous locking of all ROs is then easy to realize, which is destructive to the RO-based TRNG.
Figure 10 shows the locking regions in terms of frequency. Similarly, the average gate-delay τ_ave, which can be converted from f_RFI, also has a conical locking region. Analyzing the locking of the RO array in terms of τ_ave is clearer and more appropriate, because it discards the stage count as an independent factor and makes the essence of the locking easier to see. In the following simulations and tests, we use the τ_ave-type conical locking region for illustration.
To improve immunity, the gate-delay difference between the ROs must be increased; the ideal situation is that only one type of RO locks under any interference condition. To differentiate gate-delays, changing the channel width of the transistors in the inverter, the load of each inverter, or the interconnect length is feasible, owing to simplicity of implementation and low cost. The implementation can be chosen flexibly according to the situation: for an RO-based TRNG on an application-specific integrated circuit (ASIC), changing the channel width of the transistors is the first choice; for an RO-based TRNG on an FPGA, the load of each inverter and the interconnect length are appropriate. For a specific CMOS process, the response of the RO oscillating frequency to different capacitive loads and transistor sizes can be simulated in advance; based on these response data, suitable capacitive loads and transistor sizes can be selected and assigned to the ROs (a selection sketch follows). Increasing the number of ROs with distinct average gate-delays also matters, as it reduces the proportion of ROs in the locked state.
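A minimal sketch of this selection step follows; the delay-vs-load response table is hypothetical and would in practice come from the pre-simulation of the chosen CMOS process:

```python
# Sketch: assign capacitor loads to RO types so their average gate-delays
# are evenly spread. The response table below is a hypothetical stand-in
# for a pre-simulation sweep of the chosen process.
response = [(0.00, 11.90), (0.02, 12.09), (0.04, 12.28), (0.06, 12.47),
            (0.08, 12.66), (0.10, 12.85), (0.12, 13.04)]  # (load fF, delay ps)

def load_for_delay(target_ps):
    """Pick the sweep point whose simulated delay is closest to the target."""
    return min(response, key=lambda p: abs(p[1] - target_ps))

targets = [11.90 + k * 0.35 for k in range(4)]   # evenly spaced delay targets
for t in targets:
    load, delay = load_for_delay(t)
    print(f"target {t:.2f} ps -> load {load:.2f} fF (delay {delay:.2f} ps)")
```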
We therefore verified the design method of gate-delay differentiation with HSPICE, using the RO-based TRNG circuit of Section 3.1. Two implementation strategies for differentiating the gate-delay are shown in Table 3: Case 1 changes the inverter size, and Case 2 changes the capacitive load of the inverter. Although increasing the inverter size and capacitive load slows the circuit down and causes additional power consumption, these shortcomings are tolerable compared with the goal of improving immunity. For Case 1, the parameters of the four RO types were set as listed in Table 3, which led to different gate-delays. The simulation results are shown in Figure 11. In Figure 11a, the conical locking regions are staggered: only when A_RFI > 0.36 V could all ROs in the array be locked. To observe the output bitstream of the RO-based TRNG, representative interference settings were selected to lock 0, 1, 2, 3, and 4 types of RO, respectively; see Table 4 for the interference and jitter information. The jitters of the locked ROs, marked in red, were much lower than those of the unlocked ROs.
Interference thus has two kinds of consequences: one is suppressing jitter, as discussed above; the other is intensifying it. In the unlocked state (the black data), the larger A_RFI is, the more severe the jitter; in the locked state (the red data), the larger A_RFI is, the more strongly the jitter is suppressed.
Bitstream binary images were obtained, as shown in Figure 11(b1-b5). When three or more types of RO were locked, the output bitstreams showed strong regularity, and the randomness of the RO-based TRNG was greatly reduced; however, the required A_RFI was also large. When only one or two types of RO were locked, the randomness hardly changed. This agrees well with our theoretical analysis. To quantify the randomness, the bitstreams were tested with the NIST suite. When three or four types of RO were locked, the p-value was far less than 0.01, which shows that the randomness was seriously damaged.

For Case 2, each inverter of the 7-, 9-, 11-, and 13-stage ROs was loaded with 0, 0.13, 0.28, and 0.44 fF capacitors, respectively. The simulation results, shown in Figure 12, were very similar to those of Case 1; detailed jitter and interference information is given in Table 5. Adding different capacitive loads can thus also stagger the locking regions of the ROs and improve the electromagnetic immunity.

To prove that the gate-delay differentiation method does not depend on the CMOS process, an RO-based TRNG built with the SMIC 65-nm process was simulated in HSPICE, with the same circuit setting as Case 2 above. The average gate-delays of ROs loaded with 0, 0.037, 0.074, and 0.111 fF capacitors were 11.90, 12.25, 12.59, and 12.94 ps, respectively. According to Equation (1), these ROs could be locked by interference at 41,700, 40,300, 39,300, and 38,300 MHz, respectively. Hence, if all the inverters in the TRNG were loaded with the same capacitor, all the ROs would be locked synchronously and the randomness would deteriorate, as shown in Figure 13(c1-c4). If instead each inverter of the 7-, 9-, 11-, and 13-stage ROs was loaded with 0, 0.037, 0.074, and 0.111 fF capacitors, respectively, the locking regions were staggered, as shown in Figure 13a. The same interference conditions could then no longer lock all the ROs synchronously, and the randomness of the bitstream did not degrade, as seen in Figure 13(b1-b4). These results prove that the immunity improvement is independent of the CMOS process.
Measurement
This section describes the measurements performed to verify the feasibility of the gate-delay differentiation method. On a printed circuit board (PCB), 7-, 9-, 11-, and 13-stage RO circuits were built with the single-inverter chip SN74LVC1G04DBVR. Some design considerations were as follows. For convenient control of the RO gate-delays and for consistency of the A_RFI reaching the power-supply port of each inverter, the inverters of each RO were arranged in a circle with strictly equal spacing between adjacent inverters, and the wiring lengths from the interference-injection SMA connector to the power-supply ports of the inverters were kept equal. So as not to disturb the uniform loading of the ROs, an inverter was placed at the output port of each inverter on the bottom layer to isolate the oscillating circuit from the observation circuit. The circuit structure is shown in Figure 14. The D flip-flops and the XOR-tree circuits were implemented on an XC7A100T chip (Xilinx Artix-7 FPGA).

The test platform is shown in Figure 15. The PC controlled the interference source to generate an interference wave with a specific frequency and amplitude, which was injected into the RO array board through a power amplifier, an isolator, and an attenuator. The attenuator mainly served to weaken the wave reflected by the board and protect the amplifier. The PC likewise controlled the oscilloscope and FPGA for data acquisition.

For the control group, each inverter in the RO array was loaded with a 120 pF capacitor. Under this uniform load, the free-oscillation frequencies of the 7-, 9-, 11-, and 13-stage ROs were 21.11, 16.55, 13.26, and 11.21 MHz, respectively, and the average gate-delays were 3.38, 3.38, 3.43, and 3.43 ns, i.e., almost equal. Interference was applied to measure the locking region of each RO, as shown in Figure 16. The locking cones of the four ROs almost coincided on the vertical gate-delay axis, consistent with the theoretical analysis in Section 2. Under this condition it was very easy to inject interference that locked all ROs in the array and drastically degraded the randomness of the TRNG; many interference frequencies and amplitudes were available. Three interference settings were selected for comparison: no interference, 0.9 V at 145 MHz, and 0.9 V at 150 MHz. The resulting bitstreams are shown in Figure 17. The latter two settings easily locked all ROs, and the binary images show that, when all ROs were locked, the output bitstream of the TRNG presented a clear regularity. A discrete Fourier transform (DFT) test from the NIST suite was carried out to detect the periodicity of the bitstream; the p-value was far less than 0.01, confirming that the latter two conditions destroy the randomness of the TRNG output.

For the experimental group, we changed the load-capacitance values of the inverters in the four ROs as shown in Table 6. The free-oscillating frequencies of the 7-, 9-, 11-, and 13-stage ROs were then 14.28, 13.81, 13.00, and 12.82 MHz, respectively, and the average gate-delays were 5.00, 4.02, 3.50, and 3.00 ns, i.e., quite different. In line with the theoretical analysis, the locking regions of the ROs were staggered along the vertical axis, as shown in Figure 18. It was impossible to lock all four types of RO within a reasonable range of interference amplitude; at most two types of RO could be locked. Within the soft-failure range caused by EMI, the TRNG randomness could not be meaningfully degraded.
The immunity was thus greatly improved. To confirm these observations, the amplitude of the injected interference was kept at 0.9 V, and interference frequencies of 95, 125, 133, 143, 154, and 167 MHz were selected, corresponding to the serial numbers marked in Figure 18. Apart from the 133 and 154 MHz cases, at most one type of RO could be locked. The TRNG binary images are shown in Figure 19. The randomness of the bitstream was greatly improved compared with the two all-locked cases in Figure 17, which was also confirmed by NIST testing. This verifies the effectiveness of the gate-delay differentiation design method.
Conclusions
This study analyzed the injection locking of an RO array under an EMI wave. The notion of synchronous locking was introduced to summarize the response of the RO array in an RO-based TRNG, and it explains the randomness-deterioration mechanism of the output bitstream.
To improve the EM immunity of the RO-based TRNG, the design method of gate-delay differentiation was proposed: by suitably increasing the gate-delay difference between the ROs, the locking regions are staggered and the situation in which most ROs are locked is avoided. The specific implementations, changing transistor sizes and capacitive loads, were successfully verified by HSPICE simulations and PCB measurements. In future work, RO-based TRNG chips could be fabricated in CMOS processes so that the effectiveness of these implementations can be verified in more detail.
To enrich the design method of gate-delay differentiation, our future work will study the locking phenomenon of a single RO whose inverter gates have different propagation delays. This can deepen our understanding of the locking phenomenon and inspire further anti-interference research on RO-based TRNGs. We could also integrate the method with electronic design automation (EDA) tools to guide the immunity design of TRNGs.
Conflicts of Interest:
The authors declare no conflict of interest.
Yield prediction in parallel homogeneous assembly
We investigate the parallel assembly of two-dimensional, geometrically-closed modular target structures out of homogeneous sets of macroscopic components of varying anisotropy. The yield predicted by a chemical reaction network (CRN)-based model is quantitatively shown to reproduce experimental results over a large set of conditions. Scaling laws for parallel assembling systems are then derived from the model. By extending the validity of the CRN-based modelling, this work prompts analysis and solutions to the incompatible substructure problem.
Introduction
Large-scale manufacturing requires fast and efficient fabrication of many exact copies of desired objects. Robot-assisted fabrication typically involves serial, deterministic procedures that are reliable but inefficient for assembling vast quantities of products, especially small ones. 1,2 An alternative manufacturing strategy consists of components assembling with one another autonomously to form many equal copies of a target structure. Such a self-assembly approach 3 is massively parallel and inspired by natural systems that assemble autonomously, such as crystals 4 and viruses. 5 A set of mobile components capable of bonding with one another will tend to form assemblies in a bounded space. The formation of intercomponent bonds decreases the enthalpy of the system 6 and the number of accessible component configurations. 7 Under thermal conditions that make entropic contributions negligible, the assembly of components reduces the free energy of the system, as well as the number of further bonds that can be formed. However, modular assembling systems are characterised by a multitude of intermediate states, associated with local minima in the free energy landscape, besides a few degenerate target states that correspond to global minima. 8 Assembling systems able to break out of local energy minima, thanks to, e.g., perturbing energy imparted to the system to compete with bond formation, are self-assembling systems. 9,10 Assembling systems incapable of escaping local energy minima in finite time are aggregating systems. 11 Irreversible intercomponent bonds arrest aggregating systems, typically within one of their local energy minima. Once in a local minimum, aggregating systems are prevented from exploring further possible configurations to reach the target state(s). Self-assembly is often used in the literature, albeit incorrectly, to describe aggregating systems as well. 10 In the following, we use assembly to describe systems that aggregate.
An assembling system is in a depletion trap when no more components are available to advance the assembly, as all components are already employed in existing (sub)structures. 12 In parallel assembling systems of homogeneous components, depletion traps are absorbing states of the dynamics and local energy minima corresponding to the formation of incompatible substructures, i.e., partial target structures which cannot complete assembly due to steric incompatibility.§ Fig. 1 illustrates an instance of the issue. By preventing the assembly of target structures, the formation of incompatible substructures significantly reduces the assembly yield in parallel homogeneous assembly, and therefore wastes resources. 16 At the same time, the incompatible substructure problem is both a logical and a physical problem, and is thus amenable to both analytical and experimental study.
In this paper, we present a comprehensive study of model-based prediction of assembly yield for parallel assembling systems. We consider the parallel assembly of two-dimensional (2D), geometrically-closed target structures composed of homogeneous sectors of varying anisotropy. The results of an extensive set of parametric assembly experiments are quantitatively shown to closely match, in most cases, the assembly yield predictions obtained by a corresponding chemical reaction network (CRN)-based model. Consequently, this work significantly extends the validity of CRN-based modelling of assembling systems, and the analysis of the incompatible substructure problem provides a foundation for the study of dynamical aspects of parallel (self-)assembly.
The paper is organised as follows. Section 2 gives an account of prior art. The physical system and the CRN-based model used in this study are described in Sections 3 and 4, respectively. The experiments conducted to compare the physical system and the analytical model are presented in Section 5. The results are reported in Section 6. Scaling of system properties is discussed in Section 7. Finally, Section 8 provides conclusions and outlook.
Background
The study of agents coming together to autonomously form ordered, predictable structures has been pioneered in the context of molecular chemistry, 17,18 biology, 19 material sciences 17 and soft matter. 20 A seminal paper by Penrose and Penrose 21 presented a macroscopic mechanical assembling system. In the early 1980s, a model of diffusion-limited aggregation was proposed by Witten Jr. and Sander. 11 In the context of parallel assembly, Hosokawa et al. 27 studied the assembly of triangular components into hexagonal target structures. Hosokawa et al. described the negative impact of incompatible substructures on assembly yield in their experiments, and first proposed to study their parallel assembly system using the formalism of chemical reaction networks.
Klavins 28 built a mechatronic version of the triangular components used by Hosokawa et al. 27 Software embedded in their ''programmable parts'' defined rules, based on graph grammars, to guide the assembly of specified target structures. The programmability of the components was used to control the interactions, leading to coordinated behaviour of the system, an approach recently extended by Haghighat and Martinoli. 29 Miyashita et al. 30 studied the influence of reversible reactions on the yield of an assembly system. Their latching, self-propelled components floated on the surface of water and had a similar shape to those of Hosokawa et al. 27 Miyashita et al. conducted experiments to compare sequential aggregation, reversible assembly and random aggregation using these components. They used a CRN-based model to quantitatively describe their system.
Mermoud et al. 31 and Haghighat et al. 32 developed a magneto-fluidic system of passive, centimetre-sized components that self-assembled on water. The system was supported by a general, CRN-based, multi-level modelling framework for stochastic distributed systems of reactive agents. Using this framework, the authors were able to control the self-assembly of the water-floating components in real time. Mastrangeli et al. 33 developed a downscaled version of the system, tailored for the automated control of the acousto-fluidic self-assembly of microparticles.
Hacohen et al. 34 proposed an algorithm to program the mechanical self-assembly of 3D macroscale objects. Taking inspiration from the fully specified assembly of DNA-based structures, Hacohen et al. showed that an arbitrary 3D object, properly dissected into components, can be re-assembled by unsupervised mechanical shaking when appropriate rules are uniquely encoded on the faces of the components. In their experiments, Hacohen et al. introduced enough components to assemble 2 target structures of 18 components in parallel, and were able to assemble only 1 target structure with no erroneous bonds.
Murugan et al. 12 studied the formation of incompatible substructures in chemical reaction-based systems. Using a theoretical model relevant to DNA, proteins and colloids, the authors suggested that the incidence of incompatible substructures can be reduced by appropriately tuning the reaction rates via the stoichiometry of the system.
Borrowing the concepts of intercomponent bond and structure formation from chemical reactions, CRN-based formalisms and approaches 35 are commonly used to model and simulate aggregating and self-assembling systems. However, experimental validation of CRN-based model predictions of assembly yield is, to date, mostly qualitative. For instance, Hosokawa et al. 27 conducted a single experiment under 2 conditions (systems of 20 and 100 components) with 100 trials per condition, while Miyashita et al. 30 conducted a single experiment under 2 conditions (systems of 6 and 7 components) with 10 trials per condition. While noteworthy demonstrations of the feasibility of CRN-based modeling, these contributions fall short of showing the details of the relationship between the CRN-based model and physical experiments. Such a lack of comprehensive and quantitative comparison motivates the work presented here.

Fig. 1 An instance of the incompatible substructure problem in the assembly of a modular target structure. The circular target structure is composed of 4 equal sector-shaped components. In an aggregating system, each of the available components moves within a confined space and irreversibly bonds with two components. When exactly 4 components are used, a single target structure will always assemble. However, 5 components will not always form a single target structure and a spare component, i.e., a {1,4} population. They could instead end in one of 2 absorbing states, the other one being {2,3}.
Physical system
We study the assembly yield of a two-dimensional system of passive components designed to form geometrically-closed target structures. The components are agitated within a reactor, a bounded horizontal container attached to an orbital shaker. The agitation enables components and substructures to move around the reactor, interact with one another, and form irreversible magnetic bonds. Substructures of the target structures assemble upon bond formation. Only geometrically compatible substructures can bond together, eventually forming full target structures. The target structures are by design inert and stop growing once formed.
Components
All components are 3D-printed, 7 mm-thick polyhedra embedding magnets with opposite orientations in two of their faces to enforce the formation of the closed target structures (Fig. 2). NdFeB magnets (N48, Supermagnete (Webcraft GmbH), strength 210 g) provide intercomponent bonds that can withstand impacts with the container and among components.

We designed target structures of equal area and three different shapes: circle, square and triangle. All component sets are homogeneous. Sectors of a circle with radius 25 mm compose the circular target structures. We use sets of sector-shaped components with 4 different sector angles: 45°, 60°, 72° and 90°. Magnets are embedded in the middle of the two straight faces. For square target structures, the components are 4 equal squares with a side length of 22.2 mm, with magnets embedded in two adjacent lateral faces. For triangular target structures, the components are 3 identical isosceles triangles, the two equal sides and the base measuring 38 mm and 67 mm, respectively. Magnets are embedded in the middle of the two equal sides.
Reactor
The experiments were carried out in a lid-covered circular container with an inner radius of 125 mm, bounded by a 9 mm-thick rim (Fig. 3). The container and the lid, made from clear acrylic, were fastened to an orbital shaker (New Brunswick Innova 2300). The reactor served two purposes: (1) to transfer kinetic energy from the shaker to the components, and (2) to constrain the component motion to 2D and avoid component flipping. We chose the component density in the reactor so that the components could frequently interact with one another while minimizing packing effects and avoiding jamming of the (sub)structures (e.g., 5 target structures occupied 20% of the reactor surface). We set the shaker frequency depending on the experiment type to maximise interactions among substructures while avoiding disassembly events. Orbital shaking produced a qualitatively pseudorandom motion of the components. The experimental results are not expected to be significantly influenced by the shaking method.
Analytical model
We analyse our assembling system with the chemical reaction network-based model proposed by Hosokawa et al. 27 In the model, the population of the system is represented as a multiset of integers x_i, where x_i is the number of substructures with i components, 1 ≤ i ≤ N, and N is the number of components in a full target structure. X_i denotes a substructure with i components. Considering, for instance, a system with N = 8 equal components ¶ shaped as 45° sectors to form a circular target structure, the state vector is x = (x_1, ..., x_8). As reverse reactions are excluded by the irreversible intercomponent bonds, the state of the 45°-sector system can undergo the set of reactions X_i + X_j → X_{i+j} with i + j ≤ 8 (eqn (1)). Given a large enough population, we study the probabilistic evolution of the system using a system of difference equations, x(t + 1) = x(t) + F(x(t)), where t is the number of bond-forming collisions between clusters and the transition function is F = (F_1, ..., F_8). The number F_i represents the expected incremental step of x_i, and it is the summation of the stoichiometric coefficient v_ij times the probability P_j with which the jth reaction in eqn (1) occurs, i.e., F_i = Σ_j v_ij P_j. An example of the calculation is presented in Section 3.1 of Hosokawa et al. 27 The probability P_j is the product of the collision probability and the bonding probability, P_j = P^c_{lm} P^b_{lm}. For substructure counts x_l, x_m ≥ 1, the collision probability is the probability of drawing one substructure of size l and one of size m out of the S = Σ_i x_i clusters present, i.e., P^c_{lm} = x_l x_m / (S(S − 1)/2) for l ≠ m, and P^c_{ll} = (x_l(x_l − 1)/2) / (S(S − 1)/2). The geometry-specific values of P^b are calculated according to Hosokawa et al. 27 We implement the calculation by placing the first substructure at the origin and the second substructure at infinity. The faces of the components form an angle of visibility. The number P^b is the probability that the angle of visibility of one substructure's face that includes the South-polarity magnet (refer to Fig. 2) is within the angle of visibility of another substructure's face that includes the North-polarity magnet. A more detailed explanation of the computation is available in Appendix A of Hosokawa et al. 27 The values of P^b used in this work are presented in Table 1.

Fig. 2 The 3D printed components with embedded magnets used in the experiments (dimensions overlaid). Positions and orientations of the magnets are marked.
The F_i are generally expressed in terms of these products P^c_{lm} P^b_{lm}, with P^b_{l,m} = P^b_{m,l} and P^b_{l,m} = 0 for l + m > N; the F_i for the case of N = 8 are given explicitly in Appendix A. Using only the mean values of x_i in our analysis would prevent us from accurately predicting the yield of a system, especially for small values of x_i. Therefore, we use the master equation, which works with the probability distribution of each x_i instead of its mean value. The master equation is

p(x, t + 1) − p(x, t) = Σ_{x′} w(x, x′) p(x′, t) − p(x, t) Σ_{x′} w(x′, x),

where w(x, x′) is the probability of transition from x′ to x.
For instance, the probability of the reaction 2X_1 → X_2 at state x′ is w(x, x′) = P^c_{11} P^b_{11}. Substituting the w(x, x′) of each reaction into the master equation yields terms of the form p(x + (0, 0, 0, 2, 0, 0, 0, −1), t), here corresponding to the state from which the reaction 2X_4 → X_8 leads to x. The master equation was solved numerically: it was iterated over interaction steps until only absorbing states and their corresponding probabilities were left. This inherently allowed listing the number of possible absorbing states.
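Besides the exact master-equation iteration, the absorbing-state statistics can be estimated by direct Monte Carlo simulation of the aggregation dynamics. The sketch below sets all bonding probabilities equal, a simplification of the geometry-specific P^b values above, so that a collision-conditioned bonding event reduces to a uniform choice among compatible pairs:

```python
import random
from collections import Counter

def simulate(n_components, N, trials=5000):
    """Monte Carlo aggregation: substructures bond irreversibly when
    compatible (reduced here to i + j <= N, i.e., all P_b set equal)."""
    outcomes = Counter()
    for _ in range(trials):
        sizes = [1] * n_components
        while True:
            open_pairs = [(a, b) for a in range(len(sizes))
                          for b in range(a + 1, len(sizes))
                          if sizes[a] + sizes[b] <= N]
            if not open_pairs:          # absorbing state reached
                break
            a, b = random.choice(open_pairs)
            sizes[a] += sizes[b]
            del sizes[b]
        outcomes[sizes.count(N)] += 1   # number of complete target structures
    return outcomes

for n in (8, 9):                        # N = 8: the 45-degree sector system
    res = simulate(n, 8)
    mean_yield = sum(k * v for k, v in res.items()) / sum(res.values())
    print(n, "components -> mean yield", round(mean_yield, 3))
# n = 8 always yields exactly 1 structure; n = 9 can end in incompatible
# substructures, so the mean yield drops below 1 (cf. Fig. 4 and 5).
```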
Experiments
We verified the assembly yield predictions of the CRN-based model by conducting an extensive set of physical assembly experiments.The aim of the experiments was to record experimental yield statistics, to be compared with the CRN-based predictions.The parameters of the experiments were number of components and component shape, and the variable was assembly yield.
We conducted 6 experiments. The experiment with 45° sector-shaped components was conducted under 33 conditions parameterised by the number n of components, which varied from 8 (sufficient for 1 target structure) to 40 (sufficient for 5 target structures). 20 trials were conducted for each of the 33 conditions, and an additional 21 trials were conducted for 12 of the 33 conditions. Experiments for 60°, 72° and 90° circular sectors and for square and triangular components were conducted under 13 conditions each, with the number of components set to N, N + 1, 2N − 1, 2N, 2N + 1, 3N − 1, 3N, 3N + 1, 4N − 1, 4N, 4N + 1, 5N − 1 and 5N. 11 trials were conducted per condition.
We conducted each trial as follows. We manually placed the individually separated components at various initial positions and orientations in the reactor, ensuring that the polarities on the right edge of every component were the same and that intercomponent bonds would not instantaneously form upon inception of shaking. The shaking frequency was then set to 350 rpm for all circular sector-shaped and square components. Using the same shaking frequency and type of magnets, we observed lever-caused breakage in substructures of triangular components; in this case we therefore ran the experiments at 300 rpm to eliminate the problem.
After inception of shaking, each experiment was run until the system reached an absorbing state, composed only of target structures and incompatible substructures (Fig. 3). The final population of structures was then manually recorded. 8
Results and discussion
The results of the experiments are presented in Fig. 4-11 and in Fig. 17-20 of Appendix D. They show, in all cases, a qualitative agreement with model-based predictions of, respectively, absolute assembly yield Y, defined as the mean number of fully formed target structures over a given number of trials, and relative assembly yield T ≡ Y·N/n, defined as the mean number of components that form target structures divided by the number n of available components. For the comparisons of absolute yield, Fig. 4-11 (left) also report the 95% confidence error bound, i.e., the 95% confidence maximum difference between the predicted and the experimental absolute yields. Statistically significant confidence intervals were computed by means of the bootstrapping procedure described in Appendix B. A complementary measure of waste W, capturing the relative fraction of components not used in complete target structures, is defined as W ≡ 1 − Y·N/n = 1 − T and reported in Fig. 6 for the N = 8 case.
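The paper's exact bootstrapping procedure is given in its Appendix B; a generic percentile-bootstrap sketch for the per-condition yield sample (with hypothetical trial data) would look as follows:

```python
import random

def bootstrap_yield_ci(yields, n_boot=10000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the mean absolute yield
    of one experimental condition. `yields` holds the number of complete
    target structures observed in each trial."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(yields) for _ in yields]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

trial_yields = [2, 3, 3, 2, 3, 3, 3, 2, 3, 2, 3]   # hypothetical 11 trials
print(bootstrap_yield_ci(trial_yields))
```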
Circular sector components
As shown in Fig. 4, the yield in both the physical system and the abstract model is 100% for the 8-component condition. This is expected, since in this condition 8 components necessarily assemble in only one way if correctly placed and constrained to move in 2D (Fig. 5). Y then drops sharply for the 9-component condition, the first where the formation of incompatible substructures is possible. In fact, the 9-component system can end in one of 4 possible absorbing states (Fig. 5), three of which consist of two incompatible substructures. The values of the error bound, expectedly null for the n = 8 condition, show a non-monotonically increasing trend (Fig. 4).** The relative assembly yield also drops sharply for the 9-component condition, and initially shows a non-monotonic trend that becomes approximately constant at ≈31% for larger n (Fig. 17). Correspondingly, the component waste fraction W in Fig. 6 appears to converge asymptotically and non-monotonically to an approximately constant value of ≈69%.
The relation between absolute assembly yield and the number of possible absorbing states is made explicit in Fig. 16 of Appendix C. The curve is multi-valued since, as shown by Fig. 5, different values of n give rise to the same number of possible absorbing states. Nevertheless, a non-monotonic increase of Y with the number of possible states can generically be observed.
The absolute assembly yield data and error bounds for the 60°, 72°, and 90° sector conditions (Fig. 7-9 (left)) show non-monotonically increasing trends similar to the 45° sector case. The corresponding comparisons between possible and observed absorbing states are shown in Fig. 7-9 (right). The relative yield data, presented in Fig. 17-20 of Appendix D, show the convergence of T to an N-dependent asymptotic value already observed for N = 8.

** When computing the yield considering also the additional 21 trials (i.e., on a total of 41 trials), the difference between the model prediction and the observed value becomes smaller; similarly, the corresponding bounds are reduced (see Fig. 15 in Appendix C).
Square and triangular components
The absolute assembly yield curve derived from the CRN-based model for the square components (Fig. 10 (left)) exactly matches that of the 90° sector components (Fig. 9 (left)). The match is expected, since the model parameters describing both component shapes, namely the bonding probability P^b and the collision probability P^c, are the same (see Section 4). For the same reason, the number of possible absorbing states is also the same in both cases (Fig. 9 and 10 (right)). The data for relative assembly yield shown in Fig. 21 are consistent with previous observations. Yield data and numbers of absorbing states for systems of triangular components are shown in Fig. 11 (left), Fig. 22 and Fig. 11 (right), respectively. Also in this case, the absolute yield curves for experimental and model-based results show closely matching, non-monotonically increasing trends; not all possible absorbing states were observed; and T appears to saturate for large n.
Though time is not a parameter of our experiments, it is worth noting that the trials with the triangular components took 15 minutes on average to reach the absorbing state, compared to an average of 5 minutes for square- and sector-shaped components. This can only partially be explained by the lower shaking frequency used in the former case (see Section 5). Visual observations suggested that the triangular components tended to mutually align and stack themselves against the edge of the container. This ordering and packing effect induced by local component density 36 sterically hindered the roto-translational motion of the components and significantly reduced the rate of bond formation.
Discussion
Experimental and analytical assembly yield data (Fig. 4 and Fig. 7-11 (left)) show non-monotonically increasing, closely matching trends. Predicted and observed drops in assembly yield for specific n values can be justified by (1) the formation of the (z − 1)th target structure, and (2) the number of possible absorbing states.

Formation of the (z − 1)th target structure

Consider the experiment with 16 of the 45° circular sector components, which can form z = 2 target structures. If and when the first target structure is formed, the second target structure will also and always form, since the residual components can assemble in only one way. This also implies that the probability of forming exactly one target structure is 0 in this condition. With one more component than in the previous condition, the second target structure is not guaranteed to form even if the first is formed, because the residual components could still form incompatible substructures. The expected drop in yield for the latter condition compared to the former was confirmed by the experimental data.
Beyond the third target structure, the formation of the (z À 1)th target structure does not seem to cause a significant drop in yield.
Number of possible absorbing states
The assembly yield is also influenced by the number of absorbing states a system can access (see Fig. 16) and the corresponding distribution of substructures.The number of possible absorbing states can be exactly computed for every set of components used in this work.Interestingly, for the conditions we studied, the number of possible absorbing states is not a monotonic function of the number of components in the system.This affects the assembly yield of the corresponding systems.
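Under the purely combinatorial compatibility rule used in the Monte Carlo sketch above (ignoring any further geometric constraints), the possible absorbing states can be enumerated directly:

```python
def absorbing_states(n, N):
    """Enumerate absorbing states for n components and target size N: a
    population is absorbing when no two remaining incomplete substructures
    can still bond, i.e., i + j <= N never holds for a pair of them."""
    def partitions(remaining, max_part):
        if remaining == 0:
            yield ()
            return
        for part in range(min(remaining, max_part), 0, -1):
            for rest in partitions(remaining - part, part):
                yield (part,) + rest

    states = []
    for p in partitions(n, N):
        incomplete = [s for s in p if s < N]
        if all(incomplete[i] + incomplete[j] > N
               for i in range(len(incomplete))
               for j in range(i + 1, len(incomplete))):
            states.append(p)
    return states

for n in range(8, 13):
    print(n, len(absorbing_states(n, 8)))   # cf. Fig. 5 for the N = 8 system
```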
To improve the reliability of the experimental data, the absorbing-state space of each assembly system should be completely explored, and a sufficient number of trials conducted to build a reliable empirical estimate of the distribution over the absorbing states. 37 Even with the non-negligible number of experimental trials we conducted (see Section 5), ultimately limited by time constraints, we were able to explore only a subspace of the possible absorbing states for all experimental conditions. In the experiment with N = 8 we increased the number of trials per condition from 20 to 41 for selected values of n, as evidenced in Fig. 15. The additional trials only marginally helped explore more of the absorbing-state space (Fig. 5).
Scaling
Having verified the capability of the CRN-based model to predict the outcome of assembly experiments, we can use the model to investigate further properties of assembling systems of interest. Here, we begin by numerically exploring the scaling of the number of absorbing states and of the assembly yield for large populations of components. The parallel and combinatorial nature of the assembly process and the existence of incompatible substructures lead us to expect the number of absorbing states to be a power-law function of the number of components. Fig. 12 plots the growth of the number of possible absorbing states with the number of components. It can be observed that different sets of components are associated with different scaling exponents and that, in spite of the different geometry, pairs of sector-shaped components give rise to the same scaling exponent: 1.00 in the case of 120° and 90° sector components, and 1.92 for 72° and 60° ones. The origin of such a subdivision remains to be investigated. A power-law scaling of the absolute assembly yield with the number of components is derived from Fig. 13. In this case, all component types share a very similar scaling exponent (≈1.00). Scaling of the relative assembly yield with the number of components is shown in Fig. 14, which further evidences the convergence of T toward a fixed, N-dependent value for large values of n.
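The scaling exponents quoted above correspond to straight-line fits in log-log coordinates; a minimal sketch, with hypothetical placeholder data standing in for the curves of Fig. 12 and 13, is:

```python
import math

def fit_power_law(ns, values):
    """Least-squares slope of log(value) vs log(n): the scaling exponent."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(v) for v in values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

ns = [40, 80, 160, 320, 640]
yields = [4.9, 9.8, 19.7, 39.5, 79.1]       # hypothetical near-linear data
print(round(fit_power_law(ns, yields), 2))  # -> ~1.00, as quoted for Fig. 13
```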
Conclusions
We presented a comprehensive study of assembly yield in systems of passive homogeneous agents assembling into equal target structures in parallel. The yield predictions derived from a chemical reaction network-based model were quantitatively compared with physical experiments conducted on a magneto-mechanical system. Our results demonstrate that the CRN-based model is able to predict with fair accuracy the assembly yield for the extensive set of experimental conditions presented, parameterised by number and shape of components. The results provide a quantitative grounding, as well as a generalisation, of the incomplete evidence first reported by Hosokawa et al. 27 They show that the relative assembly yield for a given target structure tends to converge to a fixed value for populations with a large number of components. Correspondingly, the average fraction of components ultimately not used in the assembly of complete target structures was shown to remain rather high (e.g., above 69% for the N = 8 case) irrespective of the number of components available, which raises severe waste concerns for practical applications. Our results also support the theoretical observation by Miyashita et al., 30 according to which assembly yield increases as the angle of the sector-shaped components increases (i.e., N decreases). The CRN-based analysis clearly pinpointed the crucial role of incompatible substructures in the suboptimal yield of parallel assembly of geometrically-closed target structures, and was shown suitable to derive scaling laws for the assembly system. A similar analysis can be developed to model the 3D assembly of modular target structures, as reported e.g. by White et al., 38 Hacohen et al. 34 and Neubert et al., 39 provided that CRNs and bonding probabilities are appropriately defined.
This work is a prelude to future analyses of ways to avoid incompatible substructures. A trivial solution to the problem is the compartmentalisation of the assembly space into subregions, each bounding the exact number of components making up a target structure. An obvious approach is also to convert the assembling system into a self-assembling system, whereby intercomponent bonds are reversible and can be broken under specific conditions. 24 In a system designed so that only the target structures do not break up upon collisions, the component population would be able to escape local energy minima. Another approach consists in templating the assembly of the target structures, for instance by using (non-homogeneous) sets of geometrically complementary components 16 or addressable, uniquely-encoded intercomponent bonds. 34,40,41 In yet another approach, originally proposed but never realised by Hosokawa et al., 27 mechanical conformational switching 42 would direct the assembly kinetics of passive components with binary internal states and limit the number of possible coexisting nucleation sites. 20 Finally, the abstract model we used takes into account the combinatorial entropy of the system and only limited information on the geometry of the components. Additional entropic considerations might improve the model by integrating complementary spatial information pertaining to, e.g., component density in the assembly space. 7,36 Developing a model that takes spatial entropy into account will be part of our forthcoming work.
Fig. 3
Fig. 3 Absorbing states from four different physical assembly trials: (a) triangular, (b) square, (c) 72° sector and (d) 60° sector components. The exact number of components to form 5 target structures was provided in all samples.
Fig. 4
Fig. 4 Results for the systems with N = 8 (45°) sectors per circular target structure: absolute assembly yield for experiments with the physical system, absolute assembly yield for the analytical model, and 95% error bound. Vertical dotted lines extend the markers on the x-axis for the fully formed target structures. Experimental data collected over 20 trials per condition.
Fig. 5
Fig. 5 Number of possible and of observed absorbing states as a function of the number of components for the system of Fig. 4.
Fig. 6
Fig. 6 Waste, i.e., relative fraction of components not assembled into target structures, for the system of Fig. 4.
Fig. 7
Fig. 7 Results for the systems with N = 6 (60°) sectors per circular target structure: (left) absolute assembly yield for experiments with the physical system, absolute assembly yield for the analytical model, and 95% error bound. Vertical dotted lines extend the markers on the x-axis for the fully formed target structures. Experimental data collected over 11 trials per condition. (right) Number of possible and of observed absorbing states as a function of the number of components.
Fig. 8
Fig. 8 Results for the systems with N = 5 (72°) sectors per circular target structure: (left) absolute assembly yield for experiments with the physical system, absolute assembly yield for the analytical model, and 95% error bound. Vertical dotted lines extend the markers on the x-axis for the fully formed target structures. Experimental data collected over 11 trials per condition. (right) Number of possible and of observed absorbing states as a function of the number of components.
Fig. 9
Fig. 9 Results for systems with N = 4 (90°) sectors per circular target structure: (left) absolute assembly yield for experiments with the physical system, absolute assembly yield for the analytical model, and 95% error bound. Vertical dotted lines extend the markers on the x-axis for the fully formed target structures. Experimental data collected over 11 trials per condition. (right) Number of possible and of observed absorbing states as a function of the number of components.
Fig. 10
Fig. 10 Results for systems with N = 4 (square-shaped) components per square target structure: (left) absolute assembly yield for experiments with the physical system, absolute assembly yield for the analytical model, and 95% error bound. Vertical dotted lines extend the markers on the x-axis for the fully formed target structures. Experimental data collected over 11 trials per condition. (right) Number of possible and of observed absorbing states as a function of the number of components.
Fig. 11
Fig. 11 Results for systems with N = 3 (triangle-shaped) components per triangular target structure: (left) absolute assembly yield for experiments with physical system, absolute assembly yield for analytical model, and 95% error bound.Vertical dotted lines extend the markers on the x-axis for the fully formed target structures.Experimental data collected over 11 trials per condition.(right) Number of possible and of observed absorbing states as a function of the number of components.
Fig. 12
Fig. 12 Scaling of the number of possible absorbing states for large populations of homogeneous sector-shaped components of varying anisotropy.
Fig. 13
Fig. 13 Scaling of absolute assembly yield for large populations of homogeneous sector-shaped components of varying anisotropy.
Fig. 14
Fig. 14 Scaling of relative assembly yield for large populations of homogeneous sector-shaped components of varying anisotropy.
Table 1
Assembly reactions and corresponding bonding probabilities P^b for chemical reaction network-based models of systems with circular sector-shaped components. The integers X_i denote target structures of cardinality i.
Assessing the Impacts of Self-Help Group Based Microcredit Programmes: Non-Experimental Evidence from the Rural Areas of Coastal Orissa in India
This impact assessment study of microcredit was conducted on a cross-sectional data-set drawn from a pool of 200 samples from Puri district of India. A structured, pre-tested household schedule was used to gather information from households. The "household" was taken as the unit of analysis, and a comparison between the factual and the counterfactual formed the basis of the study: the statistical means of the target households were compared with those of the control households across various variables. The statistical test of significance was conducted using the z-test. Under the econometric model, a probit model was used to understand the determinants of the probability of participation in the Self-help Group based microcredit programmes. The study found a positive impact of Self-help Group based microcredit programmes on household income, saving, employment days, literacy and reduction in migration. The probability of participation was largely determined by the savings, employment days, days of migration and number of literates of the households.
Introduction
Microcredits are tiny loans for production and consumption purposes provided to poor people who often lack access to the formal banking system. Non-formal credit has been in practice in India for centuries, with money lenders dominating the sector: short transaction periods and low transaction costs, but usurious interest rates and corrupt procedures. Understanding the importance of microcredit, the Government of India at a later stage made it a part of the national financial framework (Panda, 2009).
Small-scale financing for the weaker sections of society in India started back in the 1960s with cooperative banking, followed by the nationalisation of the commercial banks and the initiation of the Lead Bank Scheme in 1969. Social banking was further strengthened by the establishment of Regional Rural Banks (RRBs) in 1975 and the National Bank for Agriculture and Rural Development (NABARD) in 1982. This social banking phase was characterised by extensive subsidised credit. The Integrated Rural Development Programme (IRDP), started by the Government of India in the 1980s with the mission of poverty alleviation through credit programmes, accelerated lending on a larger scale. In the 1990s, India adopted a financial-system approach in which small-scale financial products and services were disbursed by Microfinance Institutions (MFIs), broadly Non-government Organisations (NGOs). Group-based microcredit programmes were developed that operated on peer pressure and social and moral collateral. Self-help Group (SHG) based microcredit programmes, with a motive of thrift and credit, replicated and grew extensively. The innovation of the SHG-Bank Linkage Programme (SBLP) by NABARD in 1992 scaled up SHG-based microcredit interventions, later accredited as the biggest microcredit intervention in the world. From the year 2000 onwards, the financial inclusion phase started, legitimising NGO-based MFIs and providing customised microcredit products as per the demand of the poor (Panda, 2009).
It is difficult to trace the exact date of SHG initiation in India. Some researchers traced the existence of women's SHGs working with the facilitation of NGOs even before the 1980s. In the early 1980s, these women's SHGs were noticed by policy makers, who showed concern for their development and replication (Reddy and Manak, 2005). However, Fernandez (2007) noted that SHGs first emerged in 1985 with MYRADA, a Karnataka-based NGO, and that by 1987 MYRADA had about 300 SHGs under its project.
SHGs are groups of villagers, mostly women from a similar socio-economic background, who pool their savings regularly and re-lend within the group on a rotational basis or according to pre-defined criteria. The Reserve Bank of India (RBI) describes SHGs as registered or unregistered groups of micro-entrepreneurs with a homogenous social and economic background, voluntarily coming together to save small amounts regularly, mutually agreeing to contribute to a common fund and to meet their emergency needs on a mutual-help basis. These SHGs are not limited to thrift and credit; rather, they act as a tool for the overall socio-economic development of the poor by addressing income generation, women's empowerment, capacity building, education, micro-enterprise development, linkage building, etc. (Panda, 2008). SHGs work on principles of unity and self-help, with the understanding that united they stand, divided they fall.
SHGs grew massively after the SHG-Bank Linkage Programme; by 2004-05, 1,618,456 SHGs had been financed under this programme jointly by commercial banks, Regional Rural Banks and cooperatives (Bose and Khaklari, 2007). Under this programme, states like Orissa and Jharkhand also experienced the SHG movement with the active facilitation of intermediary NGOs.
Review of literature
Many impact studies have been conducted at the regional, national and international levels to explore the effect of group-based microfinance interventions. Various studies conducted in different states of India concluded that SHG-based microcredit has a positive impact on the overall socio-economic development of the rural poor (Panda, 2008; Lalrinliana and Easwaran, 2006; Sarangi, 2003; Dwarakanath, 2002; Saundariya and Mahanta, 2001). The study conducted by SIDBI (2008) covering 10 states of India found increased household income, consumption (especially on food), employment opportunities and employment man-days, and high-cost education, but found weak evidence of equalising income distribution among the microcredit-participating households. Choudary and Vasudevaraj (2008) found that SHG-based microcredit programmes in India achieved significant outreach, reaching 10 million people with a saving accumulation of about Rs. 8 million. The national-level study conducted by NABARD and GTZ (Hannover, 2005) on the SHG-Bank Linkage Programme in India corroborated similar findings.
There is also weak evidence of the impact of group-based microcredit interventions. Jung (2004) questioned the effectiveness of microcredit programmes despite their rapid expansion. Similarly, Shamsuddoha and Azad (2004) did not find a substantial effect of microcredit in eliminating the poverty of poor people in Bangladesh. Again, the discussion by Hulme (2000) of the darker side of microcredit pushed researchers to go beyond the universal assumption of the positive impact of microcredit interventions. In this direction, this micro-level research aims at measuring the impact of SHG-based microfinance over a range of socio-economic characteristics of participating rural households in the coastal region of Orissa state in India.
Methodology
This study was conducted in Puri district in the state of Orissa, employing a multistage sampling method and a pool of 200 samples of cross-sectional data. In the first stage, Puri district of Orissa was selected purposively. In the second stage, Pipli and Nimapara blocks of Puri district were selected randomly. In the third stage, five villages from each block were randomly selected; and in the fourth stage, from each village, 10 households for the target group were selected randomly and 10 households for the control group were selected by the matching method. Data collection was done using pre-tested household schedules.
A comparison between the target households and the control households across various variables formed the basis of the study. The target group contained households whose family members were under Self-help Group based microcredit programmes, while the control group contained households that were neither under any Self-help Group based microcredit programme nor under any other group-based microcredit intervention such as Grameen Joint Liability Groups (JLG), Mutually Aided Cooperative Societies (MACS), etc. The comparison between target group and control group across various household characteristics is one of the simplest methods for quasi-experimental research and is the most suitable model in the absence of baseline information, where the control group serves as a counterfactual rather than a factual (ADB Evaluation Study). This method also controlled for exogenous variables in this study.
This study thus used a target-group-versus-control-group technique to understand the impact of Self-help Group based microcredit on the target households, with the control group serving as the counterfactual. Under this methodology, identifying the counterfactual was a difficult task, since control households had to be similar to the target households across a range of variables; in some cases, therefore, the counterfactual was selected on as many matching variables as possible.
Microcredit interventions have impacts at the individual, household and enterprise levels (Panda, 2009), but this study considered the household as the unit of analysis to measure the direct and indirect impact of Self-help Group based microcredit. Six household variables were selected: income, saving, expenditure, literacy, employment and migration. These variables were found suitable in past impact-assessment research conducted by SIDBI (2008), Panda (2008), Sarangi (2007), Hannover (2005) and Amin, Rai and Topa (2003).
A statistical significance test for the difference between the two means, i.e., between the target group and the control group, was conducted using a z-test because of the large sample size (Chandel, 1999). The value of z was computed as

$$z = \frac{\bar{X}_1 - \bar{X}_2}{SE(\bar{X}_1 - \bar{X}_2)},$$

where $\bar{X}_1$ is the mean of the target-group sample, $\bar{X}_2$ is the mean of the control-group sample, and $SE$ is the standard error of the difference between the means.

The Gini coefficient and Lorenz curve were employed to measure inequality of income distribution, the Gini coefficient being the measure of statistical dispersion most prominently used for inequality of income distribution (Panda, 2008). It is a ratio with values between 0 and 1: the numerator is the area between the Lorenz curve of the distribution and the uniform-distribution (perfect-equality) line; the denominator is the area under the uniform-distribution line. With X denoting the percentage cumulative frequency and Y the percentage cumulative total income, the Gini coefficient can be computed by the trapezoidal formula

$$G = 1 - \frac{1}{10000}\sum_{i=1}^{n} (X_i - X_{i-1})(Y_i + Y_{i-1}).$$
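For concreteness, the following minimal Python sketch (an illustration, not the study's own code; the sample incomes are hypothetical) computes the two statistics exactly as defined above: the two-sample z statistic and the trapezoidal Gini coefficient from the Lorenz curve.

```python
import numpy as np

def z_statistic(target, control):
    """Two-sample z-test: z = (mean1 - mean2) / SE(mean1 - mean2)."""
    se = np.sqrt(np.var(target, ddof=1) / len(target)
                 + np.var(control, ddof=1) / len(control))
    return (np.mean(target) - np.mean(control)) / se

def gini(incomes):
    """Gini coefficient via the trapezoidal rule on the Lorenz curve,
    with X = % cumulative frequency and Y = % cumulative income."""
    y = np.sort(np.asarray(incomes, dtype=float))
    X = np.append(0.0, 100.0 * np.arange(1, len(y) + 1) / len(y))
    Y = np.append(0.0, 100.0 * np.cumsum(y) / y.sum())
    # Area under the Lorenz curve by trapezoids; the equality-line area is 5000.
    lorenz_area = np.sum((X[1:] - X[:-1]) * (Y[1:] + Y[:-1]) / 2.0)
    return (5000.0 - lorenz_area) / 5000.0

rng = np.random.default_rng(0)
target = rng.normal(71557, 20000, 100)   # hypothetical target-group incomes
control = rng.normal(67896, 18000, 100)  # hypothetical control-group incomes
print(z_statistic(target, control), gini(target), gini(control))
```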
To understand how the probability of participation is determined by various factors, a probit regression model was used (Sarangi, 2007). Since participation in the microfinance programme depends on various endogenous factors, a non-linear binary-response model was required, and the probit model was suitable to address the issue of endogeneity.
The specification can be written as

$$Y_i = \alpha + \beta X_i,$$

where $Y = 1$ for participation and $Y = 0$ for non-participation, $\alpha$ is the constant and $\beta$ is the coefficient vector of the explanatory variables. Expanded over the individual regressors,

$$Y_i = \alpha + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \dots + \beta_n X_n,$$

where $\beta_1, \beta_2, \beta_3, \dots, \beta_n$ are the coefficients of the variables $X_1, X_2, X_3, \dots, X_n$, respectively.
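As an illustration of this specification, the sketch below fits a probit model with statsmodels; the variable names and the simulated data are hypothetical stand-ins for the study's household variables, not the actual survey data.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical household-level data: 1 = SHG participant, 0 = non-participant.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.normal(70000, 15000, n),   # annual income (Rs.)
    rng.normal(3000, 800, n),      # annual savings (Rs.)
    rng.normal(550, 100, n),       # employment days
    rng.integers(0, 6, n),         # literates per household
])
y = rng.integers(0, 2, n)          # participation dummy (toy labels)

# Latent-index form Y_i = alpha + beta * X_i, estimated by probit.
model = sm.Probit(y, sm.add_constant(X))
result = model.fit(disp=False)
print(result.summary())
```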
Results and discussion
The target households recorded an annual household income of Rs. 71,557.00, while the control households had Rs. 67,896.00 of income per household per annum. The microcredit intervention thus corresponded to 5.39 per cent higher annual income in the target households compared with the control households, which was statistically significant as evident from the z-value (Table 1). Since the study compared factual with counterfactual households, it could not map the actual growth in household income. The annual incomes of the target and control households force us to ask whether all the households under the SHG programmes were in fact drawn from low-income households, since the basic definition of microfinance emphasizes the provision of finance to low- and middle-income households; not being a longitudinal study and lacking a stable baseline, the current research could not address this issue. Data presented in Table 2 show that inequality in income distribution was not affected by the microcredit intervention (Figures 1 and 2), as the difference in the Gini coefficient between the target group and the control group was negligible. This establishes the weak impact of microcredit interventions on the equality of income distribution. However, Table 1 shows higher inconsistency and variability in annual household income in the target group than in the control group, as the coefficient of variation was higher in the target group. This result corroborates Panda (2008). The higher household income in the target households compared with the control households (Table 1) might have resulted from higher investment in productive assets: the asset position of the target households was 9.79 per cent higher than that of the control households, a highly significant difference, although the asset position of the target group was less consistent than that of the control group.
Another reason behind the higher household income of the microcredit beneficiary households could be higher employment generation and higher output resulting from the use of microcredit. The target households recorded an annual average of 623.16 employment days, compared with 490.88 annual employment days for the control households (Table 1); that is, the microcredit client households had 26.95 per cent more annual employment days than the control households, a highly significant difference. The higher employment days in the target group were due to the increased operational capacity of farming and micro-enterprises. The increased operational capacity demanded more employment days and employees, which in turn increased the employability of previously non-employed household members. So the higher employment days resulted both from increased employment days of existing working family members and from the employment of other family members, following higher capacity utilisation and the addition and diversification of existing businesses (including farming as a business). The inconsistency and variability of employment days were lower in the target group than in the control group, as evident from the coefficients of variation in Table 1. The increased employment days resulting from the microcredit programme also reduced migration significantly: the number of family members migrating per annum was 45.26 per cent lower in the target group than in the control group, a statistically highly significant difference. However, the higher coefficient of variation in the target group signifies greater inconsistency and variability in the number of migrating family members per household in the target group than in the control group.
The major objective of the Self-help Group based microcredit programme was saving first and then the provision of credit. The members of the groups regularly contributed a monthly saving of Rs. 10 to Rs. 20. This increased the savings of the target clients in a way not observed for non-client households. The monthly saving habit of the client households also fostered saving in commercial banks, post offices and other avenues apart from the monthly group saving, which in turn increased the annual savings of the target households relative to the control households. Data presented in Table 1 show that the annual savings of the target households were Rs. 3,749.47 against Rs. 2,309.12 for the control households: the target households recorded 62.38 per cent higher savings, a highly significant difference. Since all participating households must contribute savings, the inconsistency and variability of savings were greatly reduced in the target group compared with the control group.
The microcredit intervention also led to a 56.10 per cent higher number of literates per household in the target group than in the control group, which was statistically highly significant (Table 1). Participation in the microcredit groups enhanced the literacy status of the clients, who in turn catalysed improved literacy among the other members of their households. The probit results presented in Table 3 show the determinants, and also the consequences, of participation (Sarangi, 2007). Participation was positively correlated with household income, though not very significantly. However, household savings, employment days and literacy were significantly and positively correlated with the probability of participation in the Self-help Group based microcredit programme. Migration days and the number of migrating family members were negatively correlated with the probability of participation, as evident from the negative coefficients in Table 3. This shows that participation in the Self-help Group based microcredit programme reduces migration and increases savings, employment, literacy and income of the participating households.
Conclusion
The Self-help Group based microcredit interventions in the coastal district of Puri in Orissa State of India had a positive impact on participating rural households. Household income was 5.39 per cent higher in the target households than in the control households. The increased saving habit instilled by Self-help Group principles led to 62.38 per cent higher annual household saving in the target households. Similarly, annual household employment days and the number of literates were higher by 26.95 per cent and 56.10 per cent, respectively, in the target group. The asset position was also 9.79 per cent higher in the target group than in the control group, and the target group experienced 45.26 per cent fewer family members migrating per household per annum. However, the study found only weak evidence of any impact of the Self-help Group based microfinance programme on the equality of income distribution across households.
The probability of participation was strongly determined by the savings, employment days, migration days and number of literates of the household. Income, savings, employment days and number of literates were positively correlated, while migration days and the number of migrating family members were negatively correlated, with the probability of participation in the Self-help Group based microcredit programme.
Scope for further research
Being a quantitative study employing closed-ended information collected through an interview schedule, this study can only tell 'what' the impact is but remains silent on 'why' and 'how'. It therefore invites further probing by researchers who can design suitable qualitative methods to study the 'why' and 'how' of the impact. There are also variables that this study did not include owing to its specific objectives and to time and resource limitations; future impact-assessment research could consider variables such as women's empowerment, household decision making and participation in Panchayati Raj Institutions (PRIs) by SHG members. Studies can also be conducted to measure the impact of SHGs on micro-enterprises and micro-entrepreneurship, taking the micro-enterprise as the unit of analysis. Since many of the above-mentioned studies conducted in India and abroad were qualitative in nature, the demand of the hour is quantitative studies with statistical and econometric tools.
Figure 2: Lorenz Curve for Annual Income Distribution in Control Group.
"Economics",
"Sociology"
] |
miR-484 suppresses proliferation and epithelial–mesenchymal transition by targeting ZEB1 and SMAD2 in cervical cancer cells
MicroRNAs (miRNAs) play important roles in cancer initiation and development. Epithelial–mesenchymal transition (EMT) is a form of cellular plasticity that is critical for embryonic development and metastasis. The purpose of this study was to determine the function and mechanism of miR-484 in the initiation and development of cervical cancer (CC). We determined the expression levels of miR-484 in cervical cancer tissues and cell lines by RT-qPCR. Prediction algorithms and an EGFP reporter assay were used to identify targets of miR-484. MTT assay, colony formation assay, flow cytometric analysis, transwell migration and invasion assays, and detection of EMT markers were employed to investigate the roles of miR-484 and its targets in the regulation of cell proliferation and the EMT process. We also used rescue experiments to confirm that miR-484 affects CC cells through direct regulation of its targets. We first found that miR-484 was down-regulated in cervical cancer tissues and cell lines compared with matched non-cancerous tissues or normal cervical keratinocytes. Further studies revealed that overexpression of miR-484 suppressed cell proliferation while promoting apoptosis. In addition, miR-484 suppressed the migration, invasion and EMT process of CC cells. The EGFP reporter assay showed that miR-484 binds to the ZEB1 and SMAD2 3′UTR regions and reduces their expression. The expression of miR-484 was inversely correlated with SMAD2/ZEB1, and SMAD2 and ZEB1 were positively correlated with each other, in cervical cancer tissues and cell lines. Furthermore, ectopic expression of ZEB1 or SMAD2 rescued the malignant behaviors suppressed by miR-484, suggesting that miR-484 down-regulates ZEB1 and SMAD2 to repress tumorigenic activities. We conclude that miR-484 inhibits cell proliferation and the EMT process by targeting both the ZEB1 and SMAD2 genes and functions as a tumor suppressor, and it may serve as a potential biomarker for cervical cancer.
Background
Cervical cancer (CC) is the fourth leading cause of cancer-related death among women; owing largely to delayed initial screening, it mainly occurs in developing countries and causes about 265,000 deaths every year worldwide [1]. Advances in CC therapies have improved treatment outcomes, yet the prognosis remains poor and a great number of patients die of metastasis. Although human papilloma virus (HPV) is the major risk factor for CC [2,3], independent alterations in tumour suppressor genes and oncogenes are also essential for the development of these cancers [4,5]. It is therefore crucial to identify specific molecules
and markers that contribute to understanding cervical carcinogenesis and to ascertaining diagnostic and treatment strategies. Recently, researchers have focused on the effects of miRNAs on CC, and many miRNAs have been found to play important roles in the initiation and development of CC [6][7][8].
MicroRNAs are a class of 18-25-nucleotide, highly conserved non-coding RNAs that post-transcriptionally regulate gene expression by binding to the 3′UTRs of their targets, thereby regulating a wide range of physiological and pathological processes including cell differentiation, proliferation, apoptosis, invasion and migration [9][10][11][12]. In addition, growing evidence indicates that miRNAs are aberrantly expressed in human cancers and may function as tumor suppressors or oncogenes [13]. miR-484 is located on chromosome 16, and little is known about its expression and functions in cancers. Yang et al. [14] reported that miR-484 is overexpressed in premalignant lesions of hepatocellular carcinoma (HCC) and can promote hepatocyte transformation and hepatoma development in two hepatocyte orthotopic transplantation models. Until now, however, the role and mechanism of miR-484 in CC cells have remained unclear.
Epithelial-mesenchymal transition (EMT) is an essential requirement for cancer invasion and metastasis [15][16][17]. The transcription factors Snail, Slug, Twist and zinc finger E-box-binding homeobox (ZEB) play vital roles in the initiation of the EMT process. Recent reports have shown that the miR-200 family and other miRNAs regulate EMT by targeting these transcription factors [18][19][20]. The ZEB family factors (ZEB1 and ZEB2) are transcriptional repressors comprising two widely separated clusters of C2H2-type zinc fingers that bind to paired CAGGTA/G E-box-like promoter elements. These factors promote EMT by repressing the expression of E-cadherin [21][22][23] and are important intracellular mediators of TGFβ-induced EMT. Over the past few years, ZEB1 has increasingly been considered an important contributor to malignancies including endometrioid cancer [24], breast cancer [25], lung adenocarcinoma [26] and cervical cancer [27]. On the one hand, miRNAs such as the miR-200 family can directly bind to the 3′UTR of ZEB mRNA to down-regulate its expression and influence epithelial differentiation [28,29]. On the other hand, SMAD proteins have been shown to act directly on the promoter of the ZEB factors and thereby indirectly regulate the establishment and maintenance of EMT [30,31].
In this report, we demonstrate that miR-484 is down-regulated in cervical cancer tissues and cell lines, and that overexpression of miR-484 inhibits cell proliferation and viability, promotes apoptosis, and suppresses the migration, invasion and EMT process of CC cells. Moreover, miR-484 was validated to bind directly to the 3′UTRs of the ZEB1 and SMAD2 transcripts, inhibiting their expression in CC cells. We also found that SMAD2 is an upstream regulator of ZEB1; therefore, miR-484 regulates the EMT process through both direct and indirect targeting of ZEB1. Collectively, our work provides the first evidence that miR-484 down-regulates ZEB1 and SMAD2 expression to repress malignant properties in CC cells. These findings may provide insights into the mechanisms underlying carcinogenesis and potential biomarkers for cervical cancer.
Human cervical cancer tissue specimens and cell lines
Fifteen CC tissues and the paired adjacent non-tumor cervical tissues were obtained from the cancer center of Sun Yat-sen University. The diagnosis was confirmed by pathological analysis. Written informed consent was obtained from each patient, and ethics approval for this work was granted by the Ethics Committee of Sun Yat-sen University. The cervical samples were classified by pathologists. The human CC cell lines HeLa, Caski and ME-180 were maintained in RPMI-1640 medium; C33A, SiHa and SW756 were maintained in MEM-α medium according to Ref. [32]. Primary cultures of normal cervical keratinocytes (NCx) were obtained from hysterectomy specimens removed for non-neoplastic disease unrelated to the cervix. Cell culture and determination of growth rates were carried out according to Ref. [33]. All cells were maintained in a humidified incubator with 5% carbon dioxide (CO2) at 37 °C.
Vector construction
To over-express miR-484, the primary miR-484 fragment was amplified from genomic DNA and cloned into the pcDNA3 vector between the BamHI and EcoRI sites. To block the function of miR-484, we purchased the 2′-O-methyl-modified antisense oligonucleotide of miR-484 (ASO-miR-484) and the scramble control oligonucleotide (ASO-NC) from GenePharma (Shanghai, China).
The gene encoding ZEB1/SMAD2 was amplified from the cDNA of HeLa cells, and the product was cloned into pcDNA3-Flag vector between EcoRI and XhoI sites. The shRNA for knocking down SMAD2 and ZEB1 were synthesized from GenePharma (Shanghai, China) and were annealed and cloned into pSilencer 2.1-neo vector (Ambion) between BamHI and HindIII sites.
The 3′UTR of ZEB1/SMAD2 (containing the predicted binding sites for miR-484) was amplified from the cDNA of HeLa cells and then was cloned into pcDNA3-EGFP vector between the BamHI and EcoRI sites (downstream of EGFP). The mutant 3′UTR of ZEB1/SMAD2 (five nucleotides were mutated in the miR-484 binding sites) was amplified from the construct (pcDNA3-EGFP/ZEB1 or pcDNA3-EGFP/SMAD2 3′UTR). All of the primers for PCR amplification and all the oligonucleotides for annealing are listed in Table 1.
Cell transfection
Transient transfection was performed in antibiotic-free Opti-MEM medium (Invitrogen) with the Lipofectamine 2000 reagent (Invitrogen, Carlsbad, CA) following the manufacturer's protocol.
RNA isolation and reverse transcription quantitative PCR (RT-qPCR)
Total RNA was extracted from cells using the TRIzol reagent (Invitrogen, CA) following the manufacturer's instructions. RNA concentrations were measured with a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and RNA was stored at −80 °C until further use. Expression of mature miRNAs and mRNAs was quantified by RT-qPCR using the SYBR Premix Ex Taq (Promega, Madison, WI). Special stem-loop primers were used for the miRNA reverse transcription (RT) reaction, and U6 small nuclear B noncoding RNA (RNU6B) was used as the endogenous control to normalize miRNA levels. The oligo(dT) primer was used for the RT reaction for gene expression, with β-actin as the endogenous control to normalize gene levels. All analyses were performed in triplicate, and relative expression was reported as 2^−ΔΔCt. The primers for RT and PCR are provided in Table 1.
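For reference, the 2^−ΔΔCt calculation used for relative quantification can be sketched as follows; the Ct values below are hypothetical, not the study's data.

```python
import numpy as np

def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method: the target-gene Ct is
    normalized to the endogenous control (e.g. RNU6B or beta-actin),
    then calibrated against the control sample."""
    d_ct_sample = np.mean(ct_target_sample) - np.mean(ct_ref_sample)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for miR-484 in tumor vs. adjacent tissue:
print(fold_change_ddct([27.1, 27.3, 27.2], [22.0, 22.1, 21.9],
                       [25.0, 25.2, 25.1], [22.1, 22.0, 22.0]))
```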
Fluorescent reporter assays
To verify the direct targeting relationship between miR-484 and the 3′UTR of ZEB1/SMAD2 mRNA, CC cells were co-transfected in 48-well plates with pcDNA3/pri-miR-484 or ASO-miR-484 and the reporter carrying the wild-type or mutant 3′UTR of ZEB1/SMAD2. The vector pDsRed2-N1 (Clontech, Mountain View, CA) expressing RFP (red fluorescent protein) was co-transfected as an internal control. Forty-eight hours after transfection, the cells were lysed in RIPA buffer, and the fluorescence intensities of EGFP and RFP were measured with an F-4500 fluorescence spectrophotometer (Hitachi, Tokyo, Japan).
Cell viability assay and colony formation assay
The viability of CC cells was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) assay. The absorbance was determined at 570 nm (A570) (Bio-Tek Instruments, Winooski, VT, USA) at 24, 48 and 72 h after transfection; details of the method follow Ref. [34]. For the colony formation assay, cells were seeded into 12-well plates at a density of 300 (HeLa) or 400 (C33A) cells per well 24 h post-transfection, with the medium changed every 3 days. After 11 or 13 days, the cells were stained with crystal violet, and colonies containing more than 50 cells were counted; the average number was used to evaluate colony-forming ability.
Cell cycle analysis and apoptosis assay by flow cytometry
Cell cycle analysis and the apoptosis assay were performed according to Refs. [6] and [35], respectively.
Cell migration and invasion assays
Migration and invasion were analyzed using 24-well Boyden chambers with an 8-μm-pore polycarbonate membrane (Corning, Cambridge, MA). Briefly, 8 × 10^5 cells were resuspended in culture medium without FBS and seeded in the upper chamber, which was then placed into a 24-well plate containing 800 μL of culture medium with 20% FBS. Approximately 48 h later, the cells were fixed with paraformaldehyde and stained with crystal violet. Cells that did not pass through the membrane were removed with a cotton swab, while cells that passed through the membrane were counted.
Western blot analysis
The detailed procedures for western blotting were described in a previous study [36]. The primary antibodies used in this study, against ZEB1, SMAD2, E-cadherin, cytokeratin, vimentin, N-cadherin, fibronectin and GAPDH, were obtained from Saier Co. (Tianjin, China). The secondary goat anti-rabbit antibodies were purchased from Sigma.
Immunohistochemistry
The tumor tissues were fixed in 4% formaldehyde for 24 h and sent to Tangshan People's Hospital for immunohistochemistry.
Statistical analyses
All the data are presented as the mean ± SD. Each experiment was performed at least three times, and the analysis was performed using paired t test. p ≤ 0.05 was considered statistically significant (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).
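As a minimal illustration of the paired t test and significance-star convention described above (the viability readings are hypothetical, not the study's measurements):

```python
from scipy import stats

# Hypothetical paired measurements (e.g. A570 viability: control vs. miR-484-treated wells).
control = [0.82, 0.79, 0.85]
treated = [0.61, 0.58, 0.64]

t_stat, p_value = stats.ttest_rel(control, treated)   # paired t test
stars = "".join("*" for threshold in (0.05, 0.01, 0.001, 0.0001) if p_value < threshold)
print(f"t = {t_stat:.2f}, p = {p_value:.4f} {stars or 'ns'}")
```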
miR-484 is down-regulated in cervical cancer tissues
We first examined the expression of miR-484 in 15 pairs of cervical cancer tissues and the adjacent noncancerous tissues by RT-qPCR. The results showed that miR-484 was generally down-regulated in cervical cancer tissues compared with the adjacent noncancerous tissues (Fig. 1a).
miR-484 inhibits cell growth, suppresses cell cycle progression and promotes apoptosis in cervical cancer cells
First, we tested the efficiency of the plasmid pcDNA3/pri-miR-484 and of ASO-miR-484 in HeLa and C33A cells by RT-qPCR. The results showed that both the plasmid and ASO-miR-484 efficiently changed miR-484 expression (Fig. 1b). Next, MTT assays were used to test cell viability at 24, 48 and 72 h after transfection with pcDNA3/pri-miR-484 or ASO-miR-484. The results showed that miR-484 decreased, whereas ASO-miR-484 increased, cell viability in both HeLa and C33A cells (Fig. 1c). Meanwhile, a colony formation assay was performed to test the effect of miR-484 on proliferation: overexpression of miR-484 suppressed, whereas ASO-miR-484 increased, the colony formation rate (Fig. 1d). Taken together, these results indicate that miR-484 suppresses the proliferation of HeLa and C33A cells.
To investigate the mechanisms underlying this regulation of cell growth, we examined the alterations in cell cycle progression and apoptosis caused by miR-484 in HeLa cells using flow cytometry. As shown in Fig. 2a, overexpression of miR-484 in HeLa cells increased the percentage of cells in the G1 phase from 66.69 to 75.00% and decreased the percentage of cells in the G2/M phase from 13.8 to 7.87% (Fig. 2b). The proliferation index of the miR-484-treated cells was markedly lower than that of the negative control (Fig. 2b). In contrast, inhibition of miR-484 by antisense oligonucleotides in HeLa cells increased the percentage of cells in the G2/M phase from 13.85 to 21.90% and decreased the percentage of cells in the G1 phase from 65.32 to 56.87% (Fig. 2b), and the proliferation index of the ASO-miR-484-treated HeLa cells was increased compared with the ASO control (Fig. 2b). In addition, flow cytometry showed that miR-484 promoted apoptosis, while ASO-miR-484 markedly inhibited apoptosis in CC cells (Fig. 2c). Taken together, these results indicate that miR-484 suppresses cell growth by delaying both the G1-to-S and S-to-G2 phase transitions and by promoting apoptosis in CC cells.
miR-484 suppresses the migration and invasion of cervical cancer cells and inhibits the EMT process
To explore the effects of miR-484 on the migration and invasion of CC cells, transwell migration and invasion assays were performed, with the transwell membrane coated with Matrigel in the invasion assay. Overexpression of miR-484 significantly suppressed the migration ability by approximately 59.8 and 43.7% in HeLa and C33A cells, respectively, while blocking miR-484 increased the migration ability by approximately 1.7- and 1.9-fold (Fig. 3a). Overexpression of miR-484 suppressed the invasion ability by approximately 52.1 and 44% in HeLa and C33A cells, while ASO-miR-484 increased the invasion ability by 1.6- and 1.7-fold, respectively (Fig. 3b).
It has been reported that EMT is an important mechanism underlying migration and invasion [22]. During the transition, the expression of epithelial markers that enhance cell-cell contact decreases, while the expression of mesenchymal markers increases [17]. We therefore tested the expression of molecular markers to clarify the effects of miR-484 on the EMT process. As shown in Fig. 3c, overexpression of miR-484 increased the protein levels of epithelial markers (E-cadherin and cytokeratin) but decreased those of mesenchymal markers (vimentin, N-cadherin and fibronectin) in both HeLa and C33A cells. By contrast, ASO-miR-484 decreased the epithelial markers and increased the mesenchymal markers. Importantly, RT-qPCR showed that miR-484 decreased the expression of the transcription factors Snail, Slug, Twist and ZEB1, which play vital roles in the initiation of the EMT process (Fig. 3d); upon modulation of miR-484, the expression of ZEB1 showed the greatest alteration. In summary, these results demonstrate that miR-484 suppresses migration and invasion and inhibits the EMT process in CC cells.
miR-484 targets and down-regulates ZEB1 and SMAD2 expression
As miR-484 suppresses cervical cancer cell growth, migration and invasion, it is important to understand which direct targets are responsible for the observed phenotypes. We used three prediction algorithms (TargetScan, miRecords and PITA) to predict common targets of miR-484. There were 258 candidate targets shared by all three databases (the overlapping fraction), of which only four genes are involved in the regulation of the EMT process (Fig. 4a). Based on our data (Fig. 3d) and previous reports, we chose ZEB1 and SMAD2 for further study. ZEB1 usually acts as a key transcription factor that can induce EMT, and our data had shown that ZEB1 expression was altered upon modulation of miR-484 (Fig. 3d), suggesting that ZEB1 may be a bona fide target of miR-484. Moreover, previous work in our lab demonstrated that SMAD2 promotes cell growth, migration, invasion and EMT in human CC cell lines [6], and other reports have revealed that Smads interact with the ZEB promoter [30,31]. Therefore, we chose SMAD2 as a second candidate target of miR-484.
To confirm whether SMAD2 and ZEB1 are direct targets of miR-484 in human CC cells, we used an EGFP reporter system in which the wild-type SMAD2/ZEB1 3′UTR or a mutant SMAD2/ZEB1 3′UTR was cloned downstream of the EGFP reporter gene (Fig. 4b). Co-transfection was performed with an RFP reporter as a transfection normalizer and with either pri-miR-484 or ASO-miR-484 in HeLa cells, and fluorescence intensity was determined after 48 h. As shown in Fig. 4c, overexpression of miR-484 decreased, while ASO-miR-484 increased, the fluorescence intensity of the wild-type ZEB1/SMAD2 3′UTR reporter. In contrast, neither overexpression nor inhibition of miR-484 affected the fluorescence intensity of the mutant ZEB1/SMAD2 3′UTR reporter (Fig. 4c).
We also explored the effect of miR-484 on endogenous ZEB1/SMAD2 protein expression by western blot. Overexpression of miR-484 decreased, while ASO-miR-484 increased, the expression of ZEB1/SMAD2 (Fig. 4d). Thus, the results demonstrate that miR-484 directly targets the 3′UTRs of ZEB1/SMAD2 and down-regulates their protein expression in CC cells.
Interestingly, we observed that overexpression of SMAD2 led to an increase in ZEB1 protein level, whereas knockdown of SMAD2 prevented up-regulation of ZEB1 (Fig. 4e). This indicates that SMAD2 is an upstream regulator of ZEB1 and regulates its expression. Together, these results demonstrate that miR-484 directly targets the 3′UTRs of the ZEB1 and SMAD2 mRNAs and down-regulates their expression simultaneously.
ZEB1 and SMAD2 rescue the suppression of the malignant behavior induced by miR-484
Based on our results, we asked whether the effect of miR-484 on CC cells is mediated by its down-regulation of SMAD2 and ZEB1 expression, and performed rescue experiments to address this question. First, we co-transfected miR-484 with a SMAD2/ZEB1 expression plasmid lacking the 3′UTR, whose activity had previously been verified (Fig. 5a), and confirmed that overexpression of SMAD2/ZEB1 rescued the decrease in SMAD2/ZEB1 protein levels caused by miR-484 (Fig. 5a). We then performed a series of functional rescue experiments. As expected, restoration of SMAD2/ZEB1 expression largely blocked the inhibitory effects of miR-484 on cell viability (Fig. 5b), colony formation rate (Fig. 5c) and cell cycle progression (Fig. 6a), and reversed the apoptosis induced by miR-484 (Fig. 6b). In addition, restoration of SMAD2/ZEB1 counteracted the suppression of migration and invasion by miR-484 (Fig. 7a, b), as well as the inhibition of EMT induced by miR-484 in CC cells (Fig. 7c).
To confirm, in an inverse way, the deduction that the effect of miR-484 on CC cells is mediated by its down-regulation of SMAD2 and ZEB1 expression, ASO-miR-484 was co-transfected with pshR-ZEB1/SMAD2 into HeLa and C33A cells. As anticipated, the effects of miR-484 blockade on cell growth were significantly impaired when ZEB1/SMAD2 was suppressed (Fig. 8a, b). Furthermore, the expression of EMT markers in the miR-484-blocked cells was restored to normal levels after knockdown of ZEB1/SMAD2 (Fig. 8c). In conclusion, all these data indicate that SMAD2 and ZEB1 are functional targets of miR-484 in cervical cancer cells.

Fig. 3 miR-484 suppresses the migration and invasion of CC cells and down-regulates the EMT process. (a, b) Transwell migration and invasion assays 48 h after transfection, with representative views of migratory or invasive cells on the membrane (×20); (c) western blots of EMT-associated markers 48 h after transfection; (d) RT-qPCR of the EMT transcription factors ZEB1, Snail, Slug and Twist2 in HeLa cells transfected with miR-484 or the control vector.

Fig. 4 miR-484 directly targets SMAD2/ZEB1 and down-regulates their expression. (a) Venn diagram of candidate target genes shared by the three databases; (b) predicted miR-484 binding sites in the SMAD2 and ZEB1 3′UTRs and the introduced mutations; (c) EGFP reporter assays in HeLa or C33A cells 48 h after co-transfection; (d) western blots of SMAD2 and ZEB1 after modulation of miR-484; (e) western blot of the effect of SMAD2 on ZEB1 expression.
miR-484 is inversely correlated with SMAD2 and ZEB1 expressions in cervical cancer tissues and cell lines
To further explore the expression levels of ZEB1 and SMAD2 in cervical cancer, we performed RT-qPCR to examine their expression in the 15 pairs of CC tissues and adjacent non-tumor tissues as well as a panel of cervical cancer cell lines. ZEB1 and SMAD2 were both up-regulated in cancer tissues compared with the adjacent non-tumor tissues, opposite to the expression pattern of miR-484 (Fig. 9a). By Pearson's correlation analysis, miR-484 showed an inverse correlation with the expression of ZEB1 (r = −0.70, p < 0.01) and SMAD2 mRNA (r = −0.65, p < 0.001) (Fig. 9b). Interestingly, the expression of ZEB1 and SMAD2 showed a positive correlation with each other (r = 0.61, p < 0.05) (Fig. 9b). We also performed immunohistochemistry (IHC) to determine the expression of ZEB1 and SMAD2 in clinical cervical samples; the results further demonstrated that both were up-regulated in cervical cancer tissues compared with the normal cervix (Fig. 9e).

Fig. 5 The ectopic expression of SMAD2 and ZEB1 counteracts the inhibition of cell proliferation induced by miR-484. (a) Western blot of SMAD2/ZEB1 restoration by pFlag-SMAD2/ZEB1 (lacking the 3′UTR) in the presence of miR-484; (b) MTT assay of cell viability 48 h after transfection; (c) colony formation assays.
We also determined the expression of miR-484, ZEB1 and SMAD2 by RT-qPCR and western blotting in a panel of cervical cancer cell lines as well as normal cervical keratinocytes. These analyses revealed a significant inverse correlation between miR-484 and ZEB1/SMAD2 expression levels across the cervical cancer cell lines (Fig. 9c, d); the expression of ZEB1/SMAD2 changed most sharply in HeLa cells and least sharply in C33A cells. The expression of miR-484 and SMAD2/ZEB1 in both tissues and cell lines thus indicated that miR-484 is inversely correlated with SMAD2/ZEB1, and that SMAD2 and ZEB1 are positively correlated with each other.
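For concreteness, a Pearson's correlation of the kind reported above can be computed as follows; the expression values here are hypothetical placeholders, not the study's measurements.

```python
from scipy import stats

# Hypothetical per-tissue relative expression values (n = 15 pairs).
mir484 = [0.4, 0.6, 0.3, 0.8, 0.5, 0.2, 0.7, 0.9, 0.35, 0.55, 0.45, 0.65, 0.25, 0.75, 0.5]
zeb1   = [2.1, 1.5, 2.6, 1.1, 1.8, 2.9, 1.3, 0.9, 2.4, 1.7, 2.0, 1.4, 2.7, 1.2, 1.9]

r, p = stats.pearsonr(mir484, zeb1)   # Pearson's correlation coefficient
print(f"r = {r:.2f}, p = {p:.3g}")    # a negative r indicates inverse correlation
```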
In conclusion, miR-484 regulates cell proliferation and the EMT process through both direct and indirect targeting of ZEB1; the regulatory network reported by this work is illustrated in Fig. 10, with ZEB1 as the central molecule of the pathway. On the one hand, miR-484 inhibits cell proliferation and the EMT process by directly targeting ZEB1; on the other hand, miR-484 suppresses cell growth and EMT by directly targeting SMAD2, an important upstream regulator of ZEB1 (Fig. 10). These results indicate that miR-484 exerts multiple levels of regulation on ZEB1 expression, resulting in the inhibition of cell growth and EMT.

Fig. 6 The ectopic expression of SMAD2 and ZEB1 counteracts the inhibition of cell cycle progression and the promotion of apoptosis induced by miR-484. (a) Cell cycle progression of HeLa cells analyzed by flow cytometry; (b) apoptosis of HeLa cells monitored by Annexin V-PI double staining and flow cytometry 24 h after transfection.
Discussion
miRNAs have been shown to be important regulators of a variety of biological processes, and their aberrant expression is relevant to cancer initiation and development. In recent years, miRNAs have been reported as potential biomarkers and therapeutic targets in human cancers [37]. Understanding the role of miRNAs in cervical cancer will provide a theoretical basis for miRNA-specific personalized treatment and molecular-targeted therapy.
miR-484 was originally discovered to be associated with resistance to chemotherapeutic agents in cancer [38]. More recently, aberrant expression of miR-484 has also been described in high-fat-diet-induced tumours, with fluctuating miR-484 overexpression during feeding [39]. Ectopic expression of miR-484 initiates tumourigenesis and malignant transformation in HCC through synergistic activation of the TGF-β/Gli and NF-κB/IFN-I pathways [14]. This suggests that miR-484 may play a significant role in tumor metastasis, including the EMT process. To our knowledge, a functional study of miR-484 in cervical cancer has been lacking. Here, we demonstrated that miR-484 is down-regulated in cervical cancer tissues compared with adjacent non-tumor tissues, which differs from the finding in HCC. These differences may be due to the different cancer types and phases, and the mechanisms need to be elucidated in the future; for example, the miR-200 family is down-regulated and inhibits local invasion in breast cancer but enhances spontaneous metastasis and colonization in lung adenocarcinoma [40]. Functional analyses revealed that overexpression of miR-484 suppressed the malignant behavior of cervical cancer cells. Specifically, miR-484 overexpression significantly inhibited cell proliferation, migration, invasion and EMT, whereas ASO-miR-484 enhanced these oncogenic features. In addition, western blotting indicated that the expression of EMT markers, including E-cadherin, cytokeratin, vimentin, N-cadherin and fibronectin, was significantly modulated by miR-484, consistent with an effect on the EMT process.

Fig. 7 The ectopic expression of SMAD2 and ZEB1 counteracts the inhibition of migration, invasion and EMT progression induced by miR-484. (a, b) Transwell migration and invasion assays and three-dimensional Matrigel culture; (c) western blots of EMT-associated markers 48 h after transfection.

Fig. 8 Knockdown of SMAD2/ZEB1 abrogates the effects induced by ASO-miR-484. (a) MTT viability assay 48 h after co-transfection with ASO-miR-484 and pshR-SMAD2/ZEB1 or the control vector; (b) colony formation assays; (c) western blots of the EMT markers E-cadherin and vimentin.

Fig. 9 The inverse correlation between the expression of miR-484 and ZEB1/SMAD2 in cervical cancer tissues and cell lines. (a) RT-qPCR of SMAD2 and ZEB1 in 15 pairs of human cervical cancer tissues and adjacent noncancerous tissues, with U6 snRNA as the internal control; (b) Pearson's correlation analysis of miR-484 against ZEB1 and SMAD2; (c, d) RT-qPCR and western blots of miR-484, SMAD2 and ZEB1 in cervical cancer cell lines compared with NCx; (e) representative IHC images of SMAD2 and ZEB1 in cervical cancer and normal cervix (×20).

miRNAs generally play their roles by regulating their target genes. To better understand the role of miR-484 in CC, we applied bioinformatics analyses, which predicted that the 3′UTRs of SMAD2 and ZEB1 contain miR-484 binding sites. The EGFP reporter assay demonstrated that miR-484 directly targets the SMAD2 and ZEB1 transcripts, and western blot analyses showed decreased expression of ZEB1 and SMAD2 upon overexpression of miR-484 and vice versa. Functionally, overexpression of miR-484 inhibited cell proliferation, migration, invasion and EMT, while ectopic expression of SMAD2/ZEB1 abrogated the inhibitory effects of miR-484 on these tumorigenic features in cervical cancer cells. Meanwhile, knocking down SMAD2/ZEB1 abrogated the tumorigenic effects induced by ASO-miR-484. Thus, we conclude that miR-484 may function as a tumor suppressor by down-regulating ZEB1 and SMAD2 expression in cervical cancer.
Up-regulation of ZEB1 expression plays an important role in progression and metastasis in many cancers [24][25][26], including cervical cancer [27]. Our previous study showed that SMAD2 promotes cell growth by facilitating the G1/S phase transition and enhances cell migration and invasion by regulating EMT in CC cell lines [6]. In addition, regulation of SMAD2 by miRNAs has been implicated in TGFβ-induced EMT: for example, miR-18b targets SMAD2 and inhibits TGF-β1-induced differentiation of hair follicle stem cells into smooth muscle cells [41], and miR-27a inhibits colorectal carcinogenesis and progression [42]. As expected, ectopic expression of SMAD2 counteracted the inhibition of EMT induced by miR-484 in CC cells, and restoration of SMAD2/ZEB1 expression largely blocked the inhibitory influence of miR-484 on malignant behavior (Figs. 5, 6, 7). The expression of miR-484 and SMAD2/ZEB1 in tissues and cell lines demonstrated that miR-484 is negatively correlated with SMAD2/ZEB1 and that SMAD2 is positively correlated with ZEB1 (Fig. 9b, c). This further supports the conclusion that SMAD2 is an upstream regulator of ZEB1 and that both are regulated by miR-484. Previous work demonstrated that SMAD directly binds to the promoter of the ZEB factors and indirectly regulates EMT [30,31], which is in accordance with our results in CC. In summary, these results indicate that miR-484 inhibits the development of CC, at least partially through down-regulating SMAD2 and ZEB1 expression.
Besides ZEB, other transcription factors such as Snail, Slug and Twist are also master regulators of EMT [22]; they can be activated by TGFβ directly and repress the transcription of E-cadherin [23]. The TGFβ/ZEB/miR-200 double-negative feedback loop has been described and postulated to explain both the stability and the interchangeability of the epithelial versus mesenchymal phenotypes in MDCK cell systems [43,44]. In the present study, we found that miR-484 targets both the SMAD2 and ZEB1 genes simultaneously, with ZEB1 also regulated by SMAD2, which differs from the ZEB/miR-200 double-negative feedback loop. Interestingly, our results also showed that miR-484 affects the expression of Snail, Slug and Twist, which may be due to miR-484 acting on ZEB1 and thereby on these EMT-related regulators, although the detailed regulatory mechanism needs further investigation. These results indicate that miR-484 exerts multiple layers of regulation on EMT and highlight the importance of stringent regulation of transcription factors. They also fit the emerging concept that miRNAs fine-tune gene expression to precisely modulate essential biological processes and provide a mechanistic view of miRNA-based regulation of specific molecules and markers involved in cancer metastasis.
Despite these findings, the underlying mechanisms require further investigation; our next step is to identify the upstream regulatory mechanisms of miR-484 in CC cells.
Conclusions
In summary, our study demonstrates for the first time that miR-484 plays an important role in tumorigenesis and the EMT process in CC. miR-484 inhibits proliferation, promotes apoptosis, and suppresses migration, invasion and the EMT process through down-regulation of ZEB1 and SMAD2 expression, functioning as a tumor suppressor; in other words, miR-484 regulates the EMT process through both direct and indirect targeting of ZEB1. Altogether, our findings provide new insights into the roles of miR-484 in cervical carcinogenesis and imply its potential application as a new biomarker in cervical cancer.
Abbreviations: miRNAs: microRNAs; CC: cervical cancer; NCx: normal cervical keratinocytes; EMT: epithelial-mesenchymal transition; ZEB: zinc finger E-box-binding homeobox; ASO-miR-484: the 2′-O-methyl-modified antisense oligonucleotide of miR-484; ASO-NC: the scramble control oligonucleotides; MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide.
Authors' contributions
Conceived and designed the experiments: TH, HY. Performed the experiments: HY, LY. Analyzed the data: HY. Contributed reagents/materials: LM, XH, LW. Wrote the paper: HY. Revised the paper: TH, XH. All authors read and approved the final manuscript.
"Biology",
"Medicine"
] |
Context-aware SAR image ship detection and recognition network
With the development of deep learning, synthetic aperture radar (SAR) ship detection and recognition based on deep learning have gained widespread application and advancement. However, challenging issues remain, in two primary respects: first, the imaging mechanism of SAR introduces significant noise, making it difficult to separate background noise from ship target features in complex backgrounds such as ports and urban areas; second, the heterogeneous scales of ship targets make smaller targets susceptible to information loss, rendering them elusive to detection. In this article, we propose a context-aware one-stage ship detection network that exhibits heightened sensitivity to scale variations and robust resistance to noise interference. We introduce a local feature refinement module (LFRM), which uses multiple receptive fields of different sizes to extract local multi-scale information, followed by a two-branch channel-wise attention approach to obtain local cross-channel interactions. To minimize the effect of a complex background on the target, we design a global context aggregation module (GCAM) to enhance the feature representation of the target and suppress noise interference by acquiring long-range dependencies. Finally, we validate the effectiveness of our method on three publicly available SAR ship detection datasets: the SAR-Ship-Dataset, the high-resolution SAR images dataset (HRSID), and the SAR ship detection dataset (SSDD). The experimental results show that our method is highly competitive, with AP50 values of 96.3, 93.3, and 96.2% on the three datasets, respectively.
Introduction
SAR is an active microwave imaging sensor that can obtain high-resolution radar images under low-visibility weather conditions, and it is widely used in ship monitoring (Yang et al., 2018), geological exploration (Ghosh et al., 2021), and climate forecasting (Mateus et al., 2012). Distinguished from other remote sensing modalities, SAR stands out for its ability to operate day and night under all weather conditions and for its high resolution, making it a crucial tool for object detection and marine monitoring. Recently, scholars have shown significant interest in utilizing SAR for ship detection in ports and on the open sea, and its applications have proven vital in both military and civilian domains.
In the past decades, a series of traditional SAR ship detection methods emerged as research on SAR imaging technology and surface ship detection developed continuously and vigorously. The most representative traditional methods include global threshold-based methods, which determine a global threshold through statistical decision-making and then search for bright-spot targets in the whole SAR image (Eldhuset, 1996); adaptive threshold methods, which utilize the statistical distribution of sea clutter to determine an adaptive threshold with a constant false alarm probability (Rohling, 1983); and generalized likelihood ratio methods, which take into account the distributional properties of both the background clutter and the ship target (Iervolino and Guida, 2017). These traditional methods analyze ship features in SAR images based on interpretable theoretical justifications and well-established a priori knowledge, relying on manual feature extraction. When facing complex backgrounds and SAR images in which target pixels form a small proportion of the scene, manually predefined features prove inadequate for extracting effective target information and eliminating background noise interference. This results in a high false negative rate, preventing accurate identification of ship targets. The development of convolutional neural networks (CNNs) and the emergence of extensive SAR ship detection datasets, such as the SAR-Ship-Dataset (Wang et al., 2019), HRSID (Wei et al., 2020), and SSDD (Li et al., 2017), have led to the rapid development of remote-sensing-based SAR ship detection techniques, especially in target feature extraction.
Initially, driven by the large quantity of public SAR ship datasets, several deep learning-based general-purpose detectors were used directly for SAR ship detection tasks: two-stage detectors such as the region-based convolutional neural network (R-CNN; Girshick et al., 2014), Fast R-CNN (Girshick, 2015) and the representative Faster R-CNN (Ren et al., 2015), as well as single-stage detectors such as RetinaNet (Lin et al., 2017a), SSD (Liu et al., 2016), CenterNet (Zhou et al., 2019), and the YOLO series (Redmon et al., 2016; Redmon and Farhadi, 2017, 2018). These algorithms can automatically mine effective target features and no longer rely on manual extraction, but their performance is limited because they were originally designed as general-purpose object detectors for visible-light imagery. Subsequently, many scholars began to design deep networks specifically for ship target detection in SAR images. For example, Ma et al. (2022) proposed a ship detection method based on an attention mechanism and keypoint estimation: the method uses residual links and hierarchical features to extract multi-scale targets, then uses an attention mechanism to focus on target features and detects keypoints to solve the dense-arrangement problem. Regarding the multi-scale problem, Zhang et al. (2022) expanded the scope of the image perception region by acquiring multiple scale slices with different region sizes; in addition, they addressed false positives by calculating the distinctiveness between targets and background and by employing a multi-ensemble reasoning mechanism to merge confidence scores from multiple bounding boxes, which enhanced target feature extraction.
FIGURE 1. Examples of SAR images with complex backgrounds and different scales in the SAR ship dataset. The blue boxes show ships of different scale sizes, and the red boxes show the complex background noise interference to which the ships may be subjected.
In other work, researchers designed a Coordinate Attention Module (CoAM), embedding positional information into channels to enhance sensitivity to spatial details and strengthen the localization of ship targets; they also designed a receptive field increased module (RFIM), which employs multiple parallel convolutions to construct a spatial pyramid structure for acquiring multi-scale target information.
However, in practical applications, numerous challenging issues remain, as illustrated in Figure 1. On the one hand, owing to the coherent imaging principle of SAR, adjacent pixel values undergo random variations, producing speckle noise in the image. In scenarios such as coastal ports, islands, and regions with sea clutter, it may be difficult to extract valid information from SAR ship images, resulting in both missed detections and false positives. On the other hand, the multi-scale problem poses another challenge: the varying resolutions and morphological sizes of ship targets, whose pixel extent can range from a few to several hundred pixels, place higher demands on the network's multi-scale feature extraction.
First, to address the significant scale variations of ship targets, we design an LFRM that improves upon atrous spatial pyramid pooling (ASPP; Chen et al., 2017). Apart from the first layer, each atrous convolution layer receives and fuses the output of the previous layer through a residual link, and its output is concatenated with those of the other layers, effectively integrating information from different scales. Finally, by combining a dual-branch channel attention mechanism using global average pooling (GAP) and global max pooling (GMP), we achieve local cross-channel interactions. The overall network architecture employs a multi-level design with multiple detection heads to detect targets of different sizes, making it better suited to multi-scale targets.
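A minimal PyTorch sketch of this idea is given below. It is an illustration only: the dilation rates, channel widths and reduction ratio are assumptions, not the exact configuration of the proposed LFRM.

```python
import torch
import torch.nn as nn

class LFRMSketch(nn.Module):
    """Illustrative sketch of the LFRM idea: cascaded atrous convolutions
    whose inputs fuse the previous branch's output via a residual link,
    followed by a dual-branch (GAP + GMP) channel attention. Dilation
    rates and channel sizes are assumptions, not the paper's values."""
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        self.attn = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
        )

    def forward(self, x):
        outs, prev = [], None
        for i, branch in enumerate(self.branches):
            # From the second branch on, add a residual link from the previous output.
            inp = x if i == 0 else x + prev
            prev = branch(inp)
            outs.append(prev)
        fused = self.fuse(torch.cat(outs, dim=1))                # concatenate multi-scale features
        gap = self.attn(fused.mean(dim=(2, 3), keepdim=True))    # global average pooling branch
        gmp = self.attn(fused.amax(dim=(2, 3), keepdim=True))    # global max pooling branch
        return fused * torch.sigmoid(gap + gmp)                  # channel-wise reweighting

x = torch.randn(1, 64, 32, 32)
print(LFRMSketch(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```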
Second, to mitigate the impact of complex background noise on the target, we introduce the GCAM, which expands the network's receptive field by adaptively weighting features at different spatial positions. It leverages estimated long-range dependencies to obtain global semantic features, concentrating on the target's intrinsic characteristics to weaken background noise interference (a minimal sketch follows the contribution list below). Finally, we sequentially link and embed these two modules, together with a backbone network, into the Feature Pyramid Network (FPN; Lin et al., 2017b) structure, enabling multi-level, wide-angle perception of context. The main contributions of this paper are as follows:

• We propose a context-aware SAR image ship detection and recognition network (CANet) that effectively detects multi-scale targets through bottom-up and top-down pathways, equipped with multiple detection heads.

• We design the LFRM to extract local multi-scale information and the GCAM to aggregate global context and suppress background noise.

• We validate our method on three publicly available SAR ship detection datasets: the SAR-Ship-Dataset (Wang et al., 2019), HRSID (Wei et al., 2020), and SSDD (Li et al., 2017). Our method demonstrates outstanding performance, with detection accuracies of 96.3, 93.3, and 96.2%, respectively.
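The following is a minimal sketch of a global-context-style block consistent with the description above (a softmax spatial attention map estimates long-range dependencies, and the aggregated context is redistributed to all positions). It is an assumption-laden illustration, not the paper's exact GCAM design.

```python
import torch
import torch.nn as nn

class GCAMSketch(nn.Module):
    """Illustrative global-context block: estimate a spatial attention
    map over all positions, pool a single global context vector with it,
    transform that vector, and add it back to every position."""
    def __init__(self, channels):
        super().__init__()
        self.context = nn.Conv2d(channels, 1, 1)   # per-position attention logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.LayerNorm([channels // 4, 1, 1]), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        attn = self.context(x).view(b, 1, h * w).softmax(dim=-1)    # (B, 1, HW)
        ctx = torch.bmm(x.view(b, c, h * w), attn.transpose(1, 2))  # (B, C, 1)
        ctx = ctx.view(b, c, 1, 1)
        return x + self.transform(ctx)   # broadcast global context to all positions

x = torch.randn(1, 64, 32, 32)
print(GCAMSketch(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```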
Related work
SAR image ship target detection methods are mainly categorized into traditional methods and deep learning-based methods. The former defines ship target features manually and then searches for feature-matched ship targets in SAR images based on the predefined features. These methods can be categorized into three main groups: transform-domain methods (Schwegmann et al., 2016), threshold-based algorithms (Renga et al., 2018), and statistical feature distribution algorithms (Wang et al., 2013). Among them, the most representative is the constant false alarm rate (CFAR)-based method. It is based on a statistical model of sea clutter, which is affected by the ocean area and the wind field conditions; since the radar backscattering intensity varies across different wind field regions, a complex clutter edge environment forms at the junctions of different regions. Therefore, it is challenging to establish an accurate statistical model for a wide range of complex sea clutter. In addition, clutter modeling often requires complex mathematical theory and time-consuming manual involvement, which reduces the flexibility of the model and makes it difficult to detect ship targets effectively.
In recent years, convolutional neural networks (CNNs) have made great achievements in natural image object detection, with detection performance significantly better than traditional methods. Deep learning-based object detection methods for natural images fall mainly into two categories: single-stage and two-stage object detectors. Girshick et al. (2014) proposed the first two-stage target detection model, R-CNN, which employs the traditional selective search algorithm to generate about 2,000 candidate regions; these are fed into a CNN to extract features and classify the candidates, finally producing the detection results. Subsequently, inspired by SPPNet (He et al., 2015), Fast R-CNN (Girshick, 2015) was proposed to solve the slow detection speed of R-CNN: it extracts ROI features on the network feature map to avoid repeated feature computation, improving detection speed, and uses fully connected (FC) layers instead of the original SVM classifier to further improve classification performance. Ren et al. (2015) proposed Faster R-CNN, designing the RPN to replace the traditional selective search candidate region generation algorithm (Uijlings et al., 2013); the RPN uses a convolutional network to extract features and generate the positions of preselected boxes. This reduces the time burden of the selective search algorithm and nearly reaches the standard of real-time detection. Faster R-CNN (Ren et al., 2015) remains the mainstream representative of two-stage detectors, and its mature design scheme has been widely adopted by numerous scholars.
As more demanding real-time target detection tasks emerged, single-stage target detection developed rapidly. As the pioneer of single-stage target detectors, the YOLO series (Redmon et al., 2016; Redmon and Farhadi, 2017, 2018) directly treats object detection as a regression problem of target region position and category prediction; it outputs the positions and categories of target bounding boxes using only convolutional networks, meeting the requirement of real-time detection. Subsequently, YOLOv4 (Bochkovskiy et al., 2020) and YOLOv5 were proposed to strike a new balance between accuracy and speed in this series of algorithms, and were applied to more detection and recognition tasks. Another YOLO improvement, TPH-YOLO (Zhu et al., 2021), adds a tiny-target detection head to YOLOv5 to improve detection accuracy for tiny targets; with four prediction heads in total, it can mitigate the effects of large changes in target scale. It also replaces some convolutional blocks with transformer encoder blocks to capture global information and sufficient background semantic information. SSD (Liu et al., 2016) and RetinaNet (Lin et al., 2017a) are two other common single-stage detectors. The former directly uses convolutional layers to extract detection results from different feature maps and employs prior boxes with varying scales and aspect ratios to better match target shapes, distinguishing it from YOLO, which uses fully connected layers for detection. The latter proposes a new loss function as a more efficient alternative for dealing with class imbalance: the standard cross-entropy loss is reshaped to reduce the loss assigned to well-classified examples.
With the blooming of deep learning in the image field, CNN-based ship detection has become increasingly popular. The Dense Attention Pyramid Network (DAPN; Cui et al., 2019) embeds a convolutional block attention module (CBAM) into each level of the pyramid structure from the bottom up to enrich the semantic information of features at different scales and amplify their significance. CBAM fuses the features at all levels, and adaptive selection focuses on the scale features to further strengthen the detection and recognition of multi-scale targets. Also building on FPN (Lin et al., 2017b), Zhao et al. presented the attention receptive pyramid network (ARPN; Zhao et al., 2020), which fine-tunes the pyramid structure to generate candidate boxes at different pyramid levels. Asymmetric convolution and atrous convolution are used to obtain convolutional features in different directions to enhance the global context of the local region. Channel attention and spatial attention are then combined to reweight the extracted features, improving the significance of target features and suppressing noise interference; finally, these are connected laterally to each layer of the pyramid. Chaudhary et al. (2021) applied YOLOv3 (Redmon and Farhadi, 2018) directly to ship detection and achieved good results. Inspired by YOLO, Zhang and Zhang (2019) divided the original image into grid regions, with each grid independently responsible for detecting targets in its region; image features are then extracted through the backbone network for detection. In particular, the backbone network uses separable convolutions to reduce the network burden.
PPA-Net (Tang et al., 2023) took into consideration that attention mechanisms such as CBAM are tailored for natural images, overlooking the impact of speckle noise in SAR images on attention weight generation. Target salience information is introduced into the attention mechanism to obtain attention weights suitable for SAR images. First, three pooling operations with different region sizes are constructed to obtain parallel multi-scale branches, and activation functions then produce the final channel attention weights. Meanwhile, considering the mutual exclusivity between semantic and location information and avoiding simple feature cascade operations, the authors use two self-attention weights to adaptively regulate the fusion feature ratio. To enhance the practical value of SAR ship detection applications, Zhang et al. (2019) constructed a lightweight SAR ship detection network based on the depthwise separable convolution neural network (DS-CNN). They replaced traditional convolutions with DS-CNN, significantly improving detection speed with fewer parameters and making real-time detection tasks feasible. Similarly, to improve detection speed, Lite-YOLOv5 (Xu et al., 2022a) designed a lightweight stride module and pruned the model to create a lightweight detector; histogram and clustering methods were applied to maintain detection accuracy. Additionally, there are instance segmentation methods for SAR ships, such as the attention interaction and scale enhancement network (MAI-SE-Net; Zhang and Zhang, 2022a). This method models long-range dependencies to enhance global perception and uses feature recombination to generate high-resolution feature maps, improving detection of small targets. Zhang and Zhang (2022b) employed a dense sampling strategy, fusing features extracted by FPN at each layer and adding contextual information to the region of interest (ROI) to enhance information gain.
To address multi-scale object detection, HyperLi-Net (Zhang et al., 2020a) utilizes five improved internal modules, including multiple receptive fields, dilated convolution, attention mechanisms, and a feature pyramid, to extract multi-scale contextual information and enhance multi-scale detection accuracy. Xu et al. (2022b) exploited the polarimetric characteristics of SAR to enhance feature expression and fused multi-scale polarimetric features to obtain scale information. Zhang and Zhang (2020) proposed ShipDeNet-20, a lightweight one-stage SAR ship detection method. Because it uses depthwise separable convolutions with fewer layers and parameters instead of traditional convolutions, its detection speed and model size are superior to other detection methods. Meanwhile, to avoid losing detection accuracy, features of different depths are fused to enhance the contextual semantics, and feature maps of the same size are superimposed to improve feature expressiveness. Zhu et al. (2022) used the gradient density parameter g to construct the network's loss function in order to address the sparsity of ship targets and the imbalance between positive and negative ship samples; to prevent positive samples from dominating the global gradient, the gradient proportions of multiple samples are re-weighted. The authors also studied the effect of feature-level imbalance on multi-scale ship detection. To ensure that semantic information is not lost during multi-layer transmission, horizontal links integrate multi-level features and accelerate the information flow, so that detailed features and semantic features reach a balance; this avoids the loss of semantic information and detail that occurs when only adjacent-resolution information is considered.
To mitigate the impact of background noise on the target, the Balance Scene Learning Mechanism (BSLM; Zhang et al., 2020b) employs a generative adversarial network (GAN) to extract complex scene information from SAR, followed by a clustering method to differentiate between nearshore and offshore backgrounds, thus balancing the background. Similar balancing strategies are employed in various methods (Zhang et al., 2020c, 2021b). Additionally, some approaches use pixel-level processing to reduce background noise. Sun et al. (2023) used superpixels to reduce the impact of noise on the target. First, the image is segmented into pixel blocks of different sizes to obtain target features of different sizes and image understanding at different semantic levels. Then, the surrounding contrast feature region is dynamically selected according to the superpixel size, so that a smaller superpixel has a larger contrast region while a larger superpixel selects the features around itself for comparison. Finally, the superpixel features at different levels are fused for detection. Previous studies focused on extracting ship target features in the spatial domain, but Li et al. (2021) argued that spatial features alone cannot meet the requirements of high-precision detection, so they used the frequency domain to compensate for the shortcomings of the spatial domain. As in most methods, multi-scale spatial information of the ship target is obtained through hierarchical learning, and the invariant features of the target in the frequency domain are then obtained with the Fourier transform; finally, the features from the two domains are compactly fused to obtain a multi-dimensional representation of the target features. To better adapt to the differences among SAR images collected by different sensors, Zhao et al. (2022) proposed an adaptive learning strategy based on adversarial domains. Considering the different polarization modes and scattering intensities of SAR images, and in order to align instance-level objects and pixel-level features between different domains (images from different sensors), the concept of entropy is introduced as a feature weight coefficient to distinguish regions of different entropy. Since the entropy of uniform regions in SAR images is lower than that of non-uniform regions, adding entropy-based adversarial domain-adaptive learning to different layers of the backbone network can effectively handle the relationship between entropy and different receptive fields, so that different domains are aligned at the feature level as far as possible. Assigning different weights to regions of different entropy also helps distinguish the alignment results. To distinguish different instance-level target characteristics and achieve better alignment, a domain alignment compensation loss is constructed; to extract more precise feature information and access more uniquely representative instance features, the highest-scoring clustering result is used to calculate the class weights. Zhou et al. (2023) added an edge semantic branch to solve problems such as confusion in edge detection caused by overlapping targets, and used deeper convolutions with larger kernels to expand the learning of contextual edge semantics and decouple the learned rich features, which aids accurate localization of ship targets and prediction of detection boxes. In addition, considering that the receptive field extracted by a CNN is limited and cannot analyze context from a global perspective, a transformer framework with a multi-head attention mechanism is introduced to acquire global context features, enhancing long-range analysis and achieving better detection and recognition of large-scale targets.
Context-Aware Network
In this section, we detail the overall architecture of the network and other design-specific concepts with corresponding examples. The overall architecture of our approach is shown in Figure 2. Specifically, features are first extracted using CSPDarkNet53 as the backbone network. The input passes through two convolutional layers that downsample the data to 1/4 of the input size; the activation function used in the convolutional layers is the SiLU function. The SiLU function has a smoother curve near 0, controls the output between 0 and 1, and achieves better results than ReLU in some applications. Then, the feature extraction scheme of YOLOv5 is adopted to obtain three effective feature layers with different resolutions and channel numbers through multiple C3 modules, and the three feature layers are fed in parallel into the FPN structure composed of the LFRM and GCAM in series. The C3 module consists of three standard convolutional layers and multiple CSP bottlenecks. The CSP bottleneck mainly uses a residual structure, with one 1×1 convolution and one 3×3 convolution in the trunk; the residual branch is left untouched, and the input and the trunk output are directly combined. The C3 module uses the CSPNet (Wang C. Y. et al., 2020) network structure, which also employs residuals.
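To make the backbone description concrete, the following is a minimal PyTorch sketch of a CSP-style bottleneck as described above (one 1×1 and one 3×3 convolution in the trunk, with the untouched residual branch added back); class and parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvSiLU(nn.Module):
    """Convolution + BatchNorm + SiLU activation, as used throughout the backbone."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class CSPBottleneck(nn.Module):
    """Residual bottleneck: 1x1 conv then 3x3 conv in the trunk, plus identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.cv1 = ConvSiLU(channels, channels, k=1)
        self.cv2 = ConvSiLU(channels, channels, k=3)

    def forward(self, x):
        # The residual branch is left untouched and added to the trunk output.
        return x + self.cv2(self.cv1(x))
```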
We capture multi-scale features through the LFRM to better adapt to ship information at different scales, thus obtaining a more representative feature map. Then, long-range dependencies are captured by the GCAM to enhance the feature representation of the target and suppress noise interference. The following subsections present the details.
LFRM
Since ship targets in SAR images may have different scales in real applications, with some ships very large and others relatively small, the detection process is complicated. To address this problem, we designed the LFRM module as shown in Figure 3. The deep features $x = \{x_1, \ldots, x_i\}$ obtained from the backbone network are processed in parallel by a 1×1 convolutional layer and three atrous convolutions with rates of 3, 6, and 12 to obtain convolutional features at multiple scales.
After that, the feature map $b_i$ of each layer except the first is sequentially fused with the feature map $b_{i-1}$ of the previous layer and activated by convolution to obtain a new feature map $\tilde{b}_i$, which allows each layer to obtain a diversity of receptive fields.
$$\tilde{b}_i = \mathrm{Conv}(b_i \oplus b_{i-1})$$
To better fuse the information at different scales, the four obtained feature maps are finally concatenated along the channel dimension and fed into a convolutional layer to obtain a new multi-scale feature map $s_i$.
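The multi-scale branch just described can be sketched as follows. This is a minimal PyTorch illustration assuming that the fusion between adjacent branches is element-wise addition followed by activation (the exact fusion operator is not fully specified in the text), with layer names chosen for clarity.

```python
import torch
import torch.nn as nn

class MultiScaleBranch(nn.Module):
    """Parallel 1x1 conv and atrous convs (rates 3, 6, 12) with residual links
    between adjacent branches, followed by channel-wise concatenation."""
    def __init__(self, channels, rates=(3, 6, 12)):
        super().__init__()
        self.first = nn.Conv2d(channels, channels, kernel_size=1)
        self.atrous = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * (len(rates) + 1), channels, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x):
        outs = [self.act(self.first(x))]
        for conv in self.atrous:
            # Each branch receives and fuses the previous branch's output.
            outs.append(self.act(conv(x) + outs[-1]))
        # Concatenate along the channel dimension and project to the output map.
        return self.fuse(torch.cat(outs, dim=1))
```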
For the purpose of enhancing the generalization ability of the network, we improve ECA-Net (Wang Q. et al., 2020) by learning the correlation between channels and adaptively adjusting the channel weights. As shown in the lower part of Figure 3, we first perform global max pooling and global average pooling on the feature map $x_i$ to obtain two global feature descriptors, $m \in \mathbb{R}^{1\times1\times C}$ and $a \in \mathbb{R}^{1\times1\times C}$, where $C$ is the number of channels.
Cross-channel information interaction is accomplished by two one-dimensional convolutions, and the weight coefficients for each channel are then calculated by SoftMax normalization. Here $w_i$ is the result of the channel interactions, $w_i^j$ denotes the weights of the channel features, $y_i$ denotes the neighboring feature channels in one-dimensional space, $K$ is the kernel size computed by the formula below, $i$ denotes the channel index, and $j \in \mathbb{R}^K$.
The convolutional kernel size $K$ is self-adapted by a function that allows layers with more channels to interact across channels more often. Following ECA-Net, the adaptive kernel size is calculated as
$$K = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{\mathrm{odd}},$$
where $|t|_{\mathrm{odd}}$ is the nearest odd number to $t$, $C$ is the number of channels, and $\gamma$ and $b$ are constants.
Finally, the results of the two different pooling branches are superimposed along the channel dimension, the weight coefficients for each channel are obtained using SoftMax normalization, and $x_i$ is attentively weighted along the channel dimension.
The multi-scale feature $s$ is then overlaid with the feature map $p$ after local cross-channel interaction to obtain the final LFRM output.
Since using only GAP to extract global features does not capture detail information well, GMP is added to enhance the grasp of details; the two pooling branches complement each other to improve the extraction of local semantic features.
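The dual-branch channel attention can be sketched as below. The adaptive kernel size rule follows ECA-Net; the constants gamma = 2 and b = 1 are assumptions taken from that paper, not values stated here.

```python
import math
import torch
import torch.nn as nn

def adaptive_kernel_size(channels, gamma=2, b=1):
    """ECA-style rule: layers with more channels get a larger 1D kernel.
    Returns the nearest odd number to log2(C)/gamma + b/gamma."""
    t = int(abs(math.log2(channels) / gamma + b / gamma))
    return t if t % 2 else t + 1

class DualBranchChannelAttention(nn.Module):
    """GAP and GMP branches, each passed through a 1D convolution for local
    cross-channel interaction; their outputs are summed and normalized."""
    def __init__(self, channels):
        super().__init__()
        k = adaptive_kernel_size(channels)
        self.conv_avg = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.conv_max = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                      # x: (B, C, H, W)
        a = x.mean(dim=(2, 3))                 # global average pooling -> (B, C)
        m = x.amax(dim=(2, 3))                 # global max pooling -> (B, C)
        a = self.conv_avg(a.unsqueeze(1)).squeeze(1)
        m = self.conv_max(m.unsqueeze(1)).squeeze(1)
        w = torch.softmax(a + m, dim=1)        # per-channel weight coefficients
        return x * w.unsqueeze(-1).unsqueeze(-1)
```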
GCAM
To obtain long-range dependent features, and thus the global context information needed to enhance the intrinsic target characteristics and remove the interference of complex background noise, we design the GCAM module as shown in Figure 4. We take the multi-scale information obtained from the LFRM module as input to obtain long-range context information about the local features.
As shown in Figure 4, given the output of the LFRM, $P = \{P_1, \ldots, P_i\}$, as input, where $P_i \in \mathbb{R}^{1\times C}$ is the feature vector at pixel $i$ with $C$ channels, the global context feature $f_i$ is obtained by estimating the relationship between the current pixel and all pixels. The weight coefficients are then matrix-multiplied with the local features to aggregate the contextual information. With the aim of further extracting channel dependencies while reducing the number of parameters and the computational complexity, the acquisition of spatially distant effective features is augmented by transformations, drawing on the Non-local (Wang et al., 2018) method.
Both φ and θ are realized by a 1 × 1 convolution, and layer normalization (LN) and SiLU activation layers are added after the first convolution to improve the generalization of the model. Finally, the transformed feature $f_i$ is element-wise added to the multi-scale local features, yielding the GCAM output $\hat{f}_i$, which aggregates global contextual features at each pixel.
$$\hat{f}_i = f_i + p_i$$
The GCAM module selectively acquires distant features for each pixel based on the correlation between spatially distant pixels, which enhances the modeling capability of the feature representation and reduces background noise interference. Meanwhile, the module can easily be inserted into various network models to obtain global context information.
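A minimal sketch of a global-context block in the spirit of the GCAM, following the Non-local formulation cited above; the bottleneck transform (1×1 convolution, LayerNorm, SiLU, 1×1 convolution) and the reduction ratio are assumptions based on the description, not the authors' exact design.

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """Estimates a global context vector from pixel-wise attention weights,
    transforms it, and adds it back to every spatial position."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.weight = nn.Conv2d(channels, 1, kernel_size=1)   # W_k: per-pixel score
        hidden = channels // reduction
        self.transform = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.LayerNorm([hidden, 1, 1]),
            nn.SiLU(),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, p):                          # p: (B, C, H, W)
        b, c, h, w = p.shape
        scores = self.weight(p).view(b, 1, h * w)  # (B, 1, HW)
        attn = torch.softmax(scores, dim=-1)       # weights over all pixels
        feats = p.view(b, c, h * w)                # (B, C, HW)
        context = torch.einsum("bcn,bon->bco", feats, attn).view(b, c, 1, 1)
        f = self.transform(context)                # (B, C, 1, 1) global context
        return p + f                               # broadcast element-wise addition
```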
Experimental results and analysis
In order to fully verify the validity of our proposed method, we test it on authoritative public datasets and compare it with several other advanced methods. In addition, to demonstrate the effectiveness of the proposed LFRM and GCAM, we design ablation experiments to evaluate each component. Finally, we provide a comprehensive analysis of the experimental results and time complexity.
Training configurations and datasets
All of our experiments are conducted on a GPU workstation equipped with an NVIDIA RTX 3090 with 24 GB of video memory, running Ubuntu 21.0 with CUDA 10.0 and cuDNN 7.0. The language and framework used to build the model are Python 3.7 and PyTorch 1.1.0, respectively. To achieve fast convergence during training with the AdamW optimizer, we set the initial learning rate to 1e-3 and employ a cosine annealing strategy to adjust it. To ensure experimental fairness and consistency, all methods involved in the experiments are trained and validated under the same data benchmark. The batch size is set to 16 and the maximum number of iterations to 300 to find the best model parameters.
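For reproducibility, a sketch of the training configuration described above (AdamW, initial learning rate 1e-3, cosine annealing, batch size 16, 300 iterations); `model` and `train_loader` are placeholders assumed to be defined elsewhere.

```python
import torch

# model and train_loader are assumed to be defined elsewhere.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    for images, targets in train_loader:   # batches of 16 images
        loss = model(images, targets)      # classification + confidence + CIoU losses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                       # cosine learning-rate decay per epoch
```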
The loss function used for model training consists of a classification loss, a confidence loss, and a regression (localization) loss. The former two use the classical cross-entropy (CE) loss, while the latter adopts the Complete-IoU (CIoU) loss.
The cross-entropy loss $L_{CE}$ is expressed as
$$L_{CE} = -\sum_{i=1}^{C} p(x_i)\,\log q(x_i),$$
where $p(x_i)$ is the probability distribution of the true value, $q(x_i)$ is the probability distribution of the predicted value, and $C$ denotes the total number of categories.
The CIoU loss $L_{CIoU}$ is expressed as
$$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v,$$
where $\rho^2(b, b^{gt})$ is the squared distance between the center points of the predicted box and the ground-truth box, $c$ is the diagonal length of the smallest enclosing rectangle of the two boxes, $\alpha$ is a trade-off parameter, and $v$ measures aspect ratio consistency.
The CIoU loss was chosen because it normalizes the coordinate scales, takes advantage of the IoU, and handles the case where the IoU is zero.
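A sketch of the CIoU term defined above for axis-aligned boxes in (x1, y1, x2, y2) format; where available, a library implementation such as torchvision's `complete_box_iou_loss` could be used instead.

```python
import math
import torch

def ciou_loss(pred, target, eps=1e-7):
    """Complete-IoU loss for boxes in (x1, y1, x2, y2) format, shape (N, 4)."""
    # Intersection and union
    ix1 = torch.max(pred[:, 0], target[:, 0]); iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2]); iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # rho^2: squared distance between the two box centers
    cxp = (pred[:, 0] + pred[:, 2]) / 2; cyp = (pred[:, 1] + pred[:, 3]) / 2
    cxt = (target[:, 0] + target[:, 2]) / 2; cyt = (target[:, 1] + target[:, 3]) / 2
    rho2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2

    # c^2: squared diagonal of the smallest enclosing rectangle
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    # v measures aspect-ratio consistency; alpha is the trade-off weight
    wp = pred[:, 2] - pred[:, 0]; hp = pred[:, 3] - pred[:, 1]
    wt = target[:, 2] - target[:, 0]; ht = target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()
```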
To more fully evaluate the superiority of our method, AP50 is used as the main evaluation metric for comparison with currently popular methods. Specifically, the PR curve plots precision P on the vertical axis against recall R on the horizontal axis; the higher the precision and recall, the better the model performance and the larger the area under the PR curve. AP50 indicates the AP value at an IoU threshold of 0.5. In addition, we report precision, recall, and F1 score at a confidence threshold of 0.4, and use FLOPs as an auxiliary metric to assess model efficiency. The formulas for these indicators are as follows:
$$P = \frac{TP}{TP+FP}, \qquad R = \frac{TP}{TP+FN}, \qquad F1 = \frac{2PR}{P+R}$$
Datasets
We evaluate our proposed method on several public SAR ship datasets: the SAR-Ship-Dataset (Wang et al., 2019), HRSID (Wei et al., 2020), and SSDD (Li et al., 2017). All of these datasets contain real-scene images with ship targets of various sizes and dimensions in complex scenes. The SAR-Ship-Dataset (Wang et al., 2019), annotated by SAR experts, uses 102 SAR images taken by the Gaofen-3 satellite and 108 SAR images taken by the Sentinel-1 satellite, containing 43,819 slices and 50,885 ship targets; each slice is 256 pixels in range and azimuth. The dataset is randomly divided into training, validation, and test sets at an image ratio of 7:2:1. HRSID (Wei et al., 2020) is a public dataset for ship detection, semantic segmentation, and instance segmentation in high-resolution SAR images; it contains 5,604 high-resolution SAR ship images and 16,951 ship instances. Its construction draws on the COCO dataset and includes SAR images of different resolutions (0.5-3 m spatial resolution), polarization modes, sea states, sea areas, and ports. We follow the split defined in the original dataset paper. The SSDD (Li et al., 2017) dataset was obtained by downloading publicly available SAR images from the Internet, cropping the target areas into 1,160 images of around 500 × 500 pixels, and manually labeling the ship target positions. We select images with index suffixes 1 and 9 as the test set.
Results and analysis
SAR-Ship-Dataset
From Table 1, it can be observed that our method exhibits strong competitiveness. Our approach achieves precision, recall, F1, and AP50 of 93.8, 96.1, 94.4, and 96.3%, respectively. In terms of AP50, it outperforms the two-stage general object detector Faster R-CNN (Ren et al., 2015) by 5.3%, and exceeds YOLOv4 (Bochkovskiy et al., 2020) and YOLOv5 (both single-stage detectors) by 3.1 and 0.5%, respectively. In comparison, the SAR ship detection method DAPN (Cui et al., 2019), which focuses primarily on the scale issue of ship targets but neglects the interference of noise on small targets in complex backgrounds, reaches an AP50 of only 90.6%, significantly lower than ours and other advanced SAR ship detection methods. Our approach also outperforms another popular anchor-free algorithm, CoAM+RFIM (Yang et al., 2021), by 0.3% in the AP50 metric. Despite considering the impact of noise and using attention mechanisms to reduce noise effects, the recent SAR ship detection method PPA-Net (Tang et al., 2023) falls short because it relies solely on pooling operations to address multi-scale information, leading to significant information loss.
HRSID
The HRSID dataset exhibits a more complex image background and includes a greater number of densely packed small ship targets, posing greater challenges for algorithms and allowing a better validation of our method's effectiveness for complex backgrounds and small target detection. As shown in Table 2, our method shows an improvement of ∼0.4-15.1% over state-of-the-art methods, benefiting from the proposed LFRM and GCAM. The LFRM first extracts local multi-scale information using multiple differently sized receptive fields and then employs a dual-branch channel attention mechanism to facilitate local cross-channel information interaction between features of different scales, alleviating the detection impact of scale variations.
FIGURE
We compare the detection results of different methods for complex backgrounds and multi-scale targets (especially small targets). The red boxes indicate the ground truth; false alarms and missed detections are circled in yellow and green, respectively.
Visual results
To directly showcase the detection results of our method, we visualize the detection outcomes on three different datasets. As illustrated in Figures 5-7, our method performs exceptionally well on both complex backgrounds and ship targets of various sizes, surpassing other approaches.
FIGURE
Plot of detection results for selected ships with complex backgrounds from HRSID, SSDD, and SAR-Ship-Dataset datasets for our method.
FIGURE
Detection results of our approach for small targets and densely arranged ships in the HRSID, SSDD, and SAR-Ship-Dataset datasets.
Specifically, Figure 5 displays the detection results of our method and other approaches on SAR images with complex backgrounds and multi-scale targets. Other methods exhibit missed detections or false positives, while ours demonstrates good detection accuracy in both scenarios. Figure 6 presents the detection results of our method for ships with complex backgrounds. Figure 7 illustrates the results of detecting small target ships, consistent with our expectation that the LFRM module can effectively utilize multiple receptive fields of different sizes to extract local multi-scale information, making the network more sensitive to small targets.
In summary, the visualization results intuitively reflect that our proposed method can accurately detect and identify ship targets in SAR images with complex backgrounds and various target sizes. Moreover, it demonstrates effective target detection across different datasets and diverse scenarios, offering better practical utility. However, our method exhibits some missed detections and false positives in dense target detection: as shown in Figure 5, our method misses a few detections in SAR images with densely packed ships, marked with green circles. This is because our method considers only the influences of multi-scale targets and backgrounds, without accounting for the feature overlap and misalignment that may arise when targets are densely arranged. Our current approach does not perform feature subdivision for overlapping targets, and we plan to address this in future work.
Ablation study
To evaluate the effectiveness of the components in our proposed Context-Aware Network, we conduct extensive ablation experiments on the HRSID (Wei et al., 2020) dataset. For the LFRM, the results are shown in Table 4: our proposed LFRM module improves the AP50 from 91.1 to 92.3% over the baseline. As shown in Table 5, and consistent with our expectations, the LFRM uses multi-level atrous convolutions to extract feature information at different scales hierarchically and adopts residual links to diversify the receptive field at each layer, better fusing the scale features. Combined with the dual-branch channel attention mechanism for local cross-channel interaction, it enhances the ability to characterize the target and efficiently filters complex semantic information. The ablation experiments also demonstrate that the LFRM is not only sensitive to scale information but can also mitigate complex background noise.
For the GCAM, our proposed module improves the AP50 from 91.1 to 93.0% over the baseline. Essentially, the GCAM expands the sensory domain of the network by adaptively weighting features in different spaces and suppresses background noise interference by obtaining global contextual information based on the estimated long-range dependencies. As shown in Figure 8, to demonstrate the effectiveness of our proposed module more directly, we visualize the intermediate results. Finally, by combining the two modules in series, the AP50 reaches 93.3%, which shows that the LFRM and GCAM can effectively improve SAR ship detection performance, and their interaction can further improve network performance.

To mitigate the impact of batch size on the experimental results and determine the optimal batch size for training, we conduct ablation experiments with different batch size values. The results are presented in Table 6. Notably, with batch sizes of 16 and 32, the detection accuracy (AP50) reaches its highest value of 93.3%. With a batch size of 8, the larger randomness introduced by the smaller batch makes convergence more challenging, resulting in a lower accuracy of only 92.8%. When the batch size exceeds 32, local optima may be encountered, reducing the accuracy to 92.9%. We explored a range of batch size values in the ablation experiments to identify the optimal one.
The complexity and speed of the network
We conduct a complexity analysis of the model; the results are presented in Table 7. Our method has a runtime of 28.1 ms and Params and FLOPs of 60.4 and 126.9, respectively. Although our model is more complex than some other state-of-the-art methods such as YOLOv5, CoAM+RFIM (Yang et al., 2021), and PPA-Net (Tang et al., 2023), it exhibits outstanding performance on the SAR-Ship-Dataset (Wang et al., 2019), HRSID (Wei et al., 2020), and SSDD (Li et al., 2017) datasets, delivering exceptional results while maintaining an acceptable model size. The higher complexity stems from the more complex backbone network and from the GCAM, which computes the correlation between each pixel and all other pixels, imposing some network burden; nonetheless, our method achieves a good balance between accuracy and speed.
Conclusion
To address the two challenges of complex background interference and multi-scale ship targets in SAR image ship detection tasks, we propose a context-aware one-stage SAR ship detection algorithm. To solve the multi-scale ship target detection problem, we propose the LFRM module, which uses dilated convolutions with different rates to obtain multi-scale features and then uses global average and max pooling to let the extracted information at different scales interact, enhancing its representation ability and scale sensitivity and achieving multi-scale ship detection. Furthermore, we design the GCAM module to enhance the analysis of global context information and further suppress the interference of complex background noise on targets. Extensive experiments demonstrate that our method outperforms the latest methods in comprehensive performance. The proposed method can effectively cope with complex background noise and detect ship targets of different scales. However, some missed detections remain for densely arranged targets. In future work, we will pay more attention to the detection of densely arranged small targets.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
FIGURE
General framework of our method, where LFRM and GCAM are the proposed modules. The input image is first sent to the backbone to extract features, then passes through the FPN structure consisting of the LFRM and GCAM in series, and finally the detection results are output through the head. BCE loss is used for classification and objectness, and GIoU loss is used for regression.
$$f_i = \sum_{j} \frac{\exp\big(n(p_j)\big)}{\sum_{m} \exp\big(n(p_m)\big)}\, p_j,$$
where $n(p_j) = W_k p_j$ and $n(p_m) = W_k p_m$ represent linear transforms, and $W_k$ is implemented by a 1 × 1 convolution.
FIGURE
Illustration of the proposed LFRM. The upper half shows the extraction of multi-scale features using atrous convolution, and the lower half shows the two-branch pooling channel attention mechanism.
FIGURE
Visualization of the outputs of the different modules of the intermediate process, tested by our method on the HRSID dataset.
TABLE Comparison of evaluation metrics of different methods on the SAR-Ship-Dataset. The best results are highlighted in bold.
TABLE Comparison of evaluation metrics of different methods on the HRSID dataset.
TABLE Comparison of evaluation metrics of different methods on the SSDD dataset.
TABLE Ablation experiments on the HRSID dataset. We validate the effectiveness of each component step by step. It displays the AP50 (%) and the Runtime (ms). The optimal metrics are bolded. All scores are expressed in percentage (%).
TABLE Ablation experiments on the HRSID dataset for the selection of the convolution kernel size K in the two-branch channel attention.
The bold values indicate the best results.
TABLE Ablation experiments performed on the HRSID dataset with different batch sizes.
TABLE Comparison of Runtime, Params size, and FLOPs for different models. The bold values indicate the best results.
"Engineering",
"Environmental Science",
"Computer Science"
] |
THE REVIEW OF THE MODELING METHODS AND NUMERICAL ANALYSIS SOFTWARE FOR NANOTECHNOLOGY IN MATERIAL SCIENCE
Due to the high demand for building materials with a universal set of properties which extends their application area, research efforts are focusing on nanotechnology in material science. The rational combination of theoretical studies, mathematical modeling and simulation can favour reduced resource and time consumption when nanomodified materials are being developed. The development of composite materials is based on the principles of system analysis, which requires the determination of criteria and further classification of modeling methods. In this work the criteria of spatial scale, dominant type of interaction and heterogeneity are used for such classification. The presented classification became a framework for analysis of methods and software which can be applied to the development of building materials. For each of the selected spatial levels, from the atomistic one to the macrostructural level of the constructional coarse-grained composite, existing theories, modeling algorithms and tools have been considered. At the level of macrostructure, which is formed under the influence of gravity and exterior forces, one can apply probabilistic and geometrical methods to study the obtained structure. The existing models are suitable for packing density analysis and solution of percolation problems at the macroscopic level, but there are still no software tools which could be applied in nanotechnology to carry out systematic investigations. At the microstructure level it is possible to use the particle method along with probabilistic and statistical methods to explore structure formation, but available software tools are only partially suitable for numerical analysis
of microstructure models. Therefore, modeling of the microstructure is rather complicated; the model has to include a potential of pairwise interaction. After the model has been constructed and the parameters of the pairwise potential have been determined, many software packages for the solution of ordinary differential equations can be used. In cases when pairwise forces depend not only on the distance between the surfaces of particles, the development of special-purpose algorithms and software is required. Investigation at the lower spatial level can be done with the help of quantum chemistry packages. Between the microstructural and atomistic levels there was a gap corresponding to the scale of 1...100 nm. At this level the properties of the material are considerably affected by size; this fact is proposed to be considered when defining the nanostructure of a constructional composite. Today there are many achievements in the development of modeling methods and software; however, several problems remain to be solved. The system approach to the analysis of the problem, followed by selection of proper modeling methods, algorithms and software tools, is the key to the design of new efficient building materials.
Key words: nanotechnology, constructional material science, disperse systems, computational chemistry, probabilistic models, molecular dynamics.
DOI: dx.doi.org/10.15828/2075-8545-2014-6-5-34-58
Techniques and software tools for modeling and simulation, the so-called «third method of research», are widely used in different branches of science together with theoretical studies and experimental investigations; such techniques and tools are considered an equal alternative to theory and experiment [1, 2]. This is because decisions taken on the basis of modeling and simulation allow significant reduction of costs and resource consumption during the research and development process.
For decades numerical experiments have been used to design and analyze structures in civil engineering. On the contrary, the development of construction materials is performed mainly by means of «traditional» methods: experimental studies combined with regression analysis, commonly referred to as «statistical modeling». This situation is a natural consequence of the complex character of building materials; there are multiple levels of heterogeneity which significantly complicate the formulation and further analysis of the models.
Such a state of modeling in constructional material science cannot last any longer. Due to the demand for building materials with a unique complex of properties (including materials with controllable service lifetime [3]), research efforts are focusing on modern methods of nanotechnology [4]. It is often too costly to design nanoscale-structured and nanoscale-modified materials with a «trial-and-error» approach, even if such an approach is combined with optimization methods applied to raw experimental data or to a constructed statistical model (an approximation of raw data). The reduction of resource consumption during research and development is possible only by changing direction towards theoretical studies, modeling and simulation.
Fortunately, constructional material science takes advantage of the plethora of models, algorithms and software tools already developed in other branches of science, mostly in the theory of condensed state («clean» models based on quantum mechanics; «ab initio» models). The application of phenomenological models [5-7] (which are only required to be consistent with both experiment and theory, but not to be derived from the theory entirely) is also highly efficient; in particular, such methods can be used for modeling of polymer matrix nanocomposites [7, 8].
The modeling process involves system analysis as a sound foundation. The examination of a complex system is a decomposition process [1] where we initially split the system into parts while minimizing both the number of parts and the number of cross-dependencies (which in fact are interactions) between them [2]. Yet before decomposition can start, the bounds of the spatial scales of interest and the corresponding list of possible interactions, which in turn define the applicable modeling methods, should be fixed.
The selection of methods, algorithms and simulation software tools depends on the characteristic scale of the system under examination. There can be several distinctive scales, from the nanometer scale up to the macroscopic level, and there are corresponding modeling methods (representing the primary interactions between elements) most suitable for these scales. Depending on the characteristic size of the test system, different techniques of numerical analysis can be applied: methods based on quantum and classical mechanics, geometry and probability theory, and continuum mechanics (Table 1).
The first row of Table 1 may correspond to distinctive spatial levels and reflects the fact that in some circumstances the heterogeneous structure of a constructional composite may be ignored. This fact is the fundamental assumption of continuum mechanics. Commercially available software packages for the first row of Table 1 often combine tools for many of the mentioned areas of application (so-called multiphysics solvers). At least ANSYS [9], the MSC Software suite [10] (Nastran, Dytran, Sinda etc.) and Abaqus [11] include solvers for all of the above areas. But these software suites are already based on results achieved in material science and are mostly unsuitable during R&D in material science.
The next two structural levels -macro-and microstructure -are of great interest for material scientists.The naming for second and third rows of Table 1 closely connected with concept widely accepted in constructional material science: the structural levels in building materials should be defined on the basis of dominant interaction type.
The coarse-grained structure of constructional mixtures and materials forms mainly under the influence of gravity; this level is commonly referred to as macrostructure. The structure of the fine-grained part (binder, water and fine filler in the case of cement composites; binder and fine filler in the case of materials with polymer, bitumen and sulfur matrices) evolves under the influence of forces caused by surface effects and surface energies. This level is usually referred to as microstructure. The exact spatial boundary between macro- and microstructure is not defined, but it is near 100 μm.
Since at the macroscopic level the main type of interaction is caused by the gravity force, the motion equations for the elements of the system are simple: these are the equations of classical mechanics. In constructional material science, the transient problem of the structure formation process reflects the technology of preparation and laying of the constructional mix. If this problem is out of scope for the research, then modeling is often performed only to determine the final configuration of the coarse aggregates. The corresponding problems are theoretical examination and Monte Carlo simulation of dense packing in polydisperse systems, as well as percolation problems for the frame of aggregates (the results can be used for prediction of mechanical properties) and pores (the properties of the porous space affect permeability and coupled properties: water, chemical and frost resistance). These problems have a long history (at least since the early XX century, e.g. [12, 13]).
The theoretical aspects of the problem of packing of both polydisperse spheres and particles with complex shapes were discussed in many research articles and scientific monographs. This is because a method to design an ideal particle size distribution of a constructional mix is extremely important for construction. Various geometric models were proposed, and different methods of percolation theory were then applied to the simulated packing [14-18].
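As an illustration of the kind of geometric Monte Carlo packing model discussed here, the following is a toy random sequential addition sketch for polydisperse spheres in a unit cube; it is not one of the published algorithms [14-18].

```python
import random
from math import pi

def pack_spheres(radii, box=1.0, max_tries=10000):
    """Random sequential addition: insert spheres one by one at random
    positions, rejecting any placement that overlaps an accepted sphere."""
    placed = []  # list of (x, y, z, r)
    for r in sorted(radii, reverse=True):      # place large spheres first
        for _ in range(max_tries):
            x, y, z = (random.uniform(r, box - r) for _ in range(3))
            if all((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2 >= (r + pr) ** 2
                   for px, py, pz, pr in placed):
                placed.append((x, y, z, r))
                break
    return placed

# Packing density of a simple bidisperse mixture
spheres = pack_spheres([0.1] * 20 + [0.05] * 100)
density = sum(4 / 3 * pi * r ** 3 for *_, r in spheres)
print(f"placed {len(spheres)} spheres, packing fraction = {density:.3f}")
```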
Surprisingly, despite the great interest in packing and percolation problems at the macroscopic level of constructional composites, there are no well-known software tools for simulation in this area. Small tools for specific purposes and a few computational algorithms are created, mostly by the authors of the algorithms. Such tools have a very limited scope of application, use mutually incompatible formats of input and output data, and the implementations of the algorithms are often not verified extensively. Under no conditions can such software tools be considered production-ready, even if some of them are proposed for commercial use (e.g. [16, 17]). It seems that this situation in modeling the macroscopic level of constructional composites will last, though there are efforts directed at the unification of software tools and methods [7, 19] for modeling the macrostructure of constructional composites.
Modeling of the microstructure is more complicated than modeling of the upper structural level. To adequately represent a disperse system with fine filler, not only the gravity force must be taken into account; moreover, the gravity force can usually be ignored during numerical experiments. The primary forces are caused by the surface properties of the disperse phase and the free energies at the inter-phase boundaries (interface energies; wettability and contact angle are macroscopic properties tightly connected with these energies). Forces caused by surface effects and by external compaction pressure (if the latter is measured per one particle of the fine filler) may be of the same order of magnitude.
To take the surface effects into account, the model, which is also based on classical mechanics, has to include a potential (force field) representing pairwise interaction. The derivative of the pairwise potential along the direction between two particles is equal to the binary force acting between these particles; this force is then used during the solution of the equations of motion [7]. After the model is constructed and the parameters of the pairwise potential are determined, many existing software packages can be used for the solution of the ordinary differential equations (ODE) of motion. The wide availability of software tools is due to the rather curious fact that the so-called N-body problem (a particle system with pairwise forces and classical equations of motion) first arose at a completely different spatial scale, in the study of the motion of groups of celestial objects. Thus, at present there are already a lot of efficient algorithms [20, 21] and highly optimized computer code [21] for this purpose. A comprehensive list of resources concerning software suitable for N-body simulation can also be easily obtained [22].
At the same time, there are cases when pairwise forces between particles of the filler in a constructional composite are too complex and depend not only on the distance between the surfaces of the particles. Such cases cannot be adequately handled by existing ODE software packages. Nevertheless, the high availability and open source licensing of general-purpose numerical libraries make it possible to implement only the «material science» part of the code, the part which is solely based on physical chemistry and thermodynamics, leaving the underlying mathematics aside.
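A minimal sketch of the particle method described in this section: pairwise forces derived from a potential (a Lennard-Jones form is used purely as an example force field) and integrated with the velocity Verlet scheme.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces: F = -dU/dr along each pair direction."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r2 = d @ d
            sr6 = (sigma ** 2 / r2) ** 3
            # For U = 4*eps*(sr12 - sr6), the force magnitude divided by r is:
            fmag = 24 * eps * (2 * sr6 ** 2 - sr6) / r2
            f[i] += fmag * d
            f[j] -= fmag * d
    return f

def verlet_step(pos, vel, dt=1e-3, mass=1.0):
    """One velocity Verlet step of the classical equations of motion."""
    f = lj_forces(pos)
    vel_half = vel + 0.5 * dt * f / mass
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * lj_forces(pos_new) / mass
    return pos_new, vel_new

pos = np.random.rand(10, 3) * 5.0   # 10 particles in a 5x5x5 box
vel = np.zeros((10, 3))
for _ in range(100):
    pos, vel = verlet_step(pos, vel)
```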
There are also known extensions of continuum mechanics which allow constitutive modeling of heterogeneous multiphase materials. For example, the Mori-Tanaka method [23, 24] can be used for simulation of stress-strain behavior in two-phase cases where the volume fraction of one phase is significantly higher than the volume fraction of the other.
The last (but not least) row of Table 1 corresponds to the atomistic scale. It is the domain of quantum mechanics: there are no particles; there are only waves and probability. The time-independent Schrödinger equation Hψ = Eψ is very simple in form: it only states that a valid energy E must be an eigenvalue of the Hamiltonian H, and the «shape of the electron» (the shape of the electron orbital) corresponding to this energy is given by the square ψψ* of the eigenfunction ψ (the wave function of the electrons). The «solution» of the Schrödinger equation, even in the time-independent case, is not simple, though; numerous Nobel prizes were awarded for progress in this area. The numerical methods specifically designed for the solution of the Schrödinger equation are implemented in software. The methods themselves are often referred to as quantum chemistry methods, and the corresponding software is called quantum chemistry software.
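As a worked illustration of what a numerical «solution» of the time-independent equation looks like, the sketch below discretizes a one-dimensional harmonic oscillator Hamiltonian by finite differences and diagonalizes it; production quantum chemistry packages use far more sophisticated basis-set methods.

```python
import numpy as np

# 1D harmonic oscillator in atomic units (hbar = m = omega = 1)
n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# H = -1/2 d^2/dx^2 + V(x), second derivative by central differences
main = 1.0 / h ** 2 + 0.5 * x ** 2           # diagonal of H
off = -0.5 / h ** 2 * np.ones(n - 1)         # off-diagonals of H
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)                    # eigenvalues E, eigenfunctions psi
print(E[:4])                                  # ~[0.5, 1.5, 2.5, 3.5], as theory predicts
```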
Probably the most famous open source package for quantum chemistry is the General Atomic and Molecular Electronic Structure System (GAMESS) [25]. The code base of GAMESS served as the basis for some derivatives, for example FireFly [26]. Methods of quantum mechanics are also implemented in various multi-purpose commercial packages, notably BIOVIA Materials Studio (formerly Accelrys Materials Studio). HyperChem [27] is a multi-purpose software package focused primarily on atomistic modeling and quantum chemistry computation. It allows structure input, manipulation and display (GUI tools for creating and transforming structures, building clusters and complex molecular assemblies, importing and exporting structures in different file formats: Brookhaven PDB, ChemDraw CHM, MOPAC Z-matrix, MDL MOL, ISIS Sketch, Tripos MOL2; parallel and perspective projections, highlighting hydrogen and van der Waals bonds, displaying dipole moment vectors), and contains implementations of DFT and semi-empirical quantum algorithms, as well as implementations of molecular mechanics and mixed-mode calculations.
And, finally, there is the fourth row of Table 1. This is the nanoscale level, and it was not previously defined in constructional material science at all. Here we are going to change the situation. Following the existing definitions of macro- and microstructure (which only indirectly refer to the spatial scale), we will define the nanostructure of a constructional composite (which is not to be confused with the general term «nanostructure») as the spatial level where the properties of the material are considerably affected by size effects. From now on there is no «gap» between the microstructure of the filled binder and the atomistic scale.
Quantum-chemical modeling can be used for predicting the interaction processes during the structure formation of the nanostructure. In [28], the HyperChem software was successfully used for quantum-chemical simulations of a nanomodified polymeric composite material. During modeling, the fragments imitating epoxy resin, hardener (polyethylene polyamine) and nanomodifiers were optimized. Then, the absolute values of binding energy were calculated, the influence of fine suspensions of nanocomposites on the epoxy oligomer was modeled, and a simulation of the hardening process of the nanocomposite was carried out. On the basis of a comparative analysis of the binding energies and electron densities, the authors stated that NH bond weakening takes place in the presence of metal ions. The results obtained during modeling of the hardening process allowed the authors to claim that there is a self-organization process during the structure formation of the epoxy composite. Such a process is a consequence of the introduction of a metal-carbon nanosystem into the composite. The formation of a metal coordination bond with the nitrogen atom of the hardener can increase the polyamine activity during the hardening process [28].
In general, software tools for neighboring spatial scales can be utilized during the investigation of the nanostructure. For instance, modeling based on the analysis of ensemble geometry is universal and can be applied at any structural level. In particular, the authors of [29] used this method in combination with the Monte Carlo method for the solution of a percolation problem at the nanoscale level. The obtained results concerning the value of the percolation threshold can successfully be used during the development of building materials at upper spatial levels.
It must be stressed in conclusion that while there is a lot of progress in the development of modeling methods and software, numerous particular problems arising during R&D in constructional material science and nanotechnology of construction still require adequate solutions. The system approach to the analysis of the problem, followed by selection of proper modeling methods, algorithms and software tools, is the key to the design of new efficient building materials.
Table 1 (column headings): Spatial scale | Spatial level characteristic | Preferred theory and models | State of theory and presence of modeling tools
The Venusian Insolation Atmospheric Topside Thermal Heating Pool
Introduction
Close proximity observations of the planet Venus by the NASA Mariner 10 space probe in 1974 have shown that its upper atmosphere displays a set of cloud bands that are part of a global atmospheric circulation system, which connects the solar zenith point of maximum solar radiant forcing to both polar vortices of the planet (Figure 1).
Figure 1. NASA 1974 Mariner 10's Portrait of Venus
This paper develops the results of the application of the Dynamic-Atmosphere Energy-Transport (DAET) mathematical model to a study of the climate of Venus (Mulholland & Wilde, 2020) and addresses the following two issues: 1) That the intensity of the dim sunlight is too weak to fully energise the surface of the planet Venus at the base of a 63.4 km thick troposphere.
2) Consequently, the temperature at the base of the atmosphere, 699 Kelvin (426 °C) (Singh, 2019, pp. 1-5), far exceeds the effective solar radiative thermodynamic temperature of the surface insolation received by Venus (Table 1). At its base Venus has a dense atmosphere, with a density of 69.69 kg/m³ (Table 2); while this is far less than the density of liquid water (1,000 kg/m³), the oceans of Earth do provide a model of how the topside of a planetary atmosphere can be heated. On Earth the photic zone is the shallow part of the ocean (typically less than 200 m water depth) where sunlight energy is absorbed and the water is heated.
On Venus the average post-albedo insolation received by the lit hemisphere is 299 W/m², which equates to a thermodynamic temperature of 269.5 Kelvin (−3.6 °C) (Figure 2). This average intensity will apparently provide heating for only the upper 5 km of the troposphere, at heights above 58.4 km, where the lapse-rate-reduced air temperature is below the 269.5 Kelvin value (Figure 3).
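As a check, these two figures are consistent under the Stefan-Boltzmann law $T = (S/\sigma)^{1/4}$, taking the textbook value $\sigma = 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$ (an assumed constant, not quoted in this paper):
$$T = \left(\frac{299\ \mathrm{W\,m^{-2}}}{5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}}\right)^{1/4} \approx 269.5\ \mathrm{K} \approx -3.6\,^{\circ}\mathrm{C}$$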
Figure 2. Venus Lit Hemisphere Illumination Interception Geometry
At depths in the atmosphere below this average insolation level, the effective radiative temperature of the sunlight is less than the ambient temperature of the surrounding air, so no direct heating is apparently possible.
However, and perhaps more importantly, the local intensity of the insolation at the solar zenith has sufficient power to heat the Venus atmosphere down to a level of 49 km, in a column that is 14.4 km thick (Figure 3).
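As a numerical check on the conversions between insolation and effective thermodynamic temperature quoted in this paper, a minimal sketch using the Stefan-Boltzmann relation T = (S/σ)^(1/4) follows; the two flux values are those quoted in the text and in the Results, and the constant is the standard one.

```python
# Check of the insolation-to-temperature conversions quoted in the text,
# via the Stefan-Boltzmann relation T = (S / sigma)**0.25.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux):
    """Effective radiative (thermodynamic) temperature for flux in W/m^2."""
    return (flux / SIGMA) ** 0.25

print(effective_temperature(299.0))   # ~269.5 K: lit-hemisphere average
print(effective_temperature(598.3))   # ~320.5 K: post-albedo solar zenith
```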
Figure 3. Venus Atmospheric Solar Radiant Thermal Heating Pool
It is this concentration of energy at the solar zenith, heating the upper air, that creates the bow-wave disruptor observed in the centre of the blue disk as the thermal impact of the solar zenith travels around the planet (Figure 1).
The energy imparted by the sun into that zenith-induced bow-wave powers the circulatory system of the upper atmosphere (Limaye, 2010). So, just like the sun heats the top of the Earth's oceans, the sun clearly heats the top of the Venus atmosphere. However, unlike water in the Earth's oceans, the heated topside atmosphere of Venus is a compressible gas held at high elevation in a gravity field. This has clear implications for the process of surface heating by full-troposphere, mass-motion, solar-forced convective overturn of a compressible gas in the presence of a gravity field.
In order to study this circulation process using the DAET climate model, a pressure profile model for the Venus troposphere at 1 metre increments has been created. This calculation has been applied from the surface to the lower stratosphere, a modelled vertical height of 100 kilometres (Mulholland & Wilde, 2021).
Two equations of state are used to achieve this objective: the Pressure-Volume-Temperature (PVT) form of Boyle's law, and Newton's gravity law of spherical shells, used to calculate the reduction in strength of the gravity field as the height above the surface of Venus increases. For the purpose of this study, a set of four linked predictive lapse rate equations based on published data has been created (Justus & Braun, 2007). These equations are used as the fundamental temperature control of the tropospheric pressure profile. The temperature data that controls these equations is calibrated to a surface datum global average temperature for Venus of 699 Kelvin (Singh, 2019, pp. 1-5). The two governing relations are stated compactly below.
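For reference, the two relations can be written as follows (our compact notation, not a quotation of the paper's equations): P is pressure, V volume, T temperature, G the gravitational constant, M the planetary mass, r the mean radius and h the height above the surface.

$$\frac{P_1 V_1}{T_1} = \frac{P_2 V_2}{T_2}, \qquad g(h) = \frac{GM}{(r+h)^2}$$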
Method
The spreadsheet analysis of the pressure profile of the Venusian atmosphere presented here is built on the following Baseline Parameters (Williams, 2023): 1) The surface pressure measured in Pascal.
2) The surface temperature measured in Kelvin.
3) The Molecular Weight of the Venus atmosphere measured in g/mole.
4) The surface gravity of Venus measured in m/s².
5) The planetary mass of Venus in kg.
6) The mean radius of Venus in metres.
8) Using the two physical relationships of Boyle's gas law and Newton's spherical shell gravity law, a pressure profile is created for the atmosphere, applying the predictive lapse rate equations as temperature control over the relevant height intervals (Figure 5).
The predictive lapse rate equations used in the pressure profile model are listed in Table 2. The datum parameters used for the pressure profile analysis are listed in Table 3.
Result
In order to verify the DAET climate model of Venus, the model was first calibrated against the new surface datum temperature of 699 Kelvin (Singh, 2019, pp. 1-5). This process was achieved by reducing the energy intensity flux partition ratio to an atmosphere-retained percentage of 98.071%, down from the previously published value of 99.1138% (Mulholland & Wilde, 2020, pp. 20-35). This adjustment is in line with the modelling concept that the average global surface temperature of a planet is a function of the energy flux partition ratio between the retained atmospheric energy in the troposphere and the radiant energy loss to space from the stratosphere (Table 4).
Table 4. Adiabatic Model of Venus showing Internal Energy Recycling for Both Hemispheres
The key results from the Boyle's Law Pressure Model Analysis for the atmosphere of Venus (Mulholland & Wilde, 2021) are listed in Table 5 and displayed in Figure 5.
These results include the following: 1) The average post-albedo irradiance for the lit hemisphere of Venus is 299 W/m²; this intensity (Table 1, Z5) converts to a thermodynamic temperature of 269.5 Kelvin (−3.6 °C). This temperature occurs at an altitude of 58.46 km and a pressure of 192.9 hPa. By geometry, the average intensity value of 299 W/m² also occurs at a solar elevation angle of 30° (Figure 2).
2) The post-albedo solar zenith irradiance for Venus is 598.3 W/m²; this maximum possible intensity (Figure 3, Z1) converts to a thermodynamic temperature of 320.5 Kelvin (47.3 °C). This temperature occurs at an altitude of 49.06 km and a pressure of 835.5 hPa (Table 5).
4) The DAET adiabatic climate model of Venus also predicts for the dark hemisphere a thermal emission intensity (Table 4, Z12) of 148.75 W/m² and a thermodynamic temperature of 226.3 Kelvin (−46.8 °C). This temperature occurs at an elevation of 71.15 km and at a pressure of 18.33 hPa (Table 5).
5) The modelled height of the Venusian droplet cloud planetary veil (Z17: s) occurs at an elevation of 60.67 km and a temperature of 260 Kelvin (Young, 1973, pp. 564-582), with an associated pressure of 18.33 hPa (Table 5).
6) The measured freezing point of 75% wt H₂SO₄ (Z18: s) is 250 Kelvin (−23 °C) (Young, 1973, pp. 564-582). This temperature is found at a model altitude of 63.27 km and a pressure of 83 hPa (Table 5). This near association between the stable-air freezing point of concentrated sulphuric acid, the main condensing volatile in the Venus atmosphere, and the DAET modelled height of the convection tropopause warrants further study. Solid aerosol particles are efficient thermal emitters and can enhance atmospheric thermal radiation loss to space through the transparent lower stratosphere (Figure 4).
The Energy Consequence of Air Convection in a Gravity Field
At the modelled convection tropopause of Venus, over 63.3 km above the planet's surface (Table 5, Z10), a cubic metre of Venusian air has a mass of 174 g and possesses a potential energy of 95.7 kilojoules (a numerical check is sketched below). All air mass held aloft in a gravity field contains a considerable quantity of potential energy.
On descent to the surface this air will undergo adiabatic heating, with a consequent rise in air temperature, as it falls towards the planet's surface. In doing so it loses potential energy through conversion to kinetic energy (Figure 6).
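A rough numerical check of the quoted tropopause figure follows, treating the potential energy as m·g·h with g evaluated at altitude. Strictly, PE is the integral of m·g over height, so this is an approximation, and the physical constants are standard fact-sheet values rather than quantities taken from the paper's tables.

```python
# Rough check of the ~95.7 kJ potential energy quoted for one cubic metre
# of Venusian air (mass 174 g) at the ~63.3 km modelled tropopause.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_VENUS = 4.8675e24   # planetary mass of Venus, kg
R_VENUS = 6.0518e6    # mean radius of Venus, m

h = 63_300.0          # height above the surface, m
m = 0.174             # mass of 1 m^3 of air at that level, kg

g_local = G * M_VENUS / (R_VENUS + h) ** 2   # ~8.69 m/s^2 at altitude
pe = m * g_local * h                         # PE = m * g * h (approximation)
print(f"g(63.3 km) = {g_local:.3f} m/s^2, PE = {pe/1e3:.1f} kJ")  # ~95.7 kJ
```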
Discussion
On Venus the solar-forced radiant heating of the upper troposphere at the zenith creates a process of pole-ward advection of heated air that feeds the planet's polar vortices (Luz et al., 2011). Figure 2 shows how the upper atmosphere of the lit hemisphere of Venus intercepts the energy of the sunlight in a pattern of concentric rings of intensity centred around the zenith, the point at which the overhead sun provides the maximum flux that heats the atmosphere. When the sun heats the cold upper part of the Venusian atmosphere, it distorts the lapse rate slope to the warm side. That forces the lapse rate profile downward. That compression then steepens the lapse rate slope lower down, which causes convection to accelerate as a compensating negative feedback.
As Venus slowly turns from east to west, the locus of the solar zenith tracks along the equator towards the east, creating a point of disturbance in the upper air. This forms a bow shockwave disruptor dividing the equatorial flow of the zonal circulating winds, which are forced apart and made to track towards higher latitudes (Figure 1).
Due to the conservation of angular momentum associated with the slow planetary rotation of Venus, these winds travel faster than the ground surface below them and are called super-rotation winds (Zasova et al., 2007). Eventually these circulating winds reach the planet's poles at a point on the rim of the lit hemisphere. Here the illumination intensity of the low-angle sun within 5° of the terminator does not have sufficient power to heat the tropospheric air. At the poles of Venus, the low power of the sunlight, combined with the angular momentum of the super-rotational winds, creates a cyclonic vortex which drives the air down into the deep atmosphere below (Ignatiev et al., 2009).
Figure 7. Planetary Rotation and the Conservation of Angular Momentum
This forced descent of the topside-heated air means that the compressible air undergoes adiabatic heating as it falls in the gravity field of Venus. The descending mass flow within the polar vortex provides a hydrodynamic piston drive that causes the planet's air to circulate vertically in a giant, hemisphere-encompassing Hadley cell (Figure 3). By this means the compressed air is heated as it falls, and the apparent thermal limit set by insolation at the top of the atmosphere is easily surpassed (Lacis & Hansen, 1974).
Figure 3 shows the impact of upper atmosphere heating: the circulation system powered by the solar zenith constantly replenishes the forced-descent vortex over both poles, which heats the surfaces beneath. That energy then flows across the entire Venusian surface, so that it can reach temperatures much higher than predicted by the Stefan-Boltzmann (S-B) radiation equation. The greater the mass of the Venus atmosphere, the greater the system's efficiency, and the more heat will be delivered to the surface by the air descent at the poles.
The piston-like hydrostatic circulation is fuelled by whatever energy is available from any source, but can never exceed the amount of energy required to balance the upward pressure gradient force with the downward force of gravity.The pattern of differing lapse rate slopes within the vertical plane is infinitely variable, but must always average out to the slope dictated by mass and gravity.
The Utility of the DAET Climate Model
The key physical process that the DAET climate model describes is that mobile compressible fluids, circulating within a gravity field over and above the surface of a rotating terrestrial planet, will at the same time capture, store and transport energy in various guises. Not all of these are thermal, and so not all are subject to radiative loss. While energy can flow from cold to hot (e.g., the meteorological process of cooling rain falling onto the surface of a hot desert below), heat, being a directed dynamic process, cannot flow from cold to hot (e.g., unconfined rivers of water cannot flow uphill).
Mass motion is a process that generates a system lag because it is inherently slower than radiative processes. Convection is also a process that deals with albedo variations, because convection simply shifts to equalise these perturbations. There is still enough room for internal climate variability, as the system lags somewhat in response to destabilising influences, but it always responds quickly enough to retain the atmosphere in a dynamically stable state.
The Venus surface is at the temperature it is simply because that is the temperature needed to balance the mass of the atmospheric gases against gravity. It makes no difference what the source of that energy is. It is the same for stars in the cold of space and the gas planets far from the sun. Convection always settles at a level that keeps the gases suspended against the downward force of gravity. Until, in the case of stars, a fusion reaction starts, whereupon convection adopts a new equilibrium.
It is by this mechanism of circulating mass motion of a compressible gas, acted on by a gravity field within the context of a rotating spherical planet, that surface thermal enhancement is created, and which it is proposed here to call the Maxwell Mass Effect, after the work of James Clerk Maxwell (Maxwell, 1868).
Conclusion: The Venus Heating Paradox Explained
In conclusion, the matter of the high surface temperature of the planet Venus, and the paradox that the dim surface sunlight is not able to create this 699 Kelvin global average temperature, will now be addressed.
The process of deep atmospheric convection throughout the whole 62.2 km (100 mbar limit) of the Venus troposphere means that sunlight-heated air at the top of the atmosphere can and does deliver heat to the planet's surface. Instead of solar radiation, this delivery of energy to the surface occurs by the mechanisms of full-troposphere, planetary-rotation-forced mass motion, the circulation of polar vortex descending air, and heating by adiabatic auto-compression.
The warming at the surface of Venus is from the mechanical process of convection, and any potential warming effect from downward radiation is neutralised by convective adjustments.Instead, descending air heats both itself and the surface beneath via reconversion of Potential Energy (PE) to Kinetic Energy (KE).The atmosphere is held aloft by potential energy which is not thermal energy.Heat cannot be amplified, but it can be stored in a non-kinetic form as potential energy so that it is not then sensed as temperature.
Potential energy is in effect a form of latent heat. This store of energy within mass is then returned again as temperature at a later time and, critically, at a lower elevation. So, as long as there is constant mass-motion recycling to and fro between PE and KE as the air moves vertically within a gravity field, the surface will receive kinetic energy from the descending air and be warmed.
To maintain long term hydrostatic equilibrium the total energy retained at the surface must be a dynamic equilibrium that is just right to support the weight of atmospheric gases against the downward force of gravity.It makes no difference whether the source of the necessary energy is from the sun, the surface, volcanic outbreaks, atmospheric opacity, particulate aerosols or anything else.
It is known that planetary atmospheres vary hugely in composition, and that the way the composition of an atmosphere is sorted into differing compositional layers will affect the vertical boundaries between those layers.Thus, a tropopause can vary in height somewhat depending on the various compositionally induced stratifications within a planet's atmosphere.However, if an atmosphere is to be retained by a planet, then the average lapse rate slope between surface and space must always net out to the slope specified by mass and gravity.
Convection always adjusts in order to balance energy into the system from space with energy out to space derived from the net combination of all energy transfer mechanisms between surface and atmosphere.If it were not so then the tiniest radiative imbalance would prevent the formation and retention of an atmosphere.It is known that atmospheres are ubiquitous and last for geological eons in the absence of catastrophe, so it must be that convection neutralises all "normal" radiative imbalances.
V₂ = P₁·V₁·T₂ / (T₁·P₂)    (Equation 2)
The next issue to be resolved is to determine the rate of pressure reduction with height.
In a column of air, the pressure is a function of the overlying mass, so if the atmosphere is modelled as a stack of one-metre cubes of air, then for each one-metre rise in height the mass of the overlying column will be less, and this mass reduction causes a pressure reduction which can be calculated.
Pressure is a force per unit area, and force is defined as the product of mass and acceleration. In the atmosphere, the acceleration acting on an air parcel at rest in the column is the planet's gravity at that level, and this can be determined by Newton's gravity law of spherical shells. The value of the surface gravity of a planet can be calculated using the Universal Gravity Equation, knowing the planet's mass and its average radius, as checked below.
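As a worked check of the surface gravity value used later in the text (8.87039 m/s²), using the commonly cited mass and mean radius of Venus (our inserted values, not figures from the paper's tables):

$$g_s = \frac{GM}{r^2} = \frac{6.674\times10^{-11}\times 4.8675\times10^{24}}{(6.0518\times10^{6})^{2}} \approx 8.87\ \mathrm{m\,s^{-2}}$$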
But a standard measured quantity of gas is also required.
To do this, the process used by chemists to find the relationship between the mass in grams and the volume in litres (dm³) at Standard Temperature and Pressure (STP) for one mole of gas has been adopted here.
At 273.15 Kelvin (0 °C) and 1013.25 hPa (mbar) the volume is 22.414 litres (dm³), and so for air with a molecular weight of 43.45 g/mol (standard Venus atmospheric composition) the mass contained in one molar volume (22.414 dm³) will be 43.45 g.
Phase 1: Building the Pressure Ladder for the Venus Atmosphere.
Step 1: From knowledge of the surface pressure of the Venusian atmosphere and the value of the surface gravity of Venus, compute the total atmospheric mass in a column bearing down on 1 square metre of the planet's surface.
Using the equation of force, F = m·a, this can be restated as Pressure/Gravity = Mass per unit area. For Venus: 9,321,900 / 8.87039 ≈ 1,050,901 kg (about 1,051 tonnes per square metre).
Step 2: Compute the volume change for 1 mole of gas from STP at the Earth's surface to the ambient temperature and pressure conditions at the surface of Venus.
Using the constant Pressure-Volume-Temperature relationship P₁V₁/T₁ = P₂V₂/T₂, this establishes the unknown V₂ (the volume of 1 mole of gas at the surface of Venus).
Step 4: Convert the Gas Density to Discrete Mass of Gas per Unit Metre Cube.
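A minimal sketch of the full pressure-ladder construction described in Phase 1 follows. It uses the ideal-gas (PVT) relation for density and Newton's shell law for local gravity; the linear, floor-clamped lapse rate is a hypothetical stand-in for the paper's four linked predictive equations, so the upper-level pressures will only approximate Table 5.

```python
# Sketch of the pressure ladder: step upward from the surface in 1 m
# increments, using the ideal (PVT) gas relation for density and Newton's
# spherical-shell law for local gravity. The clamped linear lapse rate is
# a stand-in for the paper's four linked predictive equations.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 4.8675e24          # mass of Venus, kg
R0 = 6.0518e6          # mean radius of Venus, m
MW = 43.45e-3          # molecular weight of Venus air, kg/mol
RGAS = 8.314           # universal gas constant, J mol^-1 K^-1

P_SURF = 9_321_900.0   # surface pressure, Pa
T_SURF = 699.0         # surface datum temperature, K
LAPSE = 7.7e-3         # assumed mean lapse rate, K/m (illustrative)

def temperature(h):
    """Stand-in temperature control, clamped to a stratospheric floor."""
    return max(T_SURF - LAPSE * h, 170.0)

def gravity(h):
    """Local gravity from Newton's spherical-shell law."""
    return G * M / (R0 + h) ** 2

P, profile = P_SURF, []
for h in range(0, 100_000):                  # 1 m steps, surface to 100 km
    rho = P * MW / (RGAS * temperature(h))   # ideal-gas density, kg/m^3
    profile.append((h, P, rho))
    P -= rho * gravity(h) * 1.0              # hydrostatic step: dP = -rho*g*dh

print(profile[0][2])                   # ~69.7 kg/m^3 surface air density
print(profile[58_460][1] / 100.0)      # pressure in hPa near 58.46 km
```

At the surface this reproduces the base air density of about 69.7 kg/m³ quoted earlier; agreement with the tabulated upper-level pressures depends on how well the stand-in lapse rate matches the paper's equations.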
Figure 6. Scaled Comparison Chart of Pressure, Gravity, Discrete Mass, Discrete Potential Energy (PE) and Cumulative PE Curves for Venus
Optimisation of GaN LEDs and the reduction of efficiency droop using active machine learning
A fundamental challenge in the design of LEDs is to maximise electro-luminescence efficiency at high current densities. We simulate GaN-based LED structures that delay the onset of efficiency droop by spreading carrier concentrations evenly across the active region. Statistical analysis and machine learning effectively guide the selection of the next LED structure to be examined based upon its expected efficiency as well as model uncertainty. This active learning strategy rapidly constructs a model that predicts Poisson-Schrödinger simulations of devices, and that simultaneously produces structures with higher simulated efficiencies.
Figure 1. Schematic of the LED that the learning algorithm was given the task to optimise. Left: reference LED structure. Right: conduction band of the active region of the reference LED structure, at high current density (75 A/cm²). Given a dataset of LED structures and their measured efficiencies, one uses Bayes' rule to obtain a 'posterior' normal distribution for LED efficiency 21. Specifically, for each as-yet unobserved LED structure x, the GP model produces a posterior distribution N(μ, σ²), with μ the expected efficiency and σ its standard deviation. In this study, we use the gaussian_process module of the scikit-learn Python package 22,23. We choose a squared-exponential auto-correlation function and hyper-parameters determined by the maximum likelihood principle.
Given a dataset D_n of LED structures and their measured efficiencies, the GP model produces a normal posterior distribution P_n(y|x) with mean efficiency μ and standard deviation σ for the structure x. The remaining question concerns experimental design: what is the best strategy to select new LED structures
such that we most quickly find LEDs with very high efficiencies? The EGO method provides a simple approximate answer: at each step in the design loop, the next LED structure to sample should be selected to optimise the expected improvement in efficiency, after accounting for model uncertainty 12. Concretely, let y_max denote the efficiency of the best LED device currently in our dataset D_n. The expected efficiency improvement may be expressed as:
EI(x) = (σ/√(2π)) · exp(−α²) + ((μ − y_max)/2) · erfc(−α)    (1)
where erfc(·) denotes the complementary error function and α = (μ − y_max)/(√2·σ) is the scaled difference between the expected efficiency of x and the best LED in our dataset.
We select the next sample point to optimise this objective function:
x_{n+1} = argmax_x EI(x)    (2)
The new LED structure x_{n+1} becomes an input to a Poisson-Schrödinger code, described below, which calculates the simulated efficiency y_{n+1}. Next, we extend our dataset, D_{n+1} = D_n ∪ {(x_{n+1}, y_{n+1})}, and update the GP posterior, from which we can select another sample point via Eq. (2). This iterative process is repeated until a satisfactory LED structure is found. Simultaneously, we obtain a predictive ML model of LED efficiency over a broad range of inputs.
To better understand the objective function in Eq. (1), we evaluate it in two asymptotic limits. In the limit of vanishing uncertainty, σ → 0 (equivalently α → ±∞), we observe EI(x) → max(μ − y_max, 0). That is, when the model is very certain, the objective function seeks primarily to select points x with expected efficiency μ better than the best known, y_max. Conversely, imagine that the model uncertainties σ are relatively large compared to μ − y_max. In this α → 0 limit we observe EI(x) → σ/√(2π). Thus, when there is no obvious opportunity to improve on the best known LED, the learning strategy becomes primarily exploratory and favors points x with the largest model uncertainty. For intermediate α ~ 1 the strategy of Eq. (2) balances exploitation (maximizing μ) and exploration (maximizing σ). In this way, we avoid getting stuck in local maxima: once a region of very efficient LEDs has been well explored, the algorithm samples from a region of larger uncertainty, even if the predicted efficiency is not great.
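A minimal sketch of this acquisition step with scikit-learn follows. The GP setup mirrors the one described above, but the toy objective, the random candidate sampling and the dimensions are illustrative assumptions, not the paper's Poisson-Schrödinger pipeline.

```python
# Minimal sketch of the expected-improvement (EI) acquisition of Eqs. (1)-(2),
# assuming a fitted scikit-learn GP. The toy objective and random candidate
# generation are purely illustrative, not the paper's simulation pipeline.
import numpy as np
from scipy.special import erfc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expected_improvement(mu, sigma, y_max):
    """EI(x) = sigma/sqrt(2*pi)*exp(-alpha^2) + (mu - y_max)/2 * erfc(-alpha)."""
    alpha = (mu - y_max) / (np.sqrt(2.0) * sigma)
    return sigma / np.sqrt(2 * np.pi) * np.exp(-alpha**2) \
        + 0.5 * (mu - y_max) * erfc(-alpha)

rng = np.random.default_rng(0)
X = rng.random((20, 6))                      # 6 indium-composition parameters
y = -np.sum((X - 0.5) ** 2, axis=1)          # toy stand-in for simulated IQE

gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)

candidates = rng.random((5000, 6))
mu, sigma = gp.predict(candidates, return_std=True)
ei = expected_improvement(mu, sigma + 1e-12, y.max())  # guard sigma = 0
x_next = candidates[np.argmax(ei)]           # Eq. (2): next structure to try
print("next structure to simulate:", np.round(x_next, 3))
```

Each selected x_next would then be passed to the simulator, appended to the dataset, and the GP refit, closing the loop of Eqs. (1) and (2).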
Automated LED Design
In this work, we take the point x to represent the structure of the 5-well active region in a GaN-based LED (see Fig. 1 for a schematic). Each input point x has 6 parameters: the indium composition of each quantum well and the collective indium composition of the quantum barriers. The quantum well width varies with the indium composition of both well and barrier to keep the wavelength approximately constant. To determine the simulated efficiency of each structure, we use the APSYS software package with materials parameters taken from ref. 24 and a current density of 75 A/cm². The band structure was calculated using the 6 × 6 k.p method 25 in a finite volume approximation. The carrier transport equations were self-consistently computed and coupled with Schrödinger's equation to determine the confined states in the QWs. Schrödinger and Poisson equations are solved iteratively to account for the band structure deformation with carrier redistribution. The carrier transport consists of drift-diffusion of electrons and holes, Fermi statistics, and thermionic emission at hetero-interfaces, as well as band-to-band tunneling.
We use the machine learning algorithm in Eqs. (1) and (2) to optimise the internal quantum efficiency within the 6-dimensional space (the In content of each of the 5 wells and the average In content of the barriers) of our LED structures. As can be seen in Fig. 2a, the procedure converges rapidly, finding a nearly optimal simulated LED efficiency in about 75 iterations. Subsequent iterations make little improvement upon the optimal LED efficiency (Fig. 2b), and instead focus on decreasing model uncertainty. Between learning steps 150 through 1000 (Fig. 2c,d), this procedure constructs a very robust model over the global space of LED structures. At iteration 1000 the algorithm is fully converged, and the coefficient of determination is R² > 0.99, as determined by cross-validation.
The very-high-accuracy model also provides some physical insight into the Poisson-Schrödinger simulations. While the drift-diffusion model predicts that most of the light emission of a standard LED structure comes from the 2 top wells, in agreement with electro-luminescence experiments 26, it also informs us that allowing the indium content of the individual wells to vary across the active region increases the carrier and light emission spreading, in agreement with recent electro-luminescence experiments 27. As can be seen in Fig. 3, our active learning algorithm finds several optima, which have in common a diminishing indium content in the quantum wells from the n-side to the p-side and the use of InGaN barriers rather than GaN barriers. The diminishing indium content reduces the confinement in the p-side wells 28, which otherwise concentrate most carriers. The diminishing indium content and the use of InGaN quantum barriers increase the thermionic emission and tunnelling through the hetero-interfaces 29, allowing the carriers to spread more easily across the active region. The decreasing indium content with increasing well number is associated with increasing well widths for a constant peak emission wavelength. At high current, Auger recombination grows more rapidly with carrier concentration than radiative recombination, and wider wells that compensate for the low indium content become beneficial 30, as the carrier spreading within each well is increased. Figure 4 draws a comparison between the simulation of a standard LED structure that has GaN barriers and identical wells and the simulation of a machine-learning-optimised LED structure. The optimised structure achieves an increased spreading of the radiative recombination events: within the wells due to wider wells, which should be beneficial at high currents 31, and between the wells due to a high barrier indium content and a decreasing indium content towards the p-side of the active region.
To summarize, our active learning strategy rapidly finds LED structures with nearly optimal quantum efficiency while simultaneously building a GP regression model that is predictive for a wide range of LEDs. We used the objective function in (1) for experimental design, which balances the trade-off between exploitation (high predicted efficiency) and exploration (high model uncertainty). At each iteration in our algorithm, the objective function guides the selection of a new LED structure which we simulate, and then use to expand our GP model.
Interestingly, this automated approach finds LEDs that a human expert would strive to design: a structure that spreads the carrier recombination events evenly through the active region of the LED, maximising the radiative recombination events. Leaving the algorithm to optimise the indium content of the active region, we find much higher simulated efficiencies than in standard LEDs. This structure employs a high barrier indium content and a decreasing well indium content towards the p-side of the active region to prevent the accumulation of carriers on the p-side and improve the spreading of the carriers and the radiative recombination events. It also employs wider wells to compensate for wavelength changes with indium content, and to achieve a carrier spreading within the quantum wells that is desirable at high currents.
Our modelling of gallium nitride devices with Poisson-Schrödinger solvers provides qualitative information rather than quantitative predictions. Nevertheless, the algorithm we present demonstrates the power of machine learning for device design. Our method also applies to the optimisation of different LED structures than those presented here. When used in conjunction with actual materials fabrication, our method readily extends to the design of experimental devices. This work is currently ongoing.
Conclusions
Materials informatics is an emerging field 32-34 with great promise for functional materials design 35-39. This approach has not yet been adopted by the LED community, despite great potential for improving physical understanding and for accelerating structural design of devices. In this work, we demonstrate that active learning based global optimisation can rapidly and automatically explore Poisson-Schrödinger simulations of gallium nitride devices, and can accelerate the discovery of efficient LEDs. We are currently using this machine-learning approach to guide the growth of experimental structures.
The role of citizens and geoinformation in providing alerts and crisis information to the public
Mankind has been facing constant threats and challenges from natural and civilisational disasters for centuries. The fundamental responsibility of states is to protect the lives, health, and property of their citizens. However, protection against natural and civilisational disasters is a complex task in which the population also has to take part, and the availability of geoinformation is a prerequisite for effective protection. The aim of this study is to demonstrate the combined power of citizens and technology in the task of alerting and informing the public, and the opportunities offered by virtual crowdsourcing, Web 2.0, geoinformation, crisis maps, and drones, through the application of a qualitative method, by analysing case studies and by searching for internal connections between different phenomena. Citizens around the world can collaborate and contribute to the sharing and collection of geoinformation to create real-time, interactive maps. These so-called crisis maps support intervention organisations in obtaining information, and they can also be used as sources of information. The use of Web 2.0, crisis maps and drones, as well as the emergence of digital humanitarian volunteering, have fundamentally changed the role of the public when it comes to responding to disasters, including alerting them using geoinformation.
Introduction
In the era of global information flows, crises and disasters can no longer be interpreted locally or regionally. Their occurrence, the events of these processes and the details of the events flow freely on the World Wide Web through regions and across national borders. News of disasters with a larger spatial and temporal scope, such as earthquakes, tsunamis, volcanic eruptions, nuclear accidents, wars and migration waves, can be transmitted to any part of the world within a fraction of a second, helping governmental and non-governmental systems and individuals in regions far from the point of occurrence but potentially affected to mobilise and increase their response capacity. This has given rise to the phenomenon of the so-called 'virtual crowd in times of crisis', which has made it possible for practically anyone and everyone to know what is happening in crisis situations and to react to them. The emergence of this practice has radically changed the systems and procedures of crisis and disaster management, rendering the whole spectrum of response and management faster, more accurate, more professional, and more reliable.
As the virtual crowd is given an increasingly bigger role and its contribution is becoming increasingly essential in disaster response, it prompts a necessary and complete overhaul as well as adaptation of mass communication, media, and governance systems. Decision-support systems based on real-time, voluntary community participation and geoinformation gathering and sharing have been developed to understand and harness the power and collective wisdom of the masses. Residents around the world can collaborate and contribute to the sharing and collection of crisis information to create interactive and up-to-date crisis maps (CM), through free and open-source software such as Ushahidi. Volunteered geographic information (VGI) and its analysis on maps provide valuable information for crisis management and response organisations, and also provide real-time, key information and assistance to people affected by a specific disaster.
Thus, alongside the information from organisations involved in the management of crisis situations and disasters, there is also information generated by groups of people present or having specific knowledge, the so-called Digital Humanitarian Volunteers (DHVs). It enables the creation and continuous evolution of a collective dataset of people and organisations connecting people, communities, and governmental organisations, so that the set of actors in a given crisis and disaster situation may merge into a kind of resource pool and then into a crowdsourcing solution. The crowdsourcing phenomenon allows for information to be generated in a fundamentally different way from what people are used to or which has been provided by official sources, and this can speed up and clarify the information snapshot on the crisis or disaster at a given moment. It enables faster, more precise, more adequate and more efficient solutions to be created in order to manage and solve a given crisis and to prevent the emergence of further risks.
The civilian population has now been integrated into the disaster response systems, which used to be state-run, and it can now directly support the complex system of public alert and information. This trend increasingly results in a situation where crowdsourcing remains outside the competences of the government, creating, from a governmental perspective, a so-called 'outsourced' public alerting and information segment. The fundamental reason is that the exploitation of collective wisdom in crisis situations has proven to be far more effective than the deployment of closed state systems. This paper therefore seeks to answer the following questions: • What is the role of citizens and geoinformation in providing alerts and information to the public?
• What process has led to the use of voluntarily generated geoinformation in disasters, paying special regard to the provision of public alerts and information?
• What are the future trends in geoinformation, public alerts and information provision?
The thesis of this paper is that, as the Internet and its popularity grew and led to the development of the Web 2.0 phenomenon and related technologies and technical tools, governmental, non-governmental and civil users have become interconnected, creating a living, mutually supportive community in disaster response. Depending on the particular crisis and disaster, the set of actors that emerges immediately first forms a 'core', then starts to grow and builds up into a kind of resource mass, and later into a so-called 'mass resource'. In this process, voluntarily generated geoinformation is becoming increasingly important as it provides a growing, increasingly diverse and increasingly accurate set of information for disaster response, and thus for the information provision and alerting of the population, which is the scope of this analysis. This is evidenced by the growing number of websites and applications around the world that use volunteered geoinformation to effectively support the work of governments, humanitarian organisations, civil society organisations and individuals in disaster and crisis prevention, response and recovery, with the involvement of the affected population. Case studies confirm that these platforms worked efficiently when used (Ushahidi, n.d.).
In terms of methodology, each chapter of the study is based on different research methods. The research primarily uses a qualitative research method, in which, with a deep content analysis, it looks for internal relations and connections between topics and phenomena. The quantitative method is used to describe statistical data and cause-and-effect relationships, which helps to point out the topicality and necessity of the subject. The first main chapter is based on a literature review, which examines the possibilities inherent in the crowd, especially the power and wisdom of the virtual crowd characteristic of our information society, as well as the connections between the virtual crowd and crowdsourcing in light of Web 2.0. Based on the possibilities inherent in the crowd, the sub-chapter looks for connections in the tasks of crowdsourcing and public alerting and information. In addition to reviewing the literature related to volunteer-based geoinformation, the next main chapter illustrates the role of geoinformation systems such as Ushahidi, OpenStreetMap, drones and the volunteer population in the task of public alerting and information by analysing case studies and processing their experiences.
In order to prove the thesis, this paper first explains the relationship between the public and crowdsourcing from the perspective of public alerting and information, and describes the basics and related concepts regarding the use of volunteered geoinformation. It then illustrates the operation and potential of crisis mapping in alerting and informing the public by referring to case studies and drawing conclusions. At the end, it makes suggestions for future procedures by looking at and analysing current trends.
Because of fast-developing technology, governmental structures, international relations and market players, there are specific limitations that the authors had to respect. The aim of the authors is to illustrate the determining role of the population in the task of alerting and informing the public. There have been many studies on the relationship between social media and volunteered geographic information which examine the importance of social media and volunteered geographic information in disaster situations; however, apart from one or two examples, the focus of this study is open-source software solutions. Geoinformation-based solutions are not intended to replace information from official sources; therefore the study primarily deals with the possibilities of the systems, which support the addition of information from official sources, and not with the limitations of the systems.
The relationship between the citizens and crowdsourcing
In terms of the relationship between the public and crowdsourcing, the public is an effective participant in disaster response through its collective power and wisdom and through its use of crowdsourcing. The thesis of this chapter is that, through the means of crowdsourcing, sharing information makes the masses of citizens active and decisive participants in disaster preparedness, public alert and information, given the right technical and technological environment. The aim of this chapter is to illustrate the potential entailed in the masses of people through the related phenomena of 'the power of the crowd' and 'the wisdom of crowds', as well as crowdsourcing and the Web 2.0 phenomenon that facilitates its effectiveness on the one hand, and to illustrate the role of crowdsourcing in the task of alerting and informing the public, while taking into account the current technical and technological environment, on the other. The chapter first analyses the so-called 'power of the crowd' and 'wisdom of the crowd' phenomena, then interprets the general technological and social context of crowdsourcing, and finally goes into detail on the links between crowdsourcing and public information.
The 'power of the crowd' and the 'wisdom of crowds' phenomena
The 'power of the crowd' and the 'wisdom of crowds' phenomena show what individuals can do when they form a crowd and what they can consequently achieve. The power of the crowd phenomenon refers to what can be achieved by a multitude of individuals coming together for a common goal when it comes to problem solving, change, and advocacy. A range of events in history have demonstrated the masses' capacity for advocacy, such as in revolutions or in the fall of the Berlin Wall, and in national and international humanitarian co-operations during natural disasters (e.g. the earthquakes in Haiti and Japan, the Australian forest fires) and industrial disasters (the red sludge disaster in Ajka, Hungary, the chemical explosions in Beirut). When used for propaganda, however, such a tool can also lead the masses in dangerous directions, such as during National Socialism, and the same negative effect can also be observed in the ongoing armed conflict between Russia and Ukraine. According to Jeff Howe (2008), under the right circumstances, a large diverse group of individuals will almost always outperform a small group of talented professionals. In addition to the phenomenon of crowd power, James Surowiecki popularised the wisdom of the crowds theory in his 2004 book, which examined how large groups make the right decisions. The idea behind the theory is that groups of people collectively are likely to be smarter than individual experts when it comes to problem-solving, decision-making, innovation and forecasting (Surowiecki, 2004). The idea of the wisdom of the crowds appears in Aristotle's Politics, Book III, which he illustrates with a dinner to which everyone contributes and which is thus more satisfying than if only one member of the group had provided it (Aristotle, 1969). The combination of the collective power and wisdom of the crowd has given rise to crowdsourcing. The Aristotelian dinner brings together the expertise of a group in much the same way as crowdsourcing uses the expertise of the masses to solve a problem.
Crowdsourcing
The term crowdsourcing was first described by Jeff Howe as a contributor to Wired magazine in 2006, when he wrote about how companies were using the Internet to outsource jobs en masse (Howe, 2006). Activities such as crowdsourcing existed before the advent of the Internet, as early as the 1800s, when the creators of the Oxford English Dictionary asked the masses for help. Following a public call, people collected the words they were using, along with their perceived meanings (Ideaconnection, 2019). We also find early examples of crowdsourcing in disaster prevention, when, following the Lisbon earthquake of 1755, volunteers from across Europe reported their experiences to help researchers create an early 'quake map' that estimated the scale and intensity of the seismic event (UNDRR, 2017). Crowdsourcing can also be interpreted as a business model, activity or function that depends on mass users and the outsourcing of certain tasks. Crowdsourcing is also a process whereby companies, public organisations or NGOs 'outsource' parts of their activities to the community, when typically each participant contributes a small share of the workload. The term crowd refers to the large number of participants in a process, who are essentially volunteers. The meaning of the term has evolved and it continues to change, and in the meanwhile it appears and gains legitimacy in newer areas, including the public, business, and civil spheres (Hammon and Hippner, 2012). This chapter argues that crowdsourcing has become a system to create value for a common goal by reaching out to the masses, connecting committed people, and learning about and harnessing opinions to solve a given task. In other words, the information set becomes valuable via the systematisation of the data generated by the masses, which enriches knowledge and collective wisdom as a whole. The masses can thus become a resource mass and a mass resource.
Crowdsourcing and Web 2.0
Crowdsourcing could not have become widespread in different disciplines without the popularity of Web 2.0. The World Wide Web initially contained read-only information, but with the advent of Web 2.0, the formerly passive, editorial content was shifted towards active user participation. New Internet services have appeared, along with new forms of communication and media platforms. Web 2.0 introduced blogging, video sharing, Wiki sites and social media, which are filled with content by their users. This type of content is usually referred to as user-generated content (Hudson-Smith et al., 2009). Web 2.0 provides easier access to target audiences, and its popular use has made communication and coordination activities cheaper and easier. The uniform technological standards allow for masses of people to be involved in content creation, without any specialised technological skills. Darcy DiNucci first coined the term Web 2.0 to identify this phenomenon in 1999, and it has become popular and widespread partly thanks to the work of Tim O'Reilly (Prandini and Ramilli, 2012).
The Web 2.0 functionality allows people to deliver and display their messages on existing platforms, but it also enables their modification and enhancing. Web 2.0 also brings along tagging, whereby an image, video or text can be supplied with metadata. The users may tag content uploaded to the web by adding keywords, and the tags listed for the file lead to files on the same topic (Szűts, 2012). The spread of the Internet and Web 2.0 has created a communication platform that makes the notion of the masses even broader. Before the advent of the Internet, the term 'crowd' itself was used to refer to people who were in physical proximity to each other. However, the Internet created the concept of a 'virtual crowd', where people could form communities in online spaces to achieve a specific goal, which also encouraged crowdsourcing activities to take place (Vander Schee, 2009).
The links between crowdsourcing and public information
When considering the connection between crowdsourcing and public information, it can be established that, just as the spread of the Internet, Web 2.0, smart devices and other related technologies has helped crowdsourcing to expand and become more effective, so crowdsourcing, together with technical and technological developments, helps, supports and complements the task of alerting and informing the public. The popularity of the Internet, social media sites and smart devices created a platform for the public whereby they can take part in disaster response in an interactive way. Based on the research conducted by Hossain and Kauranen (2015), crowdsourcing proves to be particularly successful in disaster situations. Through crowdsourcing, information linked to a geographical location, i.e. geoinformation, can be obtained (Heipke, 2010), and it is also possible to involve a broad layer of society in the phases of disaster management. In other words, a group can also be formed whose members may be affected themselves, making the population important in the task system of public alerting and information. A resident affected by a disaster or crisis can provide real-time, credible information by sharing their own perceptions and experiences, which can indirectly support the alert and information activities of emergency response organisations. In this sense, citizens operate as sensors and as a special network of sensors, forming a group which is composed of the people themselves (Goodchild, 2007). They use their five senses to detect and understand local information, and with the help of their communication tools, such as mobile phones (which by their very nature carry sensors in themselves), they also support the task of alerting and informing the public. Professional organisations can obtain real-time information to supplement their own sources and thus achieve more effective prevention and protection.
According to research, the fastest earthquake alerts, as a positive example of the above, come from social media networks rather than from the seismic underground sensors of the relevant institute, the US Geological Survey. Users of the social media platform Twitter use the hashtag #earthquake, a method mentioned earlier, to report their experiences in their posts. The US Geological Survey uses software to monitor the appropriate words, which allows it to pinpoint the source of an earthquake faster than seismic underground sensors. However, these entries, also known as tweets, can only be credible and usable if they are linked to Global Positioning System (GPS) data that determines their exact location. Twitter can be used to estimate the size and impact of an earthquake, but also to help fill the data gap in regions with sparse seismic networks (Oskin, 2014). As can be seen, the very existence of geoinformation is an essential element in this process. In addition to information sharing, where appropriate, there are also open platforms such as Ushahidi, which allow affected residents to be informed, in addition to being able to post comments. The question of the authenticity of the received information is not addressed in detail in this study, but if a report does not have a geolocation or the information received is not otherwise verifiable, a large number of users reporting the same event can serve to support the credibility of the original post. In fact, it is the users themselves who can confirm reliable reports or uncover fake ones.
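In schematic form, such keyword monitoring reduces to filtering geotagged posts, as in the sketch below; the post structure, field names and feed are hypothetical illustrations, and this is not the USGS software.

```python
# Schematic sketch of keyword-based geotagged post monitoring, in the spirit
# of the practice described above. The post structure and feed are
# hypothetical; this is not the agency's actual software.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    lat: Optional[float]   # GPS latitude, if the post is geotagged
    lon: Optional[float]

def earthquake_reports(posts, keyword="earthquake"):
    """Keep only geotagged posts mentioning the keyword; others are unusable."""
    return [p for p in posts if keyword in p.text.lower()
            and p.lat is not None and p.lon is not None]

feed = [
    Post("Huge #earthquake just now, shelves fell!", 37.77, -122.42),
    Post("earthquake?? anyone else feel that", None, None),  # no GPS: dropped
    Post("Morning run done", 37.80, -122.27),                # off-topic
]
reports = earthquake_reports(feed)
print(len(reports), "usable geotagged report(s) at",
      [(p.lat, p.lon) for p in reports])
```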
In another process, local or geographically distant volunteers are involved in processing information coming in from stakeholders on various platforms, sent in the form of SMS or MMS, through web interfaces, or gathered from the media through the tags or hashtags mentioned above. The purpose, operation and use are determined by the software in question. Through the process of crowdsourcing, people, who may be affected local residents or geographically distant persons, use digital technology and voluntarily participate in a disaster or crisis situation. They are the ones reporting, or managing such reports, with the information they provide, the so-called volunteered geoinformation, and thus they emerge as a special group of digital volunteers.
Overall, the potential of the public, combining crowdsourcing and the associated state-of-the-art techniques and technologies, is becoming an increasingly important element of the public alert and information task. Crowdsourcing gives the public the opportunity to express their opinions and share their experiences. The tools and technologies used by the public support the effective development of crowdsourcing, and government systems rely heavily on the crowd-sourced collective wisdom.
Use of voluntary geoinformation
The use of voluntary geoinformation, provided by local or remote individuals and groups which share information, manifests as crisis mapping in times of disaster and crisis, and it supports the task of alerting and informing the public. The thesis of the chapter is that, as a special group of digital volunteers, these individuals and groups play an important role in supporting decisions by way of the geoinformation-based crisis maps created with their voluntarily provided data. This is how they provide a supporting technical and technological background to the process of public alerting and informing. The aim of this chapter is to present the use of voluntarily provided geoinformation in the task of alerting and informing the public. In light of this, the chapter first highlights the significance, use and connections of volunteer-based geoinformation in the field of public alerting and information, and then analyses the possibilities of mapping the volunteered data and its related concepts. The chapter illustrates the role of geoinformation systems in disaster response by drawing on the experiences of the Ushahidi and OSM platforms, and finally the role of airborne drones in crisis mapping.
Volunteered geoinformation
Volunteered geoinformation is geographic information collected and shared voluntarily by individuals. The term volunteered geographic information (VGI) was coined by Michael F. Goodchild. Volunteered geoinformation is a specific form of user-generated content. Most of these users are unskilled, but together they show great potential, which undoubtedly has a profound impact on the role of geographic information systems (Goodchild and Glennon, 2010). The role of volunteer-based geoinformation in disaster management was first observed in the 2007-2009 Southern California wildfires, where volunteer residents shared information about the fires, and it was also noticed that volunteers were able to provide more timely situational information in some circumstances than official sources (Fenga et al., 2022). Today, two approaches to volunteer-based geoinformation are known, one based on direct participation and the other the so-called opportunistic approach (Ostermann and Spinsanti, 2011). A participatory approach requires conscious and active participation, and people usually need to install a specific application, use a website created for it, or register their own account in order to share information.
In contrast, the opportunistic approach is a passive and more unconscious way of providing information, the source of which is, for example, the social media interface.
The concept of information, which is the basis of volunteered geoinformation, is important in disaster and crisis management, as well as in alerting and informing the public, since information at the right time and place can reduce damage and loss, and information-induced communication can increase cooperative effectiveness. Effective communication and cooperation require real-time information flow. In a crisis, there is increased demand for information in general, from the public and from the intervening organisations and individuals. The number of public reports and requests for help to be handled by the relevant state, governmental and civil organisations increases, and the various intervening organisations to some extent also depend on information provided by the public. Processing, sorting, and updating large amounts of information requires adequate information technologies and tools.
Location-based information constitutes a specific group of data. For this group of information, a particular location is of importance, whether in everyday life or in a disaster situation, because it reveals the location where the events are taking place. Any piece of information on objects or phenomena which is connected to a geographic or spatial location is defined as geoinformation. The means of storing and displaying geoinformation was previously limited to traditional maps, but it received a boost with the development of computer technology and the emergence of geoinformation systems. Such information also has an information system which can be used to extract, store, organise, retrieve, manage and analyse data and to perform various operations. These systems, which are used to obtain, manage, analyse and display information in a fixed location, are called spatial information systems or Geographic Information Systems (GIS) (Czimber, 2012).
Spatial data was formerly handled only by professionals, but with the advent of Web 2.0 and the development of mobile devices that provide location data, people are now more involved and share such data not only for their own use, but for the public, and such data and information could be even more detailed and better quality than what official organisations used to provide. This type of voluntary data provision has led to volunteered geoinformation and has become an effective tool for expanding, improving, supplementing, updating and using existing information for humanitarian purposes. This is a specific kind of information, such as familiarity with the location that is not available by means of traditional data collection processes, and it allows for the compilation of highly detailed reports on local conditions during disasters (Haworth and Bruce, 2015).
Digital humanitarian volunteering
Digital humanitarian volunteers (DHVs), as referred to by John Crowley, are a special group of digital volunteers (Crowley, 2013). These digital humanitarians, as defined by Patrick Meier, form a new group of disaster response volunteers, sharing volunteer-based geoinformation. Digital humanitarians can include local residents or geographically affected masses alike, who provide raw information directly, and the indirectly involved community, even a geographically distant one that coordinates and manages information to support a humanitarian action. In terms of profession, digital humanitarians could be technologists, geographers, seasoned humanitarian experts, journalists, skilled community translators or academics (Radianti and Gjøsaeter, 2019). Digital humanitarians can also include those without the above professional skills, as VGI itself refers to a phenomenon where people can report their geographical location and create a map without any cartographic knowledge (Hung et al., 2016). The use of mapping platforms is the basis for digital humanitarian volunteering. Initially, the term digital humanitarian was often associated with crisis mapping activities during disasters.
The basics of crisis mapping
Crisis mapping refers to the collection of real-time data, the display and analysis of incoming, localised, up-to-date information on a digital map in disaster and crisis situations, such as war, elections, natural disasters and humanitarian crises. Crisis mapping projects usually allow people, including the general public and crisis managers, to provide information either remotely or from the scene of the crisis. The advantage of the crisis mapping method over others is that it can increase situational awareness by allowing the public to make announcements (Stewart, 2011). Crisis mapping can track fires, floods, pollution, crime, political violence, the spread of diseases, as well as provide transparency on fast-moving events and facilitate the identification of longer-term phenomena. It can effectively demonstrate the spread of a geographical phenomenon. Crisis maps are used by intervening organisations to help them identify and respond to situations (Stringer, 2014). The information can usually be sent to the initiators of the map by SMS or by filling out an online form, which will then automatically appear on the online map after approval, or it can be collected by a specific group of volunteers (social media platforms, mainstream media, etc.) to be displayed on the map (Aitamurto, 2012). Crisis mappers are usually online groups of volunteers who collect and provide online data to people responding to or affected by disasters. Organisations active in crisis mapping include Ushahidi and the Humanitarian OpenStreetMap Team (HOT). The term crisis mapping was made popular in the media by Ushahidi after the 2010 Haiti earthquake.
Ushahidi
Ushahidi is an initiative, a platform, and a freely editable, open-source technology created to collect and analyse crowd-sourced data in disaster and crisis situations. It was created with the aim of exploiting the potential of crowdsourcing to facilitate the sharing of information in an uncertain environment. The term Ushahidi comes from Swahili; it means witness testimony. The Ushahidi platform was set up in early 2008 to map violent incidents and peace efforts in the aftermath of Kenya's general elections. Ory Okolloh, a Kenyan activist, lawyer and blogger, and later a founder of Ushahidi, envisioned a website where people could anonymously report incidents online or via SMS; with this information displayed on a map, people could get a real picture of the course of events. The site was launched with the help of volunteering experts. Information imported by volunteers from social media platforms, blogs, text messages, mainstream media and announcements made on the web interface was displayed on web-based interactive maps, available to anyone with an Internet connection. All reports had to be manually checked and approved by the Ushahidi staff. A growing number of people started to use the new platform to share information, while some radio stations treated the website as a source of information. It turned into an interactive website: people could not only contribute information but also receive it from the platform. As interest in the website grew, it became clear that the tool could be used outside Kenya, particularly during disasters and crisis situations (Okolloh, 2009).
Ushahidi can also assist local and international NGOs working in crisis situations in the early monitoring of conflict and danger, in the tracking of emergencies, and in assessing damage and losses during recovery. The use of the related technology in emergencies received more attention following the response to the 2010 Haiti earthquake. Patrick Meier helped the crisis mapping process by creating an online map to track, using incoming reports, what was really happening. Following the earthquake, Ushahidi launched a crisis map that enabled volunteers from around the world to map real-time reports of incidents, damage and calls for help from various sources, including social and mainstream media as well as messages from the affected communities submitted via SMS or the Internet. Residents in need of help or wishing to make a report used their mobile phones to send messages to a central response team with a designated phone number. For the Haitian population, SMS notifications were predominant, as 80% of the population had a mobile phone while only 10% had Internet access. The incoming messages were translated by online volunteers before they were displayed on the crisis maps (Stewart, 2011). Other search and rescue teams also started to use the information from the map, such as the US disaster management agency, the Federal Emergency Management Agency (FEMA), as well as the United States Marine Corps.
The concept of the Standby Task Force (SBTF) was introduced at the 2010 Crisis Mapping Conference to streamline online support from volunteers, building on lessons learnt, and to create a formal platform that forms a network of digital volunteers ready to be deployed in crisis situations. Some members of the team are specialists in information and communication technology or humanitarian operations, while others come from different backgrounds, and the team has helped to develop their skills (Standby Task Force, n.d.). Outside of volunteer groups, however, anyone can create an online map via the website using Ushahidi's open-source software. In 2017 Ushahidi released an app for iOS and Android devices, which allows anyone to collect and make reports from anywhere and at any time, in line with Ushahidi's mission. The application enables quick reports to be made with or without an Internet connection: it saves the collected data, the GPS location, images and video, and sends them to the map interface as soon as the Internet connection becomes available. It also allows people to collect, analyse, visualise and respond to data all in one place. Ushahidi lets users create custom surveys, import data from third-party services (social media platforms, SMS, etc.), share maps and timelines publicly, and coordinate a group response (Ushahidi, n.d.).
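As a rough illustration of how such a deployment can be fed programmatically, the sketch below submits one report to a self-hosted Ushahidi deployment over HTTP. The deployment URL, endpoint path and field layout follow the general shape of the Ushahidi Platform REST API but are assumptions here; any real integration should be checked against the deployment's own API documentation.

```python
import requests

DEPLOYMENT = "https://example-deployment.ushahidi.io"  # placeholder URL

def submit_report(title, text, lat, lon, token=None):
    """Submit one post to an Ushahidi deployment (hypothetical field layout)."""
    payload = {
        "title": title,
        "content": text,
        # A real deployment maps the point to a location field of a survey/form.
        "values": {"location": [{"lat": lat, "lon": lon}]},
    }
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.post(f"{DEPLOYMENT}/api/v5/posts",
                         json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example call; unauthenticated submissions typically land in a moderation queue.
submit_report("Collapsed building", "Two people trapped", 18.55, -72.33)
```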
OpenStreetMap (OSM)
The Ushahidi platform is not the only mapping technology developed to collect and map crowdsourced data. This can also be done with OSM, either by involving volunteer mappers during an event or through the regular, periodic mapping efforts of local communities, which ensure that relevant, up-to-date information is available to support response organisations when a disaster occurs. This type of cooperation has been particularly successful in poorly mapped areas, e.g. when it was used to map local schools and health services. Experience has repeatedly shown that OSM data is just as accurate as, and in some cases more accurate than, data sets produced by official organisations. The dynamic nature of OSM allows map data to be kept up to date, especially in places where active local mapping and map-processing communities have developed. OSM is a teamwork-based project that aims to create a freely editable world map. The main reasons for its creation were the restricted access to and use of geoinformation worldwide and the emergence of low-cost navigation tools. The project was launched by Steve Coast in the UK in 2004. The inspiration came from the success of Wikipedia and the predominance of proprietary geographical data. Registered users can collect data through manual surveys, GPS tools, aerial photography and other freely available sources, or can utilise their local knowledge (Ramm et al., 2011).
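Because OSM data is open, anyone can also query it programmatically; the Overpass API is a widely used read-only endpoint for exactly the kind of health-facility lookups mentioned above. The sketch below fetches hospitals inside a bounding box (the coordinates are arbitrary illustration values around Port-au-Prince):

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Overpass QL: all nodes tagged amenity=hospital inside a
# (south, west, north, east) bounding box.
query = """
[out:json][timeout:25];
node["amenity"="hospital"](18.45,-72.45,18.65,-72.25);
out body;
"""

resp = requests.post(OVERPASS_URL, data={"data": query}, timeout=60)
resp.raise_for_status()
for node in resp.json()["elements"]:
    name = node.get("tags", {}).get("name", "(unnamed)")
    print(node["lat"], node["lon"], name)
```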
The free and open-source OSM platform is primarily known for its community-collected and processed data on roads and streets (road mapping) around the world, but it has also been used to map community-sourced data on other types of infrastructure, such as refugee camps in Haiti, health facilities in Libya, damaged buildings in Turkey and disaster preparedness data in Indonesia. Within 48 hours of the earthquake in Haiti, the OSM community began using high-resolution post-event satellite images to map the earthquake-affected area. In the space of a month, more than 600 volunteer contributors signed up to build a base map of Haiti on OSM, which could provide fundamental information for organisations responding to the crisis. OSM mapped the Haiti crisis by synthesising a comprehensive digital map of Haiti's infrastructure, including roads, buildings and camps, gathered from satellite imagery and GPS surveys with the massive input of volunteers (Internews Center For Innovation & Learning, 2012). The project's data was then used by various aid agencies, including the World Bank. In 2010, following the Haiti earthquake, the Humanitarian OSM Team (HOT) was established to produce and provide timely maps free of charge during crisis situations around the world, using volunteer cartographers. HOT acts as an interface between OSM communities and humanitarian organisations. To date, however, the Ushahidi platform remains the principal technology used to map events from community sources (Humanitarian OpenStreetMap Team, n.d.).
Airborne drones and volunteered geoinformation
In recent years, aerial drones have also been used in the production of crisis maps based on volunteered geoinformation, with an impact on the emergency alert and information function, as drones are versatile tools that can be used in all phases of disaster response. Volunteered geoinformation is usually available through geoinformation-based collaboration and community mapping sites; Ushahidi and OSM are examples of websites that provide informal sources of data and local knowledge about the geography of a place. The technologies enabling volunteered geoinformation, such as Web 2.0, geo-referencing, tags, GPS and broadband communication, also favour the use of multifunctional drones. Drones can be used to create high-quality maps, images and videos, some of which can be shared on the web, and they support First Person View (FPV) and live streaming. Volunteered geoinformation is valuable if it can complement the existing spatial data infrastructure. Even with high-resolution images taken from high altitudes, a true understanding of some of the Earth's elements and processes is not possible without the involvement of local participants. Drones allow these participants to access 3D space from a different perspective and at a different speed, and they can complement what humans are capable of sensing. Sensors carried by drones can be used in conjunction with human observation, either in real time or in retrospect, and these practices can serve purposes such as early warning and response in disaster and crisis situations (Choi-Fitzpatrick, 2014).
Since the Haiti earthquake in 2010, Meier's crisis mapping has supported humanitarian action in almost every major disaster. The 2015 earthquake in Nepal also saw the first use of robotics for humanitarian purposes, including camera-equipped unmanned aircraft, i.e. drones, which take hundreds of images that can be stitched together to create maps or 3D models. Meier envisages that the use of drones will make crisis mapping even more effective in disaster response: robotics facilitates the collection of information that is then put on the crisis map, providing another way of collecting and mapping geo-referenced data. In the immediate aftermath of the Nepal earthquake, which killed over 8,000 people, cloudy weather obscured the view on the few satellite images that were recorded. However, Meier and his team were able to use drones to capture detailed images of the damage around the capital, Kathmandu. Through WeRobotics, the non-profit organisation he started, Meier collaborated with Nepalese experts to create a local "Flying Lab" that could conduct this kind of activity in future emergencies. Meier sees the solution in the power of local communities as the first responders to disasters, since they are unmatched in their familiarity with the location. His expertise has been used and recognised by the UN, the World Bank and the United States Agency for International Development.
However, drones also present some challenges due to air traffic regulations: each country has different aviation rules that need to be taken into account when flying them (NPR, 2016). In 2015, Meier was asked by the World Bank to coordinate a humanitarian drone mission to speed up damage assessment in the South Pacific. At the time there were no local drone pilots available, so two foreign drone companies had to be recruited from Australia and New Zealand. When the South Pacific was struck by severe cyclones in 2018, a local drone pilot, serving as the South Pacific Flying Labs coordinator in Fiji, was assigned to support the affected areas. He became the first local drone pilot to be deployed in the region together with the National Red Cross Society (iRevolutions, 2018). At WeRobotics, his mission is to create opportunities for local communities to participate in, and lead, problem-solving, as well as to take part in providing solutions. Localising robotics expertise creates new opportunities in humanitarian aid and disaster response (Flying Labs, n.d.).
Overall, the rise of the Internet, the popularity of Web 2.0, the development of geographic information systems, smart devices and related technologies, and crowdsourcing have all improved the collection and sharing of, and interaction with, geoinformation, leading to the emergence of volunteered geoinformation. Voluntarily generated geoinformation forms the basis for crisis maps, thus supporting the decision-support function of individuals, groups and organisations involved in disasters and crisis management. In recent years, as part of crisis mapping, aerial drones have been used to collect volunteer-based geoinformation, further developing the tools for alerting and informing the public.
Conclusions
As a result, it can be stated that the study proved its theses: residents are active and decisive participants in disaster prevention, public alerting and information through crowdsourcing, provided information is shared in the appropriate technical and technological environment, since even posts shared by 'passive' individuals on social media can be collected by active and conscious participants and displayed on crisis maps. Digital volunteers perform a decision-support function, and with the development and popularity of the Internet, the Web 2.0 phenomenon and related technologies and devices, governmental, non-governmental and civil users have become connected and now form a living, mutually helpful community during disaster prevention. Voluntarily produced geoinformation has become more and more important, as it has provided an ever larger, more diverse and more accurate set of information for protection against disasters, including for the public information and alerting functions under investigation.
It can be concluded that the development of technology (Web 2.0, geoinformation, mobile technology, robotics), the development of society (the collective power and wisdom of crowds, crowdsourcing, active participation in humanitarian activities, environmental awareness and awareness of surrounding dangers), and their combined development (digital awareness, volunteer-based geoinformation, digital humanitarian volunteering, crisis maps, drone pilots) all help and support the task of alerting and informing the public. This manifests itself in two ways: participating volunteers complement the information of professional organisations by sharing volunteer-based geoinformation, which can be used as a basis for public information, and they can in turn be informed by geoinformation shared and verified by other residents. As a result of digital volunteering, people become more aware of both the technological achievements and the dangers in their environment. Whether someone is involved in a disaster or crisis as a geoinformation-sharing volunteer or as a member of a humanitarian team helping to synchronise information to ensure the safety of a community, they are actual participants in the disaster response and they are digital humanitarian volunteers. The public becomes involved as (1) sensors, (2) digital humanitarian volunteers, (3) producers of volunteered geoinformation, (3a) users, (3b) sharers, (3c) synchronisers and map makers, (4) map analysts and (5) drone pilots. The initiatives and examples of Ushahidi and OpenStreetMap reflect the power and benefits of using crowd-sourced, user-generated data, which enables faster and more direct communication between residents and aid agencies than the usual procedures of traditional humanitarian organisations.
Agreeing with Clay Shirky's idea that technology only becomes socially interesting once it has become boring at the technological level (de Kretser, 2017), it can be added that in some cases a technology matures in a given field only later, when its technical and technological elements have become sufficiently popular, part of our everyday life, and the population accepts and knows about it on a large scale. In terms of public information and alerting, disaster management requires continuous monitoring and adaptation to the needs of the population, since technology alone cannot have a positive impact without human factors. It is therefore important that the means and methods of public alerting and information are adapted to the constantly changing environment, taking into account and implementing both old and new technologies.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Two-Loop QCD Corrections to the Higgs Plus Three-parton Amplitudes with Top Mass Correction
We obtain the two-loop QCD corrections to the Higgs plus three-parton amplitudes with dimension-seven operators in Higgs effective field theory. This provides the two-loop S-matrix elements for Higgs plus one-jet production at the LHC with top-mass correction. We apply efficient unitarity plus IBP methods which are described in detail. We also study the color decomposition of the fermion cuts and find a connection between fundamental and adjoint representations which can be used to reduce non-planar to planar unitarity cuts in the Higgs to three-gluon amplitudes. We obtain final results in simple analytic form which exhibits intriguing hidden structures. The principle of maximal transcendentality is found to be satisfied for all results. The lower transcendentality parts also contain universal building blocks and can be written in compact analytic form, suggesting further hidden structures.
Scattering amplitudes play an indispensable role in particle physics: they act as a bridge between theory and experiment. The Large Hadron Collider (LHC) verified the correctness of the standard model of particle physics with the discovery of its last missing particle, the Higgs boson [1,2]. One main objective of present and future collider experiments is to understand more precisely the Higgs properties and the mechanism of electroweak symmetry breaking. The proposed future colliders, such as the circular electron-positron collider (CEPC) in China [3,4] and the future circular collider (FCC) at CERN [5][6][7], are expected to provide experimental data with unprecedented precision. In order to compare with experiment, one needs to compute scattering amplitudes to next-to-next-to-leading order (NNLO) or even higher orders. This is usually beyond the capability of traditional Feynman diagram methods. Fortunately, over the last thirty years many new methods and tools for amplitude calculations have been developed, including the spinor helicity formalism [8][9][10][11], the unitarity cut method [12][13][14], and recursion relations [15][16][17]. These methods have achieved great success in the computation of scattering amplitudes, not only in supersymmetric field theories but also in realistic QCD.
In this paper, we study the Higgs plus three-parton amplitudes in the standard model. Our motivation is twofold. First, as mentioned above, precise theoretical predictions for Higgs scattering processes are in high demand to match the improving precision of experiments. At the LHC, the dominant Higgs production channel is gluon fusion through a top quark loop [18,19]. The computation of this process can be simplified using an effective field theory (EFT) in which the top quark is integrated out [20][21][22][23][24][25][26]. This EFT is a good approximation when the top mass m_t is much larger than the Higgs mass m_H. The leading term in the effective Lagrangian is a unique dimension-5 operator, H tr(F_{μν}F^{μν}), where H is the Higgs field and F_{μν} is the gauge field strength. The two-loop QCD corrections to Higgs plus three-parton amplitudes with the leading dimension-5 operator were computed in [27] and have been used to obtain the cross sections of Higgs plus one-jet production at N^2LO [28][29][30][31][32][33][34] in the infinite top mass limit. When the Higgs transverse momentum is comparable to the top mass, the contribution of higher-dimension operators in the Higgs EFT becomes important. This has so far been taken into account only at NLO QCD accuracy, including the finite top mass effect [35][36][37]. A concrete goal of this paper is to compute the two-loop QCD corrections for Higgs plus 3-parton amplitudes with dimension-7 operators in the Higgs EFT. This provides, for the first time at N^2LO QCD accuracy, the S-matrix elements of the top mass correction for Higgs plus one-jet production.
Another motivation is to study the hidden analytic structures of amplitudes. One particular focus of this paper is the so-called maximal transcendentality principle (MTP). Transcendentality is a mathematical quantity used to characterize the algebraic complexity of functions or numbers. The principle of maximal transcendentality conjectures that the algebraically most complicated parts of certain physical observables in QCD and N = 4 SYM are equal. It was first proposed in [38,39] that the anomalous dimensions of twist-two operators in N = 4 SYM can be obtained from the maximally transcendental part of the QCD results [40]. Intriguingly, the principle can be extended to the Higgs plus three-parton amplitudes or form factors, which involve complicated two-dimensional harmonic polylogarithms [41,42]. This was first observed for the tr(F^2) → 3g form factors [43], which on one side correspond to the QCD corrections to the Higgs to 3-parton amplitudes in the infinite top quark mass limit [27], and on the other side are equivalent to the form factors of the stress-tensor multiplet in N = 4 SYM. (The universal property of the maximally transcendental parts was also found in form factors of more general operators in N = 4 SYM [44][45][46][47][48].) Besides, the MTP was verified for other quantities like Wilson lines [49,50], and has also been applied to compute collinear anomalous dimensions [51]. Recently, the MTP was found to hold for three-gluon form factors of the dimension-6 operators in the pure gluon sector [52][53][54][55]. In this paper, as also reported in [56], we show that the MTP can be extended to form factors with external fundamental quarks: with a simple replacement of the quadratic Casimirs, C_F → C_A, the maximally transcendental (MT) part of the H → qq̄g form factors was found to reduce to the MT part of the H → 3g form factors.
Furthermore, we find that the lower transcendentality parts also exhibit certain universality. For example, the transcendentality degree-3 parts can be constructed from a building block T_3, plus simple log functions and constants. The degree-2 parts also contain building blocks T_2 and T̃_2. With these building blocks, the amplitudes can be written in compact form, which suggests that hidden structures also exist in the lower transcendentality parts. Exploring these structures further will be important for the computation of full QCD amplitudes.
Our computations employ a new strategy combining the unitarity cut [12][13][14] and integration by parts (IBP) methods [57,58]. We apply IBP directly to the cut integrands, which are constructed using unitarity cuts. This avoids the non-trivial reconstruction of the full integrand. The strategy also increases the efficiency of IBP reduction significantly once the cut constraints are imposed. A similar strategy has been used in [59]. Ideas of applying cuts to simplify IBP reductions were also studied in [60][61][62][63][64][65]. The pure gluon sector of the two-loop H → 3g amplitudes contains only the leading color contribution, so the loop integrands can be conveniently obtained using the planar unitarity method. In the presence of internal quarks, more complicated color structures appear. We will show that, by making a connection between fermions in the fundamental and adjoint representations, a color decomposition is possible such that the full two-loop H → 3g integrand can be constructed using planar cuts. This paper is organized as follows. In section 2, we review the Higgs EFT and describe the divergence structures. In section 3, we describe the details of the computation using the unitarity-IBP strategy. In section 4, we discuss the color decomposition of amplitudes that involve internal quarks. In section 5, we present the analytic results of the form factors. We conclude and discuss the transcendentality properties in section 6. Appendices A-D provide the expressions of the one-loop and two-loop results.
Preparations
In this section, we first describe the Higgs effective action and the dimension-seven operators; then we review the subtraction of divergences and provide explicit formulae.
Operator basis
The Higgs boson can be produced from gluon fusion through a heavy quark loop at the LHC. The Yukawa couplings between the Higgs and quarks are proportional to the quark masses, so the diagrams with a top quark loop dominate. Integrating out the top quark renders the Higgs effective field theory (HEFT) [20][21][22][23][24][25][26], where Ĉ_i are Wilson coefficients, H is the Higgs field, O_0 = tr(F^2) is the leading term, and the subleading terms contain dimension-6 operators [66][67][68][69][70]. The last two operators give zero contribution in the pure gluon sector and only contribute when there are internal quark lines. In this paper we will consider the full QCD corrections, including massless quark contributions. An amplitude with a Higgs boson and n gluons is equivalent to the form factor of an operator O_i in the EFT (2.1), where q^2 = m_H^2. In the following, we will often refer to Higgs amplitudes as form factors. Using the Bianchi identity, one can decompose the operator O_2 (see e.g. [67]). The operator relation can be transformed into a relation for the form factors, where the partial derivatives reduce to the square of q, the total momentum flowing through the O_0 operator. This relation can serve as a self-consistency check of our computations. One can classify the operators according to their length. Naturally, the length of an operator O is the number of elementary fields (A, ψ and ψ̄) in its lowest expansion (i.e. with the minimal number of elementary fields). For example, tr(F^2) ∼ tr(∂^2A^2) has length 2, and tr(F^3) ∼ tr(∂^3A^3) has length 3. A form factor of an operator is called "minimal" if it contains exactly the same number of on-shell particles as the lowest expansion of the operator. For example, tr(F^3) → ggg and ε^{ijk}ψ_iψ_jψ_k → qqq are minimal form factors, but tr(F^2) → ggg is a non-minimal form factor.
Sometimes this "naive" definition results in a vanishing minimal tree form factor. As an example, for O_4 the tree form factor with two external gluons vanishes, and the simplest non-zero tree form factor is O_4 → qq̄g. The reason is that, using the equation of motion, O_4 reduces to a length-3 operator. A more proper definition is that the minimal form factor for a given operator is the simplest form factor which is non-zero at tree level, and the length of the operator is the number of external on-shell states in its minimal form factor. With this definition, O_4 has length 3, and its minimal form factor is O_4 → qq̄g. Similarly, O_3 has length 4, and its minimal form factor is O_3 → qq̄qq̄.
Divergence structures
Form factors contain both UV and IR divergences. We apply dimensional regularization (D = 4 − 2ε) in the conventional dimensional regularization (CDR) scheme, and for the renormalization we use the modified minimal subtraction (MS-bar) scheme [71]. To subtract the IR divergences, we apply the Catani subtraction formula [72]. Below we describe these in detail.
To begin with, the bare form factor can be expanded in the bare coupling, where g_0 = g_YM is the bare gauge coupling and α_0 = g_0^2/(4π). We pull out a factor g_0^{δ_n} = g_0^{n−L} in the tree form factor, which depends on the number of external legs n and the length of the operator L.
The renormalization of the UV divergences can be implemented in two steps, one for the coupling constant and one for the local operator.
First, we express the bare coupling α_0 in terms of the renormalized coupling α_s = α_s(μ^2) = g_s(μ^2)^2/(4π), evaluated at the renormalization scale μ^2, where the factor S_ε = (4π e^{−γ_E})^ε is due to the use of the MS-bar scheme, and μ_0^2 is the scale introduced to keep the gauge coupling dimensionless in the bare Lagrangian. The first two coefficients of the β function involve the flavor number of fermions n_f and the quadratic Casimirs C_A and C_F in the adjoint and fundamental representations.
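The displayed definitions are missing at this point. The Casimirs below are the standard ones; the β-function coefficients are the standard values, written under the assumption, consistent with the running of α_s quoted in the next paragraph, that the paper uses the usual normalization:

```latex
C_A = N_c \,, \qquad C_F = \frac{N_c^2-1}{2N_c} \,, \qquad
\mathrm{tr}(T^a T^b) = t_F\,\delta^{ab} \,, \quad t_F = \tfrac12 \,,
% standard one- and two-loop coefficients in this normalization:
\beta_0 = \frac{11}{3}\,C_A - \frac{4}{3}\,t_F\, n_f \,, \qquad
\beta_1 = \frac{34}{3}\,C_A^2 - \frac{20}{3}\,C_A\, t_F\, n_f - 4\, C_F\, t_F\, n_f \,.
```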
Second, we renormalize the operator by introducing a renormalization constant Z for the operator, eq. (2.13). The anomalous dimension can be computed from the renormalization constant as in (2.14). Using (2.13) and noting that μ ∂α_s(μ)/∂μ = −2ε α_s − (β_0/2π) α_s^2 + O(α_s^3), and since γ is finite, the 1/ε^2 part of Z^{(2)} is fixed by the one-loop results, eq. (2.17). Expanding the renormalized form factor in the renormalized coupling, we obtain the relations between the renormalized components F^{(l)} and the bare ones. The renormalized form factors contain only IR divergences, which take a universal structure [72,73] (see also [27]); for the form factor with three external gluons, the corresponding gluonic quantities enter.
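The renormalization relations referred to in this paragraph (eqs. (2.13), (2.14) and (2.17)) are missing in display form; a minimal reconstruction under standard conventions (sign and normalization choices are assumptions) is:

```latex
F_R = Z\, F_B \,, \qquad
\gamma = \mu \frac{d \ln Z}{d\mu} \,, \qquad
Z = 1 + \frac{\alpha_s}{4\pi}\, Z^{(1)}
      + \Big(\frac{\alpha_s}{4\pi}\Big)^{2} Z^{(2)} + \mathcal{O}(\alpha_s^3) \,,
% finiteness of the anomalous dimension then fixes the double pole:
Z^{(2)}\Big|_{1/\epsilon^2}
  = \frac{1}{2}\big(Z^{(1)}\big)^2 - \frac{\beta_0}{2\epsilon}\, Z^{(1)} \,.
```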
For the case with external quarks, the analogous expressions with the quark anomalous dimensions apply.
Computation with unitarity-IBP
The traditional method of computing scattering amplitudes is based on Feynman diagrams. In multiloop calculations this method is not very efficient, mainly because the gauge symmetry, unitarity and other properties of the scattering amplitude are obscured when the amplitude is split into Feynman diagrams. The modern unitarity method uses tree amplitudes as building blocks to construct the integrand of loop amplitudes [12][13][14]. In this construction, the original properties and symmetries of the amplitude are mostly preserved, so the integrand can be calculated much more efficiently. The commonly used strategy of the unitarity method is to first construct the full integrand from a set of unitarity cuts. The complete amplitude (before integration) contains a set of loop integrals whose coefficients are rational in the momentum invariants and the spacetime dimension D. Each unitarity cut can be used to fix some of the coefficients, and different unitarity cuts are applied successively until all coefficients are fixed. After the full integrand is obtained via unitarity, the integration by parts (IBP) method can be used to reduce the amplitude further to a small set of master integrals [57,58]. We illustrate the above procedure schematically below, where M_i denote the IBP master integrals.
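The schematic displayed at this point was dropped; a minimal rendering of the pipeline just described is:

```latex
\{\text{tree amplitudes}\}
\;\xrightarrow{\ \text{unitarity cuts}\ }\;
\text{full integrand}
\;\xrightarrow{\ \text{IBP}\ }\;
F^{(L)} = \sum_i c_i(\epsilon, s_{ij})\, M_i \,.
```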
This strategy has two potential drawbacks. First, rebuilding the complete integrand is not a trivial task. The labelings of loop momenta in different unitarity cuts usually differ from each other, so the reconstruction of the full integrand involves cumbersome shifts and redefinitions of loop momenta, especially when non-planar graphs are involved. Second, the IBP reduction of the full amplitude can be very slow: IBP usually takes a long time, consumes a lot of computing resources, and is sometimes the main bottleneck of the whole calculation.
We use a new strategy combining unitarity and IBP which helps to overcome both issues. The key idea is that, instead of applying IBP to the full loop amplitude, we apply IBP directly to each cut integrand. If a master integral is detected by a given unitarity cut, this cut is enough to determine the final coefficient of that master integral. A single unitarity cut fixes the coefficients of only a subset of master integrals, so one applies different unitarity cuts successively until all coefficients are fixed. In this way, there is no need to construct the full integrand, and one obtains the final coefficients c_i of the IBP master integrals directly; this can be illustrated schematically (see the sketch after this paragraph). Furthermore, imposing the cut condition drops many integrals and renders many sectors trivial during IBP; our strategy typically increases the efficiency of IBP by an order of magnitude. A further important bonus of the unitarity-IBP method is that different cuts provide internal consistency checks, which are very helpful in complicated cases. The idea of applying cuts to simplify the IBP reduction has also been used in e.g. [60][61][62][63]; in those cases, the loop integrand is generally taken as a known input. The strategy used here is different in that its main purpose is to simplify the unitarity construction of amplitudes (from scratch, using tree products), while IBP with cuts is only one natural step involved. A similar strategy has also been used in the numerical unitarity approach [64,65], where the unitarity cut and IBP are carried out together with numerical momentum variables to avoid large intermediate expressions. Here our approach is purely analytical and does not involve numerical reconstructions. We will illustrate our strategy with explicit examples later.
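A minimal rendering of the cut-level variant described in this paragraph (the notation is an assumption matching the text) is:

```latex
\{\text{tree amplitudes}\}
\;\xrightarrow{\ \text{cut}_k\ }\;
F^{(L)}\big|_{\text{cut}_k}
\;\xrightarrow{\ \text{IBP on the cut}\ }\;
\sum_{i\,\in\,\text{cut}_k} c_i\, M_i\big|_{\text{cut}_k} \,,
```

so each cut directly yields the final coefficients of the master integrals it detects.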
D-dimensional unitarity cut
The four-dimensional spinor helicity formalism is very powerful for computing supersymmetric gauge theory amplitudes. However, in non-supersymmetric theories it fails to capture the rational terms (see e.g. [74][75][76]). We therefore apply the more general D-dimensional unitarity method in the computation of the H → 3g amplitudes; in this case it is also enough to consider only planar cuts. The building blocks are color-stripped tree amplitudes and form factors, which can be computed using planar Feynman diagrams or recursion relations [15][16][17]. The polarization vectors of cut internal gluons satisfy a contraction rule involving an arbitrary reference momentum q^μ; the q-dependent terms vanish due to gauge invariance and disappear in the full cut amplitude. The quark (or anti-quark) fields also have two external states, denoted u_s(p) (or ū_s(p)), which are the solutions of the massless Dirac equation, with a corresponding contraction rule for internal quark states (both rules are spelled out below). Compared with the four-dimensional unitarity cut in the spinor helicity formalism, the D-dimensional unitarity method usually generates much larger expressions in intermediate steps. As compensation, it not only captures all rational-type terms, but also produces integrals with regular propagators, ready for IBP reduction. In contrast, for four-dimensional unitarity cuts a reconstruction must be performed to convert the spinor brackets to standard propagators.
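Both contraction rules referred to in this paragraph are missing in display form; the standard D-dimensional state sums they describe read as follows (reconstructing them this way assumes the paper uses the usual axial-type gluon sum):

```latex
\sum_{\rm pol} \epsilon^{\mu}(l)\,\epsilon^{*\nu}(l)
 = -\eta^{\mu\nu} + \frac{q^{\mu} l^{\nu} + l^{\mu} q^{\nu}}{q\cdot l} \,,
\qquad
\sum_{s} u_s(p)\, \bar{u}_s(p) = \slashed{p} \,,
```

with q^μ the arbitrary reference momentum mentioned above.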
In the pure gluon sector, the non-planar contribution to the H → 3g amplitudes vanishes at two loops [53] and the amplitudes are proportional to the simple color factor N_c^2, so the planar unitarity cut gives the full result. In the presence of internal quark lines, the amplitudes contain N_c^0 and N_c^{−2} contributions. However, as will be shown in section 4, we can still use planar cuts if we assign proper color factors to the different internal-state configurations; these contributions are therefore not intrinsically non-planar.
Planar unitarity cuts (with color-stripped amplitudes as input) do not suffice for the computation of the H → qq̄g amplitudes, which contain intrinsically non-planar contributions. One may carry out non-planar unitarity cuts, in which the building blocks are amplitudes with full color factors. However, unlike planar diagrams, which can be written in unique form with the help of zone variables, in a non-planar cut the same integral may appear in different forms related by shifts of the loop momenta. Extra work is needed to bring these different copies of integrals to canonical form and to compare the results of different unitarity cuts; a naive application of this strategy typically makes the non-planar unitarity method less efficient. Instead, we have computed these non-planar contributions using standard Feynman diagrams (with FeynArts [77]) plus IBP reduction. It would be very desirable to develop an efficient way to apply the unitarity cut method in the non-planar sector, which we leave for future work.
Gauge invariant basis
The unitarity cut integrand is explicitly gauge invariant, since all its tree building blocks are gauge invariant. This means the cut integrand vanishes if any ε_i → p_i, even before IBP is performed, and this explicit gauge invariance serves as a self-consistency check of our cut integrand. By contrast, the complete uncut loop integrand is typically not explicitly gauge invariant: setting ε_i → p_i leaves some scaleless integrals which vanish only after integration.
Since the (cut) amplitude is gauge invariant, we can expand it in a set of gauge invariant basis elements B_α (see e.g. [27] and also [59,78] for recent general discussions). The coefficients f_α(p_i, l_a) can be computed with the dual basis elements B̃^α, which act as projectors; the '•' product is defined in (3.3) and (3.4).
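The expansion and projection relations described here are missing in display form; generically they read:

```latex
F = \sum_{\alpha} f_{\alpha}\, B_{\alpha} \,, \qquad
f_{\alpha} = \widetilde{B}^{\alpha} \bullet F \,, \qquad
\widetilde{B}^{\alpha} \bullet B_{\beta} = \delta^{\alpha}_{\ \beta} \,.
```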
For the form factor with three gluons, the gauge invariant basis has 4 elements, where the {i, j, k} in A_i are cyclic permutations of {1, 2, 3}. For form factors with two external gluons, there is only one gauge invariant basis element, B_0 = C_12.
Next we consider form factors containing external quarks. The amplitudes (form factors) with a pair of quark fields contain fermion-chain structures like ū(p)⋯u. To define a gauge invariant basis, we need to distinguish operators with even and odd numbers of gamma matrices, denoted O_even and O_odd, respectively. For example, the operators ψ̄ψ and F_{μν}ψ̄γ^{μν}ψ belong to O_even, while F_{μν}D^μψ̄γ^νψ belongs to O_odd. One major difference between O_even and O_odd is that in a Feynman diagram of a non-chiral massless theory (like the Higgs EFT), if O_even appears in a fermion loop, the gamma trace of this fermion loop contains an odd number of gamma matrices and thus vanishes; an example is shown in figure 1. By contrast, a Feynman diagram with O_odd in the fermion loop does not vanish. The scattering amplitudes, and consequently the gauge invariant bases, of O_even or O_odd contain products of even or odd numbers of gamma matrices, respectively.
To be more concrete, let us start with the gauge invariant basis for the H → qq̄ amplitude. In this case, there is a single basis element B_1 = ū(p_2)u(p_1). Consequently, only form factors of O_even contribute, and all O_odd → qq̄ type amplitudes must vanish, since the gauge invariant basis for the latter would have to contain an odd number of gamma matrices in the product. For the amplitudes H → qq̄g, the gauge invariant bases contain 4 types of contractions, of which two contain an odd and two an even number of gamma matrices. After expanding a form factor in the gauge invariant basis as in (3.5), the helicity information is contained in the basis elements B_α, and the coefficients f_α contain only scalar products of loop and external momenta, which can be reduced directly using IBP. Compared with other tensor reduction methods like PV reduction, the gauge invariant basis method produces integrals with lower numerator powers, and the coefficients of the integrals are more compact and free of Gram determinants.
Details of the unitarity-IBP construction
In this subsection we demonstrate the strategy described above with explicit examples. Let us first point out an important difference between planar form factors and planar scattering amplitudes: planar-color form factors can contain integrals whose topologies are non-planar. This is because the operator (or the Higgs particle) is a color singlet, so its presence does not alter the structure of the color diagram, and a diagram contributes at leading N_c order even if the operator appears in the middle of the diagram. Since the operator carries a non-zero momentum q, a planar-color diagram for a form factor may correspond to a non-planar integral.
Two-loop two-gluon form factor. As a simple example, we consider the two-point two-loop form factor O_0 → 2g in the pure gluon sector. The complete set of master integrals is given in figure 2. One may note that two pairs of master integrals, (2) and (3), and (4) and (5) in figure 2, are equivalent up to relabelings of loop momenta. In our unitarity computation we need to distinguish them because, as discussed at the beginning of this subsection, they correspond to different planar-cut diagram contributions. Practically, the relabeling of loop momenta is not consistent with the cut conditions, since the cut momenta are fixed in a given unitarity cut. By choosing not to relabel loop momenta during the IBP reduction, the above pairs of integrals appear as distinguishable master integrals.
Figure 3. The triple cut for a two-loop form factor of tr(F^2) with two external particles.
Below we demonstrate the unitarity-IBP method on the triple cut shown on the l.h.s. of figure 3. The two-loop cut form factor is given as the product of a color-ordered three-gluon tree form factor and a color-ordered five-gluon tree amplitude, eq. (3.14). The polarization vectors ε_{3,4,5} of the cut gluons can be summed using the contraction rule in (3.3); then the polarization vectors ε_{1,2} of the external gluons can be contracted with the gauge invariant basis, which contains the single element B_0 = C_12 of (3.9). The resulting scalar function f_0 is rational in the s_ij and polynomial in the dimension parameter D, and can thus be reduced directly using IBP with e.g. public codes [79][80][81][82]. As shown on the r.h.s. of figure 3, only the two master integrals (1) and (6) of figure 2 enter this cut. The cut thus allows us to compute their coefficients {c_1, c_6}, which are consistent with the known result (see e.g. [83]).
To determine the coefficients of the other master integrals, four more cuts can be used, as shown in figure 4: cut-(b) for {c_2}, cut-(c) for {c_3}, cut-(d) for {c_4} and cut-(e) for {c_5, c_6}. Note that c_6 appears in both cut-(a) and cut-(e), and these two cuts provide a non-trivial consistency check.
Figure 4. The cuts needed in the 2-loop 2-point form factor calculation.
The full form factor F_{O_0} can then be written as a sum over the master integrals M_i with labels (i) in figure 2. Note that the permutation of the two external gluons does not alter the integrals (5) and (6), so for these two master integrals a factor 1/2 is added to avoid double counting.
Two-loop three-gluon form factor. The set of cuts sufficient for the computation of the three-point two-loop form factors is given in figure 5. All these cuts are required for the form factors of length-2 operators, while for length-3 operators only the four cuts in the first row are needed. Consider the two-loop three-gluon form factor of the length-3 operator O_1 as an example. F_{O_1} contains seven master integrals up to permutations of the external legs, as shown in figure 6 and figure 7. Each cut fixes the coefficients of a subset of these master integrals. For example, the triple cut (b) of figure 5 in the s_12-channel determines the coefficients of five master integrals in figure 6, and the coefficients of (2′) (or (3′)) are related to those of (2) (or (3)) by the flipping symmetry p_1 ↔ p_2. If a master integral appears in the results of several different cuts, its coefficient in these cuts must be the same, which provides a consistency check of the computation. For the Higgs to three-parton amplitudes considered in this paper, the full set of master integrals is shown in figure 8. They have been obtained in terms of 2d harmonic polylogarithms [41,84], and using these expressions we obtain the analytic bare form factors.
Color decomposition of fermion cuts
In the pure gluon sector, planar cuts are enough to construct the form factors: the cut form factor can be decomposed as products of planar tree form factors or amplitudes. However, such a decomposition is not obvious in the presence of quark loops. In this section we show that, in the case of Higgs to 3-gluon amplitudes, by making a connection between fundamental and adjoint fermions, a color decomposition is still possible such that the full 2-loop integrand can be constructed using planar cuts.
In our notation, gluons carry an adjoint color index a = 1, 2, …, N_c^2 − 1, and quarks and antiquarks carry a fundamental or antifundamental index i, ī = 1, …, N_c, with the usual group algebra conventions. We denote by f_x the flavor index of quarks; flavor contractions give δ_{f_x f_x} = n_f.
Color decomposition of tree amplitudes
As far as the color factors are concerned, we do not need to distinguish n-point amplitudes from n-point form factors: since the Higgs field is a color singlet, we can remove it from the form factor color graph, and what is left is the color graph of a scattering amplitude. For example, the H → 3g tree form factor has the color factor f^{abc}, which is the same as the color factor of the 3-gluon tree amplitude. We use the color decomposition of n-gluon tree amplitudes 𝒜(1_g, 2_g, …, n_g) = Σ_{σ∈S_{n−2}} A(n−1, σ_1, σ_2, …, σ_{n−2}, n) f^{a_{n−1} a_{σ_1} ⋯ a_{σ_{n−2}} a_n}, where f^{a_{n−1} a_{σ_1} ⋯ a_{σ_{n−2}} a_n} denotes the contracted chain of structure constants. Here 𝒜 (ℱ) denotes the amplitudes (form factors) with full color factors, while A (F) denotes the color-stripped planar amplitudes (form factors).
We also need tree amplitudes and form factors with quark pairs. For tree amplitudes with one quark pair and (n − 2) gluons, a similar color decomposition holds, and an analogous decomposition exists for the 4-quark tree amplitude.
The s_12 double two-cut
The s_12 double two-cut, as shown in figure 9, corresponds to the product of a 3-point form factor and two 4-point amplitudes: F(345) A(7654) A(1267). First we consider the case in which the cut lines (4, 5, 6, 7) are all fermions; the cut integrand is then the product of three tree amplitudes.
It is important to notice that our discussion so far applies to a general representation of the quarks. When the quarks are in the adjoint representation, the cut integrand is proportional to C_A^2 f^{a_1a_2a_3} and can be written in the form (4.10), where the kinematic parts X_i can be computed using planar unitarity cuts and correspond to the coefficients of n_f^2 and n_f in the planar cut integrand, respectively. In order to match (4.9) with (4.10) when taking the fermions to be adjoint, we must have c_3 = −c_1 and c_4 = −c_2, so (4.9) can be reduced accordingly. Furthermore, in the adjoint fermion case, the two color factors above reduce as in (4.12). By matching the n_f^2 and n_f terms, the result in a generic representation follows. The cut integrand can thus be obtained using a planar unitarity cut in the adjoint case. To be explicit: first one computes the cut integrand in the adjoint representation using the planar unitarity cut, then one replaces C_A^2 by t_F^2 in the coefficient of n_f^2, and replaces the corresponding C_A factors in the coefficient of n_f. In the case that (4, 5) are gluons and (6, 7) are fermions, one obtains the color decomposition (i C_A/2) n_f t_F [A(4_g, 5_g, 6_q̄, 7_q) − A(5_g, 4_g, 6_q̄, 7_q)] × [A(1_g, 2_g, 7_q̄, 6_q) Tr(T^{a_1}T^{a_2}T^{a_3}) + A(2_g, 1_g, 7_q̄, 6_q) Tr(T^{a_2}T^{a_1}T^{a_3})], (4.14), which has two color structures, C_A t_F Tr(T^{a_1}T^{a_2}T^{a_3}) and C_A t_F Tr(T^{a_1}T^{a_3}T^{a_2}). The same structure appears in the case that (4, 5) are fermions and (6, 7) are gluons. By a similar analysis, denoting the cut amplitude in the adjoint representation accordingly, the cut amplitude in a generic representation can be written analogously, where again X_3 and X_4 can be extracted from the cut integrand in the adjoint representation. The above discussion shows that planar cuts suffice to determine the s_12 double two-cut integrand in generic representations; the only difference is that different color factors must be assigned to different terms in the cut integrand, all of which reduce to C_A^2 in the adjoint representation.
The s_12 triple-cut
The s_12 triple-cut corresponds to the product of a 4-point form factor and a 5-point amplitude, F(3456) A(12456), as shown in figure 10. If the internal states are all gluons, the color factor is simply C_A^2. Now consider the case (4, 5, 6) = (g, q, q̄). The tree amplitudes carry the flavor factor δ_{f_5 f_6}, and after summing over permutations of (124), the cut amplitude can be rewritten as in (4.19). The three terms in the bracket in (4.19) obviously take the planar-cut form and correspond to the three different internal-state configurations in figure 11, respectively. If the gluon line appears in the middle of the diagram, the color factor is 2C_F − C_A; otherwise it is C_A. The same pattern also appears in the other cuts.
Other cuts
The color structures of the other cuts can be computed in a similar way. It turns out that for every cut a planar color decomposition is possible; the summary of all cases allows us to compute the full amplitudes using only planar cuts.
Results
In this section we perform the UV renormalization and IR subtraction for the form factors and obtain compact analytic forms. The two-loop bare form factors contain UV and IR divergences, which were discussed in section 2.2. The 1/ε^m (m = 4, 3, 2) pole terms must cancel against the universal IR divergences and the one-loop UV divergences, which offers non-trivial self-consistency checks of the results. The cancellation of the 1/ε pole terms then determines the two-loop anomalous dimension of the operator.
As an important check of the computation, we have reproduced known results, including the non-trivial two-loop amplitudes of Higgs to three partons with the operator tr(F^2) [27]. As a further check, we recall that the form factors should satisfy the linear relation (2.8). We compute the form factors of the different operators independently and explicitly check that, already at the level of IBP master integrals, the results satisfy this linear relation exactly. We would like to emphasize that the computation of the form factors of tr(D^2F^2) is more involved than that of the known tr(F^2) result, due to the extra derivatives in the operator, and that our method can be applied efficiently to such cases as well as to operators of higher dimensions.
A word about notation: for form factors with three partons, it is enough to consider the three configurations given in table 1. The subscripts α, β, γ denote different external states, similar to [27]. We also introduce the dimensionless variables u, v, w.
Table 2. Normalized tree-level form factors.
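The definitions of u, v and w did not survive here; the standard choice for H → 3 partons, consistent with the ratios of the s_ij to s_123 used throughout, is assumed:

```latex
u = \frac{s_{12}}{s_{123}} \,, \qquad
v = \frac{s_{23}}{s_{123}} \,, \qquad
w = \frac{s_{13}}{s_{123}} \,, \qquad
u + v + w = 1 \,, \qquad s_{123} = q^2 = m_H^2 \,.
```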
Tree-level results
We first recall the tree-level form factors for the dimension-4 operator tr(F^2) [85]. Since the operators satisfy the linear relation (2.7), it is convenient to introduce a dimension-6 operator Ô_2 whose form factor is the same as that of O_0 up to an overall factor s_123. For convenience, we normalize all form factors of dimension-6 operators by dividing by the tree form factor of Ô_2 and introduce the 'dimensionless' form factors r. The tree-level ratios are summarized in table 2. Note that we have normalized the operators {Ô_1, Ô_3, Ô_4} such that the 3-point tree form factors all have unit constant. From now on, we will take the Ô_I as the basis of dimension-6 operators.
Loop corrections
Below we consider the form factors of O_0 and Ô_1 in detail. Explicit results for the other operators are collected in appendices A-D.
Example 1: we first consider the form factor of O_0 with three gluons. This result has been obtained in [27]; the main purpose of this discussion is to make contact with the known literature and to set up the notation that will be used for the higher-dimension operators. Since O_0 is a length-2 operator, we have δ_n = 3 − 2 = 1, as defined in section 2.2. The one-loop bare form factor is expressed in terms of the one-loop box and bubble master integrals I_4 and I_2, with master coefficients known to all orders in ε. As discussed in section 2.2, the one-loop form factor satisfies the corresponding relation with δ_n = 1. Using the bare one-loop form factor and the universal IR information, we can extract the one-loop renormalization constant and obtain the one-loop finite remainder. Similarly, the two-loop form factor satisfies the analogous two-loop relation.
Evaluating the bare two-loop form factor and using the universal IR information and the one-loop results, we can extract the two-loop renormalization constant: the 1/ε^2 part is determined by the one-loop data as in (2.17), while the 1/ε part is a genuinely new two-loop quantity. The two-loop finite remainder can be decomposed according to the color factors; the explicit expressions are given in [27] (see also [86]), which we do not reproduce here. Next we consider the form factor of Ô_1. Since Ô_1 is a length-3 operator, we have δ_n = 3 − 3 = 0. The one-loop bare form factor is given in terms of bubble integrals, and the one-loop form factor satisfies the relation with δ_n = 0, from which we extract the one-loop renormalization constant. At two loops, using (2.21) with δ_n = 0, the cancellation of divergences fixes the two-loop renormalization constant: the 1/ε^2 part is determined by the one-loop data as in (2.17), while the 1/ε part presents an interesting new structure of operator mixing. The first term provides a diagonal part of the renormalization constant matrix,
while the second term is due to the mixing with Ô_2, which gives an off-diagonal component of the renormalization constant matrix. The two-loop finite remainder can be further simplified using symbol techniques [87]. We decompose it according to the color factors, and the explicit expressions are collected in appendix B. Form factors of other operators and other external states can be obtained following the same procedure. Similar operator mixing effects also appear in other form factors, and we summarize the renormalization matrix in section 5.3. The one-loop results in master-integral expansion are collected in appendix A, and the two-loop finite remainders in appendices B-D.
Operator mixing
As shown in (5.26), the operators in general mix under renormalization. This is represented by the renormalization constant matrix Z_I^J, defined through the relation below. We summarize here the renormalization constant matrix for the dimension-6 operators at one and two loops. At one loop, there is no operator mixing and the renormalization constant matrix is diagonal. The two-loop renormalization Z^{(2)} contains 1/ε^2 pole terms which are determined by the one-loop matrix using (2.17); the simple pole terms are the intrinsic new two-loop contribution. The computation of the entry (Z^{(2)})_2^2 matches the result for the dimension-4 operator tr(F^2) in [27]. The N_c^2 terms of (Z^{(2)})_1^1 were computed in [53]. All the other entries are, to our knowledge, given here for the first time. To determine the matrix elements (Z^{(l)})_3^I, one needs to compute form factors of Ô_3 with four partons, which we leave for future work.
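The defining relation for Z_I^J mentioned above is missing in display form; a minimal reconstruction (index placement and expansion parameter are assumptions) is:

```latex
\hat{\mathcal{O}}_{I} = \sum_{J} Z_{I}{}^{J}\, \hat{\mathcal{O}}_{J,\mathrm{B}} \,, \qquad
Z_{I}{}^{J} = \delta_{I}{}^{J}
 + \sum_{l\ge 1} \Big(\frac{\alpha_s}{4\pi}\Big)^{l} \big(Z^{(l)}\big)_{I}{}^{J} \,.
```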
Discussion
The results in the last section provide the complete two-loop QCD corrections to the Higgs plus 3-parton amplitudes with dimension-7 operators. They are of phenomenological relevance for the LHC experiments and provide, for the first time, the top-mass correction to the S-matrix elements for Higgs plus one-jet production at N^2LO. Results with full top-mass dependence would require a three-loop computation involving a massive subloop, which is beyond the state of the art. Our computation relies on a combination of the modern on-shell unitarity-cut method and IBP reduction. This strategy can be applied efficiently to the case of higher-dimension operators in the Higgs effective action.
The final analytic results take a remarkably simple form and exhibit intriguing hidden structures, on which we comment below in more detail. First of all, the maximally transcendental parts take universal forms, generalizing the maximal transcendentality principle (MTP) in two aspects. Firstly, the MTP applies to Higgs and three-parton amplitudes with dimension-seven operators, see also [52][53][54][55][56]. Secondly, the principle applies also to Higgs amplitudes with external quark states, upon a change of color factors (see also [56]): the maximally transcendental part of (H → qq̄g) with C_F → C_A equals the maximally transcendental part of (H → 3g). Physically, this identification corresponds to changing the fermions from the fundamental to the adjoint representation, as has been known for kinematic-independent quantities such as anomalous dimensions [38,39]. For pseudo-scalar Higgs amplitudes involving qq̄g states, the universal maximally transcendental part was also noted in [89]. For the lower transcendentality parts, the results of QCD and the corresponding N = 4 form factors are, as expected, not identical. Intriguingly, the transcendentality degree-3 and degree-2 parts of the QCD results also show some universal structures and have certain connections to the N = 4 results. In N = 4 form factors, the transcendentality degree-3 part can be expressed in terms of the function T_3 [45,47,52]. It turns out that the QCD form factors can also be expressed using T_3, plus simple ζ_3 or (ζ_2 × log) terms, as given in appendices B-D. We should point out that there are still rational factors associated with the T_3 functions which can differ between form factors, while all the non-trivial transcendental functions are organized into T_3 functions.
For the transcendentality degree-2 parts of the QCD form factors, there appear two main building blocks, $T_2$ and $\widetilde T_2$, defined in (6.5). Apart from these, a few extra $\mathrm{Li}_2$ functions also appear, but all their coefficients are simple numerical constants; all $\mathrm{Li}_2$ functions with non-trivial rational factors organize themselves into the $T_2$-type building blocks. The remaining terms are simple $\zeta_2$ and $\log^2$ terms. For the form factor of $O_1 \sim \mathrm{tr}(F^3)$, only one of the two degree-2 building blocks is needed, and it was noted that both the transcendentality degree-2 and the degree-1 (log) functions with non-trivial rational kinematic factors are identical between the QCD and N = 4 results [53]; compare, for example, (B.5)-(B.6) with the corresponding N = 4 results.
As a side comment, we note that in [90] a transcendentality-2 building block was found for the two-loop five-gluon double-trace scattering amplitudes, equivalent to the finite part of the one-mass box function. It is similar to our degree-2 building block (6.5). This similarity might be related to the similarity between the kinematics of the five-gluon amplitude and the three-parton form factors when the two gluons with momenta $p_4$ and $p_5$ are merged together.
Knowing the building blocks described above makes it much easier to simplify the expressions. In particular, for the maximally transcendental part, the MTP may allow one to obtain the QCD expression from a much simpler N = 4 result, which may be computable to very high loop orders. One should note that there are also known examples where the maximal transcendentality principle does not apply. For example, the MTP does not hold for four-gluon and five-gluon scattering amplitudes even at one loop: the one-loop QCD four-gluon amplitudes contain polylogarithm functions, while the N = 4 SYM amplitudes contain only simple log functions. Counterexamples were also noted in the Regge limit of amplitudes [91] and for the form factor of the stress-tensor operator [92]. The range of applicability of the MTP is thus still not clear, and it would be interesting to explore the underlying mechanism and consider more examples. Furthermore, it would be important to study further the structures of the lower-transcendentality parts, which are needed for full QCD results. It would also be worthwhile to consider amplitudes in N = 1, 2 SYM, which may serve as bridges connecting the QCD and N = 4 SYM amplitudes.
When scattering amplitudes are classified by transcendental degree, spurious poles of the type $1/s^k$ usually appear. The cancellation of these unphysical poles makes it possible to relate terms of different transcendental degree, and may be used to constrain the lower-transcendentality parts of an amplitude from the higher-transcendentality pieces. The analytic expressions for a subset of the two-loop non-planar master integrals for Higgs to 3-parton amplitudes with finite top-quark mass were obtained recently [93] (corresponding to the NLO order in the Higgs EFT expansion). These integrals contain elliptic sectors. It would be interesting to explore whether there are universal analytic structures in the elliptic sectors.
A One-loop results
In this appendix we provide the one-loop bare form factor results to all orders in $\epsilon$, which to our knowledge are given explicitly for the first time in the literature. The higher-order terms in the $\epsilon$ expansion will be needed in higher-loop computations. $r^{(l)}$ is the normalized form factor defined in (5.5). The results are expressed in terms of the one-loop scalar bubble integrals $I_2[s_{ij}]$ with $p_i + p_j$ as the external momentum.
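As a point of reference, the massless one-loop bubble appearing here has a standard all-order-in-$\epsilon$ closed form; the following is our sketch in the common $c_\Gamma$ normalization (overall factors of $\mu^{2\epsilon}$ and couplings depend on the conventions of (5.5)):

$$I_2[s] \;\propto\; \frac{c_\Gamma}{\epsilon\,(1-2\epsilon)}\,(-s)^{-\epsilon}\,,\qquad c_\Gamma = \frac{\Gamma(1+\epsilon)\,\Gamma^2(1-\epsilon)}{\Gamma(1-2\epsilon)}\,,$$

so expanding the one-loop form factors to any order in $\epsilon$ only requires expanding this prefactor and the logarithms coming from $(-s_{ij})^{-\epsilon}$.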
B Two-loop remainder of $F_{\hat O}$
In this and the following appendices we collect the two-loop remainder functions. We follow the definition of $r^{(l)}$ in (5.5).
The two-loop finite remainder can be decomposed according to the color factors, and we separate out the terms proportional to $\log(-q^2)$. The functions $L_{3;4}(u,v,w)$, $T_3(u,v,w)$ and $T_2(u,v)$ are defined in (6.2), (6.4) and (6.5), respectively. In (B.5) there appear seemingly unphysical poles of $1/w^2$ type; such poles are cancelled by the zero of $T_2(u,v)$ as $w \to 0$. The $n_f$ parts are simpler, and we collect terms of different degrees together. Finally, the terms containing $\log(-q^2)$ are given in the last group. The $(1^-, 2^-, 3^+)$ configuration is very simple; we decompose its two-loop finite remainder according to the color factors as well, with $T_2(u,v)$ defined in (6.5).
C Two-loop remainder of $F_{\hat O_3}$
For $\hat O_3$ with three partons, only the $(1_q, 2_{\bar q}, 3^-)$ configuration is non-zero. We decompose the two-loop finite remainder according to the color factors, with $T_2(u,v)$ defined in (6.5).
We decompose the two-loop finite remainder according to the color factors, starting with the degree-4 part. The $n_f$ parts do not contribute to the maximally transcendental part.
The above results have been simplified using the symbol method [87]. It turns out that the $N_c^2$, degree-4 part of the $\hat O_4$ remainder cannot be expressed using only classical polylogarithms, which explains the appearance of the multiple polylogarithm $G(1-v, 1-v, 1, 0, w)$. It is also worth mentioning that its symbol is identical to that of the universal remainder density of minimal form factors in N = 4 SYM [44][45][46][47]; see [56] for more discussion on this point. The above three terms with different color factors, when combined together, reproduce the universal function R
| 11,148.2 | 2020-02-01T00:00:00.000 | ["Physics"] |
Multiple point principle in the general Two-Higgs-Doublet model
Based on the Multiple Point Principle, the Higgs boson mass was predicted to be $135 \pm 9$ GeV more than two decades ago. We study the Multiple Point Principle and its prospects with respect to the Two-Higgs-Doublet model (THDM). Applying the bilinear formalism, we show that concise conditions can be given, with a classification of the different kinds of realizations of this principle. We recover the cases discussed in the literature but also identify further realizations of the Multiple Point Principle.
1 Introduction
1.1 The Multiple Point Principle
The Multiple Point Principle (MPP) [1][2][3] states that the Higgs potential should have coexisting phases: besides one minimum at the electroweak scale of O(100) GeV, there should be a degenerate minimum or degenerate minima at a scale Λ far above the electroweak scale, up to the Planck scale. Based on this principle, the Higgs boson mass was predicted more than 20 years ago to be 135 ± 9 GeV [4]! A more refined analysis yielded a mass of 129.4 ± 2 GeV [5], to be compared to the observed mass 125.10 ± 0.14 GeV [6] of the Higgs boson, which was discovered by the CMS and ATLAS collaborations in 2012 [7,8].
Let us briefly sketch the derivation of this remarkable result, following closely [4]. The MPP states that: 1. the Higgs-boson doublet ϕ has at least two coexisting vacua, 1 and 2, with the same potential value; 2. the additional minimum or minima should appear at a high scale Λ with 100 GeV ≪ Λ < M_Planck. In the Standard Model with only one Higgs-boson doublet, the effective gauge-invariant potential is written in terms of φ = ϕ†ϕ, with the dependence of the parameters on the scale written explicitly. Close to the second vacuum the quartic term is dominant. The two conditions above then give, using the fact that the degeneracy of the potential values requires λ(Λ) ≈ 0, condition (1.5): in addition to the quartic parameter λ, also its β function has to vanish at the scale Λ. The β_λ function depends on the Higgs-boson field φ as β_λ = β_λ(λ(φ), g_t(φ), g_1(φ), g_2(φ), g_3(φ)) (1.6), with g_t(φ) the top-Yukawa coupling and g_{1/2/3}(φ) the scale-dependent gauge couplings. From the explicit form of the β_λ function in the Standard Model, Nielsen and Froggatt evaluate the renormalization group equation numerically, using two-loop beta functions, and plot λ(φ). The evolution depends on the masses of the top quark and the Higgs boson. Requiring a vanishing quartic parameter λ(φ) as well as a vanishing β_λ function at the high scale Λ, the masses of the top quark and the Higgs boson are predicted.
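In formulas, the sketch above amounts to the following (our transcription of conditions (1.1), (1.2) and (1.5), with φ = ϕ†ϕ as in the text):

$$V_{\rm eff}(\varphi) \simeq \lambda(\varphi)\,\varphi^2 \quad \text{for } \varphi \approx \Lambda^2 \gg v^2\,,$$

$$V_{\rm eff}(\Lambda^2) \approx 0 \;\Rightarrow\; \lambda(\Lambda) \approx 0\,, \qquad \frac{\mathrm d V_{\rm eff}}{\mathrm d \varphi}\bigg|_{\varphi=\Lambda^2} = 0 \;\Rightarrow\; \beta_\lambda(\Lambda) \approx 0\,,$$

since at a point where λ itself vanishes, the derivative of $\lambda(\varphi)\,\varphi^2$ is controlled entirely by the running of λ.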
Motivated by this result, the question arises: what are the consequences of the MPP in the Two-Higgs-Doublet Model (THDM)? The original motivation of T. D. Lee [9] to study the two-Higgs-doublet extension was to have another source of CP violation, addressing one of the shortcomings of the Standard Model, where CP violation arises only from the CKM matrix (and the PMNS matrix) and is too small to explain the observed baryon asymmetry dynamically. Another motivation is given by supersymmetric models, which require more than one Higgs-boson doublet in order to give masses to up- and down-type fermions. A more pragmatic reason is that nothing prevents the introduction of further copies of Higgs-boson doublets. In particular, the ρ parameter, relating the masses of the electroweak gauge bosons to the weak mixing angle (see [10] for details), is measured close to one, in agreement with the Standard Model, and is known to remain unchanged at tree level when additional copies of Higgs-boson doublets are introduced. Finally, let us mention that, compared to the two real parameters of the Higgs potential of the Standard Model, the potential of the THDM has a much richer structure, allowing for different phases. For a review of the THDM we refer to [11].
The most general gauge-invariant potential with two Higgs-boson doublets, in the convention with both doublets carrying hypercharge y = +1/2, reads [12]

$$V_{\rm THDM}(\varphi_1, \varphi_2) = m_{11}^2\, \varphi_1^\dagger \varphi_1 + m_{22}^2\, \varphi_2^\dagger \varphi_2 - \big( m_{12}^2\, \varphi_1^\dagger \varphi_2 + \text{h.c.} \big) + \tfrac{1}{2} \lambda_1 (\varphi_1^\dagger \varphi_1)^2 + \tfrac{1}{2} \lambda_2 (\varphi_2^\dagger \varphi_2)^2 + \lambda_3 (\varphi_1^\dagger \varphi_1)(\varphi_2^\dagger \varphi_2) + \lambda_4 (\varphi_1^\dagger \varphi_2)(\varphi_2^\dagger \varphi_1) + \Big[ \tfrac{1}{2} \lambda_5 (\varphi_1^\dagger \varphi_2)^2 + \lambda_6 (\varphi_1^\dagger \varphi_1)(\varphi_1^\dagger \varphi_2) + \lambda_7 (\varphi_2^\dagger \varphi_2)(\varphi_1^\dagger \varphi_2) + \text{h.c.} \Big]\,. \qquad (1.8)$$

The parameters $m_{12}^2$ and $\lambda_{5/6/7}$ are complex, whereas all other parameters have to be real in order to yield a real potential. Therefore we count in total 14 real parameters, in contrast to the two real parameters of the Standard Model.
In [13] the MPP has been studied for the general THDM. The argumentation in that work is developed in the following way: supposing that the potential has a second minimum at a high scale Λ, after an appropriate SU(2)_L × U(1)_Y transformation the vacuum expectation values of the two Higgs-boson doublets are parametrized in terms of the field magnitudes φ_{1/2}, a mixing angle θ and a relative phase ω [13], with Λ² = φ₁² + φ₂². Then the conditions are studied for the potential (1.8) and its derivatives with respect to φ_{1/2} to be independent of the phase ω at the scale Λ. This leads to conditions for the quartic couplings as well as for their derivatives, that is, the β functions, evaluated at the scale Λ. In particular, it is shown that these conditions originating from the MPP yield a CP-conserving potential, obeying in addition a softly broken Z₂ symmetry, which gives an argument for the absence of flavor-changing neutral currents. Therefore, they continue their analysis in the framework of models with natural flavor conservation, e.g. the THDM type II.
In [14] a detailed phenomenological study of the MPP is carried out, starting from the results of [13] and applying them to the THDM type II as well as to the Inert Doublet Model. It is found that in both cases the MPP is incompatible with the requirement of simultaneously providing the experimental value of the top-quark mass, electroweak symmetry breaking, and stability.
Here we show that the study of the MPP in the THDM can be implemented concisely in the bilinear formalism [15][16][17]. The advantage of the bilinear formalism is that all unphysical gauge degrees of freedom are eliminated systematically and the potential and all parameters are real. Basis transformations, that is, unitary mixings of the two doublets, are given by simple rotations. Similar to the case of the Standard Model, where the quartic coupling together with its β function has to vanish at the high scale Λ, we find conditions among the potential parameters and their derivatives in order to satisfy the MPP. We present a classification of all possible realizations of the MPP in the THDM. The conditions for these classes of realizations are given in a basis-invariant way and can be checked easily for any THDM.
In order to arrive at the conditions for the MPP, we present the β functions of the potential parameters in the bilinear formalism (see also [18]). We illustrate the conditions for different realizations of the MPP in examples, and we show in particular that the results of [13] can be recovered in the present formalism as one possible realization of the MPP. It should be noted that other MPP solutions have been discussed in [19] in the conventional formalism. Here we present a complete classification of the MPP solutions in a transparent way using the bilinear formalism.
1.2 Brief review of bilinears in the THDM
Here we briefly review the bilinears in the THDM [15][16][17] in order to make this article self-contained; we also briefly discuss basis transformations. Bilinears systematically avoid unphysical gauge degrees of freedom and are defined in the following way: all possible gauge-invariant scalar products of the two doublets ϕ₁ and ϕ₂ which may appear in the potential are arranged in one matrix K (1.10). This hermitian matrix can be decomposed in the basis of the unit matrix and the Pauli matrices, with four real coefficients K₀, K_a, called bilinears. Taking traces on both sides of this equation (also with products of Pauli matrices), we get the four real bilinears explicitly. The matrix K is positive semi-definite; from K₀ = tr(K) and det(K) = (K₀² − K_a K_a)/4 it follows that K₀ ≥ 0 and K₀² − K_a K_a ≥ 0. As has been shown in [16], there is a one-to-one correspondence between the original doublet fields and the bilinears, apart from unphysical gauge degrees of freedom. In terms of bilinears we can write any THDM potential (a constant term can always be dropped) with real parameters ξ₀, ξ_a, η₀₀, η_a, E_ab = E_ba, a, b ∈ {1, 2, 3}, which can be expressed in terms of the conventional parameters of (1.8). A unitary basis transformation of the doublets corresponds to a transformation R(U) of the bilinears, and it follows that R(U) ∈ SO(3), that is, R(U) is a proper rotation in three dimensions. The potential (1.14) stays invariant under a change of basis of the bilinears (1.19) if we simultaneously transform the parameters as [16] ξ₀' = ξ₀, ξ_a' = R_ab ξ_b, η₀₀' = η₀₀, η_a' = R_ab η_b, E' = R E Rᵀ (1.21). Note that by a change of basis we can always diagonalize the real symmetric matrix E.
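Explicitly, in the standard conventions of the bilinear literature, the bilinears and the potential take the following form (our transcription of (1.11)-(1.14); the sign convention for $K_2$ varies between references):

$$K_0 = \varphi_1^\dagger\varphi_1 + \varphi_2^\dagger\varphi_2\,,\quad K_1 = \varphi_1^\dagger\varphi_2 + \varphi_2^\dagger\varphi_1\,,\quad K_2 = i\big(\varphi_2^\dagger\varphi_1 - \varphi_1^\dagger\varphi_2\big)\,,\quad K_3 = \varphi_1^\dagger\varphi_1 - \varphi_2^\dagger\varphi_2\,,$$

$$V_{\rm THDM} = \xi_0 K_0 + \xi_a K_a + \eta_{00} K_0^2 + 2 K_0\, \eta_a K_a + K_a E_{ab} K_b\,. \qquad (1.14)$$

A standard CP transformation, $K_2 \to -K_2$, then leaves the potential invariant, for diagonal E, precisely when ξ₂ and η₂ vanish, consistent with the discussion below.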
As an illustration, the parametrization of a unitary transformation given in [16] corresponds, in terms of bilinears, to a rotation matrix (1.23) and is useful for relating a given general basis to the so-called Higgs basis (see appendix A) when the doublets acquire non-zero vacuum expectation values. In this case the angle β fulfils |v⁰₁| sin β = |v⁰₂| cos β, that is, tan β = |v⁰₂|/|v⁰₁|.
Let us also briefly recall that (standard) CP transformations, that is, ϕ_i → ϕ_i*, i = 1, 2, have a simple geometric picture in terms of bilinears [20]. In view of (1.12) we see that a (standard) CP transformation corresponds to K₂ → −K₂, keeping all other bilinears invariant, in addition to the parity transformation which flips the sign of the spatial arguments not written explicitly. Now let us assume for simplicity that, by a change of basis, the parameter matrix E is diagonal; for the general case of arbitrary matrices E we refer to [20]. With E diagonal we see that the potential (1.14) is invariant under the (standard) CP transformation if the parameters ξ₂ and η₂ vanish. Finally we note that, by a basis change (1.19), (1.21), this is equivalent to the parameter vectors ξ = (ξ₁, ξ₂, ξ₃)ᵀ and η = (η₁, η₂, η₃)ᵀ having vanishing entries at the same position.
Let us also prepare the analysis of the THDM for the case of large K₀. First we note that K₀ ≥ 0, and for K₀ = 0 the potential is trivially vanishing. For K₀ > 0 we define the reduced bilinears k (1.24), with which we can write the potential (1.14) in the form (1.25) with the functions J₂(k) and J₄(k) (1.26). In appendix A we recap some aspects of electroweak symmetry breaking in the THDM in terms of bilinears.
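In formulas, (1.24)-(1.26) read (our transcription, consistent with the later uses of J₂ and J₄, e.g. in appendix B):

$$k_a = \frac{K_a}{K_0}\,,\qquad V_{\rm THDM} = J_2(\mathbf k)\, K_0 + J_4(\mathbf k)\, K_0^2\,,$$

$$J_2(\mathbf k) = \xi_0 + \boldsymbol\xi^{\mathsf T} \mathbf k\,,\qquad J_4(\mathbf k) = \eta_{00} + 2\,\boldsymbol\eta^{\mathsf T} \mathbf k + \mathbf k^{\mathsf T} E\, \mathbf k\,,\qquad |\mathbf k| \le 1\,.$$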
2 Classification of the vacua
We now apply the MPP to the THDM potential, that is, we study the two conditions (1) and (2) of section 1.1 for the case of the THDM. Analogously to the Standard Model case, we consider the parameters as scale dependent. In the bilinear formalism, advantageous in the description of the THDM potential, large field configurations correspond to a large bilinear K₀, which itself is bilinear in the Higgs-doublet fields, see (1.12); the bilinears therefore depend quadratically on the mass scale Λ. The THDM potential is considered as an effective parametrization V_eff^THDM of the form (1.25), (1.26). Higher-dimensional operators are neglected, since we consider a scale Λ much larger than the electroweak scale but also sufficiently below the Planck mass. For large fields close to the high scale the quartic terms are dominant, so we keep only the J₄ term (2.1), where we write explicitly the dependence of the parameters on the scale K₀. Note that the bilinear dimensionless field k is defined on the domain |k| ≤ 1. With respect to the MPP we are looking for a potential which has a second minimum at the high scale, that is, K₀,₂ = Λ² in a corresponding "direction" k₂ of the second minimum, see (1.24)-(1.26). In order to have a degenerate vacuum at the high scale with the same potential value, we find from (2.1), for large K₀, the condition (2.4); demanding in addition stationarity of the degenerate value under the running, (2.6), we find the condition (2.7) for the beta functions at the high scale. For the THDM the conditions (2.4) and (2.7) replace the corresponding conditions of the SM.
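Schematically, the two THDM conditions can be written as follows (our reconstruction of the content of (2.4) and (2.7) from the surrounding discussion, in analogy with λ(Λ) ≈ 0 and β_λ(Λ) ≈ 0 in the SM):

$$J_4(\mathbf k_2)\Big|_{K_0 = \Lambda^2} = 0 \qquad \text{(degenerate value at the second vacuum)}\,,$$

$$\frac{\mathrm d}{\mathrm d \ln K_0}\, J_4(\mathbf k_2)\Big|_{K_0 = \Lambda^2} = 0 \qquad \text{(stationarity under the running)}\,,$$

where the second condition is built from the β functions of η₀₀, η and E evaluated in the direction k₂.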
2.1 Stationary points at the high scale Λ
So far we have found that a minimum k₂ should satisfy the conditions (2.4) and (2.7). We now study the stationarity structure of the dominant quartic terms; this study is quite analogous to the stability study in [16]. We have |k| ≤ 1 and consider the cases |k| < 1 and |k| = 1 separately. For |k| < 1, stationarity of the potential requires a vanishing gradient of J₄, (2.9); note that we do not write explicitly the scale dependence of the parameters, which implicitly is given by Λ². With the condition (2.9) for a vanishing gradient, we can in the case |k| < 1 write the condition (2.4) for vanishing J₄ in the form (2.10). For det(E) ≠ 0 the regular solution of (2.9) is (2.11); for det(E) = 0 we have exceptional solutions. We check that the regular solutions, with 1 − kᵀk > 0, indeed satisfy these conditions. For |k| = 1 we impose a Lagrange multiplier u, and the stationary solutions follow from the corresponding gradient equation (2.14). With these conditions for a vanishing gradient, we can in the case |k| = 1 write the condition (2.4) for a vanishing J₄ in the form (2.15). The regular solution of (2.14), that is, a solution with det(E − u 1₃) ≠ 0, is (2.16), with u a solution of (2.17). Alternatively, for det(E − u 1₃) = 0, we may have exceptional solutions of (2.14).
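For reference, the regular solutions described here read as follows (our transcription; the |k| = 1 case is consistent with the explicit relation η₃² = (η₀₀ + u)(E₃₃ − u) used later in the comparison with previous work):

$$|\mathbf k| < 1:\qquad \boldsymbol\eta + E\,\mathbf k = 0 \;\Rightarrow\; \mathbf k = -E^{-1}\boldsymbol\eta\,,\qquad J_4 = 0 \;\Leftrightarrow\; \eta_{00} = \boldsymbol\eta^{\mathsf T} E^{-1} \boldsymbol\eta\,;$$

$$|\mathbf k| = 1:\qquad (E - u\,\mathbb 1_3)\,\mathbf k = -\boldsymbol\eta \;\Rightarrow\; \mathbf k = (u\,\mathbb 1_3 - E)^{-1}\boldsymbol\eta\,,\qquad J_4 = 0 \;\Leftrightarrow\; \eta_{00} + \boldsymbol\eta^{\mathsf T}\mathbf k + u = 0\,.$$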
Finally, we note that we have to ensure that the stationary solutions of (2.9) for |k| < 1, respectively (2.14) for |k| = 1, are local minima. As usual, this can be done by considering the (bordered) Hessian matrix. Alternatively, in the case of a stable potential, that is, a potential which is bounded from below, we may look for the deepest stationary solution (or, in the degenerate case, solutions), which are then of course minima.
2.2 Classification of the MPP in the THDM
Let us now study the vacuum structure with respect to the MPP in detail. In particular, we derive the conditions for having isolated points, respectively continuous regions of stationary points, corresponding to the MPP in a weaker or a stronger sense. First we recall that we can, by a basis change (1.19), (1.21), diagonalize the real symmetric matrix E, and therefore we suppose E to be diagonal in the following. We emphasize that E is assumed to be diagonal at the scale Λ; this in particular means that the matrix E may in principle be non-diagonal at a different scale. We discuss the running of the parameters of the THDM in the next section.
In order to distinguish the parameters in the new basis, where the matrix E is diagonal, from the original ones, we denote them with a prime. In conventional notation, a potential with diagonal matrix E corresponds to otherwise arbitrary parameters with λ₆ = λ₇ (in general complex) and λ₅ real.
For K₀ > 0 the bilinear space is defined on the domain |k| ≤ 1. Let us first consider the case |k| < 1.
As pointed out above, for det(E) = det(E') ≠ 0 the regular solution of the gradient equation (2.9) is a single point (2.11), and a degenerate value of the potential requires that η₀₀ also satisfies (2.10). We emphasize that det(E) as well as the parameter η₀₀ are invariant under a change of basis.
If one of the eigenvalues of E is zero, say the upper component of the diagonalized matrix E', the gradient equation (2.9) leaves the corresponding component of k free: the solutions form a line segment. In the case that η₁' together with E₁₁' vanish, that is, the zero components of η' and E' are aligned, we may have a degenerate line of solutions satisfying (2.9); for η₁' ≠ 0 there is no solution of (2.9). Besides, the η₀₀ parameter has in any case to satisfy (2.10). We want to express these conditions in a basis-invariant way. Firstly, we remark that one vanishing eigenvalue of the matrix E (note that E is the original parameter matrix and not necessarily diagonal) corresponds to rank(E) = 2, which gives the basis-invariant conditions (2.21): det(E) = 0 and tr²(E) − tr(E²) ≠ 0. We can then construct, in a basis-invariant way, the conditions (2.22) for having one zero entry of the parameter vector η aligned with the zero eigenvalue of E. We arrive at the statement that for a matrix E of rank 2, that is, a matrix E fulfilling the conditions (2.21), a line of second degenerate vacua is possible if in addition the two conditions (2.22) hold and the basis-invariant parameter η₀₀ satisfies (2.10); this corresponds to the MPP in the stronger sense. In case only the rank conditions (2.21) hold but not the conditions (2.22), no realization of the MPP is possible. Similarly we can treat the case that two eigenvalues of E vanish. Going to a basis where E is diagonal, suppose that the two upper components of the diagonal matrix E' vanish; then the solutions of (2.9) form a disk. If the two components of η' aligned with the vanishing diagonal entries of E' do not both vanish, there is no solution; for a solution the parameter η₀₀ also has to satisfy (2.10). The basis-invariant formulation is as follows: two vanishing eigenvalues correspond to a matrix E of rank(E) = 1, that is, (2.24): det(E) = 0, tr²(E) − tr(E²) = 0, and tr(E) ≠ 0. Since two eigenvalues vanish, we can by a basis change always achieve that also one of the components of η vanishes, aligned with one of the vanishing entries of E; this can be written in a basis-invariant way as (2.25). Only if, in addition to (2.24), the conditions (2.25) are satisfied can we have the MPP in the form of a disk in bilinear space, that is, the MPP in the stronger sense, where again η₀₀ has to satisfy (2.10). Otherwise, if the rank-1 conditions are fulfilled but (2.25) do not both hold, there is no solution as a multiple point. Finally, we consider the case E = 0; note that a vanishing matrix E does not depend on the chosen basis. From (2.10), that is, the condition of a vanishing potential at the second minimum, we find that for η ≠ 0 there is no solution with respect to the MPP. However, if in addition to E = 0 we also have η = 0 and η₀₀ = 0, we have a sphere of solutions, that is, the MPP realized in the stronger sense.
The case |k| = 1 can be treated analogously. We look for solutions of the gradient equation (2.14) instead of (2.9). This system of four equations has, in general, solutions for the vacuum vector k₂ as well as the Lagrange multiplier u. The solutions of (2.14), and in particular their degeneracy, depend on the determinant of the matrix M = E − u 1₃. Let us now turn to the exceptional cases, with at least one eigenvalue of the matrix M vanishing. The argumentation is quite analogous to the previous study, where we have to replace the matrix E by M and take into account the condition kᵀk = 1.
If one of the eigenvalues of M is zero, say, without loss of generality, the upper component of the diagonalized matrix M, then the gradient equation gives at most two points, provided η₁ = 0; otherwise there is no solution.
In addition, the η₀₀ parameter has to satisfy (2.15) in order to give a degenerate second vacuum. We want to find the conditions independently of the chosen basis. One vanishing eigenvalue of the matrix M corresponds to rank(M) = 2, written basis-invariantly as (2.28). In case there is a solution with u from (2.14) satisfying the conditions (2.28) and also the conditions (2.29), isolated points as a second degenerate vacuum are possible, provided η₀₀ satisfies (2.15). Suppose now that two components, say the upper components, of the diagonalized matrix M vanish; then the solutions of (2.14) form a circle of degenerate solutions, provided the two components of η aligned with the vanishing eigenvalues of M also vanish; otherwise there is no solution.
The basis-invariant formulation is as follows: two zero eigenvalues correspond to a matrix M of rank one. Finally, we consider the case M = 0. Then we find from (2.15), that is, a vanishing potential at the second minimum, that for η ≠ 0 there is no solution with respect to the MPP, while in the case η = 0 we have the surface of a sphere of solutions, that is, the MPP in the stronger sense. Note that these conditions are already basis invariant.
We have seen that, in addition to a vanishing eigenvalue of the matrix E, respectively M = E − u 1₃, also the corresponding component of η has to vanish (in a basis where E', respectively M, is diagonal). This in turn means that we have CP conservation in this case [20]. We thus confirm the result of [13] that the MPP in the THDM in the stronger sense corresponds to a CP-conserving potential. In the strongest case, where the second vacuum is a degenerate sphere, we must have E = 0 together with vanishing η and η₀₀. This means that the potential has J₄ = 0 for all k; in conventional notation this gives λ₁ = λ₂ = λ₃ and λ₄ = λ₅ = λ₆ = λ₇ = 0.
Moreover, let us note an interesting aspect of the solutions: solutions with |k| < 1 give charge-breaking minima (see [16] for details), while solutions with |k| = 1 give the electroweak symmetry breaking SU(2)_L × U(1)_Y → U(1)_em. However, for a second vacuum at a high vacuum-expectation scale Λ, there is no reason to discard the possibility of charge-breaking minima.
We summarize our findings in table 1.
2.3 Constraints from the quantum potential
Thanks to the bilinear formalism, the one-loop β functions of the general THDM can be put in a concise tensor form (see appendix C), allowing one to perform an analytical study of the renormalization group. In the case |k| < 1, using (2.9) and (2.10), the constraint (2.7) can be put into the form (2.33), where we define for convenience the strictly positive quantities $g \equiv \tfrac{9}{20}\, g_1^2 g_2^2$ and $G$, and where the definition of the T-tensors is given in (C.18). If |k| = 1, we must use (2.14) and (2.15), and the constraint then reads (2.35). Let us first consider a simplified situation where the theory does not contain Yukawa couplings. In that case, eq. (2.33) reduces to a form which in fact has no solutions. This can be seen by working in a basis where k = (k₁, 0, 0)ᵀ: since kᵀk < 1, the term in brackets, and hence the right-hand side, is a positive quantity. This holds in any basis, and implies in turn that the constraint cannot be fulfilled.
Table 1: Classification of possible realizations of the MPP in the THDM. The last column gives the kind of realization of the MPP, or "no" in case the MPP is not realized. The upper part gives solutions for the case |k| < 1 and the lower part for the case |k| = 1. In all cases where the MPP is realizable, the parameter η₀₀ has to fulfill the condition (2.10), respectively (2.15). The solutions for the vacuum vector k follow from (2.9), respectively the solutions for k and u from (2.14). The conditions are given in a basis-invariant way and are therefore directly applicable to any THDM.
We have therefore proven that, in the absence of Yukawa couplings and if kᵀk < 1, eq. (2.7) cannot be satisfied, which means that the MPP cannot be realized. Applying the above reasoning to the case kᵀk = 1, we reach the same conclusion.
The situation changes if we consider Yukawa couplings. For the known fermions we will see that only the dominant contribution from the top quark allows for a realization of the MPP. Alternatively, lower bounds on the Yukawa couplings could be given in order to have the MPP realized.
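For orientation, the same mechanism is familiar from the Standard Model, where the one-loop quartic β function evaluated at a vanishing quartic coupling reads (standard result, with g and g' the SU(2)_L and hypercharge gauge couplings):

$$16\pi^2\,\beta_\lambda\Big|_{\lambda=0} = \frac{3}{8}\Big[\, 2 g^4 + \big(g^2 + g'^2\big)^2 \Big] \;-\; 6\, y_t^4\,.$$

Without Yukawa couplings the right-hand side is a positive combination of gauge terms, so β_λ cannot vanish where λ does, exactly as found above for the THDM; only a sufficiently large top-Yukawa coupling makes the required zero possible.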
2.4 Comparison with previous work on the MPP in the THDM
As a concrete application of the results derived in sections 2.2 and 2.3, let us take the example of a THDM type II. In this way we will be able to compare our results with the ones from [13] and [14].
We can now begin the comparison between the present work and the previous analysis of the MPP by Froggatt and Nielsen. The study in [13] results in two possible vacuum configurations, (2.45a) and (2.45b), which respectively correspond to a charge-conserving, CP-violating minimum and a charge-breaking, CP-conserving minimum.
In the first case, (2.45a), that is, cos(θ) = ±1, we have kᵀk = 1 at the vacuum. Froggatt and Nielsen also require that the value of the potential at the minimum is independent of ω; in view of (2.43), in the approach developed in the present work this corresponds to a circle-shaped vacuum. This in turn requires, as can be seen from table 1, that there is a basis where the parameters satisfy (2.46). The value of u being fixed, we may solve equations (2.14) and (2.15), giving η₃² = (η₀₀ + u)(E₃₃ − u) (2.47), which can be translated into the conventional parameters. Requiring that |k₃| < 1, it can be shown that the choice of solution depends on the signs of λ₁ and λ₂, thus giving two sets of conditions, (2.49) and (2.50). In addition, ensuring that the extremum is a minimum rules out the second solution (2.50).
The only remaining set of constraints (2.49) corresponds to the one derived in [13].
We now turn to the second case, (2.45b), where cos(θ) = 0. Making use of (2.43), this means that k₁ = k₂ = 0 and, in the case where neither φ₁ nor φ₂ vanishes, |k₃| < 1. In any case, from a geometric point of view the solution is a point, meaning that none of the eigenvalues of the matrix E may vanish. Applying the constraints (2.9) and (2.10) gives conditions which, in conventional parameters, require in particular that the quantity λ₁λ₂ be positive. In addition to the above constraint, we can work out the condition |k| < 1 in the conventional formalism. Once again we want to ensure that the solution is a minimum, meaning here that E₃₃ must be positive. This rules out the first solution, while the second set of conditions corresponds to the one in [13]. Thus the present formalism agrees with the results of Froggatt and Nielsen in both cases.
Finally, relation (2.7) must hold if the MPP is to be realized, resulting in an additional constraint among the β functions, which becomes explicit upon injecting the expressions of the bilinear couplings in terms of the conventional parameters. In the case of the charge-conserving vacuum (2.45a) we have λ₅ = β_{λ₅} = 0 and k₁² + k₂² + k₃² = 1; with these relations, as well as the expression of k₃ from (2.15) in terms of the conventional parameters, we find one relation among the β functions. In the other case, (2.45b), we have k₁ = k₂ = 0, and with (2.10) we find a second relation. These two expressions exactly match condition (26) of ref. [13], with the only difference that here we did not need to consider the sign of λ₄; instead, what distinguishes the two cases is the shape of the MPP vacuum.
To summarize this section: we have shown that the constraints derived by Froggatt and Nielsen [13,21], and later reused by McDowall and Miller [14] in the framework of a THDM type II, constitute in fact a very particular case of the formalism developed in this paper and can easily be recovered. We stress that numerous different realizations of the MPP may be derived in the same manner using the classification of table 1, which might lead to various phenomenological implications of this principle at the EW scale. Let us emphasize that the conclusion of [13,14,21], that the MPP in the THDM cannot be realized for a SM-like Higgs-boson mass and the observed top-quark mass, is based on a special case of the MPP. Here we have seen, in the geometric approach in terms of bilinears, that the THDM may develop many more different kinds of realizations of the MPP.
2.5 Example potential
As an additional example we study a CP-conserving THDM potential in which the parameters in conventional notation (1.8) satisfy the relations (2.58); let us recall that these relations are assumed to hold at the scale Λ. In bilinear space (1.14) this corresponds to quartic parameters of the form (2.59). In view of table 1 we have the case det(E) = 0 and tr²(E) − tr(E²) = 0, but with tr(E) ≠ 0, together with ηᵀEη = E₁₁η₁² ≠ 0 and (Eη × η)ᵀ(Eη × η) = 0; that is, the MPP is realizable as a disk, provided η₀₀ satisfies the condition (2.10).
Let us look into the solutions in detail. First we note that, by a change of basis (1.21) with a suitable rotation matrix, we can shift both the non-vanishing diagonal entry of E and the corresponding entry of η. This case therefore corresponds to the case (2.23) with two vanishing eigenvalues of E.
In order to study the MPP, we consider first the case |k| < 1. The second vacuum follows from (2.9) as (2.61): this is indeed an open disk in the y-z directions, with boundary radius $\sqrt{1 - \eta_1^2/E_{11}^2}$. The parameter η₀₀ thereby has to fulfill the condition (2.10), which here takes the form (2.62). Moreover, the β functions have to satisfy (2.7); if we consider only the potential, without any Yukawa couplings, these conditions read (2.63), again at the scale Λ. For |k| = 1 the stationarity condition is given by (2.14) with a Lagrange multiplier u. For u = 0 we get an exceptional solution at the border of the solution (2.61), which, as expected from table 1, is a circle, provided η₀₀ fulfills (2.15), which here equals (2.62). Also, the β functions have to fulfill (2.7), which, neglecting Yukawa interactions, again gives (2.63).
For u ≠ 0 we immediately get from (2.14) two possible points (2.65), with corresponding values of the Lagrange multiplier. The condition (2.15) then restricts the parameter η₀₀, corresponding to one of the two discrete vacua in (2.65). Besides, we have to satisfy the condition (2.7) for the beta functions. We note that the potential is CP conserving [20] (see section 1.2), and we conclude that the MPP is realizable in the stronger sense, with a continuous disk of degenerate stationary points, provided the parameters and their β functions satisfy the discussed constraints. If these constraints are not fulfilled, we may get at most an isolated point, that is, the MPP in the weaker sense, where the parameters and β functions have in particular to satisfy (2.67) and (2.7).
3 Application of the MPP: from the high scale to the EW scale
Having classified the possible types of vacua at the high scale Λ, we now want to study the MPP and its low-energy phenomenological implications. The method we use in this analysis can be summarized by the following steps:
• At the high scale Λ we encounter 7 real parameters from the quartic part of the potential, besides the parameters of the Yukawa couplings. The three gauge couplings can be run up from their known values at the electroweak scale.
• We consider the constraints provided by the MPP at the high scale Λ.
• We run all couplings down to the electroweak scale by the evolution equations. At the electroweak scale we have to consider in addition the quadratic parameters of the potential. These quadratic parameters are constrained since the model should provide the observed spontaneous electroweak symmetry breaking.
• We scan over all remaining free parameters.
• For every parameter set, we compute the masses of the physical Higgs bosons of the THDM and the masses of the fermions.
The purpose of this analysis is to study the implications of the MPP for the masses of the physical states, namely the five masses of the physical Higgs bosons and the masses of the fermions. The main goal is to determine whether the application of the MPP to the THDM may yield the correct, that is, observed, masses of a Standard-Model-like Higgs boson and of the top quark. In appendix A we recall the mechanism of spontaneous symmetry breaking in the THDM in the bilinear formalism; in particular, we present there the mass matrices of the Higgs bosons and the mass of the pair of charged Higgs bosons.
3.1 Example: MPP as a spherical vacuum
As an application of our methods, we now study a THDM potential with the MPP realized as a spherical vacuum, characterized by the potential parameter matrix E = 0 (respectively M = 0 for |k| = 1); see table 1. We consider for simplicity only the top-Yukawa sector, and we stay in the framework of a THDM type III, where in general neither of the two top-Yukawa couplings, denoted $y_t$ and $\epsilon_t$ in the following, has to vanish. Note, however, that other types of THDM can be obtained as special cases when either $y_t$ or $\epsilon_t$ is set to 0. In appendix D we show the Lagrangian of the top-quark Yukawa coupling and its behavior under Higgs-basis changes.
Following the discussion in section 2 we have to consider the cases |k| < 1 and |k| = 1.
In the case |k| < 1, the constraint (2.33) can be evaluated further by choosing a basis where k = (0, 0, k₃)ᵀ ≡ (0, 0, k)ᵀ; it becomes a quadratic equation (3.2) in k. A necessary condition for this equation to have real solutions can be given, whose lhs is basis invariant and whose rhs can be evaluated using the one-loop running of g₁ and g₂; at the scale Λ = 10¹⁶ GeV the condition becomes an explicit bound. Given this inequality, and under the assumption that the lhs is roughly of order $y_t^{\rm SM}$, we expect the top quark to be the only fermion that couples strongly enough to the Higgs doublets to satisfy it. The situation may, however, be less clear in some limiting cases, e.g. a THDM type II with a high value of tan(β).
We now turn to the case |k| = 1, where the evaluation of (2.35) in the same basis as above gives a constraint (3.5) on the Lagrange multiplier u. Similar to the previous case, there is a necessary condition for this equation to have solutions. We note that in both cases we can reduce the number of free parameters by making the Yukawa couplings $y_t = |y_t|\, e^{i\theta_y}$ and $\epsilon_t = |\epsilon_t|\, e^{i\theta_\epsilon}$ real. This is achieved by the U(2) basis transformation (see appendix D) corresponding, in terms of bilinears, to a rotation matrix R(U) (1.23); the latter is in fact a rotation around the z-axis, which in our case can be performed without loss of generality. We illustrate in Fig. 1 the constraints on $y_t$ and $\epsilon_t$ in both cases, showing the regions of the $(y_t, \epsilon_t)$ plane providing real solutions of (3.2) and (3.5).
Choosing specific values for the Yukawa couplings, we can fix all the relevant parameters at the high scale: in the case |k| < 1 all the quartic couplings vanish, whereas in the case |k| = 1 the non-zero parameters are related to the Lagrange multiplier u. It is remarkable that in both cases this set of couplings implies CP conservation at the level of the quartic part of the scalar potential [16]; the only remaining possible source of CP violation is a non-zero value of the scalar mass coupling ξ₂ in the basis chosen above. The next step is to perform the running of the couplings down to the electroweak scale, where the study of spontaneous symmetry breaking will eventually allow for the determination of the masses of the Higgs bosons and the fermions. At the electroweak scale we have to consider the quadratic parameters ξ₀, ξᵢ (i = 1, 2, 3) of the Higgs potential. These couplings are, however, subject to constraints in order to give the proper SU(2)_L × U(1)_Y → U(1)_em symmetry-breaking pattern. Using (1.22), we can trade these four parameters for u_EW, β, ζ and v₀; the latter is known, since it corresponds to the Standard Model vacuum expectation value v₀ ≈ 246 GeV.
The angles β and ζ parametrize the basis transformation (1.22), which allows one to achieve the form of the Higgs basis (A.4). We note that a non-zero value of ζ would in our case generate CP violation at the level of the scalar potential, and accordingly imply a mixing between the scalar and pseudo-scalar physical states. For simplicity we consider the case ζ = 0, in which we identify the lightest neutral scalar with the CP-even, Standard-Model-like Higgs boson.
3.2 Results and discussion of the spherical vacuum
We now present the results of the numerical analyses of the MPP for a Higgs potential providing a spherical vacuum. In order to detect the minima at the electroweak scale, we have to solve the corresponding gradient equation (see appendix A and [16]). In principle we may encounter regular and irregular solutions of these equations; however, for all benchmark points of the irregular solutions we find at least one negative eigenvalue of the scalar mass matrix (A.7), so these cases are ruled out. For the regular solutions, we show in Fig. 2 the results of the numerical analysis in the (m_h, m_t) plane for several values of the high scale Λ, namely Λ = 10¹⁵, 10¹⁶, and 10¹⁷ GeV. We emphasize that the free parameters of the model are $y_t(\Lambda)$, $\epsilon_t(\Lambda)$, β and u_EW. All the displayed parameter points were selected according to the following constraints and assumptions: • In the basis where k = (0, 0, k)ᵀ, the Yukawa couplings $(y_t, \epsilon_t)$ are taken in the range [−1, 1] and are chosen such that (3.2) or (3.5) has solutions, in the cases |k| < 1 and |k| = 1, respectively.
• We vary β in the range between −π/2 and π/2, and u_EW between 0 and 10, supposing that the classical scalar potential has a global minimum at the EW scale. This is ensured by requiring Theorem 3 of [16] to be satisfied.
Clearly, for a scale Λ ≲ 10¹⁶ GeV, the MPP provides values of (m_h, m_t) compatible with the observed values in a certain region of the parameter space. This is no longer the case for higher values of this energy scale. These results agree with [14], where the MPP was shown to give a top-quark mass too high to be compatible with the measured value in the context of a THDM type II, with Λ ≈ M_Planck ≈ 10¹⁸ GeV.
4 Conclusion and outlook
The MPP [4] forces the Higgs potential to provide degenerate vacua with the same potential value. Long before the discovery of the Higgs boson, its mass was predicted to be 135 ± 9 GeV based on this principle applied to the Standard Model, a quite remarkable result.
Some effort [13,14,21] has been spent on applying the MPP to the THDM. This has been done in the conventional formalism, where the gauge degrees of freedom appear explicitly. Here we have studied the MPP in the THDM applying the bilinear formalism [15][16][17], which turns out to be quite powerful for studying models with additional Higgs-boson doublets.
We have classified all different types of degenerate vacua in the THDM. In particular, we find degenerate vacua which realize the MPP in a weaker sense, providing additional isolated points, but also realizations in a stronger sense: line segments, circles, surfaces of spheres, as well as spheres. We have presented the classification in a basis-invariant way; for any THDM, the corresponding conditions for the different types of realizable MPPs can be checked easily.
We recover the MPP cases studied in the literature but in addition can identify different realizations of the MPP in the THDM. It would be very interesting to show whether these new realizations of the MPP can give THDMs which are stable, have the observed electroweak symmetry-breaking behavior, and fulfill all the experimental constraints.
We have studied the β functions of the THDM in detail in terms of bilinears and have shown that the MPP is not realizable if only the THDM potential, without Yukawa couplings, is considered. This changes once Yukawa couplings are included.
As an example, we have studied a spherically shaped vacuum in a general THDM with a simplified matter content. Relaxing the constraint that the MPP scale must equal the Planck mass, we have shown that some regions of the parameter space may give the correct, that is, observed, masses of a Standard-Model-like Higgs boson and the top quark.
Our analysis is done at the one-loop order of the β functions, using the tree-level RG-improved potential. In the future we would like to consider higher orders and also the dependence of the results on the chosen scheme. We would also like to take into account the experimental constraints from negative searches for additional Higgs bosons. Finally, we want to extend this study by considering all three families of fermions, also in the context of THDMs with natural flavor conservation.
A Spontaneous symmetry breaking in the THDM
The vacuum expectation values of the doublets are normalized such that v₀ ≈ 246 GeV is the Standard Model vacuum expectation value. We can then expand the fields about the minimum, giving the Higgs-basis form (A.4).
As has been shown in [16], the observed electroweak symmetry breaking, that is, a non-trivial vacuum with both charged components of the doublets vanishing, (A.1), corresponds to a solution with |k| = 1 (A.5). This minimum of the potential can be found from the gradient of the potential by introducing a Lagrange multiplier u_EW in order to satisfy (A.5), that is, from (A.6), with V the potential as given in (1.14). Supposing the potential has the correct electroweak symmetry breaking, corresponding to a solution of (A.6), the mass matrix for the three neutral scalars is given by (A.7), where the second equality is obtained using relation (A.6). Due to the dependence on u_EW, it appears that one way to approach the decoupling limit is to have large values of u_EW. The mass of the charged Higgs boson is given by (A.8), where we see that it is proportional to the Lagrange multiplier u_EW.
B Suppression of the quadratic terms of the THDM potential
Here we briefly argue that the MPP applied to the THDM (1.25) requires the function J₄ to vanish at the high scale Λ. We first note that J₂(k) and J₄(k) depend linearly on the quadratic and quartic potential parameters, respectively (see (1.26)), in addition to depending on the dimensionless fields k with |k| ≤ 1. Therefore we expect the absolute values of J₂ and J₄ not to be much larger than one, since the parameters should not be too large for perturbativity reasons.
We first consider the potential at the electroweak scale, that is, at a scale of O(100 GeV). The non-trivial minimum of the potential is at K₀,₁ = −J₂/(2J₄), with a corresponding potential value V_THDM = −J₂²/(4J₄). The MPP (see (1.1) and (1.2)) requires the same potential value at the high scale. We expect from the running of the parameters that the functions J₂ and J₄ also depend on the scale, so let us denote these functions at the high scale with a prime, J₂' and J₄'. Even though they are in general different from J₂ and J₄ at the electroweak scale, we expect their absolute values also not to be much larger than one. The condition that the potential value is degenerate at the high scale then gives (B.1). This in turn means that J₄' goes to zero for large Λ, provided J₂, J₄ and J₂' are not too large. This condition simply arises from the principle of having degenerate vacua with the same potential value.
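The estimate can be made explicit (our transcription of the argument; v denotes the electroweak scale):

$$J_2'\,\Lambda^2 + J_4'\,\Lambda^4 \;=\; -\frac{J_2^2}{4 J_4} \;=\; \mathcal O(v^4) \qquad\Longrightarrow\qquad J_4' \;=\; -\frac{J_2'}{\Lambda^2} + \mathcal O\!\left(\frac{v^4}{\Lambda^4}\right) \;\longrightarrow\; 0 \quad (\Lambda \to \infty)\,,$$

so the quartic function J₄ must indeed be suppressed at the high scale.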
C One-loop RGEs in the bilinear formalism
The RGEs of the THDM were computed using an updated version of PyR@TE [22,23], where the scalar mixing is correctly taken into account. We compute the β functions in the bilinear formalism. As we show, in this formalism, the RGEs can be put into a condensed tensor form where the β-functions inherit the transformation properties of the associated parameters under a change of basis.
As an example, 16π² β(η₀₀) is a quadratic polynomial in the quartic parameters, containing the structures η₀₀², η₁² + η₂² + η₃², (E₁₁ + E₂₂ + E₃₃)η₀₀, E₁₁² + E₂₂² + E₃₃², and 2(E₁₂² + E₁₃² + E₂₃²).
D The top-quark Yukawa coupling
In the unitary gauge at the vacuum, the top-quark Yukawa Lagrangian reduces to a mass term, and we see that, as in the Standard Model, the top mass is given by v₀/√2 times the effective top-Yukawa coupling in the Higgs basis. This general form can be specialized to other types of THDM by imposing that either $y_t$ or $\epsilon_t$ vanishes in the original basis; in this case β can be understood as the usual physical parameter related to the ratio of the vevs.
| 11,232.2 | 2020-01-28T00:00:00.000 | ["Physics"] |
Q&A: Evolutionary capacitance
What is an evolutionary capacitor?
Many mutations are conditionally neutral: important under some conditions, invisible at other times. Capacitors are on/off switches affecting the visibility of a particular set of conditionally neutral variants. While in the neutral state, 'cryptic' genetic variants can drift to high frequency; extending the electrical analogy, accumulated cryptic variants can be seen as a kind of genetic 'charge' [1].
Isn't it really bad for the yeast to read through stop codons?
Yeast tolerate [PSI+] remarkably well, and not just under lab conditions: the prion is seen in the wild, too [2]. Even when [PSI+] is present, most proteins still terminate normally. But [PSI+] does make a big difference to the phenotype, depending on exactly which cryptic variants are present in a given strain [2,3].
[PSI+] is relatively rare in wild yeast strains, so most of the time it is probably bad for yeast [4]. But [PSI+] increases variation, so it might have a positive effect some of the time. On those occasions, [PSI+] can smooth the path to adaptation. By providing this smoothing, evolutionary capacitors act as adaptive devices or 'widgets' [5] that increase evolvability, defined as the rate of appearance of heritable and potentially adaptive phenotypic variants [6].
Isn't that a pretty extraordinary claim in evolutionary biology?
Actually, the logic and math of the evolution of a capacitance switch is identical to that of the well-accepted case of bet-hedging [7]. The seeds of annual plants don't all germinate straight away. If they did, and it was a bad year, the plant lineage would get wiped out. It's better for a plant to pay a cost to hedge its bets, and have some seeds germinate while others lie dormant. Dormancy is a bet on a bad season; evolutionary capacitor switching is a bet on finding adaptive cryptic variants in a new environment.
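A toy calculation (our illustration, not from the cited papers) makes the parallel concrete. Suppose good and bad years are equally likely, a germinating seed leaves 3 offspring in a good year and 0 in a bad one, and a dormant seed simply survives. A lineage that germinates everything has long-run (geometric-mean) fitness $\sqrt{3 \times 0} = 0$: one bad year and it is gone. A lineage that germinates half its seeds averages $0.5 \times 3 + 0.5 \times 1 = 2$ in good years and $0.5 \times 1 = 0.5$ in bad years, for a geometric mean of $\sqrt{2 \times 0.5} = 1$. Paying a cost in the average year buys persistence through the bad ones, and the same arithmetic applies to a lineage that occasionally switches a capacitor on in the hope of finding adaptive cryptic variants.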
So the switching is completely random?
We don't understand exactly how switching happens, but we do know that the probability of switching goes up when the yeast is under stress [8]. When everything is going well, increasing variation will be bad both for the average yeast cell, and for the population as a whole. But in a new and stressful environment, there's a better chance that change introduces an adaptation into the population.
How is the new phenotype inherited?
At first, the new phenotype is only present when [PSI+] is present. Prions like [PSI+] are inherited epigenetically, via the cytoplasm [9]. In a phenomenon known as 'genetic assimilation' , a trait that was originally non-genetic can become genetic after some generations of selection [10]. As explained earlier, even in the presence of [PSI+], any given stop codon is not always read through to express the cryptic variant; this partial nature of the readthrough presumably weakens the variant's phenotypic effects. Selection can favor changes, such as mutations in stop codons [11], that both increase the expression level of the adaptive variant, and cause it to lose its initial dependency on the presence of [PSI+] [12]. Then the prion can disappear again. The [PSI+] prion can help the yeast lineage survive long enough to find these genetic assimilation mutations [6], or it can simply help the lineage get to the final [psi-] adaptation faster through a more efficient adaptive path [12]. Unlike the case of mutator alleles, or a mutation partially knocking out Sup35 function, there is no long-term cost involved when capacitance increases the rate of adaptation [13]. The prion acts as a stopgap, finding an adaptive variant quickly, allowing its short-term inheritance, and disappearing again once the phenotype is more stably assimilated into the DNA. Lineages that are able to switch in and out of the [PSI+] state have an advantage over those that can't, even if the [PSI+] state itself carries a cost [4].
The [PSI+] prion is pretty weird and obscure. Can you give a more general example?
The most famous example is loss of function of the chaperone Hsp90 [14,15]. Hsp90 stabilizes many metastable signal transducers. When an organism is in trouble, Hsp90 may have too much work to do, mimicking a partial deletion. This has unpredictable consequences, depending on which cryptic variants are present in that individual's genetic background, both in direct Hsp90 client proteins that may destabilize in the absence of Hsp90, and in genes downstream of the clients in signal transduction pathways.
For how many generations does Hsp90 stay impaired?
That's a good question. For Hsp90 or other evolutionary capacitors to have a significant impact on evolvability, cryptic variants need to stay switched on long enough for genetic assimilation to take place [12,16]. We don't know if this is the case for Hsp90-mediated variation.
Are most capacitors chaperones?
Probably not. Models of gene regulatory networks suggest that any reversible knockout -for example, as may be the result of the protein forming a prion -can act as an evolutionary capacitor [17]. Even irreversible knockouts, such as gene deletions, can facilitate adaptation [17]. Mutations to chromatin regulators [18] and other regulatory genes [19][20][21] reveal substantial cryptic variation. Yeast knockouts were screened for genes that provide robustness to normal developmental perturbations [22]; these genes (sometimes known as 'phenotypic stabilizers' rather than capacitors because they have not yet been shown to provide robustness to mutations [23]) are enriched for a number of gene types, but not chaperones. An unbiased screen of genomic regional deletions in Drosophila found many new sites whose deletion reveals cryptic genetic variation, but Hsp90 was not among them [24]. There may be so many potential capacitors out there that rather than capacitance being a 'special' mechanism, cryptic standing (or 'crouching') genetic variation may make routine contributions to adaptation [25].
So capacitor genes provide mutational robustness, which is lost in the knockout?
The phenotypes of mutants such as gene knockouts are more variable than the phenotypes of the wild type [26], but this does not necessarily reflect mutational robustness. It does demonstrate the high robustness provided by the gene to the microenvironmental perturbations inherent in normal developmental processes. Robustness to mutations is more complicated, as shown in Figure 1. In the capacitance story, a knockout reveals genetic variants, showing that the intact gene provided robustness to those variants. But the knockout can also hide variants that would otherwise matter, in which case the gene is called a potentiator rather than a capacitor [27,28]. Considering both effects together, knockouts may be no less robust to mutations than wild types are [21]. But even when a gene does not increase robustness to mutations overall, it will still make some specific mutations cryptic, allowing them to accumulate until the capacitor discharges [29]. In other words, capacitors are best defined as genes with many epistatic interactions (in the classical genetic sense in which an allele at one locus masks the effect of a polymorphism at another), rather than as genes that increase mutational robustness.
What's so good about cryptic variation?
The distribution of fitness effects of new mutations is bimodal (Figure 2); most mutations either destroy something or tinker with it (in other words, have only small fitness effects), but their effects are rarely in between [30]. Adaptation comes entirely from the tinkering mutations. Cryptic variants are special because they contain a higher proportion of tinkering variants and fewer lethal ones. The reason is that there is a sharp threshold for the effectiveness of selection that depends on the selection coefficient s and the effective population size N_e: when the threshold is exceeded (that is, when |s| > 1/N_e), selection works great; otherwise evolution is effectively neutral. When genetic variants spend time in a cryptic state, they are usually expressed just a tiny bit. In other words, conditional neutrality in practice usually means conditional near-neutrality. This slight deviation from strict neutrality is enough to purge the destructive variants but retain the tinkering ones, which accumulate until the capacitor comes along to release them [31,32].
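Some rough numbers (ours, purely illustrative) show how this filter works. Take N_e = 10⁶, so the drift barrier sits at |s| ≈ 1/N_e = 10⁻⁶, and suppose a cryptic variant leaks through at about 1% of its full effect. A variant that would be strongly deleterious when revealed (s = −0.1) still has an effective s of about −10⁻³ while cryptic, far above the barrier, so selection quietly purges it. A tinkering variant with |s| = 10⁻⁵ leaks an effective |s| of about 10⁻⁷, below the barrier, and drifts freely. The cryptic pool is therefore enriched for exactly the small-effect variants that adaptation feeds on.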
So the advantage of cryptic genetic variation is the quality, not the quantity?
Cryptic genetic variation has both quality and quantity advantages [33]. As for quantity, capacitance helps time the release of genetic variants to coincide with episodes of stress. Because lots of variants lose their crypticity at the same time, evolution can also explore more combinations and so cross more valleys in adaptive landscapes [16,31].
Do recessive alleles count as cryptic variants?
Sure, when an allele usually finds itself in a heterozygote, its recessive effects are cryptic to selection. In mostly asexual populations, excess heterozygosity can build up during an asexual phase, and rare episodes of sex will then act as a capacitor: outcrossing explores new allele combinations [34], while inbreeding increases phenotypic variation by converting heterozygotes to homozygotes [35]. In obligate sexuals, hybridization can have similar effects [36]. But unlike the changes brought about by a prion switch or a temporary gene downregulation, the changes brought about by sex are not so easily reversible. Sex releases cryptic variation in a single generation, but making it cryptic again is slower and more difficult. Unlike [PSI+], the capacitance switch only goes one way.

[Figure 1 caption: In the red section of the genome (left), the gene acts as a capacitor, allowing cryptic genetic variation to accumulate. In the black section of the genome, variation is fully expressed, and most is purged. When the gene is knocked out, abundant cryptic genetic variation is revealed, and a smaller amount of previously expressed variation is made cryptic. The red and black sections are shown as equally large, indicating that the wild type is not necessarily more robust to new mutations, on average, than the knockout [21].]
"Biology"
] |
Dry Friction Interblade Damping by 3D FEM Modelling of Bladed Disk: HPC Calculations Compared with Experiment
Interblade contacts and the damping of a turbine bladed wheel with prestressed dry friction contacts are solved by the 3D finite element method with a surface-to-surface dry friction contact model. This makes it possible to model the relative spatial motions of the contact pairs that occur during blade vibrations. To validate the model experimentally, a physical model of the bladed wheel with tie-boss couplings was built and tested. HPC computation with a proposed strategy was used to lower the computational time of the nonlinear solution of the wheel resonant attenuation used for the damping estimation. Comparison of the experimental and numerical damping estimates yields very good agreement.
Introduction
The trend in the development of power turbines and jet engines is to continuously improve performance and energy efficiency. Modern turbines are designed for higher operating temperatures and flow rates. The rotating turbine blades are subject, apart from considerable centrifugal forces, to aerodynamic forces from the flowing medium. However, the trend of producing ever-longer and ever-thinner blades leads to lower dynamic stiffness. Therefore, dynamic stiffness and structural damping must be increased by additional structural elements, e.g., tie-bosses and shroud connections, with minimal impact on blade weight and aerodynamic loss.
Although turbines and their bladings can be carefully designed, it is not possible to avoid resonant vibration, which leads to a high-cycle fatigue risk. A bladed wheel with sufficient dissipation of mechanical energy is the protection against this risk. Since the material damping of metal blades is very low, it is necessary to increase the damping by additional construction damping. Therefore, dry friction damping contacts in tie-bosses, shrouds, or platforms are introduced into the turbine design. In the literature, study cases can be found either on blade-to-blade contacts in shrouds or tie-bosses, e.g., Pešek et al. [1], Bachschmid et al. [2], Gu et al. [3], and Santhosh et al. [4], or on blade-to-disk contacts in platforms, e.g., in the works of Petrov and Ewins [5,6], Botto et al. [7], Gola and Liu [8], Zucca and Firrone [9], and Pesaresi et al. [10].
For the contact problems of elastic bodies in general, Hertz's theory remains the basis of analytical solutions to this day. Initial studies on finite element methods for contact problems appeared in the 1970s (e.g., Francavilla [11]). Since then, abundant literature on linear and nonlinear contact scenarios within the FEM has appeared (e.g., Wriggers et al. [12] and Simo and Laursen [13]). The effect of friction damping on the dynamic behavior of blading is a complex problem of continuum mechanics, involving the dynamic behavior of spatially distorted blades coupled by the disk and time-variant boundary conditions at the contacts with friction (Sextro [14]). It leads to multipoint contacts influenced by production accuracy, the roughness of the contact surfaces, and thermomechanical coupling (Awrejcewicz and Pyr'yev [15]).
To obtain a time-effective solution for bladed wheels with dynamic dry friction contacts, the model can be simplified by (a) reducing the number of DOFs of the blades and disk or (b) linearization or another approximation of the contact. Group (a) includes semianalytical solutions with few degrees of freedom, e.g., Bachschmid et al. [2], Pešek et al. [16], Pennacchi et al. [17], and Pešek and Půst [18]. Furthermore, in models where the number of DOFs is reduced by model reduction methods, e.g., the modal condensation method, the nonlinearity remains only in the contacts (Pešek et al. [19] and Zeman et al. [20]). The methods of group (b) are represented mainly by the harmonic balance method (HBM) (Magnus [21]), used for stationary harmonic vibration analysis. The HBM is very well developed theoretically and widely used in bladed disk dynamics with friction contacts (e.g., Zmitrowicz [22], Pierre et al. [23], Ferri and Dowell [24], Petrov and Ewins [25,26], Sanliturk et al. [27], and Suss et al. [28]). The simplified solution approaches are computationally efficient, but they have drawbacks and limitations due to the simplifications. Namely, for the HBM it is necessary to know in advance the contact areas and the stiffness used to calculate the contact forces, which requires further demanding experiments (Schwingshackl [29]) or numerical simulations (Voldřich et al. [30]). The works based on computational contact mechanics using 3D finite element technology, e.g., Bachschmid et al. [31], Yamashita et al. [32], Pešek et al. [33][34][35], and Drozdowski et al. [36], solve this problem as a fully coupled mechanical system with nonholonomic constraints. Friction is described therein by the general Coulomb law, where the friction coefficient is a function of the relative velocity and the quality of the surfaces. The contact forces are computed by, e.g., the penalty, Lagrangian, or augmented Lagrangian methods.
These methods are usable for general dynamic excitation with smooth and nonsmooth contact surfaces. This solution is time-consuming, leading to high-performance computing (HPC), but it is straightforward in the contact description for 3D body motions, considering variable contact states in space and time. And with the rapidly increasing performance of computers, i.e., the number of cores and the speed of processors, it becomes more and more widely feasible. Due to possible space and time discretization inaccuracies and numerical errors, an experimental validation is still needed. Nevertheless, it brings new promising possibilities for solving the dynamics of bladed wheels in addition to the simplified models. The study deals with our latest research aimed at dry friction damping in the tie-boss contacts (Pešek et al. [37][38][39]), where bladed wheel systems are described numerically by the 3D finite element method with a dynamic contact problem. The works [37][38][39] present the results of this solution at its early stage.
This study presents newly achieved numerical and experimental results of the wheel dynamics for larger amplitudes. It provides new information about the accuracy of the method when the blades reach more dangerous amplitudes of vibration.
Smooth surfaces with high contact stiffness are assumed. Due to the spatially different deformations of the blades during oscillation, there are spatial relative movements of the contact surfaces, creating time-varying contact states in terms of contact area location and contact normal force. The decisive factor for the suppression of dangerous vibrations is the frictional damping after the transition from microslip (stick-slip) contact motions to macroslips, when slip states prevail. As proven experimentally, with macroslips in the contacts the dynamic behavior of the blades approaches the behavior of blades with open contacts, and conversely, under microslips their behavior approaches the state of blades with bonded contacts. In the latter case, the accuracy of the results is very sensitive to the contact modelling, since it depends on precise computation of the elastic deformations in the contacts (Segalman et al. [40]); this case is not considered herein.
For the experimental study of the topic and validation of the numerical results, a physical model of the bladed disk with interblade tie-boss contacts was designed and manufactured. For the numerical analysis, the corresponding three-dimensional FE model with friction contacts was developed in the program ANSYS. The numerical model is built from hexahedral structural finite elements. As for the contacts, the "surface-to-surface" dry friction contact model is applied, and the augmented Lagrangian method is used to compute the contact normal pressures and friction stresses. The friction coupling is modelled by the isotropic Coulomb law.
This solution, however, leads to high-performance computing (HPC). Therefore, we used the supercomputer Salomon at the IT4Innovations centre, with 2 PFLOPS Rpeak, using 24 processors on each of 5 nodes. The long computational time is caused especially by the number of iterations in each integration time step over a large number of nonlinear couplings (30 contact pairs in our case) between the discretized contact surfaces. Due to the nonlinear solution of the dynamics, and therefore the long computation time, a computational strategy for more effective damping evaluation was proposed.
In the study, first, the physical model of the bladed wheel and its numerical discretization are described. Second, the linear numerical model of the bladed wheel is validated by modal analysis. Furthermore, the description and results of the experimental rotary tests used for comparison with the numerical results are presented. We focused on two critical wheel speeds with different modes of vibration, i.e., modes with 2 and 6 nodal diameters (ND). The numerical modelling of the contacts and the computational strategy follow, and finally, the results of the calculation and their comparison with the rotation experiment are discussed.
Blade Model Design and FE Discretization.
The model disk is equipped with 30 prismatic blades. Figure 1 shows the design of the bladed wheel with "tie-boss" couplings and additional weights. Each blade is fixed to the disk by a system of two small finger consoles. The bottom console is bolted to the disk, and the upper console is bolted to the blade. The consoles are set to a mutual position at a 45° angle before being bolted together. At the tip of each blade, an additional mass is bolted. Each blade oscillates flexurally in the plane α_0 perpendicular to the plane of the blade. The tie-bosses are shoulders whose ends are in contact with those of the neighboring blades. The shoulders of the tie-bosses have the same length, and they are placed at a wheel radius of 0.44 m. To allow the blades to move freely during flexural vibrations, the ends of the tie-boss shoulders were designed to lie at a certain angle. The definition of the cutting planes β_A and β_B of the two contacts A and B is shown schematically in Figure 2. The axes ζ_A and ζ_B are radials lying in the middle plane of the wheel, starting from the centre of the wheel and passing through the middle between the blades; they are therefore 12° apart. If we consider the auxiliary planes α_A and α_B perpendicular to the plane of the wheel and passing through the radials ζ_A and ζ_B, then by rotating them by an angle of 45° around the axes ζ_A and ζ_B, we obtain the cutting planes β_A and β_B. To allow setting up of the contact surfaces between the tie-bosses of neighboring blades, each tie-boss consists of extensible shoulders screwed with a left-hand (right-side shoulder) and right-hand (left-side shoulder) thread into a suspension bolt that is fixed to the blade by two nuts. By screwing the bolt within the nuts, the shoulders extend simultaneously on both sides. The three-dimensional FE model of the bladed wheel with tie-bosses (Figure 3(a)) was developed in the program ANSYS 19.3. In total, 89,000 hexahedral eight-node SOLID185 elements were applied for the blade and disk discretization. A detail of the mesh at the tips of blades B_A, B_B, and B_C is depicted in Figure 3(b). For evaluation of the relative motions of the contact surfaces, two contact pairs A and B were chosen, with the node pairs N167179, N166940 and N165136, N32610, respectively, lying in the middle of each contact surface (Figure 3). To validate the linear dynamic behavior of the bladed wheel, the contact surfaces of neighboring tie-bosses were set to the OPEN state. The wheel was clamped at the hub for the dynamic calculations. The global reference system x, y, z of the model coordinates is shown in Figure 3(a).
Test Rig Set-Up.
Experimental tests under rotation were performed on the rotary test stand of the Institute of Thermomechanics (Figure 4). The model wheel is driven by a three-phase synchronous ABB engine (10 kW) supplied with current from the frequency converter ACSM1. The scheme of the rig, with the denotation of the components of the wheel, excitation, and measurement, is shown graphically in Figure 5. The rotating bladed wheel is excited by eight electromagnets EM1-EM8 distributed along the circumference of the wheel. A strain gauge (SG_1) was glued on blade L1 for measurement of the blade vibration. The absolute encoder ECN1313 (512 lines) was used for blade position and angular speed detection. The electromagnets were grouped into pairs (EM1, EM5), (EM2, EM6), (EM3, EM7), and (EM4, EM8) for a more uniform distribution of the excitation on the wheel. Algorithms for electromagnetic excitation synchronized with the revolution were developed in a Simulink program for the real-time control system dSPACE. The normal contact force, of magnitude 2 N, was realized by prestressed rubber-band springs.
Modal Analysis of Full Bladed Disk.
To tune the material parameters of the numerical model, an experimental modal analysis of the full bladed disk was performed for open contacts in a steady, non-rotating state. The PULSE system (Brüel & Kjær) and the MEscope software (Vibrant Technology) were used for measurement and analysis (Table 1). The experimental SIMO modal analysis was evaluated from the axial responses of all blades under swept-sine excitation of the wheel. The eigenfrequencies were classified according to the number of nodal diameters (ND) of the associated eigenmodes.
As can be seen from the table, pairs of very close eigenfrequencies appear for each ND eigenmode. This corresponds to the split double eigenfrequencies of the eigenmodes of rotational bodies with slightly disturbed symmetry. The numerical model with open contacts (Figure 3) was tuned to the experimental results of the modal analysis. The eigenfrequencies of modes from 2ND up to 6ND from both the numerical and experimental modal analyses are presented in Table 1 and show good agreement. The eigenfrequencies of the numerical open-contact model increase monotonically with the number of ND, and their values converge to a limit eigenfrequency of 50.3 Hz for the 15ND mode, which is close to the first flexural mode of a clamped blade. [Figure 1 is reproduced from [37].]
Since the open contact model approximates the "eigenfrequency" of the wheel at macroslips, the validation of the numerical modal model is a necessary step before transient nonlinear calculations with dry friction contacts.
Tests under Revolution.
To determine the resonant states of the wheel model under revolution, Campbell diagrams were evaluated from the strain gauge signal of one blade (L1). The diagram in Figure 6 was obtained for the bladed wheel with prestressed contacts in the tie-bosses. This colored map of amplitude-frequency dependences on revolutions was recorded with the automated data acquisition system PULSE (B&K) during a slow run-up (60 to 450 rpm in 250 s) of the driving engine. The excitation was realized by the eight electromagnets as mentioned above. The sloping lines of vibration at the revolution frequency and its orders, i.e., the engine order lines, are visible in the diagram. The engine order lines were generated mainly by the revolution-dependent excitation by the electromagnets and by deflections of the blades from imbalances and gravitational forces. The highest amplitudes along these lines (red spots) are achieved at so-called critical speeds, when these lines cross a vertical branch of the flexural vibration of the wheel. The critical speeds (rpm) can be easily calculated as

n_crit = 60 f_r / (p o),   (1)

where p = 4 × 2 = 8 is the number of electromagnetic pulses per revolution (twice the number of electromagnet pairs), o is the order number of the pulses, and f_r is the resonant frequency (Hz) of the wheel.
For the evaluation of dry friction damping from free attenuation, we identified three critical speeds in the Campbell diagram, i.e., 123 (o = 3), 192 (o = 2), and 334 (o = 1) rpm; knowing a resonant critical speed, we can calculate back the resonant frequency by equation (1). For the evaluation of dry friction damping at different vibration amplitude levels, we chose the critical speeds at 123 and 334 rpm (Table 2). Since the eigenfrequencies of the wheel change only very slightly (by up to 1.5 Hz) with the revolution speed, we can identify the mode of vibration, i.e., the number of ND, at each critical speed by association with the eigenfrequencies in Table 1.
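As a quick consistency check of the reconstructed relation (1), the resonant frequencies can be back-calculated from the three identified critical speeds. The order o = 1 for the 334 rpm resonance is our reading of the garbled source text, so the snippet below is a sketch under that assumption.

```python
# Back-calculation of resonant frequencies from critical speeds via the
# inverse of equation (1): f_r = n_crit * p * o / 60,
# with p = 8 electromagnetic pulses per revolution.
p = 8
for n_crit, o in [(123, 3), (192, 2), (334, 1)]:
    f_r = n_crit * p * o / 60.0
    print(f"n_crit = {n_crit:3d} rpm, order o = {o}: f_r = {f_r:.1f} Hz")
```

The resulting 49.2, 51.2, and 44.5 Hz values fall within the band of the measured eigenfrequencies (which converge towards 50.3 Hz), supporting the reconstruction.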
In the rotary test, the electromagnetic excitation pulled the rotating wheel into resonance. Then the excitation was switched off, and the frequencies and damping were evaluated from the amplitude envelope of the free attenuation. The displacement courses of blade L1 and the excitation forces of all pairs of electromagnet pulses were recorded. The records for the wheel with open contacts, mode 2ND at 331 rpm, are plotted in Figure 7, with the displacement of blade L1 at the top and the electromagnetic forces F_emIJ (see legend) at the bottom. The indexes I and J designate electromagnets interconnected in pairs, and the first index I determines when the electromagnetic excitation is triggered; e.g., both 15 and 51 belong to the same pair (EM1 and EM5), but the excitation comes when blade L1 passes EM1 in case 15 and when L1 passes EM5 in case 51. It can be seen that the electromagnetic pulses are active during the resonant vibration period and are switched off during the attenuation period. The green line denotes the envelope of the vibration in the time response of blade L1. The blade response is almost constant during the resonant period and decreases slowly during the attenuation period, which proves the low material and aerodynamic damping at this revolution speed. The cases with prestressed contacts (normal contact force 2 N) and different levels of resonant excitation amplitude are plotted for speeds 334 rpm (tests 1-3, Table 2) and 123 rpm (tests 4-6, Table 2) in Figures 8 and 9, respectively, in the same way as in Figure 7. The fast decrease of the envelope after the resonant period shows the strong effect of dry friction damping on the vibration attenuation. These results were used for the evaluation of the dry friction damping effect at different excitation amplitudes and for comparison with the numerical results. [Figure 6 is reproduced from [37].]
3D FE Model with Surface-to-Surface Contacts.
The topology of the 3D FE discretization of the wheel is described in the previous chapter. Contacts between the blades were modelled by a "surface-to-surface" method using pairs of contact surface elements TARGE170 and CONTA174 placed against each other on the contact lateral ends of the tie-boss shoulders, making 30 contact pairs in the wheel. To detect contact points, the "pinball" search algorithm was used. The augmented Lagrangian method was used to compute the contact normal pressures and frictional stresses. The contact surface behavior is modelled as standard unilateral contact, i.e., the normal pressure equals zero if separation occurs. The friction coupling was modelled by the isotropic Coulomb law. If the friction stress τ does not exceed the limit friction stress, τ < τ_lim = μ_s p (where μ_s is the static friction coefficient and p is the normal pressure), the contact is in the "sticking" state. In this state there is zero sliding between the contact surfaces, and only elastic deformations x_t occur. The contact stiffness k_t is automatically calculated by the program according to the local stiffness of the contact areas. This stiffness estimate was a good fit for our study case, i.e., macroslip movements and smooth contact surfaces, since it was not necessary to update its value to improve agreement with the experiments.
After the equivalent friction stress exceeds the limit, "sliding" of the contact surfaces appears. The size and direction of this sliding are evaluated by the sliding rule using a so-called potential of friction flow. For the description of the friction coefficient μ, the following dependence on the relative velocity v_rel is considered:

μ(v_rel) = μ_d [1 + (FACT − 1) exp(−DC·v_rel)],   (2)

where FACT = μ_s/μ_d, μ_d is the dynamic friction coefficient, and DC is the decay coefficient. The values μ_s = 0.4, μ_d = 0.2, and DC = 4 were used for the computation of the dynamics of the wheel, as shown in Figure 10. The prestress in the contacts was modelled by a contact surface offset of 1e−5 m, with a resulting normal contact force of 2.4 N. The setting of the boundary and initial conditions for the transient analysis of the wheel is described in the next chapter.
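The velocity dependence in equation (2) can be sketched as follows; this is a minimal illustration of the exponential-decay friction law with the parameter values quoted above, not an excerpt from the actual ANSYS input.

```python
import numpy as np

mu_s, mu_d, DC = 0.4, 0.2, 4.0
FACT = mu_s / mu_d  # = 2.0

def mu(v_rel):
    """Friction coefficient decaying exponentially from the static value
    mu_s at v_rel = 0 towards the dynamic value mu_d at large sliding
    velocities, as in equation (2)."""
    return mu_d * (1.0 + (FACT - 1.0) * np.exp(-DC * np.abs(v_rel)))

for v in (0.0, 0.1, 0.5, 1.0):
    print(f"v_rel = {v:4.1f} m/s -> mu = {mu(v):.3f}")
```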
A full solver for the unsymmetric task solved the transient responses with the Newmark integration method and a time step of 5e−6 s. The Newmark parameter γ = 0.5 was set for numerical stabilization reasons. A damping ratio of 0.1% was imposed to represent the steel material and other construction damping.
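For orientation, the following sketch shows Newmark time stepping with γ = 0.5 and β = 0.25 (average acceleration) applied to a single linear degree of freedom in free decay. It is a stand-in for intuition only: the actual solution is a large nonlinear contact problem solved by ANSYS, and the oscillator parameters below are illustrative assumptions.

```python
import numpy as np

def newmark_free_decay(m, c, k, x0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    """Newmark time integration of m*x'' + c*x' + k*x = 0, started from a
    prescribed initial displacement (the 'predefined initial condition'
    strategy) with no external forcing."""
    x = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    x[0], v[0] = x0, v0
    a = -(c * v0 + k * x0) / m  # consistent initial acceleration
    k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + k
    for n in range(n_steps):
        rhs = (m * (x[n] / (beta * dt**2) + v[n] / (beta * dt)
                    + (0.5 / beta - 1.0) * a)
               + c * (gamma * x[n] / (beta * dt)
                      + (gamma / beta - 1.0) * v[n]
                      + dt * (0.5 * gamma / beta - 1.0) * a))
        x[n + 1] = rhs / k_eff
        a_new = ((x[n + 1] - x[n]) / (beta * dt**2)
                 - v[n] / (beta * dt) - (0.5 / beta - 1.0) * a)
        v[n + 1] = v[n] + dt * ((1.0 - gamma) * a + gamma * a_new)
        a = a_new
    return x, v

# Example: a 44.5 Hz oscillator with a 0.1 % damping ratio, dt = 5e-6 s
w = 2.0 * np.pi * 44.5
x, v = newmark_free_decay(m=1.0, c=2.0 * 0.001 * w, k=w**2,
                          x0=1e-3, v0=0.0, dt=5e-6, n_steps=20_000)
print(f"displacement after 0.1 s: {x[-1]:.2e} m")
```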
HPC Computational Strategy
Due to the long computation time of the nonlinear transient solution of the wheel response, we had to adopt a computational strategy for the damping evaluation. To eliminate the long resonant run-up to the steady state before free attenuation, as performed in the experiments, the resonant state was defined directly by the initial conditions, i.e., the displacements of the wheel were set to a chosen eigenmode shape with zero velocities. The term "free" means that the dynamic response used for damping evaluation deals with an unforced vibration attenuation. The applied computational strategy can be divided into three steps. An integration period of 0.1 s (corresponding to 20,000 integration steps) of the vibration attenuation computed by ANSYS parallel computing required about 40,000 core-hours of computational resources on the supercomputer Salomon. If the resonant run-up were included, the task requirements would be several times larger.
Numerical Results.
Due to the HPC time-demanding computations of the transient responses of the wheel, we aimed at study cases for which experimental data were available and could be directly compared with the experiment. Namely, we dealt with the two resonant attenuations for the 2ND and 6ND modes of vibration. The 2ND and 6ND modes with open contacts were precomputed for the creation of the excitation vector. From the distribution of displacements of the modal vector, force vectors Fx, Fy, and Fz with an affinity to the mode shape, denoted by red vectors in Figures 11(a) and 12(a), respectively, were added at selected nodes of the wheel. In our case, the force vectors were specified at two nodes of the tie-boss prism of each blade around the circumference. The total static displacements (m) of the wheel computed for the 2ND and 6ND modal force distributions are shown in Figures 11(b) and 12(b), respectively. The ascertained deformation shapes were used as initial displacement conditions for the subsequent transient analysis.
Since the attenuation is monotonic and the mode of vibration holds during the whole calculation period, the response of the wheel is presented here as time histories of the displacements of selected nodes of the target-contact pairs of contact A (Figures 13(a), 13(b), 14(a), and 14(b)) and contact B (Figures 13(c), 13(d), 14(c), and 14(d)) denoted in Figure 3(b). Amplitude envelopes are plotted in red (upper envelope) and green (bottom envelope) for the displacements of node N165136. One can see an almost linear decrease of the amplitudes, typical for dry friction damping modelled by Coulomb's law. To show the complex relative motion of the contact surfaces during vibration in the 2ND and 6ND modes, trajectories of blade motions at the same contact-target pair nodes as before are shown in Figures 15 and 16, respectively. The trajectories correspond to one vibration period, defined by the blue frames in Figures 13 and 14. The graphs show that there is nonparallel relative motion of the contact surfaces, which causes variable contact states as to the localization of the contact areas and the value of the contact normal stresses during vibration. As to the localization of the contact areas, edge contacts arise very often, as shown in a detail of three blades for one selected integration time, i.e., 0.096745 s for the 2ND mode (Figure 17) and 0.035995 s for the 6ND mode (Figure 18). The contact state picture shows which areas are in sliding contact (sliding) and which are open (near-contact state).
Comparison with Experiment.
To compare the numerical results with the experimental data, we inserted all results into the aggregate graph of amplitude attenuations (Figure 19). The envelopes corresponding to the resonant attenuation evaluated from experiment are denoted by light blue for the 2ND mode and violet for the 6ND mode. There are three tests (tests 1-3) distinguished by the amplitude level of the resonant attenuation for each mode. The numerical FE envelopes are denoted by blue and red lines for the 2ND and 6ND modes (upper envelope from Figure 14(c)), respectively. In the case of 2ND, there were two separate tasks with (A) higher (upper envelope from Figure 13(c)) and (B) lower initial amplitudes (Pešek et al. [39]); therefore, the black line is interrupted. For a better comparison, the numerical envelopes were shifted in time and pinned to the experimental data, and thus do not start from time zero. Marks on the lines denote the peaks of the amplitude of the harmonic attenuation. The graph shows that the numerical results match the experimental data very well in the range of the computed displacements (up to 0.5 mm) of the blades. The higher slope of the 6ND envelopes compared with the 2ND ones indicates that a higher damping effect can be expected for the mode with the higher number of ND at comparable absolute displacements, due to the higher relative motions in the contacts. The damping ratios evaluated from the logarithmic decrement of the FEM envelopes yield 1.2% for 2ND and 2.2% for 6ND. The experimental data show a more pronounced dependence of the amplitude decay on higher absolute amplitudes (above 0.5 mm). This indicates a slightly stronger nonlinear effect at the friction contacts. It may be caused, e.g., by a different character of the friction coefficient at higher relative velocities or by more nonlinear contact behavior of the experimental model due to production inaccuracies. [Figure 10 is reproduced from [37].]
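The damping-ratio evaluation from the envelopes amounts to taking the logarithmic decrement of successive peaks. A minimal sketch is given below, with an artificial peak series decaying at roughly the quoted 2ND rate; the numbers are illustrative, not the measured records.

```python
import numpy as np

def damping_ratio(peaks):
    """Damping ratio from the logarithmic decrement of successive
    positive peaks of a free-decay record (light-damping approximation
    zeta = delta / (2*pi))."""
    peaks = np.asarray(peaks, dtype=float)
    delta = np.log(peaks[:-1] / peaks[1:])  # per-cycle decrement
    return float(np.mean(delta)) / (2.0 * np.pi)

# Artificial envelope peaks decaying at the quoted 2ND-mode rate (1.2 %)
n = np.arange(20)
peaks = 0.5e-3 * np.exp(-2.0 * np.pi * 0.012 * n)
print(f"zeta = {damping_ratio(peaks):.4f}")  # -> 0.0120
```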
Conclusion
In this study, the dry friction interblade damping of a bladed disk was solved by a three-dimensional FE model with a surface-to-surface dry friction contact model. This made it possible to model time- and space-variant contact states (spot and edge contacts) and to analyze the dry friction effect in such geometrically complex structures as a bladed disk with interblade contacts. The damping study aimed at the case in which sliding states prevail in the contact (macroslip motion), which can occur at larger blade displacements. This solution, however, due to the nonlinear solution of the dynamics, leads to time-consuming computations. Thanks to high-performance computing (HPC) facilities, these tasks are nowadays computationally more feasible. Our computations were performed using 24 processors on each of 5 nodes of the supercomputer Salomon at the IT4Innovations centre, with 2 PFLOPS Rpeak. Due to the nonlinear solution of the dynamics, and therefore long computation times, a computational strategy for more effective damping evaluation was proposed. Using our computational strategy for damping evaluation, in which the damping effect is evaluated from the solution of free attenuation starting from predefined initial conditions, the net computational time of the 6ND mode case was lowered to 7 days. To compare and validate the numerical results experimentally, a physical model of the disk with interblade tie-boss contacts was built and tested on the rig under revolution. It was shown that the numerical results match the experimental data very well in the range of the computed displacements and modes of vibration of the blades. The study yields very valuable results and shows that the proposed numerical 3D modelling of the bladed disk with dynamic surface-to-surface contacts is promising for the assessment of the damping behavior of bladed wheels when adhesion in the contacts is exceeded and the contacts get into macroslip relative motion. It could help to design more effective interblade damping couplings, regarding their placement, mass and stiffness distribution, and the tilting of the contact areas, with respect to the dangerous excitation of the wheel vibration modes.
Data Availability

The data and information used for the study were generated from the cited literature or from our own resources, and it is described herein how the data were obtained. More information can be obtained directly upon request to the authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments

This work was supported by the research project of the Czech Science Foundation "Study of dynamic stall flutter instabilities and their consequences in turbomachinery application by mathematical, numerical, and experimental methods" (20-26779S). The HPC calculations were supported by the Czech Ministry of Education, Youth, and Sports from the Large Infrastructures for Research, Experimental Development, and Innovations project IT4Innovations National Supercomputing Center (LM2015070).
"Materials Science"
] |
Thermal Performance of Double-Sided Metal Core PCBs
Thermal management in printed circuit boards is becoming increasingly important as the use of LEDs is now widespread across all industries. Due to the availability of the preferred electronic LED current drivers and the system constraints of a machine-vision application, the design dictated the need for a double-sided metal core printed circuit board (MCPCB). However, design information for this relatively new MCPCB offering is sparse to non-existent. To fill in this missing information in the literature, experiments were conducted in which LEDs were arranged on a double-sided MCPCB and their impact on the board temperature distribution was tested in a static fan-less configuration under two conditions: at room temperature, 23 °C, and in a heated environment, 40 °C. Two MCPCB orientations were tested (vertical and horizontal). Additionally, several LED arrangements on the MCPCB were configured, and temperatures were measured using a thermocouple as well as a deep-infrared thermal imaging camera. Maximum temperatures were found to be 65.3 °C for the room-temperature tests and 96.4 °C for the heated tests, with high temperatures found in close proximity to the heat sources (LEDs), indicating less than ideal heat conduction/dissipation by the MCPCB. The results indicate that the double-sided MCPCB topology is not efficient for highly thermally loaded systems, especially when the target is a fan-less system; for fan-less systems requiring high-performance heat transfer, these new MCPCBs are not a suitable design alternative, and designers should instead stay with the more traditional single-sided metal-back PCB.
Introduction and Overview of the Research and Results of System Performance That This Technical Note Supports
Design information for a new, thermally efficient printed circuit board (PCB) has recently been offered by PCB board houses. As PCBs are now a commodity, off-the-shelf order available from most PCB manufacturers, designers have little to no control over the manufacturing process; control is basically limited to component placement and the types of component packages mounted onto the PCB, per the normal PCB manufacturing design process. A recent new offering from PCB board houses is the metal-core PCB (MCPCB), for which the manufacturers claim high-performance thermal heat-transfer properties and which they promote as a new option to consider where a design must support components mounted on both sides of the PCB. Given the glowing recommendations from the PCB board houses, our initial impression was very favorable and led the authors to invest significant engineering resources into pursuing the use of the MCPCB for one of their cotton contamination detection systems. The environment dictated that a fan-less design would be optimal, given the high dust levels, and due to the elevated ambient temperatures in these factories during the summer, the light-emitting-diode (LED) board required by the design would face significant thermal loading. These new MCPCBs are promoted by the board houses as thermally efficient boards formed around a metal core that acts as a built-in thermal heat sink. While in theory this is logical and persuasive, there is little to no design information available from the PCB board houses or in the published literature on double-sided metal-core PCBs. This technical note details the thermal testing of a highly thermally loaded electronic design that was developed for the lighting sub-system of a machine-vision system targeted for use in an online real-time plastic contamination removal system for cotton gins.
LEDs are used across all industries in lighting, signaling, and various other applications. Machine vision technologies typically use high-power LEDs to provide maximum contrast on features of interest [1]. However, the high-power LEDs used in applications such as this also produce a large amount of thermal energy. This thermal energy must be transferred out of the printed circuit board (PCB) in order to maintain safe operating conditions for the LEDs and other board components. Because the life of the LED is directly affected by the temperature at the LED's junction, between the silicon die and the LED's component housing, ensuring adequate heat transfer away from the LED is vital to maximizing the lifespan of these components [2]. Increased temperatures affect not only the lifespan of the LED but also its performance. The luminous efficacy of the LED decreases by 5 percent for every 10 °C rise in operating temperature [3]. The color of the light can also change as the temperature rises, which would impact applications with strict lighting requirements [4], such as machine-vision applications. Typical PCB materials, such as FR4 (the NEMA grade designation for a glass-reinforced epoxy laminate material) and polyimide, have low thermal conductivities and do not sufficiently transfer heat [5]. One of the main ways of achieving adequate thermal management is the use of a metal-core printed circuit board (MCPCB), in which the metal core has a high thermal conductivity that readily transfers heat away from the sourcing component [6].
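The consequence of the quoted 5% efficacy loss per 10 °C can be sketched with a simple junction-temperature model. Both the dissipated power and the junction-to-board thermal resistance below are illustrative assumptions, not datasheet values for the LEDs used in this work.

```python
def junction_temp(t_board, p_dissipated, r_th_jb=8.0):
    """Junction temperature from the local board temperature and the
    LED's dissipated power; r_th_jb (K/W) is an assumed illustrative
    junction-to-board thermal resistance."""
    return t_board + r_th_jb * p_dissipated

def relative_efficacy(t_junction, t_ref=25.0):
    """Luminous efficacy relative to t_ref, compounding the ~5 % loss
    per 10 degC rise quoted in the text."""
    return 0.95 ** ((t_junction - t_ref) / 10.0)

for t_board in (23.0, 65.3, 96.4):  # ambient and measured hot spots
    tj = junction_temp(t_board, p_dissipated=3.0)
    print(f"board {t_board:5.1f} degC -> Tj ~ {tj:5.1f} degC, "
          f"efficacy ~ {100.0 * relative_efficacy(tj):.0f} % of reference")
```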
Numerous studies have been done regarding LEDs and heat transfer in single-sided MCPCBs. In this application, the exposed aluminum side of the MCPCB is typically mounted to a finned heat sink to help dissipate the LED-generated heat. This approach, however, imposes the constraint that only surface-mount components can be used. The literature reports that the LED arrangement affects the adequacy of heat transfer in single-sided MCPCBs [7]. In addition to LED arrangement, wire patterns have also been shown to affect the thermal behavior of PCBs [8]. The advantageous effects of the high thermal conductivities of the metals used in these single-sided MCPCBs have also been explored [9]. Algorithms for optimal heat sink placement for surface-mount LED applications in single-sided MCPCBs have been developed [10]. Heat sink designs such as heat pipes [11] and finned LED holders [12] have been tested. The effects of forced convection via fans on single-sided MCPCBs have been analyzed [13]. Thermal management design criteria for LED arrays have been established [14]. In addition to experimentation, heat transfer in single-sided MCPCBs has been simulated using CFD packages [15]. However, most if not all of the current research uses a single-sided MCPCB stack-up similar to that shown in Figure 1a, a style limited to surface-mount components only. This typical single-sided stack-up allows heat sinks to be attached to the highly conductive metal-core bottom layer, which allows for good thermal management. A recent report utilizing computational fluid-dynamic analysis for single-sided LED thermal design examined various shapes of aluminum heat sink for 60 LEDs mounted in a grid pattern [16]. Other notable fan-less passive convective cooling research for single-sided LED thermal designs reported an experimentally confirmed computational fluid-dynamic design that found significantly better heat-transfer performance (2.2 °C W⁻¹) for a vertical hollow cylinder versus a vertical chimney design [17]. That research found the typical temperature distribution to vary by 5-10 °C, with a maximum temperature of 55-70 °C, the rise being dependent upon the heat sink design. Additional notable work on passive natural-convection heat dissipation for rectangular finned heat sinks of various configurations has been reported [18]; blockage of the natural convective flow was found to be one of the dominant factors deteriorating heat transfer in rectangular fin designs. While the literature clearly indicates that effective thermal management is possible for fan-less single-sided PCB LED designs utilizing heat sinks, the single-sided design is limiting with regard to circuit design. To alleviate this, a new MCPCB design has recently become available; while the double-sided stack-up of these new MCPCBs (Figure 1b) provides the advantage of double-sided component placement, there is currently a lack of engineering thermal design information and literature regarding this double-sided stack-up. Hence, questions about the adequacy of heat transfer using this style of board remain unanswered for this new configuration, in which the metal core is insulated and there is no easy means to attach a heat sink to the inner metal core. In addition to being unable to attach a heat sink, the intended application for this board does not allow for the use of fans, due to deployment into a high-dust environment.
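The thermal handicap of the double-sided stack-up can be estimated with a one-dimensional conduction model. The layer thicknesses, conductivities, and pad area below are typical assumed values, not measurements of the boards tested here.

```python
def r_cond(thickness_m, k_w_mk, area_m2):
    """One-dimensional through-plane conduction resistance R = t/(k*A), in K/W."""
    return thickness_m / (k_w_mk * area_m2)

pad = (5e-3) ** 2  # assumed 5 mm x 5 mm LED thermal footprint

# Single-sided MCPCB: thin thermally conductive dielectric onto the core
r_single = r_cond(100e-6, 2.0, pad)

# Double-sided MCPCB: ordinary FR4 (k ~ 0.3 W/m-K) insulates the core on
# both sides, so heat crosses FR4 into the core and again to get back out
r_fr4 = r_cond(200e-6, 0.3, pad)

print(f"single-sided, pad -> core     : {r_single:5.1f} K/W")
print(f"double-sided, pad -> core     : {r_fr4:5.1f} K/W")
print(f"double-sided, pad -> far side : {2 * r_fr4:5.1f} K/W")
```

Under these assumptions the FR4-insulated path is more than an order of magnitude worse, which is consistent with the highly localized hot spots reported in the tests below.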
This research explores the new double-sided MCPCB stack-up option, which allows for thru-hole component placements, as shown in Figure 1b, using a set of experiments. In this report, we present a thermal analysis of experimental data taken for seven different LED arrangements on a double-sided MCPCB.
Materials
The materials used in the experiment are listed in Table 1. The schematic and PCB board artwork files are listed in Table 2.
Location of LED and LED Drivers
The layout of the LEDs is shown in Figure 2a, and the board schematic is shown in Figure 2b. The experimental protocol utilized the convention shown in Table 3 to refer to specific test configurations. [Circuit traces shown in red are on the top layer; traces in blue are on the bottom layer.]
Experimental Methods
Two types of tests were performed during the experiment: tests with the device in a 23 °C room (the room-temperature test) and tests with the device in a heated and well-insulated environment at 40 °C (the heated test). An insulated chamber was heated using hot soil to provide a stable and consistent steady-state heat source (HS) with a thermally stable temperature. The procedure for the room-temperature test is shown below.
The experimental protocol at ambient room temperature was as follows:
1. Set the power supply to 18 V [turned off].
2. Connect the power supply leads to the active LEDs for the given test.
3. Using electrically isolated tape, secure the thermocouple temperature probe to the PCB near one of the LEDs that will be on [to verify that the emissivity settings used for the thermal-image temperature readings on the IR imaging camera are correct].
4. Turn the power supply on, then:
   a. At 1-min intervals, capture a thermal image of the board using the FLIR camera and record the thermocouple temperature reading.
   b. Repeat step a until the board temperature stabilizes.
The procedure for the heated tests was similar, with the exception that the PCB was placed into an insulated chamber with a large-mass HS. The heated tests were done for vertical and horizontal board orientations with a target temperature of 40 °C.
Room Temperature Tests
The results of the ambient room-temperature tests are shown in Figure 3. Of interest for design purposes, in the transects shown in Figure 5, are the rapid drops in temperature with increasing distance from the LED heat source. This lack of heat transmission by the double-sided MCPCB results in highly localized hotspots on the circuit board, which will likely lead to component failure. As the metal core in the MCPCB is surrounded by insulating FR4 layers, there is no convenient way to mount a heat sink that could be used to augment thermal heat extraction. As such, this experiment shows the limitations of the double-sided MCPCB design.
Heated Tests
The results of the transient response for the configuration 1 heated-room tests are detailed in Figure 6. An additional heated-room test, Horizontal Test 2, was done for the horizontal orientation after the first two configuration 1 tests. The lower temperature results obtained for Test 2 showed that the power output of the LEDs was significantly lower than in the first tests, indicating component failure due to the previous heated tests. As the target deployment room temperature for the design, 40 °C, was the same as the heated-room test temperature, these results provide conclusive evidence that this design would not be successful for an industrial deployment of a similarly configured system. A thermal infrared temperature image captured at the final time value of the heated-room test for configuration 1 is shown in Figure 7, and a plot of the infrared image temperatures across the transects is shown in Figure 8. Of note again are the highly localized hotspots near the LED heat sources, indicating the insufficiency of the double-sided MCPCB in off-loading heat from the LED sources to the cooler regions of the MCPCB. A thermal infrared temperature image captured for the heated-room test, configuration 1, at the final time value (PCB in vertical orientation) is shown in Figure 9. Of note are the highly localized hotspots, again indicating poor heat transfer by the inner metal core of the double-sided MCPCB. A plot of temperatures across the transects for this test is shown in Figure 10. In all the tests explored in this research, the double-sided MCPCB was not effective at moving heat away from the LED heat sources to the cooler parts of the PCB. As such, with only a minor rise in room temperature, the junction temperatures of the LEDs were driven to near maximum, such that they began to undergo thermal failure. Given these results, unless some method is developed by which to couple the inner metal core of the double-sided MCPCB to an external heat sink, the use of this new topology is not recommended for highly localized heat-source applications such as LED light boards for machine-vision systems.
Summary
For an LED light board utilizing a double-sided metal-core printed circuit board (MCPCB) that would be constrained to operate only at or below standard room temperature, 20 °C, the experimental tests show that a maximum temperature of 65.3 °C was reached when all three LED groups were on, reaching steady state after 12 min; this would therefore provide an acceptable level of performance, as this maximum was well below the maximum operating junction temperature of the LEDs. However, for deployments into industrial environments, such as the intended target environment, where room temperatures are likely to approach or exceed 40 °C, the experimental tests in the heated room yielded a maximum temperature of 96.4 °C in the horizontal orientation. The maximum temperature in the vertical orientation for the heated test was only slightly lower, at 85.2 °C. As the measured hottest spot on the PCB is lower than the LED junction temperature, these elevated temperatures are likely unsafe from a component-life perspective. Further evidence of unsuitability was provided by subsequent tests in the heated room, whose results were already showing thermal degradation of the components.
In examining the heat distribution via thermal transects taken using thermal infrared imaging, the double-sided MCPCB was found to perform poorly with regard to dissipation of the heat generated at the LEDs, leading to large non-uniformity in temperatures and elevated temperatures near the heat-generating LEDs. In summary, the double-sided MCPCBs were not as effective at thermal management as the well-proven single-layer metal-backed PCB stack. Although the temperatures present in the room-temperature tests were well below failure temperatures, the temperatures reached with only a modest room-temperature elevation of 20 °C resulted in component failure. As such, it is recommended that most applications avoid the use of the double-sided MCPCB and, instead, strive to modify design requirements such that a single-layer metal-backed PCB option would suffice. Should that not be possible, and should a double-sided MCPCB be a requirement with the design targeted at an elevated-temperature industrial environment, it is hypothesized that the elevated PCB thermal loading leading to failure might be mitigated if it were possible to attach a heat sink to the highly conductive metal core. One potential method by which this could be achieved would be the addition of numerous electrical layer-transition vias to help pipe the heat from the core out to an external heat sink. This is, however, an unproven recipe that would need to be confirmed via further experimentation and is left for future work. Another option is simply to provide a large etched area in the PCB design so that heat sinks can be applied at key locations to help improve heat dissipation. Due to the highly localized nature of the heating, placement would likely be critically important, and it should be noted that it would take up valuable PCB area. However, for these very specialized designs, these guidelines may provide a means to achieve sufficient thermal performance.
Another option for consideration, to pipe heat out from the inner metal core of the double-sided MCPCB, would be to terminate the top and/or bottom layers to allow the metal core layer to extend past the sides of the outer layers, thereby exposing the inner metal core to the air. This would also allow for the application of heat sinks to these extended, now exposed, metal-core surfaces. However, as noted in the heat transects (Figures 5, 8 and 10), this would likely be of limited effectiveness due to the high localization of the heat within the covered sections of the double-sided MCPCB. It is hypothesized that the localization is potentially due to the insulative nature of the FR4 layers enclosing the metal core, which likely traps the heat and limits its movement along the core. Thermal modeling will be explored in future work to test this theory and to help develop more successful solutions. It is suggested that the most likely best recipe for two-sided MCPCBs would be to utilize a high-density array of through-vias in close proximity to the heat-sourcing LEDs, or complete removal of an outer layer via etching, and then attach a heat sink on the opposite side to help dissipate the heat close to the source. This approach, however, needs to be proven out first and again is left for future research. In the meantime, for a fan-less system design required to run in an elevated-temperature environment, it is suggested to strive to utilize a single-layer MCPCB and, until proven otherwise, to avoid dual-sided MCPCB designs in these highly thermally loaded PCB configurations.
"Materials Science"
] |
A Brief Overview of Results about Uniqueness of the Quantization in Cosmology
The purpose of this review is to provide a brief overview of some recent conceptual developments about possible criteria to guarantee the uniqueness of the quantization in a variety of situations that are found in cosmological systems. These criteria impose some conditions on the representation of a group of physically relevant linear transformations. Generally, this group contains any existing symmetry of the spatial sections. These symmetries may or may not be sufficient for the purpose of uniqueness and may have to be complemented with other remaining symmetries that affect the time direction, or with dynamical transformations that in fact are not symmetries. We discuss the extent to which a unitary implementation of the resulting group suffices to fix the quantization, a demand that can be seen as a weaker version of the requirement of invariance. In particular, a strict invariance under certain transformations may eliminate some physically interesting possibilities in the passage to the quantum theory. This is the first review in which this unified perspective is adopted to discuss otherwise rather different uniqueness criteria proposed either in homogeneous loop quantum cosmology or in the Fock quantization of inhomogeneous cosmologies.
I. INTRODUCTION
Quantization is the process of constructing a description that incorporates the principles of Quantum Mechanics starting from a given classical system. In the present work we consider exclusively the so-called canonical quantization process. This means that the classical system can be described in canonical form (e.g., in terms of variables that form canonical pairs) and that one aims at promoting classical variables to operators in a Hilbert space, preserving the canonical structure as much as possible. The prototypical system is, of course, the phase space R^{2n}, with coordinates {(q_i, p_i), i = 1, ..., n}, equipped with the Poisson brackets {q_i, p_j} = δ_ij. The standard quantization is realized in the Hilbert space L^2(R^n) of square integrable functions, with configuration variables promoted to multiplicative operators and momentum variables acting as derivative operators: q̂_i Ψ = q_i Ψ and p̂_i Ψ = −i ∂Ψ/∂q_i. Here, and in the following, we set the reduced Planck constant equal to one. These operators satisfy the canonical commutation relations (CCRs) [q̂_i, p̂_j] = i δ_ij, thus implementing Dirac's quantization rule that Poisson brackets go to commutators [1]. This quantization is irreducible, in the sense that any operator that commutes with all the q̂_i's and p̂_i's is necessarily proportional to the identity operator. In addition, this quantization satisfies a technical continuity condition, since it provides a continuous representation of the Weyl relations associated with the CCRs (see Section II for details). It turns out that these two conditions uniquely determine the quantization. This is the celebrated Stone-von Neumann uniqueness theorem (see e.g. Ref. [2]): every irreducible representation of the CCRs coming from a continuous representation of the Weyl relations is unitarily equivalent to the aforementioned quantization, meaning that, given operators q̂′_i and p̂′_i with the above properties in a Hilbert space H, there exists a unitary operator U : H → L^2(R^n) relating the two sets of operators q̂_i, p̂_i and q̂′_i, p̂′_i (see Equation (6) below). On the other hand, it has been known since Dirac's work (and was rigorously proved by Groenewold and van Hove [3,4]) that imposing Dirac's quantization rule on a large set of observables is not viable, in the sense that it is impossible to satisfy the correspondence between Poisson brackets and commutators for a large set of classical observables, and certainly not for the whole algebra of classical observables (see Ref. [5] for a thorough discussion). Nevertheless, in any given physical system there are certainly observables of interest, other than the coordinate variables q̂_i and p̂_i, which need to be quantized as well. Also, certain canonical transformations typically stand out in a given classical system, e.g., dynamics or symmetries, and these require a proper quantum treatment as well (these two aspects are in fact related, since observables of physical interest often emerge as generators of some special groups of canonical transformations). This poses no problem as far as one considers the standard quantization of linear canonical transformations and the corresponding generators in R^{2n}, precisely because of the above uniqueness theorem. One can easily illustrate this issue by considering a linear canonical transformation (q_i, p_i) → (q′_i, p′_i) in R^{2n}, defined by the action of a symplectic matrix S on the vector of variables (q_i, p_i). Then the operators q̂′_i and p̂′_i, defined by the same action of S on the vector of operators (q̂_i, p̂_i), provide a new representation of the CCRs (in this case in the same Hilbert space) with the same properties of irreducibility and continuity.
So it is guaranteed that there exists a unitary operator U such that q̂′_i = U q̂_i U^{-1} and p̂′_i = U p̂_i U^{-1}. The unitary operator U is naturally interpreted as the quantization of the symplectic transformation S. We will also refer to it as the unitary implementation (at the quantum level) of the canonical transformation S. If instead of a single linear transformation S one has a 1-parameter group, generated e.g. by a quadratic Hamiltonian function H, one obtains a corresponding 1-parameter group of unitary operators U(t), which is typically continuous, so that a self-adjoint generator Ĥ such that U(t) = e^{−iĤt} can be extracted. Non-quadratic classical Hamiltonians of the type H = (1/2) Σ_i p_i^2 + V(q_i) do not fall into this last category, but there is nevertheless a standard and well-defined procedure to obtain their quantum version (see Ref. [6] for details), which is simply to define Ĥ as Ĥ = (1/2) Σ_i p̂_i^2 + V(q̂_i). Some ambiguities may occur in the quantization of more general functions, involving e.g. products of q's and p's, but these ambiguities are typically not too severe. These are precisely the type of ambiguities that may appear in standard homogeneous quantum cosmology (QC). In fact, in a homogeneous cosmological model, the number of both gravitational and matter degrees of freedom (DoF) is hugely reduced, ending up with just a finite number of global DoF, precisely due to homogeneity. In this so-called minisuperspace setup, the configuration variables on the gravitational side are typically given by the different scale factors, the number of which depends on the degree of anisotropy. The object of interest here is the Hamiltonian constraint, the quantization of which leads to what is often called the Wheeler-DeWitt equation ĤΨ = 0. This quantization may not be entirely trivial, owing to a possibly complicated dependence of the constraint on the basic configuration and momentum variables. Nevertheless, the ambiguity that may emerge from the quantization of the Hamiltonian constraint typically involves a choice of factor ordering in Ĥ with respect to the basic quantum operators. Although different choices may lead to different versions of the Hamiltonian constraint, it is often the case that this does not affect the physical predictions substantially, in the sense that the predictions remain qualitatively the same.
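For reference, the Weyl relations and the Stone-von Neumann statement invoked above can be written out as follows (sign conventions vary across references; this is one standard choice).

```latex
% Weyl form of the CCRs on R^{2n}, for a, b \in R^n:
W(a,b) = e^{\,i\,(a\cdot \hat q \,+\, b\cdot \hat p)}, \qquad
W(a,b)\,W(a',b') = e^{-\frac{i}{2}\,(a\cdot b' - b\cdot a')}\;W(a+a',\,b+b').
% Stone--von Neumann: if (a,b) \mapsto W'(a,b) is an irreducible and
% (weakly) continuous representation of these relations on a Hilbert
% space H, there exists a unitary U : H \to L^2(R^n) such that
U\,W'(a,b)\,U^{-1} = W(a,b) \qquad \text{for all } a, b \in \mathbb{R}^n.
```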
Thus, the formalism of standard homogeneous QC, based on the standard quantization for a finite number of DoF, is to a large extent free of major ambiguities, as follows from the Stone-von Neumann theorem.
This last paradigm can be broken in two different ways, for very distinct reasons. First, the quantization of systems with an infinite number of DoF (even of the set of basic configuration and momentum variables) escapes the conclusions of the Stone-von Neumann theorem. On the contrary, in that case there are representations of the CCRs leading to physically inequivalent descriptions of the same system. This occurs when local DoF are considered, of which one can mention two distinct situations of interest in cosmology: i) the quantization of gravitational DoF (and possibly also of matter fields) in inhomogeneous cosmologies (such as in the example of Gowdy models [7]), and ii) the quantum treatment of fields propagating in a non-stationary curved spacetime (e.g. of the FLRW or de Sitter type) which is considered as a classical background. While the first case embodies genuine applications to QC, i.e. a full quantum treatment of the gravitational DoF (in cases with considerable symmetry, like the Gowdy model, in the so-called midisuperspace setup), the second situation finds applications in the treatment of quantum perturbations in cosmology (both of gravitational and matter DoF), with the homogeneous background kept as a classical entity, as e.g. in inflationary scenarios. There is thus a need to select physically relevant quantizations for a given system containing an infinite number of DoF. Note that the available selection criteria (leading to uniqueness) typically rely on stationarity, and are therefore not applicable in the situations described above.
Precisely, Section IV is devoted to reviewing selection criteria that were recently introduced and proved viable, leading to unique and well defined quantizations in the aforementioned cases. Such criteria are based on the remaining symmetries present in the cosmological system and, crucially, on the unitary implementation of the dynamics, which can be seen as a weaker version of the requirement of invariance under time translations, a requirement that can be applied only in stationary settings.
The other avenue for departure from the conclusions of the Stone-von Neumann theorem which is relevant in the cosmological context is exemplified by the quantization approach for homogeneous cosmologies known as loop quantum cosmology (LQC). (There are nowadays also LQC-inspired applications to inhomogeneous cosmologies; we will not consider them in the present work.) Although the same models with a finite number of DoF are considered as in standard homogeneous cosmology, the obtained quantizations are not physically equivalent. The type of quantization used in LQC is not unitarily equivalent to that of standard QC, and the reason why this is possible is that in LQC one of the conditions of the Stone-von Neumann theorem is broken, in a hard way. The LQC type of quantization starts from the Weyl relations, which are the exponentiated version of the CCRs, and considers representations of the Weyl relations that are not continuous, thus violating one of the conditions of the Stone-von Neumann theorem. In particular, it is the configuration part of the representation that is not continuous. As a result, and although the correspondent of the unitary group $e^{it\hat q}$ is well defined in the LQC type of representation as a unitary group $U(t)$, the would-be generator $\hat q$ cannot be defined, owing to the lack of continuity. This, in turn, is at the heart of the emergence of the discretization (in the canonically conjugate variable) that is so characteristic of LQC. Together with quantization methods adapted from those of loop quantum gravity (LQG) [8,9], this discretization is responsible, at the end of the day, for the results about singularity avoidance for which LQC is known. One question that naturally arises is the following: are there other representations of the Weyl relations with different physical properties? Or is this particular LQC type of representation naturally selected in some way? In Section III we discuss and comment on a uniqueness result for isotropic LQC recently put forward by Engle, Hanusch, and Thiemann [10], following a previous discussion concerning the Bianchi I case [11].
We would like to stress that, to the best of our knowledge, this is the first time that the results reviewed in Section III and those mentioned in Section IV are considered and discussed together. In particular, this joint and integrated review brings about a discussion of the two possible ways to use relevant transformations in order to select a unique quantization. In fact, results like those described in Section III are rooted in a requirement of strict invariance, while the results of Section IV relax that condition (as far as dynamical transformations are concerned), requiring only the weaker condition of unitary implementation of the transformations in question. These two approaches are discussed, providing a better understanding of the results achieved so far in cosmology.
For completeness, we will start with a very brief review of the formalism for the study of Weyl algebras and their representations. We also include an appendix sketching the proof of the uniqueness results mentioned in Section IV, in the simplest case of a scalar field in $S^1$ with a time-dependent mass, with the purpose of providing the main steps and typical arguments of the proofs to the interested reader, without overloading the main text.
The standard representation of the Weyl relations, corresponding to the usual Schrödinger representation of the CCRs, is obtained as follows. Consider the Hilbert space $L^2(\mathbb{R})$ of square integrable functions $\psi(q)$ with respect to the usual Lebesgue measure $dq$. The expressions
$$(U(a)\psi)(q) = e^{iaq}\,\psi(q) \qquad \text{and} \qquad (V(b)\psi)(q) = \psi(q + b)$$
define unitary representations of $\mathbb{R}$ which clearly satisfy the Weyl relations (7). These representations are moreover jointly irreducible and continuous, i.e. $a \mapsto U(a)$ and $b \mapsto V(b)$ are continuous functions. The appropriate notion of continuity of these operator valued functions is that of strong continuity, and irreducibility means that no proper subspace of $\mathcal{H}$ supports the action of both $U(a)$ and $V(b)$ for all $a, b$. It is precisely due to continuity that Stone's theorem guarantees that it is possible to define infinitesimal generators $\hat q$ and $\hat p$ such that $U(a) = e^{ia\hat q}$ and $V(b) = e^{ib\hat p}$. In this case, it turns out that $\hat q$ is the operator of multiplication by $q$ and $\hat p = -i\,d/dq$. The celebrated Stone-von Neumann theorem ensures that any other representation of the Weyl relations on a separable Hilbert space, with the same properties of irreducibility and continuity, is unitarily equivalent to the one above.
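With the definitions just given, the Weyl relation takes the form $U(a)V(b) = e^{-iab}\,V(b)U(a)$, and it can be checked numerically on a periodic grid (a small self-contained sketch with an arbitrary test state; the exact phase convention in Equation (7) depends on the sign choices above):

```python
import numpy as np

# Check U(a) V(b) = exp(-i a b) V(b) U(a) for (U(a)psi)(q) = e^{iaq} psi(q)
# and (V(b)psi)(q) = psi(q + b), on a periodic grid where the translation
# is exact (b a multiple of the grid spacing, a an integer for periodicity).
N, L = 256, 2 * np.pi
q = np.arange(N) * L / N
psi = np.exp(-(q - L / 2) ** 2) + 0j        # arbitrary test state

a, shift = 3.0, 5                           # integer a; b = shift * dq
b = shift * L / N
U = lambda f: np.exp(1j * a * q) * f
V = lambda f: np.roll(f, -shift)            # periodic translation by b

lhs = U(V(psi))
rhs = np.exp(-1j * a * b) * V(U(psi))
print(np.max(np.abs(lhs - rhs)))            # ~1e-16, i.e. exact up to rounding
```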
In order to make contact with the language of ⋆-algebras, we now introduce the so-called Weyl algebra. This is the algebra of formal products of objects $U(a)$ and $V(b)$, subject to the Weyl relations (7). Note that, thanks to the Weyl relations, a generic element of the Weyl algebra can always be written as a finite linear combination of elements of the form $U(a)V(b)$, $a, b \in \mathbb{R}$, or equivalently of the so-called Weyl operators, which differ from the products $U(a)V(b)$ only by a reordering phase. A representation of the Weyl relations is thus tantamount to a representation of the Weyl algebra, and since this is a ⋆-algebra with identity, its representations can be discussed in terms of states of the algebra. In particular, given a state one can construct a cyclic representation of the algebra, by means of the so-called GNS construction [12]. Taking into account the above remarks, it follows that states of the Weyl algebra, and therefore the corresponding representations, are uniquely determined by the values assigned to the Weyl operators.
Concerning the unitary implementation of automorphisms of ⋆-algebras, it is a well known fact that, if a state ω is invariant under a given transformation, then a unitary implementation of that transformation is ensured to exist in the GNS representation defined by ω.
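For completeness, here is a sketch of this standard argument (added for the reader's convenience; the notation for the GNS triple is ours):

```latex
\begin{align*}
  &\text{Let } (\pi_\omega, \mathcal{H}_\omega, \Omega_\omega) \text{ be the GNS triple of } \omega,
   \text{ with } \omega \circ \alpha = \omega. \text{ On the dense domain } \pi_\omega(\mathcal{A})\Omega_\omega \text{ set} \\
  &\qquad U_\alpha\, \pi_\omega(A)\, \Omega_\omega := \pi_\omega\big(\alpha(A)\big)\, \Omega_\omega. \\
  &\text{Invariance of } \omega \text{ gives }
   \big\| \pi_\omega(\alpha(A))\Omega_\omega \big\|^2
     = \omega\big(\alpha(A^*A)\big)
     = \omega(A^*A)
     = \big\| \pi_\omega(A)\Omega_\omega \big\|^2, \\
  &\text{so } U_\alpha \text{ is well defined, extends to a unitary, and satisfies }
   U_\alpha\, \pi_\omega(A)\, U_\alpha^{-1} = \pi_\omega\big(\alpha(A)\big).
\end{align*}
```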
The above constructions related to the Weyl algebra generalize straightforwardly to any finite number of DoF, and also without difficulty to field theories. In this respect, let us consider for instance a scalar field in $\mathbb{R}^3$.
The starting point in the canonical quantization process is the choice of properly defined variables in phase space that are going to play the role of the $q$'s and the $p$'s. The integration of the field $\phi$ and of the canonically conjugate momentum $\pi$ against smooth and rapidly decaying test functions provides precisely such well behaved variables. Thus, given test functions $f$ and $g$ with the above properties, hence belonging to the so-called Schwartz space $\mathcal{S}$, one defines linear functions in phase space by
$$\phi(f) = \int \phi\, f\, d^3x, \qquad \pi(g) = \int \pi\, g\, d^3x.$$
In particular, the variables $\phi(f)$ and $\pi(g)$, with Poisson bracket
$$\{\phi(f), \pi(g)\} = \int f\, g\, d^3x,$$
replace in this context the familiar $q$'s and $p$'s. The corresponding Weyl relations are satisfied by similarly defined Weyl operators $W(f, g)$. Let us focus on a particular type of representations of the Weyl relations (or equivalently of the associated Weyl algebra), namely representations of the Fock type. These are representations defined by complex structures on the phase space, or equivalently on the space $\mathcal{S} \oplus \mathcal{S}$ of pairs of test functions $(f, g)$. Note in this respect that the pairs $(f, g)$ define linear functionals on the phase space, and so $\mathcal{S} \oplus \mathcal{S}$ is naturally dual to the phase space, inheriting therefore a symplectic structure $\Omega$ which is induced from that originally considered on phase space. We also recall that a complex structure $J$ on a linear space with symplectic form $\Omega$ is a linear symplectic transformation such that $J^2 = -1$, compatible with $\Omega$ in the sense that the bilinear form defined by $\Omega(J\cdot, \cdot)$ is positive definite.
Let then $J$ be a complex structure on the symplectic space $\mathcal{S} \oplus \mathcal{S}$. As mentioned above, a state of the Weyl algebra is defined by its values on the Weyl operators, and therefore the assignment
$$\omega_J\big(W(f, g)\big) = \exp\Big(-\tfrac{1}{4}\,\Omega\big((f, g), J(f, g)\big)\Big) \qquad (15)$$
defines a state and an associated cyclic representation of the Weyl algebra. The cyclic vector is here physically interpreted as the vacuum of the Fock representation, and therefore the above expression coincides precisely with the expectation values of the Weyl operators on the vacuum of the Fock representation defined by $J$.
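The content of Equation (15) can be made concrete in a single-mode toy model (a sketch with an arbitrary frequency; the $1/4$ normalization follows the convention adopted in the reconstruction above):

```python
import numpy as np

# One mode: phase space R^2 with Omega = [[0, 1], [-1, 0]]. The complex
# structure J of "frequency" w satisfies J^2 = -1 and Omega(J v, v) > 0,
# and Equation (15) assigns a Gaussian value to each Weyl operator W(f, g).
w = 3.0                                          # arbitrary illustrative choice
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.array([[0.0, 1.0 / w], [-w, 0.0]])

print(np.allclose(J @ J, -np.eye(2)))            # True: J^2 = -1
mu = lambda u, v: (J @ u) @ Omega @ v            # bilinear form Omega(J., .)
v = np.array([1.0, 2.0])
print(mu(v, v) > 0)                              # True: compatibility with Omega

omega_J = lambda f, g: np.exp(-0.25 * mu(np.array([f, g]), np.array([f, g])))
print(omega_J(1.0, 0.0))                         # exp(-w/4), the vacuum value
```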
Let us now discuss the question of unitary implementation of symplectic transformations in the specific context of Fock representations. We consider unitary operators $W(f, g)$ providing a representation of the Weyl algebra, and a linear canonical transformation $A$. There is then a new representation $W_A$, defined (in the same Hilbert space) by
$$W_A(f, g) = W\big(A(f, g)\big).$$
If the representation $W$ is defined by a complex structure $J$, it follows that $W_A$ corresponds to a new complex structure $J_A := AJA^{-1}$. In general, $A$ is not unitarily implementable, i.e. there is no unitary operator $U_A$ such that
$$U_A\, W(f, g)\, U_A^{-1} = W_A(f, g).$$
In fact, the Fock representations defined by $J$ and $J_A$ are unitarily equivalent if and only if the difference $J_A - J$ is an operator of a special type, namely a Hilbert-Schmidt operator [13]. Two notable cases where that condition is automatically satisfied are the following. First, every operator in a finite dimensional linear space is of the Hilbert-Schmidt type, and therefore the unitary implementation of symplectic transformations comes for free in finite dimensional phase spaces, as expected from the Stone-von Neumann theorem. On the other hand, the null operator is always of the Hilbert-Schmidt type, regardless of the dimensionality, and therefore a transformation $A$ that leaves $J$ invariant is always unitarily implementable in the Fock representation defined by $J$. With respect to previous remarks in our exposition, we note that invariance of $J$ immediately translates into invariance of the associated Fock state defined by (15). Of course, the two situations that we have described correspond only to sufficient conditions for unitary implementation, which are by no means necessary. In particular, unitary implementation of a canonical transformation $A$ can be achieved with a non-invariant complex structure $J$, provided that $J_A - J$ is Hilbert-Schmidt. In any case, and in clear contrast with the situation found for a finite number of DoF, in field theory no Fock representation supports the unitary implementation of the full group of linear canonical transformations. Fock representations are therefore distinguished by the class of transformations that are unitarily implementable. Now, in a particular theory, specified e.g. by a given Hamiltonian, a particular set of canonical transformations stands out, namely the transformations generated by the Hamiltonian and by possible symmetries. The requirement of unitary implementation of the relevant canonical transformations therefore provides a criterion guiding the selection of one representation over another when we are trying to quantize a field theory. In this respect, we notice that the case of the set of transformations corresponding to classical time evolution is particularly relevant, given the role that unitarity plays in the probabilistic interpretation of the quantum theory.
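Since the invariant complex structures encountered below act mode by mode, the Hilbert-Schmidt condition typically reduces to the square-summability of a sequence of mode-mixing coefficients. The following sketch (with two illustrative decay laws, not taken from the text) shows how sharply this criterion separates implementable from non-implementable transformations:

```python
import numpy as np

# For a block-diagonal Bogoliubov-type transformation with mode-mixing
# coefficients lambda_n, unitary equivalence of the transformed Fock
# representation requires sum_n |lambda_n|^2 < infinity (Hilbert-Schmidt).
ns = np.arange(1, 200_001)

for name, lam in (("1/n", 1.0 / ns), ("1/sqrt(n)", 1.0 / np.sqrt(ns))):
    partial = np.cumsum(np.abs(lam) ** 2)
    print(name, partial[999], partial[-1])
# "1/n":       partial sums approach pi^2/6  -> implementable
# "1/sqrt(n)": partial sums grow like log(n) -> not implementable
```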
The simplest situation is that of a free field of mass $m$ in Minkowski spacetime. In this case the representation is completely fixed by the requirement of invariance under spatial symmetries and time evolution (or under the full Poincaré group), in the sense that a unique complex structure is selected by that requirement, namely the complex structure $J_m$ constructed from the operator $-\Delta + m^2$, where $\Delta$ is the Laplacian. Besides the above free field in Minkowski spacetime, there are other known situations of linear dynamics where uniqueness results apply. In fact, provided that the Hamiltonian is time independent, the criterion of positivity of the energy, together with invariance under the 1-parameter group of canonical transformations generated by the Hamiltonian, is sufficient to select a unique complex structure. Here, positivity means that the unitary group implementing the dynamics possesses a positive generator, i.e. the quantum Hamiltonian is a positive operator [14,15]. This result finds remarkable applications in the quantization of free fields in stationary curved spacetimes (i.e. with a timelike Killing vector) [16,17]. On the other hand, no general uniqueness results are available for the non-stationary situations typical of cosmology.
III. LOOP QUANTUM COSMOLOGY
Let us now consider the representation of the Weyl relations used in LQC, sometimes referred to as the polymer representation. We will restrict our attention to its simplest version, namely the one associated with the homogeneous and isotropic flat FLRW model. For convenience, we set the speed of light, as well as Newton's constant multiplied by 4π, equal to one, and we take the Immirzi parameter [18] equal to 3/2 in order to simplify our equations, without loss of generality. At the classical level, the system is described by a pair of canonically conjugate variables, usually denoted by $c$ and $p$, with Poisson bracket $\{c, p\} = 1$. The variables $c$ and $p$ parametrize, respectively, the (homogeneous) Ashtekar connection and the densitized triad (see Ref. [10] for details in the context of the current uniqueness discussion, and Refs. [19,20] for more general introductions to LQC). In particular, $p$ is proportional to the squared scale factor of the FLRW spacetime.
Let us consider the Hilbert space $\mathcal{H}_P$ defined by the discrete measure on $\mathbb{R}$, i.e. the space of complex functions $\psi(p)$, supported on countable sets, such that
$$\sum_p |\psi(p)|^2 < \infty,$$
with inner product given by
$$\langle \psi_1, \psi_2 \rangle = \sum_p \overline{\psi_1(p)}\, \psi_2(p),$$
where the overbar denotes complex conjugation. We note that this Hilbert space, also referred to as the polymer Hilbert space, is very different from the standard one, $L^2(\mathbb{R})$.
In particular, $\mathcal{H}_P$ is non-separable. An orthonormal basis of $\mathcal{H}_P$ is formed e.g. by the uncountable set of functions $\Psi_{p_0}$, for all $p_0 \in \mathbb{R}$, where
$$\Psi_{p_0}(p) = \delta_{p p_0},$$
with $\delta_{p p_0}$ being the Kronecker delta. We then define the operators $U_P(a)$ and $V_P(b)$, for $a, b \in \mathbb{R}$, acting on $\mathcal{H}_P$ by
$$(U_P(a)\psi)(p) = \psi(p - a) \qquad \text{and} \qquad (V_P(b)\psi)(p) = e^{ibp}\,\psi(p).$$
It is clear that these operators satisfy the Weyl relations (7) and that the representation is irreducible. The map $b \mapsto V_P(b)$ is continuous, so that one can define the infinitesimal generator. We will denote it by $\pi(p)$, and not $\hat p$, to distinguish it from the standard Schrödinger representation in $L^2(\mathbb{R})$. It follows that $\pi(p)$ acts simply by multiplication by $p$. On the other hand, $U_P(a)$ is not continuous. To see this, it suffices to note that, for arbitrarily small $a$, the vector $\Psi_{p_0}$ is mapped by $U_P(a)$ to an orthogonal one, $\Psi_{p_0 + a}$. The would-be generator of the unitary group $U_P(a)$ cannot therefore be defined. Nevertheless, taking into account the commutator $[\pi(p), U_P(a)]$, one can see that the operators $U_P(a)$ can be regarded as providing a quantization of the classical variables $e^{iac}$. We note that the reality conditions are properly satisfied, since $U^\dagger(a) = U(-a)$, with the dagger denoting the adjoint. Thus, the set of operators $U_P(a)$, together with $\pi(p)$, provide a quantization, in the usual Dirac sense, of the Poisson algebra of phase space functions made of finite linear combinations of the functions $p$ and $e^{iac}$, with $a \in \mathbb{R}$ (see Ref. [23] for details). This Poisson algebra is of course different from the kinematical algebra usually associated with the linear phase space $\mathbb{R}^2$ with coordinates $c$ and $p$, which is simply the Heisenberg algebra of linear (non-homogeneous) functions in $c$ and $p$. At the foundation of LQC, there is thus a situation similar to that of LQG: a non-standard choice of basic variables and a representation of the associated algebra which is non-continuous (at least in part of the algebra), thus obstructing the quantization of the connection itself [8]. However, we note that, contrary to the situation in LQG, the standard Schrödinger quantization gives us a different representation of the very same Poisson algebra, since the variables $e^{iac}$ are trivially quantized in $L^2(\mathbb{R})$ by multiplication operators, i.e. by operators $\widehat{e^{iac}}$ such that $(\widehat{e^{iac}}\,\psi)(q) = e^{iaq}\,\psi(q)$.
In the case of this Schrödinger quantization, the continuity of the representation $U(a)$ allows us to define the operator $\hat c$ itself, and therefore to extend the quantization of configuration variables to a much larger set of functions $f(c)$, simply by defining $\widehat{f(c)} = f(\hat c)$, whereas in the polymer quantization one is restricted to quantizing configuration variables of the type $e^{iac}$ (and linear combinations thereof).
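The discontinuity of $U_P(a)$, and the contrast with the continuous $V_P(b)$, can be made explicit with a toy implementation of $\mathcal{H}_P$ (a sketch of the definitions above, storing a state as a finitely supported function on $\mathbb{R}$):

```python
import numpy as np

# States in H_P with finite support, stored as {p: amplitude}.
# U_P(a) shifts the support (it quantizes e^{iac}); V_P(b) multiplies
# by the phase e^{ibp}; the inner product is the discrete-measure one.
U_P = lambda a, psi: {p + a: amp for p, amp in psi.items()}
V_P = lambda b, psi: {p: np.exp(1j * b * p) * amp for p, amp in psi.items()}

def inner(psi1, psi2):
    common = set(psi1) & set(psi2)
    return sum(np.conj(psi1[p]) * psi2[p] for p in common)

basis = {2.0: 1.0}                        # Psi_{p0} with p0 = 2
print(inner(basis, V_P(1e-9, basis)))     # ~1: V_P(b) -> identity as b -> 0
print(inner(basis, U_P(1e-9, basis)))     # 0 for ANY a != 0: no continuity
```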
In LQG there is a celebrated result about the uniqueness of the quantization that gives robustness to the loop representation [24,25]. In Ref. [10], the authors proved a similar uniqueness result for LQC, which we now discuss. In order to make contact with that work, which uses the language of ⋆-algebras and corresponding states, we first introduce the LQC analogue of the LQG holonomy-flux algebra, which is again denoted in Ref. [10] as the quantum holonomy-flux ⋆-algebra U. The LQC ⋆-algebra U is constructed in Ref. [10] as the algebra of formal products of operators corresponding to the variables $p$ and $e^{iac}$, subject to the conditions coming from the commutator $[\pi(p), U_P(a)]$. However, since we have already introduced the Weyl algebra, it is more natural here to follow an alternative procedure, and identify instead the ⋆-algebra U with the Weyl algebra, i.e. the algebra of formal products of objects $U(a)$ and $V(b)$, subject to the Weyl relations (7).
Recall now that the group of spatial diffeomorphisms is a "gauge" symmetry in General Relativity (GR). Physical states in quantum gravity should therefore be invariant under the quantum operators representing these diffeomorphisms (or annihilated by the quantum diffeomorphism constraint). Thus, a unitary implementation of the group of spatial diffeomorphisms in the quantum Hilbert space is required. Since the LQG holonomy-flux algebra is again a ⋆-algebra with identity, it follows from previous comments that, in order to achieve the required unitary implementation of the group of diffeomorphisms, it is sufficient that the quantization be defined by a diffeomorphism invariant state.
The LQG result of uniqueness of the quantization [24,25] guarantees, precisely, that there exists a unique diffeomorphism invariant state of the LQG holonomy-flux algebra. Moreover, the GNS representation defined by this unique invariant state is unitarily equivalent to the LQG representation that was previously known. The analogous result for LQC starts from the observation, explained in detail in Ref. [10], that a residual gauge group still remains when descending from full GR to homogeneous and isotropic flat models. In fact, although almost all of the diffeomorphism gauge symmetry is automatically fixed, in homogeneous and isotropic flat models one is left with a small gauge group, namely the group of isotropic dilations, acting on phase space by rescalings of the canonical pair $(c, p)$ (Equation (25)). Thus, there is the possibility of exploiting this residual gauge symmetry in order to select a state in LQC, very much like in the above mentioned LQG uniqueness result. The authors of Ref. [10] indeed succeeded in proving that there is a unique dilation invariant state of the LQC holonomy-flux algebra, for which the associated GNS representation is unitarily equivalent to the polymer representation described above.
Although the result of Ref. [10] is definitely interesting and rigorous, we argue here that in a certain sense its status is not as strong as that of the LQG uniqueness result. From the physical viewpoint, what is really required is a unitary implementation of the group of interest (whether it corresponds to dynamics, to symmetries, or to gauge transformations), and not necessarily an invariant state. There are even situations (see e.g. Section IV) where invariant states are not available and, nevertheless, a unitary implementation of the relevant group exists. It is true that constructing the quantization by means of an invariant state is sufficient to achieve unitary implementation (and it is perhaps the "king's way" of doing it), but it is by no means necessary. In the present case, the Schrödinger representation, although not (unitarily equivalent to a representation) defined by an invariant state, carries a unitary implementation of the group of dilations (25), which is physically just as good as the one provided by the polymer representation. The existence of this unitary implementation actually follows from the Stone-von Neumann theorem, but let us show it explicitly. We consider the transformations $U_\lambda : L^2(\mathbb{R}) \to L^2(\mathbb{R})$ defined by
$$(U_\lambda \psi)(q) = \sqrt{\lambda}\, \psi(\lambda q),$$
which are clearly unitary for all $\lambda > 0$, with respect to the standard inner product defined by the measure $dq$. We consider also the standard operator $\hat p = -i\,d/dq$ and the Schrödinger quantization of configuration variables, which obviously includes the LQC variables $e^{iac}$. A straightforward computation shows that
$$U_\lambda\, \widehat{e^{iac}}\, U_\lambda^{-1} = \widehat{e^{i\lambda a c}}, \qquad U_\lambda\, \hat p\, U_\lambda^{-1} = \lambda^{-1}\, \hat p,$$
which is the announced unitary implementation of the group of dilations (25) in the Schrödinger representation.
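The two properties used above, unitarity of $U_\lambda$ and the conjugation of $\hat p$, can be verified symbolically on a test state (a sketch based on the form of $U_\lambda$ reconstructed above):

```python
import sympy as sp

q, lam = sp.symbols('q lambda', positive=True)

# (U_lam psi)(q) = sqrt(lam) psi(lam q), checked on a Gaussian test state.
psi = sp.exp(-q**2 / 2)
U_psi = sp.sqrt(lam) * psi.subs(q, lam * q)

# Unitarity: the L^2 norm is preserved.
n0 = sp.integrate(psi**2, (q, -sp.oo, sp.oo))
n1 = sp.integrate(U_psi**2, (q, -sp.oo, sp.oo))
print(sp.simplify(n0 - n1))                          # 0

# Conjugation U_lam p U_lam^{-1} = p / lam, with p = -i d/dq.
p_of = lambda f: -sp.I * sp.diff(f, q)
Uinv_psi = psi.subs(q, q / lam) / sp.sqrt(lam)       # U_lam^{-1} psi
lhs = sp.sqrt(lam) * p_of(Uinv_psi).subs(q, lam * q) # U_lam p U_lam^{-1} psi
rhs = p_of(psi) / lam
print(sp.simplify(lhs - rhs))                        # 0
```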
From this perspective, we then conclude that the physical criterion of a unitary implementation of the residual group of dilations in homogeneous and isotropic flat cosmology does not fully succeed in selecting a unique quantization, since both the polymer and the Schrödinger representations are viable from this viewpoint. Only the mathematically more stringent requirement of strict invariance selects a unique state. This is perhaps a reminder that strict invariance is not an unavoidable requirement, and that the quantization of groups of interest via unitary implementations that are not necessarily based on invariant states is worth exploring.
Let us end with a brief comment regarding the analogous uniqueness result in LQG. In that case, the uniqueness is also proved by requiring strict invariance of a state of the holonomy-flux algebra under the action of spatial diffeomorphisms. There is however a key difference with respect to the above described situation in LQC: no other (irreducible) representation is known, of any kind, admitting a unitary implementation of the diffeomorphism group, and so there is no alternative route that may cast any shadow on the uniqueness. The situation remains however somewhat open, until a stronger uniqueness result is demonstrated that is based exclusively on a unitary implementation of the spatial diffeomorphisms, and not just on strict invariance, or otherwise until a new representation of the LQG algebra admitting a unitary non-invariant implementation of the diffeomorphism group is constructed.
IV. FOCK QUANTIZATION IN NON-STATIONARY COSMOLOGICAL SETTINGS
As we have already mentioned, the Stone-von Neumann uniqueness result fails for an infinite number of DoF, and no general result on the uniqueness of the quantization is available for non-stationary situations. This includes, of course, cases of interest in QC. In some of those cases, the underlying theory can be recast in the form of a linear scalar field with a time-dependent mass, propagating in an auxiliary background spacetime which is both static and spatially compact. One such situation is the linearly polarized Gowdy model with the spatial topology of a three-torus, where the gravitational degrees of freedom are encoded by a scalar field on $S^1$, evolving in time precisely as a linear field with a time-dependent mass of the type $m(t) = 1/(4t^2)$ [31,32]. Other situations of interest include free scalar fields in cosmological scenarios, e.g. propagating in (compact) FLRW or de Sitter spacetimes. In those cases, the non-stationarity is transferred from the background to the effective mass of the field by means of a simple transformation.
A. Gowdy Models
Midisuperspace models are symmetry reductions of full GR that retain an infinite number of degrees of freedom. Typically, these are local degrees of freedom, so that they often describe inhomogeneous scenarios. Therefore, these midisuperspace models must face the inherent ambiguity that affects the quantization of fields. One of the simplest inhomogeneous cosmologies obtained with a symmetry reduction is the linearly polarized Gowdy model on the three-torus, T 3 [7]. This model describes vacuum spacetimes with spatial sections of T 3 -topology containing linearly polarized gravitational waves, with a symmetry group generated by two commuting, spacelike, and hypersurface orthogonal Killing vector fields. In consequence, the local physical degrees of freedom can be parametrized by a scalar field corresponding to those waves, and effectively living in S 1 .
After a partial gauge fixing, the line element of the linearly polarized Gowdy $T^3$ model can be written in the form given in Ref. [33], where $(\partial/\partial\sigma)^a$ and $(\partial/\partial\delta)^a$ are the two Killing vector fields. The true dynamical field DoF are encoded in $\phi(\theta, t)$, where $t > 0$ and $\theta \in S^1$. On the other hand, $p$ is a homogeneous non-dynamical variable, and the field $\gamma$ is completely determined by $p$ and $\phi$ as a result of the gauge-fixing process [33]. There remains a global spatial constraint on the system, giving rise to the symmetry group of (constant) translations in $S^1$. Time evolution is dictated by the field equation
$$\ddot\phi + \frac{\dot\phi}{t} - \phi'' = 0.$$
Here, the prime denotes the derivative with respect to $\theta$ and the dot denotes the time derivative.
A complete quantization of the system was obtained in Ref. [34]. Nevertheless, it was soon realized [33,35] that the classical dynamics could not be implemented as a unitary transformation in that quantization. With the purpose of achieving unitarity, and of restoring in particular the standard probabilistic interpretation of quantum physics within the quantum Gowdy model, an alternative quantization was introduced by Corichi, Cortez, and Mena Marugán [31,32]. A crucial step towards a quantization with unitary dynamics is the following time-dependent transformation, performed at the level of the classical phase space. Instead of working with the original field $\phi$ and its corresponding conjugate momentum $P_\phi$, the authors introduced a new canonical pair $\chi$ and $P_\chi$ related to the first one by means of the canonical transformation
$$\chi = \sqrt{t}\,\phi, \qquad P_\chi = \frac{1}{\sqrt{t}}\left(P_\phi + \frac{\phi}{2}\right), \qquad (33)$$
taking advantage in this way of the freedom in the scalar field parametrization of the metric of the Gowdy model. The evolution of the new canonical pair turns out to be governed by a time-dependent Hamiltonian that, in our system of units, adopts the expression
$$H(t) = \frac{1}{2}\oint d\theta \left[ P_\chi^2 + (\chi')^2 + \frac{\chi^2}{4t^2} \right]. \qquad (34)$$
The corresponding Hamiltonian equations are
$$\dot\chi = P_\chi, \qquad \dot P_\chi = \chi'' - \frac{\chi}{4t^2}, \qquad (35)$$
which combined give the second-order field equation
$$\ddot\chi - \chi'' + \frac{\chi}{4t^2} = 0. \qquad (36)$$
We can therefore view the system as a linear scalar field with a time-dependent mass of the form $m(t) = 1/(4t^2)$, evolving in an effective static spacetime with one-dimensional spatial sections with the topology of the circle. Another relevant aspect of the model is the invariance of the dynamics under (constant) $S^1$-translations:
$$\chi(\theta) \to \chi(\theta + \alpha), \qquad P_\chi(\theta) \to P_\chi(\theta + \alpha). \qquad (37)$$
These translations are moreover symmetries generated by a remaining constraint, as we already mentioned. The quantization of the system put forward by Corichi, Cortez, and Mena Marugán in Refs. [31,32] starts from the CCRs satisfied by the canonical pair $\chi$ and $P_\chi$, or rather by the corresponding Fourier modes. The advantage of using Fourier components instead of the field $\chi(\theta)$ itself is clear: since the spatial manifold is compact, the set of Fourier modes is discrete, and one therefore avoids the issues of dealing with operator valued distributions, like e.g. $\hat\chi(\theta)$. The aforementioned quantization is of the standard Fock type, with the following remarkable properties. To begin with, the complex structure on phase space that effectively defines the Fock representation is invariant under $S^1$-translations. Thus, the corresponding state of the Weyl algebra is $S^1$-invariant, leading to a natural unitary implementation of these gauge transformations. Secondly, and most importantly, the classical dynamics in phase space defined by Equation (35) (i.e. generated by the Hamiltonian (34)) is unitarily implemented at the quantum level. In other words, let $t_0$ be an arbitrary but fixed initial time, and let $S(t, t_0)$ be the linear symplectic transformation corresponding to the classical evolution in phase space from the time $t_0$ to the arbitrary time $t$. Then, to each transformation $S(t, t_0)$ there corresponds a unitary operator $U(t, t_0)$ that intertwines between the quantum operators $\hat\chi$ and $\hat P_\chi$ defined at the initial time $t_0$ and those obtained from $\hat\chi$ and $\hat P_\chi$ by application of $S(t, t_0)$, as exemplified in Equations (5,6).
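The effect of the transformation (33) on the dynamics can be checked symbolically, mode by mode (a sketch: we impose the original Gowdy mode equation, with $-\Delta \to n^2$ on $S^1$, and verify that $\chi = \sqrt{t}\,\phi$ obeys the equation with mass term $1/(4t^2)$):

```python
import sympy as sp

t, n = sp.symbols('t n', positive=True)
phi = sp.Function('phi')

# Original Gowdy T^3 mode dynamics: phi'' = -phi'/t - n^2 phi (dots = d/dt).
phi_ddot = -phi(t).diff(t) / t - n**2 * phi(t)

# Transformed variable chi = sqrt(t) * phi; compute chi'' on shell.
chi = sp.sqrt(t) * phi(t)
chi_ddot = chi.diff(t, 2).subs(phi(t).diff(t, 2), phi_ddot)

# Expected time-dependent-mass form: chi'' = -[n^2 + 1/(4 t^2)] chi.
expected = -(n**2 + 1 / (4 * t**2)) * chi
print(sp.simplify(chi_ddot - expected))   # 0
```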
The intertwining relation just described corresponds, of course, to the usual evolution in the Heisenberg picture, which can always be formally defined once canonical operators are given at some initial time. The key difference with respect to systems with a finite number of DoF is that, whereas in those cases the relation between the "initial" and the evolved operators is always unitary, the existence of such unitary operators is far from being guaranteed in an arbitrary representation of the CCRs for field theory (or generally with an infinite number of DoF).
We note that, although a unitary implementation of all the transformations S(t, t 0 ) is achieved in the Corichi, Cortez, and Mena Marugán representation, the corresponding state is not invariant under these transformations. In fact, no state exists such that it remains invariant under all the transformations S(t, t 0 ), ∀t.
All in all, the quantization of the linearly polarized Gowdy model proposed in Refs. [31,32] is one of the few available examples of a rigorous and fully consistent quantization of an inhomogeneous cosmological model. Nonetheless, the robustness of its physical predictions could be affected by the possible existence of major ambiguities in the quantization process. Fortunately, a quantization with the aforementioned properties is indeed unique, as shown in Refs. [36,37], where the uniqueness result that we now discuss was derived.
A source of ambiguity in the process leading to the Corichi, Cortez, and Mena Marugán quantization is the choice of representation for the canonical pair $\chi$ and $P_\chi$. However, it was shown in Ref. [36] that any other Fock representation of the CCRs that i) is defined by an $S^1$-invariant complex structure (or equivalently by an $S^1$-invariant state of the Weyl algebra) and ii) allows a unitary implementation of the dynamics defined by Equation (35), is unitarily equivalent to the considered representation (and therefore physically indistinguishable). We note that there is actually an infinite number of $S^1$-invariant states, leading to many inequivalent representations. It is only after the requirement of unitary dynamics that a unique unitary equivalence class of representations is selected.
Another possible source of ambiguity concerns the choice of the "preferred" canonical pair $(\chi, P_\chi)$. In this respect, note that a time-dependent transformation different from Equation (33) would lead to classical dynamics that would not reproduce Equation (35), and it is in principle conceivable that the new dynamics could be unitarily implemented in a different representation, thus leading to a distinct quantization. This is however not the case. In fact, it was shown in Ref. [37] that any other canonical transformation of the type (33) modifies the equations of motion in such a way as to render impossible the unitary implementation of the dynamics with respect to any Fock representation defined by an $S^1$-invariant complex structure. It is worth mentioning that the kind of transformations considered in Ref. [37] is restricted by the natural requirements of locality, linearity, and preservation of $S^1$-invariance. In configuration space, these are contact transformations that produce a time-dependent scaling of the field $\phi$ (see Ref. [37] for a detailed discussion). Such scalings can always be completed into a canonical transformation in phase space, of the general form
$$\chi = f(t)\,\phi, \qquad P_\chi = \frac{P_\phi}{f(t)} + g(t)\,\phi, \qquad (38)$$
which includes a contribution to the new momentum which is linear in the field $\phi$.
Finally, let us mention that completely analogous results have been obtained for the remaining linearly polarized Gowdy models, namely those with spatial sections of topology $S^1 \times S^2$ or $S^3$. This analysis was performed in the following independent steps. First, the classical models were addressed in Ref. [38], showing that, in these cases, the local gravitational DoF can also be parametrized by a single scalar field, namely an axisymmetric field on $S^2$. Then, following a procedure similar to the one introduced by Corichi, Cortez, and Mena Marugán, a Fock quantization with unitary dynamics was obtained [39]. In particular, a time-dependent scaling of the original field is again involved, now of the form $\phi \to \sqrt{\sin t}\,\phi$. Finally, the uniqueness of the quantization obtained in this way was proved in Ref. [40].
B. Quantum Field Theory in Cosmological Settings
A common feature of the Gowdy models mentioned in the previous section is that the local DoF are parametrized by a scalar field effectively living in a compact spatial manifold. Moreover, after the crucial scaling of the field, the dynamics is that of a linear field with a time-dependent mass, i.e. it obeys a second-order equation of the type
$$\ddot\chi - \Delta\chi + s(t)\,\chi = 0, \qquad (39)$$
where $\Delta$ is the Laplace-Beltrami (LB) operator for the spatial sections in question, e.g. $S^1$ for the Gowdy model on $T^3$ and $S^2$ for the remaining two models.
Remarkably, a whole different type of situations in cosmology can also be described by an equation of the form (39). Let us consider e.g. a free scalar field in a homogeneous and isotropic FLRW spacetime, with line element
$$ds^2 = a^2(t)\left(-dt^2 + h_{ab}\,dx^a dx^b\right), \qquad (40)$$
where $t$ is the conformal time, $a(t)$ is the scale factor, and $h_{ab}$ ($a, b = 1, 2, 3$) is the Riemannian metric of either flat Euclidean space or the 3-sphere. A minimally coupled scalar field of mass $m$ obeys in this cosmological spacetime the equation
$$\ddot\phi + 2\frac{\dot a}{a}\dot\phi - \Delta\phi + m^2 a^2 \phi = 0, \qquad (41)$$
where $\Delta$ is the LB operator defined by the metric $h_{ab}$ of the spatial sections. The most obvious situation described by this setup is the propagation of an actual (test) scalar matter field (disregarding the backreaction) in an FLRW background. Nonetheless, the treatment of quantum perturbations, both of matter and of gravitational DoF, also fits the above description. In fact, in the context of cosmological perturbations, the leading-order approximation in the action, together with a neglected backreaction, amounts to keeping the homogeneous classical cosmology as the background and treating both matter and gravitational perturbations as fields propagating on that background [41-45].
The quantum treatment of the situations described above therefore faces the ambiguity of the choice of quantum representation. In particular, the criterion based on stationarity, mentioned in Section II, is not available, since all these situations are inherently non-stationary.
There is however a natural avenue to address this issue, arriving hopefully at a unique quantum theory with unitary dynamics. The way forward is actually suggested by a standard procedure, commonly found precisely in QFT in curved spacetimes (see e.g. Refs. [46,47]) and in the treatment of cosmological perturbations (see for example Refs. [41-44]). It consists in the scaling of the field variable $\phi$ by means of the scale factor, thus introducing the rescaled field $\chi = a(t)\phi$, which now obeys an equation of the type (39), with $s(t) = m^2 a^2 - \ddot a/a$.
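This claim is easy to verify symbolically for a single spatial mode, replacing $\Delta \to -k^2$ (a sketch; the scale factor is kept generic):

```python
import sympy as sp

t, k, m = sp.symbols('t k m', positive=True)
a = sp.Function('a', positive=True)
chi = sp.Function('chi')

# Klein-Gordon equation (41) for one spatial mode (Delta -> -k^2), in
# conformal time, written for phi = chi / a:
phi = chi(t) / a(t)
kg = phi.diff(t, 2) + 2 * a(t).diff(t) / a(t) * phi.diff(t) \
     + (k**2 + m**2 * a(t)**2) * phi

# Expected equation of type (39) for chi, with s = m^2 a^2 - a''/a:
s = m**2 * a(t)**2 - a(t).diff(t, 2) / a(t)
expected = chi(t).diff(t, 2) + (k**2 + s) * chi(t)

print(sp.simplify(kg * a(t) - expected))   # 0
```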
Thus, the combined effect of the use of conformal time and the scaling $\phi \to a(t)\phi$ is to recast the field equation in the form (39), which is effectively the field equation of a linear field with a time-dependent quadratic potential $V(\chi) = s(t)\chi^2/2$, propagating in a static background with metric
$$ds^2 = -dt^2 + h_{ab}\,dx^a dx^b. \qquad (42)$$
In this manner, we see that a generalization of the uniqueness result obtained for the Gowdy models would provide a useful criterion to select a unique quantization for fields in non-stationary backgrounds such as FLRW universes, typically with compact spatial sections with the topology of $S^3$ or $T^3$, allowing also applications to the usual flat universe case. Note that the restriction to spatial compactness is most convenient from the viewpoint of mathematical rigor, as otherwise infrared issues would plague the analysis. Nonetheless, the physical effects of the artificially imposed compactness, e.g. in the spatially flat case, should be irrelevant when the physical problem at hand does not involve arbitrarily large scales, e.g. going beyond the Hubble radius. Moreover, the case of continuous scales, which corresponds to non-compactness, can in fact be reached in a suitable limit for flat universes, after completing all the demonstrations of uniqueness in the framework of compact spatial sections [48]. The desired generalization of the uniqueness result in the aforementioned context of test fields and perturbations on cosmological backgrounds was obtained in Refs. [49-53]. In the rest of this Section, we very briefly explain the implications of these results.
Let $I \times \Sigma$ be a globally hyperbolic spacetime, where $I \subseteq \mathbb{R}$ is an interval and $\Sigma$ is a compact Riemannian manifold of dimension $d \leq 3$ (for cosmological applications, one can think of $\Sigma$ as being either $S^1$, $S^2$, $S^3$, or $T^3$). The spacetime is assumed to be static, with metric given by Equation (42), where $h_{ab}$ is the time-independent Riemannian metric on $\Sigma$. Consider a linear scalar field in $I \times \Sigma$ obeying a field equation of the type (39), where $s(t)$ is an essentially arbitrary function (only very mild technical conditions on $s(t)$ are required, see Ref. [50]) and $\Delta$ is the LB operator associated with the metric $h_{ab}$. Note that any symmetry of this metric is transmitted to the LB operator, and therefore to the equations of motion. Let us consider the Weyl algebra associated with the field $\chi$ (and of course its canonically conjugate momentum) and its Fock representations. Then:

1. there exists a Fock representation defined by a state which is invariant under the symmetries of the metric $h_{ab}$ (or equivalently, a Fock representation with invariant vacuum) and such that the classical dynamics can be unitarily implemented;

2. that representation is unique, in the sense that any other Fock representation defined by an invariant state and allowing a unitary implementation of the dynamics is unitarily equivalent to the previous one.

Remarkably, it again follows that the rescaling of the field $\phi \to a(t)\phi$ is quite rigid and uniquely determined: no other time-dependent scaling can lead to a(n invariant Fock) quantization with unitary dynamics, and so no ambiguity remains in the choice of the preferred field configuration variable. Furthermore, the unitarity requirement essentially selects the canonical momentum field as well. Physically, this can be interpreted as a unique splitting between the time dependence assigned to the background and that corresponding to the evolution of the scalar field DoF.
Finally, it is worth mentioning that extensions of these uniqueness results were obtained in several directions. First, analogous results were obtained for scalar fields in homogeneous backgrounds of the Bianchi I type [54]. We point out that these spacetimes are not isotropic, and therefore the conformal symmetry which was a common characteristic of the previous cases (at least asymptotically for large frequencies) is no longer present in an obvious way. Even more remarkable are the extensions of the uniqueness results attained for fermions, since they mark the transition to a largely unexplored territory [55-57]. Part of the techniques employed for fermions were already explored in the case of the scalar field in Bianchi I, in order to deal with the lack of conformal symmetry; a review of the range of different methods and improvements required to address the increasing degree of generalization encountered in the treatment of the scalar field can be found in Ref. [58]. A recent account of these results can be found in Ref. [59].
V. CONCLUSIONS
We have reviewed and discussed several results concerning the quantization of systems with relevant applications in cosmology. In particular, we have focused our attention on results ascertaining the uniqueness of the quantization process. This uniqueness is crucial in order to provide physical robustness to the eventual cosmological predictions of the quantum models in question, which would otherwise be fundamentally affected by ambiguities.
Starting with homogeneous models in this cosmological context, we have discussed a uniqueness result by Engle, Hanusch, and Thiemann concerning the representation of the Weyl relations commonly used in LQC. Turning to recent investigations carried out by our group and collaborators, the quantization of the family of inhomogeneous cosmologies known as the linearly polarized Gowdy models was reviewed next. Crucial in this discussion is the requirement of unitary implementation (at the quantum level) of the dynamics. Together with invariance under spatial symmetries, the criterion of unitary dynamics has proved very effective in the selection of a unique and physically meaningful quantization, in a variety of situations. A common general mathematical model embodying all these cases is that of a scalar field with a time-dependent mass (with arbitrary time dependence, except for some very mild conditions), propagating in a static spacetime with compact spatial sections. Existence and uniqueness of a Fock quantization with unitary dynamics have been proved for this general model, thus providing unique quantizations in cosmological systems, ranging from quantum fields in FLRW backgrounds to the quantization of (perturbative) gravitational DoF.
In particular, it follows from our analysis and the aforementioned discussions that the quantization of linear transformations of physical interest via a unitary implementation does not necessarily require the existence of an invariant state, and that it is worthwhile to pursue alternatives that are not based on such invariance. In fact, what is really required in physical terms is a unitary implementation and not necessarily an invariant state, i.e. unitary implementations via non-invariant states are still physically acceptable and cannot be discarded. Clearly, a proof of the uniqueness of the quantization based just on the unitary implementation of those transformations, rather than on the existence of an invariant state, is a stronger result inasmuch as the requirement of unitarity is weaker than invariance. Note that whereas in some circumstances there are good reasons to restrict attention to invariant states (for instance, when there is a time-independent Hamiltonian giving rise to a group of transformations), in the general case of arbitrary transformations the requirement of an invariant state is not so compelling and non-invariant states cannot simply be disregarded. In this sense, it is worth pointing out that invariance can actually be too rigid a demand in some situations, especially if the considered transformations provide a notion of evolution that is crucial to describe the dynamics. For instance, the unitary implementation of the evolution via (dynamically) invariant states in typically non-stationary scenarios is of doubtful physical use, and in fact seems hopeless, whereas physically viable states, still leading to unitary dynamics and ensuring uniqueness, are nevertheless available and have direct cosmological applications. In this respect, we conclude with the following remark.
The uniqueness of the Fock quantization of the scalar field with time-dependent mass was obtained by restricting attention to states that remain invariant under the spatial symmetries. Although the removal of this restriction seems unlikely to lead to physically new representations with unitary dynamics, it is nevertheless an open possibility. More precisely, the existence of representations with unitary dynamics and a unitary implementation of the spatial symmetries, which are nevertheless not unitarily equivalent to the representation defined by an invariant vacuum, remains to be disproved. Thus, a conceivable line of future research in this area is to consider also Fock representations that, while not possessing an invariant vacuum, still allow a unitary implementation of the spatial symmetries. The uniqueness result would be strengthened if those representations were shown to be equivalent to the previous one; otherwise, new and potentially interesting representations could emerge. Another possibility, with a clearly much higher potential to produce physically inequivalent results, is to use polymer-inspired representations for the scalar field, instead of Fock representations.

APPENDIX

In this Appendix, we consider the simplest example of a linear scalar field in a static spacetime with compact spatial sections, namely the case where the spatial sections are 1-dimensional with the topology of the circle. The field equation reads
$$\ddot\chi - \chi'' + s(t)\,\chi = 0,$$
where the mass term $s(t)$ can be an (essentially) arbitrary function of time.
Since the spatial manifold is compact and the field equation is linear, a Fourier decomposition gives us a discrete set of independent modes:
$$\chi(\theta, t) = \frac{q_0(t)}{\sqrt{2\pi}} + \frac{1}{\sqrt{\pi}} \sum_{n > 0} \big[ q_n(t) \cos(n\theta) + x_n(t) \sin(n\theta) \big].$$
The configuration space for the scalar field is then described by the set of real variables $q_n$, $n \geq 0$, and $x_n$, $n > 0$, which are completely decoupled. For simplicity, we drop all the modes $x_n$ (which can be treated like their cosine counterparts $q_n$ for $n > 0$) and $q_0$, and continue with the infinite set $\{q_n,\ n > 0\}$. The variable $q_0$ is dropped just to avoid introducing a special treatment in the case $s(t) = 0$, since $n = 0$ then corresponds to a zero frequency oscillator, or a free particle, instead of a regular harmonic oscillator. In any case, it describes a single degree of freedom, which cannot affect the considered matters of unitary implementation. The equations of motion for the modes are
$$\ddot q_n + [n^2 + s(t)]\, q_n = 0.$$
The corresponding Hamiltonian equations are
$$\dot q_n = p_n, \qquad \dot p_n = -[n^2 + s(t)]\, q_n,$$
where $p_n$ is the momentum canonically conjugate to $q_n$, i.e. $\{q_n, p_{n'}\} = \delta_{nn'}$. There are many representations of the CCRs satisfied by the infinite set of pairs $\{(q_n, p_n),\ n > 0\}$, or of the associated Weyl algebra. For instance, every sequence $\{\mu_n,\ n > 0\}$ of (quasi-invariant) probability measures on $\mathbb{R}$ gives a representation, since it defines a regular product measure on the set of all sequences $(q_1, q_2, \dots)$, thus providing a Schrödinger type of quantization, in the Hilbert space of square integrable functions on the configuration space. What is not available, however, is the straightforward generalization of the usual representation in finite dimensions obtained from the Lebesgue measure $dq$, since no mathematical sense can be made of the formal infinite product $\prod_{n=1}^{\infty} dq_n$. In this context, Fock representations of the Weyl relations are given by (normalized) Gaussian measures, which still make perfect sense in infinite dimensions. Even after restricting our attention to Gaussian measures, there are endless possibilities, and many of them lead to inequivalent representations.
In the present case, a particularly important measure on our infinite dimensional configuration space is the Gaussian product measure
$$d\mu_0 = \bigotimes_{n > 0} \sqrt{\frac{n}{\pi}}\, e^{-n q_n^2}\, dq_n,$$
which is associated with the quantum operators
$$\hat q_n \Psi = q_n \Psi, \qquad \hat p_n \Psi = -i\,\frac{\partial \Psi}{\partial q_n} + i\, n\, q_n\, \Psi.$$
This defines a Fock representation, with a complex structure $J_0$ that is invariant under the spatial symmetries and corresponds, mode by mode, to the free massless choice of annihilation and creation variables.
The classical evolution decouples the modes, and for each $n$ it mixes the annihilation and creation type variables associated with $J_0$ through Bogoliubov coefficients,
$$a_n(t) = \alpha_n(t, t_0)\, a_n(t_0) + \beta_n(t, t_0)\, \bar a_n(t_0), \qquad (50)$$
and one can show that these coefficients satisfy $\sum_n |\beta_n(t, t_0)|^2 < \infty$ for all $t$. This is precisely the necessary and sufficient condition for unitary implementability of the dynamics in the $J_0$ representation. Thus, there is a Fock representation, defined by a complex structure that remains invariant under the spatial symmetries, such that a unitary quantum dynamics can be achieved.

The proof that a quantization with the above characteristics is unique goes as follows. To begin with, any other invariant (under spatial isometries) complex structure $J$ is related to $J_0$ by $J = KJ_0K^{-1}$, where $K$ is a symplectic transformation given by a block diagonal matrix with $2 \times 2$ blocks of the form
$$\begin{pmatrix} \kappa_n & \lambda_n \\ \bar\lambda_n & \bar\kappa_n \end{pmatrix}, \qquad |\kappa_n|^2 - |\lambda_n|^2 = 1, \quad \forall n > 0. \qquad (53)$$
Suppose now that the dynamics is unitary in the Fock representation defined by $J$. It turns out that this is equivalent to the unitary implementability, in the $J_0$ representation, of a modified dynamics, obtained precisely by applying the transformation $K$ to the canonical transformations corresponding to time evolution. A simple computation shows that, for this modified dynamics, the coefficients $\beta_n$ in Equation (50) are replaced with
$$\beta_n^J(t, t_0) = 2i\,\kappa_n \lambda_n\, \mathrm{Im}[\alpha_n(t, t_0)] + \kappa_n^2\, \beta_n(t, t_0) - \lambda_n^2\, \bar\beta_n(t, t_0),$$
where Im denotes the imaginary part. Then it follows from the hypothesis of unitary dynamics that the following condition holds:
$$\sum_n |\beta_n^J(t, t_0)|^2 < \infty. \qquad (55)$$
Now, a detailed asymptotic analysis [49,50] shows that condition (55) implies that
$$\sum_n |\lambda_n|^2 < \infty.$$
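The square-summability of the $\beta_n$ for the Gowdy mass term can be illustrated numerically (a sketch: we evolve each mode with $s(t) = 1/(4t^2)$ and extract the Bogoliubov coefficients with respect to the free $J_0$ variables $a_n = (n q_n + i p_n)/\sqrt{2n}$; times and tolerances are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

s = lambda t: 1.0 / (4.0 * t**2)        # Gowdy T^3 mass term
t0, t1 = 1.0, 2.0                       # arbitrary initial and final times

def beta_n(n):
    # initial data corresponding to a_n(t0) = 1 and conj(a_n)(t0) = 0
    q0 = 1.0 / np.sqrt(2.0 * n)
    p0 = -1j * np.sqrt(n / 2.0)
    rhs = lambda t, y: [y[1], -(n**2 + s(t)) * y[0]]
    sol = solve_ivp(rhs, (t0, t1), [q0 + 0j, p0], rtol=1e-10, atol=1e-12)
    qf, pf = sol.y[0, -1], sol.y[1, -1]
    alpha = (n * qf + 1j * pf) / np.sqrt(2.0 * n)
    beta = np.conj((n * qf - 1j * pf) / np.sqrt(2.0 * n))
    assert abs(abs(alpha)**2 - abs(beta)**2 - 1.0) < 1e-6  # symplecticity
    return beta

betas = np.array([beta_n(n) for n in range(1, 60)])
print(np.abs(betas[:5]))                # |beta_n| falls off rapidly with n
print(np.sum(np.abs(betas)**2))         # partial sums of sum_n |beta_n|^2 converge
```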
Given the relation between $J$ and $J_0$, the condition $\sum_n |\lambda_n|^2 < \infty$ guarantees precisely that the operator $J - J_0$ is of the Hilbert-Schmidt type, i.e. that the Fock representations defined by $J$ and $J_0$ are unitarily equivalent.
"Mathematics"
] |
Why Don't All Infants Have Bifidobacteria in Their Stool?
Members of the genus Bifidobacterium are abundant in the stool of most human infants during the initial exclusively milk-fed period of life, especially at an age of 2–3 months (Harmsen et al., 2000; Favier et al., 2002; Mariat et al., 2009; Coppa et al., 2011; Turroni et al., 2012; Yatsunenko et al., 2012; Tannock et al., 2013; Barrett et al., 2015). Bifidobacteria dominate the stool microbiota regardless of whether the infants are fed human milk or formula based on ruminant milk (cow or goat). However, bifidobacteria have about 20% higher relative abundances in human milk-fed compared to formula-fed babies (Tannock et al., 2013). The greater abundance of bifidobacteria in human-milk-fed infants can, at least in part, be explained by the fact that bifidobacterial species that are enriched in the infant bowel can utilize Human Milk Oligosaccharides (HMO) or their components as growth substrates (Sela et al., 2008; LoCascio et al., 2010; Garrido et al., 2013). It could be anticipated, therefore, that bifidobacteria would be detectable in the stool microbiota of every child nourished at the breast because of the supply of appropriate growth substrates. This expectation is not borne out completely because a proportion of infants have very low abundance or undetectable bifidobacteria as members of the fecal microbiota regardless of breast milk or formula feeding (Young et al., 2004; Gore et al., 2008; Tannock et al., 2013). Antibiotics had not been administered to these infants. How then can the absence of bifidobacteria be explained?
A GROWTH SUBSTRATE DEFICIT?
The "bifidobacteria-negative" babies have been detected in both human milk and formula-fed infants. Therefore, a bacterial growth substrate effect seems unlikely. While human milk is rich in HMO and ruminant milk lacks these complex molecules (although simpler forms such as sialylated-lactose are present in very small amounts), bifidobacteria are still the most abundant taxon in the feces of infants fed formula un-supplemented with galacto-or fructo-oligosaccharides (Tannock et al., 2013). In this case, lactose and/or glycoproteins and glycolipids are probable growth substrates for bifidobacteria (Turroni et al., 2010;Bottacini et al., 2014;O'Callaghan et al., 2015) in the bowel of exclusively milk fed infants. There is, however, a need to support genomic analysis of bifidobacteria with culture-based investigations of bifidobacterial nutrition based on substrates present in the bowel of exclusively milk-fed babies (other than HMO).
LACK OF SENSITIVITY OF BIFIDOBACTERIAL DETECTION METHODS?
An obvious reason for bifidobacteria-negative feces is that the detection methods lack sufficient sensitivity. Culture-based methods usually have a lower detection limit of 1 × 10^3 cells per gram; fluorescent in situ hybridization (FISH) of 1 × 10^6 or 10^7 per gram (manual or digital counts, respectively), or about 4 × 10^4 by flow cytometry; and denaturing gradient gel electrophoresis of PCR amplicons of about 1 × 10^5 to 10^6 cells (Welling et al., 1997; Jansen et al., 1999; Zoetendal et al., 2001, 2002), or 1 × 10^4 using internal transcribed spacer targets (Milani et al., 2014). High throughput DNA sequencing methods, such as Illumina, generate tens of thousands of 16S rRNA gene sequences per DNA sample, but there may be several hundred OTUs per sample. Thus, taxa present in very low abundance could be missed. However, reference to rarefaction curves (alpha diversity) during sequence analysis will show whether coverage of the microbiota is near complete or not. Therefore, while lack of sufficient sensitivity of detection methods remains a possibility, it probably does not provide the total explanation.
BIFIDOBACTERIAL POPULATIONS RISE AND FALL FROM DAY TO DAY?
Most fecal microbiota studies examine a single fecal sample from each participating individual. Comprehensive temporal studies of the fecal microbiota to determine day-to-day variations in composition have not been reported. It is possible that bifidobacteria are present in the feces of all children during early life but that, on some days, the bifidobacterial population falls to undetectable levels. Populations of bifidobacteria in the feces of some adults without diseases are dynamic in terms of strain composition, so there is some support for a concept of temporal instability in the bifidobacterial population of the microbiota (McCartney et al., 1996). Figure 1A shows data from feces collected at intervals from infants during the first 12 weeks of life. In the example, fluctuations in the abundances of bifidobacteria were seen, varying from very low abundance to absence, in feces of individual children. Strikingly, bifidobacteria were not detected in any of the fecal samples of one child. Therefore, bifidobacteria-free infants do seem to be a real phenomenon.
THE WINDOW OF INFECTIVITY (OPPORTUNITY/COLONIZATION) WAS MISSED?
A window of opportunity is a short period during which an otherwise unattainable opportunity exists; after the window closes, the opportunity ceases to exist. Caufield was the first to describe a "window of infectivity" in the acquisition of commensal bacteria, using Streptococcus mutans in the oral cavity of children as his example (Caufield et al., 1993; Li and Caufield, 1995). This bacterial species is associated with dental plaque, so the window of infectivity coincided with the eruption of the first molars; prior to this, a habitat for S. mutans is not available in the oral cavity of children. The Caufield hypothesis reminds us that many factors have to coincide to favor the establishment of a commensal in a body site. Cesarean-delivered babies have lower prevalences of bifidobacteria in their feces in early life (Figure 1B). By analogy to Caufield's studies, this probably reflects a lack of favorable opportunities for bifidobacteria to colonize the bowel relative to the vaginal birth process. Notably, we found that 36% of cesarean-delivered babies lacked bifidobacteria, whereas 18% of vaginally delivered infants were bifidobacteria-free at 2 months of age (Tannock et al., 2013).
OTHER TAXA REPLACE BIFIDOBACTERIA IN SOME BABIES?
If bifidobacteria have not colonized the bowel of certain infants, they are likely to be replaced by other taxa that have the requisite metabolic properties to fill the vacant ecological niche. In a study of the fecal microbiotas of Australian babies that were breast milk- or formula-fed, we compared the relative abundances of bacterial taxa in infants with very low (<10%) or higher (>10%) bifidobacterial content (Tannock et al., 2013). Analysis of the compositions of these microbiotas showed that when Bifidobacteriaceae abundance was low, Lachnospiraceae abundances tended to be greater in babies in all dietary groups (Figures 1C-E). There was also a tendency for Erysipelotrichaceae abundances to be greater in formula-fed babies with low bifidobacterial abundances, which was much more evident in goat milk-fed infants. These observations suggest that, yes, other taxa might replace bifidobacteria in the fecal microbiota of some children.
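The low- versus high-bifidobacteria comparison described above can be reproduced with a short sketch (the file and column names are hypothetical; the <10%/>10% split and the compared families follow the text):

```python
# Sketch: split infants into low/high bifidobacterial groups and compare
# family-level relative abundances across diet groups, as described above.
import pandas as pd

df = pd.read_csv("family_relative_abundance.csv")  # one row per infant (assumed)
df["bif_group"] = pd.cut(df["Bifidobacteriaceae"],
                         bins=[0, 10, 100], labels=["low (<10%)", "high (>10%)"])
summary = (df.groupby(["diet", "bif_group"], observed=True)
             [["Lachnospiraceae", "Erysipelotrichaceae"]]
             .median())
print(summary)
```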
WHAT ARE THE CONSEQUENCES OF LACKING BIFIDOBACTERIA IN THE BOWEL?
The absence of bifidobacteria in the bowel may be detrimental to infant development. The curious phenomenon whereby mother's milk contains substances that are not used in the nutrition of the offspring, but which fertilize bifidobacterial growth, is unique to humans, and there must be a good reason for it. Enriching bifidobacterial populations in the bowel tends to minimize the abundance of other bacterial species, so a competitive exclusion function could be ascribed to HMO. Additionally, HMO may act as "decoys" in the bowel by binding to pathogens (bacteria and viruses) and their toxins, thus limiting contact with mucosal surfaces (Kunz et al., 2000). The large diversity of HMO structures known to occur in human milk suggests a correspondingly large diversity of decoy functions (Pacheco et al., 2015).

Irrespective of where in the world babies live, their gut microbiomes are enriched in genes involved in the de novo biosynthesis of folate (Yatsunenko et al., 2012); in contrast, the microbiome of adults favors synthesis of another B vitamin, cobalamin. Folate synthesis is an attribute of bifidobacteria and folate can be absorbed from the large bowel, so enrichment of bifidobacteria in the infant bowel may provide an important contribution to infant nutrition (Aufreiter et al., 2009; D'Aimmo et al., 2012; Lakoff et al., 2014). Folate functions as a coenzyme or co-substrate in single-carbon transfers in the synthesis of nucleic acids and the metabolism of amino acids. One of the most important folate-dependent reactions is the conversion of homocysteine to methionine in the synthesis of S-adenosyl-methionine, an important methyl donor. Another folate-dependent reaction, the methylation of deoxyuridylate to thymidylate in the formation of DNA, is required for proper cell division (Crider et al., 2012). Neonatal nutrition could, indeed, be the very important reason for the HMO-bifidobacteria-infant paradigm.

The foundation of brain structure and function is set early in life through genetic, biological, and psychosocial influences. The rate of neonatal brain growth exceeds that of any other organ or body tissue (Wang, 2012). The infant is born with neurons already formed, but the synaptic connections between these cells are mostly established and elaborated after birth, creating a large nutritional demand for the biosynthesis of gangliosides (Svennerholm et al., 1989). Nutrition in early life affects brain developmental processes, including cognition (Uauy and Peirano, 1999; Uauy et al., 2001). While long-chain fatty acids (such as docosahexaenoic acid) have been the focus of much of the research in this field, tantalizing evidence now indicates that sialic acid (N-acetyl-neuraminic acid), a 9-carbon carbohydrate, is also an essential nutrient for optimal brain development and cognition (Gibson, 1999; Meldrum et al., 2012; Wang, 2012). Strikingly, cortical tissue from the human brain contains up to 4 times more sialic acid than that of other mammals tested (Wang et al., 1998). Moreover, the sialic acid concentration in the brain of breast milk-fed babies is higher than that of formula-fed infants (Wang et al., 2003). These facts correlate with the unique biochemistry of human milk and the unique bacteriology of the infant bowel. Intriguingly, Ruhaak et al. (2014) have reported the detection of sialylated oligosaccharides (3′-sialyl-lactose, 6′-sialyl-lactose, 3′-sialyl-lactosamine, 6′-sialyl-lactosamine), which might result from the hydrolysis of HMO, in the blood of human infants.
Thus, bifidobacterial biochemistry in the bowel may have extra-intestinal, nutritional influences important in brain development. However, perhaps the taxa that are abundant in the bowel of infants in the absence of bifidobacteria can carry out these same functions? This interesting possibility remains to be investigated.
BABIES WITHOUT BIFIDOBACTERIA ARE IMPORTANT SOURCES OF KNOWLEDGE?
René Dubos explored in a number of books the interplay between environmental forces and the physical, mental, and spiritual development of humankind. His article published in the journal Pediatrics entitled "Biological Freudianism: lasting effects of early environmental influences" encapsulated this theme (Dubos et al., 1966). Drawing on the results of experiments conducted with specific-pathogen-free mice, the authors concluded that "From all points of view, the child is truly the father of the man, and for this reason we need to develop an experimental science that might be called biological Freudianism. Socially and individually the response of human beings to the conditions of the present is always conditioned by the biological remembrance of things past." Biological Freudianism is clearly relevant to the concept that the first 1000 days, between conception and the child's second birthday, offer a unique window of opportunity to shape healthier and more prosperous futures. Nutrition during this 1000-day window can have a profound impact on a child's ability to grow and learn. The influences of the microbiota on the development of the child during early life are potentially very important, and much longitudinal research is required to clarify whether there are continuing, medically important impacts of the microbiota, including the bifidobacteria, that last throughout the human lifetime. Comparisons of the cognitive development and general health status of children that had been bifidobacteria-free, and of ex-bifidobacteria-free children intentionally exposed to bifidobacteria, in a longitudinal study extending perhaps 10 or 20 years, would tell us whether these bacteria optimize short- and/or long-term human development and health.
AUTHOR CONTRIBUTIONS
GT wrote the article. BL, PL, and KW provided data described in the article.
OBTAINING ANISOTROPIC HEXAFERRITES FOR THE BASE LAYERS OF MICROSTRIP SHF DEVICES BY RADIATION-THERMAL SINTERING
A technology for obtaining anisotropic polycrystalline hexagonal ferrites by radiation-thermal sintering (RTS) is developed. Using RTS, we obtained samples of the anisotropic polycrystalline hexaferrites BaFe₁₂O₁₉ and BaFe₁₂₋ₓAlₓO₁₉ (with Ni, Ti, and Mn additives), and SrFe₁₂O₁₉ and SrFe₁₂₋ₓAlₓO₁₉ (with Ca and Si additives), for the base layers of microstrip ferrite decoupling devices operating in the short-wave part of the millimeter wavelength range. The essence of the RTS technology is the preparation of raw billets by classical ceramic technology, with pressing in a strong magnetic field, followed by sintering in a beam of fast electrons. The use of different compositions and alloying additives makes it possible to control the electromagnetic and magnetic properties of the hexaferrites. The advantages of the RTS technology are high energy efficiency, high values of the operating characteristics of the obtained material, and a short sintering time. It was established that RTS may prove to be an alternative technology for obtaining high-quality polycrystalline M-type hexaferrites of elementary and complex substituted compositions. Owing to low energy and time costs and high performance parameters, the RTS technology for anisotropic hexaferrites may find wide application in the production of permanent anisotropic magnets and various miniature SHF devices.
Introduction
The method of radiation-thermal sintering, which consists in heating the original components by beams of high-energy electrons without involving external heat sources, has attracted increasing interest from researchers in recent years [1].
The advantages of the radiation-thermal method (simultaneous exposure to radiation and temperature) are the rapidity and low inertia of heating, the absence of contact between the heated body and the heater, and the uniformity of heating throughout the entire volume of the material [2]. The electron accelerators available today, covering the range E = 0.01-13 MeV, make it possible to heat solids up to their melting temperatures [3].
To improve the properties of ferrites, it is necessary to obtain single-phase, temperature-stable compositions with low dielectric and magnetic losses [4]. The properties of ferrites depend not only on their chemical and phase compositions but also on the firing and cooling regimes used during their production [5]. Porosity, violation of stoichiometry, the presence of second phases, or incomplete progress of the ferritization reaction reduce the chemical and structural homogeneity of the material. As a result, the magnetic anisotropy of the ferrites is distorted, which worsens their magnetic characteristics and reproducibility.
The results obtained in the present work made it possible to develop a technology for obtaining polycrystalline hexagonal ferrites by radiation-thermal sintering (RTS). Such hexagonal ferrites are used for the base layers of subminiature microstrip ferrite decoupling devices operating in the short-wave part of the centimeter and millimeter wavelength ranges.
Literature review and problem statement
Among the materials obtained by ceramic technology, articles made of polycrystalline ferrites, which are compounds of iron oxide with the oxides of other metals, are widespread [6]. Possessing a unique combination of magnetic, electrical, and other properties, they belong to the class of electronic materials, which ensures their wide application in the areas of science and technology that determine technical progress [7, 8].
The properties of ferrites are determined not only by their chemical and phase composition but also by the technology of their production, especially by the regimes of ferrite formation and sintering [9].
In recent years, the method of radiation-thermal sintering with the aid of powerful flows of accelerated particles has been successfully developed for obtaining various ceramic materials (alkali-halide crystals, high-strength ceramics, oxides, steels, hard alloys, ferrospinels, ferrogarnets, hexaferrites) [10, 11].
A large number of papers [6, 9-11] is devoted to the processes of ferrite formation and the development of magnetic properties in Li- and Li-substituted (Li-Zn, Li-Ti, Li-Ti-Zn) ferrospinels, which are the basis of a large group of thermostable SHF ferrites with a rectangular hysteresis loop, as well as a promising material for the cathodes of lithium batteries.
Paper [12] presents an experimental study of high-temperature diffusion and radiation-thermal effects in alkali-halide crystals (KBr and LiF) heated by high-intensity electron beams with energies from 0.01 to 1-2 MeV. An acceleration of the diffusion mass transfer of metal ions in alkali-halide crystals was discovered during radiation-thermal treatment by electrons in the range E = 1.4-2 MeV.
The difference between the processes of oxygen diffusion in polycrystalline ferrites during thermal (T) and radiation-thermal (RT) firing is connected to a change in the defect state of the ferrite, resulting from the excitation of the electron and nuclear subsystems of the lattice caused by irradiation [12].
Heating the samples by an electron beam makes it possible to obtain oxide ceramic materials with a uniform phase composition and low elastic stresses, which increases their operating characteristics. Studies carried out on ferrospinels (Mg-, Ni-, Li-, and Li-substituted) synthesized in a beam of accelerated electrons convincingly demonstrate an increase in the diffusion rate of the original components, which leads to more efficient formation of magnetic properties under RT-firing conditions in comparison with T-firing.
MnZn- and NiZn-ferrites were previously obtained by radiation-thermal sintering using powerful streams of accelerated particles [5, 6]. However, despite the large volume of information in this area, we did not find in the open sources any data on a technology for obtaining polycrystalline hexagonal ferrites by radiation-thermal sintering.
Aim and tasks of the study
The purpose of the work was to obtain anisotropic polycrystalline hexagonal ferrites for the base layers of microstrip SHF devices of the millimeter wavelength range by radiation-thermal sintering.
To accomplish the set aim, it was necessary to solve the following tasks:
- to develop a technology for obtaining anisotropic polycrystalline hexagonal ferrites by radiation-thermal sintering on the basis of classical ceramic technology;
- based on the developed technology, to obtain samples of anisotropic polycrystalline hexagonal ferrites by radiation-thermal sintering and to determine their magnetic parameters.
1. Obtaining the objects of research
The technology for fabricating billets of polycrystalline hexagonal ferrites (HF) of barium (HB) and strontium (HS) was based on the principles of classical ceramic technology. The hexaferrite composition was formed by mixing the original components by wet grinding in a ball mill at a charge : spheres : deionized water ratio of 1:2:1 for 24 hours (Fig. 1). To increase the inner field of crystallographic anisotropy H_A, we applied substitution by Al³⁺ ions, which replace Fe³⁺ iron ions in the crystal lattice of the hexaferrite. Titanium and nickel additives were introduced to reduce the temperature drift of the anisotropy field from 10% to 4% of the mean value over the temperature range from -60 to +85 °C. The addition of manganese increases the electrical resistance of the ferrites and decreases dielectric losses by binding the bivalent iron ions, which relax rapidly and serve as sources of losses. Silicon and calcium additives are used for strontium hexaferrite: introducing silicon atoms restrains crystal growth in the liquid phase, while the addition of calcium improves the magnetic parameters of the material by decreasing magnetic losses. Taking into account the milling yield from the metallic spheres, and the fact that the optimum combination of electromagnetic properties is observed in barium hexaferrite samples (similarly for strontium HF) whose composition differs from the stoichiometric one (BaO·6Fe₂O₃) by a reduced content of iron oxide Fe₂O₃, we examined ferrites with the chemical formula BaO·(5.5-5.75)Fe₂O₃.

After mixing, the charge was poured into a steel cuvette and dried in a drying cabinet at T = 150 °C until completely dry. The dried charge was sifted through a sieve, poured into a nickel cuvette, and placed into the furnace, where ferritization was conducted. The curing time was 5 hours at a temperature of 1150 °C for the strontium and 1250 °C for the barium hexaferrites. After ferritization, the charge was again wet-ground in the ball mill at a charge : spheres : deionized water ratio of 1:2:1 for 96 h; this grinding duration yielded a powder with an average particle size of the order of 0.3-0.5 µm. The charge in the china drum was washed with deionized water and poured into a settling tank. The obtained suspension of hexaferrite powder was kept for three days, after which the excess water was removed. The humidity of the suspension during pressing amounted to 30-35%.
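For illustration, the iron-deficient composition BaO·nFe₂O₃ quoted above translates into batch weight fractions as in the following sketch (an assumption of this sketch is that the batch is expressed directly in the oxides; the paper does not state the precursor compounds):

```python
# Sketch: weight fractions of BaO and Fe2O3 for BaO·nFe2O3 with the
# iron-deficient n = 5.5-5.75 examined in the paper (n = 6.0 for reference).
M_BAO, M_FE2O3 = 153.33, 159.69  # molar masses, g/mol

def weight_fractions(n: float) -> tuple[float, float]:
    total = M_BAO + n * M_FE2O3
    return M_BAO / total, n * M_FE2O3 / total

for n in (5.5, 5.75, 6.0):
    w_bao, w_fe = weight_fractions(n)
    print(f"n = {n:4.2f}: BaO {w_bao:.1%}, Fe2O3 {w_fe:.1%}")
```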
For the fabrication of anisotropic hexaferrite billets, pressing was conducted in a magnetic field applied along the pressing direction (Fig. 2).
RTS of the samples was conducted using fast electrons on the ILU-6 pulsed linear accelerator (Fig. 3, 4) built by the Budker Institute of Nuclear Physics SB RAS (Russia); the electron energy was Ee = 2.5 MeV.

Fig. 4. Schematic of the pulsed linear accelerator ILU-6: 1 - vacuum tank; 2 - resonator; 3 - magnetic-discharge pump of the NMD type; 4 - electron injector; 5 - outlet device; 6 - measuring loop; 7 - anode of the HF generator tube; 8 - support of the HF power input loop; 9 - HF power input loop; 10 - cathode stub; 11 - input of the -7 kV displacement voltage; 12 - supports of the lower half of the resonator
2. Techniques of the experimental studies
X-ray phase and X-ray diffraction analyses of the examined objects were carried out on a DRON-8 X-ray diffractometer (Russia) (Fig. 5).
For the X-ray phase analysis, we used the Kα emission of a tube with an iron anode (operating current 25 mA, voltage 25 kV); the emission wavelength is 0.193728 nm. A Mn filter was used when recording the diffraction patterns of the samples. Focusing was performed according to the Bragg-Brentano scheme with two Soller slits. The measurements were carried out at room temperature.
Fig. 5. Multifunctional X-ray diffractometer DRON-8
Intense peaks in the diffractogram were identified with the help of the PDWin 4.0 software (NPO "Burevestnik", Russia). The X-ray phase analysis of the samples came down to determining a series of interplanar distances and comparing them with reference data of the powder diffraction database, which is based on the PDF-2 card index.
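The peak-matching step can be illustrated with a short sketch (the peak positions and reference d-values below are made up for illustration): each diffraction angle 2θ is converted to an interplanar distance via Bragg's law, d = λ/(2 sin θ), and compared with reference values.

```python
# Sketch: convert 2-theta peak positions to d-spacings via Bragg's law
# and match them against reference values, as in the PDF-2 comparison
# described above. All numbers are illustrative.
import math

WAVELENGTH = 0.193728  # nm, as stated in the text

def d_spacing(two_theta_deg: float) -> float:
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH / (2.0 * math.sin(theta))

reference_d = [0.2944, 0.2777, 0.2626, 0.2420]  # nm, hypothetical card values

for two_theta in (38.4, 40.8, 43.3, 47.2):      # hypothetical measured peaks
    d = d_spacing(two_theta)
    match = min(reference_d, key=lambda ref: abs(ref - d))
    print(f"2theta = {two_theta:5.1f} deg -> d = {d:.4f} nm (closest ref: {match} nm)")
```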
Magnetic characteristics of the examined objects were recorded at room temperature at the vibration magnetometer of the sample VSM-250 made by Lake Shore Cryotronics, Inc. (USA) (Fig. 6).
Results of research and discussion
The technological scheme for obtaining polycrystalline hexagonal ferrites by the RTS method is represented in Fig. 7. For pressing the samples in a magnetic field, a special press was designed, equipped with two coils (an electromagnet) that create the magnetic field. The upper coil accepts the press plunger with a tip fixed to it, whose form concentrates the magnetic field. In the lower coil is the base for the mold, with an opening for water discharge that ends in a coupling for fastening a hose connected through a trap to a mechanical vacuum pump. The power source feeding the electromagnet provides a direct current of up to 10 A at a voltage of up to 20 V.
The uniformity of the magnetic field is an important factor, since its distribution directly affects both the properties of the pressed material and their uniformity. The magnetic field strength in the working gap was measured with a teslameter based on the Hall effect. A graphic representation of the measured data is presented in Fig. 8.
The measurements demonstrated that the magnetic field strength during pressing was approximately 10 kOe. On the assumption that a quality magnetic texture (alignment of the magnetic moments of most single-domain particles in one direction) requires a magnetic field of magnitude 3·H_C (where H_C is the coercive force), and H_C ≈ 3 kOe for barium and strontium hexaferrites, a field of 10 kOe must be sufficient for creating the anisotropic material. Fig. 8 also demonstrates that the field is distributed fairly uniformly, which should ensure uniform properties of the pressed material. The lower value of the field at the boundary point may possibly be explained by screening from the housing of the press or by the physical properties of the coils.

Fig. 6. Vibration magnetometer VSM-250
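The 3·H_C rule of thumb quoted above checks out numerically, as in this minimal sketch (the values are those stated in the text):

```python
# Sketch: numerical check of the 3*Hc texture criterion described above.
H_C_KOE = 3.0          # coercive force of Ba/Sr hexaferrite powder, kOe
H_APPLIED_KOE = 10.0   # measured field in the working gap, kOe

required = 3.0 * H_C_KOE
print(f"required >= {required} kOe, applied = {H_APPLIED_KOE} kOe, "
      f"margin = {H_APPLIED_KOE - required:+.1f} kOe")
```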
For obtaining samples from hexaferrites, we used a mold with a matrix made of a nonmagnetic material (brass) and punches made of mild steel. The scheme of the mold is depicted in Fig. 9.
This set-up makes it possible to create a magnetic field in the gap between the punches, where the textured charge is located. The lower punch has openings for water discharge through felt filters located on it. Two molds, with diameters of 50 and 70 mm, were used.
To orient the particles in the magnetic field, it is necessary to create conditions that allow a particle to pivot fairly freely around its axis; this is achieved by diluting the charge with distilled water, which is removed after orientation during pressing.
The process of pressing consists of the following stages:
1) loading the suspension of ferrite powder into the mold;
2) settling of the suspension in the magnetic field to orient the powder particles by mechanically turning their axes of easy magnetization along the direction of the applied field;
3) preliminary discharge of moisture with the aid of the backing pump through the punch with filtering elements;
4) pressing of the suspension with the magnetic field active and continuous discharge of the freed moisture;
5) pressing out the billet.
To maintain pressing in the magnetic field, the top punch of the press descends into contact with the top punch of the mold, decreasing the gap between the coils and thus making it possible to use the magnetic field more effectively. The billet is then exposed to the magnetic field and, once the necessary pressure is applied, held still for a while.
The magnitude of the magnetizing field during pressing was 10 kOe.
The replacement of conventional thermal sintering with radiation-thermal sintering (RTS) in a beam of fast electrons is motivated by the substantially lower energy consumption of the latter and by the higher quality of sintering.
At RTS, in addition to the temperature factor, there is another essential factor: radiation-stimulated diffusion. Owing to this, sintering takes place at lower temperatures and in a shorter time.
The temperature in the sample during RTS was controlled by varying the frequency of the electron beam, its density, and the exposure time.
Tables 1-4 display the RTS regimes for hexaferrites of different compositions.
The X-ray studies confirmed (based on the peaks identified in the diffractograms with the aid of the PDWin 4.0 software) that radiation-thermal sintering produced polycrystals of anisotropic hexagonal ferrites of barium and strontium, both pure and substituted. Characteristic X-ray diffractograms of the hexaferrites BaFe₁₂O₁₉ and BaFe₁₂₋ₓAlₓO₁₉ (with Ni, Ti, and Mn additives) are represented in Fig. 10, 11. The measured magnetic parameters are given in Table 5.
As can be seen from Table 5, the magnetic properties of the hexagonal ferrites obtained by the RTS method equal or exceed the magnetic characteristics of hexaferrites obtained by traditional ceramic technology [13].
Thus, the RTS technology may be used effectively for obtaining not only spinel ferrites [5, 6] but also hexagonal ferrites. The advantages of RTS are a significant reduction in energy costs, high values of the performance characteristics of the material [13], and the short duration of the process. The shortcomings of this technology include the substantial financial investment required at the initial stage due to the high price of the electron accelerator. Owing to its high energy efficiency and the short duration of sintering, RTS may find a worthy place among the existing methods of obtaining polycrystalline hexaferrites.
Acknowledgement
This work was carried out at NUST "MISiS" with financial support from the Ministry of Education and Science of the Russian Federation, within the framework of subsidy agreement No. 14.575.21.0030 of 27 June 2014 (RFMEFI57514X0030).
High-Throughput Analysis of Water-Soluble Forms of Choline and Related Metabolites in Human Milk by UPLC-MS/MS and Its Application
Choline and related metabolites are key factors in many metabolic processes, and insufficient supply can adversely affect reproduction and fetal development. Choline status is mainly regulated by intake, and human milk is the only choline source for exclusively breastfed infants. Further, maternal status, genotype, and phenotype, as well as infant outcomes, have been related to milk choline concentrations. In order to enable the rapid assessment of choline intake for exclusively breastfed infants and to further investigate the associations between milk choline and maternal and infant status and other outcomes, we have developed a simplified method for the simultaneous analysis of human milk choline, glycerophosphocholine, phosphocholine, and the less abundant related metabolites betaine, carnitine, creatinine, dimethylglycine (DMG), methionine, and trimethylamine N-oxide (TMAO) using ultraperformance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). These analytes have milk concentrations ranging over 3 orders of magnitude. Unlike other recently described LC-based methods, our approach does not require an ion-pairing reagent or high concentrations of solvent modifiers for successful analyte separation, and thus avoids signal loss and potential permanent contamination. Milk samples (10 µl) were diluted (1:80) in water:methanol (1:4, v:v) and filtered prior to analysis with an optimized gradient of 0.1% aqueous propionic acid and acetonitrile, allowing efficient separation and removal of contaminants. Recovery rates ranged from 108.0 to 130.9% (inter-day variation: 3.3-9.6%), and matrix effects (MEs) from 54.1 to 114.3%. MEs were greater for carnitine, creatinine, and TMAO at lower dilution (1:40; p < 0.035 for all), indicating concentration-dependent ion suppression. Milk from Brazilian women (2-8, 28-50, and 88-119 days postpartum; n_total = 53) revealed increasing concentrations throughout lactation for glycerophosphocholine, DMG, and methionine, while carnitine decreased. Choline and phosphocholine were negatively correlated consistently at all three collection time intervals. The method is suitable for rapid analysis of the water-soluble forms of choline in human milk, as well as previously not captured related metabolites, with minimal sample volume and preparation.
INTRODUCTION
Choline, an essential micronutrient, is a key factor in cellular maintenance and growth, including brain function, liver health, reproduction, and fetal and infant development (1-4). As a methyl donor, choline interacts with other metabolites of methyl group metabolism, including betaine, methionine, dimethylglycine (DMG), glycine, and the folate pool (5) (Figure 1). Choline, betaine, and carnitine are precursors of trimethylamine N-oxide (TMAO), an osmolyte formed by the gut microbiota and associated with greater risk of cardiovascular disease, adverse thrombotic events, atherosclerosis, and colorectal cancer among postmenopausal women (11-13). Betaine is also a hepatic methyl donor and an important osmolyte to protect the cells of the renal medulla, and DMG in cord blood is positively correlated with birth weight (14).
Dietary choline can be obtained from animal source foods and also from various plants, such as nuts, legumes, or cruciferous vegetables (15). The liver is the main site of choline metabolism, where conversion into phosphatidylcholine (PC) occurs via either the cytidine diphosphate (CDP)-choline pathway or the phosphatidylethanolamine N-methyltransferase (PEMT) pathway (16). The main biological functions of PC include membrane biosynthesis and myelination, lipid metabolism, cell division, and signaling (15, 17). In lactation, maternal plasma choline is elevated, likely due to conservation of intact choline rather than upregulated de novo synthesis by the PEMT pathway, as found in pregnancy (18). These higher maternal choline concentrations are thought to support increased infant needs in the first year of life, during which choline is a key factor for organ growth and membrane biosynthesis, and to ensure efficient choline uptake by the brain and other tissues (15, 19).
Low choline status can affect reproduction and fetal development, leading to higher risk of birth defects, interference with neural tube closure, and poorer cognitive performance (4,20). Choline status is principally maintained by dietary intake (5,21,22). Human milk is therefore the only choline supply for young infants if exclusively breastfed (EBF) as recommended by the World Health Organization (23). Moreover, higher milk choline concentrations in conjunction with higher milk lutein or DHA are related to better recognition memory in EBF infants (24), and milk choline and betaine are affected by maternal total choline intake and genotype (25). Thus, milk choline metabolites could be a useful proxy for infant and maternal status and genotype, as its sampling is less invasive than drawing blood, especially from infants, and there are no volume limitations.
Water-soluble forms represent ~90% of total milk choline, with phosphocholine (PCho) being the most abundant form, followed by glycerophosphocholine (GPCho) and free choline (Cho); the lipophilic forms phosphatidylcholine and sphingomyelin are only minor contributors (26-28). Throughout lactation, milk choline concentrations decline from colostrum to mature milk. Water-soluble forms of choline are considered milk constituents at all lactation stages, while the less abundant fat-soluble forms are associated with mature milk (15, 29). Choline uptake into the mammary epithelium is facilitated by a saturable, energy-dependent transporter system; with elevated maternal choline, a non-saturable transport is activated, from which further choline metabolites can be derived (15).
Besides ¹H-nuclear magnetic resonance (25, 30, 31), chromatographic techniques have emerged as the method of choice for choline analysis in human milk, employing liquid chromatography (LC) coupled with electrochemical detection or mass spectrometry (MS) (18, 26, 32-37), or gas chromatography (GC)-MS after a more complex sample preparation that includes purification by LC with radiodetection followed by hydrolysis of the phosphorylated choline forms (27). Generally, the sample preparation described in these reports included multiple steps, such as various extraction approaches, purification, hydrolysis, or derivatization. The more recently described LC-MS-based methods employ high concentrations of formic acid as a solvent modifier (34), which can reduce the signals of the target analytes (38), or achieve optimal analyte separation using trifluoroacetic acid (TFA), an ion-pairing reagent (26). Ion-pairing reagents, however, tend to remain in the LC system, columns, and MS source, which can lead to signal suppression and reduced analytical column performance (39). Thus, ion-pairing reagents should only be used with dedicated columns and equipment, which can greatly impact costs and instrument availability. The analysis of plasma or serum choline and some of its metabolites has been reported without ion-pairing reagents, but not to the same extent in human milk (11, 40, 41).
Here, we describe the first high-throughput method for simultaneous analysis of human milk Cho, PCho, GPCho, and the previously not simultaneously analyzed related metabolites betaine, carnitine, creatinine, DMG, methionine, and TMAO (Supplementary Figure 1) using ultraperformance LC (UPLC)-MS/MS, requiring only minimal sample volume and preparation and avoiding ion-pairing reagents or high concentrations of solvent modifiers. The optimized method was used to analyze milk samples from Brazilian mothers collected at different stages of lactation.
Two-milliliter amber screw-top vials, deactivated 150-µl glass inserts with plastic springs, and pre-slit PTFE/silicone screw caps were obtained from Waters (Milford, MA, USA); 1.5-ml amber snap-cap centrifuge tubes and Ultrafree-MC-VV centrifugal filters were also used.
Analyte optimization was done by infusing a 100 µg/L solution of each compound in water:MeOH (1:1, v:v) at a flow rate of 7 µl/min with positive ion mode electrospray ionization (ESI). The analytes were detected with multiple reaction monitoring (MRM; Table 1). Curtain gas (20 psi), CAD gas (6 psi), ion spray voltage (4,500 V), turbo gas temperature (550 °C), ion source gases 1 and 2 (40 psi), entrance potential (10 V), and dwell time (20 ms) were identical for all analytes in the positive mode. An integrated two-position switch valve was used to regulate effluent flow into the MS (1.45-6 min) during the analytical run, which allowed the removal of early-eluting endogenous lactose. The efficiency of this removal was further monitored using MRM in negative ion mode ESI: ion spray voltage (−4,500 V) and entrance potential (−10 V). The switch time of 6 min was chosen to allow sufficient flow from the valve to the MS inlet, reducing the potential build-up of sample matrix in the flow path to the MS.
Standard Preparation
All analytes (external standards) and internal standards (54.5-112 µg/L) were prepared in LC-MS-grade water:methanol (80:20, v:v) and stored in amber glass vials at −80 °C. A master mix was prepared in sample diluent (water:MeOH, 1:4, v:v) containing 2,500 µg/L Cho, 5,000 µg/L GPCho, 7,500 µg/L PCho, and 500 µg/L of each of the remaining analytes (betaine, carnitine, creatinine, DMG, methionine, and TMAO). The calibration curve was prepared by further diluting the master mix in sample diluent and adding 10 µl of a prepared internal standard (IS) mix (2 mg/L for GPCho and PCho and 1 mg/L for all remaining analytes). The same amount of IS mix was added to each blank and sample.
Sample Preparation
Five to 10 µl of whole human milk was diluted [dilution factor (dF) = 40 or 80] in sample diluent, mixed, and centrifuged (14,000 rpm, 10 min, 4 °C). Ninety microliters of the diluted sample was transferred into a centrifugal filter and combined with 10 µl of the IS mix. The sample extracts were briefly mixed prior to filtration in a second centrifugation step (6,000 × g, 4 min, 4 °C). The filtered samples were transferred into 2-ml amber LC vials equipped with 150-µl glass inserts for analysis by UPLC-MS/MS as described above.
Method Validation, Linearity, Stability, and Matrix Effects
The validation of the developed method was carried out following Food and Drug Administration guidelines (43). Standard addition experiments at three different levels were carried out in five replicates on three different days using pooled human milk with unknown analyte concentrations. The master mix was used to spike the diluted milk samples with 25, 50, and 75 µg/L (5, 10, and 15 µl) of betaine, carnitine, creatinine, DMG, methionine, and TMAO. Spiking levels (L1, L2, and L3) were 125, 250, and 375 µg/L for Cho; 250, 500, and 750 µg/L for GPCho; and 375, 750, and 1,125 µg/L for PCho. With each standard addition experiment, five non-spiked milk samples were analyzed. To account for the volumetric changes due to the different spiking volumes, all samples were adjusted to the same volume using the diluent as needed. For a quick comparison of the recovery rates of the main analytes Cho, PCho, and GPCho, two additional milk samples were subjected to the described standard addition. Analyte recovery was estimated using the following equation:

Recovery (%) = (C_measured − C_endogenous) / C_added × 100

where C_measured denotes the measured concentration of the spiked sample, C_endogenous the measured concentration of the non-spiked sample, and C_added the theoretically added concentration. Signal-to-noise (S/N) ratios were examined in three analytical runs for the lowest standard curve concentration of each target analyte. The limit of detection (LOD) was defined as S/N > 3, and the limit of quantitation (LOQ) as S/N > 10.
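A minimal sketch of the recovery calculation reconstructed above (the numeric values are illustrative, not measured values from the paper):

```python
# Sketch of the standard-addition recovery calculation defined above.
def recovery_percent(c_measured: float, c_endogenous: float, c_added: float) -> float:
    """Recovery (%) = (C_measured - C_endogenous) / C_added * 100."""
    return (c_measured - c_endogenous) / c_added * 100.0

# Example: pooled milk hypothetically spiked with 50 ug/L betaine (level L2)
print(f"{recovery_percent(c_measured=171.0, c_endogenous=110.0, c_added=50.0):.1f} %")
# -> 122.0 %, i.e., within the 108.0-130.9% range reported
```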
Linearity and reproducibility of the standard curves were evaluated by the inter-day variability (mean and CV) of the slopes, coefficients of correlation (r), coefficients of determination (r²), and back-calculation to the nominal standard concentrations, using eight standard curves from eight different days over 1 month. Analyte stability in milk was monitored in the pooled human milk samples stored at −80 °C and prepared in five runs over 2 months (n = 15).
The matrix effect (ME) was determined using the isotopically labeled IS as described by Matuszewski et al. (44). Neat standards (set 1) at 100 µg/L were prepared in diluent (n = 9), while milk samples at two different dilutions (dF 40 and 80) were used to prepare set 2, by adding the IS at the end of the sample preparation. Each dilution was analyzed in two separate analyses, for a total of four sets. ME was calculated using the following equation:

ME (%) = (peak area in matrix, set 2 / peak area in neat standard, set 1) × 100
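The Matuszewski-style calculation reconstructed above can be sketched as follows (the peak areas are illustrative; ME < 100% indicates ion suppression):

```python
# Sketch of the matrix-effect (ME) calculation; numbers are illustrative.
def matrix_effect_percent(area_matrix: float, area_neat: float) -> float:
    return area_matrix / area_neat * 100.0

# Example: carnitine-d9 at dF 80, hypothetical mean peak areas
me = matrix_effect_percent(area_matrix=5.41e5, area_neat=1.00e6)
print(f"ME = {me:.1f} %")  # 54.1 %, matching the suppression noted for carnitine-d9
```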
Standard Curve and Quality Control
For each validation experiment and for sample analyses, an eight-point calibration curve (betaine, carnitine, creatinine, DMG, methionine, TMAO: 1-500 µg/L; Cho: 5-2,500 µg/L; GPCho: 10-5,000 µg/L; PCho: 15-7,500 µg/L) and a reagent blank (diluent) were prepared in diluent. Since the calibration curve was not diluted, the true analyte concentrations were calculated by adjusting the concentrations obtained from the sample extracts by the dilution factor (dF = 80 or 40). The validated pooled human milk was used as the quality control (QC) with each sample batch and analyzed every 24 samples. A typical batch consisted of a blank, the calibration curve, four QCs, and 82 samples, for a total of 95 samples when using two of the UPLC 48-vial (2-ml) holder racks. Quantification was carried out by the area ratio response to the respective stable-isotope IS to account for extraction efficiency, volumetric changes, and ion suppression. Since no IS for DMG was available in the laboratory, DMG was quantified using betaine-d11.
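The quantification scheme described above amounts to a linear fit of analyte/IS area ratios against standard concentrations, followed by back-calculation with the dilution factor; a minimal sketch with illustrative numbers:

```python
# Sketch (illustrative numbers): IS area-ratio calibration and
# back-calculation to the original milk via the dilution factor.
import numpy as np

# Hypothetical calibration for Cho: concentrations (ug/L) and area ratios
conc = np.array([5, 25, 100, 250, 500, 1000, 1750, 2500], dtype=float)
ratio = np.array([0.004, 0.021, 0.082, 0.203, 0.41, 0.82, 1.44, 2.05])

slope, intercept = np.polyfit(conc, ratio, 1)  # simple unweighted linear fit

def quantify(sample_ratio: float, dilution_factor: int = 80) -> float:
    """Concentration in the original milk, ug/L."""
    extract_conc = (sample_ratio - intercept) / slope
    return extract_conc * dilution_factor

print(f"milk Cho ~ {quantify(0.25):.0f} ug/L")
```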
Data and Statistical Analysis
MultiQuant software (version 3.03; Sciex, Framingham, MA, USA) was used for data processing, analysis, and S/N ratio evaluation. Calculations for method validation (e.g., mean, SD, CV, ME, and analyte recovery) were carried out in Excel 2016 (Microsoft, Redmond, WA, USA). RStudio (version 1.3.959) in conjunction with R statistical software (version 4.0.0; R Foundation for Statistical Computing, Vienna, Austria) was used for statistical analysis and correlation maps. Student's t-test was used to examine differences in MEs by dilution factor. Data normality was evaluated using the Shapiro-Wilk test.
Significant differences in analyte concentrations in milk from the Brazilian mothers by stage of lactation were examined by the Kruskal-Wallis test, followed by Dunn's test adjusted for multiple comparisons by Bonferroni. The Kruskal-Wallis test was also used to test for differences in recovery rates among the different milk samples. Spearman's rank correlation was used to evaluate associations among the target analytes. Correlation strength was classified as weak (ρ < 0.3), moderate (ρ = 0.3-0.5), good (ρ = 0.5-0.7), or strong (ρ > 0.7) (45). p-values < 0.05 were considered significant.
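The statistical workflow above (the authors used R) can be sketched in a few lines, here with illustrative data; Dunn's post-hoc test would require an extra package (e.g., scikit-posthocs) and is omitted:

```python
# Sketch of the group comparison and correlation analysis described above.
from scipy.stats import kruskal, spearmanr

# GPCho concentrations (ug/L) by collection group, illustrative values
group_a = [210, 250, 190, 230, 205]
group_b = [310, 280, 330, 295, 340]
group_c = [400, 385, 420, 390, 410]

h_stat, p_kw = kruskal(group_a, group_b, group_c)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Spearman correlation between two analytes within one group, illustrative
cho = [95, 110, 88, 120, 101]
pcho = [540, 480, 560, 450, 515]
rho, p_sp = spearmanr(cho, pcho)
print(f"Spearman rho = {rho:.2f}, p = {p_sp:.3f}")  # expect negative, as reported
```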
Chromatography
During method development, additional columns, solvent mixtures, and gradients in normal- and reversed-phase modes were tested, including Waters ACQUITY UPLC HSS T3 (1.8 µm, 2.1 × 100 mm) and ACQUITY BEH Amide (1.7 µm, 2.1 × 50 mm) columns and a Phenomenex Kinetex HILIC (1.7 µm, 2.1 × 100 mm) column. The solvent systems tested included aqueous buffers (10 mM ammonium formate, 0.1% acetic acid, or 0.1% propionic acid) and MeOH or ACN with and without modifiers (0.1% acetic or propionic acid). For a full list of tested columns and solvents, see Supplementary Table 1. The normal-phase conditions described above, using the Luna Silica (2) column with 0.1% aqueous propionic acid (solvent A) and ACN (solvent B), provided optimized separation and peak shapes for all target analytes, avoided cross-talk and carryover interferences, and allowed chromatographic removal of endogenous lactose to avoid MS contamination (Figure 2).
Method Validation, Linearity, MEs
All analytes were directly extractable using water:MeOH (1:4, v:v); no additional reagents were necessary for this application. Both 5 and 10 µl sample volumes were used successfully during sample preparation, but we opted for the larger volume to reduce possible pipetting errors at minute volumes.
Since there is no certified human milk reference material, analyte recovery was determined by the standard addition method using pooled human milk with unknown analyte concentrations. Recoveries ranged between 108.0 and 130.9% for all target analytes over the three levels of standard addition. Inter-day variations ranged between 3.3 and 9.0% (Table 2), and intra-day variations from 1.7 to 8.8%, for all analytes and days. Comparable recovery rates for Cho, PCho, and GPCho (p ≥ 0.1 for all) were found in milk from additional donors (MD2 and MD3, Supplementary Table 2).
A 1:80 sample dilution revealed MEs between 75 and 115% for most analytes (Table 3); carnitine-d9 showed an ME of 54.1%. The lower sample dilution (1:40) resulted in significantly lower MEs for carnitine-d9, creatinine-d3, and TMAO-d9 (p ≤ 0.034 for all). No data were available for DMG, since an isotopically labeled DMG standard was not available.
The standard curves (n = 8) over 1 month showed good inter-day linearity. However, the initially chosen standard curve range for Cho extended past the limit of linearity, so the highest Cho standard level was excluded (Table 4). With this adjusted use of standard levels for Cho, the average trend line slopes for all analytes ranged from 0.96 to 1.02 (SD: ±0.007 to ±0.024). Coefficients of correlation (r) were above 0.997 (CV < 0.25%), and coefficients of determination (r²) above 0.995 (CV < 0.50%; Supplementary Figure 2). Differences from the nominal concentrations varied between 94.9 and 101.3% for all analytes. The lowest standard curve concentrations for all analytes revealed S/N > 10 (S/N for all: 11-404; Table 4).
Comparison of the measured QC sample concentrations in five runs over 2 months, conducted by two researchers, showed high stability, with inter-day variations well below 10% for most analytes (Figure 3). Betaine revealed the greatest variation but was still within the run acceptance criteria (43).
Analysis of Brazilian Human Milk Samples at 0-3 Months of Lactation
Colostrum and milk samples from the Brazilian mothers revealed significant differences in the concentrations of some metabolites depending on the time of collection (groups A, B, and C) (Table 5). GPCho, DMG, and methionine concentrations increased from colostrum to 3 months postpartum, while Cho and PCho remained steady. Although the Kruskal-Wallis test indicated significant differences for PCho concentrations, these differences were no longer found after further investigation using Dunn's test. Carnitine revealed a non-significant overall p-value, but the pairwise comparison indicated a significant decrease in concentration between groups A and C. All significant associations between the analytes were moderate to strong, regardless of group assignment and time interval, but the correlation patterns changed over time (Figure 4). The negative correlation between Cho and PCho was the only one conserved across all three groups (ρ = −0.54 to −0.71, p < 0.018 for all). Consistent correlations between two groups were found for carnitine/PCho and carnitine/DMG (A/B) and for Cho/betaine and GPCho/PCho (B/C; ρ = 0.45-0.77, p < 0.03 for all). While carnitine and TMAO were strongly negatively associated in group A, this relationship was inverted in group C (ρ = −0.71 vs. 0.43, p < 0.04 for all, Supplementary Table 3).
Chromatography and Sample Preparation
Among all tested LC columns, solvent systems, and gradients, the described conditions emerged as the most suitable for analyzing all target analytes. Given the structural similarities of some target analytes (Supplementary Figure 1), the use of similar MRM transitions was unavoidable (e.g., for Cho, PCho, DMG, and GPCho). However, the optimized conditions fully separated all affected analytes, avoiding cross-talk interferences (Figure 2).
Water-soluble vitamins in human milk are typically analyzed after removal of protein and lipophilic milk components to reduce matrix interferences (46, 47). MeOH effectively precipitates human milk protein while conserving the water-soluble target analytes in the sample extract (46). Although the samples here were highly diluted, thereby reducing potential matrix interferences, we chose for our diluent a water/MeOH ratio commonly used for protein precipitation (48) to obtain a cleaner sample extract. While this minimalistic sample preparation did not remove the endogenous lactose, a major interference and MS contaminant, the optimized chromatographic conditions fully separated lactose from the target analytes, and it was removed by a post-column, integrated two-position switch valve. Moreover, the 1:80 sample dilution reduced lactose concentrations (~7% in milk) to below 0.1%, simplifying lactose removal by lowering its abundance while still enabling the analysis of the highly abundant water-soluble choline forms simultaneously with the low-level metabolites.
Method Validation and MEs
Recovery rates for all analytes were above 100%, possibly due to the minimal sample preparation, which is accompanied by lower matrix removal efficiency. Matrix components which remain in the sample extract used for analysis could affect the recovery when co-eluting with the target analytes. This interference could be occurring in particular for analytes revealing recovery rates consistently above 120%, such as betaine, carnitine, and creatinine. Since matrix components are independent of the spiking level, such possible effects should be consistent across the different levels of standard addition, as observed here. Furthermore, the IS mix was added after the dilution and centrifugation step due to the otherwise considerable increased required concentrations of IS and the limited availability in the laboratory. Thus, the IS was not subjected to the described initial sample dilution, which could contribute to the higher recovery rates. Nevertheless, analyte recoveries were not only consistent across the different standard addition levels and over time but, albeit only tested for Cho, PCho, and GPCho, also across milk from different donors (QC, MD2, and MD3). These results emphasize the importance of standard addition experiments during method validation to enable accurate measurements, particularly when no certified reference material is available.
MEs are a source of ion suppression or enhancement and may affect detection, accuracy, and precision. Co-elution of matrix components not removed during sample preparation has been recognized as a main contributor to MEs (49). MEs for most analytes here ranged around 75-115%, with no significant difference in ME between the 1:40 and 1:80 diluted samples, suggesting only minor ion suppression or enhancement effects, independent of sample dilution. However, carnitine and creatinine revealed considerable ion suppression, with significantly improved MEs in the more highly diluted samples. The continued noticeable ion suppression for carnitine did not interfere with quantification in the human milk matrix. Phospholipids have been identified as a significant source of matrix interferences (50), and their removal could improve recoveries and MEs. Since fat removal was sacrificed for a more simplified sample preparation procedure, future applications, e.g., in the 96-well plate format, could include a phospholipid removal step, which would not considerably increase sample preparation time but could be beneficial for MEs and analyte recoveries.
LOQ, Standard Curve, and Analyte Stability
The S/N ratios of all lowest standard curve concentrations were above LOQ (>10), and median concentrations of all metabolites in the pooled human milk and in the Brazilian milk samples were above the lowest standard curve concentrations. TMAO concentrations in the samples were the closest to the low end of the standard curve, albeit within range. However, with an average S/N ratio of 38, the TMAO curve can be extended below the currently chosen lowest concentrations if needed. This was true for all other target analytes but betaine. Betaine concentrations in our experiments were on average about fivefold higher than the lowest standard, falling well into the chosen standard curve range.
The initial trend line slope of 0.85 for Cho obtained when all prepared standard levels were included indicated some bias between nominal and measured concentrations, likely due to extending the standard curve past the limit of linearity. By excluding the high standard, we preserved a sufficient number of standard levels (43), as well as the needed quantitation range.
All analytes in the QC milk sample analyzed over 2 months clustered around the nominal concentrations as established by the standard addition experiments. The greatest variation was found for betaine, which could point toward a potentially more sensitive metabolite, but since these analyses were conducted by two different researchers, additional between-person variation could have been introduced.
Application to Human Milk From Brazilian Mothers
For our pilot study of 53 milk samples from Brazilian mothers, we found declining concentrations with duration of lactation for PCho and carnitine, while GPCho, DMG, and methionine increased. Analyzing different forms of choline in milk from Turkish women at different lactation stages (0-2 and 12-180 days), Ilcol et al. (35) reported increasing Cho and GPCho concentrations and no change in PCho, while Holmes et al. (31) reported significantly increasing concentration in milk from English mothers for all three water-soluble choline forms from 2-6 to 7-22 days. Little information is available about the longitudinal changes of these milk choline metabolites throughout lactation; thus, our data add to the knowledge base on human milk composition.
Comparing our group A choline results to other published water-soluble choline concentrations in colostrum collected in England and Turkey (31, 35), we found comparable concentrations for Cho and GPCho but two- to five-fold higher PCho concentrations. This trend for Cho and PCho was also observable when comparing our results for groups B and C to milk samples collected up to 28 weeks of lactation in Canada, Cambodia, the USA, and the aforementioned studies in Turkey and England, but GPCho concentrations in our sample set were about 1.2-1.8-fold lower (31, 35-37, 51). While betaine concentrations in our milk samples were comparable to those found in samples from Germany (34), they were about 10-fold lower than the values reported for US women (51). Concentrations found here were lower for carnitine (1.3-2.6-fold), creatinine (1.3-1.8-fold), and DMG (1.6-3.8-fold) than reported literature values from studies conducted in Germany (34, 52, 53). Methionine concentrations in the literature varied considerably by geographic origin, with the lowest values reported from German milk samples (0.03 mg/L) (34) and the highest from a study conducted in Italy (1.3 mg/L) (54), while concentrations reported from a study in Bangladesh were similar to our results (55). TMAO was not detected in milk from German mothers (34), in contrast to our findings. These comparisons illustrate the wide variation in the concentrations of water-soluble forms of choline and related metabolites, which may be affected by factors such as geographic origin, stage of lactation, maternal diet, and genotypes or phenotypes.
The observed differences in the correlation patterns among the three groups (A, B, and C) are another indication of concentration changes over the duration of lactation. The only association conserved throughout all lactation stages was that between Cho and PCho, which has also been previously reported, albeit as a weak association, by Gay et al. (56). The continuous correlations between Cho and betaine and between GPCho and PCho in groups B and C were also in agreement with that previous report (56), while Maas et al. (34) did not find a relationship between Cho and betaine or between Cho and carnitine. Consistent correlations between two groups always included carnitine when group A was involved (with either B or C). Further research is necessary to gain better insight into these relationships and their potential importance for human milk composition, as well as their potential use as indicators of maternal and infant status, genotypes and phenotypes, and other outcomes.
CONCLUSIONS
To the best of our knowledge, this is the first reported method for human milk water-soluble choline that also includes the simultaneous analysis of six additional choline-related metabolites. The method requires minimal sample volumes and preparation, allowing a high sample throughput with robust and reliable results. While not done for this report, the simple sample processing is easily transferable to a 96-well plate format, further increasing sample throughput and allowing for additional simple and fast purification steps, such as phospholipid removal, if desired. Enabling the analysis of these metabolites not only in plasma but now also in milk allows for more in-depth examination of the metabolite patterns, changes, and effects within the mother-milk-infant symbiotic relationship. While preliminary and limited, our pilot study offers additional and novel information about choline and related metabolites in human milk and their changes during lactation. More data are needed to fully understand the potential of human milk nutrients and metabolites as indicators of maternal and infant clinical outcomes.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Committees of the Municipal Secretariat of Health and Civil Defense of the State of Rio de Janeiro (protocol number 49218115.0.0000.5275), and of the Maternity School of Rio de Janeiro Federal University (protocol number 49218115.0.0000.5275). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
DH, SS-F, and LA: conceptualization and methodology. DH and NN: validation and sample analysis. DH: formal analysis,
Application of business intelligence for analyzing vulnerabilities to increase the security level in an academic CSIRT
This study aimed at designing a potential solution through Business Intelligence for acquiring data and information from a wide variety of sources and utilizing them in the decision-making of the vulnerability analysis of an academic CSIRT (Computer Security Incident Response Team). The study was developed in a CSIRT that gathers a variety of Ecuadorian universities. We applied the Action-Research methodology with a qualitative approach, divided into three phases. First, we qualitatively evaluated two intrusion detection analysis tools (Passive Vulnerability Scanner and Snort) to verify their advantages and their ability to be exclusive or complementary; simultaneously, these tools recorded the real-time logs of the incidents in a MySQL relational database. Second, we applied Ralph Kimball's methodology to develop several routines implementing the "Extract, Transform, and Load" process on the non-normalized logs, which were subsequently processed by a graphical user interface. Third, we built a software application using Scrum to connect the obtained logs to the Pentaho BI tool and thus generate early alerts as a strategic factor. The results demonstrate the functionality of the designed solution, which generates early alerts and, consequently, increases the security level of the CSIRT members.
Application of business intelligence for analyzing vulnerabilities to increase the security level in an academic CSIRT
I. Introduction
Currently, universities and education centers are targets of cyber-attacks that focus on the alteration, extortion, and theft of sensitive information [1]. Due to such hazards, some questions arise: Do universities guarantee the confidentiality, integrity, and availability of information in the face of cyber threats? Does their technical staff maintain adequate security procedures to minimize vulnerabilities? Does the university have the ability to detect and respond to any cyber-attack?
For adequate control of security incidents, organizational structures known as Computer Security Incident Response Teams (CSIRT) or Computer Emergency Readiness Teams (CERT) have been steadily implemented [2]. A CSIRT offers services such as analysis, coordination, support, and response to computer security incidents, based on an adequate vulnerability analysis [3]. However, in an academic CSIRT (A-CSIRT), the volume of the collected information might cause partial or total noncompliance with such services.
Based on this scenario, this study aimed at generating a novel technique that optimizes the extraction of malicious traffic collected by intrusion detection and prevention systems in university networks in Ecuador. To comply with this purpose, we applied the Action-Research methodology [4]: (1) we compared Snort and Passive Vulnerability Scanner (PVS), two passive analysis tools that have been used in the CSIRT, to establish their benefits and check whether they are exclusive or complementary; (2) we collected the data through extract, transform, and load (ETL) techniques; (3) we stored this information in a MySQL database; and (4) we designed an application based on Business Intelligence techniques to detect malicious events that may appear in the network, and thus act immediately.
Among the main contributions of this study is the generation of previously unpublished algorithms for ETL processes that allow transporting and filtering information, generating data of interest. In addition, we implemented an application using Pentaho BI [5] to couple the collected logs securely to the analysis layer.
II. Research Design
This research is based on the conceptual framework illustrated in Fig. 1 and on the Ralph Kimball methodology [6]. First, we defined the requirements (upper part) and established the project planning; these two processes created the basis for determining the fulfillment of the proposed objectives. Subsequently, we developed the application, obtaining a Web product. These processes are further explained below.
A. Requirements definition
The Academic CSIRT in Ecuador is responsible for reporting the computer incidents that the universities' networks record and detect, and for providing a vulnerability review service. Within this service, several points have been established, such as frequent reviews of the institutions' public networks; revisions of configuration files of routers and firewalls; reviews of the configuration, updates, and security implemented in Linux and Windows servers; and policy compliance in database systems, servers, and routers, among others [7]. However, an effective automatic solution to detect and classify malicious events is still missing, due to the high volume of such events. For the CSIRT, it is fundamental that its members have a tool that allows them to analyze effectively the traffic that their networks track, enabling quick decisions in response to malicious events.
B. Evaluation of the traffic analysis tools
In this section, we evaluate the Snort and PVS tools. Snort is an open-source intrusion detection and prevention system capable of real-time traffic analysis [8]. PVS is a product of the Tenable Network Security company, which stands out for analyzing all the traffic transmitted within a network. PVS generates two files: the first intended to monitor real-time vulnerabilities, and the second focused on monitoring Web and FTP activities [9]. We determined that PVS does not act as an IPS, meaning that it simply inspects the traffic that flows through the network, without acting on the circulating packets.
To conduct an unbiased comparison under similar conditions, we used data sets from the DARPA intrusion detection evaluation group at the MIT Lincoln Laboratory [10]. We analyzed the data in the alert file and the logs generated by Snort, as well as the real-time file and reports generated by PVS.
Based on this analysis, we compared the results shown in Table 1, which classifies risk scores into four bands: Low (0-3.9), Medium (4-6.9), High (7-9.9), and Critical (10). Table 1 documents the resource consumption that the tools registered on a CentOS 7 distribution with a 64-bit architecture. The data demonstrated that Snort consumes fewer resources than PVS, mainly because Snort lacks the graphical Web user interface deployed by default in the system. In addition, the analysis in Snort is structured in "rules", in which the parameters are established according to the packet to be captured, whereas PVS relies on "plugins", which provide a specification to be displayed when the packets to be analyzed are not captured. Within the assessment, we established that PVS lists more categories for classifying the risk of the events raised in the network than Snort.
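As a small illustration of the severity banding just described, the sketch below maps a 0-10 risk score to the categories of Table 1; the function name and the handling of out-of-range scores are our own assumptions.

```python
# Minimal sketch of the Table 1 severity bands; names are illustrative.
def risk_category(score: float) -> str:
    """Map a 0-10 risk score to the bands used in Table 1."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"score out of range: {score}")
    if score < 4.0:
        return "Low"       # 0-3.9
    if score < 7.0:
        return "Medium"    # 4-6.9
    if score < 10.0:
        return "High"      # 7-9.9
    return "Critical"      # exactly 10

assert risk_category(3.9) == "Low" and risk_category(10.0) == "Critical"
```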
With the capture of FTP and HTTP traffic, we identified a clear difference in the volume of data analyzed by the two tools. This occurred because Snort runs with default rules; therefore, optimal performance of Snort required configuring the rules with rulesets obtained from a variety of sources. This evaluation allowed us to infer that the tools are not mutually exclusive: they complement each other, providing more forceful protection for the network.
C. Design and implementation of algorithms and the application of business intelligence
Here, we used the phases of the methodology proposed by Ralph Kimball [11], dividing it into three sections: (1) specific software to analyze the benefits; (2) database design and structure of the ETL; and (3) specifications of the BI application and development of the product. Below, we further describe the developed process.
The ETL processes for data capture and filtering are fundamental for this study, because they represent the technique that allows organizations to move, reformat, and clean data from multiple sources using SQL algorithms. ETL allows extracting, analyzing, and interpreting incoming information. Nonetheless, data formats may vary among organizations depending on their source of origin. Therefore, to homogenize the data from their sources, ETL algorithms were designed, implemented, and subsequently loaded into a MySQL, DataMart, or Data Warehouse database to submit them to a business process [11]. In this study, the information from the Snort and PVS registers was relatively condensed and had no standardized format. The created ETL algorithms solved this issue, preventing the database from being overloaded with potentially irrelevant information.
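To make the extract-transform-load step concrete, here is a hedged sketch of one such routine in Python rather than the authors' actual SQL algorithms: a raw log line is parsed into a normalized record and loaded into MySQL. The regex, table, and column names are assumptions for illustration only.

```python
# Illustrative ETL routine: extract a raw log line, transform it into a
# normalized record, and load it into MySQL. Schema names are assumed.
import re
import mysql.connector

LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<src>\S+) (?P<msg>.+)"
)

def etl_line(line: str, conn) -> None:
    match = LOG_PATTERN.match(line)
    if match is None:
        return  # discard records that cannot be normalized
    rec = match.groupdict()
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO events (event_time, source_ip, message) VALUES (%s, %s, %s)",
        (rec["ts"], rec["src"], rec["msg"]),
    )
    conn.commit()
    cur.close()

# conn = mysql.connector.connect(host="localhost", user="csirt", database="logs")
```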
We developed three solutions related to the PVS processes, based on vulnerability analysis, real-time activities, and risk filtering. Each solution processes its information from files, since the vulnerabilities are stored in an enriched XML file (".nessus") [12], while both the real-time files and the filtering work on ".txt" files. We generated a transformation flow for the vulnerabilities, which originate from the PVS system (Fig. 2). We also generated flows that support the optimization of the processed packets, avoiding redundant or delayed data. Figure 3 illustrates the flow that handles a "Work" generated by the Pentaho BI data integration system; in this flow, the transformation is executed with a repetition parameter and error notification via e-mail, followed by a call to an execution process that filters the risks according to the established parameters.
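For the vulnerability branch, a hedged sketch of reading findings from a ".nessus" export is shown below; the tag and attribute names follow the common NessusClientData_v2 layout, which we assume also applies to the PVS output.

```python
# Hedged sketch of reading vulnerabilities from a ".nessus" (XML) export.
# Tag/attribute names assume the usual NessusClientData_v2 layout.
import xml.etree.ElementTree as ET

def parse_nessus(path: str):
    """Yield (host, plugin_id, severity) tuples from a .nessus report."""
    root = ET.parse(path).getroot()
    for host in root.iter("ReportHost"):
        name = host.get("name", "unknown")
        for item in host.iter("ReportItem"):
            yield name, item.get("pluginID"), int(item.get("severity", "0"))

# Example: keep only medium-or-higher findings before loading them.
# findings = [f for f in parse_nessus("scan.nessus") if f[2] >= 2]
```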
Fig. 2. ETL algorithm transformation from a flat text file.
Fig. 3. Generation algorithm of an information filter with control of possible errors in the transformation.
Within the vulnerability report, the transformation flow for real-time events is shown in Fig. 4. In this routine, a control of the events was established, specifying a classification that was stored in the MySQL database. As with the vulnerabilities, errors in the flow transformation trigger a warning via e-mail (Fig. 5).
Fig. 4. Transformation algorithm for information extraction from an enriched XML file.
Snort can be complemented with Barnyard2, which migrates the alerts detected by the IDS to a MySQL database (Figs. 6 and 7); hence, we adapted the fields to generate a relational model. For filtering, the transformation was programmed in Pentaho BI (Spoon Data Integration) [13]: based on the content registered in a file, it catalogs the packets obtained with Snort as relevant, which prevents the database from being overloaded with records of no interest.
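A minimal sketch of this relevance filter is given below; the file format and alert field names are assumptions, not the actual Barnyard2 schema.

```python
# Sketch of the relevance filter: only alerts whose signature matches a
# pattern registered in a plain-text file are kept. Field names are assumed.
def load_patterns(path: str) -> list[str]:
    with open(path) as fh:
        return [ln.strip() for ln in fh if ln.strip() and not ln.startswith("#")]

def relevant_alerts(alerts: list[dict], patterns: list[str]) -> list[dict]:
    """Keep alerts whose 'signature' field contains any registered pattern."""
    return [
        a for a in alerts
        if any(p.lower() in a.get("signature", "").lower() for p in patterns)
    ]
```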
Fig. 5. Generation algorithm of the data load at any given time, controlling possible errors that may arise during the transformation.
In agreement with Stephen Few [14], who defines a dashboard as a scorecard in which a dense array of information must be presented in a condensed manner, we used Pentaho BI with the Community Dashboard Framework (CDF) guidelines [15]. To visualize the data collected from PVS, we developed two control panels: one for real-time events and a second for the vulnerability analysis. For Snort, we developed a panel that displays a matrix with the latest detected events; this information complements the real-time events registered in PVS.
Fig. 6. Snort database diagram with the most significant tables.
Regarding the implementation of the application, we applied Scrum as the methodology for agile project development because it is independent of technologies and can be coupled to different programming models; it focuses on people, with users and developers having defined roles; it provides quick results; it considers time management; it is iterative; it responds to changes; and the client stays active [16].
We used Node.js, a platform built on Chrome's JavaScript runtime [17], as the programming language, since it allows implementing fast and scalable network applications. Node.js uses an event-driven, non-blocking I/O model that performs lightly and efficiently, making it suitable for real-time, data-intensive applications.
Fig. 7. Transformation algorithm that generates a filter from parameters in a flat file.
A. Proof of concept
The environment and flow in which the concept was proven (Fig. 8) comprise: (1) the generation of a request for a Dashboard or for access to the Pentaho BI system; (2) the system generating the queries to the MySQL database; (3) the system generating the control panel to be displayed in the case of a Dashboard request, or, when access has been requested, responding according to the user's profile (taking into account that the Dashboard may be edited); (4) the requested Dashboard being displayed in the BI system; and (5) the BI system allowing printing, as well as export of the data to Excel, generating a proper record file.
B. Results assessment
The application was verified in a real-time trial period, in which the users were able to browse the entire network. For example, in the records of the last week of June, we could determine that the day with the most events was the 27th (Fig. 9). Fig. 10 illustrates the variety of events recorded in the trial period. The most common event was "ET SCAN Potential SSH Scan", which generated multiple connections to the server where the product had been installed.
The maximum vulnerability tracking point in the protected network was reached on day 14 (Fig. 11). Afterwards, the flow normalized, with some days showing fewer and others more vulnerabilities; consequently, we can infer that control actions were taken on the observed vulnerabilities.
During this analysis, the number of false positives decreased, which implies an improvement in the fulfilment of the Academic CSIRT services. The system therefore focused on supporting the Academic CSIRT in improving its services for detecting vulnerabilities and tracking malicious activity on the network; this aims at raising awareness among members so that incidents occurring in their networks are handled more efficiently, achieving adequate and timely attention to their weaknesses and threats.
IV. Conclusions and Future Work
In the current study, we designed a solution implemented through Business Intelligence, which acts as a strategic factor in the vulnerability analysis of an Academic CSIRT. This was possible by applying the Action-Research methodology and the phases of Ralph Kimball. The evaluation of Passive Vulnerability Scanner and Snort offered security management based on network traffic and customization of their configurations, to reduce false positives and thus enhance the response to security incidents. We developed several algorithms to apply the ETL process to the non-standardized logs that the graphical interface processed. Finally, we built a software application using Scrum, which allowed linking the obtained logs in Pentaho BI to generate early alerts of vulnerabilities and malicious code. The results demonstrate that this application has managed to help those responsible for the CSIRT to establish
Fig. 8. Experimental diagram of the proof of concept.
Table 1. Comparative analysis between Snort and PVS.
"Computer Science"
] |
One-dimensional, non-local, first-order, stationary mean-field games with congestion: a Fourier approach
Here, we study a one-dimensional, non-local mean-field game model with congestion. When the kernel in the non-local coupling is a trigonometric polynomial, we reduce the problem to a finite-dimensional system. Furthermore, we treat the general case by approximating the kernel with trigonometric polynomials. Our technique is based on Fourier expansion methods.
Introduction
In this paper, we consider the following mean-field game (MFG) model, (1.1), where G ∈ C²(T) is a given kernel, V ∈ C²(T) is a given potential, and 0 < α ≤ 2, c ∈ R are given parameters. The unknowns are the functions u, m : T → R and the number H ∈ R. We study the existence of smooth solutions for (1.1) and analyze their properties and solution methods.
MFG theory was introduced by J.-M. Lasry and P.-L. Lions in [33, 34, 35, 36] and by M. Huang, P. Caines, and R. Malhamé in [31, 32] to study large populations of agents that play dynamic differential games. Mathematically, MFGs are given by the following system:
−u_t(x, t) − σ² Δ_x u(x, t) + H(x, D_x u(x, t), m(x, t), t) = 0,
m_t(x, t) − σ² Δ_x m(x, t) − div_x(m D_p H(x, D_x u(x, t), m(x, t), t)) = 0,
m(x, 0) = m_0(x), u(x, T) = u_T(x, m_0(x)), (1.2)
where m(x, t) is the distribution of the population at time t, u(x, t) is the value function of the individual player, and T is the terminal time. Furthermore, H : T^d × R^d × X × R → R, (x, p, m, t) → H(x, p, m, t), is the Hamiltonian of the system, where X = R_+ or L¹(T^d; R_+) or R_+ × L¹(T^d; R_+), and σ ≥ 0 is the diffusion parameter. Finally, m_0 and u_T are given initial-terminal conditions.
Suppose L : T^d × R^d × X × R → R is the Legendre transform of H. Then, formally, (1.2) are the optimality conditions for a population of agents in which each agent aims to minimize the action
u(x, t) = inf_v { E [ ∫_t^T L(x(s), v(s), m(x(s), s), s) ds + u_T(x(T), m(x(T), T)) ] }, (1.3)
where the infimum is taken over all progressively measurable controls v(s), the trajectories x(s) are governed by a controlled diffusion driven by a standard d-dimensional Brownian motion {W_s}, and the agents are driven by mutually independent Brownian motions. Indeed, the first equation in (1.2) is the Hamilton-Jacobi equation for the value function u. Furthermore, the optimal velocities of the agents are given by v(t) = −D_p H(x, D_x u(x(t), t), m(x(t), t)), which leads to the second equation in (1.2), the corresponding Fokker-Planck equation. Rigorous derivations of (1.2) in various contexts can be found in [33, 34, 35, 36, 37, 8, 26] and the references therein.
The actions of the total population affect an individual agent through the dependence of H and L on m. The type of dependence of H and L on m is called the coupling, and it can be local, global, or mixed. Spatial preferences of agents are encoded in the x dependence of H and L.
Nevertheless, most of the previous work concerns problems where the Hamiltonian has no singularity at m = 0. Problems where the Hamiltonian has a singularity at m = 0, such as (1.4), are called congestion problems. The reason is that the Lagrangian corresponding to H in (1.4) grows with the density, so, in view of (1.3), agents pay a high price for moving at high speeds in densely populated areas.
Congestion problems were previously studied in [38, 16, 27, 28, 17, 30, 39, 11]. Uniqueness of smooth solutions was established in [38]. Existence of smooth solutions for stationary second-order local MFGs with quadratic Hamiltonian was established in [16]. Short-time existence and uniqueness of smooth and weak solutions for time-dependent second-order local MFGs were addressed in [27] and [28], respectively. Analysis of stationary first-order local MFGs in the 1-dimensional setting is performed in [17]. Problems on graphs are considered in [30]. MFG models with density constraints (hard congestion) and local coupling are addressed in [39] (second-order case) and [11] (first-order case). To our knowledge, the existence of smooth solutions for stationary first-order MFGs with global coupling has not been studied before.
One of the main tools of analysis in MFG theory is the method of a priori estimates. See [23, 8] and the references therein for a detailed account of a priori estimates methods in MFGs.
Here, we take a different route. Firstly, using the 1-dimensional structure of the problem, we reduce it to an equation with only m and H as unknowns. Indeed, the second equation in (1.1) can be integrated once, which yields a constant j that we call the current. Therefore, (1.1) can be written in the equivalent form (1.6). From here on, we do not differentiate between (1.1) and (1.6); moreover, we refer to (1.6) as the original problem.
Remark 1.2. Note that c, as a solution parameter in (1.1), is replaced by j in (1.6). We discuss the relation between c and j in Section 3.
Following [18], [17], we call (1.6) the current formulation of (1.1). There are two possibilities: j = 0 and j ≠ 0. We study the simpler case j = 0 only in Section 3 and focus on the case j ≠ 0 afterwards.
Our main observation is that, when G is a trigonometric polynomial, solutions of (1.6) have a certain structure in terms of unknown Fourier coefficients that satisfy a related equation.
More precisely, for j ≠ 0, denote by c_j the constant proportional to j²/2 defined below. Then, we prove the following theorem.
Theorem 1.3. Suppose that G is a trigonometric polynomial; that is, of the form (1.9) for some n ∈ N and coefficients p_0, p_1, …, p_n, q_1, …, q_n. Then (1.6) admits a unique smooth solution.
Moreover, the solution (m, H) of (1.6) is given by formulas (1.10) and (1.11), where Φ_α is given by (1.8).
Remark 1.4. Assumptions (2.1) and (2.2) are natural monotonicity assumptions for the coupling ∫_T G(x − y) m(y) dy, and we discuss them in Section 2. When G has the form (1.9), these assumptions are equivalent to p_k ≥ 0 for 0 ≤ k ≤ n, and to p_0 > 0, respectively (see Section 4). Theorem 1.3 reduces the a priori infinite-dimensional problem (1.6) to a finite-dimensional problem, (1.12), when the kernel is a trigonometric polynomial. Also, Φ_α is concave, so (1.12) corresponds to finding a root of a monotone mapping, which is advantageous from the numerical perspective. This reduction is even more substantial when the kernel G is a symmetric trigonometric polynomial, that is, when q_k = 0 for 1 ≤ k ≤ n. In the latter case, (1.12) is equivalent to a concave optimization problem. More precisely, we obtain the following corollary.
Corollary 1.5. Suppose that G is a symmetric trigonometric polynomial; that is, of the form (1.13) for some n ∈ N and p_0, p_1, …, p_n. Then (1.12) is equivalent to the concave maximization problem (1.14). Additionally, we find closed-form solutions in some special cases.
Theorem 1.6. Assume that α = 1 and that G, V are first-order trigonometric polynomials with coefficients p_0, p_1, q_1, v_0, v_1, w_1 ∈ R, where p_0 > 0 and p_1 ≥ 0. Define a_0, a_1, b_1, H as in (1.15), where r is the unique number that satisfies equation (1.16). Then the pair (m(x), H), with m given by (1.17), is the unique solution of (1.6).
Besides the trigonometric-polynomial case, we also study (1.6) for general G. In the latter case, we approximate G by trigonometric polynomials and recover the solution of (1.6) as the limit of solutions of approximate problems. More precisely, we prove the following theorem.
Theorem 1.7. Suppose that {G_n} are trigonometric polynomials converging to G in C²(T), and let (m_n, H_n) be the solution of (1.6) corresponding to G_n (the existence of this solution is guaranteed by Theorem 1.3). Then, there exists (m, H) ∈ C²(T) × R such that (1.18) holds. Consequently, (m, H) is the unique smooth solution of (1.6) corresponding to G.
In combination with the preceding results, this theorem provides a convenient method for the numerical calculation of solutions of (1.6).
We also present a possible way to apply our methods to more general one-dimensional MFG models, considering the generalization (1.19) of (1.1). In (1.19), G is a given kernel, H : T × R × R_+ → R, (x, p, m) → H(x, p, m), is a given Hamiltonian, and F : X → R is a given coupling, where X can be a functional space or R. We discuss, formally, how our techniques apply to models like (1.19).
The paper is organized as follows. In Section 2 we present the main assumptions and notation. In Section 3 we study (1.6) for the case j = 0. Next, in Section 4 we analyze (1.6) when G is a trigonometric polynomial and prove Theorem 1.3, Corollary 1.5, and Theorem 1.6. In Section 5 we analyze (1.6) for a general G and prove Theorem 1.7. In Section 6 we present some numerical experiments. Finally, in Section 7 we discuss possible extensions of our results and future work.
Assumptions
Throughout the paper we assume that G ∈ C²(T) and V ∈ C²(T). Moreover, we always assume condition (2.1) for all f ∈ C(T), together with (2.2); the former expresses the monotonicity of the coupling G[m] and plays an essential role in our analysis. In general, monotonicity of the coupling is fundamental in the regularity theory for MFGs: system (1.2) degenerates in several directions if the coupling is not monotone. In view of (1.3), monotonicity means that agents prefer sparsely populated areas. See [13] and [18] for a systematic study of non-monotone MFGs. Assumption (2.2) is a technical assumption; it is not restrictive, since one can always modify the kernel by adding a positive constant.
Furthermore, we assume that 0 < α ≤ 2. This, too, is a natural assumption for MFGs from the regularity theory perspective.
The now-standard uniqueness proof for MFG systems in [35] is valid only for α in this range. This is a strong indication of degeneracy for α outside of this range (which is observed and discussed in detail in [17]). In fact, our methods also reflect these limitations in a natural way.
The 0-current case
As we pointed out in the Introduction, (1.1) can be reduced to (1.6) by eliminating u from the second equation. The analysis of (1.6) is completely different for the case j = 0 and for the case j ≠ 0. In fact, the case j = 0 is much simpler to analyze; nevertheless, it is more degenerate. In this section, we discuss the case j = 0.
Firstly, we observe that j = 0 can occur only when c = 0. Recall that in this paper we are concerned only with smooth solutions. Therefore, if (u, m, H) is a solution of (1.1), we obtain (1.5) and the reduced equation (3.1). At this point, we drop assumptions (2.1) and (2.2) because they are irrelevant here. Suppose that V, G, m have Fourier expansions with coefficients {v_k, w_k}, {p_k, q_k}, and {a_k, b_k}, respectively. Hence, formally, if the coefficients of G do not vanish and V, G are given, we obtain that H is given by (3.3) and m by (3.4), where r_k e^{iθ_k} = p_k + i q_k.
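The expansions can be stated explicitly; based on the coefficient names used throughout the text (and on the truncated series S_n written out in Section 5), we assume they take the following standard form:

```latex
V(x) = v_0 + \sum_{k \ge 1} \bigl( v_k \cos(2\pi k x) + w_k \sin(2\pi k x) \bigr), \qquad
G(x) = p_0 + \sum_{k \ge 1} \bigl( p_k \cos(2\pi k x) + q_k \sin(2\pi k x) \bigr),
m(x) = 1 + \sum_{k \ge 1} \bigl( a_k \cos(2\pi k x) + b_k \sin(2\pi k x) \bigr),
\quad \text{consistent with } \int_{\mathbb{T}} m(x)\,dx = 1.
```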
Nevertheless, there are several issues in the previous analysis. Firstly, (3.2) may fail to have solutions or may have infinitely many solutions. If p_k² + q_k² = 0 for some k ≥ 1 while v_k² + w_k² > 0, then (3.2) and (3.1) do not have solutions. On the other hand, if p_k² + q_k² = v_k² + w_k² = 0, then a_k, b_k can be chosen arbitrarily, so (3.2) and (3.1) may have infinitely many solutions. Thus, if p_k² + q_k² = 0 for some k ≥ 1, then (1.6) degenerates in different ways when j = 0. Furthermore, if p_k² + q_k² > 0 for all k ≥ 1, then m, at least formally, is given by (3.4). Here we face two potential problems. First, one has to make sense of formula (3.4): the series in (3.4) may not be summable in any appropriate sense. Moreover, summability of (3.4) is a delicate issue and strongly depends on the relation between {v_k, w_k} and {p_k, q_k}.
Additionally, even if the series (3.4) converges to a smooth function, we still have the necessary condition m(x) > 0, which might fail depending on V and G. For instance, if V is such that v_k = w_k = 0 for all k ≥ 2, and G is such that p_k² + q_k² > 0 for all k ≥ 1, we obtain an explicit expression for m that is positive only under an additional restriction on the coefficients. Hence, if the latter is violated, (3.1) does not have smooth solutions.
Thus, the existence of smooth, positive solutions of (3.1) depends on peculiar properties of V and G. This is quite different in the case j ≠ 0, where (1.6) admits smooth, positive solutions under general assumptions on V and G.
G is a trigonometric polynomial
From here on, we assume that j ≠ 0. In this section, our main goal is to prove Theorem 1.3, Corollary 1.5, and Theorem 1.6.
We break the proof of Theorem 1.3 into three steps. Firstly, we show that (1.6) is equivalent to (1.12) (Proposition 4.3). Secondly, we prove that (1.12) has at most one solution (Proposition 4.6). Thirdly, we show that (1.12) has at least one solution (Proposition 4.8).
We use the short-hand notation (a′, b′) for vectors of Fourier coefficients. Here we perform the analysis in terms of the Fourier coefficients of G; hence, we formulate assumptions (2.1) and (2.2) in terms of these coefficients. Lemma 4.1. For G given by (1.9), assumption (2.1) is equivalent to p_k ≥ 0 for 0 ≤ k ≤ n, and assumption (2.2) is equivalent to p_0 > 0. Proof. Let f ∈ C(T); a straightforward computation yields the claim, and the rest of the proof is evident.
Remark 4.4. In our analysis we assume that p_k² + q_k² > 0 for all 1 ≤ k ≤ n. This assumption is not restrictive, and the results remain valid even if p_k = q_k = 0 for some k ≥ 1. Indeed, if p_k = q_k = 0, then (4.4) contains no terms with cos(2πkx) and sin(2πkx), and in the subsequent analysis we simply omit these trigonometric monomials.
Proof of Proposition 4.3. First, we prove the direct implication. Suppose (m, H) is a solution of (1.6). From (1.6) we obtain an identity equivalent to (1.10), with the coefficients {a_k, b_k} given by explicit formulas. Since m > 0, the coefficient vector lies in C; combining (4.3) and (4.5), we obtain that u_0 = 1 and (1.11). Next, we plug the expression (1.10) for m into (4.3), and from (4.5) we obtain the system (4.6). Noting the identities relating the coefficients for 1 ≤ k ≤ n, (4.6) can be rewritten in a form equivalent to (1.12).
The proof of the converse implication is a repetition of the previous arguments in reversed order.
Next, we study some properties of C and Φ_α.
The relevant inequalities hold with equality if and only if the corresponding degeneracy conditions are met. Proof. i. This statement is evident.
ii. This statement is evident.
iii. Let x_0 ∈ T. Firstly, we establish the limiting behaviour of Φ_α. Denoting the relevant auxiliary function by f, we have x_0 ∈ argmin f; hence f′(x_0) = 0. Finally, the mapping a_0 → ∂Φ_α(a_0, a′, b′)/∂a_0 is decreasing, with the appropriate limits at ±∞, so there exists a unique a_0 = ω(a′, b′) such that (4.11) holds. The regularity of ω follows from the implicit function theorem.
Proof of Proposition 4.8. Firstly, we show that F from (4.12) is coercive and bounded from below. Evidently, F ≥ 0. Next, from (4.7) we obtain lower bounds valid for all (a′, b′) ∈ R^{2n} and 1 ≤ k ≤ n; using an elementary inequality, we conclude that F is coercive. Now, we prove that for every critical point (a′, b′) of F the point (ω(a′, b′), a′, b′) is a solution of (1.12). For 1 ≤ k ≤ n, denote the residuals by (ξ_k, η_k). Differentiating (4.11), we obtain (4.13); combining the resulting identities with (4.14) and (4.9), we deduce that equality holds in (4.9), and hence (ξ, η) = 0, or equivalently ξ_k = η_k = 0 for 1 ≤ k ≤ n. The latter precisely means that (ω(a′, b′), a′, b′) is a solution of (1.12). Now we are in a position to prove Theorem 1.3.
Proof of Corollary 1.5. By Theorem 1.3, (1.6) admits a unique solution (m, H), given by (1.10) and (1.11), where the coefficient vector is the unique solution of (1.12). Since G has the form (1.13), we have q_k = 0 for 1 ≤ k ≤ n; therefore, (1.12) can be written in gradient form. Furthermore, by Lemma 4.5, Φ_α is strictly concave on C (see (1.7) for the definition of C), so the associated objective function is also strictly concave on C. Hence, (a*, b*) is the unique maximum of (1.14).
Proof of Theorem 1.6. Firstly, note that if m is a solution of (1.6) with α = 1, then m must necessarily have the form (1.17). Consequently, (1.17) leads to an identity which, after a direct calculation using (4.16) in (4.15) and taking into account that ∫_T m(x) dx = 1, yields a system of equations. We eliminate a_1 and b_1 from the second and third equations and find a_0 from the fourth equation. It is algebraically more appealing to put a_0 = 2r − 1; then a straightforward calculation yields (1.16).
Since m > 0, we have a_0 > 0. Moreover, from the fourth equation in the previous system we have a_0² ≥ 1, so a_0 ≥ 1, that is, r ≥ 1. Note that the left-hand side of (1.16) is an increasing function of r for r ≥ 1, equal to 0 at r = 1 and tending to ∞ as r → ∞. Therefore, for arbitrary choices of v_1, w_1 there is a unique r ≥ 1 such that (1.16) holds. This is coherent with the fact that (1.6) admits a unique smooth solution. Moreover, (1.16) is a cubic equation; hence, the formulas in (1.15) are explicit.
G is a general kernel
In this section we prove Theorem 1.7. We divide the proof into two steps. First, we prove that solutions of (1.6) are stable under C² perturbations of the kernel. Second, we show that an arbitrary C² kernel can be approximated by suitable trigonometric polynomials.
Proof of Theorem 1.7. The uniqueness of the solution of (1.6) follows from the uniqueness of the solution of (1.1) (see [38]). Part 1. Stability. Suppose that {G_n}_{n∈N} ⊂ C²(T) converge to G in C²(T). Moreover, assume that for each n ≥ 1, (1.6) has a solution (m_n, H_n) ∈ C²(T) × R corresponding to the kernel G_n. We aim to prove that there exists (m, H) ∈ C²(T) × R such that (1.18) holds and (m, H) is the solution of (1.6) corresponding to the kernel G.
Remark 5.1. Note that in this part of the proof we do not assume that {G_n} are trigonometric polynomials or that they satisfy (2.1) and (2.2). We need these assumptions in the second part of the proof to guarantee the existence of the solutions (m_n, H_n).
We are going to show that the families {m_n} and {H_n} are uniformly bounded and equicontinuous. Denote σ_n := min_T f_n = f_n(x_n) for some x_n ∈ T, where f_n is the relevant auxiliary function; then f_n′(x_n) = 0, and, for 0 < α ≤ 2, a limit argument shows that σ_n ≥ δ_0 > 0 for n ≥ 1. Furthermore, let m_n(z_n) = max_T m_n. Then, for every x, z ∈ T, estimate (5.3) holds. Firstly, plugging z = z_n into (5.3) and using (5.2), we obtain a uniform bound on m_n^α(x) for all n ≥ 1. Secondly, (5.3) yields that the family {m_n^α} is uniformly Lipschitz, which in turn yields (in combination with (5.1)) that the family {m_n^α}_{n∈N} is equicontinuous. Since the families in (1.18) are uniformly bounded, {H_n}_{n∈N} is a bounded sequence. Then we can assume that there exists (m, H) to which (m_n, H_n) converges through a subsequence; moreover, we obtain (1.18) through the same subsequence.
From the previous equations, we obtain that (m, H) solves (1.6) for the kernel G. Next, (1.6) must have a unique solution, because it is equivalent to (1.1), which can have at most one solution (see [38]). Hence, the limit (m, H) is the same for all subsequences; therefore, (1.18) is valid along the whole sequence. Part 2. Approximation. Suppose G ∈ C²(T) satisfies (2.1) and (2.2). We formally expand G in a Fourier series and denote by
S_n(x) = p_0 + Σ_{k=1}^{n} [ p_k cos(2πkx) + q_k sin(2πkx) ], x ∈ T, n ≥ 1, and S_0(x) = p_0,
the truncated Fourier series. Furthermore, let G_n be the corresponding Cesàro mean; that is, G_n = (S_0 + S_1 + … + S_n)/(n + 1). Then, by Fejér's theorem (see Theorem 1.10 in [14]), G_n → G uniformly. Next, G satisfies (2.1) and (2.2), so p_0 > 0 and p_k ≥ 0 for k ≥ 1; therefore, the G_n also satisfy (2.1) and (2.2) for all n ≥ 1. Now we can complete the proof of Theorem 1.7: we approximate G using Part 2 and conclude using Part 1.
Numerical solutions
Here, we numerically solve (1.6) for different types of kernels G. We present three cases. First, we consider a G that is a non-symmetric trigonometric polynomial. Second, we consider a G that is a symmetric trigonometric polynomial. Third, we consider a G that is periodic but not a trigonometric polynomial.
Throughout this section we fix the remaining data in (1.6); this choice of parameters is arbitrary, and the robustness of our calculations does not depend on a particular choice of parameters.
6.1. The case of a non-symmetric trigonometric polynomial. By Theorem 1.3, for a given non-symmetric trigonometric polynomial G, the solution m of (1.6) has the form (1.10), where the vector (a_0*, a′*, b′*) is the unique solution of (1.12). Furthermore, we define an auxiliary function M whose minima coincide with the solutions of (1.12). Accordingly, we find the solution of (1.12) by numerically solving the optimization problem min M (6.1). We devise our algorithm in the Wolfram Mathematica language and use the built-in optimization function FindMinimum to solve (6.1).
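The paper uses Mathematica's FindMinimum; an equivalent hedged sketch with SciPy is shown below. The objective M here is a stand-in placeholder, since its actual form comes from Φ_α and system (1.12); the dimension and method choice are illustrative.

```python
# Hedged numerical sketch: minimize a stand-in objective M whose minimizers
# would be the solutions of (1.12). Replace M with the actual residual.
import numpy as np
from scipy.optimize import minimize

n = 3                                   # degree of the trigonometric kernel

def M(coeffs: np.ndarray) -> float:
    # Placeholder smooth objective; substitute the residual of system (1.12).
    a0, rest = coeffs[0], coeffs[1:]
    return (a0 - 1.0) ** 2 + float(np.sum(rest ** 2))

x0 = np.ones(2 * n + 1)                 # initial guess for (a_0, a', b')
res = minimize(M, x0, method="BFGS", tol=1e-12)
a_star = res.x                          # Fourier coefficients defining m via (1.10)
```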
As an example, we consider a non-symmetric trigonometric kernel G_1 and denote by (u_1, m_1, H_1) the corresponding numerical solution of (1.1). We first find (m_1, H_1) by solving (6.1) and using (1.10) and (1.11); next, we use (1.5) to find u_1. Finally, to estimate the accuracy of the numerical solutions, we introduce the error function Er_1. We plot G_1 and V in Fig. 1, m_1 and u_1 in Fig. 2, and Er_1 in Fig. 3.
6.2. The case of a symmetric trigonometric polynomial. By Corollary 1.5, for a given symmetric trigonometric polynomial G, the solution m of (1.6) has the form (1.10), where the vector (a_0*, a′*, b′*) is the unique solution of (1.14). As before, we use FindMinimum to solve (1.14). As an example, we consider the kernel
G_2(x) = 1 + 4 cos(2πx) + cos(4πx) + 5 cos(6πx) + 7 cos(8πx), x ∈ T.
Analogously to the previous case, we denote by (u_2, m_2, H_2) the numerical solution of (1.1) corresponding to G_2, and by Er_2 the corresponding error function. We plot G_2 and V in Fig. 4, m_2 and u_2 in Fig. 5, and Er_2 in Fig. 6. As before, we denote by (u_3, m_3, H_3) and Er_3 the numerical solution of (1.1) and the error function corresponding to G_3 (a periodic kernel that is not a trigonometric polynomial), respectively. We plot G_3 and V in Fig. 7, m_3 and u_3 in Fig. 8, and Er_3 in Fig. 9.
Extensions
Here, we discuss how our methods can be applied to other one-dimensional MFG systems such as (1.19). Let L : T × R × R_+ → R, (x, v, m) → L(x, v, m), be the Legendre transform of H; that is, L(x, v, m) = sup_{p∈R} (vp − H(x, p, m)).
Then, if H satisfies suitable conditions, we have
L(x, v, m) + H(x, p, m) ≥ vp, (7.1)
for all v, p ∈ R, with equality in (7.1) if and only if
v = H_p(x, p, m) or p = L_v(x, v, m). (7.2)
As before, the second equation in (1.19) yields m H_p(x, u_x, m) = j for some constant j; that is, the velocity satisfies v = j/m. Therefore, using (7.2), we find u_x = L_v(x, j/m, m), which we plug into the first equation in (1.19) and obtain the following system:
H(x, L_v(x, j/m, m), m) = F(∫_T G(x − y) m(y) dy) + H, m > 0, ∫_T m(x) dx = 1. (7.3)
Next, one can attempt to study (7.3) first when G is a trigonometric polynomial and then approximate the general case. As before, when G is a trigonometric polynomial, the expression ∫_T G(x − y) m(y) dy is always a trigonometric polynomial. Therefore, we have that
H(x, L_v(x, j/m, m), m) = F(Σ_{k=0}^{n} a_k* cos(2πkx) + b_k* sin(2πkx)) + H, (7.4)
for some {a_k*}, {b_k*} ⊂ R. Suppose H is such that the left-hand side of (7.4) is invertible in m, with inverse A_j(x, ·). Then (7.4) yields the following ansatz:
m(x) = A_j(x, F(Σ_{k=0}^{n} a_k* cos(2πkx) + b_k* sin(2πkx)) + H). (7.5)
Thus, one can search for the solution m of (7.3) in the form (7.5) with undetermined coefficients {a_k*}, {b_k*}, H. By plugging (7.5) into (7.3), we obtain a finite-dimensional fixed-point problem for {a_k*}, {b_k*}, H. If this fixed-point problem has good structural properties (such as (1.12)) for a concrete model of the form (1.19), one may analyze the model by the methods developed here.
Lemma 4.7. For every (a′, b′) ∈ R^{2n} there exists a unique a_0 = ω(a′, b′) ∈ R such that (4.11) holds, i.e., ∂Φ_α(ω(a′, b′), a′, b′)/∂a_0 = 0. With F defined via ω (the function from Lemma 4.7), F is bounded from below and coercive; consequently, the minimization problem min_{(a′,b′)∈R^{2n}} F(a′, b′) (4.13) admits at least one solution. Moreover, if (a′, b′) is a critical point of F, then (a, b) = (ω(a′, b′), a′, b′) is a solution of (1.12).
"Mathematics"
] |
Trajectory optimization and multiple-sliding-surface terminal guidance in the lifting atmospheric reentry
In this paper the problem of guiding a vehicle from the entry interface to the ground is addressed. The Space Shuttle Orbiter is assumed as the reference vehicle, and its aerodynamic data are interpolated in order to properly simulate its dynamics. The transatmospheric guidance is based on an open-loop optimal strategy that minimizes the total heat input absorbed by the vehicle while satisfying all the constraints. The terminal-phase guidance, instead, is achieved through a multiple-sliding-surface technique, able to drive the vehicle toward a specified landing point, with the desired heading angle and vertical velocity at touchdown, even in the presence of nonnominal initial conditions. The time derivatives of the lift coefficient and bank angle are used as control inputs, while the sliding surfaces are defined so that these two inputs are involved simultaneously in the lateral and vertical guidance. The terminal guidance strategy is successfully tested through a Monte Carlo campaign, in the presence of stochastic winds and wide dispersions on the initial conditions at the Terminal Area Energy Management interface, in scenarios more critical than the orbiter safety criteria.
Introduction
The development of an effective guidance architecture for atmospheric reentry and precise landing represents a crucial issue for the design of reusable vehicles capable of performing safe planetary reentry. Unsurprisingly, the interest in guidance and control technologies for atmospheric reentry and landing of winged vehicles has increased [1,2], as the flexibility and controllability of the reentry trajectory can be increased through the employment of lifting bodies. However, this implies a greater sensitivity to the environmental conditions. Thus, the usefulness of a real-time guidance algorithm, able to generate online trajectories, is evident, for the purpose of guaranteeing safe descent and landing even in the presence of nonnominal conditions and dispersions caused by the preceding transatmospheric phase. The guidance and control strategy of the Space Shuttle relied on the modulation of the bank angle to follow a pre-computed reference drag profile, and could only account for small deviations from the nominal conditions [3]. Mease and Kremer and Mease et al. [4] revisited the Shuttle reentry guidance using nonlinear geometric methods. Later on, Benito and Mease [5] developed and applied a new controller based on model prediction, where the bank angle is modulated to minimize an effective cost function that accounts for the error in drag acceleration and downrange. Nonlinear predictive control was employed by Minwen and Dayi to generate skip entry trajectories for low lift-to-drag vehicles [6]. Most recently, Lu [7] considered a unified guidance methodology based on a predictor-corrector algorithm for vehicles with different aerodynamic efficiency, while satisfying the bounds on the thermal flux and load factor. A more limited number of papers addressed the terminal descent and landing, which is traveled after the Terminal Area Energy Management (TAEM) interface. Kluever [8] developed a guidance scheme for an unpowered vehicle with limited normal-acceleration capabilities. Bollino et al. [9] employed a pseudospectral-based algorithm for optimal feedback guidance of reentry spacecraft in the presence of large uncertainties and disturbances. Fahroo and Doman [10] used again a pseudospectral method in a mission scenario with actuation failures. Finally, reinforcement learning was used for autonomous guidance algorithms for precise landing [11]. Recently, sliding-mode control was proposed as an effective nonlinear approach to yield real-time feedback control laws able to drive an unpowered space vehicle toward a specified landing site [3,12]. Depending on the instantaneous state and the desired final conditions, sliding-mode control was already shown to be effective for generating feasible atmospheric paths leading to safe landing in finite time, even when several nonnominal flight conditions may occur that can significantly deviate the vehicle from the desired trajectory, e.g., winds or atmospheric density fluctuations [13]. In this work, an open-loop optimal guidance is developed for the transatmospheric arc, capable of minimizing the total heat input while driving the vehicle toward the TAEM. The Space Shuttle Orbiter is taken as the reference vehicle, and an analytical method is employed to keep the maximum thermal flux below the safety limit, while accounting for the saturation of the control variables. Finally, the multiple-sliding-surface guidance is employed in order to drive the vehicle from the TAEM to the landing point, with accurate aerodynamic modelling, while including stochastic winds and large dispersions on the initial values of the state and control variables.
Reentry dynamics
The reentry vehicle is modelled as a 3-DOF lifting body, and the position of the centre of mass is identified by a set of three spherical coordinates (r, λ_g, φ), representing respectively the instantaneous radius, the geographical longitude, and the latitude. The additional variables are the relative velocity v with respect to the Earth surface, the heading angle ζ, and the flight path angle γ. The trajectory equations describe the motion of the center of mass due to the effect of the forces acting on it [14].
The Space Shuttle Orbiter is taken as the reference vehicle for numerical simulations. It is assumed that the lift and drag coefficients (C_L and C_D) depend only on the angle of attack α and Mach number M, while the sideslip coefficient depends only on the sideslip angle β and Mach number M. The aerodynamic coefficients are obtained from wind tunnel tests [15] and are interpolated in order to derive their expressions as continuous functions of the aerodynamic angles and Mach number (α, β, M).
Transatmospheric phase
The transatmospheric guidance drives the vehicle from the entry interface towards the TAEM, while keeping the thermal flux per unit area at the stagnation point below its maximum value and minimizing the cost function (1), where the coefficients k are chosen to balance the different contributions, and the remaining terms represent the deviations of the state variables at the final time, located at the TAEM. The reentry trajectory is sampled at equally-spaced time instants from the entry interface to the TAEM, and the guidance law is determined through parametric optimization of the following parameters:
• the sampled values of the bank angle;
• the sampled values of the angle of attack up to the TAEM;
• the total time of flight from the reentry interface to the TAEM;
• the Mach number at the end of the constant-angle-of-attack flight profile;
• the argument of latitude at the initial time.
The boundary conditions reflect the typical descent profile of the Space Shuttle [16], and the algorithm must keep the thermal flux below the maximum allowable value, equal to 681.39 kW/m², which is even lower than the typical value reported in the scientific literature, i.e., 794.43 kW/m² [17]. The dynamic pressure must be less than 16.375 kPa. Thermal flux saturation. The thermal flux at the leading edge can be computed from the air density and velocity through an empirical law with constants a = 17700, b = 0.0001, and n = 3.07 [17]. The time derivative of the thermal flux can be expressed through two auxiliary functions F and G that do not depend on the input variable; therefore, the time derivative of the lift coefficient that holds the flux at its limit can be computed as in Eq. (2) (a hedged code sketch of this saturation check is given below, after the numerical results). Guidance strategy. The descent of the vehicle through the atmosphere is controlled through modulation of the angle of attack and bank angle. In particular, the variation of the angle of attack follows a succession of four distinct flight profiles:
• constant-angle-of-attack flight from the entry interface;
• variable-angle-of-attack flight as described by Eq. 10;
• variable-angle-of-attack flight following a sinusoidal profile;
• variable-angle-of-attack flight optimized by the guidance algorithm.
Numerical results. Table 1 reports the results of the optimization. The guidance algorithm is able to drive the vehicle through the atmosphere, with limited dispersions of the final state at the TAEM (cf. Table 1), along a descent path close to the actual trajectory of the Orbiter [16].
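The following is the hedged sketch referred to above. The empirical flux law is not fully stated in this excerpt, so a Chapman-type stagnation-point law is assumed, reusing only the exponent n = 3.07 and the 681.39 kW/m² ceiling quoted in the text; the constant k_q and the function names are illustrative.

```python
# Hedged sketch of the thermal-flux saturation check. The Chapman-type law
# and the constant k_q are assumptions; only N_EXP and Q_MAX come from the text.
Q_MAX = 681.39e3          # maximum allowable thermal flux, W/m^2
N_EXP = 3.07              # velocity exponent quoted in the text

def stagnation_flux(rho: float, v: float, k_q: float = 1.74e-4) -> float:
    """Chapman-type estimate of the stagnation-point heat flux (assumed law)."""
    return k_q * rho ** 0.5 * v ** N_EXP

def must_saturate(rho: float, v: float) -> bool:
    """True when guidance must hold the flux at its ceiling via Eq. (2)."""
    return stagnation_flux(rho, v) >= Q_MAX
```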
Terminal guidance
Along the transatmospheric arc, different factors may deviate the reentry trajectory from the reference profile. Therefore, the terminal guidance must be able to drive the vehicle despite a wide range of initial conditions. In a previous work, sliding-mode control was already employed as a nonlinear approach to yield real-time feedback guidance laws in an accurate dynamic framework, including winds and large deviations of the initial trajectory variables [13]. In this study, significant improvements are developed with respect to the previous research:
• sliding-mode guidance is tested over a longer time period (i.e., from the TAEM to the ground), and the aerodynamic modeling is based on real data rather than approximate analytical expressions;
• the saturation of the control variables is accounted for inside the expression of the control input, so that only feasible trajectories are generated;
• the guidance gains are updated through an adaptive strategy, further extending the capability of the algorithm.
Numerical results. A total number of 500 simulations are run, and the initial conditions are randomly generated with upper/lower bounds proportional to the standard deviation of each variable of interest (a minimal sketch of this sampling is given below). Stochastic wind is also accounted for, whose intensity and direction are stronger than the safety limits prescribed for Space Shuttle Orbiter landings [18]. Table 2 collects the initial conditions and associated standard deviations, which reflect the actual reference flight profile of the Space Shuttle [16]; its columns are x(0), y(0), z(0), v_r(0), ζ_r(0), γ_r(0), C_L(0), σ(0). Table 3, instead, collects the touchdown results. From inspection of Table 3, it is evident that the algorithm is able to drive the vehicle to the prescribed landing point, which is located 762 m beyond the runway threshold, with a limited crossrange component and vertical velocity at touchdown, and with the proper alignment with the runway [16]. Figure 3 shows the stream of trajectories from the TAEM to the landing runway.
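As referenced above, here is a minimal sketch of the Monte Carlo dispersion setup: each initial condition is drawn around its nominal value and clipped to symmetric bounds. The nominal values, standard deviations, and the bound multiplier k are placeholders for the entries of Table 2, not the paper's actual numbers.

```python
# Minimal sketch of the Monte Carlo dispersion setup; nominal values, sigmas,
# and the bound multiplier k are placeholders for the entries of Table 2.
import numpy as np

rng = np.random.default_rng(0)
NOMINAL = {"x0": 0.0, "v_r0": 150.0, "gamma_r0": -12.0}   # illustrative only
SIGMA = {"x0": 500.0, "v_r0": 5.0, "gamma_r0": 0.5}       # illustrative only

def sample_initial_conditions(n_runs: int = 500, k: float = 3.0) -> list[dict]:
    """Draw n_runs dispersed initial states, clipped to nominal +/- k*sigma."""
    runs = []
    for _ in range(n_runs):
        state = {
            key: float(np.clip(rng.normal(NOMINAL[key], SIGMA[key]),
                               NOMINAL[key] - k * SIGMA[key],
                               NOMINAL[key] + k * SIGMA[key]))
            for key in NOMINAL
        }
        runs.append(state)
    return runs
```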
Figs. 1 and 2 highlight the time history of the angle of attack, which keeps the thermal flux below the maximum value; saturation of the thermal flux occurs after about 200 s.
Fig. 3: stream of trajectories.
Concluding remarks
This paper addresses the problem of driving a winged vehicle (i.e., the Space Shuttle Orbiter) from the entry interface to landing, while satisfying all the constraints. The transatmospheric guidance is based on an open-loop algorithm that minimizes the total heat input and saturates the maximum thermal flux. The terminal guidance is based on a multiple-sliding-surface strategy, which allows online generation of trajectories. The simulation setup includes a complete dynamic framework with accurate aerodynamic modeling based on wind tunnel tests. The numerical results show the ability of the proposed guidance to modulate the angle of attack to avoid exceeding the maximum thermal flux, while compensating for winds and dispersions of position and velocity from the nominal trajectory during the terminal phase. The vehicle reaches the landing point with the proper alignment with the runway and a safe vertical velocity.
Table 1: displacements of the state variables from the boundary values at TAEM.
Table 2 collects the results of the Monte Carlo campaign.
"Engineering",
"Physics"
] |
Homotopic Approximate Solutions for the General Perturbed Nonlinear Schrödinger Equation
In this work, a class of perturbed nonlinear Schrödinger equations is studied using the homotopy perturbation method. Firstly, we obtain some Jacobi-like elliptic function solutions of the corresponding typical general unperturbed nonlinear Schrödinger equation through the mapping deformation method; secondly, a homotopic mapping transform is constructed, and the approximate solution with an arbitrary degree of accuracy for the perturbed equation is derived, where it is pointed out that the series of the approximate solution is convergent. Finally, the efficiency and accuracy of the approximate solution are also discussed using the fixed point theorem.
INTRODUCTION
With the development of soliton theory in nonlinear science, searching for analytical exact solutions or approximate solutions of nonlinear partial differential equations (NLPDEs) plays an important and significant role in the study of the dynamics of nonlinear phenomena [1]. Many powerful methods have been developed to handle these problems, for example, the inverse scattering transformation [2], the Hirota bilinear method [3], the homogeneous balance method [4], the Bäcklund transformation [5], the Darboux transformation [6], the projective Riccati equations method [7], the generalized Jacobi elliptic function expansion method [8], and others [9]. But because of the complexity of NLPDEs, exact solutions cannot be found for many of them, especially those with a perturbation term. Researchers therefore had to develop approximate methods for nonlinear theory, such as the multiple-scale method [10], the variational iteration method [11], and the indirect matching method [12]. The main essence of these methods is to deal with nonlinear problems through linear problems by using approximate expansions.
The homotopy analysis method (HAM) was first proposed in 1992 by Liao [13]; it yields fast convergence for most of the selected problems and shows high accuracy and rapid convergence to solutions of nonlinear partial evolution equations. Since then, many types of nonlinear problems have been solved with HAM by others, such as the discrete KdV equation [14], a smoking habit model [15], and so on. As a special case of HAM, He proposed the homotopy perturbation method (HPM) [16]. Recently, based on the idea of HPM, Mo proposed the homotopic mapping method to handle nonlinear problems with a small perturbation term [17]. A great quantity of work on this subject has been carried out by many authors, for example on the perturbed KdV-Burgers equation [18] and the mid-latitude stationary wind field [19].
In this paper, we extend the application of HPM to solve a class of disturbed nonlinear Schrödinger equations arising in nonlinear optics, and several useful results are obtained.
MODEL AND HOMOTOPIC MAPPING
Consider the following generalized nonlinear Schrödinger equation with a perturbation term, Eq. (1). Under a change of variables, Eq. (1) takes the form of Eq. (2), where f is a perturbation term, sufficiently smooth in the corresponding domain; two time-dependent coefficients in Eq. (2) are the slowly varying dispersion coefficient and nonlinear coefficient, respectively, while a third time-dependent coefficient represents the adiabatic amplification or loss. The transmission of solitons in real optical-soliton communication systems is described by Eq. (2) with f ≠ 0 [20-25].
In order to obtain the approximate solution of Eq. (2), we make the transformation (3), subject to the consistency conditions (4)-(5), where k, a, c are arbitrary nonzero constants. Substituting (4) into (2) yields (6); with a further substitution, Eq. (6) becomes (7). When f = 0, Eq. (7) reduces to (8). By using the general mapping deformation method [9], we can obtain solutions u_0 of the corresponding unperturbed equation, where F_j is an arbitrary solution of an auxiliary elliptic equation; the twenty-two classes of solutions F_j can be taken from Ref. [26]. For example, for a particular choice of F_j, we find that u_0 reduces to the solution u_31 in Ref. [24], which degenerates to the famous bright-soliton solution u_1 in Ref. [25] when m = 1.
Eq. (8) has a solution in closed form. In order to obtain the solution of Eq. (2), we introduce the homotopic mapping (9), where u_0 serves as an initial approximate solution to Eq. (8) and the linear operator L is defined accordingly. Obviously, from mapping (9), H(u, 1) = 0 coincides with Eq. (7); thus the solution of Eq. (7) is the same as the solution of H(u, 1) = 0.
APPROXIMATE SOLUTION
In order to obtain the solution of Eq. (7), we set the power-series expansion (11) in the embedding parameter p. Noting the analytic properties of u_0, f, and mapping (9), we can deduce that the series in (11) converges uniformly for p ∈ [0, 1] [16].
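The generic ansatz used by the homotopy perturbation method, which we assume expansion (11) instantiates, is the following:

```latex
u(x,t;p) = \sum_{k=0}^{\infty} p^{k}\, u_k(x,t), \qquad
u_{\text{approx}}(x,t) = \lim_{p \to 1} u(x,t;p) = \sum_{k=0}^{\infty} u_k(x,t),
```

so that truncating the sum after N + 1 terms yields the N-th order approximate solution discussed below.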
Substituting expression (11) into H(u, p), expanding the nonlinear terms into power series in p, and comparing the coefficients of like powers of p on both sides of the equation, we obtain the sequence of linear problems (12). From (12), after selecting suitable particular solutions, and combining (4), (5), (11), and (15)-(17) with mapping (9), we obtain the first- and second-order approximate Jacobi-like elliptic function solutions u(x, t) of the generalized disturbed nonlinear Schrödinger equation (2). By the same process, we can also obtain the N-th order approximate solution.
Remark 2:
The N-th order approximate solution u(x, t) degenerates to a solitary-wave approximate solution or a trigonometric-function approximate solution when the modulus m → 1 or m → 0, respectively. By selecting a different u_0, we can obtain the other fifty-one types of approximate solutions of Eq. (2).
COMPARISON OF ACCURACY
In order to illustrate the accuracy of the approximate solution represented by Eq. (18), we consider a small perturbation term, Eq. (19). From the discussion of Section 3, we obtain the second-order approximate Jacobi-like elliptic function solution of Eq. (19); taking an exact solution of Eq. (6) as the reference, we can then quantify the error of the homotopic approximation u_2hom.
CONCLUSION
We researched a class of disturbed nonlinear Schrödinger equations with variable coefficients by using the homotopic mapping method, which is much simpler and more efficient than some other asymptotic methods, such as the perturbation method. The Jacobi-like function approximate solution with an arbitrary degree of accuracy for the disturbed equation was derived, which shows that this method can be applied to soliton equations with complex variables; however, it is still worth investigating whether the method can be used for systems of high dimension and high order.
Figure 1. Comparison between the curves of the solution u_1hom and the exact solution.
Figure 2. Comparison between the curves of the solution u_1hom (solid line) and the exact solution.
Table 1 and Figs. 1-2 compare the approximate and exact solutions of the weakly perturbed evolution equation (19); the curves are very close to each other, and this behaviour is consistent with that expected of the approximate solution.
| 1,586.2 | 2015-04-01T00:00:00.000 | ["Mathematics"] |
Aniridia in Two Related Tennessee Walking Horses
1 Large Animal Clinical Sciences, University of Tennessee, College of Veterinary Medicine, 2407 River Drive, Knoxville, TN 37996, USA 2 Small Animal Clinical Sciences, University of Tennessee, College of Veterinary Medicine, 2407 River Drive, Knoxville, TN 37996, USA 3 Diagnostic and Biomedical Sciences, University of Tennessee, College of Veterinary Medicine, 2407 River Drive, Knoxville, TN 37996, USA
Introduction
Aniridia is a rare condition marked by partial or complete absence of the iris. This condition has been reported in horses [6][7][8][9][10][11], cattle [1], laboratory animals [2,3], and humans [4,5]. In Belgian horses [6] and Quarter horses [7], the defect has been reported to be genetically transmitted as an autosomal dominant trait, but at least one case in a Swedish Warmblood was not dominantly inherited [8]. In humans, the anomaly either presents as a familial condition with autosomal-dominant inheritance or is sporadic [4,5]. Affected animals are usually photophobic with absent direct and indirect pupillary light responses bilaterally, and they often have additional ocular abnormalities including dermoid lesions and cataracts. Dermoid lesions, like many instances of aniridia, form during foetal development, and our understanding of the molecular genetics of ocular development has improved since the last case report of aniridia in horses [8]. Therefore, the purpose of this report is to describe the clinical and histologic features of aniridia in 2 related Tennessee Walking horses, especially considering "new" information about the genetics of ocular involvement.
Case Presentation
Two Tennessee Walking horses were presented to the University of Tennessee Equine Hospital for bilateral ocular abnormalities. Horse A was a 15-year-old mare, and horse B was her 12-month-old female offspring. Very limited history was available because both animals were rescued. Visual deficits had been recognized in both animals prior to presentation, and the new owners had noticed bilateral ocular opacities in both horses. On ophthalmic examination of horse A, both direct and indirect pupillary light responses were absent in both eyes. However, the horse did have positive menace responses bilaterally. The tips of the ciliary processes were visible in both eyes, and the iris could not be identified. The corneas showed evidence of chronic keratitis bilaterally, with mild vascular infiltration of the cornea at the dorsal aspect. Fine cilia-like hairs were protruding from the superior corneoscleral limbus of the left eye, consistent with a limbal dermoid. There were also bilateral immature nuclear cataracts. Cataracts hid portions of the fundus, but those portions that were evaluable were normal. Ophthalmic examination of horse B, the yearling offspring of horse A, revealed similar findings. There was no evidence of an iris, and the ciliary processes were easily seen (Figure 1(a)). Chronic keratitis was present bilaterally, and limbal dermoid lesions were more pronounced than those in horse A (Figure 1(b)). Incipient to immature cataracts were present bilaterally.
Both horses were diagnosed with bilateral aniridia with chronic keratitis, limbal dermoids, and cataract formation. The rescue organization elected euthanasia for both animals due to the poor prognosis for improvement in long-term vision. Following euthanasia, the eyes of both animals were harvested and fixed in Davidson's solution.
Histopathology
In both eyes (one from each horse), the iris leaflets were present but markedly blunted (iris hypoplasia) (Figure 2). The blunted iris was frequently adhered to Descemet's membrane (peripheral anterior synechia) (Figure 2). The iris sphincter muscle was absent, and only poorly developed remnants of the dilator muscle were apparent with a Masson's trichrome stain (Figure 3). There were occasional nerve bundles within the iris leaflets. The iridocorneal angle was otherwise normal, but horse B had erythrocytes (haemorrhage) within the angle. On the posterior surface of the iris, the normally heavily pigmented, bilayered epithelium was disorganized and varied from poorly to heavily pigmented (Figure 3). The ciliary body and ciliary processes were within normal limits.
Horse A had mild, limbal, superficial corneal vascularization on one side of the eye; the clinically noted limbal dermoid was not present in the section.At the limbus of horse B, there was a single hair follicle with an associated sebaceous gland, consistent with a limbal dermoid.The surrounding collagen was haphazardly arranged and contained scattered blood vessels, lymphocytes, and plasma cells.
There was artifactual retinal detachment in both horses. Grossly, both horses had bilateral lens opacities; however, only the lens from the foal (horse B) was examined microscopically. There was splitting of the anterior lens capsule with lens epithelial cell metaplasia and hyperplasia. No other histopathologic lesions were present in the lens.
Discussion
Aniridia was first reported in horses in 1955 in a Belgian stallion and his offspring [6]. The defect identified in this group of related horses was heritable and passed via an autosomal dominant mode. Aniridia has also been described in a group of related Quarter horses, also with autosomal dominant inheritance [7,9]. In single case reports in a Thoroughbred colt [10] and a Welsh-Thoroughbred cross filly [11], heritability was undetermined. Heritability was undetermined for 5 of a series of 6 cases from Sweden, and dominant heritability with complete penetrance was ruled out in one Swedish Warmblood. In humans, aniridia is inherited in about two-thirds of the cases and sporadic in the remainder [4]. In the inherited forms, the majority are autosomal dominant [4]. Although it is impossible to draw a conclusion about inheritance in the 2 horses in this report, the presentation in a mare and its foal makes heritability highly likely, and the rarity of the syndrome makes dominant inheritance most likely.
The underlying pathophysiology of aniridia is not known with certainty. During ocular organogenesis, the epithelial layers of the normal iris are derived from the neuroectoderm of the anterior rim of the optic cup, while the stroma is derived from mesenchymal tissue of neural crest origin [12]. Histologically, epithelium and stroma are both deficient in aniridia, so it is possible that defects in epithelial development prevent normal stromal maturation or that stromal maldevelopment causes failure of normal epithelial development [5].
A third theory suggests that aniridia is a result of excessive remodelling, with normal iris formation being followed by inappropriate iris tissue regression [13].
In humans, most cases are transmitted via autosomal dominant inheritance and are linked to defects in the PAX6 gene, one of a family of transcriptional regulators that has a central role in controlling the development of the eye [4,5]. Most mutations (nonsense mutations, splice mutations, frameshift deletions, etc.) cause premature translational termination on one of the alleles, resulting in haploinsufficiency with decreased expression of gene product [4,5]. Because a critical dose of PAX6 protein is necessary to initiate transcription of target genes [14], the reduced amount could prove critical in preventing normal iris development. A high, continuous expression of PAX6 in tissues of ectodermal origin (e.g., iridial epithelium) directly affects the regulation and structure of these tissues during organogenesis of the eye but is also necessary for the expression of signalling molecules that act on cells of mesenchymal origin (e.g., the iris stroma). In addition, a low and transient expression of PAX6 is observed in cells of mesenchymal origin during development of the iris and other anterior segment tissues [15]. This favours a hypothesis that simultaneous defects in epithelial and mesenchymal development are involved in aniridia, with the former probably being more important. Missense mutations are relatively rare and result in a variety of phenotypes, including corneal dystrophy, Peters anomaly, foveal hypoplasia, ectopia pupillae, congenital nystagmus, and presenile cataract [5]. Sporadic mutations also occur and may be associated with nephroblastoma (Wilms' tumour), genitourinary abnormalities, and mental retardation [4,5]. While most of our knowledge about PAX6 structure and function comes from humans and laboratory rodents, PAX6 protein function is highly conserved across bilaterian species [4]. It is therefore reasonable to assume that most of the genetic features of aniridia described in humans apply to horses as well. Another equine ocular developmental disorder, multiple congenital ocular anomalies, also features maldevelopment and hypoplasia of the iris, but that syndrome appears to be clinically and genetically distinct from the equine aniridia syndrome [16,17].
Clinically, aniridia in humans is usually found in association with other ocular defects such as cataracts, glaucoma, keratopathy, optic nerve hypoplasia, ectopia lentis, nystagmus, and photophobia [5]. The foregoing discussion of the importance of PAX6 in ocular organogenesis focuses on the iris, but PAX6 is equally important in the development of the cornea [18], lens [19], optic nerve/retina [20], and iridocorneal angle [15]. Multiple ocular defects are therefore not surprising. Both of the cases in this report had cataracts, corneal pathology, and limbal dermoids, and similar findings are common in previously published equine cases [7][8][9][10]. The dermoids were clinically atypical in that the aberrant hairs did not emanate from a skin-like mass of tissue but rather appeared in a regular row along the limbus, very much like a row of cilia. The precise embryological errors leading to the development of the various presentations of dermoids are unclear, but it has been hypothesized that the pathogenesis of limbal dermoids may be related in part to aberrant development and fusion of the lids, with displacement of lid elements to the limbus [21], which would certainly correlate with the clinical appearance in our cases.
Aniridia is a complex heritable disorder of horses that has been reported in Belgians, Quarter Horses, and now in Tennessee Walking horses. The disorder most commonly follows an autosomal dominant mode of inheritance and results in impaired vision of affected animals due to chronic keratitis and cataract formation. There is no known treatment of the condition in horses; affected animals are managed based on clinical signs as they arise. Although horses have been reported to perform well with the condition [10], affected animals should not be used for breeding purposes.
Figure 1: Horse B. (a) Left eye. Due to the absence of any grossly visible iris, the ciliary processes are easily seen (arrows). An immature nuclear cataract is also present (asterisk). (b) Right eye. Short hairs emanating from the corneoscleral limbus are considered variants of limbal dermoids (curved arrow). An incipient cataract is also present (asterisk).
Figure 3: (a) Control horse. Normal portion of the posterior surface of the pupillary margin of the iris with both the dilator muscle (arrow) and sphincter muscle present (asterisk). Masson's trichrome. Bar = 200 µm. (b) Horse B. Posterior surface of the iris with thin dilator muscle (arrow); the sphincter muscle was absent. Note also the poorly pigmented and disorganized epithelium on the posterior surface of the iris. Masson's trichrome. Bar = 200 µm.
| 2,346.4 | 2013-06-26T00:00:00.000 | ["Biology"] |
KMT2A: Umbrella Gene for Multiple Diseases
KMT2A (Lysine methyltransferase 2A) is a member of the epigenetic machinery, encoding a lysine methyltransferase responsible for transcriptional activation through methylation of lysine 4 of histone 3 (H3K4). KMT2A has a crucial role in gene expression; thus, it is associated with pathological conditions when found mutated. KMT2A germinal mutations are associated with Wiedemann–Steiner syndrome and have also been found in patients with an initial clinical diagnosis of several other chromatinopathies (i.e., Coffin–Siris syndrome, Kabuki syndrome, Cornelia de Lange syndrome, Rubinstein–Taybi syndrome), which share an overlapping phenotype. On the other hand, KMT2A somatic mutations have been reported in several tumors, mainly blood malignancies. Due to its evolutionary conservation, the role of KMT2A in embryonic development, hematopoiesis and neurodevelopment has been explored in different animal models, and in recent decades, epigenetic treatments for disorders linked to KMT2A dysfunction have been extensively investigated. Of note, pharmaceutical compounds acting on tumors characterized by KMT2A mutations have been formulated, and even nutritional interventions for chromatinopathies have become the object of study due to the role of the microbiota in epigenetic regulation.
KMTs catalyze the transfer of methyl groups from S-adenosylmethionine to the lysine residues on histone tails, particularly the histone H3 tail. Unlike other epigenetic enzymes such as acetyltransferases (HATs), KMTs are more specific and usually modify one or two lysines on a single histone [1]. Lysines can be monomethylated, dimethylated or trimethylated without changing the electric charge of the amino acid side chain. The effect on chromatin state, i.e., whether transcription is activated or repressed, depends on the methylation states and their positions (Figure 1) [2][3][4][5][6][7][8][9][10][11][12][13][14][15]. KMTs are so-called writers, enzymes that catalyze the addition of chemical groups to histone tails or to DNA; these modifications are not permanent but can be removed by erasers to reverse the influence on gene expression. Readers possess specialized domains able to recognize and interpret different chemical modifications. Writers, erasers and readers form the epigenetic machinery, and mutations in genes coding for this apparatus lead to an altered chromatin conformation and an incorrect gene expression, resulting in a series of syndromes known as chromatinopathies, Mendelian genetic diseases, most of them with a dominant character [16][17][18]. Pathogenic mutations in KMTs and KDMs (Lysine demethylases) lead to haploinsufficiency in numerous developmental syndromes (Figure 2) (Table 1) [10,19].
Many species have a KMT2A ortholog, including fishes, birds, amphibians, and mammals; thus, its evolutionary conservation has allowed a comprehensive study of KMT2A molecular functions through in vivo experiments on animal models (Drosophila melanogaster, Danio rerio, Mus musculus). KMT2A expression is mainly nuclear and ubiquitously present in 27 tissues, especially in ovary, lymph node, endometrium, thyroid and brain tissue [20]. KMT2A encodes a lysine methyltransferase (KMT) of 3969 amino acids, a transcriptional co-activator which plays a crucial role in hematopoiesis, in regulating gene expression at early developmental stages, and in the control of circadian gene expression. KMT2A is processed by the endopeptidase Taspase 1 into two fragments (MLL-C and MLL-N) which heterodimerize and regulate the transcription of specific genes, including HOX genes [21]. The KMT2A protein has 18 domains, including the CXXC-type zinc finger, the extended PHD domain and the bromodomain. The SET domain carries the methyltransferase activity (mono-, di-, tri-methylation) on lysine 4 of histone 3 (H3K4me1/2/3), a post-translational modification (PTM) responsible for epigenetic transcriptional activation, whose efficiency can be increased when the protein is associated with another component of the MLL1/MLL complex (Figure 3) [22]. As other members of the KMTs family, KMT2A regulates gene transcription through chromatin opening or closure, and its activity is antagonized by the lysine demethylases (KDMs) family.
Wiedemann-Steiner Syndrome
KMT2A germinal variants are associated with the Wiedemann-Steiner syndrome (WDSTS, OMIM #605130), a rare autosomal dominant disorder characterized by different features, mainly intellectual disability (ID), developmental delay (DD), pre- and postnatal growth deficiency, hypertrichosis, short stature, hypotonia, distinctive facial features (thick eyebrows, long eyelashes, narrow palpebral fissures, broad nasal tip, downslanting palpebral fissures), skeletal abnormalities (clinodactyly, brachydactyly, accelerated skeletal maturation), feeding problems and behavioral difficulties (Figure 4A) (Table 2) [23][24][25]. KMT2A variants are distributed throughout the gene, with a pathogenic mutation hotspot in exon 27, and most of them lead to KMT2A loss of function. WDSTS patients usually present de novo private mutations, and the diagnosis is based on clinical evaluation of signs and symptoms, then confirmed by molecular analysis. Unfortunately, a specific treatment is not available; thus, possible interventions aim at reducing the severity of symptoms.
Other Chromatinopathies
Mutations in KMT2A have also been found in patients with a clinical presentation suggestive of other chromatinopathies but negative for alterations in the related known causative genes. Their clinical presentation shares some phenotypic features with WDSTS and is caused by alterations of genes involved, like KMT2A, in the regulation and maintenance of chromatin state. Indeed, these syndromes are caused by mutations in genes of the epigenetic machinery and are therefore known as chromatinopathies [16,18].
Thanks to targeted sequencing and genome-wide DNA methylation analyses, in 2017, Sobreira and colleagues, investigating a cohort of 27 patients with a clinical diagnosis of Kabuki syndrome (KS1, OMIM #147920; KS2, OMIM #300867), found two patients positive for mutations in KMT2A (a de novo heterozygous missense mutation in pt#KS8 and a donor splice site mutation in pt#KS29) [31]. Kabuki syndrome is a congenital disease with a broad and variable spectrum, characterized by mild-to-moderate cognitive disability, post-natal growth deficit, characteristic facial features (long palpebral fissures with slight ectropion of the lateral third of the lower eyelid), skeletal abnormalities and immunodeficiency (Figure 4C) [32]. In about 60% of KS cases, the syndrome is caused by mutations in KMT2D (12q13.12, OMIM #602113; associated with KS1), also known as MLL2, while in a few cases the causative mutation is carried by the KDM6A gene (Xp11.3, OMIM #300128; associated with KS2). KMT2D is a methyltransferase that plays crucial roles in development, differentiation, metabolism, and tumor suppression [33]. Both patients analysed by Sobreira and colleagues presented hypotonia, persistent fetal fingerpads, eversion of the lower lateral lid and long palpebral fissures; patient #KS8 in addition showed seizures, recurrent infections and brachydactyly, while patient #KS29 presented ID and feeding difficulties (Table 2) [31].
Effects of KMT2A Mutations in Animal Models
KMT2A is an evolutionarily conserved gene, involved in several functional processes of embryonic development, ranging from hematopoiesis to neurogenesis. Indeed, in 1995, Yu and colleagues showed that the complete disruption of KMT2A was embryonic lethal in mice, and heterozygous animals were anemic and affected by growth delay, hematopoietic anomalies and skeletal malformations [54]. Developmental defects were investigated in Drosophila melanogaster too, where mutations in the KMT2A homolog (trx) led to a wide range of homeotic transformations [55]. Interestingly, KMT2A was demonstrated to have an important role in the maintenance of memory Th2 cell function [56] and in hematopoiesis, as its absence caused defects both in self-renewal of murine hematopoietic stem cells and in hematopoietic progenitor cell differentiation in zebrafish [57,58]. In addition, impairments in neural development were observed when knocking down Kmt2a in zebrafish, and in murine models Mll1 was identified as a crucial component in memory formation, complex behaviors and synaptic plasticity [59][60][61][62][63].
Thus, KMT2A-depleted animal models recapitulate phenotypes described for patients with both germline and somatic mutations. KMT2A-associated syndromes show clinical signs such as ID, behavioral problems, speech and growth delay and peculiar dysmorphisms, while the most frequent tumors enriched in KMT2A mutations are the hematological ones (e.g., B-cell lymphoma, T-cell lymphoblastic leukemia, acute myeloid leukemia), in line with the neurodevelopmental and hematopoietic defects found in the aforementioned in vivo models.
Epigenetic Strategies for Pharmacological Approaches
Targeting the regulators of lysine methylation is an emerging strategy for therapeutic approaches, given the role of chromatin post-translational modifications in regulating gene expression, and considering that lysine methylation has a pivotal role in this process. Indeed, mutations in one of the components of the epigenetic machinery affect the normal pattern of covalent histone modifications, leading to an incorrect gene expression pattern that may consequently result in tumor evolution. In addition, given the very high specificity of each methyltransferase for its target, the development of drugs directed at those enzymes would have the advantage of minimizing off-target effects [64].
As described above, KMT2A alterations have been reported in several blood cancers such as mixed-lineage, acute lymphoblastic and acute myeloid leukemia [65]. Acute leukemia with rearrangements of the KMT2A gene (KMT2Ar) is associated with a higher risk of relapse and is more resistant to standard therapies. KMT2A exerts its function by forming a core complex with other proteins [66]; for this reason, inhibiting the interaction of KMT2A with its partners, both histone and non-histone proteins, is a promising pharmacological strategy when KMT2A rearrangements are drivers of pathology, such as in leukemia. For example, recent studies have shown that the use of peptidomimetics disrupting the interaction between KMT2A and WDR5 (a member of the above-mentioned core complex) in murine cell lines reduces the expression of target genes responsible for KMT2A-mediated leukemogenesis and inhibits the growth of leukemia cells [67,68].
Similarly, it was demonstrated that the small molecule EPZ-5676 has modest clinical activity, reducing the proliferation of MLL-rearranged cells and inducing apoptosis by targeting the enzymatic core of DOT1L, an H3K79 methyltransferase recruited to fusion partners of KMT2A in disease-linked translocations and required for leukemogenesis [69][70][71][72]. Advances in treating MLL-rearranged leukemia were also achieved by using small molecules to block the KMT2A binding site on Menin, a protein encoded by MEN1 and required for oncogenic transformation, leading to the inhibition of the aberrant leukemogenic transcription program [73][74][75][76][77].
Another efficient pharmacological approach in cancer treatment might be the targeting of pathways deregulated in tumorigenesis. Indeed, the inhibition of glycogen synthase kinase 3 (GSK3) can induce the growth arrest of leukemia cells in KMT2Ar leukemia [78], while targeting the DNA damage response (DDR) pathway can lead to specific synthetic lethality in leukemic cells with MLL rearrangements [79].
Besides leukemia treatment, KMT inhibitors are considered potential drugs for other cancers. In particular, Tazemetostat was approved in January 2020 for the treatment of a rare tumor, epithelioid sarcoma, and then for follicular lymphoma, supporting the role of the lysine methylation pathways as potentially effective targets for treating various diseases [80].
On the contrary, in genetic disorders related to KMT2A, the altered histone methylation status is mainly attributed to loss-of-function or missense mutations involving this gene. For this reason, a possible pharmacological approach could counteract the lack of KMT2A activity.
Altered epigenetic control of gene expression may cause psychosis and other psychiatric diseases. It has been demonstrated that the atypical antipsychotic clozapine can induce the methylation of GABAergic gene promoters through Mll1 recruitment in a mouse model of schizophrenia [81,82]. Moreover, a study comparing clozapine-responder and non-responder twins demonstrated that clozapine increases DNA methylation of the MECP2 promoter, leading to its downregulation and consequently enhancing the expression of genes that are regulated by the MeCP2 protein [83]. Similarly, the antidepressant phenelzine and its analogue bizine enhance the H3K4me2 status in H460, A549 and MDA-MB-231 cancer cell lines by inhibiting the activity of the histone demethylase LSD1 [84]. Furthermore, tranylcypromine (TCP), another antidepressant, has been demonstrated to specifically inhibit LSD1, and its administration in combination with all-trans-retinoic acid (ATRA) induces the differentiation of acute promyelocytic leukemia (APL) and acute myeloid leukemia (AML) blasts [85]. Moreover, a phase I/II trial (ClinicalTrials.gov: NCT02261779) has demonstrated that TCP-ATRA combined therapy can be used to treat refractory or relapsed AML patients, even if the required high dosage and the prolonged treatment may cause the onset of several side effects [86]. For this reason, a selective LSD1 inhibitor, ORY-1001, has been developed using the TCP structure. Sub-nanomolar doses of this molecule reduce the proliferation of MLL-translocated leukemic cell lines, both in vitro and in vivo, and display synergistic action with common anti-leukemic drugs, opening the possibility of a targeted and personalized therapy [87]. A phase I/IIa clinical trial has already evaluated the tolerability, pharmacokinetics and pharmacodynamics of ORY-1001 in relapsed or refractory acute leukemia (EUDRACT no. 2013-002447-29) [88].
Interestingly, epigenetic interventions can be either pharmaceutical or nutritional. It is well known that a dynamic crosstalk between gut microbiota and the host exists and that it can be modulated by diet. Krautkramer and colleagues reported that the microbiota regulates histone methylation and acetylation in different tissues in a diet-dependent process [89] and, notably, a microbiota-dependent epigenetic signature was reported in specific diseases, e.g., inflammatory bowel disease [90]. Indeed, the microbial community within the intestine can produce metabolites such as short-chain fatty acids (SCFAs), which are known to act as histone deacetylase (HDAC) inhibitors. These compounds, or diets able to increase them, have recently been used as a possible therapeutic approach for several diseases, including drug-resistant epilepsy [91,92], cancer [93], neurodegenerative disease [94], heart failure [95], and diabetes mellitus [96], and their effect has even been studied in experimental models of chromatinopathies, i.e., Kabuki syndrome [97] and Rubinstein-Taybi syndrome [98]. Furthermore, bacteria synthesize essential vitamins fundamental for the immune system, such as B12, but also folate, required for DNA, histone and protein methylation [99,100]. Intriguingly, in a kdm5-deficient Drosophila model, not only an increase in gut H3K4me3 but also disruption of the intestinal barrier, together with aberrant immune activation and anomalies in social behavior, were observed. All these changes correlated with alterations in gut microbiota composition, which were rescued by probiotic administration [101].
Thus, considering the latest developments in epigenetic intervention, a deeper understanding of the microbiota composition of patients with KMT2A mutations could help the investigation of new therapeutic approaches among the epigenetic treatments.
Final Remarks
Epigenetic modifications are fundamental for many biological processes; indeed, alterations of genes with this activity can lead to neurodevelopmental disorders or tumorigenesis, when germinal or somatic mutations respectively occur [102,103]. This is the case of KMT2A, a lysine methyltransferase-coding gene, whose variants are associated with a chromatinopathy (WDSTS) at the germinal level or, as somatic events, can be found in both blood cancers and solid tumors.
Interestingly, thanks to exome- and genome-wide analyses, the patients described above with a defined initial chromatinopathy diagnosis but lacking a molecular one were found to be carriers of pathogenic variants in the KMT2A gene and could obtain a clinical re-evaluation. In detail, nearly all patients previously diagnosed with CdLS, CSS, KS and RSTS showed features common to WDSTS, such as ID (11/12), speech delay (7/10), peculiar dysmorphisms affecting the eyes (12/12) (i.e., thick eyebrows, synophrys, long eyelashes, ptosis and downslanting/narrow palpebral fissures) and nose (12/12) (i.e., depressed nasal bridge and broad nasal tip), while about half of them shared with WDSTS feeding problems (5/10), hirsutism (6/10) and hypotonia (6/10). Oddly, almost all of these patients displayed features less frequently present in WDSTS, such as dysmorphisms affecting the mouth (7/12) (i.e., high arched palate and thin upper vermilion) and anomalies of the hands/feet (11/12) (i.e., clinodactyly, brachydactyly, persistent fetal fingerpads and broad halluces). Indeed, mutations in different genes involved in the regulation and maintenance of chromatin state can lead to a clinically overlapping phenotype, suggesting a common affected pathway during embryonic development and calling for the evaluation of an expanded set of genes when investigating the molecular causes of these syndromes, for a correct diagnosis.
In addition, somatic mutations in KMT2A have been reported in different tumors, as have alterations in all KMT2 family genes [104] and in other genes associated with chromatinopathies. Curiously, we observed that the germline mutations described in the literature are more frequently nonsense than missense, in contrast to somatic ones. This could be explained by the loss-of-function mechanism characterizing most chromatinopathies, in which defective protein production strongly impacts embryonic development.
To conclude, since molecular defects in KMT2A also characterize some types of tumors, and research in the field of epigenetic drugs for malignancies is rapidly evolving [101], a therapeutic approach targeting KMT2A interactions or its pathway could also be considered for chromatinopathies, modulating epigenetic dysfunction with pharmaceutical products or diet-based interventions.
Conflicts of Interest:
The authors declare no conflict of interest.
| 4,663.4 | 2022-03-01T00:00:00.000 | ["Medicine", "Biology"] |
Gibbs sampling the posterior of neural networks
In this paper, we study sampling from a posterior derived from a neural network. We propose a new probabilistic model consisting of adding noise at every pre- and post-activation in the network, arguing that the resulting posterior can be sampled using an efficient Gibbs sampler. For small models, the Gibbs sampler attains performance similar to state-of-the-art Markov chain Monte Carlo methods, such as Hamiltonian Monte Carlo or the Metropolis-adjusted Langevin algorithm, both on real and synthetic data. By framing our analysis in the teacher-student setting, we introduce a thermalization criterion that allows us to detect when an algorithm, run on data with synthetic labels, fails to sample from the posterior. The criterion is based on the fact that in the teacher-student setting we can initialize an algorithm directly at equilibrium.
I. INTRODUCTION
Neural networks are functions parametrized by so-called weights, mapping inputs to outputs. Neural networks are commonly trained by seeking values of the weights that minimize a prescribed loss function. In some contexts, however, we want to sample from an associated probability distribution of the weights. Such sampling is at the basis of Bayesian deep learning [48,52]. It is used in Bayesian uncertainty estimation [24,29,45] or to evaluate Bayes-optimal performance in toy models where the data-generative process is postulated [3]. In this paper, we focus on studying the algorithms and properties of such sampling.
Given training inputs X, in Bayesian learning one implicitly assumes the labels to be generated according to the stochastic process y ∼ P(y|X, W), where W are the weights of the network, on which a prior P(W|X) is placed. At its heart, Bayesian deep learning consists of sampling from the posterior probability of the parameters, P(W|X, y) ∝ P(y|W, X) P(W|X), (1) which follows from Bayes theorem. This sampling problem is, in general, NP-hard [10], and many techniques have been developed to sample from (1). In this paper, we look at iterative algorithms that, in the large time limit, return samples from the posterior distribution (1). Most available algorithms for this task are based on MCMC methods. We focus on the two following questions: • Q1: Do we have a method to evaluate whether the algorithms have thermalized, i.e., whether the samples returned by the MCMC plausibly come from the posterior (1)?
• Q2: Which combinations of sampling algorithm and form of the posterior distribution achieve the best performance in terms of ability to thermalize while reaching a low test error?
The first question addresses the long-standing problem of estimating an MCMC's thermalization time, that is, the time at which the MCMC starts sampling well from the posterior. We propose a criterion for thermalization based on the teacher-student setting. The criterion can only be reliably applied to synthetic labels generated by a teacher network. After a comparison with other thermalization heuristics, we argue that the teacher-student criterion is more discriminative, in that it provides a higher lower bound to the thermalization time. The second question explores the interplay between the form of the posterior and the sampling algorithm: since there is more than one way of translating a network architecture into a probabilistic process, we exploit this freedom to introduce a generative process in which noise is added at every pre- and post-activation of the network. We then design a Gibbs sampler tailored to this posterior and compare it to other commonly used MCMCs.
A. Related literature
When running an MCMC, one has to wait a certain number of iterations for the algorithm to start sampling from the desired probability measure. We will refer to this burn-in period as the thermalization time, or T_therm [39]. Samples before T_therm should therefore be discarded. Estimating T_therm is thus of great practical importance, as it is crucial to know how long the MCMC should be run.
More formally, we initialize the MCMC at a starting state W_0 ∈ 𝒲 of our liking. We run the chain iteratively, sampling W(t) ∼ P(W(t)|W(t−1)), where P(·|·) is the transition kernel. If the kernel is ergodic and satisfies the relation π(W) = Σ_{W′∈𝒲} P(W|W′) π(W′) for a certain probability measure π(·), then for t → ∞ the MCMC will return samples from π(·). Thermalization is concerned with how soon the chain starts sampling approximately from π(·).
Consider an observable φ : 𝒲 → R. Let π_φ(·) be the distribution of φ(W) when W ∼ π(·). Define S_φ(δ) ⊆ R as the smallest set such that π_φ(S_φ(δ)) ≥ 1 − δ. When δ ≪ 1, S_φ(δ) is a high-probability set for φ(·). We can then look at thermalization from the point of view of φ. For a general initialization W_0 one will usually have φ(W_0) ∉ S_φ(δ), since in most initializations W_0 is unlikely to be a typical sample from π(·). As more samples are drawn, the measure sampled by the chain will approach π(·); therefore we expect that φ(W(t)) ∈ S_φ(δ) (up to a fraction δ of draws) for t greater than some time t̂_φ = t̂_φ(W_0). We call t̂_φ(W_0) the thermalization time of observable φ; notice that it depends both on the observable φ and on the initial condition. In fact, some observables may thermalize faster than others, and a good initialization can make the difference between an exponentially (in the dimension of W) long thermalization time and a zero one (for example, if W_0 is drawn from π(·)). In statistical physics it is common to say that the whole chain has thermalized when all observables that concentrate in the thermodynamic limit have thermalized [32,39,55]. This will be our definition of T_therm. Despite its theoretical appeal, this definition is inapplicable in practice: for most observables, computing π_φ(·) is extremely hard computationally.
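As an illustration of how t̂_φ(W_0) can be estimated in practice, the following sketch approximates S_φ(δ) by a central (1 − δ) interval built from equilibrium samples of φ; the helper name and the persistence window are our own illustrative choices, not part of any released implementation.

```python
import numpy as np

def observable_therm_time(phi_eq, phi_chain, delta=0.05, window=50):
    """Lower-bound estimate of the thermalization time of an observable.

    phi_eq: samples of phi(W) with W drawn (approximately) from pi,
            used to approximate S_phi(delta) as a central (1 - delta) interval.
    phi_chain: time series phi(W(t)) of the chain being tested.
    Returns the first time t after which phi stays inside the interval
    for `window` consecutive steps (np.inf if it never does).
    """
    lo, hi = np.quantile(phi_eq, [delta / 2, 1 - delta / 2])
    inside = (phi_chain >= lo) & (phi_chain <= hi)
    for t in range(len(phi_chain) - window):
        if inside[t:t + window].all():
            return t
    return np.inf
```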
Practitioners have instead resorted to a number of heuristics, which provide lower bounds to the thermalization time. These heuristics usually revolve around two ideas. First, there are methods involving multiple chains [5,9,15,39]: in different flavours, all these criteria rely on comparing multiple chains with different initializations, since once all the chains have thermalized, samples from different chains should be indistinguishable. Another approach consists of finding functions with known mean under the posterior and verifying whether the empirical mean is close to its predicted value [9,13,18,19,56]. The method proposed here for detecting thermalization relies instead on the teacher-student framework [55].
Another field we connect with is that of Bayesian learning of neural networks. For an introduction see [17,23,30,48] and references therein. We shall first examine the probabilistic models for Bayesian learning of neural networks and then review the algorithms that are commonly used to sample. In order to specify the posterior (1), one needs to pick the likelihood (or data-generating process) P(y|X, W). The most common model, employed in the great majority of works [22,37,46,50,52], is P(y|X, W) ∝ exp(−(1/∆) Σ_μ ℓ(y_μ, f(X_μ, W))), where f(·, W) is the neural network function, ℓ is the loss function, μ is the sample index, and ∆ is a temperature parameter. As an alternative, other works have introduced the "stochastic feedforward networks" (SFNN), where noise is added at every layer's pre-activation [35,40,44,54]. Outside of the Bayesian learning of neural networks literature, models where intermediate pre- or post-activations are added as dynamical variables have also been considered in the predictive coding literature [2,33,34,51].
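As a concrete reference point, the following is a minimal sketch of the (unnormalized) log-posterior corresponding to this classical model, assuming a squared loss and an i.i.d. Gaussian prior; the function name and the forward-pass interface are illustrative assumptions, not part of any released code.

```python
import numpy as np

def log_posterior_classical(W_list, forward, X, y, delta, lam):
    """Unnormalized log P(W | X, y) for the classical model:
    log P = -(1/delta) * sum_mu loss(y_mu, f(X_mu, W)) + log P(W),
    with squared loss and an i.i.d. Gaussian prior N(0, 1/lam) on
    every weight (both choices are assumptions of this sketch).
    """
    preds = forward(X, W_list)                   # network outputs f(X, W)
    log_lik = -np.sum((y - preds) ** 2) / delta  # -(1/delta) * total loss
    log_prior = -0.5 * lam * sum(np.sum(W ** 2) for W in W_list)
    return log_lik + log_prior
```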
Once a probabilistic model has been chosen, the goal is to obtain samples from the corresponding posterior. A first solution consists of approximating the posterior with a simpler distribution, which is easier to sample. This is the strategy followed by variational inference methods [25,28,29,44,47,53]. Although variational inference yields fast algorithms, it is often based on uncontrolled approximations. Another category of approximate methods is that of "altered MCMCs", i.e., Monte Carlo algorithms which have been modified to be faster at the price of no longer sampling from the posterior [7,26,27,36,38,57]. An example of these algorithms is the discretized Langevin dynamics [49]. Restricting the sampling to a subset of the parameters has also been considered in [43] as an alternative training technique.
Finally, we have exact sampling methods: these are iterative algorithms that in the large time limit are guaranteed to return samples from the posterior distribution. Algorithms for exact sampling mostly rely on MCMC methods. The most popular ones are HMC [12,37], MALA [4,31,42] and the No-U-turn sampler (NUTS) [21]. Within the field of Bayesian learning of neural networks, HMC is the most commonly used algorithm [22,50,52]. The proposed Gibbs sampler is inspired by the work of [1], and later [14,20], which introduced the idea of augmenting the variable space in the context of logistic and multinomial regression.
II. TEACHER-STUDENT THERMALIZATION CRITERION
In this section we explain how to use the teacher-student setting to build a thermalization test for sampling algorithms. The test gives a lower bound on the thermalization time. We start by stating the main limitation of this approach: the criterion can only be applied to synthetic datasets. In other words, the training labels y must be generated by a teacher network, using the following procedure.
We first pick the training inputs arbitrarily and organize them into an n × d matrix X. Each row X_μ of the matrix is a different training sample, for a total of n samples. We then sample the teacher weights W⋆ from the prior P(W). Finally, we generate the noisy training labels as y_μ ∼ P(y|X_μ, W⋆). Our goal is to draw samples from the posterior P(W|D), where D = {(X_μ, y_μ)}_{μ∈[n]} indicates the training set. Suppose we want a lower bound on the thermalization time of an MCMC initialized at a particular configuration W_start. The method consists of running two parallel chains W_1(t) and W_2(t). For the first chain, we use an informed initialization, meaning we initialize the chain at the teacher weights, thus setting W_1(t = 0) = W⋆. For the second chain we set W_2(t = 0) = W_start. To determine convergence we consider a test function φ(W). We first run the informed initialization: after some time T_1, φ(W_1(t)) will become stationary. Using samples collected after T_1 we compute the expected value of φ(·) (let us call it φ̄). Next, we run the second chain. The lower bound to the thermalization time of W_2(t) is the time at which φ(W_2(t)) becomes stationary and starts oscillating around φ̄. In practice, this time is determined by visually inspecting the time series of φ(·) under the two initializations and observing when the two merge.
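The following sketch summarizes this two-chain protocol. In the text the merging time is read off by visual inspection, so the fixed two-standard-deviation band used below is only an illustrative proxy; all names are hypothetical.

```python
import numpy as np

def teacher_student_bound(run_mcmc, phi, W_star, W_start, T, T1):
    """Sketch of the teacher-student thermalization lower bound.

    run_mcmc(W0, T) -> list of T states; phi maps a state to a scalar.
    W_star is the teacher (an exact posterior sample by construction),
    W_start the initialization under test. T1 is the time after which
    the informed chain is considered stationary.
    """
    informed = [phi(W) for W in run_mcmc(W_star, T)]
    test = [phi(W) for W in run_mcmc(W_start, T)]
    phi_bar = np.mean(informed[T1:])       # equilibrium mean of phi
    spread = np.std(informed[T1:])         # equilibrium fluctuations
    # Lower bound: first time the test chain enters the equilibrium band.
    for t, v in enumerate(test):
        if abs(v - phi_bar) < 2 * spread:
            return t
    return np.inf                          # never merged: not thermalized
```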
At first glance this method does not seem too different from [15] or [5], whose methods (described in Appendix A) rely on multiple chains with different initializations. There is however a crucial difference: under the informed initialization most observables are already thermalized at t = 0. To see this, recall that the pair W⋆, D was obtained by first sampling W⋆ from P(W) and then sampling D from P(D|W⋆). This implies that W⋆, D is a sample from the joint distribution P(W, D). Writing P(W|D) = P(W, D)/P(D), we see that W⋆ is also typical under the posterior distribution P(W|D). In conclusion, the power of the teacher-student setting lies in the fact that it gives us access to one sample from the posterior, namely W⋆. It then becomes easier to check whether a second chain is sampling from the posterior by comparing the value of an observable. In contrast, other methods comparing chains with different initializations have no guarantee that, if the two chains "merge", the MCMC is sampling from the posterior, since it is possible that both chains are trapped together far from equilibrium.
III. THE INTERMEDIATE NOISE MODEL
In this section, we introduce a new probabilistic model for Bayesian learning of neural networks. We start by reviewing the classical formulation. Let f(x, W) be the neural network function, with W its parameters and x ∈ R^d the input vector. Given a training set X ∈ R^{n×d}, y ∈ R^n, we aim to sample from P(W|X, y) ∝ exp(−(1/∆) Σ_μ ℓ(y_μ, f(X_μ, W))) P(W), (2) where ℓ(·,·) is the single-sample loss function and ∆ a temperature parameter. Notice that to derive (2) from (1), we supposed that P(W|X) = P(W), i.e., W is independent of X. This is a common and widely adopted assumption in the Bayesian learning literature, and we shall make it in what follows. Most works in the field of Bayesian learning of neural networks attempt to sample from (2). This form of the posterior corresponds to the implicit assumption that the labels were generated as y_μ ∼ P(y|X_μ, W) ∝ exp(−(1/∆) ℓ(y, f(X_μ, W))), (3) where W are some weights sampled from the prior. We propose an alternative generative model based on the idea of introducing a small Gaussian noise at every pre- and post-activation in the network. The motivation behind this process lies in the fact that we are able to sample the resulting posterior efficiently using a Gibbs sampling scheme. Consider the case where f(·, W) is a multilayer perceptron with L layers, without biases and with activation function σ(·); hence f(x, W) = W^(L) σ(W^(L−1) σ(··· σ(W^(1) x) ···)). Here W^(ℓ) ∈ R^{d_{ℓ+1}×d_ℓ} indicates the weights of layer ℓ ∈ [L], with d_ℓ the width of the layer. We define the pre-activations Z^(ℓ) ∈ R^{n×d_ℓ} and post-activations X^(ℓ) ∈ R^{n×d_ℓ} of layer ℓ. Using Bayes theorem and applying the chain rule to the likelihood, we obtain the augmented posterior (4) over the weights and all the pre- and post-activations, with the constraint Z^(L+1) = y. The conditional probabilities are assumed to be Gaussian, Eq. (5), with variances {∆_Z^(ℓ)}, {∆_X^(ℓ)} controlling the amount of noise added at each pre- and post-activation. This structure of the posterior implicitly assumes that the pre- and post-activations are iteratively generated as Z^(ℓ+1) = X^(ℓ) W^(ℓ)⊤ + ε_Z^(ℓ+1), X^(ℓ+1) = σ(Z^(ℓ+1)) + ε_X^(ℓ+1), (7) where X^(1) = X ∈ R^{n×d} are the inputs, Z^(L+1) = y ∈ R^n represents the labels, and ε_Z^(ℓ), ε_X^(ℓ) are n × d_ℓ matrices of i.i.d. elements distributed respectively as N(0, ∆_Z^(ℓ)) and N(0, ∆_X^(ℓ)). We will refer to (7) as the intermediate noise generative process. If we manage to sample from the posterior (4), which has been augmented with the variables {X^(ℓ)}_{ℓ=2}^L, {Z^(ℓ)}_{ℓ=2}^L, then we can draw samples from P(W|X, y) just by discarding the additional variables. A drawback of this posterior is that one has to keep in memory all the pre- and post-activations in addition to the weights.
We remark that the intermediate noise generative process admits the classical generative process (3) and the SFNN generative model as special cases. Setting all ∆s (and hence all ε) to zero in (7), except for the output noise on Z^(L+1), recovers the classical process. Instead, setting ∆_X^(ℓ) = 0 for all ℓ, but keeping the noise in the pre-activations, gives the SFNN model.
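To make the generative process (7) concrete, the following is a minimal sketch of its forward pass for a ReLU multilayer perceptron; it is illustrative only, and the released repository linked below should be taken as the reference implementation.

```python
import numpy as np

def intermediate_noise_forward(X, weights, delta_z, delta_x, rng):
    """Sample labels from the intermediate noise process of Eq. (7).

    X: (n, d) inputs; weights: list of L matrices, W[l] of shape
    (d_{l+1}, d_l); delta_z[l], delta_x[l]: per-layer noise variances.
    Returns y together with all pre-/post-activations.
    """
    sigma = lambda z: np.maximum(z, 0.0)   # ReLU, as in the experiments
    Xl, Zs, Xs = X, [], [X]
    for l, W in enumerate(weights):
        # Pre-activation with Gaussian noise: Z^(l+1) = X^(l) W^T + eps_Z
        Z = Xl @ W.T + rng.normal(0.0, np.sqrt(delta_z[l]),
                                  (Xl.shape[0], W.shape[0]))
        Zs.append(Z)
        if l < len(weights) - 1:           # hidden layers get post-activation noise
            Xl = sigma(Z) + rng.normal(0.0, np.sqrt(delta_x[l]), Z.shape)
            Xs.append(Xl)
    return Zs[-1], Zs, Xs                  # Z^(L+1) plays the role of y
```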
Algorithm 1 Gibbs sampler for Multilayer perceptron
Input: training inputs X, training labels y, noise variances {∆_Z^(ℓ)}, {∆_X^(ℓ)}, prior inverse variances {λ^(ℓ)}. At every sweep, the sampler draws each W^(ℓ), each Z^(ℓ) and each X^(ℓ) in turn from its conditional distribution given all the other variables (see Eqs. (8)-(10)).
Putting all ingredients together, we obtain the Gibbs sampling algorithm, whose pseudocode is reported in Algorithm 1. The main advantages of Gibbs sampling lie in the fact that it has no hyperparameters to tune and, moreover, it is a rejection-free sampling method. In the case of MCMCs, hyperparameters are defined to be all parameters that can be changed without affecting the probability measure that the MCMC asymptotically samples. The Gibbs sampler can also be parallelized across layers: a parallelized version of Algorithm 1 is presented in Appendix D. Finally, one can also extend this algorithm to more complex architectures: Appendices F and G contain respectively the update equations for biases and convolutional networks. We release an implementation of the Gibbs sampler at https://github.com/SPOC-group/gibbs-sampler-neural-networks
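To illustrate why the scheme is rejection-free, consider the conditional of one weight matrix: W^(ℓ) enters (7) linearly with Gaussian noise and has a Gaussian prior, so its conditional given X^(ℓ) and Z^(ℓ+1) is exactly Gaussian (a ridge-regression posterior). The sketch below samples this single update; the remaining Z and X updates follow Eqs. (8)-(10) and are omitted, and all names are illustrative rather than taken from the released code.

```python
import numpy as np

def gibbs_update_W(Xl, Znext, delta_z, lam, rng):
    """Sample W^(l) | X^(l), Z^(l+1) exactly.

    Model: Z^(l+1) = X^(l) W^T + noise N(0, delta_z), prior W_ij ~ N(0, 1/lam).
    Each output row w_k of W is Gaussian with
      covariance  C = (X^T X / delta_z + lam I)^(-1)
      mean        C X^T z_k / delta_z,   z_k = k-th column of Z^(l+1).
    """
    n, d = Xl.shape
    C = np.linalg.inv(Xl.T @ Xl / delta_z + lam * np.eye(d))
    mean = C @ Xl.T @ Znext / delta_z          # shape (d, d_next)
    chol = np.linalg.cholesky(C)
    W_T = mean + chol @ rng.standard_normal(mean.shape)
    return W_T.T                               # shape (d_next, d)
```

Because every conditional can be sampled exactly, no proposal is ever rejected, in contrast to Metropolis-based schemes.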
V. NUMERICAL RESULTS
In this section we present numerical experiments to support our claims. We publish the code to reproduce these experiments at https://github.com/SPOC-group/numerics-gibbs-sampling-neural-nets
A. Teacher-student convergence method
In section II we proposed a thermalization criterion based on having access to an already thermalized initialization.
Here we show that it is more discriminative than other commonly used heuristics, which we first briefly describe.
• Stationarity. Thermalization implies stationarity, since once the MCMC has thermalized it samples from a fixed probability measure. Therefore any observable, plotted as a function of time, should oscillate around a constant value. The converse (stationarity implies thermalization) is not true. Nevertheless, observing when a function becomes stationary gives a lower bound on T_therm.
• Score method [13]. Given a probability measure P(W), we exploit the fact that E_{W∼P}[∂ log P(W)/∂W] = 0. We then monitor the function ∂ log P(W)/∂W along the dynamics. The time at which it starts fluctuating around zero is another lower bound to T_therm.

Figure 1 (caption): in the legend, next to each method, we write between parentheses the initialization (or pair of initializations) the method is applied to. The circles on the x axis represent the thermalization times estimated by each method. Left: comparison of the predictions for the thermalization time of the zero-initialized MCMC; the red y scale on the right refers uniquely to the lines in red, while all other quantities should be read on the black y scale. Right: comparison of the predictions for the thermalization time of two chains initialized independently at random; the pink y scale refers uniquely to the pink line, and all other quantities should be read on the black logarithmic scale. The randomly initialized runs fail to thermalize and their test MSEs get stuck on a plateau. However, R, whose time series on the plateau is stationary and close to 1, fails to detect this lack of thermalization.
• R statistic [15]. Two (or more) MCMCs are run in parallel starting from different initializations. The within-chain variance is compared to the total variance, obtained by merging samples from both chains. Call the ratio of these variances R (a precise definition is given in Appendix A). If the MCMC has thermalized, the samples from the two chains should be indistinguishable, and thus R will be close to 1. The time at which R gets close to 1 provides yet another lower bound to the thermalization time; a minimal sketch of this statistic is given after this list.
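For concreteness, here is a minimal sketch of the classic version of this statistic; the exact variant used in this work is the one defined in Appendix A, which may differ in details.

```python
import numpy as np

def gelman_rubin_R(chains):
    """Gelman-Rubin statistic for m chains of length T.

    chains: array of shape (m, T) holding a scalar observable per chain.
    Returns R, which approaches 1 once within-chain and between-chain
    variances agree (a sketch of the classic [15] diagnostic).
    """
    m, T = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    B = T * chain_means.var(ddof=1)          # between-chain variance
    var_hat = (T - 1) / T * W + B / T        # pooled variance estimate
    return np.sqrt(var_hat / W)
```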
We compare these methods in the case of a one-hidden-layer neural network, identical for the teacher and the student, with input dimension d_1 = 50, d_2 = 10 hidden units and a scalar output. This corresponds to the function f(x, W) = W^(2) σ(W^(1) x + b^(1)), where σ(x) = max(0, x) and W indicates the collection of all parameters: W^(1) ∈ R^{d_2×d_1}, W^(2) ∈ R^{1×d_2} and the bias vector b^(1). We specify the prior by setting the inverse variances λ^(ℓ) of the weights, and likewise over the coordinates of the bias vector. Let n = 2084 be the size of the training set. We pick n to be four times the number of parameters in the network, anticipating that the training set contains enough information to learn the teacher. We start by generating the matrix of training inputs X ∈ R^{n×d_1} with i.i.d. standard Gaussian entries; then we sample the teacher's weights W⋆ from the Gaussian prior. For concreteness we set all the noise variances ∆_Z^(ℓ), ∆_X^(ℓ) to the same value ∆ and set ∆ = 10^{−4}. To generate the training labels y, we feed X, the teacher's weights W⋆ and ∆ into the generative process (7), adapted to also include the biases. For the test set, we first sample X_test, with i.i.d. standard Gaussian entries. Both the test labels and the test predictions are generated in a noiseless way (i.e., just passing the inputs through the network). In this way, the test mean square error (MSE) takes the form MSE_test = (1/n_test) Σ_μ (f(X_test,μ, W) − f(X_test,μ, W⋆))². The full details about this experiment set are in Appendix B. We run the Gibbs sampler on the intermediate noise posterior starting from three different initializations: informed, zero and random; respectively, the student's variables are initialized to the teacher's counterparts, to zero, or are sampled from the prior. In this particular setting, the Gibbs sampler initialized at zero manages to thermalize, while the random initializations fail to do so. Two independent random initializations are shown, in order to be able to use the multiple-chains method. We first monitor the quantity U, the score rescaled by ∆ and averaged over the first-layer weights, where P indicates the posterior distribution. In the zero-initialized chain, U starts oscillating around zero already at t = 20. Then we consider the R statistic computed on the outputs of the two chains with zero and informed initializations. The criterion estimates that the zero-initialized chain has thermalized after t = 6 × 10^4, when R approaches 1 and becomes stationary. Next, we consider the teacher-student method, with the test MSE as the test function (φ in our previous discussion). According to this method, the MCMC thermalizes after the test MSE time series of the informed and zero-initialized chains merge, which happens around t = 10^5. Finally, the stationarity criterion, when applied to the test MSE or to R, gives a similar estimate for the thermalization time. The x-axis of the left plot provides a summary of this phenomenology, by placing a circle at the thermalization time estimated by each method. In summary, the teacher-student method is the most conservative, but the R statistics-based method is also reasonable here.
The right panel of figure 1 then shows a representative situation where thermalization is not reached, yet the R statistics-based method would indicate it is. In the right panel, two randomly initialized chains, denoted by random 1 and random 2, are considered. Neither of these chains actually thermalizes; in fact, looking at the test MSE time series we see that both chains get stuck on the same plateau around MSE = 10−3 and are unable to reach the MSE of the informed initialization. However, as soon as both chains reach the plateau, R quickly drops to a value close to 1 and thereafter becomes stationary, mistakenly signalling thermalization. This example exposes the problem at the heart of the multiple-chain method: the method can be fooled if the chains find themselves close to each other but far from equilibrium. Similarly, since the chains become stationary after they hit the plateau, the stationarity criterion would incorrectly predict that they have thermalized. To conclude, we have shown an example where common thermalization heuristics fail to recognize that the MCMC has not thermalized; instead, the teacher-student method detects the lack of thermalization.
B. Gibbs sampler
In this section, we show that the combination of intermediate noise posterior and Gibbs sampler is effective in sampling from the posterior by comparing it to HMC, run both on the classical and intermediate noise posteriors, and to MALA, run on the classical posterior. We provide the pseudocode for these algorithms in Appendix H. Notice we do not compare the Gibbs sampler to variational inference methods or altered MCMCs, since these algorithms only sample from approximated versions of the posterior. For the first set of experiments, we use the same network architecture as in the previous section. The teacher weights W⋆, as well as X and X_test, are also sampled in the same way. The intermediate noise and the classical generative process prescribe different ways of generating the labels. However, to perform a fair comparison, we use the same dataset for all MCMCs and posteriors; thus we generate the training set in a noiseless way, i.e., setting y^µ = f(X^µ, W⋆). We generate 72 datasets according to this procedure, each time using independently sampled inputs and teacher's weights. The consequence of generating datasets in a noiseless way is that the noise level used to generate the data is different from the one in the MCMC, implying that the informed initialization will not exactly be a sample from the posterior. However, the noise is small enough that we did not observe any noticeable difference in the functioning of the teacher-student criterion.
First, we aim to characterize how often each algorithm thermalizes when started from an uninformed initialization. Uninformed means that the network's initialization is agnostic to the teacher's weights. For several values of ∆, and for all the 72 datasets, we run the four algorithms (Gibbs, classical HMC, intermediate HMC, classical MALA) starting from informed and uninformed initializations. More information about the initializations and hyperparameters of these experiments is contained in Appendix I.
The left panel of figure 2 depicts the proportion of the 72 datasets in which the uninformed initialization thermalizes within 5.5 hours of simulation. The x-axis is the equilibrium test MSE, i.e., the average test MSE reached by the informed initialization once it becomes stationary. When ∆, and thus the test MSE, decreases, the proportion of thermalized runs drops for all algorithms, with the Gibbs sampler attaining the highest proportion in most of the range. In the right panel, we plot the dynamics of the test error under each algorithm for a run where they all thermalize. For the same ∆s as in this plot (respectively ∆ = 10−3 and ∆ = 4.64 × 10−4 for the classical and intermediate noise posteriors), we compute the average thermalization time among the runs that thermalize. Classical HMC, MALA, Gibbs and intermediate HMC take on average around 130, 2700, 3200 and 12500 seconds to thermalize, respectively. This shows that classical HMC, when it thermalizes, is the fastest method, while MALA and Gibbs occupy the second and third positions, with similar times. However, classical HMC thermalizes about 20% less often than the Gibbs sampler. Therefore, in cases where it is essential to reach equilibrium, the Gibbs sampler represents the best choice.
We now move from the abstract setting of Gaussian data to more realistic inputs and architectures. As architectures we use a one-hidden-layer multilayer perceptron (MLP) with 12 hidden units and ReLU activations, and a simple convolutional neural network (CNN) with a convolutional layer, followed by average pooling, ReLU activations, and a fully connected layer. See Appendix J for a description of both models and of the experimental details. In this setting, we resort to the stationarity criterion to check for thermalization, since the teacher-student method is inapplicable. We compare the Gibbs sampler with HMC and MALA, both run on the classical posterior, picking MNIST as the dataset.
Figure 3 shows the test error as a function of time for the two architectures. We choose the algorithms' ∆s such that they all reach a comparable test error at stationarity, and we then compare the time it takes each algorithm to reach this error. For the MLP, all algorithms take approximately the same time to become stationary, around 500 s. In the CNN case, HMC and MALA reach stationarity in 100 s, compared to 800 s for Gibbs. We note, however, that for HMC and MALA to achieve these performances we had to carry out an extensive optimization over hyperparameters, so the speed is overall comparable.
VI. CONCLUSION
In this work, we introduced the intermediate noise posterior, a probabilistic model for Bayesian learning of neural networks, along with a novel Gibbs sampler to sample from this posterior. We compared the Gibbs sampler to MALA and HMC, varying also the form of the posterior. We found that HMC and MALA on the classical posterior, and Gibbs on the intermediate noise posterior, each have their own merits and can be considered effective in sampling the high dimensional posteriors arising from Bayesian learning of neural networks. For the small architectures considered, Gibbs compares favourably to the other algorithms in terms of the ability to thermalize; moreover, no hyperparameter tuning is required, it can be applied to non-differentiable posteriors, and it can be parallelized across layers. The main drawback of the Gibbs sampler lies in the need to store and update all the pre- and post-activations. This slows down the algorithm, compared to HMC, in the case of larger architectures.
We further proposed the teacher-student thermalization criterion: a method to obtain stringent lower bounds on the thermalization time of an MCMC, within a synthetic data setting. We first provided a simple theoretical argument to justify the method and subsequently compared it to other thermalization heuristics, finding that the teacher-student criterion consistently gives the highest lower bound to T_therm.

We now look more in detail at the properties of the informed initialization. We saw in section II that W⋆ is a sample from the posterior P(W|D). What does this imply for the chain initialized at W⋆? First, any observable φ(·) that concentrates under the posterior and that does not depend explicitly on W⋆ (e.g. φ(W) = ||W||) will be thermalized already at t = 0. This implies that all these observables will be stationary from the very beginning of the MCMC simulation and they will oscillate around their mean value under the posterior. The case where φ(·) depends explicitly on W⋆ is more delicate. We comment on it because the observable we use to determine thermalization throughout the whole paper is the test MSE, which explicitly depends on W⋆. In the following we will explore the behavior of the test MSE; however, keep in mind that most other observables dependent on W⋆ will exhibit similar behavior. The first peculiarity of the test MSE is that under the informed initialization it is not stationary, and it is not close to its expected value under the posterior. In fact, at initialization we always have test MSE = 0. As more samples are drawn, the MSE then relaxes to equilibrium and starts oscillating around its expected value under the posterior. If the test MSE is not thermalized, what then is the advantage of using the informed initialization compared to an uninformed one? While the test MSE is not thermalized, most other observables are under the informed initialization. This means that the chain is started in a favorable region of the weight space. In practice, looking at the right panel of figure 2, one can compare the smooth convergence to stationarity of the informed initializations (transparent lines) to the irregular paths followed by the uninformed initializations (solid lines).
Appendix B: Numerical experiments
In the rest of this appendix, we report the details of the experiments presented in V A. Recall that a synthetic dataset was generated using a teacher network with Gaussian weights, according to the intermediate noise generative process (7). Then the Gibbs sampler was used to sample from the resulting posterior. We note that all parameters of the Gibbs sampler (i.e. all the ∆s and λs) match those of the generative process. The Gibbs sampler was run on four chains: one with informed initialization, one initialized at zero, and two chains with independent random initializations. For the random initializations, the weights of the student are sampled from the prior (with the same λs as the teacher); then the pre- and post-activations are computed using the intermediate noise generative process (7), with noises ϵ independent from those of the teacher. The zero-initialized chain plausibly thermalizes, while the randomly initialized ones do not. We briefly comment on how the R statistic and the score statistics were computed.

Figure 4: Percentiles of R as a function of time. A line marked with the number k in the legend represents how the k-th percentile of R changes throughout the simulation. The data comes from the same simulation that was used for computing the average R in figure 1. The red dashed horizontal line is placed at a height of 1, the value that R should approach when the chains are close to each other. Left: percentiles of R, computed on two chains with respectively zero and informed initialization. Right: percentiles when computing R from two chains independently initialized at random.
R statistic
In the notation of Appendix A, θ is given by {W(1), b(1), Z(2), X(2), W(2), b(2)}. Next we have to pick the (possibly vector-valued) function ψ(θ). One possible choice is to use the weights, e.g., ψ(θ) = W(1). However, due to the permutational symmetry between the neurons in the hidden layer, this gives R ≫ 1 even when the MCMC has thermalized. Hence one must focus on quantities that are invariant to this symmetry. A natural choice is the student output on the test set. We pick ψ(θ) ∈ R^{n_test}, with ψ(θ) = f(X_test, W) and f as in (11). We record these vectors along the simulation at times evenly spaced by 100 MCMC steps. We split the samples into blocks of 50 consecutive measurements. We then compute the R statistic on each block (hence N = 50). Since the function ψ we are using returns an n_test-dimensional output, R will also be n_test-dimensional. In figure 1 we then decided to plot the average (over the test set) value of R. In other words, calling R^ν_τ the value of R computed on the τ-th block and the ν-th test sample, what we plot are the pairs (t_τ, (1/n_test) Σ_ν R^ν_τ), with t_τ being the average time within block τ. In principle the whole distribution of R is interesting. Figure 4 shows the evolution of the 25th, 50th, 75th and 95th percentiles of R. Recall that the two chains in the left panel thermalize, while those in the right panel are actually very far from equilibrium. Even if the distribution in the right panel is more shifted away from one than the distribution in the left panel, we still think that the sudden drop from a higher value and the subsequent stationarity could be interpreted as the chains having thermalized.
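The block-averaged R described above can be sketched as follows, assuming the recorded test-set outputs of two chains are stored as arrays of shape (number of recordings, n_test); the R definition matches the earlier sketch.

```python
import numpy as np

def blockwise_mean_R(psi_a, psi_b, block=50):
    """psi_a, psi_b: arrays of shape (T, n_test), the student's test-set
    outputs recorded every 100 MCMC steps along two chains.
    Returns, for each block of `block` measurements, R averaged over the
    n_test components."""
    T = min(len(psi_a), len(psi_b))
    r_series = []
    for start in range(0, T - block + 1, block):
        a = psi_a[start:start + block]                   # (block, n_test)
        b = psi_b[start:start + block]
        means = np.stack([a.mean(0), b.mean(0)])         # per-chain means
        w = 0.5 * (a.var(0, ddof=1) + b.var(0, ddof=1))  # within-chain variance
        bvar = block * means.var(0, ddof=1)              # between-chain variance
        var_hat = (block - 1) / block * w + bvar / block
        r_series.append(np.sqrt(var_hat / w).mean())     # average over the test set
    return np.array(r_series)
```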
Score method
If an MCMC has thermalized then the gradient of the log posterior must have mean zero, hence the time series of each of its coordinates will have to oscillate around zero. In figure 1 we plot U = (∆/(d1 d2)) Σ_{i=1}^{d1} Σ_{α=1}^{d2} ∂ log P/∂W(1)_{αi}, where P indicates the posterior distribution; this is the score rescaled by ∆ and averaged over the first layer weights. We also tried taking the gradient with respect to the second layer weights W(2), or with respect to Z(2). The results do not change significantly and the score methods keep severely underestimating the thermalization time.
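For the first layer, the relevant factors of the joint distribution are Gaussian, so the score is available in closed form; the sketch below relies on this observation (our reconstruction; in general one would obtain the gradient by automatic differentiation).

```python
import numpy as np

def score_monitor_U(W1, b1, X, Z2, delta, lam):
    # The factors involving W^(1) read
    #   log P ∝ -||Z2 - X W1^T - b1||^2 / (2∆) - λ ||W1||^2 / 2,
    # hence the score with respect to W^(1) is explicit.
    resid = Z2 - X @ W1.T - b1              # shape (n, d2)
    grad = resid.T @ X / delta - lam * W1   # ∂ log P / ∂ W^(1), shape (d2, d1)
    return delta * grad.mean()              # U: rescaled by ∆, averaged over coordinates
```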
Appendix C: Gibbs sampler derivation
In this appendix, we provide the details of the derivation of the Gibbs sampler algorithm in the case of an MLP without biases. In order, we will derive equations (8), (9) and (10).
For X (ℓ) we see that the conditional distribution factorizes over samples µ ∈ [n].
In the derivation we used that P(X(ℓ)|W(ℓ), Z(ℓ)) = P(X(ℓ)|Z(ℓ)). We see that the conditional distribution of X(ℓ) is a multivariate Gaussian, hence it is possible to sample from it efficiently on a computer.

Algorithm 2: Parallel Gibbs sampler for MLP. The keyword parallel indicates that all iterations of the loop can be executed in parallel.
Once again the conditional distribution of W(ℓ) is a multivariate Gaussian, with mean m_W and covariance given by the inverse of the associated quadratic form. In the case of Z(ℓ+1) the conditional factorizes both over samples and over coordinates. Because σ(z) = z for z > 0 and σ(z) = 0 for z < 0, the scalar conditional of Z(ℓ+1)µ_α splits into a positive and a negative part, each proportional to a Gaussian; in the negative part the coupling term of (C8) does not appear. Call Z+ and Z− the masses of the positive and negative parts, so that the total normalization is Z = Z+ + Z−, and the probability of having Z(ℓ+1)µ_α < 0 is p− = Z−/Z. To sample, first draw a Bernoulli variable r ∼ Bernoulli(p−). If r = 1, sample a negative truncated normal from the z < 0 distribution. If r = 0, sample a positive truncated normal from the z > 0 distribution.
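A runnable sketch of this sign-split sampling step for a single scalar pre-activation is given below; here `a` plays the role of the mean propagated from the layer below and `x` that of the observed post-activation, both with noise variance ∆. This is a simplified scalar model of the conditional; the paper's exact expressions are the ones derived in this appendix.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def sample_z(a, x, delta, rng):
    """Sample z from p(z) proportional to N(z; a, delta) * N(x; ReLU(z), delta)."""
    s = np.sqrt(delta)
    # z > 0 branch: product of two Gaussians -> N(m_p, delta/2), truncated to z > 0.
    m_p, s_p = 0.5 * (a + x), np.sqrt(delta / 2.0)
    log_Zp = -(a - x) ** 2 / (4 * delta) + np.log(s_p * np.sqrt(2 * np.pi)) \
             + norm.logcdf(m_p / s_p)
    # z < 0 branch: N(a, delta) truncated to z < 0, times the constant N(x; 0, delta).
    log_Zm = -x ** 2 / (2 * delta) + np.log(s * np.sqrt(2 * np.pi)) \
             + norm.logcdf(-a / s)
    p_minus = 1.0 / (1.0 + np.exp(log_Zp - log_Zm))   # Z- / (Z+ + Z-)
    if rng.random() < p_minus:                        # r ~ Bernoulli(p-)
        return truncnorm.rvs(-np.inf, -a / s, loc=a, scale=s, random_state=rng)
    return truncnorm.rvs(-m_p / s_p, np.inf, loc=m_p, scale=s_p, random_state=rng)
```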
If Z(ℓ+1)µ_α > 0 (resp. < 0), one ends up sampling from the positive (resp. negative) part of the corresponding Gaussian; the normalizations Z+ and Z− are given by the Gaussian masses on each half-line.

We now set up the notation for the convolutional case. We use the following indices:
• a, a′ ∈ [d_{ℓ+1}] are indices for positions inside layer ℓ + 1 (i.e. a specifies both the horizontal and vertical position within the layer);
• b, b′, c, c′ ∈ [d_ℓ] are indices for positions in layer ℓ;
• r, r′ ∈ [K_ℓ] are indices for the position within the filter (e.g. if the filter is 3 × 3 then K = 9 and r runs over all the components of the filter);
• i = (β, r) and i′ = (β′, r′) are used to group pairs of indices.

Commas will be used to separate indices whenever there is ambiguity. The basic building block is the noisy convolutional layer, with i.i.d. Gaussian noises ϵ and filter weights W(ℓ) ∈ R^{C_{ℓ+1}×C_ℓ×K_ℓ}. We also indicate with ν_a(r) the position of the r-th coordinate of the filter inside the input layer, when the output is in position a. In other words, ν_a : [K_ℓ] → [d_ℓ]. First we notice that P(Z(ℓ+1)|W(ℓ), X(ℓ), X(ℓ+1)) is basically unaffected by the structure of the weights. We will therefore concentrate on computing P(W(ℓ)|Z(ℓ+1), X(ℓ)) and P(X(ℓ)|Z(ℓ+1), Z(ℓ), W(ℓ)). Let us begin with the former. Computing these quantities requires grouping the two indices r, β into a single index and then inverting the matrix à of the quadratic form in W; in words, we are packing the indices of Ã, inverting it, and then unpacking the indices, which yields the mean m_W. We now look at P(X(ℓ)|Z(ℓ+1), Z(ℓ), W(ℓ)). In the computation we use the identity Σ_c δ_{c,ν_a(b)} = 1. As in the previous case we let A be the matrix of the quadratic form. In the last passage, ν_a([K_ℓ]) indicates the image of the whole filter through ν_a; we thereby incorporated the constraint that a should be such that c, c′ are in the same subset of the input (if such an a exists at all). As we did previously for W, we group the indices using i = (β, c) and i′ = (β′, c′), define the corresponding packed matrix, and obtain the mean m_X.
Practical implementation
So far we packed the spatial indices (i.e. the x and y coordinates within an image) into a single index. This allowed for more agile computations. We now unpack the indices and translate the results we obtained. In a practical case, X(ℓ) carries separate height and width indices, and similarly for the weights W(ℓ), whose filter has height H(ℓ)_W and width W(ℓ)_W. Let s_y, s_x be respectively the strides along the y and x axes. We do not use any padding; then the height and width of Z(ℓ+1) are fixed accordingly. Given a = (a_y, a_x), the position inside layer ℓ + 1, one can write ν_a(r) = ν_a(r_y, r_x) = (r_y + s_y a_y, r_x + s_x a_x), yielding the corresponding expression for the forward pass. In ϵ we removed all the indices, for notational simplicity.
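The index map ν_a and the resulting strided, unpadded forward pass can be written out explicitly; the following sketch (plain loops, hypothetical array shapes) is meant only to make the notation concrete.

```python
import numpy as np

def output_shape(h_in, w_in, h_filt, w_filt, s_y, s_x):
    # No padding: valid-convolution output size.
    return (h_in - h_filt) // s_y + 1, (w_in - w_filt) // s_x + 1

def nu(a_y, a_x, r_y, r_x, s_y, s_x):
    # nu_a(r): input-pixel coordinates read by filter position r
    # when the output pixel is at position a.
    return r_y + s_y * a_y, r_x + s_x * a_x

def conv_forward(X, W, s_y, s_x):
    """Noiseless forward pass written with the nu_a index map of the text.
    X: (C_in, H, W_in); W: (C_out, C_in, h_filt, w_filt)."""
    c_out, c_in, h_f, w_f = W.shape
    h_out, w_out = output_shape(X.shape[1], X.shape[2], h_f, w_f, s_y, s_x)
    Z = np.zeros((c_out, h_out, w_out))
    for ay in range(h_out):
        for ax in range(w_out):
            for ry in range(h_f):
                for rx in range(w_f):
                    by, bx = nu(ay, ax, ry, rx, s_y, s_x)
                    Z[:, ay, ax] += W[:, :, ry, rx] @ X[:, by, bx]
    return Z
```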
Recall we had to compute the matrix à when sampling W; in this notation the expressions for à and, in turn, for m_W follow by the same packing of indices. We now move to sampling X(ℓ)µ, for which the packed matrix and the mean m_X are obtained analogously. One can see that when the filter W(ℓ) has dimensions (height and width) much smaller than those of X(ℓ), then à will have few nonzero elements. In fact, for Ã_{βc_yc_x β′c′_yc′_x} to be nonzero, one must have that c, c′ are close enough to be contained in the filter W. This implies that all the pairs of pixels c, c′ with |c_x − c′_x| > W(ℓ)_W or |c_y − c′_y| > H(ℓ)_W will have Ã_{βc_yc_x β′c′_yc′_x} = 0. Hence à will be a sparse tensor. The same will somewhat be true of the covariance, which is the inverse of A.
Average Pooling
Here we look at how to put pooling into the mix. We focus on average pooling, which is easier since it is a linear transformation. For each pixel b ∈ [d_ℓ], let P_ℓ(b) be the "pooled pixel" in layer ℓ + 1 to which b gets mapped. Hence we have P : [d_ℓ] → [d_{ℓ+1}], a surjective function. Given a pixel a in layer ℓ + 1, this will have multiple preimages through P; we denote the set of preimages as P^{-1}(a). P^{-1}(a) can therefore be seen as the receptive field of pixel a.

We now describe the intermediate noise model used for the MLP experiments on MNIST. To do so, the additional variables Z(2) ∈ R^{n×12}, X(2) ∈ R^{n×12}, Z(3) ∈ R^{n×10} are introduced, with n = 6 × 10^4 in the case of MNIST. A noise with variance ∆ is put on Z(2) and X(2). Notice there is no noise between Z(3) and the labels y. Hence the posterior has the hard constraint y^µ = arg max_{α∈{0,1,...,9}} Z(3)µ_α. The priors on the parameters are specified through the λs (e.g. λ(1)_b = 12). On the intermediate noise posterior we ran experiments with ∆ = 2, and all variables were set to zero at the initial condition.
In the case of the classical posterior, we ran experiments with ∆ = 2 using MALA and HMC as algorithms. The value of ∆ was picked so that the test error at stationarity is the same as in the intermediate noise posterior. The optimal parameters of HMC are a learning rate of 10−3 and 200 leapfrog steps, while for MALA the optimal learning rate is 2 × 10−6. For HMC and MALA the variables were initialized as i.i.d. Gaussians with respective standard deviations 10−1 and 10−4.
Regarding the CNN, we provide a schematic representation of the architecture in figure 5. In the intermediate noise model, a noise is added after the convolution, after the average pooling, after the ReLU, and after the fully connected layer. All noises are i.i.d. Gaussians with variance ∆, as prescribed by the intermediate noise model. To complete the description, we specify the prior by setting the corresponding λs. In the intermediate noise posterior, we run the Gibbs sampler with ∆ = 100 and initialize all variables to zero. For the classical posterior, we run MALA and HMC on the CNN architecture with ∆ = 10. This value of ∆ leads to approximately the same test error as in the intermediate noise posterior. For HMC we use a learning rate of 10−3 and 50 leapfrog steps, while for MALA we choose 5 × 10−6 as the learning rate.
Both for the MLP and the CNN, in the case of the classical posterior we have to specify a loss function. To do so we replace the argmax in the last layer by a softmax and apply a cross entropy loss on top of the softmax. Calling Q ∈ R^{n×10} the output of the softmax, the loss function is ℓ(y^µ, Q^µ) = − log Q^µ_{y^µ}. The argmax is however still used when making predictions, for example when evaluating the model on the test set. All experiments were run on one NVIDIA V100 PCIe 32 GB GPU and one core of a Xeon-Gold CPU running at 2.1 GHz.
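For concreteness, a minimal sketch of this loss and of the argmax prediction rule:

```python
import numpy as np

def softmax(z):                                   # z: (n, 10), last-layer outputs
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized exponentials
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y, z):
    Q = softmax(z)                                # (n, 10)
    return -np.log(Q[np.arange(len(y)), y])       # loss(y_mu, Q_mu) = -log Q_mu[y_mu]

def predict(z):
    return z.argmax(axis=1)                       # argmax is still used for predictions
```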
Figure 1: Comparison of different thermalization measures. In the legend, next to each method we write between parentheses the initialization (or pair of initializations) the method is applied to. The circles on the x axis represent the thermalization times estimated by each method. Left: We compare the predictions for the thermalization time of the zero-initialized MCMC. The red y scale on the right refers uniquely to the lines in red. All the other quantities should be read on the black y scale. Right: We compare the predictions for the thermalization time of two chains initialized independently at random. The pink y scale refers uniquely to the pink line. All other quantities should be read on the black logarithmic scale. The randomly initialized runs fail to thermalize and their test MSEs get stuck on a plateau. However, R, whose time series on the plateau is stationary and close to 1, fails to detect this lack of thermalization.

Figure 1 illustrates a representative result of these experiments. In the left panel, we aim to find the highest lower bound to the thermalization time of the zero-initialized chain. Looking at the score method, we plot U = (∆/(d1 d2)) Σ_{i=1}^{d1} Σ_{α=1}^{d2} ∂ log P/∂W(1)_{αi}, as discussed above.
Figure 2: Thermalization experiments on synthetic data. Left: Proportion of the 72 runs that thermalize, plotted against the equilibrium test MSE. Right: Example of the dynamics of the test MSE in a particular run where all four algorithms thermalize. In order to get a similar equilibrium test MSE in the classical and intermediate noise posteriors, we pick respectively ∆ = 10−3 and ∆ = 4.64 × 10−4. The transparent lines represent the informed initializations.
Figure 3: Gibbs on the intermediate noise posterior and HMC, MALA both on the classical posterior, compared on MNIST. Left: MLP with one hidden layer with 12 hidden units. Right: CNN network.
Figure 5: CNN architecture used in the experiments of section V B. The convolutional layer is composed of the filter W(1) with shape 2 × 1 × 4 × 4 and an output channel bias b(1) ∈ R^2. The final layer instead has weights W(2) ∈ R^{72×10} and bias b(2) ∈ R^{10}.
Immune Cell Reaction Associated with Coenurus cerebralis Infection in Sheep with Particular Reference to ELISA as a Diagnostic Tool
Simple Summary: Sheep infected with Coenurus cerebralis (C. cerebralis) were subjected to histopathological, hematological, immunological and serological examination. In histopathological sections, cerebral tissue showed an areolar cyst wall with many protoscolices attached to the tissue, together with necrosis and inflammatory cells' aggregation. The infected sheep exhibited a significant alteration in blood profile when contrasted with apparently healthy sheep. Evaluation of ELISA specificity using two antigens showed specificities of 46.2% and 38.5% for fluid and scolex Ag, respectively, while accuracy was 76.7% and 73.3% for fluid and scolex Ag, respectively. Levels of TNF-α and IFN-γ were significantly elevated in infected sheep when contrasted with control ones.

Abstract: Sturdy is a disease caused by Coenurus cerebralis (C. cerebralis) that typically affects the brain and spinal cord of sheep. This study therefore aimed to detect the pathological, hematological and immunological changes caused by C. cerebralis in sheep. On examination, a total of 17 sheep out of 30 sheep (56.7%) from various regions in Egypt were found infected with C. cerebralis from May to August 2019. Each cyst was extracted from the sheep brain; in addition, tissue specimens were taken from the brain tissues for histopathological examination. The hematological profile was analyzed. The Enzyme-Linked Immunosorbent Assay's (ELISA) specificity and sensitivity were evaluated using cystic fluid and protoscolices antigens (Ag). The cell-mediated immunity against the C. cerebralis cyst was also assessed via quantitative Real Time Polymerase Chain Reaction (qRT-PCR), showing alterations in mRNA expression of the Tumor Necrosis Factor-alpha (TNF-α) and gamma Interferon (IFN-γ) cytokines. In histopathological sections, cerebral tissue showed an areolar cyst wall with many protoscolices attached to the tissue. The affected part showed prominent necrosis together with inflammatory cells' aggregation. Hyperplastic proliferation of the ependymal cells was a common finding. The infected sheep exhibited significantly lower total erythrocyte numbers (ER), hemoglobin levels (Hb), packed cell volume (PCV), platelet numbers (PN) and segmented cell numbers compared to apparently healthy sheep. Although the sensitivity of the indirect ELISA was 100% for both of the Ags (fluid and scolex), the evaluation of ELISA specificity using the two antigen (Ag) preparations showed specificities of 46.2% and 38.5% for fluid and scolex Ag, respectively. Meanwhile, accuracy was 76.7% and 73.3% for the fluid and scolex Ags, respectively, indicating that the fluid is the preferable sample type for ELISA. Levels of TNF-α and IFN-γ were significantly elevated in infected sheep compared to non-infected control ones. In conclusion, C. cerebralis is a serious disease agent infecting sheep in Egypt, causing economic losses. Although this investigation provides preliminary information about the prevalence and the pathological and serological characterization of C. cerebralis, further sequencing and phylogenetic analysis is needed to better understand the T. multiceps epidemiology in ruminants and canines in Egypt.
Introduction
Gid (sturdy) is a fatal infection of small ruminants caused by C. cerebralis, the larval stage of Taenia multiceps, which infects the small intestines of dogs and wild canids [1]. The predilection niche of C. cerebralis cysts, the disease's key player, is the central nervous system (CNS) of small ruminants, causing ovine coenurosis, which is present in Africa and Asia with an incidence of 1.3 to 9.8% [2].
Coenurosis has two forms: acute and chronic. Acute coenurosis occurs during larval migration in the CNS [3], usually about ten days after the ingestion of food contaminated with a huge count of T. multiceps eggs. Acute disease is mainly common in lambs aged 6-8 weeks, and the symptoms are associated with inflammation and allergic expression, including fever, fatigue, head pressing, convulsions and death [4].
Chronic cases suffer from paralysis, blindness, nystagmus, lethargy, circling movement, lateral deviation of the head (head tilting) and head shaking with no reaction to stimuli. Diseased sheep separate themselves from the flock and push their head against fixed objects, which is commonly known as head pressing [5,6]. In addition, coenurosis represents a significant economic loss [7,8].
Locally, the overall prevalence rate of coenurosis in the examined sheep population in Egypt varied from 2.3% to 3.7% [5,9]. The variation in disease prevalence depends on the diagnostic tool, either clinically (about 11%) or by postmortem examination (3%) [5]. In Egypt, coenurosis was more prevalent in sheep [10][11][12], with no molecular information.
Coenurus cerebralis induces different pathological lesions in the brain tissue, including perivascular lymphocytes' aggregation, neuronal degeneration and demyelination, necrosis and giant cells' formation and ependymal cells' hyperplasia of the brain ventricles [12,13].
Red blood cell counts (RBCs) and PCV are affected by nutritional status, season, age, parasitic infection and physiological stage (Adewuyi and Adu, 1984). In addition, Mbassa and Poulsen [14] recorded that eosinophilia and basophilia in adult sheep have been considered an indicator of allergic reaction to recent parasitism, and investigated the effect of parasitic infection with O. ovis on the complete blood count (CBC) of sheep.
The diagnostic specificity of serological methods needs further study, especially in areas where Fascioliasis and Schistosomiasis are endemic, as cross-reaction of different parasites' antigens has caused failures to establish a reliable diagnostic method [15]. ELISA based on recombinant Ag for the diagnosis of coenurus cysts showed low sensitivity [16]. However, serodiagnosis via indirect ELISA was frequently employed in experimentally induced coenurus infection in sheep [17]. So, assessment and purification of the Ags are required to enhance the efficacy of these serological methods [18].
Serological techniques, such as ELISA and Dot-ELISA, could be used as screening instruments to detect antibodies against Coenurus spp. cysts before the export of living animals and to discover the infection for early treatment of the disease [19], including identifying and eliminating the focal reservoirs of infection during control programs [20].
IFN-γ and TNF-α mostly influence cell-mediated Th1 immunity. This response is useful in decreasing parasitism and in controlling latent infections, as in the case of Toxoplasma gondii and Neospora caninum illness [21] through cyst formation, and in Oestrus ovis infection [22]. Therefore, this investigation aimed to describe the brain changes caused by the cyst infection. In addition, the ELISA sensitivity and specificity were evaluated using cystic fluid and protoscolices (PSC) Ag in the serodiagnosis of coenurosis in naturally infected sheep. Furthermore, the cell-mediated immunity against coenurosis in sheep was estimated via evaluation of the changes in mRNA expression of TNF-α and IFN-γ using qRT-PCR.
Animals
A total of 50 sheep from different localities in Egypt were examined according to Constable et al. [23] from May to August, and animals that showed nervous signs were recorded. The suspected diseased sheep were slaughtered and subjected to postmortem (PM) examination [24].
Sampling
The brain samples were obtained after PM examination from sheep with typical nervous signs and those suspected to be clinically diseased. A sagittal cut of every suspected sheep head was made to detect C. cerebralis in their brains. All of the harvested cysts were processed according to Attia et al. [25] and Varcasia et al. [26].
Cysts were extracted from the infected sheep brains and thoroughly rinsed with saline. Cystic fluid was drawn from the cyst and saved until use. Part of the protoscolices extracted from the cysts was saved in phosphate buffer saline (PBS) pH 7.2, while the other part was kept in 10% formalin buffer and loaded on slides for morphometric assessment.
In parallel, blood samples were recovered from each tested animal into EDTA-treated and plain 5 mL tubes. The recovered blood was centrifuged at 3500 rpm/15 min, and the sera were harvested and kept at −20 °C for further examination.
Fecal samples were collected from all examined animals for further parasitological examination. All samples (cysts, whole blood, sera, and faeces) were submitted to the Faculty of Veterinary Medicine, Cairo University, for further examinations. All animal handling steps followed the National Guidelines for the care and use of animals.
Parasitological Examination
A thin Giemsa-stained blood smear was performed to detect the existence of any heme-parasitism associated with Coenurus spp. infection. Faeces were also tested to discover any gastrointestinal parasitism following the concentration method [27]. Briefly, about 1 g of faeces was mixed with 4 mL of saturated salt solution in a 20 mL conical glass test tube, then stirred well, and more salt solution was added till the container was nearly full while the stirring was continued. Any coarse matter which floated up was removed and the tube was placed on a levelled surface with a glass slide placed over the top of the tube, in contact with the fluid. It was allowed to stand for 30 min. The slide was then removed and observed for the presence of eggs/cysts.
Histopathological Examination
Tissue specimens were collected from the brain tissues of the animals showing nervous signs and parasitic infection with C. cerebralis. The tissue was then fixed in formalin buffered solution 10% and processed for histopathological examinations, according to Bancroft and Gamble [28].
Hematological Examination
To study the effect of coenurosis on the blood cells and platelets count, whole blood was used for CBC counts and PCV value detection, as recorded by Polizopoulou [29].
Antigen Preparation
The fluid from the C. cerebralis cyst was collected and centrifuged at 1500 rpm/15 min at 4 °C, with the supernatant kept in aliquots at −20 °C until needed. The protoscolices were rinsed 3 times in PBS (pH 7.4) before being re-suspended in an equal amount of PBS and kept at −20 °C until needed. The cyst fluid Ag (HCF) was processed using the Maddison et al. [30] technique. Protoscolex Ag was processed according to Ahmed et al. [31].
Indirect ELISA
A checkerboard titration was utilized to determine the lowest Ag dilution [32], using 96-well flat-bottomed ELISA plates. All dilutions and chemicals were applied to the ELISA plates following [33]. A total of 10 mg of Ortho-Phenylene Diamine (OPD) substrate and a Protein A conjugate (IgG) (Sigma-Aldrich, Cat: A9647, St. Louis, MO, USA) were used. The reaction was stopped using 3N H2SO4 as stopping buffer. The optical density (OD) was measured using an ELISA reader set to 450 nm (Bio-Rad, USA). All sera were counted as positive when the OD value was equal to or exceeded the cut-off value.
Estimation of TNF-α and IFN-γ
The parasitic-infected tissues were dissected under aseptic conditions. As negative controls, samples from five uninfected sheep were gathered in the same way.
RNA Isolation
According to the manufacturer's guidelines, an RNA kit (Ambion, Applied Biosystems) was used to isolate RNA from 100 mg of infected samples. The samples were homogenized in Lysing Matrix D tubes (MP Biomedicals) with a Fast Prep-24 homogenizer (MP Biomedicals, 2 cycles of 30 s). The quality and amount of RNA were estimated by Nanodrop (Thermo Scientific). Following the manufacturer's instructions, 500 ng of RNA were treated with DNase I, amplification grade (Invitrogen). The High-Capacity cDNA Archive Kit (Applied Biosystems) was utilized to reverse transcribe the treated RNA.
qRT-PCR Procedures
Specific PCR primer sets for sheep IFN-γ and TNF-α were designed following the sequences submitted to GenBank (Table 1). A reference gene, β-actin, was employed for sample normalization. The gene expression was assessed against a pool of cDNA derived from 5 healthy animals screened for parasites. Denaturation, annealing and extension proceeded at 94 °C for 30 s, 60 °C for 30 s and 72 °C for 45 s, respectively, over 40 amplification cycles.

Table 1: The sequences of the forward and reverse primers used in the quantitative real-time PCR.
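The text does not state the quantification formula; assuming the common 2^-ΔΔCt method with β-actin normalization (an assumption consistent with the stated use of a reference gene), relative expression would be computed as sketched below.

```python
def relative_expression(ct_target, ct_actin, ct_target_ctrl, ct_actin_ctrl):
    """Fold change of a cytokine transcript by the 2^-ddCt method,
    normalized to the beta-actin reference gene and to the healthy-control
    cDNA pool. NOTE: this specific formula is an assumption, not stated
    in the text."""
    d_ct_infected = ct_target - ct_actin            # delta-Ct, infected sample
    d_ct_control = ct_target_ctrl - ct_actin_ctrl   # delta-Ct, control pool
    return 2.0 ** -(d_ct_infected - d_ct_control)
```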
Statistical Analysis
Hematological and immunological parameters were compared between healthy and infected sheep using one-way analysis of variance (ANOVA), means and standard error (SE). To assess the sensitivity, specificity, accuracy and other diagnostic values of the Ag, the cut-off point for positive ELISA results was adopted as the two-fold mean optical density (OD) of negative control sera [34].
It was considered that the post-mortem examination of slaughtered sheep was the most reliable way to detect Coenurus infection, so it was used as the standard gold method [35].
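A sketch of this evaluation, with the cut-off taken as twice the mean OD of the negative control sera and the postmortem result as the gold standard (function and variable names are hypothetical):

```python
import numpy as np

def elisa_diagnostics(od, gold_positive, od_negative_controls):
    """od: optical densities of the tested sera; gold_positive: boolean
    postmortem results used as the gold standard."""
    cutoff = 2.0 * np.mean(od_negative_controls)   # two-fold mean OD of negatives
    pos = od >= cutoff
    tp = np.sum(pos & gold_positive)
    fp = np.sum(pos & ~gold_positive)
    fn = np.sum(~pos & gold_positive)
    tn = np.sum(~pos & ~gold_positive)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(od)
    return cutoff, sensitivity, specificity, accuracy
```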
Statistical analysis was carried out with PASW Statistics, Version 18.0 (SPSS Inc., Chicago, IL, USA). A p-value < 0.05 was considered statistically significant.
Parasitological Examination
The collected blood smear and fecal samples from animals infected with coenurosis were negative for blood and gastrointestinal parasites.
Postmortem Examination of Collected Cysts
The recovered C. cerebralis cysts from infected brains were identified macroscopically as bladder-like and all cysts were single cysts with no evidence of multiple cysts. In addition, cysts were made up from transparent hyaline envelopes with various internal protoscolices immersed in a translucent fluid. The scolices per cyst differed in count from 20 to less than 100 scolices.
Gross Pathology
In the brain of sheep infected with C. cerebralis, the cerebral tissue showed multiple parasitic cysts with prominent pressure atrophy (Figure 1A), associated with severe congestion of the cerebral blood vessels (Figure 1B). A pale triangular area of infarction in the cerebrum was also noticed. On sectioning of the brain, the denuded neuroepithelial surface of the brain's lateral ventricle was a characteristic finding (Figure 1C). A triangular area of hemorrhage on the cerebellum near the attachment site of the C. cerebralis cyst was observed (Figure 1D).
Histopathological Findings

In histopathological sections of the cerebrum of sheep affected with C. cerebralis cysts, an areolar cyst wall showed with many protoscolices attached to the cerebral tissue (Figure 2A). The area of infarction with prominent necrosis, inflammatory zones and atrophied tissue was common (Figure 2B). Severe tissue necrosis and edema were observed (Figure 2C). In the area of infarction, many multinucleated foreign body giant cells were noticed in the inflammatory zone separating the necrotic tissue from the healthy cerebral tissue (Figure 2D). In some other cases, the lesions in the neurons and brain ventricles were the prominent findings, where chromatolysis of the Nissl's granules and dead neurons were common (Figure 3A) and demyelinated nerve fibers and neuronophagia were observed (Figure 3B). In the brain ventricles, dilation of the brain's lateral ventricle and hyperplasia of the neuronal epithelium and ependymal cells were noticed (Figure 3C). In other cases, hyperplasia of the neuronal epithelium of the lateral ventricle was advanced and accompanied by subepithelial edema (Figure 3D). In the areas of hyperplasia, congested newly formed capillaries in the hyperplastic tissue were also noticed and the hyperplastic cells invaded deep in the brain tissue (Figure 3E). There was severe congestion of the choroid plexus and perivascular aggregation of inflammatory cells, mainly lymphocytes (Figure 3F).
Hematological Parameters
The hematological findings of examined sheep are found in Table 2. Diseased sheep revealed significantly lower total erythrocyte count, hemoglobin concentration, PCV, platelet count and segmented cell numbers than apparently healthy sheep. Notably, significantly high total leukocytes, eosinophil, staff, lymphocyte and monocyte counts were observed in infected sheep.
Immunological Parameters (Indirect ELISA)
The fluid and scolex Ag cut-off values were 0.37 and 0.41, respectively. Positive optical density values for sheep's sera ranged from 0.53 to 2.30 using fluid Ag, and between 0.50 and 2.90 using scolex Ag. The true prevalence of C. cerebralis obtained from the PM examination of sheep (gold standard test) was 17/30 (56.7%). In comparison, 24 (80.0%) and 25 (83.3%) sheep were considered positive for C. cerebralis with fluid and scolex Ag, respectively (Table 3). The sensitivity of the indirect ELISA was 100% for both Ags. Furthermore, the specificities were 46.2% and 38.5% for fluid and scolex Ag, respectively. Accuracy was 76.7% and 73.3% for fluid and scolex Ag, respectively.
According to the findings in Table 4, there was a significant, moderate agreement between the findings of the indirect ELISA with fluid Ag and the PM examination (κ = 0.49, p = 0.002), and between the outcomes of the indirect ELISA with scolex Ag and the PM examination (κ = 0.42, p = 0.005). There was also a significant, almost perfect agreement between the findings of the indirect ELISA using the two Ags (κ = 0.89, p < 0.0001).
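The reported κ values correspond to Cohen's kappa for two binary tests; a minimal sketch of the computation follows (the study used SPSS, so this re-implementation is illustrative only).

```python
import numpy as np

def cohens_kappa(a, b):
    """a, b: boolean arrays, the outcomes of two tests on the same sheep."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                                        # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (po - pe) / (1 - pe)
```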
Discussion
Coenurosis is a devastating problem causing major economic losses in the sheep industry. In Egypt, C. cerebralis has been described as the main infection responsible for the nervous problems of small ruminants [11], as well as high morbidity and deaths [1,38,39], particularly in young sheep.
The overall prevalence of coenurosis in this study was 56.7% in the examined sheep. C. cerebralis (T. multiceps metacestode) infection was previously estimated at 18.3% in the Suez Canal locality [11], while Anwar et al. [10] recorded a 100% prevalence in infected sheep from the Cairo governorate, despite negative cysts in the healthy sheep. Our estimate thus lies within the 18-100% range recorded before in sheep in Egypt [10].
Globally, various occurrences were reported: 44.4% in Tanzania [40], 0.35% in Italy [41] and 18.7% in Iran [42]. The variable prevalence of infection over the various geographical areas might be due to the diversified geographical, ecological and sociological effects [2]. Seasonality is still the most controversial attribute [42,43].
The severity of the nervous manifestation of coenurosis might be attributed to the inflammatory response, the affected side of the brain, the space-occupying lesion and the affected centers [44]. In addition, the cyst site and the number of the ingested viable T. multiceps eggs are also factors relating to severity [45].
Nervous manifestations from the clinically diseased animals were aligned with those found by [39,[46][47][48], who recorded head tilting, corneal opacity usually in one eye, circling, deviated head posture, ataxia and paralysis in some cases.
Risk factors are recognized to affect the prevalence of coenurosis. In particular, the locality had a pivotal role: the highest prevalence was monitored in free-range areas, as well as on farms that used dogs for protection, farms where stray dogs and foxes had access, and pastures contaminated by the feces of dogs and other carnivores due to open borders. In contrast, the incidence was low in sheep reared in captivity and kept indoors.
C. cerebralis occurrence is tied to the transmission cycle of infected canids. Homeless dogs gather around slaughterhouses and sheep flocks, and wild canids are also likely to contract the infection via access to diseased heads and internal organs cast away during butchering [38,49]. In addition, stray dogs near sheep and goat farms may feed on dead carcasses on the farms [50]. Unattended, non-controlled sheep slaughtering outside slaughterhouses is prevalent in Egypt, configuring another route of disseminating the infection to dogs.
C. cerebralis is among the serious parasites causing brain pathologies in the small ruminants including sheep [4,12,51]. In our study, the infection with C. cerebralis in the examined sheep showed severe pathological lesions ranging from congestion of the blood vessels to neuronal degeneration, necrosis and edema. Our findings agreed with the previously recorded lesions in case of infection with such a parasite [13]. The infarction of the brain tissue could be attributed to the occlusion of the blood vessels because of the surrounding pressure of the growing cysts. To this end, Haridy et al. [12] recorded brain infarction in the case of C. cerebralis cysts infecting ewes and hyperplastic proliferation of the neuroepithelial lining of the brain ventricles' ependymal cells.
The level of PCV was markedly (p < 0.05) reduced in coenurosis-infected sheep in contrast with healthy ones. These findings concur with the elevated erythrocyte oxidation and damage previously recorded in sheep diseased with Fasciola hepatica and Trypanosoma evansi [52,53] and in hepatic cystic echinococcosis-infected sheep [54].
The early infectious stage of T. multiceps in sheep shows no apparent clinical symptoms, so demonstration of antibodies against the coenurus cyst may be useful for early diagnosis of the disease using ELISA [2]. Our investigation revealed that the indirect ELISA's specificity and accuracy using fluid Ag (46.2%) were higher than using scolex Ag (38.5%). False positive results may be attributed to recent infection, recently developed cysts and cross-reactivity with antibodies from other infections [55]. In addition, infected, infertile or calcified cysts and non-specific conjugates can also play a role in the false positive cases [56]. Furthermore, the sensitivity was 100% for both antigens.
In addition, Huang et al. [15] recorded high sensitivity and specificity of 95% and 96.3%, respectively, for 47 examined sera in contrast with the results of necropsy using indirect ELISA based on recombinant transcriptome of T. multiceps (rTmP2) and there was no cross-reaction when P2 protein was used for the diagnosis of Echinococcus granulosus positive sera. Moreover, the serodiagnosis of coenurosis in experimentally infected sheep using indirect ELISA was successful [17].
From our findings, the infected sheep revealed significantly elevated levels of IFN-γ and TNF-α in contrast with non-infected sheep. This is attributed to stimulation of the cell-mediated Th1 immune response against intracellular pathogens such as Toxoplasma gondii and Neospora caninum, which elevates the expression of splenic IFN-γ and TNF-α as a method of host protection [57]. IFN-γ contributes to restricting intracellular replication [58] and to activating macrophage-mediated processes that destroy intracellular pathogens, especially early in the disease stages [59]. Meanwhile, TNF-α activates phagocytic cells to control intracellular parasite multiplication and helps get rid of the parasites [58].
Conclusions
Coenurus cerebralis infection was detected with a 56.7% prevalence in the sheep under study in Egypt. In histopathological sections, cerebral tissue showed an areolar cyst wall with many protoscolices attached to the tissue, together with necrosis and inflammatory cells' aggregation. The infected sheep exhibited a significant alteration in blood profile when contrasted with apparently healthy sheep. Evaluation of ELISA specificity using two antigen preparations showed specificities of 46.2% and 38.5% for fluid and scolex Ag, respectively. Meanwhile, the accuracy was 76.7% and 73.3% for fluid and scolex Ag, respectively. Levels of TNF-α and IFN-γ were significantly elevated in infected sheep when contrasted with control ones. Although this investigation provides preliminary information about the prevalence and the pathological and serological characterization of C. cerebralis, further sequencing and phylogenetic analysis is needed to better understand the T. multiceps epidemiology in ruminants and canines in Egypt.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
Deformation and relaxation of viscous thin films under bouncing drops
Thin, viscous liquid films subjected to impact events can deform. Here we investigate free surface oil film deformations that arise due to the air pressure buildup under the impacting and rebouncing water drops. Using Digital Holographic Microscopy, we measure the 3D surface topography of the deformed film immediately after the drop rebound, with a resolution down to 20 nm. We first discuss how the film is initially deformed during impact, as a function of film thickness, film viscosity, and drop impact speed. Subsequently, we describe the slow relaxation process of the deformed film after the rebound. Scaling laws for the broadening of the width and the decay of the amplitude of the perturbations are obtained experimentally and found to be in excellent agreement with the results from a lubrication analysis. We finally arrive at a detailed spatio-temporal description of the oil film deformations that arise during the impact and rebouncing of water drops.
Introduction
Drops impacting a liquid layer frequently occur in nature as well as in many industrial and technological applications. Common examples are raindrops hitting the surface of a pond, spray coating on a wet substrate or inkjet printing on a primer layer. The collisions can generate complex scenarios such as floating, bouncing, splashing or jetting, which have been extensively studied (Worthington 1908; Rein 1993; Weiss & Yarin 1999; Thoroddsen et al. 2008; Ajaev & Kabov 2021). Impact velocity, impact angle, droplet size, liquid layer thickness, and the material properties of the liquids are the parameters which determine the impact dynamics. Among the many different impact scenarios, a particularly intriguing phenomenon is reported to occur at sufficiently low impact velocities: floating or bouncing drops which never directly contact the underlying liquid. The earliest reported observation of a drop floating over a liquid surface was made by Reynolds (1881), who noticed that under certain circumstances the drops spraying from the bow of a boat or droplets from a shower of raindrops float on liquid surfaces for some seconds before they disappear. Later, Rayleigh (1882) reported bouncing of drops when the collision of two distinct streams of liquid resulted, under certain circumstances, in drops bouncing off each other without merging. The reason for the presence of repulsion forces on impacting droplets even without direct contact with the (liquid) substrate is a lubrication pressure build-up in the draining thin air layer between droplet and substrate, which was first detailed in the theoretical work by Smith et al. (2003). The importance of such thin air layers sparks interest in numerous recent investigations of, e.g., skating drops (Mandre et al. 2009; Hicks & Purvis 2010, 2011; Kolinski et al. 2012, 2014), entrapment of bubbles (Thoroddsen & Takehara 2003; Thoroddsen et al. 2005; Tran et al. 2013; Hendrix et al. 2016), dimple formation under a falling drop (Duchemin & Josserand 2012), and suppression of splash (Xu et al. 2005). Floating/bouncing drops can be observed not only on liquid surfaces but also on dry surfaces. Such scenarios include drops bouncing on a dry surface (Kolinski et al. 2012, 2014), drops floating on a very hot surface (Leidenfrost effect) (Chandra & Avedisian 1991; Quéré 2013), drops bouncing on a pool of liquid (Rodriguez & Mesler 1985; Klyuzhin et al. 2010), drops floating/bouncing on a vibrating pool of liquid (Couder et al. 2005a,b), and drops floating on a very cold pool of liquid (inverse Leidenfrost effect) (Adda-Bedia et al. 2016; Gauthier et al. 2019a,b).

In the present study, we investigate a drop bouncing on a thin liquid film. A schematic diagram is shown in figure 1, highlighting three important stages of a drop bouncing scenario. (a) Initial stage (cf. figure 1a): a drop falls towards a flat film surface in a surrounding gas medium. (b) Deformation stage (cf. figure 1b): the drop's center of mass velocity changes direction due to the lubrication force provided by the narrow gas layer separating the two liquids, which exceeds the droplet's weight. Large spatial variations of the gas pressure cause large drop and significant thin film deformations in this stage. (c) Relaxation stage (cf. figure 1c): the drop is far from the film surface after the bounce. The gas pressure separating the two liquids is again reduced to ambient pressure and the thin film deformations gradually decay via an intricate relaxation process.
Important parameters for the study of drops bouncing on thin liquid layers are the initial depth or height h_f of the liquid layer above an underlying solid substrate and the drop radius R_w. Experiments by Pan & Law (2007) reveal drop bouncing to be favoured on deep pools, which have h_f > R_w, as compared to thick liquid layers, which have h_f ≈ R_w, and to thin films, which have h_f < R_w. It was argued that the solid substrate (wall) restricts the penetration of the falling drop in the thin films, thereby suppressing bouncing. For thin films, the bouncing phenomenon is only observed for drops having moderately low kinetic energy as compared to their surface energy, i.e., We = ρ_w R_w v_w² γ_w⁻¹ ≲ 10, where We denotes the Weber number of the drop, ρ_w is the density of the drop, v_w the drop impact speed and γ_w the surface tension of the drop. At sufficiently high impact velocities, the drop contacts the underlying liquid due to the Van der Waals attraction force between the two liquids. This effect becomes important when the liquid-liquid separation is smaller than around 100 nm (Charles & Mason 1960). The critical Weber number which marks the transition from drop bouncing to merging has been studied by Tang et al. (2018), using liquids of different viscosities. They found that the critical Weber number, below which the drop bounces, increases as the liquid viscosity (of drop and thin film) and the thin film thickness are increased. This finding indicates that larger viscosity liquids promote drop bouncing. Similar observations are made in the work of Langley & Thoroddsen (2019), where delayed coalescence is observed for drops and thin films with large viscosities. It was further found that water drops impacting a thin and extremely viscous film (∼1 mm and ∼10⁴ Pa·s) did not entrap many microbubbles when compared to a regular glass surface (roughness ∼50 nm). It was speculated that the film deformations were extremely small, which inhibits the localized contacts before full wetting is established. Gilet & Bush (2012) and Hao et al. (2015) found drop bouncing on a thin film to be similar to bouncing on a super-hydrophobic substrate. One such similarity was the apparent contact time of the drop, which agreed well with the Hertz contact time (Richard et al. 2002). However, the droplet-film collision resembled an almost elastic collision between the two liquids, with the coefficient of restitution close to unity. Pack et al. (2017), Lo et al. (2017) and Tang et al. (2019) used interferometry measurements to obtain the time-resolved evolution of nanometric profiles of the air gap between impacting drops and thin viscous layers. They found a bell-shaped annular air profile with maximum thickness at the center and minimum thickness at a radially outward location which varied with time. Small variations in air profiles were observed when the impacting drop was slightly oblique relative to the underlying film surface and when the film thickness was increased from the thin film to the deep pool limit. Significant asymmetries were also observed in the evolution of air profiles when comparing the drop spreading stage to the receding stage for a typical bounce process. Lo et al. (2017) successfully measured both the drop and the thin film deformations during the approach process. The thin film and the air film deformations were measured using the high-speed confocal profilometry technique and the dual-wavelength interferometry technique, respectively.
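For orientation, the bouncing criterion can be evaluated directly; the numbers below are illustrative, not taken from the cited experiments.

```python
def weber(rho_w, r_w, v_w, gamma_w):
    """We = rho_w * R_w * v_w**2 / gamma_w for the impacting drop."""
    return rho_w * r_w * v_w ** 2 / gamma_w

# A millimetric water drop impacting at 0.25 m/s (illustrative values):
we = weber(rho_w=1000.0, r_w=1.0e-3, v_w=0.25, gamma_w=0.072)
print(we)  # approximately 0.9, well inside the We <= 10 bouncing regime for thin films
```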
The drop deformation was inferred from the thin film and the air film deformations at around the same time instance by performing two separate experiments under identical impact conditions. The limitation of their measurement is that the thin film deformations had a 1.8 µm vertical resolution and could only be obtained for a few time instances before the rupture of the air film.
Previous experimental and numerical studies of drop bouncing on thin films mainly focused either on the macroscopic drop bouncing behaviour or on the evolution of the nanometric gas thickness between the two liquids, without distinguishing between drop and thin film deformations. All experimental studies except Lo et al. (2017) ignore the thin film deformations owing to the small film thickness and large film viscosity used in the experiments (Pack et al. 2017; Gilet & Bush 2012). They rather assume that the thin liquid film mimics a perfectly smooth solid surface. Numerical studies of thin film deformations prove challenging because of the large difference in the length scales involved (millimetric to nanometric deformations) when computing the lubrication gas flow and the thin film flow simultaneously with the drop deformations (Josserand & Zaleski 2003). Although viscous thin film deformations are typically small, they cannot be neglected, since they play a crucial role in modulating the gas layer thickness, thereby affecting the drop bouncing process and possibly the coalescence of the drop with the thin film at higher impact velocities.
Finally, the thin film deformations can also give insight into the size and the velocity of the impacting drop, much like impact craters are used to determine the size and velocity of the impacting body. Understanding the size and dynamics of the thin film deformations will allow for the design of liquid-infused surfaces (Quéré 2008) that can reduce lubricant depletion through shearing, cloaking, and the wetting ridge (Smith et al. 2013; Schellenberger et al. 2015; Kreder et al. 2018).
The objective of this paper is to measure the oil film deformations that arise due to impacting and rebounding water drops in an ambient air environment. The impacting water drops have $We \sim 1$, where inertial and capillary forces roughly balance, so that bouncing actually occurs. Thanks to Digital Holographic Microscopy, we achieve an unprecedented precision down to 20 nm in vertical resolution at a 0.5 kHz recording speed. To our knowledge, the sub-micrometer thin film deformations reported in this experimental work are the first deformation measurements that explicitly document the effect of the air pressure buildup under impacting and rebounding drops.
The structure of the paper is as follows: First, in section 2, the experimental setup and the control parameters are described, and typical orders of magnitude of the relevant non-dimensional numbers are given. The subsequently presented results are twofold: In section 3, we discuss the film surface deformations immediately after the bouncing event. We quantify how the surface deformations depend on the film thickness, film viscosity, and drop impact speed. This part of the paper describes the deformations of the thin film after the end of the deformation stage (cf. figure 1b). The second part of our study is presented in section 4 and focuses on the relaxation stage described above (cf. figure 1c). We first illustrate a typical relaxation process of film deformations occurring after the drop bounce. Starting from experimentally obtained deformations as initial conditions, we then compare the evolution of the experimental profiles during the relaxation process to a numerical calculation using lubrication theory. Next, we use a general theoretical result of Benzaquen et al. (2015) for the relaxation of thin film deformations. The scaling laws thus obtained for the width broadening λ(t) and the amplitude decay δ(t) during the relaxation process are compared to our experiments over a large range of parameters. Finally, the paper closes with a discussion in section 5.
Experimental details
A schematic drawing of the experimental setup is shown in figure 2. Using a syringe pump, Milli-Q water is slowly dispensed out of a needle tip; a droplet detaches as soon as its weight overcomes the surface tension force. The detached water droplet of radius $R_w = 1.08$ mm falls on a thin and viscous silicone oil film. The 3D surface topography of the deformed oil film is measured using a digital holography technique, as described below, complemented by simultaneous side-view visualisations of the drop dynamics. The silicone oil films are prepared by spin coating on cleaned glass slides. The thin film thickness is measured using a reflectometry technique (cf. Reizman (1965)). An HR2000+ spectrometer and an HL-2000-FHSA halogen light source by Ocean Optics are used for the reflectometry measurements. The uncertainty in the film thickness measurement was less than 3.5%. Table 1 gives the density, dynamic viscosity, and surface tension of the different liquids used in the present study. The density and dynamic viscosity of air at standard temperature and pressure are $\rho_a \approx 1.225$ kg/m³ and $\eta_a \approx 1.85 \times 10^{-5}$ Pa·s, respectively. The film deformations are measured by varying three important control parameters, namely the oil film thickness $h_f$, which is either about 5 µm, 10 µm, or 15 µm; the oil film viscosity $\eta_f$, which is either 52 mPa·s, 98 mPa·s, or 186 mPa·s; and the impacting water droplet speed $v_w$, for which we choose 0.16 m/s or 0.37 m/s. We use the parameters $\eta^* = \eta_f/\eta_w$ and $h^* = h_f/R_w$ to denote the dimensionless viscosity and the dimensionless thickness of the film, respectively. Subscripts w and f represent water and oil film, respectively; γ is the liquid-air surface tension.
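As a quick plausibility check of the parameter regime quoted above, the relevant dimensionless groups can be evaluated directly. The short sketch below does this in Python; the water properties (density, viscosity, surface tension) are standard reference values assumed here, not taken from Table 1.

```python
# Back-of-the-envelope check of the dimensionless groups quoted in the text.
rho_w = 998.0      # water density [kg/m^3] (assumed reference value)
gamma_w = 0.072    # water-air surface tension [N/m] (assumed reference value)
eta_w = 1.0e-3     # water dynamic viscosity [Pa s] (assumed reference value)
R_w = 1.08e-3      # drop radius [m]

for v_w in (0.16, 0.37):                       # impact speeds [m/s]
    We = rho_w * R_w * v_w**2 / gamma_w        # Weber number
    print(f"v_w = {v_w} m/s -> We = {We:.2f}")  # ~0.4 and ~2.0, i.e. We ~ 1

for eta_f in (0.052, 0.098, 0.186):            # film viscosities [Pa s]
    print(f"eta* = eta_f/eta_w = {eta_f/eta_w:.0f}")   # ~10^2

for h_f in (5e-6, 10e-6, 15e-6):               # film thicknesses [m]
    print(f"h* = h_f/R_w = {h_f/R_w:.4f}")             # ~10^-2
```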
For the measurement of film deformations, a holographic technique is used (Gabor 1949; Schnars et al. 2016). Holography records a light field (generally transmitted or reflected from objects) so that it can be reconstructed later. Digital holography refers to the acquisition and processing of holograms, typically using a CCD camera. The Digital Holographic Microscope (DHM®-R1000 by Lyncée Tec) is a reflection-configured holographic device which provides real-time measurements with (at least) 20 nm vertical resolution within the 200 µm measuring window (cf. Appendix A). The working principle of the DHM is briefly explained here using the schematic in figure 2. The laser light is split into two beams: a reference and an object beam. The object beam is directed from underneath the glass substrate towards the thin film. The part of the object beam that reflects off the thin film surface, called the reflected object beam (which is slightly oblique), interferes with the undisturbed reference beam to produce a hologram, which is recorded by a CCD camera. The thin film deformations arising from the impacting and rebounding drops are recorded as a sequence of hologram images that are later reconstructed using numerical schemes to obtain the 3D topography of the film surface. A 2.5x objective is used with the DHM setup, which provides a roughly 4.90 µm lateral resolution and allows for measurements of a maximum deformation slope up to 2°. A pulse generator connects and approximately synchronizes the recordings of the side-view camera and the DHM camera at a temporal resolution of around 0.5 kHz.
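The final step of such a reconstruction, converting an unwrapped phase map into surface height, follows from the double pass of the object beam in reflection: a height change h shifts the optical path by 2h, so h = φλ/(4π). A minimal sketch of that step, assuming the hologram has already been demodulated and unwrapped; the laser wavelength below is illustrative, since the source wavelength of the instrument is not specified here.

```python
import numpy as np

def phase_to_height(phase_map, wavelength):
    """Convert an unwrapped phase map [rad] from a reflection-configured
    hologram into surface height [m]. Light crosses the air gap twice,
    hence the factor 4*pi instead of 2*pi."""
    return phase_map * wavelength / (4.0 * np.pi)

# Illustrative only: a hypothetical 666 nm laser and a synthetic phase dip.
lam = 666e-9
x = np.linspace(-100e-6, 100e-6, 512)
phase = -0.4 * np.exp(-(x / 20e-6) ** 2)   # synthetic unwrapped phase [rad]
height = phase_to_height(phase, lam)
print(f"max |deformation| = {np.abs(height).max() * 1e9:.1f} nm")  # ~21 nm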
Orders of magnitude
For the relaxation stage, the Reynolds number in the viscous thin film is low ($Re_f \sim 10^{-2} \ll 1$; cf. section 4). Here, $\eta^* \sim 10^2$ and $h^* \sim 10^{-2}$ suggest small-amplitude thin film perturbations due to the small film thickness in comparison with the lateral length scales ($h_f \ll R_w$). Drop size and film thickness are much below the capillary length, so that gravity effects can be neglected.
Typical bouncing experiment
Before turning to a detailed quantitative analysis, we first describe the oil film deformations observed in a typical experiment. The synchronized recordings of the drop bouncing using a high-speed camera and of the oil surface deformation measured using the DHM are shown in the top and bottom rows of figure 3, respectively. The impact and bouncing time instants in figures 3a-3d are given relative to $t^+$, where $t^+$ is the time instant corresponding to the maximum drop spreading during the first impact (cf. figure 3b). When the falling water drop is still far from the oil-air interface, the droplet takes on a spherical shape while there is no deformation of the oil surface (cf. figures 3a & 3e). As the bottom of the falling water drop approaches the oil-air interface, the air pressure builds up in the narrow air gap, deforming both the water-air and the oil-air interface. The lubrication air pressure in the narrow air gap can become sufficiently large to decelerate the falling drop, bringing it to rest (in apparent contact with the oil-air interface) and causing a reversal of the droplet's momentum, leading to contact-less drop bouncing. The maximum drop spreading and the corresponding oil-air deformation obtained during the apparent contact are shown in figures 3b & 3f. It should be noted that during this phase the holography measurement cannot be trusted quantitatively, because when the drop is too close to the oil-air interface (small air gaps, $h_a \lesssim 100$ µm), additional light reflections from the water-air interface interfere with the measurements of the oil-air interface (cf. Appendix A). In particular, the concentric ring structure seen in figure 3f is such an artifact. However, as soon as the drop has bounced back and is sufficiently far away from the oil-air interface (large air gaps, $h_a \gtrsim 100$ µm), light reflections from the water-air interface are no longer present, and the measurements of the oil-air interface are quantitatively accurate. A snapshot of the drop bounced far away from the oil-air interface and the corresponding oil-air deformation are shown in figures 3c & 3g. Subsequently, the oil film gradually relaxes under the influence of surface tension, until it is perturbed again by a second impact of the drop (cf. figures 3d & 3h).
We remark that drop bouncing is observed for at least 7 cycles. The drop gradually jumps away from the first impact location due to small horizontal impact speeds, and by the end of the 8th cycle the drop is completely out of the side-view imaging window. We do not observe the drop wetting the surface within the experimental times. Figure 4 shows the oil film deformation at t = 0: the surface topography in figure 4a and the azimuthally averaged profile in figure 4b, where in this case the average is performed over the full annulus $0 \leq \theta < 2\pi$. In the remainder, we choose t = 0 as the earliest time when clean DHM measurements are obtained after the drop bounce-off process (cf. Appendix A). We remark that the difference between the maximum drop spreading time $t^+$ and the reference time t = 0 is always around 6 ms in our experiments. This value is in very good agreement with half the apparent contact time of water on viscous thin films under similar impact velocities (Hao et al. 2015). Therefore the choice of t = 0, which is determined through the experimental setup, can be thought of as a transitional time instant between the deformation and the relaxation stages (cf. figures 1b and 1c). Figure 4a shows that the deformations are highly localised, within a narrow annulus $r_{an} \approx 0.6-0.8$ mm. Given that flow inside the viscous film requires pressure gradients, such localised deformations suggest that the spatial variations of the air pressure are highly localised during impact (cf. figure 1b). The appearance of such an annulus is reminiscent of dimple formation underneath an impacting drop: an annular local minimum of the air gap is seen for drops impacting a dry substrate (Mandre et al. 2009; Hicks & Purvis 2010; Kolinski et al. 2012; Bouwhuis et al. 2012), drops impacting a thin liquid film (Hicks & Purvis 2011), and drops impacting a liquid pool (Hendrix et al. 2016). We therefore hypothesize that the radial location of the deformation correlates with the minimum of the air gap during drop impact. The correlation cannot be proven directly in our experiments, since we do not measure the evolution of the air gap thickness. The minimum air gap is at least 2 orders of magnitude smaller than the resolution of the side-view camera, preventing a direct measurement (cf. figure 3b). However, Kolinski et al. (2012, 2014) and de Ruiter et al. (2012, 2015) have shown that the minimum air gap (∼ 100 nm) lies at some radial location away from the impact point. Lo et al. (2017) report that the minimum in the air gap occurs at a slightly larger radius than the minimum in the oil film thickness; however, the time resolution of their measurements was insufficient to establish this as a general result. We expect a similar motion of the minimum air gap in our experiments, which imprints the deformation at a radial position away from the impact center.
In the present experiments, the oil surface deformations at t = 0 are not perfectly axisymmetric (cf. figure 4a). This small asymmetry is attributed to a small horizontal impact speed, which is difficult to eliminate experimentally. This small horizontal speed affects the air layer thickness during the bounce process (Lo et al. 2017), leaving an asymmetric imprint on the oil layer. We remark that figure 4b defines two quantities that will be used below to characterise the wave: the amplitude δ and the wavelength λ, respectively defined as the vertical and horizontal distance from the minimum to the maximum of the film. In the remainder we will average the profiles only over one quadrant centered around θ = π/4. We choose this window in particular to be consistent with the averaging procedure for lower and higher impact speeds: at higher impact speeds, the deformations are spread out farther from the impact center, which restricts the usable averaging window to one quadrant. Although the averaging would be more appropriate around the principal direction of asymmetry (the line joining the first and the second impact centers), which is along θ = 3π/8, we find no significant variations in the deformation parameters. For the deformation in figure 4a, the differences in λ and δ between the averaging windows centered around θ = π/4 and 3π/8 are found to be around 4.2 µm and 17 nm, respectively, which are of the order of the experimental resolution.
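Reading δ and λ off an azimuthally averaged profile amounts to locating its global minimum and maximum. The sketch below illustrates this on a synthetic annular profile; the profile shape and all numbers are invented for illustration, not measured data.

```python
import numpy as np

def deformation_parameters(r, dh):
    """Amplitude (vertical min-to-max distance) and wavelength (horizontal
    min-to-max distance) of an azimuthally averaged profile dh(r)."""
    i_min, i_max = np.argmin(dh), np.argmax(dh)
    delta = dh[i_max] - dh[i_min]
    lam = abs(r[i_max] - r[i_min])
    return delta, lam

# Synthetic annular deformation: a dip next to a rim near r ~ 0.7 mm.
r = np.linspace(0.0, 2e-3, 2000)                      # radius [m]
dh = 60e-9 * np.exp(-((r - 0.60e-3) / 60e-6) ** 2) \
   - 80e-9 * np.exp(-((r - 0.75e-3) / 60e-6) ** 2)    # deformation [m]
delta, lam = deformation_parameters(r, dh)
print(f"delta = {delta*1e9:.0f} nm, lambda = {lam*1e6:.0f} um")
```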
Influence of film properties and drop impact velocity
We now study the influence of the film thickness and the film viscosity on the surface deformations left behind after impact. Figures 5a -5d show the surface topographies at t = 0 in one quadrant. Figures 5e and 5f show the corresponding azimuthally averaged deformation profiles at t = 0, averaged over the quadrant. Clearly, a decrease in deformation amplitude δ is seen with a decrease in initial film thickness (cf. rows in figure 5a -5d and figure 5e). On the other hand, a decrease in deformation amplitude is seen with an increase in film viscosity (cf. columns in figure 5a -5d and figure 5f).
To further quantify this, we plot the initial amplitude $\delta_0 = \delta(t = 0)$ as a function of film thickness, film viscosity, and initial width $\lambda_0$ in figures 6a, 6b, and 6c. From these plots we empirically deduce that the scaling of the initial amplitude is consistent with $\delta_0 \sim h_f^2 \eta_f^{-1}$ and $\delta_0 \sim \lambda_0^{7/2}$, though the data only cover less than one decade in $h_f$ and $\eta_f$. The $\delta_0 \sim h_f^2 \eta_f^{-1}$ scaling is not immediately obvious, since the "mobility" of a thin layer flow is known to scale as $h_f^3 \eta_f^{-1}$ (Oron et al. 1997).
Next, we study the influence of the impact velocity on the surface deformations left behind after impact. Figures 7a & 7b show the surface topographies at t = 0 for two different impact velocities. Figure 7c shows the corresponding azimuthally averaged deformation profiles. For the higher impact velocity (cf. figure 7b), two distinct peaks in the deformation are observed (we emphasise that the profile corresponds to a single impact). This is in contrast with the single peak that appears at the lower velocity (cf. figure 7a). Moreover, the deformations are more radially spread out for the higher impact speed. The transition from one peak to two peaks and the increased radial spread are reminiscent of the transition from single-dimple to double-dimple formation in a falling drop, as previously observed on dry surfaces (de Ruiter et al. 2012, 2015). This again suggests that the deformation directly reflects the structure of the dimple below the impacting drop.
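Scaling exponents such as those in figure 6 are typically extracted by a linear least-squares fit in log-log space. A minimal sketch, with amplitude values fabricated for illustration (chosen to follow $\delta_0 \sim h_f^2$ at fixed viscosity); only the fitting procedure, not the data, is meaningful here.

```python
import numpy as np

# Hypothetical (h_f, delta_0) pairs, invented for illustration only,
# roughly following delta_0 ~ h_f^2 at fixed film viscosity.
h_f     = np.array([5e-6, 10e-6, 15e-6])      # film thickness [m]
delta_0 = np.array([22e-9, 85e-9, 200e-9])    # initial amplitude [m]

# Fit delta_0 = C * h_f^p  <=>  log(delta_0) = p*log(h_f) + log(C)
p, logC = np.polyfit(np.log(h_f), np.log(delta_0), 1)
print(f"fitted exponent p = {p:.2f} (expected ~2)")
```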
Spacetime plot of typical relaxation process
We now examine the relaxation of the viscous thin film deformations after the impact process. When the drop is far away from the film surface after the bounce, the air pressure is again homogeneous and no longer provides any forcing on the film. Consequently, the film deformations gradually decay via an intricate relaxation process under the influence of surface tension. Figure 8 provides a spacetime plot of a typical relaxation process over two decades in time (t ∼ 0.01-1 s). The figure corresponds to an azimuthal average of the surface deformation within a quarter annulus ($0 \leq \theta < \pi/2$). The lines indicate the loci of the deformation maxima (orange), minima (blue), and zero crossings (green). These lines highlight that the relaxation involves a decay in amplitude as well as a broadening of the lateral width of the deformation profile.
Numerical simulation
We perform numerical simulations in order to study the relaxation process of the viscous thin films. The relaxation process is modelled using lubrication theory (Reynolds 1886; Oron et al. 1997). Lubrication theory relies on the following conditions, which are indeed satisfied in the experiment: (i) viscous forces in the film dominate over inertial forces ($Re_f \sim 10^{-2} \ll 1$) and (ii) deformation amplitudes in the vertical direction are much smaller than the characteristic lateral length scale ($\delta/\lambda \sim 10^{-2} \ll 1$). As boundary conditions we consider the free surface to be in contact with a homogeneous gas pressure, as is the case after rebound, while there is a no-slip boundary condition at the substrate. The corresponding lubrication (thin film) equation reads (Oron et al. 1997)

$$\partial_t h + \frac{\gamma}{3\eta_f}\,\nabla\cdot\left(h^3\,\nabla\nabla^2 h\right) = 0, \qquad (4.1)$$

where h(x, y, t) is the vertical distance between the solid substrate and the free surface and ∇ is the two-dimensional gradient operator in the x-y plane. We non-dimensionalise (4.1) by

$$H = \frac{h}{h_f}, \qquad (X, Y) = \frac{(x, y)}{h_f}, \qquad T = \frac{\gamma\, t}{3\,\eta_f\, h_f},$$

where $h_f$ is the initial film thickness. In the following, we will study the relaxation process in an axisymmetric geometry, i.e., H = H(R, T), such that (4.1) becomes

$$\partial_T H + \frac{1}{R}\,\partial_R\!\left[R\,H^3\,\partial_R\!\left(\frac{1}{R}\,\partial_R\left(R\,\partial_R H\right)\right)\right] = 0. \qquad (4.2)$$

The asymptotics of (4.2) has also been studied by Salez et al. (2012b). Importantly, (4.2) is devoid of any free parameters. To compare to experiment, we perform numerical simulations of the film relaxation using a finite element method and a second-order implicit Runge-Kutta time-stepping scheme (implemented using the framework dune-pdelab by Bastian et al. (2008a,b, 2010)). The deformation profile at t = 0 is taken from the experiment and used as an initial condition, and subsequently the film profile is evolved via numerical integration of (4.2). A direct comparison between the experiments and the lubrication theory is given in figure 9 without any adjustable parameters. Figure 9a shows the amplitude δ (defined as the difference between maximum and minimum of ∆h) as a function of time, while figure 9b shows the deformation profiles ∆h = h(r, t) − h_f at different times. The comparison exhibits very good agreement between the experiment and the numerical calculations, demonstrating the success of the lubrication approximation in describing the experimentally observed relaxation.
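The paper's simulations of (4.2) use a finite element method within dune-pdelab; as a far simpler illustration of the same dynamics, the dimensionless 1D analogue (introduced as (4.3) in the next section) can be integrated by a finite-difference method of lines with a stiff ODE solver. The sketch below is exactly that: a toy integration under stated assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of the dimensionless 1D lubrication equation
#   dH/dT = -d/dX ( H^3 d^3H/dX^3 ),
# using repeated central differences (method of lines) and a stiff solver.
X = np.linspace(-60.0, 60.0, 400)
dX = X[1] - X[0]

def rhs(T, H):
    d1 = np.gradient(H, dX)        # H_X
    d2 = np.gradient(d1, dX)       # H_XX
    d3 = np.gradient(d2, dX)       # H_XXX
    dHdT = -np.gradient(H**3 * d3, dX)
    dHdT[:2] = dHdT[-2:] = 0.0     # pin the far field at H = 1
    return dHdT

# Zero-volume initial perturbation (dip next to a rim) on a flat film H = 1,
# mimicking the experimentally observed annular deformation.
H0 = 1.0 + 0.05 * (np.exp(-(X - 3.0) ** 2) - np.exp(-(X + 3.0) ** 2))

sol = solve_ivp(rhs, (0.0, 200.0), H0, method="BDF",
                t_eval=[0.0, 20.0, 200.0], rtol=1e-5, atol=1e-8)
for T, H in zip(sol.t, sol.y.T):
    print(f"T = {T:6.1f}: amplitude = {H.max() - H.min():.4f}")
# The amplitude should decay roughly as T^(-1/2) at late times (section 4).
```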
Theoretical analysis
Now we turn to a detailed theoretical analysis of the relaxation process, from which we will establish the general scaling laws of the relaxation. To do this, we reduce the lubrication equation to a one-dimensional geometry h = h(x, t). Here, we use X = x/h f analogous to R = r/h f . The rationale behind choosing a 1D lubrication equation is that the initial deformations are far from the impact center (cf. figure 5). This is further quantified in figure 8, where the "width" of the profile is initially an order of magnitude smaller than the location of the zero crossing. Therefore the axisymmetric relaxation and a one-dimensional relaxation will yield very similar results -at least until the deformations approach the impact center and the azimuthal contributions become important.
The one-dimensional lubrication equation reads

$$\partial_T H + \partial_X\left(H^3\,\partial_X^3 H\right) = 0. \qquad (4.3)$$

To further simplify the analysis, we use the fact, seen in the experiments, that the deformation amplitudes are small in comparison to the initial film thicknesses ($\delta/h_f \sim 0.05 \ll 1$). This allows us to linearise (4.3) by employing the variable transformation $H(X, T) = 1 + Z(X, T)$ with $|Z| \ll 1$. The linearised 1D lubrication equation then reads

$$\partial_T Z + \partial_X^4 Z = 0. \qquad (4.4)$$

The relaxation of localised thin film perturbations described by (4.4) admits a general solution of the form (Benzaquen et al. 2013, 2015)

$$Z(X, T) = \sum_{n=0}^{\infty} M_n\, T^{-(n+1)/4}\, \phi_n(U), \qquad (4.5)$$

which involves the similarity variable

$$U = X\, T^{-1/4} \qquad (4.6)$$

and similarity functions $\phi_n(U)$ that can be determined analytically (Benzaquen et al. 2013, 2015). The amplitudes $M_n$ appearing in (4.5) can be determined from the initial condition $Z_0(X) = Z(X, 0)$ by computing the moments

$$M_n = \int_{-\infty}^{\infty} X^n\, Z_0(X)\, \mathrm{d}X. \qquad (4.7)$$

It is clear from (4.5) and the similarity variable (4.6) that the width λ of the profile follows a universal scaling of the form $\lambda \sim T^{1/4}$. The decay of the amplitude δ is more subtle, since each term in (4.5) decays differently, as $\delta \sim T^{-(n+1)/4}$ for the $n$th moment. At late times, the solution $Z(X, T)$ thus converges towards the lowest-order term with a non-zero moment. Generically, for $M_0 \neq 0$, the amplitude will therefore decay as $T^{-1/4}$. In our case, however, the perturbation originates from an initially flat film, and by incompressibility of the layer the perturbation is expected to have a vanishing volume, i.e., $M_0 = 0$. In the present context, the lowest-order non-vanishing moment is therefore expected to be $M_1 \neq 0$. In this scenario, the scaling law will be $\delta \sim T^{-1/2}$, while the solution $Z(X, T)$ should converge to $\phi_1(U)$ for a zero-volume perturbation. A schematic depiction of the approach to the $\phi_1(U)$ attractor is shown in figure 10.
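Equation (4.4) diagonalises in Fourier space: each mode decays as $\hat Z(k, T) = \hat Z_0(k)\, e^{-k^4 T}$. This gives a quick numerical check of the predicted $\lambda \sim T^{1/4}$ broadening and, for a zero-volume perturbation, the $\delta \sim T^{-1/2}$ decay. A minimal sketch with a synthetic antisymmetric initial profile; none of the numbers are experimental.

```python
import numpy as np

# Exact spectral solution of the linearised equation Z_T + Z_XXXX = 0.
N, L = 4096, 800.0
X = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Zero-volume (antisymmetric) initial perturbation, so M_0 = 0 and the
# late-time decay is governed by the first moment M_1.
Z0 = X * np.exp(-X**2)
Z0_hat = np.fft.fft(Z0)

for T in (10.0, 100.0, 1000.0):
    Z = np.real(np.fft.ifft(Z0_hat * np.exp(-k**4 * T)))
    delta = Z.max() - Z.min()                    # amplitude
    lam = abs(X[Z.argmax()] - X[Z.argmin()])     # min-to-max width
    print(f"T = {T:7.1f}: delta = {delta:.3e}, lambda = {lam:.2f}")
# delta drops by ~sqrt(10) and lambda grows by ~10^(1/4) per decade in T.
```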
To verify this scenario, we turn to an exemplary initial deformation profile of a 98 mPa·s, 10 µm film and probe the subsequent relaxation. For this specific example, the two lowest-order moments are determined as $M_0 \approx 6.4 \times 10^{-2}$ and $M_1 \approx 3.2$. The very small value of $M_0$ is of the order of the experimental resolution (i.e., it corresponds to a typical $\Delta h \sim \pm 10$ nm), so that indeed the perturbation has a negligible volume. Figure 11 reveals that the relaxation is indeed governed by $M_1$ and approaches the $\phi_1(U)$ self-similar attractor function (cf. Appendix B). The figure reports the scaled deformation profiles centred around the zero-crossing location $X_0 \approx 65$. The first row shows the deformation profile scaled with the initial film thickness. The second row shows the rescaled deformation profiles (vertical scale $\sim T^{-1/2}$ and horizontal scale $\sim T^{1/4}$). At late times, the rescaled deformation profiles clearly approach the attractor function $\phi_1(U)$, which is superimposed on the data. This excellent match confirms that, within the experimental times, the axisymmetric effects have not yet started to contribute (Benzaquen et al. 2014, 2015), and the theoretical analysis of the film relaxation process in a 1D geometry is sufficient to understand the relaxation process in the experiments.
Width broadening and amplitude decay
Finally, we compare the theoretical asymptotic scaling laws for the width broadening and the amplitude decay with a large number of experiments, all obtained for a drop impact velocity of $v_w \approx 0.16$ m/s. We predict the scaling law for the width λ quantitatively, based on the approach to the attractor function $\phi_1$. We formally define the half-width of the similarity function as $U_1^* = \arg\max|\phi_1(U)| \approx 1.924$, which is half the absolute distance between the global maximum and the global minimum (cf. figure 11). From (4.6), we then find

$$\frac{\lambda}{2 h_f} = U_1^*\, T^{1/4}, \qquad (4.8)$$

expressing the dimensionless (half) width of the decaying profiles. A practical problem arises when comparing to experiment: at t = 0, the width takes on a finite value $\lambda_0$, so that it is initially incompatible with the asymptotic form (4.8). To resolve this, we follow Benzaquen et al. (2015) and define for each experiment a convergence time $T_\lambda = [\lambda_0/(2 h_f U_1^*)]^4$. The physical meaning of $T_\lambda$ is that it provides the time at which the experiment should approach the asymptotic power law (4.8). Using this definition of $T_\lambda$, the scaling (4.8) then gives

$$\frac{\lambda}{\lambda_0} = \left(\frac{T}{T_\lambda}\right)^{1/4}, \qquad (4.9)$$

which can be compared to experiments without adjustable parameters.
In figure 12, we show the temporal dependence of $\lambda/\lambda_0$ vs $T/T_\lambda$ for different initial film thicknesses and viscosities. The black dashed line in the figure represents Eqn (4.9). Clearly, all experimental data points collapse onto a single master curve which is independent of the film properties used. Moreover, the master curve agrees very well with Eqn (4.9). We remark that such a scaling $\lambda \sim t^{1/4}$ is also seen in previous studies of viscous thin film configurations (Salez et al. 2012a; McGraw et al. 2012; Benzaquen et al. 2015; Hack et al. 2018). Note that at late times, some experiments show the width broadening slowly deviating from the 1/4 scaling, as seen in figure 12. We attribute these deviations to the transition from 1D to radially symmetric geometry and to wave interactions arising from the second drop rebound, both of which can contribute at late times.
A similar analysis is performed for the amplitude decay. We once again make use of the self-similar attractor $\phi_1(U)$ for the relaxation process. Similar to the procedure outlined for the study of the width broadening, we employ Eqn (4.5), which implies

$$\frac{\delta}{2 h_f} = M_1\, |\phi_1(U^*)|\, T^{-1/2}, \qquad (4.10)$$

where $|\phi_1(U^*)| \approx 0.1164$ is the maximum of the similarity function (cf. figure 11). To avoid the experimental issue that $\delta_0$, the amplitude at t = 0, is finite, we once again determine for each experiment a convergence time $T_\delta$, using $\delta_0/(2h_f) = M_1 |\phi_1(U^*)|/T_\delta^{1/2}$ (cf. Appendix C). Note that both $\delta_0$ and $M_1$ will be different for each specific experiment, but both parameters can be determined independently. This finally gives that (4.10) can be written as

$$\frac{\delta}{\delta_0} = \left(\frac{T}{T_\delta}\right)^{-1/2}. \qquad (4.11)$$

The results for $\delta/\delta_0$ vs $T/T_\delta$ for different film thicknesses and different viscosities are shown in figure 13. The black dashed line in the figure represents Eqn (4.11). While the experiments exhibit a very good agreement with the numerical lubrication solution (cf. figure 9), it is difficult to infer the scaling behaviour from individual realisations. However, it is clear that the rescaled amplitude plot of figure 13 is consistent with the predicted asymptotic decay. Indeed, the data approach the 1/2 scaling law, as indicated by the dashed line.
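In practice, producing the collapses of figures 12 and 13 is bookkeeping: compute $T_\lambda$ and $T_\delta$ for each experiment from $\lambda_0$, $\delta_0$, $h_f$, and $M_1$, then rescale the time axis. A minimal sketch with hypothetical input values; only the two constants are taken from the text.

```python
U1_STAR = 1.924     # half-width of phi_1 (quoted in the text)
PHI1_MAX = 0.1164   # maximum of |phi_1| (quoted in the text)

def convergence_times(lam0, delta0, h_f, M1):
    """Dimensionless convergence times T_lambda and T_delta, obtained by
    matching the asymptotic laws (4.8) and (4.10) to the initial width
    lam0 and amplitude delta0 (cf. Benzaquen et al. 2015)."""
    T_lam = (lam0 / (2.0 * h_f * U1_STAR)) ** 4
    T_del = (2.0 * h_f * M1 * PHI1_MAX / delta0) ** 2
    return T_lam, T_del

# Hypothetical experiment (values invented for illustration).
h_f, lam0, delta0, M1 = 10e-6, 150e-6, 100e-9, 3.2
T_lam, T_del = convergence_times(lam0, delta0, h_f, M1)
print(f"T_lambda = {T_lam:.0f}, T_delta = {T_del:.0f}")
```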
Conclusions and Outlook
In this work, we have performed experiments of a water drop impacting a viscous thin oil film in an ambient air environment. The considered drop impact velocities are restricted to moderately low values ($We \sim 1$) at which drops bounce on thin films due to the air cushioning effect. Digital Holographic Microscopy was employed to measure, with unprecedented precision, the deformations of the free oil film surface that arise due to the bouncing of drops, allowing for a one-to-one comparison with lubrication theory.
We first investigated the deformations of the thin film immediately after rebound (t = 0) while varying the oil film thickness $h_f$, the oil viscosity $\eta_f$, and the impact velocity $v_w$ of the drop. We found that the deformation amplitude after the bounce, $\delta_0$, varies quadratically with the oil film thickness ($\delta_0 \sim h_f^2$) and inversely with the oil viscosity ($\delta_0 \sim \eta_f^{-1}$). When increasing the impact speed from $v_w \approx 0.16$ m/s to $v_w \approx 0.37$ m/s, the deformations of the thin film change even qualitatively: while at the lower speed a single annular wavy deformation is found, at the higher speed the radial profiles exhibit two such depressions, peaking at two different radii.
In the second part of the manuscript, we have detailed the relaxation process of the viscous thin films once the drop is far away from the free oil surface after the bounce. Numerical calculations based on lubrication theory, using the experimental deformations at t = 0 as initial condition, show an excellent match with the experimentally measured evolution of the deformation profiles. Furthermore, we have successfully employed the theoretical analysis developed by Benzaquen et al. (2013, 2014, 2015) to obtain analytical results describing the relaxation process: taking advantage of the fact that the deformations approach a universal self-similar attractor, the decay of the amplitude and the growth of the width of the deformations at late times of the relaxation process can be described without any free parameters. This allows us to collapse the corresponding experimental curves for all the different thin film properties investigated.
Measuring the deformations of the falling drop and the viscous thin film simultaneously has proven challenging in the previous literature (Lo et al. 2017) and in the present experiments. However, it is worthwhile to investigate the dynamics of the coupled system, as it would provide valuable insight into the deformation mechanism. To resolve both the drop-air and the oil-air interface, we therefore plan to combine the color interferometry technique, which can be used to extract the narrow air profiles, with the DHM technique described in the present manuscript. In order to understand the film deformations theoretically and to quantify possible influences on the macroscopic drop dynamics, the macroscopic impact dynamics have to be coupled to a two-layer lubrication model for the air layer and the lubrication layer. Very recently, such a lubrication approach has been pursued by Duchemin & Josserand (2020) for the coalescence of a drop with an underlying thin film; when combining color interferometry with DHM, it can be tested against controlled experiments.
for aiding with the 3D image graphics and Mariia Zhakharova for fabricating the PDMS gels.
Declaration of interests
The authors report no conflict of interest.
Appendix A. Measuring thin film deformations using DHM
Figure 14a shows a schematic diagram in which the drop has just moved out of the DHM measuring window after the bounce. The height of the measuring window is roughly 200 µm. Here, we define t = 0 as the first instant when the DHM measuring window is devoid of the water-air interface, allowing a clean measurement of the oil-air deformation. The object beam (consisting of reflected wavefronts from the measuring window) and the reference beam of the DHM interfere to form the holographic pattern recorded by the DHM camera. The recorded holograms are converted into their constituent intensity and phase information, wherein the phase information is used to reconstruct the height information. Figure 14b shows an exemplary phase image at time t = 0. The reconstructed height profile of this phase image is shown in figure 4a.
To check whether the weak reflection of the glass-oil interface affects the measurements of the oil-air deformation in the bounce experiments, we perform a simple calibration experiment, in which the (known) PDMS gel-air profile is measured through glass. Figure 15 shows the comparison of the real and the measured PDMS gel-air deformation. We confirm that the flat glass-PDMS gel interface (and similarly the flat glass-oil film interface) does not affect the measurements of the thin film deformations. A vertical resolution of around 20 nm is obtained from the calibration plot. It is important to note that the vertical resolution can be well below 20 nm; this is only a conservative estimate, since the real PDMS gel profile can suffer from small tilt and from surface roughness introduced during its fabrication, which are not accounted for here.
Here Γ and $_0H_2$ are the gamma function and the (0,2)-hypergeometric function, respectively. The (0,2)-hypergeometric function is defined in Eqn (B 2), which reads

$$_0H_2(a, b; z) = \sum_{n=0}^{\infty} \frac{z^n}{(a)_n\,(b)_n\, n!}. \qquad (B\,2)$$

Here $(\cdot)_n$ is the Pochhammer notation for the rising factorial.
"Physics"
] |
Photoproduction $\gamma p \to K^+\Lambda(1520)$ in an effective Lagrangian approach
The data on differential cross sections and photon-beam asymmetries for the $\gamma p \to K^+\Lambda(1520)$ reaction have been analyzed within a tree-level effective Lagrangian approach. In addition to the $t$-channel $K$ and $K^\ast$ exchanges, the $u$-channel $\Lambda$ exchange, the $s$-channel nucleon exchange, and the interaction current, a minimal number of nucleon resonances in the $s$ channel are introduced in constructing the reaction amplitudes to describe the data. The results show that the experimental data can be well reproduced by including either the $N(2060)5/2^-$ or the $N(2120)3/2^-$ resonance. In both cases, the contact term and the $K$ exchange are found to make significant contributions, while the contributions from the $K^\ast$ and $\Lambda$ exchanges are negligible in the former case and considerable in the latter case. Measurements of the data on target asymmetries are called on to further pin down the resonance contents and to clarify the roles of the $K^\ast$ and $\Lambda$ exchanges in this reaction.
I. INTRODUCTION
The traditional πN elastic and inelastic scattering experiments have provided us with abundant knowledge of the mass spectrum and decay properties of the nucleon resonances (N*'s). Nevertheless, both quark model [1,2] and lattice QCD [3,4] calculations predict more resonances than have been observed in the πN scattering experiments. The resonances predicted by the quark model or lattice QCD but not observed in experiments are called "missing resonances"; they are supposed to have small couplings to the πN channel and thus escape experimental detection. In the past few decades, intense efforts have been devoted to the search for the missing resonances in meson production reaction channels other than πN. In particular, the ρN, φN, and ωN production reactions in the nonstrangeness sector and the KY, K*Y (Y = Λ, Σ) production reactions in the strangeness sector have been widely investigated both experimentally and theoretically.
In the present paper, we focus on the γp → K⁺Λ(1520) reaction process. The threshold of K⁺Λ(1520) photoproduction is about 2.01 GeV, and thus this reaction provides a chance to study N* resonances in the W ∼ 2.0 GeV mass region, for which the information is still scarce, as shown in the latest version of the Review of Particle Physics (RPP) [5]. Besides, the isoscalar nature of Λ(1520) allows only I = 1/2 N* resonance exchanges in the s channel, which simplifies the reaction mechanism of K⁺Λ(1520) photoproduction. Experimentally, the cross sections for the reaction γp → K⁺Λ(1520) were measured at SLAC by Boyarski et al. in 1971 for photon energy E_γ = 11 GeV [6], and by the LAMP2 group in 1980 at E_γ = 2.8−4.8 GeV [7]. In 2010, the LEPS Collaboration measured the differential cross sections and photon-beam asymmetries (Σ) at SPring-8 for γp → K⁺Λ(1520) at energies from threshold up to E_γ = 2.6 GeV at forward K⁺ angles [8].
In 2011, the SAPHIR Collaboration measured the cross sections at the Electron Stretcher Accelerator (ELSA) for the K + Λ(1520) photoproduction in the energy range from threshold up to E γ = 2.65 GeV [9]. Recently, the differential and total cross sections for the K + Λ(1520) photoproduction were reported by the CLAS Collaboration at energies from threshold up to the center-of-mass energy W = 2.86 GeV over a large range of the K + production angle [10].
Theoretically, the K⁺Λ(1520) photoproduction reaction has been extensively investigated based on effective Lagrangian approaches by four theory groups in 11 publications [11-21]. In Refs. [11-14], Nam et al. found that the contact term and the t-channel K exchange are important to the cross sections of γp → K⁺Λ(1520), while the contributions from the t-channel K* exchange and the s-channel nucleon resonance exchange are rather small. In Refs. [15-18], Xie, Wang, and Nieves et al. found that apart from the contact term and the t-channel K exchange, the u-channel Λ exchange and the s-channel N(2120)3/2⁻ [previously called D₁₃(2080)] exchange are also important in describing the cross-section data for γp → K⁺Λ(1520), while the contribution from the t-channel K* exchange is negligible in this reaction. In Refs. [19,20], He and Chen found that the contribution from the t-channel K* exchange in γp → K⁺Λ(1520) is also considerable, besides the important contributions from the contact term, the t-channel K exchange, the u-channel Λ exchange, and the s-channel N(2120)3/2⁻ exchange. In Ref. [21], Yu and Kong studied the γp → K⁺Λ(1520) reaction within a Reggeized model, and they claimed that the important contributions to this reaction come from the contact term, the t-channel K exchange, and the t-channel K*₂ exchange, while the contribution from the t-channel K* exchange is minor.
FIG. 1. Predictions of photon-beam asymmetries at cos θ = 0.8 as a function of the photon laboratory energy for γp → K⁺Λ(1520) from Ref. [16] (blue dashed line), fit II of Ref. [17] (red solid line), Ref. [19] (green dotted line), and Ref. [20] (black dot-dashed line). The data are located in 0.6 < cos θ < 1 and are taken from the LEPS Collaboration [8] (blue squares).
One observes that the common feature reported in all the above-mentioned publications [11-21] is that the contributions from the contact term and the t-channel K exchange are important to the γp → K⁺Λ(1520) reaction. Even so, the reaction mechanisms of γp → K⁺Λ(1520) claimed by these four theory groups are quite different. In particular, no conclusive answers can be derived from Refs. [11-21] for the following questions: Are the contributions from the t-channel K* exchange and the u-channel Λ exchange significant or not in this reaction? Does one inevitably need to introduce nucleon resonances in the s channel to describe the data? And, if so, is the N(2120)3/2⁻ resonance the only candidate needed in this reaction, and what are its parameters?
On the other hand, the data on photon-beam asymmetries for γp → K⁺Λ(1520) reported by the LEPS Collaboration in 2010 [8] have never been well reproduced in the previous publications of Refs. [11-20]. As an illustration, we show in Fig. 1 the theoretical results on photon-beam asymmetries from Refs. [16,17,19,20], calculated at cos θ = 0.8 and compared with the data located at 0.6 < cos θ < 1. It is true that the data bins in scattering angle are wide; nevertheless, it has been checked that the averaged values of the theoretical beam-asymmetry results in 0.6 < cos θ < 1 are comparable with those calculated at cos θ = 0.8. One sees that, in the energy region E_γ > 2 GeV, even the signs of the photon-beam asymmetries predicted by these theoretical works are opposite to the data. In the Regge model analysis of Ref. [21], the photon-beam asymmetries have indeed been analyzed, but there the differential cross-section data have been only qualitatively described, and the structures of the angular distributions exhibited by the data were missing due to the lack of nucleon resonances in the s-channel interactions.
FIG. 2. Generic structure of the amplitude for γp → K⁺Λ(1520). Time proceeds from left to right. The outgoing Λ* denotes Λ(1520).
The purpose of the present work is to perform a combined analysis of the available data on both the differential cross sections and the photon-beam asymmetries for γp → K + Λ(1520) within an effective Lagrangian approach, and, based on that, we try to get a clear understanding of the reaction mechanism of γp → K + Λ(1520). In particular, we aim to clarify whether the t-channel K * exchange and the u-channel Λ exchange are important or not and what the resonance contents and their associated parameters are in this reaction. As discussed above, previous publications of Refs. [11][12][13][14][15][16][17][18][19][20] can describe only the differential cross-section data, and they gave diverse answers to these questions. It is expected that more reliable results on the resonance contents and the roles of K * and Λ exchanges in this reaction can be obtained from the theoretical analysis which can result in a satisfactory description of the data on both the differential cross sections and the photon-beam asymmetries.
The present paper is organized as follows. In Sec. II, we briefly introduce the framework of our theoretical model, including the effective interaction Lagrangians, the resonance propagators, and the phenomenological form factors employed in this work. The results of our model calculations are shown and discussed in Sec. III. Finally, a brief summary and conclusions are given in Sec. IV.
II. FORMALISM
The full amputated photoproduction amplitude for γN → KΛ(1520) in our tree-level effective Lagrangian approach can be expressed as [22-25]

$$M^{\nu\mu} = M_s^{\nu\mu} + M_t^{\nu\mu} + M_u^{\nu\mu} + M_{\rm int}^{\nu\mu}, \qquad (1)$$

with ν and µ being the Lorentz indices for the outgoing Λ(1520) and the incoming photon, respectively. The first three terms, $M_s^{\nu\mu}$, $M_t^{\nu\mu}$, and $M_u^{\nu\mu}$, stand for the amplitudes resulting from the s-channel N and N* exchanges, the t-channel K and K* exchanges, and the u-channel Λ exchange, respectively, as diagrammatically depicted in Fig. 2. They can be calculated straightforwardly by using the effective Lagrangians, propagators, and form factors provided in the following part of this section. The last term in Eq. (1) represents the interaction current arising from the photon attaching to the internal structure of the Λ(1520)NK vertex. In the practical calculation, the interaction current $M_{\rm int}^{\nu\mu}$ is modeled by a generalized contact current [22-31]:

$$M_{\rm int}^{\nu\mu} = \Gamma^\nu_{\Lambda^* N K}(q)\, C^\mu + M_{\rm KR}^{\nu\mu}\, f_t. \qquad (2)$$

Here $\Gamma^\nu_{\Lambda^* N K}(q)$ is the vertex function of the Λ(1520)NK coupling governed by the Lagrangian of Eq. (16), with q being the four-momentum of the outgoing K meson; $M_{\rm KR}^{\nu\mu}$ is the Kroll-Ruderman term governed by the Lagrangian of Eq. (15), with $Q_K$ being the electric charge of the outgoing K meson and τ the isospin factor of the Kroll-Ruderman term; $f_t$ is the phenomenological form factor attached to the amplitude of the t-channel K exchange, which is given by Eq. (39); and $C^\mu$ is an auxiliary current introduced to ensure the gauge invariance of the full photoproduction amplitude of Eq. (1). Note that the photoproduction amplitudes are automatically gauge invariant in the case that there are no form factors and the electromagnetic couplings are obtained by replacing the partial derivative by its covariant form in the corresponding hadronic vertices. In the practical calculation, one has to introduce form factors at the hadronic vertices (cf. Sec. II C), which violate gauge invariance. The auxiliary current $C^\mu$ is then introduced to compensate the gauge violation caused by the form factors. Following Refs. [27-29], for the γN → KΛ(1520) reaction, the auxiliary current $C^\mu$ is chosen as given in Eq. (5). Here p, q, and k denote the four-momenta of the incoming N, the outgoing K, and the incoming photon, respectively; $Q_K$ and $Q_N$ are the electric charges of K and N, respectively; $f_s$ and $f_t$ are the phenomenological form factors for the s-channel N exchange and the t-channel K exchange, respectively; ĥ is an arbitrary function going to unity in the high-energy limit, set to 1 in the present work for simplicity; and τ denotes the isospin factor of the corresponding hadronic vertex. Alternatively, one can rewrite the auxiliary current $C^\mu$ of Eq. (5) in the form of Eq. (6), from which one sees clearly that if there are no form factors, i.e., $f_t = f_s = 1$, one has $C^\mu \to 0$ and, consequently, $M_{\rm int}^{\nu\mu} \to M_{\rm KR}^{\nu\mu}$. We mention that the auxiliary current $C^\mu$ in Eq. (5) works for both real and virtual photons; i.e., the amplitudes constructed in Eq. (1) are gauge invariant for both photo- and electroproduction of K⁺Λ(1520). In Ref. [32], another prescription for keeping the gauge invariance of the K⁺Λ(1520) electroproduction amplitudes was introduced, in which additional terms are considered besides those for photoproduction reactions.
In the rest of this section, we present the effective Lagrangians, the resonance propagators, the form factors, and the interpolated t-channel Regge amplitudes employed in the present work.
A. Effective Lagrangians
In this subsection, we list all the Lagrangians used in the present work. For further simplicity, we define the operators $\Gamma^{(+)} = \gamma_5$ and $\Gamma^{(-)} = 1$, the field $\Lambda^* = \Lambda(1520)$, and the field-strength tensor $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, with $A^\mu$ denoting the electromagnetic field. The Lagrangians needed to calculate the amplitudes of the nonresonant diagrams are taken from Refs. [22-25]; in them, $M_{K^*}$, $M_K$, $M_N$, and $M_\Lambda$ denote the masses of K*, K, N, and Λ, respectively, ê stands for the charge operator, and κ̂ for the anomalous-magnetic-moment operator, with κ_p = 1.793 and κ_n = −1.913. The coupling constant $g_{\gamma K K^*} = 0.413$ is calculated from the radiative decay width of K* → Kγ given by the RPP [5], with the sign inferred from $g_{\gamma\pi\rho}$ [33] via flavor SU(3) symmetry considerations in conjunction with the vector-meson dominance assumption. The coupling constants $g^{(1)}_{\Lambda^*\Lambda\gamma}$ and $g^{(2)}_{\Lambda^*\Lambda\gamma}$ are fit parameters, but only one of them is free, since they are constrained by the Λ(1520) radiative decay width $\Gamma_{\Lambda(1520)\to\Lambda\gamma} = 0.133$ MeV as given by the RPP [5]. The value $g_{\Lambda^* N K} = 10.5$ is determined from the decay width of Λ(1520) → NK, $\Gamma_{\Lambda(1520)\to NK} = 7.079$ MeV, as advocated by the RPP [5]. The coupling constant $g_{\Lambda^* N K^*}$ is a parameter to be determined by fitting the data. For the nucleon resonances in the s channel, the Lagrangians for the electromagnetic couplings [22-25] and for the hadronic couplings to Λ(1520)K are of the standard forms, where R designates the N* resonance and the superscripts of $L_{RN\gamma}$ and $L_{R\Lambda^* K}$ denote the spin and parity of the resonance R. Only the products of the resonance electromagnetic and hadronic coupling constants, $g^{(i)}_{RN\gamma} g^{(j)}_{R\Lambda^* K}$ (i, j = 1, 2), are relevant to the reaction amplitudes, and they are what we fit in practice.
In Ref. [11], the off-shell effects for spin-3/2 resonances in γp → K + Λ(1520) have been tested. It was found that the off-shell effects are small and the off-shell parameter X can be set to zero. In the present work, we simply ignore the off-shell terms in the interaction Lagrangians for high spin resonances and leave this issue for future work.
C. Form factors
In the practical calculation of the reaction amplitudes, a phenomenological form factor is introduced at each hadronic vertex. For the t-channel meson exchanges, we adopt the form factor of Refs. [22-25], and for the s-channel and u-channel baryon exchanges we use the corresponding baryonic form factor of Refs. [22-25]. Here, $q_M$ denotes the four-momentum of the intermediate meson in the t channel, and $p_x$ stands for the four-momentum of the intermediate baryon in the s and u channels, with x = s and u, respectively. $\Lambda_{M(B)}$ is the corresponding cutoff parameter. In the present work, in order to reduce the number of adjustable parameters, we use the same cutoff parameter $\Lambda_B$ for all the nonresonant diagrams, i.e., $\Lambda_B \equiv \Lambda_K = \Lambda_{K^*} = \Lambda_\Lambda = \Lambda_N$. The parameter $\Lambda_B$ and the cutoff parameter $\Lambda_R$ for the N* resonances are determined by fitting the experimental data.
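The explicit form-factor expressions are not reproduced above; typical choices in this family of effective-Lagrangian analyses are a dipole in the meson momentum for t-channel exchanges and a quartic form in $p_x^2 - M_B^2$ for baryon exchanges. The sketch below shows such forms; they are common in the cited literature but are an assumption here, not a verbatim copy of the equations in Refs. [22-25].

```python
def meson_form_factor(q2, M, Lam):
    """Dipole-type t-channel form factor, normalized to 1 at q2 = M**2.
    A common choice in effective-Lagrangian analyses (assumed form)."""
    return ((Lam**2 - M**2) / (Lam**2 - q2)) ** 2

def baryon_form_factor(p2, M_B, Lam):
    """Quartic s/u-channel form factor, normalized to 1 at p2 = M_B**2.
    A common choice in effective-Lagrangian analyses (assumed form)."""
    return (Lam**4 / (Lam**4 + (p2 - M_B**2) ** 2)) ** 2

# Example: K exchange at spacelike t (masses and cutoff in GeV; the cutoff
# value is purely illustrative, not a fitted parameter of the paper).
M_K, M_N, Lam = 0.494, 0.938, 0.9
print(meson_form_factor(q2=-0.2, M=M_K, Lam=Lam))
print(baryon_form_factor(p2=4.0, M_B=M_N, Lam=Lam))
```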
D. Interpolated t-channel Regge amplitudes
A Reggeized treatment of the t-channel K and K* exchanges is usually employed to economically describe the high-energy data; it corresponds to replacing the form factors in the Feynman amplitudes by the Regge factors of Eqs. (35) and (36). Here $s_0$ is a mass scale which is conventionally taken as $s_0 = 1$ GeV², and $\alpha'_M$ is the slope of the Regge trajectory $\alpha_M(t)$. For M = K and K*, the trajectories are parameterized as in Ref. [34]. Note that, in Eqs. (35) and (36), degenerate trajectories are employed for the K and K* exchanges; thus, the signature factors reduce to 1.
In the present work, we use the so-called interpolated Regge amplitudes for the t-channel K and K* exchanges. The idea of this prescription is that at high energies and small angles one uses Regge amplitudes, while at low energies one uses Feynman amplitudes; in the intermediate energy region, an interpolating form factor is introduced to ensure a smooth transition from the low-energy Feynman amplitudes to the high-energy Regge amplitudes. This hybrid Regge approach has been applied to study the γp → K⁺Λ(1520) reaction in Refs. [14,18,20,21] and other reactions in Refs. [34-37]. Instead of making the replacements of Eqs. (35) and (36) as in a pure Reggeized treatment, in this hybrid Regge model the amplitudes for the t-channel K and K* exchanges are constructed by interpolating between the Feynman form factors and the Regge factors of Eqs. (35) and (36) with a weight $R = R_s R_t$. Here $s_R$, $t_R$, $s_0$, and $t_0$ are the parameters of $R_s$ and $R_t$, to be determined by fitting the experimental data.
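A concrete way to realize such a smooth Feynman-to-Regge transition is a product of sigmoidal switches in s and t. The functional forms below are a common choice in hybrid Regge models and are an assumption here, since the paper's explicit expressions for $R_s$ and $R_t$ are not reproduced above.

```python
import math

def R_s(s, s_R, s_0):
    """Smooth switch: -> 1 for s >> s_R (Regge regime), -> 0 for s << s_R.
    Assumed sigmoidal form; the paper's exact expression may differ."""
    return 1.0 / (1.0 + math.exp(-(s - s_R) / s_0))

def R_t(t, t_R, t_0):
    """Smooth switch: -> 1 for |t| << |t_R| (small angles), -> 0 otherwise.
    Assumed sigmoidal form; the paper's exact expression may differ."""
    return 1.0 / (1.0 + math.exp(-(abs(t_R) - abs(t)) / t_0))

def interpolated_form_factor(F_feyn, F_regge, s, t, s_R, s_0, t_R, t_0):
    """Interpolate between the Feynman and Regge prescriptions with the
    weight R = R_s * R_t."""
    R = R_s(s, s_R, s_0) * R_t(t, t_R, t_0)
    return F_regge * R + F_feyn * (1.0 - R)

# Illustrative parameter values in GeV^2 (not the fitted ones).
print(interpolated_form_factor(0.8, 0.3, s=6.0, t=-0.3,
                               s_R=4.0, s_0=0.5, t_R=-0.6, t_0=0.2))
```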
The auxiliary current $C^\mu$ introduced in Eq. (5) and the interaction current $M_{\rm int}^{\nu\mu}$ given in Eq. (2) ensure that the full photoproduction amplitude of Eq. (1) satisfies the generalized Ward-Takahashi identity and is thus fully gauge invariant [27-29]. Note that our prescription for $C^\mu$ and $M_{\rm int}^{\nu\mu}$ is independent of any particular form of the t-channel form factor $f_K(q_K^2)$, provided that it is normalized as $f_K(q_K^2 = M_K^2) = 1$. One sees that, when the interpolated Regge amplitude is employed for the t-channel K exchange, the replacement of Eq. (39) still preserves the normalization condition of the form factor, $f_K(q_K^2 = M_K^2) = 1$. Therefore, as soon as we make the same replacement of Eq. (39) for the form factor of the t-channel K exchange everywhere in $C^\mu$ and $M_{\rm int}^{\nu\mu}$, the full photoproduction amplitude still satisfies the generalized Ward-Takahashi identity and is thus fully gauge invariant.
III. RESULTS AND DISCUSSION
As discussed in the introduction section of this paper, the reaction γp → K + Λ(1520) has been theoretically investigated based on effective Lagrangian approaches by four theory groups in 11 publications [11][12][13][14][15][16][17][18][19][20][21]. The common feature of the results from these theoretical works is that the contributions from the contact term and the t-channel K exchange are important for the γp → K + Λ(1520) reaction. Apart from that, no common ground has been found by these theoretical works for the reaction mechanisms of γp → K + Λ(1520). In particular, different groups gave quite different answers for the following questions: Are the contributions from the t-channel K * exchange and u-channel Λ exchange significant or not in this reaction, are the nucleon resonances introduced in the s channel indispensable or not to describe the available data, and, if yes, what are the resonance contents and their associated parameters in this reaction? On the other hand, we notice that even though the data on photon-beam asymmetries for γp → K + Λ(1520) have been reported by the LEPS Collaboration in 2010, they have never been well reproduced in previous theoretical publications of Refs. [11][12][13][14][15][16][17][18][19][20]. One believes that these photon-beam-asymmetry data will definitely put further constraints on the reaction amplitudes. In Ref. [21], the photon-beam-asymmetry data have indeed been analyzed, but there, the structures of the angular distributions exhibited by the data are missed due to the lack of nucleon resonances. In a word, all previous theoretical publications in regards to γp → K + Λ(1520) are divided over the reaction mechanism and the resonance contents and parameters of this reaction. A simultaneous description of the differential cross-section data and the photonbeam-asymmetry data still remains to be accomplished.
The purpose of the present work is to get a clear understanding of the reaction mechanism of γp → K⁺Λ(1520) based on a combined analysis of the available data on both the differential cross sections and the photon-beam asymmetries within an effective Lagrangian approach. As the differential cross-section data exhibit clear bump structures in the near-threshold region, apart from the N, K, K*, and Λ exchanges and the interaction current in the nonresonant background, we introduce as few near-threshold nucleon resonances as possible in the s channel in constructing the γp → K⁺Λ(1520) reaction amplitudes to reproduce the data.
In the most recent version of the RPP [5], there are six nucleon resonances near the K⁺Λ(1520) threshold, namely, the N(2000)5/2⁺, N(2040)3/2⁺, N(2060)5/2⁻, N(2100)1/2⁺, N(2120)3/2⁻, and N(2190)7/2⁻ resonances. If none of these nucleon resonances is introduced in the construction of the s-channel reaction amplitudes, we find that it is not possible to achieve a simultaneous description of both the differential cross-section data and the photon-beam-asymmetry data in our model. We then try to reproduce the data by including one of these six near-threshold resonances. If we include one of the N(2000)5/2⁺, N(2040)3/2⁺, N(2100)1/2⁺, and N(2190)7/2⁻ resonances, we find that the resulting theoretical descriptions of the differential cross sections and photon-beam asymmetries are of rather poor quality. As an illustration, we show in Fig. 3 the differential cross sections at a few selected scattering angles as a function of the incident photon energy, obtained by including one of the N(2000)5/2⁺ (black solid lines), N(2040)3/2⁺ (red dot-double-dashed lines), N(2100)1/2⁺ (blue dashed lines), and N(2190)7/2⁻ (green dot-dashed lines) resonances, compared with the corresponding data [8,10]. One sees clearly from Fig. 3 that the fits with one of the N(2040)3/2⁺, N(2100)1/2⁺, and N(2190)7/2⁻ resonances fail to describe the differential cross sections at cos θ = 0.95, and the fit with the N(2000)5/2⁺ resonance fails to reproduce the differential cross-section data at the other three selected scattering angles. In a word, none of these four fits, each including one of the N(2000)5/2⁺, N(2040)3/2⁺, N(2100)1/2⁺, and N(2190)7/2⁻ resonances, can describe the differential cross-section data well; thus, they are excluded as acceptable fits. On the other hand, if either the N(2060)5/2⁻ or the N(2120)3/2⁻ resonance is considered, a satisfactory simultaneous description of both the differential cross-section data and the photon-beam-asymmetry data can be obtained, as will be discussed below in detail. Consequently, these two fits, i.e., the ones including the N(2060)5/2⁻ or the N(2120)3/2⁻ resonance, are treated as acceptable. When an additional resonance is further included, the fit quality improves a little, since one has more adjustable model parameters. But in this case one obtains too many solutions with similar fit qualities, and the fitted error bars of the adjustable parameters are also relatively large. As a consequence, no firm conclusion can be drawn about the resonance contents and parameters extracted from the available data for the considered reaction. We thus conclude that the available differential cross-section data and photon-beam-asymmetry data for γp → K⁺Λ(1520) can be described by including either the N(2060)5/2⁻ or the N(2120)3/2⁻ resonance, and we postpone the analysis of these data with two or more nucleon resonances until more data for this reaction become available.
As discussed above, we introduce nucleon resonances as few as possible in constructing the reaction amplitudes to describe the available data for γp → K + Λ(1520). It is found that a simultaneous description of both the differential cross-section data and the photon-beamasymmetry data can be achieved by including either the N (2060)5/2 − resonance or the N (2120)3/2 − resonance. We thus get two acceptable fits named as "fit A," which includes the N (2060)5/2 − resonance, and "fit B," which includes the N (2120)3/2 − resonance. The fitted values of the adjustable model parameters in these two fits are listed in Table I, and the corresponding results on differential cross sections and photon-beam asymmetries are shown in Figs. 4−6.
In Table I, for the u-channel Λ exchange, only the value of the coupling constant $g^{(2)}_{\Lambda^*\Lambda\gamma}$ is not treated as a free parameter, since it is constrained by the Λ(1520) radiative decay width $\Gamma_{\Lambda(1520)\to\Lambda\gamma} = 0.133$ MeV as given by the RPP [5], which results in $g^{(2)}_{\Lambda^*\Lambda\gamma} = 2.13$ in fit A and −13.01 in fit B, respectively.
TABLE I. Fitted values of the model parameters. The asterisks below the resonance names represent the overall status of these resonances as evaluated by the RPP [5]. The numbers in brackets below the resonance masses and widths denote the corresponding values advocated by the RPP [5]. $\sqrt{\beta_{\Lambda^* K}}A_j$ represents the reduced helicity amplitude for a resonance, with $\beta_{\Lambda^* K}$ denoting the branching ratio of the resonance decay to Λ(1520)K and $A_j$ standing for the helicity amplitude with spin j for the resonance radiative decay to γp.
The asterisks below the resonance names represent the overall status of these resonances as evaluated in the most recent RPP [5]. One sees that both the N(2060)5/2⁻ and the N(2120)3/2⁻ resonances are evaluated as three-star resonances. The symbols $M_R$, $\Gamma_R$, and $\Lambda_R$ denote the resonance mass, width, and cutoff parameter, respectively. The numbers in brackets below the resonance mass and width are the corresponding values estimated by the RPP. It is seen that the fitted masses of the N(2060)5/2⁻ and N(2120)3/2⁻ resonances are comparable with the values quoted by the RPP, while the fitted widths of these two resonances are smaller than the corresponding RPP values. For the resonance couplings, since in the tree-level calculation only the products of the resonance hadronic and electromagnetic coupling constants are relevant to the reaction amplitudes, we list the reduced helicity amplitudes $\sqrt{\beta_{\Lambda^* K}}A_j$ for each resonance instead of showing their hadronic and electromagnetic coupling constants separately [22,23,30,34]. Here $\beta_{\Lambda^* K}$ is the branching ratio for the resonance decay to Λ(1520)K, and $A_j$ is the helicity amplitude with spin j (j = 1/2, 3/2) for the resonance radiative decay to γp.
Fit results
As shown in Figs. 4−6, we have in total 220 data points in the fits. Fit A has a global χ²/N = 2.10, and fit B has a global χ²/N = 2.63. Note that, in the fitting procedure, 11.6% and 5.92% systematic errors for the data from the CLAS Collaboration and the LEPS Collaboration, respectively, have been added in quadrature to the statistical errors [8,10]. Overall, one sees that both the differential cross-section data and the photon-beam-asymmetry data are well described simultaneously in both fit A and fit B. Figures 4 and 5 show the differential cross sections for γp → K+Λ(1520) resulting from fit A (left panels), which includes the N(2060)5/2− resonance, and fit B (right panels), which includes the N(2120)3/2− resonance. There, the black solid lines represent the results calculated from the full reaction amplitudes. The red dotted lines, blue dashed lines, green dot-dashed lines, cyan double-dot-dashed lines, and magenta dot-double-dashed lines denote the individual contributions from the interaction current, the t-channel K exchange, the t-channel K* exchange, the s-channel N* resonance exchange, and the u-channel Λ exchange, respectively. The individual contributions from the s-channel nucleon exchange are too small to be clearly shown in these figures. One sees from Figs. 4 and 5 that the differential cross-section data are well reproduced in both fit A (left panels) and fit B (right panels). Note that in Fig. 5, for cos θ = 0.85, the CLAS data at cos θ = 0.84 (Eγ < 3.25 GeV) and cos θ = 0.83 (Eγ > 3.25 GeV) are shown. That explains why in Fig. 4 the theoretical results agree with the CLAS data at high-energy forward angles, while in Fig. 5 the theoretical differential cross sections at cos θ = 0.85 overestimate the CLAS data at the last two energy points.
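The quoted fit qualities follow the usual definition of a global χ²/N, with the fractional systematic errors combined in quadrature with the statistical ones. The following is a minimal sketch of this bookkeeping; the function name and all numbers are our illustrative assumptions, not the actual data or fit values:

```python
import numpy as np

def chi2_per_point(data, model, stat_err, sys_frac):
    """Reduced chi-square chi^2/N for N data points, with a fractional
    systematic error added in quadrature to the statistical error."""
    data = np.asarray(data, dtype=float)
    model = np.asarray(model, dtype=float)
    stat_err = np.asarray(stat_err, dtype=float)
    # Combine statistical and systematic uncertainties in quadrature
    total_err = np.sqrt(stat_err**2 + (sys_frac * data)**2)
    chi2 = np.sum(((data - model) / total_err)**2)
    return chi2 / data.size

# Toy example: five CLAS-like points with an 11.6% systematic error
data  = np.array([0.40, 0.55, 0.62, 0.58, 0.50])   # dsigma/dcos(theta), arb. units
model = np.array([0.42, 0.52, 0.60, 0.59, 0.47])
stat  = np.array([0.03, 0.03, 0.04, 0.04, 0.05])
print(chi2_per_point(data, model, stat, 0.116))
```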
From Figs. 4 and 5, one sees that, in fit A, the contribution from the interaction current [cf. Eq. (2)] plays a rather important role in the whole energy region. In the near-threshold region, the differential cross sections are dominated by the interaction current and the N(2060)5/2− resonance exchange. The contributions from these two terms are responsible for the sharp rise of the differential cross sections near the K+Λ(1520) threshold and, in particular, for the bump structure near Eγ ≈ 2 GeV at forward angles exhibited by the LEPS data in Fig. 5. The t-channel K exchange is seen to contribute significantly at higher energies and forward angles. The t-channel K* exchange makes tiny contributions at high-energy forward angles, while the contributions from the u-channel Λ exchange are negligible. In fit B, the interaction current plays a dominant role in the whole energy region and is also responsible for the sharp rise of the differential cross sections at forward angles near the K+Λ(1520) threshold. The bump structure near Eγ ≈ 2 GeV at forward angles exhibited by the LEPS data in Fig. 5 is caused by the N(2120)3/2− resonance on top of the background dominated by the interaction current. The t-channel K exchange and the u-channel Λ exchange contribute significantly at forward and backward angles, respectively, mostly at higher energies. Considerable contributions are also seen from the t-channel K* exchange at high-energy forward angles.
The results for the photon-beam asymmetries for γp → K+Λ(1520) from fit A and fit B are shown in the left and right panels of Fig. 6, respectively. There, the black solid lines represent the results calculated from the full amplitudes. The red dotted lines, blue dashed lines, green dot-dashed lines, cyan double-dot-dashed lines, and magenta dot-double-dashed lines denote the results obtained by switching off the contributions of the interaction current, the t-channel K exchange, the t-channel K* exchange, the s-channel N* resonance exchange, and the u-channel Λ exchange, respectively, from the full model. One sees that the photon-beam-asymmetry data are well reproduced in both fits. In fit A, when the contributions of the N(2060)5/2− resonance exchange are switched off from the full model, one gets almost zero beam asymmetries. We have checked and found that the N(2060)5/2− resonance exchange alone results in negligible beam asymmetries. This means that it is the interference between the N(2060)5/2− resonance exchange and the other interaction terms that is crucial for reproducing the experimental values of the beam asymmetries. A similar observation also holds for the interaction current [cf. Eq. (2)]. The interaction current alone results in almost zero beam asymmetries, but one gets rather negative beam asymmetries when its contributions are switched off from the full model. This means that the interference between the interaction current and the other interaction terms is very important for reproducing the beam asymmetries. Switching off the contributions of the individual terms other than the N(2060)5/2− resonance exchange and the interaction current from the full model does not affect the theoretical beam asymmetries much. In fit B, the interaction current alone is also found to result in almost zero beam asymmetries, the same as in fit A. Nevertheless, it is seen from Fig. 6 that one gets rather negative beam asymmetries when the contributions of the interaction current are switched off from the full model, showing the importance of the interference of the interaction current with the other interaction terms for the photon-beam asymmetries of γp → K+Λ(1520). Switching off the contributions of the individual terms other than the interaction current from the full model does not affect the theoretical beam asymmetries much. In Ref. [8], it was conjectured that the positive values of the K+Λ(1520) asymmetries indicate a much larger contribution from the K* exchange. In both fit A and fit B of the present work, we have checked and found that the K* exchange alone does result in positive beam asymmetries, but, when the contributions of the K* exchange are switched off from the full model, the calculated beam asymmetries do not change significantly. In particular, the theoretical beam asymmetries remain positive and close to the experimental values when the contributions of the K* exchange are switched off from the full model.

FIG. 4 caption: Differential cross sections for γp → K+Λ(1520) as a function of cos θ from fit A (left panels) and fit B (right panels). The symbols W and Eγ denote the center-of-mass energy of the whole system and the photon laboratory energy, respectively, both in MeV. The black solid lines represent the results calculated from the full amplitudes. The red dotted lines, blue dashed lines, green dot-dashed lines, cyan double-dot-dashed lines, and magenta dot-double-dashed lines denote the individual contributions from the interaction current, the t-channel K exchange, the t-channel K* exchange, the s-channel N* resonance exchange, and the u-channel Λ exchange, respectively. The scattered symbols are data from the CLAS Collaboration [10].

Figure 7 shows the total cross sections for γp → K+Λ(1520) predicted from fit A (left panel) and fit B (right panel), which are obtained by integrating the corresponding differential cross sections calculated in these two fits. In Fig. 7, the black solid lines represent the results calculated from the full reaction amplitudes. The red dotted lines, blue dashed lines, green dot-dashed lines, cyan double-dot-dashed lines, and magenta dot-double-dashed lines denote the individual contributions from the interaction current, the t-channel K exchange, the t-channel K* exchange, the s-channel N* resonance exchange, and the u-channel Λ exchange, respectively. The individual contributions from the s-channel nucleon exchange are too small to be clearly shown in these figures. Note that the data for the total cross sections of γp → K+Λ(1520) are not included in the fits. Even so, one sees that, in both fit A and fit B, the theoretical total cross sections are in good agreement with the data. In fit A, the s-channel N(2060)5/2− exchange, the interaction current, and the t-channel K exchange provide the most important contributions to the total cross sections, while the contributions from the u-channel Λ exchange, the s-channel N exchange, and the t-channel K* exchange are negligible. The bump structure near Eγ ≈ 2 GeV is caused mainly by the N(2060)5/2− resonance exchange and the interaction current. The sharp rise of the total cross sections near the K+Λ(1520) threshold is dominated by the s-channel N(2060)5/2− exchange. In fit B, the dominant contributions to the total cross sections come from the interaction current, which is also responsible for the sharp rise of the total cross sections near the K+Λ(1520) threshold. The individual contributions from the s-channel N(2120)3/2− exchange, the t-channel K and K* exchanges, and the u-channel Λ exchange are considerable, while those from the s-channel N exchange are negligible. Comparing the individual contributions in fit A and fit B, one sees that the contributions from the resonance exchange are rather important in fit A but much smaller in fit B. The contributions from the t-channel K* exchange and the u-channel Λ exchange are negligible in fit A but considerable in fit B. In both fits, the interaction current provides the dominant contributions, and the t-channel K exchange contributes considerably to the cross sections.
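The total cross sections in Fig. 7 are obtained by integrating the differential cross sections over the full angular range, which numerically is a one-dimensional quadrature in cos θ. The sketch below uses an invented toy angular shape rather than the model amplitudes:

```python
import numpy as np

# Hypothetical grid of cos(theta) values and differential cross sections
# dsigma/dcos(theta) at one photon energy; the toy shape is made up.
cos_theta = np.linspace(-1.0, 1.0, 21)
dsig_dcos = 0.3 + 0.25 * cos_theta + 0.1 * cos_theta**2

# Total cross section = integral of dsigma/dcos(theta) over cos(theta)
sigma_tot = np.trapz(dsig_dcos, cos_theta)
print(f"sigma_tot = {sigma_tot:.3f} (same units as dsigma/dcos(theta))")
```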
As mentioned in the introduction, the K+Λ(1520) photoproduction reaction has been theoretically investigated within effective Lagrangian approaches by four theory groups in 11 publications [11-21]. In these previous publications, the photon-beam-asymmetry data reported by the LEPS Collaboration in 2010 [8] had never been well reproduced, except in Ref. [21]. But in Ref. [21], the structures of the angular distributions exhibited by the data are missed due to the lack of nucleon resonances in the employed Reggeized model.

FIG. 7 caption (fragment): ... Fig. 4. Data are taken from the CLAS Collaboration [10] but not included in the fits.
In the present work, we found that, to obtain a satisfactory description of the data on both differential cross sections and photon-beam asymmetries of γp → K+Λ(1520), the exchange of at least one nucleon resonance in the s channel needs to be introduced in constructing the reaction amplitudes. The required nucleon resonance could be either the N(2060)5/2− or the N(2120)3/2−, both evaluated as three-star resonances in the most recent version of RPP [5]. In the fit with the N(2060)5/2− resonance, the contributions of the resonance exchange are found to be rather important for the cross sections and, in particular, they are responsible for the sharp rise of the cross sections near the K+Λ(1520) threshold, as can be seen in Fig. 7. In the fit with the N(2120)3/2− resonance, the contributions of the resonance exchange, although much smaller than those of the interaction current, are still considerable. In Refs. [11-18,21], the t-channel K* exchange is found to provide negligible contributions. In Refs. [19,20], it is reported that the contributions of the t-channel K* exchange to the cross sections are considerable. In our present work, the contributions of the t-channel K* exchange are negligible in the fit with the N(2060)5/2− resonance and considerable in the fit with the N(2120)3/2− resonance. As for the u-channel Λ exchange, important contributions are reported in Refs. [15-20], while in the present work considerable contributions from this term are seen only in the fit with the N(2120)3/2− resonance. From Figs. 4−7 one sees that the fit with the N(2060)5/2− resonance (fit A) and the fit with the N(2120)3/2− resonance (fit B) describe the data on differential cross sections and photon-beam asymmetries for γp → K+Λ(1520) almost equally well. In Fig. 8, we show the predictions for the target nucleon asymmetries (T) from fit A (black solid lines) and fit B (blue dashed lines) at two selected center-of-mass energies. One sees that, unlike the differential cross sections and the photon-beam asymmetries, the target nucleon asymmetries predicted by fit A and fit B are quite different. Future experimental data on target nucleon asymmetries are expected to be able to distinguish between fit A and fit B of the present work and to further clarify the resonance content, the resonance parameters, and the reaction mechanism of the γp → K+Λ(1520) reaction.
IV. SUMMARY AND CONCLUSION
The photoproduction reaction γp → K+Λ(1520) is of interest since the K+Λ(1520) final state has isospin 1/2, excluding the contributions of the ∆ resonances from the reaction mechanisms, and since the K+Λ(1520) threshold lies at 2.01 GeV, making this reaction more suitable than π production reactions for studying the nucleon resonances in a less-explored, higher resonance mass region.
Experimentally, data for γp → K+Λ(1520) on differential cross sections, total cross sections, and photon-beam asymmetries are available from several experimental groups [6-10], with the photon-beam-asymmetry data coming from the LEPS Collaboration [8] and the most recent differential and total cross-section data coming from the CLAS Collaboration [10].
Theoretically, the cross-section data for γp → K+Λ(1520) have been analyzed by several theoretical groups [11-20] within effective Lagrangian approaches, and the photon-beam-asymmetry data [8] have been reproduced only in Ref. [21] within a Reggeized framework. In the latter, the apparent structures of the angular distributions exhibited by the data are missing due to the lack of nucleon resonances in the s-channel interactions of the Regge model. The common feature reported for the γp → K+Λ(1520) reaction in these publications is that the contributions from the contact term and the t-channel K exchange are important for the cross sections of this reaction. Nevertheless, the reaction mechanisms of γp → K+Λ(1520) claimed by different theoretical groups are quite different. In particular, there are no conclusive answers to the questions of whether the contributions from the t-channel K* exchange and the u-channel Λ exchange are significant, whether the introduction of nucleon resonances in the s channel is indispensable for describing the data, and, if so, which resonance content and parameters are needed in this reaction.
In the present work, we performed a combined analysis of the data on both the differential cross sections and the photon-beam asymmetries for γp → K+Λ(1520) within an effective Lagrangian approach. We considered the t-channel K and K* exchanges, the u-channel Λ exchange, the s-channel nucleon and nucleon resonance exchanges, and the interaction current, with the last one constructed in such a way that the full photoproduction amplitudes satisfy the generalized Ward-Takahashi identity and, thus, are fully gauge invariant. The strategy for introducing the nucleon resonances in the s channel was to introduce as few nucleon resonances as possible to describe the data.
For the first time, we achieved a satisfactory description of the data on both the differential cross sections and the photon-beam asymmetries for γp → K+Λ(1520). We found that either the N(2060)5/2− or the N(2120)3/2− resonance needs to be introduced in constructing the s-channel reaction amplitudes in order to obtain a simultaneous description of the data on differential cross sections and photon-beam asymmetries for γp → K+Λ(1520). In both cases, the contributions of the interaction current and the t-channel K exchange are found to dominate the background. The s-channel resonance exchange is found to be rather important in the fit with the N(2060)5/2− resonance, and to be much smaller but still considerable in the fit with the N(2120)3/2− resonance. The contributions of the t-channel K* exchange and the u-channel Λ exchange are negligible in the fit with the N(2060)5/2− resonance and significant in the fit with the N(2120)3/2− resonance. The target nucleon asymmetries for γp → K+Λ(1520) are predicted; future experimental data on them are expected to test our theoretical models, to distinguish between the two fits with either the N(2060)5/2− or the N(2120)3/2− resonance, and to further clarify the reaction mechanisms of K+Λ(1520) photoproduction. | 9,994.6 | 2021-01-22T00:00:00.000 | [
"Physics"
] |
Evolution of the Kondo lattice and non-Fermi liquid excitations in a heavy-fermion metal
Strong electron correlations can give rise to extraordinary properties of metals with renormalized Landau quasiparticles. Near a quantum critical point, these quasiparticles can be destroyed and non-Fermi liquid behavior ensues. YbRh2Si2 is a prototypical correlated metal exhibiting the formation of quasiparticle and Kondo lattice coherence, as well as quasiparticle destruction at a field-induced quantum critical point. Here we show how, upon lowering the temperature, Kondo lattice coherence develops at zero field and finally gives way to non-Fermi liquid electronic excitations. By measuring the single-particle excitations through scanning tunneling spectroscopy, we find the Kondo lattice peak displays a non-trivial temperature dependence with a strong increase around 3.3 K. At 0.3 K and with applied magnetic field, the width of this peak is minimized in the quantum critical regime. Our results demonstrate that the lattice Kondo correlations have to be sufficiently developed before quantum criticality can set in.
Heavy fermion materials, i.e. intermetallics that contain rare earths (REs) like Ce, Sm, and Yb or actinides like U and Np, are model systems for studying strong electronic correlations 1,2. The RE-derived localized 4f states can give rise to local magnetic moments which typically order (often antiferromagnetically) at sufficiently low temperature as a result of the inter-site Ruderman-Kittel-Kasuya-Yosida interaction. In addition, the on-site Kondo effect causes a hybridization between the 4f and the conduction electrons, which eventually screens the local moments by developing Kondo spin-singlet many-body states. Hence, these two interactions directly compete with each other and lead to different (long-range magnetically ordered vs. paramagnetic Fermi-liquid) ground states 3. A zero-temperature transition between the two states can be controlled through doping, pressure, or magnetic field H. A quantum critical point (QCP) and concomitantly non-Fermi liquid properties ensue if the phase transition is continuous at zero temperature 4-6.
Heavy fermion metals have been established as a canonical setting for quantum criticality 2 . How the Kondo lattice coherence develops upon lowering the temperature, i.e. the hierarchy of energy scales, is, however, still a matter of debate. Intuitively, the coherence temperature T coh is set by the single-ion Kondo temperature T K of the lowest-lying crystal-field level 7 and can be further reduced by disorder 8 , while within another model T coh can exceed T K considerably 9,10 . The latter model might be related to the influence of crystalline electric field (CEF) effects 2,11 . Considerable experimental efforts have recently been devoted to the study of the quantum critical regime at sufficiently low temperatures. A key observation is that quantum criticality induces a large entropy, suggesting that it is linked with the Kondo effect. This raises the important question 12 as to how the onset of Kondo lattice coherence at elevated temperatures connects with the emergence of quantum criticality at low temperatures.
The prototypical heavy fermion metal YbRh2Si2 shows an antiferromagnetic (AFM) ground state with a very low Néel temperature, T_N = 70 mK, and a QCP upon applying a relatively small field μ0H_N = 0.66 T parallel to the tetragonal c-axis. Non-Fermi liquid behavior has been observed in the quantum critical regime (i.e. at finite field), extending up to temperatures of about 0.5 K 13, depending on the physical quantity that is measured as well as the degree of disorder 14; see the T-H phase diagram in Fig. 1. Isothermal magnetotransport 15,16 and thermodynamic 17 measurements at low temperatures have provided evidence for the existence of an additional low-energy scale T*(H), which has been interpreted as the finite-temperature manifestation of the critical destruction of the lattice Kondo effect 18 and the concomitant zero-temperature jump of the Fermi surface from large to small across the QCP. Measurements of the thermal and magnetic Grüneisen ratio strongly support this picture 19,20. An ever-pressing issue, however, is the huge specific heat coefficient even in zero magnetic field 14,21, which implies an abundance of fluctuations. Below T_N, these are of Fermi-liquid type. Above T_N, an obvious cause of these fluctuations are dynamical Kondo correlations, and above ~0.5 K YbRh2Si2 at zero field belongs to the quantum-critical fluctuation regime 13. Yet, alternative scenarios have been proposed as well 22-24.
Scanning tunneling spectroscopy (STS) measures locally the density of states (DOS) 25 through single-particle excitations 7,26,27. Spectra obtained at temperatures T ≥ 4.6 K and H = 0 revealed the successive depopulation of the excited CEF states as the temperature is lowered, with essentially only the lowest crystal-field Kramers doublet occupied at the lowest temperatures 7. The coupling between the localized 4f electrons in this Kramers doublet and the conduction electrons gives rise to periodic Kondo-singlet correlations which start to develop below T_coh. This coherence temperature is linked to the effective single-ion Kondo temperature T_K ≈ 25 K extracted from bulk measurements 28. While these properties conform to the traditional understanding of the high-temperature behavior of the Kondo lattice 29,30, the question remains open as to how the Kondo coherence evolves further upon lowering the temperature 13,31,32 and in applied field (green arrows in Fig. 1) and, importantly, how it connects with quantum criticality.
We therefore measure STS down to 0.3 K and in applied magnetic fields up to 12 T, complemented by magnetotransport and thermopower measurements on identical YbRh 2 Si 2 samples. We find that lattice Kondo correlations dominate only at temperatures about an order of magnitude below the single-ion Kondo temperature. Substantial lattice Kondo correlations are a prerequisite for quantum criticality to set in.
Results
Temperature evolution of tunneling spectra down to 0.3 K. Tunneling conductance curves dI/dV = g(V, T) obtained over a wide range of temperatures are presented in Fig. 2a. Both the peaks due to the CEF splitting of the Yb3+ multiplet (marked by black dots in Fig. 2a) and the conductance dip at zero bias (V = 0) result from single-ion Kondo physics 7. Specifically, the latter signifies the hybridization between the 4f and conduction electrons.

FIG. 1 caption (fragment): ... and T_K involve all (purple shading) and the lowest-lying (white) crystal electric field levels, respectively. The lattice Kondo effect starts to develop around T_coh ≈ T_K. The Kondo-exchange interaction between the two types of spins, belonging, respectively, to the local moments and the conduction electrons, gives rise to Kondo correlations in the spin-singlet channel, which are always dynamical at finite temperatures. The lattice Kondo effect (gray arrow) grows as the temperature is decreased. At large magnetic fields, lowering the temperature eventually turns the short-lived lattice Kondo correlations into long-lived ones (brown region), indicating a heavy Fermi liquid with a renormalized (large) Fermi surface well below T_FL. For small magnetic fields the correlations stay dynamical. Here, antiferromagnetic (AFM) order (blue region) develops below the Néel temperature T_N, again with long-lived lattice Kondo correlations. The reddish regime embedding the T*(H) crossover line indicates incoherent quantum critical fluctuations as the system evolves towards the respective ground state on either side 15-17. This scale, anchored by the QCP, marks the finite-temperature signature of the Mott-type phase transition at T = 0, additionally visualized by the red bars corresponding to the width of the crossover in the Hall effect 16. The green arrows indicate the parameters used in the STS measurements.

The most striking feature, however, is the evolution of the peak at about −6 mV (red arrow in Fig. 2a). This peak initially develops below 30 K, but clearly dominates the spectra only for T ≲ 3.3 K.
We now focus on this low-temperature regime, T ≤ 3.3 K (Fig. 2b). These data were obtained on the surface shown in the inset, where the topography over an area of 20 × 10 nm² is presented. Such a topography not only attests to the excellent sample quality but is also indicative of Si termination (see Supplementary Note 1 and Supplementary Figs. 1, 2). This termination is pivotal to our discussion, as it implies predominant tunneling into the conduction electron states. A hint toward the origin of the −6 mV peak comes from renormalized band structure calculations 33: a partially developed hybridization gap is seen in the quasiparticle DOS at slightly smaller energy. Here, as a result of the renormalization, the 4f band is shifted close to the Fermi level. Since tunneling spectroscopy on Si-terminated surfaces primarily probes conduction electrons and the total number of electrons must remain constant, the hybridization gap in the 4f band is seen as a peak in our tunneling spectroscopy. On the other hand, a multi-level, finite-U non-crossing approximation (NCA) described our temperature-dependent tunneling spectra away from the energy range of this peak reasonably well 7, but gave no indication of the existence of a peak at −6 mV. Since the NCA does not include intersite Kondo correlations, it is very reasonable to assume that this peak results from a strong development of lattice coherence, i.e. the lattice Kondo effect; it will be referred to as the Kondo lattice peak. The bulk nature of the −6 mV peak is supported by comparison with bulk transport measurements, as discussed below.
An analysis of the Kondo lattice peak is impeded by the strongly temperature-dependent zero-bias dip close by (cf. Fig. 2). Data g(V, T ≳ 30 K) for −15 mV ≤ V ≤ −3 mV can be well approximated by a parabola and, hence, we assume a parabola to describe the background below the Kondo lattice peak at low temperature; see the example for T = 0.3 K in Fig. 3a. There are finite energy ranges on both sides of the peak feature that allow a parabola to be fitted, cf. arrows in Fig. 3a. After background subtraction, each peak can be well described by a Gaussian (lines in Fig. 3b), from which its height and width (full width at half maximum, FWHM) are extracted. Note that the peak position in energy is independent of temperature (Fig. 3b).

FIG. 3 caption (fragment): a ... Arrows indicate the onset of deviations between data and parabola. b Examples of g(V, T, H = 0) data after background subtraction (hollow markers; data sets at T < 5.5 K are offset). Data can be well described by Gaussians (lines). c Height (circles) and width (FWHM, crosses) of the peak at −6 mV after normalizing all g(V, T) curves at −80 mV. At T_P, indicated by the upward arrow, peak height and width change significantly. Results from different samples cause several markers to overlap. Dashed lines are guides to the eye. d Relative depth of the single-ion Kondo dip at zero bias. Low-T data were obtained on several surfaces of two different samples, data at T ≥ 5 K from ref. 7. The upward arrow indicates T_P (as in c), the downward arrow T_K. Dashed line is a logarithmic fit to the data as proposed in ref. 34.

Clearly, both the peak height and the FWHM exhibit a significant change across T_P ≈ 3.3 K (Fig. 3c). In contrast, the dip in the zero-bias conductance, the hallmark of the single-ion Kondo effect, smoothly continues to deepen (Fig. 3d; for data on a linear T-scale see Supplementary Fig. 4). Here, the depth of the zero-bias dip is defined as 1 − [g(V = 0, T)/g(V = −80 mV, T)] 7. This depth decreases logarithmically for 10 K < T ≤ 120 K, i.e. around T_K, as predicted by dynamical mean field theory 34.
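The two-step analysis described above (a parabolic background fitted outside the peak window, then a Gaussian fitted to the background-subtracted spectrum) can be sketched as follows; the synthetic spectrum, the fit windows, and all parameter values are our illustrative assumptions, not the authors' analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def parabola(v, a, b, c):
    return a * v**2 + b * v + c

def gaussian(v, height, center, fwhm):
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return height * np.exp(-0.5 * ((v - center) / sigma)**2)

# Synthetic g(V) spectrum: parabolic background plus a peak near -6 mV
v = np.linspace(-15.0, -3.0, 120)                      # bias in mV
g = parabola(v, 0.002, 0.05, 1.2) + gaussian(v, 0.15, -6.0, 2.5)
g += np.random.normal(0.0, 0.003, v.size)              # measurement noise

# Step 1: fit the background only outside an assumed peak window
mask = (v < -9.0) | (v > -3.5)
bg_par, _ = curve_fit(parabola, v[mask], g[mask])

# Step 2: fit a Gaussian to the background-subtracted data
residual = g - parabola(v, *bg_par)
(height, center, fwhm), _ = curve_fit(
    gaussian, v, residual, p0=[residual.max(), -6.0, 2.0])
print(f"height = {height:.3f}, center = {center:.2f} mV, "
      f"FWHM = {abs(fwhm):.2f} mV")
```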
Comparison to magnetotransport and thermopower measurements. While this temperature evolution of the single-particle spectrum is surprising, it connects well with the features that appear in bulk transport measurements 14-17,35,36. Importantly, Fig. 4 shows that the thermopower divided by temperature, −S/T, has a qualitatively similar temperature dependence to the height of the STS Kondo lattice peak. Both display a plateau below about 7 K and a subsequent strong increase upon lowering the temperature below T_P ≈ 3.3 K. Here, T_P is defined as the temperature at which the −6 meV peak strongly develops. In the zero-temperature limit, a Fermi liquid is characterized by a constant value of S/T. For a Kondo lattice system, this is expected to be seen at very low temperatures, i.e., once the renormalized band structure is almost fully developed 37. In fact, for YbRh2Si2 heavy Fermi-liquid behavior was observed beyond the QCP: at fields μ0H = 1 T, the coefficient −S/T reaches ≈7 μV/K² for temperatures up to ~0.5 K 35, indicative of a very large effective charge carrier mass. The plateau in S/T seen in Fig. 4 occurs at a value almost an order of magnitude smaller and extends to a correspondingly higher temperature (see also Supplementary Note 5). This indicates a medium-heavy Fermi liquid, i.e. prevailing Kondo-lattice correlations. Moreover, the nearly logarithmic increase in S/T below T_P resembles that of the Sommerfeld coefficient γ of the electronic specific heat 14 and is a clear signature of non-Fermi liquid behavior 35. Therefore, the comparison of our STS results with those of S/T naturally leads us to propose that the incipient saturation of the Kondo lattice peak height below about 7 K (Fig. 4) signifies prevailing Kondo-lattice correlations and, importantly, that the growth of this peak below T_P, as well as the concomitant drop of the peak width (Fig. 3c), captures the quantum critical behavior. This leads to the insight that quantum criticality does not arise before there is a sufficient buildup of lattice Kondo correlations (see Supplementary Note 4 and Supplementary Fig. 5), or conversion of the local 4f electron spins into extended quasiparticle-like, but still incoherent, excitations.
To illustrate this point further, the Hall mobility μ_H = R_H/ρ_xx as a function of temperature is also plotted in Fig. 4, right inset (R_H itself is compared to S/T in Supplementary Fig. 6). In the regime where the anomalous Hall effect dominates, this quantity has been considered as capturing the buildup of the on-site Kondo resonance 38. It is striking that the Hall mobility also shows a strong increase upon lowering the temperature. Yet, the Hall mobility does not show any plateau near 3 K, and neither do the resistivity or the Sommerfeld coefficient as a function of temperature 14. This implies that T_P ≈ 3.3 K is not an ordinary Fermi liquid scale. The connection between the growth of the Hall mobility and quantum criticality becomes evident when we analyze its inverse, 1/μ_H = ρ_xx/R_H, which is equivalent to the cotangent of the Hall angle, cot θ_H, as a function of temperature. In YbRh2Si2, a power-law behavior of 1/μ_H, more specifically 1/μ_H ~ T², is observed for 0.5 K ≲ T ≲ 5 K (see Supplementary Fig. 7). Such a behavior, as well as the T-linear electrical resistivity seen in relevant parts of the phase diagram of YbRh2Si2 14, has also been observed, e.g., in the cuprate high-Tc superconductors 39.
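The power-law check mentioned above amounts to extracting an exponent from cot θ_H = ρ_xx/R_H on a log-log scale. A minimal sketch with invented numbers, chosen to follow a T² law (these are not the measured data):

```python
import numpy as np

# Hypothetical (T, rho_xx, R_H) values; purely illustrative numbers.
T      = np.array([0.5, 1.0, 2.0, 3.0, 5.0])      # temperature in K
rho_xx = np.array([0.08, 0.3, 1.2, 2.7, 7.5])     # resistivity, arb. units
R_H    = np.ones_like(T)                          # Hall coefficient, arb. units

cot_theta_H = rho_xx / R_H                        # = 1/mu_H

# Power-law exponent n from a linear fit in log-log space: 1/mu_H ~ T^n
n, log_prefactor = np.polyfit(np.log(T), np.log(cot_theta_H), 1)
print(f"fitted exponent n = {n:.2f}")             # n close to 2 here
```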
Evolution of tunneling spectra in magnetic fields. To search for more direct STS evidence for quantum criticality in the H-T phase diagram of YbRh2Si2, the system was tuned by a magnetic field at T = 0.3 K ≈ 0.1 T_P, i.e. where coherent lattice effects clearly dominate. Some g(V, H, T = 0.3 K) curves are presented in Fig. 5a. No major change in the overall shape of the spectra with magnetic field is observed. The Kondo lattice peak can again be described by a Gaussian after parabolic background subtraction (Fig. 5b). Within the energy resolution of our STM, the peak's position in energy is independent of H. The resulting FWHM of the peak as a function of H is presented in Fig. 5c. We note that the FWHM at low T and fixed H varies very little between different spectra, and even different samples, i.e. <4% (see also Fig. 3c, where several data points of the FWHM fall on top of each other). This is taken as the error of the FWHM and determines the size of the error bars in Fig. 5c. Moreover, a comparison between the data and the Gaussian fit in Fig. 5b reveals only slightly enhanced noise of g(V, H, T = 0.3 K) at elevated fields compared to zero field. Consequently, the trend displayed in Fig. 5c appears genuine.
At a field of μ0H = 1 T, the Kondo lattice peak FWHM exhibits a minimum, with a reduction of about 15% relative to its high-field value. This field is approximately equal to the value μ0H* ≈ 1.3 T at which the Hall crossover takes place at T = 0.3 K for H||c (red cross in Fig. 5c; for the field direction see Supplementary Note 6). The range in magnetic field over which the Hall crossover is observed 16 is indicated by a red arrow in Fig. 5c. This implies that changes in g(V, H, T = 0.3 K) are to be expected within a similar field range, as indeed suggested by the drop in peak width vs. H at T = 0.3 K (see also Supplementary Note 7 and Supplementary Fig. 8). Note that at this low temperature Kondo lattice effects are dominating. In this regime, the observed drop of the peak width at μ0H = 1 T indicates a reduced quasiparticle weight and follows the behavior expected for a critical slowing down, as concluded from isothermal magnetotransport (Hall coefficient, R_H, and magnetoresistance, ρ_xx) measurements 15,16, which revealed thermally broadened jumps at H*(T). One may therefore expect that the drop in peak width will further increase and sharpen upon cooling (cf. Fig. 1). In this view, all our findings reflect the finite-temperature remnant of a field-induced QCP at T = 0. Data from specific-heat measurements on YbRh2Si2 in magnetic field 40 confirm this assignment (cf. Supplementary Fig. 8).

FIG. 4 caption (fragment): Low-temperature data (T ≤ 6 K) were taken from ref. 35. Left inset: same S/T data on a logarithmic scale to show a broader range. Right inset: Hall mobility μ_H vs. T. All three properties exhibit a strong upturn below T_P ≈ 3.3 K and saturation at lowest T.

They yield a relative change of the Sommerfeld coefficient between critical (H*) and elevated fields of order 30% at T = 0.3 K, if scaled for the relevant field orientation. We believe that the larger change in the Sommerfeld coefficient compared to the drop in the FWHM of the STS Kondo lattice peak (Fig. 5c; about 15% compared to the value at 9 T, at which YbRh2Si2 is almost in the Fermi liquid regime 41) is related to the fact that heat capacity integrates over the whole Brillouin zone while STS is a more directional measurement. For a surface along the a-b plane (Fig. 2), tunneling along the c-direction is most relevant, yet the hybridization of the Yb CEF ground-state orbitals is anisotropic 33, occurring mostly with the Rh 4d(x²−y²) orbitals.
Remarkably, the FWHM at zero field falls in line with its trend at high fields, μ0H ≥ 3.5 T, i.e., there is no significant difference at T = 0.3 K between the two sides of the QCP. While the presented STS data on their own do not allow us to distinguish between quantum critical scenarios, they are in good agreement with isothermal magnetotransport data. Even at a temperature as low as ~0.5 K, the Hall crossover is expected to reach all the way to H = 0 13. By analogy, the peak width in STS at H = 0 should be close to the one extrapolated from higher fields, where a large Fermi surface constitutes the heavy Fermi liquid. Crossing the T*-line at temperatures as high as about half a kelvin, there is still a dominating contribution of the large Fermi surface to the quantum-critical fluctuations even at zero field 42. Upon cooling, this contribution of the large Fermi surface at H = 0 is expected to decrease 13. To establish this trend further, lower temperatures for our STS measurements are clearly called for. We note that Lifshitz transitions and Zeeman splitting can be ruled out as origins of the drop in the peak's FWHM (see Supplementary Note 5).
Discussion
Our STS studies here have revealed two important insights. One is that the development of the dynamical lattice Kondo correlations in a stoichiometric material such as YbRh2Si2, while setting in at T_coh ≈ T_K, extends to considerably lower temperatures and dominates the material's properties only at much lower temperatures (see Supplementary Note 4). In the case of YbRh2Si2, the STS Kondo lattice peak height and the thermopower coefficient do not indicate dominant lattice Kondo correlations before the temperature has reached T_P ≈ 0.1·T_coh. Moreover, the conductance minimum at zero bias, which has been shown to capture primarily the on-site Kondo (i.e. hybridization) effect at temperatures T ≳ 5 K 7, also continues to deepen down to the lowest measured temperature, as shown in Fig. 3d. Conversely, the strengthening of the lattice Kondo coherence only well below T_K implies that the on-site Kondo effect dominates many thermodynamic and transport properties at around and below T_coh in YbRh2Si2, and gives way to the lattice Kondo correlations only slowly upon reducing the temperature. Such a persistence of this distinct signature of the single-ion Kondo effect down to temperatures substantially below T_coh is consistent with observations based on different transport 37,38 and thermodynamic 8,43 properties of several other heavy-fermion metals. On the one hand, this provides a natural explanation for the applicability of single-ion-based descriptions at temperatures well below T_K, even though they neglect lattice Kondo coherence effects 7,37,38. On the other hand, this finding nicely supports the theoretical concept of two temperature scales, i.e. a single-ion and a lattice Kondo scale 29,30, including the predicted order-of-magnitude difference 30.
The second lesson concerns the link between the development of the dynamical lattice Kondo correlations and quantum criticality. As a function of temperature, our measurements of the height and width of the Kondo lattice peak strongly suggest that, in order for quantum criticality to set in, the lattice Kondo correlations first have to develop sufficiently upon lowering the temperature through, and well below, T_K ≈ T_coh ≅ 30 K.

FIG. 5 caption (fragment): b Tunneling conductance data of a after parabolic background subtraction (markers), as described in Fig. 3a. Lines are the corresponding Gaussian fits; fields and color scheme as in a. c FWHM of the Kondo lattice peak for different magnetic fields at T = 0.3 K. At this temperature and field orientation, the energy scale T* (cf. Fig. 1) ...

More specifically, as the temperature is lowered through T_coh, both the Kondo lattice peak height and the thermopower coefficient first reach a plateau below about 7 K, signifying well-developed lattice Kondo correlations. It is against this backdrop that the Kondo lattice peak height and S/T markedly increase below T_P ≈ 3.3 K. This manifests quantum criticality at the level of the single-particle spectrum, going considerably beyond the quantum critical behavior seen in the divergent Sommerfeld coefficient of the electronic specific heat and the linear-in-T electrical resistivity 14. This signature of quantum criticality at the single-particle level is complemented by the isothermal behavior of the Kondo lattice peak with respect to the control parameter, the magnetic field, at the lowest measured temperature, T ≈ 0.3 K. The FWHM of this peak displays a minimum at a field value similar to that at which isothermal transport and thermodynamic measurements show a Fermi surface crossover 15-17, indicating its relation to quantum criticality. To put these findings into perspective, our comparative studies indicate an appealingly natural scenario: the development of lattice Kondo correlations is the prerequisite for quantum criticality. Only if the Kondo lattice is sufficiently established can quantum critical fluctuations evolve. As such, the insights gained in our study will likely be relevant to the non-Fermi liquid phenomena in a broad range of other strongly correlated metals, such as the high-Tc cuprates and the organic charge-transfer salts, which are typically in proximity to Mott insulating states and in which quantum criticality is often observed 44-46.
Methods
Sample characterization. High-quality single crystals of YbRh2Si2 were grown by an indium-flux method; they grow as thin platelets with a height of 0.2-0.4 mm along the crystallographic c-direction (see also Supplementary Note 6). The crystalline quality and orientation of the single crystals were confirmed by x-ray and Laue investigations, respectively. The residual resistivity ρ0 of the six samples investigated here ranged between 0.5 and 0.9 μΩ cm, with no apparent differences in their spectroscopic results. The samples were cleaved in situ perpendicular to the crystallographic c direction at temperatures of ~20 K. Subsequent to cleaving, the samples were constantly kept under ultra-high vacuum (UHV) conditions and did not exhibit any sign of surface degradation for at least several months, as indicated by STM re-investigation.
Scanning tunneling microscopy and spectroscopy. STM and STS were conducted (using a cryogenic STM made by Omicron Nanotechnology) at temperatures between 0.3 and 6 K, in magnetic fields μ0H ≤ 12 T (applied parallel to the crystallographic c direction), and under UHV conditions (p < 2·10⁻⁹ Pa). Spectroscopic measurements were conducted using a lock-in technique with V_rms = 0.2 mV. For the tunneling spectra shown, g(V, T) data were averaged over areas of 1 × 1 nm² on grids of 24 × 24 points. In zero magnetic field, the averaging area was repeatedly varied between zero (i.e. spectroscopy repeated at a given point) and 5 × 5 nm² to ensure local homogeneity of the g(V, T) data. For the temperature range 4.6 K ≤ T ≤ 120 K, a second UHV STM (LT-STM) was utilized (p ≤ 3·10⁻⁹ Pa).
Thermopower measurements. The thermopower S was measured by applying a temperature gradient to a rod-shaped sample of dimensions 4 × 0.5 × 0.1 mm³ from the same batch as the samples used in the STM/S measurements. For low temperatures, 0.03 K ≲ T ≲ 6 K, a home-built, dilution refrigerator-based setup was used, while measurements between 2 and 360 K were conducted in a PPMS (Quantum Design Inc.). The overlap of the two temperature ranges between 2 and 6 K serves as a consistency check. Thermopower data in the high-temperature range compare nicely with those obtained earlier 47. Hall effect measurements (see ref. 48 for details) were conducted on the same sample as the thermopower measurements.
Data availability. The data that support the findings of this study are available from the corresponding author upon request. | 6,281.8 | 2017-11-14T00:00:00.000 | [
"Physics"
] |
GENERALIZED CLASSES OF STARLIKE AND CONVEX FUNCTIONS OF ORDER α
We have introduced, in this paper, generalized classes of starlike and convex functions of order α by using fractional calculus. We then prove some subordination theorems, argument theorems, and various results on the modified Hadamard product for functions belonging to these classes. We also establish some properties of the generalized Libera operator defined on these classes of functions.
INTRODUCTION.
Let $A$ denote the class of functions of the form
$$f(z) = z + \sum_{n=2}^{\infty} a_n z^n \tag{1.1}$$
which are analytic in the unit disk $U = \{z : |z| < 1\}$. Furthermore, let $S$ denote the subclass of $A$ consisting of all univalent functions. A function $f(z)$ of $S$ is said to be starlike of order $\alpha$ if
$$\operatorname{Re}\left(\frac{zf'(z)}{f(z)}\right) > \alpha \tag{1.2}$$
for some $\alpha$ ($0 \le \alpha < 1$) and for all $z \in U$. We use $S^*(\alpha)$ to denote the class of all starlike functions of order $\alpha$. Similarly, a function $f(z)$ belonging to $S$ is said to be convex of order $\alpha$ if we replace (1.2) by
$$\operatorname{Re}\left(1 + \frac{zf''(z)}{f'(z)}\right) > \alpha. \tag{1.3}$$
We use $K(\alpha)$ to denote the class of all convex functions of order $\alpha$. Note that $f(z) \in K(\alpha)$ if and only if $zf'(z) \in S^*(\alpha)$, and that $S^*(\alpha) \subseteq S^*(0) \equiv S^*$, $K(\alpha) \subseteq K(0) \equiv K$, and $K(\alpha) \subset S^*(\alpha)$ ($0 \le \alpha < 1$). The classes $S^*(\alpha)$ and $K(\alpha)$ were introduced by Robertson [1], and studied subsequently by Schild [2], MacGregor [3], Pinchuk [4], and others.
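As a concrete illustration of condition (1.2) (a standard textbook example, not taken from this paper), consider the function $f(z) = z/(1-z)$:

```latex
% f(z) = z/(1-z) is starlike of order 1/2:
\[
  f(z) = \frac{z}{1-z}, \qquad
  \frac{z f'(z)}{f(z)} = \frac{1}{1-z}.
\]
% The Mobius map w = 1/(1-z) sends the unit disk U onto the
% half-plane Re(w) > 1/2, hence
\[
  \operatorname{Re}\!\left(\frac{z f'(z)}{f(z)}\right) > \tfrac{1}{2}
  \quad (z \in U),
  \qquad\text{so}\quad f \in S^{*}\!\bigl(\tfrac{1}{2}\bigr).
\]
```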
For our discussion, it is more convenient to use the following definitions, which were employed recently by Owa [12] and by Srivastava and Owa [13].
DEFINITION 1.1. The fractional integral of order $\lambda$ ($\lambda > 0$) is defined, for a function $f(z)$, by
$$D_z^{-\lambda} f(z) = \frac{1}{\Gamma(\lambda)} \int_0^z \frac{f(\zeta)}{(z-\zeta)^{1-\lambda}}\, d\zeta,$$
where $f(z)$ is an analytic function in a simply-connected region of the $z$-plane containing the origin, and the multiplicity of $(z-\zeta)^{\lambda-1}$ is removed by requiring $\log(z-\zeta)$ to be real when $z-\zeta > 0$.

DEFINITION 1.2. The fractional derivative of order $\lambda$ is defined, for a function $f(z)$, by
$$D_z^{\lambda} f(z) = \frac{1}{\Gamma(1-\lambda)} \frac{d}{dz} \int_0^z \frac{f(\zeta)}{(z-\zeta)^{\lambda}}\, d\zeta,$$
where $0 \le \lambda < 1$, $f(z)$ is an analytic function in a simply-connected region of the $z$-plane, and the multiplicity of $(z-\zeta)^{-\lambda}$ is removed as in Definition 1.1 above.
DEFINITION 1.3. Under the hypotheses of Definition 1.2, the fractional derivative of order $n+\lambda$ is defined by
$$D_z^{n+\lambda} f(z) = \frac{d^n}{dz^n}\, D_z^{\lambda} f(z) \qquad (0 \le \lambda < 1;\ n = 0, 1, 2, \ldots).$$
Also let $K(\alpha,\lambda)$ be the class of all functions $f(z)$ in $S$ such that $A(\lambda,f) \in S^*(\alpha,\lambda)$, for $\lambda < 1$ and $0 \le \alpha < 1$. We note that $S^*(\alpha,0) = S^*(\alpha)$ and $K(\alpha,0) = K(\alpha)$. Thus $S^*(\alpha,\lambda)$ and $K(\alpha,\lambda)$ are generalizations of the classes $S^*(\alpha)$ and $K(\alpha)$, respectively. The classes $S^*(\alpha,\lambda)$ and $K(\alpha,\lambda)$ were introduced by Owa [14]. Recently, Owa and Shen [15] proved some coefficient inequalities for functions belonging to the classes $S^*(\alpha,\lambda)$ and $K(\alpha,\lambda)$. Let $T$ be the subclass of $S$ consisting of all functions of the form
$$f(z) = z - \sum_{n=2}^{\infty} a_n z^n \qquad (a_n \ge 0). \tag{1.9}$$
The classes $T^*(\alpha,\lambda)$ and $C(\alpha,\lambda)$ were studied by Owa [14], and the special cases $T^*(\alpha,0)$ and $C(\alpha,0)$ were studied by Silverman [16]. Thus the classes $T^*(\alpha,\lambda)$ and $C(\alpha,\lambda)$ provide an interesting generalization of the ones considered by Silverman [16].
In Sections 2, 3, and 4, we shall prove several results for functions belonging to the generalized classes $S^*(\alpha,\lambda)$, $K(\alpha,\lambda)$, $T^*(\alpha,\lambda)$, and $C(\alpha,\lambda)$. We then introduce the class $S^*(\alpha,\lambda;a,b)$ of functions in Section 5. In the last section, we shall study a certain integral operator defined on $A$.
SUBORDINATION THEOREMS.
Let $f(z)$ and $g(z)$ be analytic in the unit disk $U$. Then we say that $f(z)$ is subordinate to $g(z)$, written $f(z) \prec g(z)$, if there exists a function $w(z)$, analytic in the unit disk $U$, which satisfies $w(0) = 0$, $|w(z)| < 1$, and $f(z) = g(w(z))$. In particular, if the function $g(z)$ is univalent in the unit disk $U$, then $f(z) \prec g(z)$ if and only if $f(0) = g(0)$ and $f(U) \subseteq g(U)$ (cf. [17], [18]).
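A minimal example of this definition (again a standard one, not from this paper): take $g(z) = z$, which is univalent in $U$, and $w(z) = z^2$:

```latex
% w(0) = 0 and |w(z)| = |z|^2 < 1 for z in U, so w is admissible;
\[
  f(z) = g\bigl(w(z)\bigr) = z^{2}, \qquad\text{hence}\qquad z^{2} \prec z .
\]
% Consistent with the univalence criterion: f(0) = g(0) = 0 and
% f(U) = U = g(U), so f(U) \subseteq g(U).
```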
In order to prove our first theorem, we require the following lemma due to Miller, Mocanu, and Reade [19].
PROOF. Note that the function $g(z)$ defined by [...] maps the unit disk $U$ onto the half-plane $\operatorname{Re}(w) > \alpha$. This implies, from the definition of the class $S^*(\alpha,\lambda)$, that [...]. Furthermore, the function $g(z)$ is analytic with $g'(0) = 2(1-\alpha)M \ne 0$ and [...]. Taking $\beta = 1$, $\gamma = 1$ in Lemma 2.1, we see that the function $g(z)$ satisfies the hypotheses of Lemma 2.1. Thus we have [...], which implies (2.5).
PROOF. Note that $f(z) \in K(\alpha,\lambda)$ if and only if $A(\lambda,f) \in S^*(\alpha,\lambda)$.
Consequently, on replacing $f(z)$ by $A(\lambda,f)$ in Theorem 2.1, we have Theorem 2.2.

ARGUMENT THEOREMS.

In this section, we derive the argument theorems for functions belonging to the classes $S^*(\alpha,\lambda)$ and $K(\alpha,\lambda)$.
THEOREM 3.1. Let the function $f(z)$ defined by (1.1) be in the class $S^*(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$). Then [...].

PROOF. In view of (2.6), we can write [...], where $w(z)$ is an analytic function in the unit disk $U$ satisfying $w(0) = 0$ and $|w(z)| < 1$. We note that the linear transformation [...] maps the disk $|w| \le |z|$ onto a disk. This completes the proof of Theorem 3.1.

COROLLARY 3.1. Let the function $f(z)$ defined by (1.1) be in the class $S^*(0,\lambda)$ ($\lambda < 1$). Then $|\arg f(z)| \le$ [...], where $A_0(\lambda,f)$ [...].

COROLLARY 3.2. Let the function $f(z)$ defined by (1.1) be in the class $S^*(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$). Then
$$\frac{1-(1-2\alpha)|z|}{1+|z|} \le \left|\frac{A(\lambda,f)}{f(z)}\right| \le \frac{1+(1-2\alpha)|z|}{1-|z|},$$
where $A(\lambda,f)$ is given by (1.8).
PROOF. The proof is clear from (3.5).
Moreover, it is easy to show the following.

THEOREM 3.2. Let the function $f(z)$ defined by (1.1) be in the class $K(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$). Then [...].

We recall here the following two lemmas due to Owa [14] before we state and prove the results of this section.
LEMMA 4.1. Let the function $f(z)$ be defined by (1.9). Then $f(z)$ is in the class $T^*(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$) if and only if the coefficient condition (4.3) holds for $0 \le \alpha < 1$ and $\lambda < 1$. The result (4.3) is sharp.
LEMMA 4.2. Let the function $f(z)$ be defined by (1.9). Then $f(z)$ is in the class $C(\alpha,\lambda)$ if and only if [...].

With the aid of Lemma 4.1, we can prove the following.

THEOREM 4.1. Let the functions $f_j(z)$ ($j = 1, 2$) defined by (4.1) be in the class $T^*(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$). Then the modified Hadamard product $f_1 * f_2(z)$ is in the class $T^*(\beta(\alpha,\lambda),\lambda)$, where $\beta(\alpha,\lambda)$ [... $(2-\alpha+\lambda)$ ...]. The result is sharp.
PROOF. We use a technique due to Schild and Silverman [20]. It is sufficient to prove that [...] for $\beta \le \beta(\alpha,\lambda)$. By using the Cauchy-Schwarz inequality, we know from (4.3) that [... $\Gamma(n+1)\,\Gamma(1-\lambda)$ ...]. Therefore we need to find the largest $\beta$ such that [...]. In view of (4.7), we observe that it suffices to find the largest $\beta$ such that [...]. We note that (4.10) gives [...], where [...]. Since, for fixed $\alpha$, [...]. Finally, by taking the functions $f_j(z)$ ($j = 1, 2$) defined by [...], we can see that the result is sharp.
By using the same technique and Lemma 4.2, we can also prove the following.
[... $7+3\lambda-3\lambda^2+\lambda^3$ ...] The result is sharp.

[...] which shows that $F(z)$ maps the unit disk $U$ onto a domain which is contained in the right half-plane. Hence we complete the proof of Theorem 5.1.

6. GENERALIZED LIBERA OPERATOR $J_c(f)$.

This operator $J_c(f)$, when $c$ is a natural number, was studied by Bernardi [21]. In particular, the operator $J_1(f)$ was studied by Libera [22], Livingston [23], and Mocanu, Reade, and Ripeanu [24]. It follows from (6.1) that [...]. In order to prove our theorem, we recall here the following lemma due to Jack [25].
LEMMA 6.1. Let $w(z)$ be regular in the unit disk $U$ with $w(0) = 0$. Then, if $|w(z)|$ attains its maximum value on the circle $|z| = r$ ($0 \le r < 1$) at a point $z_0$, we can write $z_0 w'(z_0) = m\,w(z_0)$, where $m$ is real and $m \ge 1$. Furthermore, we need the following lemma by Pascu [26].
LEMMA 6.2. If $f(z) \in S^*$, then $J_c(f) \in S^*$. With the aid of Lemmas 6.1 and 6.2, we prove the following.

THEOREM 6.1. Let the function $f(z)$ defined by (1.1) be in the class $S^*(\alpha,\lambda) \cap S^*$ ($0 \le \alpha < 1$; $\lambda < 1$). Then the functional $J_c(f)$ is in the class $S^*(\alpha,\lambda)$.
PROOF. Define the function $w(z)$ by [...]. Then $w(z)$ is a regular function in the unit disk $U$ with $w(0) = 0$.
[...] the class $C(\alpha,\lambda)$ ($0 \le \alpha < 1$; $\lambda < 1$) if and only if a coefficient condition involving $\Gamma(n+1)\,\Gamma(1-\lambda)$ holds. | 1,908.2 | 1985-01-01T00:00:00.000 | [
"Mathematics"
] |
Data pipeline for managing field experiments
As agricultural and environmental research projects become more complex, increasingly with multiple outcomes, the demand for technical support with experiment management and data handling has also increased. Interactive visualisation solutions are user-friendly and provide direct information to facilitate decision making with timely data interpretation. Existing off-the-shelf tools can be expensive and require a specialist to conduct the development of visualisation solutions. We used open-source software to develop a customised near real-time interactive dashboard system to support science experiment decision making. Our customisation allowed for:
• Digitalised domain knowledge via open-source solutions to develop decision support systems.
• Automated workflow that only executed the necessary components.
• Modularised solutions for low maintenance cost and upgrades.
Hardware: (1) Neutron probe soil moisture sensor; (2) GreenSeeker device; (3) Linux server. Software:
Introduction
Transforming data into information and then into actions often demands the integration of a large number of tools to organise, clean up, store, analyse, and visualise data [1,2]. The onset of technologies grouped under terms such as the Internet of Things (IoT) and 'big data' has ushered in an increased need for data-driven methodologies to study and manage farming systems [2,3]. The ever-increasing volume and variety of data demand particular computational skills and the use of a wide range of tools [4,5]. This also involves the integration of tools considering the specific needs of different disciplines as well as the requirements of different stages of data processing (e.g. data collection, analysis, and visualisation). A variety of frameworks to formalise this integration, as well as workflows and pipelines to link the many tools available, have been proposed [1,6-8], and many are in development. More recently, and in the same space, the concept of the 'digital twin' has been introduced. This involves combining measurements with simulation or machine learning models and data analysis to mimic in silico a given system (a plant, a farm, or even a whole sector). It also comprises real-time interactions with the actual system to develop a more holistic understanding as well as to control it [9,10]. This concept uses technologies such as IoT and requires similar tools and skills to handle large and varied datasets, but goes beyond visualising and understanding data, aiming to actively monitor the system and trigger management actions.
These continuous technological advancements constantly challenge how scientific work is planned, conducted, and communicated [5]. Research projects are becoming more complex over time, accounting for a wider range of interactions and requiring cross-discipline engagement [6,11,12]. In agricultural and environmental research, projects increasingly need to focus on multiple outcomes (e.g. productivity, quality, resource constraints, and impacts on the environment) and thus require tracking and analysing a wide range of variables. Field experiments and monitoring are commonly used in agricultural and environmental research, but they are often difficult and costly to conduct. Because of the complexity and variability of natural and agricultural environments, it is hard to control the conditions of field trials and to collect representative data in large volumes. The use of IoT technologies offers a promising prospect of increasing the amount and breadth of data collected, with higher spatial and temporal resolution and encompassing a wider number of variables [2,6]. Importantly, the ever-increasing complexity of experimentation and the volume of data being collected require support systems that can help researchers monitor trial performance, make decisions on managing trials, and analyse results [2,8].
Here, we present details of a pipeline to process, store, and examine data collected in field experiments at The New Zealand Institute for Plant and Food Research Limited (PFR). We describe the tools and techniques used to develop the pipeline and demonstrate its application in a field trial, where it helped manage irrigation and guide data collection. In particular, we deal with handling manually collected data and describe our approach to standardising and streamlining analyses. Our objective is to contribute towards an integrated IoT system to support scientific experiments. This includes the collection and manipulation of data to facilitate the management of trials and to aid the subsequent analysis and interpretation of results. We envisage that this methodology could be embedded into a 'digital twin' system for research at PFR.
Rationale
Conventionally, trial management and data verification are done by the domain scientists investigating the data themselves while conducting field experiments. However, with the ever-increasing complexity of agricultural and environmental research, this conventional approach faces three main challenges. Firstly, scientists are often under time pressure to produce analyses for guiding the operations team to take trial-management actions. This includes, for instance, turning irrigation on to prevent plants from suffering undue water deficiency, or collecting leachate samples following a sufficient amount of drainage. Secondly, experiments increasingly involve multiple objectives and multi-disciplinary teams, requiring more sophisticated forms of visualising complex information to ensure that everyone involved has an understanding of experiment status. Thirdly, details and data collection mishaps may be easily overlooked when the volume of data is large and urgent decision making is required. This may result in uncorrectable mistakes or delayed corrective actions that can lead to loss of information/data.
We have developed a workflow and implemented a customised pipeline for data manipulation and visualisation for use in scientific field experiments within our research institute. The basic principles of this workflow were: (1) engage with domain scientists and the operational team to clarify the science questions and the measurements required to answer those questions; (2) define techniques and plan the general sampling strategy ahead of the peak data collection period; (3) establish and standardise data entry tables and protocols with key stakeholders; (4) engage with domain scientists and the operational team to design a fit-for-purpose information workflow; and (5) use near real-time visualisation as the interface to interact with key stakeholders and collect feedback for continuous improvement.
We found that the implementation of customised data pipelines was accelerated by using these generic principles to guide the requirements-gathering process. These principles can be adapted for different types of science questions and, therefore, different data.
The current workflow focuses on handling soil water and plant cover data that are collected and entered into the pipeline manually [18].
Workflow description
A schematic of the general workflow is shown in Fig. 1. The workflow encompasses handling information from the essential data sources through to the visualisation solution (Grafana [13]). The decision-making phase was deliberately omitted because our aim was to assist the stakeholders in making informed decisions rather than to automate the decisions themselves. Nonetheless, expanding the pipeline further to include triggering alerts or management actions is plausible and is currently under development. With this machine-aided mindset, a planning workshop was held to structure the key metrics to visualise and to lay out the steps for achieving the target. A customised plan of data orchestration and manipulation was developed to achieve the following three tasks. (1) Data acquisition. The data acquisition script has two purposes. Firstly, it imports the soil water content (SWC), normalised difference vegetation index (NDVI), and irrigation data from Microsoft SharePoint into the R environment, and extracts the date information to instruct the application programming interface (API) to query the relevant weather data. Secondly, the script sends queries to the NIWA National Climate Database [14] and retrieves potential evapotranspiration (PET) and daily rainfall data. (2) Data transformation. Thirteen R functions were developed to capture and implement the domain knowledge needed to relate the different data sources to one another. In brief, the functions summarise raw soil moisture data and manual NDVI data to obtain the moisture status over the soil profile and estimate canopy coverage over the plant growth period. Next, we employed a modified water balance algorithm based on the work of Horne and Scotter (2016) [15] to combine irrigation, rainfall, PET, and NDVI data to predict soil moisture deficit and potential drainage. (3) Database update. A dedicated PostgreSQL database was set up by the IT department to store the summarised data, and an R function was written to upload tabulated data to the database for subsequent visualisation. Fig. 2 illustrates a simplified data pipeline that we implemented in R using the targets package [16] (a detailed pipeline diagram is in Supplementary Fig. 1). The data pipeline contains 15 functions that are stored in the "functions.R" script with their customisation details. These functions were customised to provide five main groups of functionality: data source connection, meteorological data retrieval, canopy data retrieval and processing, water balance calculation, and utilities. A detailed explanation follows.
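Before detailing the individual functions, the sketch below shows how such a pipeline can be declared with the targets package. The target and helper names are illustrative stand-ins only and do not correspond to the actual 15 functions in "functions.R".

```r
# _targets.R -- illustrative sketch of the pipeline structure; all helper
# functions are hypothetical and assumed to be defined in functions.R
library(targets)
source("functions.R")

list(
  tar_target(swc_raw,  read_sharepoint_excel("SWC.xlsx")),        # (1) data acquisition
  tar_target(weather,  query_niwa(extract_dates(swc_raw))),       # NIWA climate query
  tar_target(canopy,   daily_canopy(read_sharepoint_excel("NDVI.xlsx"))),
  tar_target(balance,  water_balance(swc_raw, weather, canopy)),  # (2) data transformation
  tar_target(uploaded, upload_postgres(balance))                  # (3) database update
)
```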
Pipeline implementation
(1) The data source connection group has two functions (functions 1 and 2). Function 1 is the Excel adaptor (Fig. 2). This function is responsible for accessing Excel files on the shared server, which is protected by Windows New Technology LAN Manager (NTLM) authentication. Credentials are stored in the R environment file in the user's private directory and sourced automatically by built-in R functions; this strategy was adopted to avoid manually typing credentials each time. The second function facilitates the PostgreSQL database connection and the uploading of summarised data (Supplementary Fig. 1), using the same authentication strategy. (2) Another function was written to retrieve the meteorological data (function 3). This function wraps the R package clifro [17] and automates the retrieval of rainfall and PET data based on the experiment duration. (3) The canopy data group has two functions (functions 4 and 5). Canopy data were stored in multiple Excel files on the Microsoft SharePoint server, and different instruments were used for canopy measurements because of practical issues during the experimental periods. Furthermore, it is essential to take into account the fallow period and the crop canopy development stage, because actual evapotranspiration is often less than the PET before full canopy closure. Moreover, canopy measurements were only performed fortnightly, or less frequently on some occasions. It was therefore necessary to standardise (function 15) and interpolate (function 8) the data to daily values. We made two critical assumptions when processing canopy measurements: firstly, that canopy development has a linear relationship with time between emergence and canopy closure; and secondly, that canopy closure is reached at an NDVI value of 0.75 for all crops. These assumptions were justified by field observations of the vegetable crops receiving adequate or surplus water (a sketch of this processing step follows this list). (4) The water balance calculation group has eight functions (functions 6 to 13). The critical function (function 13) contains the implementation of the water balance equations [15]. Functions 6 to 12 were designed to prepare the water-related data to meet the input requirements of function 13. These functions were modularised to obtain the essential soil moisture metrics. For example, a simple and practical way to estimate the field capacity across the soil profile is to assume that the field capacity soil moisture content is reached during the winter fallow period, when evaporation is minimal; function 12 was written to perform this estimation. (5) Two utility functions, functions 14 and 15, were used to modify time variables and perform lag time calculations, respectively.
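As an illustration of the canopy processing described in item (3), the sketch below interpolates sparse NDVI readings to daily values and converts them to a canopy cover fraction under the two stated assumptions. The function name and arguments are ours, not the pipeline's actual functions 8 and 15.

```r
# Interpolate fortnightly NDVI to daily canopy cover, assuming linear canopy
# development and closure at NDVI = 0.75 (cover fraction capped at 1).
canopy_cover_daily <- function(dates, ndvi, closure_ndvi = 0.75) {
  daily <- seq(min(dates), max(dates), by = "day")
  # linear interpolation between the sparse measurement dates
  f <- approx(as.numeric(dates), ndvi, xout = as.numeric(daily))$y
  pmin(f / closure_ndvi, 1)  # 1 = full cover once NDVI reaches the closure value
}
```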
Another R script was used to trigger the targets data pipeline (CodeSnippets 1). This script is made available as an executable programme on the laboratory computers that host the interfaces to the data tables and other scientific instruments. The operations team members can therefore execute the programme immediately after they complete the manual data entry, and then evaluate the visualisation of all measurements to decide whether an action is required. CodeSnippets 1 - An example of using an R executable to call an R script that triggers the workflow:
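The content of CodeSnippets 1 is not reproduced here; a minimal trigger script consistent with its description might look like the following, where targets::tar_make() rebuilds only the outdated parts of the pipeline.

```r
# run_pipeline.R -- sketch of a trigger script (invoked e.g. via: Rscript run_pipeline.R)
source("renv/activate.R")  # assumption: the project library is managed by renv
targets::tar_make()        # rebuild outdated targets and refresh the database
```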
Deployment and operation
The main R scripts were developed in the RStudio (version 1.4.1717) environment. The R scripts were then executed on the laboratory computer with an environment file that contained credentials for file and database access.
One challenge of using R in a production environment is the reproducibility of the scripts across platforms and environments. We used the R package renv to tackle this challenge [19]. The key function of renv is to create a specific file, renv.lock, that records package dependencies and version numbers. Once the environment is created, it is possible to share the lightweight environment file and the associated folder via cloud Git hosts such as GitHub or Bitbucket, or even to copy them onto flash drives when internet access is limited.
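For reference, the core renv workflow described here amounts to three calls:

```r
renv::init()      # create a project-local library and an initial renv.lock
# ... develop and install packages as usual ...
renv::snapshot()  # record package dependencies and versions in renv.lock
# on another machine or platform:
renv::restore()   # reinstall the exact recorded versions
```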
Visualisation component
We used Grafana as the visualisation solution. This was mainly a time-saving decision: relying on an off-the-shelf user interface (UI) allowed development effort to be concentrated on making the data transformation scripts robust and reliable. Grafana is designed to visualise large volumes of time-series data with drag-and-drop visualisation components and built-in interactive features. Grafana was deployed as a service that is available to all internal users via a uniform resource locator (URL). The deployment was completed using the docker-compose command on a Linux server (CodeSnippets 2).
CodeSnippets 2 gives the YAML file used to instruct the deployment, and CodeSnippets 3 shows the actual command run on the Linux server to build the Docker container hosting Grafana; the -d flag configures the container to run in the background. We acknowledge that the Docker software on the Linux server may require extra effort from the IT department or the individual researcher to install and maintain. Alternatively, free desktop solutions for individual personal use can also be used; the desktop solution demands minimal deployment effort, but a downside is limited access for other developers. CodeSnippets 2 - The YAML file to instruct the deployment of Grafana as a service via the docker-compose command (detailed explanations are in the supplementary material):
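The production file is given in the supplementary material; a minimal compose file consistent with the description above (Grafana served on its default port 3000, matching the access URLs below) would be:

```yaml
# docker-compose.yml -- minimal sketch; the actual file is in the supplement
version: "3"
services:
  grafana:
    image: grafana/grafana
    restart: unless-stopped
    ports:
      - "3000:3000"                        # host:container, Grafana's default port
    volumes:
      - grafana-storage:/var/lib/grafana   # persist dashboards across restarts
volumes:
  grafana-storage:
```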
Once the container is deployed, it is accessible via the Linux server URL or localhost with the ports defined in the docker-compose file (Ports in CodeSnippets 2). More specifically, the access URLs take the following formats: (1) internal Linux server: http://<name or IP address>:3000/; (2) local machine: http://localhost:3000/. CodeSnippets 3 - The command and flag to deploy Grafana as a website application in a Docker container:
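The command itself is the standard Compose invocation, with -d detaching the container as noted above:

```sh
docker-compose up -d   # build/start the Grafana service in the background
```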
The Grafana application requires a minimum of two steps to display visualisations. Firstly, users must define a data source from which the application retrieves data. Secondly, users assemble the desired dashboard widgets using the drag-and-drop features. A detailed walk-through can be found in the supplementary material (Supplementary Information 1). Fig. 3 shows an example of the final visualisation for soil moisture deficit (SMD). The SMD was corrected in two ways. Firstly, the soil water balance was reset to the observed soil moisture content whenever an observation was available. Secondly, PET values were adjusted by the fraction of canopy cover (an index between 0 and 1, where 0 indicates no crop canopy cover, so a reduced PET value was used, and 1 represents full canopy cover, so the full PET value was used).
Calculated drainage events are shown in Fig. 4. Red symbols with the label prefix "N" represent events in the four different nitrogen treatments. In general, drainage events occurred either after precipitation exceeded 20 mm during the winter period (June to August, when PET is low) or after consecutive precipitation (over 20 mm) in spring. The interactive UI allows users to see the exact values, although this feature is omitted in the static demonstration. This information was highly valuable for deciding when to collect N leaching samples and was also used to help manage the irrigation. For the science team, visualisations like those of Fig. 3 were useful for verifying differences between treatments, as well as the variability among replicates, in near real-time, which supported discussions on experiment management, potential adjustments to the trials, and future analyses.
Conclusion and future work
We have implemented a customised targets pipeline, combining domain knowledge, a PostgreSQL database, and Grafana, to aid monitoring, decision making, and anomaly detection in field experiment management for a science project. The workflow successfully delivered near real-time data integration and visualisation. It has performed reliably, and the operations team found it useful for facilitating decisions on irrigation rates and sampling times and for verifying data quality. However, we acknowledge that there is room for improvement. For example, the pipeline has not yet been applied to large amounts of data (more than gigabytes), so its feasibility for the big data generated by automated collection remains unknown. Another possible improvement is to extend the capability towards IoT-driven automation, triggering irrigation events without human intervention. Regarding scientific quality, established domain knowledge bases could be incorporated to leverage the extensive understanding of soil solute movement mechanisms.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data Availability
The authors do not have permission to share data.
"Computer Science"
] |
An Improved Chicken Swarm Optimization Algorithm and its Application in Robot Path Planning
The chicken swarm optimization (CSO) algorithm is a highly effective swarm intelligence optimization algorithm with good performance in solving global optimization problems (GOPs). However, the CSO algorithm performs relatively poorly on complex GOPs owing to certain weaknesses that cause its iterations to fall easily into local minima. An improved chicken swarm optimization algorithm (ICSO) is proposed and applied to robot path planning. Firstly, an improved search strategy with Levy flight characteristics is introduced into the hen's location update formula, which helps to increase the perturbation of the proposed algorithm and the diversity of the population. Secondly, a nonlinear weight reduction strategy is added to the chick's position update formula, which may enhance the chick's self-learning ability. Finally, multiple sets of unconstrained functions are used, and a robot simulation environment is established, to test the ICSO algorithm. The numerical results show that, compared with particle swarm optimization (PSO) and basic chicken swarm optimization (CSO), the ICSO algorithm has better convergence accuracy and stability for unconstrained optimization and stronger search capability in robot path planning.
I. INTRODUCTION
Swarm intelligence optimization algorithms, such as the genetic algorithm (GA) [1], particle swarm optimization (PSO) [2], the bat algorithm (BA) [3], and the artificial bee colony (ABC) algorithm [4], are stochastic optimization algorithms constructed by simulating the swarm behavior of natural organisms. These algorithms search for the optimal solution of an optimization problem by simulating the physical laws of natural phenomena and the living habits and behavioral characteristics of various biological populations. They provide a new way to solve global optimization problems in fields such as computational science, engineering science, and management science, and have become a particularly important research hotspot.
The chicken swarm optimization (CSO) algorithm [5] is a stochastic search method based on chicken swarm foraging behavior, proposed by Meng et al. in 2014. In CSO, the whole chicken swarm is divided into several groups, each of which includes a rooster, a couple of hens, and several chicks. Different chickens follow different laws of motion. There is mutual learning and competition between different chickens, and the hierarchy of the chicken group is updated again after several generations of evolution. The CSO algorithm has great research potential because of its good convergence speed and convergence accuracy. However, like other swarm intelligence optimization algorithms, the basic chicken swarm optimization algorithm suffers from premature convergence: its iterations easily fall into local minima when solving large-scale, more complex optimization problems. Therefore, scholars have conducted in-depth research and proposed improved chicken swarm optimization algorithms, which have achieved good numerical results. For example, Fei and Dinghui [6] modified the position update formula of the chicks to avoid the iteration falling into local optima, but kept the position update formulas of the other two groups unchanged, which only improves the CSO partially. Chen et al. [7] updated the hen's position formula to improve the accuracy and effectiveness of the CSO algorithm, but their algorithm needs more running time to reach the optimal solution. Chiwen et al. [8] substituted an adaptive t-distribution for the Gaussian distribution in the rooster's position update formula and introduced an elite opposition learning strategy in the hen's position update formula; these algorithms achieved good global search ability. The chicken swarm optimization algorithm has also been applied in practical areas. For example, Tiana et al. [9] used the chicken swarm algorithm to solve the problem of deadlock-free migration for virtual machine consolidation; compared with other deadlock-free migration algorithms, the chicken swarm algorithm has a higher convergence rate. Shaolong et al. [10] solved the parameter estimation problem of nonlinear systems using the chicken swarm algorithm, and their numerical results show that it is feasible for this task. In this paper, an improved chicken swarm algorithm is proposed, in which a Levy flight strategy is added to the hen's position update formula to distribute the population more evenly, and a nonlinear weight reduction strategy is employed in the chick's position update formula to prevent premature convergence and improve the convergence precision of the proposed algorithm. The better convergence accuracy and higher convergence speed of the improved chicken swarm algorithm are verified by numerical experiments on 8 benchmark GOPs. The robot path planning experiments show that, compared with CSO, the proposed algorithm is more effective in improving search speed and path quality.
II. CHICKEN SWARM ALGORITHM
The unconstrained continuous optimization problem can be expressed as follows:

$$\min_{X \in \mathbb{R}^D} f(X). \tag{1}$$

If $X^* \in \mathbb{R}^D$ satisfies $f(X^*) \le f(X)$ for all $X \in \mathbb{R}^D$, then $X^*$ is a global optimal solution of problem (1). The chicken swarm optimization algorithm mimics the hierarchical order and food-searching behaviors of a chicken swarm. Under a specific hierarchical order, different chickens follow different laws of motion. The chicken swarm optimization algorithm uses the following four rules to idealize the behavior of chickens: 1) There are several subgroups in the chicken swarm. Each subgroup consists of a dominant rooster, a couple of hens, and chicks.
2) How the whole chicken swarm is divided into groups and the type of each chicken depend on the fitness values of the chickens themselves. In the whole swarm, the several individuals with the best fitness values are identified as roosters; the several chickens with the worst fitness values act as chicks; the others are hens. Each hen chooses its subgroup randomly, and the mother-child relationship between a hen and a chick is also formed randomly.
3) The hierarchical order, dominance relationships, and mother-child relationships in a group remain unchanged over several generations, until the roles are reassigned.
4) The hens follow their rooster-mate to search for food, and the chicks search for food around their mothers. The dominant individuals have an advantage in searching for food.
The whole chicken swarm is divided into three types of chickens. When the CSO algorithm solves the optimization problem (1), each chicken represents a feasible solution, and different chickens follow different optimization strategies. In the CSO algorithm, the number of chickens is assumed to be $N$, and the chickens are arranged in rising order according to their fitness values. The $N_R$ chickens at the front are defined as roosters, the $N_C$ chickens at the end are called chicks, and the remaining $N_H$ chickens are hens; $M$ is the maximal iteration number. The position update formulas for the different types of chickens are as follows.

(a) The rooster's position update formula is

$$x_{i,j}^{t+1} = x_{i,j}^{t}\left(1 + \mathrm{randn}(0,\sigma^2)\right),\qquad \sigma^2 = \begin{cases} 1, & f_i \le f_k,\\[4pt] \exp\!\left(\dfrac{f_k - f_i}{|f_i| + \varepsilon}\right), & \text{otherwise}, \end{cases} \tag{2}$$

where $\mathrm{randn}(0,\sigma^2)$ is a Gaussian distribution with mean 0 and standard deviation $\sigma$, and $\varepsilon$ is a small constant to avoid the denominator being 0. $k$ is a rooster's index, selected randomly between 1 and $N_R$ with $k \ne i$; $f_i$ and $f_k$ are the fitness values of the $i$th and $k$th roosters.

(b) The hen's position update formula is

$$x_{i,j}^{t+1} = x_{i,j}^{t} + S_1\,\mathrm{rand}\,(x_{r_1,j}^{t} - x_{i,j}^{t}) + S_2\,\mathrm{rand}\,(x_{r_2,j}^{t} - x_{i,j}^{t}), \tag{3}$$

where $S_1 = \exp\!\left(\frac{f_i - f_{r_1}}{|f_i| + \varepsilon}\right)$ and $S_2 = \exp\!\left(f_{r_2} - f_i\right)$, rand is a uniform random number between 0 and 1, $r_1$ is the index of the rooster in the $i$th hen's group, $r_2$ is the index of a chicken (rooster or hen) in the whole swarm, and $r_1 \ne r_2$.

(c) The chick's position update formula is

$$x_{i,j}^{t+1} = x_{i,j}^{t} + FL\,(x_{m,j}^{t} - x_{i,j}^{t}), \tag{4}$$

where $m$ is the index of the mother hen of the $i$th chick, and $FL$ is a parameter in the range [0, 2] that keeps the chick foraging for food around its mother.
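For concreteness, a minimal sketch of these three update rules in R is given below. The matrix layout (rows are chickens, columns are dimensions) and the index handling are illustrative choices, not part of the original specification.

```r
# x: N-by-D matrix of positions; f: length-N fitness vector (minimisation)
rooster_update <- function(x, f, i, k, eps = 1e-16) {           # Eq. (2)
  sigma2 <- if (f[i] <= f[k]) 1 else exp((f[k] - f[i]) / (abs(f[i]) + eps))
  x[i, ] * (1 + rnorm(ncol(x), mean = 0, sd = sqrt(sigma2)))
}
hen_update <- function(x, f, i, r1, r2, eps = 1e-16) {          # Eq. (3)
  s1 <- exp((f[i] - f[r1]) / (abs(f[i]) + eps))
  s2 <- exp(f[r2] - f[i])
  x[i, ] + s1 * runif(1) * (x[r1, ] - x[i, ]) +
           s2 * runif(1) * (x[r2, ] - x[i, ])
}
chick_update <- function(x, i, m, FL = runif(1, 0, 2)) {        # Eq. (4)
  x[i, ] + FL * (x[m, ] - x[i, ])
}
```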
III. IMPROVED CHICKEN SWARM ALGORITHM
A. IMPROVED SEARCH STRATEGY BASED ON LEVY FLIGHT
Given that the entire flock looks for food in an unpredictable environment, the ''Levy flight'' search strategy, characterized by short-range deep local search and occasional longer-distance walks [11], [12], can help to improve search efficiency and increase disturbance so that the chickens are distributed more evenly. In the chicken swarm optimization algorithm, the hens are the most numerous, so they play an important role in the entire flock. Inspired by this, the ICSO algorithm introduces the Levy flight search strategy into the hen's position update formula, which can prevent the iterations from falling into a local minimum and enhances the global search capability of the ICSO algorithm. The improved hen's position update formula augments Eq. (3) with a perturbation term $\mathrm{Levy}(\lambda)$, the jump path of a random search whose step size obeys the Levy distribution, where $\lambda$ is a scaling parameter in the range [1,3] and $\otimes$ is a vector operator representing point (element-wise) multiplication.
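The paper does not specify how the Levy-distributed steps are generated; a common choice is Mantegna's algorithm, sketched below.

```r
# Levy-distributed step of dimension d via Mantegna's algorithm (assumption:
# this generator; the text only requires Levy-distributed step sizes)
levy_step <- function(d, lambda = 1.5) {
  num <- gamma(1 + lambda) * sin(pi * lambda / 2)
  den <- gamma((1 + lambda) / 2) * lambda * 2^((lambda - 1) / 2)
  sigma_u <- (num / den)^(1 / lambda)
  u <- rnorm(d, 0, sigma_u)
  v <- rnorm(d, 0, 1)
  u / abs(v)^(1 / lambda)
}
# element-wise ("point") multiplication perturbing a hen's position:
# x_new <- x_old + x_old * levy_step(length(x_old), lambda)
```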
B. NONLINEAR STRATEGIES OF DECREASING INERTIA WEIGHT
In the basic CSO algorithm, the chicks only learn from their own mother: once a mother falls into a local minimum, the chicks following her will also fall into that local minimum. In the proposed ICSO algorithm, a nonlinear strategy of decreasing inertia weight is employed to update the chick's position, which helps the chicks learn not only from their mother but also from themselves. The numerical experiments verify that coupling the nonlinear decreasing inertia weight into the chick's position formula helps the ICSO algorithm avoid falling into local minima. The nonlinear decreasing inertia weight [13] is updated as

$$\omega = \omega_{\min}\left(\frac{\omega_{\max}}{\omega_{\min}}\right)^{1/(1 + c\,t/M)},$$

and the chick's position update formula coupling this weight is

$$x_{i,j}^{t+1} = \omega\, x_{i,j}^{t} + FL\,(x_{m,j}^{t} - x_{i,j}^{t}),$$

where $\omega_{\min}$ is the minimum inertia weight, $\omega_{\min} = 0.4$; $\omega_{\max}$ is the maximum inertia weight, $\omega_{\max} = 0.95$; and $c$ represents an acceleration factor, $c = 10$.
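A sketch of this weight schedule and the weighted chick update, assuming the form reconstructed above, is:

```r
# inertia weight decreasing nonlinearly from w_max towards w_min over M iterations
# (assumed functional form; parameter values as stated in the text)
omega <- function(t, M, w_min = 0.4, w_max = 0.95, c = 10) {
  w_min * (w_max / w_min)^(1 / (1 + c * t / M))
}
chick_update_w <- function(x, i, m, t, M, FL = runif(1, 0, 2)) {
  omega(t, M) * x[i, ] + FL * (x[m, ] - x[i, ])  # self-learning + mother-following
}
```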
C. THE PROCESS OF ICSO ALGORITHM
The procedure of the ICSO algorithm is as follows. Step 1: Initialize the chicken swarm and set the parameters $N$, $N_R$, $N_H$, $N_C$, $N_m$, $G$, and $M$. Step 2: For each chicken, calculate its fitness value and initialize its current and global optimal positions; let t = 1.
Step 3: If mod(t, G) = 1, sort all chickens in rising order according to their fitness values. The $N_R$ chickens at the front act as roosters, each rooster representing a group; the $N_C$ chickens at the end are designated as chicks; the others in the middle are hens. The hens randomly choose a group to live in, and the mother-child relationships between hens and chicks are also randomly established.
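A sketch of this reassignment step (assuming minimisation, so the smallest fitness is best) is:

```r
assign_roles <- function(f, N_R, N_C) {
  N <- length(f)
  ord <- order(f)                                  # ascending: best first
  roosters <- ord[1:N_R]                           # best N_R chickens
  chicks   <- ord[(N - N_C + 1):N]                 # worst N_C chickens
  hens     <- ord[(N_R + 1):(N - N_C)]             # the rest
  groups   <- roosters[sample.int(N_R, length(hens), replace = TRUE)]  # hen's group
  mothers  <- hens[sample.int(length(hens), N_C, replace = TRUE)]      # chick's mother
  list(roosters = roosters, hens = hens, chicks = chicks,
       groups = groups, mothers = mothers)
}
```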
IV. NUMERICAL EXPERIMENT OF ICSO ALGORITHM
A. PARAMETERS SETTING
The following eight typical unconstrained optimization problems are used to test the performance of the proposed ICSO algorithm by comparison among the PSO, CSO, and ICSO algorithms. The objective functions [14] of these test problems are shown in Table 1. For all three algorithms, the maximum number of iterations M is set to the same value of 600 and the population size N is set to 50; the other parameter settings are shown in Table 2. The dimensions of all test problems are set to 30.
B. EXPERIMENTAL RESULTS
To reduce the influence of contingency, each of the three comparison algorithms was run on each benchmark problem 30 times independently, and the average values of the results are used for comparison. The means, standard deviations, and the worst and best objective function values obtained from the experimental data are shown in Table 3. Table 3 shows that, compared with the PSO and CSO algorithms, the ICSO algorithm performs well in both the accuracy of the objective function values and their stability. For problems F1 and F8, the basic chicken swarm optimization (CSO) algorithm is better than the PSO algorithm in the accuracy of the objective function value, and the ICSO algorithm finds the optimal objective function values with the highest precision. For problems F2 and F3, the accuracy of the objective function values obtained by the PSO and CSO algorithms is very poor, and the resulting values are far from the theoretical ones; the values obtained by the ICSO algorithm have significantly higher accuracy. For problem F4, although the best objective function value obtained by the PSO algorithm is 1e-10, its worst solution, mean value, and variance are all very poor. The results obtained by the CSO algorithm are also poor and cannot meet the accuracy requirements. Compared with the PSO and CSO algorithms, the ICSO algorithm greatly improves the accuracy of the objective function values, and its results are more stable. For the complex multimodal problems F5 and F6, which have a large number of local minima, the ICSO algorithm succeeds in finding the theoretical optimal objective function value of 0 in all 30 independent experiments, whereas the PSO and CSO algorithms perform noticeably worse. For problem F7, the accuracy of the objective function value obtained by the ICSO algorithm is significantly better than that of the CSO algorithm, and the PSO algorithm performs poorly and unstably, which shows that the ICSO algorithm performs far better than the PSO and CSO algorithms on this multimodal problem. These experiments show that the ICSO algorithm performs much better than the PSO and CSO algorithms.
To further compare the performance of the three algorithms, the evolution curves of the average optimal objective function values over thirty runs for the eight test problems are shown in Figs. 2-9. The vertical coordinates of the figures are the logarithms of the average optimal objective function values, except in Fig. 4 and Figs. 6-7. Figs. 2-9 show that the proposed ICSO algorithm performs best among the three algorithms: for benchmark problems F1-F8, the number of iterations the ICSO algorithm needs to obtain the optimal solution is much smaller than for the other two algorithms. In Fig. 2, the descent speed and accuracy of the objective function value for the ICSO algorithm are significantly better than for the PSO and CSO algorithms: the ICSO algorithm iterates about 170 times to achieve an accuracy of 1e-6, the CSO algorithm requires about 200 iterations, and the PSO algorithm about 400. In Fig. 3, the PSO and CSO algorithms have similar performance, and their objective function values remain almost unchanged after 400 iterations, while the ICSO algorithm keeps reducing the objective function values throughout all iterations. In Fig. 4, the descent of the objective function values in the CSO algorithm is faster than in the PSO algorithm for the first 100 iterations, after which the order reverses; both remain far from zero. The ICSO algorithm, however, needs only about 100 iterations to bring the objective function value close to the theoretical optimum of 0. In Fig. 5, after 100 iterations the PSO and CSO algorithms no longer reduce the objective function values, which means both have fallen into local minima; the ICSO algorithm performs significantly better, with a very obvious descent rate. In Fig. 6, the ICSO and CSO algorithms make their objective function values drop quickly to the theoretical optimum of 0 in about 100 iterations, while the PSO algorithm performs noticeably worse. In Fig. 7, the evolution curves of the three algorithms are almost the same, but the ICSO algorithm is slightly better than the CSO and PSO algorithms. In Figs. 8 and 9, the PSO algorithm cannot reduce the objective function values after about 100 iterations, while the ICSO and CSO algorithms keep the objective function values decreasing throughout, with a greater descent rate for the ICSO algorithm than for the CSO algorithm.
V. THE APPLICATION OF IMPROVED CHICKEN SWARM ALGORITHM IN ROBOT PATH PLANNING
A. ENVIRONMENTAL MODELING
In recent years, more and more attention has been paid to intelligent optimization algorithms for solving practical application problems. In the field of artificial intelligence, many scholars have made in-depth studies of the robot path planning problem [15]. The chicken swarm optimization algorithm has obvious advantages in convergence speed and convergence accuracy compared with other algorithms; however, its iterations easily fall into local minima. Here, the proposed improved chicken swarm optimization (ICSO) algorithm is combined with a traditional grid method, and a constrained optimization problem is established to search for the optimal robot path. The simulation results show that the good global search ability of the ICSO algorithm accelerates the search speed of the robot and improves the quality of the searched path.
The working environment of the mobile robot needs to be modeled and preprocessed before the robot plans a path. The grid method is a traditional, widely used approach to modeling the working environment for mobile robot path planning [16], and it is employed here. The grid map mainly includes two states: free grids and obstacle grids. MATLAB R2016a is used for the simulation experiments. The obstacles in the robot working environment are processed and projected onto the grid map, as shown in Fig. 10.
In the grid map, assuming that the number of columns is N and that the coordinate origin is the grid vertex in the lower left corner, the relationship between grid coordinates and grid numbers (with cells numbered row-wise from the origin and coordinates taken at cell centres) may be written as

$$x = \left((n-1) \bmod N\right) + 0.5, \qquad y = \left\lfloor \frac{n-1}{N} \right\rfloor + 0.5,$$

where x is the horizontal coordinate of the grid in the coordinate system, y is the ordinate of the grid, and n is the grid number.
In robot path planning, the most effective and direct approach is to take the path length of the robot as the fitness function for each path. The length of the collision-free path of the robot in the course of travel is calculated as

$$L = \sum_{i=1}^{n-1} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2},$$

where n represents the number of path nodes, and $x_i$ and $y_i$ represent the horizontal and vertical coordinates of path node i.
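A sketch of this fitness evaluation, combining the grid decoding above with the path-length sum, is given below; the row-wise numbering convention is the one assumed above, and the example path is hypothetical.

```r
grid_to_xy <- function(n, N) {                     # grid number -> cell-centre coords
  cbind(x = (n - 1) %% N + 0.5,
        y = (n - 1) %/% N + 0.5)
}
path_length <- function(path, N = 10) {            # fitness: total Euclidean length
  p <- grid_to_xy(path, N)
  sum(sqrt(diff(p[, "x"])^2 + diff(p[, "y"])^2))
}
# e.g. a 4-node path on the 10 x 10 map:
# path_length(c(1, 12, 23, 34), N = 10)
```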
B. EXPERIMENTAL SIMULATION
The simulation platform is built with MATLAB, and the grid map of the working environment of the mobile robot is as shown in Fig. 10. The grid map is a 10 × 10 grid matrix containing the free and obstacle grids shown in Fig. 10. The results are shown in Figs. 11-12 and Table 4. They show that the basic chicken swarm optimization (CSO) algorithm cannot find the global optimal path despite using more iterations and a longer search time, and exhibits a ''detour'' phenomenon in the robot's travel. The improved chicken swarm optimization (ICSO) algorithm can both jump out of local optima and avoid the ''detour'' phenomenon. The robot path obtained by the ICSO algorithm is shorter than that obtained by the CSO algorithm, and the number of iterations the ICSO algorithm needs to find the path is smaller. Compared with the CSO algorithm, the ICSO algorithm shortens the path length by 15.15% and reduces the number of iterations by 38.24%, which means that the ICSO algorithm solves the robot path planning problem more effectively.
VI. CONCLUSION
It is well known that the chicken swarm optimization (CSO) algorithm easily falls into local minima. An improved chicken swarm optimization (ICSO) algorithm based on a Levy flight strategy and a nonlinear weight reduction strategy has been proposed. Compared with the CSO algorithm, the ICSO algorithm reduces blind search and achieves significantly better search efficiency and a higher convergence rate. The numerical results on eight benchmark problems show that, compared with the CSO and PSO algorithms, the proposed ICSO algorithm performs well in convergence speed and precision. In solving the robot path planning problem, the ICSO algorithm shortens the path length and needs fewer iterations than the CSO algorithm, which means that the ICSO algorithm is feasible for application to robot path planning. In the future, applications of the CSO algorithm in different fields deserve deeper study, especially for high-dimensional problems such as unmanned aerial vehicle (UAV) route planning and location problems.
XIMING LIANG received the M.S. degree in applied mathematics from Yunnan University, Kunming, China, in 1992, and the Ph.D. degree in computational mathematics from Xi'an Jiaotong University, Xi'an, China, in 1998. He is currently a Professor with the School of Science, Beijing University of Civil Engineering and Architecture. His current research interests include constrained optimization, evolutionary computation, large-scale numerical optimization, and their practical applications.
DECHANG KOU was born in Jinan, Shandong, China, in 1994. He is currently pursuing the master's degree in Beijing, China. His research interests include optimization methods and their applications.
LONG WEN received the M.S. degree in system science from Guangxi Normal University, Guilin, China, in 2008, and the Ph.D. degree in control science and engineering from Central South University, Changsha, China, in 2011. He is currently a Professor with the Key Laboratory of Economics System Simulation of Guizhou, Guizhou University of Finance and Economics. His current research interests include evolutionary computation, nonlinear optimization, constrained single- and multi-objective optimization, machine learning, data mining, and their practical applications.
"Computer Science"
] |
A study of droplet impact on static films using the BB-LIF technique
This paper presents results of single droplet impacts on films of different heights, measured using the brightness-based laser-induced fluorescence (BB-LIF) technique. The dynamics of drop impingement, such as the shape of the cavity and the residual film thickness, are investigated and analysed with a time resolution of 0.1 ms and a spatial resolution of 70 μm. Additionally, a variation of the BB-LIF technique is used to investigate the change in the profile of the droplet liquid during the inertial self-similar regime. The analysis shows that existing models predicting the initial development of the cavity agree well with the measurements. Amendments to some of the constants for cavity width and residual film thickness are proposed, based on the film thickness, that fit better with published data. The development of the profile of the droplet liquid demonstrates that, for thin liquid films, the droplet liquid behaves with strong similarity to droplet impacts on dry solid surfaces. It is noted that for some of the measured parameters, the use of the film height as the length scale gives a better fit.
Introduction
Droplet impingement on thin liquid films is a common phenomenon in nature as well as in industrial settings, and has drawn the attention of researchers for more than a century (Cossali et al. 1997; Tropea and Marengo 1999; Okawa et al. 2006). These studies commonly characterise the impact outcome using a non-dimensional impact parameter; this parameter is used in this work in analysing the impact evolutions and is defined in Table 1.
Based on the experimental data, several semi-empirical models have been developed to describe the post-impingement evolution of the film and the droplet in sprays. Stanton and Rutland (1998) and Mundo et al. (1997) suggested empirical models to identify evolution events such as coalescence, formation of the Worthington jet, crown formation, and splashing. More recently, a model was suggested by Bisighini et al. (2010) considering the impact of single and multiple droplets on deep and thick liquid layers.
Most studies to date employ high-speed imaging techniques to capture a series of time-resolved images showing the interaction between the droplet and the film (Thoroddsen et al. 2008). This method, although very effective in capturing the crater and crown evolutions, has difficulty identifying droplet spreading on liquid films because of optical limitations. Techniques such as Fourier transform profilometry (Lagubeau et al. 2012) can measure the droplet spread during impacts on dry surfaces, but have significantly lower resolution, and the CHR sensor used by van Hinsberg (2010) has a lower spatial resolution and a significantly higher uncertainty level. In this work, an attempt has been made to address this issue by tracking the droplet and film dynamics visually using the brightness-based laser-induced fluorescence technique. The technique has been used successfully to measure the evolution and characteristics of disturbance waves in sheared flow (Cherdantsev et al. 2014), and to measure height information in liquid films over large areas with spatial resolutions of up to 0.04 mm at rates up to 10 kHz. The high spatial and temporal quality of the technique makes it ideal for this application, as will be explained in the next section.
Experimental procedure and methods
The experimental apparatus consists of a syringe with a flat-tipped hypodermic needle mounted on a sliding arm of a vertical stand placed above a transparent acrylic Petri dish. The syringe was filled with distilled water and operated with a pneumatic driver at a constant rate to produce repeatable droplets of 3.5 mm diameter. Analyses of the images show that the variation in the droplet size was negligible. The height of the syringe was varied between 100 and 500 mm above the impact plane to generate a range of velocities (1.4-3.1 m/s) at the point of impact. Properties of the liquid used and the range of Weber and Reynolds numbers used in the experiments reported in this work are shown in Table 1. All the experiments were carried out at room temperature around 20 °C. The depth of the film was varied (h* = h/D = 0.43, 0.86 and 1.29, where h/D is the film height to drop diameter ratio) to match similar measurements being taken for moving films for comparison.
The BB-LIF technique was used to measure the depth of the liquid over the area of the Petri dish, and a second, lower-resolution high-speed camera was synchronized to image the droplet impact from the side. This allowed us to confirm the stability of the droplet size and velocity and to identify the time of contact of the droplet with the surface to an accuracy of ±50 μs. Five syringe heights were used to achieve different impact velocities. For each syringe height, five droplets were tested to generate statistics on the reliability of the measurements.
BB-LIF technique
The BB-LIF technique uses the absorption of the laser light by a selected dye (Beer-Lambert law) and measures the intensity of the light re-emitted. The intensity of the re-emitted light is a function of the local dye concentration and the height of the film (Eq. 1). Once the absorption rate of the dye at that concentration (α), the dark level of the camera D(x, y), the intensity distribution of the light over the image C(x, y), and the reflection coefficient of the water-air interface (k_refl) are known, the height of the film at each pixel can be uniquely determined (Alekseenko et al. 2008; Cherdantsev et al. 2014). The calibration of the measurement is done by comparing the images against a known depth of the dye; this calibration removes the spatial variations of the light level, which are included in the derivation of the C(x, y) matrix.
$$I(x,y) = D(x,y) + C(x,y)\left(1 - e^{-\alpha h(x,y)}\right)\left(1 + k_{\mathrm{refl}}\, e^{-\alpha h(x,y)}\right) \tag{1}$$

The method to carry out BB-LIF involves doping the distilled water with a known quantity of Rhodamine 6G (about 8-15 mg/l, depending on the depth range desired) and illuminating the Petri dish through the base using a pulsed Nd:YAG laser (527 nm). The light is absorbed by the dye in the liquid and re-emitted at a longer wavelength, which passes through a filter to remove the original wavelength and is recorded on the camera, also located below the Petri dish, as shown in Fig. 1. According to Koichiro et al. (2001), the introduction of Rhodamine dye has a negligible effect on the fluid properties and should have no significant effect on the results shown. At the noted concentration, the vertical resolution of the technique is limited by the level of thermal noise, which corresponds to about ±30 μm.
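Assuming the reconstructed form of Eq. (1) above, the per-pixel height can be recovered by a one-dimensional root search, as sketched below; all numeric values are placeholders.

```r
# Invert Eq. (1) for h at one pixel; I, D, C in camera counts, alpha in 1/mm.
height_from_intensity <- function(I, D, C, alpha, k_refl = 0.02, h_max = 10) {
  g <- function(h) {
    C * (1 - exp(-alpha * h)) * (1 + k_refl * exp(-alpha * h)) + D - I
  }
  uniroot(g, interval = c(0, h_max))$root  # g is monotone in h, so the root is unique
}
# e.g. height_from_intensity(I = 620, D = 40, C = 900, alpha = 0.5)  # h in mm
```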
The major disadvantage of this technique is that it assumes a reflection coefficient (k_refl) of 0.02. Where the slope of the surface is large, as can happen in the crown, this coefficient can increase up to total internal reflection, which gives an over-prediction of the depth at that location. These values are easy to identify, because they are an order of magnitude greater, and can be filtered out. In certain cases, there can also be local focussing of the laser light, which likewise produces an over-prediction of the film thickness.
In this case, a 1280 × 496 pixel area was imaged with a resolution of 70 μm/pixel at a 10 kHz frame rate, measuring the height of the film and of the droplet during the impact as well as the interaction of the film with the droplet.
It is possible with this technique to investigate separately the motion of the liquid in the film, or the motion of the liquid in the droplet, for short periods immediately after the impact. This can be done by doping only the film liquid or only the droplet liquid. For each of the droplet impact parameters studied, three cases were considered: • Case 1: Both droplet and film were doped with the fluorescent dye. • Case 2: Only the film was doped.
• Case 3: Only the droplet was doped.
Captured images for these cases can be analysed to obtain a number of features (Fig. 2), such as the cavity width W, the cavity depth of the whole system z_C1, the cavity depth of the film liquid only z_C2, the minimum film thickness beneath the cavity h_res, and the droplet height as it deforms during the impact, z_C3.
Results
In the literature, droplet impact has been split into a number of broad areas. For impact onto liquid films, it has been suggested that the initial crater evolution is dependent on the droplet momentum in the initial stages, and that the correct normalisation of the film depth, crater depth, crater width, and time with respect to the droplet diameter D is

$$h^* = \frac{h}{D},\qquad z^* = \frac{z_x}{D},\qquad W^* = \frac{W}{D},\qquad \tau = \frac{tU}{D},$$

where $z_x$ is the appropriate depth scale for each of the three cases and the other variables are defined in Fig. 2.
[Fig. 1: (1) Nd:YAG pulsed laser; (2) high-speed camera; (3) Petri dish filled with distilled water; (4) high-speed camera; (5) syringe on height-adjustable stand]

The dynamics of the drop impact are also dependent on the properties of the droplet impact. Cossali et al. (1997) and Okawa et al. (2006) have used an impact parameter $K = We\,Oh^{-0.4}$ that can be used to predict the impact outcome. It was noted by Alghoul et al. (2011) that there was a change in the behaviour of the droplet impact around h* = 1: below this value there was crown formation and crown break-up, while above it there was jet formation. Droplet impact onto a solid surface has many similarities with droplet impact onto a thin film. Lagubeau et al. (2012) and Roisman et al. (2008) have shown that the impact can be broken into several regimes. Initially there is a pressure-dominated regime, when the droplet compresses under the influence of the impact, and this pressure leads to inertial motion in the radial direction (Lagubeau et al. 2012). Once this motion is present, inertia dominates the flow over viscosity, so a remote asymptotic solution for the thickness can be derived using the inviscid approximation (Yarin and Weiss 1995). Lagubeau et al. (2012) and Roisman et al. (2008) have shown that for droplet impact on dry surfaces, the profile of the droplet at time τ follows a self-similar form (Eq. 4) whose centreline thickness decays as

$$h^*(0,\tau) = \frac{\eta}{(\tau + \tau_g)^2},$$

with a Gaussian radial profile, where η is dependent on the film height and $\tau_g$ is the inverse of the initial gradient of the radial velocity. In their case, they determined constants of $\tau_g = 0.25$ and η = 0.39 for impact onto solid surfaces.
The results will be analysed using this methodology and nomenclature.
Evolution of the cavity
It is important to understand how the droplet liquid and film liquid interact, so the three cases were set up as explained earlier so that the liquid content of the crown and droplet structures could be determined.
This can be seen in Figs. 3 and 4 for two cases with the same Weber and Reynolds numbers of impact but different film heights. In Fig. 3, five time instances are highlighted for the three cases under study. It should be noted that each case was a separate experiment, so there is slight variation in the droplet outcome, shape, and location of secondary droplets; however, the general behaviour is consistent between the cases, so comparisons can be made.
The main points highlighted by the selected stages of droplet impact and interaction of the liquid core are: • Prompt splash: small droplets can be seen in Fig. 3a, f, k showing that the splash contains liquid from both the film and the droplet. Droplet fluid is spread over the surface of the crater and runs up the sides. • Crown formation: Fig. 3b, g, l shows that the crown also contains liquid from the droplet and the initiation of secondary droplet formation can be seen. • Crown ligaments formed: as shown in Fig. 3c, h, m, both film and droplet liquid are entrained into the secondary droplets generated from the crown break-up. • Secondary droplet impact: as shown in Fig. 3d, i, n, secondary droplets produce craters on surface of film after impact and liquid from film and droplet are draining back towards the centre of the crater. • Jet formation: as shown in Fig. 3e, j, o, jet is formed of film and droplet liquid.
For the deeper film h* = 1.29, Fig. 4 shows that many of the same features are also visible. In this case, there is no droplet creation at the crown.
The stages of impact are as follows: • Prompt splash: as shown in Fig. 4a, f, k, streaks (possibly ligaments) are seen around the point of impact, containing film and droplet liquid. • Crown formation: as shown in Fig. 4b, g, l, a crown is formed, but in this case the crown is sloped inwards. Droplet liquid spreads over the surface of the crater and up the walls, but not as far as the surface, and so is not collected in the crown. Droplet liquid from the prompt splash collects outside the crown. • Crown receding: as shown in Fig. 4c, h, m, the crown begins to recede. No secondary droplets are produced in this case. • Fluid draining: as shown in Fig. 4d, i, n, droplet liquid and film liquid drain towards the centre of the crater to initiate the formation of the Worthington jet. • Jet formation: as shown in Fig. 4e, j, o, the jet forms, surrounded by a ripple wave that propagates out from the impact; most of the droplet fluid has collected in the jet.
When all the cases and droplet impacts were considered, it was noted that in all the cases with crown breaking, the droplet liquid was present in the secondary droplets. It also appeared that the droplet remained largely coherent as it expanded up the side of the cavity, before receding to become present in the jet. This is especially true in the deeper-film experiments, where the droplet liquid often ends up at the top of the jet, or even as the topmost droplet during jet break-up. This has been observed elsewhere, but little comment has been made on it.
This has implications to heat transfer and regime limits that will be discussed later in this paper.
Development of the width of the cavity
The droplet impacts and the resulting structures are almost axially symmetrical, so another way to understand the temporal behaviour of the droplet impact is to take profiles through the centre point of impact and present them as x-t diagrams, such as those shown in Fig. 5, which shows the development of the cavity width and depth for h* = 0.43 as the droplet Weber number of the impact is increased. The three cases can be used to see clearly the locations of the droplet and film liquid. Features such as the width of the crater, the time to crater collapse, capillary waves, and the location of droplet fluid can all be identified from these x-t diagrams, and some of these quantities will be studied in later sections.
In the initial stages of impact, the width of the impact crater has been shown to grow as a power law in time, with a prefactor β and a time offset τ_0 (Roisman et al. 2008). It was theorized from their results that both β and τ_0 are dependent on the film height, and that β is independent of the Weber number.
Comparison was made between the values of $\beta h^{*0.33}$ published in van Hinsberg (2010) for a range of liquids and the results of the cavity width fit for the distilled water in these experiments, and it was noted that both showed a dependence on the Weber number (Fig. 6), providing a new relationship for β. This suggests that the value of β is in fact dependent on the Weber number. Looking again at Eqs. 5-7, it can be observed that there are numerous terms in which the Weber number and Froude number are combined with the h* term; for example, Eq. 7 contains such a term, and the same substitutions can be made for other terms in Eqs. 5 and 6. This suggests that β could be related to Weber and Froude numbers based on the height of the film rather than on the droplet diameter.
To test this, β was plotted against $We_H = \rho U^2 H/\sigma$, where U is the velocity of the droplet at impact and H is the height of the film. When this is shown (Fig. 7) for both our data and the data from van Hinsberg (2010), it is clear that the two data sets behave similarly. In this case, a relationship $\beta = 3.1\,We_H^{-0.37} + 0.19$ is fitted to the data using linear regression. This still shows an h* dependence close to that of Fig. 6. The behaviour of this relationship is consistent with a simple physical picture: β is related to the rate of expansion of the cavity. As the cavity depth increases, the height of fluid that needs to be pushed out of the way increases, so β decreases with increasing film height. When the depth of the film becomes significantly larger than the depth of the cavity, the mass of fluid that needs to be displaced tends to a constant, and thus the rate of expansion of the cavity tends to a constant value.
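For illustration, a fit of this form can be obtained by nonlinear least squares; the data vectors below are hypothetical placeholders, not the measured values.

```r
We_H <- c(25, 60, 150, 400, 900)                # hypothetical example data
beta <- c(1.10, 0.85, 0.62, 0.48, 0.40)
fit <- nls(beta ~ a * We_H^b + c0,
           start = list(a = 3.1, b = -0.37, c0 = 0.19))
coef(fit)   # compare with beta = 3.1 * We_H^-0.37 + 0.19
```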
The implications of this are that the droplet diameter might not be the correct length scale to use for generation of dimensionless parameters in all cases.
The new relationship derived in Fig. 7 is demonstrated by generating theoretical curves of the cavity width, which are compared with the x-t diagrams in Fig. 5. These are shown to fit well with these data and with all other data in the set.
Evolution of the cavity depth for Case 1
It has been established that it is usual to normalize the depth by the droplet diameter and the time by the velocity and the diameter of the droplet for crater depth, and this is demonstrated in Fig. 8. This shows the height of the centre of the impact in Case 1 for all five droplet Reynolds and Weber numbers at all three film depths. The graphs show clearly the self-similar behaviour in the initial stages of the droplet impact. The height at the point of impact decreases linearly with τ until τ ≈ 1.5 as the top of the droplet liquid continues at its initial velocity. The base of the droplet material, however, is moving more slowly, and this results in an increase in pressure inside the droplet that forces the droplet material sideways. As the pressure in the liquid below the impact increases, the rate of cavity depth growth decreases. For a thinner film, the presence of the base wall acts to amplify the pressure below the cavity, which decelerates the increase in cavity depth. It was attempted to plot these parameters against film height, and they did not fit as well, so this suggests that the droplet diameter is the correct length scale in this case. For the length of time shown, only at the lowest Weber number impact has the base of the cavity started to rise as expected. It can also be noted that there is a local focusing effect due to internal reflection inside the droplet. This contributes to the variation of the depth value in the initial stages, but does not obscure the overall trend.
Evolution of the film only (Case 2)
One of the advantages of the BB-LIF technique is that the motion of the film and droplet liquid can be separated, so the dynamics of the film liquid alone can be analysed. Figure 9 shows the evolution of the film height only at the centre of the impact; this shows the response of the film to the impact without the influence of the droplet. While the top of the droplet initially continues at the speed of impact, the base of the droplet and the top of the film move at roughly 44% of the droplet impact velocity, as theorized by the work of Bisighini et al. (2010). This means that, since the top of the droplet moves at the initial droplet velocity until τ = 1.5 (Fig. 8), a pressure builds up in the droplet fluid that forces the droplet to expand sideways.
It was again attempted to normalize this by the film height, but in this case too the droplet diameter was determined to be the correct length scale for the problem.
In this case, there is again a local focusing effect in the initial stages. As the droplet collapses, it reflects additional laser light back to the region giving an over-prediction in the early times. This effect is negligible in the self-similar region for τ > 1.5.
Minimum depth of the film beneath the crater
It has been theorized that the minimum thickness of a film formed by a droplet impact is a function of the Reynolds number, of the form $h_{res}/D = A\,Re^{-2/5}$, with the constant of proportionality A being related to the height of the film. In the case of the deepest film, however (the two blue points in Fig. 10 that do not follow the expected behaviour), the influence of viscous dissipation is negligible, so this relationship is not valid for low Reynolds numbers. Instead, the depth of penetration of the cavity, and hence the residual film thickness, is a complex function of the Weber, Reynolds, and Froude numbers (Bisighini et al. 2010), except at the highest impact velocity, when the cavity appears to be again affected by the wall. van Hinsberg (2010) looked in more detail at the dependence of the constant A for different liquids and suggested that it depends on the film height according to

$$A = 0.098\,h^{*\,-0.404} + 0.79.$$

However, it can again be suggested that if h* and Re have the same power, they might be combined in some way to produce a Reynolds number based on the height of the film. If the data from this study and the data from van Hinsberg (2010) are plotted using $h_H = h_{res}/H$ and $Re_H = UH/\nu$, as shown in Fig. 11, we can see that for both cases the data collapse onto single relationships, except for points where the cavity has not penetrated far enough to interact with the wall.

However, the two curves do not coincide. The van Hinsberg data use different liquids, so the discrepancy cannot be attributed to liquid properties. The two main differences between the experiments were that they used droplet sizes of the order of 2.1 mm compared with the 3.5 mm droplets in this work, and that they used a hydrophobic glass wall while this work used an acrylic wall. Acrylic has a surface roughness twice that of glass, but it is unlikely that this is the issue, since the flow is laminar. It is most likely either that the constant of proportionality is a function of the droplet diameter as well as the film height, or that the hydrophobic coating in the van Hinsberg case has affected the friction loss at the wall. Further work will have to be completed to verify which of these is the correct reason for the discrepancy.
Evolution of the droplet liquid
The third case under study involved doping the droplet fluid only, so that the thickness of the droplet liquid as it interacts with the film, and the mixing generated, could be quantified. Figure 12 shows that in the initial stages of the impact, the droplet liquid spread out and formed a rim around the edge of the crater. As discussed earlier for Eq. 4, the spreading of the droplet liquid has many similarities to droplet spreading on dry surfaces.
Considering Fig. 12, the centre portion of the droplet profile in the inertial self-similar region also appears to be Gaussian, similar to that of droplet impact onto a dry surface. The major difference is that the lamella away from the central portion is forced up the side of the cavity, which can be seen as an apparent increase in thickness at the edge (the peaks on either side of the central peak); the side peaks therefore mark the location of the cavity sidewalls. To test the similarity of the centre portion further, curves were fitted using Eq. (4), with η and τ_g as fitting variables. The curves could be fitted with a single value of τ_g = −0.1905, while the film thicknesses h* = 0.43, 0.86 and 1.29 required η = 0.39, 0.78 and 1.13, respectively, corresponding to η = 0.9h*.
The fit in the self-similar inertial region of the impact is quite good. The high thicknesses at the edge of the spreading correspond to the location of the crater sidewall, and it is expected that the liquid from the droplet has spread up the walls. This correlates with the earlier observation, from Fig. 3, that for h* < 1 some of the droplet liquid has reached the height of the crown and has been entrained in it. When the film is deeper, the droplet liquid does not contribute to the crown. This agrees with the observations of Alghoul et al. (2011) that crown formation occurs at lower film thicknesses, whereas at higher film thicknesses the major outcome is jetting until the impact parameters become large enough for a crown to form.

Fig. 11 Disagreement in the dependence of A, the constant of proportionality used to determine the minimum film thickness h_res, between the van Hinsberg (2010) data and the results presented here. Error bars for our data are not clearly visible.
The physical interpretation of the variation in the value of η is that the droplet compression upon impact is less at higher film thicknesses. At deeper films, the effect of the base is less important, and so the droplet remains spherical for longer upon impact and penetrates deeper into the film before the pressure builds up enough inside the droplet for the fluid to be forced sideways.
From the series of measurements, it is possible to determine the minimum thickness of the droplet liquid during the entire impact. Using the same arguments as in the previous section, the measured droplet thickness should be a function of the Reynolds number of the flow. Figure 13 shows this using the droplet diameter and the film height as length scales. It is clear in Fig. 13a that there is a film-height dependence to the fit, while Fig. 13b shows that presenting these data with respect to film height generates a single curve. The minimum drop thickness decreases with increasing film height; in fact, if the curve is extrapolated, the minimum droplet thickness reaches zero at Re_H ∼ 20,000. Watanabe et al. (2008) and Hsiao et al. (1988) showed that for thicker films the droplet and cavity interact to produce vortex rings. There is no liquid at the centre of the ring, which means that the minimum thickness is zero, as predicted by this extrapolation. This could be a consequence of the droplet liquid spreading over a steeper cavity and therefore overturning to produce the vortex ring.
Discussion of results
The coherence of the droplet within the cavity has important ramifications for heat transfer. At low impact velocity and deep film, the drop spreads over the surface and then contracts with minimal mixing. This suggests that conduction is the dominant transfer mechanism, since the level of mixing between the droplet liquid and film liquid is small (even though they are the same liquid), and so convection will be weak. At higher impact velocity, the droplet liquid is mixed by being broken off the crown to form secondary droplets. For thicker films, the cavity deepens and the droplet liquid needs more energy to reach the crown of the impact. This correlates with the result that thicker films require a higher impact velocity to generate crown breaking, and could help explain the film-height dependence noted by Cossali et al. (1997) and Tropea and Marengo (1999), although the cavity width is more complicated and contains a Froude number term in addition. Further work at different droplet diameters and with different liquids should be completed to confirm or refute this suspicion.
Conclusions
The present study has used the new BB-LIF technique to investigate the physical mechanisms that occur during the impact of droplets onto films of varying height. It is shown that the technique can separate the motion of the liquid in the film from that in the droplet, which aids the understanding of the relevant processes. Comparison of the presence of the droplet liquid with that of the film liquid in the secondary droplets shows that for h* < 1 the droplet liquid rises to the level of the crown, and at values of K > 2100 it is present in the secondary droplets generated by the crown break-up. At higher h*, the droplet liquid remains contained in the crater formed by the impact and drains back to form the tip of the Worthington jet.
An example of the x-t diagrams of the droplet profile has confirmed that the theoretical width relationship proposed by Roisman et al. (2008) holds, but a modified relationship for the dependence of the fitting parameter on the Weber number based on the film height is proposed and shown to be a better fit both for our data and for the data presented by van Hinsberg (2010).
The minimum thickness of the film below the cavity is shown to follow h*_res = A Re^(−2/5), as suggested by Bakshi et al. (2007). However, it is shown that using the film height as a length scale might fit better. It is suggested that there might still be a droplet-size component to the data fit, and this will need to be investigated further. The profile of the droplet liquid in the inertial self-similar regime is shown to have strong similarities to profiles measured for droplet impact onto solid surfaces, with two fitting parameters η and τ_g. The shape parameter η is almost linearly dependent on the film height in this range, but must tend towards a constant as the film thickness increases. The value of the initial gradient of radial velocity, τ_g, is reversed in sign, possibly because the contact angle between the water drop and the water film is reversed compared with that between a water drop and a solid surface.

Fig. 13 The minimum droplet thickness measured over each run, compared (a) using droplet size as the length scale and (b) using film height as the length scale.
"Physics"
] |
Heat Shock Factor 2 Levels Are Associated with the Severity of Ulcerative Colitis
Background and Aims The morbidity of ulcerative colitis (UC) is increasing in China every year, and there is a lack of accurate diagnostic indices with which to evaluate disease activity. The aim of this study was to identify UC-associated proteins as biomarkers for the diagnosis and objective assessment of disease activity. Methods Differential expression of serum proteins from UC patients compared with normal controls was analyzed by two-dimensional electrophoresis (2-DE) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). The expression of heat shock factor 2 (HSF2) in colonic mucosa in Crohn's disease, Behcet's disease, ulcerative colitis, intestinal tuberculosis, infective enteritis, intestinal lymphoma, and normal controls was investigated by immunohistochemistry (IHC). The expression of HSF2 in the colonic mucosa of UC subjects with varying disease severity was measured by real-time PCR and Western blot. The expression of HSF2 was inhibited by HSF2 small interfering RNA (siRNA) transfection in Caco-2 cells. The concentrations of HSF2, IL-1β, and TNF-α in serum, and of IL-1β and TNF-α in the supernatants of transfected Caco-2 cells, were determined by ELISA. Results HSF2 was differentially expressed in UC patients compared with normal controls. HSF2 expression was significantly higher in the intestinal mucosa of UC patients than in the other six groups. The results of immunohistochemistry, real-time PCR, Western blot, and ELISA showed that the expression of HSF2 increased in parallel with the severity of UC. The serum concentration of HSF2 also correlated positively with levels of IL-1β and TNF-α. After down-regulation of HSF2 expression in Caco-2 cells by RNA interference, the production of IL-1β and TNF-α stimulated by lipopolysaccharide (LPS) increased dramatically. Conclusions HSF2 appears to be a potential novel molecular marker of UC activity, and may provide a basis for studies of the pathogenesis of UC and of novel therapeutic targets.
Introduction
Ulcerative colitis (UC) is an inflammatory bowel disease (IBD) characterized by chronic inflammation of the colonic mucosa, often resulting in intermittent bloody diarrhea and abdominal pain [1]. In China, approximately 140,000 cases of UC have been diagnosed during the past 15 years, with an 8.5-fold increase during the last 5 years [2]. Although genetic [3,4], infectious [5], and immunological [6] factors have been postulated to be involved in the pathogenesis of UC, the precise cause of the disease remains unclear. UC is currently diagnosed by clinical, radiologic, endoscopic and histopathological findings. Fecal markers such as calprotectin and lactoferrin have been studied for their ability to identify patients with IBD, assess disease activity, and predict relapse [7]. Serum biomarkers such as C-reactive protein and erythrocyte sedimentation rate have been used to assess inflammatory processes and predict the course of IBD progression [8]. Unfortunately, reliable biomarkers for monitoring disease activity have not been clinically established for use in UC; more sensitive and specific biomarkers for UC are therefore needed. Proteomics has been applied to search for biomarkers in various diseases [9,10]. The aim of this study was to analyze protein expression profiles in human serum from patients with UC and normal control subjects using 2-DE and MALDI-TOF MS analysis.
Patients were confirmed as having UC by endoscopy and pathological examination. All blood samples were drawn in the fasting state on the morning after diagnosis. None of the UC patients recruited for this study were taking any medications (or herbal remedies), particularly the drugs recommended for UC such as 5-aminosalicylate, glucocorticosteroids and other immunosuppressive agents; medication use was an exclusion criterion for all patients selected for blood sample collection. Serum and colonic mucosal tissue samples were taken from 40 patients (female-to-male ratio 1:1; mean age 36.5 years, range 18-60) with mild to severe UC. Disease activity was assessed using the Mayo Score system from 0 to 12, as described previously [11], in which mild, moderate and severe disease activity were defined by scores of 3-5, 6-10 and 11-12, respectively. All samples were collected at the time of diagnosis and stored at −80°C. A sample from each specimen was fixed in formalin for immunohistochemical staining. In addition, serum samples from 40 voluntary healthy controls (female-to-male ratio 1:1; mean age 34.5 years, range 18-56) were taken under the same conditions as the patients' blood, together with colonic mucosal tissue samples from 10 people (female-to-male ratio:
Two-dimensional electrophoresis (2-DE)
Fasting blood (5 ml) was drawn from the patients and centrifuged at 2,000 rpm for 20 min. Non-hemolyzed serum was collected, and 300 µl from each of 10 cases were pooled. Highly abundant albumin and IgG were removed with an albumin/IgG removal kit (Calbiochem, CA, USA) according to the manufacturer's instructions. Total protein concentrations of serum samples were determined with a Dye Reagent protein assay kit (Bio-Rad, CA, USA). Aliquots were kept at −80°C for two-dimensional gel electrophoresis, which was performed using an Ettan IPGphor isoelectric focusing system according to the method of Boguth et al. [12]. In brief, samples containing about 250 µg of treated serum proteins were dissolved in rehydration buffer and isoelectrically focused at 250 V for 30 min, 1000 V for 2 h, 10,000 V for 5 h, and 10,000 V for a total of 60,000 Vh/gel. After isoelectric focusing (IEF), each immobilized pH gradient (IPG) strip was soaked in equilibration solution containing dithiothreitol (DTT) and iodoacetamide, then placed in contact with the top surface of the SDS-PAGE gel and sealed with agarose. Separation in the second dimension was performed using a Tris-glycine running buffer at 5 mA/gel for 0.5 h and 30 mA/gel for 2 h thereafter, at a temperature of 17°C. Protein spots were visualized by silver staining and scanned on a Umax Powerlook 1100 scanner. Gels were analyzed with PDQuest 7.1.0 software, including spot detection, background subtraction, and matching. Each assay was replicated three times, and protein spots for comparative analyses were detected on all gels. Only spots present in all three replicate gels and qualitatively consistent in size and shape across the replicates were considered.
In-gel Digestion
Spots from 2-DE were excised and digested with modified trypsin (Roche, Mannheim, Germany) as previously described by Millares et al. [13]. In brief, gel particles were washed and dehydrated with acetonitrile. Proteins were reduced with DTT and S-alkylated with iodoacetamide. Gel particles were washed with NH4HCO3, dried under vacuum, and rehydrated with the digestion solution. After incubation for 30 min at 4°C, supernatants were replaced with NH4HCO3, and gel particles were incubated overnight at 37°C. Trifluoroacetic acid (2%) was added to stop the digestion.
MALDI-TOF MS analysis for protein identification
Samples were mixed (1:1) with a saturated matrix solution (α-cyano-4-hydroxycinnamic acid prepared in 60% acetonitrile/0.1% trifluoroacetic acid). Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectra were recorded on a BIFLEX IV mass spectrometer [14]. All spectra were collected in positive-ion reflector mode with a delayed-extraction mass accuracy of about 100 ppm. The specific parameters were as follows: ion acceleration voltage 19 kV, and a nitrogen laser operating at a wavelength of 337 nm. MS spectra were obtained in the mass range between 2000 and 3000 Da. Singly charged peaks were analyzed using an interpretation method in the instrument software. The nine most intense peaks were selected and MS/MS spectra were generated automatically, excluding peaks from the matrix and from trypsin autolysis. Spectra were processed and analyzed via the Matrix Science web interface, which uses the internal Mascot 2.1 software to search peptide mass fingerprints and MS/MS data. Searches were performed against the SWISS-PROT and NCBI non-redundant (nr) protein databases.
Immunohistochemistry (IHC) analysis
Colonic mucosal tissues were cut into three sections (5 µm each) with a freezing microtome and mounted on the same slide. After fixation, immunohistochemical staining of HSF2 was performed in all cohorts. An HSF2 primary antibody (1:50, Santa Cruz, TX, USA) was used for antigen detection. An HRP-Polymer anti-mouse/rabbit IHC kit and a DAB (3,3′-diaminobenzidine) substrate kit were used as detection reagents according to the manufacturer's recommendations (Maxim Biotech, Fuzhou, China). Slides were counterstained with hematoxylin, fixed with Scott's solution, and dehydrated through graded concentrations of ethanol.
Slides were mounted with Permount mounting medium and observed under a light microscope, with four visual fields randomly selected on each slide.
Semi-quantitative analysis of staining intensity for the HSF2 protein was performed using the HPIAS-2000 image analysis software (Tongji Qianping Imaging Inc., Wuhan, China) according to Chen [15] and Wang et al. [16]. In brief, four highly magnified, non-overlapping visual fields (10×40 magnification) from each tissue section were randomly selected, and their digitized images were submitted to the image analysis software. HSF2-positive cells were defined as those having brown-yellow granules in the cytoplasm and/or nucleus. The integral optical density was measured automatically by the image analysis software, and the mean optical density represented the relative expression level of HSF2 protein. Assessments were performed by two independent pathologists from The First Affiliated Hospital of Kunming Medical University, who were blinded to HSF2 status.
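The HPIAS-2000 software is proprietary, but the underlying mean-optical-density idea can be sketched generically. The snippet below is a minimal illustration assuming 8-bit grayscale field images and the standard optical density definition OD = −log10(I/I0); it is not the HPIAS-2000 algorithm, and all values are placeholders.

```python
import numpy as np

def mean_optical_density(field, background=255.0):
    """Mean optical density of an 8-bit grayscale IHC field.

    OD = -log10(I / I0), where I0 is the blank (background) intensity.
    Generic illustration only, not the HPIAS-2000 algorithm.
    """
    intensity = np.clip(field.astype(float), 1.0, background)
    od = -np.log10(intensity / background)
    return od.mean()

# Four randomly selected fields per section, averaged as in the protocol
rng = np.random.default_rng(0)
fields = [rng.integers(60, 250, size=(512, 512)) for _ in range(4)]
relative_hsf2 = np.mean([mean_optical_density(f) for f in fields])
print(f"Mean optical density (relative HSF2 level): {relative_hsf2:.3f}")
```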
Quantitative real-time PCR
Total RNA from colonic mucosal tissue was extracted with TRIzol reagent (Invitrogen, CA, USA) according to the manufacturer's protocol. First-strand cDNA synthesis was performed with a cDNA synthesis kit (TaKaRa, Dalian, China) according to the manufacturer's instructions. Quantitative real-time PCR was carried out using an SYBR Green real-time PCR kit (TaKaRa, Dalian, China) under the following conditions: initial denaturation at 95°C for 1 min, followed by 40 cycles of 95°C for 15 s, 60°C for 15 s, and 72°C for 20 s. The primer sequences were as follows: human HSF2 (139 bp): 5′-ATAAGTAGTGCTCAGAAGGTTCAGA-3′ (forward) and 5′-GAATAACTTGTTGCTGTTGTGCATG-3′ (reverse); GAPDH (107 bp): 5′-ATGGGGAAGGTGAAGGTCG-3′ (forward) and 5′-GGGGTCATTGATGGCAACAATA-3′ (reverse). Each sample was run three times. Real-time PCR data were analyzed with the delta-delta Ct method and normalized to the amount of GAPDH cDNA as an endogenous control.
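As a minimal sketch of the delta-delta Ct normalization described above (with GAPDH as the endogenous control), the following function computes the 2^−ΔΔCt fold change; the Ct values shown are hypothetical.

```python
def ddct_fold_change(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the delta-delta Ct method (2**-ddCt),
    normalized to GAPDH; inputs are mean Ct values of replicate runs."""
    dct_sample = ct_target - ct_gapdh             # normalize sample to GAPDH
    dct_control = ct_target_ctrl - ct_gapdh_ctrl  # normalize control to GAPDH
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical mean Ct values, for illustration only
fold = ddct_fold_change(ct_target=24.1, ct_gapdh=18.3,
                        ct_target_ctrl=26.8, ct_gapdh_ctrl=18.5)
print(f"HSF2 fold change vs. control: {fold:.2f}")
```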
Western blotting
Colonic mucosal tissue samples and Caco-2 cells were homogenized in immunoprecipitation assay buffer (Roche, Mannheim, Germany) with a protease inhibitor cocktail (Roche, Mannheim, Germany). Homogenates were centrifuged at 4°C and 12,000 rpm for 10 min, and the supernatants were collected for determination of protein concentration using the Dye Reagent protein assay kit (Bio-Rad, CA, USA). Samples containing 50 µg of protein were separated by SDS-polyacrylamide gel electrophoresis and electrotransferred onto a PVDF membrane (Millipore, MA, USA). The membrane was blocked with 3% BSA and incubated overnight at 4°C with antibodies specific for HSF2 (1:1000, Santa Cruz, TX, USA), monoclonal ANTI-FLAG M2 antibody (1:5000, Sigma Aldrich, MO, USA), and β-actin (1:5000, Santa Cruz, TX, USA). Membranes were then incubated with appropriate peroxidase-conjugated secondary antibodies.
Cell culture
Caco-2 cells, a human colon adenocarcinoma cell line that displays enterocyte-like features in culture, were obtained from the Cell Bank of the Type Culture Collection of the Kunming Institute of Zoology, Chinese Academy of Sciences. Cells were grown at 37°C in 5% CO2 in Dulbecco's modified Eagle's medium (DMEM, HyClone, NY, USA) supplemented with 10% fetal bovine serum. Cells between passages 5 and 25 were seeded at a density of 100,000 cells/ml onto 12-well tissue culture plates (Corning, NY, USA) and used at 40-50% and 70-80% confluence for RNA interference and plasmid transfection, respectively. After overnight transfection, the Caco-2 cells were treated with 50 ng/ml LPS (Sigma Aldrich, MO, USA) or an equal volume of physiological saline (as a normal control) for 18 h, and the culture supernatants were harvested for cytokine assays.
ELISA assay
The concentrations of HSF2, IL-1β, and TNF-α in serum, and of IL-1β and TNF-α in the supernatants of transfected Caco-2 cells, were determined by ELISA kits according to the manufacturers' protocols (HSF2: CUSABIO, Wuhan, China; IL-1β and TNF-α ELISA kits: Neobioscience, Beijing, China).
Statistical analysis
All data are presented as means ± standard deviation (SD). Statistical analyses were performed using SPSS 17.0 statistical software. Multi-group comparisons were tested by one-way analysis of variance (ANOVA). Correlations between variables were assessed using the Pearson test, which gives a correlation coefficient (Pearson's r) and a p value. A p value of <0.05 was considered the minimum level of significance in all cases.
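The analyses were run in SPSS 17.0; purely for illustration, the equivalent one-way ANOVA and Pearson correlation can be sketched in Python with SciPy, using placeholder data rather than the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical serum HSF2 concentrations per severity group (placeholders)
mild = np.array([1.1, 1.3, 1.2, 1.4])
moderate = np.array([1.8, 2.0, 1.9, 2.2])
severe = np.array([2.9, 3.1, 3.3, 3.0])

# One-way ANOVA across the three severity groups
f_stat, p_anova = stats.f_oneway(mild, moderate, severe)

# Pearson correlation between serum HSF2 and IL-1beta (synthetic pairs)
hsf2 = np.concatenate([mild, moderate, severe])
il1b = hsf2 * 2.1 + rng.normal(0, 0.2, hsf2.size)  # illustrative only
r, p_corr = stats.pearsonr(hsf2, il1b)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"Pearson r={r:.2f}, p={p_corr:.4f}")
```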
Identification of UC-associated proteins by MALDI-TOF MS
As shown in Figure 1, a total of 39 protein spots with differential expression levels in the UC group were found on 2-DE. Of these 39 protein spots, 12 proteins were eventually identified by MALDI-TOF-MS, and 9 peptide mass fingerprints (PMFs) were obtained (Table 1). Among these identified proteins, six were over-expressed: haptoglobin, aldehyde reductase, receptor tyrosine kinase, pericentriolar material 1, heat shock factor 2 and apolipoprotein C-III. Three proteins were under-expressed in the sera of patients with UC: tropomyosin 3, filamin A-interacting protein 1 and keratin 1. To demonstrate clearly the difference in serum protein expression between UC patients and normal controls, representative protein spots were picked from the 2-DE gels (Figure 1); these proteins had much higher expression in the serum of patients with UC than in normal controls.
Immunohistochemical staining of HSF 2 in colonic mucosa of UC
The expression profiles of HSF2 in colonic mucosa were examined by IHC (Figure 2). HSF2 was expressed in stromal cells and was almost undetectable in epithelial cells in normal intestinal mucosa (Figure 2A), but was widely expressed in both intestinal epithelial cells and stromal cells in the UC group (Figure 2B, C and D). The expression level of HSF2 in mucosal tissues from the severe disease group was the highest of the three groups (p<0.01), and HSF2 expression in the moderate group was higher than in the mild group (p<0.05). Thus, the expression of HSF2 increased significantly with increasing disease severity.
Immunohistochemical staining of HSF2 in colonic mucosa of ulcerative colitis, Crohn's disease, Behcet's disease, intestinal tuberculosis, infective enteritis, intestinal lymphoma and normal controls

The results of IHC indicated that the expression of HSF2 in the intestinal mucosa of UC patients was significantly higher than in the other six groups (Figure 3) (p<0.01 for normal controls, Crohn's disease and Behcet's disease; p<0.05 for intestinal tuberculosis, infective enteritis and intestinal lymphoma). However, there was no significant difference among the other six groups (p>0.05).

Transcription and expression of HSF2 in colonic mucosa of UC

Real-time PCR results (Figure 4A) showed that the mRNA transcriptional level of HSF2 in colonic mucosa increased with disease severity. The mRNA level of HSF2 in mucosal tissues from the severe group was the highest of the three groups (p<0.01), and that of the moderate group was higher than those of the mild group and normal controls (p<0.01 for all). In addition, Western blot results (Figure 4B) showed that the protein level of HSF2 in colonic mucosa increased with increasing disease severity, with significant differences among the UC severity groups.
Concentrations of HSF2, IL-1β, and TNF-α

As shown in Figure 5, the serum concentrations of HSF2, IL-1β and TNF-α increased with disease severity. In addition, the serum concentration of HSF2 was positively correlated with IL-1β (r = 0.89, p<0.001) and with TNF-α (r = 0.86, p<0.001).
After down-regulation of HSF2 expression in Caco-2 cells by RNA interference (Figure 6A), the production of IL-1β (Figure 6B) and TNF-α (Figure 6C) stimulated by LPS increased dramatically compared with the other four groups (p<0.01). Enhanced expression of HSF2 by plasmid transfection (Figure 6A) resulted in significantly decreased production of these two cytokines (Figure 6B and C) compared with the other LPS-stimulated cell groups (p<0.05).
Discussion
Because of the dearth of molecular markers for UC, colonoscopy with colonic mucosal biopsy is currently routine in the diagnostic evaluation of UC. Biomarkers are considered objective, non-invasive measurements of disease activity.
Alterations in the levels of some serum proteins have been shown to be early signs of altered physiology and may be indicative of disease [18]. In the present study, we identified 12 differential protein spots using MALDI-TOF-MS and obtained nine PMFs, which were identified through searches of the SWISS-PROT and NCBI nr databases. Among these identified proteins, six (heat shock factor protein 2, haptoglobin, apolipoprotein C-III, receptor tyrosine kinase, aldehyde reductase and pericentriolar material 1) were found to be up-regulated, and three (keratin 1, filamin A-interacting protein 1 and tropomyosin 3) were found to be down-regulated.
Over recent years, evidence has accumulated that HSF1 and HSPs are very important for the repair of the colonic mucosal epithelium in inflammatory bowel disease and that they suppress proinflammatory genes relevant to its pathogenesis [19-22]. However, little is known about the function of HSF2 in the pathogenesis of UC; most studies on HSF2 have concerned protein misfolding diseases, the delaying of aging, and the development of embryos and sperm [23-25].
Based on these background facts and the findings of the proteomic analysis, the up-regulated protein HSF2 was selected for further validation in the progression of UC. UC begins in the rectum and spreads variably to the proximal colon [26], and is characterized by continuous lesions, crypt abscesses and abnormal branching [27]. Crypt abscesses are early lesions observed in inflammatory bowel disease, particularly in UC [28]. To some extent, crypt cells serve as a protective barrier between noxious stimuli and the sterile host environment. Exposure to such noxious stimuli may result in increased proliferation of crypt cells and secretion of enzymes, inflammatory cytokines and HSPs [29]. The results of the current study showed that the expression of HSF2 was up-regulated in the serum and intestinal mucosa of UC patients, suggesting that multiple stimuli might trigger human stress responses and that high expression of HSF2 enhances the stress response. The expression of HSF2 in the intestinal mucosa of UC patients was significantly higher than in normal controls, suggesting that HSF2 may be involved in the repair of the colonic mucosal epithelium through activation of protective proteins in response to intestinal mucosal damage.
Recently, HSF1 has been shown to inhibit the expression of proinflammatory cytokines such as TNF-α and IL-1β by regulating the expression of HSPs and suppressing key transcription factors of inflammatory signaling pathways, such as NF-κB and AP-1 [30]. The current data showed that serum concentrations of HSF2 were positively correlated with two proinflammatory factors, TNF-α and IL-1β. After down-regulation of HSF2 expression in Caco-2 cells by RNA interference, the secretion of these two cytokines stimulated by LPS increased dramatically, while enhanced expression of HSF2 by plasmid transfection resulted in significantly decreased production, suggesting that HSF2 might directly or indirectly affect inflammation-related transcription factors and down-regulate inflammatory cytokines to resolve inflammation.
It is important to understand the pathogenesis of UC and to identify specific biomarkers and biological therapeutic targets [31,32]. Our results showed that HSF2 was over-expressed in UC and that the increase paralleled disease severity, suggesting that HSF2 might be an endogenous protective factor against UC. This study supports HSF2 as a potential novel molecular marker for UC and provides a basis for identifying novel biological therapeutic targets.
"Medicine",
"Biology"
] |
Non-Invasive measurement of the cerebral metabolic rate of oxygen using MRI in rodents
Malfunctions of oxygen metabolism are suspected to play a key role in a number of neurological and psychiatric disorders, but this hypothesis cannot be properly investigated without a non-invasive in-vivo measurement of brain oxygen consumption. We present a new way to measure the Cerebral Metabolic Rate of Oxygen (CMRO2) by combining two existing magnetic resonance imaging techniques, namely arterial spin labelling and oxygen extraction fraction mapping. The method was validated by imaging rats under different anaesthetic regimes, and its output correlated strongly with glucose consumption measured by autoradiography.
Introduction
The brain accounts for around 20% of a human's energy consumption, and hence requires a similar proportion of the body's oxygen supply 1,2. There is great interest in being able to quantitatively map the Cerebral Metabolic Rate of Oxygen (CMRO2), both as a marker of pathology and for the study of healthy ageing 3-6. Although methods exist using oxygen isotopes with either Magnetic Resonance (MR) spectroscopic imaging or Positron Emission Tomography (PET) 7-9, it would be advantageous to use proton-based Magnetic Resonance Imaging (MRI) methods because of their low invasiveness, lower cost, and wider availability. Recent years have seen the emergence of methods including whole-brain measurement of CMRO2 using a combination of T2-mapping and phase-contrast velocity measurements 10,11, voxel-wise mapping using quantitative Blood Oxygenation Level Dependent (qBOLD) imaging 12, BOLD calibrated with gas administration 13,14, and high-resolution mapping methods based on Quantitative Susceptibility Mapping (QSM) 12,15.
For this study we implemented a straightforward and robust method to measure CMRO2, which combines measurements of Cerebral Blood Flow (CBF) and Oxygen Extraction Fraction (OEF) made with a pre-clinical MRI scanner. We calculated CBF maps using Arterial Spin Labelling (ASL) 16. OEF maps were constructed by measuring the reversible transverse relaxation rate R2′, which is related to the concentration of deoxyhaemoglobin (dHb) 17-19. We demonstrated our method by imaging rats under two anaesthetics known to affect brain metabolism differently, and compared these MRI measurements with gold-standard autoradiography measurements of glucose metabolism under the same anaesthetics. Although our MRI method underestimated metabolism, we could still detect a relative effect between anaesthetics.
Ethics statement
Study procedures were conducted in accordance with the Animals (Scientific Procedures) Act 1986 and with ethical approval from the King's College London Animal Welfare and Ethical Review Body (AWERB) under the authorisation of licence number P023CC39A. All harm to animals was prevented as procedures were performed under terminal anaesthesia. Animals were group-housed under standard laboratory conditions with food and water freely available. There were no exclusion criteria for the animals.
Theory

CMRO2, here measured in µmol/100g/min, is defined as the product of CBF, measured in ml/100g/min, and OEF, multiplied by the constant C_a, which describes the amount of oxygen carried in arterial blood:

CMRO2 = C_a × CBF × OEF.

Throughout this paper we use a value of C_a = 8.48 µmol/ml, calculated from the values for mice given in Gagnon et al. 20. Typical values used for healthy humans are 8.04 and 8.33 µmol/ml 13,21.
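A minimal sketch of this definition, assuming CBF in ml/100 g/min and OEF as a fraction (the voxel values shown are illustrative, not measured):

```python
C_A = 8.48  # oxygen content of arterial blood [umol O2 / ml blood]

def cmro2(cbf, oef, c_a=C_A):
    """CMRO2 [umol/100 g/min] from CBF [ml/100 g/min] and OEF [fraction]."""
    return c_a * cbf * oef

# Single-voxel example with plausible rodent values (illustrative only)
print(cmro2(cbf=120.0, oef=0.25))  # ~254 umol/100 g/min
```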
The measurement of CBF (in ml/100g/min) with ASL is a well-established MR method 16,22. We chose to measure OEF from R2′, which is defined as the difference between the combined relaxation rate R2* and the irreversible relaxation rate R2 (R2* = R2 + R2′), where relaxation rates are the inverses of relaxation times (R2′ = 1/T2′). MR images can be acquired with T2′-weighting using an Asymmetric Spin-Echo (ASE) sequence, in which the refocusing pulse is offset from the standard spin-echo time, TE/2, by an echo-shift τ/2 that can be either positive (the pulse occurs later than TE/2) or negative (the pulse occurs earlier than TE/2) 18. Echoes formed at the same TE but different τ hence have the same T2-weighting but different amounts of additional T2′ (or R2′) weighting. By observing the signal in each voxel at multiple τ values, we can measure a mono-exponential R2′, just as we would measure R2 from multiple values of TE.
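As a rough illustration of the mono-exponential idea (before the full qBOLD treatment below), the sketch fits R2′ by linear regression on the log-signal across τ values. The simplified form S(τ) = S0·exp(−R2′|τ|) and the numbers used are assumptions for demonstration, not the fitting procedure actually used in the study.

```python
import numpy as np

def fit_r2prime(tau, signal):
    """Mono-exponential R2' from ASE signals at fixed TE and varying tau.

    Assumes S(tau) = S0 * exp(-R2' * |tau|) away from tau = 0, fitted
    by linear regression on log(signal); illustrative only, the actual
    analysis uses the full qBOLD fit of Eq. (2).
    """
    slope, intercept = np.polyfit(np.abs(tau), np.log(signal), 1)
    return -slope, np.exp(intercept)  # R2' [1/s], S0

tau = np.array([8, 16, 24, 32, 40, 48, 56]) * 1e-3  # echo shifts [s]
signal = 1.0 * np.exp(-30.0 * tau)                  # synthetic data, R2' = 30/s
r2p, s0 = fit_r2prime(tau, signal)
print(f"R2' = {r2p:.1f} s^-1")
```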
However, in brain tissue the observed signal at τ = 0 is less than would be expected from extrapolating the signal curve for τ ≠ 0 back to the origin. This discrepancy can be attributed to static dephasing of spins in susceptibility gradients. The principal biological contributor to such gradients is the presence of deoxyhaemoglobin (dHb) in capillaries and draining veins 17. In preference to the asymptotic equations used by Stone and Blockley 18, we adapt the full qBOLD equation from He and Yablonskiy 23,

S(τ) = S0 · exp(−DBV · f(δω·τ)),

where f(·) is the integral expression defined therein and δω = R2′/DBV is the characteristic frequency. We have neglected the dependence of S0 on TE and T2 for clarity. The OEF can then be found from

OEF = R2′ / (DBV · (4/3)·π · γ · B0 · δχ0 · Hct),

where γ = 2π × 42.577 MHz/T is the proton gyromagnetic ratio, B0 is the magnetic field strength, δχ0 = 0.264 × 10⁻⁶ is the susceptibility difference between oxygenated and deoxygenated blood cells, and we used a haematocrit (Hct) value of 0.34 19.
In previous clinical studies it has been possible to estimate DBV from the ASE data 19. We found that we could not reliably fit the data for both DBV and OEF at 9.4 T, and hence fixed the value of DBV at 3.3% (see Discussion) 23.
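Putting the pieces together, a voxelwise R2′-to-OEF conversion with DBV fixed at 3.3% might look like the following sketch; it uses the relation reconstructed above and the constants given in the text, and is illustrative rather than the QUIT implementation actually used.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.577e6   # proton gyromagnetic ratio [rad/s/T]
B0 = 9.4                       # field strength [T]
DCHI0 = 0.264e-6               # susceptibility difference, oxy/deoxy blood
HCT = 0.34                     # haematocrit
DBV = 0.033                    # deoxygenated blood volume, fixed (see text)

def oef_from_r2prime(r2prime):
    """OEF from R2' [1/s], assuming
    R2' = DBV * (4/3)*pi * gamma * B0 * dchi0 * Hct * OEF."""
    return r2prime / (DBV * (4.0 / 3.0) * np.pi * GAMMA * B0 * DCHI0 * HCT)

print(f"OEF = {oef_from_r2prime(10.0):.2f}")  # ~0.32 for R2' = 10 s^-1
```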
R2′ is affected not only by deoxygenated blood but by any source of susceptibility gradients. The principal of these are background or Macroscopic Field Gradients (MFGs) from air/tissue interfaces, which can be corrected with Z-shimming 18,19. A Z-shim is an additional small gradient played out during spin-echo formation, which partially rephases the signal in voxels affected by MFGs but dephases the signal in unaffected voxels 24,25. By acquiring and combining multiple images with different Z-shims, the signal lost to MFGs can be restored across the whole image without affecting the signal attenuation from sub-voxel susceptibility gradients due to deoxygenated blood 19. In the human brain the largest MFGs are found above the nasal sinus, where air is closest to the parenchyma, and hence the largest susceptibility gradient lies in the Z (axial; in humans, superior-inferior) direction. In rodents, the largest air spaces within the head are the mastoids, and in addition the skull and the tissue surrounding the brain are significantly thinner than in humans. We found that gradients in the Y (in animals, superior-inferior) direction were consequently also a significant issue, and so added shimming in both the Z and Y directions.
Imaging protocol
A total of ten adult male healthy Sprague-Dawley rats (440-537 g; Charles River) were imaged in a 9.4 Tesla pre-clinical MR system using a four-channel head receive coil, transmit body coil and separate ASL labelling coil (Bruker GmbH). All rats were initially anaesthetised by inhaling 5% isoflurane in an 80:20 mix of air and medical oxygen. Five of the rats were maintained with 2.5% isoflurane for the duration of scanning, while the remaining five received a bolus of 65 mg kg−1 alpha-Chloralose (α-Chloralose) solution in saline, administered through a tail-vein cannula, followed by continuous infusion at a rate of 30 mg kg−1 h−1.
For ASL we used the manufacturer's Continuous ASL (CASL) sequence with a spin-echo Echo Planar Imaging (EPI) readout 22. The matrix size was 96×96 with 18 axial (rostro-caudal) slices, 0.26×0.26×1.5 mm voxel size, TE/TR = 13.5/4000 ms, partial Fourier 66%, label time 3000 ms, post-label time 300 ms 28,29, and 30 pairs of label/control images, for a scan time of 4 minutes. The labelling plane was positioned 5 mm behind the carotid artery bifurcation, which was located using a localizer scan acquired with the labelling coil as per the manufacturer's instructions. Two single-volume reference scans were acquired using the same sequence settings and no labelling power, one of which had a reversed phase-encode direction (see below).
For the ASE sequence we modified the manufacturer's spin-echo EPI sequence to allow the 180° refocusing pulse to be offset by τ/2 as defined above. The matrix size and resolution were matched to the ASL sequence, but with TE/TR = 70/1800 ms. Partial Fourier was switched off to minimise any intensity modulation from the echo moving out of the acquisition window in the readout (X, left-right) direction 30. Twelve values of τ, spaced from −32 to 56 ms, were acquired. At each, five Z-shims equally spaced from G_Z = −0.8 to 0.8 mT m−1 and nine Y-shims from G_Y = −1.2 to 1.2 mT m−1 were used. The Z-shim was incorporated into the slice-rephase gradient, which lasted 2 ms, and the Y-shim was played out at the same time.
The ASE scan lasted for 16 minutes and 12 seconds.
Image processing and analysis
Image processing was carried out using a combination of FSL 5.0.1 31, ANTs 2.1.0 32 and QUIT 3.3 33. Briefly, the complex MP2RAGE structural images were first coil-combined 27 and then converted into both a T1 map and a uniform-contrast image 34. From these, a study-specific template image was constructed 35, which was in turn registered to an atlas image 36. The CASL images were corrected for motion 37 and susceptibility distortions 38, and then converted into a CBF map using the BASIL tool 39. The T1 of blood was set to 2.429 s 40, the labelling efficiency was set to 80%, and the distortion-corrected reference image was used as the proton density image during CBF quantification 41. The reference image was registered to the MP2RAGE structural image.
The ASE images with different Z- and Y-shims were first combined by taking the Root Sum-of-Squares (RSS) 42. To avoid noise amplification artefacts, we calculated the mean squared intensity in a background region and subtracted this from the sum-of-squares images before taking the square root 43. The resulting shimmed ASE images were then motion- and distortion-corrected using the ASL reference data. The OEF was found from the corrected data by a non-linear fit to Equation 2, implemented in QUIT 33. We found that our images were too noisy to reliably fit the parameter DBV, which is thought to be of the order of a few percent; to improve the quality of the fit for the remaining parameters we hence fixed DBV = 3.3% 23. We also observed that in certain brain regions the peak of our signal curve did not occur precisely at τ = 0, and hence introduced an additional parameter ΔT to account for this. The final free parameters were R2′, S0 and ΔT, from which the parameters T_c, dHb and, most importantly, OEF could be derived. The resulting OEF and CBF maps were multiplied together and by C_a to produce the CMRO2 map. The parameter maps were resampled into the template space and average ROI values extracted using the template-specific masks.
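The noise-suppressed RSS combination described here is straightforward to sketch; the following is a minimal illustration (array shapes and the background mask are placeholders, not the study's processing code).

```python
import numpy as np

def rss_combine(images, background_mask):
    """Root-sum-of-squares combination of Z-/Y-shimmed ASE images with
    noise-floor suppression: subtract the mean squared background
    intensity before taking the square root (clipped at zero)."""
    sos = np.sum(np.square(images), axis=0)   # sum of squares over shims
    noise_sq = np.mean(sos[background_mask])  # mean squared background level
    return np.sqrt(np.clip(sos - noise_sq, 0.0, None))

# Illustrative use: 45 shim images (9 Y-shims x 5 Z-shims) per tau value
rng = np.random.default_rng(2)
images = rng.rayleigh(1.0, size=(45, 96, 96))
mask = np.zeros((96, 96), dtype=bool)
mask[:8, :8] = True  # hypothetical background corner
combined = rss_combine(images, mask)
```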
Autoradiography protocol and analysis

To assess regional brain glucose metabolism we performed 14C-2-deoxyglucose (2DG) autoradiography, which measures Glucose Utilisation (GU) in µmol/100g/min, as originally described by Sokoloff 44. We used a separate cohort of ten adult male Sprague-Dawley rats (weight 325-380 g). All were initially anaesthetised for approximately 30 minutes with 2.5-3% isoflurane (in 80/20 medical air/oxygen) in order to cannulate their femoral and tail blood vessels for blood sampling and compound administration, respectively. After the cannulation, a local anaesthetic was applied and the wound sutured.
Isoflurane was then set to 2.5% for five rats. In the remaining rats, isoflurane was terminated and an intravenous bolus of 65 mg kg−1 α-Chloralose was administered, followed by infusion at 30 mg kg−1 h−1 for the remainder of the experiment 45. Body temperature was maintained at 36 ± 0.5°C using a thermostatically controlled electric heating blanket and rectal probe.
Between 30 and 40 minutes were allowed for the rats to stabilise, after which we intravenously administered 100 µCi/kg 2DG (Perkin Elmer, USA) over 30 s and collected 14 timed arterial blood samples 46 over 45 minutes. After the final blood sample the animals were decapitated. Their brains were removed, frozen in −40°C isopentane, and stored at −80°C. Quantification of plasma glucose and 14C was carried out using a blood glucose analyser (YSI 2300) and a scintillation counter (Beckman Coulter LS 6500), respectively. Brains were cryosectioned at 20 µm and exposed to X-ray film (Kodak Biomax MR-2) alongside calibrated 14C standards (GE Healthcare UK) for 7 days, after which the films were developed in an automated X-ray film processor. Images were digitised using a Nikon single-lens reflex camera with a macro lens over a Northern Lights illuminator (InterFocus Ltd, UK). Brain GU was calculated from the optical densities in the films using a calibration curve and the plasma glucose levels, according to Sokoloff 44. We measured GU in eleven ROIs matching those chosen from the MRI atlas, located at approximately +1, −3.5 and −8 mm from Bregma 47. Readings for each ROI were taken bilaterally from two or three adjacent brain sections and then averaged. The analyst was blinded to anaesthetic group.
Statistical analysis
For statistical analyses we used the Python libraries pandas 1.0.5 and statsmodels 0.11.1 48. The mean ROI values for each anaesthetic were compared with a non-parametric Mann-Whitney U-test with False Discovery Rate (FDR) multiple-comparisons correction. Finally, we compared our MRI oxygen metabolism measurements to the glucose metabolism measurements using a Robust Linear Model analysis of CMRO2 against GU. In this model, the slope of the line is the number of oxygen molecules consumed per molecule of glucose during metabolic activity, while the intercept gives the amount of oxygen that would be consumed if no glucose were being consumed. Robust regression was used because the residual variance was inhomogeneous across the metabolic range. As our experimental design did not use the same animals for both the CMRO2 and GU experiments, the measurements for each ROI were averaged across subjects (but not anaesthetics) before the regression, yielding a total of 22 data points for this analysis. For all analyses, a p-value of less than 0.05 was considered significant. ROI data and group average data are available in Underlying data 49.
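A minimal sketch of this analysis pipeline using the named libraries (Mann-Whitney U per ROI, FDR correction, and a robust linear model of CMRO2 against GU); all data below are placeholders, and the ROI names are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import fdrcorrection
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical per-ROI CMRO2 values for each anaesthetic group (n=5 each)
rois = ["Cortex", "Hippocampus", "Thalamus"]
iso = {r: rng.normal(300, 30, 5) for r in rois}  # isoflurane
chl = {r: rng.normal(170, 30, 5) for r in rois}  # alpha-Chloralose

# Mann-Whitney U per ROI, then FDR correction across ROIs
pvals = [mannwhitneyu(iso[r], chl[r]).pvalue for r in rois]
rejected, p_fdr = fdrcorrection(pvals, alpha=0.05)

# Robust linear model of CMRO2 against GU (placeholder ROI means)
gu = np.array([60.0, 45.0, 70.0, 35.0])         # glucose utilisation
cmro2 = np.array([180.0, 130.0, 200.0, 100.0])  # oxygen consumption
rlm = sm.RLM(cmro2, sm.add_constant(gu)).fit()
print(rlm.params)  # intercept and slope (the oxygen-glucose index)
```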
Pre-processing
Figure 1 and Figure 2 show a single slice through all the raw ASE images collected with different values of Z- and Y-shims at τ = 0 and τ = 56 ms, respectively. The central images have both G_Y and G_Z equal to zero; in Figure 1 this is a simple unshimmed symmetric spin-echo image. In Figure 1 only the central, low-value shims contain significant signal and the extreme shims are mostly noise, whereas in Figure 2 the unshimmed image is mostly noise and the signal has shifted towards negative values of G_Y and G_Z.
Figure 3 shows the result of combining all the different shim images via RSS, both with and without noise suppression. Without suppression, amplification of the Rician noise is so severe that the background has almost the same intensity as the imaged brain. Subtracting the mean squared background intensity before the square-root operation restores the correct noise properties, with crisp contrast between the brain and background regions.
Group comparisons
Figure 4 shows the results of the model fit to the shimmed ASE data. R2′ appears slightly higher in animals anaesthetised with α-Chloralose. Residual elevated R2′ can be observed surrounding the mastoid cavities and in a thin layer around the brain, where the Z- and Y-shimming was insufficient to correct extreme MFGs. The Root Mean Square Error (RMSE) is flat across most of the brain, indicating a reasonable model fit, but is elevated in white matter and cerebrospinal fluid (CSF), indicating that the model fits less well in these areas. ΔT is increased towards the lower front of the brain.
Figure 5 shows the mean OEF, CBF and CMRO2 for isoflurane and α-Chloralose anaesthesia. The OEF is higher under α-Chloralose. Areas with elevated R2′ due to MFGs also show artefactually high OEF. CBF is much lower under α-Chloralose anaesthesia than under isoflurane. The Inferior Colliculus shows elevated CBF compared with other brain regions. CMRO2 is consistently higher under isoflurane than under α-Chloralose.
In Figure 6 we display glucose consumption under both anaesthetics. Similarly to the MRI data, glucose metabolism is clearly reduced under α-Chloralose compared with isoflurane, and the Inferior Colliculus displays elevated metabolism compared with the rest of the brain. Table 1 gives the mean and standard deviation across subjects of each ROI for OEF, CBF, CMRO2 and GU; Figure 7 shows the same data graphically. CMRO2, GU and CBF were all lower under α-Chloralose than under isoflurane, while OEF was generally higher under α-Chloralose. These effects were strong and consistent for both CBF and GU, with perfect separation between α-Chloralose and isoflurane (i.e. all values in one group higher or lower than in the other), with the exception of Inferior Colliculus glucose consumption (Mann-Whitney U=22, FDR-corrected p=0.17). For OEF there was some overlap between the groups; in particular the Hypothalamus showed equal OEF (U=12, FDR-corrected p=1). CMRO2 hence showed a smaller separation than GU or CBF, which despite large non-parametric test statistics did not survive multiple-comparisons correction (majority of ROIs U≥24, uncorrected p≤0.017, FDR-corrected p=0.07).
Finally, Figure 8 shows the result of regressing CMRO2 against GU for the different regions of interest (averaged across subjects). The slope of the line of best fit was 2.74 (p < 0.001, 95% CI 1.96 to 3.53).
Discussion
The above results demonstrate that CMRO2 can be measured in rats using a combination of ASE and ASL images. The method requires neither a gas challenge 50,51 nor the administration of expensive isotopes 9, and hence has the potential to be a cheap, widely available alternative to gold-standard PET measurements. Little et al. demonstrated similar findings using separate measures of R2 and R2* to obtain OEF, instead of the single measurement of R2′ from the ASE scan 52. In humans, qBOLD has been combined with QSM to estimate CMRO2 from a single multi-echo gradient-echo scan 12. That method shows promise, but the required modelling and processing are extremely complex. In contrast, after correction for MFGs, the ASE method only requires a fit of Equation (2) to the data. We then combined our measurement of OEF with CBF measured by ASL to generate a map of CMRO2 under two common anaesthetics known to have different effects on brain metabolism. By using a dedicated labelling coil and correcting our multi-slice 2D data with the correct post-label delay, we obtained full-brain maps of CBF 39,53.
There were numerous technical challenges in implementing the ASE method at ultra-high field (9.4 T) and within the small dimensions of a preclinical system compared with previous clinical work. Foremost, MFGs were highly problematic, and adequately correcting them involved a large number of trade-offs which prevented full correction across all regions of the brain. Notably, we observed strong gradients in all three geometric directions, which required the implementation of shimming in both the slice-select (Z) and phase-encode (Y) directions. Providing an adequate number of shims required acquiring a total of 45 images per value of τ (nine Y-shims multiplied by five Z-shims), significantly more than the eight images that were adequate in a clinical setting 19. Including shim gradients in the readout direction (X) might have further reduced MFG artefacts, but at the expense of additional scan time.
Thinner slices would also reduce the impact of the MFGs, but would lower the signal-to-noise ratio (SNR) and brain coverage. Acquiring more slices would be problematic for the ASL scan, where the maximum number is determined by the time between the end of the post-labelling delay and the end of TR. Increasing TR to accommodate more slices would increase the ASL scan time further and lead to very different post-labelling times for different slices.
As shown in Figure 2, naïve RSS combination of the different shims leads to amplification of the Rician noise in low-signal areas. We could not use the Fourier transform approach to shim combination taken by Stone and Blockley 19, as the necessary reconstruction methods were not available from the manufacturer. Subtracting the average noise level from the squared magnitude images restored an adequate level of SNR. It is possible that using a method that accounts for the multi-channel nature of these data could improve the SNR further 54,55. We opted to use the former value. Fixing DBV in this manner is not ideal, as it will vary with pathology 52. We hypothesise that adjusting the protocol to acquire fewer intermediate values of τ and additional images at high values and near τ = 0 could improve the sensitivity to both DBV and R2′ 58. Increasing the maximum value of τ would necessitate either a corresponding increase in TE, which would reduce SNR and increase the effects of MFGs 19, or the use of Partial Fourier acceleration, which we found caused unacceptable blurring and intensity artefacts from the echo moving out of the EPI acquisition window 30. Using an alternative readout such as spiral imaging may mitigate these downsides. Such optimization was beyond the scope of the current work.

(Figure 5 caption: mean OEF, CBF and CMRO2 maps for both anaesthetics. CMRO2 is lower under α-Chloralose; however, this is driven by a significant reduction in CBF, as OEF is actually higher under α-Chloralose than isoflurane. Note that the slice through the inferior colliculus (marked with green arrows) for CMRO2 has a different color scale owing to the much higher rate of metabolism compared with the other slices.)
The introduction of the parameter ΔT, representing either early or late arrival of the spin-echo peak, improved the stability of our fit at the edges of white matter and towards the lower front portion of the brain. We hypothesise that these shifts were due to uncorrected MFGs, either in the X-direction, causing signal to shift away from the k-space centre at τ = 0 30,59, or simply insufficient correction in the Y- and Z-directions. Further investigation of this was beyond the scope of the current work. We observed both an increase in OEF and an increase in root-mean-square error in white-matter regions, indicating that the current model does not properly account for the effects of myelin, which has a different susceptibility to other brain tissue 60.
Comparing our CBF values with previous literature 62, we conclude that our CBF measurement broadly agrees with previously reported values. Previous estimates of OEF in healthy humans are 35%, measured with calibrated gas administration 13, and 21% using the same ASE method used here 19, although we note that those authors acknowledged that their choice of linear fitting deliberately underestimates OEF. Hyder et al. found an average grey matter OEF of 40% using PET imaging 63. In rats, Little et al. measured OEF using separate measurements of R2 and R2* 52. Previously reported measures of CMRO2 in rats under α-chloralose anaesthetic include 151 7, 184 64, 200 65, 208 66 and 219 67 µmol/100g/min. Our reported CMRO2 is approximately half the most commonly reported values, almost certainly owing to the underestimate of OEF identified above.
Further evidence for this underestimation comes from the work of Hyder et al., who measured CMRO2 and GU in awake humans using PET and found an Oxygen-Glucose Index (OGI) of 5.3. Under purely aerobic metabolism, the stoichiometric ratio of six molecules of oxygen to one molecule of glucose would give an OGI of six 68,69. The average OGI is equivalent to the slope of our CMRO2/GU regression line, which we found to be only 2.74. Although inter-species and anaesthetic effects cannot be discounted, we attribute this discrepancy to our underestimation of OEF. As discussed above, a likely cause of this underestimation is the choice of equally spread τ values, and an optimized protocol may lead to more accurate OEF estimation.
It is also possible that some of the constants chosen here are incorrect. An obvious candidate is the chosen value of DBV, as this roughly scales the estimate of OEF. However, bringing our CMRO2 estimates into agreement with literature values would require approximately halving DBV, which would then make the estimate of DBV itself significantly different from previous literature.
Despite this underestimation, we confirmed the expected differential effect of the anaesthetics on cerebral metabolism, with close to double the rate of oxygen consumption under isoflurane compared with α-Chloralose. We note that the difference in CMRO2 was driven primarily by the difference in CBF, which was three times higher under isoflurane, while OEF was only reduced by a quarter compared with α-Chloralose. This is in line with the notion that mitochondria require a particular gradient of tissue oxygenation: because less oxygen is removed from the blood during higher flow (decreased capillary transit time), it follows that OEF decreases as CBF and CMRO2 increase 70.
Conclusions
We implemented a non-invasive MRI method for measuring CMRO2 in rats which can be easily translated to clinical scanners. Methodological difficulties prevented measurement of DBV, and we likely underestimated OEF, but future optimizations may be able to overcome these limitations. Nevertheless, the relative CMRO2 differences between anaesthetics were observed, suggesting the utility of this relatively simple method for preclinical studies comparing the metabolic effects of treatments or pathologies.
1. Regarding the shift parameter, ΔT: its presence was surprising, since it would seem to imply that the signal when τ ≠ 0 can be greater than the signal at τ = 0 (i.e., at the spin-echo itself). I did not originally list any ways that the authors could try to investigate the origins of the shift because I did not want to impose one approach when the authors may have preferred another, which could have been equally appropriate. Some combination of the techniques I had in mind were (I am not asking that you perform all of these suggested analyses!):

a) Probably the most direct test would be to simulate/calculate the asymmetric spin-echo sequence (ignoring image encoding) with a range of τ offsets in a voxel with or without a static field inhomogeneity across it. Next, add various magnitudes of blipped shims. How does the "decay" curve of signal vs. τ change with shimming? Does the maximum shift from τ = 0 in any of the simulations? If you do a sum-of-squares combination of the shimmed signals, do you see a shift then? For simplicity, simulations in one dimension, with a linear gradient inhomogeneity, without diffusion, and completely ignoring any effect of CBV/dHb should suffice to isolate the effect of shimming/macroscopic field inhomogeneity.
b) Alternatively, using the data on hand, consider voxels that showed a large shift parameter and compare them to voxels that had a negligible shift parameter in some of the following ways. Look at the entire "decay" curve of signal vs. τ without shimming: is the maximum shifted from τ = 0? How do these decay curves look at various shim levels? Does the maximum shift? What happens to the signal at τ = 0 with shimming? Does the signal for τ ≠ 0 actually exceed the signal at τ = 0 (unshimmed) for some shimming value?

c) Again using the experimental data, is the shift something like a fitting artifact? For voxels with large ΔT, are the decay curves vs. τ steep or shallow (relative to the peak intensity at ΔT)? If shallow, i.e., R2′ is small, could the peak shift be the result of trying to fit a peak to a relatively flat curve plus noise? Is there some correlation between R2′ and ΔT that may support this?

2.
Minor Comment:
In the Imaging Protocol subsection of the Methods, the authors state, "For the ASE sequence we modified the manufacturer's spin-echo EPI sequence to allow the 180° refocusing pulse to be offset by τ as defined above."Is the refocusing pulse offset by τ or τ/2 (resulting in shifts of the echo of 2τ or τ, respectively)?
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: biophysical modelling of the blood oxygenation level-dependent (BOLD) fMRI signal, high-resolution fMRI, MRI pulse sequence development. I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard; however, I have significant reservations, as outlined above.
Regarding the R2′ fitting parameter ΔT, which was introduced in the original manuscript to account for shifts in the signal maximum away from the nominal spin-echo image at τ = 0 (comment 2 in my previous review): in their response, the authors suggested that this could be due to large macroscopic field inhomogeneity along the non-shimmed direction (x-axis). I am still not convinced that this is the source of the shift, since the whole point of the refocusing pulse is to refocus the dephasing that arises from these static field inhomogeneities at the spin-echo. The reference that was added, from Chen et al., specifically discusses gradient-echo imaging, and although its relevance to asymmetric spin-echo was mentioned, its theory does not apply to the spin-echo. I think there are multiple ways one could investigate the nature of this shift with the data on hand or with some straightforward modelling. This may not, in the end, affect the fitted R2′ values, but I think it would go a long way towards solidifying the methodological underpinnings of the proposed R2′/OEF mapping.
the source of the R2′ underestimation we observed, which occurred over the whole brain and is by far the more important issue, to which we have now dedicated significant space in the discussion. The reviewer suggests in his reply that there are multiple ways to investigate the nature of the ΔT shift, but a list of such ways is missing; hence it is not possible to respond further to this comment.
Competing Interests: No competing interests were disclosed.
Version 1
Reviewer

CMRO2 measurements were taken in rodents under two different anaesthetic conditions. The authors demonstrate that CMRO2 is lower under α-Chloralose anaesthesia compared to isoflurane anaesthesia. The metabolic activity was validated by measuring glucose consumption using autoradiography under the same two anaesthetic conditions. The glucose metabolic rate is also lower under α-Chloralose anaesthesia. The study design resulted in a clear separation of OEF, CBF, CMRO2, and glucose utilisation (GU) under the two anaesthetic conditions in almost all brain regions, demonstrating the successful implementation of this non-invasive method.
Minor comments: Could the authors comment on the effect of fixing the deoxygenated blood volume (DBV) in the OEF measurements, particularly considering that there is a significant difference in CBF between the two anaesthesia groups, which may influence the DBV value?

Reviewer Expertise: MRI - Arterial Spin Labelling.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.

The novelty here lies in (i) the variable shimming approach used to reduce the effect of macroscopic field inhomogeneities that otherwise contaminate the contribution to R2′ from deoxyhemoglobin in the vasculature, (ii) the application of this approach to compare various brain physiological parameters (OEF, CBF, CMRO2) under two different anesthetics (isoflurane and α-Chloralose), and (iii) the comparison of CMRO2 to glucose uptake (GU) by autoradiography.
The study shows large differences in brain physiology under the different anesthetics, consistent with previous findings. The final finding, a fitted CMRO2:GU ratio of 6.4, is quite remarkable, considering that the theoretical ratio is 6 (i.e., 6 molecules of O2 consumed per glucose during oxidative glycolysis). I do have significant concerns, however, with the quantification of R2′, given that multiple previously unpublished techniques were employed that I think would benefit from further validation. Because the OEF and CMRO2 quantification rely on the estimate of R2′, this has important consequences for the comparisons of the absolute OEF and CMRO2 values across the two anesthetics and for the absolute CMRO2:GU ratio.
1. To the best of my knowledge, the way the shimmed images were combined through root sum of squares is a previously unpublished approach. Similarly, the proposed noise-floor correction via subtraction of the mean-squared noise signal in the background is also novel to this study. While the noise subtraction does appear to improve the contrast-to-noise ratio, it is not apparent that the desired absolute signal levels are preserved. Since these images are later used for absolute quantification of R2′, it is important that the signal levels are not systematically biased, or, if they are, that the bias can at least be characterized. It would be reassuring to see a validation of these new techniques.
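A quick Monte-Carlo check along these lines can show whether absolute signal levels survive the two processing steps. The sketch below uses assumed per-shim signal levels and noise variance (not the study's data): it builds magnitude images with complex Gaussian noise, forms the naive root-sum-of-squares combination, and then subtracts the mean-squared background signal before taking the square root.

```python
# Toy Monte-Carlo check of whether absolute signal levels survive the two
# novel steps: RSOS combination of magnitude images and noise-floor
# correction by subtracting the mean-squared background signal.
# Per-shim signals and noise level are assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
true_signals = np.array([1.0, 0.6, 0.3, 0.1, 0.0])   # per-shim voxel signals (assumed)
sigma = 0.05                                          # noise std per channel (assumed)
n = 200_000                                           # Monte-Carlo repetitions

def magnitudes(signals):
    """Magnitude images: |signal + complex Gaussian noise|."""
    noise = (rng.normal(0, sigma, (n, signals.size))
             + 1j * rng.normal(0, sigma, (n, signals.size)))
    return np.abs(signals + noise)

mags = magnitudes(true_signals)
rsos_naive = np.sqrt((mags**2).sum(axis=1))           # biased: E[M^2] = S^2 + 2*sigma^2

floor = (magnitudes(np.zeros(5))**2).mean()           # mean-squared background noise
rsos_corr = np.sqrt(np.clip((mags**2 - floor).sum(axis=1), 0, None))

truth = np.sqrt((true_signals**2).sum())
print(f"true RSOS      : {truth:.4f}")
print(f"naive RSOS     : {rsos_naive.mean():.4f}  (biased high)")
print(f"corrected RSOS : {rsos_corr.mean():.4f}")
```

The residual bias introduced by clipping negative values at very low signal-to-noise can be characterized in the same way, which is the kind of validation requested here.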
2. The authors introduced the R2′ fitting parameter ΔT to account for shifts in the signal maximum away from the nominal spin-echo time at τ = 0. The magnitude of the shifts, ±25 ms, is dramatic and, to the best of my knowledge, previously unreported. This is possibly related to the shimming and/or the shim-image combination. Again, some form of external validation of this parameter, to better understand its origin, would help build trust in the resulting R2′ fits.
3. Another major limitation of this study is that the R2′ fitting required fixing the value of deoxygenated blood volume (DBV) to 3.3%. While this may have helped stabilize the fit, it would lead to OEF estimates that differ from their true values depending on how the true DBV compares to the assumed value, and this points to issues with the model and/or data. As DBV varies regionally, this limits the validity of the OEF estimates. Could the authors elaborate on how they performed the fitting? In Eq. 2, there is a quadratic decay period for τ < Tc and a linear decay period for τ > Tc. Note that studies have typically used a factor of 1.5·Tc for the transition period, although this depends on how the authors have defined Tc (e.g., refs. 17 or 19). Was data from all periods fit? How was Tc determined without knowing R2′ a priori? In previous quantitative BOLD studies, the linear decay period is extrapolated to the signal at τ = 0 and the difference between this extrapolated signal and the measured signal is proportional to DBV; was this fitting approach used here?
Minor comments:
1. In the opening paragraph, the authors refer to "quantitative BOLD" using calibration with gases. Calibration with gases is referred to as "calibrated" BOLD or fMRI, not quantitative BOLD. Quantitative BOLD is a gas-free technique, on which the methods in this manuscript are based.
2. The values for the arterial O2 concentration, Ca, are based on human physiological parameters. The authors should consider using values for rats; see, e.g., Gagnon et al., J Neurosci (2015).
3. The labels are cropped in Figure 5.
4. In the middle of the Discussion section, the authors compare their OEF estimates to those of He et al., stating that the findings are in line with those of this manuscript. I think stating that the results are in line with each other is a bit generous, as the OEF values in the manuscript differ by 35% (isoflurane) and 44% (α-chloralose) relative to those from He et al.
5. It would be nice to see a discussion of contributions to R2′ contamination beyond macroscopic field inhomogeneities, such as iron deposition or regionally varying myelination, since these may also bias the OEF estimate and would be important to be aware of in (pre)clinical studies.
6. It would be helpful to label some of the key ROIs discussed in the manuscript, in particular the inferior colliculus.
literature; however, the linear extrapolation method used by Blockley et al. cannot detect such shifts even if they are present. In addition, previous literature in this area has used human subjects and lower field strengths. We suspect that strong MFGs in the x-direction, which we could not compensate due to scan-time constraints, may be the underlying cause of such shifts. This putative explanation has been added to the discussion, along with an additional reference (Chen et al.).

We agree that fixing the value of DBV is a limitation of this study. We previously used a value of 1.5·Tc for the transition period, with Tc = DBV/R2′, and a non-linear fit across all data points. We have now updated the code (and the relevant equations in the manuscript) to use the full integral form of Yablonskiy et al., but note that this made very little difference to the fitted values compared to the asymptotic form of the equations. This code is now available in version 3.3 of our toolbox. A likely major contributor to the difficulty in fitting DBV is the relatively small maximum value of τ we could achieve within imaging constraints, together with having only a single τ = 0 image. A more efficient protocol might acquire multiple τ = 0 images and fewer intermediate values of τ. This point has been expanded in the discussion.
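For concreteness, a minimal sketch of the asymptotic two-regime fit discussed in this exchange is given below, using the standard quadratic/linear asymptotes with Tc = 1.5·DBV/R2′. All parameter values, the noise level, and the τ sampling are illustrative assumptions; the released toolbox (version 3.3) uses the full integral form of the model rather than these asymptotes.

```python
# A minimal sketch of the asymptotic two-regime qBOLD fit: quadratic decay
# for tau < Tc and linear decay (with intercept offset DBV) above, with
# Tc = 1.5 * DBV / R2'. Ground-truth values, noise level, and tau sampling
# are illustrative assumptions, not the study's data.
import numpy as np
from scipy.optimize import curve_fit

def qbold_log_signal(tau, log_s0, r2p, dbv):
    """Piecewise asymptotic log-signal; note the two asymptotes do not
    meet exactly at Tc."""
    tc = 1.5 * dbv / r2p
    short = log_s0 - 0.3 * (r2p * tau) ** 2 / dbv   # tau < Tc (quadratic)
    long_ = log_s0 + dbv - r2p * np.abs(tau)        # tau > Tc (linear)
    return np.where(np.abs(tau) < tc, short, long_)

rng = np.random.default_rng(1)
tau = np.linspace(0, 0.06, 25)                      # s; only one tau = 0 point
truth = dict(log_s0=0.0, r2p=6.0, dbv=0.033)        # R2' in 1/s, DBV = 3.3% (assumed)
y = qbold_log_signal(tau, **truth) + rng.normal(0, 0.005, tau.size)

popt, _ = curve_fit(qbold_log_signal, tau, y, p0=[0.0, 5.0, 0.02])
print(dict(zip(["log_s0", "r2p", "dbv"], np.round(popt, 4))))
```

The mismatch of the asymptotes at Tc, and the small number of points in the quadratic regime, illustrate why the full integral form and a denser sampling near τ = 0 help stabilize the DBV estimate.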
Minor comments:
1. Thank you for the clarification; the introduction has been reworded accordingly.
2. We thank the reviewer for bringing this to our attention. Please see the notes on the amendments at the start of the text for a full discussion of this issue.
3. The labels have been corrected in Figure 5.
4. On reflection, we agree with the reviewer. The sentence has been amended to state that the values in He et al. are further evidence that our method currently underestimates OEF.
5. Discussion of the effect of myelination has been added, as it is clear the current model does not account for this.
6. Due to the 3D nature of the ROIs, displaying all of them would require an additional figure. The inferior colliculus has been marked on an existing figure with arrows.
Figure 2. Asymmetric spin-echo data in a single slice at τ = 56 ms, as for Figure 1. For this highly asymmetric spin-echo, the signal energy has shifted towards more negative shim values, and the majority of the signal in the un-shimmed center image has been lost. Without shimming, the signal would be erroneously low.
Figure 1. Raw asymmetric spin-echo data in a single slice at τ = 0 ms for all values of Z- and Y-shims. The signal is concentrated at low shim values, as expected.
Figure 4. Slices through the fitted parameters and residual for the asymmetric spin-echo (ASE) data under both anaesthetics. R2′ values around the mastoid cavities are artifactually high. The model generally fits well across the brain, but the residual is higher in white matter and cerebrospinal fluid. RMSE: Root Mean Square Error.
Figure 3. (A) The asymmetric spin-echo data after combining all shim values via naïve Root Sum-of-Squares. Noise has been amplified to the extent that the image cannot easily be distinguished from the background. (B) Noise suppression restores the signal-to-noise ratio to a reasonable level. The effect of R2′ decay can be observed at high values of τ in cortical veins.
Figure 5. Slices through the mean Oxygen Extraction Fraction (OEF), Cerebral Blood Flow (CBF), and Cerebral Metabolic Rate of Oxygen (CMRO2) for both anaesthetics. CMRO2 is lower under α-Chloralose; however, this is driven by a significant reduction in CBF, as OEF is actually higher under α-Chloralose than under isoflurane. Note that the slice through the inferior colliculus (marked with green arrows) for CMRO2 has a different color scale due to the much higher rate of metabolism compared to the other slices.
Figure 7. Mean values of Oxygen Extraction Fraction (OEF), Cerebral Blood Flow (CBF), Cerebral Metabolic Rate of Oxygen (CMRO2), and Glucose Utilisation (GU) in the chosen Regions of Interest (ROIs) for each subject. CMRO2 and GU are both reduced under α-Chloralose anaesthesia compared to isoflurane. Almost total separation between the two groups was achieved; ROIs and parameters where this did not occur are noted in the text.
Figure 8. Regression analysis of Cerebral Metabolic Rate of Oxygen (CMRO2) against Glucose Utilisation (GU) across the averaged region-of-interest data for both anaesthetics.
Report 04 June 2021. https://doi.org/10.21956/wellcomeopenres.18455.r43881 © 2021 Ohene Y. This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Yolanda Ohene, The University of Manchester, Manchester, UK

Wood et al. have combined two techniques, arterial spin labelling (ASL) and asymmetric spin-echo (ASE), that capture the cerebral blood flow (CBF) and the oxygen extraction fraction (OEF), respectively, to provide a non-invasive measurement of the cerebral metabolic rate of oxygen (CMRO2).
Introduction, Paragraph 1:
Figure 5: Labelling of the coloured bars is overlapping in the bottom-row images.
Table 1. Mean and standard deviation of each parameter value in each Region of Interest (ROI), and the average across the ROIs. OEF, Oxygen Extraction Fraction; CMRO2, Cerebral Metabolic Rate of Oxygen; CBF, Cerebral Blood Flow; GU, Glucose Utilisation.
One study found values between 35 and 40% under isoflurane anesthesia using an R2′ method similar to ours (ref. 52). He et al. reported mean OEFs of 23% and 38% under isoflurane and α-Chloralose, respectively (ref. 23). This comparison to previous literature would indicate that the method presented here underestimates OEF.
This is an open access peer review report distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
https://doi.org/10.21956/wellcomeopenres.18455.r43877 © 2021 Berman A.

1 Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; 2 Department of Physics, Carleton University, Ottawa, ON, Canada; 3 Brain Imaging Centre, The Royal Ottawa Institute of Mental Health Research, Ottawa, ON, Canada

Wood et al. have presented a new R2′-based method for quantifying the oxygen extraction fraction (OEF) and the cerebral metabolic rate of O2 (CMRO2) in rats.
"Medicine",
"Physics"
] |
Leukocyte Trafficking and Hemostasis in the Mouse Fetus in vivo: A Practical Guide
In vivo observations of blood cells and organ compartments within the fetal mammalian organism are difficult to obtain. This practical guide describes a mouse model for in vivo observation of the fetal yolk-sac and corporal microvasculature throughout murine gestation, including imaging of various organ compartments, microvascular injection procedures, different methods for staining of blood plasma, vessel wall and circulating cell subsets. Following anesthesia of pregnant mice, the maternal abdominal cavity is opened, the uterus horn exteriorized, and the fetus prepared for imaging while still connected to the placenta. Microinjection methods allow delivery of substances directly into the fetal circulation, while substances crossing the placenta can be easily administered via the maternal circulation. Small volume blood sample collection allows for further in vitro workup of obtained results. The model permits observation of leukocyte-endothelial interactions, hematopoietic niche localization, platelet function, endothelial permeability studies, and hemodynamic changes in the mouse fetus, using appropriate strains of fluorescent protein expressing reporter mice and various sophisticated intravital microscopy techniques. Our practical guide is of interest to basic physiologists, developmental biologists, cardiologists, and translational neonatologists and reaches out to scientists focusing on the origin and regulation of hematopoietic niches, thrombopoiesis and macrophage heterogeneity.
INTRODUCTION

Background
Developmental processes, including the formation and function of blood cells and organs in mammalian fetuses, are still incompletely understood. Mortality in very low and ultra-low birth weight premature infants shows only modest improvement and remains very high (73% in 2009 vs. 67% in 2012 for infants born at 23 weeks of gestation), and prematurity is the leading cause of death for children under 5 years of age according to the World Health Organization (WHO), highlighting the lack of adequate research and translational endeavors (Callaghan et al., 2006; Stoll et al., 2015). Human preterm infants are at high risk for infections and bleeding complications. Coping with increasingly younger gestational ages at birth challenges the clinical field, demanding further experimental workup of developmental processes. Cord blood samples of premature infants and in vitro/ex vivo analyses of animal fetuses are the sources used so far to obtain information about developmental processes of the blood system and their consequences for bleeding, inflammation, and development. For organ development, radiologic imaging techniques, such as CT scans and MRI, or pathologists' and anatomists' workup of deceased fetuses, have so far been our most suitable sources of information. In this practical overview, we describe an intravital microscopy (IVM) approach to observe the growing mouse fetus, including the yolk sac, focusing on the fetal vasculature and blood cells with a particular interest in leukocyte trafficking during inflammation and fetal platelet function.
In vivo Imaging of Fetal Yolk-Sac Vessels and Organ Compartments
The fetal IVM model was developed to investigate functional maturation and development of blood cell populations and progenitors in the living mouse fetus and elucidate underlying mechanisms of the regulation of homing processes and cell-cell interactions in the fetus.
Due to the lack of knowledge about in vivo fetal responses to inflammatory stimuli, together with clinical findings regarding postnatal complications in preterm infants, we set out to study leukocyte recruitment and leukocyte-endothelial cell interactions in the developing mouse fetus, demonstrating an ontogenetic regulation of fetal leukocyte function with diminished leukocyte recruitment in the yolk-sac and fetal skull during fetal development (Sperandio et al., 2013). Subsequently, we focused on platelet function and platelet-leukocyte interactions in vivo. We wanted to understand to what extent thrombus formation could occur in the fetal vasculature (Margraf et al., 2017). This is an important question, as premature infants exhibit a high incidence of severe bleeding complications. In addition, adverse outcomes have been reported in premature infants receiving transfusions of adult platelets. Our results showed that young fetuses have difficulty forming thrombi, with platelet hyporeactivity due to diminished expression levels of integrin adaptor molecules and decreased platelet counts (Margraf et al., 2017). More recently, we have used our fetal model in additional applications related to developmental processes of different cell populations and organ compartments, including the development of monocytes/macrophages in the fetus (Stremmel et al., 2018a,b) and the impact of blood flow properties on the development of the fetal thymus (Moretti et al., 2018). Here, we provide a concise approach for the preparation and subsequent IVM observation of fetal blood vessels and microinjection into the fetal vasculature in the mouse, including techniques to image platelets and leukocytes as well as different organ compartments.
Applications of the Fetal Intravital Microscopy Model
While examination of blood cell function is one of the major benefits of the fetal IVM model, it features a large variety of possible applications. These include vascular function and development, vessel distribution and reactivity, yolk-sac stability, as well as healing and regeneration processes. As IVM is well established in adult models of the mouse [e.g., cremaster muscle preparation (Sperandio et al., 2006), dorsal skinfold chamber (Lehr et al., 1993), vessel injury (Pircher et al., 2012)], experimental protocols, such as trauma-induced inflammation, endothelial damage and/or stimulation with pro-inflammatory agents (using LPS, fMLP, TNF-α etc.) can be transferred to the fetal in vivo model. Also, barrier-crossing of maternally administered substances can be traced, and intrauterine exposure to inflammatory stimuli can be mimicked.
Comparison With Other Methods and Advantages of the System
In past decades, different approaches to examine the fetal blood system or organ compartments have been applied. Christiansen and Bacon used trans-illumination microscopy on completely exteriorized fetuses to analyze vessel patterns in the developing posterior limb of mouse fetuses (Christiansen and Bacon, 1961). Echtler et al. (2010) applied a model of prematurity in which fetuses were born through cesarean section, intubated, and used for experiments. This model allowed simulation and study of clinically relevant problems, such as ductus arteriosus closure, while the preterm infant was challenged by outside influences, representing a disease-state setting, yet not allowing analysis of a developmentally regular surrounding such as the yolk-sac. Another approach used a partial incision of the uterine musculature and yolk sac in combination with a suture-glue fixation to prevent fluctuation of amniotic fluid, while displaying the fetal cranium in a fixed position for further analysis of the developing brain (Ang et al., 2003).
Additional methods featured MRI trans-sections, where any movement can cause artifacts and no detailed analysis of blood cell subpopulations can currently be acquired, due to limitations in traceable probes as well as low sensitivity (Dhenain et al., 2001; Speier et al., 2008). Garcia et al. (2011) chose an ex vivo embryo culture method to gain insight into morphogenetic events in the developing fetus, while Laufer et al. (2012) used photoacoustic imaging techniques in CD-1 mice to examine embryos ex vivo and in vivo. Boisset et al. (2011) developed an approach in which ex vivo confocal image acquisition of the embryo aorta was performed in order to monitor hematopoietic and endothelial cells during development. Yanagida et al. (2012) used a model for the visualization of migrating cortical interneurons in which an exteriorized E16.5 fetus, attached to the umbilical vessels, is positioned in agarose gel with or without gallamine triethiodide application and scalp removal. Other techniques equally rely on incision of the yolk-sac and placement of the fetus into a heating chamber, for example filled with artificial cerebrospinal fluid (Yuryev et al., 2015). Another technique used a fully mobilized uterine horn in which the mesometrium was cut and the ovarian vessels had been ligated. The preparation was then mechanically immobilized, fixed in low-melting agarose, and the embryo accessed by pressing it against the uterine wall and imaging it through the wall using two-photon microscopy (Hattori et al., 2020).
Experimental models for murine fetal in vivo imaging are surprisingly rare and existing models have limitations regarding optical resolution, imaging techniques, or surgical preparation procedures with unintentional harming of the fetus itself. Thus, a model to study physiologically relevant developmental aspects has been lacking so far. Our in vivo model allows rather long observation times and a less artificial surrounding for the fetus itself, which remains vital throughout the experiments. Our preparatory techniques also allow removal of minute amounts of blood for further analysis from fetuses as young as age E13.5 (out of 21 days of gestation), e.g., for FACS analysis.
Animals and Timed Matings
One major logistical challenge lies in the requirement for pregnant animals. Thus, timed matings are needed to ensure adequate estimates of developmental age, which further need to be verified through correlation with anatomical features. In our setting, timed matings were conducted by putting two females and one male together in a cage for one night. Depending on the specific need for pregnant mice, we planned for three to four cages per successful pregnancy. The influence of pheromones is minimized through spatial separation and hygiene precautions: female animals are housed in a separate room, while mating takes place in the room where the male animals are housed. After mating, animals are checked for plug presentation, separated into plug-positive and plug-negative, and placed back into the female room or a separate plug-positive room, respectively. Behavior, weight, and change of abdominal configuration are checked daily. Prior to experiments, weight, agility, and abdominal curvature are re-evaluated to exclude false-positive pregnancies.
Choice of Anesthetics
For in vivo experiments involving muscular preparations and requiring stable images, a combination of ketamine and xylazine (both known to cross the placental barrier) is used, to reduce movement of the uterine musculature, while ensuring sufficient anesthesia, and analgesia for the animal.
Choice of Fluorophores and Antibodies
It is crucial for in vivo imaging to ensure sufficient image contrast, stability, and penetration depth. While the last two points are mainly influenced by preparational skills and technical setup, the image contrast relies on the appropriate choice of fluorophores and plasma markers. Equally, choice of antibodies is important when only a limited amount of colors can be imaged at once. Table 1 gives a list of fluorophores and antibodies we and others have used in fetal blood cell imaging.
Fetal Ages
The choice of different developmental stages for IVM analysis is important for the experiments and depends on goal, site of expected observational events, preparational skills, and experimental duration (also compare "Limitations"). The maturation state of the fetus can be assessed by classical anatomical features and size of the mouse fetus, as described by Kaufmann (2005).
In vivo Imaging and Duration of Experiment
As any artificial manipulation can lead to serious consequences for the fetus, careful preparation and observation are necessary. Inflammatory stimulation or thrombus induction are harmful events occurring within the fetal vasculature, therefore limiting any subsequent experiments within the same fetus. Intravital imaging experiments were carried out for a maximum duration of 1 h per fetus (Figure 1).
Microinjection Volume Considerations
The developing murine fetus itself possesses only a small blood volume, depending on the weight of the fetus (estimated at 7-10% of body weight). Thus, any injected substance will crucially influence circulatory mechanisms, cardiac output, and vascular tone within the fetus (Russel et al., 1968). We observed that injection of volumes exceeding 5-10 µL strongly compromised the fetus; such volumes should therefore be avoided.
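The arithmetic behind this limit is worth making explicit. The sketch below assumes a tissue density of about 1 g/ml and illustrative fetal weights spanning roughly the E13-E18 range:

```python
# Quick arithmetic behind the 5-10 uL limit: with fetal blood volume at
# 7-10% of body weight (tissue density ~1 g/mL assumed), even small
# injections are a large fraction of the circulating volume. The fetal
# weights are illustrative assumptions.
for weight_g in (0.2, 0.5, 1.0):
    blood_lo, blood_hi = 70.0 * weight_g, 100.0 * weight_g   # blood volume, uL
    for inj_ul in (5, 10):
        print(f"{weight_g:.1f} g fetus, {inj_ul:>2} uL injected: "
              f"{inj_ul/blood_hi:.0%}-{inj_ul/blood_lo:.0%} of its "
              f"{blood_lo:.0f}-{blood_hi:.0f} uL blood volume")
```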
Mice
Adult female (C57BL/6; minimum age 12 weeks) and male mice are used for timed matings. Pregnant female mice are used for in vivo experiments. Through mating strategies and the use of appropriate genetically modified reporter mice (Table 1), it is possible to generate different phenotypes in the fetus and mother. This can help to distinguish fetal from maternal structures (cells).
Reagents
Anesthetic
Use a ketamine/xylazine mix (125 mg/kg bodyweight of ketamine; 12.5 mg/kg bodyweight of xylazine) in a volume of 0.1 ml per 8 g bodyweight for anesthesia of the mother animal.
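As a quick consistency check of this dosing scheme, 0.1 ml per 8 g corresponds to 12.5 ml/kg, which implies stock concentrations of 10 mg/ml ketamine and 1 mg/ml xylazine in the mix. The small helper below verifies this; the example body weight is an assumption:

```python
# Sanity check of the ketamine/xylazine dosing above: 0.1 mL per 8 g body
# weight delivering 125 mg/kg ketamine and 12.5 mg/kg xylazine implies
# stock concentrations of 10 mg/mL and 1 mg/mL, respectively.
# The example body weight is an assumption.
def anesthesia_volume_ml(body_weight_g: float) -> float:
    """Injection volume of the mix: 0.1 mL per 8 g body weight."""
    return 0.1 * body_weight_g / 8.0

weight_g = 28.0                                   # typical pregnant dam (assumed)
vol_ml = anesthesia_volume_ml(weight_g)
ml_per_kg = vol_ml / (weight_g / 1000.0)
print(f"{vol_ml:.2f} mL injected ({ml_per_kg:.1f} mL/kg)")
print(f"implied ketamine stock: {125 / ml_per_kg:.1f} mg/mL")
print(f"implied xylazine stock: {12.5 / ml_per_kg:.1f} mg/mL")
```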
FITC-Dextran-Solution
Used to stain microvasculature and for phototoxicity-induced thrombus formation. Dissolve FITC-dextran in sterile injectable distilled water or sterile phosphate-buffered saline at a final concentration of 10%.
Acridine-Orange-Solution
Injectable in vivo dye capable of crossing the placental barrier. Dissolve at a concentration of 2 mg/ml in sterile phosphate-buffered saline. Prepare a maximum injection volume of about 150 µl in a syringe for later administration (typically 50-100 µl, as needed).
Microbeads
Used for in vivo blood flow velocity measurements. Ultrasonicate beads prior to usage. Dilute the stock concentration of 1 × 10^10 beads/ml by a factor of 10 to 100, according to the study purpose. For injection into yolk-sac vessels, dilute 1 µl of bead solution in a total of 5 µl of sterile NaCl or PBS injection solution.
In vivo Superfusion Buffer
Classical superfusion solution for IVM experiments, as reported earlier (Klitzman and Duling, 1979). Prepare solution I, containing 292.9 g of NaCl, 13.3 g of KCl, 11.2 g of CaCl2, and 7.7 g of MgCl2 in a total of 3.8 liters of deionized water. Prepare solution II, containing 57.5 g of NaHCO3 in 3.8 liters of deionized water. The in vivo superfusion buffer is then obtained by adding 200 ml of solution I to a two-liter cylinder. Fill the cylinder up to 1,800 ml with deionized water, then add 200 ml of solution II. Mix the solution and equilibrate it with a gas mixture of 95% N2/5% CO2 using a foam disperser. If needed, inflammatory stimuli (for example, fMLP) can be added to the superfusion buffer.
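The final ion concentrations implied by this recipe can be computed directly. The sketch below assumes anhydrous NaCl, KCl, and NaHCO3 and the dihydrate/hexahydrate forms of CaCl2 and MgCl2 (the text does not state the hydration forms, so this is an assumption):

```python
# Final ion concentrations implied by the recipe above: stock masses in
# 3.8 L, then 200 mL of each stock brought to a 2 L final volume.
# Hydration forms of CaCl2 and MgCl2 are assumed (dihydrate/hexahydrate).
MOLAR_MASS = {"NaCl": 58.44, "KCl": 74.55, "CaCl2.2H2O": 147.01,
              "MgCl2.6H2O": 203.30, "NaHCO3": 84.01}   # g/mol
STOCK_G = {"NaCl": 292.9, "KCl": 13.3, "CaCl2.2H2O": 11.2,
           "MgCl2.6H2O": 7.7, "NaHCO3": 57.5}          # g per 3.8 L stock

dilution = 0.200 / 2.0    # 200 mL of stock in a 2 L final volume
for salt, grams in STOCK_G.items():
    stock_mM = grams / MOLAR_MASS[salt] / 3.8 * 1000
    print(f"{salt:>11}: {stock_mM * dilution:5.1f} mM final")
```

The resulting values (approximately 132 mM NaCl, 4.7 mM KCl, 2.0 mM CaCl2, 1.0 mM MgCl2, and 18 mM NaHCO3) are in the range expected for a bicarbonate-buffered physiological superfusate, which supports the assumed hydrate forms.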
Intravital Microscope
The intravital microscopy setup consists of an upright microscope together with a motorized xyz-stage, which allows xy-positions to be saved so that a previously determined position can be revisited exactly later in an experiment. Use of inverted microscopes is not recommended, as they can result in excessive pressure application onto the fetus and thus deterioration of blood flow in the field of observation. Equally, inverted microscopes do not allow adequate superfusion of the fetus. For illumination, different light sources (halogen lamp, Hg lamp, stroboscopic flash-lamp system, laser) are used together with a CCD camera or photomultiplier tubes, as appropriate for the type of microscopic technique (e.g., conventional fluorescence microscopy, multiphoton laser scanning microscopy, spinning disk microscopy, and others; compare Table 3). The choice of a specific microscope should depend on the research question, i.e., whether fast image acquisition, long-term image stability, high resolution, or deep tissue penetration is needed (see Table 3). To ensure optimal conditions for the animals during the IVM observation period, a heating pad and superfusion solution are used. The heating pad is placed below the mother animal to ensure adequate temperature of the maternal blood circulation, which nurtures the placenta. Heating pad performance must be regularly controlled to guarantee an appropriate environmental temperature. The superfusion solution is administered constantly using a roller pump. A polyethylene tubing system connects the superfusion system to the microscope objective. The temperature of the superfusion system must be adjusted to reach adequate temperature at the point of administration; measurements therefore have to be performed directly at the objective. The temperature-controlled superfusion buffer, in which the fetus is constantly immersed, thus prevents cooling of the fetus itself.
To guarantee a stable temperature of the superfusion buffer on the preparation, the superfusion buffer is continuously removed from the microscope stage through a small hole, connected to a vacuum pump reservoir (compare Supplementary Figure 1).
FIGURE 1 | Schematic representation of preparation steps for fetal in vivo imaging. Following anesthesia, the yolk-sac is prepared for imaging and/or microinjection.

To keep the preparation in a fixed position, the mother animal is placed on a custom-made plexiglass animal stage, with the fetus positioned inside a petri dish (for the construction plan, see Supplementary Figures 1-3) from which part of one side wall has been removed, allowing the fetus to lie inside the petri dish without pressure or force being applied to it. The fetus is kept in place within the petri dish by medical silicone gel and a custom-made magnetic space-holder, which reduces the influence of the breathing movements of the mother animal on the observation field. The space-holder contains a hole for microscopic access. A coverslip is placed on top of the fetus and used for observation. Care must be taken that blood flow is not restricted by pressure application. Imaging techniques such as multiphoton laser scanning microscopy (MPLSM) are more sensitive to motion artifacts and thus require a higher degree of stabilization compared to conventional epifluorescence recordings. If the setup described herein is insufficient to achieve acceptable imaging conditions, users should consider the following options (also compare Tables 2, 3): 1. If insufficient perfusion of the yolk-sac is observed, try to reduce the pressure exerted by the stabilization device. If necessary (especially in very old fetuses), try to use only a cover slip without the fixation device and keep the cover slip in position by applying additional medical silicone gel outside of the field of observation.
2. If the image is unstable, increase pressure while directly observing the microcirculation through the microscope, without affecting blood circulation within the yolk-sac or fetus. Generally, ensure that the mother animal is not in contact with the cover slip or holding device; otherwise, breathing artifacts are transferred to the preparation. Also, it is helpful to model the silicone gel against the fetus to ensure it remains in position for the duration of the recording. Additional care must be taken to ensure adequate anesthesia, which might require re-injection of anesthetics depending on the duration of recording.
Recording of the in vivo observations is conducted via a digital recording system.
Generation of Micropipettes for Fetal Microinjection
For microinjection purposes, glass capillaries are heat-pulled using a vertical heat-puller. Under a stereoscope, the pulled glass capillaries (micropipettes) are placed in a grinding device and ground to create a syringe-like tip (open tip diameter 1-2 µm), which allows easier penetration of blood vessels.

Table 2. Troubleshooting guide.

Step: Searching for the right microscopic image.
Problem: The vessels are unsatisfactory (too small, too big, or none in the field of view).
Possible reason: Placement of the animal; constricted blood circulation.
Solution: Place the mother animal on its abdomen, thus turning the whole preparation upside down and gaining access to other vessels within the yolk sac.

Step: Microinjection.
Problem: The vessel gets ruptured / it is difficult to get into the tissue.
Possible reason: The diameter of the glass capillary is too big, or the tip is not well ground.
Solution: Choose a smaller diameter and make sure you check the ground tip again before injection.

Step: Microinjection.
Problem: It is impossible to get the substance out of the glass capillary.
Possible reason: Diameter too small; blood clotting at the tip.
Solution: Grind the capillary to a larger diameter; apply pressure onto a paper tissue before the injection to see whether you can successfully inject; do not re-use glass capillaries, as the smallest amounts of blood can lead to closure of the tip.

Step: Microinjection.
Problem: There is backflow into the glass capillary.
Possible reason: Pressure too low.
Solution: Adjust the pressure to a higher level, making sure you keep a baseline level throughout puncturing and injection.

Step: Microinjection.
Problem: After injection and occlusion, tissue gets pulled out and the yolk-sac ruptures.
Possible reason: The cauterization device is sticking to the site of heat occlusion.
Solution: Use only the tip of the cauterization device to occlude the vessel. Make sure you do it quickly and precisely; the bigger the area of heat application, the more likely the yolk-sac will rupture.

Step: Imaging.
Problem: Using multiphoton microscopy, the resulting stacks do not show the complete vessel in time-lapse recordings.
Possible reason: The z-shift moves the vessel out of focus; shadow effects from large amounts of erythrocytes contribute to poor penetration depth inside the vessel.
Solution: Increasing the z-stack size to levels sufficiently above and below the vessel of observation will prevent it from shifting out of the field of observation.

Step: Thrombus induction.
Problem: Mechanical vessel occlusion is not successful using local pressure application.
Possible reason: The stimulus is insufficient for studying thrombus formation.
Solution: Use an 8-0 suture to ligate the vessel locally. Keep the ligation in place for a longer time to increase flow restriction and vessel damage.

Step: Imaging.
Problem: Image quality is not satisfactory in the liver region.
Possible reason: Penetration depth is not sufficient.
Solution: Remove the surrounding tissue and liver capsule. Try either to ligate one of the ribs and pull it with a suture in the cranial direction, gaining access to the liver, or to carefully dissect part of the forming rib.
RESULTS/PROCEDURES
A single experiment, from the beginning of anesthesia until the end of image acquisition, takes between one and one and a half hours for leukocyte imaging and two and a half hours for platelet function studies (Figure 1).
Mouse Anesthesia and Surgical Procedure
Anesthetize the pregnant mother animal using 100 µl of narcotic cocktail per 8 g bodyweight. Administration of anesthesia should be i.m. rather than i.p., as the effectiveness of i.p. injection might be influenced by the surgical procedure of the model, which requires opening of the abdominal cavity with potential leakage of the applied anesthetic drugs. In addition, i.p. injection might harm the fetuses through misplaced injection. After administration, wait about 20 min until the mouse is securely unconscious. Check anesthesia depth with a pain stimulus (e.g., compression of the hind paw). Place the mother animal on its back on the heating pad. Disinfect the abdominal preparation site with 70% ethanol. Shaving can be performed as needed. Make a lateral horizontal incision of approximately the size of the fetus (approx. 1 cm) to open the abdominal cavity. Cauterize blood vessels from which bleeding might occur, either before or during incision of the peritoneum.
Preparation of Fetal Yolk-Sac Vessels
Localize the uterus horn and carefully grab it with blunt tweezers (Supplementary Movie 1). Try to hold on to muscle tissue of the uterine wall only, without grabbing the fetal body, in order to prevent injury. This is most conveniently done in a region between two fetuses. Exteriorize the uterus. Make sure to prevent cooling and drying by administering heated superfusion buffer (37 °C) before, during, and after exteriorization. Incise the uterine musculature horizontally to reach one vital fetus within its yolk sac. Place another incision vertically (90° to the prior incision) to reduce the pressure of the uterine musculature onto the placenta. Ensure that the incision starts on the side opposite the placenta, between two fetuses, where the uterine muscle tissue can easily be held with blunt forceps without harming the yolk sac. From there, extend the incision using manually blunted microsurgery scissors. The uterine muscle fibers will start to contract and retract, giving access to the yolk-sac. It is very easy to puncture and/or rupture the yolk-sac while trying to cut through the uterine wall. Also, the high pressure resulting from contraction of the uterine muscle fibers of the incised uterine horn, especially with older fetuses, can lead to rupture of the yolk-sac and worsening perfusion. At this step, patience is necessary. Very often, the fetus within the yolk-sac finds its way out through the surgical opening in the uterine wall without external support. After the fetus is exteriorized (Figure 2A) and still inside the yolk-sac, it is gently placed into a modified petri dish (5 ml) (Figure 2B) filled with silicone and warmed superfusion buffer solution.
Table 3 (fragment; some column entries were lost in extraction):
- Spinning disc confocal microscopy. Laser; photomultiplier tubes and/or digital camera. Fast, with higher spatial resolution, but does not reach the penetration depth of MPLSM imaging. Applications: assessment of slow leukocyte rolling, adhesion, and migration. Compartments: yolk-sac, brain.
- High spatial resolution but slow. Applications: analysis of expression patterns, assessment of stable structures or slow cell movement (migration).
- Label-free cell and tissue detection (second- and/or third-harmonic generation signals); high penetration depth; good spatial resolution. Relatively slow; risk of laser damage depending on settings; motion sensitivity leads to artifacts and/or image shifting. Applications: assessment of anatomical features; analysis of deeper organ compartments; migration; laser injury. Compartments: yolk-sac, brain, liver, skin, bone marrow, and heart.

Lifting cannot be done by directly pulling on the yolk-sac, as it will easily rupture. Therefore, try to pull on the surrounding uterine musculature located next to the preparation site. It is also possible to load the fetus onto a pre-wetted cotton stick and carefully mobilize it. It is important not to damage the placenta, to decrease the risk of bleeding. After having secured the fetus within the yolk sac, the animal stage can be transferred to the IVM for imaging. We have performed extensive imaging studies on yolk-sac vessels (a) to elucidate the maturation of neutrophil recruitment during inflammation throughout mouse fetal development and (b) to investigate platelet function during fetal ontogeny. The following sections describe how we approached these two processes by intravital imaging:
Studying in vivo Neutrophil Recruitment During Fetal Development
Depending on the purpose of the project, the preparation of the fetus/yolk-sac can be performed in unstimulated pregnant mice or in pregnant mice in which the uterus has been pre-stimulated with proinflammatory agents (e.g., intrauterine LPS, fMLP, or TNF-α; Figure 2C) prior to imaging. If no proinflammatory stimulus is applied, the surgical procedure itself will cause a mild inflammatory response with some rolling and adherent leukocytes, which can be compared to trauma-induced injury as described in the mouse cremaster muscle (Sperandio et al., 2001).
Using reporter mice such as Lyz2-EGFP mice (Faust et al., 2000) or Catchup IVM mice (Hasenberg et al., 2015), neutrophils can be visualized by their fluorescent signal. The yolk-sac microcirculation does not need to be stained, as the autofluorescence signal is bright enough for conventional fluorescence microscopy. Observation of rolling, adhesion, and crawling of fluorescently labeled blood cells such as neutrophils is then possible. In case no external stimuli are used, we see some rolling and adherent neutrophils, which increase in number with gestational age. For application of additional dyes or other agents into the maternal organism, a carotid artery catheter can be placed into the pregnant mouse before imaging. Injection of acridine-orange solution into the carotid artery will stain maternal and fetal leukocytes (placental passage!) and can be used as an alternative approach in case reporter mice are not available.
Measuring blood flow velocity: three different techniques can be applied to study blood flow characteristics during fetal ontogeny. The most precise and reproducible is the microbead-based method. In all cases, velocity is calculated as the displacement of a bead or cell from point a to point b during a pre-determined time interval, i.e., v = Δs/Δt (a minimal computational sketch follows the list below):

1. Leukocyte-based method: using a leukocyte flowing in the center of the vessel, the movement of the cell is followed frame by frame, giving the traveled distance over time.

2. Microbead-based method: microbeads are microinjected into the fetus and allowed to circulate prior to imaging. As described under (1), two consecutive frames can then be used and the microbead displacement assessed over time.

3. Multiphoton laser scanning microscopy-based line-shift determination of blood flow velocity can be performed as previously described (Dietzel et al., 2014).
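The displacement-over-time calculation used in methods (1) and (2) reduces to a few lines of code. The frame rate, pixel calibration, and centroid positions below are illustrative assumptions:

```python
# Minimal sketch of frame-by-frame velocity estimation: track a bead or
# cell centroid across consecutive frames and divide the traveled distance
# by the elapsed time. Frame rate, pixel size, and positions are assumed.
import numpy as np

frame_interval_s = 1 / 100.0          # 100 fps camera (assumed)
um_per_pixel = 0.65                   # objective/camera calibration (assumed)
# centroid positions (x, y) of one flowing bead in consecutive frames, px
positions_px = np.array([[10.0, 52.1], [18.4, 52.3], [26.9, 52.2], [35.1, 52.4]])

steps_um = np.linalg.norm(np.diff(positions_px, axis=0), axis=1) * um_per_pixel
velocity_um_s = steps_um / frame_interval_s
print(f"mean centerline velocity: {velocity_um_s.mean():.0f} um/s "
      f"(+/- {velocity_um_s.std(ddof=1):.0f} um/s over {len(steps_um)} steps)")
```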
Studying Thrombus Formation and Platelet-Leukocyte Interaction in the Growing Mouse Fetus
Microinjection: fill one ground and prepared glass microcapillary with FITC-dextran (or bead/antibody solution, respectively) to a volume of approximately 5 µl. Depending on the desired injection site, volume, and opening diameter of the glass capillary, either manual pressure infusion or a transjector-based approach can be chosen; equally, either manual or micromanipulator-based puncturing of the yolk-sac vessel can be performed. Connect the microcapillary to the pressure-application tube. Hold the glass capillary like a pencil, with the tips of your fingers, while using the other hand to stabilize. Make sure to put the cauterization device close to the other hand (Supplementary Movie 1). Optionally, you can hold it with your other hand (e.g., the left, if right-handed) while puncturing the yolk-sac, to immediately occlude the injection vessel and prevent the injected substance from bleeding out of the penetration point. Puncture a side-branch vessel. Before doing so, observe the blood flow characteristics, as it is crucial to choose a vessel that ensures distribution and flow into a bigger blood vessel. Rupturing and/or penetrating the yolk-sac is very easy during this step. Practice is necessary; it can be helpful to know that, depending on the size of the glass capillary (and purpose of injection), tissue can be folded and pushed away during pressure application by the tip of the glass capillary (the bigger the tip diameter, the more difficult it will be to penetrate a vessel; yet the smaller the tip diameter, the more likely the ground tip will break or be damaged). Once the applied puncturing pressure is high enough, the tissue will give way and the glass capillary will slide into the blood vessel.
Applying constant injection pressure, administer the dye and/or antibody-solution into the vessel. Observe it through the stereo-microscope. Make sure to observe the level of the injection solution to prevent injection of air into the fetal vasculature. Thus, stop the injection right before air application and proceed to the next step, while maintaining a constant slightly positive pressure equivalent to the current intravascular blood pressure. It is easiest to manually adapt this pressure.
Remove the glass capillary from the vessel and immediately occlude the injection point with the electric-cauterization device. Be quick. If too slow, the injected solution will flow out of the vessel again. Additionally, it is important to minimize the applied heat/trauma using the cauterization device, to ensure sufficient blood flow in the surrounding vessels.
Leave the fetus for approximately 5-10 min in the dark room in warm superfusion buffer to guarantee circulation and thus distribution of the phototoxic dye and/or antibody-solution.
Imaging of the yolk-sac: place the fetus onto the imaging stage. Make sure not to rupture the connection between fetus and mother animal while positioning or moving the preparation. Transfer the preparation to the in vivo microscope setup. Choose appropriate light/filter/detector settings. For imaging, choose a vessel away from the point of microinjection. Apply superfusion buffer throughout the experiment.
Thrombus induction: for thrombus induction, we recommend the phototoxic or laser-induced approach; depending on the availability of techniques, the other approaches mentioned below can also be used. Nonetheless, variability in results is greater with these alternative techniques.
Phototoxic Injury
For thrombus induction, use phototoxic FITC-dextran as the microinjection solution. Perform imaging using a high-intensity light source (e.g., a mercury lamp). For FITC-dextran, use a filter with excitation maximum 490 nm and emission maximum 520 nm. Observe one vessel (20-50 µm) for up to 1 h or until flow stops. Determine fluorescence intensity using histogram values. For our experiments we chose a camera exposure time of 10 ms. Of note, the field of view will be constantly illuminated by the light source throughout the experiment. Examine platelet adherence and vessel occlusion. Examine reflow phenomena, as an inverse correlate of thrombus stability, for 10 min after complete occlusion of a vessel has occurred. Once reflow appears, continue observation for up to another 1 h or until flow stops.
Laser-Induced Injury
For thrombus induction, use 2-photon imaging (Nishimura et al., 2006; Kamocka et al., 2010; Koike et al., 2011). Depending on laser settings, use a small point laser scan at the level of the vessel wall. Create a vessel-wall injury using a beam intensity slightly below apparent heat damage. Observe thrombus formation using time-lapse stack acquisition. Movements in the z-direction are difficult to compensate; thus, appropriate stack settings need to be chosen, allowing for a range of motion.
Chemical Injury
Prepare a 1 mm × 2 mm filter paper patch. Place the filter paper patch into FeCl3 solution of the desired concentration (e.g., 1%). Microinject GpIbβ-X488 antibody into the fetal vasculature and occlude the vessel (see above). A concentration of 0.1 µg/g body weight is recommended. Now apply the FeCl3-saturated paper patch onto the desired vessel under the stereomicroscope. Observe the surrounding area (borderline) of the patch-applied vessel for blood flow cessation. Remove the FeCl3 paper patch after a minimum of 30 s and proceed quickly to the next step in order to observe the different steps of platelet-vessel wall interaction. Perform imaging under the in vivo microscope using the appropriate filter sets for platelet observation. The forming clot can be noted as fluorescence enhancement at the site of adhering platelets. For imaging with this antibody, we recommend FITC fluorescence filter sets and exposure times between 200 and 400 ms, depending on camera setup and excitation light source.
Platelet-Leukocyte Interactions
Utilizing antibody or genetic knock-in strategies to fluorescently label platelets and leukocytes (for example, using a GFP knock-in for leukocytes and an Alexa649 antibody staining for platelets), platelet-leukocyte interaction can be quantified by counting double-positive (GFP+ and Alexa649+) cellular events and assessing rolling of leukocytes on adherent platelets. Both thrombotic (interaction of leukocytes with injury-related adherent platelets) and freely circulating platelet-leukocyte aggregates can be enumerated. At early gestational ages, no platelet-leukocyte aggregates can be observed, as P-selectin and PSGL-1 expression levels are low. Transfusion of isolated, labeled adult platelets and/or leukocytes into older fetuses can help in dissecting cell- and maturation-specific phenotypes.
Fetus Exteriorization and Organ Imaging
Carefully open the yolk-sac and exteriorize the fetus (Figure 3A). Ensure the umbilical vessels are still intact and not damaged. Remove obstructing tissue and fluid. Place the fetus inside a modified petri dish filled with superfusion buffer. Proceed with preparation and imaging as described below:
Skin
Place a cover slip onto the skin pattern you wish to image. Apply the fixation device. Perform in vivo imaging with constant superfusion. MPLSM can be used for deep tissue penetration.
Liver
Make a small incision within the posterolateral area of the fetus (Figures 3B,C), in an area where the liver is clearly visible through the thin skin (Figure 3D). From there, gently open the lateral side of the abdominal cavity of the fetus to display the fetal liver. If needed, carefully remove one of the forming ribs. It might be necessary to remove the liver capsule (Glisson's capsule). The liver is a well-perfused organ; thus, preparation and manipulation in this area carry a high bleeding risk. Place a cover slip and the fixation device on top and transfer the preparation to the imaging setup. MPLSM might yield the best results due to its penetration depth.
Cranial Imaging
Carefully incise the skin in the head region (temporal region) and place a small silicone ring (approx. 0.5 to 1 cm diameter, depending on the objective used for imaging) on top. Fill the ring with superfusion buffer for imaging of the cranial window. Proceed to imaging.
Blood Sampling for Flow Cytometry and Systemic Blood Cell Counts
Acquisition of systemic fetal blood cell counts requires exact volumes, as cell numbers can be fairly low.
To collect fetal blood the following procedure can be applied: Completely exteriorize the fetus from the yolk-sac, dry the fetus and remove any amniotic fluid with soft cotton sticks, perform a lateral neck incision, and discard the first small droplet of blood. Then place a 5 µl collecting glass capillary onto the incision site, where the blood vessels are clearly visible. Observe blood collection through a stereomicroscope to ensure no surrounding tissue leakage is collected into the capillary. Transfer the collected sample into citrate solution (45 µl 0.11 M sodium citrate solution, pH 6.5). Prepare the sample by adding appropriate antibodies directed against the required cell subpopulation. Add microbeads of known quantity and volume for later volume determination. Transfer samples to the flow cytometer for analysis.
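The bead-based volume determination mentioned above converts event counts into absolute concentrations: since the number of beads added is known, the cell-to-bead event ratio scales directly to cells per microliter of collected blood. The event numbers, bead count, and blood volume below are assumptions for illustration:

```python
# Sketch of absolute-count calculation with counting beads: a known bead
# quantity per sample converts cell events into cells per microliter of
# blood. Event numbers, bead count, and sampled volume are assumptions.
def cells_per_ul(cell_events: int, bead_events: int,
                 beads_added: float, blood_volume_ul: float) -> float:
    """Absolute concentration from the cell:bead event ratio."""
    return (cell_events / bead_events) * beads_added / blood_volume_ul

# e.g. 12,000 cell events, 8,000 bead events, 50,000 beads added,
# and 5 uL of fetal blood collected
print(f"{cells_per_ul(12_000, 8_000, 50_000, 5.0):,.0f} cells/uL")
```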
If functional assays with fetal blood cells are planned and higher blood volumes are necessary, the following procedure can be used: remove the yolk-sac, completely exteriorize the fetus, and wash fetus and placenta quickly once in PBS. Dry the fetus and place it into a large petri dish filled with modified Tyrode's-HEPES-heparin buffer. Make sure to leave the placenta outside of the petri dish. Dissect the umbilical cord and cut off the fetal head with sharp scissors. Leave fetuses to bleed for approx. 10 min. Remove the fetus from the petri dish and collect the suspended blood. Washing, antibody staining, and flow cytometry analysis can then be performed.
DISCUSSION/LIMITATIONS
The described preparation and imaging techniques allow several different applications. Yolk-sac analysis and microinjection of fluorescent substances enable examination of vessel and blood cell properties. 3D image reconstruction will show fluorescently labeled vessels following microinjection of plasma markers in both the yolk-sac layer and the deeper layers of the fetal body. Thrombus induction with phototoxic dyes results in a negative-contrast image, in which the forming thrombus can be noted as a reduction in fluorescence intensity compared to the plasma marker FITC-dextran. A stable image with only slight movement at young gestational ages, and more intense motion at older gestational stages, is expected. In the setting of thrombus formation, platelet adherence followed by complete vessel occlusion can be observed within 30 min to 1 h, although only in a subset of animals, depending on fluorescent dye concentration, excitation light source, and gestational age of the fetus. Leukocyte observation will show slight fluorescence of the surrounding vessel wall with strongly fluorescent leukocytes within the blood, which show a reduced recruitment phenotype at young gestational ages. Additionally, multiphoton microscopy can be used to generate SHG and THG (second- and third-harmonic generation) signals. Most commonly, simple 2-D fluorescence image acquisition can be used to display blood cells circulating in the microvasculature of the fetal yolk-sac.
Our model holds great potential for in vivo imaging of developmentally regulated processes. Nonetheless, only a restricted range of developmental stages can be selected for yolk-sac experiments, as insufficient size and the onset of circulation at young stages, and yolk-sac involution at older stages of fetal development, limit the practicability of such preparations. Accordingly, our model is most suitable for in vivo experiments between developmental stages E13 and E18. Yet, with further manual skill and practice, younger animals (from E9.5 onward) can also be used. For older fetuses (E18 and E19), opening the yolk-sac in the required region will facilitate vessel imaging at this stage of development as well. In particular, a small incision in the cranial temporal region will allow access to the fetus for cerebral imaging purposes. Fetuses below the age of E13 are difficult to prepare due to their watery structure and the lacking stability of the organism. Therefore, exteriorization of the fetus at this stage is accompanied by high fetal mortality.
As the yolk-sac is exteriorized and a counterweighted fixation device is used to stabilize the imaging setup, the experimenter must be aware of the possible influence of these manipulations on basic physiologic processes (Figure 4). Additional pitfalls include the use of large beads at high concentrations to study blood flow velocity, which could result in blockage of circulatory routes. Modulations in the cardiovascular and hemodynamic parameters of the mother animal will impact placental perfusion and thus affect the fetus. Consequently, strict control of vital parameters and physiologic conditions, not only of the fetus itself but also of the mother animal, is needed throughout the experiment.
Using antibodies, labeling substances, or fluorescent protein-expressing genetically modified animals (compare Table 1), this model has a wide range of applications. In this context, one must be aware of the difficulties of genetically engineered reporter mice as well as antibody labeling, as classical cell markers might be developmentally regulated and only appear late during fetal life, or might be transiently expressed in other cell types during fetal ontogeny. Examples include CD41 for platelets (Mikkola et al., 2003).
Ongoing research focuses on developmental aspects of leukocytes, platelets, megakaryocytes, macrophages, endothelial cells, and progenitor cells. Whereas general developmental aspects (platelet and leukocyte hyporeactivity) are of great interest to neonatologists and physiologists, more specific research questions will have to address the precise underlying mechanisms of each uncovered phenotype. Not only do therapeutic options in the neonatal period depend on this, but knowledge regarding aging and maturation could potentially be obtained and translated to the understanding of adult malignancies or the modification of therapies. Even though this model offers great advances to the scientific community, the performing researcher must be aware of inherent ethical conflicts. Ethical considerations regarding the mentioned techniques must include not only general animal welfare regulations but should place special focus on the protection of unborn life, preventing unnecessary application of such invasive methodology without justifiable scientific and translational purpose. In summary, this model offers a powerful technique to study in vivo processes in the developing mouse fetus and to advance our understanding of basic physiologic and disease-related processes during ontogeny.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author(s).
ETHICS STATEMENT
The animal study was reviewed and approved by the responsible authorities at the Regierung von Oberbayern. | 9,352 | 2021-01-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Agalma: an automated phylogenomics workflow
Background In the past decade, transcriptome data have become an important component of many phylogenetic studies. They are a cost-effective source of protein-coding gene sequences, and have helped projects grow from a few genes to hundreds or thousands of genes. Phylogenetic studies now regularly include genes from newly sequenced transcriptomes, as well as publicly available transcriptomes and genomes. Implementing such a phylogenomic study, however, is computationally intensive, requires the coordinated use of many complex software tools, and includes multiple steps for which no published tools exist. Phylogenomic studies have therefore been manual or semiautomated. In addition to taking considerable user time, this makes phylogenomic analyses difficult to reproduce, compare, and extend. In addition, methodological improvements made in the context of one study often cannot be easily applied and evaluated in the context of other studies. Results We present Agalma, an automated tool that constructs matrices for phylogenomic analyses. The user provides raw Illumina transcriptome data, and Agalma produces annotated assemblies, aligned gene sequence matrices, a preliminary phylogeny, and detailed diagnostics that allow the investigator to make extensive assessments of intermediate analysis steps and the final results. Sequences from other sources, such as externally assembled genomes and transcriptomes, can also be incorporated in the analyses. Agalma is built on the BioLite bioinformatics framework, which tracks provenance, profiles processor and memory use, records diagnostics, manages metadata, installs dependencies, logs version numbers and calls to external programs, and enables rich HTML reports for all stages of the analysis. Agalma includes a small test data set and a built-in test analysis of these data. In addition to describing Agalma, we here present a sample analysis of a larger seven-taxon data set. Agalma is available for download at https://bitbucket.org/caseywdunn/agalma. Conclusions Agalma allows complex phylogenomic analyses to be implemented and described unambiguously as a series of high-level commands. This will enable phylogenomic studies to be readily reproduced, modified, and extended. Agalma also facilitates methods development by providing a complete modular workflow, bundled with test data, that will allow further optimization of each step in the context of a full phylogenomic analysis.
Background
Transcriptome data are fast becoming an important and cost-effective component of phylogenetic studies [1-5]. The rapid fall in sequencing prices has contributed to the growing number of phylogenetic studies that integrate data from genomes and transcriptomes, often referred to as "phylogenomic" analyses. There is wide recognition of the many steps such an analysis entails, including removal of ribosomal RNA, selection of transcript splice variants, translation, identification of homologous sequences, identification of orthologous sequences, sequence alignment, phylogenetic analysis, and summary of results. Implementing a phylogenomic analysis is not just a matter of executing available tools for each of these steps. Among other challenges, results must be summarized across multiple steps, detailed records must be kept of all analysis steps, data files often need to be reformatted between analyses, and computational load must be balanced according to the available resources.
Because phylogenomic studies are complex and have been manual or semi-automated, they are difficult to implement and explicitly describe, and require extensive technical effort to reproduce. These problems can make it difficult to evaluate results, integrate data across studies, expand analyses, or test the impact of alternative analysis approaches. In addition, manual analyses often include many subjective decisions that may impact the final results.
Some higher-level pipelines have addressed subsets of phylogenomic analyses. These tools include PartiGene [9], a pipeline to aid in the assembly and annotation of Sanger transcriptome data collected across a diversity of species, and SCaFoS [10], a semi-automated tool for the curation of supermatrices from previously assembled transcriptomes. No existing tool, however, can execute a full phylogenomic analysis of modern sequence data.
We addressed these needs by developing Agalma, an automated phylogenomics workflow. Using Agalma, an investigator can conduct complete phylogenomic analyses, from raw sequence reads to preliminary phylogenetic trees, with a small number of high-level commands. The results are accompanied by detailed reports that integrate diagnostic information across data sets and analysis steps. In a first pass with Agalma, the investigator conducts the analysis under default settings. The investigator then consults the reports to consider how best to optimize the analyses, and easily re-executes them with updated settings to produce new matrices and preliminary trees. The investigator can then analyze the optimized matrices with external phylogenetic inference tools not already included within Agalma to explore other factors, such as model sensitivity.
Implementation
We built Agalma with BioLite [11], a generic bioinformatics pipeline framework. BioLite profiles memory and CPU use, tracks the provenance of all data and analyses, and centralizes diagnostic reporting. Agalma is a modular workflow composed of helper scripts and a series of pipelines. Each pipeline is made up of stages that call internal analysis functions (many implemented with the help of Biopython [12]) and wrappers from the BioLite Python module. The wrappers invoke command-line tools, which include external bioinformatics tools such as the Bowtie2 aligner [13] and the Trinity assembler [14], as well as several C++ tools from BioLite. All calls to these external programs are logged, as are the version numbers of the programs and a hash of the binary, so that it is possible to detect differences in compiler settings. This information about calls to external programs is available to the user via the agalma diagnostics command. We provide automated installation tools for Agalma, BioLite, and all required third-party software. As a result, complete installation of Agalma and all dependencies requires only a handful of commands on OS X and Ubuntu, and the scripts facilitate installation on many other UNIX-compliant systems as well. Installation details are provided in the README and INSTALL files included with BioLite and Agalma.
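The binary-hash idea is straightforward to picture. The following Python fragment is an illustrative sketch of the concept only, not BioLite's actual API; the --version flag is an assumption that does not hold for every tool.

    import hashlib
    import subprocess

    def record_provenance(tool_path):
        """Illustrative provenance record for an external binary: a version
        string plus a hash of the executable itself, so that rebuilds with
        different compiler settings become detectable."""
        with open(tool_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        # Assumes the tool reports its version via --version (not universal).
        version = subprocess.run(
            [tool_path, "--version"], capture_output=True, text=True
        ).stdout.strip()
        return {"tool": tool_path, "version": version, "sha256": digest}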
All interactions with Agalma are with the command agalma. The command agalma assemble, for example, provides access to the assemble pipeline. These commands include built-in help via the -h option, which provides details on all arguments and the parameters for external packages that are exposed to the user through Agalma. The command agalma -h provides help for the top-level wrapper. The command agalma assemble -h, for example, provides help for the assemble command. Additional details regarding the interface can be found in the README file.
The first step for analyzing each data set, whether it consists of raw reads to be assembled or of previously assembled gene predictions, is to catalog the data. This creates a database entry that includes sample metadata and the paths to the data files. Agalma has built-in support for transcriptome assembly of paired-end Illumina data in FASTQ format only. When analyzing public data, raw reads and associated metadata can be imported directly from the NCBI Sequence Read Archive (SRA) using the command sra import. This command downloads the reads for a given SRA accession number (experiment, study, sample, or run), converts them into FASTQ format, and populates the catalog with the corresponding data paths and metadata. When previously assembled gene predictions are used, the assembly files in FASTA format can be directly catalogued.
There are several distinct tasks subsequent to cataloging the data: sequence assembly, loading the genes into the database, and phylogenetic analysis. These tasks are described in detail in the README and TUTORIAL files provided with Agalma, and are briefly summarized below.
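As an overview, an end-to-end run could look like the sketch below. The pipeline names are those described in this paper; the invocation pattern follows the documented agalma <pipeline> form, arguments are omitted as placeholders, and exact options should be checked with agalma <pipeline> -h (agalma test validates the installation).

    agalma catalog ...        # register data paths and sample metadata (placeholder args)
    agalma transcriptome      # per species: filter, assemble, and annotate raw reads
    agalma load               # load assembled gene sequences into the local database
    agalma homologize         # cluster homologous sequences across the chosen datasets
    agalma multalign          # filter, trim, and align each cluster (MACSE + GBLOCKS)
    agalma genetree           # maximum likelihood tree per cluster (RAxML)
    agalma treeprune          # prune gene trees into sets of putative orthologs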
Assembly
The pipeline transcriptome runs an assembly from read filtering through assembly and preliminary annotation. In a typical analysis, transcriptome would be run once for each species for which raw Illumina transcriptome data are available. The transcriptome pipeline executes the following sub-pipelines, which can also be run individually:
• sanitize filters raw paired-end Illumina data and randomizes the order of reads (maintaining read order between paired files) to facilitate subsequent subsetting. Reads are discarded if they do not meet a specified mean quality threshold, if they contain Illumina adapter sequences, or if the base composition is highly skewed (if any base represents either < 5% or > 60% of the sequence; a minimal sketch of this rule follows this list). This pipeline also generates FastQC [15] summaries of read quality.
• insert_size uses subassemblies and Bowtie2 mapping to estimate the mean and variance of the insert size (i.e., the length of the fragment between the sequencing adapters). This information provides important feedback on the success of sample preparation, and is also used in some downstream analysis steps.
• remove_rrna removes most reads that are derived from ribosomal RNA (rRNA). Removing rRNA in advance of assembly can reduce both the number of chimeric transcripts and the time required for assembly. This pipeline first assembles multiple subsets of reads. A range of subset sizes is used since the optimal number of reads for assembling a particular rRNA transcript depends upon multiple factors, including the fraction of reads that originate from rRNA and the uniformity of coverage across the rRNA transcripts (which can vary greatly, depending on how denatured the samples were prior to fragmentation). rRNA transcripts are then identified by blast comparison of these subassemblies to an included dataset of known rRNA sequences. The entire set of reads is then compared to the rRNA transcripts identified from the subassemblies, and any reads that Bowtie2 maps to them are removed. A plot of the distribution of reads across exemplar rRNA transcripts is shown to help evaluate rRNA assembly success. The top hit in the NCBI nt database is also provided as an independent check on sample identity and to help spot potential contaminants. The fraction of reads that derive from rRNA is also reported to aid in improving library preparations.
• assemble filters the reads at a higher quality threshold and assembles them. Assemblies can be conducted under multiple protocols (such as multiple assemblers, or the same assembler under different settings). This pipeline can also assemble multiple subsets of different numbers of reads, which provides perspective on how sequencing effort impacts assembly results. The default assembler is Trinity [16]. The wrapper we have included in Agalma for running Trinity makes two improvements over the wrapper script that comes with Trinity. First, we have added a filter between the Chrysalis and Butterfly stages to remove components that are smaller than the minimum transcript length parameter passed to Butterfly, since running Butterfly on these components will not yield a transcript. For the five assemblies in our test data set, this reduces the number of Butterfly commands from roughly 100,000 to 60,000. Second, we have replaced the ParaFly utility that is used for concurrent execution of the Butterfly commands with the GNU parallel tool [17] because it has better parallel efficiency. ParaFly executes the commands concurrently, but in blocks, so that the time to execute a block is the runtime of the slowest individual command. The runtimes can vary greatly because of variance in transcript length and complexity. In contrast, parallel load-balances the commands across the available processors.
• postassemble uses blastn against the rRNA reference sequences to identify rRNA transcripts (these could include low-abundance transcripts, such as parasite contaminants, that were not removed as reads by remove_rrna) and screens against the NCBI UniVec database to identify vector contaminants (such as protein expression vector contaminants in the sample preparation enzymes, which we have encountered in multiple samples). The longest likely coding region per transcript is identified and annotated using blastp against the NCBI SwissProt database. Reads are mapped to the assembly and all transcripts quantified using RSEM [18], and the splice variant with the highest expression is selected as the exemplar transcript for each gene for downstream analyses. It also produces a coverage map that helps assess the distribution of sequencing effort across genes. Finally, it uses blastx to compare all the transcripts against the NCBI SwissProt database to establish which are similar to previously known proteins.
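The base-composition rule in sanitize is easy to state precisely. Below is a minimal Python sketch of that rule (our illustration, not Agalma's code), flagging a read as skewed when any base falls below 5% or above 60% of the sequence:

    from collections import Counter

    def is_skewed(read, lo=0.05, hi=0.60):
        """Flag a read whose base composition is highly skewed: any of
        A/C/G/T representing less than 5% or more than 60% of the sequence."""
        counts = Counter(read.upper())
        n = len(read)
        fractions = [counts.get(base, 0) / n for base in "ACGT"]
        return any(f < lo or f > hi for f in fractions)

    print(is_skewed("ACGTACGTACGT"))  # False: each base is exactly 25%
    print(is_skewed("AAAAAAAAAAAC"))  # True: A > 60%, and G and T < 5%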
Following these steps, the investigator can inspect the assembled data directly or load them into the Agalma database to prepare them for phylogenetic analysis.
Load genes into the local Agalma database
Subsequent phylogenetic analyses require that all gene sequences to be considered are loaded into the local Agalma database. The load command takes care of this process. In a typical analysis, load is executed once for each dataset that has been assembled by the transcriptome pipeline described above, and once for each set of gene predictions from external sources (e.g., externally assembled 454 transcriptome data or gene predictions from genome sequencing projects).
Phylogenetic analysis
Once assemblies for multiple species are loaded into the local Agalma database, the user carries out a phylogenomic analysis by consecutively executing the following pipelines:
• homologize allows the user to specify which datasets to include in a particular phylogenetic analysis. It then uses an all-by-all tblastx search to build a graph with edges representing hits above a stringent threshold, and breaks the graph into connected components corresponding to clusters of homologous sequences with the Markov Clustering Algorithm (MCL) tool [19].
• multalign applies sampling and length filters to each cluster of homologous sequences, and uses another all-by-all tblastx within each cluster to trim sequence ends that have no similarity to other sequences in the cluster (these could include, for example, chimeric regions). The sequences of each cluster are aligned using MACSE [20], a translation-aware multiple-sequence aligner that accounts for frameshifts and stop codons. Multiple sequence alignments are then cleaned with GBLOCKS [21]. Optionally, the alignments can be concatenated together to form a supermatrix.
• genetree uses RAxML [22] to build a maximum likelihood phylogenetic tree for each cluster of homologous sequences. Gene trees can be filtered according to mean bootstrap support, which eliminates genes that have little phylogenetic signal [23] and reduces the overall computational burden. This filter can be applied prior to running treeprune (described below), which has the added advantage of restricting ortholog selection to well-supported gene trees. All options available in RAxML can be passed as optional arguments. If the input is a supermatrix consisting of concatenated orthologs, genetree builds a species tree.
• treeprune identifies orthologs according to the topology of gene phylogenies, using a new implementation of the method introduced in a previous phylogenomic study [24]. It uses DendroPy [25] to prune each gene tree into maximally inclusive subtrees with no more than one sequence per taxon (an illustrative sketch of this criterion follows this list). Each of these subtrees is considered a set of putative orthologs. The pruned subtrees are re-entered as clusters into Agalma's database.
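The pruning criterion in treeprune can be illustrated with a self-contained Python sketch. Agalma itself uses DendroPy on real gene trees; the toy tree class and recursion below are ours, showing only the idea of keeping maximally inclusive subtrees in which no taxon appears more than once:

    from collections import Counter

    class Node:
        """Toy tree node: leaves carry a taxon label, internal nodes children."""
        def __init__(self, taxon=None, children=None):
            self.taxon = taxon
            self.children = children or []

    def subtree_taxa(node):
        # Multiset of taxon labels beneath (and including) this node.
        if not node.children:
            return Counter([node.taxon])
        counts = Counter()
        for child in node.children:
            counts.update(subtree_taxa(child))
        return counts

    def prune_orthologs(node):
        """Return the leaf sets of maximally inclusive subtrees in which
        no taxon appears more than once (the treeprune criterion)."""
        counts = subtree_taxa(node)
        if max(counts.values()) == 1:    # subtree is duplicate-free: keep it whole
            return [set(counts)]
        results = []                     # otherwise descend into the children
        for child in node.children:
            results.extend(prune_orthologs(child))
        return results

    # Example: taxon A is duplicated at the root, so two subtrees are returned.
    tree = Node(children=[
        Node(children=[Node("A"), Node("B")]),
        Node(children=[Node("A"), Node("C")]),
    ])
    print(prune_orthologs(tree))         # [{'A', 'B'}, {'A', 'C'}]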
After treeprune, the user can build a supermatrix and a preliminary maximum likelihood species tree with RAxML. These steps, which include rerunning multalign and genetree on the orthologs, are detailed in the Agalma TUTORIAL file. Agalma produces a partitions file that describes which regions of the supermatrix correspond to which genes. The user can then proceed with more extensive phylogenetic analyses of the supermatrix using external phylogenetic inference software of their choice (only RAxML is included with Agalma at this time). As the alignments for each gene are also provided, the investigator can also apply promising new approaches that simultaneously infer gene trees and species trees [26].
Test data and analyses
A small set of test data is provided with Agalma. It consists of 25,000 100bp Illumina read pairs for the siphonophore Hippopodius hippopus, a subset of 72-74 gene sequences assembled for each of five siphonophores, and a subset of 40 gene predictions from the Nematostella vectensis genome assembly. These data were chosen because they run relatively quickly and enable testing of most commonly used features.
These test data serve several purposes. They allow a user to validate that Agalma is working correctly, and users are strongly encouraged to run this test with the command agalma test right after installation. This test analysis takes about 20 minutes on a 2-core 2 GHz computer. The test data also serve as the foundation for the example analysis described in the TUTORIAL file. For the developer, the agalma test command serves as a regression test to check if changes break existing features. We routinely run this test in the course of adding and refactoring code. The test data also serve as a minimal case study for developing new features without needing to first download and curate data.
Results and discussion
In addition to the small test data sets included with Agalma, here we present an example analysis of larger data sets from seven species. Though most phylogenetic analyses would include more taxa than this simple example case, the size of the dataset for each species is typical for contemporary phylogenomic analyses. This seven-taxon data set consists of new raw Illumina reads for five species of siphonophores (Abylopsis tetragona, Agalma elegans, Nanomia bijuga, Physalia physalis, and Craseoa sp.), and gene predictions for two outgroup taxa, Nematostella vectensis [27] and Hydra magnipapillata [28], produced by previous genome projects. mRNA for the five siphonophore samples was isolated with Dynabeads mRNA DIRECT (Life Technologies) and prepared for sequencing with the TruSeq RNA Sample Prep Kit (Illumina). The sample preparation was modified by including a size selection step (agarose gel cut) prior to PCR amplification. Analyses were conducted with Agalma at git commit dc549d23 and BioLite at git commit 025fe65e (versions 0.3.4 with patches).
We deposited the new data in a public archive (NCBI Sequence Read Archive, BioProject PRJNA205486) prior to running the final analysis. A git repository of the scripts we used to execute the example analyses is available at https://bitbucket.org/caseywdunn/dunnhowisonzapata2013. These scripts download the data from the public archives, execute the analyses, and generate the analysis reports. All of the figures presented here are taken from the analysis reports generated by Agalma. This illustrates how a fully reproducible and open phylogenomic analysis can be implemented and communicated with Agalma. These scripts can be used as they are to repeat the analyses. They could also be modified to try alternative analysis strategies on these same data, or they could be adapted to run similar analyses on different data.
Assembly
The tabular assembly report (index.html in Additional file 1) summarizes assembly statistics across samples, and links to more detailed assembly reports for each sample. For the example analysis, this summary indicates, among other things, that the fraction of rRNA in each library ranged from 0.4% to 27.2% and the insert sizes were on average 266 bp long. The detailed assembly reports have extensive diagnostics that pertain to sample quality and the success of library preparation. As an example, Figure 1 shows several of the plots from the detailed assembly report for Agalma elegans, the siphonophore after which our tool is named. The distribution of sequencing effort across transcripts (Figure 1a) and the size distribution of transcripts (Figure 1b) are typical for de novo Illumina transcriptome assemblies.
Phylogenetic analyses
The Agalma phylogeny report includes a plot of the number of sequences considered at each stage of the analysis (Additional file 2). In the example analysis, the step that removed the most sequences was the first step of homology evaluation in the homologize pipeline (Figure 2). This reduction is due to low similarity to other sequences and to clusters with poor taxon sampling. The next major reduction occurred in cluster refinement in multalign. This reduction is largely due to the elimination of clusters that failed the taxon sampling criteria, and reflects uncertainty regarding the homology of some sequences and sparse sampling of some homologs. The next major reduction in the number of genes occurred in treeprune. These reductions are due to both uncertainty regarding orthology and poor sampling of some ortholog groups. The preliminary species tree for the example analysis (Figure 3) is congruent with previous analyses of siphonophore relationships [29].
Resource utilization
Phylogenomic analyses are computationally intensive. Detailed information about resource utilization helps investigators plan resources for projects and balance computational load more efficiently. It is also critical for the optimization of the analyses, and can help guide design decisions. For each analysis, Agalma produces a resource utilization plot that displays the time and maximum memory used by external executables (Figure 4). In the example analysis, the step with the highest peak memory use, and the longest-running step, in the transcriptome pipeline was the call to Trinity during the assemble stage.
Figure 3 (caption): The preliminary maximum likelihood phylogeny resulting from the example analysis. This unrooted tree was inferred from the protein supermatrix under the WAG+Γ model.
Figure 4 (caption): A profile of computational resource utilization for the transcriptome pipeline, shown for the Agalma elegans assembly. Annotation of transcripts (the calls to blastx and blastp during the postassemble stage) showed the highest processing time (top row), while assembly (the call to Trinity during the assemble stage) displayed the highest memory use and took the longest real time to finish (bottom row).
Conclusions
A distinction is sometimes drawn between manual approaches that enable close user inspection of data and results, and automated approaches that isolate the user from their results. This is a false dichotomy: automating analyses and examining the results closely are not mutually exclusive. Automated analyses with detailed diagnostics provide the best of both worlds. The user has a very detailed perspective on their analysis, and the added efficiencies of automation leave the investigator with far more time to assess these results. Automation also allows improvements made in the context of one study to be applied to other studies much more effectively.
For a study to be fully reproducible, both the data and the analysis must be described explicitly and unambiguously. The best description of an analysis is the code that was used to execute the analysis. By automating phylogenomic analyses from data download through matrix construction and preliminary phylogenetic trees, Agalma enables fully reproducible phylogenomic studies. This will allow reviewers and readers to reproduce an entire analysis exactly as it was run by the authors, without needing to re-curate the same dataset or rewrite the analysis code.
There are alternative approaches to many of the steps in a phylogenomic analysis presented here. There are, for example, multiple tools that identify orthologs according to different methods and criteria [30,31]. Agalma is a general framework and can be expanded to include these additional methods, and directly compare them in the context of a complete workflow that is consistent in all other regards. | 5,122.6 | 2013-07-24T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Robust structural superlubricity under gigapascal pressures
Structural superlubricity (SSL) is a state of contact with no wear and ultralow friction. SSL has been characterized at contact with van der Waals (vdW) layered materials, while its stability under extreme loading conditions has not been assessed. By designing both self-mated and non-self-mated vdW contacts with materials chosen for their high strengths, we report outstanding robustness of SSL under very high pressures in experiments. The incommensurate self-mated vdW contact between graphite interfaces can maintain the state of SSL under a pressure no lower than 9.45 GPa, and the non-self-mated vdW contact between a tungsten tip and graphite substrate remains stable up to 3.74 GPa. Beyond this critical pressure, wear is activated, signaling the breakdown of vdW contacts and SSL. This unexpectedly strong pressure-resistance and wear-free feature of SSL breaks down the picture of progressive wear. Atomistic simulations show that lattice destruction at the vdW contact by pressure-assisted bonding triggers wear through shear-induced tearing of the single-atomic layers. The correlation between the breakdown pressure and material properties shows that the bulk modulus and the first ionization energy are the most relevant factors, indicating the combined structural and electronic effects. Impressively, the breakdown pressures defined by the SSL interface could even exceed the strength of materials in contact, demonstrating the robustness of SSL. These findings offer a fundamental understanding of wear at the vdW contacts and guide the design of SSL-enabled applications.
in a multiple-contact setup [3]. A recent breakthrough in the continuous epitaxy of single-crystal graphite films implies the opportunity to achieve SSL at length scales beyond a few centimeters [20]. These achievements shed light on device- or structure-level applications of SSL instead of a typical tip-sample setup in tribological studies.
Robustness under extreme mechanical loading conditions and long service life are crucial for practical applications of SSL, to assure reliability and endow mechanosensitive functions [21]. However, high pressures at the contact may lead to structural instabilities and result in wear. Mass loss and transfer at the sliding interface during wear may break down the SSL state and shorten the service life of the mechanical parts [22]. In 1953, Archard [23] proposed the progressive wear model, suggesting that the volume of material removed by wear at macroscopic rough contacts is proportional to both the applied load and the sliding distance. Wear is unavoidable in this picture, although it can be minute at the beginning of the sliding process. Recently, the atom-by-atom attrition model for microscopic wear was developed [24,25], indicating that the theory of progressive wear applies to both macroscale [26] and microscale [27] systems. It is worth noting that recent experimental evidence demonstrates wear-free behavior over a 100 km sliding distance at an SSL contact [28], indicating that damage activation and accumulation may be absent. The picture of progressive wear thus might fail at the SSL state. The pressure on the contact in these studies is on the order of several megapascals, much lower than the extreme conditions in applications (e.g., the strength of materials), which could reach the gigapascal level. The vdW interface such as that between graphite remains stable even under pressures of several tens of gigapascals before structural transitions into diamond [29-34]. It would thus be interesting to probe the upper bound on the admissible pressure of an SSL state and explore the potential failure mechanisms at the interface beyond that bound, where the progressive wear mechanism may be recovered.
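For reference, Archard's relation is commonly written as V = K N d / H, with V the worn volume, N the applied normal load, d the sliding distance, H the hardness of the softer surface, and K a dimensionless wear coefficient. A one-line Python transcription of this standard formula:

    def archard_volume(K, N, d, H):
        """Archard's progressive wear law: worn volume grows linearly with
        normal load N [N] and sliding distance d [m]; H is hardness [Pa],
        K a dimensionless wear coefficient."""
        return K * N * d / H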
We explored the wear characteristics of SSL by studying two representative vdW contacts, graphite/graphite ('self-mated') and tungsten/graphite ('non-self-mated'), at elevated contact pressure using a home-built loading system [35]. The critical pressure (P_cr) below which wear is absent was characterized experimentally; it can reach the gigapascal level and is shown to be strongly tied to the nature of interfacial electronic coupling. The microscopic process of wear identified in experiments above P_cr was analyzed by atomistic simulations, suggesting a step-wear mechanism that can activate subsequent progressive wear processes. The study demonstrates the outstanding mechanical robustness and wear-free feature of SSL under pressures even beyond the strength of the materials in contact, and lays down principles for SSL design in tribological or device applications via material selection and pressure control.
SSL at graphite contacts under high pressure
Wear characteristics of SSL were first explored at the graphite/graphite contact because graphite is the most commonly used material for SSL (Fig. 1). Figure 1a, b illustrates the experimental setup and an optical microscopy (OM) image of the contact constructed by cleaving and transferring a self-retractably moving (SRM) flake from a microscopic graphite mesa (the 'mesa') [13] onto a mechanically exfoliated graphite flake (the 'substrate'). Our previous works showed that any cleaved SRM flake from highly oriented pyrolytic graphite (HOPG) features single-crystalline surfaces without detectable defects, as characterized by atomic force microscopy (AFM), electron backscattered diffraction (EBSD), and Raman spectroscopy [19,36,37]. Crystal orientations of the graphite substrate were studied using EBSD (Fig. 1g-l), and the surface roughness was characterized by OM and AFM before tests. The results confirm that the surfaces in contact are single-crystalline and free of grain boundaries. During the tests, normal loads were applied to the mesa through a tungsten tip with a radius of several micrometers. A home-built loading system is used to apply higher pressure over a larger contact area than the nanoscale contacts studied using AFM-based loading [35]. The loading amplitude was controlled in a closed loop. The graphite substrate was driven through a displacement stage, which leads to relative sliding at the contact. All experiments were carried out under ambient conditions and were monitored in situ through an OM. The friction stress between the mesa and the substrate is ultralow (~20 kPa), and the friction coefficient (the ratio of the friction force to the normal force) is on the order of 10^-5 under pressures of P = 1-6 GPa (Fig. 1c). The near independence of the measured friction forces on pressure up to the gigapascal scale demonstrates the robustness of SSL at the graphite/graphite contact under high pressures.
Figure 1d, e shows the OM and AFM images of the graphite substrate after a sliding test under a normal load of 10 mN (the highest normal load accessible with the loading system, corresponding to a contact pressure of P = 9.45 GPa; see Supplementary Note 1 for details) and a sliding distance of up to 10 mm in 5 × 10^2 cycles. The size of the effective contact (~0.8 μm) between the graphite mesa under the tip and the substrate is much smaller than the size of the mesa (6 μm × 6 μm), so the edge effect [38] of the mesa on the robustness of SSL can be neglected. Further considering the atomic-level flatness of graphite, the roughness effect on the nature of the local contact is expected to be minor. A pre-cleaning step was carried out to sweep out contaminants (e.g., adsorbed molecules such as water and hydrocarbons [39-41]; see Supplementary Note 2 for details). The contaminants originate from the environment before the construction of the contact and may not be completely excluded at the contact. However, the ultralow friction coefficient of SSL is still preserved, indicating the robustness of SSL against the atmosphere [41]. Consequently, the pre-cleaning step before tests removes some of the confined molecules, reducing friction to a steady-state level (Supplementary Fig. 3b). Our previous work reported no obvious difference in the friction of SSL in the mesa/substrate setup between ambient conditions and a nitrogen atmosphere, with relative humidities (RHs) of 42% and 10%, respectively [19]. After the test, debris of contaminants was characterized at the boundaries of the pre-cleaned region as a result of the edge-sweeping process. In contrast, neither aggregation of debris nor rupture of the material was observed in the region under sliding tests, suggesting that the contact may be contaminant-free. The resolution of our AFM characterization is 0.1 μm × 0.1 μm × 1 nm (Fig. 1e), which establishes the absence of wear below a minimum detectable wear rate of 10^-10 mm^3/Nm under a normal load of 10 mN and a sliding distance of 10 mm. In comparison, the wear rates at a macroscopic steel contact and a microscopic silicon/silicon nitride contact are 10^-7 to 10^-3 mm^3/Nm [42] and 10^-6 to 10^-4 mm^3/Nm [43], respectively. Raman spectroscopy also confirms the absence of the D peak at 1350 cm^-1 in the graphite substrate (Fig. 1f), indicating no detectable atomic-level defects. These results demonstrate the exceptional wear resistance of self-mated graphite SSL under gigapascal-level pressures.
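The quoted detection limit follows directly from the stated AFM resolution and test conditions; a quick sanity check of the arithmetic (ours, using only numbers given in the text):

    # Minimum detectable worn volume from the stated AFM resolution:
    # 0.1 um x 0.1 um x 1 nm, expressed in mm^3 (1 um = 1e-3 mm, 1 nm = 1e-6 mm)
    V = (0.1e-3) * (0.1e-3) * (1e-6)   # = 1e-14 mm^3
    F = 10e-3                          # normal load: 10 mN, in newtons
    d = 10e-3                          # sliding distance: 10 mm, in meters
    print(V / (F * d))                 # 1e-10 mm^3/Nm, the quoted detection limit
    # Friction coefficient check: ~20 kPa friction stress at, e.g., 2 GPa pressure
    print(20e3 / 2e9)                  # 1e-5, matching the reported order of magnitude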
Breakdown of SSL at non-self-mated vdW contacts
The breakdown of the graphite/graphite contact under high pressures proceeds with interfacial failure of the material itself. Notably, our recent efforts made it possible to construct SSL contacts with only one of the contact surfaces made from vdW-layered materials. Theoretical calculations demonstrate that non-self-mated SSL contacts between a metal and graphite exhibit weaker pressure resistance than graphite/graphite contacts [44], and it is natural to question the robustness of the SSL state therein. A tungsten/graphite contact was designed as a testbed to explore the breakdown and wear processes at a non-self-mated SSL contact. Tungsten was chosen for its high strength among the common metals widely used in mechanical systems. The contact was constructed by pushing a tungsten tip with a radius of several micrometers onto the graphite substrate (Fig. 2a, b). Normal forces were applied to the tungsten tip through the home-built loading system. Other aspects of the experimental setup follow the graphite/graphite contact. The friction coefficient is measured to be on the order of 10^-3 even under gigapascal-level pressures (Supplementary Fig. 5). However, the shear strength, τ_s = 10 MPa, is much higher than that of the graphite contact (~20 kPa), possibly due to adhesion. Consequently, the notion of SSL is justified here insofar as the mechanical energy dissipation during sliding, and thus the shear strength, is of concern.
Wear tests of the tungsten/graphite contact were conducted in an increasing load sequence from 0.1 mN to 10 mN (see "Methods" section for details). Figure 2c, d shows the OM and AFM images of the graphite substrate after sliding tests under loads of 0.2 mN and 0.5 mN, which correspond to pressures of P = 3.74 GPa and 5.07 GPa, respectively (Supplementary Fig. 1). The sliding velocity is 10 μm/s with a reciprocating amplitude of 30 μm. Indication of wear is absent after a sliding distance of 0.6 m, or 10^4 cycles, under P = 3.74 GPa, but emerges as P increases to 5.07 GPa. The long-distance wear-free performance of SSL contacts under high pressures cannot be explained by the progressive wear model, which predicts that the mass loss is proportional to the pressure and sliding distance. This is because the SSL contact is atomically smooth and the in-plane sp2 bonding network is strong [45,46]. The result suggests that a critical pressure (P_cr > 3.74 GPa) is required to activate the wear process (see Supplementary Note 4 for more experimental results). AFM characterization of the test region shows that wear of the tungsten/graphite contact occurred at the beginning of sliding under P = 5.07 GPa (Trace I, Fig. 2c), and proceeds by tearing of graphite layers (Fig. 2e, f).
Mechanisms of SSL breakdown and wear activation
The breakdown pressure of the tungsten/graphite contact is higher than 3.74 GPa, beyond which wear was first observed. To elucidate the breakdown and wear mechanisms of SSL represented by vdW contacts under pressure, we carried out atomistic simulations. The SSL contact shows a step-wear scenario in which material loss is not activated below the critical pressure, in contrast to the classical laws of progressive wear. Following the experimental evidence, two key stages are proposed: (I) wear initiation through pressure-assisted interfacial bonding, and (II) wear development through shear-induced tearing of the graphite layers.
Note that, unlike conventional rough contacts with multiple asperities [45,46], the SSL contact studied in this work is atomically smooth and can be described by a single-contact model given the stiff in-plane bonding network. Previous work [47] showed that a tungsten tip etched by KOH is smooth, with a root mean square (RMS) roughness below 0.3 nm. We characterized our tip using high-resolution SEM and AFM. The results show height variations of a few nanometers at the micrometer scale and atomic smoothness at the nanometer scale (Supplementary Fig. 6c, d), which validates the argument that local roughness is not a crucial issue for the discussion. A recent work [48] shows that the contact pressure change converges as the roughness wavelength increases; therefore, the effect of roughness on contact pressure is negligible. Our indentation setup in the sliding tests results in a locally high pressure across an effective contact region of hundreds of nanometers (Supplementary Note 1). Wear at the SSL contact can be directly related to the pressure enforced across the interface, the mechanochemistry of which was explored by first-principles calculations based on density functional theory (DFT). X-ray photoelectron spectroscopy (XPS) characterization of the tungsten tip suggests the existence of an oxide layer on the tip surface (Supplementary Fig. 6f). A model consisting of a (√2 × √2)R45°-reconstructed (001) WO3 surface and a graphite substrate was adopted, which closely follows our experimental setup and previous studies [49,50] (Fig. 3a, b). The DFT results show that the pressure increases rapidly with compression and declines after a peak at 3.50 GPa (Fig. 3d), which is defined as the breakdown pressure (P_cr) due to the formation of interfacial O-C bonds (Fig. 3c). The estimated critical pressure agrees with the experimental value of 3.74 GPa.
To quantify the change in interfacial electronic coupling across P_cr, we calculated the electron localization function (ELF), which measures the extent of spatial localization of the reference electron [51]. The value of the ELF ranges from 0 to 1; perfect electron localization and free-electron-gas behavior are identified by ELF values of 1 and 0.5 [52], respectively. According to our model, the ELF value between the nearest O-C atom pairs is less than 0.1 for P < 1.7 GPa, which indicates typical vdW interaction. As the pressure reaches P_cr, this value increases to 0.27, indicating the rise of ionic bonding character at the interface [21]. Beyond P_cr, the ELF value increases to 0.79, indicating the formation of stable covalent bonding between O and C atoms, which competes with the strong in-plane bonding network and results in wear (Fig. 3c). The effects of pressure-assisted bonding on the frictional characteristics were then explored. Before the formation of interfacial O-C bonds, shear strengths calculated by first-principles simulations remain as low as < 0.14 GPa (Fig. 3e), indicating ultralow shear resistance at the WO3/graphite interface below the breakdown pressure. The shear strength increases to τ_s = 3.65 GPa after P_cr is reached (Fig. 3e). Molecular dynamics (MD) simulations show that wear is activated by the formation of wrinkles and tears in the graphite layers caused by shear forces at the interface (Fig. 3f). The deformation and failure of the graphite layer near the trailing edge can be predicted by the shear-lag model, which predicts strain localization at the contact edge [53]. This finding explains our experimental observation of wear in the form of graphite tearing.
The same argument applies to the graphite/graphite SSL contact, although its P_cr was not experimentally determined. Previous studies show that the vdW interface between graphite remains stable even under pressures of several tens of gigapascals prior to structural transitions into diamond [29-31]. The highest accessible pressure in our experimental setup is only 9.45 GPa, below which no bonds form across the interface and wear can hardly be activated. Although the pressure loading in our work differs from the hydrostatic compression used to explore structural transitions from graphite to diamond, the underlying mechanism remains similar, namely the transition from sp2 to sp3 bonding networks. The reported gigapascal-level pressures for this transition thus support the ultrahigh breakdown pressure of SSL we uncovered. To verify the step-wear mechanism at the graphite/graphite contact, vacancy defects were introduced into the graphite substrate by argon plasma treatment. The experimental results show that the breakdown pressure of the defective graphite/graphite contact is reduced to 0.4 GPa and that wear is characterized by tearing ruptures of the graphite substrate (Supplementary Note 6). These results suggest that the step-wear mechanism also applies to the graphite/graphite contact. This finding agrees with the fact that solid lubrication using graphite should avoid the formation of interlayer bonding in dry or vacuum conditions; water can alleviate this problem by providing -H and -OH terminations when C-C bonds break.
Understanding SSL robustness under high pressures
Our work demonstrates a wear-free feature of graphitic SSL contacts without interfacial bonds under GPa pressures over long sliding periods. To understand the unexpected robustness of SSL states and extend our discussion to SSL-enabled applications, we studied the material dependence. A W/graphite contact can be constructed by preventing oxidation and was studied by performing DFT calculations for comparison with the WO3/graphite and graphite/graphite contacts (see "Methods" section for details). The DFT results suggest a higher breakdown pressure of P_cr = 5.4 GPa for the contact with the bare W (001) surface. We find that, instead of the covalent bonding seen at the WO3/graphite contact beyond P_cr, the transition in the electronic coupling at the W/graphite interface is mediated by charge transfer [54,55], and this electrostatic nature of the interaction results in a higher value of P_cr.
In contrast to the WO3/graphite contact, the W/graphite interface is electrically conducting and thus more interesting for device applications (Supplementary Fig. 6e). Our discussion is elaborated by including a wide spectrum of metals in contact with graphite (Fig. 4). DFT calculations report the breakdown pressures and identify two characteristic modes of failure beyond them. The first class of non-self-mated contacts (e.g., Cu, Au, Ag/graphite) can withstand pressures up to ~100 GPa, much higher than the compressive strength of the metals themselves, at which plasticity is triggered [53,56]. As a result, the breakdown pressure of the SSL contact is limited by the strength of the metal instead. For graphite contacts with Os, W, Re, Ni, Co, Ta, Hf, Ti, and Zr, the vdW interfaces break down by forming covalent bonds. The value of P_cr is much lower, with the highest value of 9.78 GPa for the Os/graphite contact (Fig. 4a). The physics behind the pressure resistance can be elucidated by the following understanding of cohesion in solids. It is well known that the 'physical' stiffness of a solid is strongly tied to the 'chemical' one defined by the ionization energy (IE) and the electron affinity (EA) [57]. This understanding is extended to the vdW interface here. A correlation analysis shows that the material features most relevant to the value of P_cr are the bulk modulus (B) and the IE, with correlation coefficients of 0.95 and 0.90, respectively (Fig. 4b-d). Metals with high elastic moduli usually feature high surface electron densities, which lead to higher resistance to the transition in the electronic coupling. On the other hand, metals with higher IEs are less reactive, so higher pressures are needed to form chemical bonds between the metal and graphite. The destruction of an SSL contact under pressure is thus a result of the combined effects of structural distortion in the metals and charge transfer at the interface, provided the metals are stable by themselves. This understanding can guide material screening for robust SSL applications under high pressure.
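The reported correlation analysis reduces to a Pearson correlation between P_cr and each candidate material property once the values are tabulated. The sketch below illustrates the computation only; apart from the Os and W breakdown pressures quoted in the text, the arrays are hypothetical placeholders, not the paper's data:

    import numpy as np

    # Hypothetical placeholder values, one entry per metal (NOT the paper's data;
    # only 9.78 GPa for Os and 5.4 GPa for W echo numbers quoted above).
    P_cr = np.array([9.78, 5.4, 4.6, 3.9, 3.1])        # breakdown pressures, GPa
    B = np.array([395.0, 310.0, 370.0, 180.0, 170.0])  # bulk moduli, GPa
    IE = np.array([8.44, 7.86, 7.83, 7.64, 7.88])      # first ionization energies, eV

    print(np.corrcoef(P_cr, B)[0, 1])   # Pearson r against the bulk modulus
    print(np.corrcoef(P_cr, IE)[0, 1])  # Pearson r against the ionization energy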
In brief, the robustness of SSL states under gigapascal-level pressures is reported here. Below the breakdown pressure, the interfacial electronic coupling can be tuned by the pressure, but signature transitions are not present. The findings are important for tribological applications under extreme loading conditions, and for reconfigurable device applications, by opening the avenue of pressure control. Specifically, reconfigurable SSL-enabled devices can be constructed by harnessing the sliding motion. The robustness of SSL at graphite/graphite and tungsten/graphite (WO3/graphite and W/graphite) contacts was studied in our experiments under ambient conditions at a sliding velocity of 10 μm/s. The speed range covered by common friction tests conducted with AFM and tribometers is 10^-7 to 10^-2 m/s [3,19]. Reportedly, SSL can be sustained at speeds from 25 m/s [16] to 294 m/s [58]. Temperature is another crucial factor in practical applications. Thermal fluctuations not only facilitate sliding over free-energy barriers, resulting in reduced friction [59,60], but also help to activate the breakdown of SSL interfaces, which can be regarded as a chemical reaction [24,25].
Sample preparation
The graphite mesa (6 μm × 6 μm) was etched from highly oriented pyrolytic graphite (HOPG) using oxygen plasma, before which a SiO2 film with a thickness of 100 nm was deposited on top of the HOPG to increase the bending stiffness and friction resistance during tip manipulation. The graphite substrate was mechanically exfoliated from normal flake graphite (NGS, Germany) by the Scotch-tape method and transferred onto a silicon substrate with a 300-nm-thick SiO2 layer. The graphite/graphite contact was constructed by transferring the graphite mesa onto the graphite substrate (the mesa/substrate setup) using a tungsten tip manipulated by a micromanipulator (Kleindiek MM3A) [19,61]. Tungsten tips with radii of a few micrometers were electrochemically etched in KOH solution (5 mol/L) following the reaction W + 2KOH + 2H2O → K2WO4 + 3H2↑ [47,62], where the reaction product K2WO4 is soluble (solubility 51.5 g per 100 g H2O at 20 °C). All tips were ultrasonically rinsed with acetone, alcohol, and deionized water sequentially before the tests to exclude the effects of adatoms and oxides. Both graphite surfaces at the contact are single crystalline (Fig. 1g-l).
Wear tests
Experiments were conducted in a home-built loading system [35] with a loading range of 0.1-10 mN. The amplitude of the loads can be closed-loop controlled during sliding. Forces were calibrated with a high-precision balance (METTLER TOLEDO, XA205DU) before the tests. All tests were conducted in an increasing load sequence of 0.1, 0.2, 0.5, 1, 2, 3, 5, 7, 9, and 10 mN at the same sliding velocity of 10 μm/s. To determine the critical pressure, P_cr, the number of sliding cycles under each load was set to 10. For long-distance sliding tests, 5 × 10^2 cycles were carried out for the graphite/graphite contact to ensure a sliding distance of over 10 mm, and 10^4 cycles for the tungsten/graphite contact, with a distance of over 0.6 m. The loading system is equipped with an OM (Hirox KH-3000) to position the tip on the microscale mesa or the substrate.
Wear characterization
Wear was first judged by in-situ OM (Hirox KH-3000) characterization, which can monitor changes of the surfaces in real time. Detailed wear characterization was conducted using an AFM (Oxford Instruments MFP-3D Infinity) in tapping mode. Morphology changes of the surfaces were measured via the vibration amplitudes of the AFM probe. Raman spectroscopy (HORIBA Scientific) was carried out to quantify atomic-scale defects.
Friction measurements
Friction at the graphite/graphite contact was investigated using a home-built two-dimensional force sensor. The lateral resolution of the sensor is ~80 nN, and the range of the normal load is on the order of millinewtons [35]. Normal loads were applied to the SiO2 cap on top of the graphite mesa under closed-loop control. Friction was measured at a sliding speed of 10 μm/s, repeated for 10 cycles.
First-principles calculations
To obtain the breakdown pressure and shear characteristics between a tungsten tip (W, or WO3 if surface oxidation is considered) and graphite, DFT calculations were performed using the Vienna Ab initio Simulation Package [63]. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) parameterization was used for the exchange-correlation functional [64,65]. A cutoff energy of 520 eV was used for the plane-wave basis set. The vacuum layer was set to 4 nm to avoid interaction between periodic images [66]. For Brillouin-zone integration, Monkhorst-Pack k-grids with a mesh density of 3 Å^-1 were adopted. The structures were relaxed using the conjugate gradient (CG) algorithm. The thresholds for energy and force convergence were set to 0.1 meV/atom and 0.01 eV/Å, respectively.
The computational supercell consists of 2 layers of graphene in AB stacking and 2 (4) atomic layers of W (WO3). To reduce size effects, we used supercells (3 × 1 W / 2 × 4 graphite; 2 × 3 WO3 / 3 × 5 graphite) for which the lattice misfit of graphite is 2% and 3%, respectively. Periodic boundary conditions (PBCs) along the in-plane directions were enforced. The breakdown pressure was studied by moving the mesa (W or WO3) towards the graphite substrate stepwise, with the top layer of the mesa and the bottom layer of the substrate fixed. DFT calculations were performed to determine the pressure from the forces acting on the atoms in the mesa as a function of the interfacial distance at the contact [21]. The breakdown pressures were determined from the peaks in the pressure-displacement curves. Shear tests were performed by transversely moving one of the contact surfaces. The shear stress is calculated from the forces acting on the top layer of the mesa along the sliding direction. To calculate the compressive strengths or breakdown pressures of the metals, 4 atomic layers were constructed.
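Locating the breakdown pressure from such a run amounts to finding the peak of the pressure-displacement curve; a minimal Python sketch (the array inputs are assumed to come from a stepwise compression run):

    import numpy as np

    def breakdown_pressure(displacement, pressure):
        """Return (displacement, pressure) at the peak of the
        pressure-displacement curve; the peak defines P_cr as described
        above. Both arguments are assumed 1-D arrays of equal length."""
        i = int(np.argmax(pressure))
        return displacement[i], pressure[i]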
Molecular dynamics simulations
Molecular dynamics (MD) simulations were carried out using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [67]. The all-atom optimized potential, which captures the essential interatomic interactions, was adopted for graphite [68]. The vdW interaction was described by the 12-6 Lennard-Jones potential V(r) = 4ε[(σ/r)^12 - (σ/r)^6] with a cutoff distance of 1.2 nm. At a reduced interfacial distance of 2.5 Å, the shear strength (5.22 GPa) exceeds the breakdown pressure. PBCs along the in-plane directions were used in all simulations. All constructed structures were fully energy-minimized using a conjugate gradient algorithm before the shear test. Shear was applied by moving the tungsten layer at a velocity of 20 m/s, and the mechanical responses were investigated at 0.1 K using a Nosé-Hoover thermostat. Two edges of the graphene layer were fixed to avoid rigid displacement of the graphite.
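For reference, the 12-6 Lennard-Jones form quoted above can be transcribed directly; ε and σ are inputs here, not parameters taken from the paper:

    def lj_potential(r, eps, sigma, r_cut=1.2):
        """Truncated 12-6 Lennard-Jones potential
        V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), zero beyond r_cut.
        Distances in nm (the text uses a 1.2 nm cutoff); eps sets the
        energy unit."""
        if r >= r_cut:
            return 0.0
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 * sr6 - sr6)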
Fig. 1 |
Fig. 1 | Wear tests of the graphite/graphite contact. a, b Experimental setup (a) and optical microscopy (OM) image (b) of the graphite/graphite contact in the mesa/substrate setup. c Experimental measurement of the average shear stress below the breakdown pressure. The results include both loading and unloading processes. The error bar represents the standard deviation of 10 repeated experiments, and k is the fitting slope. d, e OM (d) and atomic force microscopy (AFM, e) images of the graphite substrate after sliding tests. The testing procedure includes 2 steps: (1) pre-cleaning the substrate within a region of 26 μm × 14 μm (the large dashed box), and (2) conducting sliding tests at the center of the cleaned region under a pressure of 9.45 GPa with a reciprocating sliding amplitude of 10 μm and a sliding velocity of 10 μm/s (the small dashed box). The sliding distance reached 10 mm in 5 × 10^2 cycles. f Raman characterization of the graphite substrate at the marked points. g-i Band contrast (BC, g) and inverse pole figures (IPFs, h, i) of HOPG. j-l BC (j) and IPFs (k, l) of normal flake graphite. Source data are provided as a Source Data file.
Fig. 2 |
Fig. 2 | Wear tests of the tungsten/graphite contact. a, b Experimental setup (a) and OM image (b) of the tungsten/graphite contact. c, d OM (c) and AFM (d) images of the graphite substrate after sliding tests. The testing procedure includes 3 steps: (1) sliding the tungsten tip under a pressure of 3.74 GPa within Region I, (2) elevating the pressure to 5.07 GPa and moving the tip from Region I to Region II along Trace I, and (3) sliding the tip under a pressure of 5.07 GPa within Region II. The sliding velocity and sliding amplitude are 10 μm/s and 30 μm, respectively. The test was stopped once obvious wear of graphite was observed in situ by OM, or when the number of sliding cycles reached 10^4 (corresponding to a total sliding distance of 0.6 m). e, f Magnified views of the step edge indicated by the yellow and cyan dashed boxes in d. Source data are provided as a Source Data file.
Fig. 3 |
Fig. 3 | Pressure-assisted bonding and wear at the WO3/graphite interface. a, b Model of the WO3/graphite contact. c Electron localization function (ELF) showing the evolution of structural responses and interfacial bonding states with pressure. The dashed lines show the atom pairs with the strongest interaction. d Pressure-displacement relation obtained from density functional theory (DFT) calculations. e Shear stress-displacement relation at different pressure levels. f The step-wear process demonstrated via molecular dynamics (MD) simulations, where pressure-assisted bonding triggers wear through shear-induced tearing. The puckering effect at the contact front is caused by accumulated in-plane deformation of the graphene layer, which prefers to bend rather than be compressed. Source data are provided as a Source Data file.
Fig. 4 |
Fig. 4 | Physics behind the breakdown pressures. a Breakdown pressures calculated for the metals Os, W, Re, Ni, Co, Ta, Hf, Ti, and Zr in contact with graphite. b Top-ranked features that correlate most strongly with the breakdown pressure. c, d Relation between the breakdown pressure and the bulk modulus (c) and the first ionization energy of the metal (d). e Bulk moduli and shear moduli of the metals. f Breakdown pressures of graphite, tungsten, and the tungsten/graphite contact. Source data are provided as a Source Data file. | 6,827.8 | 2024-07-15T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Molecular Dynamics Simulations on trans- and cis-Decalins: The Effect of Partial Atomic Charges and Adjustment of "Real Densities"
A molecular dynamics (MD) simulation of the organic compounds trans- and cis-decalin is performed with adjustment to their experimentally observed densities. For a trans-decalin model system, the energy and structural properties are studied for different atomic charge distributions. The relationship between the main interaction forces (Coulombic and van der Waals) of the trans- and cis-decalin systems has been examined, and the status of the molecular forces governing the nature of the processes in the crystal or liquid phases has been established. The obtained results on the density peculiarities are interpreted in terms of a non-uniform charge distribution and van der Waals forces efficiently inhibiting the electrostatic ones. Possible applications of the obtained MD simulation results in magnetic fluid physics are discussed.
Introduction
Complex colloidal systems with controlled properties are currently studied in order to determine their specific structural features. Ferrofluids (FFs) are typical representatives of these systems. FFs are colloidal liquids made of nanoscale ferromagnetic or ferrimagnetic particles suspended in a carrier fluid (usually, an organic solvent or water). Their properties are close to those of homogeneous liquids with a relatively high magnetic susceptibility (Shliomis, M. I., 1974; Rosensweig, R. E., 1985). A unique combination of such properties finds many applications in technology and engineering (Odenbach, S., 2003; Proc. of the 10th Intern. Conference on Magnetic Fluids, 2005). In recent years, possible biomedical applications of FFs have also been discussed with great interest (Fortin, J. P., 2007; Proc. of the 7th Intern. Conference on the Scientific and Clinical Applications of Magnetic Carriers, 2009). The magnetic particles are coated with a surfactant layer that prevents them from sticking together, stabilizing the whole system.
Benzene solutions of carboxylic acids (CAs), which can be used as a surfactant to produce FFs, were studied using the molecular dynamics method, and the results were compared with small-angle neutron scattering (SANS) and high-precision densitometry data. It is known that different CAs have different stabilizing abilities, which can be caused by differences in the solvent-acid interaction (Avdeev, M. V., Kholmurodov, Kh. T., Rus. J. Phys. Chem. A, 2009; Kholmurodov, Kh., Yasuoka, K., Natural Science, 2010). Decahydronaphthalene (decalin) can be used as a solvent for FFs; therefore, such interaction in this solvent is of current interest (Avdeev, M. V. et al., 2011).
Decalin, a bicyclic organic compound, is an industrial solvent. It is a colorless liquid with an aromatic odor; it is used as a solvent for a number of organic compounds and polymers. Decalin exists in cis and trans isomeric forms differing only in the relative positions of the cyclic rings (Figure 1).
The trans form is energetically more stable. The boiling and melting points of this form are 185.5 ℃ and -31.5 ℃, respectively. Its density is 0.87 g/cm³ at 20 ℃. As for cis-decalin, the boiling and melting points are, respectively, 194.6 ℃ and -43.2 ℃; the density is 0.897 g/cm³ at 20 ℃ (Dean, J. A., 1999). In industry, decalin can be prepared as a mixture of different forms through naphthalene hydrogenation in the presence of a catalyst (Donaldson, N., 1958).
The density peculiarities of trans- and cis-decalins described above are supposed to be governed by well-known intermolecular forces. First of all, the relationship between the Coulomb and van der Waals interactions can determine the nature of the processes in the trans- and cis-decalin systems in their liquid states. Recent theoretical and experimental studies show that van der Waals interactions define a number of critical effects in ion-stabilized FF materials (Cerda, J., 2010). On the surface of FF nanoparticles, the charge distribution can be non-uniform, which causes the residual van der Waals interactions to be strongly dominant. In many respects, the van der Waals interactions can inhibit the Coulomb ones. The competing Coulomb and van der Waals forces in diluted FFs considerably affect the system characteristics obtained by SANS. In this work, we aimed at determining the structural and energy properties of trans- and cis-decalins. First, using the MD method, we generated various atomic charge distributions (Coulombic interactions) in trans-decalin. The trans-decalin molecule is more symmetric, and its structural data are more informative than those of the cis-decalin one. Next, we estimated the Lennard-Jones parameters (van der Waals potential and forces) to reproduce the experimentally observed densities for both trans- and cis-decalin.
Materials and Methods
Molecular dynamics (MD) simulations of trans- and cis-decalin structures have been performed using the DL_POLY_2.18 general-purpose code (Smith, W., 1996; Smith, W., 2008; Kholmurodov, K., Smith, W., Yasuoka, K., & Ebisuzaki, T., 2000). Initially, an MD study was performed on a trans-decalin system with the crystal structure that had recently been solved and refined with an X-ray powder diffraction technique (Eibl, S., 2009). In the present study, a trans-decalin model was constructed by describing each atomic position taking into account periodic boundary conditions (Figure 2). The simulation cell consisted of a total of 700 decalin molecules. The system's spatial size was 55 x 53 x 53 angstroms (7 x 5 x 10 cells of the crystal structure). The decalin molecules were considered to be rigid units with constant bond lengths.
The temperature of the system was controlled through the Nose-Hoover (NVT) ensemble. The Verlet integration algorithm for the numerical solution of the equations of motion was used with a time step of 0.001 ps.
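The integrator itself is standard; as a point of reference (not code from the paper), a minimal velocity-Verlet step with the stated 0.001 ps time step can be sketched in Python as follows, leaving the Nose-Hoover thermostat coupling out for brevity:

```python
import numpy as np

def velocity_verlet_step(x, v, f, m, dt, force_fn):
    """One velocity-Verlet integration step.

    x : (N, 3) positions, v : (N, 3) velocities,
    f : (N, 3) current forces, m : (N, 1) masses,
    dt : time step (here 0.001 ps), force_fn : positions -> forces.
    """
    v_half = v + 0.5 * dt * f / m          # first half-kick
    x_new = x + dt * v_half                # drift
    f_new = force_fn(x_new)                # forces at the new positions
    v_new = v_half + 0.5 * dt * f_new / m  # second half-kick
    return x_new, v_new, f_new
```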
The intermolecular interactions were modeled with the Lennard-Jones (LJ) and Coulomb potentials (Allen, M. P., 1989):

U(r_{ij}) = 4\varepsilon_{ij}\left[\left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6}\right] + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}},   (1)

where the LJ (12-6) potential was cut off at a spherical radius of 8 Å; the cutoff radius for the Coulomb interactions (for the charged systems) was 12 Å. For non-identical atoms, the interaction parameters were found using the Lorentz-Berthelot mixing rules (Lorentz, H. A., 1881; Berthelot, D., 1889):

\sigma_{ij} = \frac{\sigma_{ii} + \sigma_{jj}}{2}, \qquad \varepsilon_{ij} = \sqrt{\varepsilon_{ii}\,\varepsilon_{jj}}.   (2)

As for the statistical data, graphs of the radial distribution functions (RDF) of various atomic pairs (C-C, C-H, H-H) were built at the start and end of the simulations and compared. The RDF behaviors demonstrate the differences between the phase states; in general, the RDF reflects the peculiarities of the liquid and crystal states. The RDF g(r) shows the time-averaged influence of the presence of an atom on the positions of neighboring atoms; it is proportional to the probability of finding two atoms at a distance between r and r + ∆r from each other (Allen, M. P., 1989):

g(r) = \frac{1}{\rho N}\left\langle \sum_{i}\sum_{j \neq i} \delta\left(\mathbf{r} - \mathbf{r}_{ij}\right) \right\rangle,   (3)

where N is the total number of atoms, ρ = N/V is the number density of atoms, r_{ij} is the vector between the centers of atoms i and j, and angle brackets denote averaging over time. For distances shorter than one atomic diameter, g(r) = 0. At larger distances in a liquid, an atom should not influence the positions of other atoms; that is, g(r) → 1.
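For concreteness, the two ingredients just defined, the Lorentz-Berthelot mixing of Eq. (2) and a histogram estimate of g(r) following Eq. (3), can be sketched as follows; this is an illustrative NumPy implementation under the minimum-image convention, not the DL_POLY internals:

```python
import numpy as np

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j):
    """Mixing rules of Eq. (2) for non-identical atom types."""
    return 0.5 * (sigma_i + sigma_j), np.sqrt(eps_i * eps_j)

def rdf(positions, box, r_max, n_bins=200):
    """Histogram estimate of g(r), Eq. (3), for one configuration.

    positions : (N, 3) coordinates in Å; box : (3,) orthorhombic box lengths.
    """
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    shell = 4.0 * np.pi * r_mid**2 * np.diff(edges)   # spherical shell volumes
    rho = n / np.prod(box)                            # number density
    # Normalize pair counts by the ideal-gas expectation so g(r) -> 1.
    return r_mid, hist / (shell * rho * n / 2.0)
```

In production runs the histogram would of course be accumulated over many configurations before normalizing.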
Intermolecular structure
From the MD simulation data, we first built an RDF graph for the neutral (with zero partial atomic charges) trans-decalin system (Figure 3). The RDF g(r) functions in Figure 3 correspond to the atomic pairs C-C, H-H, and C-H at the initial and final states of the system under simulation. In Figure 3, RDF graphs are shown for the initial and relaxed final states of the trans-decalin system. The RDF behavior can be related to trans-decalin's crystal and liquid phases. From Figure 3, it is seen that the H-H pair's RDF reaches its maximum at a short distance of 1.5 Å, while the C-C pair's RDF does so at 3.5 Å. The C-C pair's RDF has two maxima and a small minimum between them at 5 Å.
Next, we generated various artificial charge values in the trans-decalin molecule (Table 2) corresponding to the C and H atoms, in order to describe the influence of the decalin charge distribution on the RDF behavior. The choice of the charge values in Table 2 was rather arbitrary, meeting only the molecule's electroneutrality requirement. However, the charge distribution denoted ch-006 is close to that in real trans-decalin as calculated by the ZINDO/1 semi-empirical method of quantum chemistry (Zerner, M., 1991).
The RDF results for the different charge distributions described above were compared with those in Figure 3 for the neutral system. A comparative analysis shows that introducing a charge distribution like ch-006, ch-01, ch-03, ch03, or ch04 hardly changes the RDF behavior for either the crystal or the liquid phase. As seen from Figure 4, starting from ch05 and higher values, the artificial introduction of large partial charges produces visible changes in g(r). This observation remains valid for all C-C, C-H, and H-H atomic pairs. Thus, in the plausible charge ranges, the RDF does not undergo any visible changes, indicating that the electrostatic interactions would not necessarily be dominant over the van der Waals ones. Only increasing the partial atomic charge up to its limit value of 1 (ch08-ch10) leads to a significant change in g(r) behavior (Figure 5).
Configuration and energy profiles
In Figure 6, configuration snapshots of a trans-decalin system are presented for the neutral and maximum partial charge distributions (ch00 and ch10, respectively) in their final relaxed states. For the neutral system (ch00), the average density is uniform; for large partial atomic charges (ch10), the liquid density is increasingly non-uniform as strong charge interactions cause local clusterization. Thus, the fluid of trans-decalin molecules with zero charge (ch00) has a homogeneous structure; the contrast with the liquid structure in the case of a high non-zero charge (ch10) is evident (Figure 6: ch00 (left), ch10 (right)).
Next, we traced the evolution of the main energy characteristics of trans-decalin with an artificial change in the atomic charge distribution in the molecule. In Figures 7 and 8, MD simulation results are presented for the total internal and configuration energies of the system. First, it should be noted that all energy curves tend to constant levels, indicating energy conservation irrespective of the charge values. An increase in the charge values leads to an increase in internal energy; this can be explained by higher charge values generating stronger electrostatic interactions. Figure 8 shows a graph of the configuration energy of the system.
The configuration energy consists of two main components. The first one is the van der Waals interaction energy. Its behavior does not depend directly on the variation of the charge distributions up to ch05 (Figure 9). However, from ch08 to ch10, the van der Waals energy curves correlate strongly with the increasing charge values. The second main component of the configuration energy is the electrostatic interaction energy (Figure 10), which reflects the fact that the partial atomic charge variations lead to an effective repulsion and attraction between the particles of the system. It is seen from Figure 10 that the electrostatic Coulomb interaction energy decreases and becomes negative with increasing partial atomic charge. The contrast between the behavior of the van der Waals and Coulomb potential energies is evident (Figures 9 and 10).
Adjustment of "real densities"
From the MD results presented above, we conclude that in the reasonable ranges of the partial atomic charge distributions (for example, the charge values obtained by accurate quantum chemistry calculations), the van der Waals interactions seem to be a dominant factor. The van der Waals forces can inhibit the electrostatic ones here. In other words, in such systems (like trans- and cis-decalins), the correct description of the van der Waals potential is extremely important. In this section, MD simulation series have been performed aimed at the precise evaluation of the correct range of the LJ potential parameters for trans- and cis-decalins to ensure a good approximation of the van der Waals interaction in the system. The LJ parameters σ_H and σ_C have been estimated with an NPT ensemble. The σ_H and σ_C values have been repeatedly re-estimated in such a way that the densities of both trans- and cis-decalin are 0.87 and 0.897 g/cm³, respectively, the values experimentally observed under normal conditions. MD calculation results are summarized for trans- and cis-decalins in Figures 11 and 12, respectively, where configuration snapshots are shown along with density profiles obtained through relaxations for NPT ensembles. The simulated and experimental values of the parameters σ_H and σ_C are shown in Figures 11 and 12. Crossing the MD-simulated density surfaces by the planes of the experimentally observed values (dotted lines), we find the unique and optimal LJ parameters, i.e. σ_H(sim.) = σ_H(exp.) and σ_C(sim.) = σ_C(exp.). Thus, from Figures 11 and 12, we found the intersections of the simulated and experimental density planes, where we obtain the following LJ parameters: σ_H = 2.693 Å and σ_C = 3.221 Å. It should be stressed that these values deviate by less than 4% from the well-known ones, σ_H = 2.81 Å (Murad, S., 1978) and σ_C = 3.35 Å (Tildesley, D. J., 1981), which are widely used in MD simulation analysis (compare Ref. (Murad, S., 1978): σ_H = 2.81 Å and ε_H/k_B = 8.6 K; Ref. (Tildesley, D. J., 1981): σ_C = 3.35 Å). Finally, concerning the LJ parameter estimation, it is well known that the parameter σ_C for a carbon atom depends on its location within a molecule. As for the decalin molecule, its carbon atoms have to be sp³-hybridized, forming a tetragonal structure with the nearest atoms. The energy potential parameter ε_C for such a carbon atom is 0.06 kcal/mol. Hence, the σ_C value was estimated from the results described before. Again, MD simulations have been performed to estimate the σ_H value of both trans- and cis-decalin to make sure that the density values are the same as observed experimentally. Calculation results for the σ_H parameter are shown in Figure 13 for trans- and cis-decalin (top and bottom, respectively). From Figure 13, we found that σ_H is 2.273 Å for trans-decalin and 2.242 Å for cis-decalin. The contrast between the values σ_H = 2.273 Å with ε_C = 0.06 kcal/mol (sp³ hybridization) and σ_H = 2.693 Å with ε_C = 0.12 kcal/mol (sp² hybridization) is evident.
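A minimal sketch of this density-matching procedure is given below; the helper run_npt_density standing in for a DL_POLY NPT run is a hypothetical placeholder (here a smooth toy surrogate so the example runs), and the parameter grids are illustrative assumptions rather than values from the paper:

```python
import numpy as np
from itertools import product

def run_npt_density(sigma_h, sigma_c):
    """Placeholder for an NPT MD run returning density in g/cm^3.
    A smooth toy surrogate (density decreasing with the LJ diameters)
    stands in for the real simulation; it is NOT physical."""
    return 1.9 - 0.18 * sigma_h - 0.17 * sigma_c

def fit_lj_to_density(rho_exp, sigma_h_grid, sigma_c_grid):
    """Scan (sigma_H, sigma_C) and return the pair whose simulated
    density lies closest to the experimental target rho_exp."""
    best, best_err = None, np.inf
    for s_h, s_c in product(sigma_h_grid, sigma_c_grid):
        err = abs(run_npt_density(s_h, s_c) - rho_exp)
        if err < best_err:
            best, best_err = (s_h, s_c), err
    return best, best_err

# Target densities from the paper: 0.87 (trans) and 0.897 (cis) g/cm^3.
pars, err = fit_lj_to_density(0.87,
                              np.linspace(2.2, 2.9, 29),
                              np.linspace(3.0, 3.5, 26))
print(pars, err)
```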
Conclusion
This study presents molecular dynamics (MD) models simulating trans- and cis-decalin solutions. The structural, energy, and potential parameters have been defined in multiple series of MD calculations. First, the radial distribution function (RDF) graphs were built for the trans-decalin molecule and their behavior traced depending on changes in the artificial partial atomic charge distributions in the molecule. The charge distribution in the trans-decalin molecule has no significant effect on the RDF of the C-C atomic pair within the reasonable charge range (including the charges that can be predicted using quantum chemistry techniques). The trans-decalin fluid corresponding to the system with zero or low partial atomic charges has a homogeneous structure, contrary to the liquid model for a higher non-zero charge distribution. The crystal-liquid transitions are reflected in the RDF behavior for the various charge distributions. The relationship between the main interaction forces of the system (Coulombic and van der Waals) has been examined, and the molecular forces governing the nature of the processes in the crystal or liquid phases have been clarified. The Lennard-Jones (LJ) potential parameters have been precisely defined to ensure the correct description of the dominant van der Waals interactions. The use of the estimated LJ parameters yields the experimentally observed values of the liquid densities of both trans- and cis-decalin. It seems possible to describe, from the structural point of view, the differences in the dispersion abilities of different CAs in decalin by MD simulation of these solutions. For this purpose, the RDF of the atoms of the acid and decalin molecules for different acids should be carefully considered.
| 3,567.2 | 2012-01-29T00:00:00.000 | ["Chemistry", "Physics"] |
Exploring the Determinants for Strengthening the Training Quality for Mathematics Teachers in Oman
In line with the research objective "To determine the strengths of mathematics teachers training in SIPTT from trainees' perspective", the current research attempted to explore the factors that affect training quality and constitute points of strength for mathematics teacher training in SIPTT. For this purpose, interviews were conducted; in total, 12 online interviews were carried out because of the strict COVID-19 regulations. The research used a single instrumental qualitative case study to gather extensive data on trainees' perceptions in order to understand poor training results and withdrawal. Based on the interview data, the research identified Context, Administration, Input, Process, and Output factors as strengthening points for training quality. The research has notable implications for and contributions to the quality of mathematics teachers in Oman.
Introduction
Training of employees is essential for organizational effectiveness. Through training, employee skills and knowledge can be updated and enhanced, contributing to the sustainability of the organization. In the context of educational institutions in the Sultanate of Oman, educational training is needed to develop teachers' knowledge, skills, and abilities (Al-Jabri et al., 2018). One of the key institutes that provide training programs to teachers in the country is the Specialized Institute for Professional Training of Teachers (SIPTT), which was established in 2014 in alignment with the Omani Vision 2020 (MOE, 2014).
Notably, the SIPTT's role is also to develop some comprehensive mechanisms and plans for training and monitoring teachers' performances in education (Al-Jabri et al., 2018). Fundamentally, these training programs aim to improve the teachers' confidence, motivation, and skills to a level that meets the highest standards so that student learning outcomes are enhanced (MOE, 2014). Evidence shows that an effective training program facilitates the improvement of organizational deficiencies (Shenge, 2014). The SIPTT supports trainees through ongoing communication with the SIPTT trainers and the provision of technical support and feedback through the e-learning platform, regional visits, and provision of a tablet computer for e-learning. To support workplace learning, the MOE provides pre-paid internet services. To complete their learning activities, trainees are not required to attend their schools on the study day. Management support is a critical issue in the transfer of learning at the workplace.
SIPTT offers a voluntary two-year mathematics diploma training program to mathematics teachers to improve mathematics education learning outcomes in Oman. The objective of this program is to enhance the teachers' learning and teaching capabilities so that they can be groomed as program leaders in mathematics. The SIPTT mathematics teachers training program is voluntary and targeted at mathematics teachers in grades 5-10. The program covers investigative learning techniques, research methods, and the implementation of mathematical concepts. Despite being voluntary, participant recruitment targets are based on the strategic model of SIPTT. The mathematics teachers training is a blended program where each module contains three components: (a) 22.5 hours of face-to-face training over five days (30%), (b) online training (20%), and (c) workplace task application (50%). Each training module is evaluated by the submission of an achievement file that contains samples of the teacher's work and a comprehensive report on all unit elements.
Despite the resources expended toward the mathematics training programs offered by the MOE and organized by SIPTT, the program appears to be ineffective in meeting its objectives. Since its establishment in 2014, the success rate of mathematics teacher trainees attending the SIPTT mathematics programs has been the lowest among trainees in other SIPTT programs (SIPTT, 2021).
Compared to other training programmes at the SIPTT, mathematics teachers had the lowest scores in two cohorts. Besides this, there was a substantial and increasing rate of withdrawal from the training cohorts (SIPTT, 2021). This withdrawal has been recognized as a serious issue. While many factors may contribute to withdrawal, the low scores raise a key question about the quality of the training program offered by SIPTT.
One of the critical concerns in training is quality, so that the possibility of change exists (Rajasekar & Khan, 2013). According to Beardwell and Thompson (2017), training quality is a critical factor in achieving an organization's strategy. Enhancing the quality of training practices by practitioners is a potentially reliable driver of beneficial change (McChesney & Aldridge, 2018). However, organizations frequently fail to determine the quality of their training programs despite high spending on training every year (Singh et al., 2015). Past studies on teacher professional development programs reported disappointing results in terms of their quality in helping teachers improve teaching practices, and even more disappointing results in terms of their effect on student learning and achievement (Cave & Brown, 2010; Desimone & Garet, 2015; Garet et al., 2011; Lipowsky & Rzejak, 2015; Yoon et al., 2007).
However, there is little evidence indicating that the training programmes by SIPTT have achieved the desired result in enhancing the performance of the mathematics teacher trainees. Hence, it is critical that a study be conducted to address the issue of training quality offered by the SIPTT.
Studies on training evaluation revealed that most Gulf countries, including Oman, have difficulty evaluating training (Al-Mughairi, 2018). In Oman, this occurs despite spending approximately USD 156 million on development activities in the MOE, including training programs (MOE, 2018). One of the reasons could be several crucial discrepancies between theoretical conceptualizations and empirical operationalizations of training criteria that have hindered advancement in training evaluation (Sitzmann & Weinhardt, 2019). Moreover, the training evaluation processes in Oman are done simply to gauge trainees' reactions to the programs without centering on the quality of the training program in attaining the projected goals (Al-Nabhani, 2007). Overall, information on training quality has been scant despite the substantial resources used for in-service teacher training programs (Arancibia, Evans, & Popova, 2016), and it is often reported that most recent teacher training programs are over-theoretical and outdated (Loyalka et al., 2018).
Although many studies have been done on training evaluation in Oman (Alaraimi & Othman, 2015; AlBalushi, 2018; Al-Hosni, 2014; Al-Jabri et al., 2018; Al-Mughairi, 2018; Al-Omrani, 2014; Alshykairi, 2020; SIPTT, 2020; Rajasekar & Khan, 2013), little academic attention has been given to trainees' perceptions of non-mandatory mathematics teacher training programs. This is perhaps because most mathematics teacher training programs in Gulf countries are not subject to any type of training quality evaluation (Alamri et al., 2018), resulting in scarce research work. Therefore, the present study attempts to investigate mathematics trainees' perceptions of the quality of the mathematics teachers training programme of the SIPTT by using a qualitative approach guided by the Context-Administration-Input-Process-Outcome (CAIPO) training evaluation model to get a comprehensive and in-depth understanding of the issue. Given the importance of the highlighted issue, the current research is based on the following objective and research question:
Research Objective
To determine the strengths of mathematics teachers training in SIPTT from trainees' perspective.
Research Question
What key indicators do trainees perceive as being the strengths of the mathematics teachers training programme in SIPTT?
Material and Method
Research Approach
Considering the nature of the current study and its stated objectives, a "holistic case study with multiple cases" was deemed appropriate and therefore followed. Dyer & Wilkins (1991) argued that holistic (or single) case study designs are essential because they allow a deep, contextualized understanding of the case. In the present study, the context was "mathematics teachers training at the SIPTT". The case of the mathematics trainings provides deeper insight into an issue needing refinement, namely knowing the strengths of mathematics teachers training in order to enhance training quality.
This study adopted a single instrumental qualitative case study to gather extensive data on trainees' perceptions in order to understand poor training results and withdrawal. The investigation provides insight into an issue needing refinement (knowing the strengths of mathematics teachers training) to enrich training quality. Mathematics teachers training at the SIPTT is carried out continuously, so it is not an intrinsic case.
Study Design and Setting
Exploratory qualitative research under the case study approach was considered appropriate to explore the main objectives of the present research. The case study approach provides a detailed picture of programme operations, often at several locations, resulting in a deeper understanding of how and why programme operations relate to outcomes (Newcomer et al., 2015). Qualitative research does not focus on generalizability; rather, every case has its own experience and viewpoint, which helps to identify the real facts of the research (Mayring, 2007). The overall objective of this research was to disclose the key indicators of the quality of mathematics teachers training.
Data Collection and Procedures
The primary data for this research were gathered through online interviews. Before starting the formal interview process, each participant was contacted by the researcher to seek permission. Ten online interviews were conducted through Zoom sessions. Due to some technical issues with Zoom, two telephone interviews had to be used. After twelve interviews, the saturation point was reached and the researcher stopped interviewing (seven participants who passed the training, three who withdrew, and two who failed). Notably, most data were obtained from those who passed the programme. The sample of 12 participants in this study is consistent with Guest, Bunce, and Johnson (2006), who found that saturation occurred within the first twelve interviews. Additionally, Kuzel (1992) tied his recommendations to a sample of twelve to twenty data sources "when looking for disconfirming evidence or trying to achieve maximum variation" (p. 41).
To achieve this, the current research used the lists of trainees in the first and second cohorts, accessible at the SIPTT, to recruit the participants who met the criteria. Obtaining the lists was possible because the researcher is employed there. The trainees were then classified based on their results in the program (passed, failed, and withdrew). The maximal variation strategy was applied within each group. To achieve the desired sample size, a nonprobability purposeful sampling approach was chosen because it was more suitable than theoretical sampling for understanding the participants' views regarding the quality of the mathematics training. The present study employed the maximal variation sampling strategy to get multiple perspectives about the phenomenon from participants who differ in some characteristics (Creswell & Poth, 2018), besides enhancing transferability (Merriam & Tisdell, 2015). The logic for using this sampling strategy was to choose mathematics trainees who represented a unique mixture of demographic features. The present study chose the interview technique to investigate the participants' perspectives on mathematics teachers' training quality at SIPTT.
An interview protocol was developed to guide the data collection process during the interviews (Creswell & Poth, 2018). The initial interview protocol involved six questions, as follows: (1) Why did you participate in this program? (2) Do you think the training program has fulfilled your training needs? Why or why not? (3) What was the biggest takeaway of this program for you? Why? (4) Did you face challenges in the program? Why? (5) Do you think the program should be improved? If yes, in what ways, and what aspects of it need improvement? If no, are you happy with all aspects discussed throughout the program? How? (6) Are there any comments relevant to what has been discussed before we end our meeting? Before the actual data collection, written permission to conduct the study was sought from the MOE in Oman. The letter stated the purpose of the current research, the participants, and the type of data to be collected. Due to the coronavirus pandemic and the government restrictions, the researcher could not conduct a focus group as planned. Instead, the researcher had to conduct online interviews via Zoom. Before an online interview could be performed, the researcher set up an online appointment to find each participant's most convenient time using phone calls or WhatsApp. The interviews were conducted in Arabic to ensure comfort in eliciting views and opinions.
Analyses
In the current study, the collected data were analyzed according to the research questions. Nevertheless, the process of analyzing and interpreting data was continuous, as new information was discovered throughout the research. For this reason, the current study followed closely, if not completely, the seven-step analysis procedure suggested by Adu (2019). Every interview was analyzed for "internal consistency" to assess dependability (Thompson, 2000, p. 272), allowing the researcher to determine whether individuals truly remembered or were just reflecting on their previous mathematics teachers training experiences. Coding was followed by arranging the ideas generated into themes representing the interviewees' responses. Codes were further analyzed to form themes. Each theme represented a broad, meaningful idea representing the participants' point of view conceived from the coding analysis (Creswell, 2013). Lastly, the themes were organized for interpretation using simple frequency analysis, as sketched below.
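As an illustration of that final step (the codes and themes below are hypothetical placeholders, not the study's actual codebook), a simple frequency analysis over coded interview segments could be run as follows:

```python
from collections import Counter

# Hypothetical coded excerpts: (participant_id, theme, code).
coded_segments = [
    ("P01", "Context", "principal support"),
    ("P01", "Outcomes", "enhanced skills"),
    ("P02", "Context", "peer support"),
    ("P03", "Process", "trainer feedback"),
    ("P03", "Outcomes", "enhanced skills"),
]

# Frequency of each theme across all interviews.
theme_counts = Counter(theme for _, theme, _ in coded_segments)

# Frequency of individual codes within each theme.
code_counts = Counter((theme, code) for _, theme, code in coded_segments)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} coded segments")
```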
Participants
Twelve participants from three groups (those who passed, failed, and withdrew) were interviewed in this study. Of the 12 participants, 58.3% were between 35 and 38 years old, while 41.6% were between 41 and 44 years old. Regarding teaching experience, 58.3% had between 12 and 15 years, while 41.6% had between 17 and 20 years of experience. Since the programme was not mandatory, one of the criteria for selecting a candidate was that he/she should have teaching experience of five years or more. In terms of gender, 25% of participants were women and 75% were men; many more men than women were in the second cohort. Only a quarter of the participants were from the first cohort, while 75% were from the second cohort. More took part in the second cohort because much time had passed since the first batch was recruited in 2014. Also, during the pilot study, the researcher found that some trainees who attended in the first cohort had changed their job titles.
Participants were from seven different educational governorates over the country. In terms of trainee results, 58.3% passed the program, 25% withdrew from the programme, and 16.6% failed the programme. Those who passed had a comprehensive overview of the program.
Findings
Five elements (based on the CAIPO model) were thematically developed from the literature to be used as a lens to investigate the participants' perceived quality of a mathematics teachers training at SIPTT. The elements were context (work environment), administration (policies and procedures), inputs (training design, content, facilities, trainer competencies, trainee characteristics), process (trainer's delivery, follow-up, and feedback), and outcomes (perceived benefits).
Context
One concern developed from the literature is how the work environment influences the perceived quality of mathematics teachers training. The findings indicate that the work environment seems to affect the perceived quality of the training programme. The present study also reveals an emergent theme of professional learning communities in influencing perceived quality.
The present findings highlight two primary areas of support influencing the first attendance in the training programme identified by the participants. They were the principal support and peer support. The findings are consistent with the importance of workplace support in motivating training participation (Matsumura et al., 2010).
Administration
According to the CAIPO model, such administrative issues could influence training quality. The study findings showed that most of the challenges facing the trainees in the program were concentrated in administrative aspects.
Input
The interviews showed that the sub-themes under training input include training design, content, facilities, trainer competencies, and trainee characteristics. The present study revealed many factors surrounding the pre-training attitude (e.g., participants' expectations and their motivation to learn). Additionally, delivery, outcomes during training, post-training attitude, and learning style seemed to play a role in the participants' perception of training quality. Since the participants were of a similar age group, experience, and educational environment, they held similar opinions about the training quality. Unsurprisingly, the participants' high expectations of what the course would contain or eventually provide seemed to affect their perception of training quality. This has significant implications for assessing the quality of mathematics teacher training programs: if each participant has only hazy expectations of the course that are unrelated to the learning goals set by the course designer, assessing whether these outcomes were reached is of little use.
Process
Process keeps track of the steps of training implementation. It serves as a continuous check on the execution of the training. From the participants' perspective, evaluating the quality of mathematics teachers training process revealed two sub-themes: training delivery and trainer follow-up and feedback.
Outcomes
The interviewees stated a large number of positive results of the training programme. Expectedly, this theme took the largest part of the strengths of the training programme, unlike other themes. Many benefits mentioned by the trainees reflected the quality elements in the programme. From the interviews, four sub-themes were extracted (a) trainee satisfaction, (b) enhanced skills, (c) teacher productivity, and (d) research skills.
Discussion and Concluding Remarks
This section gives an overview of the key findings based on the interviews conducted with the participants of the mathematics teacher training at the SIPTT. As detailed and discussed in the previous section, the key findings of the research provide a conceptual understanding and practical suggestions on mathematics teacher recruitment and their nonmandatory training in Oman generally and in the SIPTT context in particular. The study findings showed that participants hold different perspectives about the mathematics teachers' training quality at SIPTT. Based on the CAIPO model, the findings are summarized and organized into five themes: (a) context, (b) administration, (c) input, (d) process, and (e) output/outcome. In each theme, the strengths associated with the quality of the training program are identified.
Firstly, the context theme contains two sub-themes (school environment and professional learning communities) as the key indicators of the quality of mathematics teachers training. The interviewees valued the compatibility of the mathematics teachers training program offered by the SIPTT with their training needs to develop their teaching skills in the school environment. Additionally, the school principals were found to play an indirect role in enhancing the quality of the training program as they were supportive of the trainees attending the training program. The literature highlights that peer support encourages trainees to apply the new skills and transfer the training (Bhatti et al., 2013;Blanchard & Thacker, 2013;Martins et al., 2019). Through peer support and encouragement, the interviewees indicated that the program had helped them engage with the professional learning community, another indication of the quality aspects of the training program.
The administration theme consists of the sub-themes of policies and procedures. The study results showed that the administrative aspects related to the training program focus on two components: the administrative circular to the schools and one-day release from the school. Both components were perceived to constitute the key strengths and hence quality of the training program as these policies encouraged and motivated them to attend. Administrative policies at all levels strongly influence training quality (Loucks-Horsley et al., 2010).
The input quality indicators of mathematics teachers training that emerged were related to training design, content, facilities, trainer competencies, and trainee characteristics. This study also found that the content and materials used in the mathematics teachers training were critical quality indicators. The trainers were also perceived to be qualified, knowledgeable, and specialized, indicating the quality of this specific training. The present study also revealed the role of trainee characteristics in training quality. It is contended that exploring this human aspect of assessment is easier to achieve when having the individual at the heart of a qualitative training evaluation. This finding broadly supports the work of other studies in this area linking trainee characteristics with training quality (Holton, 2005;Newcomer et al., 2015;Tai, 2006;Velada & Caetano, 2007). The present results are significant in three major respects: readiness (Mavin et al., 2010;Williams, 2008), motivation (Omar et al., 2009;Tai, 2006;Yanson & Johnson, 2016;Yaqoot et al., 2017), and self-efficacy (Yanson & Johnson, 2016).
The quality indicators of mathematics teachers training at the process stage comprise training delivery and trainer follow-up and feedback. While both dimensions indicated quality, the feedback and follow-up were particularly significant to the trainees, especially in the workplace task application component. SIPTT intentionally designed the workplace application component, with its feedback and follow-up, to constitute 50% of the program so that trainees could apply and transfer their new skills at work effectively. This component was also found to be an important factor that encouraged trainees to take up voluntary training in subsequent modules.
The most apparent finding to emerge from the interviews is the outcomes of the training as one of the quality indicators. Outcomes reflect trainee satisfaction, enhanced skills, teacher productivity, and research skills obtained from training. Despite the poor performance of the first and second cohorts, all interviewees (those who passed, failed, or withdrew) stressed that the obvious significant outcomes were the updating of teaching practices and the use of technology in teaching mathematics. In general, they indicated that their performance had improved compared to those who did not attend the training. The improvement of the trainee's performance in the workplace is one of the quality indicators of the program (Harris & Sass, 2011;Oguntimehin, 2005).
Theoretical Implications
The focus of this study was to gain an in-depth understanding of the factors influencing the quality of mathematics teacher training in the SIPTT in Oman by considering the environmental difference between the training centre at the SIPTT and the trainees' schools. Therefore, this study looked at the quality of voluntary mathematics teacher training in SIPTT to fill the gap in the Oman context. Essentially, this research further contributes to the existing body of knowledge by investigating the effectiveness of voluntary training programmes (Huang et al., 2014; Gelkopf et al., 2008; Meier et al., 2012).
Practical Implications
Although each training quality factor had been studied individually, no previous comprehensive investigation had studied the three components (face-to-face training, online training, and workplace task application; see the framework of the research in Figure 1.2) combined as one training program together with their relative contribution to training quality in a natural setting. This adds new insights into the actual training practices in the SIPTT instead of focusing only on the impact of the SIPTT training programs. The study findings will aid in comprehending participants' perceptions of the quality of mathematics teacher training courses, resulting in a more precise alignment of provision with the needs of individual attendees. Further, training coordinators have been classified as stakeholders encompassing those individuals inside the MOE in Oman involved in the licensing, procurement, and administration of training. Typically, their role is to organize SIPTT training programs that meet both teachers' and MOE objectives for professional career growth and increased operational efficiency. The outcomes of this study can benefit SIPTT training coordinators in two ways: first, by revealing participant views of program quality so as to design provision more aligned with needs; and second, by improving the SIPTT's initiatives to market and promote voluntary training courses within the MOE in Oman.
Conclusion
Hence, based on the findings, the current study has achieved its objective and answered the research question regarding the identification of the key factors (Context, Administration, Input, Process, and Outcomes) that strengthen the mathematics teachers training program of the SIPTT. This study looked at the quality of voluntary mathematics teacher training in SIPTT to fill the gap in the Oman context. Practically, this study adds new insights into the actual training practices in the SIPTT instead of focusing only on the impact of the SIPTT training programmes.
| 5,377.4 | 2021-12-17T00:00:00.000 | ["Mathematics", "Education"] |
STK35L1 Associates with Nuclear Actin and Regulates Cell Cycle and Migration of Endothelial Cells
Background Migration and proliferation of vascular endothelial cells are essential for repair of injured endothelium and angiogenesis. Cyclins, cyclin-dependent kinases (CDKs), and cyclin-dependent kinase inhibitors play an important role in vascular tissue injury and wound healing. Previous studies suggest a link between the cell cycle and cell migration: cells in the G1 phase have the highest potential to migrate. The molecular mechanism linking these two processes is not understood. Methodology/Principal Findings In this study, we explored the function of STK35L1, a novel Ser/Thr kinase localized in the nucleus and nucleolus of endothelial cells. Molecular biological analysis identified a bipartite nuclear localization signal and nucleolar localization sequences in the N-terminal part of STK35L1. Nuclear actin was identified as a novel binding partner of STK35L1. A class III PDZ domain-binding motif was identified in STK35L1 that mediated its interaction with actin. Depletion of STK35L1 by siRNA led to an accelerated G1 to S phase transition after serum stimulation of endothelial cells, indicating an inhibitory role of the kinase in G1 to S phase progression. A cell cycle-specific gene array analysis revealed that one gene was prominently downregulated (8.8-fold) in STK35L1-silenced cells: the CDKN2A alpha transcript, which codes for p16INK4a and leads to G1 arrest by inhibition of CDK4/6. Moreover, in endothelial cells seeded on Matrigel, STK35L1 expression was rapidly upregulated, and silencing of STK35L1 drastically inhibited the endothelial sprouting that is required for angiogenesis. Furthermore, STK35L1 depletion profoundly impaired endothelial cell migration in two wound healing assays. Conclusion/Significance The results indicate that by regulating CDKN2A and inhibiting the G1- to S-phase transition, STK35L1 may act as a central kinase linking the cell cycle and migration of endothelial cells. The interaction of STK35L1 with nuclear actin might be critical in the regulation of these fundamental endothelial functions.
Introduction
Endothelial dysfunction underlies atherosclerosis and coronary heart disease [1,2]. Migration and proliferation of vascular endothelial cells are important not only for repair of injured endothelium, but also for angiogenesis [3]. Cells in the endothelial monolayer are in a quiescent state residing in the G0 phase of the cell cycle. Injury of the endothelium leads to the local release of peptide growth factors (such as VEGF, TGF) and bioactive lipids (i.e., S1P) that stimulate endothelial cell migration and proliferation crucial for endothelial healing [4,5]. Angiogenesis, induced by hypoxic tissue conditions or by angiogenic stimuli, is a complex biological process involving the directional migration, proliferation, intercellular alignment and adhesion of endothelial cells [3]. Healing of the endothelium and angiogenesis require the activation of a genetic program which regulates endothelial cell proliferation and migration in a coordinated manner.
Cyclins, the cyclin-dependent kinases (CDKs), and the cyclin-dependent kinase inhibitors (CKIs) play an important role in vascular tissue injury, inflammation and wound repair [6,7]. On stimulation by growth factors or after mechanical trauma, endothelial cells exit the quiescent state and progress through the G1 and S phases of the cell cycle. G1 phase progression is regulated by the assembly and phosphorylation of CDK complexes. Two classes of endogenous CKIs are dominant in cardiovascular biology: the CIP/KIP family, which includes p21Cip1, p27Kip1, and p57Kip2, and the INK4 family, which includes p15Ink4b, p16Ink4a, p18Ink4c, and p19Ink4d. p16INK4a binds to cyclin/CDK complexes and causes cell cycle arrest in the G1 phase by inhibiting CDK4/6-mediated phosphorylation of Rb [8]. p16INK4a and p15INK4b are encoded by the alpha-transcript of CDKN2A and the CDKN2B gene, respectively. Recent genome-wide association scanning studies identified DNA sequence variants at chromosome 9p21 that increase the risk of coronary heart disease, myocardial infarction and, independently, type 2 diabetes [9,10]. Interestingly, the genomic region of interest was found to be adjacent to the genes CDKN2A and CDKN2B. The mechanism by which these genes might influence coronary heart disease and type 2 diabetes is unknown.
Previous studies of vascular cells show that there is a link between cell cycle progression and migration [11,12,13]. The maximal potential of a cell to migrate lies in the mid-to-late G1 phase, whereas cells in the late S or G2/M phase have a lower or no ability to move [14,15]. p27Kip1 has been shown to regulate G1-S phase cell cycle progression and cell migration of endothelial and smooth muscle cells [13,16]. Endothelial cell migration requires dynamic changes of the actin cytoskeleton, which is regulated by small GTPases and various protein kinases [17]. The molecular mechanisms linking cell cycle progression and migration in endothelial cells are not known.
STK35L1 is a member of the class of serine/threonine protein kinases; it is mainly localized in the nucleolus and nucleus [18]. Recently we identified the full-length coding sequence of the STK35L1 gene, which codes for a protein of 534 amino acids [18]. The biological function of STK35L1 is not known. STK35L1 gene expression was found to be upregulated in colorectal cancer [19], and was altered in a rodent model of Parkinson disease [20]. A kinome-wide RNAi screen revealed that STK35L1 silencing was among the top five hits leading to reduced infection of hepatocytes by Plasmodium berghei sporozoites [21]. These studies suggest that STK35L1 may play a role in various human diseases.
In the present study we set out to explore the function of STK35L1 in endothelial cells. We show that STK35L1 regulates the expression of the CDKN2A alpha-transcript and inhibits the G1- to S-phase transition of the cell cycle, and that STK35L1 is essential for endothelial cell migration and angiogenesis. STK35L1 might be part of a program underlying the integrated regulation of the cell cycle and cell migration.
Results
The N-terminal region of STK35L1 has functional nuclear and nucleolar localization signals
We have previously shown that STK35L1 is mainly localized in the nucleolus and the nucleus [18]. To predict a functional nuclear localization signal (NLS) in STK35L1, the basic amino acid-rich motifs of STK35L1 were analyzed by manual comparison with known NLSs of different proteins. A potential bipartite NLS was identified within the N-terminal motif (amino acids (aa) 142-153) of the protein. Unlike the classical bipartite NLS consisting of a defined spacer of 8-10 non-basic amino acids, the identified potential NLS of STK35L1 has a short, 6-amino acid spacer sequence similar to LIMK2 (Figure 1A) [22]. The identified NLS is highly conserved among mammals (Figure S1). Conservation of the identified bipartite NLS sequence underlines its functional importance.
To analyze whether the predicted NLS is functional, we prepared various EGFP-tagged deletion constructs of STK35L1 (Figure 1B). Compared with EGFP-STK35L1, which predominantly localized in the nucleus and nucleolus, the construct deleted of the N-terminal 169 aa (EGFP-STK35L1Δ1-169), containing mainly the kinase domain, was distributed throughout the cytoplasm and the nucleus (Figure 1C). The construct containing the predicted NLS (EGFP-STK35L1-134-201) mainly accumulated in the nucleus (Figure 1C). These data suggest that the motif aa 142-153 is the functional bipartite NLS of STK35L1.
Nucleolar localization signals (NoLS) are sequences rich in arginine and lysine. So far, no specific consensus sequence for nucleolar localization has been determined. We observed that several stretches of arginine and lysine amino acids are present in the N-terminal part of STK35L1, which are also highly conserved among mammals (Figure S1). These analyses suggest a putative NoLS in the N-terminal region of STK35L1. We found that the mutant EGFP-STK35L1Δ1-133 (aa 1-133 were deleted) was excluded from the nucleolus but mainly localized in the nucleus (Figure 1C). This mutant corresponds to the protein previously described as STK35 [18]. Our data indicate that the N-terminal aa 1-133 contain a functional nucleolar localization signal (NoLS) in STK35L1 that is absent in STK35.
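As an aside, motif scans of this kind are easy to prototype; the sketch below searches a protein sequence for bipartite-NLS-like patterns, with the basic cluster sizes and the short spacer length chosen as illustrative assumptions rather than the authors' exact criteria:

```python
import re

# Hypothetical bipartite NLS pattern: >=2 basic residues (K/R),
# a short spacer of ~5-7 non-basic residues, then >=3 basic residues.
# Cluster sizes and spacer length are illustrative assumptions.
BIPARTITE_NLS = re.compile(r"[KR]{2,}[^KR]{5,7}[KR]{3,}")

def find_bipartite_nls(seq: str):
    """Return (start, end, motif) for every bipartite-NLS-like match,
    using 1-based residue positions."""
    return [(m.start() + 1, m.end(), m.group())
            for m in BIPARTITE_NLS.finditer(seq.upper())]

# Toy example (not the real STK35L1 sequence):
print(find_bipartite_nls("MAAKRKAEGLQDKRRTSPKKKRL"))
```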
EGFP-STK35 interacts with nuclear actin
STK35L1 is localized in the nucleus and nucleolus, whereas STK35 is mainly localized in the nucleus [18]. To identify proteins which interact with STK35L1 in the nucleus, we immunoprecipitated EGFP-FLAG-STK35 protein from nuclear extracts of EGFP-FLAG-STK35-transfected HEK293 cells using an anti-FLAG antibody. We found four co-immunoprecipitated proteins which were identified by MALDI-MS analysis as EGFP-STK35 (2 bands), HSP70B1 and actin (Figure 2A, left panel). The co-immunoprecipitation of actin with EGFP-FLAG-STK35 was confirmed by western blotting using a specific anti-β-actin antibody (Figure 2A, middle panel). The purity of the nuclear preparation was checked by probing it with an anti-β-tubulin antibody, which detected tubulin in the cytoplasmic fraction only, indicating that the nuclear preparation was free from cytoplasmic proteins (Figure 2A, right panel). β-actin was present in both the nucleus and the cytoplasm. These data suggest that STK35L1 interacts with nuclear actin.
To investigate further a possible colocalization of STK35 with nuclear β-actin, endothelial cells were studied by fluorescence microscopy. In transfected endothelial cells, EGFP-STK35 (green) was concentrated in dot-like structures within the nucleus (Figure 2B, left panel). In some of these nuclear structures, β-actin (red) colocalized with EGFP-STK35 (Figure 2B, right panel). Together these data suggest that STK35L1 interacts with actin in the nucleus of endothelial cells.
A potential class III PDZ domain binding motif of STK35L1 is responsible for its association with actin
To gain insight into the protein-protein interaction domains of STK35L1, we used the Eukaryotic Linear Motif (ELM) server to search for candidate short non-globular functional motifs within the STK35L1 protein [23]. Several protein-binding motifs were predicted, mainly located in the first 200 amino acids of STK35L1 (Table S1). Among them, an internal class III PDZ domain-binding motif (PDZ-BM) was predicted in the N-terminal part of STK35L1 (aa 173-176; Figure 3A). PDZ domains are found in many proteins, including proteins that are involved in the regulation of the actin cytoskeleton [24,25]. To test whether the PDZ-BM is responsible for the actin association, we performed a GST pull-down assay of nuclear lysates from HeLa and endothelial cells using a recombinant GST-tagged protein containing the potential PDZ-BM (GST-PDM). GST-PDM consists of GST coupled to 30 amino acids (aa position 170 to 204) of STK35L1 containing the potential PDZ-BM. Indeed, we found that the purified protein GST-PDM, but not GST, bound nuclear actin (Figure 3B).
To further study whether the PDZ-BM mediates the association of STK35L1 with actin in vivo, we studied the localization of EGFP-PDM (STK35L1-170-204) in transfected endothelial cells. We found that EGFP-PDM, present in both the cytoplasm and the nucleus due to the small size of the protein and the lack of NLS and NoLS sequences, strongly associated with actin stress fibers (Figure 3C, left panel). EGFP-PDM was not only localized on stress fibers; it was also enriched in membrane ruffles of migrating endothelial cells (Movie S1).
To study whether the PDZ-BM is responsible for the association of STK35L1 with actin, we investigated the subcellular distribution of different EGFP-tagged deletion mutants of EGFP-STK35L1 in transfected endothelial cells. The deletion mutant containing the PDZ-BM but lacking the NoLS and NLS (EGFP-STK35L1Δ1-169) showed a more cytoplasmic distribution of the fusion protein than EGFP-STK35L1, with a partial localization on fiber-like structures of the cytoskeleton (Figure 3C, middle panel). The deletion mutant that lacked the PDZ-BM in addition to the NoLS and NLS (EGFP-STK35L1Δ1-196) was diffusely distributed throughout the cytoplasm and the nucleus (Figure 3C, right panel). In none of these transfected cells could a fiber-like structure be observed.
Together, we suggest that a class III PDZ domain-binding motif is responsible for the association of STK35L1 with actin.
siRNA-mediated depletion of STK35L1 accelerates G1-S phase progression of the endothelial cell cycle
To analyze the role of nucleolar STK35L1 in the cell cycle, we used siRNA to silence the STK35L1 gene. We designed siRNAs directed against three different regions of the STK35L1 gene [18], which together down-regulated STK35L1 by 80-90% at the level of transcription (Figure 4A) and at the level of protein [18]. siRNA- or nonspecific siRNA-transfected cells were synchronized in the G0/G1 phase by serum depletion, and thereafter the cells were released into cell cycle progression by adding 10% serum. In synchronized cells, over 80% of control and STK35L1-silenced cells accumulated in the G0/G1 phase. Six hours after release, the number of control cells in the G1/G0 phase was decreased by 6%, and 12% of control cells were present in the S phase (Figure 4B). In STK35L1-silenced cells, the G1-S phase transition was accelerated: the number of cells in the G1/G0 phase was decreased by 18% (p < 0.05), and the number of cells in the S phase was increased almost two-fold (23%; Figure 4B, 6 hr). Twelve hours after release, 68% (±4%) of control cells were in the G1/G0 phase and 19% (±6%) had entered S phase. After STK35L1 silencing, only 55% of cells remained in the G1/G0 phase, and 29% of cells had entered the S phase (Figure 4B).
These data indicate that knockdown of the STK35L1 gene leads to an accelerated G1 to S phase transition. STK35L1 might therefore function as a negative regulator of cell cycle progression from G1 to S phase.
STK35L1 regulates the expression of G1/S phase-specific genes
To determine the influence of STK35L1 on genes involved in cell cycle regulation, a real-time RT-PCR-based commercial gene array was used. This gene array measures the expression of 84 genes involved in the positive and negative regulation of the cell cycle, phase transitions, checkpoints, and DNA replication (Table S2). Endothelial cells were transfected with STK35L1 siRNA or control siRNA, followed by cell synchronization in the G0/G1 phase, and then released by serum stimulation.
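Fold changes in such RT-PCR-based arrays are conventionally obtained with the 2^(-ΔΔCt) method; the sketch below uses made-up Ct values, purely to illustrate how an 8.8-fold downregulation would arise, and is not taken from the paper:

```python
def fold_change_ddct(ct_gene_treated, ct_ref_treated,
                     ct_gene_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method.

    Ct values are normalized to a reference (housekeeping) gene in
    each condition, then compared between treated and control cells.
    """
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative numbers only: a ddCt of +3.14 corresponds to a
# ~8.8-fold downregulation (fold change ~0.11).
fc = fold_change_ddct(28.14, 18.0, 25.0, 18.0)
print(f"fold change = {fc:.2f}")
```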
Since we found an acceleration of the G1/S-phase transition in STK35L1-silenced cells, we expected that STK35L1 silencing might alter the expression of those genes which function during the G1/S phase transition. Indeed, the most prominently downregulated gene (8.8-fold) was p16INK4a (Table 1). This protein is encoded by the alpha-transcript of the CDKN2A gene [8], inhibits the cell cycle, and is responsible for arresting cells in the G1 phase. We were unable to detect p16INK4a, which is probably expressed at a very low level in HUVEC, by immunoblotting using commercial antibodies. Another study was also unable to detect p16INK4a in HUVEC under normal growth conditions [26].
Two other genes were significantly down-regulated in STK35L1-silenced cells: GADD45A, a protein involved in DNA repair as well as in G1 cell cycle arrest [27], and DDX11, which encodes a nucleolar helicase (Table 1). No gene was significantly upregulated in STK35L1-silenced cells. These data suggest that STK35L1 might function as a negative regulator of cell cycle progression by affecting the expression levels of genes that inhibit the G1-to-S-phase transition, such as p16INK4a and GADD45A.
STK35L1 is upregulated during angiogenesis
Endothelial cells cultivated on a basal membrane-like matrix such as Matrigel undergo a rapid morphogenesis: they migrate on the matrix, form cell-cell contacts and a network of cords, but they do not proliferate. In contrast, cells cultivated on collagen-coated surfaces mainly proliferate [28]. We reasoned that STK35L1 expression might inhibit endothelial cell proliferation on a basal membrane-like matrix, and perhaps increase the migration potential of the cells by keeping them in the G1 phase. Therefore, we first compared the effect of the two different matrices (Matrigel and collagen) on STK35L1 expression in endothelial cells. Indeed, we found that the expression of STK35L1 transcripts was increased threefold within 4 hours only in cells growing on the basal membrane-like matrix (Figure 5A).
Knockdown of STK35L1 inhibits endothelial sprouting
Next, we analyzed whether silencing of STK35L1 affects the morphogenesis of endothelial cells on Matrigel. Six hours after plating, endothelial cells transfected with control siRNA formed a network of cord-like structures on the basal membrane-like matrix (Matrigel), similar to non-transfected cells (Figure 5B). In contrast, in STK35L1-silenced cells endothelial sprouting was drastically inhibited (by 70%; Figure 5B, 5C). The silenced cells also showed considerably fewer nodes with branching outlets (Figure 5B, lower panel). These data indicate that STK35L1 regulates the morphogenetic process required for angiogenesis.
STK35L1 silencing inhibits endothelial cell migration
To determine which step of morphogenesis is affected in STK35L1-silenced cells, we analyzed the migration of STK35L1-silenced endothelial cells using two different wound-healing assays. In the assay using the IBIDI Culture-Insert, where no cell damage occurs, the migration of STK35L1-silenced cells was drastically inhibited: the cells were not able to move in the direction of the wound (Movie S2, Figure 6A). In STK35L1-silenced cells, we could not observe the stable lamellipodia formation in the direction of migration seen in controls, but instead found transient lamellipodia formation in all directions (Movie S2). Also after mechanical injury of a confluent endothelium, where cell damage occurs, the migration of STK35L1-silenced cells was inhibited (Figure 6B). These results show that STK35L1 is crucial for endothelial cell migration.
Discussion
The present study demonstrates that the localization of the novel kinase STK35L1 in the nucleus and nucleolus is regulated by a specific bipartite NLS and a NoLS, both present in the N-terminal part of the protein. We found an interaction of STK35L1 with nuclear actin that is mediated through a potential PDZ-BM motif present in the N-terminal part of STK35L1. Furthermore, STK35L1 regulates the expression of CDKN2A and inhibits the G1-to-S-phase transition of the cell cycle, and STK35L1 is essential for endothelial cell migration and cord-like structure formation on Matrigel.
The N-terminal part of STK35L1 is highly basic, rich in arginine and lysine, and contains the bipartite NLS and the NoLS. The identified bipartite NLS sequence is conserved among mammals, and we showed that it is functional in endothelial cells. Our study further shows that the bipartite NLS (aa 142-153) does not overlap with the NoLS sequences (aa 1-133) of STK35L1. Nucleolar localization sequences are rich in arginine and lysine and often overlap with the NLS, suggesting a complex regulation of nucleolar localization [29]. Other studies have identified small NoLS sequence motifs, ranging from 7 to 30 residues, that can be sufficient to target a protein to the nucleolus [22,30]. We found that the NoLS of STK35L1 has no well-defined small sequence motif.
In terms of function, we show that STK35L1 is a negative regulator of the G1-to-S phase progression of the endothelial cell cycle by increasing, directly or indirectly, the expression of the cell cycle inhibitor CDKN2A. Previously, high expression of CDKN2A was correlated with low proliferation of colorectal carcinomas at the invasion front [31]. Moreover, enhanced levels of STK35L1 transcripts have been reported in tissues obtained from colorectal cancer patients [19]. Interestingly, a phosphoproteomic study of the kinome across the cell cycle found that STK35 (the N-terminally 133-aa-truncated form of STK35L1) is phosphorylated during cell cycle progression in HeLa S3 cells. This study identified Thr-12 in STK35 as a phosphorylated residue, which corresponds to Thr-145 in STK35L1 [32]. Notably, Thr-145 is within the bipartite NLS sequence (STK35L1 aa 142-153) identified in the present study. Many proteins, such as LIMK2, Ca2+/calmodulin-dependent protein kinase II, and cyclin B1, are phosphorylated near or within their NLS, thereby affecting their nucleocytoplasmic shuttling [22,33,34]. These results lead us to suggest that phosphorylation of STK35L1 within its NLS during cell cycle progression might regulate its localization in the nucleus and/or nucleolus.
Our study shows that STK35L1 is upstream of CDKN2A. STK35L1 regulation of the CDKN2A alpha-transcript (p16INK4a) could be a new signaling pathway regulating G1-to-S phase progression of the cell cycle. It has been shown that nuclear β-catenin stimulates the expression of p16INK4a in various tumor cell lines, but the mechanism of regulation is not well understood [31,35]. Interestingly, GSK3β-dependent phosphorylation regulates proteasomal degradation of β-catenin [36], and STK35 is a potential candidate for GSK3β-regulated proteasomal degradation [37]. Therefore, there are now two possibilities for the regulation of CDKN2A (p16INK4a): either a novel GSK3β/STK35L1 pathway regulates CDKN2A independently of GSK3β/β-catenin, or STK35L1 is part of the GSK3β/β-catenin pathway regulating CDKN2A. These possibilities should be explored further, as CDKN2A expression is important in various human diseases.
We found a crucial role of STK35L1 in endothelial cell migration as measured by two different cell migration assays. The question arises how nuclear/nucleolar STK35L1 can regulate cell migration. Previous studies showed that G1-to-S phase progression and cell migration are coordinated processes in different cell types: cells in mid-late G1 phase have the greatest ability to migrate, whereas cells in G0, S or G2/M phase have a lower or no ability to move [14,15]. Therefore, it is possible that STK35L1 promotes endothelial cell migration by keeping cells in the G1 phase. Many cell cycle proteins have been reported to be involved in the regulation of cell migration [11,13,38,39]. In endothelial cells and vascular smooth muscle cells, the CDK inhibitor p27Kip1 blocks cell migration [11,13]; however, in other studies p27Kip1 has been reported to promote migration by interacting with the G1/S-phase-specific cyclin D1 [38]. The subcellular distribution of p27Kip1 was found to be important for its promigratory function: cytoplasmic but not nuclear p27Kip1 promoted cellular migration [39]. In our study, we could not find an altered subcellular distribution of STK35L1 during endothelial cell migration, suggesting that the role of STK35L1 in regulating migration is restricted to its nuclear/nucleolar localization.
Endothelial cell migration is required for angiogenesis. Endothelial cells on Matrigel migrate, polarize and proceed to form cord-like structures, but they do not proliferate [28]. Indeed, we found that STK35L1 is upregulated in endothelial cells growing on Matrigel for 4 hours but not on collagen, and that STK35L1 was crucial for endothelial sprouting. We suggest that STK35L1 regulates this process by keeping endothelial cells in the G1 phase and thereby promoting cell migration.
We identified nuclear actin as an interacting protein of STK35L1. The interaction of STK35L1 with nuclear actin might be important for the regulation of both the cell cycle and the migration of endothelial cells [40,41,42]. It is now well established that actin is present in various nuclear compartments such as the nucleolus. Nuclear actin forms structures different from cytosolic actin structures such as stress fibers [41]. Nuclear actin plays a role in gene transcription through the regulation of transcription factors or as a component of chromatin remodeling complexes and RNP particles, and it is closely associated with all RNA polymerases [42]. For example, nuclear actin regulates the serum response factor (SRF) by interacting with MAL, a myocardin-family transcription factor. SRF activity is a key event in many cellular differentiation processes, and SRF transcriptionally controls many genes, such as actin isoforms (Actb, Actg, Acta2) and actin-binding proteins (ABPs; e.g., Gsn) [43,44]. Recently, SRF has been shown to play a crucial function during cell migration. In neuronal cells, cell migration depends not only on cytoplasmic actin dynamics but also on nuclear actin-dependent functions such as gene transcription [45]. The nuclear actin/STK35L1 complex might regulate gene transcription of specific cell cycle proteins such as CDKN2A and of genes involved in cell migration. Based on our results, it is not clear whether STK35L1 interacts directly or indirectly with actin. We identified a putative class III PDZ domain-binding motif in the N-terminal region of STK35L1 that bound to actin. This suggests that STK35L1 may be a ligand for a PDZ domain-containing protein. Since actin does not contain a PDZ domain, it is unlikely that STK35L1 interacts directly with actin through its PDZ domain-binding motif. The interaction is therefore likely mediated by PDZ domain-containing actin-binding proteins. Various PDZ-containing proteins, such as PDZ-LIM family proteins (CLP36), are known to interact with actin indirectly through binding to actin-binding proteins [25,46]. For example, the binding of CLP-36 to stress fibers is mediated by its binding to actinin [24]. It is not known whether nuclear actin also interacts with a PDZ domain-containing protein.
Together, the present study unravels an important new player in the orchestrated regulation of proliferation and migration of endothelial cells. By inhibiting the endothelial cell cycle and being essential for migration, STK35L1 is important in regulating vascular healing and angiogenesis. The interaction of STK35L1 with nuclear actin might be critical in the regulation of these cellular processes.
Materials
Oligonucleotides and siRNAs were synthesized by MWG Biotech AG (Ebersberg, Germany). Anti-β-actin was purchased from Chemicon, Germany. Anti-p16INK4A antibody was purchased from Cell Signaling Technology and anti-β-tubulin antibody from Abcam, Germany. FLAG-M2 gel slurry was purchased from Sigma, Germany. Complete Mini protease inhibitor tablets were purchased from Roche Diagnostics, Germany. Glutathione beads and GSTrap FF columns were purchased from GE Healthcare Life Sciences, Germany.
Construction of the expression plasmids and site-directed mutagenesis
The full-length coding sequence of STK35L1 was cloned into the pEGFP-C1 vector as described previously [18]. The deletion mutants of pEGFP-STK35L1 were generated with the QuikChange II site-directed mutagenesis kit (Stratagene) as per the manufacturer's instructions. To prepare the GST-PDM construct, the PDZ-binding motif-containing region of STK35L1 (aa 170-204; see Figure 3) was fused to GST.
Cell Culture and Transfection
HUVECs (human umbilical vein endothelial cells) were obtained and cultured as described previously [47]. Briefly, endothelial cells (HUVECs) harvested from umbilical cords were plated onto collagen-coated plastic culture flasks and cultured at 5% CO2 and 37 °C in complete endothelial growth medium (endothelial cell basal medium with supplements; PromoCell, Germany). In all experiments, HUVECs were transfected with 5 µg DNA per 1×10⁶ cells using the HUVEC Nucleofector kit from Amaxa GmbH.
Isolation of cell nuclei
Nuclei of EGFP-FLAG-STK35- or EGFP-FLAG-transfected or non-transfected endothelial, HEK 293T [18] and HeLa cells (DSMZ - Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH) were isolated using the Nuclei EZ Prep nuclei isolation kit (Sigma) as per the manufacturer's instructions. Briefly, cells grown in 10 cm tissue culture dishes were washed twice with ice-cold PBS, and 4 ml of ice-cold Nuclei EZ lysis buffer was added to each dish. The cells were harvested and lysed by thorough scraping and then transferred to a 15 ml centrifuge tube. The nuclei were collected by centrifugation at 500 × g for 5 minutes at 4 °C, and the nuclear pellet was washed by resuspension in cold Nuclei EZ lysis buffer. The washed nuclei were collected by centrifugation at 500 × g for 5 minutes at 4 °C and used for immunoprecipitation or the GST pull-down assay.
Immunoprecipitation of EGFP-FLAG-STK35 from nuclear lysates
Immunoprecipitation from nuclear lysate was performed using the Nuclear Co-IP Kit (Active Motif) according to the manufacturer's recommendations. In brief, the nuclei of EGFP-FLAG-STK35-transfected cells were resuspended in 50 µl complete digestion buffer (CDB). Enzymatic shearing cocktail (0.25 µl) was added to the nuclei suspension and incubated for 90 minutes at 4 °C. Subsequently, the reaction was stopped by the addition of 1 µl EDTA (0.5 M). After gentle vortexing, the tube was incubated for 5 minutes on ice. The nuclear debris was removed by centrifugation for 10 minutes at 14,000 × g and 4 °C. The supernatant, containing nuclear proteins, was diluted in 500 µl IP incubation buffer, and 30 µl of anti-FLAG-M2 gel slurry (Sigma; washed 3× with 5 volumes of IP incubation buffer) was added to the solution and incubated overnight at 4 °C. The next day, the suspension was centrifuged for 30 s at 4,000 × g and 4 °C. The beads were washed 6 times with IP wash buffer, finally resuspended in 60 µl 1× Laemmli buffer, boiled for 5 minutes at 95 °C, and subjected to SDS-PAGE and western blotting.
Mass spectrometry
The coimmunoprecipitated and Coomassie-stained protein bands were excised from the SDS-PAGE gel and sent to the Zentrallabor für Proteinanalytik (Ludwig-Maximilians-Universität Munich, Germany) for protein identification by MALDI-TOF MS analysis. There, the proteins were in-gel digested to peptides with the endoproteinase trypsin. Peptides were eluted and directly spotted on a MALDI sample plate. MALDI-TOF measurements were performed, and the resulting spectra were then analyzed with Mascot software (Matrix Science, London, United Kingdom) using the NCBI protein databank.
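The matching step in peptide mass fingerprinting is straightforward to reproduce computationally: the candidate protein sequence is digested in silico with the trypsin cleavage rule, monoisotopic peptide masses are computed, and the observed MALDI-TOF peaks are matched within a mass tolerance. The Python sketch below illustrates only this principle; it is not the Mascot scoring algorithm, and the 0.5 Da tolerance and any input values are assumptions for illustration.

```python
# In-silico tryptic digest and peak matching -- a minimal sketch of the
# idea behind peptide mass fingerprinting (not the Mascot algorithm).
MONO = {  # standard monoisotopic residue masses in Da
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER, PROTON = 18.01056, 1.00728

def tryptic_peptides(seq):
    """Cleave after K or R, but not before P (canonical trypsin rule)."""
    peptides, start = [], 0
    for i, aa in enumerate(seq):
        if aa in 'KR' and (i + 1 == len(seq) or seq[i + 1] != 'P'):
            peptides.append(seq[start:i + 1])
            start = i + 1
    if start < len(seq):
        peptides.append(seq[start:])
    return peptides

def match_peaks(observed_mh, seq, tol=0.5):
    """Match observed [M+H]+ peaks against the in-silico digest."""
    digest = {p: sum(MONO[a] for a in p) + WATER + PROTON
              for p in tryptic_peptides(seq)}
    return [(mz, p) for mz in observed_mh
            for p, m in digest.items() if abs(m - mz) <= tol]
```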
Expression and purification of recombinant GST and GST-PDM
GST and GST-PDM plasmids were expressed in BL21 (DE3) pLysS (Stratagene) at 37 °C for three hours after the addition of 0.5 mM isopropyl β-D-thiogalactoside at an A600 of ~0.6. Bacteria were harvested by centrifugation, resuspended in 10 ml of ice-cold PBS containing lysozyme (1 mg/ml) and Complete Mini protease inhibitors (Roche Diagnostics), and incubated on ice for 30 minutes. The bacterial cells were lysed by sonication, and then Triton X-100 (1%) was added. Cell debris was removed by centrifugation at 60,000 × g, and the supernatant was loaded on a GSTrap FF column (GE Healthcare Life Sciences) pre-equilibrated with PBS. The column was then washed with PBS, and bound GST-tagged protein was eluted with elution buffer (50 mM Tris base, 10 mM glutathione; pH 8.0). The eluted fractions were pooled, concentrated and desalted using Centricon® Plus-20 (Millipore).
GST pull down assay
The nuclei from three culture flasks (75 cm²) of confluent endothelial cells were isolated, and nuclear protein extracts were prepared as described above. Nuclear protein extract (1 ml) was aliquoted equally into two microcentrifuge tubes. Purified GST or GST-PDM protein was added to each tube and incubated overnight at 4 °C. Glutathione-Sepharose beads (GE Healthcare Life Sciences; 50 µl; washed 3 times with 5 volumes of IP incubation buffer) were added, and samples were incubated for one hour at 4 °C. The beads were pelleted by centrifugation, washed 6 times with IP wash buffer, finally resuspended in 60 µl 1× Laemmli buffer, boiled for 5 minutes at 95 °C, and subjected to SDS-PAGE and western blotting.
Immunofluorescent staining and fluorescence microscopy
After 8-10 hours of transfection, cells were washed and fixed with 3.7% formaldehyde in PBS for 10 minutes at 4 °C and then washed briefly 2 times with PBS. For permeabilization, the cells were incubated in 0.2% Triton X-100/PBS for 10 minutes at room temperature, followed by 3 washes with PBS. For immunostaining of nuclear actin, the fixed and permeabilized cells were incubated with blocking solution (2% fatty-acid-free BSA in PBS) for 30 minutes at room temperature, briefly rinsed with PBS, and then incubated with anti-actin monoclonal antibody (clone 4, 1:100 dilution) for one hour in a humidified chamber. Cells were washed three times with PBS, incubated with Alexa Fluor® 568 goat anti-mouse secondary antibody (1:200 dilution) for 45 minutes at room temperature, and then washed 3 times. For DNA staining, cells were incubated with Hoechst 33258 dye (1 µg/ml) for 10 minutes. Cells were observed with a Nikon TE2000E-PFS fluorescence microscope.
STK35L1-silencing
STK35L1-specific siRNAs were designed against three different regions of the STK35L1 gene [18]. For silencing, HUVECs were grown to 90% confluence in 6-well plates in complete endothelial cell growth medium. Twenty-four hours before transfection, the growth medium was changed to OptiMEM medium containing 0.5% FCS without antibiotics. HUVECs were transfected with a pool of the three siRNAs using Oligofectamine™ (Invitrogen, Germany) for 48 hours according to the manufacturer's protocol.
Endothelial cell viability and cell proliferation assay
To measure endothelial cell viability, cells were trypsinized, centrifuged, and resuspended in PBS. Cells were mixed with trypan blue (0.4%; Sigma) in a 1:1 ratio and incubated for 3 minutes at room temperature. Cell viability was calculated by counting unstained (viable) and stained cells using a hemocytometer.
For the endothelial cell proliferation assay, cells were grown in 24-well plates in complete endothelial growth medium. AlamarBlue® (Invitrogen) [48] reagent was added (1/10th of the volume of growth medium in each well) at different time points (24, 48 and 72 hours) and incubated for 4 hours at 37 °C. The absorbance was measured at 570 nm, with 690 nm as the reference wavelength (values normalized to the 690 nm reading), using a Mithras LB 940 multimode reader. Cell proliferation is directly correlated with the absorbance of the developed color. The absorbance of control siRNA-treated cells was set to 100%, and proliferation was calculated as % of control (Figure S2).
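The percentage-of-control readout described above reduces to a reference-wavelength correction followed by normalization. A minimal sketch, assuming raw A570 and A690 readings; the numbers below are hypothetical:

```python
# Background-correct each well with its 690 nm reference reading, then
# express the silenced-cell signal as a percentage of the control wells.
def percent_of_control(a570, a690, ctrl_a570, ctrl_a690):
    corrected_sample = a570 - a690
    corrected_control = ctrl_a570 - ctrl_a690
    return 100.0 * corrected_sample / corrected_control

print(percent_of_control(0.82, 0.11, 0.95, 0.12))  # ~85.5% of control
```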
RT-PCR
To measure STK35L1 expression in silenced cells, or in other experiments, the cells grown on collagen or on Matrigel were harvested by trypsinization or by using BD™ Cell Recovery Solution, respectively. Total RNA was isolated from the harvested cells using the RNeasy Mini kit (Qiagen, Germany). First-strand cDNAs were synthesized with the Omniscript reverse transcriptase kit (Qiagen) using random hexamer primers as per the manufacturer's protocol. The relative expression of the STK35L1 transcript was measured by quantitative RT-PCR using PuReTaq Ready-To-Go qPCR beads (GE Life Sciences) as per the manufacturer's instructions. The data were normalized against the β-actin gene.
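Normalization against β-actin is typically reported as a fold change via the 2^-ΔΔCt method. A minimal sketch, assuming β-actin as the normalizer; the Ct values below are hypothetical:

```python
# 2^-ΔΔCt relative expression: ΔCt = Ct(target) - Ct(β-actin) per sample,
# ΔΔCt = ΔCt(silenced) - ΔCt(control), fold change = 2 ** -ΔΔCt.
def fold_change(ct_target_sil, ct_actin_sil, ct_target_ctl, ct_actin_ctl):
    ddct = (ct_target_sil - ct_actin_sil) - (ct_target_ctl - ct_actin_ctl)
    return 2 ** -ddct

# a fold change of ~0.14 corresponds to ~86% knockdown
print(fold_change(26.5, 18.0, 23.8, 18.1))  # ≈0.14
```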
Cell cycle analysis of endothelial cells
Endothelial cells, transfected with STK35L1 siRNA or control siRNA, were grown to confluence. Twenty-four hours after transfection, the cells were trypsinized, split in a 1:2 ratio, replated, and cultivated in serum-free medium for 24 hours. The cells were then released into normal cell cycle progression by changing the medium to endothelial growth medium containing 10% FCS. The cells were harvested 6, 12, and 24 hours after release from starvation. Cells were fixed by adding 90% methanol dropwise to the cell pellet. The cell suspension was kept for 30 minutes at 4 °C, and then the cells were pelleted at 800 rpm followed by two washes with PBS. Finally, the cell pellet was resuspended in propidium iodide solution, incubated at 37 °C for 1 hour, and then analyzed by FACS (FACSCalibur flow cytometer, Becton Dickinson). The cell cycle data were analyzed with ModFit software.
Cell Cycle RT² Profiler™ PCR Array
Endothelial cells transfected with STK35L1 siRNA or control siRNA were grown to confluence for 24 hours. The cells were trypsinized, split in a 1:2 ratio, re-seeded, and grown in endothelial cell basal medium (PromoCell) containing 0.5% serum for 24 hours. The cells were then released into normal cell cycle progression by changing the medium to complete endothelial cell growth medium containing 10% FCS. The cells were harvested six hours after release from starvation, and total RNA was isolated using the RNeasy Mini kit (Qiagen). The RNA was reverse transcribed using the RT² First Strand Kit (SA Biosciences). cDNA (2 µg) of STK35L1-silenced and control cells was mixed with RT² qPCR Master Mix. The mixture was aliquoted into each well of the 96-well PCR array plate containing pre-dispensed gene-specific primer sets. Quantitative real-time PCR of the 96-well plate was performed using the iCycler (Bio-Rad). Baseline and threshold values of the real-time PCR were defined manually and kept the same across the PCR array. The resulting threshold cycle values for all wells were exported to Microsoft Excel for use with the data analysis template Excel file. For the analysis of these RT-PCR data, we used 4 control genes to calculate the normalization factor: β-2-microglobulin (B2M), hypoxanthine phosphoribosyltransferase 1 (HPRT1), ribosomal protein L13a (RPL13A), and glyceraldehyde-3-phosphate dehydrogenase (GAPDH). Data were generated from three independent silencing experiments (n = 3). We considered genes whose expression was significantly up- or down-regulated by a factor of ≥4 (p ≤ 0.05).
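The array analysis boils down to normalizing each gene's Ct against the average Ct of the four housekeeping genes, computing 2^-ΔΔCt fold changes across the replicates, and keeping genes that pass the ≥4-fold and p ≤ 0.05 filters. A minimal sketch of that pipeline; the choice of a t-test and all input structures are assumptions for illustration, not the vendor's exact template:

```python
from scipy import stats

HOUSEKEEPING = ('B2M', 'HPRT1', 'RPL13A', 'GAPDH')

def delta_ct(ct_by_gene):
    """Normalize every gene Ct against the mean housekeeping Ct."""
    norm = sum(ct_by_gene[g] for g in HOUSEKEEPING) / len(HOUSEKEEPING)
    return {g: ct - norm for g, ct in ct_by_gene.items()
            if g not in HOUSEKEEPING}

def significant_genes(silenced_reps, control_reps, fc_cut=4.0, p_cut=0.05):
    """Each argument: list of {gene: Ct} dicts, one per experiment (n = 3)."""
    hits = {}
    for g in delta_ct(silenced_reps[0]):
        dct_s = [delta_ct(r)[g] for r in silenced_reps]
        dct_c = [delta_ct(r)[g] for r in control_reps]
        fc = 2 ** -(sum(dct_s) / len(dct_s) - sum(dct_c) / len(dct_c))
        _, p = stats.ttest_ind(dct_s, dct_c)
        if (fc >= fc_cut or fc <= 1 / fc_cut) and p <= p_cut:
            hits[g] = (fc, p)
    return hits
```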
Matrigel cord formation assay
Matrigel® (BD Biosciences) was thawed on ice overnight, and 10 µl were pipetted with ice-cold pipette tips into the lower chambers of an IBIDI angiogenesis slide (IBIDI GmbH) and allowed to harden for 30 minutes at 37 °C. After 48 hours of siRNA transfection, STK35L1-silenced and control cells were trypsinized and resuspended in complete endothelial growth medium. Transfected cells were more than 95% viable. The cell suspension (50 µl; 3×10⁵ cells/ml) was seeded on Matrigel in the well. The cells were incubated at 37 °C, and cord formation was observed on a Nikon TE2000E-PFS fluorescence microscope. Six hours after seeding, pictures were taken and the images were analyzed with NIS Elements software. Network formation was quantified by measuring the total cord length and compared between silenced and non-silenced cells.
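The quantification reduces to comparing mean total cord length per field between conditions. A minimal sketch, assuming per-field total cord lengths as exported from NIS Elements; the numbers are hypothetical:

```python
# Percent inhibition of cord formation relative to the control condition.
def percent_inhibition(control_lengths, silenced_lengths):
    mean = lambda xs: sum(xs) / len(xs)
    return 100.0 * (1.0 - mean(silenced_lengths) / mean(control_lengths))

control = [5200.0, 4800.0, 5100.0]   # total cord length per field (µm)
silenced = [1500.0, 1600.0, 1400.0]
print(percent_inhibition(control, silenced))  # ≈70% inhibition
```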
Endothelial cell migration after mechanical injury
HUVECs were seeded onto collagen-coated six-well plates and, following 24 hours of starvation, the cell layer was scratched once from one edge of the well to the other using a pipette tip. The cells were washed with endothelial growth medium to remove cell debris and then incubated in complete endothelial growth medium. Wound healing was determined by measuring the cell-free area remaining in the wound, which inversely correlates with the ability of the HUVECs to migrate.
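Wound closure from such images is the fraction of the initial cell-free area that has been filled. A minimal sketch, assuming the cell-free area has already been segmented (units are arbitrary, e.g. pixels²); the values are hypothetical:

```python
# Percent wound closure between the 0 h and 7 h images.
def wound_closure(area_0h, area_7h):
    return 100.0 * (area_0h - area_7h) / area_0h

print(wound_closure(120000, 30000))  # control: 75% closed
print(wound_closure(118000, 90000))  # silenced: ~24% closed
```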
Endothelial cell migration using the IBIDI culture insert
An IBIDI culture insert (IBIDI GmbH) consists of two reservoirs separated by a 500 µm thick wall. For the endothelial migration assay, an IBIDI culture insert was placed into one well of a 24-well plate and pressed slightly on top to ensure tight adhesion. Equal numbers of control and STK35L1-silenced endothelial cells (70 µl; 4×10⁵ cells/ml) were added into the two reservoirs of the same insert and incubated at 37 °C/5% CO2. After 10 hours, the insert was gently removed, creating a gap of ~500 µm. The well was filled with complete endothelial growth medium, and migration was observed by live cell imaging using a Nikon TE2000E-PFS microscope.
Supporting Information

Table S1 Prediction of protein-binding motifs within STK35L1 using the ELM web server. Predicted binding motifs within STK35L1 are shown. The consensus binding sequence for the given binding domains is labeled in red. LIG, ligand-binding motif. (PDF)
Table S2 List of genes and their position on the RT-PCR array 96-well plate. (PDF)

Movie S1 Live cell imaging of human endothelial cells transfected with EGFP-PDM. EGFP-PDM, containing the PDZ-binding motif (see text for details), distributes throughout the cytoplasm and the nucleus. In migrating cells, it concentrates in membrane ruffles at the leading edge, as indicated by white arrows. Pictures were taken every four minutes for 90 minutes. The movie was edited with QuickTime Pro and iMovie software from Apple Inc. (MOV)

Movie S2 Migration assay using the IBIDI insert. Endothelial cells transfected with STK35L1 siRNA (left side) or control siRNA (right side) were seeded into different reservoirs of an IBIDI insert. After 8 hours, the insert was removed, and the closure of the gap was observed on a Nikon TE2000E-PFS fluorescence microscope equipped with an incubation chamber (37 °C) and CO2 supply. The microscope was controlled by NIS Elements software. Pictures were taken every 7 minutes for 15 hours. The movie was edited with QuickTime Pro and iMovie software from Apple Inc. (MOV)
Figure 1. Nuclear and nucleolar localization signals in STK35L1. A) Prediction of nuclear localization signals in STK35L1. The NLS was aligned with the known monopartite NLS of ESXR1 and the bipartite NLSs of Plk1 [22], LIMK2 [22] and HDGF [49]. Basic amino acids are shown in red. B) Schematic representation of different EGFP-STK35L1 deletion mutants. The upper axis with coordinates represents the position of amino acids. The predicted bipartite NLS (aa 142-153) is shown as a red box. C) Subcellular distribution of different EGFP-STK35L1 deletion mutants. EGFP-STK35L1 was mainly localized in the nucleus and nucleolus (top left panel; white arrow). The nucleolus appears as a dark spot excluded from the DNA staining with Hoechst dye (lower left panel, white arrowhead). The expression of the various EGFP-STK35L1 deletion mutants in endothelial cells is shown (see text for details). doi:10.1371/journal.pone.0016249.g001
Figure 2. EGFP-STK35 interacts with nuclear actin. A) Nuclear actin coimmunoprecipitates with EGFP-FLAG-STK35. HEK293T cells were transfected with EGFP-FLAG-STK35 or EGFP-FLAG. The nuclei were isolated, and EGFP-FLAG-tagged proteins were immunoprecipitated with anti-FLAG antibody. Bound proteins were resolved by SDS-PAGE and stained with Coomassie blue. The specific bands were cut, in-gel digested with trypsin, and identified by MALDI-TOF and peptide mass fingerprinting. The protein band around 45 kDa (under '*') was identified as β-actin (left panel). The other bands were identified as HSP70B1 and EGFP-FLAG-STK35. The same immunoprecipitated samples were blotted with monoclonal anti-actin antibody, and the 45 kDa band was confirmed as β-actin (middle panel). To check the purity of the nuclear preparation, 10 µl of the 100 µl nuclear fraction and 10 µl of the 4 ml cytoplasmic fraction were subjected to SDS-PAGE and blotted with anti-actin and anti-tubulin antibodies. The apparent 50:50 ratio of nuclear to cytoplasmic actin is due to the high amount of actin in the diluted cytoplasmic fraction. Tubulin (55 kDa) could be detected in the cytoplasmic but not in the nuclear fraction, demonstrating that the nuclear fraction was free of cytoplasmic proteins (right panel). B) EGFP-STK35 and β-actin colocalize partially in the nucleus. Endothelial cells were transfected with EGFP-STK35 (left panel, green) and stained for β-actin (middle panel, red) with anti-actin monoclonal antibody (clone C4) 8 h after transfection. The overlay (right panel) shows colocalization of EGFP-STK35 and β-actin. Arrows indicate some sites of colocalization. doi:10.1371/journal.pone.0016249.g002
Figure 3. Identification of a putative class III PDZ-binding motif in STK35L1 that mediates its interaction with actin. A) The class III PDZ domain-binding motif (PDZ-BM) was predicted by ELM software, and its position within the STK35L1 sequence is indicated as a green box; kinase domain: yellow box, NLS: red box. B) GST pull-down assay of nuclear lysates from HeLa cells was performed using recombinant GST or GST-PDM protein (GST coupled to 30 amino acids (aa 170 to 204) of STK35L1 containing the potential PDZ-BM). Bound proteins were resolved by SDS-PAGE and immunoblotted using a monoclonal anti-β-actin antibody. C) Subcellular distribution of different EGFP-STK35L1 deletion mutants in transfected endothelial cells. EGFP-PDM (aa 170-204, containing only the PDZ-BM of STK35L1) localized to actin stress fibers. The EGFP-STK35L1Δ1-169 mutant (lacking NoLS and NLS) was distributed throughout the nucleus and cytoplasm with partial localization to fiber-like structures of the cytoskeleton. EGFP-STK35L1Δ1-196 (lacking NoLS, NLS and PDZ-BM) was diffusely distributed throughout the cytoplasm and the nucleus, but no green fiber-like structures were visible. doi:10.1371/journal.pone.0016249.g003
Figure 4. G1-to-S-phase progression is accelerated in STK35L1-silenced endothelial cells. A) STK35L1 expression level in STK35L1-silenced cells and control cells. B) Cell cycle distribution of control siRNA- and STK35L1 siRNA-treated endothelial cells. Cells arrested in G0/G1 phase were stimulated with serum for various times. Values are mean ± S.E. of three independent experiments. An asterisk '*' indicates a significance level of p ≤ 0.05. doi:10.1371/journal.pone.0016249.g004
Figure 5. STK35L1 regulates endothelial morphogenesis. A) STK35L1 expression in endothelial cells cultivated in the presence of VEGF on collagen or on Matrigel. The expression level of STK35L1 transcripts at the indicated time points was analyzed by RT-PCR. The relative expression level compared to time 0 hr is shown. B) and C) STK35L1 regulates the formation of cord-like structures on Matrigel. Control siRNA- and STK35L1 siRNA-transfected cells were grown on Matrigel for 6 h. B) Phase contrast micrograph; scale bar 500 µm. C) Bar diagram; values are mean ± S.E. of three independent experiments; * p < 0.05. doi:10.1371/journal.pone.0016249.g005
Figure 6. STK35L1 is required for endothelial cell migration. A) Migration assay using the IBIDI insert. Endothelial cells transfected with STK35L1 siRNA (red shaded part) or control siRNA (green shaded part) were seeded into different reservoirs of an IBIDI insert. After 8 hours, the insert was removed, and the closure of the gap was observed by video microscopy. Pictures from one experiment representative of three are shown; scale bar 100 µm. B) Migration after mechanical injury. Left panel: representative micrographs; cells treated with control siRNA or with STK35L1 siRNA are shown 0 hr or 7 hr after mechanical injury; the width of the wound is given in µm. Right panel: bar diagram; wound closure was quantified as described in the Methods section. Values are mean ± S.E. of three independent experiments; * p < 0.05. doi:10.1371/journal.pone.0016249.g006
Figure S1 Protein sequence alignment of mammalian STK35L1. The N-terminal region and kinase domain of STK35L1 are shaded in gray and yellow, respectively. The conserved bipartite NLS (boxed) is marked in red. Stretches of arginine and lysine are colored in gold. (PDF)

Figure S2 Endothelial cell proliferation using AlamarBlue®. HUVECs were seeded (25,000 cells/well) in 24-well plates and grown for 24, 48 and 72 hours. Four hours before each time point, cells were incubated with AlamarBlue reagent as described in Materials and Methods. The absorbance of control siRNA-treated cells was set to 100%, and proliferation was calculated as % of control. (JPG)

Table 1. Mean fold change of mRNA expression of selected cell cycle-specific genes in STK35L1-silenced cells vs. control cells.
"Biology",
"Medicine"
] |
Clinico-pathological relationship between androgen receptor and tumour infiltrating lymphocytes in triple negative breast cancer
Background Triple negative breast cancer (TNBC) is an aggressive subtype of breast cancer (BC) with ill-defined therapeutic targets. Androgen receptor (AR) and tumour-infiltrating lymphocytes (TILs) had a prognostic and predictive value in TNBC. The relationship between AR, TILs and clinical behaviour is still not fully understood. Methods Thirty-six TNBC patients were evaluated for AR (positive if ≥1% expression), CD3, CD4, CD8 and CD20 by immunohistochemistry. Stromal TILs were quantified following TILs Working Group recommendations. Lymphocyte-predominant breast cancer (LPBC) was defined as stromal TILs ≥ 50%, whereas lymphocyte-deficient breast cancer (LDBC) was defined as <50%. Results The mean age was 52.5 years and 27.8% were ≥60 years. Seven patients (21.2%) were AR+. All AR+ cases were postmenopausal (≥50 years old). LPBC was 32.2% of the whole cohort. Median TILs were 37.5% and 10% (p = 0.1) and median CD20 was 20% and 7.5% (p = 0.008) in AR− and AR+, respectively. Mean CD3 was 80.7% and 93.3% (p = 0.007) and CD8 was 75% and 80.8% (p = 0.41) in AR− and AR+, respectively. All patients who were ≥60 years old expressed CD20. LDBC was found to be significantly higher in N+ versus N− patients (p = 0.03) with median TILs of 20% versus 50% in N+ versus N−, respectively (p = 0.03). LDBC was associated with higher risk of lymph node (LN) involvement (odds ratio = 6; 95% CI = 1.05–34.21; p = 0.04). Conclusions AR expression was evident in older age (≥50 years). Median CD20 was higher in AR− TNBC, while mean CD3 was higher in AR+ tumours. LDBC was associated with higher risk of LN involvement. Larger studies are needed to focus on the clinical impact of the relation between AR and TILs in TNBC.
Background
Triple negative breast cancer (TNBC) is a challenging heterogeneous disease with distinct molecular subtypes that does not have receptors for oestrogen, progesterone hormones and the human epidermal growth factor receptor 2 (HER2) protein. TNBC was grouped into six molecular subtypes: basal-like (BL) 1, BL2, mesenchymal (M), mesenchymal stem-like (MSL), immunomodulatory (IM) and luminal androgen receptor (LAR) [1]. But thereafter, Lehmann et al [2] found that transcripts in the previously defined IM and MSL subtypes came from tumour-infiltrating lymphocytes (TILs) and tumour-associated stromal cells, respectively, and they reduced the number of TNBC molecular subtypes to four (BL1, BL2, M and LAR).
TILs play an essential role in predicting response to chemotherapy and improving clinical outcomes in breast cancer (BC). Moreover, as the immunotherapy landscape continues to evolve, there is interest in whether the immune system could be playing a more substantial role in TNBC specifically. The association between TNBC subtypes and the impact of TILs is still not fully understood. However, accumulating evidence from several studies indicates that intra-tumoural levels of TILs in TNBC are: a) predictive for response to neo-adjuvant chemotherapy and b) prognostic in patients treated with adjuvant chemotherapy, being correlated with improved overall survival (OS) and disease free survival (DFS) [3].
Besides the immune cell markers, the androgen receptor (AR), which controls the transcription of different genes including immune response genes, has been recognised as a valuable biomarker in TNBC [4]. AR expression was correlated with better survival outcomes in TNBC [5], albeit its clinical utility and immunological impact remain unclear. However, many open questions still need to be answered, such as the prevalence of AR positivity in TNBC and whether AR expression correlates with mean TILs or with CD3, CD4, CD8 and CD20 expression. It also remains unclear whether there is any relationship between the predominance of TILs and age or stage. Here, we addressed these questions and explored the correlation between AR expression and total and differential TILs in TNBC.
Methods
In this cross-sectional pilot study, patients' records were reviewed retrospectively to select patients with TNBC. TNBC was defined based on the American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP) recommendations (2010) [6] as tumours with negative (<1% nuclear staining) oestrogen receptor (ER) and progesterone receptor (PR) status and lack of HER2 receptor overexpression or oncogene amplification. From a cohort of 800 BC patients diagnosed in 2012 at the clinical oncology department, Ain Shams University, 10% (80 patients) were diagnosed as TNBC in this year, of whom 36 patients had available tumour paraffin tissue and medical records. The clinico-pathological data and survival outcomes were collected. Tumour (T), node (N) and metastasis (M) (TNM) staging was done according to the seventh edition of the American Joint Committee on Cancer (AJCC) system. The study protocol was approved by the Research Ethics Committee, Faculty of Medicine, Ain Shams University, Cairo, Egypt.
Pathological evaluation was performed by a dedicated pathologist (TH), who was blinded to the clinical data. Haematoxylin and eosin-stained sections were reviewed for the negativity of ER, PR and HER2 and assessed for the histologic type and grade of the tumour. The sections were then examined to quantify the stromal TILs according to the 2014 TILs International Working Group [7], defined as the percentage of lymphocytes in direct contact with tumour cells. Lymphocyte-predominant breast cancer (LPBC) was defined as TILs ≥ 50%, while lymphocyte-deficient breast cancer (LDBC) was defined as TILs < 50%.
Formalin-fixed, paraffin-embedded tissue specimens were available for the evaluation of both AR and TILs in 28 patients, AR alone in 5 patients and TILs alone in 3 patients. AR expression (Code 200M-18) was evaluated by immunohistochemistry (IHC) and considered positive if ≥1% of tumour cells showed nuclear staining [4]. Immunostaining was also performed for the T cell markers CD3 (Code 00000 51564), CD4 (Code 104R-28) and CD8 (Code 108M-98) and the B cell marker CD20 (Code 00000 27500). All antibodies were ready to use, from Cell Marque, California, USA. CD3, CD4, CD8 and CD20 immunostaining results were evaluated as the percentage of stained lymphocytes relative to the total lymphocytes in the whole tissue section. The mean (for CD3 and CD8) and median (for CD4 and CD20) were then calculated. The primary aim of our study was to describe the expression of AR and immune cells (CD3, CD4, CD8 and CD20) in TNBC, as well as the percentage of TILs. The secondary aim was to correlate the clinico-pathological parameters with these biomarkers.
Statistical analysis
Recorded data were analysed using the Statistical Package for Social Sciences, version 20.0 (SPSS Inc., Chicago, Illinois, USA). Quantitative data were expressed as mean ± standard deviation (SD) or median and interquartile range (IQR). Qualitative data were expressed as frequency and percentage. The independent-samples t-test and the Mann-Whitney (z) test were used to compare two means and non-parametric data, respectively. The analysis of variance (ANOVA) test and the Kruskal-Wallis test were used to compare more than two means and for multiple-group comparisons of non-parametric data, respectively. The chi-square (χ²) test was used to compare proportions between qualitative parameters. As multivariate analysis is not suitable for small data sets, estimates are presented according to univariate analysis. Spearman's correlation coefficient (r) was used to assess the degree of association between two sets of variables. The p-value was considered significant if ≤0.05.
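The core comparisons can be reproduced with standard library calls. A minimal sketch using scipy rather than SPSS; all input values are hypothetical and only illustrate the shape of each test:

```python
from scipy import stats

# Mann-Whitney test: e.g., TILs percentages in AR- vs AR+ tumours
tils_ar_neg = [50, 40, 35, 60, 20, 45, 30]
tils_ar_pos = [10, 15, 5, 20, 10]
print(stats.mannwhitneyu(tils_ar_neg, tils_ar_pos, alternative='two-sided'))

# Chi-square test on a 2x2 contingency table, e.g. TILs class vs nodal status
table = [[5, 5],    # LPBC: N+, N-
         [18, 3]]   # LDBC: N+, N-
print(stats.chi2_contingency(table))

# Spearman's rank correlation, e.g. CD3 vs CD8 percentages
cd3 = [80, 85, 90, 95, 75]
cd8 = [70, 76, 82, 85, 65]
print(stats.spearmanr(cd3, cd8))
```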
Patient characteristics
Thirty-six TNBC patients with sufficient available tumour material were identified for analysis. The patients' characteristics are shown in Supplementary Material, Table S1. The mean age at diagnosis was 52.5 years (range: 30-75 years), and 27.8% of cases were ≥60 years old. Most of the tumours (58.3%) were of the invasive duct carcinoma (IDC) type, while medullary carcinoma and invasive lobular carcinoma (ILC) accounted for 22% and 11%, respectively. Grade II and III tumours represented 30.6% and 52.8%, respectively. Stages I, II, III and IV represented 5.6%, 30.5%, 52.7% and 8.3%, respectively. Lymph nodes (LNs) were positive in 77.8% (28 patients). After a median follow-up of 39 months, nine patients had developed disease progression, and the 3-year OS rate was 44.4%.
AR expression and its relation with the clinico-pathological and survival parameters
AR was tested in 33 patients and was expressed in 21.2% (7 patients). All AR+ cases (100%) were postmenopausal (≥50 years old). Although patients with AR+ tumours were older than those who were AR− (mean age: 55 versus 51.6 years), there was no statistically significant difference in age between the two groups (p = 0.47). LNs were involved in 77% and 85.7% of AR− and AR+ cases, respectively (p = 0.61). No statistical difference was found in median OS between the AR− and AR+ groups (31.5 versus 25 months, p = 0.77). The clinico-pathological parameters according to AR expression are shown in Table 1.
Discussion
It is well established that the expression of AR differs according to the molecular subtypes of BC, with more frequent expression in ER-negative cancers. The prevalence of AR+ tumours generally ranges from 10% to 41% in TNBC cases [1,8-14], with rare reports showing rates up to 79% [15,16]. In accordance with most published reports, our rate of AR expression in TNBC was 21.2%.
Whether the clinico-pathologic characteristics of TNBC vary based on AR expression status has been extensively studied [8,13,15,17,18]. Some studies showed that patients with AR+ tumours were significantly older and exhibited tumours with significantly lower grades (I-II), more frequent nodal involvement, non-ductal histology and lower Ki67 [14,15,17,18]. Other reports described reduced LN metastases in AR+ TNBCs [8], or a similar clinico-pathologic profile between AR+ and AR− TNBC tumours [13]. Herein, there was no statistically significant difference in the clinico-pathological parameters according to AR expression. However, AR+ cases were older and exhibited more regional nodal spread. Although statistically insignificant, this profile was analogous to the LAR subtype described by Lehmann et al [2].
Available evidence about the prognostic value of AR in TNBC is controversial. Some reports suggested that AR positivity was associated with good outcomes [8,13], whereas others concluded that AR status conferred a worse prognosis [19] or had no significant impact on disease prognosis [4,20,21]. Many factors may explain these discrepant results across studies, including small sample sizes, differences in ethnic origin, the anti-AR antibodies used for staining, the staining/scoring method, as well as variability in the thresholds used to define AR positivity [4]. A meta-analysis published in 2017 demonstrated that AR-positive status was associated with better DFS and OS in TNBC (hazard ratio (HR) = 0.64; 95% CI = 0.51-0.81; p < 0.001 and HR = 0.64; 95% CI = 0.49-0.88; p < 0.001, respectively) in univariate analysis [5]. Of note, no multivariate analysis was provided, and this meta-analysis included heterogeneous studies in terms of methods of AR scoring, clinical cohort characteristics, therapies received and length of follow-up. A large multi-institutional study including about 1,407 TNBC tumours, issued after this meta-analysis, concluded that AR positivity was a marker of good prognosis in the USA and Nigerian cohorts, whereas it conferred poor prognosis in the Norwegian, Irish and Indian cohorts and was neutral in the UK cohort [4], whereas a more recent meta-analysis (2020) found no significant prognostic impact of AR status [21]. Importantly, the presence of more frequent special histological subtypes with poor prognosis, such as medullary carcinoma, ILC and adenoid cystic carcinoma, in the AR− versus the AR+ group (42% versus 28%) in our cohort may have had an impact on survival, as pointed out by other studies [22].
Compared with other subtypes, TNBC has been shown to exhibit higher levels of TILs [23]. There is heterogeneity in the TILs cut-offs used in published studies to distinguish between LPBC and LDBC. Some studies defined LPBC as more than 50% lymphocyte infiltration [24,25], whereas others used different cut-offs [26]. In our cohort, median TILs were 30% (range: 1%-70%), with an LPBC prevalence of 32.2%, which is not in full agreement with other reports. Adams et al [24] reported a much lower median TILs percentage (10%) and, using the same cut-off of ≥50% TILs, found that only 4.4% were LPBC, whereas Pruneri et al [25] described a median TILs level of 20%, with an LPBC prevalence of 22% of cases.
Little is known about the association between TNBC clinico-pathologic features and lymphocytic predominance. A pooled analysis of nine large studies by Loi et al [26] demonstrated that TILs were significantly lower at older age, whilst Adams et al [24] reported no strong associations between TILs scores and age or menopausal status. Although not statistically significant, we found lower median TILs in patients ≥60 years versus <60 years old (10% versus 38%, p = 0.45).
Interestingly, we found that patients with LN involvement were significantly more likely to be LDBC: a total TILs expression < 50% (LDBC) was associated with a higher risk of LN involvement (OR = 6; 95% CI = 1.05-34.21; p = 0.04). This is in agreement with Loi et al [26], but in contrast to a recent meta-analysis which concluded that there was no significant association between decreased TILs and LN metastasis risk [27].
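The reported odds ratio and its confidence interval follow from a 2x2 table with a Woolf (log) interval. A minimal sketch; the counts below are a reconstruction consistent with the reported OR = 6 (95% CI 1.05-34.21), not values taken verbatim from the paper:

```python
import math

# rows: TILs class, columns: nodal status (hypothetical reconstruction)
a, b = 18, 3   # LDBC: N+, N-
c, d = 5, 5    # LPBC: N+, N-

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf standard error
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
# OR = 6.00, 95% CI = 1.05-34.21
```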
Our knowledge about the association between TILs and AR is still limited. In a large cohort study of non-metastatic TNBC of the LAR subtype, this tumour subset was found to exhibit lower median stromal TILs and to be less likely LPBC (≥50% TILs) compared with non-LAR tumours, although this did not reach statistical significance [28], similar to our study. However, we did not examine the genetic profiles of our AR+ tumours to classify them into the LAR subtype. Other reports using IHC described a significant association between AR expression and lower levels of stromal TILs [11,17].
Studies about the composition of immune cell subsets within TILs according to AR expression are very scarce. In our study, median CD20 was significantly higher in AR− tumours compared with AR+ tumours (20% versus 7.5%, respectively, p = 0.008), whereas mean CD3 was significantly lower in AR− versus AR+ (80.7% versus 93.3%, respectively, p = 0.007). On the other hand, previous publications reported that CD8+ cells were more frequent in AR+ than AR− tumours [12,29,30], in contrast to our study, which showed that neither CD8 nor CD4 differed statistically between AR+ and AR− tumours.
Based on two large-scale BC genomics datasets, evidence from a comprehensive analysis of 26 immune gene sets covering 15 immune cell types and functions suggested that TNBC has the strongest tumour immunogenicity. Comparison of the immune infiltrate densities of different immune cell subpopulations demonstrated a higher degree of infiltration in TNBC than in non-TNBC, including CD3, CD8 and CD20, among others [31].
T-lymphocytes represent the main lymphocyte type in the tumour microenvironment, and the majority of T lymphocytes express a cytotoxic effector phenotype (CD8+). Intra-tumoural and adjacent stromal CD8+ T-cell infiltration has been found to be significantly associated with ER negativity and the basal phenotype [32,33]. Infiltrating CD8+ T-cells have been reported in more than 60% of TNBC cases [33,34]. In our study, CD8 was expressed in 100% of the cases, with a mean expression of 73.4%.
The role of tumour-infiltrating B cells (CD20) as components of TILs in BC subtypes is still unclear. A positive correlation between higher numbers of total CD20+ B cells and ER and PR negativity, and basal phenotype has been reported [35]. In our study, CD20 was expressed in 90.3% of the tumours and its median expression was significantly higher in AR− versus AR+ TNBC (20% versus 7.5%, respectively, p = 0.008).
Using a digital pathology computational workflow to quantify the spatial patterns of five immune markers (CD3, CD4, CD8, CD20 and FoxP3) in TNBC, Mi et al [36] demonstrated positive correlations between CD3 and CD8 cells. Similarly, we also showed a significant positive correlation between CD3 and CD8. Data from a study that used multiplexed ion beam imaging to simultaneously quantify the in situ expression of 36 proteins in 41 patients with TNBC suggested that all patients with B cells also had CD4 and CD8 T cells [37]. In contrast, we found in our study significant inverse correlations between CD20 and CD8, as well as between CD20 and CD3.
Immune cell subpopulations in BC, representing innate immunity (natural killer, CD68+ and CD11c+ cells) and adaptive immunity (CD3+ cells (CD8+ or CD4+) and CD20+ cells) [38], are worth thorough evaluation in TNBC, with the aim of understanding their clinical implications for BC management. In a recent consensus report on the management of TNBC, the majority of the panellists concluded that more evidence to support the predictive value of TILs and its impact on clinical decisions is warranted [39].
As no prior studies have evaluated the exact relation between AR and TILs in this unique disease entity, TNBC, we chose to present all the statistical analyses and correlations we evaluated, in keeping with the aim of this exploratory study, which may help future research. This study mainly described the expression patterns of AR and TILs in TNBC. Moreover, the correlation between AR and the total and different TILs subpopulations was illustrated. TILs were evaluated by one pathologist who was blinded to the clinical characteristics, and according to the International Working Group recommendations. However, our findings should be interpreted carefully. The limitations of our study include: i) the retrospective nature, ii) the small sample size (despite which there were some significant correlations) and iii) the immaturity of the survival data due to the short follow-up duration (median: 39 months).
Conclusions
This study highlighted the probable relationship between AR and total and differential TILs expression in TNBC, as well as the clinico-pathological characteristics. Understanding the immune micro-environment in a subset of tumours with poor prognosis and few identified therapeutic targets, like TNBC, may pave the way for the advent of immunotherapy in specific groups of patients. Moreover, lower TILs density may identify a subpopulation of TNBC that warrants more radical regional LN management. The prognostic relevance and the potential predictive impact of AR and TILs in TNBC merit further evaluation in larger-scale studies.
Conflicts of interest
All authors declared no conflicts of interest.
Funding
None.
"Medicine",
"Biology"
] |
The Effects of Blends of Enugu Coal and Anthracite on Tin Smelting Using Nigerian Dogo Na Hauwa Cassiterite
The effects of blending Enugu coal and anthracite on tin smelting using Nigerian Dogo Na Hauwa cassiterite have been studied. The work utilized various blends ranging from 100% to 0% anthracite; the content of the Enugu coal in the blend varied from 5% to 100%. The tin metal recovery percentage for each batch of smelting with each blend was noted. Anthracite alone had the highest recovery of 71.90%, followed by the 5% blend of Enugu coal with anthracite. The results, however, showed that as the Enugu coal content of the blend increased, the recovery decreased. This equally affected the quality of the tin metal recovered by lowering the grade. The work recommends that, since the cost of production is the critical issue, the 5% - 15% range of Enugu coal should be used in preparing blends to bring down the cost of imported anthracite, which is put at $906.69 per ton. The use of 15% Enugu coal will lower the cost of imported anthracite by $136.00 per ton.
Introduction
The cost of energy in the extractive industry is on the increase on a daily basis in the world today. In Nigeria, for instance, the high cost of electricity, petroleum products and imported anthracite has caused most of the tin smelting companies in the country to close down [1]. This is a very ugly trend and has resulted in the loss of jobs and a falling standard of living in the country. Today a lot of people in Jos, Plateau State of Nigeria, who were working in tin smelting companies are without jobs and suffering seriously with their families [1]. The only way out of this problem is a concerted effort towards research. It has become necessary to develop a cost-effective approach to tin smelting in Nigeria. Areas in the production chain that are responsible for the high cost of tin smelting have to be reviewed, and data collected from most of the smelting companies around Jos revealed that a major contributor to the high cost of smelting tin ore is the input anthracite used as the reductant. This raw material is imported at a cost of $906.69 per ton [1] and contributes almost 50% of the cost of production per ton of tin produced. This high cost of anthracite and the need to increase local content in the smelting of cassiterite informed this research.
Nigeria has a large deposit of coal which was only well exploited during the colonial era; much of the period after the exit of the colonial masters has seen the mines idle without any activity [2]. This coal was in the past used for firing locomotive trains and the Oji River thermal power station, and for other heating activities in homes and factories [2]. Nigerian coals are sub-bituminous and bituminous in nature; they are not fully matured. The stages in the formation of coal from vegetable/plant (wood) matter are as follows: plant debris (wood) → peat → lignite → brown coal → sub-bituminous coal → bituminous coal → semi-anthracite → anthracite coal → graphite [3].
Anthracite is a naturally occurring coal and has a good purity [4] which is absent in most Nigerian coals. Some of the shortcomings of Nigerian coals include high moisture content, ash and volatile matter. Other problems are high sulphur content in some deposits and low fixed carbon [1,5].
Blending is a practice which has long been in use in the extractive industry, mainly to bring down cost and to improve the quality of the final mix. Blending is the mixing of two or more types of materials from two or more sources to even out variations in physical and/or chemical qualities to obtain a more uniform material of desired qualities over an extended period [4,6]. This means that different coals from various sources can be blended, just as an iron ore blend may be made up of ores from different sources or may also include materials such as coke breeze, flue dust, limestone, etc. [4,6].
Plateau State and indeed Nigeria have abundant tin ore deposits. As is well known, tin does not occur in nature as a free element; the main source is a mineral ore called cassiterite, with the chemical formula SnO2 [7]. Most of the tin ore mined in Nigeria is in alluvial form. The proven reserve of the mineral in the country is put at 300,000 MT [1,8]. The tin ore is concentrated before smelting; smelting reduces the ore to elemental tin in reverberatory furnaces at operating temperatures usually in the range of 1200°C - 1300°C. The smelted tin is then transferred to the refining unit, where it is brought to the commercially specified composition [1,9].
The objective of this work is to determine the effects of using a blended coal, made up of anthracite (imported) and Enugu coal (local), for extracting tin metal from the Dogo Na Hauwa cassiterite deposit near Jos, Plateau State, North-Central Nigeria.
Specifically, the work seeks to establish the effects of the blended coal on the cost, quality and yield of smelted cassiterite. It is particularly tailored to bring down the cost of production of tin smelting plants in Nigeria by using blends of local coal and high-purity imported anthracite.
Materials
The materials used for the work were high grade Cassiterite concentrate assaying 70% Tin (Sn) (Tin ore deposit located at Dogo Na Hauwa Village and Kudedu mines in Jos East Local Government area of Plateau state, North Central Nigeria), Anthracite (Sourced from Makeri Smelting Company, Jos), Enugu Coal (Sourced from National Metallurgical Development Centre, Jos), Limestone from Jakura mines, engine oil, diesel, and firewood.
Equipment
Some of the equipment used for the research work included: a 20 kg capacity reverberatory furnace, rambling metal rod, ladle, launder, siphon, mould, tapping and bleeding rod, tapping pot, Energy Dispersive X-Ray Fluorescence (ED-XRF) spectrometer, shovel, pyrometer, and weighing balance.
Methods
10 kg of high-grade cassiterite concentrate was mixed thoroughly with 800 g of reductant (100% anthracite for the first charge) and 400 g of lime. These were charged into an already preheated furnace at 1400˚C for a soaking period of 8 hrs, after which rambling (stirring) with the rambling rod was done frequently, especially during the later stage. The firing continued for 4 hrs, until the concentrate melted enough to give a pool of molten tin. The unwanted impurities floating on top of the molten tin were tapped off using a tapping rod. This was followed by continuous heating of the furnace for another 2 hours, after which the final bleeding of the molten tin metal was carried out through the tapping hole of the furnace, and the metal was cast into an ingot. After the bleeding, firing of the furnace continued for another 2 hours until the slag became molten enough, at a temperature of 1450˚C, for slag tapping. The slag was solidified to granules in water tanks. The furnace was then cleaned for another round of smelting.
The tin metal and the slag recovered were weighed and the weights recorded. Subsequent charges of 10 kg cassiterite, 400 g of limestone and 800 g of reducing agent, the latter being blends of anthracite and Enugu coal in the proportions 95/5, 90/10, 85/15, 80/20, 75/25, 50/50 and 0/100, were then smelted, and the recovery and slag quantity were recorded for each.
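For clarity, the charge make-up and the potential reductant saving at each blend ratio can be tabulated with a short script. This is a minimal sketch; the assumption that the Enugu coal cost is negligible relative to imported anthracite is ours, made because the $136/ton saving quoted below matches 15% of the $906.67/ton anthracite price.

```python
# Charge make-up per smelt: 10 kg cassiterite, 400 g limestone,
# 800 g reductant split between anthracite and Enugu coal.
REDUCTANT_G = 800.0
ANTHRACITE_COST_PER_TON = 906.67  # USD per ton, as reported in the text

blend_ratios = [(100, 0), (95, 5), (90, 10), (85, 15),
                (80, 20), (75, 25), (50, 50), (0, 100)]

for anth_pct, coal_pct in blend_ratios:
    anth_g = REDUCTANT_G * anth_pct / 100.0
    coal_g = REDUCTANT_G * coal_pct / 100.0
    # Assumed: the saving comes from anthracite displaced by (near-free) local coal.
    saving = ANTHRACITE_COST_PER_TON * coal_pct / 100.0
    print(f"{anth_pct:>3}/{coal_pct:<3} blend: anthracite {anth_g:6.0f} g, "
          f"Enugu coal {coal_g:6.0f} g, saving ~${saving:6.2f} per ton of reductant")
```

At the recommended 15% substitution this reproduces the $136-per-ton figure reported in the discussion.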
Results
The chemical composition of the Nigerian Dogo Na Hauwa cassiterite concentrate, the proximate analysis and calorific value of the as-received anthracite/coal blends, the ultimate analysis on a moisture- and ash-free basis, and the chemical analysis of the Jakura limestone by wet analysis are presented in Tables 1-4 respectively. Figures 1 and 2 show graphically the effect of Enugu coal in the blend on the recovery and grade of tin metal respectively.
Discussion
The compositional analysis of the input raw materials used during the smelting operation is shown in Tables 1-4. Table 1 shows that the Dogo Na Hauwa tin concentrate has a cassiterite content of 85.4%, with a tin content close to the 78.6% obtained for the purest cassiterite mineral (SnO2) [9]. The impurities present were equally minimal; the silica content was 0.02%. The cassiterite is therefore high grade.

Tables 2 and 3 show the proximate and ultimate analyses of anthracite, Enugu coal and coke. Enugu coal has the highest moisture content, followed by anthracite, with coke having the least. This is expected considering the age and geological formation of the two coals, and coke, by the nature of its production, should have the least moisture content. High moisture content is undesirable since it reduces the calorific value of the fuel; this is confirmed by the calorific values of the coals in Table 2. It also increases the consumption of coal for heating purposes, although there may be instances where moisture is deliberately introduced to the coal [3,10]. Anthracite has the highest percentage of carbon (94.96%), followed by coke (82.4%), with Enugu coal having the least (66.7%); the coal with higher carbon content has the higher calorific value [3,10]. Enugu coal also has a higher ash content, 3.9%. High ash content in coal is undesirable because it makes the coal harder and stronger, lowers its calorific value [3,10], and produces more slag impurities in the furnace. Table 4 shows the chemical composition of the Jakura limestone used as the fluxing agent. It has a calcium oxide (CaO) content of 55.23% and few impurities, thereby reducing the likelihood of contaminating the tin metal produced.

Figure 1 shows the effect of adding Enugu coal to anthracite on the recovery of tin metal during smelting of Dogo Na Hauwa tin concentrate. The figure shows that as the proportion of Enugu coal in the blend increases, the recovery of tin metal from the concentrate decreases. This negative effect can be traced back to the composition of Enugu coal discussed earlier: it has low fixed carbon, high moisture, ash and volatile matter, and a low calorific value compared to anthracite. However, since the purpose of introducing Enugu coal is to bring down the cost of producing tin metal relative to using anthracite only, it is reasonable to choose a range of the blend where the recovery is not too low; this range is 5% - 15% addition of Enugu coal, which amounts to a saving of $136 per ton of anthracite used [11].

Figure 2 shows the effect of adding Enugu coal to anthracite on the grade/quality of the tin metal extracted. As Enugu coal is increased in the blend, the grade of the tin metal decreases, indicated by the fall of the curve from left to right. Increasing the quantity of Enugu coal against anthracite therefore has a negative effect on the grade and quality of the tin metal recovered, attributable to the quality of Enugu coal [1,5]. However, to address the cost of production, the 5% - 15% blending range with Enugu coal, which still produces a reasonable grade of tin metal, is recommended; it reduces the cost of importing a ton of anthracite by $136 [11].
Conclusions
The effect of blending Enugu coal with imported anthracite for the smelting of tin concentrate from Dogo Na Hauwa has been investigated. The conclusions drawn from the study are as follows: 1) As the quantity of Enugu coal in the blend increases, the recovery of tin metal decreases.
2) As the quantity of Enugu coal in the blend increases, the grade/quality of the tin metal recovered decreases.
3) Considering the cost of production, which is a critical issue for the survival of the smelting plants, it is recommended that smelting plants use between 5% and 15% Enugu coal in their blends for smelting tin. This will lead to a saving of $136 per ton of imported anthracite used, a good saving when compared with the cost of anthracite, which is put at $906.67 per ton. | 2,885.4 | 2013-11-26T00:00:00.000 | [
"Materials Science"
] |
Multimodal Brain Image Fusion using Graph Intelligence Method
Image fusion plays a vital role in enhancing the quality of images in medical applications. CT images of the brain show the details of the bone structure, while MRI images of the brain show the details of the soft tissue. The objective of this research is to fuse CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) images of normal brains and of tumour-affected brains, and to find the structural similarity (SSIM) of the fused image. Axial slices of normal brain and brain tumour images are taken for image fusion. In total, 24 brain images have been taken, of which 6 pairs are normal brain images and another 6 pairs are tumour-affected brain images. The techniques used are the graph-cut method for segmentation, the maximum method for fusion and the swarm intelligence method for optimization. The proposed fusion method increases SSIM (Structural Similarity) when compared to conventional fusion methods. The tumour size in the fused image is also extracted, and the fused image helps doctors analyse whether any tumour residue still exists in a post-radiotherapy or operated patient. The method also minimises the number of pixels and increases the information content in a single fused image, aiding the physician in analysing complementary details in a single image.
Image fusion
Image fusion can be stated as collecting all of the necessary data from various images and fusing it into a single fused image. The single fused image contains more informative data than any of the input images, and it contains all the mandatory data. The aim of image fusion is not only to reduce the volume of data but also to construct images that are more applicable and suitable for machine and human perception.
Basically, image fusion is defined as a process of combining particular information from two or more images into a single fused image. In remote sensing applications, increased data accessibility offers inspiration for numerous image fusion algorithms. Many tasks in image fusion need more spatial and spectral information, and no single instrument is capable of providing such information convincingly. An image fusion method permits the blending of assorted data sources, and the resultant image can have the corresponding spectral and spatial information characteristics. However, standard image fusion techniques can distort the spectral data of the inputs while combining them.
Medical Image Fusion
Image fusion has been a widely used tool for enhancing the quality of images in medical applications. Nowadays, medical image fusion has become a common term in the field of medical diagnostics and treatment. The term is used where there are a number of registered images of a patient from which it is difficult to diagnose the disease, and image fusion is used to fuse the registered images into a single fused image. Fusion may combine various images of the same modality or images from different modalities such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT). In the field of oncology especially, these fused images serve different purposes. For instance, CT images are used more often to discern differences in tissue density, whereas MRI images are used in the diagnosis of brain tumours.
Radiologists must collect a lot of data from various image formats for the proper diagnosis of patients. The output fused images have become especially beneficial in cancer treatment. Building on these new technologies, radiation oncologists take full advantage of Intensity Modulated Radiation Therapy (IMRT).
The conversion of diagnostic images results in high-accuracy IMRT target volumes. In the case of medical image fusion, reference metrics are more suitable than non-reference metrics for assessing the proper fusing of the output image.
Need For Image Fusion
The need for image fusion is to obtain a single fused image that is anatomically enhanced and spectrally more informative than a raw single scanned image. Image fusion can be performed on different medical images to obtain the fused image. In the case of disease, the fused image is helpful in extracting the size of the tumour. Both the anatomical structures and the pathological changes are preserved with effectively reduced distortion.
It decreases the volume of data, retains significant features, removes artifacts and provides an output image that is more suitable for interpretation. Image fusion can improve the reliability and capability of the images by complementing the information in the input images. Fusion also decreases the data storage required and the transmission time. In short, image fusion extracts the useful information from the source images and fuses it into a single fused image.
Types of Image fusion
Image Fusion can be broadly categorized as follows
Multi-view Image Fusion
This fusion is performed where the images to be fused are from the same modality and are taken at the same time but under different conditions. The main aim of this fusion is to bring all the complementary information captured under the different conditions into the output fused image.
Multi-temporal Image Fusion
This fusion is performed where the images to be fused are from the same modality but are taken at different times. In this fusion, the process works by subtracting the input images and detecting the changes in the image over time.
Multi-focus Image Fusion
The images are obtained from different modalities such as CT and MRI, at different resolutions, from IR and visible bands, and in distinct sizes. Therefore pre-processing is the initial phase in image fusion processing, and a registration process is used to align the images spatially. The fusing process starts by decomposing the image volume into frequency bands: the images are separated into frequency sub-bands using the dual-tree discrete wavelet transform. Each frequency band is analyzed by the fusion rules to determine which coefficients can be combined and which must be rejected as carrying the least information. To obtain the fused image, the inverse transform is applied, yielding an image that is much more informative than the single input images.
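A minimal sketch of this sub-band fusion idea, using a single-level 2-D DWT from the PyWavelets library rather than the dual-tree transform mentioned above (the fusion rules used here, averaging the approximation band and keeping the larger detail coefficients, are common illustrative choices, not necessarily those of the cited work):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db1"):
    """Fuse two registered, same-size grayscale images in the wavelet domain."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b.astype(float), wavelet)

    # Low-frequency (approximation) band: average the two inputs.
    cA = (cA_a + cA_b) / 2.0
    # High-frequency (detail) bands: keep the coefficient of larger magnitude.
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    details = (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b))

    # Inverse transform reconstructs the fused image.
    return pywt.idwt2((cA, details), wavelet)
```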
Multi-modal Image Fusion
Various modalities, such as CT and MRI images, can be fused. The CT image shows dense structures such as bones and hard tissues with little distortion but cannot capture physiological changes, while the MRI image gives information on normal and pathological soft tissues but not on bone. The fused image can also be used for classification or detection.
Pixel Level fusion
It is the combination of input data from multiple sources into an image with a single resolution, which is more informative than the source images. It is useful in remote sensing, medical applications and night vision applications (Figure 7). For example, images from different modalities can be fused to produce a more accurate result in medical imaging. Some of the prerequisites imposed on the output fused images are:
1. The fusing process should preserve all the salient data in the input images.
2. The process should not introduce any inconsistencies or artifacts.
3. The fusion process must be shift invariant.
Image fusion using the maximum method, image fusion using the minimum method and image fusion using averaging come under pixel level fusion. In the maximum method (Figure 2), if we take two images, the pixels in both images are compared and, at each location, the image with the higher pixel value supplies the output.
Figure 3: (a) Input MRI (b) Input CT image
Image Fusion Using Minimum Method

This method functions similarly to the maximum method, but it takes the lowest pixel values, ignores the other pixels and performs the fusion (Figure 3 and Figure 4). Images in which the relevant content has the lowest brightness can use this type of fusion.
Image Fusion Using Averaging Method
The images can be fused using the averaging method. This method takes two images, and the resultant image has the average pixels of both (Figure 5). The pixels of each image are added and divided by the number of images used (Figure 6). This is done for all pixel positions in the images to obtain the output fused image.
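The three pixel-level rules just described reduce to element-wise operations on registered, same-size grayscale arrays; a minimal NumPy sketch:

```python
import numpy as np

def fuse_max(a, b):
    # Maximum method: keep the brighter pixel at each location.
    return np.maximum(a, b)

def fuse_min(a, b):
    # Minimum method: keep the darker pixel at each location.
    return np.minimum(a, b)

def fuse_average(a, b):
    # Averaging method: pixel-wise mean of the two inputs.
    return (a.astype(float) + b.astype(float)) / 2.0
```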
Feature Level Fusion
This fusion extracts multiple features in an image, such as corners, parameters, edges and textures. It collects the different features from multiple images and combines them into a single image having one or more features. This is helpful when the fused result serves as input data for further image processing, where the image can be segmented and colour can be detected.

This level of fusion comprises four steps:
1. Transforming the input images into multiresolution images.
2. Extracting the regions of different coefficients.
3. Computing simple features in all the regions.
4. Combining the regions with the same features and applying the inverse transformation.

The comparison of the three types of fusion at different levels is shown below in Table 1.
Decision level Fusion
This type of fusion fuses the results from multiple techniques to give a final fused decision. If the results are expressed as confidences rather than decisions, it is called soft fusion; otherwise it is called hard fusion. This level of fusion encompasses statistical methods, fuzzy-logic-based methods and voting methods.
The advantages and disadvantages of the existing image fusion techniques are shown in Table 2 below
Literature Survey
In (Abdulkareem, 2018), the paper presents a development of multimodality image fusion using the Discrete Wavelet Transform; the output fused image is highly anatomical and carries spectral information, and such images can be used in the clinical diagnosis of patients. In (Li et al., 2017), the paper presents advanced data fusion using both MRI and MRSI for brain tumour patients to improve the differentiation accuracy of the tumour over MRSI alone.
In (Bosco et al., 2017), the paper shows that fusion of the images yields valuable reciprocal data: CT gives better information on denser tissue with less distortion, while MRI offers better information on fragile tissue with more distortion. In (Suthakar et al., 2014), the paper explains the different types of image fusion and the techniques useful to the concept of image fusion. In (Miles et al., 2013), the paper explores CT and MRI spine image combination calculated based upon graph cuts; the approach enables doctors to evaluate the delicate tissue and hard details in a single picture, removing the mental alignment and correlation required when both CT and MR pictures are needed for diagnosis. In (Altaf, 2015), the paper inspects the utility of registering images from CT and MRI and analyses the gross tumour volume depicted in the data sets of the fused image independently; delineation of the GTV by combining the two imaging modalities can give a critical difference. In (Haddadpour et al., 2017), the paper explains the fusion of PET and MRI images using the 2-D Hilbert transform, with the main objective of preserving the spectral and spatial features of the input source images. In (Xia et al., 2017), the paper explains research on image combination technology for showing the position of the electrodes compared with postoperative MRI; the position of the electrodes was highly correlated between the combination and postoperative MRI, and the CT-MRI combination pictures could be used to avoid the potential dangers of MRI after DBS in patients with PD. In (Eapen et al., 2015), the paper proposes a swarm-intelligence-motivated, edge-adaptive weight function for managing the energy minimization of the conventional graph cut model; the model is validated both qualitatively (by clinicians and radiologists) and quantitatively on publicly available computed tomography (CT) datasets, and the experimental results illustrate the efficiency and adequacy of the proposed strategy. In (Bhavana and Krishnappa, 2016), the paper presents a detailed literature review of image fusion, together with the concepts and materials that help in understanding the various fusion techniques. Fusing various modalities of images in the medical field into a single image with more detailed anatomical information and high spectral information is highly desired in clinical diagnosis.
Slices of the Brain
There are three planes along which slices of the brain are obtained:
1. Coronal plane
2. Horizontal plane
3. Sagittal plane

1. The coronal plane is also called the frontal plane. This plane cuts slices of the brain similar to the slices cut from a loaf of bread.
2. The horizontal plane cuts like cutting a bagel or a hamburger bun. This plane is also known as the axial slice.
3. The sagittal plane divides the brain into left and right parts. It cuts the brain like cutting a potato down its middle.
Axial Slice of Brain
In the axial plane, there are 24 slices of the brain. The axial slice has been chosen because it covers all the parts of the brain and shows many tissues in a single slice.
CT Scan
A Computed Tomography (CT) scan, earlier known as a Computerised Axial Tomography (CAT) scan, makes use of computer-processed combinations of X-ray measurements taken from different angles to produce cross-sectional (tomographic) images. More detail is shown in a CT scan than in an X-ray. Structural details of the bones, the pelvis, blood vessels, brain, lungs and heart can be collected from a CT scan.
Three-dimensional volume images are generated from multiple two-dimensional radiographic images using digital geometry processing. Medical imaging is the most widespread application of X-ray and CT scanning.
A CT scan uses a narrow X-ray beam that circles around parts of the human body through 360 degrees, collecting a number of images from different angles. A CT scan produces information that can detect bone and joint problems. In the case of cancer or heart disease, a CT scan can locate the affected region. It can also detect the presence of a tumour, its size and how much it has affected the nearby tissues.
The advantages of CT scanning are short scan time, high resolution, a wide scanning area and high penetration depth. Applications of CT scanning include tumour simulation, brain diagnostics and treatment, tumour detection, deep brain stimulation and brain tumour surgery.
MRI Scan
Magnetic Resonance Imaging (MRI) is a widely used technique in radiology which gives the details of soft tissues. MRI scanners use strong magnetic fields and radio frequencies rather than the ionizing radiation used in X-ray and CT scans.

Because of the hazards of X-rays, MRI has proved to be a better choice than CT in medical imaging. It is used in hospitals and clinics for medical diagnosis and for staging disease, without exposing the human body to radiation that would create risk. MRI scans take longer and are much louder than CT scans, and people with some medical implants or other non-removable metal inside the body may be unable to undergo an MRI safely.

The magnetic field strength of an MRI machine is measured in tesla (T). The majority of clinical MRI is performed at 1.5-3 T; these machines produce an extremely strong magnetic field, up to 50,000 times that of the Earth's. An MRI system consists of the primary magnet, gradient magnets, radio frequency (RF) coils and the computer system. In clinical and research MRI, hydrogen atoms are widely used to generate a detectable radio frequency signal that is received by antennas.

Hydrogen atoms are naturally abundant in people and other biological organisms, particularly in water and fat; hence most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localise the signal in space. Different contrasts may be generated between tissues based on the relaxation properties of the hydrogen atoms by varying the parameters of the pulse sequence.
Proposed image fusion technique
The proposed graph intelligence method consists of the following steps (Figure 8).
Pre-processing
Median filtering is a nonlinear method used to remove noise from images. It is widely used because it is very effective at removing noise while preserving edges; it is particularly effective against 'salt-and-pepper' noise. The median filter works by moving through the image pixel by pixel, replacing each value with the median value of the neighbouring pixels. The pattern of neighbours is called the "window", which slides, pixel by pixel, over the entire image. The median is calculated by first sorting all the pixel values from the window into numerical order and then replacing the pixel being considered with the middle (median) value.
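A didactic NumPy sketch of the sliding-window median filter just described (handling borders by reflection is an assumption on our part; in practice scipy.ndimage.median_filter would be used):

```python
import numpy as np

def median_filter(image, window=3):
    """Replace each pixel with the median of its window x window neighbourhood."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")  # reflect borders
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Take the middle value of the sorted window contents.
            out[i, j] = np.median(padded[i:i + window, j:j + window])
    return out

noisy = np.array([[10, 10, 10], [10, 255, 10], [10, 10, 10]])  # one 'salt' pixel
print(median_filter(noisy))  # the outlier 255 is replaced by 10
```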
Segmentation using graph-cut
Segmentation is an important part of image analysis. It refers to the process of partitioning an image into multiple segments. Image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The goal of segmentation is to
Figure 11: (a) Input MRI image (b) CT images
simplify and change the representation of an image into something that is more meaningful and easier to analyse. The image is segmented by constructing a graph such that the minimal cut of this graph severs all the edges connecting pixels of different objects.
Fusion using averaging method
The images can be fused using the averaging method. This method takes two images, and the resultant image has the average pixels of both. The pixels of each image are added and divided by the number of images used; this is done for all pixel positions to obtain the output fused image.
Optimisation using swarm intelligence
The fused output is optimised using a swarm intelligence method. Within swarm intelligence, Particle Swarm Optimisation (PSO) is one of the techniques used to optimise the output fused image.
Graph Cut Algorithm
The study of graphs is called graph theory (Figure 9). A graph is an abstract representation of a set of objects in which pairs of objects are connected by links; it is a mathematical structure used to model pair-wise relations between objects from a certain collection.
We introduce some definitions to give a more mathematical representation of graphs. In a graph G = (V, E), let V and E denote the vertices and edges of G. A weighted directed graph associates a label (weight) with each edge in the graph; it consists of a set of vertices V and a set of ordered pairs of edges. A weighted directed graph with two identified nodes is called an s-t graph; the identified nodes are the source s and the sink t. An s-t cut c(s, t) in a graph G is a set of edges E_cut such that there is no path from the source to the sink when E_cut is removed from G. The sum of the edge weights in E_cut is the cost of the cut. A flow is a mapping f : E → R+, (u, v) → f(u, v), which fulfills the conservation of flows and the capacity constraint. The value of the flow is defined by |f| = Σ_{v∈V} f(s, v), where s is the source of the graph; it denotes the amount of flow passing from the source to the sink.
The maximum flow problem is to maximize |f|, that is, to route as much flow as possible from the source to the sink. The minimum cut problem is to minimize c(S, T), i.e. to discover an s-t cut with minimal cost. The max-flow min-cut theorem states that the maximum value of an s-t flow is equal to the minimum weight of an s-t cut. The objective is to segment an image by building a graph such that the minimal cut of this graph will cut all the edges connecting the pixels of various objects with one another.
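The theorem can be checked numerically on a toy network; a minimal sketch using networkx (the graph and its capacities are invented for illustration):

```python
import networkx as nx

# Small s-t network with edge capacities.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
assert flow_value == cut_value  # max s-t flow equals min s-t cut weight
print(flow_value, source_side, sink_side)  # 5.0 and the two node sets
```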
The number of pixels grouped together based on similarity is known as a LABEL.
Figure 12: Output Fused Image using (a) Maximum method (b) Minimum method (c) Average method (d) Graph-cut & Swarm Intelligence
This graph cut algorithm takes a minimum cut, that is, the minimum number of edges used to represent the label. It reduces the number of edges needed to represent a label.
Algorithm for graph cut steps
Step 1: Input image.
Step 2: Separation of foreground and background.
Step 3: Weights are assigned between adjacent pixels in the foreground based upon their connectivity to the foreground.
Step 4: Weight values are high for pixels in the region of interest.
Step 5: Weight values are low for pixels not in the region of interest.
Step 6: Pixels are segmented based on the minimum cut and maximum flow (maximum information provided in the fused image) (Figure 10).
Swarm intelligence
Swarm intelligence is used to optimize the image segmented by the preceding algorithms. In image processing it focuses on the extraction of features.
Two of the most important algorithms are
1. Ant Colony Optimization (ACO)
2. Particle Swarm Optimization (PSO)
Ant colony optimization
Ant Colony Optimization (ACO) is a class of optimization algorithms modelled on the actions of an ant colony. Artificial 'ants' (simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment; the simulated ants similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee.
Particle swarm optimization
Particle swarm optimization (PSO) is a global optimization algorithm for dealing with problems in which the best solution can be represented as a point or surface in an n-dimensional space.

Hypotheses are plotted in this space and seeded with an initial velocity, as well as a communication channel between the particles. Particles then move through the solution space and are evaluated according to some fitness criterion after each time step. Over time, particles are accelerated towards those particles within their communication grouping that have better fitness values. The main advantage of such an approach over other global minimization strategies, such as simulated annealing, is that the large number of members that make up the particle swarm makes the technique impressively resilient to the problem of local minima.
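A compact sketch of the standard PSO update rules described above (the inertia and acceleration coefficients are common textbook values, not necessarily those used in the paper):

```python
import numpy as np

def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise `fitness` with a basic global-best particle swarm."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # positions (hypotheses)
    v = np.zeros_like(x)                            # seeded velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # swarm (global) best

    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Accelerate towards personal and global bests (communication grouping).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(fitness, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: float(np.sum(p ** 2)))  # sphere function, optimum at 0
print(best, val)
```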
Input Images
Source: Brain tumour CT and MRI images collected from The Whole Brain Atlas, www.med.harvard.edu/aanlib/
Structural Similarity Index
The Structural Similarity (SSIM) index is a quality assessment index based on the computation of three terms, namely the luminance term, the contrast term and the structural term. The overall index is a multiplicative combination of the three terms.
If α = β = γ = 1 (the default exponents) and C3 = C2/2 (the default selection of C3), the index simplifies to:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

Discussion

Twelve sets of images have been taken for fusion: 6 sets are brain tissue images and 6 sets are brain tumour images (Figure 11). The images have been taken from the axial slice of the brain because this plane covers all the parts of the brain and shows many tissues in a single slice. The fusion is done in order to obtain complementary information from the fused images (Figure 12). The pixel size of the tumour can be extracted from the output fused image (Figure 13). Image segmentation is done using the graph-cut algorithm, and the output is then optimised using swarm intelligence; Particle Swarm Optimization (PSO) is the technique used, and the Structural Similarity Index (SSIM) of the brain images is calculated (Tables 3 and 4). It is inferred that the graph cut results show better performance than the averaging method and the other two pixel-level fusion methods (Figure 14 and Figure 15). The graph cut method is efficient since it has high SSIM.
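For reference, the simplified SSIM index above can be evaluated directly; a minimal global (single-window) NumPy sketch, using the conventional constants C1 = (0.01L)^2 and C2 = (0.03L)^2 (an assumption on our part; production implementations compute SSIM over local windows and average):

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM between two same-size grayscale images."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x, y = x.astype(float), y.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(ssim_global(img, img))  # identical images give SSIM = 1.0
```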
CONCLUSIONS
The 12 sets of brain images have been taken for fusion; 6 sets are brain tissue images and the other 6 sets are brain tumour images. From the fused images, complementary details such as bone information and tissue information can be identified, which aids physicians in analysing the tumour details approximately. The proposed graph cut and swarm intelligence method minimises the pixel size in fusion and provides more information with fewer pixels. This method has been compared with the pixel-level methods, and its SSIM and SNR are found to be higher, meaning that more information similar to the original image is obtained in the fused image. The method can be extended by combining the graph cut algorithm with advanced swarm intelligence optimisation techniques.
"Medicine",
"Computer Science"
] |
Nonrelativistic giant magnons from Newton Cartan strings
We show nonrelativistic (NR) giant magnon dispersion relations by probing the torsional Newton-Cartan (TNC) geometry with (semi)classical nonrelativistic rigidly rotating strings. We construct NR sigma models over R × S^2 and consider two specific limiting cases that are of particular interest. Both of these limiting conditions give rise to what we identify as the small momentum limit of the giant magnon dispersion relation in the dual SMT at strong coupling. We further generalize our results in the presence of background NS-NS fluxes. Our analysis reveals that, unlike its relativistic counterpart, NR string theory lacks single spike solutions.
Overview and Motivation
The quest for a UV complete theory of nonrelativistic (NR) gravity gives rise to what is known as the NR formulation of string theory [1]-[2] over curved manifolds. Recent advances along this line of research reveal the existence of two parallel formulations of NR string sigma models which at first appear to be quite distinct from each other. One of these formulations goes under the name of the Newton-Cartan (NC) formulation of NR string sigma models [3]-[11], whereas the other approach is based on the so-called null reduction of Lorentzian manifolds, known as torsional Newton-Cartan (TNC) geometry 1 [12]-[24]. It has been conjectured that the 1/c limit of the TNC sigma model (obtained via null reduction of AdS_5 × S^5 (super)strings) is dual to the λ → 0 limit of the full N = 4 SYM spectrum on R × S^3, known as Spin Matrix Theory (SMT) [12], [15]. Keeping the spirit of the above conjectured duality, the purpose of the present article is to check whether the NR analogue of the relativistic giant magnon spectrum [25]-[31] is actually reproducible in the 1/c limit of TNC sigma models on R × S^2. Below we elaborate on this further, taking a specific example from N = 4 SYM.
The single spin magnon excitation in N = 4 SYM theory is expressed in the form of the dispersion relation [25],

$$E - J = \sqrt{1 + \frac{\lambda}{\pi^2}\sin^2\frac{p_m}{2}} \qquad (1)$$

where λ is the coupling constant in the dual gauge (SYM) theory. On the other hand, the effective 't Hooft coupling g of the SMT [15] is related to the N = 4 SYM coupling by the rescaling g = c^2 λ, where c(→ ∞) is the speed of light and |λ| ≪ 1 such that g is finite. Rewriting (1) in terms of the SMT coupling and considering the small momentum (|p_m| ≪ 1) limit, we find

$$E - J \sim \sqrt{1 + \frac{g}{4\pi^2}\left|\frac{p_m}{c}\right|^2} \qquad (2)$$

which in the limit of strong coupling (g ≫ 1) finally yields

$$E - J \sim \frac{\sqrt{g}}{2\pi}\left|\frac{p_m}{c}\right| \qquad (3)$$

where we understand the limit |p_m/c| ≪ 1 such that the R.H.S. of (3) remains finite at strong coupling. The purpose of the present paper is therefore to re-establish the above strong coupling result (3) from classical NR sigma model calculations on R × S^2 [22], where we identify the giant magnon momentum on the R.H.S. of (3) as an effective geometrical entity (|∆φ_m|) on the dual string theory side [26] at strong coupling.
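The step from (1) to (2) is a small-angle expansion; spelling it out (our own intermediate step, using the rescaling g = c²λ quoted above):

```latex
\sin^2\frac{p_m}{2} \;\approx\; \frac{p_m^2}{4} \quad (|p_m| \ll 1)
\;\;\Longrightarrow\;\;
1 + \frac{\lambda}{\pi^2}\sin^2\frac{p_m}{2}
\;\approx\; 1 + \frac{g/c^2}{\pi^2}\,\frac{p_m^2}{4}
\;=\; 1 + \frac{g}{4\pi^2}\left|\frac{p_m}{c}\right|^2 .
```

For g ≫ 1 the 1 under the square root becomes negligible, which gives (3).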
As a further continuation of our analysis, we explore the single spin giant magnon dispersion relation in the presence of background NS-NS fluxes. There have been several reasons for studying giant magnon as well as single spike solutions in the presence of NS-NS fluxes [30]-[31]. The most important one goes under the name of "flux stabilization", which states that the background NS-NS fluxes stabilize D-branes (wrapping S^2) against shrinking to zero size. The Page charge associated with such D-brane configurations is measured in terms of the flux through the two-sphere, namely Q_D ∼ ∫_{S^2} B, which takes quantized values on the world-volume of the D-brane. The other reason for considering NS-NS fluxes stems from the fact that the background magnetic field naturally breaks supersymmetry, as it couples differently to particles (magnons) of different spins [30]. Therefore such a configuration is more appropriate in the less supersymmetric setup with which we are concerned here.
Nonrelativistic magnons

2.1 NR sigma model
We start with the Nambu-Goto (NG) action for TNC strings [12] on R × S^2,

$$S_{NG} = \frac{\sqrt{\lambda}}{2\pi}\int d\tau\, d\sigma\, \mathcal{L}_{NG}\,; \qquad \sqrt{\lambda} = \frac{L^2}{\alpha'} \qquad (4)$$

where the corresponding Lagrangian density L_NG could be formally expressed as in [22]. Here, ζ is the additional compact direction associated with the null reduced target space geometry along which the string winds [12]-[13]. On the other hand, ε^{01} = −ε_{01} = 1 is the 2D Levi-Civita symbol, together with

$$\chi_\alpha = 2\partial_\alpha t + \partial_\alpha \psi - \cos\theta\, \partial_\alpha\phi. \qquad (6)$$

The next step would be to take the so-called large c(→ ∞) limit [12] associated with the world-sheet d.o.f., which thereby yields the NR sigma model together with a rescaled Lagrangian (density) of the form given in [22], where g = c^2 λ is the effective string tension [13] in the NR limit. Following the NR sigma model/SMT duality [15], we understand g(≫ 1) as the coupling constant on the dual gauge theory side such that the corresponding SYM coupling (λ) is considered to be weak.
In order to proceed further, we choose the following parametrization [28] for the rigidly rotating NR strings over R × S^2, which upon substitution into (8) yields (10). The equation of motion that readily follows from (10) could be formally expressed as (11). The above equation (11) could be integrated once to obtain (12), which is subject to the realization of the following constraints, where C is the constant of integration. In the subsequent analysis, we further impose constraints on the parameter space of solutions, namely κ ≪ 1 and C + κ ≪ 1.
The spectrum
Our next step would be to compute the conserved charges associated with the 2D sigma model. We first note down the energy associated with the NR stringy configuration (16). As in the relativistic example [28], requiring the above entity (16) to be real, we find the following two bounds on the azimuthal angle: (I) α²/β² < sin²(θ/2) < ξ²/η² and (II) ξ²/η² < sin²(θ/2) < α²/β². Keeping these facts in mind, our next step would be to explore the dispersion relations in the following two limits, namely |η| → ξ as well as |β| → α.
However, before getting into that, it is customary first to note down the second conserved quantity, namely the angular momentum (17), as well as the angular difference between the end points of the solitonic excitation (18). With the above set-up in hand, we are now in a position to explore various limiting conditions associated with the 2D sigma model. The first limiting case we are interested in is |η| → ξ. In order to explore this limit, it is first customary to rewrite (16) as (19), where we set the upper and lower limits of the integral². Clearly, the limit |η| → ξ stands for setting γ_max = π/2. Next, we implement the above limit into (12) and find (22). Using (22), the angular difference between the end points of the string soliton turns out to be³ (23).

² The physical picture that we have in mind is that of a string soliton wrapping the equator of S², whose excitations (magnons) travel with a specific momentum |p_m| ≪ 1.
³ Here we rescale the azimuthal angle, |∆φ_m| ≡ cos γ_min |∆φ|, which we further identify as the effective geometrical angle corresponding to the giant magnon momentum (|p_m| ≪ 1) in the dual SMT sector at strong coupling.
which for the specific value γ_min = π/3 yields (24). On the other hand, a straightforward computation reveals the dispersion relation (25), where we note down the constant shift (26). In order to identify (25) with the actual magnon dispersion relation (3), below we define the difference between the limits of the integral (27). A careful look at the stringy embedding further reveals that δγ ∼ |∆φ_m|/|∆φ|. In order to arrive at the NR magnon dispersion relation one must therefore take the limit⁴ |δγ| ≪ 1, which upon substitution into (26) yields ∆ ∼ δγ ∼ |∆φ| together with |∆φ_m| ∼ |p_m/c| ∼ (δγ)². Putting all these facts together, we finally arrive at the cherished giant magnon dispersion relation (28), which we interpret as the single spin giant NR magnon in the dual SMT at strong coupling. The other limiting condition we are interested in is given by (29) and (30), where we set |β| → α, which thereby yields γ_max ∼ π/2. Substituting this limit into (12), one exactly recovers (22) and thereby the original NR magnon dispersion relation (28). This observation is related to the underlying fact that, unlike the relativistic example [27]-[28], both the conserved charges (E and J_φ) as well as the deficit angle (∆φ) are invariant under the exchange of the limits of the integral, γ_max ↔ γ_min. The absence of single spike solutions [28] is what we identify as a special feature of NR string sigma models (over R × S^2) in contrast to their relativistic cousins.
Adding NS-NS fluxes

2.3.1 1/c limit
We now generalize our results in the presence of background fluxes [23], and in particular confine our attention to the NS-NS sector. Under these circumstances, the sigma model Lagrangian on R × S^2 could therefore be formally expressed as (31), where we identify the new NS-NS contribution [23] together with the following specification, where the X^M are the so-called embedding coordinates⁵, along with B_ζµ = 0, b_ζ = 1 [23]. Our next step would be to take the 1/c limit of (31). The scaling limits corresponding to the NS-NS fluxes are introduced as (33), together with [22] (34). Using (9) and (34), we finally obtain the NS-NS contribution in the NR limit⁶ (36), where we identify B̃ as the rescaled background magnetic field [30]-[31] with fluxes attached to S^2. From (36), it is quite evident that the background NS-NS fluxes do not affect the equation of motion corresponding to θ(σ), and hence, for example, the poles appearing above in (19) also remain unchanged.
Dispersion relation
Below we note down the conserved charges and their respective modifications due to the presence of background fluxes. To start with, notice that the energy (19) associated with the classical stringy configuration does not receive any correction due to the presence of NS-NS fluxes. Furthermore, the deficit angle (18) between the end points of the string remains unchanged too. Therefore the only change in the dispersion relation (28) appears through (17), namely (37), where we note down the constant shift in the angular momentum (38). Finally, using (27) and (38) and considering the geometrical analogue of the NR limit (|δγ| ≪ 1), we arrive at the giant magnon dispersion relation (39), where we identify the R.H.S. of (39) as the effective NR magnon momentum in the presence of background NS-NS fluxes.
Summary and final remarks
The conclusions and key observations of the present paper are as follows. We re-establish the strong coupling nonrelativistic (NR) giant magnon spectrum starting from NR rigidly rotating strings over torsional Newton-Cartan (TNC) geometry with R × S^2 topology [14]. We further generalize our results considering background NS-NS fluxes and obtain the effective NR magnon momentum in the presence of a background magnetic field. All these findings strongly point towards the NR sigma model/SMT correspondence [12]. However, quite surprisingly, we also notice that, unlike its relativistic counterpart [28], the NR sigma model lacks single spike solutions. We identify this as a special feature associated with the 1/c limit itself, which may be worth exploring further in the near future.
"Physics"
] |
Mathematical modeling of ferrous sulfate oxidation in presence of air
In the present paper we studied the oxidation of ferrous sulfate salt with oxygen. Ferric ammonium sulfate and ferrous sulfate were used to prepare standard Fe(III) and Fe(II) solutions. Oxidation experiments were carried out by dissolving FeSO4·7H2O in H2O. Air was supplied using a gas washing bottle in which air entered through the center tube and exited at the bottom of the bottle. Samples were taken periodically and analyzed in a UV-Vis spectrophotometer. We consider that basic Fe(III) sulfate was one of the main compounds produced during the reaction and propose a model to describe the process. We found solutions to the differential equations that describe the profiles of the FeSO4 and FeOHSO4 concentrations in time, and observed agreement between the experimental results and the data predicted by the model. Moreover, we determined values of the rate constants using the model and confirmed the determined values by means of experiments. This suggests that basic ferric sulfate is generated after aeration of a ferrous sulfate solution.
Introduction
Ferrous sulfate (FeSO4) is a compound that participates in a diversity of chemical reactions and processes, for example as a reducing agent or as a colorant in the dyeing and tannery industries; additionally, it is generally used to treat iron deficiency. It is stable at normal conditions of pressure and temperature, can be present in several hydrated forms which lose water upon contact with dry air, and can be oxidized to ferric sulfate in the presence of water. In a molecule of ferrous sulfate, 36.7% of the mass corresponds to iron. Iron can be found in soluble form in water as Fe(II) or Fe(OH)+, or in complexed form as Fe(III) or Fe(OH)3. The presence of these compounds in drinking water causes problems because it gives a reddish color, bad odor and unpleasant taste, and causes clogging [1][2][3]. For these reasons, several approaches have been reported for the separation of iron from solutions containing ferrous sulfate, namely electrocoagulation, ion exchange, ultrafiltration and oxidation.
The oxidation of ferrous sulfate generally leads to ferric sulfate in many cases. However, according to Reedy and Machin, ferrous sulfate is not easily oxidized completely to ferric sulfate in the presence of air, as experimental data demonstrate that the rate of reaction decreases very rapidly, requiring extremely long times for complete oxidation [4]. Research has also been performed using air in a hot solution of ferrous sulfate with precipitation of iron as basic sulfate. During the process, poorly soluble compounds such as Fe(III) oxides are formed. Moreover, it has been reported that the concentration of ferrous sulfate affects the reaction rate only slightly; the reaction is favored at neutral pH and high temperature. The precipitated salts are mainly basic salts, such as basic ferric sulfate and ferric hydroxide [4]. Tolchev et al. reported the precipitation of iron(III) oxides by addition of oxidants at specific conditions of pH, concentration, temperature, activity, feed rate and salt anion nature, using hydrogen peroxide [5]. Among all of these factors there is still a diversity of opinion about which are the most important for improving the oxidation rate and the formation of oxides. It has been reported that in acid solutions the addition of calcium hydroxide increases the oxidation rate, as does the use of air, which provides O2 transfer and supports the oxidation of iron(II) to iron(III) [6]. The formation of ferric hydroxide has also been reported to catalyze the oxidation at lower pH values [7]. Additionally, studies have been performed on the morphological properties of iron oxide and oxohydroxide particles, examining the shapes and sizes of particles grown from solutions under different precipitation conditions, namely pH adjustment, mixing and reaction temperature [8]. It is also important to consider that it has been challenging to predict accurately the precise effect of these parameters; therefore, a better comprehension of the effect of these variables on the oxidation and precipitation process, the compounds formed and the morphology of the iron oxides is needed [8][9].
The oxidation of ferrous ions in concentrated H2SO4-FeSO4 solutions was studied in a pressurized autoclave under isothermal and isobaric conditions [10], showing that temperature and pressure affect the oxidation rate. Kinetic parameters were estimated with nonlinear regression and the experimental data fitted the simulated values properly. During the oxidation, four electrons are transferred from Fe2+ ions to one molecule of oxygen. A mechanism for the oxidation in aqueous phase involving a peroxide ion as an intermediate was studied by Sykes [11]. Accordingly, iron peroxide complexes were reported by Wang et al. [12] using in situ FTIR, by the contact of H2-O2 or N2O with an Fe-Al-P-O catalyst prepared by a sol-gel method.
The reaction mechanism is the quantitative description of the dependence of the reaction rates on the concentrations of the reacting components. The description is generally given using a vector differential equation, whose solutions are the time-dependent concentrations of the oxidation intermediate products [13][14][15]. A Helios Alpha UV-Vis spectrophotometer from Thermo Fisher was used for iron quantification.
Methods.
Fe(III) standard solution was prepared by mixing 2.41 g of ferric ammonium sulfate with 20 mL of H2SO4, transferring into a 500 mL volumetric flask and filling up with deionized water. Accordingly, Fe(II) standard solutions were prepared using FeSO4·7H2O. Dilutions were prepared from the respective stock solutions and the absorbance at 400 nm was recorded. Oxidation experiments were carried out by mixing 19.46 g of FeSO4·7H2O in 100 mL H2O. Air was then supplied using a gas washing bottle in which air entered through the center tube and exited at the bottom of the bottle. Samples were taken periodically every 30 minutes and analyzed in the UV-Vis spectrophotometer.
Results
The summary of results is presented in Table 1, describing the variation of the Fe(III) concentration of each sample. The experimental results were compared to the modeling data obtained from equations (6) and (12), and the comparison is shown in Figure 1. As can be seen, the concentration of Fe(III) increases over time and remains stable after 2 h of aeration, and agreement between the experimental results and the model was observed. Additionally, dimensionless concentration values for the experiment and the mathematical model were calculated and are presented in Table 2. The value of the rate constant k can be determined using equation (12) or (6), which yields k = 1.73. This value can also be obtained by linearization of equation (6).
Fig. 2. Linear plot of the equation that describes the variation of concentration C i in time τ
A similar procedure can be performed using linearization of equation (12), starting from its inverse.
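Since equations (6) and (12) are not reproduced in this text, the shape of the predicted profiles can still be illustrated with the simplest model consistent with the description: a first-order conversion of FeSO4 into basic ferric sulfate with the reported constant k = 1.73. This is our own assumption, not the paper's exact model:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.73  # rate constant reported in the text (per unit dimensionless time tau)

def rhs(tau, C):
    # Assumed first-order scheme: FeSO4 -> FeOHSO4 (basic ferric sulfate).
    c_fe2, c_fe3 = C
    return [-k * c_fe2, k * c_fe2]

sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], dense_output=True)
for tau in np.linspace(0.0, 3.0, 7):
    c_fe2, c_fe3 = sol.sol(tau)
    print(f"tau={tau:.1f}  Fe(II)={c_fe2:.3f}  Fe(III)={c_fe3:.3f}")
# Fe(III) rises and then plateaus, qualitatively matching the ~2 h
# saturation of the measured Fe(III) concentration in Figure 1.
```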
Conclusions
In the present paper, we aimed to study the oxidation of ferrous sulfate salt with oxygen. We proposed that one of the main products is basic Fe(III) sulfate. We solved the differential equations that describe the profiles of the FeSO4 and FeOHSO4 concentrations in time and observed agreement between the experimental data and the results predicted by the model. Moreover, the values of the rate constant calculated by both methods were very close. These results confirm that basic ferric sulfate is generated after aeration of a ferrous sulfate solution. This information could be important for a better understanding of the reaction mechanism in oxidation reactions and in purification steps.
"Mathematics",
"Chemistry",
"Environmental Science"
] |
MANORS AND SCATTERED FARMS: SPECIAL SETTLEMENT FORMS OF OUTSKIRT AREAS IN HUNGARY
Manors and scattered farms: special settlement forms of outskirt areas in Hungary. The Hungarian settlement network is varied and multiple. Despite the country's small territory, we can find many area-specific settlement forms. These settlement forms are usually not independent municipalities but mostly occupy the outer areas of towns and villages. In this study we demonstrate two of these special settlement forms: scattered farms and manors. Scattered farms are sporadic, lonely settlements of the Great Hungarian Plain which are now centres of agricultural work and economic activity in general, but which used to serve as winter shelters for the livestock. Most of the manors are found in Transdanubia. Their leading function is agriculture, but among others we also found manors with industrial, health and tourism functions.
Introduction
The settlement network of Hungary has many special characteristics, the majority of which, at least in traces, still serve as a very good, tangible basis for their in-depth analysis and for the mapping of their changes. Despite the relatively small geographical extension of the country, the characteristic features within the settlement network usually coincide with rather definite spatial segregation. The reasons for this are to be found in the history of the Hungarian nation, the orography of the country, the farming habits, the settlement order and traditions of the different ethnic groups living in Hungary, and not least in the settlement policy changing from time to time.
The present essay focuses on two dominant elements of the Hungarian settlement network. Both are products of different periods of the Middle Ages, were born in large numbers, and show a development path well demonstrated by the subsequent phases of birth, maturity, and decline; although in much decreased numbers and in most cases after a change of their original functions, they still live in the shadow of their glorious days gone by. These two types of settlements are usually not independent municipalities; they did not become sovereign during their history, and they functioned and still function as auxiliary settlements. One of them is the so-called scattered farm, most typical of the Great Hungarian Plain; the other, found more typically in Transdanubia, is the manor. In our analysis we demonstrate major socio-economic differences between the two.
Scattered farms
The most general definition of scattered farms is provided by István Györffy: in his words, scattered farms are the sporadic, lonely settlements of the Great Hungarian Plain, which are centres of agricultural work and generally of economic activity now, but which used to serve as winter shelters for the livestock. Scattered farms are not a type of sovereign settlement; they belong, together with their estates, to a town or a large village (Becsei 2001, 155). Györffy's definition was in fact taken over by Ferenc Erdei when he defined the characteristic features of scattered farms as follows: they 1. are lonely settlements, buildings or groups of buildings located outside the closed blocks of towns or villages; 2. serve agricultural or, more generally, smallholders' purposes, i.e. they are locations of animal husbandry, field cultivation, forestry or fishing; 3. are the dwelling places of those active in production for a shorter or longer time, but never simply places of permanent settlement (Becsei 2001, 155).
The general conditions allowing the birth of scattered farms were as follows:
• Large outskirt areas of settlements that were impossible to cultivate intensively and economically from the inner parts;
• The need or constraint of intensive farming (cereals production, later viticulture and fruit production);
• The refusal or prohibition of final settling out from the towns (i.e. the establishment of new villages in the more distant outer areas of existing settlements), for different reasons (insistence on the rights and advantages gained in the country boroughs, or the insistence of the community of the country boroughs on keeping their inhabitants for taxation and fiscal purposes);
• Individual ownership of (one part of) the towns' outskirts and free land use (Beluszky 1999, 98).
Fig. 1: A scattered farm in the Great Hungarian Plain.
Scattered farms are most often seen as successors of the "outskirts gardens" having gone through a change of function. Outskirts gardens were land areas in private use and appeared as early as the 16th and 17th centuries. They originally served the purposes of animal husbandry: they were winter shelters for the livestock taken out from the common herds or flocks, and the places where fodder was collected and stored and manure was used to cultivate the land. In other words, animal husbandry was accompanied by the cultivation of the land. If the cultivation of the land and stable-based, indoor animal husbandry became more important in the farming structure of these dwellings, i.e. when a more permanent settlement took place, a scattered farm was born (Beluszky 1999, 100). The first scattered farms were thus economic units established in the outskirts gardens, dividing the vast pastures of the "puszta", the waste land (Frisnyák 1990, 86). The majority of the scattered farms were later established independently of the outskirts gardens, when it became necessary or possible to create "farming centres" on the outskirts (e.g. after the formerly common lands became private holdings).
The manor
It is a settlement form even more ancient in its look than the scattered farm; its appearance and penetration also precedes that of the scattered farms by some 200 years. Although manors were also established in the Great Hungarian Plain in large numbers (e.g. in Békés county), they were basically a special residential and economic unit typical of Transdanubia. On the basis of its development, a manor is a double concept: on the one hand, it means a piece of land that is the management and administrative centre of a large estate; on the other hand, it is a form of settlement, i.e. the residential place of the farming workers or even the owner of the estate (Balogh and Bajmócy 2011, 13). Manors in their initial form appeared in the early or mid-13th century, but their appearance in large numbers only took place in the 16th century. The majority of lands was in private property in Hungary by the 12th century. Estates were scattered all over the place, which was due to the typically self-sustaining farming. Different branches of agriculture (plough lands, orchards, vineyards etc.) all required different types of soil, so it was natural that different parts of the estate were in areas of different endowments (Herber, Martos, Moss and Tisza 2002, 184). In the privately owned lands, so-called praediums were established, which were the scenes of economic activity; i.e. they can be considered the economic units of the landowner, but they also served as residential places of the people working there. The praediums were inhabited by serfs who were obliged to do boon work for their landowners (Kristó, Barta and Gergely 2002, 87). The praediums thus contained some economic site of the landowner (a stable, a barn, a workshop etc.), so in its original meaning a praedium was an economic plant.

In the first half of the 13th century this kind of working organisation was strikingly declining, as the serfs living there were uninterested in production, as opposed to the more and more widespread serf plots which came to Hungary from Western Europe (the very first datum of such a unit is from 1214), used by families possessing a house and land. They harvested the crop themselves and paid a contribution in kind to the owner of the estate. If the serfs fulfilled their obligations to their landowner, they could not be deprived of their land (Kristó, Barta and Gergely 2002, 88). The largest part of the praediums thus disintegrated and peasant farms were born in their stead; landowners hardly kept any land, right until the early 16th century, for their own farming purposes. If they ever did so, they had these lands cultivated by serfs and, in a smaller proportion, day labourers; i.e. the "prototypes" of manors appeared (Frisnyák 1990, 20). Their size hardly exceeded that of the serfs' sites. As these manors were organised in the "stead" of the former landowners' economic units, in many references the term 'praedium' was still used for a long time, but with a totally different meaning: it meant a piece of land and not a landowner's estate. After some time even this expression went out of use, replaced by the term 'manor' (Balogh and Bajmócy 2011, 14).
Similarly to the scattered farms, manors mostly occupied the outer areas of towns and villages; a smaller part of them have by now become parts of the respective settlements, and we can even find manors which have become administratively independent settlements. On the whole, a manor is a spatial unit with usually 10 to 50 inhabitants, located on the outskirts most of the time, segregated from the other elements of the Hungarian settlement network both in its birth and in its original morphology, which initially functioned as the management and administrative centre of a large estate and as the residential place of the people working there (Balogh and Bajmócy 2011, 15). A significant difference between scattered farms and manors is that in its classic age a manor always meant an area around the castle or, in the case of a less affluent landowner, the mansion of its owner, with an area ranging from a few hundred acres to thousands of acres, including the totality of the cultivated lands and the settlement. In the case of scattered farms this is unknown; scattered farms had much closer ties to the towns in whose outskirts they were located.
1. Scattered farms
The history of the scattered farms is a sequence of continuous transformations, decays and rebirths (Becsei 2001, 156). The system of scattered farms on plough lands was actually established by the mid-18th century. During the 18th and 19th centuries scattered farms as settlements and economic units were the largest sporadic settlements in Europe (Frisnyák 1990, 86). The further development of the scattered farms can be demonstrated with the change of the residential functions of the farms (Beluszky 1999, 102):
1. In the beginning, only "sleeping places" were established on the outskirts, without more durable buildings, and family members only lived there in the season of agricultural works.
2. Later more durable buildings were erected and wells were dug, so the family members could move to the farms for the summer months.
3. A more intensive form of livestock breeding using stables required the permanent stay of some member of the family on the farm. More durable and heatable buildings and heated pig pens were built.
4. The separation of the residential house of the farm and the stable allowed the longer stay of the family on the farm, but they did not sell their homes in the town. It was typical for the families to move into the town houses for the winter months.
5. Finally, from the late 19th century, people of the farms gave up their houses in the towns and the scattered farms became real sporadic settlements (Beluszky 1999, 102).
River regulations also contributed significantly to the penetration of scattered farms. Regulations doubled the extent of arable lands, but this was not accompanied by the birth of new villages; the areas saved from floods increased the territories of existing country boroughs and villages. The owners possessing lands in these now flood-free areas, often located at a distance of 20 to 25 kilometres from the towns, were only able to cultivate them if they moved there permanently, i.e. established scattered farms. The period from the turn of the 19th and 20th centuries until the end of World War II is a new era in the life of the scattered farms. The number of the permanent population of farms kept on increasing. Thereby the character of the scattered farms changed from being auxiliary settlements; the birth of sporadic settlements with permanent population became typical. In addition, a new form of farms, the lease farm, appeared (Becsei 2001, 160). After 1945 the destruction and differentiation of the system of scattered farms started. The collectivisation of agriculture, the preference for urban settlements, the radical fall in agricultural employment, and the penetration of industry and then services led to a decrease in the number of inhabitants living on the outskirts (Tab. 1).
2. Manors
From the 16th century, the extension of lands under the landowners' own management started to increase. The Hungarian manors, however, were not as important at this time, due to the shortage of labour typical of Hungary, as their Czech, Polish or East German counterparts. The manors established in the estates of the landowners were not created at the cost of the peasants' lands, but on derelict, uncultivated or cleared lands. In addition, the boon work, and thereby the transfer of the technical level used by the serfs, blocked their development (Kristó, Barta and Gergely 2002, 237). What represented progress was, on the one hand, the spatially more optimal location of the manors, determined by the transport tracks and market centres of the time; on the other hand, the introduction of many species of cultivated plants never known before, in addition to cereals, like Smyrna melon, Persian peaches, and several species of cherry, nut, strawberry, chestnut etc. (Frisnyák 1990, 42). After the 18th century, expropriations of the serf plots contributed more and more often to the growth of the manors. After the liberation of the serfs, and induced by the growing demand for food, a new solution had to be found for the effective cultivation of the lands. This solution was the farming of the manors. Landowners settled their liberated serfs on the lands of their manors (as paid servants) and they went on cultivating these lands. The notion of manor thus expanded from the second half of the 1800s: manors as settlements were born. Manors as pieces of land and as settlements were present in landowners' estates right until 1945. On the one hand, a manor was the piece of land owned by the landowner and cultivated by the descendants of the liberated serfs and the day labourers of the nearby villages; on the other hand, it was also a settlement, with a special society and agriculture-related economic activity (Pócsi, Bajmócy and Józsa 2008, 323). After World War II, following the distribution of lands in 1945, the manor as a piece of land lost its reason for existence and survived as a settlement type. Parallel to this, their decline and decay started. The utilisation of the former demesne lands and their buildings, provided that they still existed, brought a rather strong differentiation of their functions.
In the early 20th century, manors were one of the most populous types of outskirt settlements in the Carpathian Basin (Balogh and Bajmócy 2011, 20). In the territory of historical Hungary, the Census of 1900 identified approximately 8,000 manors, and the Census of 1910 registered 6,000 of them. The distribution of the manors, however, was far from balanced in the Carpathian Basin. In 1910, half of the manors (3,030) were in Transdanubia. Another 1,400 manors existed on the other side of the Danube, in the northern areas, in the western half of Upper North Hungary. In addition, a significant number of manors could be found in the Danube-Tisza mid-region (210), in the Northern Middle Mountains (430) and in the Banat region (440). The Transdanubian majority of manors is shown by the fact that in 1910 Somogy county had the largest number of them (approximately 11% of all), while other counties with large numbers of manors included Tolna, Fejér, Veszprém, Vas and Zala (Tab. 2).
In 1910, a total of 431 thousand people, i.e. 2.4% of the population of Hungary, lived in manors, which means that one in every forty persons was an inhabitant of a manor. Of them, 233 thousand (54%) lived in Transdanubia (Balogh and Bajmócy 2011, 21).
Tab. 2: The number of manors in the counties with the largest number of manors in the territory of the historical Hungary, 1910.
The present and future of scattered farms and manors
4.1. Scattered farms: a case study from the Homokhátság (The Sand Hills)

In 2005 the Hungarian government assigned the VÁTI Hungarian Public Nonprofit Company for Regional Development and Town Planning and the Great Plain Research Institute of the Centre for Regional Studies, Hungarian Academy of Sciences, to explore the situation of the areas accommodating scattered farms and to map their development possibilities. The target area of the survey was 104 settlements in the so-called Homokhátság (The Sand Hills) area. The Homokhátság, situated in the Danube-Tisza mid-region, is not a selected area on its own; however, it is one of the most active fields of research on scattered farms. A significant proportion of all Hungarian scattered farms can be found here, accommodating approximately half of the total population of these farms. During the survey the scattered farms were also typified, identifying the following categories (Csatári and Jávor 2005, 14):
A. Scattered farms gone by
B. Scattered farms with economic functions (28% of existing farms in the Homokhátság)
C. Scattered farms with residential functions (50%)
D. Uninhabited farms (22%)

A. Scattered farms gone by: territories of former farms whose buildings have collapsed by now; their place has been occupied by field cultivation (e.g. plough lands) or other activity (Fig. 3).

B. Scattered farms with economic functions: farms where economic activity is done either on its own (without residential function) or together with a residential function. This type is one of the viable groups of scattered farms. The following sub-types can be identified (Csatári and Jávor 2005, 15):
1. farms engaged in small-scale agricultural production (71% of the farms with economic functions) (Fig. 4);
2. farms engaged in large-scale agricultural production (13%);
3. agricultural self-sustenance without residential functions (4%);
4. farms engaged in rural tourism (2%);
5. farms engaged in other economic activities (10%).

C. Scattered farms with residential functions: farms without economic activity but with a residential function (Fig. 5). Farms with residential functions can be:
1. farms with residential functions, possibly with agricultural self-sustenance as an auxiliary activity (44% of farms with residential functions); they make up the other group of viable farms;
2. farms inhabited by elderly people, those with financial problems, or the homeless (41%);
3. hobby farms (15%).

Although manors, similarly to scattered farms, are usually located on the outskirts of towns and villages, there are 9 allodiums in West Transdanubia that have become sovereign settlements by now. All of them are in the category with a substantial number of inhabitants. On the other hand, a significant proportion of the outskirts with original manor buildings are often inhabited by disadvantaged, impoverished social layers.

C. Present functions of manors: typifying manors on this ground is an extremely complicated task, as the ways in which the outskirts formerly operating as manors are used today are rather varied; in addition, in the larger part of them we often find 2-3 functions mixed with each other. (This is why the total of the proportions of manors belonging to the respective categories exceeds 100%: as a consequence of multiple functions, one unit may belong to more than one category.) Of the 184 establishments in the survey, 91% have some function (Balogh and Bajmócy 2011, 72). The main subtypes are as follows:
1. Manors with residential functions only: 27% of the units in the survey.
2. Agricultural function: in 42% of the manors we find agricultural activity. It is usually combined with residential functions but can also be the exclusive function. Within agricultural activity, animal husbandry is more frequent than plant cultivation. The buildings used can be old demesne buildings as well as brand new ones (Fig. 8).
3. Industrial function can be found in 7.5% of the manors. It is more typical of the ones with a substantial number of inhabitants; it appears in only two cases without permanent local labour force, and never as a sole function. The industrial activities pursued are extremely varied: wood processing, metal industry, construction materials, printing, packaging, food processing etc. (Balogh and Bajmócy 2011, 73).
4. Tourism is an economic activity in 13.5% of the manors. This is mostly the provision of accommodation (Fig. 9) or equestrian schools; in fact, the two can be combined in some cases. It is usually not the original manor buildings that are used, but it happens in some cases, especially for keeping horses. In five manors, one in Győr-Moson-Sopron and four in Zala county, touristic activity is a function on its own (wellness, equestrian schools, animal petting, reserve, holiday resort).
5. Basic services (in 12.5% of the manors) are typical in the former outskirt areas with the largest population, often functioning as sovereign settlements by now. Coming from the nature of the function, it must always be accompanied by a residential function. The contribution to the improvement of local living conditions can be a grocery, a pub, a church, a local government office, maybe a post office.
6. The 'other' category includes a wide range of activities, including intellectual, transport, nature protection, social, sports and recreation activities. These services can be found in 11% of the manors. It is especially the ones with social care functions that utilise authentic manor buildings, especially castles and mansions. A permanent population is not an absolute necessity, as in many cases those in search of recreation are served by holiday homes, weekend gardens or excursion facilities (Balogh and Bajmócy 2011, 73).
Fig. 7: A manor house in bad condition on the outskirts of Mikosszéplak. Source: Balogh and Bajmócy 2011.
Tab. 1: Changes in the number of outskirts population in the Great Hungarian Plain.
"Economics",
"History"
] |
DJ-1 Alleviates Neuroinflammation and the Related Blood-Spinal Cord Barrier Destruction by Suppressing NLRP3 Inflammasome Activation via SOCS1/Rac1/ROS Pathway in a Rat Model of Traumatic Spinal Cord Injury
DJ-1 has been shown to play essential roles in neuronal protection and anti-inflammation in nervous system diseases. This study aimed to explore how DJ-1 regulates neuroinflammation after traumatic spinal cord injury (t-SCI). The rat model of spinal cord injury was established by the clamping method. The Basso, Beattie, Bresnahan (BBB) score and the inclined plane test (IPT) were used to evaluate neurological function. Western blot was applied to test the levels of DJ-1, NLRP3, SOCS1, and related proinflammatory factors (cleaved caspase 1, IL-1β and IL-18); the ROS level was also examined. The distribution of DJ-1 was assessed by immunofluorescence staining (IF). BSCB integrity was assessed by the levels of MMP-9 and tight junction proteins (Claudin-5, Occludin and ZO-1). We found that DJ-1 was significantly elevated after t-SCI and was mainly located in neurons. Knockdown of DJ-1 with specific siRNA aggravated NLRP3 inflammasome-related neuroinflammation and strengthened the disruption of BSCB integrity. However, the upregulation of DJ-1 by sodium benzoate (SB) reversed these effects and improved neurological function. Furthermore, SOCS1 siRNA attenuated the neuroprotective effects of DJ-1 and increased ROS, Rac1 and NLRP3 levels. In conclusion, DJ-1 may alleviate neuroinflammation and the related BSCB destruction after t-SCI by suppressing NLRP3 inflammasome activation via the SOCS1/Rac1/ROS pathway. DJ-1 shows potential as a feasible target for mediating neuroinflammation after t-SCI.
Introduction
Traumatic spinal cord injury (t-SCI) caused by events such as a sports accident, traffic accident or a fall from height often results in persistent and severe motor and sensory dysfunction, leading to a reduced quality of life and a heavy medical burden on families and society [1]. Existing clinical treatments for t-SCI can partially reduce damage, but their long-term effects are limited [2]. Neuroinflammation is one of the essential pathological reactions after t-SCI and is regarded as a significant determinant of consequent neurological outcomes [3][4][5]. Previous studies have found that suppressing the neuroinflammation response could inhibit the generation and expansion of t-SCI induced secondary tissue damage to improve locomotor function recovery after t-SCI [6]. Therefore, it is necessary to further understand the complex inflammation cascade mechanism after t-SCI and find new targets for regulating neuroinflammation [7].
Drug and Small Interfering RNA Administration
Sodium benzoate (SB) (100 mg/kg, Sigma-Aldrich, St. Louis, MO, USA) was diluted in 100 µL of water and administered by gavage 1 h after t-SCI. DJ-1 siRNA mixtures, SOCS1 siRNA mixtures, and scramble siRNA were obtained from Thermo Fisher Scientific (Waltham, MA, USA). Entranster™-in vivo transfection reagent delivered the siRNA according to the manufacturer's recommendations. Intrathecal injection of the siRNA solution was performed 48 h before the operation: the needle tip was inserted into the subarachnoid space between L5 and L6, and the solution was injected at a rate of 2 µL/min. The rats in the sham group underwent the same procedure but were not injected with the drug.
Experimental Procedures
Experiment 1
To detect changes in expression and distribution of DJ-1 after t-SCI, the rats were randomly divided into the sham group and t-SCI groups with different time points (3 h, 6 h, 12 h, 24 h, 48 h, and 72 h) (n = 6). The location of DJ-1 was assessed by double immunofluorescence staining in the sham and t-SCI 24 h groups (n = 6).
Experiment 3
To examine the effect of DJ-1 on neurological function, the Basso, Beattie, and Bresnahan (BBB) score and inclined plane test (IPT) were determined after t-SCI (n = 6). Rats were divided into the sham group and t-SCI groups with vehicle, SB, and DJ-1 siRNA, respectively (n = 6).
Experiment 4
To further analyze the pathway of DJ-1-mediated NLRP3-related inflammation, we introduced the SOCS1 siRNA to inhibit the expression of SOCS1. Rats were assigned into five groups: sham group and t-SCI with vehicle, SB, SOCS1 siRNA, and SB + SOCS1 siRNA, respectively (n = 6). The expressions of DJ-1, SOCS1, and NLRP3 were detected by Western blotting in each group (n = 6). Rac1 activities and ROS levels were detected in these groups, respectively. The experimental design and animal groups are shown in Figure 1.
Long-Term Neurological Function Analysis
The BBB score was used to evaluate hind limb locomotor function. The score ranges from 0 to 21; higher scores indicate better motor function [22]. The IPT was performed using a board secured at one end, with the free edge of the board gradually raised to increase the angle of the incline. The maximum angle at which a rat maintained stability for at least 5 s was recorded as the inclined plane test angle. These two tests were conducted at 3, 7, 14, and 21 days following surgery.
Western Blot
We used an NE-PER Nuclear and Cytoplasmic Extraction Kit (Thermo, Rockford, IL, USA) to extract nuclear and cytoplasmic proteins.
Equal amounts of protein from each sample were loaded, separated by sodium dodecyl sulfate (SDS)-polyacrylamide gel electrophoresis, and transferred onto PVDF membranes (Bio-Rad Laboratories, Hercules, CA, USA). First, the membrane was incubated with the corresponding primary antibody overnight at 4 °C. Then, the secondary antibody (1:10,000, Zhongshan Jinqiao ZB-2301 or ZB-2305) was incubated at room temperature for 1 h, and bands were visualized with an ECL kit (Thermo Scientific, Waltham, MA, USA). The band density of the observed proteins was quantitatively analyzed using ImageJ software (NIH).
Cellular Immunofluorescence
A 0.5 cm segment of the spinal cord was taken from the center of the injury site. An axial frozen section (20 µm) of the spinal cord, 2 mm from the center of the injury site, was incubated with anti-DJ-1 (1:125, AB76008, Abcam, Waltham, MA, USA) and anti-Iba1 (1:500, Abcam AB5076) antibodies at 4 °C overnight. Following this, the samples were reacted with secondary antibodies (1:500, Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA) at 25 °C for 2 h. Finally, DAPI (1 µg/mL, Roche Inc., Basel, Switzerland) was used to stain the nuclei before mounting. A fluorescence microscope and Photoshop 13.0 software (Adobe Systems Inc., San Jose, CA, USA) were used to observe the tissue sections and for photograph post-processing. Six sections were obtained from each sample, and one random grey matter field per section was used to count cell numbers at 200× magnification. DJ-1 expression is expressed as the mean ratio of DJ-1-positive cells to total cells in each group. Neuroinflammation was evaluated by the proportion of Iba1-positive cells in each group.
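A minimal sketch of the quantification just described, computing the mean positive-cell ratio over the six sections of one sample; the cell counts are hypothetical placeholders, not the study's data.

```python
import numpy as np

# Hypothetical counts from six sections of one sample: DJ-1-positive
# cells and total DAPI-stained cells in one random grey-matter field
# per section, counted at 200x magnification.
dj1_positive = np.array([42, 38, 51, 47, 40, 45])
total_cells = np.array([118, 105, 131, 124, 110, 119])

# Expression is reported as the mean ratio of positive to total cells.
ratios = dj1_positive / total_cells
print(f"DJ-1 positive ratio: {ratios.mean():.3f} ± {ratios.std(ddof=1):.3f}")
```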
Rac1 Activation Assay
Rac1 activity was detected using a Rac1 Activation Assay Kit (ab211161, Abcam) according to the product instructions. GST-PBD (the p21-binding domain of PAK) was used for the Rac1 activity assays. Samples were lysed in ice-cold Mg2+ lysis buffer and centrifuged for 5 min at 13,000× g at 4 °C. Next, 40 µL of supernatant was taken to determine the total Rac1 level. The remaining supernatants were incubated with GST-PBD on glutathione-sepharose beads and rotated at 4 °C for 2 h. The beads were washed extensively in lysis buffer, and the bound proteins were separated by SDS-PAGE and then immunoblotted with anti-Rac1 antibodies.
Measurement of ROS Levels
A ROS assay kit (Nanjing, China) was used to detect ROS levels according to the manufacturer's protocol. A 0.5 cm-long spinal cord sample was taken from the center of the injured site and rinsed with 0.1 mol/L PBS, and the samples were then weighed and homogenized in PBS (1 g : 20 mL). After that, the mixtures were centrifuged at 1000× g for 10 min at 4 °C. The supernatant (190 µL) and DCFH-DA (10 µL, 1 mol/L) were added to 96-well plates and, as a control, the same volume of PBS was mixed with the supernatant. The plates were then read at an emission wavelength of 525 nm and an excitation wavelength of 500 nm on a plate reader (BioTek Instruments Inc., Winooski, VT, USA). ROS levels are expressed as fluorescence/mg protein.
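The normalization just described reduces to a simple per-sample computation; the sketch below uses hypothetical plate readings and protein amounts, with the PBS control well subtracted as background.

```python
# Hypothetical plate readings: DCFH-DA fluorescence (excitation 500 nm /
# emission 525 nm), the paired PBS control well, and protein per sample.
samples = {
    "sham":       {"fluor": 1520.0, "pbs_blank": 210.0, "protein_mg": 1.8},
    "t-SCI 24 h": {"fluor": 4380.0, "pbs_blank": 215.0, "protein_mg": 1.7},
}

for name, s in samples.items():
    # ROS level is expressed as fluorescence per mg protein, after
    # subtracting the PBS control run alongside each sample.
    ros = (s["fluor"] - s["pbs_blank"]) / s["protein_mg"]
    print(f"{name}: {ros:.1f} fluorescence units / mg protein")
```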
Statistical Analysis
Data were represented as mean ± standard deviation (SD). Statistical analysis was carried out with SPSS (version 25.0) and GraphPad Prism for Windows (GraphPad, Inc., San Diego, CA, USA). One-way analysis of variance (ANOVA) was adopted to analyze the statistical differences among groups. p < 0.05 was considered to be statistically significant.
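The Statistical Analysis section specifies one-way ANOVA with p < 0.05 as the significance threshold; a minimal sketch of that comparison using scipy, on hypothetical band densities (the group names and values are placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical normalized band densities for n = 6 rats per group.
sham = np.array([0.31, 0.28, 0.35, 0.30, 0.33, 0.29])
vehicle = np.array([0.92, 1.05, 0.88, 0.97, 1.01, 0.95])
sb = np.array([0.55, 0.61, 0.49, 0.58, 0.52, 0.60])

# One-way ANOVA across the three groups, as in the Statistical Analysis
# section; p < 0.05 is taken as statistically significant.
f_stat, p_value = stats.f_oneway(sham, vehicle, sb)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```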
Temporal Patterns and Localization of DJ-1 after t-SCI
Western blotting indicated that the level of DJ-1 began to increase significantly at 3 h and peaked at 24 h post-injury compared with the sham group. The protein level of DJ-1 significantly decreased after 24 h post-injury but remained higher than that in the sham group (Figure 2A). Furthermore, DJ-1 was mainly located in neurons (Figure 2B).
Downregulation of DJ-1 Increased NLRP3 Inflammasome Activation and Destruction of the BSCB
Given that cleaved caspase 1, IL-1β, and IL-18 are common downstream molecules of the NLRP3 inflammasome and indicators of neuroinflammation, their levels were examined after upregulating and downregulating DJ-1. Western blotting showed that the protein levels of DJ-1, NLRP3, cleaved caspase 1, IL-1β, and IL-18 were significantly increased after t-SCI compared to the sham group. Treatment with DJ-1 siRNA significantly decreased DJ-1 and elevated NLRP3. The levels of the NLRP3-related downstream molecules (cleaved caspase 1, IL-1β, IL-18) exhibited an increase, indicating that the downregulation of DJ-1 could activate the NLRP3 inflammasome and aggravate neuroinflammation (Figure 3A-F).
Previous studies have demonstrated that neuroinflammation might lead to the destruction of the BSCB, which is usually accompanied by the degradation of tight junction proteins including Claudin-5, Occludin, and ZO-1. Our results showed that t-SCI promoted MMP-9 expression and reduced tight junction proteins, indicating destruction of the BSCB; after downregulating DJ-1, the expression of MMP-9 was higher, and the tight junction proteins were lower, than in the t-SCI + scramble siRNA group (Figure 3G-J).
SB Increased the Neuroprotective Effects of DJ-1 by Inhibiting NLRP3 Inflammasome Activation and Alleviating BSCB Disruption
Conversely, treatment with SB significantly increased DJ-1 and decreased NLRP3, cleaved caspase 1, IL-1β and IL-18 compared to the t-SCI + vehicle group. Furthermore, tight junction protein levels were higher, and MMP-9 was lower, in the t-SCI + SB group. These results indicate that SB enhanced the neuroprotective effects of DJ-1 in mediating NLRP3-related neuroinflammation and BSCB disruption. Additionally, DJ-1 siRNA reversed the effect of SB, as the protein levels displayed no obvious differences between the t-SCI + vehicle and t-SCI + SB + DJ-1 siRNA groups (p > 0.05, Figure 4A-G).
DJ-1 Improved Long-Term Neurological Function
In this part of the study, blinded BBB scores and IPT results were used to evaluate locomotor function. Throughout post-SCI recovery, BBB scores were higher in the t-SCI + SB group than in the t-SCI + vehicle group at each time point and were significantly higher on the 21st and 28th days post-injury (p < 0.05, Figure 5A). The angles of incline were larger in the t-SCI + SB group than in the t-SCI + vehicle group and showed a significant difference on the 28th day post-injury (p < 0.05, Figure 5B). These results show that the SB treatment group experienced much greater functional recovery after t-SCI, demonstrating that upregulating DJ-1 promoted long-term rather than short-term neurological function, whereas the effect of DJ-1 knockdown on neuromotor function was not significant.
DJ-1 Mediated Neuroinflammation through SOCS1
The next part of this study was conducted to determine whether the effects of DJ-1 in mediating the NLRP3 inflammasome during the pathological process following t-SCI were exerted through SOCS1. SOCS1 siRNA was introduced to inhibit SOCS1. Western blotting indicated that DJ-1 and SOCS1 were higher after SB treatment than in the t-SCI + vehicle group, whereas the administration of SOCS1 siRNA increased DJ-1, possibly through a negative feedback mechanism, indicating that SOCS1 could act as a downstream molecule of DJ-1 (Figure 6A-C).
The results revealed that SB decreased Rac1-GTP, ROS, and NLRP3; however, these effects were reversed by SOCS1 siRNA treatment (t-SCI + SB + SOCS1 siRNA vs. t-SCI + SB, Figure 6D-F). We also observed that reducing SOCS1 could eliminate the neuroprotective effect of DJ-1, since the levels of Rac1-GTP, ROS, and the NLRP3 inflammasome did not obviously differ in the t-SCI + SB + SOCS1 siRNA group (Figure 6D-F). The evidence above suggests that DJ-1 plays a role in mediating NLRP3 inflammasome-related neuroinflammation via elevating the protein level of SOCS1.
Discussion
In this study, we made the following major findings: (1) the expression of DJ-1 increased after t-SCI and was mainly located in neurons. (2) Knockdown of DJ-1 with specific siRNA significantly activated NLRP3 and its associated inflammatory cytokines, increased MMP-9, and reduced tight junction proteins, resulting in increased neuroinflammation and destruction of BSCB integrity. (3) Upregulation of DJ-1 reversed these effects and also improved neurological locomotor function. (4) SOCS1 siRNA abolished the neuroprotective effects of DJ-1 induced by SB and increased the levels of ROS, Rac1, and NLRP3. The working model for this study is shown in Figure 7.
Regulating post-injury inflammation to improve neurological prognosis is a hot topic in the field of spinal cord injury [23]. Methylprednisolone sodium succinate was one of the first drugs used clinically to prevent secondary damage, by stabilizing cell membranes and providing anti-inflammatory effects, but its clinical efficacy is limited and its side effects are unavoidable [24]. There have been many subsequent clinical studies targeting multiple drugs, including COX inhibitors [25], corticosteroids [26], and minocycline [27], but none of them have achieved satisfactory results. The underlying reason is that our understanding of the mechanism of inflammation is not yet comprehensive and clear, and appropriate targets and drugs have not been found.
DJ-1 is a small, ubiquitously expressed protein in the brain that exists as a homodimer in the cytoplasm, mitochondria, and nucleus [28]. The DJ-1 gene has been identified as a pathogenic gene in familial Parkinson's disease and was later found to exert neuroprotective effects in neurodegenerative diseases [29,30]. The suggested neuroprotective mechanisms of DJ-1 include reducing neuronal oxidative stress and attenuating microglial activation [31][32][33]. Our previous experiments found that DJ-1 could decrease oxidative stress related to nerve injury after t-SCI [34]. However, there are few studies on the mechanism by which DJ-1 mediates neuroinflammation in spinal cord injury. Our results indicate that DJ-1 increased after injury, peaked at 24 h, and was involved in the acute injury stage. Furthermore, the upregulation of DJ-1 decreased inflammatory cytokines, including IL-18, IL-1β, and cleaved caspase-1, and improved long-term neuromotor function.
To further analyze the potential mechanisms of DJ-1 mediating the inflammation, we explored the activation of the NLRP3 inflammasome, which controls the maturation and release of pro-inflammatory cytokines, especially IL-18, IL-1β, and caspase-1 [35].
The NLRP3 inflammasome is the best characterized member of the NLR family and has been found to participate in neurodegenerative and cerebrovascular diseases [36][37][38]. A previous study found that the NLRP3 inflammasome significantly increased after SCI [39] and that suppressing NLRP3 inflammasome activation could alleviate SCI-induced neuroinflammation and spinal nerve injury [40]. We found that the protein expression of NLRP3, caspase-1, IL-1β, and IL-18 increased after SCI, indicating that the NLRP3 inflammasome was activated. Downregulating DJ-1 increased the activation of the NLRP3 inflammasome, and SB reversed this effect. A previous study showed that the suppressor of cytokine signaling 1 (SOCS1) inhibited the activity of NLRP3 [41]. SOCS1 is a key physiological suppressor of cytokine signaling, found to weaken cytokine signals usually via the NF-κB, JAK2, and TLR4 pathways [42], as well as to regulate neuronal immunity in CNS cells [43]. SOCS1 also participates in oxidative stress by degrading active Rac1 and inhibiting the production of reactive oxygen species (ROS) [44], a vital trigger for the activation of the NLRP3 inflammasome [45]. Consistent with our predictions, the inhibition of SOCS1 increased DJ-1 through feedback, with DJ-1 acting upstream of SOCS1 and inhibiting the activation of the NLRP3 inflammasome by suppressing ROS.
Additionally, BSCB destruction is a main pathological change after t-SCI and results in a poor prognosis [46]. The regulatory and protective functions of the BSCB stem from a highly evolved, complex network of tight junction proteins, including ZO-1, Occludin, and Claudin-5 [47,48]. After t-SCI, tight junction proteins degrade and BSCB permeability increases [49], resulting in leukocyte infiltration and an inflammatory cascade reaction [50]. MMP-9 is a gelatinase secreted by infiltrating neutrophils and is also involved in this process, acting as a key mediator of early inflammation [51]. After t-SCI, neutrophils infiltrate and release MMP-9, which degrades the extracellular matrix, tight junction proteins, and the surrounding substrate composition [52]. In our study, MMP-9 was elevated and tight junction proteins (Claudin-5, Occludin and ZO-1) were reduced after t-SCI. Furthermore, the upregulation of DJ-1 alleviated BSCB damage by decreasing MMP-9 and increasing tight junction proteins. As also seen in previous research, the signaling pathways activated and the inflammatory cytokines released after NLRP3 inflammasome activation create a pro-inflammatory environment [53], which mediates MMP-9 and tight junction protein expression and promotes BSCB destruction [54].
Through this study, the potential role of DJ-1 was preliminarily explored, providing some theoretical reference for further clinical translation. However, there are several weaknesses in our study. DJ-1 has been shown to exert neuroprotective effects after t-SCI through multiple mechanisms, but we only focused on NLRP3 inflammasome-related neuroinflammation. The underlying mechanisms of how DJ-1 reduces BSCB disruption require more investigation, and in vitro studies should be completed in future research.
Conclusions
In conclusion, we have shown a previously uncharacterized signaling pathway in which DJ-1 played a neuroprotective role after t-SCI. Our data indicated that DJ-1 suppressed t-SCI induced neuroinflammation by inhibiting the activation of NLRP3 inflammasome via the SOCS1/Rac1/ROS pathway. Furthermore, the pharmacological upregulation of DJ-1 alleviated inflammasome-related BSCB destruction and promoted long-term neurological locomotor function. Altogether, DJ-1 might be a promising therapeutic target for t-SCI, and further studies are needed.
Institutional Review Board Statement:
The animal study protocol was approved by the ethics committee of Zhejiang University for studies involving animals.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
"Biology"
] |
An Orderly Untangling Model against Arching Effect in Emergency Evacuation Based on Equilibrium Partition of Crowd
To untangle the arching effect of a crowd as much as possible in emergency evacuations, we employ a theoretical model of equilibrium partition of crowd batches. Based on the shortest-time arrangement of evacuation, the crowd is divided into appropriate batches according to the occupied time of each evacuation channel in order to determine the occupant number of every evacuation passageway. The number in each crowd batch is calculated under the condition that the time of entering the evacuation passageway is equal to the time of crossing the evacuation passageway. Subsequently, the shortest processing time (SPT) rule establishes the evacuation order of each batch. Taking a canteen of China Three Gorges University as a background, we obtain by simulation the waiting time from the first person to the last one entering the evacuation channel in every batch. This research utilizes the simulation data to observe an untangling process against the arching effect based on the SPT rule. More specifically, the orderly evacuation lasts only 180.1 s, which is 1.6 s longer than the disorderly evacuation, but the arching effect disappears. Policy recommendations are offered to improve evacuation schemes in disaster operations.
Introduction
When a large number of people out of control swarm into an evacuation passageway, they become firmly stuck there and bring about an arching effect. The arching effect is produced frequently in emergency evacuations, and because of it crowd evacuation fails now and then. In 1994, during a fire at an art show at the Karamay oil field in China, a poor evacuation strategy caused 800 students to form an arch at the evacuation exit, and eventually 323 people died in the blaze. The arching effect in emergency evacuations is a problem that has never been fully resolved in emergency management research [1]. Therefore, untangling the arching effect is of great significance for crowd evacuation safety and for reducing the occurrence of secondary disasters such as trampling.
Up to now, many mathematical models have been developed to simulate the pedestrian evacuation process inside buildings. Classically, these models can be categorized into two types: continuous and discrete. The social force model [2][3][4] is a widely used, representative continuous model, in which the interactions between occupants and various environmental stimuli are quantified as forces. The representatives of the discrete models are the cellular automaton model [5] and the lattice gas model [6]. After calibration, these models can usually obtain satisfactory simulation results and are able to reproduce various behaviors and self-organization phenomena, such as "herding" behavior [7], the arching effect [8], and the "faster-is-slower" phenomenon [9].
Existing evacuation models have achieved effective analysis of personnel micro-movement and of the overall macroscopic traffic situation during evacuation. However, in real emergencies people are invariably affected by individual or group psychology, so their behavioral reactions, which can cause many uncertainties, are quite different from those in non-emergencies [10,11]. Hoogendoorn and Bovy [10] stated that individual behavior was affected by external factors, such as environmental stimuli and obstructions, and by internal factors, such as pedestrian intention and time constraints. Sime [11] presented an analytical model of pedestrian evacuation time in which the impact of multiple factors, such as pedestrian psychology, building structures, and evacuation facilities, on pedestrian route choice behavior was considered. Fridolf et al. [12] provided a review of previously reported fire accidents in underground transport systems, in addition to evaluating four different human behavior theories that could be used in the fire safety design of railway stations. Apart from the issues mentioned above, evacuation speed and counterflow studies have also received much attention [13,14]. A study in which an actual experiment was conducted with pedestrian counter flow was presented by Isobe et al. [15], who clarified the characteristic properties of pedestrian channel flow and presented a simulation model representing the counter flow. Cłapa et al. [16,17] confirmed the dependency between the flow density of evacuating people and their speed: the dependency was proportional, and the speed of movement rose when the people density decreased. Stoppage of evacuating people was caused by the willingness to pass through the rescue teams as quickly as possible.
In this paper, we develop an evacuation strategy for resolving the arching phenomenon. Using queuing theory, a quantitative analysis is conducted for the batching of a crowd. We propose equating the time of entering an evacuation passageway with the time of crossing it in order to divide the crowd into reasonable batches, and we determine the crowd evacuation order on the basis of the SPT ordering rule. We then verify whether the arching effect disappears within the required safe evacuation time.
Method
2.1. Orderly Partition Evacuation. In a general evacuation, all pedestrians flock to the exit at the same time; the evacuation capacity of the passageway cannot hold that load, so pedestrians get firmly stuck in the passageway. The longer the congestion time, the greater the risk of trampling.
In the partition evacuation process, the crowd is divided into several parts, and the congestion time is converted into waiting time, relieving potential hazards. When the crowd evacuation starts, the batch of pedestrians closest to the exit starts to escape from the disaster scene first, while the remaining batches stay where they are. All batches of pedestrians evacuate one by one. We thus determine whether the orderly partition evacuation is feasible under the critical conditions that the arching phenomenon disappears, that there is not much difference between the orderly partition evacuation time and the disorderly evacuation time, and that the orderly partition evacuation time is within the standard time.
Basic Parameters.
The evacuation time depends on the escape speed, so we must first determine the velocity. Generally, a single pedestrian's evacuation speed is affected by his or her own physical condition and by construction environment factors, and occupant density is a critical factor for crowd evacuation speed. As usual, the crowd evacuation parameters include the horizontal movement velocity, the stair movement velocity, and the flow coefficient [18].
(1) Horizontal Movement Velocity. When the population density is small, the escape speed is unaffected and keeps its initial value; when the density exceeds 0.55 pers/m², velocity becomes a linear function of density. The concrete formula (1) relates these quantities, where ρ is the crowd density, v_0 is the walking velocity in different buildings when the population density is less than 0.55 pers/m², and v_min is the minimum velocity in different environments.
(2) Movement Velocity on Stairs. The type of stair tread is a major factor affecting the evacuation speed; stair treads are characterized by the tread length l and the tread height h. In formula (1), the influence parameter of horizontal movement is 1.4. Through actual observation and data statistics, the influence parameter of stair movement is 0.86(l/h)^0.5, as in formula (2).
(3) Unit Outflow Coefficient. Egress capacity (flow rate) is expressed in persons per minute per unit width and is represented by the product of density and velocity,

f = ρ v, (3)

where the influence parameter takes the value 1.4 in horizontal channels and 0.86(l/h)^0.5 on stairs.
Partition Parameters.
According to Section 2.1, the two most important parameters in orderly partition evacuation are the number of pedestrians in each batch and the waiting time of each batch. We regard the partition evacuation system as a queuing system in order to analyze the relations among the evacuation parameters more easily. In this queuing system, the input procedure is deterministic and is composed of the waiting and walking behavior of each part. The service regulation is first in, first out (FIFO), and the service time is the time taken by each batch of people to pass through the passageway. The corresponding queuing model is shown in Table 1.
In the event of a disaster, when the crowd flocks to bottlenecks such as an escape staircase or evacuation passageway, the arching effect does not occur under the critical condition that the average cross time equals the average preparatory evacuation time:

t_c = t_p,  (4)

where t_c is the average cross time and t_p is the average preparatory evacuation time. The average cross time is the time during which the crowd crosses the evacuation passageway (stairs, corridors, etc.):

t_c = L / v,  (5)

where L is the length of the evacuation passageway and v is obtained from (1) and (2). The average preparatory evacuation time is the total number of evacuating occupants divided by the number entering the evacuation passageway per unit time:

t_p = N / (q·b),  (6)

where N is the total number of evacuating occupants, q is the unit flow coefficient of the evacuation passageway, and b is the effective width of the corridor or door at the entrance of the evacuation passageway.
Considering multiple evacuation routes [19], the total crowd evacuation time is shortest when the holding time of each evacuation passageway is the same (so that no evacuation passageway stands idle). Therefore, the number of people evacuating through each passageway is obtained from the following equilibrium equation:

N_i = N · (q_i·b_i) / Σ_j (q_j·b_j),  (7)

where N is the total number of evacuees, N_i is the number of people evacuating through the ith exit, q_i is the unit outflow coefficient of the ith exit, and b_i is the effective width of the ith exit.
To avoid the arching effect, the crowd partition model is applied after the crowd evacuation direction has been determined by the equilibrium equation (7). To obtain the number of people in each batch for a single exit, we take the balanced evacuation numbers for each exit and combine them with the critical condition (4), which gives

n_i = q_i·b_i·L_i / v_i,  (8)

where n_i is the number of people in each batch for the ith exit.
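A minimal sketch of the batch-size computation implied by (8); the function name is ours, and the numbers reproduce the right-hand stairs of the case study below:

```python
def batch_size(q: float, width_m: float, passage_length_m: float, velocity_m_s: float) -> int:
    """Batch size n = q * b * L / v, from equating the cross time (5) and the
    preparatory evacuation time (6), i.e. the critical condition (4)."""
    return round(q * width_m * passage_length_m / velocity_m_s)

# Right-hand stairs of the case study: q = 1.45 pers/(m*s), b = 2.57 m,
# L = 11.4 m of stairs + 2.6 m middle platform, v = 0.6 m/s.
n_right = batch_size(1.45, 2.57, 11.4 + 2.6, 0.6)
print(n_right)  # 87, matching the paper's batch size
```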
When one batch's occupants escape from the disaster scene, the remaining batches' occupants stay where they are. The waiting time is obtained from the condition that the batches of occupants escape end-to-end.
When the crowd flows, the population density changes with the surrounding environment. Therefore, the evacuation time can be calculated quantitatively only if the density of people in the evacuation passageway is known. This problem can be solved with the aid of virtual simulation: simulating the evacuation in batches yields the times at which the first and last persons of each batch enter the evacuation passageway, from which the waiting time of each batch is calculated as in (9), where t_0 is the time at which the first person of the jth batch enters the passageway, t_1 is the time at which the last person of the jth batch enters the passageway, and w_j is the waiting time of the jth batch; w_j is chosen so that the first person of batch j enters the passageway just as the last person of batch j-1 enters, in keeping with the end-to-end condition above.
The Evacuation Order.
The evacuation order of the batches can be treated as a single-channel scheduling problem with the objective

min Σ_i ω_i·C_i,  (10)

where C_i is the finish time of evacuation of the ith batch and ω_i is the weight of the ith batch.
For the single-channel scheduling problem, applying the WSPT (weighted shortest processing time first) rule yields the optimal evacuation order. In particular, the WSPT rule reduces to the SPT (shortest processing time first) rule when each batch is weighted equally.
Because each batch is weighted equally in crowd evacuation, we employ the SPT rule to obtain the evacuation order of the batches. Accordingly, the batches of occupants begin to evacuate in order of their evacuation times, from shortest to longest.
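In code, the SPT rule is a single sort; the batch labels and times below are illustrative placeholders, not values from the case study:

```python
# SPT rule: with equal weights, evacuate batches in increasing order of
# their (estimated) evacuation times.
batches = {"B1": 23.3, "B2": 25.1, "B3": 24.0, "B4": 19.8}  # seconds (hypothetical)
order = sorted(batches, key=batches.get)
print(" -> ".join(order))  # B4 -> B1 -> B3 -> B2
```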
The Basic Data and the Crowd Division.
For convenience, we take the XinYuan canteen of China Three Gorges University, a two-storey building, as the physical setting. We suppose that the dining room is saturated and every seat is occupied. The capacity of the canteen is 1200 diners, with each floor accommodating 600. Four seats are placed at each table, and each table has 0.6 m of left-right clearance and 1.2 m of fore-aft clearance from the next. There are two 2.7-meter-wide doors on either side of the dining room, through which occupants can reach the evacuation stairs. The effective widths of the left-hand and right-hand evacuation stairs are 2.1 m and 2.57 m, respectively. The treads of both stairs have identical run and rise, 0.4 m and 0.17 m respectively. The real scene is shown in Figure 1.
To ensure full use of the evacuation passageways, evacuation should finish at the same time on both side stairs, with neither passageway standing idle. Because the tread run and rise of both stairs are the same, the effective egress capacity per unit width is equal on both sides; the effective egress capacity is then proportional to the width of the stairs, and the first division obtains the number of people on each side according to the equilibrium equation (7), denoted (11). According to (11), the number of people evacuating through the left stairs is 268 (the left-hand area in Figure 2), and the number evacuating through the right stairs is 332 (the right-hand area in Figure 2).
After the first division, we partition the people on the right-hand side so that the crowd lines up in batches. To avoid the arching effect at the entrance of the evacuation passageway, the average passing rate must equal the average preparatory evacuation rate.
The total length of the right-side evacuation stairs is 11.4 m, and the length of the middle platform is 2.6 m. By actual observation, the density is 2.4 persons/m² when the crowd passes through the evacuation stairs. As a result, the movement speed on the stairs is 0.6 m/s according to (2), and according to (3) the unit flow coefficient in the evacuation passageway is 1.45 persons/(m·s). Owing to the short platform, the walking speed on the middle platform can also be taken as 0.6 m/s. The number of people in each batch on the right side then follows from (8):

n = q·b·(L_stairs + L_platform)/v = 1.45 × 2.57 × (11.4 + 2.6)/0.6 ≈ 87.  (12)

The right-hand crowd is thus divided into 87 occupants per batch. Of the 332 people egressing through the right stairs, the first three batches comprise 87 occupants each and the last batch 71; the batches are denoted R_i (i = 1, 2, 3, 4), as shown in Figure 2 and Table 2.
The people on the left-hand side are divided in the same way:

n_L = q·b_L·(l_s + l_p)/v,  (13)

where l_s is the length of the stairs on the left side, l_p is the length of the middle platform on the left side, and b_L is the effective width of the stairs on the left side.
When the crowd on the left side is divided into 76 persons per batch under the premise of (4), the 268 people evacuating on the left side form three batches of 76 persons each and one final batch of 40; the four batches are denoted L_i (i = 1, 2, 3, 4), as shown in Figure 2 and Table 3.
Waiting Time and Queuing Sequence.
The times at which the first and last persons of each batch begin to enter the stairs are obtained by simulation, and the waiting time of each batch is obtained from (9). The results are shown in Table 4.
Based on the SPT rule and the data in Table 4, the evacuation order is L_1 → L_2 → L_3 → L_4 on the left side and R_1 → R_2 → R_3 → R_4 on the right side.
Evacuation Simulation.
The simulation uses the data above, including basic parameters such as the occupant load of the facility, the characteristics of the population (e.g., shoulder breadth), the movement speed, the waiting time, and the evacuation direction of each batch.
The above analysis of batch partition and orderly evacuation applies only to the second storey. First, we must consider whether the first storey needs to be analyzed, i.e., whether the evacuation time of the second floor equals the evacuation time of the whole building. For the disorderly state with no interval time, the simulation result is shown in Table 5.
The evacuation time of the second floor is 179.5 s, and that of the whole building is 179.3 s. The discrepancy is caused by computer performance and program settings; its relative value of 0.11% can be ignored. Therefore, the second floor is identified as the key evacuation storey, which controls the emergency evacuation time of the whole building. The simulation results also show that the evacuation process on the first storey does not exhibit the arching effect. As a result, the orderly evacuation strategy needs to be established only for the crowd on the second floor. The simulation results comparing the orderly and disorderly evacuation times are shown in Table 6.
As shown in Table 6, the orderly evacuation time of the crowd after partition is 1.6 s longer than the disorderly evacuation time. From Figure 3(a), the occupant count at the right-hand passageway entrance varies in a wave-like manner under orderly evacuation, with a maximum of 27 people. Figure 3(b) shows that under disorderly evacuation the variation is trapezoidal: the number of people exceeds 30 during 20-120 s and even remains at about 60 during 50-110 s, much greater than the maximum under orderly evacuation.
The simulation results indicate that (1) the arching effect is serious under disorderly evacuation; (2) the maximum radius of the crowd arch is 7 meters; and (3) the maximum density is 4 occupants/m². In contrast, when the crowd escapes in the orderly evacuation queue, no arching effect appears at the stairs' entrance, as shown in Figure 4.
The critical density for trampling accidents is approximately 5 occupants/m² [20]. However, this critical figure is highly variable and depends strongly on the physical characteristics of the pedestrians involved. Thus, even though the maximum density in the simulation is below the critical density for trampling, the possibility of trampling remains high.
According to Hong Kong's Guidelines on Formulation of Fire Safety Requirements for New Railway Infrastructures (HK Fire Services Dept., 2013), there shall be adequate means of escape for all calculated populations under the worst-case scenario to escape safely from the fire scene to an adjacent place of safe passage within 4.5 minutes, without being overwhelmed by the effects of fire and smoke [21]. The simulated evacuation time of 181.1 s is well below this 4.5-minute (270 s) requirement.
Conclusions
We have developed a strategy for resolving the arching effect. The evacuation equilibrium equation is based on the principle that the occupation time of each evacuation exit is equal. We first divided the occupants with consideration of the widths of the evacuation passageways. Based on queuing theory, the crowd was then divided into batches according to the condition that the time of entering the evacuation passageway equals the time of crossing it. A simulation program then calculated the waiting time of each batch from the times at which the first and last persons enter the evacuation passageway. Finally, the SPT rule determined the evacuation sequence of the batches after partitioning.
The simulations show that the arching effect disappears, although orderly evacuation does not reduce the evacuation time; the reason may be that the propulsive force of the arching effect pushes occupants down the stairs faster. However, while the density in disorderly evacuation approaches the critical value for trampling, orderly evacuation keeps the occupant density at a low level throughout. We therefore conclude that orderly evacuation can reduce the risk of trampling.
This paper is a preliminary study of evacuation strategy. In future work, we will take further factors into account, such as the cluster effect, the diversity of individual behavior, and physical characteristics.
Figure 3: The different progress curves of evacuation.
Figure 4: Two kinds of evacuation state.
Table 1: Queue system of the crowd evacuation.
Table 2: Number of persons per batch on the right side.
Table 3: Number of persons per batch on the left side.
Table 4: Waiting time of each batch in line.
Table 5: The time comparison between the whole building and the second floor.
Table 6: The evacuation time comparison. | 4,711 | 2017-11-15T00:00:00.000 | [
"Computer Science"
] |
Foraging with MUSHROOMS: A Mixed-Integer Linear Programming Scheduler for Multimessenger Target of Opportunity Searches with the Zwicky Transient Facility
Electromagnetic follow-up of gravitational wave detections is very resource intensive, taking up hours of limited observation time on dozens of telescopes. Creating more efficient schedules for follow-up will lead to a commensurate increase in counterpart location efficiency without using more telescope time. Widely used in operations research and telescope scheduling, mixed integer linear programming (MILP) is a strong candidate to produce these higher-efficiency schedules, as it can make use of powerful commercial solvers that find globally optimal solutions to the problems they are given. We detail a new target of opportunity scheduling algorithm, designed with the Zwicky Transient Facility in mind, that uses mixed integer linear programming. We compare its performance to gwemopt, the tuned heuristic scheduler used by the Zwicky Transient Facility and other facilities during the third LIGO-Virgo gravitational wave observing run. This new algorithm uses variable-length observing blocks to enforce cadence requirements and ensure field observability, along with a secondary optimization step to minimize slew time. We show that by employing a hybrid method utilizing both this scheduler and gwemopt, the previous scheduler, in concert, we can achieve an average improvement in detection efficiency of 3%-11% over gwemopt alone for a simulated binary neutron star merger data set consistent with LIGO-Virgo's third observing run, highlighting the potential of mixed integer target of opportunity schedulers for future multimessenger follow-up surveys.
INTRODUCTION
The detection of GW170817 in August 2017 signaled the beginning of a new era of multimessenger astronomy, promising advances in r-process nucleosynthesis (e.g. Chornock et al. 2017; Coulter et al. 2017; Cowperthwaite et al. 2017; Pian et al. 2017), the neutron star equation of state (e.g. Abbott et al. 2018; Radice et al. 2018; Coughlin et al. 2019; Dietrich et al. 2020a), and the value of the Hubble constant (e.g. Hotokezaka et al. 2019; Dietrich et al. 2020a). This was thanks to the detection of GW170817 and its electromagnetic counterparts: a kilonova (ultraviolet/optical/near-IR emission generated by the radioactive decay of r-process elements) (e.g. Kasliwal et al. 2017; Evans et al. 2017; Kilpatrick et al. 2017; Pian et al. 2017; Shappee et al. 2017; Smartt et al. 2017), a short gamma-ray burst (e.g. Goldstein et al. 2017), and an afterglow (e.g. Hallinan et al. 2017; Troja et al. 2017). However, since then, no further electromagnetic counterparts to gravitational-wave detections have been confirmed, despite several other binary neutron star and neutron star-black hole mergers detected during LIGO-Virgo's third observing run (Abbott et al. 2021). This can mostly be explained by the localization areas of neutron-star-containing mergers being much larger than expected (e.g. Petrov et al. 2021), making efficient observation planning all the more important. With, on average, a much larger area to search than previously thought, there are many more choices for potential schedules, and it will take an optimal scheduler to maximize scientific output.
Fundamentally, telescope scheduling software determines which fields to observe in what order, subject to environmental and programmatic constraints. In the case of the follow-up of large sky localizations produced by gravitational-wave (e.g. Coughlin et al. 2019a; Anand et al. 2020) or gamma-ray burst (e.g. Coughlin et al. 2019b; Ahumada et al. 2021) events with wide field-of-view surveys such as the Zwicky Transient Facility (Bellm et al. 2019b; Graham et al. 2019; Dekany et al. 2020; Masci et al. 2018), the goal is usually to maximize an objective function, which is typically taken to be the integral of the probability skymap over the combined footprint of all the observations, although other choices are possible. These observations should also be completed in the minimal amount of time, as many different science programs time-share on the same telescope, and therefore any time saved can be utilized by other science programs (Bellm et al. 2019c).
While, in principle, this could be done manually, schedules designed this way are labor-intensive and suboptimal, and it is unclear how the heuristics translate to survey effectiveness. Another common approach is the use of "greedy" algorithms, which compute a metric or score for each possible target, select the target with the current highest value, observe it, and then repeat the process. This is a ubiquitous approach, adopted by commonly used packages ranging from Astroplan (Morris et al. 2018) to gwemopt (Coughlin et al. 2019c; Almualla et al. 2020) for a variety of purposes. Unfortunately, due to the inability to plan ahead, the fields chosen are not optimal; an optimal schedule accounts not only for the current possible observations but also for past and potential future observations, to maximize the scientific output from those observations, such as the Zwicky Transient Facility's need for a minimum 30-minute cadence when searching for transients to rule out asteroids.
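In pseudocode terms, a greedy scheduler of this kind reduces to a score-and-pick loop; the sketch below is a generic illustration of the approach described above, not the actual Astroplan or gwemopt implementation:

```python
def greedy_schedule(targets, score, n_obs):
    """Generic greedy target selection: repeatedly observe the target with
    the currently highest score. No look-ahead, hence the suboptimality
    discussed in the text."""
    schedule = []
    remaining = set(targets)
    for _ in range(n_obs):
        if not remaining:
            break
        best = max(remaining, key=score)
        schedule.append(best)
        remaining.remove(best)
    return schedule
```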
Unfortunately, the scheduling problem is NP-complete and the number of observing sequences is combinatorially large. A well-known model that can make these problems computationally tractable is integer linear programming (ILP); ILP problems have variables that take only discrete integer values, linear objective functions, and linear constraints. In the following, we will also use mixed ILP, which can include some non-integral variables. One popular application in astronomy is the Las Cumbres Observatory (LCO; https://lco.global/) scheduler. LCO operates a network of identical imagers and spectrographs, and its scheduler (Lampoudi et al. 2015) uses ILP to maximize the total number of observations obtained, weighted by the priority assigned to them by the Time Allocation Committee (TAC). ALMA solves a similar ILP model to maximize TAC-assigned scientific priorities, program completion, and telescope utilization (Solar et al. 2016). ZTF's time-allocation scheduler uses ILP to solve both program-level and global scheduling constraints and optimally order individual observational blocks (Bellm et al. 2019b); however, the schedulers used to plan within each observation block do not all use ILP, such as the greedy scheduler gwemopt. Bellm et al. (2019a), a paper whose authorship spans many surveys and open-source astronomy software producers, advocates for community emphasis on the use of quantitative objective functions and ILP-based scheduling approaches to address the rapid proliferation of instruments, many of which will benefit from coordination.
In this paper, we introduce MUSHROOMS (Milp-Using ScHeduleR Of sky lOcalization MapS), a MILP-based scheduler for multimessenger follow-up with wide field-of-view surveys. We structure the paper as follows. We start by describing the requirements faced by ZTF gravitational-wave follow-up observations in Section 2. We then introduce MUSHROOMS and describe the scheduling algorithm in Section 3, laying out its MILP formalism and the reasoning behind certain design decisions. In Section 4, we use MUSHROOMS to schedule simulated skymaps based on LIGO-Virgo's third observing run and characterize its run time, efficiency, and other relevant metrics, and we summarize our results and future outlook in Section 5.
OBSERVING REQUIREMENTS
Multimessenger astronomy supplements electromagnetic observations with observations using other information carriers such as gravitational waves or neutrinos. Since 2018, ZTF has been used for target of opportunity, multimessenger follow-up searches, both searching for the sources of gravitational-wave detections during the third LIGO-Virgo observing run (Coughlin et al. 2019a; Anand et al. 2020; Kasliwal et al. 2020) and gamma-ray bursts from detectors like the Fermi gamma-ray burst monitor (Coughlin et al. 2019b; Ahumada et al. 2021). However, there are a number of factors one has to consider when designing schedules for such systems, both in terms of general observational requirements for ground-based surveys and certain ZTF-specific restrictions or demands. For example, targets are only observable at night and when they are above a minimum altitude from horizontal (i.e., below a maximum airmass). There are also common-sense constraints: for example, the scheduler cannot schedule more than one field observation at the same time, and it must restrict observations to the window of time available for observing. In addition, there are limits imposed by the telescope and observing system itself, such as slew speed.
There are also a number of multimessenger transient follow-up restrictions that must be accounted for. For example, for the ZTF gravitational wave follow-up program, there is a 30-minute cadence requirement, observing once each in r- and g-bands, which serves both to eliminate asteroids and to gain color information about detected transients. This requirement imposes not only a limit on the return time but must also account for the filter exchange time within ZTF, which is 2 minutes long. Another special feature of ZTF follow-up is that the system uses a fixed grid of reference images, a preset selection of a limited number of telescope pointings to choose from.
In addition to the requirements, the goal is to limit the total amount of time required for these observations through the selection of an objective function, whose choice we will describe below.
SCHEDULING ALGORITHM AND MILP FORMULATION
Due to the design of the ZTF survey and data system, ZTF has a fixed grid of 1778 telescope pointings. Given a probability density map in right ascension and declination, a span of time from t_0 to t_0 + T, and a fixed exposure time Δt, the goal is to produce a schedule that meets the observing requirements laid out in Section 2 by selecting a set of fields S to observe and arranging them in time (with repeats). The objective is to maximize the total probability density contained in the area observed by at least one field in S, minus a penalty proportional to the number of fields observed, with proportionality constant p. We maximize the probability density contained because the overall goal of this scheduler is to identify new transients that could potentially be gravitational wave sources for follow-up observation. This is done by comparing observations to reference images to find new sources, so maximizing the probability density observed in theory maximizes the probability of detecting the source for follow-up. We introduced the penalty factor p with the other survey priorities in mind. It allows the user to restrict the search to only fields that contribute more than p to the total probability observed. The expected range of p is [0, 0.02], though it has to be manually selected for each skymap. While this does not make the scheduler impractical to use, it does require some extra effort from the user to determine the trade-off between observing time and detection probability best suited to their situation. This creates shorter-duration schedules that target only the highest-probability fields and are less intrusive to the other programs; with a value of zero for p, the schedule will fill all available time.
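To make the objective concrete, the following sketch (with our own field/pixel data structures, not MUSHROOMS' internals) computes the coverage-minus-penalty value for a candidate field selection:

```python
def objective(field_pixels, pixel_prob, selected_fields, p=0.0):
    """Probability density covered by at least one selected field, minus a
    penalty p per field (an illustrative restatement of the stated objective).

    field_pixels: dict mapping field id -> set of skymap pixel ids it covers
    pixel_prob:   dict mapping pixel id -> probability density in that pixel
    """
    covered = set()
    for f in selected_fields:
        covered |= field_pixels[f]  # union of footprints: pixels count once
    coverage = sum(pixel_prob[pix] for pix in covered)
    return coverage - p * len(selected_fields)
```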
MUSHROOMS (Parazin 2022) is a python-based mixed-integer scheduler that uses the commercially available software Gurobi, which is free with an academic license. When the network of ground based gravitational-wave detectors localize a new event, they release a probability map of where in the sky the source is most likely located (Singer & Price 2016), which MUSHROOMS takes as one of its inputs. An example schedule overlaid on its corresponding probability map is shown in Fig. 2. For each given source localization probability map, referred to as a skymap from here on, the MUSHROOMS algorithm works in a three-step process: a preliminary pruning step, a block-division step, and an observation sequencing step. A flowchart illustrating the whole algorithm can be seen in Fig. 1.
In the pruning step, MUSHROOMS reduces the field grid to a user-provided number of fields using a max weighted coverage algorithm (Nemhauser et al. 1978). This is done to reduce run time during the block-division step.
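A standard greedy routine for max weighted coverage (cf. Nemhauser et al. 1978) suffices to illustrate the pruning step; the names here are ours, and this is a sketch rather than the MUSHROOMS implementation:

```python
def prune_fields(field_pixels, pixel_prob, n_keep):
    """Greedily keep the n_keep fields that together cover the most
    probability: at each step, add the field with the largest marginal gain."""
    kept, covered = [], set()
    candidates = dict(field_pixels)  # field id -> set of pixel ids
    for _ in range(n_keep):
        gains = {f: sum(pixel_prob[pix] for pix in pixels - covered)
                 for f, pixels in candidates.items()}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no field adds new probability
        kept.append(best)
        covered |= candidates.pop(best)
    return kept
```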
In the next step, the block-division step, a number of observing blocks are constructed out of the field shortlist produced by the pruning step. MUSHROOMS defines an observing block by a start time, an expected end time, and a collection of fields that are visible for the entire expected duration. To observe each block, all the fields within it are observed in the same filter, a filter change is executed, and the fields are observed again in another filter. These blocks have a minimum size, which depends on the given exposure time and ensures that there are at least 30 minutes of observations between field re-observations. MUSHROOMS calculates the expected block length using an average slew time assumption of 10 seconds, since the order of observations, which determines the actual slew time for each block, is not found until the next step. We use this block-division heuristic rather than giving the scheduler complete freedom to order fields and filter changes as it sees fit because of the computational complexity of complete freedom, which would require orders of magnitude more time to run.
To minimize slew time within each block, MUSHROOMS calculates the slew times from each field within a block to all other fields in that block and uses a travelling salesperson (TSP) algorithm to find the order of observations that minimizes the total slew time.
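The paper does not specify which TSP solver is used; as an illustration only, a nearest-neighbour heuristic captures the idea of ordering a block's fields to shorten slews:

```python
def order_by_slew(fields, slew_time, start):
    """Nearest-neighbour ordering of a block's fields: from the current
    field, always move to the closest unvisited one. A greedy stand-in for
    the TSP step, not MUSHROOMS' actual algorithm."""
    order, current = [start], start
    unvisited = set(fields) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda f: slew_time(current, f))
        order.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return order
```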
Finally, MUSHROOMS post-processes the schedule to ensure that it is valid and satisfies all the requirements laid out in Section 2. Because the block-division algorithm uses a fixed slew time, if one of the blocks has an average slew time higher than assumed, the block will run longer than expected, and all subsequent blocks must be delayed to avoid scheduling two observations at the same time. There is an edge case where this delay means MUSHROOMS schedules a field for observation when it is (barely) below the visibility requirement and cannot be observed. In this scenario, the schedule is automatically re-run with a gap parameter that adds more time between the offending blocks. However, in the 951 simulations used for this paper (see below), this did not occur once.
The variables, constraints, and objective function behind MUSHROOMS are laid out explicitly in Appendix A. The algorithm is a modification of the classic max weighted coverage problem (Nemhauser et al. 1978), with additional constraints to allow for the creation of valid observing blocks, as well as an additional (optional) penalty factor p in the objective function.

Figure 3. Run time of the block-division algorithm. The large cluster of schedules at 500 seconds results from the 500-second time limit set for this optimization step; in these cases the MILP solver converged quickly on a high-quality solution but then spent the remaining time trying to narrow the optimality gap.
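As an illustration of the kind of formulation described above, the following toy model solves the max-weighted-coverage core with a per-field penalty using the open-source PuLP package and its bundled CBC solver. MUSHROOMS itself uses Gurobi, and its full model adds the block and timing constraints of Appendix A; the instance data here are invented:

```python
import pulp

# Toy instance: 3 fields covering pixels of a discretized skymap.
pix_prob = {0: 0.3, 1: 0.25, 2: 0.2, 3: 0.15, 4: 0.1}
field_pix = {"A": {0, 1}, "B": {1, 2, 3}, "C": {3, 4}}
penalty = 0.05

m = pulp.LpProblem("max_weighted_coverage", pulp.LpMaximize)
x = pulp.LpVariable.dicts("field", field_pix, cat="Binary")  # field selected?
y = pulp.LpVariable.dicts("pixel", pix_prob, cat="Binary")   # pixel covered?

# Objective: covered probability minus a per-field penalty.
m += (pulp.lpSum(pix_prob[l] * y[l] for l in pix_prob)
      - penalty * pulp.lpSum(x[f] for f in field_pix))

# A pixel counts as covered only if some selected field contains it.
for l in pix_prob:
    m += y[l] <= pulp.lpSum(x[f] for f in field_pix if l in field_pix[f])

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([f for f in field_pix if x[f].value() == 1])
```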
SIMULATED OBSERVING PLANS
To assess the efficiency of the generated schedules, we ran both the MUSHROOMS and gwemopt algorithms on 951 simulated binary neutron star detections consistent with the third LIGO-Virgo observing run (O3). When performing the block-division algorithm, we recorded the run time of all 951 schedules, shown in Fig. 3. The mean and median run times for this step were 107 and 15 seconds, respectively. The large difference between the mean and the median can be attributed to the 140 schedules that took the entire time limit of 500 seconds to complete. In these cases, the solver would quickly converge on a high-quality solution but would then spend the rest of the time limit attempting to narrow the optimality gap. This 500-second time limit was chosen because, when developing MUSHROOMS, we observed that most schedules that converged in a reasonable amount of time did so before 500 seconds; the limit could easily be lowered to even 100 seconds without a substantial decrease in efficiency, as only an additional 64 schedules would be truncated and use near-optimal candidate solutions instead of solutions proven to be globally optimal. Additionally, with a maximum of 6 blocks (and thus a maximum of 6 filter changes in a night), the average number of filter changes scheduled was 4.7.
Comparisons to gwemopt
For all 951 skymaps, we first measured the total probability density observed by each schedule, hereafter referred to as "probability coverage." MUSHROOMS achieved an average probability coverage of 0.418, while gwemopt had an average probability coverage of 0.387, an 8.0% increase; however, MUSHROOMS' schedules had an average runtime of 23700 seconds versus 16800 seconds for gwemopt, a 41.5% increase in runtime. This is because MUSHROOMS was run with p = 0, meaning it filled all available time, while gwemopt has some logic to stop when it gets diminishing returns from adding more fields to observe. To make a more equal comparison, we focused on the skymaps where MUSHROOMS and gwemopt made no more than 6 additional observations compared to the other. This value was chosen because it kept the difference in the average run time of the schedules low while still including a large number of skymaps. Over this subset of 329 simulated events, the average run times were 22900 seconds for MUSHROOMS and 22800 seconds for gwemopt. An important note is that this introduces a selection bias towards skymaps with a greater 90% credible area, since those are the ones for which gwemopt usually makes longer schedules, as well as towards well-localized events for which, due to the event time and location, both MUSHROOMS and gwemopt filled almost all available time. A frequency histogram comparing the area distributions can be seen in Figure 4.
For this subset, MUSHROOMS has an average probability coverage of 0.353, while gwemopt has an average probability coverage of 0.333, only a 5.8% improvement for MUSHROOMS. For a skymap-by-skymap comparison, Figure 5 is a scatter plot comparing the probability coverages achieved by MUSHROOMS and gwemopt over this subset.
An important caveat, however, is that MUSHROOMS does not always outperform gwemopt, even when the solution was not truncated by the 500-second time limit. This is because, even though the solution found is optimal for MUSHROOMS' block-division heuristic, it may not be a globally optimal schedule. The design of MUSHROOMS forces the solutions to take a particular format, with observing blocks that are repeated in two different filters. This means the problem is comparatively easy to implement using MILP and runs quickly, but if the best possible schedule does not fit such a format, MUSHROOMS cannot produce it. gwemopt has more freedom in ordering filter changes and block observations, meaning it can sometimes surpass MUSHROOMS even with a greedy algorithm. Producing a more complex MILP formulation that lacks these restrictions and can always surpass gwemopt is an avenue for future research.
For now, this means we can use a hybrid scheduler to achieve better results than either MUSHROOMS or gwemopt alone. By running both MUSHROOMS and gwemopt and using whichever schedule has the higher probability coverage, we obtain an average coverage of 0.360, an 8.1% improvement over gwemopt alone and a 2.1% improvement over MUSHROOMS alone.
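The hybrid selection itself is a one-line comparison; the (schedule, coverage) tuple interface below is an assumption of this sketch:

```python
def hybrid_schedule(mushrooms_result, gwemopt_result):
    """Pick whichever scheduler achieved the higher probability coverage.
    Each argument is an assumed (schedule, coverage) tuple."""
    return max(mushrooms_result, gwemopt_result, key=lambda r: r[1])
```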
Detection Efficiency Characterizations
Probability coverage, however, is not equivalent to the actual performance a schedule will have, since it fails to capture the difficulties in identifying a kilonova: even if the field containing it is observed, a kilonova might not be detected if it is too dim to differ significantly from the reference. As fast-fading transients, kilonovae vary in magnitude significantly even over the 24 hours both schedules were allotted to search, meaning that the order of observations has a significant impact on a schedule's quality that is not captured by probability coverage. To address this, following Petrov et al. (2021), we characterized the resulting schedules' efficiencies with gwemopt's simulation and injection recovery suite for two different kilonova light curve models. This is done by injecting 10,000 kilonovae into the sky following the skymap's probability distribution; each schedule's efficiency is the proportion of those kilonovae that ZTF would have been able to detect following that schedule. The kilonova light curve models used here, an optimistic and a conservative model, were generated by the radiative transfer code POSSIS (Bulla 2019) and are summarized in Dietrich et al. (2020b). For details about the physical properties of each light curve, see Table 1.
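In spirit, the injection-recovery procedure is a Monte Carlo estimate like the sketch below; the pixel-level granularity, the `detectable` brightness test, and all names are simplifications of ours rather than gwemopt's API:

```python
import random

def detection_efficiency(pixels, pix_prob, observations, detectable, n_inj=10_000):
    """Inject kilonovae following the skymap probabilities and count how
    many would have been recovered by the schedule's observations.

    observations: list of (pixel_id, obs_time) pairs covered by the schedule
    detectable(pix, t): whether the injected source is bright enough at time t
    """
    hits = 0
    for _ in range(n_inj):
        pix = random.choices(pixels, weights=[pix_prob[p] for p in pixels])[0]
        if any(obs_pix == pix and detectable(pix, t) for obs_pix, t in observations):
            hits += 1
    return hits / n_inj
```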
Tables 2 and 3 compare the efficiencies of MUSHROOMS, gwemopt, and the hybrid implementation of the two, for all skymaps (Table 2) and for just the ones where both schedules are of a similar length (Table 3). Due to the large number of simulations, the Monte Carlo uncertainty in these values is negligible. In both cases, the hybrid method outperforms both schedulers acting on their own, with an efficiency increase of about 11.5% (11.1%) for a conservative (optimistic) light curve in the subset where both schedules are the same length, compared to using gwemopt alone. Figure 6 compares the 90% credible area of each skymap to the percent improvement in efficiency that would result from utilizing the hybrid method instead of gwemopt. The selection bias against well-localized skymaps is clear, as is MUSHROOMS' comparative weakness at scheduling for these more localized detections: only 37 out of 97 (38.1%) detections below 1000 deg² were improved upon by MUSHROOMS, while 131 out of 232 (56.5%) detections above 1000 deg² were improved upon by MUSHROOMS.
Because these smaller localizations where MUSHROOMS does worse make up a larger proportion of the total observations, in actual use employing a hybrid MUSHROOMS-gwemopt strategy will result in less than an 11.5% (11.1%) improvement in detection efficiency for a conservative (optimistic) light curve. Additionally, since the hybrid strategy will never do worse than gwemopt alone, in the worst-case scenario, where MUSHROOMS is better for none of the remaining 622 skymaps when schedule lengths are equal, there will still be a minimum 3.1% (3.2%) efficiency increase for a conservative (optimistic) light curve, establishing upper and lower bounds of 11% and 3%, respectively, on the potential performance improvement of applying this hybrid method in real observing scenarios.

Figure 6. Percent improvement in detection efficiency from utilizing the hybrid scheduling method. A 0% improvement means that gwemopt performed better than or equal to MUSHROOMS for that skymap. The three outliers above 100% improvement are schedules that had low efficiencies when run with gwemopt, so a small absolute efficiency increase from MUSHROOMS resulted in a large relative increase.

In this paper, we presented a novel scheduling algorithm for wide field-of-view survey follow-up of multimessenger events, outlined its MILP formulation, and compared its performance to gwemopt, the target of opportunity scheduler used by ZTF and other surveys in recent observing runs. We focused on the MUSHROOMS block-division algorithm, outlining the parameters, decision variables, objective function, and constraints used to define the problem. Fundamentally, the block-division algorithm is an alteration of a max weighted coverage problem, but instead of simply choosing a certain number of fields to look at, the algorithm assigns fields to variable-length blocks that are under further constraints to ensure all fields within them are observable and that no two blocks overlap. We include an additional optional penalty factor in the objective function, which allows one to observe only fields that add enough probability coverage to overcome the penalty, leading to shorter schedules that infringe less on other programs. We also introduce a post-processing step to check for block overlaps that could be introduced by the fixed slew time approximation.
Next, we compared MUSHROOMS to gwemopt, with MUSHROOMS achieving efficiencies similar to gwemopt for both light curve models used. We showed that the differing strengths of MUSHROOMS and gwemopt mean that, when used in concert, one is able to achieve efficiencies 3% to 11% higher than gwemopt alone.
The algorithm behind MUSHROOMS is comparatively straightforward, designed to run quickly on everyday computer hardware while still producing efficient schedules, and it is already able to increase detection efficiency when used alongside the previous greedy scheduler. This shows the potential of using mixed-integer linear programming for scheduling multimessenger target of opportunity follow-up, and for observational scheduling as a whole, but also that there is significant room for improvement in MUSHROOMS or another mixed-integer scheduler, since the problem formulation's rigidity means its schedules can still be outdone by gwemopt for some skymaps.
There are a number of planned improvements for MILP schedulers for multimessenger follow-up. For example, MUSHROOMS does not account for moon distance and lunar phase when scheduling observations. MUSHROOMS also lacks a straightforward way to respond to weather and other unexpected events; currently, one would have to edit the input skymap, setting the probabilities associated with affected HEALPix pixels to zero before renormalizing and re-inputting it to MUSHROOMS. Improving it to account for both of these is important future work. Potentially more importantly, it treats the source as having constant flux for the duration of the schedule, which is not correct for the fast transient kilonova models considered here from Dietrich et al. (2020b). The most straightforward way to address this issue would be to accept a desired light curve model as an additional parameter and alter the objective function of the block-division step, using that model to weight each pixel according to when it is observed, such as multiplying the weight associated with that pixel by the ratio of the light curve magnitude at the observation time to the maximum magnitude. As the complete field order is not determined until the travelling salesperson step, one may have to use an approximation of when each pixel is observed, such as the midpoint of the first block to observe a given pixel.
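A sketch of the proposed weighting; we express the ratio in flux rather than magnitude (magnitudes are inverted, so a flux ratio is the natural down-weighting factor), and the function interface is our own:

```python
def time_weighted_pixel_prob(pix_prob, obs_time, lightcurve_flux, peak_flux):
    """Down-weight each pixel's probability by the fraction of peak
    brightness remaining at its (approximate) observation time, as suggested
    in the text. obs_time maps pixel id -> approximate observation time."""
    return {pix: prob * lightcurve_flux(obs_time[pix]) / peak_flux
            for pix, prob in pix_prob.items()}
```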
The block-division formulation, while a useful heuristic for limited time and computing power, has some limitations, especially when variable exposure times are desired. Producing a model that is not constrained by blocks and can jointly optimize over the selection and ordering of all fields, subject only to the observing and time constraints, would lead to more efficient schedules. Allowing the model to vary the exposure times of individual observations would also lead to higher chances of detection, because it could adjust for the time- and position-dependent sky background. However, both improvements are much more computationally complex and will require much greater optimization and the application of high-level operations research techniques. Using the experience gained from working on this project, among others, several authors of this paper have begun development on a more general multi-facility observation scheduling toolkit, which will add those considerations into its problem formulation.

∀i, j: U_i ≥ B_{i,j}: if a block makes at least one observation, it is being used (A17)
∀i > 0: U_i ≤ U_{i−1}: all unused blocks are at the end of the night (A18)
∀l: Σ_{i∈S_l, j} B_{i,j} ≥ y_l: a pixel is observed if it is in any observed field (A19)
∀i, j: t_{o,i} + 2·(t_exp + t_slew) ≤ …: a block's end time must be before the observability end time of all fields within it (A21)
A block's start time must be after the observability start time of all fields within it (A23)
A block's start time must be after the previous block finishes (A25) | 6,246.8 | 2022-02-28T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Polishing of optical media by dielectric barrier discharge inert gas plasma at atmospheric pressure
In this paper, surface smoothing of optical glasses, glass ceramic, and sapphire using a low-power dielectric barrier discharge inert gas plasma at atmospheric pressure is presented. For this low-temperature treatment method, no vacuum devices or chemicals are required. It is shown that by such plasma treatment, the microroughness and waviness of the investigated polished surfaces were significantly decreased, resulting in a decrease in surface scattering. Furthermore, plasma polishing of lapped fused silica is introduced. Based on simulation results, a plasma physical process is suggested as the underlying mechanism initiating the observed smoothing effect.
INTRODUCTION
For a number of high-end systems and devices such as laser sources, UV lithography optics, and high-performance mirrors, precisely shaped and polished optical surfaces are required. Against this background, precision polishing of relevant optical media is of great interest in order to realize surface roughnesses in the nanometre or even angstrom range. Such precision smoothing can be achieved by classical polishing using pitch tools and a polishing agent, i.e. a suspension of water and fine abrasives [1]. However, this technique is usually limited to spherical and plane surfaces. Polishing of glass surfaces is also achieved by laser-induced heating above the glass transition temperature; as a result of the accompanying decrease in viscosity, material flow driven by surface tension occurs [2]. Here, carbon dioxide (CO2) lasers are usually employed due to the improved energy coupling at glass surfaces at this particular laser wavelength of 10.6 µm. Beyond laser processing, atmospheric pressure plasmas (APP) are suitable for thermally induced polishing of rough glass surfaces. Paetzelt et al. reported such polishing of fine-ground fused silica by a microwave-powered APP jet source where a mixture of argon and helium was used as process gas; applying this technique at a microwave power of 135 W, the surface arithmetic mean roughness Ra was significantly decreased from 550 nm to 0.64 nm due to surface heating to 1900 K [3]. Polishing of glasses and silicon (Si)-based media in general at low temperature can be performed by APP techniques using fluorine (F)-containing process gases. Here, material removal is achieved by chemical reactions according to

SiO2 + 4F → SiF4 + O2 (1)

and

Si + 4F → SiF4 (2)

respectively. Applying this technique, Zhang et al. accomplished smoothing of silicon surfaces, where a reduction from 2.39 to 1 nm (root mean squared roughness Rq) and from 1.76 to 0.63 nm (Ra) was achieved [4]. A reduction in surface roughness was also reported in the case of plasma jet machining of silicon carbide (SiC) surfaces [5]. Yao et al. and Jin et al. presented plasma machining of Zerodur using an RF-excited plasma jet at atmospheric pressure; it was shown that the surface roughness was strongly dependent on the process parameters and in particular on the mixing ratio of the working gas, SF6 and O2 [6, 7]. However, such plasma figuring methods can provoke surface roughening due to inhomogeneous etching, an effect which was also observed in the case of reactive ion etching (RIE) of pyrex glass [8]. Such roughening can be overcome by the choice of the applied gas mixture in order to mitigate the sputter effect of heavy ions from the inert gases [9].
Against this background, atmospheric pressure plasma polishing of different technically relevant optical media is presented in this contribution. Here, the goal was not to perform any surface figuring but to provide a novel technique for surface smoothing based on a chemically neutral APP. In contrast to existing APP techniques, surface smoothing was achieved at low temperature (below 100 °C) and low power, without applying any reactive process gas mixtures.
EXPERIMENTAL SETUP AND PROCEDURE
Plasma polishing experiments were performed on optically polished plane samples made of fused silica (Suprasil 3), boron crown glass (BK7), glass ceramic (Zerodur), and sapphire. These materials are well established and commonly used for the realization of high-precision optical devices, for example, in the case of fused silica and sapphire, UV optics. In this context, precision polishing and finishing on the (sub)nanometer scale is of great interest in order to reduce scattering and interreflection effects. For plasma polishing of these optical media, a dielectric barrier discharge (DBD) at atmospheric pressure was applied using a rotationally symmetric cone-shaped plasma source [10, 11]. As shown in Figure 1, this plasma source consists of an internal high-voltage (HV) electrode and an external ground (GND) electrode. The dielectric separation of both electrodes was realized by the particular sample itself, where the effective discharge gap d_eff was 21 mm.
By the process gas flow, i.e. argon (Ar) 4.6 from Linde at a flow rate of 4 standard litres per minute (slm), a stable compressed plasma beam with a 1/e² diameter of approx. 200 µm was formed within the discharge gap (compare inset in Figure 1). However, the resulting plasma tracing point on the sample surface showed a surface-discharge-like behaviour due to an accumulation of charge carriers. This tracing point was several millimetres in diameter, featuring a Gaussian rotationally symmetric intensity distribution as confirmed by high-speed camera measurements. During the experiments, neither the plasma source nor the sample was moved. The plasma source was driven at a pulse repetition rate f_rep of 7 kHz with an averaged plasma power P_av of 1.19 W. Taking into account the applied plasma energy of 0.17 mJ for each HV pulse train with a duration of approx. 80 µs, the fluence per pulse within the plasma beam amounts to 540 mJ/cm².
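For reference, the quoted fluence follows directly from the pulse energy and the 1/e² beam diameter; the sketch below (a simple top-hat area estimate) reproduces the stated value:

```python
import math

def fluence_mj_cm2(pulse_energy_mj: float, beam_diameter_um: float) -> float:
    """Fluence per pulse over the 1/e^2 beam area, assuming a top-hat profile."""
    radius_cm = (beam_diameter_um * 1e-4) / 2.0  # um -> cm, then radius
    return pulse_energy_mj / (math.pi * radius_cm**2)

print(fluence_mj_cm2(0.17, 200))  # ~541 mJ/cm^2, matching the quoted 540 mJ/cm^2
```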
As ascertained by spectroscopic measurements, the electron temperature was approx. 1 eV. Plasma treatment was performed in steps of 30 seconds up to a total treatment time of 90 seconds (boron crown glass, glass ceramic) and 150 seconds (fused silica, sapphire), respectively. After each step, both the arithmetic mean roughness Ra and the root mean squared roughness Rq were measured using an atomic force microscope (easyScan 2 AFM from Nanosurf); the measured area was 50 × 49.5 µm².
For comparison, plasma polishing using argon as process gas (flow rate 0.9 slm) was also performed on lapped fused silica surfaces. Here, the plasma beam diameter was approx. 500 µm and the effective discharge gap amounted to 14.5 mm. The fluence per pulse was 15 mJ/cm². Plasma treatment was carried out in steps of 10 minutes up to a total treatment time of 60 minutes. In addition to the roughness parameters Ra and Rq, the transmission T was measured after each step using a UV/VIS spectrometer (Lambda 650 from Perkin Elmer).
RESULTS
By the plasma treatment described above, significant topographic surface modifications of the investigated optical media were achieved. As shown in Figure 2 and listed in Table 1, Ra decreased continuously with increasing plasma treatment duration; the same effect was observed for Rq. Such surface smoothing of optical glasses was already observed in previous work, where the same plasma source and treatment procedure were applied to barite crown glass N-BaK4 and heavy flint glass SF5: after a plasma treatment duration of 60 seconds, Ra was reduced by 7.8% (N-BaK4) and 59.7% (SF5), respectively [12]. Generally, a reduction in surface roughness comes along with a decrease in the surface contact angle, corresponding to an increase in total surface energy γ_s [13]. Against this background, γ_s was measured using a Contact Angle Measurement System G10 from Krüss. For instance, γ_s of fused silica was increased from 68.66 mJ/m² to 72.69 mJ/m² (i.e. +5.9%) after a plasma treatment duration of 60 seconds, verifying the observed surface smoothing at this instant of time (ΔRa = −9.9% and ΔRq = −7.5%). Moreover, no mentionable change in the polar fraction of the total surface energy was detected. Since this parameter is directly related to the molecular structure and orientation of near-surface bonds within the sample bulk material, it can be stated that no modification of the chemical and, as a consequence, optical properties occurred due to the plasma treatment.
In addition to the roughness parameters, the waviness w as well as the maximum depth of waviness w_d was extracted from the AFM profiles using the evaluation software Gwyddion from the Department of Nanometrology of the Czech Metrology Institute. Here, the measuring length l_meas was 70 µm, i.e. the diagonal of the overall measured area. As with the roughness parameters, the depth of waviness of the investigated optical media was reduced continuously by the plasma treatment, as shown in Figure 3.
According to DIN 4760, surface roughness is a 3rd- and 4th-order irregularity of form, whereas waviness represents a 2nd-order form deviation. It can thus be stated that as a result of the plasma treatment, not only high-order but also low-order irregularities were decreased, as listed in Table 1.
For comparison, plasma polishing was also carried out on lapped fused silica surfaces with a comparatively high initial surface roughness. As shown in Figure 4, a considerable decrease in surface roughness was achieved by such plasma treatment; consequently, the transmission T progressively increased.
The increase in transmission as a result of the reduction of diffuse reflection at the rough fused silica surface is further supported by the measured decrease in waviness by a factor of 4.8 after a plasma treatment duration of 60 minutes. The observed decrease in both surface roughness and waviness of polished and lapped surfaces clearly indicates a selective removal of roughness peaks and contour maxima by the plasma treatment.
DISCUSSION
In previous work, the heating of a glass sample surface exposed to the plasma beam used here was measured using an infrared camera (VarioCAM from InfraTec/Jenoptik); a surface temperature of 88 °C was determined [14]. Since this temperature is significantly below the softening temperature of the investigated media, thermal smoothing due to surface melting (the underlying mechanism of laser-induced surface smoothing) can be excluded in the present case. The presented method thus differs significantly from thermal APP jet polishing as reported by Paetzelt et al. [3] (compare introduction section). The observed decrease in surface roughness and waviness is instead explained by plasma physical effects. In principle, the concentration of argon plays an important role in plasma etch mechanisms. Li et al. showed that during etching of fused silica by fluorine-based plasmas, an increase in argon concentration allows the etch rate to be increased [15]. For pure silicon, Coburn and Winters observed a certain amount of etching by pure argon ion bombardment [16]. Against this background, the presented surface smoothing using argon as inert process gas can be explained by different mechanisms. First, slight argon ion bombardment, owing to the formation of a plasma sheath at the sample surface which accelerates ions towards the surface [17], can contribute to material removal. Second, material removal can be due to the de-excitation of excited and metastable argon species and the accompanying energy transfer to the sample surface; possible underlying effects are electron quenching, two- and three-body collisions with argon atoms, and Penning ionisation at the sample surface [12]. The existence of such species in the near-surface region can most likely be assumed due to the direct plasma ignition on the substrate, based on electron excitation and ion-impact excitation in the plasma sheath close to the sample surface [18] and/or resonant neutralisation; in the latter case, argon metastables are provided as a result of the recombination of argon ions with the negative surface charge on the dielectric sample [19]. However, both effects, ion bombardment and de-excitation of argon species, should result in an almost uniform material removal. Given the selective removal of surface texture maxima, it can thus be assumed that the plasma discharge causes high electric field strengths at roughness peaks, initiating the observed erosion effect. This assumption was confirmed by a simulation of the distribution of the electric field strength on a rough dielectric surface within a parallel-plate capacitor, which is an appropriate model for the plasma discharge used in the present work. The simulation was performed using the COMSOL Multiphysics software. The simulated dielectric was fused silica with different micro volume elements (hemispheres with radii r in the range from 1 to 100 nm and a pyramidal peak with a height h of 150 nm and a point angle α of 30°), representing the roughness and waviness profile. As shown in Figure 5, the highest electric field strengths are found on top of the particular volume elements, whereas low electric field strengths occur within the valleys between roughness peaks and surface texture maxima, respectively.
This simulated behaviour confirms a selective material removal at the maxima of rough and wavy surfaces. An increase in electric field strength due to increased electrode surface roughness, initiating an additional potential, was also reported by Zhao et al. on the example of a parallel-plate capacitor [20]. The above-presented surface smoothing of polished optical media has a considerable impact on surface scatter characteristics. Basically, the amount of light scattered at a technical surface can be calculated from its root mean squared roughness Rq by applying the total integrated scatter (TIS), given by

TIS = 1 − exp[−(4π·Rq·cos(AOI)/λ)²],

where AOI is the angle of incidence of the incoming light and λ its wavelength [21]. As an example, at perpendicular incidence (AOI = 0) and a wavelength of 546 nm, the TIS was reduced by a factor of 1.08 (boron crown glass), 1.17 (fused silica), 1.62 (glass ceramic), and 1.38 (sapphire), respectively, after a plasma treatment duration of 60 seconds. Regarding the quality of imaging optics, such a decrease in surface scattering generally improves the contrast ratio. The observed decrease in waviness further contributes to a reduction of blurring. As a result of these plasma-induced surface effects, the modulation transfer function (MTF) of an optical system could be improved by the presented method. Moreover, image distortions could be mitigated as a consequence of the diminution of form deviations.
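In the small-roughness limit, the TIS scales with Rq², so the quoted reduction factors follow directly from the measured roughness changes; a minimal sketch:

```python
import math

def tis(rq_nm: float, wavelength_nm: float, aoi_deg: float = 0.0) -> float:
    """Total integrated scatter for rms roughness Rq at angle of incidence AOI."""
    x = 4 * math.pi * rq_nm * math.cos(math.radians(aoi_deg)) / wavelength_nm
    return 1 - math.exp(-x * x)

# In the small-roughness limit TIS ~ Rq^2, so the quoted factor of 1.17 for
# fused silica corresponds to Rq falling by about 7.5% (cf. the measured
# ΔRq = -7.5% after 60 s of treatment).
print(tis(1.0, 546) / tis(0.925, 546))  # ~1.17
```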
CONCLUSIONS
The presented low-power plasma polishing technique allows considerable surface smoothing of different optical media of high technical relevance. In the case of polished surfaces, a reduction in surface roughness of approx. 18-28% was achieved. In contrast to other precision polishing techniques using lasers or hot plasmas, surface smoothing is not induced by heating; thermally induced disturbing effects such as stress birefringence are thus avoided. The presented method could be applied for post-processing and finishing of high-end optics after precision figuring, for example by RIE or ion beam etching (IBE) methods, in order to reduce surface scattering. However, relatively high final roughness values were achieved in comparison to other techniques based on thermally induced mechanisms. Further, the observed removal of roughness peaks approaches a certain saturation, which can be explained by the smoothing process and the accompanying increasing uniformity of the electric field strength distribution. In ongoing work, the variation of crucial plasma parameters will therefore be investigated in order to improve the process efficiency and to evaluate the limits of this method. Here, a continuous and regulated increase in voltage during the plasma polishing process is of essential interest in order to maintain the required high electric field strengths on top of the continuously shrinking roughness peaks and waviness maxima. Owing to the advantage of negligible substrate heating in the dielectric barrier discharge used, commonly referred to as a cold plasma, the treatment of temperature-sensitive optical media and materials such as coatings is made possible by the presented method.
The smoothing of such surfaces will be investigated in the near future.
FIG. 1 Experimental setup for APP polishing of optical media.
FIG. 2 Arithmetic mean roughness Ra of different optical media vs. plasma treatment duration t_plasma, including 3D AFM views of polished fused silica at t_plasma = 0 (left) and 150 (right) seconds.
FIG. 3 Maximum depth of waviness w_d of different optical media vs. plasma treatment duration t_plasma; inset: example of waviness w of polished fused silica at t_plasma = 0 and 150 seconds vs. measuring length l_meas.
FIG. 4 Comparison of the arithmetic mean roughness Ra and the transmission T (λ = 193 nm) of a lapped fused silica sample vs. plasma treatment duration t_plasma, including 3D AFM views of lapped fused silica at t_plasma = 0 (left) and 60 (right) minutes.
FIG. 5 Qualitative distribution of the electric field strength on a dielectric surface (bottom) with micro volume elements (top).
TABLE 1 Absolute change ∆ in arithmetic mean roughness Ra and maximum depth of waviness w_d after particular overall plasma treatment duration. | 3,788.4 | 2013-12-27T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
Annales Geophysicae On ion gyro-harmonic structuring in the stimulated electromagnetic emission spectrum during second electron gyro-harmonic heating
Recent observations show that, during ionospheric heating experiments at frequencies near the second electron gyro-harmonic, discrete spectral lines separated by harmonics of the ion gyro-frequency appear in the stimulated electromagnetic emission (SEE) spectrum within 1 kHz of the pump frequency. In addition to the ion gyro-harmonic structures, on occasion, a broadband downshifted emission is observed simultaneously with these spectral lines. Parametric decay of the pump field into upper hybrid/electron Bernstein (UH/EB) and low-frequency ion Bernstein (IB) and oblique ion acoustic (IA) modes is considered responsible for generation of these spectral features. Guided by predictions of an analytical model, a two-dimensional particle-in-cell (PIC) computational model is employed to study the nonlinear processes during such heating experiments. The critical parameters that affect the spectrum, i.e. whether discrete gyro-harmonic or broadband structures are observed, include the angle of the pump field relative to the background magnetic field, the pump field strength, and the proximity of the pump frequency to the gyro-harmonic. Significant electron heating along the magnetic field is observed in the parameter regimes considered.
Introduction
Strong, high-frequency electromagnetic (EM) waves transmitted into the ionosphere during ionospheric heating experiments generate secondary electrostatic (ES) and electromagnetic waves through nonlinear plasma processes. The power spectrum of the signal acquired by ground-based receivers shows these newly generated waves, so-called stimulated electromagnetic emission (SEE), which are frequency shifted within 100 kHz of the heater or pump frequency (Thidé et al., 1982).
Understanding this process is critical from both practical and theoretical standpoints. Various SEE spectral features can be used as diagnostic tools that give information about the condition of the ionosphere and nonlinear plasma processes (Leyser, 2001). For instance, the electron temperature (Bernhardt et al., 2009) or the amplitude of the local geomagnetic field (Leyser, 1992) in the interaction region may be estimated. Parametric decay of the pump field into new plasma waves has been introduced as a fundamental process that generates SEE. The EM pump wave can be directly involved in the decay process and generate new EM and ES waves. For example, decay of the pump field into ion acoustic or electrostatic ion cyclotron waves and a scattered EM wave is involved in the stimulated Brillouin scatter process (Norin et al., 2009; Bernhardt et al., 2009). On the other hand, for other SEE features, the EM pump undergoes conversion to another ES wave, which then decays into new ES waves that are then back-converted to EM waves. For example, the downshifted maximum (DM) spectral feature is proposed to be generated by this process (Zhou et al., 1994; Bernhardt et al., 1994).
Recent experimental observations of SEE during second electron gyro-harmonic heating show structures ordered by harmonics of the ion gyro-frequency (Bernhardt et al., 2011). The spectrum may show up to sixteen discrete spectral lines and is distinctly different from magnetized stimulated Brillouin scatter involving decay into electrostatic ion cyclotron waves, in which only one spectral line downshifted by the ion gyro-frequency is observed (Bernhardt et al., 2009). Brillouin scatter is an electromagnetic parametric process of direct conversion of the EM wave into the high-frequency EM sideband and a low-frequency ES wave in the long wavelength regime (k_⊥ρ_i < 1, where k_⊥ is the perpendicular wavenumber and ρ_i is the ion gyro-radius). The process here, as will be discussed, involves conversion of the EM wave first into an ES upper hybrid/electron Bernstein (UH/EB) wave, then an electrostatic parametric process (short wavelength regime, k_⊥ρ_i > 1), and finally backscattering into the EM wave observed as SEE. Cascading during Brillouin scatter has been observed to produce additional lines (one or at most two) but not the large number observed in these experiments. Also, the amplitude of these lines greatly decreases with harmonic number, i.e. with offset from the pump frequency, unlike the observations to be described here. It should be noted that our very recent experiments show that magnetized Brillouin scatter is clearly suppressed for pump frequencies near the second gyro-harmonic (also the third) while these gyro-features are enhanced, again consistent with theory. A more careful comparison of these two processes is beyond the scope of the current manuscript and is the subject of ongoing work that will be presented in a future publication.
The original explanation for the gyro-harmonic structuring was parametric decay of the pump EM wave into electron and pure ion Bernstein waves (k_∥ = 0, where k_∥ is the wavenumber along the geomagnetic field) (Bernhardt et al., 2011). In addition to the ion gyro-harmonic structures, some of the experimental measurements also exhibit a broadband spectral feature downshifted from the pump frequency within 1 kHz that coexists with the ion gyro-harmonic structures. The object of this paper is to investigate the generation of ion gyro-harmonic and associated broadband spectral features within 1 kHz of the pump frequency, and to provide fundamental parameters that characterize these types of spectra, which may carry critical diagnostic information about the heated plasma. Also, for the first time, a computational model will be used to follow the nonlinear evolution more accurately than has been possible in the past. This paper is organized as follows.
In the next section, experimental observations are provided. An analytical model is then used to provide guidance on important parameters that characterize the proposed parametric instability process. Section 4 is devoted to the computational model and results. Finally, conclusions are provided.
Experimental observations
In order to obtain the SEE spectra, a 30 m folded dipole antenna with a receiver with around 90 dB dynamic range was set up by the Naval Research Laboratory (NRL) close to the High Frequency Active Auroral Research Program (HAARP) site (63.09° N, −145.15° E) in Gakona, Alaska. The receiver shifts the frequency of the acquired signal by the heater frequency by mixing, and samples it at 250 kHz. The acquired data are post-processed using the fast Fourier transform (FFT) to obtain spectrograms of the received signal. During the experiment the pump duty cycle was 4 min on/4 min off and the heater frequency was set to 2.85 MHz. This is almost twice the electron gyro-frequency above HAARP, calculated using the International Geomagnetic Reference Field (IGRF) model. The digital ionosonde at HAARP estimated the reflection altitude to be 221 km. At 221 km altitude, the magnetic field strength is B = 0.052632 mT, the electron gyro-frequency is Ω_ce = eB/m_e = 2π(1.47×10⁶) rad s⁻¹, and the ion gyro-frequency is Ω_ci = eB/m_i = 2π(50.1) rad s⁻¹, where m_e and m_i are the electron and ion masses. The experiment was carried out in O-mode, during which the heater was operating at full power, i.e. 3.6 MW, and the effective radiated power is estimated to be 280 MW. The heater beam was pointed to magnetic zenith with an azimuth of 202 degrees and a zenith angle of 14 degrees.
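The quoted gyro-frequencies can be checked directly from the field strength; a minimal Python sketch, assuming atomic oxygen ions (O⁺, approximated as 16 proton masses) as the dominant species:

```python
import numpy as np

e   = 1.602176634e-19          # elementary charge (C)
m_e = 9.1093837015e-31         # electron mass (kg)
m_O = 16 * 1.67262192369e-27   # O+ ion mass, approx. 16 proton masses (kg)

B = 0.052632e-3  # T, field strength at 221 km quoted in the text

f_ce = e * B / (2 * np.pi * m_e)   # electron gyro-frequency (Hz)
f_ci = e * B / (2 * np.pi * m_O)   # ion gyro-frequency (Hz)
print(f"f_ce = {f_ce/1e6:.2f} MHz, f_ci = {f_ci:.1f} Hz")  # ~1.47 MHz, ~50.1 Hz
```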
Figure 1 shows the power spectrum of the acquired signal for two experiments with almost 1 h between measurements. In the first experiment, conducted at 02:34 UT on 28 October 2008 (around local sunset), distinct spectral structures at harmonics of the ion gyro-frequency are evident in the upshifted and downshifted sidebands; on the downshifted side, 16 spectral lines are distinguishable and the strongest is the fourth harmonic. Harmonic interference of the power supply at multiples of 120 Hz is also seen in the spectrum and should not be mistaken for an SEE feature. In the second measurement, carried out at 03:32 UT, in addition to the ion gyro-harmonic structures, a broadband spectral feature peaked around 500 Hz below the pump frequency appears in the spectra. Similar characteristics were repeatable during other observations. Experimental observations of the discrete spectral structures were reported in Bernhardt et al. (2011). However, the broadband feature and its relation to the discrete structures are presented in this paper for the first time.
Analytical model
Parametric decay of the pump field into upper hybrid/electron Bernstein and neutralized ion Bernstein (IB) waves (which differ in important ways in their dispersive characteristics from pure ion Bernstein waves) has been proposed as a viable process for generation of these spectral features (Scales et al., 2011). While pure IB waves propagate perpendicular to the background magnetic field, neutralized IB waves propagate slightly off-perpendicular, i.e. k_∥/k_⊥ ≥ √(m_e/m_i). These waves exhibit neutralizing Boltzmann electron behavior and have dispersive relationships and phenomenological behavior closer to those of ion acoustic waves. By enforcing the wavenumber and frequency matching conditions k₀ = k₁ + k_s and ω₀ = ω₁ + ω_s, where the subscripts 0, 1 and s denote the pump field, the high-frequency decay mode and the low-frequency decay mode, respectively, the general dispersion relation describing weak coupling is (Porkolab, 1974)

ε(ω_s) = (β_e²/4) χ_i(ω_s) ε_e(ω_s) [1/ε_e(ω_s − ω₀) + 1/ε_e(ω_s + ω₀)],   (1)

where ε(ω) = 1 + χ_e(ω) + χ_i(ω) and ε_e(ω) = 1 + χ_e(ω). The susceptibility of the j-th species is given by

χ_j = (1/(k² λ_Dj²)) [1 + ζ_j⁰ Σ_n Γ_n(b_j) Z(ζ_jⁿ)],   (2)

where b_j = k_⊥² ρ_j², k is the wavenumber, k_⊥ (k_∥) is the component of k perpendicular (parallel) to the magnetic field B, ρ_j is the gyro-radius, ζ_jⁿ = (ω + iν_j − nΩ_j)/(k_∥ v_tj), Ω_j is the gyro-frequency, v_tj is the thermal velocity, ν_j is the collision frequency, Γ_n(b_j) = I_n(b_j) exp(−b_j), Z is the Fried-Conte function, I_n is the modified Bessel function of the first kind of order n, and λ_Dj is the Debye length. β_e is the coupling coefficient, proportional to the pump field E₀. For simplicity, collisional effects will be neglected when discussing solutions of Eq. (2). It is assumed that the pump field strength is above the threshold for the instability, and emphasis is placed in part on the structure of the predicted spectrum.
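For readers who wish to evaluate Eq. (2) numerically, a minimal Python sketch of the magnetized kinetic susceptibility is given below. It assumes the v_t = √(T/m) thermal-speed convention (so that λ_D = v_t/ω_p), truncates the Bessel sum at a finite harmonic number, and uses scipy's Faddeeva function for the Fried-Conte function Z; the function names and truncation are our own choices, not the authors' code.

```python
import numpy as np
from scipy.special import ive, wofz

def Z(zeta):
    """Fried-Conte plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def chi(omega, kpar, kperp, wp, Omega, vt, nu=0.0, nmax=30):
    """Kinetic susceptibility of one magnetized Maxwellian species:
    chi = (1/(k^2 lambda_D^2)) * [1 + zeta_0 * sum_n Gamma_n(b) Z(zeta_n)]."""
    k2 = kpar**2 + kperp**2
    lamD2 = vt**2 / wp**2            # lambda_D^2 = vt^2 / wp^2 for vt = sqrt(T/m)
    b = (kperp * vt / Omega)**2      # b = kperp^2 * rho^2
    zeta0 = (omega + 1j * nu) / (kpar * vt)
    s = 0.0
    for n in range(-nmax, nmax + 1):
        zetan = (omega + 1j * nu - n * Omega) / (kpar * vt)
        s += ive(abs(n), b) * Z(zetan)   # ive(n, b) = I_n(b)*exp(-b) = Gamma_n(b)
    return (1.0 + zeta0 * s) / (k2 * lamD2)
```

Solving the full dispersion relation then amounts to finding complex roots ω_s of Eq. (1) built from these susceptibilities, which can be done with a standard complex root finder.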
The pump wave is modeled using the dipole approximation, i.e. k₀ = 0. A more refined approach would be to consider the pump wavenumber to be given by the wavenumber of the irregularities generated by the oscillating two-stream instability (OTSI) (Huang and Kuo, 1995). The simplified approximate approach here is deemed adequate for initial interpretations of the experimental data. Future work will consider more refined calculations.
The Bernstein modes propagate almost perpendicular to the magnetic field. Therefore, it is assumed the parametric decay process occurs at the upper hybrid altitude, at which the electric field is almost perpendicular to the geomagnetic field (Leyser, 1991); also, the double resonance condition is assumed for second gyro-harmonic heating, i.e. ω₀ ≈ ω_uh = 2Ω_ce. Although this model is valid for weak coupling (β_e < 1), it still provides guidance for the more general computational model of the following section. The full dispersion relation is solved numerically for various parameter regimes. Two cases are discussed here in which oxygen ions are assumed, the pump field frequency is ω₀ = 2Ω_ce − 40Ω_ci, and the electron to ion temperature ratio is T_e/T_i = 3 (Bernhardt et al., 2011). Furthermore, the pump field is described by the electron oscillating velocity, i.e. v_osc = eE₀/(m_e ω₀), where e is the electron charge. In the growth rate calculations, v_osc/v_the = 0.5, corresponding to E₀ ≈ 10 V m⁻¹, is used. The electric field is assumed to be slightly off-perpendicular to the background magnetic field, and the off-perpendicular angle is denoted by θ_E. Figure 2 shows the dispersion relation of the low-frequency decay mode (shift of the destabilized wave from the pump frequency) and the corresponding growth rates (i.e. ω_s = ω_r + jγ) for two cases: θ_E = 5.3° and θ_E = 7.6°. The left vertical axis is normalized frequency (solid curves); the right is normalized growth rate (dashed curves); and the horizontal axis is the normalized perpendicular wavenumber. It is clear that, at small off-perpendicular angles (θ_E ≈ 5°), a discrete band of upper hybrid/electron Bernstein (UH/EB) waves is destabilized and shifted below the pump frequency by multiples of the ion gyro-frequency. This is parametric decay of the pump field into UH/EB and neutralized IB waves (otherwise known as electrostatic ion cyclotron harmonic waves; e.g. Kindel and Kennel, 1971). The neutralized ion Bernstein waves exist for k_∥/k_⊥ ∼ 0.1 and have a dispersion relation with ω near nΩ_ci for the n-th harmonic number. For these parameters, maximum growth occurs in relatively narrow frequency bands with downshifts approximately at (n + 1/2)Ω_ci. At slightly higher off-perpendicular angles of the pump field, θ_E ≈ 8°, a broadband mode is destabilized as well as lower-harmonic discrete modes. This is the decay of the pump field into UH/EB and neutralized IB waves, and also highly oblique ion acoustic (IA) waves with dispersion relation ω ≈ k_⊥c_s, where c_s ≈ √(KT_e/m_i) is the sound speed and K is the Boltzmann constant. θ_E is an important parameter in determining the transition from discrete ion Bernstein (IB) decay to broadband oblique ion acoustic (IA) decay. Equation (1) shows this may also be influenced by the parameters v_osc/v_the and |ω₀ − 2Ω_ce|/Ω_ce. We should emphasize that both of the parametric decay processes discussed here occur at the upper hybrid altitude, at which the electric field has a specific angle relative to the magnetic field. The electron density, its gradient with height in the interaction region, and the direction of the transmitter beam are three critical parameters that determine the angle of the E-field relative to the magnetic field. Our calculations show that the threshold electric field intensity required to excite the oblique IA wave is actually higher than for the neutralized IB mode. Since the oblique IA wave is excited by stronger electric fields (i.e. larger v_osc/v_the), its growth rate is larger than that of the neutralized IB
modes. It is broadband since, for the higher growth rates, the growth rate is near the ion gyro-frequency (γ ≈ Ω_ci) and the ions are effectively unmagnetized. It can be shown in the analytical calculations that reducing the E-field intensity causes the discrete neutralized IB modes to be excited rather than the broadband IA mode, as the ions become magnetized. Experimental observations also show that the broadband feature develops faster than the discrete structures.
Therefore, there are actually at least two important parameters that determine whether there is a discrete or broadband spectrum: the pump field strength as well as the electric field angle. From a practical point of view during an experiment, this transition may be prompted by varying either pump power or propagation effects (such as a possible change in beam angle or ionospheric parameters that influence propagation). Further experiments are required to carefully validate this transition. On the other hand, the most important parameter determining the discrete IB decay line with maximum growth (e.g. 4 in Fig. 2) appears to be |ω₀ − 2Ω_ce|/Ω_ce. Note that in the aforementioned calculations the electric field intensity of the pump is assumed to be E₀ = 10 V m⁻¹ (a round figure used for demonstration purposes), above the threshold intensity required to excite the instability. The presumption of the discussed generation mechanism is that the pump EM wave is converted into the electrostatic pump UH/EB wave. The thermal oscillating two-stream instability (OTSI) is the mechanism that has been proposed for the conversion of the EM wave into the electrostatic UH/EB pump wave (e.g. Huang and Kuo, 1994). It has been shown by these authors that the field amplitude for this OTSI driving process actually has a relatively low threshold of 1 V m⁻¹ or less. Initial simulations, to be described in more detail in the next section, indicate that the resulting OTSI field (from the pump EM field) should then grow significantly large enough to drive the parametric decay instability. In order to check the effect of the electron to ion temperature ratio on the spectral features, we performed similar calculations for T_e/T_i = 1. For T_e/T_i = 1, just the first five modes of the neutralized IB wave are destabilized at θ_E = 8.2°. Also, the transition boundary between exciting IB modes and the IA mode shifts to θ_E ≥ 10° in this case.
Simulation model and results
In order to study the nonlinear processes involved in the generation of ion gyro-harmonic structures more thoroughly, a periodic two-space and three-velocity dimension (2D3V) magnetized electrostatic particle-in-cell (PIC) (Birdsall and Langdon, 1991) model is used. It is assumed that the background magnetic field is along the z-axis. The long-wavelength pump wave is modeled by the dipole approximation, i.e. E = [E₀ sin(θ_E) ẑ + E₀ cos(θ_E) ŷ] cos(ω₀t). The length along B is 256λ_D, the length perpendicular to B is 512λ_D, and 50 particles per cell per species are used for a sufficiently low noise level. The initial velocity distributions are Maxwellian for both electrons and ions and T_e = T_i. The ion to electron mass ratio m_i/m_e = 200 is sufficient for separating electron and ion timescales and provides reasonable computational efficiency. For scaling purposes, λ_D ∼ 1 cm and v_the ∼ 2 × 10⁵ m s⁻¹ at altitudes of interest.
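To make the pump model concrete, the following minimal single-particle sketch applies the dipole-approximation pump field together with a standard Boris push in a uniform background field along z. The numerical parameters are illustrative placeholders only; the actual model is a full 2D3V PIC code with self-consistent fields, not this single-particle toy.

```python
import numpy as np

# Single test electron driven by E(t) = [E0*sin(thE)*zhat + E0*cos(thE)*yhat]*cos(w0*t)
# in a uniform B = B0*zhat, in normalized units (q = m = 1). Illustrative only.
w0, thE, E0, B0 = 2.05, np.radians(18.0), 0.1, 1.0
dt, nsteps = 0.01, 20000

v = np.zeros(3)
x = np.zeros(3)
B = np.array([0.0, 0.0, B0])
for i in range(nsteps):
    t = i * dt
    E = E0 * np.cos(w0 * t) * np.array([0.0, np.cos(thE), np.sin(thE)])
    v_minus = v + 0.5 * dt * E                 # first half electric kick
    tvec = 0.5 * dt * B                        # Boris rotation vectors
    svec = 2.0 * tvec / (1.0 + tvec @ tvec)
    v_prime = v_minus + np.cross(v_minus, tvec)
    v_plus = v_minus + np.cross(v_prime, svec) # magnetic rotation
    v = v_plus + 0.5 * dt * E                  # second half electric kick
    x = x + v * dt

print("final velocity:", v)
```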
A number of simulations were conducted for a variety of parameter regimes. Two cases are discussed here, for which the pump field is applied during the whole simulation period with v_osc/v_the = 2.8, reasonably consistent with its estimated strength in the interaction region (Bernhardt et al., 2009). The pump frequency is set slightly above 2Ω_ce, i.e. ω₀ = 2Ω_ce + 0.1Ω_ci (note that |ω₀ − 2Ω_ce|/Ω_ce is comparable for both the analytical and simulation models). In the first case θ_E = 18° and in the second θ_E = 24°. Figure 3 shows the time evolution of the electrostatic field energy (W_E = (1/2)ε₀∫|E|² dv, in relative simulation units). Initially, in the E_y energy, the lower hybrid (LH) parametric decay instability (also predicted by Eq. (1), although the growth rate calculation is not shown) is observed, associated with the prominent downshifted maximum (DM) SEE spectral feature. Afterwards, in the E_z field energy, the ion Bernstein parametric decay instability can be seen to develop. Note that the overshoot in the E_y energy is related to the second phase of the growth of the LH parametric decay instability. The energy first increases, then reaches a quasi-equilibrium state, and then again starts to increase until it reaches a saturation state before decaying. This is indicative of the pump being continuously applied. It should be noted that the frequency power spectra during the LH parametric decay phase in the case of both angles show sidebands upshifted and downshifted from the pump frequency by the lower hybrid frequency (not shown). The growth rate from the field energy during the IB/IA parametric decay phase is comparable in both angle cases (γ/Ω_ci ∼ 0.1), but in the θ_E = 24° case it is slightly larger, as qualitatively predicted from Eq. (1).
To consider the power spectrum for comparison with observations, the frequency spectrum of the current density along the magnetic field (z-direction) at a fixed point (i.e. |J_z(ω)|²) is shown in Fig. 4 for θ_E = 18° and θ_E = 24°. Similar to the analytical model predictions, at smaller off-perpendicular angles (θ_E = 18°) the power spectrum of |J_z(ω)|² exhibits discrete structures shifted below the pump frequency by harmonics of the ion gyro-frequency, while at higher off-perpendicular angles (θ_E = 24°) a broadband spectral feature appears and coexists with the lower gyro-harmonics near the pump frequency. The frequency spectrum of the current density across the magnetic field (i.e. |J_y(ω)|²) exhibits sidebands shifted below and above the pump frequency by ω_LH due to the lower hybrid parametric decay instability (Hussein and Scales, 1997). This is believed to produce the downshifted maximum and upshifted maximum emission lines in the SEE spectrum (Leyser, 2001). On these timescales, the amplitude of the current density along and across the magnetic field depends upon the growth and decay of the LH and IB parametric decay instabilities. Thus, the current density is not necessarily strongest along the pump field direction.
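A spectrum such as |J_z(ω)|² can be obtained from the simulated current-density time series with a standard FFT; a minimal sketch follows (the Hann windowing is our own choice to reduce spectral leakage, not necessarily the authors' post-processing):

```python
import numpy as np

def power_spectrum(jz, dt):
    """One-sided power spectrum |J_z(omega)|^2 of a real time series
    jz sampled at interval dt; returns angular frequencies and power."""
    w = np.hanning(len(jz))              # Hann window to reduce leakage
    Jz = np.fft.rfft(jz * w)
    omega = 2 * np.pi * np.fft.rfftfreq(len(jz), d=dt)
    return omega, np.abs(Jz) ** 2
```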
Figure 5 shows the time evolution of the electron kinetic energy, K_e = (1/2)m_e v_e², and also the velocity distribution function along the magnetic field at the end of the simulation. There is significant heating along the magnetic field due to the development of the parametric instabilities. The heating is a result of collisionless damping (i.e. wave-particle heating). For these parameters, there is more heating for θ_E = 18°, which indicates more local heating associated with the gyro-harmonics relative to the broadband oblique IA structure. Less free energy can be seen to go into the total electrostatic field energy in this case. Note that free-streaming electrons along the magnetic field are not considered in the current model, due to the periodic boundary condition. Accounting for them would require refreshing the particles when they leave the system along the magnetic field. This is not considered too great a limitation given the timescale of the simulation, which is primarily intended to capture the basic physics of the parametric decay process. This alteration of the boundary condition will be made in a future version of the simulation model. With the current model, the electron kinetic energies perpendicular to the magnetic field show much less growth in comparison to the parallel component. This behavior is relatively similar for the two angle cases. The significant acceleration of electrons along the magnetic field will have connections to airglow, which is of interest for a spectrum of diagnostics in the heated volume. Correlation of the spectral features with such diagnostics will also be quite useful during future experiments.
Conclusions
In this study, parametric decay of the pump field into UH/EB and neutralized IB waves is proposed as a viable process for the generation of ion gyro-harmonic structures in the SEE spectra. It is found that this process can occur at the upper hybrid altitude, where the electric field is almost perpendicular to the geomagnetic field. Characteristics of the spectrum are predicted to change from discrete harmonics to more broadband behavior, involving parametric decay into the broadband oblique IA mode, depending on the orientation of the pump field relative to the background geomagnetic field, θ_E. During heating experiments, varying the heater antenna beam angle relative to the geomagnetic field is expected to effectively vary this parameter and produce important variations in the SEE spectrum. However, this is just one important parameter that influences and characterizes the SEE gyro-harmonic features and associated nonlinear plasma processes in this frequency range. Other parameters identified here include the pump wave amplitude and the frequency offset relative to the second electron gyro-harmonic. The strongest excited SEE gyro-harmonic line depends on the frequency offset of the pump field relative to the second electron gyro-harmonic. Moreover, in addition to the off-perpendicular angle of the pump field relative to the geomagnetic field, the strength of the electric field also determines whether the oblique IA wave or the neutralized IB modes are excited. For a fixed angle θ_E, for a small pump field strength the neutralized IB modes are excited (producing the discrete SEE spectrum), while for higher pump field intensities the oblique IA mode is excited (producing the broadband spectrum). As θ_E decreases, it becomes more difficult to excite oblique IA modes even with stronger pump powers.
Note that a generation mechanism similar to the one proposed for the broadband feature has been introduced for the downshifted peak (DP) feature observed during heating near the third electron gyro-harmonic (Huang and Kuo, 1995). It is possible that these two features have similar physical characteristics; however, a final conclusion will require comparison experiments at the second and third electron gyro-harmonics. Differences should be noted, however. First, the broadband feature here occurs near 500 Hz, while the DP is near 1.5 kHz, so there is a significant difference in frequency. Both features appear to be enhanced for pump frequencies near the electron gyro-harmonic. From a theoretical standpoint, note that the broadband structure here may have some structuring at ion gyro-harmonic frequencies due to ion cyclotron damping. The work by Huang and Kuo considered only a cold-plasma ion susceptibility, whereas the susceptibility here is fully kinetic, which allows for these physical effects. Other experimental checks predicted by the theory to compare the two lines are as follows: (1) transition from discrete to broadband structure with increasing pump power and (2) transition from discrete to broadband structure with varying beam angle. In summary, due to the considerable diagnostic information available from these two new SEE spectral features and other related SEE spectral features (Sharma et al., 1993; Huang and Kuo, 1995; Tereshchenko et al., 2006), a comprehensive study of the impact of various parameter regimes on the parametric instabilities and further experimental observations are being conducted and will be presented in the future.
Fig. 1. Experimental observations of (a) ion gyro-harmonic structures and (b) simultaneous broadband and ion gyro-structures observed at HAARP while the heater frequency was tuned close to the second electron gyro-harmonic frequency.
Fig. 3. Simulated time evolution of electrostatic field energy parallel (z-field) and perpendicular (y-field) to the background magnetic field, showing development of the lower hybrid (LH) decay instability (y-field) for both cases, the IB decay instability (z-field) for the case θ_E = 18°, and the neutralized IB/oblique IA decay instabilities (z-field) for the case θ_E = 24°.
Fig. 4. Power spectrum of the simulated current density along the magnetic field taken over time ranges corresponding to IB/IA parametric decay, showing discrete ion gyro-harmonic structures for θ_E = 18° and the broadband oblique IA spectral feature for θ_E = 24°.
Fig. 5. Time evolution of parallel electron kinetic energy and corresponding electron velocity distribution functions at the end of the simulation for θ_E = 18° and θ_E = 24°. | 6,158.6 | 2012-07-01T00:00:00.000 | [
"Physics"
] |
Quantum Logic Gate Synthesis as a Markov Decision Process
Reinforcement learning has witnessed recent applications to a variety of tasks in quantum programming. The underlying assumption is that those tasks could be modeled as Markov Decision Processes (MDPs). Here, we investigate the feasibility of this assumption by exploring its consequences for two fundamental tasks in quantum programming: state preparation and gate compilation. By forming discrete MDPs, focusing exclusively on the single-qubit case (both with and without noise), we solve for the optimal policy exactly through policy iteration. We find optimal paths that correspond to the shortest possible sequence of gates to prepare a state, or compile a gate, up to some target accuracy. As an example, we find sequences of $H$ and $T$ gates with length as small as $11$ producing $\sim 99\%$ fidelity for states of the form $(HT)^{n} |0\rangle$ with values as large as $n=10^{10}$. In the presence of gate noise, we demonstrate how the optimal policy adapts to the effects of noisy gates in order to achieve a higher state fidelity. Our work shows that one can meaningfully impose a discrete, stochastic and Markovian nature to a continuous, deterministic and non-Markovian quantum evolution, and provides theoretical insight into why reinforcement learning may be successfully used to find optimally short gate sequences in quantum programming.
Introduction
Recent years have seen dramatic advances in the field of artificial intelligence [1] and machine learning [2,3]. A long-term goal is to create agents that can carry out complicated tasks in an autonomous manner, relatively free of human input. One of the approaches that has gained popularity in this regard is reinforcement learning. This could be thought of as referring to a rather broad set of techniques that aim to solve some task based on a reward mechanism [4]. Formally, reinforcement learning models the interaction of an agent with its environment as a Markov Decision Process (MDP). In many practical situations, the agent may have limited access to the environment, whose dynamics can be quite complicated. In all such situations, the goal of reinforcement learning is to learn or estimate the optimal policy, which specifies the (conditional) probabilities of performing actions given that the agent finds itself in some particular state. On the other hand, in fairly simple environments such as the textbook grid-world scenario [4], the dynamics can be fairly simple to learn. Moreover, the state and action spaces are finite and small, allowing for simple tabular methods instead of more complicated methods that would, for example, necessitate the use of artificial neural networks [3]. In particular, one could use the dynamic programming method of policy iteration to solve for the optimal policy exactly [5].
In recent times, reinforcement learning has met with success in a variety of quantum programming tasks, such as error correction [6], combinatorial optimization problems [7], as well as state preparation [8-12] and gate design [13,14] in the context of noisy control. Here, we investigate the question of state preparation and gate compilation in the context of abstract logic gates, and ask whether reinforcement learning could be successfully applied to learn the optimal gate sequences to prepare some given quantum state, or compile a specified quantum gate. Instead of exploring the efficacy of any one particular reinforcement learning method, we investigate whether it is even feasible to model these tasks as MDPs. By discretizing state and action spaces in this context, we circumvent questions and challenges involving convergence rates, reward sparsity, and hyperparameter optimization that typically show up in reinforcement learning scenarios. Instead, the discretization allows us to exactly solve for and study quite explicitly the properties of the optimal policy itself. This allows us to test whether we can recover optimally short programs using reinforcement learning techniques in quantum programming situations where we already have well-established notions of what those optimally short programs, or circuits, should look like.
There have been numerous previous studies of the general problem of quantum compilation, including but not limited to the Solovay-Kitaev algorithm [15], quantum Shannon decomposition [16], approximate compilation [17,18], as well as optimal circuit synthesis [19-21]. Here, we aim to show that optimally short circuits can be found by solving discrete MDPs, and that these circuits agree with independently calculated shortest possible gate sequences for the same tasks. Since the initial posting of this work, numerous works have continued to explore the interface between classical reinforcement learning and quantum computing. These include finding optimal parameters in variational quantum circuits [22-24], quantum versions of reinforcement learning and related methods [25-29], Bell tests [30], as well as quantum control [31-34], and state engineering and gate compilation [35-40], the subject of this paper.
In such studies, reinforcement learning is employed as an approximate solver of some underlying MDP. This raises the important question of how, and under what conditions, the underlying MDP can be solved exactly, and what solution quality this yields. Naturally, such MDPs can only be solved exactly for relatively small problem sizes. Our paper explores the answer to this question in the context of single-qubit state preparation and gate compilation, and demonstrates the effects of the native gate choice, the coordinate representation, discretization, as well as noise.
The organization of this paper is as follows. We first briefly review the formalism of MDPs. We then investigate the problem of single-qubit state preparation using a discretized version of the continuous {RZ, RY} gates, as well as the discrete gateset {I, H, S, T}. We then study this problem in the context of noisy quantum channels. Finally, we consider the application to the problem of single-qubit compilation into the {H, T} gateset, and show, among other things, that learning the MDP can be highly sensitive to the choice of coordinates for the unitaries.
Brief Review of MDPs
Markov Decision Processes (MDPs) provide a convenient framing of problems involving an agent interacting with an environment. At discrete time steps t, an agent receives a representation of the environment's state s_t ∈ S, takes an action a_t ∈ A, and then receives a scalar reward r_{t+1} ∈ R. The policy of the agent, describing the conditional probability π(a|s) of taking action a given the state s, is independent of the environment's state at previous time steps and therefore satisfies the Markov property. The discounted return that an agent receives from the environment after time step t is defined as

G_t = Σ_{k=0}^∞ γ^k r_{t+k+1},

where 0 ≤ γ < 1 is the discount factor. The goal of the agent is then to find the optimal policy π*(a|s) that maximizes the state-value function (henceforth, "value function" for brevity), defined as the expectation value of the discounted return received from starting in state s_t ∈ S and thereafter following the policy π(a|s), and expressed as

V_π(s) = E_π[G_t | s_t = s].

More formally then, the optimal policy π* satisfies the inequality V_{π*}(s) ≥ V_π(s) for all s ∈ S and all policies π. For finite MDPs, there always exists a deterministic optimal policy, which is not necessarily unique. The value function for the optimal policy is then defined as the optimal value function V_*(s) = max_π V_π(s). The value function satisfies a recursive relationship known as the Bellman equation, relating the value of the current state to that of its possible successor states following the policy π:

V_π(s) = Σ_a π(a|s) Σ_{s′,r} p(s′, r|s, a) [r + γ V_π(s′)].   (1)

Note that the conditional probability p(s′, r|s, a) of finding state s′ and receiving reward r having performed action a in state s specifies the environment dynamics, and also satisfies the Markov property. This equation can be turned into an iterative procedure known as iterative policy evaluation, which converges to the fixed point V_k = V_π in the k → ∞ limit, and can be used to obtain the value function corresponding to a given policy π. In practice, we define convergence as |V_{k+1} − V_k| < ε for some sufficiently small ε. Having found the value function, we could then ask if the policy that produced this value function could be further improved. To do so, we need the state-action value function Q_π(s, a), defined as the expected return from carrying out action a in state s and thereafter following the policy π, i.e.

Q_π(s, a) = Σ_{s′,r} p(s′, r|s, a) [r + γ V_π(s′)].   (2)
According to the policy improvement theorem, given deterministic policies π and π′, if Q_π(s, π′(s)) ≥ V_π(s) for all s ∈ S, then V_{π′}(s) ≥ V_π(s) for all s ∈ S, where π′(s) = a (and in general π′(s) ≠ π(s)). In other words, having found the state-value function corresponding to some policy, we can then improve upon that policy by iterating through the action space A while evaluating the next-step state-action values on the right-hand side of Eq. (2), to find a better policy than the current one (the greedy algorithm for policy improvement).
We can then alternate between policy evaluation and policy improvement in a process known as policy iteration to obtain the optimal policy [4]. Schematically, this process involves evaluating the value function for some given policy up to some small convergence factor, followed by improvement of the policy that produced this value function. The process terminates when the improved policy stops differing from the policy in the previous iteration. Of course, this procedure for identifying the optimal policy of an MDP relies on the finiteness of the state and action spaces. As we will see below, by discretizing the space of 1-qubit states (i.e. the surface and interior of the Bloch sphere corresponding to pure and mixed states), as well as identifying a finite gate set, we create an MDP with the goal of state preparation for which optimal policies in the form of optimal (i.e. shortest) quantum circuits may be found through this method.
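A minimal tabular implementation of policy iteration for a finite MDP of this kind might look as follows; the array layout and the placement of the reward on the successor state are our own conventions, chosen to match the reward structure used later.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.8, tol=1e-8):
    """Exact policy iteration for a finite MDP.
    P has shape (A, S, S) with P[a, s, s2] = p(s2 | s, a);
    R has shape (S,) and is the reward collected on landing in a state."""
    A, S, _ = P.shape
    pi = np.zeros(S, dtype=int)            # arbitrary initial deterministic policy
    while True:
        # --- policy evaluation: iterate the Bellman equation to a fixed point
        V = np.zeros(S)
        while True:
            V_new = np.array([P[pi[s], s] @ (R + gamma * V) for s in range(S)])
            if np.max(np.abs(V_new - V)) < tol:
                break
            V = V_new
        # --- policy improvement: act greedily w.r.t. the current value function
        Q = np.einsum('ast,t->sa', P, R + gamma * V)   # Q[s, a]
        pi_new = np.argmax(Q, axis=1)
        if np.array_equal(pi_new, pi):
            return pi, V                   # policy stopped changing: optimal
        pi = pi_new
```

For the target state with a self-loop under the identity action, this recovers V(t) = R(t) + γV(t) = (1 − γ)⁻¹, as discussed below.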
We note that one could view state evolution under unitary operations, or left multiplication of unitaries by other unitaries, as deterministic processes. These could be thought of as trivially forming a Markov Decision Process where the probabilities p(s′|s, a) have δ-function support on some (point-like) state s′. Once we impose discretization, this underlying determinism implies that the dynamics of the discrete states are, strictly speaking, non-Markovian: the conditional probability of landing in some discrete state s′ depends not just on the previous discrete state and action, but also on all the previous states and actions, since the underlying continuous/point-like state evolves deterministically. However, we shall see below that, with sufficient care, both the tasks of state preparation and gate compilation can be modeled and solved as MDPs even with discretized state spaces.
Preparation of single-qubit states
In this section, we will discuss the preparation of single-qubit states as an MDP. In particular, we will focus on preparing a discrete version of the |1⟩ state. We will do so using two different gatesets, a discretized version of the continuous RZ and RY gates, and the set of naturally discrete gates I, H, S and T, and describe probabilistic shuffling within discrete states to arrive at optimal quantum programs via optimal policies. We will also consider states of the form (HT)^n|0⟩.
State and Action Spaces
We apply a fairly simple scheme for the discretization of the space of pure 1-qubit states. As is well known, this space has a one-to-one correspondence with points on a 2-sphere, commonly known as the Bloch sphere. With θ ∈ [0, π] denoting the polar angle and φ ∈ [0, 2π) denoting the azimuthal angle, an arbitrary pure 1-qubit state can be represented as

|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩.

The discretization we adopt here is as follows. First, we fix some small number ε = π/k for some positive integer k. Next, we identify polar caps around the north (θ = 0) and south (θ = π) poles. The northern polar cap is identified as the set of all 1-qubit (pure) states for which θ < ε, regardless of the value of φ. Similarly, the southern polar cap is identified as the set of all 1-qubit (pure) states for which θ > π − ε, independent of φ. Apart from these special regions, the set of points nε ≤ θ ≤ (n+1)ε and mε ≤ φ ≤ (m+1)ε for some positive integers 1 ≤ n ≤ k−2 and 0 ≤ m ≤ 2k−1 is identified as the same region. The polar caps thus correspond to n = 0 and n = k−1, respectively. We identify every region (n, m) as a "state" in the MDP. As a result of this identification, elements of the space of 1-qubit pure states are mapped onto a discrete set of states such that the 1-qubit states can now only be identified up to some threshold fidelity. For instance, the |0⟩ state is identified as the northern polar cap with fidelity cos²(π/2k). Similarly, the |1⟩ state is identified with the southern polar cap with fidelity sin²((k−1)π/2k) = cos²(π/2k). In other words, if we were to try to obtain these states using this scheme, we would only be able to obtain them up to these fidelities.
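A minimal sketch of this binning, mapping Bloch angles to a discrete MDP state (the clamping convention at θ = π is our own choice):

```python
import numpy as np

def discrete_state(theta, phi, k=16):
    """Map Bloch-sphere angles to the discrete MDP state (n, m).
    Polar caps (n = 0 and n = k-1) are independent of phi."""
    eps = np.pi / k
    n = min(int(theta // eps), k - 1)           # clamp theta = pi into the cap
    m = 0 if n in (0, k - 1) else int((phi % (2 * np.pi)) // eps)
    return n, m
```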
Having identified a finite state space S composed of discrete regions of the Bloch sphere, we next identify single-qubit unitary operations, or gates, as the action space A. There are some natural single-qubit gatesets that are already discrete, such as {H, T}. Others, such as the continuous rotation gates {RZ, RY}, require a discretization similar to that of the continuous state space of the Bloch sphere. We discretize the continuous gates RZ(β) and RY(γ) by discretizing the angles β, γ ∈ [0, 2π]. The resolution δ = π/l must be sufficiently smaller than that of the state space, ε = π/k, so that all states s ∈ S are accessible from all others via the discretized gateset a ∈ A. In practice, a ratio of ε/δ ∼ O(10) is usually sufficient, although the larger this ratio, the better the optimal circuits we would find.
Without loss of generality, and for illustrative purposes, we identify the discrete state corresponding to the |1⟩ state (hereafter referred to as the "discrete |1⟩ state") as the target state of our MDP. To prepare the |1⟩ state starting from any pure 1-qubit state using the gates RZ and RY, it is well known that we require at most a single RZ rotation followed by a single RY rotation. For states lying along the great circle through the x and z axes, we need only a single RY rotation. As a test of this discretized procedure, we investigate whether solving this MDP reproduces such optimally short gate sequences. We also consider the gateset {I, H, T} below, where we include the identity gate to allow the goal state to "do nothing" and remain in its state. For simplicity and illustrative purposes, we also include the S = T² gate in the case of single-qubit state preparation.
Reward Structure and Environment Dynamics
An obvious guess for a reward would be the fidelity |⟨φ|ψ⟩|² between the target state |ψ⟩ and the prepared state |φ⟩. However, here we consider an even simpler reward structure of assigning +1 to the target state, and 0 to all other states. This allows us to directly relate the length of optimal programs to the value function corresponding to the optimal policy, as we show below.
To finish our specification of the MDP, we also estimate the environment dynamics p(s′, r|s, a). Since our reward structure assigns a unique reward r to every state s ∈ S, these conditional probabilities reduce to simply p(s′|s, a). The discretization of the Bloch sphere implies that the action of a quantum gate a on a discrete state s = (n, m) maps this state to other states s′ = (n′, m′) according to a transition probability distribution p(s′|s, a). This non-determinism of the effect of the actions occurs because the discrete states are themselves composed of entire families of continuous quantum states, which are themselves mapped deterministically to other continuous quantum states. However, continuous states from the same discrete state region can land in different discrete final-state regions. A simple way to estimate these probabilities is to uniformly sample points on the 2-sphere, determine which discrete state they land in, then perform each of the actions to determine the state resulting from this action. We sample uniformly across the Bloch sphere by sampling u, v ∼ U[0, 1], then setting θ = cos⁻¹(2u − 1) and φ = 2πv. Although other means of estimating these probabilities exist, we find that this simple method works well in practice for the particular problem of single-qubit state preparation.

Figure 1: Optimal values for various states on the Bloch sphere using the discrete RZ and RY gates, with a discount factor γ = 0.8. The color of a state corresponds to its optimal value function V_{π*}, where lighter colors indicate a larger value. Those colored in green are also exactly the states whose optimal circuits to prepare the discrete |1⟩ state consist of a single RY rotation, while those in blue are exactly the ones whose optimal circuits consist of an RZ rotation followed by an RY rotation.
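A sketch of this Monte-Carlo estimate of p(s′|s, a), reusing the discrete_state helper from the sketch above (the sample count and data layout are our own choices):

```python
import numpy as np

def estimate_transitions(gates, n_samples=200_000, k=16):
    """Monte-Carlo estimate of p(s' | s, a): sample the Bloch sphere
    uniformly, bin each point, apply each gate, and re-bin the result.
    `gates` is a list of 2x2 unitaries; returns a dict of raw counts."""
    counts = {}
    for _ in range(n_samples):
        u, v = np.random.rand(2)
        theta, phi = np.arccos(2 * u - 1), 2 * np.pi * v
        psi = np.array([np.cos(theta / 2),
                        np.exp(1j * phi) * np.sin(theta / 2)])
        s = discrete_state(theta, phi, k)
        for a, U in enumerate(gates):
            out = U @ psi
            # Recover Bloch angles of the output state (global phase dropped).
            th2 = 2 * np.arccos(np.clip(np.abs(out[0]), 0.0, 1.0))
            ph2 = (np.angle(out[1]) - np.angle(out[0])) % (2 * np.pi)
            s2 = discrete_state(th2, ph2, k)
            counts[(s, a, s2)] = counts.get((s, a, s2), 0) + 1
    return counts   # normalize over s2 for each (s, a) to obtain p(s'|s,a)
```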
Note that for the target state |1⟩, the optimal policy is to just apply the identity, i.e. RZ(0) or RY(0). This action will keep the state in the target state, while yielding +1 reward at every time step. This yields an infinite series V(t) = Σ_{k=0}^∞ γ^k, where V(s) := V_{π*}(s) and t is the target state, which we can trivially sum to obtain (1 − γ)⁻¹. This is the highest value of any state on the discretized Bloch sphere. For γ = 0.8, we obtain V(t) = 5.0. For some generic state s ∈ S, we can show that with our reward structure, the optimal value function is given by

V(s) = Σ_{k=0}^∞ γ^k (P^k)_{t,s},   (4)

where the elements of the matrix P are given by P_{s′,s} = p(s′|s, π*(s)). From Eq. (4), it immediately follows that V(s) ≤ V(t) for all s ∈ S. The Markov chain produced by the optimal policy has an absorbing state given by the target state, and after some large enough number of steps, all (discrete) states land in this absorbing state. Indeed, the smallest K for which the Markovian process converges to a steady state, such that

(P^K)_{s′,s} = (P^{K+1})_{s′,s} for all s, s′ ∈ S,   (5)

provides an upper bound for the length of the gate sequence that leads from any one discrete state s to the target discrete state t. Thus, for the target state itself, K = 0. Since (P^k)_{t,s} ≤ 1, for states s₁ that are one gate removed from the target state, we have V(s₁) ≤ V(t), and more generally V(s_{k+1}) ≤ V(s_k). This intimately relates the length of the optimal program to the optimal value function.
The optimal value landscapes for the two gatesets are shown in Figs. 1 and 2. Note that while in the case of the discretized {RZ, RY} gates we have a distinguished ring of states along the great circle about the y-axis that are only a single gate application away from the target state, we have no such continuous patch on the Bloch sphere for the {I, H, S, T} gateset, even though there may be individual (continuous) states that are only a single gate application away from the target state, e.g. H|1⟩ for the target state |1⟩. This shows that states which are nearby on the Bloch sphere need not share similar optimal paths to the target state, given such a gateset.
Optimal State Preparation Sequences
Using policy iteration allows for finding the optimal policy of an MDP. The optimal policy dictates the best action to perform in a given state. We can chain the actions drawn from the optimal policy together to find an optimal sequence of actions, or gates, to reach the target state. In our case, the actions are composed of unitary operations, which deterministically evolve a quantum state (note that we consider noise below, in which case the unitary gates are replaced by non-unitary quantum channels). However, due to the discretization, this is no longer true in our MDP, where the states evolve according to the non-trivial probabilities p(s′|s, a). The optimal policy is learned with respect to these stochastic dynamics, and not with respect to the underlying deterministic dynamics. In other words, we are imposing a Markovian structure on essentially non-Markovian dynamics. Therefore, if we simply start with some specific quantum state, and apply a sequence of actions drawn from the optimal policy of the discrete states that the evolving quantum states belong to, we might not necessarily find ourselves in the target (discrete) state. For instance, the optimal policy in one discrete state may be to apply the Hadamard gate, and for a subset of quantum states within that discrete state, this may lead to another discrete state for which the optimal policy is again the Hadamard gate. In such a case, the evolution would be stuck in a loop.
To circumvent this issue, in principle one may allow "shuffling" of the quantum states within a particular discrete state before evolving them under the optimal policy. However, this may increase the length of the gate sequence and moreover lead to poorer bounds on the fidelity, since in general the shuffled sequence U_N U_s^{(N)} ⋯ U_1 U_s^{(1)}|ψ⟩ differs from U_N ⋯ U_1|ψ⟩, where the "shuffling" transformations are given by U_s^{(i)}: |ψ⟩ → |ψ̃⟩ such that |ψ⟩ ∼ |ψ̃⟩ belong to the same discrete state, while the U_i specify (unitary) actions sampled from the optimal policy. On the other hand, without such "shuffling", the fidelity between target states reached by sequences that only differ in their starting states is the same as the fidelity between the starting states, i.e.

|⟨ψ_i|U†U|ψ′_i⟩|² = |⟨ψ_i|ψ′_i⟩|²,

where |ψ_i⟩ and |ψ′_i⟩ are two different initial pure states belonging to the same initial discrete state, and U = Π_i U_i is the product of the optimal-policy gates U_i.
To avoid such shuffling while still producing convergent paths, we sample several paths that lead from the starting state and terminate in the target (discrete) state, discarding sequences that are longer than some acceptable value, e.g. the length K defined by Eq. (5), and report the one with the smallest length as the optimal (i.e. shortest) program. Schematically, this can be described in pseudo-code as in Algorithm 1 (sketched below). This algorithm can be used to generate optimal programs for any given (approximately universal) single-qubit gateset. In our experiments, we found M to be 2 for the discrete {RZ, RY} gateset and 88 for the {I, H, S, T} gateset, and took K to be 100.
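A minimal sketch in the spirit of Algorithm 1; the callable interfaces start and step and the default path/length budgets are hypothetical placeholders, not the authors' exact pseudo-code.

```python
def optimal_program(start, target, policy, step, K=100, n_paths=50):
    """Sample rollouts following the learned optimal policy and keep the
    shortest one that reaches the target discrete state within K gates.

    start()      -- samples a continuous state in the initial bin,
                    returning (psi, s) with s its discrete label
    policy[s]    -- gate index chosen by the optimal policy in state s
    step(psi, a) -- applies gate a to psi, returning (psi', s')
    """
    best = None
    for _ in range(n_paths):
        psi, s = start()
        prog = []
        while len(prog) < K and s != target:
            a = policy[s]
            prog.append(a)
            psi, s = step(psi, a)
        if s == target and (best is None or len(prog) < len(best)):
            best = prog
    return best
```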
Discrete RZ and RY gateset
In the case of discrete RZ and RY gates, we find what we would expect: at most a single RZ rotation followed by a single RY rotation to get from anywhere on the Bloch sphere to the (discrete) |1⟩ state. For (discrete) states lying along the great circle about the y-axis, we need only apply a single RY rotation. Empirically, we choose a state resolution of ε = π/16, so that we find sequences generating the pure |1⟩ state from various discrete states across the Bloch sphere with cos²(π/32) ∼ 99% fidelity. The optimal programs we find via the preceding procedure for this gateset have lengths of either 1 or 2.
Discrete {I, H, T } gateset
We can also use the procedure described above to obtain approximations to the states (HT)^n|0⟩ for integers n ≥ 1. The unitary HT can be thought of as a rotation by an angle θ = 2 arccos(cos(7π/8)/√2) about a fixed axis n = (n_x, n_y, n_z). The angle θ has an infinite continued fraction representation (in Gaussian notation, with ⌊x⌋ the flooring operation mapping x to the largest integer n ∈ Z with n ≤ x), and is thus irrational. The states (HT)^n|0⟩ lie along an equatorial ring about the axis n, and no two states (HT)^n|0⟩ and (HT)^m|0⟩ are equal for n ≠ m. Increasing the value of n corresponds to preparing states from among a finite set of states that span the equatorial ring about the axis of rotation. We choose to investigate state preparation up to n = 10¹⁰. Although, as their form makes explicit, these states can be reproduced exactly using n applications of the H and T gates, using our procedure they can only be obtained up to some fidelity controlled by the discretization, as described above. The advantage is that we can obtain good approximations to these states with far fewer gates than n. This is illustrated in Table 1, where short gate sequences (between 3 and 17 gates) reproduce states of the form (HT)^n|0⟩ for very large values of n.
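Targets of the form (HT)^n|0⟩ for astronomically large n can be computed cheaply by diagonalizing HT once; a minimal sketch follows, where the candidate sequence passed to fidelity is a hypothetical placeholder rather than an actual entry of Table 1.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1.0, np.exp(1j * np.pi / 4)])

# Diagonalize HT once; then (HT)^n reduces to multiplying eigenphases by n,
# so even n = 10**10 is cheap and numerically stable.
w, V = np.linalg.eig(H @ T)
Vinv = np.linalg.inv(V)

def HT_power_on_zero(n):
    phases = np.exp(1j * n * np.angle(w))   # eigenvalues lie on the unit circle
    return V @ (phases * (Vinv @ np.array([1.0, 0.0])))

def fidelity(seq, n):
    """|<(HT)^n 0| U_seq |0>|^2 for a gate string such as 'HTTH...'."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for g in seq:
        psi = (H if g == 'H' else T) @ psi
    return np.abs(np.vdot(HT_power_on_zero(n), psi)) ** 2

# Hypothetical candidate sequence, for illustration only:
print(fidelity('HTHTHTHTHTH', 10**10))
```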
Noisy state preparation
Reinforcement learning has previously shown success when applied in the presence of noise [13,14]. Indeed, the ability to learn the effects of a noise channel has apparent practical use when applied to the current generation of noisy quantum computers. These devices are often plagued by errors that severely limit the depth of quantum circuits that can be executed. As full error correction procedures are too resource-intensive to be implemented on current hardware, error mitigation methods have been developed to decrease the effect of noise [41-44]. However, there are also pre-processing error mitigation schemes that aim to modify the input circuit in order to reduce the impact of noise. Examples are quantum optimal control methods and dynamical decoupling [45-47]. Such techniques attempt to prepare a desired quantum state on a noisy device using circuits (or sequences of pulses) that are different from the ones that would be optimal in the absence of noise. This idea is immediately applicable in our MDP framework, as we now demonstrate.
State and Action Spaces
In the presence of noise, the quantum state becomes mixed and is described by a density matrix, which for a single qubit can generally be written as

ρ = (1/2)(𝟙 + r·σ),

where σ = (X, Y, Z) is the vector of Pauli matrices. Here, r = (r_x, r_y, r_z) are real coefficients called the Bloch vector, which can be parametrized as

r = (r sin θ cos φ, r sin θ sin φ, r cos θ),   (10)

where r ≡ |r| ∈ [0, 1], θ ∈ [0, π], and φ ∈ [0, 2π).
We perform the state discretization analogously to the previous section, but now need to discretize states within the full Bloch ball. To this end, we fix ε = π/k and δ = 1/k for some positive integer k. Now the set of points nε ≤ θ ≤ (n+1)ε, mε ≤ φ ≤ (m+1)ε, and lδ ≤ r ≤ (l+1)δ for integers 1 ≤ n ≤ k−2, 0 ≤ m ≤ 2k−1, and 0 ≤ l ≤ k−1 constitutes the same discrete state s = (n, m, l) in the MDP. As before, the polar regions n = 0, k−1 are special, as these regions are independent of φ, i.e. they are described by the set of integers s = (n, m = 0, l). This discretization corresponds to nesting concentric spheres and setting the discrete MDP states s to be the 3-dimensional regions between them.
Let us now introduce the action space A in the presence of noise. We model noisy gates as a composition of a unitary gate U and a noisy quantum channel described by a set of Kraus operators {A_k}, with E(ρ) = Σ_k A_k ρ A_k†. Application of a noisy quantum channel can shrink the magnitude r of the Bloch vector as the state becomes more mixed. Evolution under a unitary gate U followed by this noisy channel results in

ρ → Σ_k A_k U ρ U† A_k†.

We here again consider the discrete gateset U ∈ {I, H, T}. Once we specify the type of noise via a set of Kraus operators, its sole effect on our description of the MDP is to change the transition probability distributions p(s′|s, a). While noise can change the optimal policies, we may nevertheless solve for them using the exact same procedure that we used in the noiseless case. In the following, we compare the resulting shortest gate sequences found by an agent trained using the noisy transition probabilities p to those found by an agent lacking knowledge of the noise channel. The noise observed in current quantum computers is, to a good approximation, described by amplitude damping and dephasing channels. The amplitude damping channel is described by the two Kraus operators

A₀ = [[1, 0], [0, √(1−γ)]],   A₁ = [[0, √γ], [0, 0]],

with 0 ≤ γ ≤ 1. Physically, we can interpret this channel as causing a qubit in the |1⟩ state to decay to the |0⟩ state with probability γ. In current quantum computing devices, the relaxation time, T₁, describes the timescale of such decay processes. For a given T₁ time and a characteristic gate execution time τ_g, we parametrize

γ = 1 − e^{−τ_g/T₁}.   (14)

Note that application of the amplitude damping channel also leads to dephasing of the off-diagonal elements of the density matrix in the Z basis, with timescale T₂ = 2T₁.
The dephasing channel takes the form E(ρ) = (1 − p)ρ + p ZρZ and is described by the Kraus operators A_0 = √(1 − p) I and A_1 = √p Z. This channel leads to pure phase damping of the off-diagonal terms of the density matrix in the Z basis. It is described by a dephasing time, T_2. We use Pyquil [48] (specifically the function damping_after_dephasing from pyquil.noise) to construct a noise channel consisting of a composition of these two noise maps by specifying the T_1 and T_2 times as well as the gate duration τ_g. The amplitude damping parameter γ is set by T_1 using Eq. (14). Since amplitude damping also results in phase damping of the off-diagonal terms of the density matrix (in the Z basis), the dephasing time T_2 is bounded above by 2T_1. We thus parametrize the dephasing channel parameter (describing any additional pure dephasing) as p = (1/2)[1 − e^(−τ_g (1/T_2 − 1/(2T_1)))]. (16) The dephasing channel thus describes dephasing leading to T_2 < 2T_1 and acts trivially if T_2 is at its upper bound T_2 = 2T_1. In the following, we consider T_1 = T_2, such that the dephasing channel acts non-trivially on the quantum state.
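The combined channel can be sketched by composing the two Kraus maps, as below. The paper does this with pyquil.noise.damping_after_dephasing; this standalone version simply assumes the parametrizations of Eqs. (14) and (16) and reuses amplitude_damping_kraus from the sketch above.

    import numpy as np

    def dephasing_kraus(p):
        """Dephasing channel: A_0 = sqrt(1 - p) I, A_1 = sqrt(p) Z."""
        return [np.sqrt(1 - p) * np.eye(2, dtype=complex),
                np.sqrt(p) * np.diag([1, -1]).astype(complex)]

    def combined_noise_kraus(T1, T2, tau_g):
        """Compose amplitude damping (Eq. 14) with pure dephasing (Eq. 16)."""
        gamma = 1 - np.exp(-tau_g / T1)
        p = 0.5 * (1 - np.exp(-tau_g * (1 / T2 - 1 / (2 * T1))))
        return [B @ A for A in amplitude_damping_kraus(gamma)
                for B in dephasing_kraus(p)]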
Reward Structure and Environment
For the reward structure of our noisy state preparation, we consider the purity of the state when calculating the reward. This is to account for the fact that there may be no gate sequence that results in a state with a high enough purity to land in the pure goal state. As such, assigning a reward of +1 to the pure target state and 0 to all other states can lead to poor convergence. We assign the reward as follows: the pure target state ρ_target is one of the MDP states (n_target, m_target, k − 1), where r = 1 and thus l = k − 1. Consider a state ρ′ in the MDP state (n′, m′, l′). If n′ = n_target and m′ = m_target, we assign a reward of l′/k. Otherwise, we assign a reward of 0. With this construction, we reward reaching a state in the target direction (i.e., with correct angles θ, φ) using a reward amount that is proportional to the purity of the state. We thus also reward gate sequences that do not end up in the pure goal state, while still encouraging states with higher purity. One can expect the optimal value function for a noisy evolution to take on smaller values because the rewards are smaller. Indeed, it can easily be verified that for a simplified noise model consisting of only a depolarizing quantum channel E(ρ) = (1 − p)ρ + (p/3)(XρX + YρY + ZρZ), the resulting optimal value function is simply uniformly shrunk compared to the optimal value function of a noiseless MDP. Since the change is uniform across all values, the optimal policy is unchanged from the noiseless setting. This is no longer the case for realistic noise models such as those described by amplitude damping and dephasing quantum channels, in which case we rederive the optimal policy using policy iteration.
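In code, the purity-weighted reward described above reduces to a few lines; this is a direct transcription of the rule in the text, with names of our own choosing.

    def noisy_reward(state, target, k):
        """Reward l/k in the target direction (correct theta, phi), else 0."""
        n, m, l = state
        n_t, m_t, _ = target  # the target has l = k - 1 (pure state)
        return l / k if (n == n_t and m == m_t) else 0.0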
This requires updating the conditional probability distributions p(s′|s, a) by performing Monte-Carlo simulations as before: drawing random continuous states from within a given discrete state, applying deterministic noisy gates, and recording the obtained discrete states. Now we find transitions to states s′ with lower purity than the initial state s. Note that the randomness of the probability distribution p arises solely from the randomly sampled continuous quantum states to which we apply the noisy gate actions. The randomness due to noise is fully captured within the mixed-state density matrix description of quantum states.
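A sketch of this Monte-Carlo estimation is shown below. It reuses discretize and apply_noisy_gate from the earlier sketches; sample_rho_in, which draws a random density matrix from within a discrete Bloch-ball region, is a hypothetical helper.

    from collections import Counter

    def estimate_transitions(s, U, kraus_ops, k, n_samples=1000):
        """Monte-Carlo estimate of p(s'|s, a) for a noisy gate action."""
        counts = Counter()
        for _ in range(n_samples):
            rho = sample_rho_in(s, k)  # hypothetical sampler for region s
            s_next = discretize(apply_noisy_gate(rho, U, kraus_ops), k)
            counts[s_next] += 1
        return {sp: c / n_samples for sp, c in counts.items()}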
Optimal Noisy State Preparation Sequences
We now consider the task of approximating states of the form (HT)^n |0⟩ for n ≥ 1, starting from the state |0⟩, using the gate set {I, H, T} in the presence of noise. We use the MDP formulation with transition probabilities p(s′|s, a) obtained in the presence of amplitude damping and dephasing noise channels. We find the optimal policy using policy iteration, which yields optimal gate sequences via Algorithm (1).
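For reference, a generic tabular policy iteration of the kind used here can be written in a few lines. This is a textbook sketch (with the reward a function of the current state), not the paper's exact implementation.

    import numpy as np

    def policy_iteration(P, R, gamma=0.95):
        """P[a] is an (S, S) matrix of p(s'|s, a); R is an (S,) reward vector."""
        n_actions, n_states = len(P), len(R)
        policy = np.zeros(n_states, dtype=int)
        while True:
            # Policy evaluation: solve V = R + gamma * P_pi V exactly.
            P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
            V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
            # Policy improvement: act greedily with respect to V.
            Q = np.stack([R + gamma * (P[a] @ V) for a in range(n_actions)])
            new_policy = Q.argmax(axis=0)
            if np.array_equal(new_policy, policy):
                return policy, V
            policy = new_policy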
In Table 2, we present results up to n = 10, including the shortest gate sequences found by the optimal policies of the noisy and noiseless MDPs. We also compare the final state fidelities produced by these optimal circuits. The fidelities F listed in the table are found by applying the optimal gate sequences for a given n to the exact state |0⟩. Since the resulting states are mixed, we calculate the fidelity between the target state σ and the state ρ resulting from an optimal gate sequence as F(ρ, σ) = (Tr √(√σ ρ √σ))². We list both the gate sequences found by the noiseless MDP, i.e., the agent whose underlying probability distribution p(s′|s, a) is constructed from exact unitary gates, and those found by the noisy MDP, whose transition probabilities are generated from noisy gates considering combined amplitude damping and dephasing error channels. We set the relaxation and dephasing times to T_1 = T_2 = 1 µs and the gate time to τ_g = 200 ns. While the value for τ_g is typical for present-day superconducting NISQ hardware, the values of T_1, T_2 are about two orders of magnitude shorter than typical values on today's NISQ hardware, where T_1, T_2 ≈ 100 µs. We choose such stronger noise values in order to highlight the difference in gate sequences (and resulting fidelities) produced by the optimal policies π* for noisy and noiseless MDPs. We expect that this result is generic and robust when considering MDPs for multiple qubits, where two-qubit gate errors are expected to lead to more pronounced noise effects.
The results in Table 2 demonstrate that even in the presence of (strong) noise, the noisy MDP is able to provide short gate sequences that approximate the target state reasonably well. Importantly, for all values of n shown (except for n = 4), the optimal policy of the noisy MDP, π*_noisy, yields a gate sequence that results in a higher fidelity than the gate sequence obtained from π*_noiseless of the noiseless MDP (if applied in the presence of noise). This shows that noise can be mitigated by adapting the gate sequence according to the noise experienced by the qubit. Solving for the optimal policy of a noisy MDP is a convenient approach to finding such adapted quantum circuits.
In Fig. 3 we compare the gate sequences and fidelities obtained from the optimal policies of noisy and noiseless MDPs for a fixed value of n = 7 as a function of T_1 = T_2. We observe the noisy MDP to outperform the noiseless MDP for all noise strengths. This indicates that by learning the noise channel, the agent can adapt to the noise and find gate sequences that yield higher fidelities in that channel. Note that if we applied the gate sequences found by the noisy MDP in a noiseless setting, they would yield lower fidelities than gate sequences produced by the optimal policy of a noiseless MDP. Based on these results, we conclude that dynamic programming and reinforcement learning methods provide a powerful and generic way to perform pre-processing error mitigation by identifying optimal gate sequences for qubit state preparation in the presence of noise. Future work should be directed towards exploring these approaches for two and more coupled qubits.
Compilation of single-qubit gates
In the previous sections, we considered an agent-environment interaction in which we identified Hilbert space as the state space and the space of SU(2) gates as the action space. Shifting our attention to the problem of quantum gate compilation, we now identify both the state and action spaces with the space of SU(2) matrices, where for convenience we ignore an overall U(1) phase from the true group of single-qubit gates U(2). We first consider an appropriate coordinate system to use and discuss why quaternions are better suited to this task than Euler angles. We focus exclusively on the gate set {I, H, T} and modify the reward structure slightly, so that we now have to work with the probabilities p(s′, r|s, a) instead of the simpler p(s′|s, a) of the previous section. We present empirical results for a few randomly chosen (special) unitaries.
Coordinate system
We consider the gate set {I, H, T}. We include the identity in our gate set since we would like the target state to possess the highest value and have the agent do nothing in the target state under the optimal policy. Because we would like to remain in the space of SU(2) matrices, we define H = RY(π/2)RZ(π), which differs from the usual definition by an overall factor of i, and T = RZ(π/4). Note that owing to our alternative gate definitions, we have H² = T⁸ = −1 ≠ 1, so that we may obtain up to 3 and 15 consecutive applications of H and T, respectively, in the optimal program. Next, we choose an appropriate coordinate system. One choice is to parametrize an arbitrary U ∈ SU(2) using the ZYZ-Euler angle decomposition, U = RZ(α)RY(β)RZ(γ), for some angles α, β, and γ. Note that for β = 0, we have a continuous degeneracy of choices in α and γ to specify some RZ(δ) with α + γ = δ. However, the transformations above will conventionally fix this to α = γ = δ/2. Under the action of T, i.e., T : U → U′ = TU = RZ(α′)RY(β′)RZ(γ′), or equivalently T : (α, β, γ) → (α′, β′, γ′), the ZYZ coordinates transform rather simply as α′ = α + π/4, β′ = β, γ′ = γ. Under a similar action of H, however, the coordinates (through the matrix entries on which they depend) transform nontrivially. This is a non-volume-preserving operation: the determinant of the Jacobian J of the transformation from (α, β, γ) to (α′, β′, γ′) under the action of H diverges for values of α and β such that cos(α) sin(β) = ±1. This implies that for such pathological values, a unit hypercube in the discretized (α, β, γ) space gets mapped to a region that covers indefinitely many unit hypercubes in the discretized (α′, β′, γ′) space. In turn, this means that a single state s gets mapped to an unbounded number of possible states s′, causing p(s′|s, a = H) to be arbitrarily small. This may prevent the agent from recognizing an optimal path to valuable states, since even if the quantity (r + γV^π(s′)) is particularly large for some states s′, this quantity gets multiplied by the negligible factor p(s′|s, a = H) and therefore contributes very little in an update rule such as Eq. (2). These problems can be overcome by switching to quaternions as our coordinate system. Unlike the ZYZ-Euler angles, the space of quaternions is in one-to-one correspondence with SU(2). Writing U ∈ SU(2) as U = aI − i(bX + cY + dZ) with a² + b² + c² + d² = 1, the corresponding quaternion is given simply as q = (a, b, c, d). Under the actions of T and H, the components of q transform by left multiplication with the quaternions representing T and H, respectively. Since left multiplication by a unit quaternion is a rotation of R⁴, det(J^(T)) = det(J^(H)) = 1 for the Jacobians associated with both transformations, so these operations are volume-preserving in this coordinate system. In turn, this implies that a hypercube with unit volume in the discretized quaternionic space gets mapped to a region with unit volume.
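The SU(2)-quaternion correspondence and the gate action can be made concrete as follows. The sketch assumes the convention U = aI − i(bX + cY + dZ), which is one standard choice and may differ from the paper's in signs; under any such convention the gate action is quaternion left-multiplication and hence volume-preserving. It reuses the Pauli matrices X, Y, Z defined in an earlier sketch.

    import numpy as np

    def RY(t): return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * Y
    def RZ(t): return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * Z

    def su2_to_quaternion(U):
        """q = (a, b, c, d) for U = a I - i (b X + c Y + d Z)."""
        return np.real(np.array([np.trace(U), np.trace(1j * X @ U),
                                 np.trace(1j * Y @ U), np.trace(1j * Z @ U)])) / 2

    def qmul(p, q):
        """Hamilton product; left multiplication by a unit quaternion rotates R^4."""
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return np.array([a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
                         a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
                         a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
                         a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2])

    H_SU2, T_SU2 = RY(np.pi / 2) @ RZ(np.pi), RZ(np.pi / 4)
    q_H, q_T = su2_to_quaternion(H_SU2), su2_to_quaternion(T_SU2)
    # Check: quaternion multiplication reproduces SU(2) matrix multiplication.
    U = RZ(0.3) @ RY(1.1) @ RZ(-0.7)
    assert np.allclose(su2_to_quaternion(T_SU2 @ U),
                       qmul(q_T, su2_to_quaternion(U)))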
For the purposes of the learning agent, this means that the total number of states that can result from acting with either T or H is bounded above. Suppose we choose our discretization such that the grid spacing along each of the 4 axes of the quaternionic space is the same. Then, since a d-dimensional hypercube can intersect with at most 2^d equal-volume hypercubes, a state s can be mapped to at most 16 possible states s′. While this is certainly better than the pathological case we noted previously using the ZYZ-Euler angles, one could ask whether it is possible to do better and design a coordinate system such that a state gets mapped to at most one other state.
One possible approach to make the environment dynamics completely deterministic is to consider a discretization q = (n_1∆, n_2∆, n_3∆, n_4∆), where n_1, n_2, n_3, n_4 ∈ Z, choose ∆ such that the transformed quaternion can also be described similarly as q′ = (n′_1∆, n′_2∆, n′_3∆, n′_4∆), and try to ensure that n′_1, n′_2, n′_3, n′_4 are also integers. Essentially, this would mean that corners of hypercubes map to corners of hypercubes, so that discretized states map uniquely to other discretized states. However, consider the transformation under H, Eq. (23). For this transformation, the first component of the transformed quaternion takes the form n′_1 = k/√2 for some k ∈ Z (and similarly for the other components). This implies that n′_1 cannot in general be an integer, and so the map given by this gate set over this discretized coordinate system cannot be made deterministic in this manner. Nevertheless, we find that our construction is sufficient to solve the MDP that we have set up.
Reward Structure and Environment Dynamics
Some natural measures of overlap between two unitaries include the Hilbert-Schmidt inner product tr(U†V) and, since we work with quaternions, the quaternion distance |q − q′|. However, neither does the Hilbert-Schmidt inner product monotonically increase, nor does the quaternion distance monotonically decrease, along the shortest {H, T} gate sequence. As an example, consider the target quaternion q′ = [−0.52514, −0.38217, 0.72416, 0.23187] from Table (3), with shortest compilation sequence HTTTTHTHHH (read right to left) satisfying |q − q′| < 0.3, where q is the quaternion prepared via the sequence. After the first H application, |q − q′| ∼ 1.34, which drops after the second H application to |q − q′| ∼ 0.97, and then rises again after the third H application to |q − q′| ∼ 1.49, before eventually falling below the threshold error. Similarly, the Hilbert-Schmidt inner product starts at ∼ 0.21, rises to ∼ 1.05, then falls to ∼ −0.21 before eventually becoming ∼ 1.96. On the other hand, we showed previously how assigning a reward of +1 to some target state and 0 to all other states made it possible to relate the optimal value function to the length of the optimal path.
Instead of specifying a reward of +1 in some target state and 0 in every other state, however, we now assign a reward of +1 whenever the underlying unitary has evolved to within an ε-net approximation of the target unitary. Since we work with quaternions, we specify this as obtaining a reward of +1 whenever the evolved quaternion q satisfies |q − q′| < ε, for some ε > 0, where q′ is the target quaternion, and 0 otherwise. We note that the Euclidean distance between two vectors (a, b, c, d) and (a + ∆_bin, b + ∆_bin, c + ∆_bin, d + ∆_bin) equals 2∆_bin; however, these two vectors cannot both represent quaternions, since only one of them can have unit norm. Nevertheless, this sets the size of the discrete states, and we require that ε be comparable to this scale, setting ε = 2∆_bin in practice. This requirement comes from the fact that, in general, the ε-net could cover more than one state, so that we now need to estimate the probabilities p(s′, r|s, a), in contrast to the scenario where a state uniquely specifies the reward. Demanding that ε ∼ ∆_bin ensures that p(s′, r = 1|s, a) does not become negligibly small.
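The ε-net reward is then a one-liner; this is a direct transcription of the rule above.

    import numpy as np

    def compilation_reward(q, q_target, eps):
        """+1 inside the eps-net of the target quaternion, 0 otherwise."""
        return 1.0 if np.linalg.norm(q - q_target) < eps else 0.0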
We could estimate the dynamics by uniformly randomly sampling quaternions, tracking which discrete state the sampled quaternions belong to, evolving them under the actions, and tracking the resultant discrete state and reward, just as we did in the previous section. However, here we instead estimate the environment dynamics by simply rolling out gate sequences. Each rollout is defined as starting from the identity gate and then successively applying either an H or T gate with equal probability until some fixed number K of actions have been performed. The probabilities for the identity action, p(s′, r|s′, a = I), are simply estimated by recording that (s′, a = I) leads to (s′, r) at each step at which we sample (s′, r) while performing some other action a ≠ I in some other state s ≠ s′. The number of actions per rollout, K, is set by the desired accuracy, which the Solovay-Kitaev theorem informs us is O(polylog(1/ε)) [15], and in our case has an upper bound given by Eq. 5. Estimating the environment dynamics in this manner is similar in spirit to off-policy learning in typical reinforcement learning algorithms, such as Q-learning [4].
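A sketch of this rollout-based estimation is given below. It reuses qmul, q_H, q_T, and compilation_reward from the earlier sketches; bin_state, which floors each quaternion component to a grid of spacing ∆_bin, is a hypothetical helper.

    import numpy as np
    from collections import Counter, defaultdict

    def estimate_dynamics(q_target, K=50, n_rollouts=1000, d_bin=0.15, seed=0):
        """Estimate p(s', r | s, a) from random {H, T} rollouts."""
        rng = np.random.default_rng(seed)
        counts = defaultdict(Counter)
        eps = 2 * d_bin
        for _ in range(n_rollouts):
            q = np.array([1.0, 0.0, 0.0, 0.0])  # identity gate
            for _ in range(K):
                a, q_a = (("H", q_H), ("T", q_T))[rng.integers(2)]
                s = bin_state(q, d_bin)          # hypothetical discretizer
                q = qmul(q_a, q)
                counts[(s, a)][(bin_state(q, d_bin),
                                compilation_reward(q, q_target, eps))] += 1
        return {sa: {sr: c / sum(nxt.values()) for sr, c in nxt.items()}
                for sa, nxt in counts.items()}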
Optimal Gate Compilation Sequences
Solving the constructed MDP through policy iteration, we arrive at the optimal policy just as before. We then chain the optimal policies together to form optimal gate compilation sequences, accounting for the fact that while the dynamics of our constructed MDP are stochastic, the underlying evolution of the unitary states is deterministic. The procedure we use for starting with the identity gate and terminating, with some accuracy, at the target state is outlined in pseudo-code in Algorithm 2, where the length of the largest sequence, K, is dictated by Eq. 5; in our experiments we took 100 rollouts.
Algorithm 2 Optimal Gate Compilation Sequence

The accuracy with which we obtain the minimum-length action sequence in Algorithm (2) need not necessarily satisfy the bound set by the reward criterion, r = 1 for |q − q′| < ε, for reasoning similar to the shuffling discussed in the context of state preparation above. This is why we require Algorithm (2) to report the minimum-length action sequence that also satisfies the precision bound. In practice, we found that this was typically an unnecessary requirement, and even when the precision bound was not satisfied, the precision did not stray too far from the bound. It should be emphasized that due to the shuffling effect, there is no a priori guarantee that the optimal sequence returned by Algorithm (2) need even exist, since the precision bound is not guaranteed to be met, and the only bound we can safely set is |q − q′| ≲ ∆_bin k, where k is the number of actions in the sequence that prepares q. In practice, however, we find the algorithm to work quite well in producing optimal sequences that correspond to the shortest possible gate sequences preparing the target quaternions q′.
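One possible realization of such a procedure, offered as our own sketch rather than a transcription of Algorithm 2, follows the learned policy on the deterministic quaternion evolution and reports the shortest prefix that also meets the precision bound. Helpers (qmul, q_H, q_T, bin_state) are as in the sketches above.

    import numpy as np

    def compile_with_policy(policy, q_target, K, d_bin):
        """Greedy rollout of the optimal policy with a precision check."""
        gates = {"H": q_H, "T": q_T}
        q = np.array([1.0, 0.0, 0.0, 0.0])
        seq = []
        for _ in range(K):
            a = policy[bin_state(q, d_bin)]  # policy maps discrete state -> gate
            q = qmul(gates[a], q)
            seq.append(a)
            if np.linalg.norm(q - q_target) < 2 * d_bin:
                return seq                   # shortest prefix within the eps-net
        return None                          # precision bound never met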
To benchmark the compilation sequences found by this procedure, we find shortest gate sequences for compilation to some specified precision using a brute-force search, which yields the smallest gate sequence that satisfies |q − q′| < ε for some ε > 0 with the smallest value of |q − q′|, where q is the prepared quaternion and q′ is the target quaternion. This brute-force procedure is described in pseudo-code in Algorithm 3.
Algorithm 3 Shortest Gate Compilation Sequence

    for Seq in Sequences
        Shortest-Sequence ← Seq with Min(Quaternion-Distances)
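A runnable version of this brute-force benchmark, under the same assumptions and helper functions as the sketches above, could look as follows.

    import itertools
    import numpy as np

    def brute_force_shortest(q_target, eps, max_len=12):
        """Exhaustive {H, T} search: shortest sequence with |q - q'| < eps,
        ties broken by the smallest distance."""
        gates = {"H": q_H, "T": q_T}
        for length in range(1, max_len + 1):
            hits = []
            for seq in itertools.product("HT", repeat=length):
                q = np.array([1.0, 0.0, 0.0, 0.0])
                for a in seq:                # seq[0] is applied first
                    q = qmul(gates[a], q)
                dist = np.linalg.norm(q - q_target)
                if dist < eps:
                    # Record right to left, matching the paper's convention.
                    hits.append((dist, "".join(reversed(seq))))
            if hits:
                return min(hits)[1]
        return None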
As an experiment, we drew 30 (Haar) random SU(2) matrices and found their compilation sequences from Algorithms (2) and (3). We set ε = 2∆_bin = 0.3 and estimated the environment dynamics using 1000 rollouts, each rollout being 50 actions long, with each action a uniform draw between H and T. The findings are presented in Table (3), where the sequences are to be read right to left. We find that although the two approaches sometimes yield different sequences, the two sequences agree in their length and produce quaternions that fall within ε of the target quaternion. We expect in general that the two approaches will produce sequences of comparable length and target fidelity, though not necessarily equal.
Conclusions
We have shown that the tasks of single-qubit state preparation and gate compilation can be modeled as finite MDPs, yielding optimally short gate sequences that prepare states or compile gates up to some desired accuracy. These optimal sequences were found to be comparable with independently calculated shortest gate sequences for the same tasks, often agreeing with them exactly. Additionally, we investigated state preparation in the presence of amplitude damping and dephasing noise channels. We found that an agent can learn information about the noise and yield noise-adapted optimal gate sequences that result in a higher fidelity with the target state. This work therefore provides strong evidence that more complicated quantum programming tasks can also be successfully modeled as MDPs. In scenarios where the state or action spaces grow too large for dynamic programming to be applicable, or where the environment dynamics cannot be accurately learned in the simple manner described above, it is highly promising to apply reinforcement learning to find optimally short circuits for particular tasks. Future work should be directed towards using dynamic programming and reinforcement learning methods for noiseless and noisy state preparation and gate compilation for several coupled qubits.
We provide the required programs for qubit state preparation as open-source software, and we make the corresponding raw data of our results openly accessible [49].
where in the first equality we have simply used the fact that the reward is 1 in the target state t and 0 in every other state; in the fourth equality we have expanded V(s′) using the same fact; in the sixth equality we have used the fact that Σ_{s′,s″} p(s″|s′) p(s′|s) = Σ_{s′} p(s′|s) and introduced the notation (P^k)_{s′,s} = Σ_{s_1,…,s_{k−1}} p(s′|s_{k−1}) ⋯ p(s_1|s); and finally, in the last equality, we have recursively expanded V(s′), just as in the preceding steps, a total of K times. Noting that we can carry out this recursive expansion arbitrarily many times, in the limit K → ∞ we find an expression for V(s) as a discounted sum of the k-step transition probabilities (P^k)_{t,s} into the target state. The above expression is valid for the value function corresponding to an arbitrary policy. Specializing to the optimal policy, for which V(t) = (1 − γ)^{−1}, we find precisely Eq. 4.
Figure 1: Optimal values for various states on the Bloch sphere using the discrete RZ and RY gates, with a discount factor γ = 0.8. The color of a state corresponds to its optimal value function V_π*, where lighter colors indicate a larger value. The states colored in green are exactly those whose optimal circuits to prepare the discrete |1⟩ state consist of a single RY rotation, while those in blue are exactly those whose optimal circuits consist of an RZ rotation followed by an RY rotation.
Figure 2: Optimal value landscape across the Bloch sphere using the set of gates {I, H, S, T}, with a discount factor γ = 0.95. The color of a state corresponds to its optimal value function V_π*, where darker colors indicate a larger value. States distributed around the equator of the Bloch sphere are especially advantageous to start from in order to reach the target |1⟩ state, as their optimal circuits consist of short sequences of S and H gates.
Figure 3: Fidelity F of the state σ prepared using optimal gate sequences with target state ρ_target = (HT)^n |0⟩ for fixed n = 7 as a function of noise strength T_1 = T_2. The shortest gate sequences (indicated in the figure) are produced by optimal policies π*_noisy (orange) and π*_noiseless (blue) of noisy and noiseless MDPs, respectively. The noisy policy gives gate sequences that are different from the noiseless case and that consistently yield higher fidelities. The optimal noisy gate sequence is HTHTHTH for all times T_1 = T_2 ≥ 60 µs. We fix the gate time to τ_g = 200 ns when generating the Kraus operators as defined by Eqs. (14) and (16). For each value of T_1, T_2, we generate the transition probabilities p(s′|s, a) according to the corresponding noise map and use policy iteration to find the optimal policy. The fidelity is then calculated by applying the gate sequences found by both the noisy and noiseless MDPs to |0⟩ in the error channel for that specific value of T_1, T_2. The point at infinity represents the noiseless case and corresponds to the transition probabilities learned by the noiseless MDP.
Table 1: Gate sequences obtained from the optimal policy to approximately produce target states |ψ_target⟩ = (HT)^n |0⟩. The optimal policy and fidelity are calculated for the noiseless case. The fidelity is defined as F = |⟨ψ_target|ψ⟩|, where |ψ⟩ is obtained from application of the shown gate sequences to the state |0⟩. The sequences are to be read from right to left.
Table 2: Shortest gate sequences and noisy fidelities F produced by the optimal policies π* of the noiseless MDP (columns 2 and 3) and the noisy MDP (columns 4 and 5). The gate sequences should be read right to left. The noise is characterized by T_1 = T_2 = 1 µs, and the gate time is set to τ_g = 200 ns. While this corresponds to a noise level that is stronger than in current-day NISQ hardware, where T_1, T_2 ≈ 100 µs, these parameters yield sufficiently strong noise to highlight differences in the optimal gate sequences. The fidelities (with the target state) are obtained by preparing mixed states using the shown gate sequences applied to |0⟩ in the presence of noise. Note that the states generated by (HT)^n |0⟩ for n = 3, 4 have an overlap fidelity of 0.99; this is also the case for n = 7, 10. This explains the similarity of the gate sequences found for these cases.
| 13,129.2 | 2019-12-27T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Review and Design Overview of Plastic Waste-to-Pyrolysis Oil Conversion with Implications on the Energy Transition
Plastics are cheap, lightweight, and durable and can be easily molded into many different products, shapes, and sizes, hence their wide application globally, leading to increased production and use. Plastic consumption and production have been growing since plastics were first produced in the 1950s, driven by their versatility, light weight, low cost, and durability. About 4% of global oil and gas production is used as feedstock for plastics, and a further 3-4% is used to provide energy for their manufacture. This study presents an in-depth analysis of plastic solid waste (PSW). Plastic wastes can technically be used for oil production because the calorific value of plastics is comparable to that of oil, making this option an attractive alternative. Oil can be produced from plastic wastes via thermal degradation and catalytic degradation, while gasification can be used to produce syngas. Plastic pyrolysis can thus address the twin problems of plastic waste disposal and depletion of fossil fuel reserves. There are four main avenues available for plastic solid waste treatment, namely, re-extrusion as a primary treatment, mechanical treatment as a secondary measure, chemical treatment as a tertiary measure, and energy recovery as a quaternary measure. Pyrolysis oil has properties close to those of clean fuel and is, therefore, a substitute for fresh fossil fuel for power generation, transport, and other applications. The study shows that plastic waste pyrolysis offers an alternative avenue for plastic waste disposal and an alternative source of fossil fuel to reduce the total demand for virgin oil. Through plastic pyrolysis, plastic wastes are thermally converted to fuel by degrading long-chain polymers into smaller molecules in the absence of oxygen, making it a technically and economically feasible process for waste plastic recycling. The process is advantageous because presorting is not required, and the plastic waste can be fed directly without pretreatment. Alongside pyrolysis oil, the products include a hydrocarbon-rich gas with a heating value of 25-45 MJ/kg, which makes it ideal for process energy recovery: the pyrolysis gas can be fed back to the process to provide process heat, substantially reducing reliance on external heating sources.
Introduction
The world is currently faced with the twin challenge of fossil fuel depletion and strict emission requirements resulting from global concern over emissions and global warming, reflected in the emission targets set in the Paris Agreement [1]. Extraction of oil from waste plastics has attracted the attention of industry and academia as a feasible measure to mitigate the challenges of fossil fuel depletion, environmental pollution, global warming, and growing demand for fossil fuels [2-4]. Global plastic production has grown continuously even though recycling rates remain comparatively low, with just about 15% of the 400 million tonnes of plastic currently produced annually being recycled. The rate of global plastic production and use has grown faster than recycling rates over the last 30 years, which implies that more and more plastic waste is released to the environment. Global production of plastics is forecast to triple by 2050 and is expected to account for a fifth of global oil consumption [5]. Through pyrolysis, a combustible liquid fuel can be produced from waste plastics [6]. Plastic wastes are expected to increase to around 12 billion tonnes by the year 2050 [7]. Industrial-scale production of plastics started in the 1940s and 1950s, with demand and production growing steadily over time. In 2013, about 299 million tonnes of plastics was produced globally [8]. Production increased to more than 322 million tonnes per annum in 2015 [9]. Global production of plastics is currently over 400 million tonnes per annum [10, 11].
The use of plastics generates significant quantities of waste that currently contaminate waterways and aquifers and strain landfill capacity. Over 14 million tonnes of plastics is dumped annually, causing the death of around 1,000,000 aquatic animals [12]. Common options for plastic waste disposal include feedstock recycling, mechanical recycling, energy recovery, and incineration as municipal solid waste. Incineration of plastics releases noxious odors, harmful gases, dioxins, HBr, polybrominated diphenyl ethers, and other hydrocarbons, depending on composition [10]. About 15 million tonnes of these plastics reach seas and oceans annually. US emissions from plastic incineration reached 5.9 million tonnes of carbon dioxide in 2015 and are projected to reach about 49 million tonnes by 2030 and 91 million tonnes by 2050. In the past, the United States and other Western countries used to send contaminated waste to China, thus shifting the responsibility for waste management to China [11]. In 2018, however, China stopped importing such wastes from Western countries [1].
The global market is dominated by the thermoplastics polypropylene (PP) at about 21%, low-density polyethylene (LDPE) and linear low-density polyethylene (LLDPE) at about 18%, polyvinyl chloride (PVC) with a market share of 17%, and high-density polyethylene (HDPE) at 15%. Others with high demand are polystyrene (PS) and expandable PS at 8% and polyethylene terephthalate (PET) at 7%, excluding PET fibre and the thermosetting plastic polyurethane. The main challenge with plastic products is that about 40% have a short service life of less than one month [8]. Plastics have a wide range of applications, such as synthetic fibres, foams, adhesives, coatings, and sealants. In Europe, 38% of plastics are used in packaging, 21% in the building industry, about 7% in the automotive industry, and 6% in electrical and electronic applications, while other sectors such as medicine and leisure account for 28% of plastic consumption [8, 13].
It has been concluded that waste plastic fuel has similar properties to diesel fuel and can be used in its place [11, 14]. The removal and disposal of huge volumes of plastic waste is a global concern with economic and environmental implications, driven mainly by population growth, industrialization, and the attractive properties of plastic materials [15]. The current rate of economic development and growth will not be sustainable without sustainable and controlled consumption of fossil fuel sources of energy [16]. The demand for fossil fuels in power generation, transport, industrial, and domestic applications continues to grow globally while fossil fuel reserves face imminent depletion [17]. The growing population and demand for industrial products and, hence, packaging materials have also increased the use and disposal of plastics as solid waste [18]. As solid waste, plastics are undesirable because they are nonbiodegradable [17]. The main advantages of plastics, hence their increasing usage, are that they are cheap, quick to produce, easy to design and fabricate, durable, nonperishable, and recyclable [17, 18]. Synthetic plastic production is about 400 million tonnes per year, of which about 50% ends up in landfills, while about 15 million tonnes of plastics end up in oceans and seas annually [11]. Global plastic production accounts for about 8% of world oil production [19]. Over one trillion plastic bags are produced annually, making them an important end-user plastic product globally [20, 21]. Other common forms of plastic waste include single-use plastics in the form of drink bottles, wet wipes, cotton bud sticks, and sanitary items, which are often disposed of via incineration and landfilling [11].
The European strategy recommends a hierarchy of measures in waste prevention and management, beginning with prevention, preparation for reuse, recycling, recovery, and finally waste disposal [22]. There are four main avenues available for plastic solid waste treatment, namely, re-extrusion as a primary treatment, mechanical treatment as a secondary measure, chemical treatment as a tertiary measure, and energy recovery as a quaternary measure [23]. Waste recycling is the process of recovering and processing used plastics into useful products. With recycling of plastics, the demand on landfills is reduced, leading to environmental protection, reduced emissions from incineration, and additional socioeconomic benefits [21, 24]. Waste plastics can be processed via catalytic conversion and thermal conversion, which effectively reduce waste pollution and reduce dependence on fresh or virgin oil for several applications by using pyrolysis oil instead [25]. Waste plastic pyrolysis is proving to be one of the energy conversion techniques that yield sustainable energy and a feasible solution to plastic waste disposal for a clean environment [26]. Plastic pyrolysis is a better option than incineration as it allows for energy recovery alongside waste disposal [27, 28].
There is massive growth in plastic production and consumption as a result of urbanization, industrialization, and a cheap supply of plastic materials. Waste plastics have become a nuisance to the land and marine environments, posing significant risks to human and aquatic life [1]. Plastic pollution is a result of limited recycling, yet plastics are generally nonbiodegradable. As an example, out of about 280,000 tonnes of plastics produced in the year 2018, just about 15.3% were recycled, with the rest ending up in landfills, open spaces, and drainages, hence the need for more recycling to reduce plastic waste disposal to the environment [29, 30]. About 50% of plastics produced are disposable and can be dumped, reused, or recycled [31]. The impact of plastics on aquatic life includes the increased presence of microplastics in aquatic ecosystems in the form of spheres, pellets, and fragments. Plastics contain hazardous chemicals like phthalates, polyfluorinated chemicals, antimony trioxide, and brominated flame retardants [32]. Brominated flame retardants, e.g., PBDEs (polybrominated diphenyl ethers), can cause neurotoxic effects in aquatic microorganisms [25]. Waste plastics can be used to manufacture oil through the pyrolysis process, a recycling technique which involves degradation of polymeric materials to produce pyrolysis fuel oil that can be used in internal combustion engines and boiler furnaces. However, due to the production of low-yield oil with high acidic content, plastics like polyvinyl chloride (PVC) and polyethylene terephthalate (PET) are not ideal for pyrolysis oil production. The objective of this study is to establish the feasibility of waste plastic pyrolysis to produce fuel from a wide range of plastic materials, e.g., high-density polyethylene (HDPE), polyethylene terephthalate (PET), polystyrene (PS), and polypropylene (PP). Available recyclable wastes are estimated, and a preliminary design specification for a pyrolysis plant is proposed in this research. The study focuses on estimating the plant and process costs as well as the final product price of the fuel using existing pyrolysis technology.
1.1. Problem Definition. New solutions and technologies are needed to address the current challenges facing the plastic industry, i.e., rapid growth in demand and production of plastics and low levels of recycling of used or waste plastics [33-35]. Population growth, economic growth, and global industrialization have led to the generation of huge quantities of wastes, including plastic wastes. Plastic wastes, in particular plastic bags, bottles, and packaging materials, are visibly littered all over, including in water bodies [36]. Waste combustion generates thousands of pollutants that are harmful to people, especially those living near incineration facilities. Although landfilling has a lower climate impact compared to incineration, many landfills are full or are getting full. Landfilling also causes soil contamination and water pollution and may harm wildlife, flora, and fauna [11]. Modern offices, homes, and industries generate huge amounts of plastic wastes, ranging from packaging materials to electronic parts and equipment, plastic containers, and other forms, which are often difficult to isolate and recycle [37]. Plastic wastes are a serious challenge because of the huge quantities being produced and the fact that plastics do not biodegrade for very many years. The most heavily produced plastic products are the polyolefins, such as polyethylene and polypropylene, which have many applications in packaging, building, electrical and electronics manufacture, agriculture, and health care. The resulting wastes are disposed of mainly via landfilling [16]. About one-quarter of all plastics produced is made of polypropylene, while less than 5% of all plastics are recycled annually. Significant quantities of waste plastics end up in landfills and oceans, where they cause pollution and require over 450 years to biodegrade. Conversion of plastics to fuel would create over 39,000 direct jobs, increase gross domestic product by over $9 billion, and create a cleaner and safer avenue of plastic waste disposal. The quantity and range of plastic products are huge and continue to grow for various applications. For example, about 1.5 billion tires are manufactured annually around the world. This not only is economically important but also comes with serious challenges of waste disposal and recycling [38]. It costs over $4000 to recycle one tonne of plastic waste, making it attractive to dispose of plastics via burning and landfilling as opposed to recycling, which is more expensive [39]. Disposal of plastics via incineration is a major source of emissions globally. In the US alone, incineration of plastic wastes accounted for 5.9 million tonnes of carbon dioxide in 2015, a figure projected to rise to 49 million tonnes by 2030 and 91 million tonnes by 2050 [11]. This makes plastic waste incineration a significant contributor to greenhouse gas emissions [40].
Plastic incineration releases tonnes of pollutants to the surroundings, while landfilling leads to soil and water contamination of the dumpsite and surrounding areas [11]. Landfilling is a challenge for many cities like Nairobi in Kenya, where the only dumpsite facility for landfilling, located at Dandora, is filled and has no space left for more dumping [41].
There are, however, several challenges related to plastic-to-fuel production via pyrolysis which need to be considered. There are some concerns around health risks due to energy recovery from the waste, because burning waste plastics emits nitrous oxides, sulphur dioxide, particulate matter, and other harmful pollutants [1, 30]. However, continuous regulation and pollution control technologies can ensure that emissions are well managed and controlled. Some countries like Sweden, which rely heavily on imported garbage to sustain their industries, will be left with a deficit in supplies if wider recycling via the pyrolysis process is adopted. Plastic recycling, like other recycling systems, needs careful planning and adherence to various environmental and industrial regulations and must balance the needs of existing recycling processes [37, 39].
1.2. Rationale of the Study. Plastic waste pollution is a serious problem facing the world today, while at the same time the production and use of plastics continue to accelerate, yet plastic recycling rates are relatively low, as only close to 15% of the roughly 400 million tonnes of plastic produced annually is recycled globally [6]. Plastic production has been growing over the last half century, creating a global environmental crisis, with most of the over 4.9 billion tonnes of plastics ever produced ending up in landfills [8]. With the looming depletion of fossil fuel sources of energy like gas, diesel, and petrol, it has become increasingly necessary to identify and develop alternative sources of energy [26]. Plastic recycling offers the lowest environmental impact in terms of Global Warming Potential (GWP) and Total Energy Use (TEU) [42]. Chemical recycling of plastic wastes is simple and cheap and requires little or no sorting where high temperatures are involved [39]. Waste plastic pyrolysis will strongly benefit the fossil fuel industry through improved process sustainability, leading to a cleaner environment and reduced fossil fuel demand via the production of alternative oil. Therefore, research and development in pyrolysis oil production and use is immensely important for a sustainable energy transition involving the controlled consumption of fossil fuels. The advantages of converting plastic waste into useful fuel include reduction in carbon emissions by switching from incineration to recycling, recovery of exhaustible or nonrenewable natural resources like resins and rare metals, a lower carbon footprint of the produced pyrolysis oil compared with the original fossil fuels, development of alternative fuels for transport and other thermal applications, and reduction of landfilling and related pollution [39].
Plastic pyrolysis has attracted extraordinary attention globally, with a number of processing plants currently in operation. A typical example is the recycling plant located at Swindon in the UK operated by Recycling Technologies. The RT7000 plant has the capacity to recycle 7,000 tonnes of plastic wastes, including polystyrene and flexible packaging, to produce 5,250 tonnes of oil for export and local use. The plant employs 130 workers, and the main product is oil, of which 70% is exported for use as sustainable fuel and feedstock for plastic manufacture, while the remaining 30% is a wax-equivalent product for candle and paint manufacture. Leftover gas and char provide energy for the recycling process. Therefore, production of pyrolysis oil has the potential to displace the use of virgin oil in plastic and lubricant manufacture [5].
Economically, the fuel produced from waste plastics can reduce the import bill and consumption of primary oil, as the oil produced can be used to power transport and generate electricity, which are the main consumers of fossil fuels. This reduces the import bill of non-oil-producing countries, saving foreign reserves and strengthening the local currencies of oil-importing countries, and leads to more sustainable consumption of global oil reserves [17].
Plastic pollution is a serious health issue because various harmful additives used in the manufacture of plastics end up in human and animal body systems. Additives with health implications are plasticizers, fire retardants, antioxidants, lubricants, antistatic agents, stabilizers, thermal stabilizers, and pigments. A specific health concern is that these additive substances can mimic, block, and interfere with hormones in the body's endocrine system. As a result, they are classified as endocrine-disrupting chemicals (EDCs) [18]. Other additives are known to be persistent organic pollutants (POPs) [6].
As a result of the failure of conventional approaches to control plastic pollution, a number of authorities globally have intensified calls for banning some categories of plastics from the market. Recycling of plastics is one of the most widely accepted solutions to the growing concern over huge volumes of plastic waste in land and water bodies. Pyrolysis and other thermochemical recycling techniques are, however, associated with challenges such as the need for separation, sorting, and cleaning, and high electricity and transport costs, as well as the fear by environmentalists and conservationists that recycling instead of banning will only delay the transition away from fossil fuels, whose use should be abolished [6, 43, 44].
Plastic Production
Plastics are synthetic organic polymers manufactured from petrochemical materials. The first plastic was invented in the early 1900s, with the types and quantities drastically increasing later for different applications. Plastic management is a serious concern today because increased use of plastics has led to waste disposal challenges, as more than 300 million tonnes are currently produced per year. In the European Union, about 25% of plastic wastes are recycled, as recycling faces challenges like the low quality of the recovered and recycled materials and products [45]. Between the years 1950 and 2015, some 8300 million tonnes (Mt) of plastics was manufactured globally, of which 6300 million tonnes became waste; of this waste, 9% was recycled, 12% was incinerated, and 79% was landfilled [6]. The annual global production of plastics reached 400 million tonnes in the year 2020 [9, 10]. Production grew by 3.4% between 2014 and 2015. Analysis of annual plastic demand shows a compound annual growth rate (CAGR) of 8.6% between 1950 and 2015 [42, 46].
A significant portion of these plastics ends up in oceans and seas, where they disintegrate into microplastics and nanoplastics. These are consumed by aquatic animals, negatively impacting zooplankton populations and mortality [6]. Table 1 shows the growth in global plastic production between 1950 and 2020.
From Table 1, it is noted that global plastic production was about 1.5 million tonnes in 1950 and about 400 million tonnes in 2020. This represents roughly a 267-fold increase in global plastic production between the years 1950 and 2020.
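The growth figures quoted above are easy to verify; the short Python check below reproduces the 8.6% CAGR and the roughly 267-fold increase from the Table 1 endpoints.

    # Sanity check of the quoted growth figures (values from Table 1).
    p_1950, p_2015, p_2020 = 1.5e6, 322e6, 400e6  # tonnes per year
    cagr = (p_2015 / p_1950) ** (1 / 65) - 1
    print(f"CAGR 1950-2015: {cagr:.1%}")                     # ~8.6%, matching the text
    print(f"Growth 1950-2020: {p_2020 / p_1950:.0f}-fold")   # ~267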
Plastic Disposal and Recycling
3.1. Plastic Waste Recycling. To attain the best environmental outcome, a hierarchy of measures is proposed in waste prevention and management [41]. The measures are waste prevention, waste reuse, and waste recycling. Plastic waste recycling can take the form of mechanical or feedstock recycling, energy recovery, and final waste disposal. End-of-life treatments for plastic wastes include mechanical recycling, i.e., reprocessing for the production of new products. Feedstock recycling transforms the materials into smaller molecules which can be used for the manufacture of new petrochemicals and polymers [42, 47]. Energy recovery techniques include combustion of plastic waste for the production of heat, steam, and electricity. Combustion of plastics is a convenient energy source because of their high energy content. Landfilling is the last of the end-of-life treatments in the hierarchy of plastic waste management [14, 42].
Environmental challenges of plastic wastes can be best addressed through recycling. There are four major types of recycling that can be applied to plastic wastes: primary, secondary, tertiary, and quaternary recycling. Through chemical recycling, plastics can be used as feedstock or fuel, which reduces the net cost of disposal aside from producing useful energy. Plastics can also be converted into basic petrochemicals in the form of hydrocarbons for several process and energy applications [28, 48]. Oil can be produced from waste plastics through thermal degradation, gasification, and catalytic cracking [16]. The main challenge facing plastic recycling is the lack of incentives and proper systems for waste collection and separation [14, 49]. Recycling reduces the quantity of waste left for landfilling and indiscriminate disposal in the environment. Socially, plastic recycling creates jobs and adds value to the gross domestic product of a country [50]. Many countries globally have made efforts to address the challenges of waste plastic besides banning the production and use of some plastics. Waste plastic volumes are significant and, due to high and growing demand, the waste menace should be sustainably addressed. In Japan, some 2.75 million tonnes of plastic wastes, mainly PET and PVC, was landfilled or incinerated in 2014 as a result of insufficient recycling options and facilities. The Sapporo Plastic Recycling (SPR) plant was established in the year 2000 for commercial liquefaction in Japan through plastic pyrolysis, with a recycling capacity of 50 tonnes/day [51]. Through its cascade facility, the plant mixes plastics from the municipal solid waste stream with waste from other recycling processes at 40% to 50% of the total feedstock material. Another important innovation at the plant is that it has learnt to deal with benzoic acid by converting it to benzene with little or no effect on the fuel potential. Lessons from this plant show that pyrolysis can be done for residues with high PET and PVC content, which can be blended into the municipal solid waste plastic stream at up to 40 wt.% with little or no effect on the reactor or the product quality [16, 51].
The main products of plastic pyrolysis are light oil, medium oil, heavy oil, and sludge. Common applications of pyrolysis products are solid fuel, cogeneration oil, engine fuel, and raw material for further processing to produce plastics and other products.
The six essential steps in waste plastic recycling are as follows:

(i) Plastic waste collection

(ii) Categorization of plastic wastes via sorting

3.2. Types of Recycling

3.2.1. Primary Recycling. Primary recycling is also called mechanical reprocessing, and it involves taking the plastic through the original forming process. The product of primary recycling has the same specification as the original material. The process requires the primary material to be as clean as possible, which makes it relatively expensive and unpopular, since a lot of cleaning is required, yet obtaining clean waste may be a challenge. It is the preferred choice where the waste is easy to sort by resin [16]. The key steps in the primary recycling process are waste separation based on resin and colors, washing or cleaning, and re-extrusion into pellets for addition to the original resin [16, 52].
3.2.2. Secondary Recycling. In secondary recycling, the solid plastic waste is mechanically processed into other products. The main benefit of secondary recycling is the conservation of the energy needed in the manufacture of plastics, which yields financial advantages [52]. Unlike primary recycling, the process can handle contaminated or less separated plastic waste upon cleaning, and its outputs span multiple products that differ from the original material feedstock [16, 30].
3.2.3. Tertiary Recycling/Cracking Process. In the tertiary or cracking recycling process, the plastic waste is broken down at high temperatures in a process broadly referred to as thermal degradation. Degradation can also occur at lower temperatures when suitable catalysts are used, in a process referred to as catalytic degradation. Tertiary recycling or cracking leads to a relative loss in the value of the original plastic material, and the process is more appropriate in cases of high plastic waste contamination. Cracking can be used to recover the monomers of condensation polymers. The processes and mechanisms in tertiary plastic recycling include hydrolysis, methanolysis, and glycolysis [16, 33].
3.2.4. Quaternary Recycling. The quaternary recycling process is preferred for plastic waste with high energy content, which is ideal for incineration. Quaternary recycling involves the recovery of energy, which is the primary objective of the process, together with volume reduction. The product of quaternary recycling or incineration usually shows a significant reduction to about 20 wt.% of the original weight and to about 10 vol% of the original plastic waste volume, and it is usually landfilled. The limitations of quaternary recycling are the generation of solid waste and air pollution from plastic combustion [16, 33].
Methods of Plastic-to-Oil Conversion
There is growing interest in plastic-to-fuel conversion due to increasing awareness and evidence of the environmental degradation caused by plastic wastes, especially single-use plastics, amidst people's limited recycling habits [53]. It is estimated that less than 5% of manufactured plastics are recycled, yet production is projected to increase by 3.8% annually until the year 2030, with about 6.3 billion tonnes having been produced since plastic production started over 60 years ago. A significant fraction of these plastics ends up in seas and oceans, disrupting the marine environment, where they can take a minimum of 450 years to biodegrade. Economically, it is estimated that plastic-to-fuel investment will create about 39,000 jobs and about $9bn in economic output [53]. Plastics can be converted to fuel in the form of hydrogen, crude oil, diesel, and sulphur.
4.1. Plastic to Hydrogen Fuel. Hydrogen is normally produced commercially via catalytic steam reforming of natural gas, naphtha, and hydrocarbons. There is growing interest in alternative feedstocks for hydrogen production, mainly because of concerns over security of supply and price instability. Hydrogen can be produced from plastics, especially the polyolefins [54]. As a major development in waste-plastic-to-fuel conversion, researchers from Swansea University were able to convert plastic waste into hydrogen fuel by adding a light-absorbing photocatalyst to the plastic material; the photocatalyst absorbs sunlight and transforms it into chemical energy in a process called photoreforming. In this process, the plastic and catalyst mixture is put into an alkaline solution and exposed to sunlight, which leads to chemical breakdown and production of hydrogen gas [53]. At Oxford's Department of Chemistry, particles of mechanically pulverised plastic were mixed with a microwave-susceptor catalyst of iron oxide and aluminium oxide and subjected to microwave treatment, which produced a large volume of hydrogen, while the carbonaceous residue that formed consisted mainly of carbon nanotubes. This showed that 97% of the hydrogen in plastic can be recovered directly in a short time with no carbon dioxide emission [7].
In the study by [55], hydrogen gas was produced from polyethylene waste via a two-stage pyrolysis-low-temperature plasma catalytic steam-reforming process carried out at about 250 °C. This produced pyrolysis gas that was catalytically steam-reformed in the presence of low-temperature nonthermal plasma in the form of a dielectric barrier discharge to produce hydrogen gas. The hydrogen yield was increased in the absence of a catalyst by increasing the plasma power. Various catalysts that can be used in hydrogen production include Ni/Al2O3, Fe/Al2O3, Co/Al2O3, and Cu/Al2O3. Tests showed that the Ni/Al2O3 catalyst resulted in the highest yield of hydrogen, at 1.5 mmol/g of plastic used. Investigation of the impact of steam addition to the plasma catalytic process at different steam weight hourly space velocities (WHSVs) with Ni/Al2O3 as the catalyst showed that steam promotes catalytic steam-reforming reactions, leading to more hydrogen production. The highest yield realized was 4.56 mmol/gram of plastic at a WHSV of 4 g h−1 g−1 catalyst [55]. Therefore, plastic-to-hydrogen fuel conversion is a feasible process and, hence, a promising plastic-to-green-energy pathway.
4.2. Plastic to Diesel. In this conversion, plastics are broken down with heat and/or reactive chemicals, which breaks the bonds of the plastic material. The process can produce a liquid fuel suitable for running engines in transport and industrial applications. The catalysts selected have to be compatible with the various polyolefin additives so that plastic wastes can be processed without pretreatment, making the process easier and cheaper [39].
4.3. Plastic to Crude Oil. Plastics have been successfully converted to crude oil via pyrolysis of high-density polyethylene bags. The plastic crude oil produced can be refined via fractional distillation into gasoline and two different types of diesel [39]. With added antioxidants, the oil product is superior to conventional diesel fuels in parameters like lubricity and derived cetane number, a measure of the combustion quality of a diesel fuel [53].
4.4. Plastic to Sulphur. Waste plastics can be converted to sulphur fuel by using the discarded material as feedstock [39]. This technique is, however, still under research and development, the main concern being the use of sulphur as a fuel when the overriding environmental goal is to limit sulphur dioxide emissions [1].
5. Plastic Pyrolysis
One of the feasible processes for converting plastic waste into fuel is pyrolysis. Pyrolysis is a complex series of chemical and thermal reactions that depolymerize an organic material in the absence of oxygen [56]. The process requires heating plastics to a high temperature; the volatile material is then distilled or separated for reuse as an energy material [39]. In waste plastic pyrolysis, the plastic is subjected to high temperatures in the absence of oxygen and in the presence of a catalyst that helps crack the long chains gently. The gases produced are condensed in a condenser to yield a distilled waste plastic oil with low sulphur content [17, 18]. The use of catalysts prevents the formation of dioxins and furans (benzene-ring compounds) during the process [17]. Thermal degradation decomposes plastics into three main fractions, namely, gas, crude oil, and solid residue. The crude oil consists of the higher-boiling-point hydrocarbons from the noncatalytic pyrolysis process. Efficient production of gasoline and diesel from plastic wastes requires optimization of parameters such as the catalysts used, the pyrolysis temperature, and the plastic-to-catalyst ratio. The quality of the crude oil can be improved by copyrolysis of plastics with coal or shale oil, which reduces the viscosity of the crude oil produced [25]. Processes used in the conversion of plastics to fuels include gasification, pyrolysis, the plasma process, and incineration. In pyrolysis, plastic wastes are converted into solid, liquid, or gaseous fuels via the thermal degradation of long-chain polymers into shorter and simpler molecules in the absence of oxygen. The main products are a combustible gas with high calorific value, combustible oils, and carbonized char [20].
Pyrolysis can yield as much as 80 wt.% liquid product at moderate conditions and requires temperatures of 300-900 °C [57, 58]. The main variants are slow, fast, flash, and catalytic plastic pyrolysis [14, 59]. Conventional (slow) pyrolysis proceeds at a low heating rate and gives solid, liquid, and gaseous products in significant proportions; it is an ancient process used mainly for charcoal production [14, 59]. Vapors can be continuously removed as they are formed. Fast pyrolysis, which is associated with tar, can proceed at lower temperatures (850-1250 K) or at higher temperatures in a gas (1050-1300 K); others distinguish simply between thermal and catalytic pyrolysis [20, 60]. Fast or flash pyrolysis, also called thermolysis, is the most preferred technology: it involves rapid heating of a carbonaceous material to high temperatures in the absence of oxygen at short residence times.
Pyrolysis technology has been improved over the years to produce valuable liquid oils via valorization of organic materials. The process still faces challenges arising from the different feedstock types. Critical to the success of pyrolysis is the design of new reactor types that optimize product yields while minimizing energy consumption and process costs. Pyrolysis is a mature technology, with a number of commercial plants in operation for biomass and plastic feedstocks. It can be used to treat plastic wastes ranging from packaging waste to more complex materials like rubber, WEEE (waste electrical and electronic equipment), ELV (end-of-life vehicle) waste, and hospital waste, which are often contaminated with toxic and hazardous substances. The products of plastic pyrolysis are pyrolysis oil; a hydrocarbon-rich gas whose heating value of 25-45 MJ/kg makes it ideal for process energy recovery; and char. The pyrolysis gas can thus be fed back to the process to supply process heat, substantially reducing the reliance on external heating sources [33]. Figure 1 below demonstrates a pyrolysis process with its inputs and product outputs.
Figure 1 above shows the main elements of the plastic pyrolysis process, namely, the feed preparation chamber, reactor, char collector, fuel storage tank, and cyclone.
The waste plastics are mixed and heated in the absence of oxygen in a process called cracking, which produces a gas. The gas or vapor is distilled into products ranging from heavy wax and oils to light oils and gas. Most of these products can provide new building blocks for new polymers or can instead be used as fuel [5]. Figure 2 demonstrates the plastic pyrolysis process.
From Figure 2, it is noted that the main elements of a plastic pyrolysis process plant are the reactor, condenser, and fractionating tower.
A pyrolysis plant can recycle even hard-to-recycle plastics like polystyrene and flexible packaging into 75% liquid oil, which can be used as sustainable fuel and plastic manufacturing feedstock. The remaining 25% can be used as a wax equivalent for the manufacture of candles and paint, while the produced gas and char can be used as energy sources for the plant [5]. The most common technology used in plastic waste disposal is thermal treatment, which generates heat for process-heating applications, steam generation, or power production [22]. The process starts by feeding the plastic into a cylindrical chamber, where it is heated. In thermal pyrolysis, plastics are heated under inert conditions to decompose the organic parts into liquid and gaseous fuels; however, thermal pyrolysis results in lower-quality fuels because the high temperatures give products of lower molecular weight [62]. The generated pyrolytic gases are condensed in a condenser system to produce a hydrocarbon distillate comprising straight- and branched-chain aliphatic, cyclic aliphatic, and aromatic hydrocarbons, which are separated by fractional distillation into liquid fuel products. Plastic pyrolysis takes place at 300 °C-900 °C. The plastic is first heated evenly to a narrow temperature range without excessive temperature variations; oxygen is then purged from the pyrolysis chamber. The third step involves managing the carbonaceous char byproduct before it acts as a thermal insulator and lowers heat transfer to the plastic. The final stage involves condensation and fractionation of the vapors to produce a distillate of good quality and consistency [16].
The pyrolysis process can occur under varying operating conditions, which can serve as a basis of classification [59]: the residence time of the pyrolyzed material in the reactor, the feedstock particle size, the pyrolysis temperature, and the process heating rate, among others. Four mechanisms are involved in the thermal degradation of plastics: end-chain scission (unzipping), random-chain scission (fragmentation), chain stripping (elimination of side chains), and cross-linking. The mode of decomposition is a function of the type of polymer, i.e., the molecular structure; for example, cross-linking is common in thermosetting plastics, where heating to a high temperature causes two adjacent "stripped" polymer chains to form a bond, resulting in a chain network, as in char formation [1, 16].
5.1. Catalytic Pyrolysis
In catalytic pyrolysis, a suitable catalyst is used to facilitate the cracking reaction by lowering the reaction temperature and shortening the process time [1]. Catalytic pyrolysis is cheaper and, hence, economically attractive [63]. The optimization strategy in catalytic pyrolysis is to minimize catalyst consumption through reuse and the use of low catalyst quantities. The benefits of catalytic pyrolysis include cost-effectiveness, less pollution from plastic wastes, and less solid residue [1].
Catalytic-cracking pyrolysis involves heating the plastic material in an inert environment in the presence of catalysts, at temperatures between 350 °C and 550 °C. The process can be applied to the recycling of either pure or mixed plastics and yields higher-quality fuel oils than thermal pyrolysis. The catalyst promotes decomposition reactions at low temperatures, with low energy consumption, reduced costs, faster cracking reactions, increased process selectivity, and an increased yield of high-value products [21]. In this process, a cracking vessel is used to hold the melted waste plastic; the primary heating is done from a surface at the base of the cracking vessel, with second-level heating at its upper surface. A cooling vessel condenses the cracked gas vaporized in the cracking vessel, yielding a cracked oil byproduct [37].
5.2. Mechanism of Catalytic Degradation
(1) Depropagation. The molecular weight of the main polymer chains is reduced through successive attacks by acidic sites or carbonium ions and chain cleavage, yielding approximately C30-C80 fragments. Further cleavage of the oligomer fraction, for instance through direct β-scission of end-chain carbonium ions, produces a gas fraction and a liquid fraction (approximately C10-C25).
(2) Isomerization. The carbonium ion intermediates are rearranged via hydrogen- or carbon-atom shifts, leading to double-bond isomerization of an olefin. Other isomerization reactions include the methyl-group shift and the isomerization of saturated hydrocarbons.
(3) Aromatization. The carbonium ion intermediates can undergo cyclization reactions: hydride-ion abstraction may first take place on an olefin at a position several carbons removed from the double bond, forming an olefinic carbonium ion, which may then undergo an intramolecular attack on the double bond.

Thermal (free-radical) degradation, by contrast, proceeds in three stages:

(1) Initiation. This stage involves random breakage of C-C bonds on the main chain, which occurs upon heating and produces hydrocarbon radicals.
(2) Propagation. The hydrocarbon radical decomposes into lower hydrocarbons such as propylene, followed by β-scission and the abstraction of H-radicals from other hydrocarbons, producing a new hydrocarbon radical.
(3) Termination. This stage involves the disproportionation or recombination of two radicals. In catalytic degradation with iron-activated charcoal in the presence of hydrogen, hydrogenation of olefins and abstraction of H-radicals from the hydrocarbons markedly enhance the degradation rate. For reactions below 400 °C, as well as fast reactions taking less than 1 h, the concentration of hydrocarbon radicals in the reactor is high, and because the radicals do not move fast, they readily recombine. With iron- (Fe-) activated carbon in the presence of hydrogen, the radicals are hydrogenated instead, which suppresses recombination and therefore promotes decomposition of the solid products. The same applies to low polymers whose molecular diameter is larger than the catalyst's pore size [16, 64].
5.3. Characteristics and Application of Pyrolysis Oil

Pyrolysis oil has characteristics similar to light fuel or gas oil and has proven to be a good furnace and engine oil, although it requires property modifications [38]. Modification can be achieved by using alcohols to adjust corrosiveness, acidity, ignition temperature, calorific value, volatility, and energy density. Blending with alcohols also reduces the negative environmental impact of pyrolysis oils. For use as an engine fuel, up to 20% alcohol can be blended in with no engine modifications. The higher latent heat of vaporization of methanol and ethanol compared with diesel produces a longer ignition delay in combustion with pyrolysis oil. Pyrolysis oil from tires and plastics has been tested and proven to be a feasible diesel engine fuel with properties comparable to conventional diesel oil. The oil is a complex mixture of C5-C20 organic compounds. However, oil from tire pyrolysis contains a higher proportion of aromatics and a sulphur content as high as 1.4%, and unsorted plastics may also lead to higher chlorine content. More research is needed, however, on the costs of substituting fossil fuel with pyrolysis oil [56].
Oil from plastic wastes has a higher sulphur content than conventional fuel, making it unattractive as an automobile fuel and increasing sulphur dioxide emissions, a major contributor to acid rain. In a study of oil from the pyrolysis of high-density polyethylene, the waste plastic pyrolysis oil (WPPO) was found to have a sulphur content of 0.246%, significantly higher than that of conventional gasoline, kerosene, and commercial-grade diesel. Conventional diesel has about 0.15% sulphur, while gasoline contains about 0.014%, making them cleaner than WPPO [6]. Figure 3 below shows the sulphur content of WPPO, heavy fuel oil, furnace oil, diesel, kerosene, light fuel oil, and gasoline.
From Figure 3, it is noted that waste plastic pyrolysis oil has a higher sulphur content than conventional diesel, kerosene, light fuel oil, and gasoline. However, its sulphur content is lower than that of heavy fuel oil and furnace oil, which are widely used for power generation and industrial heat, making waste pyrolysis oil a cleaner substitute for heavy fuel oil (HFO) and furnace oil.
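The sulphur figures quoted above can be compared directly, as in the brief sketch below; treating these single-point values as representative of each fuel class is an assumption made purely for illustration.

```python
# Illustrative comparison of the sulphur contents quoted above (wt.%).
# Values for WPPO, diesel, and gasoline are taken from the text [6];
# using them as representative single-point figures is an assumption.
sulphur_wt_pct = {
    "WPPO": 0.246,
    "diesel": 0.15,
    "gasoline": 0.014,
}

for fuel, s in sulphur_wt_pct.items():
    ratio = s / sulphur_wt_pct["WPPO"]
    print(f"{fuel:8s}: {s:.3f} wt.% ({ratio:.2f}x WPPO)")
# WPPO carries ~1.6x the sulphur of diesel and ~18x that of gasoline.
```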
The flash point of pyrolysis oil from plastic waste is below 40 °C, though the exact value varies with the feedstock plastic; a single-feed plastic like polyethylene, for example, yields oils with flash points as low as 10 °C, compared with about 50 °C for conventional diesel. The low flash point indicates high volatility, which is a safety concern in transportation [6].
Oil from tire pyrolysis yields better engine performance, while the heating value of plastic pyrolysis oil is higher than that of tire pyrolysis oil. Economically, pyrolysis oil can replace conventional diesel oil in terms of engine performance and energy output as long as its price is less than 85% of the price of conventional diesel [56, 65]. According to [5], crude oil prices above $65 per barrel are necessary to justify commercial investment in waste plastic to fuel without a premium, but with a green premium, recycling waste plastic to fuel is still viable at prices below $65 per barrel. A diesel engine was noted to give better performance with fuel containing up to 30% pyrolysis oil, and it may perform well with up to a maximum of 50% waste plastic oil blended into conventional diesel fuel. In another study [16, 66], an engine showed stable performance with a brake thermal efficiency comparable to that of conventional diesel, the brake thermal efficiency being higher with the pyrolysis oil up to about 80% of engine load; however, engine emissions were notably higher with pyrolysis oil than with conventional diesel [56]. In a study of a petrol engine fueled by waste pyrolysis oil, the engine had a higher brake thermal efficiency up to 50% of its rated load; the higher calorific value and greater oxygen content of the waste pyrolysis oil lead to higher heat release rates [17, 67, 68]. In another study [17, 69], the high viscosity of the blends was found to cause poor atomization, leading to a longer ignition delay and a longer combustion duration than conventional diesel at full load. Other studies [17, 70-72] noted that waste plastic pyrolysis oil increases the ignition delay by about 2.5° CA and increases emissions, with NOx up by 25%, CO up by 5%, and unburned hydrocarbons up by about 15%, although smoke was reduced by 40% to 50%. In yet another study [73], the waste plastic pyrolysis oil yielded a brake thermal efficiency (BTE) of 27.75% versus 28% for diesel, a brake-specific fuel consumption (BSFC) of 0.292 kg/kWh versus 0.276 kg/kWh for diesel, unburned hydrocarbon emissions (uHC) of 91 ppm versus 57 ppm, and NOx emissions of 904 ppm versus 855 ppm for conventional diesel.
An important parameter in combustion performance is the injection timing of a compression ignition engine. In the study by [6], the injection timing of an engine was varied; retarding it from 23.0° bTDC (before top dead center) to 14.0° bTDC reduced the NOx, CO, and unburned hydrocarbon emissions. In the study by [74], the brake thermal efficiency of blends was experimentally established to be lower than that of conventional diesel, but at 25% blending, the blend performed similarly to pure diesel [67]. Plastic pyrolysis oil could therefore match engine performance through modifications such as adjusted injection timing. Economically, the pyrolysis oil can replace diesel in terms of engine performance and energy output on condition that its price at the point of use is not more than 85% of that of conventional diesel oil [75] and crude oil prices are above US$65 per barrel [5]. A minimal sketch of these two thresholds is given below.
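The following sketch encodes the two economic rules of thumb from the cited literature as a simple check; the price inputs are placeholders, not market data.

```python
# A minimal sketch of the two economic thresholds discussed above:
# (1) pyrolysis oil must sell at <= 85% of the diesel price [56, 65, 75],
# and (2) crude must trade above ~$65/bbl for unsubsidised plants [5].
# All price inputs below are illustrative placeholders.

def pyrolysis_oil_viable(pyro_price: float, diesel_price: float,
                         crude_usd_per_bbl: float,
                         green_premium: bool = False) -> bool:
    """Apply both rules of thumb from the literature cited above."""
    price_ok = pyro_price <= 0.85 * diesel_price
    crude_ok = crude_usd_per_bbl > 65 or green_premium
    return price_ok and crude_ok

print(pyrolysis_oil_viable(0.80, 1.00, 70))        # True
print(pyrolysis_oil_viable(0.90, 1.00, 70))        # False: price rule fails
print(pyrolysis_oil_viable(0.80, 1.00, 60))        # False: crude too cheap
print(pyrolysis_oil_viable(0.80, 1.00, 60, True))  # True with green premium
```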
5.4. Oil Yield of Different Plastics

There are three major products of the pyrolysis of plastic waste, namely, liquid oil, gas, and solid residue (char). Liquid oil results from the condensation of vapors released from the reactor, the gas is obtained from the noncondensable vapors, and the solid residue remains in the reactor [58, 76, 77]. In studies by [78-80], the liquid oil, gas, and solid residue were analyzed for composition, yield percentages, and potential applications. Liquid oil is the dominant product, followed by gas and finally the solid residue; however, process conditions such as temperature, use of catalysts, feedstock type, and retention time can be altered to favor the production of either liquid or gas. Additionally, Reference [79] showed that the calorific values of the liquid oil produced from the pyrolysis of HDPE, PP, and LDPE plastic wastes were close to those of commercial diesel and gasoline. For PS, PVC, and PET, low calorific values were attributed to the aromatic ring in the PS chemical structure, the chlorine in PVC, and the benzoic acid in PET. The kinematic viscosity, density, and flash point of the pyrolysis oil were near those of commercial diesel and gasoline [58]. According to [78], pyrolysis oil can serve as a good transport fuel when blended with diesel in ratios of 10%, 20%, 30%, and 40%, which produced varying engine performance characteristics. Table 2 compares the liquid oil properties with those of commercial-grade diesel and gasoline.
From Table 2 above, it is noted that different plastics produce different compositions of the main products, i.e., oil, gas, and char. In terms of calorific value, polystyrene (PS) produces oil with the same calorific value as conventional diesel, but it has a significantly lower flash point, making it more flammable, and it is more viscous and denser than diesel. Oil from PVC has the lowest calorific value. The pyrolysis oil is more viscous and has a lower flash point than conventional diesel. Compared with petrol, the pyrolysis oil has a higher density, while its flash point may be lower or higher than that of petrol depending on the oil source or plastic used [30].
The quality of the pyrolysis oil is a function of both the type of plastic and the process used. Table 3 below summarizes the process, the type of plastic, and the product quality realized.
Table 3 above shows the various plastics, their oil yield potential, and the optimum conditions for pyrolysis and solid waste generation. The pyrolysis temperatures used are between 350 and 740 °C. The highest oil yield is 97%, obtained from polystyrene processed in a batch reactor at 425 °C.
Fuel can be produced from plastics through gasification, thermal pyrolysis, and catalytic pyrolysis. Catalytic pyrolysis of homogeneous waste plastics generates better oil in terms of both quantity and quality. The main product of gasification is syngas, produced at higher temperatures than those of the pyrolysis process. Syngas can be used as a feedstock for the Fischer-Tropsch (FT) process for the production of diesel.
5.5. Char and Gaseous Products

The pyrolysis gas produced consists mainly of methane, hydrogen, ethane, propene, propane, and butane. Its composition is a function of the feedstock material and the process parameters. Carbon dioxide (CO2) and carbon monoxide (CO) are produced when the feedstock contains PET, while hydrogen chloride gas is additionally produced if PVC is part of the feedstock [84]. The composition of the gas determines its calorific value and, hence, its application [6, 16]. The pyrolysis gas has potential internal use as a source of energy for the pyrolysis plant process. Additionally, the pyrolysis gas is a feasible feedstock for polyolefin production after extraction of its ethene and propene components [81, 85]. Reference [6] provides a summary of pyrolysis studies using various feedstock materials, process parameters, and product yields.
The solid residue or char produced during pyrolysis consists mainly of volatile matter and carbon, with low levels of moisture and ash. Its calorific value was estimated at 18.84 MJ/kg, making it suitable for use as a fuel, e.g., in co-combustion with coal. Upgraded chars are used for heavy-metal adsorption from industrial wastewater and toxic gases; other uses include serving as an energy source for boilers and as feedstock for activated carbon [81].
6. Design and Construction of Plastic Pyrolysis Plants
This study examined the plastic industry and plastic waste management globally and recommends the development of a waste-to-oil pyrolysis plant as a solution to plastic waste disposal challenges. Data were collected through document analysis and review, interviews, and questionnaires. The quantity of plastics available in the market was estimated using data on production, importation, usage, and recycling. The pyrolysis plant equipment was specified through design analysis with mass and energy balances, supported by financial and economic analyses and design software. The energy and mass balance equations were used to determine the input, throughput, and output per production cycle.
6.1. Design Overview. The main raw material for the pyrolysis plant is a mixture of plastic wastes from different consumer products. These include PVC, HDPE, LDPE, PS, PP, PET, and PE, which are appropriate for liquid oil production because of their high volatile matter content. The process parameters (catalysts, pressure, residence time, temperature, and heat transfer from the heat source) determine the quality and quantity of the oil produced. The steps proceed as follows: crushing of the plastic waste in the crushing machine, heating in the reactor, condensation in the condenser, acid removal, and, finally, distillation in the fractionating column to separate the oil mixtures. Figure 4 below shows the overall process flow diagram.
Figure 4 is a flow diagram showing the main elements of the proposed pyrolysis plant: the crushing unit and area, reactor, char remover, condenser, fractionating column, and fuel storage vessels.
6.2. Mass and Energy Balances. The output expected from a specific raw material input and the plant throughput were estimated using general standard assumptions that have been developed and applied in several operating pyrolysis plants and in plastic waste pyrolysis experiments giving similar results. The assumptions include the percentages of the reactor products at an average temperature of 470 °C. Similar reactor conditions were selected for this design, so the mass and energy balance assumptions were applied directly. Table 4 gives the thermal and catalytic pyrolysis yields adopted in the design.
From Table 4, it is noted that thermal pyrolysis yields more char than catalytic pyrolysis, whereas catalytic pyrolysis yields more liquid fuel. Average gas production is slightly higher in thermal pyrolysis than in catalytic pyrolysis.
For thermal pyrolysis, the reactor products from the input are as follows: a small percentage of the raw material, about 0.053%, is lost at the reactor, mainly as moisture, which can be recovered as water. For efficient functioning, the catalyst used amounts to an estimated mass fraction of 0.023 of the raw material mass at the reactor. At the reactor, decomposing 1 kg of plastic per hour requires 844 W for complete thermal decomposition. Though the required power input is high, more power is produced at the end of the reaction, estimated at 9500 W per kg/h. Applying the above mass and energy balance assumptions, the heating value of the gas produced is 50 MJ/h, and the heating value of the crude liquid is expected to be 46 MJ/h [54].
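The per-kilogram energy balance implied by these figures can be read as follows; interpreting the 844 W input over one hour as roughly 3.0 MJ of heat per kilogram of feed is our assumption, since the source states its figures loosely per kg per hour.

```python
# A minimal reading of the per-kilogram energy balance quoted above.
# Interpreting 844 W sustained for one hour as ~3.0 MJ of input heat
# per kg of feed is an assumption about the source's loose units.

feed_kg = 1.0
input_MJ = 844 * 3600 / 1e6    # 844 W for 1 h -> ~3.04 MJ
gas_MJ, liquid_MJ = 50.0, 46.0 # product heating values per hour of operation [54]

net_MJ = gas_MJ + liquid_MJ - input_MJ
print(f"input {input_MJ:.2f} MJ, products {gas_MJ + liquid_MJ:.0f} MJ, "
      f"net {net_MJ:.1f} MJ per kg-hour")
# The recoverable chemical energy greatly exceeds the heat input,
# which is why the recycled char and gas can sustain the process heating.
```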
6.3. Reactor Design. The value of plastic pyrolysis products is influenced by the feedstock composition and the reactor technology applied. All pyrolysis reactors have their pros and cons, and the choice is guided by the required product and the desired flexibility in handling feedstock variations. The two most important factors in reactor design and selection are heat and mass transfer efficiency [33].
The main reactor types are batch, semibatch, fixed bed, and fluidized bed reactors, and analysis is needed to identify the best choice among them. The batch reactor is a closed system with no entry or exit of reactants and products during the processing time. Given time, this reactor can achieve a high yield and has proven to be the most reliable, but the system is labor intensive, so the labor cost is high and the whole process is expensive. The semibatch reactor provides flexibility: products can be removed and reactants added simultaneously [16]. Despite this flexibility, the semibatch reactor has a high operating cost, making it unsuitable especially for large-scale applications. The fixed bed reactor has the catalyst broken into pellets and packed into the bed; its main limitation is that it does not provide enough surface area for the reactants to access the catalyst, so it is rarely used. The fluidized bed reactor, on the other hand, addresses the shortcomings of the other reactors. The catalyst is formed into small particles placed in the reactor bed, and a fluidizing gas is passed through the bed, carrying the catalyst particles in a fluid state. This gives better access to the catalyst, which is well mixed with the fluid, and a large surface area for the reaction, hence higher efficiency and output rate. This is the most preferred reactor because less heat is required and the frequency of feedstock addition is minimized, keeping labor and, therefore, operating costs low [16].
The most common reactors for plastic pyrolysis are fixed bed/batch reactors, screw kilns, rotary kilns, vacuum bed reactors, and fluidized bed reactors. The reactor type selected influences the choice of heating temperature and heating rate, which in turn determine the product yield and the compositions of the pyrolysis oil and gas produced [38]. Important parameters in the design of a plastic pyrolysis plant are the operating temperature (based on the material and process), the heating rate, the residence time of materials in the reactors, and the products formed, which influence their properties [16, 27].
6.4. Cost-Benefit Analysis. The economic and financial feasibility was assessed by comparing the total production costs with the expected profits. The capital costs included equipment, piping, electrical work, direct installation, instrumentation and controls, site preparation, setup, and engineering. The net present value (NPV) method was used to evaluate the feasibility of the project: a positive NPV indicates profitability, while a negative one means the project will only lead to losses. The NPV is calculated as NPV = Σ_{t=1}^{T} C_t / (1 + r)^t − C_0, where C_t is the net cash flow during period t, C_0 is the total initial investment cost, r is the discount rate, and T is the number of time periods. A minimal implementation is sketched after Section 6.5 below.

6.5. Limiting Conditions. It was not possible to access all plastic manufacturers and dealers to generate accurate and detailed data. Data used for the evaluation, such as for the economic analysis, were estimates rather than exact values; exact values keep varying, so averages were used, and the results may therefore not be fully accurate. However, accurate values were used wherever available and accessible to improve accuracy and reliability. Some information was treated as confidential at some of the facilities visited.
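A direct implementation of the NPV formula from the cost-benefit analysis in Section 6.4 is sketched below; the cash flows and discount rate are placeholder values, not results from this study.

```python
# A direct implementation of the NPV formula used in the cost-benefit
# analysis above. Cash flows and discount rate are placeholder values.

def npv(cash_flows: list[float], rate: float, initial_cost: float) -> float:
    """NPV = sum_{t=1..T} C_t / (1 + r)^t - C_0."""
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(cash_flows, start=1)) - initial_cost

# Example: a $2.0M plant with $0.6M net cash flow per year for 5 years, r = 10%.
print(f"NPV = ${npv([0.6e6] * 5, 0.10, 2.0e6):,.0f}")
# A positive result indicates profitability; a negative one indicates a loss.
```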
6.6. Design Principles and Considerations. The pyrolysis system design must take into account four main steps of the pyrolysis process: (1) even heating of the plastic to reduce the thermal gradient; (2) removal of oxygen from the pyrolysis reactor or chamber; (3) effective control of the char to avoid solidification, which would turn it into an insulator and reduce heat transfer to the plastic feedstock; and (4) controlled condensation and fractionation of the pyrolysis vapors to produce good quality in terms of composition and consistency [16].

6.7. Pyrolysis Process Design. The comprehensive conversion process from plastic waste to fuel is shown in the process flow diagram in Figure 5. The most fundamental elements of the pyrolysis process are the heat source, reactor, condenser, and distillation unit. The input and output vary according to the plant capacity selected; to maximize plant capacity, several units could be used within the same production line.
From Figure 5, it is noted that water, diesel, and other oils are the main products of the fractionating chamber. The main parts of the system are the heat source, the reactor, the condenser, the oil storage tank, and the fractionating chamber.
6.8. Choice of Reactor. The reactor chamber is heated indirectly by a flow of hot air from the burner. Its temperature is regulated via a controller that operates a valve to admit more or less hot air, depending on the operating temperature; uniform heating is thus achieved at the reactor. A rotating mixer ensures that all the plastic material is heated evenly. The reactor is illustrated in Figure 6.
Figure 6 shows the reactor and its accessories, which include the controller and mixer. The reactor is connected to the condenser in the assembly.
The two pyrolysis-processing methods are continuous and discontinuous cycles: discontinuous processing uses a batch reactor, while continuous reactors are used for continuous processing.
6.9. Safety Considerations. The best material for boilers and pressure vessels is carbon steel (material 516 Gr 70), as it can withstand high temperatures and pressures without cracking. This material has high tensile and yield strengths suitable for the reactor design: a tensile strength of 510-610 N/mm² and a minimum yield stress of 335 N/mm².
Reactors differ greatly in design and manufacture, and especially in safety. Table 5 compares batch reactor designs. The dish-end batch design was selected for its ability to withstand pressures of up to 7 bar, reducing the chances of accidents.
From Table 5, it is noted that batch reactors available on the market have capacities ranging from 5 to 15 tonnes per batch. To achieve higher input-output capacities, a parallel arrangement of four batch reactors was used; only four were used in the design because of initial plant cost constraints, or else more units would have been adopted. The plant layout with the reactors, shown in Figure 7, is used in the design.
Figure 7 shows the batch process option with four reactors, one fractionating column, and two condensers.

6.10. Reactor Mass Balance. The mass balance in the reactor is demonstrated in Figure 8: the mass entering the reactor and the total mass leaving the reactor should balance. Here m_pw is the mass of plastic waste, m_f is the mass of pyrolysis oil, m_c is the mass of char, and m_g is the mass of pyrolysis gas, all in tonnes.
From Figure 8, the mass balance expression is mass in = mass out:

m_pw = m_f + m_c + m_g. (1)
The assumptions made in the mass balance are that there are no leakages and that measurements are taken accurately. A minimal numerical check is sketched below.
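The sketch below applies equation (1) to split a feed of plastic waste into its product streams; the oil and gas fractions used here are purely illustrative placeholders, not the design values of Table 4.

```python
# A minimal check of the reactor mass balance m_pw = m_f + m_c + m_g.
# The product fractions below are purely illustrative placeholders,
# not the design values from Table 4.

def reactor_outputs(m_pw: float, oil_frac: float, gas_frac: float):
    """Split the plastic-waste feed m_pw (tonnes) into oil, char, and gas."""
    m_f = m_pw * oil_frac
    m_g = m_pw * gas_frac
    m_c = m_pw - m_f - m_g                       # char closes the balance
    assert abs(m_pw - (m_f + m_c + m_g)) < 1e-9  # mass in = mass out
    return m_f, m_c, m_g

m_f, m_c, m_g = reactor_outputs(60.0, oil_frac=0.80, gas_frac=0.10)
print(f"oil {m_f:.1f} t, char {m_c:.1f} t, gas {m_g:.1f} t")
```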
6.11. Comparison of Thermal and Catalytic Pyrolysis Processes. From the mass balance calculations for the thermal and catalytic depolymerization processes, the two can be compared on the expected outputs, using the lower limits of those outputs. A further comparison of the thermal and catalytic processes is provided in Table 6 below.
From Table 6, it is noted that catalytic pyrolysis gives a higher oil yield than thermal pyrolysis, while thermal pyrolysis yields more noncondensable gases. The solid deposit in the form of char is almost the same for the two processes.
6.12. Condenser. The condenser serves mainly to cool down the vapors from the pyrolysis reactor. Two condenser configurations, vertical and horizontal, are considered. The main design parameters are the pipe length, pipe diameter, number of tubes, cooling area, and cooling method applied. Figure 9 shows the side and front views of the condenser [87].
Figure 9 shows the main elements of the pyrolysis plant condenser, i.e., the vapor inlet, the cooling water inlet and outlet, and the condensate outlet.
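As a rough feel for how these design parameters are sized, the sketch below estimates a condenser heat duty; every property value in it is a hypothetical placeholder for a generic hydrocarbon vapor, since the actual duty depends on the pyrolysis vapor composition.

```python
# A rough condenser heat-duty sketch. All property values here are
# hypothetical placeholders for a generic hydrocarbon vapor; the
# actual duty depends on the pyrolysis vapor composition.

vapour_kg_per_h = 2000.0       # assumed vapor flow from the reactor
latent_heat_kJ_per_kg = 350.0  # assumed heat of condensation
cp_kJ_per_kg_K = 2.2           # assumed vapor heat capacity
superheat_K = 120.0            # assumed cooling before condensation begins

duty_kW = vapour_kg_per_h * (latent_heat_kJ_per_kg
                             + cp_kJ_per_kg_K * superheat_K) / 3600
print(f"condenser duty ~ {duty_kW:.0f} kW")
# This duty, with the permissible cooling-water temperature rise,
# fixes the water flow rate and hence the tube count and cooling area.
```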
6.13. Acid Removal. The pyrolysis oil formed may contain acidic contaminants, such as hydrochloric acid and benzoic acid, derived from the chlorine in PVC and the benzoic constituents of PET, respectively. This is undesirable, as the acid corrodes plant parts and reduces oil quality; hence, a neutralizing mechanism was provided. A plastic tank of alkaline solution was installed, from which a dosing pump delivers the alkaline solution into the pyrolysis oil line. The pump is turned on once oil flow to the distiller is initiated [1].
6.14. Oil Distiller. The pyrolysis oil has a wide range of boiling points. It is passed through the fractionating column in the oil distiller to obtain useful liquid and gas fractions such as diesel and gasoline. The main control variables are the temperature change in the distillation column, the pressure, and the reflux ratio of the product oil. The distillation column is shown in Figure 10.
From Figure 10, it is noted that the main parts of a distiller are the distillation column, the condenser, the reflux drum, and the reboiler.

6.15. Cooling Tower. The hot water from the condensers and the oil distiller is cooled down in the cooling tower. The main design parameters for the cooling tower are the cooling range, wet bulb temperature, mass flow rate of the water, dry bulb temperature, air velocity through the tower, atmospheric pressure, and packing mechanism. Since there are losses due to evaporation, drift, and blowdown, makeup water may be required to maintain the water level in the cooling tower basin [1].
6.16. Plant-Processing Method. The research considered both thermal and catalytic depolymerization to identify the best option. From the comparison, the catalytic process was found to produce about 3% more oil than the thermal process [88, 89], but high expenses have to be incurred on the catalyst, and thermal pyrolysis yields more gas and char than catalytic pyrolysis. The literature shows that, for the same input, catalytic pyrolysis consumes 40% less power than thermal pyrolysis at the reaction stage, but the catalysts used are very expensive, accounting for about 65% of total operating costs [86, 88].
Considering these factors, thermal depolymerization was selected for this design. With thermal depolymerization, the oil product was expected to range from 46.2 to 54 tonnes per day, the gases from 4.8 to 6 tonnes per day, and the char from 1.2 to 7.8 tonnes per day. The design provided for channeling the char and gases produced back to the burner for combustion, supplying energy consumed during pyrolysis. The burner, which provides heat for melting the material in the reactor, utilized a heavy oil fraction from the fractionating tower together with the char and gases (all products of the same process), minimizing the cost of running the plant entirely on external energy. The reactors, condensers, fractionating tower, cooling tower, tanks, pumps, valves, and pipes were specified through straightforward calculations of flow rates and mass and energy balances, and equipment with specifications close to those required was selected from manufacturers [28, 48].
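An order-of-magnitude estimate of the process heat recoverable by channeling the char and gas back to the burner can be made from the daily output ranges above and the char heating value of 18.84 MJ/kg quoted earlier; the mid-range gas heating value of 35 MJ/kg is our assumption within the 25-45 MJ/kg range given previously.

```python
# Order-of-magnitude estimate of the process heat recoverable by
# channelling the char and gas back to the burner, using the daily
# output ranges above and the char heating value quoted earlier.

GAS_HV_MJ_PER_KG = 35.0    # assumed midpoint of the 25-45 MJ/kg range
CHAR_HV_MJ_PER_KG = 18.84  # from the char characterisation above [81]

def daily_recoverable_GJ(gas_tonnes: float, char_tonnes: float) -> float:
    """Chemical energy (GJ/day) in the gas and char streams."""
    return (gas_tonnes * 1000 * GAS_HV_MJ_PER_KG
            + char_tonnes * 1000 * CHAR_HV_MJ_PER_KG) / 1000  # MJ -> GJ

low = daily_recoverable_GJ(4.8, 1.2)
high = daily_recoverable_GJ(6.0, 7.8)
print(f"roughly {low:.0f}-{high:.0f} GJ/day available for process heating")
# -> roughly 191-357 GJ/day under these assumptions
```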
6.17. Safety Considerations. Safety is a very important aspect of plant design. Carbon steel (material 516 Gr 70), which has high tensile strength and high yield stress, was selected for the pipes, reactor, and valves of the plant to avoid accidents caused by cracking of the material under high pressure. The dish-end reactor design was selected over the flat-end design for its ability to withstand pressures of up to 7 bar, eliminating the possibility of reactor explosions in cases of pressure rise.
Noncondensable gases from the condenser (methane, ethane, propane, butane, and hydrogen) can be a serious fire risk and must be prevented from leaking to the environment through proper sealing and the selection of the right materials and fittings. In the proposed design, the gases were channeled back to the reactor and burned to release energy, while the noncombustible gases were flared to release carbon dioxide and water, which are not toxic substances [59]. Measures applied to guarantee the safety of the installation include the following:

(i) Lagging was provided for the burner, reactor, and content-carrying pipes to avoid burning workers in case of contact.

(ii) The design provided a closed system in which no content is released until the end of the processing line, after distillation, ensuring that workers are not exposed to any contaminants throughout the process.

(iii) The acidic content of the oil is neutralized immediately after condensation of the pyrolysis liquid, so that workers do not encounter acids when handling the final products directly.
7. Challenges and Opportunities of Plastic Recycling
Plastics are cheap, lightweight, and durable and can be easily molded into many different products, shapes, and sizes, hence their wide application globally and their increasing production and use. The high level of plastic consumption and the related disposal have created serious environmental concerns, with about 4% of global oil and gas production used as a feedstock for plastics and a further 3-4% used to provide energy for their manufacture. A significant proportion of plastics goes into disposable packaging and other short-lived products that are soon discarded into the environment, an indication that the current use of plastics is not sustainable [1, 30, 90].
Recycling is one of the most appropriate measures available to minimize the environmental impacts of plastics today. Recycling reduces oil usage, CO2 emissions, and waste disposal. Other measures include downgauging or product reuse, the use of biodegradable materials, and fuel recovery. The quantities of plastics recycled vary widely by geography, plastic type, and application, with plastic recycling practiced since the 1970s. More recycling opportunities are being created through advances in the technologies and systems used for the collection, sorting, and processing of used plastics, aided by the combined effort of the public, environmental authorities, industry, and governments to ensure that the majority of plastic wastes are recycled [91].
A leading challenge in plastic recycling is the recycling of mixed plastic wastes. Its main advantage is the ability to recycle a significant proportion of the waste stream by extending postconsumer collection to a wider variety of materials and pack types. Product design should consider recycling, so that plastic products are easier to recycle in terms of composition, shape, and size. Studies in the UK in 2007 showed that 21 to 40% of the packaging in a regular shopping basket cannot be effectively recycled even if it is collected. This calls for policies that promote environmentally friendly design principles in the plastics industry to increase recycling performance by raising the proportion of materials that can be technically and economically collected and recycled [1, 91].
Light plastics like packaging films are quite problematic during collection and sorting, which is the main reason most postconsumer recycling schemes target rigid packaging. A significant number of recovery facilities cannot handle flexible plastic packaging because its handling characteristics differ from those of rigid packaging. Additionally, the low weight-to-volume ratio of films and plastic bags makes investment in their collection and sorting less economical, except for plastic films recovered from sources like secondary packaging, e.g., the shrink-wrap of pallets and boxes and some agricultural films, which is feasible under the right conditions. Economical recycling of films and flexible packaging requires investment in separate collection, sorting, and processing facilities at recovery plants that can handle mixed plastic wastes. Successful recycling of mixed plastics requires high-performance sorting, to separate plastic types to high levels of purity, together with the development of end markets for each polymer-recycling stream [91].
Rationalizing the diversity of materials to a subset of current usage could dramatically increase the effectiveness of postconsumer packaging recycling. If rigid plastic containers, from jars and bottles to trays, were all PET, HDPE, and PP, without clear PVC or PS, which are difficult to sort from commingled recyclables, all rigid plastic packaging could be collected and sorted into recycled resins with very little cross-contamination. Labels and adhesive materials should likewise be chosen to maximize recycling performance. Sorting and separation within recycling plants should be improved to increase recycling potential and eco-efficiency by reducing waste fractions and water and energy use, the recycling objective being to maximize the volume and quality of recycled products [91].
8. Results and Discussion
Plastics are popular and have found a wide range of applications because they are lightweight, do not rust or rot, are cheap and reusable, and have a significant positive socioeconomic impact. However, plastics are nonbiodegradable, and huge quantities of plastic waste have become a nuisance to the environment. Sustainable use of plastics should encourage reuse and recycling [92, 93].
Plastics are important, widely used materials with a range of mechanical and chemical properties suitable for application in many industries, such as packaging, automotive, and electronics. However, the use of plastics endangers the environment mainly because they are generally nondegradable, leading to accumulation in landfills and natural habitats, including the seas. Recycling, including use in the production of pyrolysis oil, will reduce CO2 emissions, energy consumption, and waste disposal, the main challenge being waste collection and transport to recycling facilities, which requires policy interventions. Research and development of new recycling technologies should be undertaken to increase the level of recycling.
Huge quantities of plastic waste can be used as feedstock to produce fossil fuel substitutes. Plastic pyrolysis plants have been set up in a number of countries globally to convert waste plastic to hydrocarbon fuel, which may be a cheaper partial substitute for petroleum oil. Producing pyrolysis oil from plastic wastes is ecologically and economically superior to many other options, since it addresses environmental pollution and partially addresses the depletion of fossil fuel reserves by reducing the demand for primary oil as a substitute feedstock in the production of lubricants and other oil products [34, 94]. The overall effect of waste plastic pyrolysis is a reduction in hazardous plastic waste as well as a reduced import bill for oil-importing countries.
The challenges facing plastic pyrolysis as an oil production process are the limited standards regulating the process and products of recycled plastics and the need to develop more efficient and cheaper pyrolysis technologies. The economic viability of plastic pyrolysis depends on the design and development of reactors suited to the wide range of plastic waste feedstocks, with their varying process requirements and product outputs, and on reducing the capital investment and the cost of operation and maintenance [1, 90]. Plastic production has grown steadily since the first production in the 1950s; because of the short life cycle of plastics, waste generation is high, hence the need to recycle, all the more so as only about 15% of plastic wastes are recycled globally. The total amount of plastic waste disposed to the environment each year in Kenya was estimated at 221,000 tonnes, with a low recycling rate of 15% of the total plastic material consumed yearly [25]. The low recycling rates indicate a gap in the plastic market that has not yet been fully exploited, hence the massive pollution caused by waste plastic. Developing a pyrolysis plant for the Kenyan plastic market was therefore a viable solution because of its ability to process large amounts of waste; the large waste quantities and high plastic consumption also indicate the availability and reliability of the raw material (plastic waste) for the pyrolysis plant. Studies have shown that thermal pyrolysis yields more char than catalytic pyrolysis, whereas catalytic pyrolysis yields more liquid fuel, and average gas production is slightly higher in thermal pyrolysis. Overall, the product outputs are close enough that the process plant designer would rather analyze other factors, like cost, in the process selection, although catalytic pyrolysis is superior in terms of the solid remnants that must be landfilled [1, 30, 89].
Not all stakeholders are positive about the rationale of converting plastics to fuel. Some environmentalists have argued that widespread adoption of plastic-to-fuel conversion could slow efforts to develop sustainable alternatives to fossil fuel, working against the goal of zero emissions by 2050 [53]. Further analysis shows that pyrolysis as a recycling measure has a lower carbon footprint than incineration while still offering significant energy potential [5]. A well-established plastic waste collection system is necessary for plastic waste recycling to succeed. Through continuous awareness, the public will support the recycling effort as contributors of plastic wastes and users of pyrolysis oil. The government should provide a conducive environment for plastic waste recycling through policy and legal measures as well as economic and financial incentives.
In a developing country like Kenya, Nairobi City has the highest plastic generation, collection, and recovery, with just 45% of plastics recovered. Kisumu City has the lowest waste collection and recovery rates, at 20% and 8%, respectively, making it the most plastic-littered city in Kenya. Nairobi and other cities face plastic waste disposal challenges because of weak policies and stakeholders' failure to adhere to solid waste management guidelines. A policy framework is needed that promotes plastic waste recycling through education and sensitization, creating a market for recycled products, and changing people's attitudes toward recycling. The government should promote the recycling industry through legislation, funding, and other forms of support.
These measures will raise recycling levels, which are currently below the global average of 15%.
Pyrolysis is an important process given the increasing quantities of plastic waste and the growing share of thermosetting plastics that need to be recycled. However, pyrolysis of some plastics, like polyvinyl chloride (PVC), also produces toxic byproducts. Additionally, the oil produced from plastic pyrolysis has inferior properties, like a high sulphur content, which is undesirable because it is harmful to the environment. The oil also has a relatively low flash point, generally ranging from 10 to 40 °C compared with about 50 °C for commercial-grade diesel, an indicator of high volatility and, hence, greater risk than conventional fossil fuels. These properties can, however, be improved by blending with commercial-grade fuels and by installing sulphur removal systems, especially for industrial applications. Since waste plastic pyrolysis oil has a higher sulphur content than conventional diesel, kerosene, light fuel oil, and gasoline, it may not be the best direct substitute where these fuels are used. However, its sulphur content is lower than that of heavy fuel oil and furnace oil, which implies that, as far as emissions and environmental pollution are concerned, waste plastic pyrolysis oil can be a better substitute for diesel engine power plants running on HFO and for industrial processes using furnace oil. According to [95], proposed improvements to the quality of the pyrolysis oil include hydrogen chloride and sulphur dioxide scrubbers for the removal of significant impurities, like HCl vapor and sulphur dioxide, from the recovered oil. Blending is also recommended, in varying proportions, with renewable fuels like biodiesel and ethanol and, to a limited extent, with conventional fossil fuels, to improve the properties and reduce the carbon footprint of the fuels.
Economically, the pyrolysis oil can replace conventional diesel oil in terms of engine performance and energy output if its price is less than 85% of the price of conventional diesel [49]. According to [50], crude oil prices above $65 per barrel are necessary to justify commercial investment in waste plastic to fuel without a premium, but with a green premium, recycling waste plastic to fuel remains viable at prices below $65 per barrel. A diesel engine was noted to give better performance with fuel containing up to 30% pyrolysis oil, and it may perform well with up to a maximum of 50% waste plastic oil blended into conventional diesel. In another study [51], an engine showed stable performance with a brake thermal efficiency comparable to that of conventional diesel, the efficiency being higher with the pyrolysis oil up to about 80% of engine load; however, engine emissions were notably higher with the pyrolysis oil [51]. To improve efficiency and further reduce the cost of plastic waste pyrolysis, a process improvement is proposed to reduce energy use and to generate power from the pyrolysis gas and other combustible products, both for plant use and for export to the grid, earning extra revenue for the recycling facility and substituting generation from other fossil fuels. Melting the plastics in a higher-pressure zone may lower the boiling point and, hence, the fuel-burning temperature, reducing energy consumption and improving efficiency. Increased application and adoption of pyrolysis products will attract investment and, hence, economies of scale, leading to further process efficiencies and cost reductions and making pyrolysis a more profitable and attractive venture. Pyrolysis liquids can be used as feedstock to substitute fresh oil in the manufacture of plastics, lubricants, industrial fuels, and power plant oil. This calls for policy incentives and research into efficient pyrolysis technologies [1, 5, 42, 95].
The oil produced from plastic wastes generally has a higher sulphur content than conventional fuel oil. Users of the pyrolysis oil therefore need environmental protection devices to scrub the flue gases produced, which raises concerns from users and environmentalists. Sulphur in the fuel produces sulphur oxide emissions that can increase acid rain. The pyrolysis oil has a higher sulphur content than conventional gasoline, kerosene, and commercial-grade diesel: in a typical case, the sulphur content of the waste plastic pyrolysis oil (WPPO) was 0.246%, compared with 0.15% for diesel oil and 0.014% for gasoline. When mixed plastics are used, typical studies showed a sulphur content of 4.8% for the thermal pyrolysis liquid fuel and 4.36% for the catalytic pyrolysis liquid fuel from mixed waste plastics [6, 96].
The main products of plastic pyrolysis are the pyrolysis oil; a hydrocarbon-rich gas whose heating value of 25-45 MJ/kg makes it ideal for process energy recovery; and char. The pyrolysis gas can thus be fed back into the process to supply process heat, substantially reducing the reliance on external heating sources.
From the literature review, it was deduced that the fuel produced from this process is of high quality, with properties similar to diesel oil (a heating value of about 45 MJ/kg, a density of 0.77-0.86 g/cm³, and a cetane number of 40-60), and needs only minor modifications, like adjusted fuel injection timing in diesel engines. The pyrolysis oil can be used as a transport fuel; other feasible applications are gas turbine fuels, diesel generators, aviation fuel, and jet propulsion. When blended with diesel in suitable ratios, the pyrolysis oil is suitable for applications such as fuel for boiler furnaces and the production of lubrication oil, waxes, and plastic feedstock. Waste plastic pyrolysis offers a more sustainable solution to plastic waste pollution and reduces the use of virgin oil. The economic analysis of this project has shown it to be highly profitable, returning the investment cost after a period of five years. Increased investment in and use of waste plastic pyrolysis oil will reduce the demand for fresh fossil fuels, while blending with renewable fuels for power generation will make plastic waste pyrolysis a significant player in the energy transition to low-carbon power generation and energy consumption [30].
With the global production and consumption of plastics increasing, and less than 20 percent of plastics recycled globally, plastic waste is accumulating and poorly managed. The impacts of plastic production, use, and disposal on the environment and people create risks and opportunities for investors across the sectors and companies that form part of the plastic value chain. This report highlights the global challenge posed by plastics, as well as some potential solutions.
Global plastic production and consumption have grown more than 20-fold since the 1960s. About 40% of plastics produced are for packaging, 95% of which are single-use. The production and consumption of plastics continue to grow amid an inefficient global waste management system, with less than a fifth of plastic waste being recycled. The various stages of the plastic value chain create significant greenhouse gas emissions. It is estimated that greenhouse gas emissions from plastics may account for 10-13% of the entire remaining carbon budget by 2050 within the context of the 1.5 °C target of the Paris Agreement under the United Nations Framework Convention on Climate Change [97].
Conclusions
Plastics are cheap, lightweight, and durable and can easily be molded into many different products, shapes, and sizes, hence their wide application globally and their increasing production and use. Plastic consumption and production have been growing since production first began in the 1950s. About 4% of global oil and gas production is used as feedstock for plastics, and a further 3-4% provides the energy for their manufacture. The short life cycle of many plastics and their wide use in domestic and industrial applications have led to significant quantities of plastic waste globally. Plastic demand remains high and continues to grow, mainly because of the flexibility and affordability of plastics across a wide range of industrial and commercial applications. Many countries and regulatory authorities are calling for bans on certain categories of plastics, while others, such as Kenya, have already banned the manufacture, sale, and use of single-use carrier bags. Such bans have serious socioeconomic impacts, including loss of jobs, business opportunities, and livelihoods, although they bring environmental benefits related to reduced plastic waste pollution. Plastic waste recycling is a feasible solution to the huge plastic waste pollution problem. The available strategies are classified as primary, secondary, tertiary, and quaternary recycling techniques, corresponding to four main treatment avenues for plastic solid waste: re-extrusion as a primary treatment, mechanical treatment as a secondary measure, chemical treatment as a tertiary measure, and energy recovery as a quaternary measure. The operations involved in plastic recycling include the separation, sorting, and cleaning of the wastes. The main challenges facing plastic recycling are high collection and transport costs, high electricity bills, and feedstock with widely varying quality and properties due to the different plastic types.
The most common methods of plastic disposal are incineration and landfilling, whereas plastic pyrolysis can produce useful oil and other products from plastic wastes. Although some countries have banned the production and sale of certain plastics, such as single-use plastic bags, the quantities of waste plastic remain high, hence the need to promote alternative methods of plastic recycling. This study demonstrated the significance of plastic waste recycling as a means of combating greenhouse gas emissions from the energy sector by using waste plastic as the source of an alternative high-value fuel. The study showed that plastic pyrolysis is a clean, efficient, and technically and commercially viable route for plastic waste disposal which, besides alleviating the environmental crisis of solid waste disposal, generates fuel revenue and savings: the alternative fuel reduces the demand for primary fossil fuels and, hence, the related environmental costs and resource depletion. Pyrolysis is a particularly attractive recycling option as many authorities and countries globally call for bans on several plastic categories following the failure of conventional methods of plastic waste control to address plastic pollution. The process, however, still faces a number of limitations, particularly with respect to the collection, separation, sorting, and cleaning operations and the high cost of power and transportation.
Pyrolysis generally produces fewer toxic products as long as the process is well designed and the process conditions are controlled appropriately. However, some plastics, such as polyvinyl chloride, yield toxic byproducts, which makes proper feedstock selection essential. Another challenge is that the liquid fuel from plastic pyrolysis is not a perfect fit for many engineered applications, mainly because of its relatively high sulphur content; this can be addressed through further treatment and blending with commercial-grade oil products. Besides the liquid pyrolysis oil, a hydrocarbon-rich gas is produced with a heating value of 25-45 MJ/kg, which makes it ideal for process energy recovery: the pyrolysis gas can be fed back to the process to supply process heat, substantially reducing the reliance on external heating sources. A leading challenge in plastic recycling is the recycling of mixed plastic wastes. The main advantage here is the ability to recycle a significant proportion of the waste stream by extending post-consumer collection of plastics to a wider variety of materials and pack types. More opportunities for recycling are being created through advances in the technologies and systems used for the collection, sorting, and processing of used plastics, aided by the combined effort of the public, environmental authorities, industry, and governments to ensure that the majority of plastic wastes are recycled.
Recommendations
This study recommends investigation into the best waste-to-energy conversion process for cost-effective waste plastic recycling, and into the performance of various biodiesels and waste plastic pyrolysis oils, to increase the green value of plastic pyrolysis oil production. Additionally, the study recommends plastic waste liquefaction as another recycling pathway for the enhanced application of plastic wastes in the production of products like naphtha, by converting polymers into liquid chemical products. Research into the economic and technical feasibility of bioplastics is recommended as a strategy for improving plastic recycling.
(iii) Cleaning to remove impurities via processes like washing. (iv) Shredding and resizing of the cleaned plastic waste. (v) Identification and separation of the wastes. (vi) Compounding of the plastic wastes [33].
Figure 6: Internal view of the pyrolysis reactor.
Table 5: Comparison of reactor designs.
Flat-end batch pyrolysis reactor: withstands 1-2 bars of pressure; leaks faster, shorter reactor life; cheaper.
Dish-end batch pyrolysis reactor: withstands close to 7 bars; longer reactor life; more expensive.
Table 6: Thermal vs. catalytic depolymerization.
Thermal depolymerization: higher yield of gases and char; generally requires more energy; the gases produced are channeled back to the reactor to boost energy.
Catalytic depolymerization: lower yield of gases and char; less energy required; the catalysts used are very expensive, accounting for about 65% of the production cost.
"Engineering"
] |
The ionic DTI model (iDTI) of dynamic diffusion tensor imaging (dDTI)
Graphical abstract
Measurements of water molecule diffusion along fiber tracts in the CNS by diffusion tensor imaging (DTI) provide a static map of neural connections between brain centers, but do not capture the electrical activity along the axons of these fiber tracts. Here, a modification of the DTI method is presented to enable the mapping of active fibers. It is termed dynamic diffusion tensor imaging (dDTI) and is based on a hypothesized "anisotropy reduction due to axonal excitation" ("AREX"). The potential changes in water mobility accompanying the movement of ions during the propagation of action potentials along axonal tracts are taken into account. Specifically, the proposed model, termed the "ionic DTI model", was formulated as follows.
First, based on theoretical calculations, we calculated the molecular water flow accompanying the ionic flow perpendicular to the principal axis of fiber tracts produced by electrical conduction along excited myelinated and non-myelinated axons. Based on the changes in molecular water flow, we estimated the signal changes as well as the changes in fractional anisotropy of axonal tracts while performing a functional task. The variation of fractional anisotropy in axonal tracts could allow mapping the active fiber tracts during a functional task. Although technological advances are necessary to enable the robust and routine measurement of this electrical activity-dependent movement of water molecules perpendicular to axons, the proposed model of dDTI defines the vectorial parameters that will need to be measured to bring this much-needed technique to fruition.
Method details
Rationale of DTI modification: the ionic DTI model
Herein, we put forward the hypothesis that task-dependent neural stimulation modifies the macromolecular and ionic environments of axonal water molecules, resulting in increased water movement across the membrane through open ion channels (Na+, K+), and we formulate a model elucidating this mechanism. As this postulated increase in water mobility would be anisotropic and prevail in the plane perpendicular to the major axis of the axon, we further hypothesize that a diffusion tensor imaging (DTI) signal decrease should be observed with neuronal activation, such as during the performance of a functional task. We named this hypothesis "anisotropy reduction due to axonal excitation" or AREX. Schematically, the quantitative model that reflects AREX and represents these processes within a test region of interest (ROI) is illustrated in Figure 1. We called this model of dynamic DTI (dDTI) the "ionic DTI model" or "iDTI", given that water exchange through open ion channels is the physiological basis for the hypothesized diffusion anisotropy changes during axonal excitation. Unlike functional MRI (fMRI), which indirectly models the connections between disparate cortical/subcortical centers, dDTI aims to measure changes in water mobility associated with water movement through open cation channels, thereby providing direct measurements of electrical activity in these connections. Thus, we will be able to characterize the precise path of axonal conduction between the centers during neuronal activation, as in a dynamic task.
The biological and biophysical model
iDTI is formulated in the theoretical context of a motor task in humans. According to the model proposed herein, a reduction in FA should be observed during the performance of this task due to the increased mobility of water flowing through open ion channels perpendicular to the direction of the axons conducting the action potentials. This trans-axonal membrane exchange of water takes place over the whole membrane surface of unmyelinated axons and at discrete sites along the main fiber axis of myelinated axons, i.e., at the nodes of Ranvier. At the same time, glial cells in the extra-axonal space would contribute to maintaining the ion/water equilibrium needed for axonal conduction. Overall, the water movement would be incoherent at the macroscopic level (i.e., at the dimensions of a typical voxel in an MRI experiment) and would be indistinguishable from "true" diffusion. However, only the exchange between intra- and extra-axonal compartments would contribute to the reduction of FA since, to a first approximation, the possible contribution of glial cells can be considered isotropic. Based on these and other theoretical assumptions, we modeled the physiological mechanisms underlying axonal and fiber tract excitation, and the diffusion tensor imaging processes required to detect a DTI signal. Due to the ubiquity of water, the model applies both to myelinated fibers, at the nodes of Ranvier, and to unmyelinated fibers, as well as to the glial cells (oligodendrocytes and astrocytes) present in the extra-axonal space. Herein, we present this physiological model as the conceptual basis for dynamic DTI, whose implementation will require technological advances in MRI before non-invasive direct measurements of electrical activity between cortical/subcortical centers can be obtained, with the goal of determining differences between active and inactive as well as normal and diseased states.
Computation of the ionic DTI model
The model of ion and water displacement during neuronal excitation in myelinated and unmyelinated axons is grounded on basic anatomical, physiological and physical-chemical principles, logical assumptions and accepted values for parameters derived from the literature. Then, we computed the fraction of water through the ion channels (with the greatest streaming potentials) during axonal excitation for the period equivalent to that used for the collection of diffusion imaging data.
During an action potential, the largest flux of ions is that of sodium. Sodium fluxes in model sodium channels are accompanied by the streaming of two to three water molecules [1]. Thus, in order to assess the vectorial exchange of water along an excited fiber tract for the duration of a neuroimaging experimental observation, we modeled this exchange for saltatory conduction (which occurs in myelinated axons at the nodes of Ranvier) and non-saltatory conduction (which concerns unmyelinated axons). Glial cells are considered a buffer of electric charge/mass in the extra-axonal space that maintains the necessary conditions for neural conduction; however, this exchange is assumed to be isotropic and does not contribute to the overall changes in anisotropy. This mixed model was used to compute the total water exchange across the Na+ and K+ channels due to neural conduction in a fiber tract, considering each contribution separately, as follows.
In myelinated axons at the nodes of Ranvier, the model takes into account: (1) the number of myelinated axons in a given excited fiber bundle; (2) the number of nodes of Ranvier per mm of axon; (3) the surface area of a node of Ranvier, and (4) the density of sodium channels at these nodes.
Similarly, in unmyelinated axons, the model takes into account: (1) the number of unmyelinated axons in a given excited fiber bundle; (2) the surface area of an unmyelinated axon, and (3) the density of sodium channels at these surface areas.
In addition, we assumed for both myelinated and unmyelinated axons that: (1) the maximum flux of sodium ions per unit time has an equivalent accompaniment of water molecules to that measured under conditions of osmotic stress; (2) the overwhelming inward flow of sodium, which is accompanied by 2-3 water molecules per sodium ion [2,3], is canceled out vectorially, in terms of electrical charge, by the flow of potassium ions [4-8], which is accompanied by 0.9 to 1.4 water molecules per ion [9] in the opposite direction; (3) the total inward flow of water molecules is compensated by an equal outward flow of water; and (4) transient changes in charge and mass occurring between the intra- and extra-axonal spaces and glial cells re-equilibrate relatively fast compared with the time scale of the MR experiment. This re-equilibration could involve a short-lived and topographically discrete swelling of the axon at the nodes of Ranvier (in myelinated axons) or at re-polarized regions along its axis (in unmyelinated axons), and of glial cells, occurring within the observation time. Nevertheless, it should be taken into consideration that, to enable and maintain repetitive axonal conduction of action potentials, the ionic concentration in the axon environment must be tightly controlled [10].
Thus, the ion-related molecular water flow (F_w^MA) in a sample volume in myelinated axons (MA), which is defined as the isotropic volume having a cross-section equal to that of the axonal bundle of interest, was calculated according to the equation

\[ F_w^{MA} = 2\, N_e^{MA}\, N_{NR}\, S_{NR}\, \rho_{cNa^+}^{NR}\, N_w^{Na^+}\, \frac{\partial N_{Na^+}}{\partial t} \qquad (1) \]

where N_e^MA represents the number of excited myelinated axons in the bundle, N_NR is the number of nodes of Ranvier in these axons per sample volume, S_NR is the surface area of a node of Ranvier, ρ_cNa+^NR is the density of sodium ion channels on the membrane in a node of Ranvier, N_w^Na+ is the number of water molecules associated with a sodium ion, ∂N_Na+/∂t is the sodium-ion flux per channel, and t represents time. The factor of 2 derives from the assumption that the number of water molecules accompanying the inward ionic flow (related primarily to Na+) is equal to the number of water molecules accompanying the outward ionic flow (related primarily to K+). Given a balance of matter equal to zero, i.e., the inward flow of water equals the outward flow of water, the summation of these two opposite flows is twice the size of the flow in each direction. Eq. (1) provides the flow of water expressed as number of molecules per unit time, which, multiplied by the time the system is being observed in a diffusion imaging experiment, i.e., the diffusion time, gives the amount of water molecules associated with an activation event within the defined sample volume.

Figure 1: Illustration of the ionic DTI (iDTI) model that reflects the "anisotropy reduction due to axonal excitation" or AREX hypothesis: model of the physiological mechanism underlying axonal and fiber tract excitation and its detectability by DTI. Schematically, it is represented how task-dependent neural stimulation would modify the macromolecular and ionic environments of water molecules at the axonal membrane, resulting in increased water flow across the membrane. This anisotropic water flow prevails in the plane perpendicular to the principal axis of the axon and fiber tract, and is reflected in an FA reduction and DTI signal attenuation during neuronal activation, such as a functional task. Thus, dynamic DTI (dDTI) provides direct functional measurements of excited axons and excited fiber tracts. Abbreviations: DTI, diffusion tensor imaging; FA, fractional anisotropy; λ1, λ2, λ3, eigenvalues of the diffusion tensor in the directions parallel (λ1) and perpendicular (λ2, λ3) to the main fiber axis; ROI, region of interest.
Similarly, the molecular water flow per sample volume, F_w^nMA, across the axonal membrane during neural conduction in unmyelinated axons (nMA) is computed as

\[ F_w^{nMA} = 2\, N_e^{nMA}\, S_{nMA}\, \rho_{cNa^+}^{nMA}\, N_w^{Na^+}\, \frac{\partial N_{Na^+}}{\partial t} \qquad (2) \]

where N_e^nMA represents the number of excited unmyelinated axons in the bundle, S_nMA is the membrane area of an unmyelinated axon in the sample volume, ρ_cNa+^nMA is the density of sodium ion channels on the membrane of unmyelinated axons, and N_w^Na+ and t are as indicated earlier.
The total molecular water flow (F_w) per sample volume within a fiber tract is obtained by summing the water flow of the myelinated axons, Eq. (1), and of the unmyelinated axons, Eq. (2):

\[ F_w = F_w^{MA} + F_w^{nMA} \qquad (3) \]
Determination of FA variation
We modeled the water displacements associated with the ion flow during axonal excitation for the hand representation in the corticospinal tract in a diffusion MRI experiment. Our estimates were based on published data, as shown in detail in Table 1. Assuming there are about 10^6 axons present in the corticospinal tract (CST) with a mean diameter of 3 μm [11,12], the dimensions of an isotropic sample volume rendering a cross-section that contains the whole tract would be 3.5 mm × 3.5 mm × 3.5 mm, where a distribution of axon diameters [13] is considered to be packed as a regular array of cylinders in a square lattice with an inter-axonal space of 17%. Assuming 70% of the CST axons are myelinated (~700,000) and 30% are unmyelinated (~300,000) [12-14], two distinct computations should be done, one for each type of axonal fiber. For myelinated axons, Eq. (1) is used. Assuming an internodal distance proportional to the axon diameter, d_a (~100 × d_a) [15], the average number of nodes of Ranvier per sample volume would be 1.28 × 10^7. To estimate the number of sodium channels per node of Ranvier, a node surface area is calculated for each axon diameter (assuming a node length of 2 μm) [16], with an average channel density of 10,000 per μm^2 [16]. Therefore, the total number of nodal sodium channels per sample volume would be 1.61 × 10^12. If the number of water molecules that accompany a sodium ion is 2.5 and the ionic flow is 8.8 × 10^3 sodium ions per millisecond [16], the resulting in-flow in the sample volume is about 3.53 × 10^16 water molecules per ms, or 1.06 × 10^-6 g of water per ms.
For unmyelinated axons, Eq. (2) is used. To estimate the number of sodium channels per unmyelinated axon, a mean membrane area of 33,000 μm^2 per axon is considered (mean axon diameter of 3 μm [17] and axon length of 3.5 mm, the voxel length), with an average channel density of 200 per μm^2 [17]. Thus, the total number of sodium channels per sample volume would be 1.98 × 10^12. If, as indicated above, the number of water molecules that accompany a sodium ion is 2.5 and the ionic flow is 8.8 × 10^3 sodium ions per millisecond [16], the resulting in-flow in the sample volume is about 4.35 × 10^16 water molecules per ms, or 1.30 × 10^-6 g of water per ms. Therefore, the total in-flow of water molecules per ms in the sample volume is determined by adding the contributions of the myelinated and unmyelinated axons, Eq. (3), and would be equal to 7.89 × 10^16 water molecules, or 2.36 × 10^-6 g of water, per ms. This represents the amount of water associated with the Na+ inward flow, but an equal amount of water should leave the axon (outward flow) associated with the K+ out-flow to prevent excessive axon swelling. Consequently, the total flow of water molecules per ms, F_w, is 2 × 7.89 × 10^16 molecules, or 2 × 2.36 × 10^-6 g of water, per ms.

Table 1 notes: (a) The total number of axons considered is about 10^6, following [12-14]; the fractions corresponding to the listed diameters are given in parentheses. (b) The length of a node of Ranvier is taken as 2 μm [16]. (c) The mean membrane area per axon for the volume of interest (voxel of 3.5 mm in length) is approximately 33,000 μm^2. (d) The density of Na+ channels per μm^2 in a node of Ranvier of a myelinated axon is assumed to be 10,000 [16]; in non-myelinated axons, the surface density of Na+ channels per μm^2 is taken as 200 [17].
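The chain of estimates above can be verified numerically. The short Python sketch below reproduces the quoted channel counts and water flows; Avogadro's number and the molar mass of water are the only inputs not taken from the text.

```python
# Numerical check of the water-flow estimates above (Eqs. (1)-(3)), using the
# literature values stated in the text.

N_A = 6.022e23           # Avogadro's number, molecules per mole
M_WATER = 18.0           # molar mass of water, g per mole
WATER_PER_NA = 2.5       # water molecules accompanying each Na+ ion
ION_FLUX = 8.8e3         # Na+ ions per channel per ms

# Myelinated axons: total Na+ channels at nodes of Ranvier per sample volume
channels_ma = 1.61e12
inflow_ma = channels_ma * ION_FLUX * WATER_PER_NA      # molecules per ms
print(inflow_ma, inflow_ma * M_WATER / N_A)            # ~3.54e16, ~1.06e-6 g/ms

# Unmyelinated axons: 300,000 axons x 33,000 um^2 x 200 channels/um^2
channels_nma = 3.0e5 * 3.3e4 * 200                     # ~1.98e12 channels
inflow_nma = channels_nma * ION_FLUX * WATER_PER_NA
print(inflow_nma, inflow_nma * M_WATER / N_A)          # ~4.36e16, ~1.30e-6 g/ms

# Total flow F_w, doubled because the inward (Na+-linked) and outward
# (K+-linked) water flows are assumed equal in magnitude
F_w_molecules = 2 * (inflow_ma + inflow_nma)
F_w_grams = F_w_molecules * M_WATER / N_A
print(F_w_molecules, F_w_grams)                        # ~1.58e17, ~4.72e-6 g/ms
```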
In order to estimate the change that could be expected in the D tensor due to neuronal activation, several assumptions were made. As the proposed measurements of the D tensor are performed with no a priori orientation specified, none of these assumptions compromises the generality of the model. First, we assumed that the main fiber axis is aligned with the z-axis of the laboratory reference frame defined by the orthogonal set of gradient coils XYZ, and that the imaging and crusher gradients have a negligible effect. Hence, the diffusion coefficients along the xyz coordinates correspond to the eigenvalues of the D tensor, λ1, λ2 and λ3, with λ1 > λ2 ≥ λ3. Second, as a first approximation we hypothesized that λ1 ≡ D_∥ remains practically unchanged between the inactive and active states, D_∥ = D*_∥ (the active state is denoted by an asterisk). The rationale for this assumption is that impulse conduction in nerve fibers occurs through an increase in ionic and water transport across the axonal membranes, at re-polarized regions in unmyelinated axons and at the nodes of Ranvier in myelinated ones. Third, we assumed that the diffusion coefficients in the plane normal to the main fiber axis, λ2 and λ3 ≡ D_⊥, are equal and that the magnitude of the change between active and inactive states, D*_⊥ − D_⊥, is the same for λ2 and λ3. The macroscopic average (apparent) diffusion coefficient in the inactive state can then be expressed as

\[ D_{app} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3} = \frac{D_\parallel + 2 D_\perp}{3} \qquad (4) \]

and in the active state as

\[ D^*_{app} = \frac{D_\parallel + 2 D^*_\perp}{3} \qquad (5) \]

Due to their low permeability, the diffusion of water in white matter is restricted primarily by the axonal membranes [18]. Based on the model proposed by Szafer et al. [19] to describe the diffusion of water in tissues, Ford and Hackney [13] showed in a spinal cord white matter model that at long diffusion times, Δ ≥ 20 ms, the main factors contributing to the apparent diffusion coefficient are the permeability of the membranes and the inter-axonal space. Assuming that the inter-/intra-axonal space fractions do not change significantly during axonal activation, and taking into account that the diffusion time in the measurements considered here exceeds the 20 ms indicated above, we hypothesize that, because of the large change in axonal membrane permeability to Na+ during nerve impulse conduction [20], the change in D_app from the inactive to the active state would be mainly due to the fraction of fast-moving (activated) water, f*_w, associated with the ion flux occurring between the inter- and intra-axonal spaces in a voxel. This can be expressed as

\[ D^*_\perp = (1 - f^*_w)\, D_\perp + f^*_w\, D'_\perp \qquad (6) \]

where D'_⊥ represents the diffusion coefficient associated with the fraction of fast-moving water. Assuming, as stated earlier, that D*_∥ = D_∥, and considering f*_w ≪ 1, from Eq. (5) it follows that

\[ D^*_{app} \approx D_{app} + \tfrac{2}{3}\, f^*_w\, (D'_\perp - D_\perp) \qquad (7) \]

This model could be expanded by taking into consideration a few characteristics of tissue microstructure, as described in a recent review by Nilsson et al. [21], provided the hardware available for clinical diffusion MRI allows probing the influence of specific tissue features. Also, as indicated later, the blood-oxygen-level-dependent (BOLD) effect on these measurements must be taken into account, or a strategy designed to minimize it. For the simplified model described here, if D_∥ = 1 × 10^-9 m^2 s^-1 and D_∥/D_⊥ = 5 [13], the magnitude of the change in D_app caused by nerve impulse transmission can be derived by estimating f*_w and D'_⊥.
To calculate f*_w in the experiment described here, we take into account F_w and the firing rate of the motor neuron, 15 Hz on average [22-24], corresponding to a mean inter-spike interval of 67 ms, sampled every TR of 2.5 s. Since, as calculated above, F_w is 4.71 × 10^-6 g ms^-1 in the sample volume, the amount of fast-moving water for a diffusion time of 35 ms would be 35 × 4.71 × 10^-6 g, or 1.65 × 10^-4 g. Assuming a white matter composition of about 90% water [25] and a tissue density of ~1 g cm^-3, the weight of water in the 42.87 mm^3 sample volume would be 0.039 g, and f*_w would be 4.28 × 10^-3. To estimate the diffusion coefficient of fast-moving water, D'_⊥, we consider that during depolarization/repolarization the inter-/intra-axonal compartments are in fast exchange at the nodal (myelinated axons) and membrane-activated (unmyelinated axons) regions, behaving as if no membrane were present. Thus, to a first approximation, the diffusion coefficient of activated water would be that of free water at 37 °C, 3 × 10^-9 m^2 s^-1 [26]. Then, from Eq. (6), D*_⊥ would be equal to 2.13 × 10^-10 m^2 s^-1 and, therefore, the change in D_⊥ from the inactive to the active state would be 6.4%. Since we assumed no change in D_∥ during axonal impulse conduction, from Eq. (5), D*_app = 4.75 × 10^-10 m^2 s^-1. This represents a 1.8% increase in the observed average diffusion coefficient in the active state with respect to the inactive state.
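A compact numerical cross-check of these estimates, using only the values stated above:

```python
# Check of the activated-water fraction and diffusion-coefficient estimates.
# Inputs follow the stated assumptions: 90% water content, ~1 g/cm^3 tissue
# density, 35 ms diffusion time, and F_w from the previous calculation.

F_w = 4.71e-6                               # g of exchanged water per ms
t_diff = 35.0                               # diffusion time, ms
fast_water = F_w * t_diff                   # ~1.65e-4 g

voxel_volume_mm3 = 3.5 ** 3                 # 42.87 mm^3
water_mass = 0.90 * voxel_volume_mm3 * 1e-3 # ~0.039 g (1 mm^3 ~ 1e-3 g)
f_w = fast_water / water_mass               # ~4.3e-3 (text: 4.28e-3)

D_par = 1e-9                                # m^2/s, parallel coefficient
D_perp = D_par / 5                          # 2e-10 m^2/s
D_fast = 3e-9                               # free water at 37 C

# Eq. (6): perpendicular coefficient in the active state
D_perp_active = (1 - f_w) * D_perp + f_w * D_fast  # ~2.12e-10 (text: 2.13e-10)
D_app = (D_par + 2 * D_perp) / 3                   # ~4.67e-10
D_app_active = (D_par + 2 * D_perp_active) / 3     # ~4.75e-10
print(f_w, D_perp_active)
print(100 * (D_app_active / D_app - 1))  # ~1.7% with these roundings (text: 1.8%)
```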
Under the experimental conditions outlined herein, and to a first approximation, white matter in the inactive state behaves as a two-compartment model in slow exchange; in other words, the diffusion coefficients in the directions parallel and perpendicular to the main fiber axis result from contributions of the intra- and inter-axonal compartments. Thus, to estimate the effect of the predicted change in D_⊥ on the observed signal intensity, the following expression can be used, provided that D_⊥ represents the average (weighted inter- and intra-axonal contributions) diffusion coefficient:

\[ \frac{A(g_\perp)}{A(0)} = e^{-b D_\perp} \]

where A(g_⊥) and A(0) are the amplitudes of the echo in the presence of a gradient pulse of amplitude g perpendicular to the main fiber axis and of zero amplitude, respectively, and b = (γgδ)^2 (Δ − δ/3), where γ is the gyromagnetic ratio of the nucleus being observed, δ represents the duration of the gradient pulse and Δ is the time between the rising edges of the gradient pulses. A change in D_⊥ from 2.00 × 10^-10 to 2.13 × 10^-10 m^2 s^-1 would cause a drop in the amplitude of the echo, A(g_⊥), of about 0.77% for b values of about 600 s mm^-2. This would be just above the detection limit for imaging data with an SNR of 400. These limiting conditions for the detection of an MR signal change associated with a nerve impulse during the performance of a given task were reached by considering a total number of Na+ channels involved of 3.58 × 10^12 for a sample volume of 42.87 mm^3, or 8.35 × 10^10 Na+ channels/mm^3. It is also possible to estimate the change in fractional anisotropy using the standard expression

\[ FA = \sqrt{\frac{3}{2}}\; \frac{\sqrt{(\lambda_1 - \bar{\lambda})^2 + (\lambda_2 - \bar{\lambda})^2 + (\lambda_3 - \bar{\lambda})^2}}{\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}, \qquad \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3} \]

together with the corresponding values estimated above for the inactive (FA) and active (FA*) states. Thus, FA = 0.770 and FA* = 0.754, which represents a 2.1% reduction in the value of FA due to nerve impulse conduction.
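The echo-attenuation and FA figures can likewise be reproduced; the FA function below is the standard expression for a tensor with eigenvalues λ1 ≥ λ2 = λ3.

```python
import math

# Check of the predicted echo attenuation and FA change, using the
# eigenvalues estimated in the text.

def fa(l1: float, l2: float, l3: float) -> float:
    """Standard fractional anisotropy of a diffusion tensor."""
    mean = (l1 + l2 + l3) / 3
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

b = 600.0                         # s/mm^2
dD = (2.13e-10 - 2.00e-10) * 1e6  # m^2/s -> mm^2/s
print(1 - math.exp(-b * dD))      # ~0.0078: the ~0.77% echo-amplitude drop

print(fa(1e-9, 2.00e-10, 2.00e-10))  # inactive state: ~0.770
print(fa(1e-9, 2.13e-10, 2.13e-10))  # active state:  ~0.754
```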
Additional information
Blood-oxygen-level-dependent (BOLD) fMRI has allowed the characterization of neural activity in the brain systems performing motor, sensory, cognitive, and affective operations [27-30]. However, the propagation of neural activity between cortical areas along axonal pathways is not captured by fMRI measurements. Understanding this propagation could provide important insights into the integration of neural activity into systems. Likewise, it would enable the diagnosis and monitoring of potential therapeutic interventions aimed at restoring brain functions affected by changes in axonal conduction. In this context, the analysis of fMRI data using network or graph theory is receiving considerable attention [31-33]. These studies aim at determining the connectivity between active nodes in the brain while performing a functional task, but they may not be mapping the actual anatomic connectivity.
MRI has shown that the anisotropy of water movement in tissues is associated with the presence of arrays of intra- and extra-cellular structures (in white matter, mainly the axonal membranes and myelin sheaths) restricting and hindering the random movement of water [18]. These observations have been exploited to develop MR methods that map the distribution of white matter fiber tracts in the central nervous system (CNS), known as diffusion tensor MRI (DTI) [34-39] or MR tractography [40-42]. Whereas DTI provides anatomical information about the fiber tracts of the CNS, functional deficits can only be inferred from DTI data; the existing technique does not measure electrical conduction deficits in these nerve fibers. As described above, the generation of action potentials in activated neurons involves significant ion and water transport across axonal membranes. MRI measurements of these movements have the potential to provide information about nerve conduction along these axons.
Over the past two decades there has been debate and controversy as to whether a diffusion imaging method will permit the direct imaging of action potentials in bundles associated with the performance of a functional task. Unfortunately, the published evidence does not provide a biophysical mechanism for the observed changes in DTI signal associated with the performance of a task, nor is the evidence incontrovertible.
Small changes in the apparent diffusion coefficient of water, D_app, associated with the activation of neurons in the visual cortex have already been observed [43]. This increase in the value of D_app has been attributed to the swelling of neuronal bodies during membrane repolarization caused by ion transport between the extra- and intra-cellular spaces [43], or to a vascular contribution during hypercapnia [44].
Early imaging investigations failed to detect any change in the anisotropic diffusion of water in nerves using in vitro preparations. Using an excised frog spinal cord and sciatic nerve, Gulani et al. [45] found no significant difference in the diffusion coefficient, in the directions perpendicular or parallel to the nerve, between the stimulated and inactive states. They also suggested that positive results might be a misinterpretation of an underlying vascular BOLD effect as blood flow and volume increase. On the other hand, their outcome could be due to several factors. First, the level of ATP required to sustain in vivo-like electrical activity would seem to preclude a reproducible measurement over the duration of the experiments; although the authors provided some electrophysiological data, no direct correlations between electrophysiological and diffusion measurements were shown in the paper. Second, the diffusion time used (11.5 ms) was very short compared to what is used in clinical scanners (50-70 ms). Third, the temperature of 13 degrees Celsius used in their experiment is very low compared to the 37 degrees Celsius of routine in vivo conditions. The last two factors considerably reduce the sensitivity to anisotropy and the observable difference in ADC (by a factor of 4 or 5). Nevertheless, although this experiment reported negative results, it was an important step toward addressing this question in some detail. Future studies aiming to demonstrate that a drop in the diffusion-anisotropy signal is due to a diffusion effect need to ensure that this drop is decoupled from a possible BOLD effect [46].
Another paper, by Anderson et al. [47], used isolated optic nerves to claim that there were no large changes in anisotropic diffusion as a function of depolarization. We believe the data in that paper do not adequately address the predictions of this model in vivo. First, Anderson et al. used isolated optic nerves, which have the same limitations regarding ATP levels and temperature as discussed above. The authors did not perform any electrophysiological experiments to demonstrate that action potentials could be sustained or generated in their preparations. Instead, they used perturbations that are well known to affect the osmolarity of glial cells. Therefore, the negative results they obtained using a "depolarizing solution" to change FA were never verified as actually producing a depolarization. Their other perturbations examined the effects of osmotic changes on D_∥ and D_⊥, which, not surprisingly, were found to have a large effect on D values; the effects they observed were largely confined to the glial cells. These artificial preparations are far removed from what one might observe in vivo; indeed, the magnitude of the osmotic perturbations used in the paper would almost certainly not be found in vivo except in conditions like stroke or massive ATP depletion. Thus, it remains to be determined whether changes in FA occur with stimulation in vivo.
To determine whether MR signal measures may or may not reflect electrical conduction along nerve tracts, a physiological model that takes into account the ion and water movements accompanying action potentials is critical in setting forth the parameters and goals that would have to be met to ensure that the MR signal changes observed during a functional task arise from electrical conduction along nerve tracts. Mandl et al. [48,49] have reported fMRI experiments in which they found an increased FA value during and after the performance of a task, and proposed that swelling of the glial cells may be the mechanism underlying the observed increase in FA. This observation appears to contradict the anticipated decrease in FA produced by cation and water movements perpendicular to axonal membranes conducting action potentials.
Also, over the last decade, MR methods have been developed in attempts to image neuronal currents directly during action potentials, by measuring possible perturbations to the MR signal (magnitude and phase) caused by these currents. Presently, the results obtained in humans by these methods, termed neuronal current MRI (ncMRI) (also, Lorentz effect imaging and encephalographic MRI, eMRI), remain controversial [50,51].
As previously mentioned, current technology does not provide a definitive solution to the problem of imaging currents along the axons of firing neurons using MRI. The model proposed herein captures an important mechanistic aspect of the action potential and neural conduction that has not been considered so far in the existing literature. The impact of BOLD seems to be the major confounding factor in this research, and it may well be that the increase in FA observed by Mandl and colleagues reflects changes in BOLD signal, as pointed out recently by Autio and Roberts [52]. Our calculations predict a decrease in FA with increased transport of water through Na+/K+ channels as a function of action potentials. We recognize the need for more sophisticated modeling, especially taking into consideration the time dependence of the diffusion equations; we suggest this as the subject of further studies. Future studies attempting to perform the dynamic or functional diffusion imaging experiment should control for the impact of BOLD signal changes and the effect of hypercapnia on FA measures. Furthermore, technological advances in MRI are needed to achieve robust and reliable measurement of axonal activity during the performance of a functional task. Performing the measurements at high magnetic fields will improve the signal-to-noise ratio and facilitate the detection of small changes with higher sensitivity. Also, state-of-the-art Connectome diffusion imaging technology [53], using field gradients two or three times stronger than those routinely used, will allow shorter echo times and, hence, higher signal-to-noise ratios and lower BOLD contamination. Combined with optimized data acquisition strategies, i.e., the number of gradient orientations and averages, this newly emerging technology may be critical in fulfilling the goal of testing this model.
"Medicine",
"Physics"
] |
Introgression and rapid species turnover in sympatric damselflies
Background Studying contemporary hybridization increases our understanding of introgression, adaptation and, ultimately, speciation. The sister species Ischnura elegans and I. graellsii (Odonata: Coenagrionidae) are ecologically, morphologically and genetically similar and hybridize. Recently, I. elegans has colonized northern Spain, creating a broad sympatric region with I. graellsii. Here, we review the distribution of both species in Iberia and evaluate the degree of introgression of I. graellsii into I. elegans using six microsatellite markers (442 individuals from 26 populations) and five mitochondrial genes in sympatric and allopatric localities. Furthermore, we quantify the effect of hybridization on the frequencies of the genetically controlled colour polymorphism in females of both species. Results In a principal component analysis of the microsatellite data, the first two principal components summarised almost half (41%) of the total genetic variation. The first axis revealed a clear separation of I. graellsii and I. elegans populations, while the second axis separated the I. elegans populations. Admixture analyses showed extensive hybridization and introgression in I. elegans populations, consistent with I. elegans backcrosses and occasional F1-hybrids, suggesting that hybridization is ongoing. More specifically, approximately 58% of the 166 Spanish I. elegans individuals were assigned to the I. elegans backcross category, whereas not a single one of those individuals was assigned to the backcross with I. graellsii. The mitochondrial genes held little genetic variation, and the most common haplotype was shared by the two species. Conclusions The results suggest rapid species turnover in sympatric regions in favour of I. elegans, corroborating previous findings that I. graellsii suffers a mating disadvantage in sympatry with I. elegans. Examination of morph frequency dynamics indicates that hybridization is likely to have important implications for the maintenance of multiple female morphs, in particular during the initial period of hybridization.
Background
Hybridization and introgression are increasingly recognized as important factors in the evolution of plants, animals [1,2] and prokaryotes [3], and can lead to the creation of novel genotypes and phenotypes. Thus, the study of contemporary hybridization between species and the extent of genomic introgression between them provides an excellent opportunity to examine evolutionary processes such as adaptation, gene flow and, ultimately, speciation [4][5][6]. Determining the degree of genetic exchange between species may be of particular interest when studying recently diverged species, since they typically show incomplete reproductive barriers.
Hybridization has an inherent spatial component, as the process requires direct contact between populations of the different species. For this reason, the spatial setting is a crucial determinant of hybridization, and, in turn, the specific conditions under which hybridization occurs can, sometimes, be inferred from the geographical distribution of hybrids. Studies of hybrid zones have indicated that natural hybridization is most likely to take place in intermediate habitats, which are often found at the ecological limits of the species' distributional ranges, and where both taxa are found in close proximity to each other [5]. When some of the interspecific matings lead to fertile first-generation (F 1 ) hybrids, there is a possibility that these will backcross with at least one of the parental genotypes, with introgression as a consequence. If the resulting backcrossed individuals subsequently mate with the most similar parental genotype, novel genes and gene complexes can be particularly rapidly introduced into the new genetic background [7]. In some cases, stable and long-lasting hybrid zones are formed as a consequence of spatial range overlap between two species [8][9][10]. However, in the vast majority of cases, one of the two species, or possibly even the new hybrid cross, becomes more successful and displaces one or both of the original taxa [11].
In odonates (damselflies and dragonflies), a high level of hybridization between species is a rare phenomenon [12,13]. So far, only three molecular studies have investigated hybridization between closely related odonate species, namely between Coenagrion puella and C. pulchellum [14], Mnais costalis and M. pruinosa [15,16], and Calopteryx splendens and C. virgo [17]. All these studies failed, however, to detect extensive hybridization between the species. There was no evidence of hybridization between Coenagrion puella and C. pulchellum in any of the populations examined [14], i.e. no hybrids were found. In addition, between Mnais costalis and M. pruinosa only two F1-hybrid females were found among 900 individuals [15,16], and between Calopteryx splendens and C. virgo only seven hybrids were detected among the 1600 individuals analysed [17]. Despite the lack of evidence from molecular studies, observational and experimental studies have found evidence for putative intrageneric hybrids in some additional species [18]. For example, two of the best documented cases of hybridization among odonates are between Ischnura gemina and I. denticollis [19], and I. elegans and I. graellsii [20,21].
The latter species pair consists of two closely related damselfly species that co-occur in southern Europe [18]. Specifically, I. graellsii is a widespread species on the Iberian Peninsula (see Figure 1), while I. elegans is a species with a more northern and easterly distribution, which has recently spread into new regions within the Iberian Peninsula [20,21]. For example, the first record of I. elegans in north-west Spain was made in 1984, whereas I. graellsii has been known from this area since 1917 [22]. In addition, I. elegans is the dominant species in the coastal lagoons of Galicia (north-west Spain), a region where it was a very rare species less than 30 years ago. Furthermore, I. elegans is still rapidly expanding in the area, and in several coastal populations dominated by I. graellsii, immigrant individuals of I. elegans are now starting to appear [21]. In a literature revision, Monetti et al. [20] found that at least six Spanish localities that held both species simultaneously before the 1980s had only I. elegans in 2002. Recent observations of I. elegans individuals in southern Spain (personal observation), where only I. graellsii populations had been detected until now, and in western Spain support that this species is gradually expanding its distribution to southern latitudes and western longitudes. Ischnura elegans has also expanded elsewhere; in the UK it has extended its northern range by approximately 168 km in the last few decades, more than double the average expansion distance of other odonates [23]. Likewise, Parmesan et al. [24] showed that 22 of 35 European butterfly species have shifted their ranges over the last century. Recent climate change has been suggested to drive changes in both the phenology [25] and the distribution [23] of odonates.
Ischnura elegans and I. graellsii are very similar ecologically and morphologically, but they can be unambiguously identified by the morphology of the prothorax and anal appendages and by the comparatively small body and short wings of I. graellsii [20]. Furthermore, genetic analyses have shown that I. elegans and I. graellsii are very similar both at allozymes [26] and at the mitochondrial Cytochrome b and Coenzyme II genes [0.2% genetic distance, 21]. Previous work has documented that the two species hybridize in the laboratory [21], and hybrids (i.e. morphologically intermediate individuals) have been detected in one sympatric locality in north-western Spain [20].
Wirtz [27] reviewed the factors promoting unidirectional or reciprocal hybridization and proposed a hypothesis based on sexual selection to explain unidirectional hybridization: hybridization is more likely between the female of the rare species and the male of the common species. However, when the rare species is also the bigger species, hybridization can be impeded by mechanical incompatibility [28], and the degree and direction of hybridization are difficult to predict. Previous findings from the field (one population) [20] and the laboratory [21] indicate that I. graellsii (the common and smaller species in Iberia) suffers a mating disadvantage in sympatry with I. elegans (the less abundant and larger species in Iberia). In particular, males of I. elegans readily mate with females of I. graellsii in the laboratory, but males of I. graellsii are mechanically incapable of mating with I. elegans females [21]. The resulting F1-hybrid males can only mate with I. graellsii females, whereas the F1-hybrid females are mechanically incapable of mating with males of I. graellsii but can instead mate with I. elegans males [20]. Furthermore, F1-hybrids (males and females) show a similar degree of reduced viability and fertility [Sánchez-Guillén RA, Wellenreuther M and Cordero-Rivera A: Strong asymmetry in the relative strengths of prezygotic and postzygotic barriers between two damselfly sister species, submitted]. These conditions can be hypothesized to result in a directional bias in hybridization in favour of I. elegans that could explain the recently documented range expansion of I. elegans into areas previously occupied only by I. graellsii [20,21]. This agrees with the colonization pattern of I. elegans in the area. Under the described scenario of introgressive hybridization, we hypothesize extensive introgression of I. graellsii genes into the Spanish I. elegans populations. Furthermore, introgression and interspecific competition are probably contributing to the fast range expansion of I. elegans and the contraction of I. graellsii in Iberia.
Interestingly, several Ischnura species, including I. elegans and I. graellsii, are characterized by a conspicuous colour polymorphism that is limited to females [21,29-31]. Females exhibit three colour morphs: one androchrome morph and two gynochrome morphs (the green-brown infuscans and the orange-brown infuscans-obsoleta (I. elegans) or aurantiaca (I. graellsii)) [21,29]. The colour polymorphism is controlled by a simple genetic system consisting of one gene with three alleles in a dominance hierarchy [21]. How the colour polymorphism of Ischnura damselflies is maintained in space and time has been a much discussed subject. Indeed, hybridization was the first mechanism proposed for maintaining the colour polymorphism in this genus [32]. For example, hybridization was hypothesized to maintain contrasting androchrome frequencies in nearby populations of I. damula and I. demorsa, and of I. elegans and I. graellsii, respectively [21,32]. Under the aforementioned scenario of extensive introgressive hybridization of I. graellsii genes into the Spanish I. elegans populations, and the hypothesized role of hybridization in the temporal maintenance of contrasting androchrome frequencies, the female morph frequencies of I. elegans populations near I. graellsii should be more similar to I. graellsii frequencies, owing to the absorption of the typical morph frequencies, than those of I. elegans populations located far away from I. graellsii populations. In fact, Gosden & Svensson [33] proposed that the contrasting androchrome frequencies observed in the Spanish populations of I. elegans could be a by-product of hybridization in combination with a founder effect. The objectives of the present study were to examine the spatial distribution of I. graellsii and I. elegans in the Iberian Peninsula, to evaluate the extent of introgression of I. graellsii genes into the recently established I. elegans populations in Spain, and to understand the role of hybridization in the temporal maintenance of female colour morph frequencies in both species. Therefore, we conducted a detailed reconstruction of the distribution of the two species in Spain to quantify their spatial overlap. Furthermore, we examined the degree of introgression in nine I. elegans populations within the zone of overlap using both nuclear microsatellite markers [34] and mitochondrial genes [35]. Previous studies have shown that analyses based on markers differentiated between the parental species can be used to efficiently and accurately describe admixture proportions, i.e. the degree of introgression, in F1-hybrids and backcrosses, see [36-39]. Furthermore, studies have also shown the usefulness of mitochondrial DNA in the study of introgressive hybridization because of its maternal inheritance [40]. Specifically, with this method F1- and F2-hybrids may be assigned to a particular maternal species based on the mtDNA haplotype they carry.

Figure 1: Map showing the spatial distribution of I. elegans and I. graellsii in Europe and northern Africa: I. elegans (grey region) and I. graellsii (dark grey), and the overlapping distribution of the two species in Spain (grey with stars). We sampled nine populations (166 samples) dominated by individuals that we phenotypically classified as I. elegans in three areas of Spain: north-west (1. Laxe, 2. Louro, 3. Doniños), north and central (4. Arreo, 5. Alfaro, 6. Baldajo) and east (…).
However, a prerequisite is that the species do not share haplotypes, which may nonetheless happen because of hybridization and ancestral polymorphism [41]. Finally, in order to evaluate whether the previously observed uncharacteristic morph frequencies of I. elegans populations in north-western Spain, where frequencies differ broadly between populations [21], extend into the sympatric area (northern, central and eastern Spain), we combined morph frequency data from previous studies [42] with new data from this study, and discuss the proposed hypothesis about the role of hybridization in the temporal maintenance of the colour polymorphism.
Results
Spatial distribution of I. graellsii and I. elegans in the Iberian Peninsula
Using the available distribution data for I. elegans and I. graellsii in Spain, we constructed two geographic maps to show their overall distributional ranges (Figure 2A, Ischnura elegans; 2B, I. graellsii). The species overlap in northern and central Spain (from west to east). In particular, I. graellsii is found all over the Iberian Peninsula, while I. elegans is very rare in southern Spain (Figure 2).
Introgressive hybridization
All population groups (Spanish I. elegans populations, European I. elegans populations excluding Spain, and I. graellsii populations) exhibited a high degree of molecular diversity (see Table 1). Estimates of observed and expected heterozygosity were similar, ranging from 0.56 to 0.70 and from 0.61 to 0.76, respectively (Table 1). In the Spanish I. elegans populations, we detected a total of 87 alleles, 11 fewer than in the other European I. elegans populations. In contrast to I. elegans, we found somewhat fewer (66) alleles for I. graellsii, perhaps because the microsatellites were specifically developed for I. elegans, which could cause an ascertainment bias. Estimates of allelic richness were comparable between the Spanish and the other European I. elegans populations (6.54 and 6.03, respectively), and were considerably higher than in I. graellsii (3.41; Table 1).
Analyses of the overall genetic structure showed that I. elegans populations were significantly differentiated from one another in Europe outside Spain (FST = 0.031, P < 0.0001) as well as within Spain (FST = 0.049, P < 0.0001). Moreover, populations of I. graellsii were also significantly differentiated from one another (FST = 0.029, P < 0.0001).
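For readers unfamiliar with FST, the following sketch illustrates the underlying logic with a simple Nei-style GST = (HT − HS)/HT for one locus and two populations; the study's actual estimator (e.g. Weir and Cockerham's theta) differs in detail, and the allele frequencies here are invented for illustration.

```python
# Illustrative sketch of F_ST-style differentiation (Nei's G_ST for one locus,
# two populations). Allele frequencies below are hypothetical, not from the
# study's microsatellite data.

def expected_het(freqs):
    """Expected heterozygosity: 1 - sum of squared allele frequencies."""
    return 1.0 - sum(p * p for p in freqs)

pop1 = [0.70, 0.20, 0.10]   # hypothetical allele frequencies, population 1
pop2 = [0.40, 0.40, 0.20]   # hypothetical allele frequencies, population 2

H_S = (expected_het(pop1) + expected_het(pop2)) / 2        # mean within-pop
total = [(a + b) / 2 for a, b in zip(pop1, pop2)]          # pooled frequencies
H_T = expected_het(total)                                  # total heterozygosity
print((H_T - H_S) / H_T)    # ~0.06: the differentiation measure
```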
Four of 23 principal component axes accounted for a significant amount of genetic variation among samples, as indicated by a scree plot. The first axis contained 24% (FST = 0.024, P = 0.11), the second axis 17% (FST = 0.017, P = 0.08), the third axis 14% (FST = 0.014, P = 0.07) and the fourth axis 11% (FST = 0.011, P = 0.05) of the total variation (Figure 3). The first two principal components thus summarise almost half (41%) of the total variation present in all I. elegans and I. graellsii populations. The scores for the first principal component axis revealed a clear separation of I. graellsii and I. elegans populations, suggesting that major population differences were predominantly caused by species rather than by geographic area per se. Major population differences (nested within species) were revealed by the second PCA axis, where two of the Spanish I. elegans populations (Louro and Laxe; Tables 2 and 3) were situated in the same quadrant as all I. graellsii populations (Figure 3).
Analyses of the population structure in STRUCTURE supported the molecular differentiation between the two Ischnura species, although the ΔK-method suggested three clusters as the most likely population structure ( Figure 4). More specifically, the results showed that one of the genetic clusters highly corresponded to the I. graellsii group, while the other two genetic clusters were best represented by I. elegans genotypes (one mainly corresponding to northern, central and southern European populations, and one to populations in eastern Europe), with the Spanish populations showing an intermediate assignment to each of these three genetic clusters ( Figure 4).
Based on these results, we included all populations of I. graellsii and I. elegans in the STRUCTURE runs (K = 2) to analyse the individual admixture proportions of I. elegans individuals in Spain (Table 4). These results showed that one genetic cluster clearly corresponded to I. graellsii, while the other cluster corresponded to I. elegans (European populations outside Spain; Table 4; Figure 5). The majority of individuals of the European I. elegans outside Spain (95%) and of I. graellsii (88%) were assigned with a certainty of at least 90% to their respective clusters. However, only 27% of individuals of the Spanish I. elegans were strongly assigned to I. elegans (admixture proportions ≥ 90%), and one individual (from the sympatric population Alfaro) was assigned to I. graellsii (admixture proportion to I. elegans < 10%). The rest of the Spanish I. elegans individuals were intermediate between the two clusters (with a skew of admixture proportions towards I. elegans, see additional file 1), suggesting a significant degree of introgressed I. graellsii alleles (Table 4; Figure 5). The admixture proportions of the artificial hybrids and backcrosses ranged between 11 and 89% (Table 5; Figure 5). The F1 and F2 showed admixture proportions to I. elegans between 67 and 21%; the 1 GB (first I. graellsii backcross) between 67 and 11%; and the four I. elegans backcrosses (1-4 EB) between 89 and 21% (Table 5; Figure 5). Based on these results, we defined three conservative assignment groups: backcrosses with I. elegans, with admixture between 89 and 68%; backcrosses with I. graellsii, ranging between 20 and 11%; and a mixed group of F1, F2 and backcrosses, ranging between 67 and 21%. When the Spanish I. elegans individuals (N = 166) were allocated by their admixture proportions to one of these three groups, a total of 97 individuals (admixture proportions to I. elegans between 89 and 68%) were assigned to backcrosses with I. elegans (1-4 EB), whereas not a single individual was assigned to the backcross with I. graellsii (1 GB) (Table 5). Finally, the remaining Spanish I. elegans individuals (i.e. 23 individuals with admixture proportions to I. elegans between 67 and 21%), excluding the single individual from Alfaro that was assigned to I. graellsii, were assigned to the mixed-hybrid group, including F1, F2 and backcrosses with the parental species (Table 4).
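The assignment rule described above amounts to binning each individual by its admixture proportion to the I. elegans cluster. A minimal sketch using the thresholds calibrated with the artificial hybrids and backcrosses follows; the boundary handling between adjacent bins is our own simplification.

```python
# Sketch of the assignment rule described above: individuals are binned by
# their STRUCTURE admixture proportion to the I. elegans cluster, using the
# thresholds quoted in the text.

def assign(q_elegans: float) -> str:
    """q_elegans: admixture proportion to the I. elegans cluster (0..1)."""
    if q_elegans >= 0.90:
        return "I. elegans"
    if q_elegans <= 0.10:
        return "I. graellsii"
    if 0.68 <= q_elegans <= 0.89:
        return "backcross with I. elegans (1-4 EB)"
    if 0.11 <= q_elegans <= 0.20:
        return "backcross with I. graellsii (1 GB)"
    return "mixed group: F1, F2 or backcross (0.21-0.67)"

for q in (0.95, 0.75, 0.50, 0.15, 0.05):
    print(q, "->", assign(q))
```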
Low genetic variation and shared polymorphism at mitochondrial genes
The alignments for the Cytochrome C Oxidase I (COI) and II (COII), Cytochrome B (CYTB), 12S rRNA (12S) and NADH Dehydrogenase 1 (ND1) fragments included 591 bp, 673 bp, 457 bp, 370 bp and 591 bp, respectively (Table 6). All new sequences were deposited in GenBank (accession numbers HQ834794-HQ834810). The COI fragment showed no polymorphic sites, revealing a unique haplotype (H1) that was shared by the two species. COII showed one polymorphic site, revealing two haplotypes (haplotype diversity, H = 0.409 ± 0.133; nucleotide diversity, π = 0.00122). Haplotype H2, the most abundant haplotype, was shared by the species in both allopatric and sympatric regions, while haplotype H3 appeared only in the three samples of I. graellsii from one allopatric region (Morocco). The CYTB fragment showed two polymorphic sites, revealing three haplotypes (H = 0.163 ± 0.099; π = 0.00053). Each species showed one unique haplotype (H5 and H6), each in a single sample from one allopatric region (Greece and Morocco, respectively), while the rest of the samples of both species shared a common haplotype (H4). The 12S fragment showed three polymorphic sites, revealing four haplotypes (H = 0.087 ± 0.047; π = 0.00024). Ischnura elegans showed three unique haplotypes (H7, H8, H9), each of which appeared in a single sample (two in samples from allopatric regions and one in a sample from a sympatric region), while the rest of the samples of both species shared the common haplotype H10. The last fragment, ND1, showed one polymorphic site, revealing two haplotypes (H = 0.250 ± 0.180; π = 0.00042). The most abundant haplotype (H11) was shared by both species (from allopatric and sympatric regions), while the second haplotype (H12) appeared only in one sample of I. graellsii (from an allopatric population in Portugal).
Colour morph frequencies
All populations showed all three female morphs, with the exception of Saïdia in northern Africa, where the aurantiaca morph was missing (Table 7). In populations dominated by I. elegans from north-central, central and eastern Spain, androchrome, infuscans and infuscans-obsoleta frequencies were highly variable, as previously reported [21], and these populations also showed high levels of androchrome frequencies in our study.
Discussion
Hybridization and genomic introgression have repeatedly been suggested to be important elements in evolutionary processes such as the maintenance of genetic variation, adaptation and speciation [4][5][6]. Determining the degree of genetic exchange between recently diverged species is of particular interest in this sense, since such species typically show incomplete reproductive barriers. Moreover, the pattern and extent of hybridization and introgression can have important conservation implications, because introgression may lead to the replacement of one of the hybridizing taxa [10,43-45].
In the present study, we have revealed extensive hybridization and introgression in I. elegans populations in Iberia, where the species co-occurs sympatrically with its closely related sister species I. graellsii. Distribution maps for both species show an extended area of overlap in northern and central Spain. Ischnura graellsii is generally very abundant throughout the Iberian Peninsula, while I. elegans has a patchy distribution; for example, I. elegans is very rare in southern Spain. In the sympatric regions, I. elegans is also less abundant than I. graellsii (with the exception of the area around Valencia) and is absent from some provinces (Figure 2). Consequently, I. graellsii occurs allopatrically in southern Iberia (Figure 2), whereas I. elegans is exclusively found in areas that are also occupied by I. graellsii. Our admixture analyses in STRUCTURE revealed clear evidence of past and present hybridization and introgression between I. elegans and I. graellsii over a large geographic area in northern Spain. The degree of introgression in the populations is consistent with I. elegans backcrosses and occasional F1 hybrids. Thus, hybridization between these two damselfly species is a relatively recent and widespread phenomenon. Interestingly, the studied populations have been going through a recent species turnover and are now dominated by I. elegans individuals that appear to carry introgressed I. graellsii genetic material. Such dramatic demographic and genetic effects have not previously been documented in odonates, although they are known from other taxa. For instance, over the past century, the blue-winged warbler (Vermivora pinus) has rapidly replaced the golden-winged warbler (V. chrysoptera) over an extensive part of their hybrid zone in eastern North America. Marker-based analyses show asymmetric and rapid introgression from the blue-winged into the golden-winged warbler in some areas and bidirectional maternal gene flow in others [46,47]. Rapid introgression has also been detected in other taxa, e.g. in pocket gophers (Geomys bursarius major and G. knoxjonesi) [48].
Extent of hybridization and direction of introgression
In vertebrates, hybridization is particularly common in fish, where several hundred interspecific and intergeneric crosses have been reported, and in birds, with roughly 10% of all species known to have bred in nature with another species, e.g. [5,27,46-50]. In comparison, detailed genetic studies of hybridization and introgression, and thus knowledge about these phenomena, are lacking for most odonates [12,13]. As mentioned above, previous genetic studies failed to detect extensive hybridization between different pairs of odonate species [14][15][16][17]. Our admixture analysis in STRUCTURE showed that the I. elegans populations Louro and Laxe in north-western Spain had the highest degree of introgression of I. graellsii alleles. This is not surprising, given that north-western Spain is the region where hybridization between I. elegans and I. graellsii was most recently detected [20]. In 1990, I. elegans, I. graellsii and hybrids were found in Foz (north-western Spain), which is geographically close to Louro and Laxe. However, after only ten years, the population was dominated by individuals that were phenotypically classified as I. elegans, although morphologically intermediate and I. graellsii males were occasionally detected [21]. The population Laxe was visited for the first time in June 2001 (two visits) and only I. elegans were detected, although at low densities (between 0.2 and 1.3 captured males/minute). At a visit in June 2002, I. elegans were found at a similar density (0.5 males/minute). Five years later, in 2007, we revisited the population and the density had reached the highest value in the region: across seven visits between June and August, the density ranged between 3.3 and 14.7 males/minute, and neither I. graellsii nor putative hybrids were detected. Nevertheless, the admixture analyses showed that none of the 14 examined individuals at Laxe could be genetically assigned to pure I. elegans status: nine individuals were assigned to F1, F2, or backcrosses with I. elegans (admixture proportion between 21 and 67%), and five individuals were assigned to backcrosses with I. elegans (admixture proportion between 68 and 89%). In 1980, Louro was visited for the first time by Ocharan [22], and mainly I. graellsii individuals were observed. Torralba and Ocharan [51] re-examined Ocharan's samples from Louro (collected in 1980) and identified both species in the sample, with I. graellsii still the dominant species at that time. However, on our first visit in 2000 and on subsequent visits, I. elegans individuals completely dominated in numbers (a couple of I. graellsii individuals were detected among hundreds of I. elegans). Only one of the 15 Louro individuals molecularly examined in the present study was assigned pure I. elegans status, and the vast majority of the individuals (93%) showed an assignment proportion between 68 and 89%, suggesting that these individuals were backcrosses rather than F1 or F2 hybrids. These results are corroborated by the PCA analysis: both Laxe and Louro were placed within the I. graellsii quadrant, indicating a considerable degree of molecular similarity between these I. elegans populations and I. graellsii. Analyses of the remaining Spanish I. elegans populations showed that only 31% of the individuals were pure I. elegans, and that the remaining 69% showed admixture proportions expected for hybrids and backcrosses with I. elegans.
In the Alfaro population in north-central Spain, where both species co-occur in equal numbers, one individual classified as I. elegans had a very high proportion of I. graellsii alleles. This suggests either that this individual was misclassified, despite the fact that we only collected males to minimise misidentifications [20], or that it is a backcross that inherited a very high proportion of I. graellsii alleles at the few markers analysed.
Our results suggest that hybridization between I. elegans and I. graellsii is asymmetric and largely unidirectional. This corroborates previous findings in the field [20] and laboratory [21] showing that I. graellsii has a mating disadvantage in sympatry with I. elegans. Heterospecific matings, and matings between F1 hybrids and the parental species, rarely involve I. elegans females, but occur more frequently among I. graellsii females, I. elegans males and hybrids [Sánchez-Guillén RA, Wellenreuther M and Cordero-Rivera A: Strong asymmetry in the relative strengths of prezygotic and postzygotic barriers between two damselfly sister species, submitted]. The reason for the almost complete lack of hybridization between I. graellsii males and I. elegans females is that the males cannot grasp the female by her prothorax [20], a mechanical handicap that appears to be a very efficient prezygotic isolation mechanism. Previous work on plants [52,53] and animals [27] has suggested that unidirectional hybridization usually occurs between the females of the rare species and the males of a common species, but not vice versa. Wirtz [27] reviewed the factors promoting unidirectional or reciprocal hybridization and proposed a sexual selection hypothesis for unidirectional hybridization based on the fact that females normally invest more in offspring and are therefore more discriminating than males. When heterospecific males are less abundant than conspecific males, females rarely mate with heterospecific males; consequently, under such conditions, the rare species is usually the maternal parent of the hybrids. However, this is not the case in our study, where the initially more abundant species is I. graellsii. Ischnura elegans, on the other hand, appears to be the intruding species and hence initially the rare species, which has been expanding its range in Spain and is now displacing I. graellsii from some populations and regions. Thus, the direction of hybridization between these two Ischnura species is opposite to that expected under the rare-female hybridization hypothesis just outlined, e.g. [27], but follows the prediction proposed by Grant and Grant [28], namely that when the rare species is also the bigger species (in our study I. elegans), hybridization can be impeded by mechanical incompatibility. Laboratory tests have shown that I. elegans females and I. graellsii males are mechanically unable to form a tandem, which prevents over 93% of all matings, while only 13% of the matings between I. graellsii females and I. elegans males are mechanically prevented [Sánchez-Guillén RA, Wellenreuther M and Cordero-Rivera A: Strong asymmetry in the relative strengths of prezygotic and postzygotic barriers between two damselfly sister species, submitted].
The role of hybridization in the maintenance of colour polymorphisms
Johnson [33] proposed that the colour polymorphisms in odonates could be maintained by hybridization between closely related species, balanced by differential predation pressure. According to this hypothesis, androchrome females benefit from avoiding matings with heterospecific males, while gynochrome females are involved in heterospecific matings (usually sterile); the relative fitness of the colour morphs is balanced by a higher probability of predation on androchrome females. A recent study on I. elegans and I. graellsii [21] suggested a role of hybridization in the temporal maintenance of contrasting androchrome frequencies in nearby populations in north-western Spain. The relatively low androchrome frequencies in I. elegans populations located close to I. graellsii populations, and the relatively high androchrome frequencies in I. graellsii populations adjacent to I. elegans populations, were hypothesized to be caused by the absorption of the morph frequency that would otherwise be typical for each species [21]. Ischnura elegans data from other sympatric regions in the Iberian Peninsula now show that the substantial variation in female morph frequencies between populations in north-western Spain is not unique to this region; in other parts of Iberia, too, morph frequencies vary drastically between populations (androchrome: 3.3-70.8%; infuscans: 6.7-72.9%; infuscans-obsoleta: 3.0-76.7%). Furthermore, other recent studies have revealed that disparate morph frequencies are not restricted to Iberian populations [53,54], although androchrome frequencies are typically higher at northern latitudes and show less variation between populations than in Iberia [with the exception of Iberia, 53]. In our study, the highest frequencies of androchromes were found in the four populations where both species are present (Xuño, O Vilar, Alfaro and Las Cañas; Table 7); androchrome frequencies ranged between 12.5 and 17.4% in I. graellsii and between 40.0 and 70.8% in I. elegans. Nevertheless, the I. graellsii population Ribeira de Cobres had an androchrome frequency of 18.8%, an atypically high level for this species. This comparatively high androchrome frequency cannot be explained by the influence of nearby I. elegans populations, because this population is located in the allopatric region (southern Portugal). Furthermore, in I. elegans the lowest androchrome frequencies were found in north-central Spain (population Arreo), where the nearest I. graellsii population (Troi) had a higher androchrome frequency (14.8%) than the I. elegans population (3.3%), and in a coastal population near the Mediterranean coast of eastern Spain (Amposta; 3.3% androchromes), though for the latter we lack data on I. graellsii morph frequencies in nearby populations.
In conclusion, hybridization is likely to have important implications for the maintenance of multiple female morphs, but only during the short period when two species with contrasting morph frequencies start to hybridize.
The evolution, maintenance and adaptive function of genetic colour polymorphisms have received considerable attention in a broad range of organisms, e.g. birds [55], reptiles [56] and insects [57]. However, the role of hybridization in this context has received little attention, which may simply reflect the fact that few suitable study systems are available; one is the presently described Ischnura damselfly complex in Spain. Another is the hybrid zone of the land snails Mandarina mandarina and M. chichijimana on the oceanic Bonin Islands [58]. In that system, the variability of the colour polymorphism in the hybrid population was substantially higher than in the pure populations, suggesting that morphological variation was maintained by hybridization [58]. These results highlight the importance of hybridization as a source of morphological variation, diversity and evolutionary novelty.
Conclusions
When hybrids are fertile and backcross with one of the parental species, hybridization will inevitably result in introgression, thereby increasing the genetic variability of the introgressed species [59]. Even at low levels, introgression of novel genetic material can be an important source of new genetic and phenotypic variability and of subsequent evolution; thus, a certain degree of hybridization could create favourable conditions for new adaptations [4][5][6]. In contrast, extensive hybridization and introgression can, as mentioned above, have important conservation implications when they lead to the replacement of one of the hybridizing taxa [10,42-44]. We have documented a unique case of hybridization and unidirectional introgression in polymorphic Ischnura damselflies, where hybridization is likely to have important implications for the temporal maintenance of multiple female morphs. The potential adaptive significance of introgression in this system, and whether it per se has contributed to the rapid species turnover in sympatric populations in the region, remains to be evaluated.
Methods
Spatial distribution of I. graellsii and I. elegans in the Iberian Peninsula
We conducted a revision of the distribution data of the two species across the Iberian Peninsula from 1866-2008 using data from Baixeras et al. [60], Jödicke [61] and Ocharan [22,62], for the region around La Rioja using data from Tomás Latasa (personal communication) and for the rest of Iberia using data from Jean Pierre Boudot (personal communication). Using DMAP (Distribution mapping software, Version 7.0) we constructed two geographic maps showing the distribution of both species in Iberia (Figure 2).
Study populations and sample collection
Samples of I. elegans and I. graellsii were collected from 26 populations in Europe and northern Africa (see Table 2 for details of sampling locations). All European populations except those from Spain were classified as allopatric I. elegans populations and included a total of 220 individuals from 13 I. elegans populations. In-depth sampling of both I. elegans and I. graellsii was carried out in the sympatric region in Spain: in the north-western corner, the central parts and along the east coast (Figure 1). In these areas, nine sympatric populations (166 individuals) were sampled; with the exception of Alfaro, these were dominated by individuals phenotypically classified as I. elegans (see Table 3 for species proportions in the sampled populations). In addition, four allopatric I. graellsii populations (56 individuals) from northern, central and southern Iberia and northern Africa were sampled.
At each allopatric and sympatric I. elegans population (see Table 2), and at the four allopatric I. graellsii populations, a minimum of 20 adult males were collected during the flight season between 1999 and 2008 using hand nets. Captured individuals were stored in 100% ethanol until DNA extraction. Only males were sampled because the identification of male I. elegans, I. graellsii and hybrids is more reliable than that of females. Note that the few individuals classified as I. graellsii or hybrids in the Spanish I. elegans populations, mainly found in Alfaro in north-central Spain (Table 3), were not included in the genetic analyses because the aim of this study was to test for introgression in I. elegans populations.
DNA extraction and microsatellite genotyping
To extract DNA, the head of each damselfly was removed, dried and then homogenized using a TissueLyser (Qiagen). DNA was extracted by proteinase K digestion followed by a standard phenol/chloroform-isoamyl alcohol extraction [63]. The purified DNA was re-suspended in 50-100 μl of sterile water. The genotypes of all damselflies were assayed at six microsatellite loci previously isolated for this species (I-002, I-015, I-041, I-053, I-095, I-134; for details see [33]). These loci did not deviate statistically from Hardy-Weinberg expectations or linkage equilibrium, and showed no evidence for the presence of common null alleles (using Micro-Checker [64]) within populations of either species [34]. One primer of each pair was 5'-labelled with 6-FAM, HEX or NED fluorescent dyes. Polymerase chain reactions (PCRs) were carried out in 10 μL volumes on a GeneAmp PCR System 9700 (Applied Biosystems) and contained 4 pmol of each primer, 15 nmol MgCl2, 1.25 nmol dNTP, 0.5 U AmpliTaq polymerase and 10-20 ng template. The cycling conditions were: a denaturation step of 94°C for 2 minutes, then 35 cycles of 94°C for 30 s, touch-down from 62-58°C for 30 s, and 72°C for 30 s, followed by 72°C for 10 minutes. Multiplex primer reactions were performed for combinations of primers with matching annealing temperatures but differing size ranges and dye labels. Products were then mixed with a labelled size standard, and electrophoresis was conducted on an ABI PRISM 3730 Genetic Analyzer (Applied Biosystems). GeneMapper 3.0 (Applied Biosystems) was used for fragment size determination and allelic designations.
Microsatellite DNA Analyses
The program FSTAT [65] was used to calculate several basic population genetic measures, namely the expected heterozygosity (HE), observed heterozygosity (HO), number of alleles, and allelic richness for each population. These measures, as well as the genetic differentiation between populations (FST), were also calculated for three regions: the Spanish populations of I. elegans, the European populations of I. elegans (excluding Spain), and the entire sample of I. graellsii.
Principal component analysis (PCA) was used to reduce the variation in the multivariate data set (117 alleles at six loci) to two linear combinations. The analysis was done using PCA-GEN [66], and the significance of each principal component was assessed from 5000 randomisations of genotypes. The allocation of each species across the two principal components provides a quantitative measure of the degree of genetic dissimilarity among populations/species [18].
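For readers who want to reproduce this kind of analysis, the sketch below shows the general idea in Python with scikit-learn rather than PCA-GEN (so the randomisation test and exact scaling differ from the authors' software); the genotype matrix is a hypothetical placeholder, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one row per individual, one column per allele (117 alleles
# at six loci in the study); entries are allele copy counts (0, 1 or 2).
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(60, 117)).astype(float)

# Centre each allele column, then project onto the leading axes.
genotypes -= genotypes.mean(axis=0)
pca = PCA(n_components=4)
scores = pca.fit_transform(genotypes)

# Proportion of the total genetic variation summarised by each axis
# (the first two axes captured 24% and 17% in the study).
print(pca.explained_variance_ratio_)
print(scores[:5, :2])  # PC1/PC2 scores used to plot population separation
```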
The Bayesian statistical framework provided by the program STRUCTURE [version 2.2.3, 67] was used to infer the genetic structure among populations and to determine which individuals from the allopatric and sympatric populations of I. elegans and I. graellsii can be classified to a high degree as pure species. STRUCTURE applies a Bayesian Markov chain Monte Carlo (MCMC) approach that uses model-based clustering to partition individuals into groups. The model accounts for the presence of Hardy-Weinberg and linkage disequilibrium by introducing group structure and attempts to find groupings that are, as far as possible, in equilibrium [68]. We applied the 'admixture model' with 'correlated allele frequencies' (for more details, see [69]). A 'burn-in' period of 20,000 MCMC replicates and a sampling period of 100,000 replicates were used. We performed runs for numbers of genetic clusters (K) ranging from one to ten, with 20 iterations per K. In this way, multiple posterior probability values (log likelihood (lnL) values) were generated for each K, and the most likely K was evaluated by the ΔK-method following Evanno et al. [66].
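The ΔK statistic used here can be computed directly from the per-K log-likelihoods that STRUCTURE reports: ΔK = |mean L(K+1) - 2 mean L(K) + mean L(K-1)| / sd(L(K)). A minimal sketch in Python (the lnL values below are hypothetical placeholders, not the study's output):

```python
import numpy as np

# lnL[k] holds the 20 replicate log-likelihoods for each tested K (K = 1..10).
# Values are hypothetical placeholders for illustration.
rng = np.random.default_rng(1)
lnL = {k: rng.normal(-5000 - 100 * abs(k - 3), 10, size=20) for k in range(1, 11)}

means = {k: np.mean(v) for k, v in lnL.items()}
sds = {k: np.std(v, ddof=1) for k, v in lnL.items()}

# Evanno et al.'s Delta K, defined only for interior values of K.
delta_k = {
    k: abs(means[k + 1] - 2 * means[k] + means[k - 1]) / sds[k]
    for k in range(2, 10)
}
best_k = max(delta_k, key=delta_k.get)
print(delta_k)
print("most likely K:", best_k)  # the study recovered K = 3 this way
```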
Admixture analyses in STRUCTURE were also used to assign all individuals of the Spanish Ischnura populations to each of two genetic clusters, one representing I. graellsii genotypes and one I. elegans. We used the 'prior population information' option in the models (i) to facilitate the clustering process of the reference individuals (i.e. pure I. elegans from central and eastern Europe, and I. graellsii, respectively), and (ii) to calculate the admixture proportions (and ± 90% credible regions) of each individual in the Spanish I. elegans populations. This approach was hence used to measure the degree of introgression of I. graellsii genetic material into the genome of I. elegans in Spain. The model was run for K = 2, where one cluster corresponded to I. graellsii and the other to I. elegans. We used the 'population flag' option to exclude Spanish I. elegans as reference individuals, which implied that the clustering process was based only on I. graellsii samples and I. elegans samples collected outside of Spain. The model was run for 100,000 MCMC replicates, after an initial burn-in period of 20,000 replicates, using the admixture model and correlated frequencies for five iterations [32-34,63]. To generate simulated genotypes of hybrids and backcrosses, we applied the program HYBRIDLAB [70], using the genotypes of 66 individuals of I. graellsii and 240 genotypes of I. elegans collected outside of Spain as initial genotypes. We generated 50 genotypes of each of the following crosses: first-generation hybrid (F1; i.e. I. graellsii × I. elegans), second-generation hybrid (F2; i.e. F1 × F1), first backcross with I. elegans (1 EB; i.e. F1 × I. elegans), first backcross with I. graellsii (1 GB; F1 × I. graellsii), second backcross with I. elegans (2 EB; 1 EB × I. elegans), third backcross with I. elegans (3 EB; 2 EB × I. elegans), and fourth backcross with I. elegans (4 EB; 3 EB × I. elegans). We then evaluated the admixture proportions (± 90% credible intervals) of these artificial crosses with STRUCTURE in the same way as for the I. elegans samples from Spain (above). To determine the level of introgression of I. graellsii into the Spanish I. elegans populations, the individual admixture proportions of the I. elegans samples from Spain were compared to those of the artificial hybrids and backcrosses.
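The hybrid-simulation step is conceptually simple: each simulated offspring receives, at every locus, one allele drawn at random from each parental genotype. The sketch below illustrates an F1 cross and a first backcross in Python under that assumption; it is a simplified stand-in for HYBRIDLAB, with hypothetical allele pools.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parental pools: rows are individuals; each locus is stored as
# two allele columns (six loci -> 12 columns of integer allele sizes).
graellsii = rng.integers(100, 120, size=(66, 12))
elegans = rng.integers(100, 120, size=(240, 12))

def simulate_cross(pool_a, pool_b, n_offspring, n_loci=6):
    """Each offspring inherits one random allele per locus from each pool."""
    offspring = np.empty((n_offspring, 2 * n_loci), dtype=int)
    for i in range(n_offspring):
        mother = pool_a[rng.integers(len(pool_a))]
        father = pool_b[rng.integers(len(pool_b))]
        for locus in range(n_loci):
            offspring[i, 2 * locus] = mother[2 * locus + rng.integers(2)]
            offspring[i, 2 * locus + 1] = father[2 * locus + rng.integers(2)]
    return offspring

f1 = simulate_cross(graellsii, elegans, 50)   # F1: I. graellsii x I. elegans
eb1 = simulate_cross(f1, elegans, 50)         # 1 EB: first backcross with I. elegans
print(f1.shape, eb1.shape)
```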
"Biology"
] |
Economic Feasibility Study of Modern and Conventional Central Heating Systems for Villa Located in Duhok City, Iraq
This research evaluates the economic feasibility, based on Iraqi market costs, of three different central heating systems for a residential villa located in Duhok city, Iraq, and compares modern and conventional central heating systems. A life cycle cost analysis based on detailed heating and operating load profiles is carried out, and the initial, running and maintenance costs of the three systems are examined over a fifteen-year working period. The systems studied are a modern central heating system (heat pump water heater system) and two conventional central heating systems (fuel oil hot water boiler system and electric hot water boiler system). The transfer function method, implemented in the Hourly Analysis Program (HAP 4.9), is used to estimate the heating load of each zone of the project. The proposed modern central heating system is found to be more efficient both thermally and economically; it uses an environmentally friendly refrigerant (R410A) and works efficiently in a severely cold climate with winter temperatures as low as -20°C. The results show that the heat pump water heater central heating system is the most efficient system, providing energy savings of up to 57.5% compared with the electric hot water boiler system and 70.6% compared with the fuel oil hot water boiler system.
Introduction
Air conditioning systems in residential buildings account for up to 50% of electricity consumption in many regions around the world [1], and the commercial building sector is responsible for 64% of electricity consumption [2]. Minimizing energy consumption has therefore become the main target in designing new air conditioning systems, and Iraqi HVAC designers have come up with sustainable designs that lower energy use in residential and commercial buildings. However, a central heating system that saves operating costs usually requires a higher initial investment, so engineers must decide whether it is worth paying the extra initial cost for a system with a lower operating cost [3]. Salim [4] showed that the total costs of two A/C systems (RAC and CAC) for a residential apartment become equal after 20 months of operation; beyond this break-even point, the total cost of the CAC system begins to decline, indicating savings in electricity consumption and a worthwhile cost advantage over a ten-year operating interval. Wang et al. [5] indicated that a low-temperature radiant floor heating system is best suited to natural gas condensing water boilers, and is more comfortable, more economical, and saves more energy than other heating systems. Li et al. [6] pointed out that composite energy heating systems have a higher initial investment but lower operating costs and, judging by the combined static payback period, significant economic benefits as well as good prospects in market competitiveness. Shah et al. [7] compared the life cycle impacts of three residential heating and cooling systems over a thirty-five-year study period at four locations in the United States; in Minnesota, Pennsylvania, and Texas the heat pump had the highest impacts, whereas in Oregon it had the lowest. The purpose of this work is to compare three different types of central heating systems, taking into account initial, maintenance and operating costs. The systems are the fuel oil hot water boiler system, the heat pump water heater system and the electric hot water boiler heating system; each central heating system is briefly described below.
Moreover, the design and operation of the mentioned systems are presented in detail. A residential building (villa) located in Duhok city was selected as a model to test the above systems, with a daily operation period of 20 hours. In addition, the life cycle cost of these systems is analyzed and economically evaluated based on detailed load estimation, initial cost, maintenance cost and operating costs over a 15-year period [8,9].
Description the Project Sample with Heating Load Estimation
The residential house considered in this study is a two-storey villa with gross floor areas of 370 m² (ground floor) and 329 m² (first floor), located in Duhok city, Iraq (latitude 36.820, longitude 43.146). Figures 1-a and 1-b show the architectural plans and heating load results for the ground and first floors. The central heating period for Duhok city, which has extremely cold winter temperatures and a wet climate, covers approximately 182 days. The indoor design conditions are 24.2°C dry bulb temperature and 40% relative humidity, and the outdoor winter design conditions are -6°C dry bulb temperature and -11°C wet bulb temperature [10]. In this study, the transfer function method, which derives from the heat balance method and is implemented in the Hourly Analysis Program (HAP v4.9), was used to estimate the heating load of each zone within the villa, as shown in Table 1. The total heating loads for the ground and first floors are 23.9 kW and 27 kW respectively; Figure 2 shows the flow chart used to estimate the heating load for each zone of the project. The roof construction consists of a flat roof with 40 mm high-density concrete shtyger, 70 mm dry sand, 150 mm high-density concrete and 10 mm juss plaster, while the walls consist of 15 mm cement plaster, 240 mm common brick and 15 mm juss plaster.
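The full transfer function method accounts for thermal storage in the building fabric, but the steady-state core of a design heating load is the familiar conduction relation Q = U·A·(Ti - To) summed over the envelope elements, plus infiltration. The sketch below (Python) illustrates that simplified calculation only; the U-values, areas and infiltration rate are hypothetical placeholders, not the villa's actual construction data.

```python
# Simplified steady-state design heat loss for one zone: Q = U * A * (Ti - To),
# summed over envelope elements, plus a sensible infiltration load.
T_INDOOR = 24.2   # deg C, indoor design dry bulb (from the study)
T_OUTDOOR = -6.0  # deg C, outdoor winter design dry bulb (from the study)

# Hypothetical envelope elements: (name, U-value W/m2K, area m2)
elements = [
    ("external wall", 1.8, 45.0),
    ("roof",          1.2, 60.0),
    ("window",        5.7,  8.0),
]

dT = T_INDOOR - T_OUTDOOR
conduction = sum(u * a * dT for _, u, a in elements)  # watts

# Infiltration: Q = rho * cp * V_dot * dT, with an assumed 0.05 m3/s airflow
RHO_AIR, CP_AIR, v_dot = 1.2, 1006.0, 0.05
infiltration = RHO_AIR * CP_AIR * v_dot * dT

print(f"zone design heating load ~ {(conduction + infiltration) / 1000:.1f} kW")
```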
Description of the Types of Central Heating Systems
The second step after finalizing the heating load estimation for the project is choosing the appropriate central heating system: one with the lowest running cost over a fifteen-year operating interval, combined with easy installation, high efficiency and straightforward future maintenance. In this study, three different types of central heating system are used: i. Fuel Oil Hot Water Boiler System (FHWBs); ii. Heat Pump Water Heater System (HPWHs); iii. Electric Hot Water Boiler System (EHWBs).
I. Fuel Oil Hot Water Boiler System (FHWBs)
Oil fuel boilers are pressure vessels designed to transfer heat produced by combustion to a fluid; in most boilers the fluid is water, as liquid or steam. The firebox, or combustion chamber, of some boilers is also called a furnace. Excluding special and unusual fluids, materials, and methods, a boiler is a cast-iron, carbon or stainless steel, aluminum, or copper pressure-vessel heat exchanger designed to burn fuels and transfer the released heat to water (in water boilers) [11]. Radiator units are the heat-distributing devices used in low-temperature hot water heating systems; they supply heat by a combination of radiation and convection and maintain the desired air temperature and/or mean radiant temperature in a space without fans [12]. The advantages of an oil-fired central heating boiler are that oil as a fuel tends to be truly efficient, the return on cost is quite satisfactory, and replacing an old boiler with a newer, more efficient model is a hassle-free process. Figure 3 shows the schematic diagram of radiator units in a two-pipe arrangement. The fuel oil hot water central heating system was designed with HVAC Solution software, which was also used to produce the bill of quantities shown in Table 2 [13]. Figure 4 shows the water pipe design with the radiator distribution used in this study.

II. Heat Pump Water Heater System (HPWHs)

During the last two decades, radiant floor heating applications have increased significantly. In the 1950s and 60s, floor heating installations using steel or copper pipes were installed in middle Europe. Unfortunately, at that time buildings were not well insulated, so very high floor temperatures were required to heat the houses, which gave floor heating a bad reputation. Then, at the end of the 1970s, the introduction of plastic pipe for floor heating prompted the system to become standard in many countries. Today, plastic pipes of the PEX type are mainly used, and floor heating is mostly applied in residential buildings [14]. Modern underfloor heating systems use a heat pump as the heat source for water flowing within plastic pipes to heat the floor. Electric heating elements or hydraulic piping can be cast in a concrete floor slab ("poured floor system" or "wet system"), placed under the floor covering ("dry system"), or attached directly to a wood subfloor ("subfloor system", also a "dry system"). Even in the cold months, the heat pump water heater system will still work down to -20°C; it requires electricity to run, but the heat it extracts from the air is renewed as a natural process. Heat pumps work much more efficiently at a lower temperature than a standard boiler system, so they are more suitable for underfloor heating systems or larger radiators, which give out heat at lower temperatures over longer periods. Figure 5 shows the schematic diagram of the heat pump water heater system with underfloor radiant heating. The heat pump water heater system was designed with HeatCAD software, which was used to produce the bill of quantities shown in Table 3. Figure 6 shows the underfloor heating system design [15].
III. Electric Hot Water Boiler Heating System (EHWBs)

This type of central heating system is the same as the fuel oil hot water boiler system except for the heat source: it depends on electricity to heat the water. The advantages of the electric hot water heater include quicker heating, higher safety ratings, and a potentially slightly longer lifespan than other units, though this largely depends on local water quality and owner maintenance. Its disadvantage appears in the (unlikely) event of a power failure or power cut. Figure 7 shows the hot water central heating system design produced with HVAC Solution software, which was used to state the bill of quantities shown in Table 4.
Costs Analysis of Central Heating Systems and Results
The overall cost analysis for the central heating systems used in the residential building (villa) is classified into initial cost, operation cost and maintenance cost; all costs that will be incurred over the lifetime of the system should be taken into account.
I. Initial Cost
The total initial cost of a central heating system includes purchasing and installation costs. Tables 2, 3 and 4 show the estimated initial costs for the three types of central heating system, and Figure 8 shows the resulting initial costs. The initial cost of the HPWHs is 20.2% lower than that of the EHWBs and 28% lower than that of the FHWBs.
II. Operation Cost
The running energy of the heat pump water heater system and the electric hot water boiler system depends on electrical energy, while the fuel oil hot water boiler system uses fuel for operation. The estimated electric and fuel power consumption of each central heating system is based on the manufacturers' published data [16,17]. The electricity tariff is the sale price of the electrical unit used to calculate the total monthly cost for a residence [18]. The net operating time of the residence is twenty hours per day for all days of the year. Equation (1) gives the electricity running cost for a single day, and Table 5 shows the energy cost for each type of central heating system. Figure 9 shows the running cost per year for each system: the annual running cost of the HPWHs is 57.8% lower than that of the EHWBs and 70.6% lower than that of the FHWBs. On the other hand, the annual maintenance cost of the EHWBs is 8.5% lower than that of the HPWHs and 26.5% lower than that of the FHWBs, as illustrated in Table 6 and Figure 10.
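Equation (1) is not reproduced in this excerpt, but a daily electricity running cost of this kind is normally just input power × operating hours × tariff. The sketch below (Python) shows that calculation under stated, hypothetical numbers; the power draws and tariff are placeholders, not the paper's Iraqi market figures.

```python
# Daily electricity running cost (Eq. (1) analogue): cost = P * hours * tariff.
HOURS_PER_DAY = 20      # operating hours per day (from the study)
HEATING_DAYS = 182      # approximate heating season in Duhok (from the study)

# Hypothetical inputs for illustration only:
TARIFF_PER_KWH = 0.05   # assumed electricity tariff, currency units per kWh
systems_kw = {"HPWHs": 12.0, "EHWBs": 28.0}  # assumed electrical input power

for name, p_kw in systems_kw.items():
    daily_cost = p_kw * HOURS_PER_DAY * TARIFF_PER_KWH
    seasonal_cost = daily_cost * HEATING_DAYS
    print(f"{name}: {daily_cost:.2f} per day, {seasonal_cost:.0f} per season")
```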
III. Maintenance Cost
Maintenance costs include all planned equipment maintenance, such as cleaning, replacement and repair; they depend on the age of the system and the length of its operating time [19]. The annual cost that the customer pays to maintain the central heating system is expressed by Eq. (2). Table 6 shows the maintenance cost for each central heating system, and Figure 10 shows the maintenance cost per year for each system [20].
Life Cycle Cost Analyses of Central Heating Systems
A life cycle cost analysis was carried out to evaluate the total cost, comprising initial, operating and maintenance costs, of the different central heating systems. A life cycle of fifteen years was assumed, and the present-worth cost technique was used to evaluate each central heating system and to examine total costs. Equation (3) gives the life cycle cost [21], and Figure 11 shows the life cycle cost of each central heating system over fifteen years. The HPWHs shows significant advantages over the other two systems, offering the lowest running and total costs, and it works even at temperatures as low as -20°C. It can also be equipped with modern technologies such as smart integrated controls, variable speed drives, ozone-friendly refrigerant and flexible operation.
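Equation (3) is likewise not reproduced here, but a standard present-worth life cycle cost takes the form LCC = IC + (OC + MC) · PWF, where PWF = ((1+i)^n - 1) / (i(1+i)^n) discounts n years of equal annual costs at interest rate i. The sketch below applies that standard formulation; the cost inputs and the discount rate are hypothetical assumptions, not the paper's figures.

```python
# Life cycle cost with the present-worth factor for a uniform annual series:
# LCC = initial + (annual_operating + annual_maintenance) * PWF(i, n)
def pwf(i: float, n: int) -> float:
    """Present-worth factor for n equal annual payments at interest rate i."""
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

YEARS, RATE = 15, 0.08  # 15-year study period (from the paper); assumed 8% rate

# Hypothetical cost inputs: (initial, annual operating, annual maintenance)
systems = {
    "FHWBs": (9000, 4200, 600),
    "HPWHs": (6500, 1300, 480),
    "EHWBs": (8100, 3100, 440),
}

for name, (ic, oc, mc) in systems.items():
    lcc = ic + (oc + mc) * pwf(RATE, YEARS)
    print(f"{name}: LCC ~ {lcc:,.0f} currency units")
```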
Conclusions
This paper analyzes three different central heating systems, examined for a villa located in Duhok city in the northern part of Iraq. The transfer function method, with the aid of HAP software, was used to calculate the heating load in each zone. The fuel oil hot water boiler system (FHWBs), heat pump water heater system (HPWHs) and electric hot water boiler heating system (EHWBs) were examined and compared, taking into account initial and running costs over a 15-year working period. It is concluded that the HPWHs saves a significant amount of energy, about 57.5% and 70.6% compared with the EHWBs and FHWBs respectively, and has the lowest running cost of the three systems. In contrast, the EHWBs has the lowest maintenance cost, and its operating cost over the 15-year period is lower than that of the FHWBs. Overall, the HPWHs has superior advantages in both running and initial cost compared with its competitors.
"Engineering",
"Environmental Science"
] |
Expression Analysis of Ligand-Receptor Pairs Identifies Cell-to-Cell Crosstalk between Macrophages and Tumor Cells in Lung Adenocarcinoma
Background. Lung adenocarcinoma is one of the most commonly diagnosed malignancies worldwide. Macrophages play crucial roles in the tumor microenvironment, but their autocrine network and communications with tumor cells remain unclear. Methods. We acquired single-cell RNA sequencing (scRNA-seq) samples (n = 30) and bulk RNA sequencing samples (n = 1480) of lung adenocarcinoma patients from previous literature and publicly available databases. Various cell subtypes were identified, including macrophages. Differentially expressed ligand-receptor gene pairs were obtained to explore cell-to-cell communications between macrophages and tumor cells. Furthermore, a machine-learning predictive model based on ligand-receptor interactions was built and validated. Results. A total of 159,219 single cells (18,248 tumor cells and 29,520 macrophages) were selected for this study. We identified significantly correlated autocrine ligand-receptor gene pairs in tumor cells and macrophages, respectively. Furthermore, we explored the cell-to-cell communications between macrophages and tumor cells and detected significantly correlated ligand-receptor signaling pairs. We determined that some of the hub gene pairs were associated with patient prognosis and constructed a machine-learning model based on the intercellular interaction network. Conclusion. We revealed significant cell-to-cell communications (both autocrine and paracrine) within macrophages and tumor cells in lung adenocarcinoma, and identified hub genes with prognostic significance in the network.
Introduction
Lung cancer remains the leading cause of cancer incidence and death worldwide, and lung adenocarcinoma is its largest subtype, with increasing incidence [1][2][3]. Previous studies have suggested that the tumor microenvironment, including that of lung adenocarcinoma, plays crucial roles in the different steps of tumorigenesis and in therapeutic responses [4][5][6][7]. The function of macrophages has been reported to be altered in lung cancer [8]. Ohtaki et al. revealed that CD204+ macrophages represent a tumor-promoting phenotype in lung adenocarcinoma [9]. Lavin et al. determined that tumor-associated macrophages have a distinct transcriptional signature in lung adenocarcinoma and summarized their immunosuppressive role in early stages of the disease [10]. However, the network of cell-to-cell communications (both autocrine and paracrine) within macrophages and tumor cells in lung adenocarcinoma has not been fully explored. The communications among different cells are regulated by pairs of ligands and cell-surface receptors.
In recent decades, gene expression profiling of cancers has primarily depended on RNA sequencing (RNA-seq) technology, in which samples are regarded as a whole. Tumors, together with the tumor microenvironment, are composed of heterogeneous cell populations, including macrophages, T cells, and cancer cells. Bulk RNA-seq measures the averaged expression level across all cell subtypes and therefore fails to reflect the intrinsic heterogeneity of gene expression and functional features [11]. Single-cell RNA sequencing (scRNA-seq) enables investigation of the tumor microenvironment at the single-cell rather than the cell-population level [12][13][14]. Therefore, the application of scRNA-seq allows us to go a step further in analyzing cell-to-cell crosstalk between macrophages and tumor cells.
In this study, we explored the coexpression of ligand-receptor pairs using both RNA-seq and 10× Genomics single-cell RNA sequencing (10× scRNA-seq) data, providing a framework to investigate the cell-to-cell communications between macrophages and tumor cells in lung adenocarcinoma. We identified differentially expressed genes of ligand-receptor pairs in both the autocrine and paracrine networks of macrophages and tumor cells, and tested their clinical significance in lung adenocarcinoma using a machine-learning model.
Study Cohorts.
We integrated three independent cohorts of scRNA-seq data as the main study population. One was composed of tumor samples from 14 patients with primary lung adenocarcinoma from previous literature, following the relevant data availability statement [15]. The other two cohorts of scRNA-seq samples were downloaded from the ArrayExpress database (https://www.ebi.ac.uk/arrayexpress/; accession numbers E-MTAB-6149 and E-MTAB-6653), also based on previous literature. Detailed clinicopathological characteristics of all patients enrolled in the scRNA-seq cohort are shown in Supplement Table 1.
Five Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/) datasets (GSE30219, GSE31210, GSE50081, GSE37745, and GSE68465) were enrolled in this study. The first four datasets were derived from the GPL570 GeneChip of Affymetrix (Santa Clara, CA, USA), while GSE68465 was based on the GPL96 GeneChip of Affymetrix. Raw data and GeneChip files were downloaded directly from the GEO database. To integrate the different datasets, we adopted a robust multichip average method based on RMAExpress for background adjustment, quantile normalization, and summarization of gene profiles [16][17][18]. The GPL570 GEO cohort (544 patients) was adopted for the correlation and prognostic analyses of ligand-receptor pair genes, and the GPL96 cohort (443 patients) was used as the training cohort for the construction of the machine-learning prognostic model. Moreover, level 3 RNA-seq data of lung adenocarcinoma patients were downloaded from The Cancer Genome Atlas (TCGA) before October 6, 2019 (https://portal.gdc.cancer.gov/); a total of 493 tumor samples with complete follow-up information were obtained. We chose the TCGA dataset as the validation cohort for the machine-learning prognostic model. Detailed baseline features of all patients from both the GEO and TCGA databases are listed in Supplement Table 2.

2.2. Analyses of 10× scRNA-Seq Data. The detailed methods of 10× scRNA-seq and data preprocessing are described in the Supplement Methods. The normalized 10× scRNA-seq data were transformed into a Seurat object using the Seurat R package [19]. Principal component analysis (PCA) was performed based on the top 2000 highly variable genes. To integrate the three 10× scRNA-seq cohorts in this study, we used the Harmony R package. Uniform manifold approximation and projection (UMAP) was conducted for cell clustering and visualization (Supplement Figure 1). The identification of different cell subtypes was achieved using the CellMarker dataset and the SingleR R package [20,21]. According to the literature, EPCAM, SOX4, and MDK are considered gene markers for tumor cells, while SFTPD, AGR3, and FOLR1 are closely associated with epithelial cells [14]. Owing to the distinct subtypes of myeloid cells, we used CD163, LYZ, ELANE, and FCER1A to differentially identify macrophages, Langerhans cells, and granulocytes [10,20,21]. Detailed information on the cell typing markers is shown in Supplement Figures 2 and 3.
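The authors' pipeline is Seurat/Harmony in R; as a hedged Python analogue, the same steps (highly variable genes, PCA, Harmony integration, UMAP, clustering) might look roughly like the sketch below using scanpy, with a placeholder input file. This is not the authors' code, and exact results will differ between the two ecosystems.

```python
import scanpy as sc

# Placeholder input: a merged count matrix with a per-cell "cohort" label.
adata = sc.read_h5ad("merged_lung_cohorts.h5ad")  # hypothetical path

# Standard preprocessing: normalize, log-transform, flag 2000 variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)

# PCA (uses the highly variable genes), then Harmony for batch correction.
sc.tl.pca(adata, n_comps=50)
sc.external.pp.harmony_integrate(adata, key="cohort")  # requires harmonypy

# Neighbourhood graph on the corrected embedding, then UMAP and clustering.
sc.pp.neighbors(adata, use_rep="X_pca_harmony")
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=1.0)
sc.pl.umap(adata, color=["leiden", "EPCAM", "CD163"])  # marker-based typing
```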
Cell-to-Cell Communication Analyses.
In this step, we basically followed the procedures described in previous literature [22,23]. The list of ligand-receptor pairs was downloaded from the FANTOM5 project [24].
First, we explored the network of autocrine ligand-receptor gene pairs in tumor cells and macrophages. The expression of ligand and receptor genes was compared between lung adenocarcinoma cells and normal epithelial cells using the MAST package in the scRNA-seq cohort [25]. Then, we selected pairs of ligand-receptor genes that were concurrently upregulated or downregulated in lung adenocarcinoma cells or tumor-associated macrophages. To quantify the coexpression levels of ligand-receptor pairs, Spearman's rank correlation coefficients were calculated in the bulk RNA-seq cohort (the GPL570 GEO cohort), with a coefficient of 0.3 as the threshold for further screening. Gene set variation analysis (GSVA) with the Hallmark gene set was conducted to detect changes in enriched pathways [26].
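A minimal sketch of this coexpression screen in Python (pandas/scipy rather than the authors' workflow; the expression table and pair list are hypothetical placeholders):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical inputs: bulk expression (genes x samples) and ligand-receptor pairs.
expr = pd.read_csv("gpl570_expression.csv", index_col=0)  # placeholder path
pairs = pd.read_csv("ligand_receptor_pairs.csv")          # columns: ligand, receptor

kept = []
for ligand, receptor in pairs.itertuples(index=False):
    if ligand not in expr.index or receptor not in expr.index:
        continue
    rho, p = spearmanr(expr.loc[ligand], expr.loc[receptor])
    # Keep pairs whose coexpression exceeds the 0.3 threshold.
    if rho > 0.3 and p < 0.05:
        kept.append((ligand, receptor, rho, p))

result = pd.DataFrame(kept, columns=["ligand", "receptor", "rho", "p"])
print(result.sort_values("rho", ascending=False).head())
```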
Second, we explored the paracrine network of crosstalk between macrophages and lung adenocarcinoma cells. Comparisons of ligand and receptor gene expression were also performed in macrophages stratified by neoplastic and nonneoplastic origin in the scRNA-seq cohort. Then, we selected ligand-receptor gene pairs whose members were highly expressed in these two cell types, respectively. The subsequent correlation analyses in the bulk RNA-seq cohort and the coefficient threshold were the same as above. Furthermore, we selected Hallmark and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways containing the top ligand-receptor gene pairs from the above analyses, and comprehensively studied the expression changes of genes in the selected pathways in the scRNA-seq cohort. Here, we aimed to observe the transcriptional consequences of ligand-receptor pathway activation at the single-cell level. Gene Ontology (GO) analyses were also performed on the selected ligand-receptor genes.
Third, we displayed the potential roles and interactions of ligand-receptor gene pairs within tumor cells and subtypes of macrophages. To calculate the M1/M2 polarization and pro-/anti-inflammatory potential of macrophages, we retrieved associated gene sets following previous literature [12,27]. In the scRNA-seq cohort, we classified and annotated subclusters of tumor-associated macrophages [15]. Based on the significantly differentially expressed ligand and receptor gene pairs in the scRNA-seq cohort, we evaluated the interaction scores of gene pairs within tumor cells and subtypes of macrophages using the CellChat toolkit [15,28].
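CellChat's scoring is model-based, but the underlying idea, quantifying a sender-receiver link by combining ligand expression in one cell type with receptor expression in another, can be illustrated with a simple mean-expression product (a CellPhoneDB-style heuristic, not CellChat's actual formula; all inputs below are hypothetical placeholders):

```python
import pandas as pd

# Hypothetical single-cell expression: rows = cells, columns = genes,
# plus a per-cell type label aligned on the same index.
expr = pd.read_csv("sc_expression.csv", index_col=0)            # placeholder path
cell_type = pd.read_csv("cell_types.csv", index_col=0)["type"]  # placeholder

def interaction_score(ligand, receptor, sender, receiver):
    """Product of mean ligand expression in sender cells and mean
    receptor expression in receiver cells (simple heuristic score)."""
    lig = expr.loc[cell_type == sender, ligand].mean()
    rec = expr.loc[cell_type == receiver, receptor].mean()
    return lig * rec

# Example: macrophage-derived TGFB1 signaling to ENG on tumor cells,
# one of the top pairs reported in this study.
score = interaction_score("TGFB1", "ENG", sender="macrophage", receiver="tumor")
print(f"TGFB1 -> ENG score: {score:.3f}")
```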
2.4. Construction of the Machine-Learning Model. The Extreme Gradient Boosting (XGBoost) method is an advanced machine-learning algorithm based on the Gradient Boosting framework and has been widely adopted. XGBoost enhances the base Gradient Boosting framework through systematic and algorithmic optimizations, providing parallel tree boosting for effective prediction, which has been proven in many cases [29][30][31]. Details of the XGBoost algorithm can be obtained elsewhere (https://xgboost.readthedocs.io/en/latest/). The GEO GPL96 (GSE68465) dataset was split into low-risk (stage I-II patients) and high-risk (stage III-IV patients) groups for machine-learning prediction, and then randomly divided into training and internal test cohorts at a ratio of approximately 2:1. We adopted the significantly differentially expressed ligands or receptors from the scRNA-seq analyses as the initial gene set, and then selected those genes with prognostic value in the GSE68465 cohort. The sklearn package of Python was adopted to establish the machine-learning model based on the selected gene set. Finally, the TCGA dataset was used as the validation cohort for model evaluation.
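A minimal sketch of such a classifier in Python (xgboost + scikit-learn), with a hypothetical expression matrix; the file paths and hyperparameters are assumptions, not the authors' settings:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score
from xgboost import XGBClassifier

# Hypothetical inputs: expression of the selected ligand/receptor genes
# (samples x genes) and a binary risk label (0 = stage I-II, 1 = stage III-IV).
X = pd.read_csv("gse68465_lr_genes.csv", index_col=0)       # placeholder path
y = pd.read_csv("gse68465_risk.csv", index_col=0)["risk"]   # placeholder

# Roughly 2:1 split into training and internal test cohorts, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.33, stratify=y, random_state=42
)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05,  # assumed settings
    eval_metric="logloss",
)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("precision:", precision_score(y_te, pred))  # the study reports 0.94
print("recall:   ", recall_score(y_te, pred))     # and 0.78 on its test set
```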
2.5. Validations of Hub Ligand-Receptor Gene Pairs in Tumor Cells and Macrophages in Lung Adenocarcinoma. Ten lung adenocarcinoma samples and matched normal tissues were selected for validation with flow cytometry and quantitative real-time polymerase chain reaction (qRT-PCR). The experimental steps are described in previous literature [23,32]. Single cells from the selected samples were suspended in phosphate-buffered saline with 3% fetal bovine serum and incubated with human IgG (20 μg/ml, Sigma-Aldrich) for 15 minutes to block nonspecific antibody binding. Afterwards, the single cells were placed on ice and incubated for 30 minutes with Alexa 647-conjugated mouse anti-human EPCAM (10 μl/10^6 cells; cat. no. 566658, BD Biosciences, San Jose, CA, USA), PE-conjugated mouse anti-human FOLR1 (10 μl/10^6 cells; cat. no. FAB5646P, R&D Systems, Minneapolis, MN, USA), or Alexa 647-conjugated mouse anti-human CD163 (10 μl/10^6 cells; cat. no. 562669, BD Biosciences, San Jose, CA, USA). We used a Fortessa analyzer (BD Biosciences) and a FACS Aria III (BD Biosciences) to quantitate and isolate stained single cells, and FlowJo software (Version 10, TreeStar, Woodburn, OR, USA) for data analysis. To validate the associations of the selected hub ligand or receptor genes with macrophages, we used a public computational resource (Tumor IMmune Estimation Resource, TIMER) in the TCGA cohort, analyzing the correlations of hub ligand or receptor gene expression with the level of macrophage infiltration. Moreover, the sorted single cells were used for subsequent RNA extraction and reverse transcription with an RNA kit (Takara, Kusatsu, Japan). We tested and compared the expression of the selected hub ligand or receptor genes in lung adenocarcinoma cells, normal epithelial cells, and macrophages (Supplement Methods).
2.6. Statistical Analyses. All statistical analyses were performed with IBM SPSS Statistics 22.0 (IBM, Inc., Armonk, NY, USA) and R version 3.6.1 (R Foundation for Statistical Computing, Vienna, Austria). The ligand-receptor networks among cells in lung adenocarcinoma were displayed with Cytoscape version 3.7.2 (https://cytoscape.org/). Survival curves were estimated and compared with the Kaplan-Meier method and the log-rank test; patients were dichotomized at the median level of gene expression. A two-tailed P value < 0.05 was set as the threshold of statistical significance.
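A sketch of the median-split survival comparison in Python with the lifelines package (the survival table is a hypothetical placeholder; the authors used SPSS and R):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical input: per-patient survival time, event flag, and gene expression.
df = pd.read_csv("survival_with_expression.csv")  # placeholder path

# Dichotomize patients at the median expression of one gene of interest.
high = df["TGFB1"] >= df["TGFB1"].median()

kmf = KaplanMeierFitter()
for label, mask in [("high", high), ("low", ~high)]:
    kmf.fit(df.loc[mask, "time"], df.loc[mask, "event"], label=label)
    kmf.plot_survival_function()

# Log-rank test between the two expression groups (two-tailed, alpha = 0.05).
res = logrank_test(
    df.loc[high, "time"], df.loc[~high, "time"],
    event_observed_A=df.loc[high, "event"],
    event_observed_B=df.loc[~high, "event"],
)
print("log-rank P =", res.p_value)
```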
Cell Typing and the Identification of Tumor Cells and Macrophages. After quality filtering and merging of datasets, 159,219 cells from 21 patients (23 lung adenocarcinoma samples and 7 normal lung tissue samples) were identified based on 10× scRNA-seq (Figure 1(a) and Supplement Figure 1C). A total of 122,082 cells (76.7%) were derived from lung adenocarcinoma samples and 37,137 cells (23.3%) originated from normal lung tissue. The whole single-cell cohort was then classified into clusters using the PCA and UMAP algorithms, and the resulting cell clusters were further distinguished by marker genes; among them, we identified single cells of the alveolar cluster.
Expression Correlation Analyses Suggested Significant Autocrine Ligand-Receptor Gene Pairs of Tumor Cells in Lung Adenocarcinoma. We detected 13,560 differentially expressed genes by comparing lung adenocarcinoma tumor cells and normal epithelial cells in the 10× scRNA-seq cohort (Figure 2(a)). As a result, we identified 240 upregulated and 234 downregulated ligand-receptor pair genes that were significantly increased or decreased simultaneously in lung adenocarcinoma tumor cells, which constituted the autocrine network of tumor cells. Correlation analyses were performed for each pair in the GEO GPL570 dataset, and we chose 44 upregulated and 63 downregulated pairs with coefficients > 0.3 (Figures 2(b) and 2(c), Supplement Table 3).
Expression Correlation Analyses Revealed Important Autocrine Ligand-Receptor Gene Pairs of Tumor-Associated Macrophages in Lung Adenocarcinoma. A total of 11,192 differentially expressed genes were identified in macrophages stratified by origin (neoplastic vs. nonneoplastic) in the 10× scRNA-seq cohort (Figure 3(a)). Similarly, 307 upregulated and 73 downregulated ligand-receptor pair genes were identified in tumor-associated macrophages, constituting their autocrine network. Correlation analyses were performed for each pair in the GEO GPL570 dataset, and we detected 84 upregulated and 25 downregulated ligand-receptor pair genes with coefficients > 0.3 (Figures 3(b) and 3(c), Supplement Table 3). The top five upregulated and downregulated gene pairs were as follows: TGFB1-ENG, B2M-HLA-F, SELPLG-ITGB2, SERPING1-LRP1, and AGRP-SDC3; and 2PRS19-CCR7, IL1RN-IL1RL2, CCL19-CXCR3, CD70-CD27, and CXCL13-CXCR5, respectively. We also compared the enriched pathways between macrophages of different origins in the scRNA-seq cohort; the glycolysis pathway remained the leading enriched pathway in tumor-associated macrophages.
Crosstalk between Tumor Cells and Macrophages Is Associated with Prognosis of Lung Adenocarcinoma Patients. To assess how macrophages connect with tumor cells in lung adenocarcinoma, we chose 52 gene pairs in which the ligands were highly expressed in tumor-associated macrophages and the receptors were highly expressed in tumor cells (Figure 4(a)). The top five upregulated gene pairs were: TGFB1-ENG, TGM2-TBXA2R, AGRP-SDC3, HLA-G-KIR2DL4, and GNAI2-TBXA2R. In total, 54 ligands or receptors in the network showed prognostic significance in the GPL570 GEO cohort (Supplement Table 3). We selected pathways containing the top five upregulated ligand-receptor gene pairs and analyzed the expression changes of genes in these pathways in the scRNA-seq cohort. We found a trend of overexpression of genes in the TGF-β signaling pathway of cancer cells, suggesting potential pathway activation of TGFB1-ENG at the single-cell level (Supplement Figure 4A). Gene functional enrichment analysis of GO suggested that the crosstalk between macrophages and tumor cells was significantly associated with cytokine production and secretion (Supplement Figure 5A).
To evaluate how lung adenocarcinoma tumor cells communicate with macrophages, we selected 70 gene pairs in which the ligands in tumor cells and the receptors in tumor-associated macrophages were upregulated, respectively (Figure 4(b)). The top five upregulated gene pairs were TGFB1-ENG, HSPG2-PTPRS, HLA-G-CD4, BMP5-ACVR2A and MFGE8-ACVR2A. In total, 80 ligands or receptors in the network showed prognostic significance in the GPL570 GEO cohort (Supplement Table 3). We selected the pathways containing the top five upregulated ligand-receptor gene pairs and analyzed the gene expression changes of these pathways in the scRNA-seq cohort. We found a trend of overexpression of genes in the allograft rejection and antigen processing and presentation signaling pathways of tumor-associated macrophages, which suggested potential activation of HLA-G-CD4 signaling at the single-cell level (Supplement Figures 4B and 4C). Then, gene functional enrichment analysis of GO indicated that the communications between tumor cells and macrophages were significantly related to the processes shown in Supplement Figure 5B.
Heterogeneities of Interaction Roles of Ligand-Receptor Gene Pairs within Tumor Cells and Subtypes of Tumor-Associated Macrophages.
Considering the heterogeneities of tumor-associated macrophages, we sought to display the differences in the interaction roles of ligand-receptor gene pairs in the autocrine and paracrine networks of tumor cells and macrophages. In the scRNA-seq cohort, we reclustered the tumor-associated macrophages, and 4 subtypes were revealed in our study (Figure 5(a)). We also calculated M1/M2 polarization and pro-/anti-inflammatory scores based on a previous study [27]. The M1/M2 polarization and pro-/anti-inflammatory scores for each subtype of macrophages are shown in Figures 5(b) and 5(c). Then, interaction scores were evaluated for the significantly differentially expressed ligand-receptor gene pairs (Figure 5(d)).
Machine-Learning Prognostic Model Based on Ligand-Receptor Interactions.
To further investigate the prognostic significance of the above ligand-receptor gene pairs, we built a machine-learning model using the XGBoost algorithm. Differentially expressed ligands or receptors from the scRNA-seq analyses (Supplement Table 3) with prognostic value in the GSE68465 cohort were included for calculation. We enrolled a gene set composed of 155 genes for subsequent model construction. The entire GSE68465 cohort was randomly divided into training and test datasets at a ratio of approximately 2:1. The machine-learning high- and low-risk predictive model achieved a precision of 0.94 and a recall of 0.78 in the randomly selected test dataset from the GSE68465 cohort. We then adopted the TCGA cohort to validate the XGBoost predictive model; there was a significant prognostic difference between the predicted high- and low-risk groups in the TCGA cohort (P = 0.029, Figure 6).
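As a rough illustration of this modelling step, the following sketch trains an XGBoost classifier on a synthetic 155-gene matrix with a 2:1 train-test split and reports precision and recall; the split ratio and feature count follow the text, whereas the hyperparameters, labels and data are illustrative assumptions, not the authors' configuration.

```python
# A rough sketch of the risk-group classifier described above: a 2:1 train-test
# split over a 155-gene matrix, as in the text. The synthetic data, labels and
# hyperparameters are illustrative assumptions, not the authors' configuration.
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# stand-in for the GSE68465 expression matrix and high-/low-risk labels
X, y = make_classification(n_samples=440, n_features=155, n_informative=30,
                           random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, stratify=y, random_state=42)  # ~2:1 split

model = XGBClassifier(n_estimators=500, max_depth=3, learning_rate=0.05,
                      eval_metric="logloss")
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))
```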
Validations of Hub Ligand-Receptor Gene Pairs in Tumor Cells and Macrophages in Lung Adenocarcinoma.
We used flow cytometry to validate lung adenocarcinoma cells, normal epithelial cells and macrophages with cell markers (EPCAM, FOLR1 and CD163). Supplement Figure 6 shows the reliability of the cell markers selected in this study. EPCAM+/FOLR1− cells accounted for a larger proportion in tumor samples, while there were more EPCAM−/FOLR1+ cells in normal lung tissues (Supplement Figures 6A-6D). Moreover, CD163+ cells accounted for a larger proportion in tumor samples than in normal lung tissues, which was mainly consistent with our initial scRNA-seq cell typing of macrophages (Supplement Figures 6E-6H).
In the TIMER database, we selected the top ligand or receptor genes associated with the cell-to-cell paracrine or autocrine communications of macrophages. As shown in Supplement Figure 7, the selected ligand or receptor genes were significantly associated with the level of macrophage infiltration in the TCGA cohort. This supported the potential significance of the ligand and receptor genes in macrophages identified in the preceding scRNA-seq investigations. Furthermore, we investigated the expression levels of selected ligand or receptor genes in the sorted cells described above by qRT-PCR. We found that TGFB1, ENG, TGM2, TBXA2R, HSPG2 and PTPRS were significantly upregulated in lung adenocarcinoma cells (Supplement Figure 8A). Also, TGFB1, ENG, B2M, HLA-F, SELPLG and ITGB2 were significantly increased in tumor-associated macrophages compared with nontumor-associated macrophages (Supplement Figure 8B). These findings were consistent with the scRNA-seq results, and the ligand and receptor genes identified in the scRNA-seq analyses showed significant expression changes in tumor cells and tumor-associated macrophages.
Discussion
An increasing number of studies have revealed the crucial roles of the tumor microenvironment in cancer proliferation, invasion, metastasis and therapeutic efficacy, especially in lung adenocarcinoma [6,33,34]. However, the most commonly used bulk RNA-seq fails to reflect intrinsic expression differences and cell heterogeneities within tumors and the surrounding stromal cells. Moreover, cell-to-cell crosstalk within the tumor microenvironment has not been fully investigated. The establishment of multicellular gene networks may facilitate the identification of promising biomarkers for predicting the prognosis and therapeutic resistance of cancer patients [35,36]. We are now able to explore cell-to-cell communications in lung adenocarcinoma as a result of scRNA-seq [14,34]. Here, we explored the network of cell-to-cell crosstalk within lung adenocarcinoma cells and macrophages by analyzing the coexpression of ligand-receptor pairs. The tumor microenvironment is of great importance in promoting tumor proliferation, invasion and metastasis. Macrophages are enriched in the core site and play roles in biological functions such as migration, metabolism and polarization [37]. First, we explored the network of autocrine ligand-receptor gene pairs of tumor cells in lung adenocarcinoma. We found that TGFB1 and its binding partner ENG were both highly expressed in tumor cells, and the TGFB1-ENG gene pair occupied a key position in the network (Figure 2(b)). The comparison of enriched pathways between tumor cells and normal epithelial cells was consistent with the previous literature with respect to cancer proliferation, invasion and metastasis [38][39][40][41]. Second, we analyzed the network of autocrine ligand-receptor gene pairs of macrophages in lung adenocarcinoma. Some of the selected ligand or receptor genes were found to have potentially vital roles in these communications, in line with the previous literature. For example, LRP1 is an endocytic and cell-signaling receptor that regulates cell migration. Staudt et al. observed that LRP1 mediated macrophage recruitment and angiogenesis in tumors [42]. In this study, we detected a significant role of LRP1 in the network of macrophage autocrine signaling (Figure 3(b)). Also, previous research has shown that CXCR3 is correlated with decreased M2 macrophage infiltration and a favorable prognosis in gastric cancer [43]. Our results indicated that CXCR3 was downregulated in tumor-associated macrophages (Figure 3(c)).
In the differential pathway analyses, the glycolysis pathway was the leading one in tumor-associated macrophages. Studies have found significant roles of immunometabolism in the tumor microenvironment, suggesting potential therapeutic implications [44][45][46][47]. Finally, we established the network of crosstalk between tumor cells and macrophages in lung adenocarcinoma. In this study, we found that overexpression of TGM2 conferred a significantly worse survival in the GPL570 GEO cohort and had an active part in the paracrine crosstalk (Figure 4(a) and Supplement Table 3). Furthermore, we identified M1/M2 polarization and pro-/anti-inflammatory tumor-associated macrophages based on previous studies [27]. Then, we explored potential associations of ligand-receptor gene pairs with cell-to-cell communications among subclusters of cells (tumor cells and macrophages). In addition, we established a machine-learning model to predict survival based on the identified ligand-receptor pairs in lung adenocarcinoma. Good performance in both the test and validation cohorts suggested the significance of autocrine and paracrine signaling in tumorigenesis and progression. Taken together, our study provides a landscape of the autocrine interactions and cell-to-cell communications within macrophages and tumor cells, which may help guide future experiments. Traditional bulk RNA-seq fails to reveal the heterogeneity of gene profiling and tumor-infiltrating cells [11]. Recently, in silico algorithms have been developed to estimate the tumor microenvironment using bulk RNA-seq; however, these methods are still not as direct and thorough as scRNA-seq [48,49]. The use of scRNA-seq may provide new insights into potential targets or cell-specific abnormally expressed genes. For example, we observed that PLXNA1, PLXNA2 and PLXNA3 were all significantly associated with prognosis in the GPL570 GEO dataset (Supplement Table 3). However, few studies have focused on the plexin-A family in terms of cancer progression or tumor-associated macrophages. The advancements of scRNA-seq have greatly facilitated novel approaches for precision and translational medicine [50]. For example, Kim et al. adopted scRNA-seq and extensively characterized the molecular and cellular dynamics in metastatic lung adenocarcinoma [51]. By trajectory analyses, they detected the transformation of proinflammatory monocytes into macrophages, with cells losing their proinflammatory nature and gaining anti-inflammatory signatures [51]. The identification of transitions and subpopulations during this process revealed potential targets in cancer-microenvironment interactions.
There were limitations of this research that should be mentioned. We investigated the crosstalk of the autocrine and paracrine networks of macrophages and validated the strategies we employed in the scRNA-seq analyses by flow cytometry and qRT-PCR. However, the detailed mechanisms of these ligands and receptors will require further validation in vitro and in vivo, for instance by immunofluorescence. Moreover, the activation of ligand-receptor interactions and their downstream pathways requires confirmation in the context of the communication between macrophages and tumor cells. Thorough experimental plans may be needed, especially for the top-listed ligand-receptor pairs. Such validations could further lead to the identification of effective signatures with predictive value for survival and therapeutic resistance [35]. Furthermore, with the extensive application of scRNA-seq, an increasing number of tools have been built to model not only functional intercellular communications but also intracellular gene regulatory networks, such as scMLnet [52]. These tools may facilitate the establishment of crosstalk networks. More importantly, the extensive morphologic heterogeneities among tumors, including tumor cellularity, extent, and the composition of matrix and vascularity, should also be further considered, which requires a highly precise evaluation and extraction process for single cells. Kim et al. collected different samples, like pleural fluids and lymph node or brain metastases, to elucidate the cellular dynamics in LUAD progression [51]. However, we still need to consider the differences in clinicopathological features among patients, like EGFR mutation status and ground-glass imaging features, which will require a larger study population.
Conclusion
We explored the landscape of cell-to-cell communication and crosstalk between macrophages and tumor cells in lung adenocarcinoma. Hub genes with prognostic significance in the network were also identified. The machine-learning predictive model showed the significance of ligand and receptor genes in tumor progression.
Data Availability
The datasets generated and/or analyzed during the current study are available from the previous literature listed in the references, from public datasets, and from the corresponding authors on reasonable request.
Conflicts of Interest
All authors have no conflicts of interest to declare.
Supplementary Materials
Supplement Table 1: Characteristics of the 21 LUAD patients included in this study for scRNA-seq analysis. Supplement Table 2: Baseline characteristics of enrolled patient cohorts from GEO and TCGA databases. Supplement Table 3: Identified significant ligand-receptor gene pairs in the cell-to-cell communications within tumor cells and macrophages in lung adenocarcinoma. (Supplementary Materials)
"Medicine",
"Biology"
] |
Research Paradigm: A Philosophy of Educational Research
This paper attempts to provide insights into the research paradigm as the philosophical foundation of educational research. The main purpose of this paper is to provide prospective researchers with basic ideas and knowledge about research paradigms, with the claim that a paradigm constitutes a philosophy of educational research. Different books and journal articles were consulted, reviewed and discussed to prepare this paper. After accumulating ideas and insights on research paradigms, the paper begins with an overview of research in terms of its basic features. It then introduces research paradigms as research philosophy, followed by the major components of research paradigms, viz. ontology, epistemology, methodology and axiology. The key components of research paradigms are defined and discussed in terms of their basic premises in relation to educational research contexts. Moreover, the paper also presents a brief discussion of the implications of research paradigms in educational research. Keywords— Axiology, epistemology, methodology, ontology, research paradigm, research implications.
INTRODUCTION
Research is an organized and systematic approach of inquiry into a specific phenomenon. It refers to the process through which the researcher arrives at answers to research questions. The term research is defined in various ways by different scholars in the field. Creswell (2012) defines research as "a process of steps used to collect and analyze information to increase our understanding of a topic or issue" (p. 3). Here, Creswell regards research as a process followed in understanding a certain issue through the collection and analysis of data. Moreover, Denzin and Lincoln (2005) describe research as "an organized scholarly activity that is deeply connected to power" (p. 87). This definition implies that research can be understood as a systematic scholarly process of acquiring knowledge, power and position in the context in which it takes place. Similarly, Cohen, Manion and Morrison (2008) write, "Research is the systematic and scholarly application of the principles of a science of behavior to the problems of people within their social context" (p. 48). From this definition, it can be inferred that research follows scientific procedures to study phenomena and problems in the social context where the problem exists. It allows interaction between researchers and social behaviors and ongoing social activities. In the course of research, the researcher has to supply the philosophical and theoretical foundation of his/her research work. Moreover, the researcher has to establish his/her position for viewing the world and its phenomena, which guides him/her towards certain destinations. Therefore, this paper attempts to define and discuss research paradigm in terms of its four philosophical perspectives. Moreover, the paper also presents some implications of research paradigm for educational research. Regarding the methodology, the paper has been prepared by consulting secondary sources of data: different books, journal articles and research reports were reviewed, and ideas and statements have been extracted to support the discussion throughout the paper.
II. RESEARCH PARADIGM: A PHILOSOPHICAL FOUNDATION OF RESEARCH
Simply put, research paradigm refers to the theoretical or philosophical ground for a research work. It is viewed as a research philosophy. The American philosopher Thomas Kuhn (1962) first used the word paradigm in the field of research to mean a philosophical way of thinking. In educational research, the term paradigm is used to describe a researcher's 'worldview' (Mackenzie & Knipe, 2006). This worldview is the perspective, or thinking, or school of thought, or set of shared beliefs, that informs the meaning or interpretation of research data. In the same regard, Willis (2007) defines research paradigm as "a comprehensive belief system, worldview or framework that guides research and practice in a field" (p. 8). In this vein, Lather (1986) mentions that a research paradigm inherently reflects the researcher's beliefs about the world that s/he lives in and wants to live in. It constitutes the abstract beliefs and principles that shape how a researcher sees the world, and how s/he interprets and acts within that world. Moreover, Hughes (2010) says, "paradigm is perceived as a way of seeing the world that frames a research topic and influences the way that researchers think about the topic" (p. 35). Fraser and Robinson (2004) further argue that a paradigm is "a set of beliefs about the way in which particular problems exist and a set of agreements on how such problems can be investigated". Likewise, Guba and Lincoln (1994), who are leaders in the field, define a paradigm as "a basic set of beliefs or worldview that guides research action or an investigation". Similarly, Denzin and Lincoln (2000) state that a paradigm is a human construction, which deals with first principles or ultimates, indicating where the researcher is coming from so as to construct the meaning embedded in data. From the above discussions, we can say that a research paradigm constitutes the researcher's worldview, abstract beliefs and principles that shape how he/she sees the world, and how s/he interprets and acts within that world. It is the lens through which a researcher looks at the research topic and examines the methodological aspects of the research work on the basis of a certain philosophical foundation. In a similar vein, Kivunja and Kuyini (2017) mention that paradigms are important because they provide the beliefs and dictates which, for scholars in a particular discipline, influence what should be studied, how it should be studied, and how the results of the study should be interpreted. The paradigm defines a researcher's philosophical orientation and exerts significant implications for every decision made in the research process, including the nature of reality, the types and sources of knowledge, and the choice of methodology and methods. Thus, all research is required to be based on some underlying philosophical assumptions about what constitutes 'valid' research. In order to conduct and evaluate any research, it is therefore important to know what these assumptions are. So, the following section will briefly discuss the major components of research paradigm, which also constitute the philosophical assumptions of research paradigms.
III. COMPONENTS OF RESEARCH PARADIGMS
As the literature suggests, a research paradigm is a basic and comprehensive belief system through which to view research phenomena. It is the researcher's worldview perspective, or thinking, or school of thought, or set of shared beliefs that informs the meaning or interpretation of research data. In the course of research, it is important for the researcher to be aware and informed about his/her position on seeing and observing the world and its phenomena. It means he/she needs to have clear philosophical perspectives about how reality or truth is viewed, how knowledge is gained and by what methods and methodology, and how values are addressed in the research being carried out under a particular research paradigm. Thus, the perspectives and assumptions through which reality, knowledge, methodological approaches and values are defined under each paradigm are simply known as the components of research paradigm. According to Lincoln and Guba (1985), a paradigm comprises four elements, namely epistemology, ontology, methodology and axiology. These perspectives pronounce ontology as the nature of reality; epistemology as the nature of knowledge and the relationship between the knower and that which would be known; methodology as the appropriate approach to systematic inquiry; and axiology as the nature of ethics (Mertens, 2010). To be specific, ontology, epistemology, methodology and axiology are the components of research paradigm. Thus, it is important to have a firm understanding of these elements because they comprise the basic assumptions, beliefs, norms and values that each paradigm holds. In what follows, the next section presents a brief description of the four perspectives/components of research paradigm.
Ontology
Ontology deals with the philosophical assumptions about the nature of reality or existence. It is simply called the theory of reality. As Scotland (2012) says, ontology is a branch of philosophy concerned with the assumptions we make in order to believe that something makes sense or is real, or the very nature or essence of the social phenomenon we are investigating. Similarly, in Krauss's (2005) words, "ontology involves the philosophy of reality" (p. 758). Moreover, Scott and Morrison (2005) state that "ontology deals with the level of reality present in certain events and objects, but more importantly with the systems which shape our perceptions of these events and objects" (p. 170). Thus, ontology is the philosophical standpoint about the nature of existence or reality, of being or becoming, as well as the basic categories of things that exist. In other words: Is reality of an objective nature, or the result of individual cognition? What is the nature of the situation being studied? Therefore, ontology is essential to a researcher because it helps to provide an understanding of the things that constitute the world as it is known (Scott & Usher, 2004). It also helps the researcher orientate his/her thinking about epistemological and methodological beliefs in relation to the research problem so as to contribute to its solution.
Epistemology
Epistemology is another component of research paradigm, dealing with how knowledge is gained from different sources. It is simply known as the theory and philosophy of knowledge. Trochim (2000) contends, "epistemology is the philosophy of knowledge or how we come to know" (p. 758). Similarly, Blaikie (1993) describes epistemology as "the theory or science of the method or grounds of knowledge", expanding this into a set of claims or assumptions about the ways in which it is possible to gain knowledge of reality (as cited in Flowers, 2009, p. 2). In research, epistemology is used to describe how we come to know something; how we know the truth or reality. Regarding epistemology, Cooksey and McDonald (2011) state that it determines what counts as knowledge within the world. Moreover, it is concerned with the very bases of knowledge: its nature and forms, how it can be acquired, and how it can be communicated to other human beings. It focuses on the nature of human knowledge and comprehension that you, as the researcher or knower, can possibly acquire so as to be able to extend, broaden and deepen understanding in your field of research. While stating the epistemological assumptions of the research, the researcher seeks to answer questions such as: What is the nature of knowledge and the relationship between the knower and the would-be known (Guba & Lincoln, 1994)? Is knowledge something which can be acquired, or is it something which has to be personally experienced? What is the relationship between me, as the inquirer, and what is known? These questions are important because they help researchers position themselves in the research context so that they can discover what else is new, given what is known (Kivunja & Kuyini, 2017). Along with these questions, it is also important to know the answer to the question: what counts as knowledge?
To articulate the answers to these questions, it is essential to be informed about the sources of knowledge suggested by Slavin (1984): intuitive knowledge, based on beliefs, faith and intuition; authoritative knowledge, gathered from people in the know, books and leaders in organizations; logical knowledge, with reason as the surest path to knowing the truth; and empirical knowledge, derived from sense experiences and demonstrable, objective facts. Thus, epistemology is a philosophical perspective about the nature and sources of the knowledge gained during research. Epistemology is important because it helps to establish the faith put in the data. It affects how the researcher will go about uncovering knowledge in the social context that he/she will investigate. It provides guidelines for researchers to define the scope of the entire research.
Methodology
Methodology is an important component of research paradigm. It deals with the 'how' aspects of the inquiry process. Keeves (1997) states that methodology is the broad term used to refer to the research design, methods, approaches and procedures used in an investigation that is well planned to find out something. From this, it can be inferred that methodological considerations in a paradigm simply include the participants, the instruments used in data gathering, and the measures for data analysis through which knowledge is gained about the research problem. The methodology articulates the logic and flow of the systematic processes followed in conducting a research project so as to gain knowledge about a research problem. It includes the assumptions made, the limitations encountered and how they were mitigated or minimized. It focuses on how we come to know the world or gain knowledge about part of it (Moreno, 1947). Moreover, methodology is concerned with questions like: How shall I go about obtaining the desired data, knowledge and understandings that will enable me to answer my research question and thus make a contribution to knowledge? In a similar vein, Guba and Lincoln (1994) mention that the methodological question asks "how can the inquirer (would-be knower) go about finding out whatever he or she believes can be known?" (p. 108). From this, it is clear that methodological questions guide the researcher through the process of knowing by which the research questions are answered. Therefore, the researcher should have a clear understanding of the methodological assumptions to be employed in the course of his/her own research.
Axiology
Axiology is another component of research paradigm dealing with ethical issues that need to be considered during research work. It considers the philosophical approach to making decisions of value or the right decisions (Finnis, 1980). Therefore, it is called theory of value. It involves defining, evaluating and understanding concepts of right and wrong behaviour relating to the research. It considers what value we shall attribute to the different aspects of our research, the participants, the data and the audience to which we shall report the results of our research. Axiology addresses the questions such as: What is the nature of ethics or ethical behaviour? What values will you live by or be guided by as you conduct your research? What ought to be done to respect all participants' rights? What are the moral issues and characteristics that need to be considered? Which cultural, intercultural and moral issues arise and how will I address them? How shall I secure the goodwill of participants? How shall I conduct the research in a socially just, respectful and peaceful manner? How shall I avoid or minimize risk or harm, whether it be physical, psychological, legal, social, economic or other? (ARC, 2015, as cited in Kivunja and Kuyini, 2017).
Specifically, the theory of value is concerned with two aspects: ethics and aesthetics. Ethics is the philosophical approach to making the right decision. It also involves defining, evaluating and understanding concepts of right and wrong behavior. On this side, the researcher needs to consider typical ethical questions such as: What is good/bad? What is right/wrong? What is the foundation of moral principles? What is justice? Aesthetics deals with the study of the nature and value of works and of aesthetic experience. Typical aesthetic questions include: Why are works of research considered to be valuable? What is beauty? These questions might differ across research disciplines and research paradigms.
IV. IMPLICATIONS OF RESEARCH PARADIGM IN EDUCATIONAL RESEARCH
As stated and discussed above, research paradigm is the philosophical standpoint of the researcher from which research phenomena are observed and analyzed. It is the comprehensive belief system and philosophical worldview that guides the process and actions of the whole research activity. More specifically, research paradigm is a philosophical base of research dealing with the nature of reality, whether it is external or internal (i.e., ontology); the nature, type and sources of knowledge generation (i.e., epistemology); a disciplined approach to generating that knowledge (i.e., methodology); and the ethical issues that need to be considered in research (i.e., axiology). Moreover, the research paradigm guides the researcher in framing and proceeding with his/her research activity and in deriving meaning from the researched phenomena. In this regard, Joubish et al. (2011) opined that a paradigm guides the whole framework of beliefs, values and methods within which research takes place. The researcher should establish a clear position regarding the reality that his/her research assumes, the nature and sources of the knowledge that the research derives, and the methods he/she employs to derive the meaning of the researched phenomenon, and he/she needs to be equally sensitive to the values of the research activity. For all these concerns, the researcher needs to be grounded in a certain research paradigm and its basic philosophical perspectives. In addition, the research paradigm provides the philosophical foundation for the researcher to determine the basic philosophical dimensions, such as the ontology, epistemology, methodology and axiology of his/her research work.
Ontology is one of the important philosophical dimensions of research paradigm. It refers to the nature of reality and what human beings can know about it (Guba & Lincoln, 1994). In the course of research, the researcher should clearly define the ontological position of his/her research. The researcher needs to state how his/her research views the nature of the reality derived from the researched phenomena. He/she should have a clear position on whether the research yields a subjective or an objective reality. For this, the researcher needs knowledge of a specific research paradigm. The research paradigm provides a clear framework and guidelines to the researcher about the worldview of reality. In the same line, the nature of reality is determined by the nature of the phenomena to be researched. If the researched phenomenon concerns the relationship between different variables and the testing of hypotheses, it leads towards an objective reality, whereas if the phenomena to be researched concern human experiences and socio-cultural processes, the researcher's ontological beliefs will point to multiple realities. Therefore, the researcher should have an explicit understanding of research paradigm during the research.
Epistemology is another important component of research paradigm that the researcher needs to consider in the course of research. Epistemological assumptions refer to the nature of the relationship between the knower and what can be known (Guba & Lincoln, 1994). As research is a process of generating knowledge following certain procedures, the researcher can base it on a particular framework and research paradigm. In the same way, methodological consideration is another important component of research paradigm. In the course of research, the researcher should clearly define how he/she is going to find out the meaning of the phenomenon to be researched. Guba and Lincoln (1994) state that methodological assumptions refer to how the researcher can go about discovering social experience, how it is created, and how it gives meaning to human life. Thus, methodology is the theory and disciplined approach that informs how researchers gain knowledge in research contexts. In academic research, the researcher has to specify the subjects, tools, measures and techniques to be employed in his/her research work. In order to get a firm understanding of all methodological considerations, researchers can gain insight from the research paradigm, which clearly guides the 'how' aspects of the research.
Similarly, the research paradigm equally helps the researcher to define and determine the value system that his/her research addresses. In research, the researcher needs to consider ethical issues. The researcher should be clear about whether his/her research is value free or value laden. On this issue, the research paradigm provides an explicit framework and guidelines to the researcher; research paradigm is therefore important for the researcher. Nonetheless, different paradigms have different assumptions about the value system being either value free or value laden. The axiological assumptions of research paradigm help the researcher to think about the place of the researched subjects, the contexts and himself/herself in the entire research work. In addition, this component is also important for the researcher in determining what things and activities are good and acceptable and what are not. The researcher becomes aware of the ethical principles to be followed on his/her own side and on the side of the participants. Moreover, axiology makes the researcher aware of the respect, space and justice due to the participants in the entire research. Thus, insight into the research paradigm is essential for the researcher to address the ethical and aesthetic issues in the research work.
V. CONCLUSION
Research paradigm is simply known as the philosophical foundation or framework of a research work. It is also termed the comprehensive belief system and worldview that guides the researcher in framing his/her research process in a certain pattern. In other words, it is also taken as the philosophical position of the researcher in the research, in which he/she claims and justifies how he/she views reality and what his/her assumptions are about knowledge, methodology and value. To be clear, research paradigm explicitly states the researcher's positions on the ontology, epistemology, methodology and axiology of his/her research work. This philosophical positioning of the researcher becomes a philosophical dimension of his/her research. The philosophical base of the research guides the researcher to proceed through the entire process and derive meaning from the researched phenomenon. Therefore, knowledge of research paradigms is essential for the researcher to create his/her own research philosophy.
"Philosophy",
"Education"
] |
Artificial intelligence algorithm comparison and ranking for weight prediction in sheep
In a rapidly transforming world, farm data is growing exponentially. Realizing the importance of this data, researchers are looking for new solutions to analyse it and make farming predictions. Artificial Intelligence, with its capacity to handle big data, is rapidly becoming popular. In addition, it can also handle non-linear, noisy data and is not limited by the conditions required for conventional data analysis. This study was therefore undertaken to compare the most popular machine learning (ML) algorithms and rank them as per their ability to make predictions on sheep farm data spanning 11 years. Data cleaning and preparation were done before analysis, with winsorization used for outlier removal. Principal component analysis (PCA) and feature selection (FS) were performed and, based on these, three datasets were created for bodyweight prediction: PCA (wherein only PCA was used), PCA+FS (both techniques used for dimensionality reduction) and FS (only feature selection used). Among the 12 ML algorithms that were evaluated, the correlations between true and predicted values for the MARS algorithm, Bayesian ridge regression, ridge regression, support vector machines, the gradient boosting algorithm, random forests, the XgBoost algorithm, artificial neural networks, classification and regression trees, polynomial regression, K nearest neighbours and genetic algorithms were 0.993, 0.992, 0.991, 0.991, 0.991, 0.99, 0.99, 0.984, 0.984, 0.957, 0.949 and 0.734 respectively for bodyweights. The top five algorithms for the prediction of bodyweights were MARS, Bayesian ridge regression, ridge regression, support vector machines and the gradient boosting algorithm. A total of 12 machine learning models were developed for the prediction of bodyweights in sheep in the present study. It may be said that machine learning techniques can perform predictions with reasonable accuracies and can thus help in drawing inferences and making futuristic predictions on farms for their economic prosperity, performance improvement and, subsequently, food security.
The world population by 2050 is projected to increase to 9.9 billion, and the global demand for various meat and animal products is set to increase by over 70% in the next few decades 1 . Therefore, there is a dire need to increase food production by 2050 by intensifying production on almost the same amount of land while using the same resources. This puts pressure on the animal husbandry sector as well because there now is a need to produce more animals using the limited land, water and all other natural resources. It means that we need to find new and innovative approaches to produce more food, which is a huge challenge for animal scientists despite a vast genetic wealth 2,3 . To address this, new technologies are being adopted on animal farms, which are evolving from traditional to high-tech 4 . Farming operations are becoming more and more automated, and the use of sensors is increasing in all aspects of farm management. This is not just reducing drudgery and labour but is also leading to an exponential increase in the amount of data generated on a daily basis. The traditional methods and conventional strategies are not quite able to keep up with this enormous data, which is resulting in declining trends of production, especially in developing countries [5][6][7][8][9][10] .
As artificial intelligence is transforming all industries in a big way, it offers solutions to the analytic problems of animal husbandry and veterinary sciences 11 . These would help in improving many aspects of farm management which are important for reducing mortality and improving productivity 12 . ML techniques cannot just efficiently handle data but can also draw inferences that were hitherto unknown, because they possess capabilities that are not present in conventional techniques. The modelling tolerance of such methods is considerably higher than that of statistical methodologies, because there is no requirement for assumptions or hypothesis testing in ML.
Artificial neural networks.
The hyperparameter optimization graph for a thousand iterations is given in Fig. 2. The results of the training of ANNs are given in Table 1. Our results indicated that the PCA+FS dataset converged earlier than the other two datasets. The results obtained by hyperparameter optimization were further refined heuristically, but the models could not be improved any further; from this one may infer that the application of good searching algorithms was, in this case, enough to obtain optimum results. Of the three datasets, the PCA dataset showed the highest correlation coefficient of 0.977. This dataset also had the highest number of neurons per layer and showed the lowest MSE, MAE and loss when compared to the other datasets. The FS dataset alone performed better than the PCA+FS dataset; the reduction in the number of features in the latter was not enough to achieve the highest predictive ability. The search results yielded the sigmoid activation function as well as a low learning rate as the most appropriate for the prediction of body weights. During hyperparameter tuning, stochastic gradient descent (sgd) and Adam both performed well as optimizers, and ReLU and sigmoid both performed better than the rest as activation functions. Of the hyperparameters trained, ReLU (rectified linear unit) and Adam (adaptive moment estimation) were the best activation function and optimizer, respectively. The number of hidden layers was 9 for all three models after the application of genetic algorithms. With the increase in the number of iterations, the correlation coefficient also increased.
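To illustrate the kind of search described above, the sketch below tunes a multilayer perceptron with scikit-learn's randomized search over the options the text compares (ReLU vs. sigmoid activation, Adam vs. SGD optimizer, learning rate, nine hidden layers); the data, layer widths and iteration budget are illustrative assumptions.

```python
# An illustrative hyperparameter search for an MLP regressor, covering the
# options compared above (ReLU vs. sigmoid activation, Adam vs. SGD optimizer,
# learning rate, nine hidden layers); data and layer widths are assumptions.
from scipy.stats import loguniform
from sklearn.datasets import make_regression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=5.0, random_state=0)

search = RandomizedSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    param_distributions={
        "activation": ["relu", "logistic"],            # ReLU vs. sigmoid
        "solver": ["adam", "sgd"],                     # the two optimizers tried
        "learning_rate_init": loguniform(1e-4, 1e-1),
        "hidden_layer_sizes": [(64,) * 9, (32,) * 9],  # nine hidden layers
    },
    n_iter=20, cv=5, scoring="r2", random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```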
Genetic algorithms.
Genetic algorithms were able to predict the bodyweights of sheep, but less efficiently than the other algorithms; their prediction power was the lowest among all trained algorithms for body weight prediction. Among the three datasets (PCA, PCA+FS and FS) for body weight prediction, the PCA+FS dataset yielded the highest correlation coefficient between true and predicted values. All three models shared the same number of generations (100), fitness threshold (0.980), population size (300) and activation mutation rate (0.001). The RMSE, MAE, R² and correlation coefficient were 1.930, 1.248, 0.835 and 0.874 for the PCA dataset; 1.322, 1.031, 0.917 and 0.944 for the FS+PCA dataset; and 1.363, 1.036, 0.929 and 0.940 for the FS dataset. The best model evolved using genetic algorithms was thus the FS+PCA one, with RMSE, MAE and R² of 1.322, 1.031 and 0.917.
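For illustration only, the toy example below evolves linear-model weights with a simple genetic algorithm using the population size (300), number of generations (100) and mutation rate (0.001) reported above; it is a generic sketch of the technique, not the authors' implementation.

```python
# A toy genetic algorithm evolving linear-regression weights, using the
# population size (300), generations (100) and mutation rate (0.001) reported
# above; this is a generic sketch of the technique, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.5, 3.0, -1.0]) + rng.normal(scale=0.1, size=200)

def fitness(w):
    return -np.sqrt(np.mean((X @ w - y) ** 2))  # negative RMSE: higher is fitter

POP, GEN, MUT = 300, 100, 0.001
pop = rng.normal(size=(POP, X.shape[1]))
for _ in range(GEN):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-POP // 3:]]   # selection: fittest third
    i = rng.integers(0, len(parents), POP)
    j = rng.integers(0, len(parents), POP)
    cross = rng.random((POP, X.shape[1])) < 0.5     # uniform crossover
    pop = np.where(cross, parents[i], parents[j])
    mut = rng.random(pop.shape) < MUT               # sparse random mutation
    pop = pop + mut * rng.normal(scale=0.5, size=pop.shape)
best = pop[np.argmax([fitness(w) for w in pop])]
print("best RMSE:", round(-fitness(best), 3))
```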
Support vector machines.
The FS dataset had the highest correlation coefficient with the test labels after grid search over the hyperparameters; the best hyperparameters for this dataset were C: 1000, gamma: 1 and kernel: linear. Table 2 gives the results obtained from training and testing this algorithm. The linear kernel consistently outperformed the rbf kernel, which suggests that the weight prediction data is linearly separable. With the default kernel (rbf), support vector machines for body weight prediction had RMSE, MAE, R² and correlation of 1.569, 1.005, 0.832 and 0.944 for the FS dataset; 1.461, 1.012, 0.861 and 0.959 for the PCA+FS dataset; and 1.538, 1.025, 0.834 and 0.956 for the PCA dataset. Hyperparameter optimization revealed best hyperparameters of C: 1000, gamma: 1, kernel: linear for the FS dataset; C: 1000, gamma: 0.0001, kernel: rbf for the PCA+FS dataset; and C: 100, gamma: 0.001, kernel: rbf for the PCA dataset. The best-trained model had the parameters C: 1000, gamma: 1, kernel: linear.
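A hedged sketch of such a grid search is shown below; the C, gamma and kernel grid covers the reported best settings, while the synthetic data and scoring choice are assumptions.

```python
# A hedged sketch of the SVR grid search; the C/gamma/kernel grid covers the
# best settings reported above, while the synthetic data is an assumption.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=10, noise=2.0, random_state=0)

grid = GridSearchCV(
    SVR(),
    param_grid={
        "C": [1, 10, 100, 1000],
        "gamma": [1, 0.1, 0.01, 0.001, 0.0001],
        "kernel": ["linear", "rbf"],
    },
    cv=5,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_)  # the text reports C=1000, gamma=1, kernel=linear for FS
```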
Regression trees and random forests for bodyweight prediction.
Hyperparameter tuning improved the prediction results, with random search performing better than grid search for most predictions except the FS dataset, where grid search gave the best correlation results. Bootstrap was set to TRUE and max features to auto for the search algorithms. The highest correlation (0.990) was obtained for the FS dataset with grid search. Without hyperparameter tuning, the FS dataset performed best for regression trees. The FS dataset had the highest correlation among the datasets with all algorithms. Hyperparameter tuning improved the prediction ability of the random forests (Table 2).
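The following minimal sketch contrasts randomized and grid search for a random forest regressor, with bootstrap enabled as reported; the parameter grid and synthetic data are illustrative.

```python
# A minimal sketch contrasting randomized and grid search for a random forest
# regressor, with bootstrap enabled as reported above; the parameter grid and
# synthetic data are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=500, n_features=10, noise=2.0, random_state=0)

params = {"n_estimators": [200, 500, 1000], "max_depth": [None, 5, 10, 20]}
rf = RandomForestRegressor(bootstrap=True, random_state=0)

random_search = RandomizedSearchCV(rf, params, n_iter=8, cv=5, random_state=0)
random_search.fit(X, y)
grid_search = GridSearchCV(rf, params, cv=5)
grid_search.fit(X, y)
print(round(random_search.best_score_, 3), round(grid_search.best_score_, 3))
```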
Polynomial regression.
The highest correlation was found for the FS dataset, with the average correlation reaching up to 0.901. The 1st-degree polynomial gave the best-fit model. The training results for the algorithm are given in Table 3. The MAE values for the PCA, FS and FS+PCA datasets were 1.096, 0.709 and 1.078 respectively.
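A short sketch of degree selection for polynomial regression is given below; on synthetic (approximately linear) data the 1st-degree model wins, mirroring the result above, though the data and degree range are assumptions.

```python
# A short sketch of comparing polynomial degrees by cross-validated R²; on this
# synthetic (linear) data the 1st-degree model wins, mirroring the result above.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_regression(n_samples=300, n_features=5, noise=3.0, random_state=0)

for degree in (1, 2, 3):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(degree, round(score, 3))
```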
XGBoost. The FS dataset had the highest correlation coefficient for the testing dataset with the XGBoost algorithm. All values are indicated in Table 3. The time elapsed for running the algorithm was the greatest for the PCA+FS dataset; the wall times for the PCA, FS and FS+PCA datasets were 93 ms, 91 ms and 511 ms respectively. The colsample bytree, learning rate, max depth, min child weight, n estimators and subsample were 0.7, 0.05, 3, 5, 1000 and 0.5 for the PCA dataset; 0.7, 0.1, 3, 3, 1000 and 0.5 for the FS dataset; and 0.7, 0.01, 5, 5, 1000 and 0.7 for the FS+PCA dataset.
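For concreteness, the sketch below instantiates an XGBoost regressor with the FS-dataset hyperparameters listed above; the subsample value is an assumption, and the synthetic data stands in for the farm dataset.

```python
# For concreteness, an XGBoost regressor instantiated with the FS-dataset
# hyperparameters listed above; the subsample value and the synthetic data are
# assumptions standing in for the farm dataset.
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=2.0, random_state=0)

model = XGBRegressor(
    colsample_bytree=0.7,
    learning_rate=0.1,
    max_depth=3,
    min_child_weight=3,
    n_estimators=1000,
    subsample=0.5,  # assumed; the original text is ambiguous on this value
)
model.fit(X, y)
print(round(model.score(X, y), 3))  # R² on the training data
```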
K nearest neighbours.
The highest correlation between true and predicted values was found for the FS+PCA dataset (Table 3). The PCA dataset had the highest n-neighbours after hyperparameter tuning; the n-neighbours for the PCA, FS and FS+PCA datasets were 7, 4 and 3 respectively.
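A minimal sketch of tuning the number of neighbours follows; the search range brackets the reported values of 7, 4 and 3, and the data is synthetic.

```python
# A minimal sketch of tuning the number of neighbours; the search range
# brackets the 7, 4 and 3 neighbours reported above, on synthetic data.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=2.0, random_state=0)

search = GridSearchCV(KNeighborsRegressor(), {"n_neighbors": range(2, 11)}, cv=5)
search.fit(X, y)
print(search.best_params_)
```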
MARS for body weight prediction.
The correlation coefficient between predicted and true values was 0.993 when applying multivariate adaptive regression splines. The highest correlation coefficient was found for the FS dataset. All values are indicated in Table 3.
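A brief MARS sketch is shown below using the community py-earth package (an assumption, since this excerpt does not name the toolchain); the data and model settings are illustrative.

```python
# A brief MARS sketch using the community py-earth package (an assumption, as
# this excerpt does not name the toolchain); data and settings are illustrative.
import numpy as np
from pyearth import Earth

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 4))
y = 2 * np.maximum(0, X[:, 0] - 3) + X[:, 1] + rng.normal(scale=0.2, size=300)

model = Earth(max_degree=2)  # allow hinge-function interactions
model.fit(X, y)
pred = model.predict(X)
print("r =", round(np.corrcoef(y, pred)[0, 1], 3))  # true vs. predicted
```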
Algorithm ranking.
For bodyweight prediction, the MARS algorithm gave the best predictions based on the correlation coefficient (Table 4), and for breeding value prediction, the tree-based algorithms gave the best results; random forests had the highest correlation coefficient (Table 4). The FS dataset outperformed the PCA and PCA+FS datasets in most cases, the exceptions being genetic algorithms, neural networks (trained both by hyperparameter optimization and by heuristic modelling) and KNN (the latter only by a very narrow margin). For genetic algorithms, the dataset with the lowest number of features gave the best correlation coefficients. In the case of principal component regression, the PCA dataset performed best. Bayesian regression outperformed ridge regression by a small margin. The correlations between true and predicted values are given in Figs. 3 and 4.
Discussion
Overall, the values to be recorded at birth were more meticulously recorded than the parameters recorded later in the life of the animal. Missing values are universal in real-world datasets, and the use of winsorization to give the distribution more desirable statistical properties has also been published in the literature as a means of lowering the weight of influential observations and removing the unwanted effects of outliers without introducing more bias. Anderson et al. 27 converted a much higher range, viz. the upper and lower 10% of data, to the 90th percentile with little introduction of error. A two-sided winsorization approach was used in this study, which was also reported to be better than the one-sided approach by Chambers et al. 28 and Hamadani et al. 29 .
The results of the present study indicate that the number of features was effectively reduced in the dataset using principal component analysis, which substantially lowered the effective number of parameters characterizing the underlying model. The body weights taken at various ages from weaning had the greatest feature scores. This is expected, as it is also evident from the growth curves of various animals, in which body weight is the most important parameter 30 .
Feature selection has been shown by researchers to improve the working of learning algorithms both in terms of computation time and accuracy 31,32 . Our result of PCA reducing the multicollinearity to 1 corresponds with the results of many authors 33,34 , as PCA has been reported in the literature as one of the most common methods to reduce multicollinearity in a dataset. The FS dataset had high multicollinearity, as feature selection lessens the number of total features without dealing with the multicollinearity present within the dataset. It has been reported in the literature that multicollinearity does not affect the final model's predictive power or reliability. Model predictions for ridge regression and Bayesian ridge similar to ours were reported by 19 , who also used various machine learning techniques for the prediction of weights and reported high R² values approaching 0.988. A tenfold cross-validation for training the model was used, which was also reported to be the most appropriate by 19 . However, 35 used 20-fold cross-validation in their study to predict breeding values.
A high coefficient of determination (0.92) was also stated by Kumar et al. 36 and Adebiyi et al. 37 for the estimation of weight from measurements and the prediction of disease, while 38 reported R² values of 0.70, 0.784 and 0.74 for the prediction of body weights in three Egyptian sheep breeds, Morkaraman sheep and Malabari goats respectively. The R² value, as well as the coefficient of correlation, of the PCA dataset was greater than that of the PCA+FS dataset, from which it may be inferred not just that PCA is an effective technique for data reduction but also that further data reduction in the dataset caused some loss of variance.
Compared to heuristic modelling, optimization algorithms took more time to execute. As the number of computations increases, they become increasingly difficult to solve and consume more and more computational power, sometimes even causing system crashes. This is because optimization algorithms test a much higher number of options available to tune the best model.
Our results indicate that all three datasets trained are comparable in terms of the correlation coefficient and training error. The PCA+FS dataset converged earlier than the other two datasets upon hyperparameter tuning, which may be because the number of features within this dataset is less than in the other two, and hence convergence occurred earlier. This is important for training efficiency, especially when datasets are large and the computational power available to the researcher is limited.
Out of the three datasets trained using both hyperparameter optimization and then heuristic modelling, the PCA dataset showed the highest correlation coefficient of 0.977. From this one may infer that PCA efficiently took care of the selection of features that could sufficiently explain the variance of the data. FS alone performed better than the PCA+FS dataset, which suggests that some of the explained variance may have been lost when both techniques were used together; the reduction in the number of features in this dataset alone was not enough to achieve the highest predictive ability. A higher correlation of 0.93 was reported for the prediction of fat yield by ANN by Shahinfar et al. 39 . Peters et al. (2016) used the MLP-ANN model to achieve predictive correlations of 0.53 for birth weight, 0.65 for 205-day weight and 0.63 for 365-day weight, which are much lower than our prediction. Khorshidi-Jalali and Mohammadabadi 40 compared ANNs and regression models for predicting body weight in Cashmere goats and found the artificial neural network model to be better; however, unlike our results, this value was 0.86 for ANN.
Genetic algorithms performed poorly when compared to the other algorithms. The lower-than-expected values may also be the reason that genetic algorithms are seldom used for direct regression. Genetic algorithms were also reported to be better suited for optimizing large and complex parametric spaces 41 .
For SVM, the FS dataset had the highest correlation coefficient with the test labels after grid search over the hyperparameters. The linear kernel consistently outperformed the rbf kernel, suggesting that the weight prediction data is linearly separable. The rbf kernel has been reported to perform better in nonlinear function estimation, dampening noise to achieve a high generalization ability 42 . Ben-Hur et al. 43 also observed that nonlinear kernels, Gaussian or polynomial, lead to only a slight improvement in performance when compared to a linear kernel. However, using a linear kernel, Long et al. 44 reported a lower correlation coefficient of 0.497-0.517 for the prediction of quantitative traits. Alonso et al. 45 also used 3 different SVR techniques for the prediction of body weights and reported higher prediction errors (MAE) of 9.31 ± 8.00, 10.98 ± 11.74 and 9.61 ± 7.90 for the 3 techniques. Huma and Iqbal 19 also used support vector regression for the prediction of body weights in sheep and reported a correlation coefficient, R², MAE and RMSE of 0.947, 0.897, 3.934 and 5.938 respectively, which are close to the values in the present research.
Hyperparameter tuning improved the prediction results, with random search performing better than grid search for most predictions except FS, where grid search gave the best correlation results. Random search is very similar to grid search, yet it has been consistently reported to produce better results comparatively 46 by effectively searching a larger, less promising configuration space.
Due to differences in the relevance of hyperparameters for different models, grid search sometimes becomes a poor choice for constructing algorithms for different data sets. Hyperparameter tuning improved the prediction ability of the random forests, which has also been published by 47,48 . Huma and Iqbal 19 also used regression trees for the same prediction and reported R² and MAE of 0.896 and 4.583. They also used random forests for the prediction of body weights in sheep and reported a correlation coefficient, R², MAE and RMSE of 0.947, 0.897, 3.934 and 5.938 respectively. When compared with other models, many authors 19,49 have stated that the random forests method and its variants produce the lowest errors. Lower values for random forests (RF) were reported by Jahan et al. 50 , who reported an R² of 0.911 for the bodyweight prediction of Balochi sheep. Çelik and Yilmaz 51 also used the CART algorithm and reported lower values than the present study: R² = 0.6889, Adj. R² = 0.6810, r = 0.830 and RMSE = 1.1802, respectively. RF has also been suggested as an important choice for modelling complex relationships between variables, as compared to many other ML models, based on its features. Similar to the results reported in the present study, random forests were also generally found to outperform other decision trees, but their accuracy was reported to be lower than that of gradient-boosted trees. Boosting algorithms are reported to perform well under a wide variety of conditions 52,53 . It is, however, important to mention that the convergence of algorithms also depends to a large extent on the data characteristics 54,55 .
Morphometric parameters along with body weights were used for the prediction of body weight with high correlation in this study. The highest variation in body weight was reported to be accounted for by the combination of chest girth, body length and height for the prediction of body weights by 56 .
XgBoost outperformed the gradient boost algorithm for the prediction of bodyweights. For the XGBoost algorithm, both the accuracy and the training speed were found to be better. This has also been published by Bentéjac et al. 57 , who compared XGBoost to several gradient-boosting algorithms. The XGBoost algorithm was also shown to achieve a lower error value in comparison to random forests by Niang et al. 58 . XGBoost uses advanced regularization (L1 and L2), which may have been the reason for the improved model generalization capabilities 36 .
The greatest correlation was found for the FS+PCA dataset, which means that through this technique a better prediction can be made using the least number of features. Support vector regression gave a slightly better convergence than k-nearest neighbours, which was also stated by Ramyaa et al. 59 in their study on phenotyping subjects based on body weight. KNN results have also been reported to be somewhat biased towards the mean for extreme values of the independent variables, but this did not affect the results of the present study.
The FS dataset gave the highest correlation coefficient using the multivariate adaptive regression splines algorithm. Again, the presence of a greater number of features than in the other two datasets could have contributed to this. R² values close to the one obtained in this study, at 0.972 from the MARS algorithm, were reported for the prediction of the fattening final weight of bulls 60 . Çelik and Yilmaz 51 also used MARS for bodyweight prediction and reported R² equal to 0.919, RMSE equal to 0.604 and r equal to 0.959. The MARS algorithm was reported to be a flexible model which revealed interaction effects and minimized the residual variance 61 .
For body weight prediction, the MARS algorithm gave the best predictions based on the correlation coefficient, and for breeding value prediction, tree-based algorithms gave the best results. The FS dataset outperformed the PCA and PCA+FS datasets in most cases, except for genetic algorithms, neural networks trained by both hyperparameter optimization and heuristic modelling, and KNN (and there only by a very narrow margin). This may be attributed to the greater number of features in the FS dataset, each adding some explained variance towards the predicted variable. Bayesian regression outperformed ridge regression by a small margin, suggesting that multicollinearity within the FS dataset did not cause any convergence issues, which is also supported by the literature.
Conclusion and recommendations
Artificial intelligence is a promising area with the potential to make accurate predictions about various aspects of farm management and can thus be a viable alternative to conventional strategies. Twelve deployable and reusable models were developed in this study for the prediction of body weights at 12 months of age. All the models had high prediction ability, with tree-based algorithms generally outperforming other techniques in regression-based tasks. These models, if customized and deployed on farms, would help in taking informed decisions. Farm modernization would thus benefit animal production and the farm economy, contributing to the larger goal of achieving food security.
Methods
Data preparation. To predict body weight, data covering 11 years (2011-2021) for the Corriedale breed were used, collected from an organized sheep farm in Kashmir. The total number of data points available for the study was 37,201. The initial raw data included animal numbers (brand number, ear tag), date of birth, sex, birth coat, litter size, weaning date, parent records (dam number, sire number, dam weight, dam milking ability, parturition history), coat colour, time of birth, body weights (weekly body weights up to the 4th week, fortnightly weights up to the 6th fortnight, monthly body weights up to the 12th month), monthly morphometric measurements up to weaning, weather data (daily temperature and humidity), disposal records, and treatment records. Features were determined heuristically as well as using the techniques discussed later. The raw data were cleaned, and duplicate rows and rows with too many missing values were removed. Data imputation was done iteratively using Bayesian ridge regression 62. Winsorization was used for handling outliers, the data were appropriately encoded, and standardization was performed by subtracting the mean from each feature and dividing by the standard deviation. The data were split into training and testing sets; the optimal train-test split was heuristically determined as 10% testing and 90% training data. The training dataset was again split for validation, with the validation data proportioned to 10% of the training data.
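The preprocessing chain described above maps naturally onto scikit-learn. The following is a minimal sketch of that chain, assuming pandas, scikit-learn, and SciPy; the file name, column names, and winsorization limits are hypothetical placeholders, and categorical encoding is omitted for brevity.

```python
import numpy as np
import pandas as pd
from scipy.stats.mstats import winsorize
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split

df = pd.read_csv("corriedale_records.csv")  # hypothetical file name
# Drop duplicate rows and rows with too many missing values
df = df.drop_duplicates().dropna(thresh=int(0.5 * df.shape[1]))

# Iterative (multivariate) imputation using Bayesian ridge regression
num_cols = df.select_dtypes(include=np.number).columns
imputer = IterativeImputer(estimator=BayesianRidge(), max_iter=10, random_state=1)
df[num_cols] = imputer.fit_transform(df[num_cols])

# Winsorize each numeric feature to damp outliers (limits are an assumption)
for col in num_cols:
    df[col] = winsorize(df[col].to_numpy(), limits=(0.05, 0.05))

# Standardize: subtract the mean, divide by the standard deviation
df[num_cols] = (df[num_cols] - df[num_cols].mean()) / df[num_cols].std()

# 90/10 train/test split, then 10% of the training data held out for validation
X, y = df.drop(columns="weight_12m"), df["weight_12m"]  # hypothetical label column
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.10, random_state=1)
```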
To decrease the number of input variables in the dataset and to select those contributing most to the variance, dimensionality reduction was performed using principal component analysis (PCA) and feature selection. PCA is a statistical technique that linearly converts correlated features into a set of uncorrelated features via an orthogonal transformation. Feature selection was done in Python based on the F-test estimate of the degree of linear dependency between two numerical variables: the input and the output. Feature selection was performed both on the original datasets and after extracting features with PCA. The number of input variables was held constant across all the ML methods used in this study to eliminate the bias that an uneven number of features/input variables could cause during training. Thus, three datasets were created (a code sketch of their construction follows this list):
• The principal component analysis (PCA) dataset, in which the PCA technique alone was used for dimensionality reduction
• The feature selection (FS) dataset, where the F-test estimate of the degree of linear dependency between two numerical variables was used for dimensionality reduction
• The PCA+FS dataset, wherein both techniques were used to achieve a much-reduced number of features
• Pure morphometric measurements were also used for predicting body weight using ANNs. These constituted the DM dataset, which was used for the prediction of weaning weight; this was done because morphometric measurements were very scarce in the dataset after weaning.
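As referenced above, the sketch below shows how the three reduced datasets could be constructed with scikit-learn, reusing X_tr and y_tr from the preprocessing sketch; the feature budget k = 10 is an assumption, since the retained feature counts are not restated here.

```python
# Sketch of building the PCA, FS, and PCA+FS datasets (k is an assumption).
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_regression

k = 10  # assumed feature budget, kept equal across datasets to avoid bias

# PCA dataset: orthogonal transformation to uncorrelated components
X_pca = PCA(n_components=k).fit_transform(X_tr)

# FS dataset: F-test of the linear dependency between each input and the label
fs = SelectKBest(score_func=f_regression, k=k).fit(X_tr, y_tr)
X_fs = fs.transform(X_tr)

# PCA+FS dataset: PCA first (all components), then F-test selection on them
X_all_pca = PCA().fit_transform(X_tr)
X_pca_fs = SelectKBest(score_func=f_regression, k=k).fit_transform(X_all_pca, y_tr)
```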
Body weights at 12 months of age were used as labels. Weaning weight was also used as the label for one of the algorithms.
Machine learning techniques.
A total of 11 AI algorithms were employed in this study. The weight parameter was predicted using body measurements as well as earlier body weights as input attributes to artificial neural networks. Hyperparameters were optimized using grid-search and random-search algorithms and later by heuristic tuning as well.
A comparison of the following machine learning algorithms was done in this study.

Bayesian ridge regression (BRR) 63. This technique works on the principle that the output 'y' is drawn from a probability distribution rather than being a single value. Because of this probabilistic approach, the model is expected to train better. The prior for the coefficient 'w' is derived from a spherical Gaussian, and L2 regularization is applied, which is an effective approach for multicollinearity. The cost function includes a lambda penalty term that shrinks the parameters, thereby reducing model complexity to obtain unbiased estimates. Default values of 1e−6 were used for alpha 1 and alpha 2, the hyperparameters for the shape and rate parameters of the prior distribution.
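A minimal scikit-learn sketch of this setup follows; the stated alpha values match the library defaults, and X_tr, y_tr, X_val, y_val are the splits from the preprocessing sketch earlier.

```python
# Bayesian ridge regression with the gamma-prior hyperparameters quoted above.
from sklearn.linear_model import BayesianRidge

brr = BayesianRidge(alpha_1=1e-6, alpha_2=1e-6)  # scikit-learn defaults
brr.fit(X_tr, y_tr)
print("validation R^2:", brr.score(X_val, y_val))
```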
Artificial neural networks 64. This popular machine-learning technique is inspired by the neurons found in animal nervous systems. A neural network is a group of units/nodes connected together to form artificial neurons. Numbers, like signals in an actual brain, are transmitted among the artificial neurons, and the output of each neuron is calculated after a non-linearity is applied to the sum of all its inputs. In the larger picture, a network of neurons is formed when many such neurons are aggregated into layers; the more neurons there are, the denser the network, and the addition of many inner layers is what makes the network deep. The hyperparameter ranges for the artificial neural networks were: iterations = 1000, 200, and 1000 for the PCA+FS, PCA, and FS datasets, respectively; learning rate = 0.001-0.5 for all three datasets; dropout rate = 0.01-0.9 for all three datasets; and hidden layers = 1-5 for the PCA+FS dataset, 1-7 for the PCA dataset, and 1- for the FS dataset.

Classification and regression trees algorithm (CART) 66. The CART algorithm works by building a decision tree based on Gini's impurity index, which it uses to arrive at a final decision. Analogous to an actual tree, each branching point or fork represents a decision, and the predictor variable is segregated towards one of the many branches; the end node yields the final target variable.
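For the ANN search ranges quoted above, one configuration inside those ranges might look as follows in Keras. This is a sketch only: the layer width (64 units) and epoch count are assumptions, and the paper's exact architecture is not specified.

```python
# One ANN configuration within the stated ranges (3 hidden layers,
# learning rate 0.001, dropout 0.1); layer width and epochs are assumptions.
import numpy as np
import tensorflow as tf

def build_ann(n_features, hidden_layers=3, units=64, dropout=0.1, lr=0.001):
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=(n_features,))])
    for _ in range(hidden_layers):
        model.add(tf.keras.layers.Dense(units, activation="relu"))
        model.add(tf.keras.layers.Dropout(dropout))  # dropout rate from the search range
    model.add(tf.keras.layers.Dense(1))              # single regression output
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

ann = build_ann(X_tr.shape[1])
ann.fit(np.asarray(X_tr), np.asarray(y_tr),
        validation_data=(np.asarray(X_val), np.asarray(y_val)),
        epochs=200, verbose=0)
```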
Random forests 67. Random forests are similar to other tree-based algorithms but utilize ensemble learning, wherein many decision trees are constructed to arrive at the optimal solution; the average of the predictions obtained from all such trees is taken as the final output.
Gradient boosting 54. This is again a tree-based ensemble algorithm that combines many weak prediction decision trees, with the final model built stage-wise. This allows the optimization of an arbitrary differentiable loss function, which gives the algorithm an advantage over many other tree-based methods. The gradient boosting hyperparameter options were: learning rate = 0.001, 0.01, 0.1; n estimators = 500, 1000, 2000; subsample = 0.5, 0.75, 1; max depth = 1, 2, 4; and random state = 1.
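The quoted grid wires directly into scikit-learn's GridSearchCV. A sketch, assuming the FS features X_fs and labels y_tr from the earlier snippets:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "n_estimators": [500, 1000, 2000],
    "subsample": [0.5, 0.75, 1.0],
    "max_depth": [1, 2, 4],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=1), param_grid,
                      cv=5, scoring="neg_mean_absolute_error")
search.fit(X_fs, y_tr)
print(search.best_params_)
```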
Polynomial regression 69. Polynomial regression takes simple (monomial) regression a step further: the relationship between the independent and dependent variables is represented as an nth-degree polynomial. This technique is useful for non-linear relationships between the dependent and independent variables. Up to 10 polynomial degrees were checked for the polynomial regression, with a mean of 6 for each algorithm. Polynomial regression was implemented using the sklearn package in Python, and the best parameters for the algorithm were also derived using hyperparameter tuning.
K nearest neighbours 70. A simple and effective non-parametric machine learning algorithm that uses proximity to predict data points. The assumption is that similar points lie close to each other on a plot, so a predicted value is taken as the average of the k nearest neighbouring points. Grid search was employed for KNN over the range 2-11.
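A sketch of the stated KNN grid search over k = 2-11, again assuming the earlier training split:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

knn = GridSearchCV(KNeighborsRegressor(), {"n_neighbors": list(range(2, 12))}, cv=5)
knn.fit(X_tr, y_tr)
print("best k:", knn.best_params_["n_neighbors"])
```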
Multivariate adaptive regression splines (MARS) 71. MARS combines multiple simple linear functions, aggregating them into the best-fitting curve for the data; it combines linear equations into one aggregate equation. This is useful in situations where linear or polynomial regression would not work. The MARS algorithm was used for all three datasets with K-fold cross-validation; 10 splits and 3 repeats were used.

Genetic algorithms 72. These are heuristic adaptive search techniques that solve constrained and unconstrained optimization problems and belong to the larger class of evolutionary algorithms. Inspired by natural selection and genetics, genetic algorithms simulate the "survival of the fittest" among the individuals of each generation when solving a problem. Each generation consists of a population of individuals, all of whom represent points in the search space.
Evaluation metrics. Four scoring criteria were used for model evaluation. Since the task at hand was regression, these were the mean squared error (MSE), given in Eq. (1); the mean absolute error (MAE), given in Eq. (2); the coefficient of determination (R²), presented in Eq. (3); and the correlation coefficient r, represented in Eq. (4).
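The referenced equations are the standard definitions; in the notation explained immediately below (y_i actual, x_i predicted), they are:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - x_i\right)^2 \qquad (1)$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - x_i\right| \qquad (2)$$

$$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - x_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \qquad (3)$$

$$r = \frac{\sum_{i=1}^{n}(y_i - \bar{y})(x_i - \bar{x})}{\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}\,\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}} \qquad (4)$$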
Here, y_i equals the actual value for the ith observation, x_i is the calculated (predicted) value for the ith observation, and n represents the total number of observations.
Figure 3. Pair plot for multicollinearity for the PCA+FS dataset.
Table 2. Results obtained from regression trees, random forests, and gradient boost for the PCA, FS, and PCA+FS datasets under default, grid-search, and random-search hyperparameters. The highest values obtained are in bold.
Gradient boost. The feature selection (FS) dataset had the highest correlation coefficient for the gradient boost algorithm, with or without hyperparameter tuning. The training results for the algorithm are given in Table 2.
Table 3. Results obtained from XGBoost, KNN, polynomial regression, and MARS. The highest values obtained are in bold.
Table 4. Ranking of algorithms for the prediction of body weights.
"Computer Science"
] |
Levan Production by Suhomyces kilbournensis Using Sugarcane Molasses as a Carbon Source in Submerged Fermentation
The valorization of byproducts from the sugarcane industry represents a potential alternative method with a low energy cost for the production of metabolites that are of commercial and industrial interest. The production of exopolysaccharides (EPSs) was carried out using the yeast Suhomyces kilbournensis isolated from agro-industrial sugarcane, and the products and byproducts of this agro-industrial sugarcane were used as carbon sources for their recovery. The effect of pH, temperature, and carbon and nitrogen sources and their concentration on EPS production by submerged fermentation (SmF) was studied in 170 mL glass containers of uniform geometry at 30 °C with an initial pH of 6.5. The resulting EPSs were characterized with Fourier-transform infrared spectroscopy (FT-IR). The results showed that the highest EPS production yields were 4.26 and 44.33 g/L after 6 h of fermentation using sucrose and molasses as carbon sources, respectively. Finally, an FT-IR analysis confirmed that the EPSs produced by S. kilbournensis corresponded to levan, corroborating their identity. It is important to mention that this is the first work reporting the production of levan using this yeast. This is relevant because, currently, most studies focus on the use of recombinant and genetically modified microorganisms; in this scenario, Suhomyces kilbournensis is a native yeast isolated from the sugar production process, giving it a great advantage in incorporating carbon sources into its metabolic processes in order to produce levansucrase, which uses fructose to polymerize levan.
Introduction
Exopolysaccharides are natural biopolymers that can be synthesized by some microorganisms such as fungi, bacteria, and yeast [1] and isolated from various sources such as extremophiles, halophiles, psychrophiles, and acidophiles; their properties depend on the nature of the microorganism [2]. The principal advantage of microbial EPSs is their extracellular nature, and as a consequence, their recovery is relatively cheap compared with their intracellular counterparts [3]. The EPSs produced by microorganisms can be classified as hetero-polysaccharides and homo-polysaccharides. Hetero-polysaccharides are formed by the polymerization of different types of monosaccharides and their derivatives, whereas homo-polysaccharides consist of a single type of monosaccharide, such as glucans, galactans, or fructans [4].
Fructans are fructose polymers, which include EPSs such as inulin and levan. Fructooligosaccharides (FOSs) are synthesized by fructosyltransferases (FTases; EC 2.4.1.9), which are a group of enzymes that have hydrolytic and transfructosyl activities. EPS production is carried out through the hydrolysis of sucrose and subsequent polymerization into FOSs. EPSs are well known for their properties and are used as sweeteners in the food and beverage industry and as prebiotics [5]. Also, they have been reported as safe for inclusion in food products because of their low caloric content, as they are scarcely hydrolyzed by digestive enzymes, and they play an important role in reducing the levels of triglycerides and cholesterol. In addition, their production initially requires a high concentration of sucrose [6]. Nowadays, FOSs are gaining attention for their valuable attributes and economic potential in the sugar industry. Nonetheless, production processes with low-cost sources are needed in order to contribute to developing more sustainable and profitable processes [7]. The molecular weight of microbial fructan is usually hundreds of times higher than that of vegetable fructan, as a result of the different enzymes responsible for fructan synthesis in microorganisms [8]. In the particular case of levan, levansucrase catalyzes the transfer of fructose units from sucrose to form β-2,6 glycosidic linkages, resulting in the formation of levan, which is primarily digested by the enzyme levanase; this enzyme breaks down the β-2,6 glycosidic linkages, releasing fructose as the main metabolite. Levan is generally synthesized by various microorganisms, e.g., bacteria such as Zymomonas mobilis, Erwinia herbicola, and B. subtilis or fungi such as Aspergillus sydowii and Aspergillus versicolor; reports of its production using yeast are scarce.
Levan is an EPS mainly composed of fructose units linked by β-(2,6) bonds [9] and has a wide range of applications. For example, it is used as an emulsifier, sweetener, and prebiotic in the food industry [10-12]; as a humectant and an antioxidant in the cosmetics industry [13,14]; and as an anti-inflammatory agent and an immunomodulator in the medical and pharmaceutical industries [15-17]. Levan is considered a novel EPS with a wide range of possible applications; for instance, it can be used as a thickener, stabilizer, fat substitute, or flavoring agent in dairy products because of its non-digestibility, non-toxicity, high stability, solubility in water and oil, high water-holding capacity, and low intrinsic viscosity. Levan-based films have beneficial physicochemical and biological properties, such as biodegradability, edibility, and antibacterial and antifungal activity, and thus have good prospects as packing materials in the food, industrial, and medical sectors. On the other hand, levan possesses antioxidant, antitumor, antidiabetic, and immunomodulatory activities. In combination with levan's promising characteristics in forming nanoparticles via self-assembly in water, levan-based nanoparticles have been proposed as prospective drug delivery carriers and cell proliferation agents [18]. Levan can be obtained from plant and microbial sources; microorganisms such as bacteria, fungi, and yeast have the ability to synthesize this EPS. Microbial levan is typically obtained through fermentation or enzymatic reactions using isolated enzymes, in which sucrose serves as both the principal carbon source and the substrate, respectively. The optimal sucrose concentration for achieving maximum synthesis efficiency varies not only among different species but also often among different strains of microorganisms [19,20]. In the literature, various reports indicate the capacity of some microorganisms to produce levan using sucrose as the only carbon source; among these, we can highlight Bacillus subtilis [21,22], Lactobacillus reuteri [23], and Leuconostoc citreum [24] and Gram-negative bacteria such as Gluconobacter albidus [25], Brenneria goodwinii [26], Erwinia tasmaniensis [27], and Halomonas smyrnensis [28]. Likewise, there are reports of the overexpression of the enzymes responsible for levan production; examples are the expression of genes from Rahnella aquatilis [29] and Leuconostoc mesenteroides in Saccharomyces cerevisiae [30] and of genes from B. subtilis in Pichia pastoris [31]. In the present work, levan was produced by Suhomyces kilbournensis, which has not previously been reported as an EPS producer. Suhomyces species have been discovered in association with insects, moths, flowers, moss, soil, and maize kernels [32]. Specifically, Suhomyces kilbournensis has been reported from one isolate obtained from uncharacterized soil in Mexico, and it has been isolated from maize kernels harvested in Illinois, USA. Moreover, S. kilbournensis has been reported as non-pathogenic. The growth of this yeast takes place via multilateral budding, and the cells occur singly and in pairs. Colony growth is white, opaque, creamy in texture, low with a slightly raised center, and bordered by pseudohyphae [33].
The production of EPSs depends on the synthesis of extracellular enzymes such as levansucrase (EC 2.4.1.10), which is regularly responsible for the hydrolysis and transfructosylation reactions needed to synthesize levan using sucrose as a substrate [34,35]. Various reports show that the synthesis of both levan and the enzymes involved can be carried out by a diversity of microbes, among which the most reported are bacteria, fungi, and archaea [36].
For the production of EPSs, it is important to characterize the production systems, since cultivation conditions such as temperature, fermentation time, pH, and the sources and concentrations of carbon and nitrogen are essential. FOS enzymes and EPS production require a fermentation system, which can be achieved through solid-state fermentation (SsF) or submerged-liquid fermentation (SmF). Both systems are well documented and can use agro-industrial byproducts to reduce manufacturing costs and obtain high product yields. SsF can utilize agro-industrial byproducts, thus preventing the negative environmental impact of waste accumulation. Nonetheless, SmF possesses several biotechnological advantages, such as easy control of the fermentation parameters (pH, temperature, oxygen content), and it can be easily implemented at any scale [37]. Levan production has been carried out mainly using SmF, since the biomass and EPS yields obtained from it using various microorganisms are acceptable [38-40]; however, the development of processes using low-cost carbon sources is needed, as is the search for new microorganisms that increase yields, enabling industrial processes with low production costs [7,23]. Agro-industrial byproducts represent a promising alternative for the production of EPSs, FOS-producing enzymes, and FOSs. Byproducts such as sugar cane molasses; beet molasses; agave syrups; fruit peels; some bagasses, such as sugar cane bagasse, coconut bagasse, corn bagasse, and agave bagasse; aguamiel; and coffee processing byproducts are bioresources for levan-type FOS production [1,6]. Specifically, sugar cane molasses is the viscous liquid byproduct of the sugar extraction process from sugarcane juice and can have different chemical compositions depending on plant type, cultivation area conditions, plant maturity, and juice processing level. Molasses regularly contains sugar (content >43% by weight), polyphenols, vitamins, minerals, and ash. Owing to its nutrients, sugar cane molasses can be used as a carbon source for the production of EPSs and FOS enzymes [6]. In the literature, there are reports demonstrating that agro-industrial byproducts such as beet molasses, sugar cane molasses, and syrup have been used as alternative carbon sources and have allowed for adequate microbial growth and EPS production via SmF [26,41]. Levan is an EPS with high potential for use in various industries given its physicochemical and functional characteristics [24,42]; however, to improve performance and quality, the design of a process allowing for large-scale, efficient, ecological, and profitable production is necessary [7,21]. Because of this, the objective of the present work was to evaluate the EPS production potential of the indigenous yeast strain Suhomyces kilbournensis under different process conditions.
Kinetics of Exopolysaccharide Production by Suhomyces kilbournensis
The results in Figure 1 show that the maximum EPS production using sucrose (40 g/L) as a carbon source occurred after 6 h of cultivation at all temperatures tested; however, the best performance occurred at 30 °C. The maximum production yields of EPSs at 25, 30, and 35 °C were 0.86, 0.99, and 0.71 g/L, respectively. In addition, after 9 h of cultivation, a decrease in productivity was observed at all temperatures tested. The time required for maximum EPS production (6 h) by S. kilbournensis was shorter than the times reported for other microorganisms such as Bacillus subtilis (20 h), Tanticharoenia sakaeratensis (35 h), Leuconostoc citreum BD1707 (96 h), Gluconobacter albidus (48 h), Halomonas smyrnensis (169 h), and Acetobacter xylinum NCIM2526 (122 h) [24,26,43-46]. On the other hand, Figure 1 shows that the highest biomass yield occurred after 12 h of cultivation at 35 °C; however, a maximum production peak was not observed at any of the tested temperatures, indicating that EPS and biomass production are not directly related. Sarilmiser et al. [45] indicated that the production of EPSs is associated with growth in some cases but not in others, depending on the microorganism used. The results of EPS production in the present investigation agree with those of Abou-Taleb et al. [46], who reported that maximum EPS production occurred at 30 °C using Bacillus lentus V8, and with the 25 to 30 °C optimum reported for Leuconostoc citreum BD1707 [26]. It is important to mention that temperature is an important parameter that affects microbial growth, intracellular metabolic processes, and EPS yield [26,47,48]. There are reports indicating that the optimal temperature for the enzymatic activity of a levansucrase produced by B. subtilis is between 30 and 37 °C [36]. These extracellular enzymes are responsible for the synthesis of EPSs such as levan. Since production regularly occurs in a microbial system, it is important to control the culture conditions, as they influence both the metabolism of the microorganism and the catalytic activity of the enzyme [27]. Furthermore, the optimal temperature of enzymatic activity is a fundamental condition, since it can ensure the efficient synthesis of EPSs [32].
The results in Figure 1 show a direct relationship between temperature and growth; as the temperature increases, biomass production increases, reaching a yield of 1.79 g/L at 35 °C. Similar results were found by Jadhav et al. [32], who reported that S. kilbournensis grows optimally between 30 and 37 °C. Since, in the present work, the maximum EPS production was obtained at 30 °C (0.99 g/L), subsequent experiments were carried out at 30 °C.
Effect of pH on Growth and Production of Exopolysaccharides
The results of EPS production at different pH values (Figure 2) demonstrate that EPS concentration increased as pH increased, presenting a maximum production of 1.66 g/L at pH 6.5; above this pH, the production of EPSs decreased. In accordance with these results, the following experiments (the effects of nitrogen and carbon sources on EPS production) were conducted with an initial pH of 6.5. The EPS production profile during SmF at different initial pH values may arise because EPS synthesis depends on the action of an extracellular enzyme whose catalytic activity responds to pH changes, directly impacting EPS yields [19]. As shown in Figure 2, the highest EPS concentration was obtained in the production system implemented in the present investigation when the initial pH of the SmF was adjusted to 6.5. The results of the statistical analysis indicated that EPS production did not show significant differences at the different initial pH values tested. This result was similar to those reported by Belgith et al. [49], who indicated that pH is very important for the synthesis of EPSs and obtained the best result at a pH value of 6.5, probably because the synthesis of levansucrase is improved at these pH values when Bacillus spp. were used as an inoculum. Furthermore, that study reported that this enzyme was responsible for fructose polymerization in the synthesis of the EPSs. Likewise, Mummaleti et al. [22] reported similar results (pH 6.8) using Bacillus subtilis as an inoculum. This agrees with the results reported by Öner et al. [50], who found the highest levan production at pH 6.0 using B. methylotrophicus and an optimal pH for levansucrase activity between 5.0 and 6.5. Likewise, in a fructosylated EPS (levan) production system using Gluconobacter albidus, the levan produced at pH 6.5 was reported to maintain a constant size and molecular weight [44]. It is important to mention that transfructosylation activity has been reported to occur in slightly acidic conditions, at a pH range of 4.0-6.5 [51].
Effect of Nitrogen Source on Exopolysaccharide Production
The results for the effect of the nitrogen source on EPS production can be observed in Figure 3. The effect of the nitrogen source on the metabolism of S. kilbournensis was determined by evaluating four nitrogen sources: bacteriological peptone, meat peptone, tryptone, and meat extract. The statistical analysis of EPS production showed a significant difference when bacteriological peptone was used at a concentration of 7.5 g/L, followed by yields of 0.93 and 0.90 g/L when meat extract and meat peptone were used at a concentration of 5 g/L, respectively, and a yield of 0.88 g/L when tryptone was used at a concentration of 2.5 g/L. Since the maximum EPS production was obtained with 7.5 g/L of bacteriological peptone, subsequent experiments were carried out under those conditions. The results obtained in the present work are consistent with those reported by Srikanth et al. [43], who reported a maximum yield of 1.14 g/L with Acetobacter xylinum NCIM2526 using 10 g/L of bacteriological peptone. The findings are also similar to the results obtained by Mamay et al. [52], who obtained their best results when using bacteriological peptone as a nitrogen source with Bacillus licheniformis BK AG1. In the literature, some reports indicate that the nitrogen source used for EPS production can have negative or positive effects on production depending on the microorganism used. In the case of peptone and yeast extract, several reports indicate positive effects, attributable to the content of polypeptides, vitamins, and minerals that favor the metabolism of the microorganism for EPS production [53]. In the particular case of S. kilbournensis, there are reports indicating that organic nitrogen sources can easily influence its metabolism, which agrees with the results obtained in the present study. Likewise, it has been reported that S. kilbournensis cannot assimilate nitrate [32,54].
Effect of Carbon Source on Exopolysaccharide Production
In order to determine the effect of the carbon source on the production of EPSs, SmF was carried out using sucrose or molasses as the carbon source and bacteriological peptone as the nitrogen source. The results shown in Figure 4 indicate that the highest EPS concentration (44.33 g/L) was obtained when 400 g/L of molasses was used as the carbon source, whereas the maximum yield with sucrose, 4.46 g/L (roughly ten times lower), was obtained at a concentration of 550 g/L. Likewise, Figure 4 shows a direct relationship between the molasses concentration and EPS production; as the molasses concentration increased, the EPS concentration increased until reaching 400 g/L, after which it decreased proportionally. When sucrose was used as the carbon source, the behavior was similar to that with molasses; however, the yields were ten times lower than those obtained with molasses. According to the statistical analyses, molasses at 400 g/L showed the highest production, and this carbon source has the advantage of being the cheapest feedstock, reducing the production costs of EPSs.
The results obtained in this research are similar to other reports indicating that high concentrations of a carbon source can improve EPS production, particularly for levan [26,53]. Likewise, the results are consistent with those obtained by Zhang et al. [55], who reported a maximum EPS yield when 300 g/L of sucrose was used as a carbon source with Bacillus methylotrophicus. Furthermore, the EPS yield decreased significantly above this sucrose concentration, probably because of the increase in viscosity and an enzymatic inhibition that consequently impacted EPS synthesis [19,41]. On the other hand, the use of recombinant yeasts for levan production has been reported; e.g., Ko et al. [29] reported a levansucrase from Rahnella aquatilis expressed in Saccharomyces cerevisiae, obtaining yields of 3.17 g/L/h. Likewise, by expressing a fusion enzyme between an endolevanase from B. licheniformis and a levansucrase from B. subtilis in Pichia pastoris, yields of 0.82 g/L/h were obtained [5]. Also, Shang et al. [56] reported that the levansucrase enzyme from Zymomonas mobilis was expressed in Saccharomyces cerevisiae EBY100, obtaining a levan production yield of 1.42 g/L/h. Furthermore, there was a significant increase in EPS production using molasses as a carbon source, probably because molasses contains high levels of sucrose, nitrogen compounds, and trace elements that promote microbial growth and enhance EPS synthesis [41,57].
Characterization of Exopolysaccharides with Fourier-Transform Infrared Spectroscopy (FT-IR)
Fourier-transform infrared spectroscopy was used to determine the structure of the EPSs produced by S. kilbournensis (Figure 5). A strong band of OH stretching was observed at 3249 cm−1. The bands within the region of 3600-3200 cm−1 were due to OH vibration [58], and the band at 2924 cm−1 indicates CH bending. The region in the range of 3000-2800 cm−1 indicates the stretching vibration of CH and confirms the presence of fructose [22]. The spectrum band at 1644 cm−1 indicates carbonyl stretching [43], and the peak at 1440 cm−1 corresponds to the CH vibration [22]. The band at 987 cm−1 corresponds to the vibration of the glycosidic bond, and the region in the range of 1200-900 cm−1 is characteristic of polysaccharides because the ring vibrations overlap with the vibration of the COC glycosidic bond and the stretching vibration of the COH side groups [22,43]. The EPSs produced by S. kilbournensis showed bands corresponding to levan.
Inoculum Preparation
For inoculum preparation, S. kilbournensis was inoculated in SmF in Luria broth medium supplemented with (g/L): sucrose (BD Bioxon®, Mexico City, Mexico) (6), peptone (Hycel®, Mexico City, Mexico) (1), (NH4)2SO4 (Meyer®, Mexico City, Mexico) (0.2), KH2PO4 (Fermont™, Mexico City, Mexico) (0.1), and MgSO4·7H2O (J.T. Baker®, Mexico) (0.1), with the initial pH adjusted to 6.8, and was maintained at 30 °C and 150 rpm for 24 h. For biomass recovery, the culture was centrifuged at 3500× g for 15 min; the pellet was then resuspended in distilled water, and the number of cells was determined using a Neubauer chamber. The suspension obtained was considered the SmF inoculum for the production of EPSs and was stored at 4 °C until use.
Production of Exopolysaccharides
The production of EPSs was carried out via SmF in glass containers of uniform geometry with a capacity of 170 mL by adding 75 mL of culture medium [43]. The medium was composed of (g/L) sucrose (40), bacteriological peptone (10), (NH4)2SO4 (1), KH2PO4 (1), and MgSO4·7H2O (1), with the initial pH adjusted to 6 and an inoculum concentration of 1 × 10^6 CFU/mL, and was maintained at 30 °C with constant stirring at 150 rpm. Sampling was carried out at regular intervals of 3, 6, 9, and 12 h for the quantification of biomass and EPSs [41].
Recovery and Purification of Exopolysaccharides
The EPSs produced via SmF were recovered from the fermented culture, which was boiled for 30 min and then centrifuged at 3500× g for 15 min. The supernatant obtained was subjected to a second boiling treatment for 5 min, followed by a pH adjustment to 10 using 1 M KOH. Finally, the EPSs were precipitated by adding chilled ethanol (80% v/v) at a ratio of 2:1 (v/v). The mixture was kept stirring at 4 °C overnight, followed by the addition of CaCl2 (1%) with constant stirring for 20 min. The precipitate was recovered via centrifugation at 3500× g for 20 min, and the pellet obtained was washed with 1.5 volumes of chilled ethanol (80% v/v) [43] and dried in a lyophilizer (Labconco™ FreeZone™ 4.5, Kansas City, MO, USA) for subsequent analyses.
Effect of Different Variables on Exopolysaccharide Production via SmF
The effects of temperature, pH, and carbon and nitrogen sources on EPS production in SmF were determined by evaluating the effect of each parameter individually.
Effect of pH on the Production of Exopolysaccharides
To determine the effect of pH on EPS production, the initial pH of the culture medium was adjusted to values between 5.0 and 8.0 [55].
Effect of Temperature on the Production of Exopolysaccharides
The effect of temperature on EPS production was determined by maintaining the SmF at 25, 30, and 35 °C [58].
Effect of Different Nitrogen Sources on the Production of Exopolysaccharides
The effect of four nitrogen sources (bacteriological peptone, meat peptone, meat extract, and tryptone) on EPS production in SmF was evaluated [43].
Effects of Carbon Source and Concentration in Exopolysaccharide Production
The effects of the source and concentration of carbon on the production of EPSs were determined. The SmF was carried out with sucrose and molasses at different concentrations (50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, and 650 g/L) [45,55].
Figure 1. Effect of time on EPS production. The quantities of EPSs produced (g/L) were evaluated.
Figure 2. Effect of pH on the production of EPSs by S. kilbournensis.
Figure 3. Effect of nitrogen source on EPS production by S. kilbournensis.
Figure 4. Effect of carbon source on EPS production by S. kilbournensis. S: EPS produced with sucrose; M: EPS produced with molasses.
Figure 5. FT-IR of the recovered EPSs, EPS M, EPS S, and the levan reference.
"Environmental Science",
"Biology",
"Chemistry"
] |
Identification of dynorphins as endogenous ligands for an opioid receptor-like orphan receptor.
To identify the endogenous ligands for a cloned orphan receptor that shares high degrees of sequence homology with opioid receptors, this orphan receptor was expressed in Xenopus oocytes and in mammalian cell lines CHO-K1 and HEK-293. The coupling of the receptor to a G protein-activated K+ channel was used as a functional assay in oocytes. Endogenous opioid peptide dynorphins were found to activate the K+ channel by stimulating the orphan receptor. This activation was dose-dependent, with EC50 values at 45 and 37 nM for dynorphin A and dynorphin A-(1-13), respectively. The dynorphin effect was antagonized by the non-selective opioid antagonist naloxone but at rather high concentrations in the micromolar range. Naloxone also caused a rightward shift of the dose-response curve for dynorphin A, suggesting a competitive antagonism mechanism. In transiently transfected cells, 5 microM dynorphin A-(1-13) inhibited the forskolin-stimulated cyclic AMP increase by 51 and 35% in CHO-K1 and HEK-293 cells, respectively. Other classes of endogenous opioids, i.e. enkephalins and endorphins, caused very little activation of this receptor. These results suggest that this orphan receptor is a member of the opioid receptor family and that dynorphins are endogenous ligands for this receptor.
After the cloning of all three major types of opioid receptors, μ, δ, and κ (1), a novel receptor was cloned from several species by using a homology screening technique (2-8). The amino acid sequence of this receptor is similar to those of the opioid receptors. However, whereas the three opioid receptors share about 70% amino acid sequence similarity among themselves, the homology between this receptor and any of the opioid receptors is lower, at about 65% (4). This suggests that this novel receptor may be a member of the opioid receptor family, distinct from the other three receptors, and it was thus designated various names, including XOR1 (4). In vitro and in vivo assay systems have been used to find its ligands. The synthetic non-selective opioid agonist etorphine was shown to inhibit adenylyl cyclase in CHO-K1 cells transfected with this receptor clone, and diprenorphine and naloxone antagonized the inhibitory action of etorphine (2). However, since no endogenous ligands have been found for this novel receptor, it remains an "orphan" receptor.
To identify endogenous ligands for an orphan receptor, one could perform receptor binding with radiolabeled compounds. This approach has been used for the identification of the 5-HT1A receptor (9). However, this approach is limited in its scope, since many endogenous ligands are not available in radiolabeled form. An alternative approach is to use a functional assay, in which the orphan receptor is expressed in cells and a measurable cell function is used as a readout of receptor activation, such as changes in second messenger levels or membrane currents. In this way compounds can be tested in unlabeled form and, if a proper cellular function is chosen that the orphan receptor does couple to, there is an opportunity to identify the endogenous ligands.
Xenopus oocytes have been used in many functional studies for membrane receptors and ion channels (10,11). In particular, opioid receptors have been shown to couple to a cloned G protein-activated K+ channel (KGA) in oocytes (12-15). Because of the high degree of homology of this orphan receptor with the opioid receptors, it may also be capable of functionally coupling to KGA in Xenopus oocytes, thus constituting an assay system for identifying endogenous ligands that can activate this receptor. We took such an approach, using XOR1 cloned from rat brain (4) for oocyte expression. Here, we report the results of this study.
EXPERIMENTAL PROCEDURES
Materials-Opioid ligands were from Peninsula Laboratories Inc., Research Biochemicals International, Bachem Inc., and the National Institute on Drug Abuse. CHO-K1 and HEK-293 cell lines were from the American Type Culture Collection. Xenopus laevis was from Xenopus I and African Fish Farm. Culture media were from HyClone Laboratories Inc. and Life Technologies, Inc. In vitro transcription kit T7 mMessage mMachine was from Ambion. The cyclic AMP assay kit was from DuPont NEN. All other chemicals were from Sigma.
Oocyte Injection and Electrophysiology-Xenopus oocytes were prepared as described (16). In vitro transcribed RNA was injected into oocytes (1-2 ng/oocyte) with a Drummond automatic microinjector. Oocytes were incubated in 50% L-15 medium supplemented with 0.8 mM glutamine and 10 μg/ml gentamycin at 18 °C. Three days after injection, oocytes were voltage-clamped at −80 mV with two glass electrodes (filled with 3 M KCl and having a resistance of 2-3 megaohms) using an Axoclamp-2A (Axon Instruments) under the control of pCLAMP software (Axon Instruments). Oocytes were superfused with either ND96 (96 mM NaCl, 2 mM KCl, 1 mM MgCl2, 1.5 mM CaCl2, and 5 mM HEPES, pH 7.5) or a high potassium solution (hK: 96 mM KCl, 2 mM NaCl, 1 mM MgCl2, 1.5 mM CaCl2, and 5 mM HEPES, pH 7.5). The membrane currents were recorded with the aid of the pCLAMP software and on a Gould chart recorder.
Cell Transfection and Cyclic AMP Assay-The CHO-K1 and HEK-293 cells were transiently transfected with either XOR1 cDNA in pRc/CMV (Invitrogen) or vector only (as control) by the calcium phosphate method (17). Three days after transfection, cells were harvested by trypsin treatment, washed, and resuspended in serum-free medium. Cells were treated with ligands in the presence of 10 μM forskolin and 1 mM 3-isobutyl-1-methylxanthine at 37 °C for 20 min. The reaction was terminated by adding 1/3 volume of 0.25 M HCl. The mixture was boiled for 5 min and centrifuged at 14,000 × g for 10 min. The supernatant was dried under vacuum and dissolved in assay buffer. The cAMP in cells was determined by the nonacetylated method using the radioimmunoassay kit (DuPont NEN) according to the manufacturer's instructions.

Footnote: This work was supported in part by National Institutes of Health Grants DA09116, DA09444, and NS28190 and by a research grant from the Adolor Corp. The abbreviations used are: XOR1, rat opioid receptor-like novel receptor; KGA, G protein-activated potassium channel; hK, high potassium solution; dyn, dynorphin; CHO-K1, Chinese hamster ovary cells; HEK, human embryonal kidney cells.
RESULTS AND DISCUSSION
Orphan Receptor Is Functionally Activated by Some Endogenous Opioid Peptides-Activation of opioid receptors has been shown by their ability to couple to a cloned KGA in Xenopus oocytes (12-15). Due to the sequence similarity of XOR1 to the opioid receptors, this orphan receptor might also couple to the KGA channel in oocytes. To test this possibility, XOR1 and KGA were coexpressed in oocytes, and functional coupling of the receptor to the K+ channel was assessed by two-electrode voltage clamp. Because of the high level of sequence homology between XOR1 and the opioid receptors, we examined all three classes of the endogenous opioids, dynorphins, endorphins, and enkephalins, for their ability to activate XOR1. As shown in Fig. 1, dynorphins and some dynorphin fragments induced an inwardly rectifying K+ current, presumably by stimulating the receptor coupled to KGA, while endorphins and enkephalins caused very little activation of the receptor. Oocytes injected with cRNA for XOR1 or KGA alone did not show any response to any opioid ligands (data not shown). This excluded the possibility that dynorphins activated endogenous currents in oocytes.
Activation of Receptor by Dynorphins Is Dose-dependent-To further study the activation of XOR1 by dynorphins, we chose the most potent peptides, dyn A and dyn A-(1-13), for dose-response experiments. Because individual oocytes vary in their expression of exogenous proteins, we normalized the receptor-mediated response by taking the ratio of the receptor-activated current (Ia) over the spontaneous current in hK solution (Is), shown in Fig. 2A. As has been observed before (12-15, 19), in cells expressing the KGA channel there is an increase in membrane K+ current (the spontaneous current Is) when the K+ concentration in the extracellular solution is increased, such as by switching the oocyte bath solution from ND96 to hK. When the KGA channel is coexpressed with a receptor that is capable of coupling to the channel, application of an agonist for the receptor induces a receptor-activated current Ia (Fig. 2A). Since both Ia and Is are related to the expression level of KGA, their ratio serves as an index that is not heavily influenced by the variability in expression level between cells. Using this method, the dose-response relations were determined for dyn A and dyn A-(1-13). As shown in Fig. 2, B and C, both dynorphin peptides induced receptor-mediated activation of the K+ channel in a dose-dependent manner. A sigmoid curve was fitted to the data for each dynorphin peptide, and the EC50 values were calculated to be 45 and 37 nM for dyn A and dyn A-(1-13), respectively (Fig. 2, B and C).
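As a sketch of how such a dose-response analysis can be reproduced, the snippet below fits a Hill-type sigmoid to normalized currents (Ia/Is) and extracts an EC50; the concentration/response values are illustrative, not the study's measurements.

```python
# Fit a four-parameter Hill sigmoid to normalized currents and report EC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    # Response rises from `bottom` to `top`, with half-maximal effect at EC50.
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

conc = np.array([1e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6])  # molar, illustrative
resp = np.array([0.05, 0.22, 0.45, 0.78, 0.95, 1.00])  # Ia/Is, illustrative

popt, _ = curve_fit(hill, conc, resp, p0=[0.0, 1.0, 4.5e-8, 1.0])
print(f"EC50 = {popt[2] * 1e9:.0f} nM")
```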
Opioid receptors are capable of regulating membrane conductance in neurons, leading to membrane hyperpolarization and a decrease in the neuronal firing rate or inhibition of neurotransmitter release (20). The KGA has been shown to exist in the brain (12, 19) and was suggested to be the K⁺ channel mediating the neuronal effects of neurotransmitters including opioids. The functional coupling of XOR1 to KGA in Xenopus oocytes suggests that this receptor may mediate the activation of the KGA in the central nervous system and function in neuronal regulation.
Naloxone Is a Low-Potency Antagonist for XOR1-Reversibility by the non-selective opioid antagonist naloxone has been considered a major criterion for classification of an "opioid" action (21). To determine the antagonism of naloxone at XOR1, we tested naloxone in oocyte assays. As shown in Fig. 3A, after the receptor-mediated activation of the K⁺ current by dyn A-(1-13), inclusion of 1 mM naloxone in the bath solution blocked the current. The blockade of XOR1 activation by naloxone was dose-dependent (Fig. 3B). At concentrations of 10 or 100 μM, which are sufficient to completely block the effects of the other opioid receptors, naloxone only partially reversed the activation of XOR1. At a high concentration of 1 mM, naloxone completely blocked the receptor's activation (Fig. 3, A and B).
Does naloxone antagonize dynorphin effects on this receptor in a competitive manner, as for the other opioid receptors? By using naloxone with different concentrations of dyn A in dose-response experiments, we found that 10 μM naloxone caused a parallel rightward shift of the dose-response curve for the dyn A-activated response (Fig. 3C). The parallel shift of the dose-response curve suggests that the antagonism by naloxone at XOR1 is competitive in nature. In this case, the EC50 value of dyn A was shifted from 45 nM without naloxone to 372 nM with 10 μM naloxone. These data give an apparent dissociation constant K_e of naloxone for the receptor of about 1.37 μM, using the Tallarida variation of Schild analysis (22). Compared with the nanomolar affinity values of naloxone for the μ, δ, and κ opioid receptors (23), this value is 2-3 orders of magnitude higher, thus making naloxone a low-potency antagonist at this novel receptor.
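The K_e figure can be checked directly from the reported EC50 shift using the single-dose Schild relation K_e = [B]/(dose ratio - 1); the short calculation below uses only values quoted in the paragraph above:

```python
# Minimal sketch: single-dose Schild estimate of the antagonist dissociation
# constant K_e from the rightward EC50 shift, with the reported values for
# naloxone vs. dyn A at XOR1.
naloxone = 10.0          # antagonist concentration, uM
ec50_control = 45.0      # dyn A EC50 without naloxone, nM
ec50_shifted = 372.0     # dyn A EC50 with 10 uM naloxone, nM

dose_ratio = ec50_shifted / ec50_control   # ~8.27
k_e = naloxone / (dose_ratio - 1.0)        # K_e = [B] / (DR - 1)
print(f"K_e = {k_e:.2f} uM")               # ~1.37 uM, matching the paper
```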
Activation of XOR1 Can Inhibit Forskolin-stimulated cAMP Increase-A hallmark of the cellular effect for all three major types of opioid receptors, μ, κ, and δ, is that their activation results in a reduced level of intracellular cAMP (24), an important second messenger in cell functions. This effect is mediated by inhibition of adenylyl cyclase activity upon opioid receptor activation. Mammalian cells have been used as an efficient expression system for cloned opioid receptors, and it has been shown that all three cloned opioid receptors are negatively coupled to adenylyl cyclase (1). To examine whether this novel receptor is capable of coupling to the cAMP pathway, we transiently transfected CHO-K1 and HEK-293 cells with XOR1 and tested the cAMP level after treatment with forskolin in the presence or absence of the endogenous ligand dynorphins. As shown in Fig. 4, 5 μM dyn A-(1-13) inhibited the forskolin-stimulated cAMP increase by 51% in CHO-K1 and by 35% in HEK-293, respectively. These values are significantly different from the forskolin-only treated cells (p < 0.01), indicating that XOR1 is capable of negatively coupling to adenylyl cyclase. Cells transfected with the plasmid vector pRc/CMV alone did not show any inhibition of the cAMP increase (Fig. 4). Thus, like the other three major opioid receptors, μ, δ, and κ, this novel opioid receptor also exerts an inhibitory effect on the cAMP/adenylyl cyclase pathway.
XOR1 Is a Novel Opioid Receptor Distinct from the κ Opioid Receptor-The above results indicate that of the three major classes of endogenous opioid peptides, only dynorphins are capable of activating this orphan receptor, whereas the other two classes, namely endorphins and enkephalins, are not. This raises the question of whether this novel receptor is more closely related to the κ opioid receptor, the receptor type with which dynorphins interact at high affinity (25). To investigate this question, κ-selective agonists were used in the oocyte functional assay. Two κ-selective compounds, U-50,488 and U-69,593, were used, because these compounds have nanomolar affinity at the κ opioid receptor. When applied in the bath solution to stimulate XOR1, however, neither of the compounds induced any detectable K⁺ current, even at micromolar concentrations (data not shown). Affinity values of various compounds for the cloned κ opioid receptor have been reported, and Table I shows a comparison of these values between XOR1 and the κ opioid receptor from rodent species. It can be seen that, while dynorphins have subnanomolar affinity values at the κ opioid receptor, their EC50 values at this novel receptor are above 30 nM. Also, the synthetic κ-selective agonists U-50,488 and U-69,593 do not activate this receptor. In addition, naloxone has a 1.37 μM affinity value at this receptor, whereas it displays nanomolar affinity values at the κ opioid receptor. These data indicate that XOR1 is distinct from the κ opioid receptor.
It is interesting to note that, while the overall sequence homology between this orphan receptor and each of the cloned μ, δ, and κ opioid receptors is similar, there is an apparent resemblance in the highly negative charges of the second extracellular loop between this receptor and the κ opioid receptor. There are seven negatively charged amino acid residues in this region for both the κ receptor and this receptor, whereas there are only two negatively charged residues in either the μ or δ receptor (4). In opioid receptors, this loop is the longest among the three extracellular loops and has a low level of sequence homology (26), suggesting the possibility that it may be involved in ligand binding specificity for the receptors. Indeed, this region in the κ receptor has been shown to be critical for high affinity binding of dynorphin peptides (27), which are basic peptides with five positively charged amino acid residues for both dyn A and dyn A-(1-13) (18, 28). The highly negative charges in this region of the orphan receptor may contribute to the interaction between dynorphins and the receptor.
What might be the physiological role of this novel receptor? Reports in the literature have provided certain clues. In vivo studies showed that dynorphins caused certain physiological effects that may not be mediated entirely through the κ opioid receptor, such as a biphasic anti-nociceptive effect, motor effects, immunomodulation, inflammatory responses, and modulation of respiration and temperature control (29-32). Our results suggest the possibility that this novel opioid receptor may play a role in mediating some of the dynorphin effects that are not contributed by the κ opioid receptor.
In conclusion, the data presented in this report indicate that this opioid receptor-like orphan receptor is indeed a novel member of the opioid receptor family, because it can be activated by the endogenous ligand dynorphins. Similar to the other opioid receptors, this receptor also inhibits adenylyl cyclase activity. Unlike the other opioid receptors, however, naloxone does not effectively block this receptor, suggesting that it may mediate some of the "non-opioid" effects of dynorphins. Endorphins and enkephalins are rather ineffective at this receptor, whereas several of the dynorphin peptides can activate it, suggesting that there may be other endogenous ligands for this receptor.

| 4,065.6 | 1995-09-29T00:00:00.000 | ["Biology", "Chemistry"] |
Investigation into the antioxidant role of arginine in the treatment of and protection against intralipid-induced non-alcoholic steatohepatitis
This study investigated the possible roles of arginine (Arg) in ameliorating the oxidative damage of intralipid (IL)-induced non-alcoholic steatohepatitis (NASH). NASH was induced in Sprague-Dawley rats by intravenous administration of 20 % IL for three weeks, and rats were then pre- and post-treated with intraperitoneal injections of Arg for two weeks. Several biochemical parameters (blood and hepatic lipid peroxidation, glutathione, glutathione peroxidase and superoxide dismutase, hepatic cytochrome P450 2E1 monooxygenase (CYP2E1), nitric oxide (NO), endothelial nitric oxide synthase (eNOS) and tumor necrosis factor-α (TNF-α)) and liver histopathology were assessed for the rat groups. The administration of Arg either before or after IL significantly ameliorated the uncontrolled elevation of TBARS content, CYP2E1 activity (0.32 ± 0.01 or 0.3 ± 0.02 IU/mg) and TNF-α level. These effects were associated with a significant increase in glutathione levels, antioxidant enzyme activities, NO level (1.649 ± 0.047 or 1.957 ± 0.073 μmol/g) and hepatic eNOS activity (0.05 ± 0.002 or 0.056 ± 0.002 IU/mg) compared to the IL-treated rats. Moreover, the injection of Arg in NASH-induced rats yielded normal hepatocytes, no steatosis and no bile duct proliferation, with only mild inflammation in the group which received IL after Arg. These results indicate that pre- and post-treatment with Arg blocked oxidative stress-induced NASH by inhibiting CYP2E1 activity, decreasing the TNF-α level and restoring the activities of eNOS and the antioxidant enzymes as well as the glutathione level. This antioxidant effect of Arg reversed the signs of liver pathology of NASH, with amelioration of liver and kidney functions.
Introduction
Arginine (Arg), one of the most metabolically versatile amino acids, serves as a precursor for the synthesis of nitric oxide (NO) and other biologically important compounds involved in cellular homoeostasis [1]. There has been considerable interest in the metabolism of Arg, primarily because it is the source of NO in biological tissues, NO being generated from its terminal guanidinium group by nitric oxide synthase (NOS). The constitutively expressed NOS isoforms (endothelial NOS (eNOS) and neuronal NOS) are Ca-dependent, whereas the inducible NOS (iNOS) isoform is Ca-independent [2,3]. Arg and its product NO possess potent antistress activity [4]; thus, Arg regulates cellular redox status and plays an effective role against oxidative stress [5,6]. Oxidative stress plays a major role in the pathogenesis of several liver disorders, including non-alcoholic steatohepatitis (NASH) [7]. NASH is a severe phenotype of fatty liver (steatosis) with the development of necroinflammatory changes (hepatitis) that can progress to cirrhosis, with subsequent liver failure and an increased risk of hepatocellular carcinoma [8].
Free radicals accelerate the onset and progression of NASH [9]. An overload of free radicals that cannot be gradually destroyed generates deleterious processes that can seriously alter the structure and functions of cell membranes and macromolecules. Free radicals also enhance the abnormal synthesis of several cytokines, leading to hepatic necroinflammatory changes [10,11], which are mediated mainly through tumor necrosis factor-α (TNF-α). TNF-α has been implicated as playing an important pathogenic role in NASH, possibly partly related to its ability to induce oxidative stress [12,13]. The body has several mechanisms to counteract oxidative stress by producing antioxidants, which are either naturally produced in situ or externally supplied through foods. The endogenous cellular defense system consists of enzymatic scavengers (superoxide dismutase (SOD), glutathione peroxidase (GPx) and others) and non-enzymatic scavenger components such as the reduced form of glutathione (GSH) [14]. Several previous studies have demonstrated the importance of GSH, SOD, and GPx in protecting the liver against oxidative damage [15,16], which may be mediated by hepatic microsomal cytochrome P450 2E1 monooxygenase (CYP2E1). CYP2E1 induces fatty acid oxidation in an NADPH-dependent manner, producing prooxidant species which, if not countered by antioxidants, may induce steatohepatitis [17].
Animal models of NASH may be divided into two broad categories: those caused by genetic mutation and those with an acquired phenotype produced by dietary or pharmacological manipulation [18]. Natural nutritional models have been described, including the use of a fat-rich diet that aggravates oxidative stress, leading to steatohepatitis, and provides a system for screening therapeutic targets to treat NASH [19]. In our previous study (El-Gamal et al. 2006 [20]), we reported that intralipid (IL) induced hyperglycemia and dyslipidemia and significantly increased the body weight of rats, all of which were normalized by the injection of Arg (500 mg/kg) for 14 days either before or after the IL administration. In the present study, we further investigated possible roles of Arg administration in ameliorating the oxidative damage of IL-induced steatohepatitis.
Experimental groups
Fifty male Sprague-Dawley rats weighing 110-134 g were obtained from MISR University for Science and Technology (animal welfare assurance no. A5865-01). The animals were housed in cages with free access to standard chow (14 % protein, 4.5 % fat, 52.9 % carbohydrate, 9 % fiber) and water and kept under conventional conditions of temperature and humidity with a 12 h light-12 h dark cycle. Rats were divided randomly into five groups: a control group (untreated normal rats), an IL-treated group (rats received a daily intravenous (i.v.) injection of 8 mL of 20 % IL/kg for three weeks), an Arg-treated group (rats received a daily intraperitoneal (i.p.) injection of 500 mg Arg/kg for two weeks), an IL + Arg-treated group (rats i.v. injected with 20 % IL for three weeks and then treated with 500 mg Arg/kg i.p. daily for two weeks) and an Arg + IL-treated group (rats received 500 mg Arg/kg daily for two weeks and were then injected with 8 mL of 20 % IL/kg for three weeks).
At the end of the experimental period, all rats were sacrificed by decapitation under diethyl-ether anesthesia after an overnight fasting. Blood samples and liver tissues were collected.
Determination of blood and hepatic lipid peroxidation product levels

Lipid peroxidation was determined by measuring thiobarbituric acid reactive substances (TBARS) in the plasma and liver tissues as described by Ohkawa et al. (1979) [21]. Briefly, 50 μL of plasma or liver homogenate was mixed with 8.1 % SDS, 20 % acetic acid containing 0.27 M HCl (pH 3.5) and 0.8 % thiobarbituric acid (Sigma-Aldrich, USA) and heated at 95 °C for 60 min. After cooling, 2.5 mL of an n-butanol:pyridine mixture (15:1 v/v) was added, the mixture was centrifuged, and the absorbance of the organic layer was measured at 532 nm.
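For readers who want to turn such A532 readings into concentrations, a minimal Beer-Lambert sketch follows. The extinction coefficient (1.56 × 10^5 per M per cm for the MDA-TBA adduct at 532 nm) and the 1 cm path length are standard literature assumptions, not values given in this paper:

```python
# Minimal sketch (not from the paper): converting a TBARS A532 reading into
# an MDA-equivalent concentration via Beer-Lambert (A = epsilon * c * l).
# Assumptions: epsilon = 1.56e5 /M/cm for the MDA-TBA adduct, 1 cm path.
EPSILON = 1.56e5   # L mol^-1 cm^-1
PATH_CM = 1.0      # cuvette path length, cm

def tbars_nmol_per_ml(a532: float, dilution: float = 1.0) -> float:
    molar = a532 / (EPSILON * PATH_CM)        # mol/L
    return molar * 1e9 / 1000.0 * dilution    # nmol/mL

print(tbars_nmol_per_ml(0.25))  # example absorbance -> ~1.6 nmol/mL
```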
Determination of hepatic CYP2E1 activity
Activity of CYP2E1 in liver homogenate was measured according to the method of Waxman et al. (1989) [22]. Briefly, liver samples were mixed with 0.1 M potassium phosphate buffer (pH 7.4) containing 8 mM aniline and 1 mM NADPH (Sigma-Aldrich, USA) and incubated for 60 min at 37 °C. The reaction was terminated with 40 % TCA and centrifuged, and 400 μL of the supernatant was incubated with 10 % sodium carbonate and 2 % phenol for 60 min. The absorbance of each sample was read at 630 nm. The protein content (mg) of liver samples was determined using a kit obtained from Biodiagnostics, Egypt.
Determination of hepatic NO level and eNOS activity
Nitrate plus nitrite measurement represents NO production, as described by the method of Garner et al. (1956) [23], and its concentration was used to assess eNOS activity as described by Giordano et al. (2002) [24] and Chang et al. (1998) [25]. Briefly, 500 μL of liver sample was added to a reaction mixture containing 0.1 M Arg, 1.25 mM NADPH, 0.5 μM FAD, 0.5 μM FMN, 1 μM BH4 (Sigma-Aldrich, USA) and 10 μM CaCl2. The mixture was incubated at 37 °C for 60 min. Five hundred microliters of the mixture were collected for NO measurement after heat inactivation and incubated with 0.96 μM NADPH, 5 U/mL nitrate reductase and 0.1 M sodium phosphate buffered saline (pH 7.4) for 3 h. Finally, Griess reagent was added to the mixture and the absorbance of each sample was measured at 540 nm. The total protein content (mg) of samples was determined as described above.
Determination of blood and hepatic GSH levels
Content of GSH was determined in plasma and the supernatant of liver homogenates by the method of Ellman (1959) [26]. The proteins in liver samples or plasma were precipitated using 5 % TCA. Supernatants were incubated with Ellman's reagent and 0.2 M potassium phosphate buffer (pH 8) for 10 min. The absorbance of the developed yellow color was measured against a blank at 412 nm.
Determination of blood and hepatic GPx activity
Total GPx activity was measured in plasma and supernatants of liver homogenates using the method described by Rotruck et al. (1973) [27]. Briefly, 50 μL of plasma or liver sample and 50 mM Tris-HCl buffer (pH 7.6) containing 0.127 mM EDTA and 1.63 mM GSH (Sigma-Aldrich, USA) were added to 0.026 M cumene hydroperoxide and incubated at 37 °C for 5 min (samples). For controls, 50 μL of plasma or liver sample and the same buffer were added to 1.63 mM GSH and incubated at 37 °C for 5 min. One milliliter of 15 % TCA was added to both control and sample, and the supernatants were incubated with 19.8 mg% DTNB for 5 min. The absorbances of samples and controls were read at 412 nm.
Determination of blood and hepatic SOD activity
A simple and rapid method for measuring SOD activity in plasma and liver homogenate supernatant was described by Marklund (1974) [28]. In a quartz cuvette, 1 mL of 20 mM Tris-HCl buffer (pH 8.2) containing diethylenetriaminepentaacetic acid and 20 mM pyrogallol (Sigma-Aldrich, USA) was mixed with 20 μL of plasma, liver sample, or serial concentrations of SOD standard (20-200 ng/mL). The change in the absorbance of sample and standard was measured at 420 nm between 60 s and 120 s.
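In this assay the readout is the slowing of pyrogallol autoxidation. A minimal sketch of the usual calculation follows; the 50%-inhibition unit definition is Marklund's convention, and the absorbance values are made up for illustration:

```python
# Minimal sketch (an assumption, not the authors' exact calculation): SOD
# activity from the pyrogallol autoxidation assay. The rate is the A420
# change between 60 s and 120 s; one unit is commonly defined as the amount
# of enzyme producing 50% inhibition of autoxidation.
def sod_units(dA_blank: float, dA_sample: float) -> float:
    inhibition = (dA_blank - dA_sample) / dA_blank * 100.0  # percent inhibition
    return inhibition / 50.0                                # units in the cuvette

print(sod_units(dA_blank=0.040, dA_sample=0.022))  # 45% inhibition -> 0.9 U
```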
Determination of blood and hepatic TNF-α level
Livers were homogenized in phosphate buffered saline containing 0.05 % sodium azide, 0.5 % Triton X-100 and a protease inhibitor cocktail (pH 7.2) and centrifuged at 12,000 × g for 10 min. TNF-α concentrations in plasma and liver homogenate supernatant were measured using a commercial rat TNF-α ELISA kit (RayBio, USA).
Assessment of liver and kidney functions
Activities of blood alanine aminotransferase (ALT) and aspartate aminotransferase (AST) and albumin, urea and creatinine levels were determined in plasma samples using kits purchased from Biodiagnostics, Egypt.
Histopathological study of liver samples
The formalin-fixed liver specimens were dehydrated in ascending grades of alcohol, cleaned by immersion in xylene, impregnated in melted paraffin and embedded to form solid paraffin blocks. The blocks were then cut into 5 μm thick sections, which were stained for histopathological examination.
Statistical analysis
Data were expressed as mean ± standard error of the mean (SEM) and analyzed by one-way analysis of variance (ANOVA) with pairwise multiple comparisons; the significance level was adjusted for multiple comparisons, and probability (p) values < 0.05 were considered statistically significant.
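As a concrete illustration of this analysis pipeline, here is a minimal sketch with made-up numbers. The paper does not name its post hoc procedure, so Tukey's HSD is used purely as an example:

```python
# Minimal sketch: one-way ANOVA followed by a pairwise multiple-comparison
# test. All values are hypothetical; Tukey's HSD is an illustrative choice,
# not necessarily the procedure the authors used.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([3.6, 3.8, 3.7, 3.9, 3.5])
il      = np.array([13.2, 14.0, 13.6, 14.3, 13.4])  # hypothetical TBARS-like values
il_arg  = np.array([3.9, 4.1, 3.7, 4.2, 3.8])

f_stat, p = f_oneway(control, il, il_arg)
print(f"ANOVA: F = {f_stat:.1f}, p = {p:.2g}")

values = np.concatenate([control, il, il_arg])
groups = ["control"] * 5 + ["IL"] * 5 + ["IL+Arg"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons table
```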
Results
The intravenous administration of 20 % IL into rats produced a significant increase in lipid peroxidation, manifested by the elevation of blood and hepatic TBARS levels (1370 ± 55 nmol/L and 358.5 ± 10.6 nmol/g, respectively), as shown in Fig. 1. This significant elevation of TBARS may be attributed to a significant increase in the hepatic activity of CYP2E1 in the IL group (1.02 ± 0.01 IU/mg), as shown in Fig. 2. In addition, Fig. 3 illustrates that IL administration resulted in a significant decrease in the hepatic content of NO and the hepatic activity of eNOS (0.147 ± 0.002 μmol/g and 0.004 ± 0.0001 IU/mg, respectively). Also, GSH content and the activities of GPx and SOD decreased significantly below the control values in the blood and liver (Table 1). This oxidative stress was associated with a marked elevation in blood and hepatic TNF-α levels (Table 2). Moreover, administration of IL resulted in a significant increase in the activities of ALT and AST, as well as a significant decrease in blood albumin level and elevations in urea and creatinine levels, demonstrating liver and kidney dysfunction (Figs. 4 and 5).
On the other hand, administration of Arg either before or after IL significantly ameliorated the uncontrolled elevation of TBARS content in the blood (374 ± 16 or 376 ± 17 nmol/L) and liver (135.7 ± 3.5 or 129.2 ± 6.1 nmol/g), as well as the hepatic activity of CYP2E1 (0.32 ± 0.01 or 0.3 ± 0.02 IU/mg); these values were close to the levels of the control rat group (3.74 ± 0.15 nmol/mL, 132.1 ± 5.3 nmol/g and 0.38 ± 0.01 IU/mg, respectively). These effects were associated with a significant increase in the levels of GSH and the activities of GPx and SOD in the blood and liver compared to IL-treated rats (Figs. 1, 2 and Table 1). Also, the hepatic NO level (1.649 ± 0.047 or 1.957 ± 0.073 μmol/g) and the activity of hepatic eNOS (0.05 ± 0.002 or 0.056 ± 0.002 IU/mg) increased significantly compared to IL-treated rats (Fig. 3). In addition, both plasma and hepatic levels of TNF-α were ameliorated to near-normal values (Table 2). Furthermore, pre- and post-treatment with Arg maintained the blood parameters of liver and kidney functions near control values, indicating the restoration of their normal functions (Figs. 4 and 5). The histopathological study supported the results obtained from the biochemical tests. Figure 6a shows the normal architecture of hepatocytes of control rats, whereas the IL-treated group demonstrates small fatty vacuoles in the cytoplasm and proliferation of bile ducts. Also, steatosis was detected with microvascular changes, including heavy inflammatory cell infiltration and clearing of cytoplasm and nuclei (necroinflammation), which are the histological hallmarks of steatohepatitis (Fig. 6b, c, and d). In addition, histopathological examination of the Arg group shows normal hepatocytes with dilated sinusoids (Fig. 6e), and the IL + Arg and Arg + IL groups showed normal hepatocytes, no steatosis and no bile duct proliferation, but mild inflammation in the group that received IL after Arg (Fig. 6f and g).

[Fig. 3 caption: Hepatic level of NO (μmol/g) and hepatic activity of eNOS (IU/mg) in various animal groups. All values are expressed as the mean ± SEM and compared with: † control group; ‡ IL-treated group; § Arg-treated group; ¶ IL + Arg-treated group; ∥ Arg + IL-treated group. Significance (p < 0.05).]
Discussion
Our previous study confirmed that intravenous administration of 20 % IL resulted in the elevation of hepatic lipid contents [20], which may deposit as lipid droplets in hepatocytes [29]. Hepatic steatosis has become the main cause of liver test abnormalities [30] and increases the sensitivity of the liver to injury, necrosis and inflammation [10]. Hepatic steatosis leads to mitochondrial dysfunction, which plays a key role in the abnormal generation of free radicals [31]. In addition to mitochondrial dysfunction, the cumulative effect of extramitochondrial fatty acid oxidation represents a further increase in oxidative stress and mitochondrial impairment. The microsomal oxidation of fatty acids is catalyzed by CYP2E1, whose expression is enhanced in IL-induced hyperlipidemia, as proved in the study of Ng et al. (2015) on pigs [32].
Free radicals that initiate lipid peroxidation, depletion of antioxidants, destruction of membranes and oxidative damage of proteins lead to increased TBARS and decreased GSH level, GPx activity, SOD activity and albumin level after IL administration [33,34]. This hepatic oxidative stress is tightly associated with TNF-α-mediated hepatic inflammation. TNF-α level was elevated significantly in nonalcoholic fatty liver disease and NASH in both humans and animals [13,35]. Moreover, proinflammatory cytokines activate transcription of CYP2E1 [36] and iNOS. Overexpression of iNOS enhances uncontrolled production of NO, which favors the formation of peroxynitrite (ONOO-) when accompanied by increased superoxide production; this reaction results in reduced NO availability and eNOS inactivation [37,38]. At the same time, the hyperlipidemia that develops after IL administration leads to eNOS deficiency and, subsequently, decreased NO production [39,40]. The resulting liver injury is associated with bile duct proliferation [41]. Also, hyperlipidemia-induced oxidative stress can exert injurious activities in extrahepatic tissues such as the kidney. Scheuer et al. (2000) reported that high-fat-diet-induced hyperlipidemia led to a rise in glomerular and tubulointerstitial generation of ROS, leading to significant chronic tubulointerstitial damage associated with elevation of serum urea and creatinine levels, indicating renal dysfunction [42].

[Fig. 4 caption: Activities (U/mL) of ALT and AST and blood level of albumin (mg/dL) in various animal groups. All values are expressed as the mean ± SEM and compared with: † control group; ‡ IL-treated group; § Arg-treated group; ¶ IL + Arg-treated group; ∥ Arg + IL-treated group. Significance (p < 0.05).]
On the other hand, this study provides evidence that daily administration of Arg, whether before or after IL injection, leads to various beneficial effects. Arg injection resulted in a significant decrease in TBARS level and CYP2E1 activity, indicating that Arg reduced oxidative stress. This is consistent with previous results showing that pre- and post-treatment with Arg lowered oxidative stress in an animal model of hepatotoxicity [43]. Also, we found that this effect of Arg was associated with induction of GSH and of the activities of SOD, GPx and eNOS, as well as the NO level.
The efficacy of Arg may be attributed to its direct antioxidant effect, which is due to the alpha-amino group, a chemical moiety different from that necessary for NO production in an uncoupled status, in addition to its indirect antioxidant effect through the generation of NO [44]. It is reported that NO prevents oxidative stress in tissues, firstly, by interrupting the chain reaction of lipid peroxidation via the formation of non-radical, nitrogen-containing lipid adducts [45] and by inhibiting the catalytic activity of CYP2E1 [46,47]. Secondly, NO may trigger the expression of antioxidant enzymes and novel nitrosative stress resistance genes [48]. Lastly, NO augments the antioxidant potency of GSH [49] by forming S-nitrosoglutathione, which is approximately 100-fold more potent than GSH [50]. GSH is not only a major antioxidant but also upregulates GPx activity, which protects against oxidation and nitration reactions [51].
Therefore, the ROS-scavenging property of Arg, together with its ability to spare endogenous antioxidants and decrease the activity of the hepatic lipogenic enzyme, may inhibit oxidative liver damage and decrease the inflammation status [6,47,52]. Recent studies reported that Arg inhibits the uncontrolled synthesis of TNF-α, thus blocking its deleterious effects [53,54]. The experimental study of Ozsoy et al. (2011) [55] demonstrated that i.p. administration of 500 mg/kg Arg for 7 days significantly prevented the elevation of liver enzymes and decreased bile duct proliferation and leukocyte infiltration, with no necrosis. The studies of Engin et al. (2011) [56] and Nanji et al. (2001) [47] proved that i.p. pre- and post-treatment with Arg provided significant treatment of and protection against liver injury (necroinflammatory lesions, hepatocellular degeneration and fibrosis). Infusion of Arg in experimental animals increased renal plasma flow and glomerular filtration rate; accordingly, animals with gentamicin- or cisplatin-induced nephropathy treated with Arg showed a significant amelioration of renal functions, as indicated by blood urea nitrogen and creatinine levels [57,58]. Moreover, this dose of Arg (500 mg/kg) has been clinically applied for several other human diseases [59-61].
Conclusions
In this study, we have demonstrated that two weeks of pre- and post-treatment with Arg protected and treated the liver against IL-induced NASH. This effect of Arg is most likely attributed to its direct and NO-dependent antioxidant properties.
Competing interest
Marwa M Abu-Serie, Basiouny A El-Gamal, Mohamed A El-Kersh and Mohamed A El-Saadani state that they have complied with all ethical requirements during the preparation of this manuscript and declare that they have no conflict of interest and no financial interest. All applicable international, national, and/or institutional guidelines for the care and use of animals were followed.
Authors' contributions

All authors have contributed significantly and are in agreement with the content of the manuscript. MMA-S, BE-G, and ME-K designed the research plan, executed it, troubleshooted, interpreted the data being published and prepared the article. MMA-S and ME-S participated in following up the experiments and made substantial suggestions in writing and editing the paper.

| 4,615.8 | 2015-10-14T00:00:00.000 | ["Biology"] |
Surface/Interface Engineering for Constructing Advanced Nanostructured Light-Emitting Diodes with Improved Performance: A Brief Review
With the rise of nanoscience and nanotechnologies, and especially the continuous deepening of research on low-dimensional materials and structures, various kinds of light-emitting devices based on nanometer-structured materials are gradually becoming the natural candidates for the next generation of advanced optoelectronic devices with improved performance through engineering of their interface/surface properties. As the dimensions of light-emitting devices are scaled down to the nanoscale, their surface/interface properties become one of the key factors dominating device performance. In this paper, firstly, the generation, classification, and influence of surface/interface states on nanometer optical devices are treated theoretically. Secondly, the relationship between surface/interface properties and light-emitting diode device performance is investigated, and the related physical mechanisms are revealed through classic examples. In particular, we summarize how to improve the performance of light-emitting diodes by using factors such as surface/interface purification, quantum dot (QD) emitting layers, surface ligands, optimization of the device architecture, and so on. Finally, we explore the main influencing factors of research breakthroughs related to surface/interface properties for current and future applications of nanostructured light-emitting devices.
Introduction
The area of nanostructured materials refers to a new system that is composed or assembled according to certain rules from material units at the nanoscale (usually below 100 nm). According to the spatial dimension of the nanometer scale, nanostructured materials can be divided into zero-dimensional (0D) nanometer materials (such as nanoparticles, artificial atoms, clusters, etc.), one-dimensional (1D) nanomaterials (such as nanowires, fine filaments, nanorods, nanotubes, and nanofibers, etc.), and two-dimensional (2D) nanomaterials (such as nanoribbons, nanoscale disks, superlattices, multilayer membranes, etc.). As nanomaterials are in line with the development trends of miniaturization and integration of future devices, they are regarded as basic building blocks for next-generation devices.

The interface refers to the boundary between different substance phases or different kinds of substances. Similar to surface states, interface states occur at the interface where two different substances are in contact, or at the interface of a heterogeneous junction, due to the interruption of the periodic lattice. These interface states can also be introduced by lattice mismatch and interface roughness. In addition, interface states can be generated because the thermal expansions of the two materials do not match. The interface state is a localized electronic state that cannot propagate in the material. Interface states are generally divided into donor and acceptor types. Regardless of the position of the energy level in the forbidden band, if the energy level is neutral when occupied by electrons and positively charged after releasing electrons, it is called a donor interface state. If the energy level is neutral when it is empty and negatively charged when occupied by an electron, it is called an acceptor interface state. The interface state is another key factor determining the performance of nanostructured devices [24-26].
As mentioned above, theoretically studying the influence of the surface/interface on nanomaterials and devices is of great significance for improving the performance of nanostructured devices. Fortunately, with the development of material characterization technology, there are many means and instruments to characterize the properties of surfaces and interfaces, which can be used to validate and advance theories and help scientists understand their impact on nanomaterials and devices. The analysis of surface/interface properties mainly includes surface composition, surface morphology, surface structure, and surface energy states. The main characterization methods used in experiments, and the information they provide, are shown in Table 1 [27-31]. Through these characterization methods, we can gain a deep and comprehensive understanding of the surface or interface properties of nanomaterials and devices, providing useful guidance for the further exploitation of surface/interface properties. Figure 2 contains three-dimensional images of the surface/interface topography taken using a confocal microscope in a far-field configuration.

Table 1. Surface/interface characterization of nanomaterials and devices.
Influence of Surface and Interface on LED and Optimization Method
In the last few years, tremendous progress has been achieved in increasing the efficiencies, stabilities, and lifetime of light-emitting devices [32]. In this section, the application of nanostructured materials in LED devices will be introduced according to the dimension of materials, especially the influence of surface/interface properties on materials and devices, and the optimization mechanism will be summarized.
Influence of Surface and Interface of 0D Nanomaterials on LED and Optimization Method
Quantum dot-based LEDs (QD-LEDs) have attracted considerable attention owing to their high color purity, thermal stability, and size-dependent emission wavelength tunability, making them suitable candidates for next-generation solid-state lighting [13]. As noted in the introduction, QDs have a very large surface-to-volume ratio due to their small size. The surface/interface composition and structure of QDs can significantly affect the photoluminescence (PL) emission, charge injection, and charge transport. Thus, the surface/interface engineering of QDs plays a vital role in the realization of high-performance QD-LEDs. In order to reduce surface/interface effects and improve device performance, advances related to surface purification, different functional layers, the design of nanostructures, and different device architectures have been made in the engineering of nanostructures and surfaces of QDs. The zero-dimensional material names, material structures, and optical properties of LED devices are summarized in Table 2.
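The "very large surface-to-volume ratio" can be made quantitative with simple geometry. The sketch below, with an assumed ~0.3 nm surface shell (roughly one bond length; an illustrative value, not from this review), shows how quickly the surface fraction grows as a spherical QD shrinks:

```python
# Minimal sketch: for a sphere, S/V = 3/r, and the fraction of the volume
# lying within a thin surface shell of thickness t is 1 - ((r - t)/r)^3.
def surface_fraction(radius_nm: float, shell_nm: float = 0.3) -> float:
    return 1.0 - ((radius_nm - shell_nm) / radius_nm) ** 3

for r in (1.5, 2.0, 4.0, 10.0):
    print(f"r = {r:4.1f} nm: S/V = {3.0 / r:.2f} nm^-1, "
          f"surface shell fraction = {surface_fraction(r):.0%}")
# At r = 1.5 nm nearly half the volume sits in the outermost shell,
# which is why surface states dominate small-QD optical properties.
```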
Surface Purification Methods
Inevitably, the synthesis of QDs introduces many types of impurities, including metal carboxylate precursors, free ligands, and non-crystalline side products, which significantly diminish the stability, radiative efficiency, and potential applications of QDs in the field of nano-optoelectronic devices. Therefore, surface purification of QDs is necessary to improve their optical properties and device performance. Yang et al. isolated colloidal QDs from solution by the addition of a non-solvent [33]. The most common solvent and non-solvent pair was toluene and methanol, although other combinations, including trichloromethane or hexanes as the solvent and methanol or acetone as the non-solvent, have often been used in purification processes, as reported previously [34]. In addition, the hexane-methanol extraction process can also be used to purify the surface of QDs [35]. However, the non-solvent and hexane-methanol extraction approaches did not completely remove raw materials from the colloidal QD solution. Yang et al. developed a new purification scheme in which trichloromethane was selected as the solvent additive to enhance the solubility of the cadmium sources, while acetonitrile was selected as the solvent additive to precipitate the QDs, which was more effective than methanol [36]. This improved purification scheme can effectively remove residual impurities from colloidal QD solutions, including metal carboxylic acid precursors, non-volatile solvents, and non-crystalline by-products; it thus provides a good foundation for the surface engineering of quantum dots and more opportunities for their optoelectronic application. To address the lifetime issue, Cao et al. tackled the hole barrier directly by tailoring the band-energy levels of QDs to reduce the injection barriers and improve device performance. A high-quality colloidal QD usually comprises an inorganic semiconductor core and a semiconductor shell with a wider energy bandgap to passivate dangling bonds on the core surface and confine the electron and hole wavefunctions for good luminescent properties and reliability [37].
QDs Emitting Layer Method
The QD-emitting layer method, especially involving perovskite QDs, has significant potential and has been extensively studied for light-emitting devices. Organo-inorganic lead halide perovskite is a direct-bandgap semiconductor material with many excellent properties and is attractive in electroluminescence (EL) applications [38-42]. Many high-efficiency perovskite light-emitting diodes, such as a green LED with peak external quantum efficiency (EQE) ≈ 8% [43] and a near-infrared LED with peak EQE of 11.7% [41], have nanoscale perovskite structures as their photon emission core. Kovalenko et al. synthesized monodisperse CsPbX3 perovskite quantum dots (X = Cl, Br, I, or Cl/Br and Br/I in mixed halide systems) [44,45]. These all-inorganic perovskite quantum dots cover the entire visible region with very high luminescence stability and photoluminescence quantum yields (PLQYs), and their gamut covers 140% of the NTSC standard. In addition, compared with organic-inorganic hybrid perovskite materials, all-inorganic perovskite QDs have better environmental stability [40,46,47].
Surface Ligand Method
The developments of surface coordination chemistry allow facile ligand-displacement reactions, which enable the rational design of surface ligands for QDs used in LEDs [48]. Shen et al. reported high-efficiency blue-violet QD-LEDs using high-quantum-yield ZnCdS/ZnS graded core-shell QDs with proper surface ligands. Such ligand exchange results in an even greater increase in hole injection into the QD layer, thus improving the overall charge balance in the LEDs and yielding a 70% increase in quantum efficiency [11]. The following year, Zhong et al. developed an in situ ion exchange method to improve the performance of QD-LED devices [49]. The in situ ligand exchange process is shown in Figure 3; the results show that this method is very effective, and the photoluminescence (PL) quantum yields are almost unchanged after the ligand exchange process. As a result, significant device performance improvements have been demonstrated. Li et al. demonstrated a highly efficient solution-processed CsPbBr3 QD-LED through balancing surface passivation and carrier injection via ligand density control [50]. Compared to surface ligand exchange, control of the ligand density on QD surfaces is a more suitable strategy to promote the performance of CsPbX3 QLEDs. In general, the solubility of QDs decreases with decreasing ligand size. On the other hand, the spatial separation between quantum dots caused by surface ligands affects the colloidal stability of the quantum dot solution [51]. In 2016, Peng et al. used "entropy ligands" [52,53] to improve the performance of nanocrystalline optoelectronic devices. Pan et al. have shown that the charge transfer characteristics of QD films can be regulated by using specially designed polymer ligands with colloidal and perovskite groups instead of insulating ligands [54,55]. Therefore, a QD/polymer hybrid can be used as an important candidate material for the emitting layer of QLEDs [56]. Brown et al. demonstrated that the ligand-induced generation of surface dipoles is an effective way to control the absolute energy levels of QD films [57]. It was found that the strength of the surface dipole induced by the ligands can be regulated through the chemical binding group and dipole moment of the ligands, thereby controlling the surface energy levels. This method enables fine tailoring of the band-energy alignment of PbS quantum dots, thus improving the optical performance of the device [58]. Yang et al. used this method to make QLEDs. Combined with the size control of QDs, the bandgap and band positions of a PbS QD film can be fine-tuned via the quantum confinement effect, so that QD films can serve as electron transport layers (ETLs), hole transport layers (HTLs), and the emitting layer of LEDs [59].

[Figure 3, panels e and f: the power efficiency and current efficiency as a function of the current density. Reproduced with permission from reference [49].]
Core/Shell Interface Structure Method
Although surface purification is an important way to reduce impurities on the surface of QDs, the purification process may also introduce electron lone pairs or vacancies, which serve as surface traps by capturing excitons, thereby leading to non-radiative recombination. QDs often suffer from surface-related trap states, which act as non-radiative de-excitation channels for photogenerated charge carriers, thus decreasing their PL. The photochemical and thermal stability of quantum dots can be improved by covering them with an epitaxial shell layer and optimizing its growth parameters, and non-radiative recombination can thereby be effectively reduced. Therefore, core-shell structured QDs are widely used to improve the performance of QLEDs. Bawendi et al. obtained high-quality CdSe/CdS core/shell QDs by using octanethiol and cadmium oleate as precursors and maintaining an appropriate growth rate at 310 °C. The obtained core/shell QDs have high uniformity; PLQYs can reach 97%, and the full width at half maximum (FWHM) of the PL peak is only 67.1 meV [60]. Pal et al. studied the optical properties of CdSe/CdS core/shell quantum dot films with different shell thicknesses and compared them with the luminescence properties of the corresponding quantum dot solutions. The results show that the luminescence properties of the core/shell materials are markedly improved, and the redshift of the PL spectrum gradually weakens with increasing CdS shell thickness [61]. Li et al. showed that ZnCdSe-based core/shell QDs featuring a ZnS shell of 10 monolayers offered the highest external quantum efficiency of ~17%, which compares favorably with the highest efficiency of green QLEDs with traditional multilayered structures [62]. The epitaxy of a gradient alloy shell on a core crystal can produce quantum dots with gradient band structures. Klimov et al. have shown that "smoothing" the shape of the confinement potential through interfacial alloying of the core-shell interface can effectively suppress Auger recombination in CdSe/CdS QDs [63,64]. Peng et al. prepared a series of small, non-blinking and bleach-resistant high-quality CdSe/CdxZn1-xS and CdSeyS1-y/CdxZn1-xS core/shell QDs. Quantum dots with a continuously tunable emission range and high PLQYs have important potential applications in LEDs [65].
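A linewidth quoted in energy units can be related to the more familiar wavelength width. The sketch below converts the 67.1 meV FWHM mentioned above, with a 630 nm emission peak assumed purely for illustration (the peak wavelength is not stated in the passage):

```python
# Minimal sketch: converting a PL linewidth from energy units (meV) to a
# wavelength width (nm) via d(lambda) = lambda^2 * dE / (h*c), valid for
# narrow lines. The 630 nm center is an assumed, illustrative value.
HC_EV_NM = 1239.84  # h*c in eV*nm

def fwhm_nm(center_nm: float, fwhm_mev: float) -> float:
    return center_nm ** 2 * (fwhm_mev / 1000.0) / HC_EV_NM

print(f"{fwhm_nm(630.0, 67.1):.1f} nm")  # ~21.5 nm at an assumed 630 nm peak
```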
Optimization of LED Device Interface Architecture Methods
Device architecture using different charge transport layers (CTLs) and interfacial engineering is another important method to improve the performance of luminescent devices. In a typical QD-LED, apart from the QD-emitting layer, the charge-injection layers (CILs) and CTLs also contribute significantly to the overall device performance. These layers can be designed to favor balanced carrier injection, charge transport, and radiative recombination of excitons in the QD-emitting layer. Therefore, interfacial engineering between the QD-emitting layer and the CTL plays a critical role in enhancing the device performance of QD-LEDs. Kim et al., for example, reported large gains in EQE, the highest efficiency values ever reported in flexible QLEDs. Apart from engineering the compositions, the size, structure, and shape control of QDs may provide additional benefits regarding accessibility in band-structure engineering and enhance the out-coupling efficiency in QD-LEDs. Nam et al. synthesized double-heterojunction nanorods consisting of two offset and staggered bandgaps, which offered independent control over the electron- and hole-injection processes in devices [12]. The out-coupling efficiency was significantly enhanced because the nanorods were assembled parallel to the substrates. More importantly, the anisotropic shape introduces a transition dipole along the rod axis. Liu et al. used a balanced charge-injection process to enhance the external quantum efficiency of nonblinking blue QD-LEDs [67]. Using nonblinking ZnCdSe/ZnS/ZnS QDs as the emissive layer, highly efficient blue QD-LEDs were prepared. The charge-injection balance within the QD active layer was improved by introducing a nonconductive layer of poly(methyl methacrylate) (PMMA) between the electron transport layer (ETL) and the QD layer, where the PMMA layer acts as a coordinator to impede excessive electron flux. The optimized LED device shows excellent performance, such as a maximum luminance of 14,100 cd/m², current efficiency of 11.8 cd/A, and external quantum efficiency (EQE) of 16.2%.
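These figures of merit are not independent. The back-of-the-envelope sketch below relates current efficiency (cd/A) to EQE under the idealizations of monochromatic, Lambertian emission; the 470 nm peak and V(470 nm) ≈ 0.091 are illustrative assumptions, not values from ref. [67]:

```python
# Minimal sketch: EQE ~= pi * CE * q / (Km * V(lambda) * E_photon) for an
# idealized monochromatic, Lambertian LED. Real EQE extraction uses the full
# emission spectrum and angular profile, so some mismatch is expected.
import math

Q_E = 1.602e-19   # elementary charge, C
KM = 683.0        # maximum luminous efficacy, lm/W

def eqe_from_ce(ce_cd_a: float, wavelength_nm: float, v_lambda: float) -> float:
    e_photon_j = (1239.84 / wavelength_nm) * Q_E   # photon energy in joules
    return math.pi * ce_cd_a * Q_E / (KM * v_lambda * e_photon_j)

# 11.8 cd/A from the blue QD-LED above; V(470 nm) ~= 0.091 (photopic curve)
print(f"EQE ~ {eqe_from_ce(11.8, 470.0, 0.091):.2f}")
# ~0.23 here vs. the reported 0.162; the gap reflects the monochromatic and
# Lambertian idealizations, not an error in the reported value.
```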
Influence of Surface and Interface of 1D Nanomaterials on LED and Optimization Method
1D nanomaterials possessing natural structures that can act as resonant cavities are ideal platforms to realize laser diodes and light-emitting diodes. Various nanostructures such as nanowires, nanotubes, and nanobelts have been synthesized to fabricate optoelectronic devices. In order to reduce the surface/interface effects and improve the device performance, some advances such as surface or interface structure design, interface control, and interface modification and modulation have been made in the engineering of 1D nanomaterial. The one-dimensional material name, material structure, and optical properties of LED devices are summarized in Table 3.
Surface or Interface Structure Design Methods
Similar to 0D nanomaterials, material and device structure design is a common method to suppress the surface states of 1D nanomaterials. For example, in order to reduce the reflection and improve the transmission of light, nanostructure arrays have been developed as effective antireflective surfaces [68]. By controlling the surface wetting properties of a polydimethylsiloxane release template, Liu et al. were able to pattern a random AgNW network with uniform conducting properties [69]. Modifying the core-shell structure is another method to overcome the total internal reflection at the semiconductor-air interface, improve the escape probability of the light, and consequently increase the light extraction efficiency [70,71], as shown in Figure 4. The teams of Zhang, Yao, and Zheng used the high directionality of waveguide-mode transmission and the efficient energy transfer of localized surface plasmon (LSP) resonances, respectively, to increase the spontaneous emission rate of LEDs [72-74].
Interface Control 1D Nanomaterial Methods
ZnO has inspired considerable attention for developing ultraviolet (UV) LEDs and LDs due to its wide direct bandgap of 3.37 eV and high exciton binding energy of up to 60 meV. However, in the emission process of 1D ZnO nanometer materials, it is difficult to avoid extra emission from interface states. Hence, interface design and optimization are necessary to realize efficient UV EL; both a high-quality active layer and a good interface at the p-i-n junction are critical factors for realizing a pure UV LED. You et al. successfully prepared vertically aligned ZnO NRs with high crystal and optical quality. Nanostructured LED arrays were constructed by directly bonding ZnO NRs onto an AlN-coated p-GaN wafer. This simple and feasible method can effectively suppress the interface defects induced by buffer layer formation [75,76].
Interface Modification and Modulation 1D Nanomaterial Methods
As we all know, total internal reflection (TIR) will occur at the epitaxial layer/substrate interface and the substrate/air interface because of the large difference in refractive indices. Surface/interface modification and modulation are effective methods to avoid TIR. Guo et al. reported AlGaN-based 282-nm LEDs grown on nanopatterned sapphire substrates (NPSS), exhibiting 98% better performance relative to those grown on flat sapphire substrates [77]. The AlN epitaxial lateral overgrowth on patterned substrates or templates can not only improve the crystal quality of the overgrown epitaxial layers but also form embedded air voids in the AlN layer; the effective refractive index around the interface thereby lies between those of the AlN layer and the substrate. The internal quantum efficiency (IQE) enhancement is estimated to be 60%, and the light-extraction efficiency (LEE) enhancement would be more than 20%. Lee et al. fabricated DUV LEDs on NP-AlN/sapphire templates, with air surrounding the AlN nanorods. Light emitted from the multiple quantum wells can propagate vertically by passing through the embedded nanostructures, and thus the TIR is avoided [78,79].
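The severity of TIR is easy to quantify. A minimal sketch of the critical angle and the single-face escape-cone fraction follows, with n = 2.5 used as an assumed GaN-like refractive index (an illustrative value, not taken from refs. [77-79]):

```python
# Minimal sketch: critical angle theta_c = asin(n_out / n_in) and the
# escape-cone fraction (1 - cos(theta_c)) / 2 for an isotropic emitter
# radiating through a single flat top surface, ignoring reflections.
import math

def escape_cone_fraction(n_semiconductor: float, n_outside: float = 1.0) -> float:
    theta_c = math.asin(n_outside / n_semiconductor)  # critical angle, rad
    return (1.0 - math.cos(theta_c)) / 2.0

n = 2.5
print(f"theta_c = {math.degrees(math.asin(1.0 / n)):.1f} deg, "
      f"escape fraction = {escape_cone_fraction(n):.1%}")  # ~23.6 deg, ~4.2%
# Only a few percent of the light escapes directly, which is why patterned
# substrates and embedded air voids help light extraction so much.
```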
Core/Shell Structure Methods
Due to the large surface-to-volume ratios of 1D nanomaterials, the surface plays a key role in the optical properties of 1D nanomaterial LEDs [80-90]. Surface states cause a Fermi-level pinning effect, and the resulting transverse electric field and associated surface non-radiative recombination are very unfavorable for one-dimensional nanostructured LED devices [80-82]. Optimized growth of core-shell structures can reduce non-radiative recombination and improve the photo-, electrical, and photochemical stability of 1D nanomaterials. Among these, InGaN/GaN and InGaN/AlGaN core-shell nanowires have attracted extensive attention due to their important applications in wavelength-tunable and UV LED devices. Ledig et al. characterized in detail the structural, optical, and electrical properties of InGaN/GaN core-shell LEDs [83].
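For a sense of why the surface dominates in these systems, note that an idealized cylindrical nanowire of radius r has a surface-to-volume ratio of 2/r. This is a generic geometric estimate, not a figure from the cited works:

```latex
\frac{S}{V} = \frac{2\pi r L}{\pi r^2 L} = \frac{2}{r},
\qquad
r = 25\,\mathrm{nm} \;\Rightarrow\; \frac{S}{V} = 8\times 10^{7}\,\mathrm{m^{-1}}
```

At such ratios, a large fraction of atoms sits at or near the surface, so surface states and Fermi-level pinning can dominate the recombination dynamics unless a shell passivates them.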
The 3D core-shell design of GaN-based LEDs turns out to have many advantages over conventional planar LED counterparts. In 2016, Müller et al. prepared InGaN/GaN core-shell nanowires by selective-area metal-organic vapor-phase epitaxy and studied the effect of InGaN thickness on the cathodoluminescence spectra [84]. The following year, InGaN/AlGaN and AlGaN core-shell tunnel-junction nanowire LEDs were prepared by Philip's and Sadaf's research groups, respectively [85,86]. The results show that GaN-based nanowire LEDs can achieve relatively high internal quantum efficiency from the deep-green to red wavelength range. In 2018, Sim et al. designed highly efficient white LEDs using a 3D InGaN/GaN structure [87]; their experiments demonstrated that white LEDs based on dodecagonal ring structures provide a platform for high-efficiency warm-white light-emitting sources. Recently, high-performance InGaN/GaN and InGaN/AlGaN nanowire heterostructure LEDs have been prepared by different research groups [88-94]. A schematic of a nanowire LED with an InGaN/AlGaN core-shell heterostructure is shown in Figure 5. These results show that by controlling and optimizing the core-shell structure, surface defects and surface states can be effectively suppressed and a high-quality crystal structure achieved, significantly improving LED device performance.
Influence of the Surface and Interface of 2D Nanomaterials on LEDs and Optimization Methods
The application of 2D materials in LEDs mainly involves the synthesis of various thin films and the preparation of devices with heterostructure LED architectures [95-98]. As with zero- and one-dimensional nanomaterials, it is very important to quantify the influence of the 2D surface/interface on LED devices and to adopt effective methods to suppress detrimental surface/interface recombination [99].
Surface Modification and Interface Engineering Methods
Carbon nanotubes and graphene are used for surface/interface modification or as protecting layers to optimize LED device performance [100-102]. Due to the high refractive index of GaAs, the light-extraction efficiency of GaAs-based LED devices is limited, and nanoscale surface modification is an effective method to enhance their output power. Jin et al. reported a simple method for nanostructure fabrication using super-aligned multiwalled carbon nanotube (SACNT) thin films as etching masks in top-down etching processes [100]. The morphology of the carbon nanotube (CNT) networks can be transferred to the GaAs substrate at the macro scale, and the nanostructured SACNT network morphology significantly increases the optical output power of GaAs devices compared with planar GaAs LEDs. Graphene is a useful material for conducting electrodes in LED applications [101,102]. Seo et al. studied the impact of graphene quality on the performance of a hybrid transparent conducting electrode (TCE) of graphene on silver nanowires (AgNWs) in GaN-based UV LEDs. The hybrid electrode using two-step graphene showed good ambient stability, with a sheet resistance that remained stable over time, and the UV LED using this TCE offered a low forward voltage, an increase in EL intensity, and a reduction of the efficiency droop. In addition, surface or interface passivation and the modulation of LED devices based on amine and perovskite materials have been studied [103-106]. Yang et al. made a green LED based on a quasi-2D perovskite composition and phase with surface passivation [104]. The surface passivation is realized by coating molecules of trioctylphosphine oxide onto the surface of the perovskite thin film. The optimized quasi-2D perovskite LED reaches a current efficiency of 62.4 cd A⁻¹ and an external quantum efficiency of 14.36%, as shown in Figure 6. All these results show that surface-sensitive characterization is expected to help reveal the role that the interface plays in various devices and to identify specific strategies to regulate and tune the properties of the surface/interface, leading to enhanced device performance.
Interface Structure Design Methods
For 2D materials, core-shell structures, 3D pixel configurations, back-end-of-line materials, and device structures have been designed to optimize device performance [107-113]. For example, Zhang et al. inserted a 4-nm Si3N4 layer between the ZnSe core and the CdS shell of p-ZnSe/n-CdS core-shell heterojunctions to passivate interface defects and reduce the recombination and the saturation current [107]. Liu et al. studied the band states at the crystallized interface between GaN and SiNx and the influence of interface roughness on the material, as shown in Figure 7 [109]. Zhang et al. designed a three-dimensional reflective concave structure coated with a high-refractive-index material and improved OLED display pixels by embedding the OLED into this structure; it defines the out-coupling region of the light so that much of the internally trapped light is emitted into the filled region and redirected outward. Optical simulations show that with the optimized structure and a highly transparent top-electrode material, the light-extraction efficiency can be improved by ≈80% [110]. Bulling and Venter used two methods to improve light-extraction efficiency: the design of an improved back-end-of-line (BEOL) light-directing structure, and surface texturing. Their optimized pipe-like BEOL light-directing structure yielded a 1.35-fold improvement in luminance and a 1.38-fold improvement in light-extraction efficiency over the previously designed parabolic BEOL structure, and it also significantly improved the directionality of the light-emission radiation pattern. Once internal TIR is reduced and light propagation improved, surface texturing techniques can further raise the light-extraction efficiency [108]. Recently, Lei et al. used surface texture and the LSP coupling effect to enhance the light-extraction efficiency of InGaN/GaN LEDs [113].
Conclusions and Perspective
In summary, we have introduced the effects of the surface/interface properties of nanostructured materials on light-emitting diodes, taking representative II-VI, III-V, IV, and perovskite nanomaterials as examples. This review shows that the surface/interface properties of nanostructured materials are an important factor in device performance, affecting carrier generation, recombination, separation, collection, and other dynamic processes. Most importantly, surface or interface engineering (such as surface purification, surface ligands, or the introduction of core/shell structures) can remarkably improve the performance of optoelectronic devices. It is therefore crucial to continue studying the surface/interface of nanostructured light-emitting devices from the microscopic perspective and to understand in depth how it correlates with device properties. Even though progress in improving the device performance of LEDs is very encouraging, several shortcomings and challenges still stand in the way of commercialization. The following aspects are critical to improving device performance and accelerating the commercialization of LEDs.
1. Reducing Surface/Interface Recombination
When the atomic lattice is abruptly terminated at a surface/interface, unsatisfied dangling bonds (or foreign bonds) introduce electronic energy levels inside the bandgap that enhance non-radiative electron-hole recombination at the surface/interface by acting as stepping stones for charge-carrier transitions between the conduction and valence bands. Achieving high photoluminescence quantum efficiency (PLQE) in operational device architectures has remained elusive, because contacting the nanomaterials with extracting contacts generally induces new non-radiative loss pathways at the surface, resulting in a decrease in the PLQE and PL lifetime.
2. The Choice of Appropriate Surface Ligands
Surface-ligand methods play an important role in improving the properties of nanomaterials and devices. On the one hand, surface ligands can bind effectively to surface atoms, passivating surface defects and reducing the surface states of the material. On the other hand, the intrinsic insulating nature of surface ligands impedes efficient charge injection and transport in the emitting layer, reducing the efficiency and performance of nanostructured light-emitting devices. Balanced exciton recombination and charge injection/transport are therefore key to the effective use of surface ligands in nanostructured LEDs.
3. In-Depth Understanding of the Interactions of Nanomaterials with CTLs
Exciton quenching of nanomaterials is generally considered critical to the performance of nanostructured LEDs. In such devices, nanomaterials can acquire a net charge through interactions with the charge-transport layers (CTLs), and the excess charges can quench excitons by non-radiative energy transfer to the CTLs or to defects within the CTL films, diminishing device efficiency. It is therefore necessary to develop effective characterization techniques to study the interactions of nanomaterials with different CTLs and to gain a more comprehensive understanding of the underlying mechanisms.
4. Reducing the Total Internal Reflection (TIR) Effect
As mentioned earlier, TIR occurs at the epitaxial-layer/substrate and substrate/air interfaces because of the large differences in refractive indices. A large fraction of photons is trapped inside the LED structure and finally absorbed after multiple internal reflections, so disrupting TIR at the interfaces is essential to achieving highly efficient LEDs. How to use highly reflective techniques and surface/interface modification to mitigate the influence of TIR on nanostructured light-emitting devices remains a difficult task.
"Physics"
] |
Inequalities Involving Essential Norm Estimates of Product-Type Operators
Consider the open unit disk D = {z ∈ C : |z| < 1} in the complex plane C, ξ a holomorphic function on D, and ψ a holomorphic self-map of D. For an analytic function f, the weighted composition operator is denoted and defined as follows: (W_{ξ,ψ}f)(z) = ξ(z)f(ψ(z)). We estimate the essential norm of this operator from Dirichlet-type spaces to Bers-type spaces and to Bloch-type spaces.
Introduction and Preliminaries
Consider the open unit disk D = {z ∈ C : |z| < 1} in the complex plane C. Let H(D) denote the class of all analytic functions on D, S(D) the class of all holomorphic self-maps of D, and H^∞ the space of all bounded holomorphic functions on D. Let ξ ∈ H(D) and let ψ be a holomorphic self-map of D. For z ∈ D, the composition operator and the multiplication operator are defined, respectively, by

(C_ψ f)(z) = f(ψ(z)) and (M_ξ f)(z) = ξ(z)f(z).

The weighted composition operator is the product-type operator W_{ξ,ψ} = M_ξ C_ψ. Clearly, this operator generalizes both the composition and the multiplication operator: for ξ ≡ 1 it reduces to C_ψ, and for ψ(z) = z it reduces to M_ξ. It is a linear transformation of H(D) given by

(W_{ξ,ψ} f)(z) = ξ(z)f(ψ(z)) = (M_ξ C_ψ f)(z), for f ∈ H(D) and z ∈ D.

The basic aim is to give an operator-theoretic characterization of these operators in terms of function-theoretic properties of their inducing functions. The boundedness and compactness of weighted composition operators have been studied on various spaces of holomorphic functions over various domains; for more details, see [1-14] and the references therein. We say that a linear operator is bounded if the image of every bounded set is a bounded set, and compact if it maps bounded sets to sets whose closure is compact.

For each α > 0, the weighted Bloch space B^α consists of all f ∈ H(D) for which the seminorm sup_{z∈D} (1 − |z|²)^α |f′(z)| is finite. This space forms a Banach space with the natural norm

‖f‖_{B^α} = |f(0)| + sup_{z∈D} (1 − |z|²)^α |f′(z)|.

For α = 1 this space reduces to the classical Bloch space.

A continuous function ω : D → (0, ∞) is called a weight. For z ∈ D, the weight ω is said to be radial if ω(z) = ω(|z|), and ω is a standard weight if lim_{|z|→1⁻} ω(z) = 0. For a weight ω, the Bloch-type space B_ω is defined by

B_ω = {f ∈ H(D) : sup_{z∈D} ω(z)|f′(z)| < ∞},

and the little Bloch-type space B_{ω,0} is the closure of the set of polynomials in B_ω. Both B_ω and B_{ω,0} form Banach spaces with the norm

‖f‖_{B_ω} = |f(0)| + sup_{z∈D} ω(z)|f′(z)|.

For more information about these spaces, one may refer to [1-3, 5, 6, 15, 16] and the references therein.

Likewise, for a weight ω, the Bers-type space A_ω is defined by

A_ω = {f ∈ H(D) : sup_{z∈D} ω(z)|f(z)| < ∞}.

It is a non-separable Banach space with the norm ‖·‖_{A_ω}. The closure of the set of polynomials in A_ω forms a separable Banach space, denoted A_{ω,0}. These spaces and their properties are discussed in many papers, including [3,15,16] and the references therein.

The Dirichlet space is defined by

D = {f ∈ H(D) : ∫_D |f′(z)|² dA(z) < ∞},

where dA(z) denotes the normalized Lebesgue area measure on D. With the norm ‖f‖²_D = |f(0)|² + ∫_D |f′(z)|² dA(z), it is a Hilbert space.

Consider a right-continuous, increasing function K : [0, ∞) → [0, ∞); throughout this paper, K serves as a weight function. With a weight function K, the Dirichlet-type space D_K is given by

D_K = {f ∈ H(D) : ∫_D |f′(z)|² K(1 − |z|²) dA(z) < ∞},

and it forms a Hilbert space with the norm ‖f‖²_{D_K} = |f(0)|² + ∫_D |f′(z)|² K(1 − |z|²) dA(z). For K(t) = t^p, 0 ≤ p < ∞, the space D_K is the usual Dirichlet-type space D_p; the case p = 0 gives the classical Dirichlet space D, and p = 1 gives the Hardy space H². These spaces have been studied widely in various papers.
For example, Aleman [17] obtained that each element of D_K can be written as a quotient of two bounded functions in D_K. Kerman and Sawyer [18], under some conditions on the weight function K, characterized Carleson measures and multipliers of D_K in terms of a maximal operator.
The Möbius-invariant space generated by D_K is denoted by Q_K. The space Q_K contains those functions f ∈ H(D) which satisfy

sup_{a∈D} ∫_D |f′(z)|² K(1 − |σ_a(z)|²) dA(z) < ∞,

where σ_a(z) = (a − z)/(1 − āz) is the Möbius transformation of D. Wulan and Zhu [19] characterized lacunary series in the Q_K space under some conditions on the weight function K. Furthermore, Wulan and Zhou [20] characterized the space Q_K in terms of fractional-order derivatives and established a relationship between Morrey-type spaces and the Q_K space in those terms. In the study of Q_K spaces, two integral conditions on K, referred to below as (1) and (16), play a very important role. Let M(D_K) be the class of multipliers of D_K, that is,

M(D_K) = {g ∈ H(D) : gf ∈ D_K for all f ∈ D_K}.

Bao et al. [21] characterized the interpolating sequences for M(D_K), under certain conditions on the weight function K; they also obtained a corona theorem, a ∂̄-equation, and a corona-type decomposition theorem on M(D_K). For more details, see [9, 15, 21-25] and the references therein.
From [26], one can see that conditions (1) and (16) each impose growth restrictions on K. In particular, condition (16) implies that K(2t) ≈ K(t) for 0 < t < 1, and that there exists a sufficiently small C > 0 for which t^{−C}K₁(t) is increasing and K₂(t)t^{C−1} is decreasing. For more information about the weight function K, one may refer to [19-21]. The criteria of boundedness as well as compactness have been discussed in many papers. Recently, Gürbüz [27] studied the boundedness of generalized commutators of rough fractional maximal and integral operators on generalized weighted Morrey spaces, and in [28] he investigated generalized weighted Morrey estimates for the boundedness of Marcinkiewicz integrals with rough kernel associated with the Schrödinger operator and their commutators. Furthermore, in [29], Gürbüz studied the behavior of multi-sublinear fractional maximal operators and rough multilinear fractional integrals on both product L^p and weighted L^p spaces, and in [30] he obtained the boundedness of the variation and oscillation operators for families of multilinear integrals with Lipschitz functions on weighted Morrey spaces. Among other results, in [3] we obtained the following results about the boundedness and compactness of W_{ξ,ψ}.
Theorem 1.
Let ω and K be two weight functions, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Then the operator W_{ξ,ψ} : D_K → B_ω is bounded if and only if the following conditions hold:
Theorem 2. Let ω be a standard weight, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Let K be a weight function.
Assume that W_{ξ,ψ} : D_K → B_ω is bounded. Then the operator W_{ξ,ψ} : D_K → B_ω is compact if and only if the following conditions hold:

Theorem 3. Let ω be a weight, K a weight function, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Then the operator W_{ξ,ψ} : D_K → A_ω is bounded if and only if the following condition holds:

Theorem 4. Let ω be a standard weight, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Let K be a weight function, and assume that the operator W_{ξ,ψ} : D_K → A_ω is bounded.

The aim of this paper is to provide some estimates of the essential norm of the operator W_{ξ,ψ}. Assume that T : X₁ → X₂ is a bounded linear operator between Banach spaces X₁ and X₂. The essential norm of the operator T is denoted and defined by

‖T‖_{e,X₁→X₂} = inf{‖T − E‖_{X₁→X₂} : E : X₁ → X₂ is compact},

where ‖·‖_{X₁→X₂} is the operator norm. In other words, the essential norm is the distance from the compact operators E mapping X₁ into X₂ to the bounded linear operator T : X₁ → X₂. If X₁ = X₂, the essential norm is simply denoted by ‖·‖_e; for an unbounded linear operator T, it is understood to be infinite. Since the class of all compact operators is a closed subset of the class of all bounded operators, the operator T is compact if and only if ‖T‖_{e,X₁→X₂} = 0. Thus, an estimate of the essential norm yields the compactness of the operator. Results on the essential norms of various operators, such as multiplication, composition, differentiation, weighted composition, generalized weighted composition, and their different combinations, are studied in numerous research papers; some of the references are [31-37].

This study is organized in a systematic way. The introduction and literature review are kept in Section 1. In Section 2, we estimate the essential norm of the operator W_{ξ,ψ} : D_K → B_ω, and in Section 3 that of the operator W_{ξ,ψ} : D_K → A_ω. Throughout the paper, for any two positive quantities a and b, the notation a ≲ b means that a ≤ Cb, where C is some positive constant.
The value of the constant C may change from one occurrence to another. We write a ≈ b if a ≲ b and b ≲ a.
Essential Norm of the Weighted Composition Operator from Dirichlet-Type Space to Bloch-Type Space

Theorem 5. Let ω be a standard weight, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Let K be a weight function. Assume that W_{ξ,ψ} : D_K → B_ω is bounded. Then the essential norm of W_{ξ,ψ} : D_K → B_ω admits the following two-sided estimate:
Proof. At first, we show the lower estimate. For z ∈ D, define a function h_z with coefficients a₀ = 2 + ε/2 and a₁ = −(1 + ε/2), for a suitable ε > 0. It is easily checked that h_z ∈ D_K and ‖h_z‖_{D_K} ≲ 1 for all z ∈ D, and by calculation h_z′(z) = 0. Furthermore, h_z converges to zero on compact subsets of D as |z| → 1. Hence, for any compact operator E : D_K → B_ω and any sequence (ζ_n) in D such that |ψ(ζ_n)| → 1, we obtain a lower bound for ‖W_{ξ,ψ} − E‖; taking lim sup as |ψ(ζ_n)| → 1 on both sides of this inequality gives the first estimate.

Again, for z ∈ D, define another function k_z with coefficients b₀ = 1 and b₁ = −1. In a similar manner, one checks that k_z ∈ D_K with ‖k_z‖_{D_K} ≲ 1 for all z ∈ D, and by calculation k_z(z) = 0. Furthermore, k_z converges to zero on compact subsets of D as |z| → 1. Thus, for any compact operator E : D_K → B_ω and any sequence (ζ_n) in D such that |ψ(ζ_n)| → 1, we obtain a second lower bound; taking lim sup as |ψ(ζ_n)| → 1 on both sides and applying the definition of the essential norm establishes the lower estimate (34).

Next, we prove the upper estimate. For δ ∈ [0, 1), consider E_δ : H(D) → H(D), defined by (E_δ f)(z) = f(δz), so that E_δ f = f_δ. Clearly, E_δ is compact on D_K and ‖E_δ‖_{D_K→D_K} ≤ 1. Consider a sequence (δ_n) ⊂ (0, 1) satisfying δ_n → 1 as n → ∞. Then, for all n ∈ N, the operator W_{ξ,ψ}E_{δ_n} : D_K → B_ω is compact, and by the definition of the essential norm it suffices to estimate lim sup_{n→∞} ‖W_{ξ,ψ} − W_{ξ,ψ}E_{δ_n}‖.

Let f be a function in D_K satisfying ‖f‖_{D_K} ≤ 1. Clearly, lim_{n→∞} |ξ(0)f(ψ(0)) − ξ(0)f(δ_n ψ(0))| = 0. Furthermore, choose N ∈ N large enough that δ_n ≥ 1/2 for all n ≥ N. Applying the operator W_{ξ,ψ} to the functions 1 and z and using its boundedness, it easily follows that ξ ∈ B_ω. Also, δ_n f′_{δ_n} converges uniformly to f′ on compact subsets of D as n → ∞; similarly, since ξ ∈ B_ω and f_{δ_n} converges uniformly to f on compact subsets of D as n → ∞, the corresponding terms vanish in the limit.

Now consider S₂. We have S₂ ≤ lim sup_{n→∞}(P₁ + P₂). For P₁, since ‖f‖_{D_K} ≤ 1, we obtain an upper bound, and letting N → ∞ yields the estimate for lim sup P₁; in a similar manner we estimate lim sup P₂, and combining the two inequalities bounds S₂. Next, consider S₄. We have S₄ ≤ lim sup_{n→∞}(P₃ + P₄); by similar calculations, letting N → ∞, we bound the corresponding lim sups and combine the inequalities to bound S₄. Combining (40), (43), (44), (49), and (54) gives the upper estimate (55); thus, inequalities (37) and (55) imply the upper bound (56). Hence, inequalities (34) and (56) complete the theorem. □

The following corollary can be easily obtained from Theorem 5.

Corollary 1. Let ω be a standard weight and ψ a holomorphic self-map of D. Let K be a weight function. Assume that C_ψ : D_K → B_ω is bounded. Then the corresponding two-sided estimate for the essential norm of C_ψ holds (take ξ ≡ 1 in Theorem 5).
Essential Norm of Weighted Composition Operator from Dirichlet-Type Space to Bers-Type Space
In this section, we consider the Bers-type spaces and estimate the essential norm of the weighted composition operator from D_K to A_ω.
Theorem 6.
Let ω be a standard weight, ξ ∈ H(D), and let ψ be a holomorphic self-map of D. Let K be a weight function. Assume that the operator W_{ξ,ψ} : D_K → A_ω is bounded. Then the essential norm of W_{ξ,ψ} : D_K → A_ω admits the following two-sided estimate:

Proof. First, we prove the lower estimate. Consider functions f_z ∈ D_K such that ‖f_z‖_{D_K} ≲ 1 and f_z converges to zero on compact subsets of D as |z| → 1.
Thus, for any compact operator E : D_K → A_ω and any sequence (ζ_n) in D such that |ψ(ζ_n)| → 1⁻, we obtain a lower bound; taking lim sup as |ψ(ζ_n)| → 1⁻ on both sides and applying the definition of the essential norm yields the lower estimate (62).

Finally, we prove the upper estimate. To this end, consider E_δ : H(D) → H(D) with δ ∈ [0, 1) and a sequence (δ_n) ⊂ (0, 1) satisfying δ_n → 1 as n → ∞, defined as in Theorem 5. Then, for all n ∈ N, the operator W_{ξ,ψ}E_{δ_n} : D_K → A_ω is compact, and by the definition of the essential norm it suffices to estimate lim sup_{n→∞} ‖W_{ξ,ψ} − W_{ξ,ψ}E_{δ_n}‖ (65).

Let f be a function in D_K satisfying ‖f‖_{D_K} ≤ 1, and choose N ∈ N large enough that δ_n ≥ 1/2 for all n ≥ N. As in Theorem 5, since ξ ∈ A_ω and f_{δ_n} converges uniformly to f on compact subsets of D as n → ∞, the corresponding term vanishes in the limit. Next, we consider A₂. We have A₂ ≤ lim sup_{n→∞}(R₁ + R₂), where R₁ = sup_{|ψ(z)|>δ_N} ω(z)|ξ(z)||f(ψ(z))|. By calculation, we obtain the upper estimate (75). Thus, inequalities (64) and (75) imply the upper bound (76), and inequalities (62) and (76) complete the theorem. □

The following corollary can be easily obtained from Theorem 6.

Corollary 2. Let ω be a standard weight and ψ a holomorphic self-map of D. Let K be a weight function. Assume that the operator C_ψ : D_K → A_ω is bounded. Then the corresponding two-sided estimate (77) holds for the essential norm of C_ψ.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Mathematics"
] |
Phenotypic variation of erythrocyte linker histone H1.c in a pheasant (Phasianus colchicus L.) population
Our goal was to characterize the phenotypic variation of the pheasant erythrocyte linker histone subtype H1.c. Using two-dimensional polyacrylamide gel electrophoresis, three histone H1.c phenotypes were identified. The differently migrating allelic variants H1.c1 and H1.c2 formed either two homozygous phenotypes, c1 and c2, or a single heterozygous phenotype, c1c2. In the pheasant population screened, birds with phenotype c2 were the most common (frequency 0.761), while individuals with phenotype c1 were rare (frequency 0.043).
Linker histones, also known as H1 histones, are members of a protein family composed of small (~21.5 kDa) and abundant (~0.8 H1/nucleosome) basic proteins located in eukaryotic chromatin (Woodcock et al., 2006). In the past, they were mainly recognized as structural components involved in stabilizing nucleosomal arrays and folding the chromatin fiber into more compacted states (Widom, 1998; Hansen, 2002). However, a contribution of histone H1 to other nuclear events, such as specific gene regulation (Lee et al., 2004), DNA methylation (Fan et al., 2005) or cell cycle disruption (Sancho et al., 2008), has also been demonstrated. As highly mobile proteins, the members of the histone H1 family can compete with other structural (Catez et al., 2004) and regulatory proteins (Kim et al., 2008) to change their activity in a dynamic network of chromatin interactions (Bustin et al., 2005).
Higher eukaryotes have at least six somatic histone H1 subtypes (van Holde, 1989), encoded by separate genes intermingled with core histone genes (Nakayama et al., 1993). H1 histone variants usually differ in amino acid sequence in the less structured N-terminal and C-terminal domains and occasionally in the highly conserved, structured central globular domain (Ponte et al., 1998). The avian family of somatic linker histones, composed of at least six to seven non-allelic subtypes that can be identified according to their rate of electrophoretic migration in polyacrylamide gels (Palyga, 1991a), may differ in the number of components between species. While the faster-moving histone H1 subtypes H1.c, H1.c' and H1.d were found to be present in every species tested, the slower-migrating subtypes H1.a', H1.b' and H1.z were occasionally missing. For example, histone H1 subtype H1.z, which is present in ducks (Palyga et al., 1993), quails (Palyga, 1998a) and many other species (Palyga, 1991a), has not been observed in chickens (Górnicka-Michalska et al., 2006) and partridges (Kowalski et al., 2008). A comparison of the gel patterns of avian H1 histones demonstrated that species-specific components possessed slightly higher molecular weights, and hence migrated more slowly, than the faster-moving subtypes, which tended to arrange in a triangle-shaped pattern (Palyga, 1991a). In addition, polymorphic variation was found to be typical for histone subtypes H1.a (Kowalski et al., 1998; Palyga, 1998a; Górnicka-Michalska et al., 2006), H1.a' (Kowalski et al., 2008), H1.b (Palyga, 1998a; Palyga et al., 2000) and H1.z (Palyga et al., 1993; Palyga, 1998a; Kowalski et al., 2004) in several avian species, whereas the histone subtypes H1.c, H1.c' and H1.d have been relatively invariant in all species tested so far.
In this study, we show that pheasant erythrocyte histone H1.c is a heterogeneous protein with two allelic variants, H1.c1 and H1.c2, which occur as homozygous, c1 and c2, and heterozygous, c1c2, phenotypic combinations.
The study was carried out using a group of 46 pheasants (Phasianus colchicus L.) bred at the Department of Poultry Breeding of the University of Technology and Life Sciences in Bydgoszcz, Poland. Erythrocytes were isolated from a cell suspension consisting of 1/3 whole blood and 2/3 SSC solution (0.15 M NaCl, 0.015 M sodium citrate) by triple washing with SSC. Erythrocyte nuclei were prepared by lysis with a 3% saponin solution buffered with 0.1 M sodium phosphate, pH 7.0. After washing the nuclear pellet several times with 0.9% NaCl, the total acid-soluble fraction containing mainly histone H1 proteins was isolated by double extraction of the crude nuclear pellet with perchloric acid, first with a 1 M and then with a 0.5 M solution. The protein was precipitated with trichloroacetic acid, and the pellet was washed, first with acetone acidified with HCl and finally with pure acetone, and air-dried.
Electrophoresis samples, prepared by dissolving 1-mg aliquots of the total histone H1 preparation in 200 μL of a solution containing 8 M urea, 0.9 M acetic acid and 10% 2-mercaptoethanol, were subjected to two-dimensional polyacrylamide gel electrophoresis. The total protein was first resolved in a 15% acrylamide gel containing 8 M urea and 0.9 M acetic acid in the first dimension, and then in a 13.5% acrylamide gel prepared with 0.1% sodium dodecyl sulfate (SDS) in the second dimension, according to the detailed description of Palyga (1991b). The gels were stained with Coomassie Blue R-250, and images were taken using a Doc-Print II gel documentation system (Vilber Lourmat).
Total preparations of pheasant histone H1, containing H1.c and other H1 subtypes, obtained from saponin-lysed erythrocyte nuclei by perchloric acid extraction, were separated by both one-dimensional and two-dimensional polyacrylamide gel electrophoresis (Figure 1A). A comparison of the electrophoretic mobility of a faintly stained band of histone H1.c (band c2) in the acetic acid-urea polyacrylamide gel (Figure 1A) revealed that it was missing in some individuals, while the histone H1.c spots in the two-dimensional gel were clearly present in all individuals (Figure 1B). It therefore appears that pheasant erythrocyte histone H1.c is a heterogeneous protein with a presumed polymorphic variation, while the other non-allelic histone H1 subtypes, H1.a, H1.b, H1.b', H1.c' and H1.d, are monomorphic proteins (Figure 1). The electrophoretic migration of H1.c is determined by the mobility of its allelic complements, H1.c1 and H1.c2, in a particular type of gel. The slow isoform, H1.c1, migrated along with the subtype H1.b in the first dimension and was positioned further away from the nearest-moving subtype H1.d in the second dimension, whereas the fast isoform, H1.c2, migrated slightly below the subtype H1.b' in the first dimension and in the direct vicinity of subtype H1.d in the second dimension (Figure 1C). The histone H1.c heterogeneity detected in the acetic acid-urea gel may result from differences in net charge between the allelic isoforms H1.c1 and H1.c2. Previously, using acid-urea polyacrylamide gels, we disclosed the polymorphisms of histone H1.a in ducks (Kowalski et al., 1998) and chickens (Górnicka-Michalska et al., 2006). In both cases, two allelic variants, H1.a1 and H1.a2, formed three phenotypes, a1, a2 and a1a2, which appeared to be differently distributed in the avian breeds and/or genetic groups tested. Most of the duck and chicken specimens were found to possess an abundant phenotype a1, which was the only form of histone H1.a in some avian flocks. A rare phenotype, a2, was present at a frequency below 10% in several duck groups (Kowalski et al., 1998) and has not been detected in any chicken population (Górnicka-Michalska et al., 2006). In the latter species, the structural properties of the allelic isoform H1.a2 were analyzed using preparations obtained from homozygous a2 individuals among the progeny of purpose-mated heterozygous parents (unpublished).

In the pheasant population tested, the allelic isoforms of histone H1.c were found to be arranged into three phenotypes. In particular, the isoforms H1.c1 and H1.c2 were constituents of the homozygous phenotypes c1 and c2, respectively, or combined to constitute the heterozygous phenotype c1c2 (Figure 1). Among the 46 pheasants tested, the majority (35 individuals) were homozygotes for isoform H1.c2 (frequency = 0.761), while the heterozygotes (9 individuals) carrying both isoforms H1.c1 and H1.c2 (frequency = 0.195) and the homozygotes for isoform H1.c1 (2 individuals; frequency = 0.043) were in the minority. Thus, in the pheasant population tested, the prevailing allele c2 (frequency = 0.858) occurred at a frequency more than six times higher than that of the rare allele c1 (frequency = 0.141) (Table 1).

Figure 1 - Phenotypic variation of pheasant histone H1.c in one-dimensional (A) and two-dimensional (B) gels. Identified phenotypes of histone H1.c (c1, c2 and c1c2) are composed of a single band (A) or spot (B) of allelic isoform H1.c1 (homozygous phenotype c1) or allelic isoform H1.c2 (homozygous phenotype c2), or a double band (A) or spot (B) containing both isoforms H1.c1 and H1.c2 (heterozygous phenotype c1c2). In the acetic acid-urea gel, the isoform H1.c1, comigrating with subtype H1.b, moved more slowly than the fast isoform H1.c2, which migrated slightly below subtype H1.b'; both variants exhibited the same rate of electrophoretic migration (and presumably similar molecular weights) in the SDS gel, but were differently located in relation to the H1.d spot (C).
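As a quick arithmetic check, the allele frequencies reported above can be recomputed from the phenotype counts. A minimal R sketch (counts taken from the text; the variable names are ours):

```r
# Phenotype counts among the 46 pheasants screened
n_c1   <- 2    # homozygotes c1
n_c1c2 <- 9    # heterozygotes c1c2
n_c2   <- 35   # homozygotes c2
n <- n_c1 + n_c1c2 + n_c2              # 46 birds, i.e., 92 alleles

# Homozygotes carry two copies of their allele, heterozygotes one of each
freq_c1 <- (2 * n_c1 + n_c1c2) / (2 * n)   # 13/92 = 0.141
freq_c2 <- (2 * n_c2 + n_c1c2) / (2 * n)   # 79/92 = 0.859
```

Both values reproduce the frequencies in Table 1 (0.141 and 0.858, up to rounding).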
Various combinations of allelic components of the polymorphic histone H1 subtypes have been identified in several avian species, including ducks (Palyga et al., 1993; Kowalski et al., 1998), quails (Palyga, 1998a), chickens (Górnicka-Michalska et al., 2006), and partridges (Kowalski et al., 2008). Due to differences in net charge and/or molecular weight, the allelic isoforms could be distinguished based on their electrophoretic migration in polyacrylamide gels. Usually, two or three allelic variants, which might form three or six phenotypes, respectively, were detected in the populations tested. For example, the polymorphic duck subtypes H1.a (Kowalski et al., 1998) and H1.z (Palyga et al., 1993), each composed of two allelic isoforms, formed three phenotypes, while Peking duck histone H1.b (Palyga et al., 2000) and Muscovy duck histone H1.z (Kowalski et al., 2004), with three allelic isoforms each, combined to form six phenotypes.
A variation in the frequency of phenotypes and alleles among polymorphic histone H1 subtypes has been observed in several avian populations (Palyga et al., 1993; Kowalski et al., 2004; Górnicka-Michalska et al., 2006; Kowalski et al., 2008). The allele frequency was also found to correlate with selection aimed at improving some usable traits in poultry breeding (Palyga, 1998b; Palyga et al., 2000). Moreover, we observed (unpublished data) a tendency toward decreased heterozygosity, almost by half for subtype H1.b and more than fivefold for subtype H1.z, in quails selected for a high cholesterol content in the eggs (Baumgartner et al., 2007) compared with an unselected quail population. These and other selection results (Palyga, 1998b) suggest that some polymorphic histone H1 subtypes may affect the mechanisms and/or processes underlying breeding traits and the quality of animal products. As even slight changes in the histone H1 primary structure can influence chromatin properties (Bharath et al., 2003; Hendzel et al., 2004), it seems that a small structural difference between histone H1 allelic variants may modify their binding to chromatin, possibly modulating chromatin remodeling and regional gene regulation.
"Biology"
] |
Effectiveness of IT-supported patient recruitment: study protocol for an interrupted time series study at ten German university hospitals
Background As part of the German Medical Informatics Initiative, the MIRACUM project establishes data integration centers across ten German university hospitals. The embedded MIRACUM Use Case "Alerting in Care - IT Support for Patient Recruitment" aims to support recruitment into clinical trials by automatically querying the repositories for patients satisfying eligibility criteria and presenting them as screening candidates. The objective of this study is to investigate whether the developed recruitment tool has a positive effect on study recruitment within a multi-center environment by increasing the number of participants. Its secondary objective is the measurement of the organizational burden and user satisfaction of the provided IT solution. Methods The study uses an Interrupted Time Series design with a duration of 15 months. All trials start in a control phase of randomized length with regular recruitment and change to an intervention phase with additional IT support. The intervention consists of the application of a recruitment-support system which uses patient data collected in general care for screening according to specific criteria. The inclusion and exclusion criteria of all selected trials are translated into a machine-readable format using the OHDSI ATLAS tool. All patient data from the data integration centers is regularly checked against these criteria. The primary outcome is the number of participants recruited per trial and week, standardized by the targeted number of participants per week and the expected recruitment duration of the specific trial. Secondary outcomes are the usability, usefulness, and efficacy of the recruitment support. A sample size calculation based on a simple parallel-group assumption shows that an effect size of d = 0.57 can be demonstrated at a significance level of 5% and a power of 80% with a total number of 100 trials (10 per site). Data describing the included trials and the recruitment process is collected at each site. The primary analysis will be conducted using linear mixed models, with the actual recruitment number per week and trial, standardized by the expected recruitment number per week and trial, as the dependent variable. Discussion The application of an IT-supported recruitment solution developed in the MIRACUM consortium leads to an increased number of recruited participants in studies at German university hospitals. It supports employees engaged in the recruitment of trial participants and is easy to integrate into their daily work. Supplementary Information The online version contains supplementary material available at 10.1186/s13063-024-07918-z.
Background
Clinical trials are the most important method for gaining new insights into the effects of medical treatments, environmental influences, and other factors on people's health. Furthermore, only randomized controlled trials (RCTs) can provide conclusive evidence for the causal effectiveness of new drugs and medical procedures on health. As the main link in the chain of evidence, RCTs are crucial for translating scientific results from basic research to clinical application [1,2].
The success of clinical trials depends on the enrollment of a sufficiently large patient population. Failure to do so results in reduced statistical power, ineffective use of human resources, and costly extensions of the trial duration [3,4]. However, achieving the planned recruitment numbers remains one of the biggest challenges when conducting clinical trials [5-7]. As a consequence, sites in many countries have diminishing opportunities to participate in international studies and contribute to scientific progress. Further, many patients lose the chance to benefit from novel therapies that might have positive effects on their prognosis and quality of life [6,8].
As an alternative to the manual identification of eligible persons, technical solutions using data from electronic health records (EHR) have been developed to improve the inclusion of patients in clinical trials [9-11]. Initially aimed at supporting trial planning, those IT solutions reported how many patients with defined characteristics are available at a trial site (feasibility) [12]. Recently, there have been many approaches that help screen for eligible patients in the hospital information system (HIS) and directly suggest their inclusion into running trials [10,11,13,14]. For some of these solutions, improvements in participant recruitment have been shown [15]. However, existing solutions often had either severe technical or methodological limitations or were not evaluated across multiple clinical sites.
The goal of the MIRACUM project [16], which is part of the German Medical Informatics Initiative [17], is to establish data integration centers across ten German university hospitals. The data integration centers shall transform the large and heterogeneous amount of patient data into harmonized research repositories at each site. To demonstrate their effectiveness, three use cases in different clinical and application contexts were defined. Use Case 1 (UC1, "Alerting in Care - IT Support for Patient Recruitment") aims to support recruitment into clinical trials by automatically querying the repositories for patients satisfying the eligibility criteria and presenting them as screening candidates. To accomplish this, we developed software which leverages the OMOP Common Data Model (CDM) [18] and HL7 FHIR to provide an interoperable and open technical infrastructure for supporting the patient recruitment process [19].
The objective of this study is to investigate whether the developed recruitment tool has a positive effect on trial recruitment within a multi-center environment by increasing the number of participants. The secondary objective is the measurement of the organizational burden and user satisfaction of the provided IT solution.
In the following, the evaluation presented here is referred to as the "study", and the clinical studies forming our target population as "trials".
Objectives

Hypotheses
The application of an IT-supported recruitment solution developed in MIRACUM Use Case 1 (UC1) leads to an increased number of recruited participants in studies at German university hospitals. It supports employees engaged in the recruitment of trial participants and is easy to integrate into their daily work.
Primary objective
To investigate whether the application of an IT-supported recruitment solution developed within the MIRACUM consortium is effective in increasing the number of recruited trial participants in studies at German university hospitals.
Secondary objective
To survey the subjective experience of the recruitment staff with the solution on (a) usability, (b) user satisfaction, and (c) efficacy, i.e., the support of the recruitment and inclusion processes.
Methods
This study protocol has been developed adhering to the SPIRIT 2013 statement: defining standard protocol items for clinical trials [20]. The completed SPIRIT 2013 checklist is available as Supplementary File 1.
Study design
The study uses an Interrupted Time Series (ITS) design [21,22] with a duration of 15 months. All trials start in a control phase of randomized length with regular recruitment and change to an intervention phase with additional IT support, ensuring a minimum of one month in each phase and a minimum trial observation of 6 months. At least one month prior to the inclusion of a specific trial into the study, the length of the control phase for this trial is randomized centrally across all 10 sites. Block randomization was used to randomize the time of switching to the new scheme, expressed as a percentage of the planned enrollment time into the study. We used R version 4.1.2 and the package blockrand version 1.5 with the seed 9384902. We allowed for values between 0.15 and 0.85 in 0.05 steps, rounding the resulting time of switching to the 1st or 15th of a month. We generated 5 blocks with a size of 30, resulting in 150 ids. When a study was enrolled at one of the sites, the site contacted the independent person at Freiburg, who took the next free id from the randomization list and relayed the time of switching for the specific study. The persons responsible for enrollment had no access to the randomization list.
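The protocol does not reproduce the randomization script itself; the following minimal R sketch shows how such a list could be generated from the parameters stated above (the exact call, including how blocks are specified, is our assumption):

```r
library(blockrand)  # version 1.5, as stated above

set.seed(9384902)

# Candidate switch points: fraction of the planned enrollment period
switch_fractions <- seq(0.15, 0.85, by = 0.05)  # 15 possible values

# blockrand multiplies block.sizes by the number of levels,
# so block.sizes = 2 with 15 levels yields blocks of size 30
rand_list <- blockrand(
  n           = 150,
  num.levels  = length(switch_fractions),
  levels      = switch_fractions,
  block.sizes = 2
)

head(rand_list)  # one row per id, with the assigned switch fraction
```

The independent person in Freiburg would then hand out the next free id from such a list whenever a site enrolls a trial.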
Due to the embedding of the intervention into, and its overlap with, the recruitment processes, it is not practically feasible to completely blind the study. Nevertheless, the randomized time point for switching to the intervention will only be disclosed to the sites 2 months in advance, and only subsequently to the medical staff. The effectiveness will be measured by the number of recruited participants per trial and week, standardized by the targeted number of participants to be recruited into the specific trial. This will be analyzed by means of a linear mixed model. In addition, user interviews will be conducted prior to the start of the intervention and at the end of the study.
Settings
The study will be conducted at all university hospitals of the MIRACUM consortium partners within the BMBF Medical Informatics Initiative in the years 2021/2022 (Fig. 1).
Eligibility criteria
Trials fulfilling the following criteria are included in the study:

• The specified inclusion and exclusion criteria can be formalized in a machine-processable way.
• The recruitment phase of the trial overlaps by at least 6 months with the observation period of the study, and recruitment of patients into the trial is expected during the observation period.
• An individual trial can take place at one site or at multiple sites. Usually, it takes place at only one site and is then included in the study as a single trial at the corresponding site; if n sites participate in a trial, the trial is included n times at the corresponding sites.
• Prospective evaluation
Trials meeting the following criteria are excluded:
Intervention
The intervention consists of the application of a complex recruitment-support system, which is utilized alongside the standard recruitment procedure. Patient data already collected in the context of general care are used for screening according to trial-specific criteria. The de-identified patient data, located in the data integration centers of the sites, are continuously compared against the inclusion and exclusion criteria of clinical trials. This first requires translating the eligibility criteria of all selected trials into a machine-readable format. The recruitment-support system uses the OMOP Common Data Model [18] to store patient data and includes OHDSI ATLAS as a tool for defining patient cohorts. ATLAS is a user interface where researchers can graphically define a cohort using a set of logical expressions that are internally converted to SQL to be executed against the database. This tool is used to identify and translate the eligibility criteria collaboratively with the trial personnel [23,24]. During the intervention phase, all patient data is regularly checked against the defined cohorts. When eligible patients are detected, the staff is informed about the proposals by email, and the de-identified data is presented on a web-based screening list. The dataflow is shown in Fig. 2.
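To make the cohort logic concrete, the kind of check that ATLAS compiles to SQL can be sketched in R against OMOP-CDM-shaped tables. This is an illustration only: the table and column names follow the OMOP CDM, but the data frames, concept IDs, and criteria are hypothetical placeholders rather than cohorts from this study:

```r
library(dplyr)

# person, condition_occurrence, drug_exposure: data frames shaped like the
# corresponding OMOP CDM tables (hypothetical extracts for illustration)
eligible <- person |>
  filter(2021 - year_of_birth >= 18) |>            # inclusion: adults only
  semi_join(
    condition_occurrence |>
      filter(condition_concept_id == 201826),      # placeholder diagnosis concept
    by = "person_id"
  ) |>
  anti_join(
    drug_exposure |>
      filter(drug_concept_id == 1503297),          # placeholder exclusion concept
    by = "person_id"
  )

# 'eligible' would then feed the web-based screening list
```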
Local recruitment staff decide on the inclusion of individual patients based on the recruitment proposals on the screening list. Patient status, from recruitment proposal to inclusion, is documented in the screening list. Because it is based on pseudonymized research data, by default this list displays only a pseudonymized medical record number, the year of birth, the gender, and the last known organizational unit within the hospital. This data can be used either to search the hospital information system under its applicable access controls, or the study personnel contact the attending physician based on the last known location of the patient. After consent to disclose trial and personal data, the staff can then inform the patient about possible participation in the trial. Some sites allow displaying the re-identified medical record number to make it easier to identify the patient. In these cases, re-identification works by technically integrating the screening list with the pseudonymization tooling established at the sites.

Fig. 2 The basic flow of data from the electronic health records (EHRs) to the research repositories to the creation of patient recommendations in the screening list. Treatment data is recorded in the EHR as part of routine care (1); this data is regularly transformed to the OMOP CDM via ETL jobs (2). A query module (3) continuously scans the research repository for patients matching the defined trial eligibility criteria. If potential candidates are found, they are put on a web-based screening list (4), and a notification is sent via email. This notification is received by relevant practitioners or trial personnel (5), and the screening list can be accessed to further manually screen the suggested patients.
In the control phase, trial participants are recruited according to the respective standard procedure.
Termination criteria are not defined, since the interventional IT solution is used in parallel with the conventional procedure.
To increase the adherence of the recruiting staff, training courses are conducted and regular visits take place. Problems with adherence are logged.
Primary outcome
Absolute and relative difference in the recruitment numbers of the observed studies before and after the intervention, in relation to the actual recruitment time. The primary outcome is the number of participants recruited per trial and week, standardized by the targeted number of participants per week and the expected recruitment duration of the specific trial: actual recruitment number in week i / expected recruitment number per week, where expected recruitment number per week = target number of cases / duration of the trial in weeks, with i = 1, …, observation period in weeks, j = trial 1, …, 5, and k = site 1, …, 10.
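As a minimal illustration of this standardization, the helper below computes the weekly outcome values for a hypothetical trial; the numbers are invented for the example.

```python
def standardized_recruitment(actual_per_week, target_cases, duration_weeks):
    """Weekly recruitment standardized by the trial-specific expectation."""
    expected_per_week = target_cases / duration_weeks
    return [a / expected_per_week for a in actual_per_week]

# A trial targeting 40 participants over 20 weeks expects 2 per week;
# recruiting 1, 3 and 2 participants yields 0.5, 1.5 and 1.0.
print(standardized_recruitment([1, 3, 2], target_cases=40, duration_weeks=20))
```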
To account for site-specific and seasonal effects, a mixed model will be used.
Secondary outcomes
The absolute numbers of
• Patients identified
• Patients incorrectly identified
• Patients recruited
per time-period, site, and trial, contrasted for control and intervention phases.
Determination of the usability, usefulness, and efficacy of the recruitment support.
Time schedule
Recruitment: Trials are included starting on June 1, 2021.
Randomization: At least one month prior to the inclusion of a specific trial into the study, the length of the control phase for this trial is randomized centrally across all 10 sites.
Sample size calculation
Sample size calculation is based on the assumption of a simple parallel group design. On a significance level of 5% and a power of 80%, an effect size of d = 0.57 can be demonstrated with a two-sided t-test in a balanced design with a total number of 100 trials (10 per site). Although this effect can be classified as strong, the dynamic interrupted time series (ITS) design offers more statistical power than a two-group t-test, especially when the number of data points is relatively large and the effect size is not small, as is expected in our case. This added power of the ITS design is mainly due to its methodology, as it incorporates the temporal ordering of observations, which allows it to account for time-related effects. Thus, it can potentially provide a more accurate estimate of the intervention effect, especially when considering various variability factors (time, location). In addition, we aim to include as many trials as possible, potentially exceeding the total number of 100.
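Under the protocol's stated assumptions (balanced two-arm design, alpha = 0.05, power = 0.80, 50 trials per arm), the detectable effect size can be reproduced with statsmodels; this is only a cross-check of the reported d = 0.57, not the calculation actually used in the protocol.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the detectable effect size with 50 trials per arm (100 total),
# two-sided test at alpha = 0.05 and 80% power.
d = TTestIndPower().solve_power(effect_size=None, nobs1=50, alpha=0.05,
                                power=0.80, ratio=1.0,
                                alternative='two-sided')
print(round(d, 2))  # ~0.57, matching the protocol's figure
```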
To assess the feasibility of this number, we searched ClinicalTrials.gov for trials registered and actively recruiting that began recruitment on 01-01-2021 or earlier and whose primary completion date is 01-01-2023 or later. Table 1 shows the results per site.
Recruitment
To identify large and viable trials for the study, central and site-local trial registries are searched and trial personnel are contacted directly to contribute additional trials.To increase the sample size of the study, any eligible trial can be dynamically included in the study and allocated to a randomized time slot, generated prior to the start of the trial.
Generation and implementation of allocation
Prior to the inclusion of a specific trial into the study, the length of the control phase for this trial is randomized centrally across all 10 sites. The overall observation period of the trials varies between 6 and 15 months. With a pseudo-random generator from the R statistical package, the length of the control phase is randomly assigned to a value between 15% and 85% of the total observation period, which ensures that for short (6 months) trials the control or intervention phases have a duration of at least one month. Respectively, a trial with an observation time of 15 months has control or intervention phases with a minimum length of 2.5 months. For each trial, an exact date for the start of the intervention phase is calculated from the inclusion date of the trial and the duration of the control phase.
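A Python sketch of this central randomization step is given below; the protocol uses R, and the uniform draw over the 15-85% range is our reading of the described procedure, with months approximated as 30 days.

```python
import random
from datetime import date, timedelta

def randomize_switch_date(inclusion_date, observation_months, seed=None):
    """Draw the control-phase length as 15-85% of the observation period
    and derive the date on which a trial switches to the intervention."""
    rng = random.Random(seed)
    total_days = observation_months * 30          # simplification: 30-day months
    control_days = int(rng.uniform(0.15, 0.85) * total_days)
    return inclusion_date + timedelta(days=control_days)

# For a 6-month observation period, the control phase is at least ~27 days.
print(randomize_switch_date(date(2021, 6, 1), observation_months=6, seed=42))
```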
Blinding
Due to the embedding of the intervention into and its overlap with the recruitment processes, it is practically not possible to completely blind the study. Nevertheless, the randomized time point to switch to the intervention will only be disclosed to the site's medical staff 1 month before. The randomization list is generated prior to the start of the evaluation study by an independent statistician and kept secret.
Data collection
The following data is collected for each trial: trial-specific parameters. After the intervention, short interviews and surveys on the usability of the system are conducted with one user for each trial. To quantify the user experience and usability of the tool, the System Usability Scale (SUS) [25] and the User Experience Questionnaire - short (UEQ-S) [26] will be used.
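For reference, SUS responses are conventionally converted to a 0-100 score as in the sketch below (Brooke's standard scoring); the example responses are invented.

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items contribute (response - 1), even-numbered items
    (5 - response); the sum is scaled by 2.5 to a 0-100 range."""
    assert len(responses) == 10
    contrib = [(r - 1) if i % 2 == 0 else (5 - r)
               for i, r in enumerate(responses)]
    return 2.5 * sum(contrib)

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 5, 2]))  # -> 82.5
```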
Per recruitment proposal: The following data is collected for each recruitment proposal. A suitable feedback mechanism for reporting trial inclusions must be established individually at the centers.
1. Organizational unit of the site where the inclusion took place (automatically recorded by the IT system)
2. Status as indicated in the recruitment list
3. Outpatient or inpatient status at the time of inclusion
The patient inclusion data is primarily collected via the trial's existing electronic documentation and transferred to custom data entry masks. To ensure the completeness of data collection, the documentarians from MIRACUM are in close exchange with the corresponding staff in the departments and trial personnel.
Data management
REDCap is used as a data acquisition tool.
Data on individual patients (the minimal amount of data needed to present patients as screening recommendations) are stored pseudonymized, in accordance with the local data protection regulations.
Statistical methods
The characteristics of the included studies will be presented in tabular form. All variables will be summarized using appropriate summary statistics. The mean, standard deviation, and the quantiles will be used for continuous variables, and absolute and relative frequencies for categorical variables. If applicable, the variables will be visualized using boxplots and histograms, potentially over time.
The primary analysis will be conducted using linear mixed models with the actual recruitment number per week and trial, standardized by the expected recruitment number per week and trial, as the dependent variable. The choice is driven by the model's ability to account for both fixed effects and random effects. This flexibility is particularly suited for capturing the hierarchical structure in our data and aligns with the nature of the ITS design, where repeated measures over time and within-trial correlations are essential. A positive and statistically significant fixed effect of the binary variable "intervention allocation per week" will be interpreted as the measure for effectiveness. The corresponding effect will be evaluated for significance on the 5% level using the likelihood-ratio test. The model will be adjusted for trial, site, season, and time via random effects. Primarily, compound symmetry is considered as a working covariance matrix (constant correlation between time points), while autoregressive and unstructured covariance matrices are used as sensitivity analyses. Additional sensitivity analyses involve larger time frames, i.e., per month instead of per week, other distributions and link functions, i.e., generalized models, and classical statistics via mean values. Given the conservative sample size calculation based on a t-test, the linear mixed model is expected to have adequate statistical power given the heterogeneous nature of the data.
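A minimal sketch of such an analysis with statsmodels is shown below, using an invented toy dataset and only a random intercept per trial; the full protocol model additionally adjusts for site, season, and time, and explores other covariance structures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Toy data: one row per trial-week with the standardized recruitment 'y'.
rng = np.random.default_rng(0)
trials = np.repeat([f"t{i}" for i in range(10)], 20)
intervention = np.tile([0] * 10 + [1] * 10, 10)   # 10 control, 10 intervention weeks
y = 1.0 + 0.3 * intervention + rng.normal(0, 0.2, 200)
df = pd.DataFrame({"y": y, "intervention": intervention, "trial": trials})

# ML fits (reml=False) so the likelihood-ratio test of the intervention
# effect is valid; a random intercept per trial captures trial heterogeneity.
full = smf.mixedlm("y ~ intervention", df, groups=df["trial"]).fit(reml=False)
null = smf.mixedlm("y ~ 1", df, groups=df["trial"]).fit(reml=False)

lr = 2 * (full.llf - null.llf)
print("intervention effect:", round(full.params["intervention"], 2))
print("LRT p-value:", chi2.sf(lr, df=1))
```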
To uncover further correlations and generate hypotheses, explorative analysis is carried out, including specific evaluation of site and trial characteristics.
No deviations from the protocol or missing data are expected. Trials are otherwise excluded from the primary analysis, and their characteristics are compared to the other trials to identify potential problems for the generalizability of the results.
Data monitoring
A Data Monitoring Committee (DMC) is not used, as there can be no undesired effects of the intervention, since the IT-supported intervention tool is only used in addition to the standard method during trial recruitment.
A premature termination of the study is not planned.
Risks and measures for error management
Since the intervention does not directly affect the patient, but is only used in addition to the standard recruitment procedure, no direct risks to patients are expected. Patients who might be incorrectly proposed for trial inclusion will be recognized as such by the physician or trial assistant review and not included. This will happen as soon as possible with no additional burden for the patient.
A low risk may only result from the additional use of resources (personnel and time) when interacting with the recruitment support tool and the need for additional documentation.
Rather, we expect primarily positive effects for patients who otherwise would not have been identified as trial patients and could be proposed for inclusion due to the use of the application.
Protocol amendments
Possible protocol extensions are formally added to the study protocol.
Information and consent
Within the framework of the study, pseudonymized data are processed within the data protection laws of the individual sites and in the context of supporting medical care and research. The evaluation itself does not involve data from trial participants.
Data protection and confidentiality
GDPR-conformant privacy protection and data usage are guaranteed by an overarching central research program (BMBF Medical Informatics Initiative) and an overall consortial (MIRACUM) data protection concept. Data protection concepts are approved by the national conference of data protection officers and the data protection officers at the participating sites.
The complex process of identifying patients based on pseudonymized data using either the hospital information system or by contacting the treating physicians allows the system to be conformant with data protection regulations by relying on the access control mechanisms of the hospital information systems and established cooperation agreements between organizational units.
The study only collects the data listed under "Data collection." The data is collected in a secure environment at each site.
Declaration of competing interests
The authors declare no competing interests.
Access to data
The original data remain under the sovereignty of the respective sites. The data will be further processed according to the votes of the data protection officers, the ethics committee, and the Use & Access Committee. The aggregated results will be published and used jointly in the MIRACUM Consortium under the sovereignty of the MIRACUM Steering Board.
For statistical analysis, the collected data of each site is processed at one dedicated site within the consortium.
Dissemination
The results are disseminated through publications at conferences and in specialist journals.The software components used in the intervention will be made freely available as open source, unless subject to further restrictions.
Discussion
The evaluated IT solution is deployed on the common technical infrastructure of all partnering sites, leveraging international standards for interoperability. However, it is also deployed in a diverse setting of established organizational standards, tools, and processes. Therefore, user engagement is foundational to the success of the system. Ultimately, the implemented solution should benefit both trial personnel, by reducing the manual effort required to identify eligible patients across large amounts of data, and the patients themselves, by providing them with access to promising treatment options within the trial.
Fig. 1
Fig. 1 Location of the participating university hospitals across Germany. Status values used in the screening list: a. "Recruitment Proposal", b. "Under consideration", c. "Does not meet eligibility criteria", d. "Is Participating in trial", e. "Not willing to participate"
Table 1
Number of recruiting trials at the participating sites with a start date on or before 01-01-2021 and primary completion on or after 01-01-2023, with the location filter set to the city name of the site
"Medicine",
"Computer Science"
] |
Label-Free Sensing of Biomolecular Adsorption and Desorption Dynamics by Interfacial Second Harmonic Generation
Observing interfacial molecular adsorption and desorption dynamics in a label-free manner is fundamentally important for understanding spatiotemporal transports of matter and energy across interfaces. Here, we report a label-free real-time sensing technique utilizing strong optical second harmonic generation of monolayer 2D semiconductors. BSA molecule adsorption and desorption dynamics on the surface of monolayer MoS2 in liquid environments have been all-optically observed through time-resolved second harmonic generation (SHG) measurements. The proposed SHG detection scheme is not only interface specific but also expected to be widely applicable, which, in principle, undertakes a nanometer-scale spatial resolution across interfaces.
Introduction
Biomolecular activities at interfaces are fundamental phenomena of life. Interpreting the interfacial dynamics of biomolecules is important for constructing accurate disease models [1][2][3], performing effective drug screening [4,5] and understanding the spatiotemporal transport of matter/energy in living systems. As a spatial region with a thickness usually smaller than 10 nm, label-free probing of the interfacial dynamics of biomolecules is challenging. Thus far, only limited label-free probing techniques have been developed, such as surface plasmon resonance [6,7], optical fiber sensors [8], time-resolved sum-frequency generation [9,10], surface-enhanced Raman spectroscopy and so on [11][12][13][14][15]. These methods have significantly improved the performance of interfacial biosensing in terms of high sensitivity, high resolution and real-time observation, and have greatly deepened the understanding of interfacial dynamics at the molecular level.
Surface plasmon resonance microscopy, elegantly utilizing the localized interactions between interfacial electromagnetic fields and biomolecules, possesses molecular-level sensitivity and interfacial spatial resolutions beyond the optical diffraction limit. Inspired by the physical scheme of surface plasmon resonance microscopy, we intend to develop a complementary optical spectrum sensing technique which can realize interfacial biosensing specificity for microfluidic chips in a label-free manner. In our opinion, interfacial second harmonic generation (SHG), a second-order nonlinear optical effect induced by the broken inversion symmetry of an interface, is promising to fulfill this goal. Unfortunately, the second-order susceptibility of biomolecule interfaces is usually rather small, resulting in a weak SHG signal. In practice, a single-pixel PMT is usually required to magnify the weak SHG signals. The consequence is that it is difficult to monitor the biomolecular dynamics of an interface in real time. An alternative way to overcome such difficulty is to significantly increase the power of the fundamental femtosecond laser, which imposes a high risk of damage to the biomaterials and biostructures.
Two-dimensional (2D) semiconducting monolayers, such as monolayer MoS2, with broken inversion symmetry and large second-order nonlinear optical susceptibility, can produce strong SHG signals under femtosecond laser pulse excitation, which has been comprehensively investigated in the recent decade [16][17][18]. In our opinion, these 2D semiconducting monolayers, with their strong SHG, can serve as excellent biosensors if biomolecules can interact with them and form heterointerfaces in a liquid environment. Intuitively, the formation of a heterointerface on the surface of 2D semiconducting monolayers can change the inversion symmetry and then lead to a change in SHG signals. Since strong SHG signals of 2D semiconducting monolayers can be readily detected by a regular spectrometer or CMOS sensors, it is possible to develop a real-time sensing technique to probe biomolecule dynamics at 2D interfaces.
Towards this goal, we report a label-free interfacial biomolecular sensing technique by monitoring biomolecular adsorption and desorption processes on the surface of monolayer MoS2 through time-resolved SHG. Chitosan nanoclusters and bovine serum albumin (BSA) molecules have been used to form effective Coulomb attractions with the negatively charged surface of monolayer MoS2. By measuring the SHG intensity changes as a function of time, we realize label-free, real-time sensing of biomolecular adsorption and desorption processes all-optically in liquid environments. Our results open new avenues of label-free interfacial biosensing, taking advantage of the strong optical SHG of monolayer 2D semiconductors.
Results and Discussion
Monolayer MoS2 is an ideal platform to construct biosensors for interfacial molecule adsorption and desorption processes, considering that the monolayer MoS2 lattice undertakes a sub-nanometer thickness with broken inversion symmetry. Inspired by the pioneering work of strong SHG observations in monolayer MoS2 in 2013 [16], monolayer MoS2 can be considered as a sub-nanometer thick nonlinear optical source emitting SHG with extremely space-confined dipole moments, which can facilitate interfacial sensing and imaging of ultrahigh spatial resolution across interfaces. Moreover, large specific surface areas and the abundant binding sites of monolayer MoS2 can further enable effective interaction with biomolecules. To be specific, negatively charged surfaces of monolayer MoS2 samples grown by chemical vapor deposition (CVD) [19][20][21][22][23] are expected to induce strong Coulomb attractions in liquid environments for positively charged biomolecules, which, in our opinion, are very promising for realizing label-free, real-time sensing of molecule adsorption and desorption processes. Figure 1a shows the experimental setup detecting the SHG spectra of monolayer MoS2 embedded in a microfluidic chip. The wavelength of the fundamental laser is centered at 780 nm, with a pulse width of about 60 fs and a repetition frequency of about 80 MHz. The fundamental laser is reflected by the dichroic mirror and then focused on the MoS2 monolayer by the objective lens (NA = 0.55). SHG of the monolayer MoS2 is collected by the same objective lens. A short-pass filter is deployed behind the dichroic mirror to eliminate the reflected fundamental residual. The SHG signals are simultaneously sent to a spectrometer and a CCD by a beam splitter. In our measurements, the power of the fundamental laser was low enough to avoid optical damage of monolayer MoS2. The fine structure of the microfluidic chip is illustrated in Figure 1b. The main body of the microfluidic chip is fabricated by 3D printing with an optical glass window on top. A sapphire substrate containing monolayer MoS2 is attached to the optical glass window. The microfluidic channel enables unidirectional flow of liquid solutions of biomolecules to form a laminar flow. As a result, a homogeneous 3D fluid-2D solid interaction is constructed to facilitate adsorption and desorption of biomolecules on the surface of monolayer MoS2. Monolayer MoS2 grown by CVD on double-sided polished sapphire substrates (Six Carbon Technology, Shenzhen, China) was used as received. To confirm the monolayer nature of these samples, optical characterizations were carried out before microfluidic experiments. Figure 1c presents optical absorption and photoluminescence (PL) spectra of monolayer MoS2. Lorentzian fitting of the PL spectrum points to a resonant peak centered at about 669 nm, which agrees well with the A-exciton resonance peak of the optical absorption spectrum [24,25]. Employing the experimental setup of Figure 1a, we measured SHG (centered at 390 nm) and fundamental (centered at 780 nm) spectra of monolayer MoS2, as indicated by the purple squares and red dots in Figure 1d, respectively. Solid lines are Gaussian fittings. Full widths at half maximum (FWHM) of the fundamental and SHG were fitted to be about 12.8 nm and 5.5 nm. Meanwhile, no SHG signals were observed when the fundamental was focused on the substrate, as indicated by the spectrum of sapphire (black triangles) in Figure 1d.
Fundamental power dependence of SHG spectra was obtained by varying the power of the 780-nm femtosecond laser, and the result is presented in Figure 1e. Fitting with a square function (solid purple line) matches well with experimental SHG results (purple dots), suggesting a quadratic dependence of SHG power with respect to fundamental power [16]. These observations validate that our MoS2 samples are monolayers, and strong SHG can be readily recorded by our experimental setup equipped with a regular spectrometer.
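The quadratic power law can be verified directly from measured power pairs; the sketch below fits the log-log slope on synthetic values, since the actual data points are only shown in Figure 1e.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic fundamental powers (mW) and SHG counts obeying I_SHG ∝ P^2;
# amplitudes and noise level are illustrative.
p = np.array([2.0, 4.0, 6.0, 8.0])
shg = 3.1 * p**2 * (1 + 0.02 * rng.standard_normal(p.size))

# The exponent is the slope of a log-log fit; a value near 2 confirms
# the quadratic power dependence expected for SHG.
slope, _ = np.polyfit(np.log(p), np.log(shg), 1)
print(f"fitted exponent: {slope:.2f}")
```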
To justify the electrostatic adsorption effect of monolayer MoS2, we employed positively charged chitosan to interact with monolayer MoS2. Chitosan was dissolved into a water solution of acetic acid, configuring a solution with a mass fraction of 0.8 mg/mL. Before adding the chitosan solution, a pixel-to-pixel SHG mapping of a monolayer MoS2 sample in the air was taken by scanning a 2D translation stage (Physik Instrumente, P-51, Karlsruhe, Germany). The SHG image of this sample is presented in Figure 2a. The spatial scanning step was set at 300 nm, and the grey value of each pixel was obtained by integrating the SHG spectrum counts within a wavelength range from 370 nm to 410 nm. For the monolayer MoS2 region, strong SHG leads to a triangular distribution reflecting the spatial profile of the monolayer MoS2 lattice. For the substrate region, the absence of SHG points to a black background. Subsequently, 1 µL of chitosan solution (one drop) was added onto the same monolayer MoS2 sample. When the solution completely evaporated at room temperature, pixel-to-pixel SHG mapping of the monolayer MoS2 sample was repeated. The obtained SHG image is presented in Figure 2b. Similarly, by repeating the procedures of dropping, drying and SHG mapping, Figure 2c,d are SHG images of the same monolayer MoS2 sample covered by two and three drops of chitosan solutions, respectively. We anticipate that the electrostatic adsorption process can be initiated by the Coulomb forces between the negatively charged monolayer and chitosan. Accumulated amounts of chitosan are expected to form chitosan nanoclusters on the surface of monolayer MoS2, as illustrated by the schematic diagram in Figure 2e. By carefully comparing the SHG images before (Figure 2a) and after (Figure 2b-d) adding chitosan solutions, it is clear that adsorbed chitosan nanoclusters on monolayer MoS2 can enhance the SHG intensity. To directly reveal these differences, we subtracted the SHG image in Figure 2a from those in Figure 2b-d. The corresponding differential SHG images are plotted in Figure 2f-h. These differential SHG images present randomly distributed SHG enhancements, suggesting adsorbed chitosan nanoclusters were randomly distributed as well. We expect that this phenomenon originated from the process wherein a solution dropping action causes a rearrangement of molecules on the surface of monolayer MoS2. Moreover, the differential SHG intensity shows an increasing trend from Figure 2f to Figure 2h, as added amounts of chitosan were increased. Especially, for certain edge regions of monolayer MoS2, SHG enhancement effects turn out to be stronger, suggesting that the edge region with more charged active sites tends to facilitate chitosan adsorption [26].
To visualize the adsorbed chitosan nanoclusters on the surface of monolayer MoS2, we measured the AFM image after dropping and drying a chitosan solution for monolayer MoS2 samples on the sapphire substrate. As shown in Figure 3a, it is clear that chitosan nanoclusters adsorbed on monolayer MoS2 (triangle region) form many white dots. Furthermore, on the sapphire substrate, some chitosan nanoclusters can still be adsorbed, but their density of distribution as well as size are smaller than in the case of monolayer MoS2. The physical reason is that positively charged chitosan molecules tend to be more efficiently adsorbed by the negatively charged monolayer MoS2 through attractive Coulomb forces [19][20][21][22][23]. The height profile along the red arrow in Figure 3a is plotted in Figure 3b.
Observed height values of chitosan nanoclusters (CNs) on the sapphire substrate (distance range: from 0 to 1.4 µm) suggest an averaged thickness of about 7 nm. It is obvious that height values of CNs on the monolayer MoS2 (distance range: from 1.4 to 3.5 µm) could be as large as about 13 nm. The measured thickness of monolayer MoS2 is less than 1 nm (about 0.9 nm) according to Figure 3b, which agrees well with previous measurements [27]. In addition, size distributions of adsorbed chitosan nanoclusters on monolayer MoS2 and on the sapphire substrate were analyzed, as shown in Figure 3c, where two representative regions, marked in Figure 3a, were selected. Histograms of adsorbed chitosan nanoclusters in red and green of Figure 3c are statistics of regions A and B of Figure 3a, respectively. After fitting these histograms with a Gaussian function, it turns out that the averaged diameter of adsorbed chitosan nanoclusters on the substrate is about 41 nm. In comparison, on monolayer MoS2, the averaged diameter of adsorbed chitosan nanoclusters is about 78 nm, validating that Coulomb attraction forces between monolayer MoS2 and chitosan nanoclusters enhance the adsorption processes. The size FWHM of adsorbed chitosan nanoclusters on monolayer MoS2 is about 42 nm, which is about twice that (about 22 nm) on the substrate. At this point, we can conclude that chitosan nanoclusters can be effectively adsorbed on monolayer MoS2 by electrostatic attractions and mediate the SHG intensity after drying. However, whether such an interfacial adsorption effect can induce enough SHG intensity change for biomolecules flowing in a liquid environment is still unknown.
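The Gaussian analysis of the size histograms can be reproduced as in the following sketch; the diameters are synthetic stand-ins for the values extracted from the AFM image.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Synthetic diameters (nm) standing in for the nanoclusters measured in the
# AFM image; the real analysis fits the histograms of regions A and B.
rng = np.random.default_rng(1)
diameters = rng.normal(78, 18, 500)          # on-monolayer population
counts, edges = np.histogram(diameters, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 70, 20])
mu, sigma = popt[1], popt[2]
print(f"mean diameter ≈ {mu:.0f} nm, FWHM ≈ {2.355 * sigma:.0f} nm")
```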
To demonstrate the feasibility of our SHG technique towards real-time sensing for flowing biomolecules in liquid environments, we selected bovine serum albumin (BSA) molecules and performed time-resolved SHG spectra measurements employing the experimental setup in Figure 1a,b. The proposed experimental schemes of BSA adsorption and desorption are presented in Figure 4a,b, respectively. Simply speaking, protonated BSA molecules are positively charged, so that the adsorption process is expected when a negatively charged monolayer MoS2 tends to apply attractive Coulomb forces. Then, by controlling the pH of the liquid environment, protonated BSA molecules can gain electrons, and positive charges of adsorbed BSA molecules will be neutralized. Laminar flow in the microfluidic channel will take away these interfacial BSA molecules and trigger a desorption process. Before placing the monolayer MoS2, the microfluidic chip and tubes were carefully cleaned. The BSA solution (1 µg/mL) was configured in PBS buffer solution (pH = 3.6) using BSA (5%, Yuanye Bio-Technology, Shanghai, China). The power of the fundamental laser was fixed at 8 mW.
By focusing the fundamental laser tightly on the center of a monolayer MoS2 sample by a 50× objective lens (NA = 0.55), we ensured that the size of the focal spot (1.7 µm) was much smaller than the lateral size of the monolayer MoS2 sample (about 15 µm). By finely tuning the axial position of the monolayer MoS2 sample, we maximized the intensity of the SHG spectrum recorded by the spectrometer. A computer program was coded to record the SHG spectra every 500 ms, which integrated all non-zero SHG counts between 370 nm and 410 nm.
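The per-frame integration described above amounts to the simple operation sketched here; the wavelength grid and spectrum are simulated.

```python
import numpy as np

def integrate_shg(wavelengths, counts, lo=370.0, hi=410.0):
    """Sum all non-zero counts inside the SHG band, as done for each
    500 ms frame by the acquisition program described above."""
    band = (wavelengths >= lo) & (wavelengths <= hi) & (counts > 0)
    return counts[band].sum()

wl = np.linspace(350, 450, 1024)
spectrum = 1e4 * np.exp(-(wl - 390) ** 2 / (2 * 2.3 ** 2))  # SHG peak at 390 nm
print(integrate_shg(wl, spectrum))
```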
In our measurements, a fluid pump sent solutions into the microfluidic channel at a constant flowing rate of 22 µL/s. In the beginning, we flushed the microfluidic chip with a PBS solution (pH = 7.4) until the SHG signal of monolayer MoS2 became stable in flowing conditions, as indicated by the time-resolved SHG signals (purple spectra) before 90 s in Figure 4c. At the 90 s mark, the BSA solution (pH = 3.6) was sent into the microfluidic channel. The intensity of SHG signals started to increase and, approximately, maintained a constant after the 200 s point. The increasing evolution of SHG signals between 90 s and 200 s is caused by the BSA molecules' adsorption on the surface of monolayer MoS2. Then, at 480 s, the PBS solution (pH = 7.4) was sent to trigger a BSA molecule desorption process. Interestingly, the intensity of SHG signals started to decrease and, eventually, recovered to a constant magnitude at about 850 s, which is equal to the scenario when no BSA was added (before the 90 s mark). Furthermore, a control experiment was performed by shifting the fundamental laser focus onto a nearby monolayer MoS2 sample and replacing the BSA solution with a PBS solution without BSA molecules, while other experimental conditions were kept the same. As indicated by the lower panel of Figure 4c, at 90 s, the intensity of SHG signals (black spectra) when the fundamental laser was focusing on monolayer MoS2 remained constant. This result strongly validates that ions or other molecule components in the PBS solutions would not induce a detectable intensity change of monolayer MoS2 SHG signals for our experimental setup. The baseline decrease is attributed to the lattice orientation difference in CVD MoS2 from sample to sample. To evaluate the contributions of adsorbed BSA molecules to the refractive index change as well as the intensity change of SHG signals, we focused the fundamental laser on the sapphire substrate. By replacing the short-pass optical filters in front of the spectrometer, a small portion of the fundamental laser (780 nm) was allowed to pass. Hence, we could measure fundamental and SHG signals at the same time. The BSA adsorption experiments were repeated by adding BSA solutions at 200 s.
As shown in Figure 4d, the spectral intensity of the fundamental (780 nm) remained constant after adding BSA molecules, ruling out the possible effect of interfacial refractive index change. More importantly, the spectral intensity of SHG remained zero, indicating that the SHG contributions of the sapphire substrate as well as of the adsorbed BSA can be neglected compared with the monolayer MoS2. The low SHG conversion efficiency of sub-nanometer thick monolayer MoS2 and the high noise level of our spectrometer lead to a relatively low signal-to-noise ratio. This issue can be further improved by increasing the integration time of each SHG spectrum and optimizing the design of microfluidic chips.
To address the physical mechanism of the observed SHG signal changes accompanying the BSA adsorption process, we can consider the second-order polarization of the interface with the following model [28,29]: E_2ω ∝ P^(2) = χ^(2) E_ω E_ω + χ^(3) E_ω E_ω Φ_0, where Φ_0 is the interfacial electric field and χ^(2) and χ^(3) are the second-order optical susceptibility of monolayer MoS2 and the third-order optical susceptibility of the interface, respectively. In our case, we believe that the spatial distribution of adsorbed BSA molecules on the surface of monolayer MoS2 is random. Specifically, since the initial charge distribution of monolayer MoS2 is inhomogeneous in the 2D plane defined by the flat substrate, the orientations and 3D stacking orders of adsorbed BSA molecules are expected to be highly random. As a result, directions of interfacial electric fields between the monolayer MoS2 and adsorbed BSA molecules in disorder will no longer be strictly perpendicular to the 2D plane. Therefore, the angle between the wave vector of the fundamental laser and the direction of interfacial electric fields will not be zero, so that the Φ_0 term can induce a non-zero second-order polarization field. When BSA molecules are dynamically adsorbed, the total magnitude of second-order polarization fields formed by superposition of polarization fields from monolayer MoS2 and interfacial electric fields will depend on their initial phase difference.
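The sign of the SHG change therefore depends on the relative phase of the two polarization terms; the toy calculation below illustrates this superposition with arbitrary amplitudes that are not fitted to the experiment.

```python
import numpy as np

# Toy superposition of the monolayer's chi(2) polarization with the
# interfacial chi(3)·Phi_0 term; amplitudes and phases are illustrative.
p_mos2 = 1.0          # normalized chi(2) contribution
p_interface = 0.3     # normalized chi(3)·E·E·Phi_0 contribution
for dphi in (0.0, np.pi / 2, np.pi):
    i_shg = np.abs(p_mos2 + p_interface * np.exp(1j * dphi)) ** 2
    print(f"Δφ = {dphi:.2f} rad -> relative SHG intensity {i_shg:.2f}")
# Depending on the initial phase difference, adsorption can enhance
# (Δφ ≈ 0) or suppress (Δφ ≈ π) the total SHG signal.
```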
Conclusions
We have comprehensively demonstrated that the interfacial SHG of monolayer MoS2 can be utilized for label-free biomolecule sensing. Through static SHG mapping experiments, we show that the intensity of SHG in monolayer MoS2/adsorbed chitosan nanocluster heterostructures can be mediated due to electrostatic attractions. With a time-resolved SHG measuring system equipped with a microfluidic chip, we further realize label-free sensing of BSA adsorption and desorption dynamics in real time through the SHG intensity change of monolayer MoS2 in liquid environments, which has been tailored by Coulomb interactions between BSA molecules and monolayer MoS2. Our work provides a complementary means of label-free interfacial biomolecule sensing, which, in principle, undertakes molecular-level spatial resolution across interfaces for applications including, but not limited to, in vitro medicine evaluation.
"Physics"
] |
EMITTANCE CHARACTERIZATION OF ION BEAMS PROVIDED BY LASER PLASMA
Laser ion sources offer the possibility to obtain ion beams useful for particle accelerators. Nanosecond pulsed lasers at intensities of the order of 10^8 W/cm^2, interacting with solid matter in a vacuum, produce plasma of high temperature and high density. To increase the ion energy, an external post-acceleration system can be employed by means of high voltage power supplies of some tens of kV. In this work, we characterize the ion beams provided by an LIS source and post-accelerated. We calculated the produced charge and the emittance. Applying 60 kV of accelerating voltage and laser irradiance of 0.1 GW/cm^2 on the Cu target, we obtain 5.5 mA of output current and a normalized beam emittance of 0.2 π mm mrad. The brightness of the beams was 137 mA (π mm mrad)^−2.
Introduction
It is known that the presence of specific doped ions can significantly modify the chemical-physical properties of many materials. Today many laboratories, including LEAS, are involved in developing accelerators of very contained dimensions, easy to install in small laboratories and hospitals. The use of ion sources facilitates the production of ion beams of moderate energy and with good geometric qualities. They are used for the production of innovative electronic and optoelectronic films [7], biomedical materials [8,16], new radiopharmacy applications [14,6], hadrontherapy applications [3], and to improve the oxidation resistance of many materials [13].
There are many methods for obtaining particle beams; application of the pulsed laser ablation (PLA) technique (the technique that we adopt in this work) enables ions to be obtained from solid targets, without any previous preparation, whose energy can easily be increased by post-acceleration systems [2,12,1]. In this way, plasma can be generated from many materials, including refractory materials [9,15].
In this work, we characterize the ion beams provided by a laser ion source (LIS) accelerator composed of two independent accelerating sectors, using an excimer KrF laser to get PLA from the pure Cu target. Using a home-made Faraday cup and a pepper pot system, we studied the extracted charges and the geometric quality of the beams.
Materials and methods
The LIS accelerator consists of a KrF excimer laser operating in the UV range to get PLA from solid targets, and a vacuum chamber device for expanding the plasma plume, extracting and accelerating its ion component. The maximum output energy of the laser is 600 mJ. It works at 248 nm wavelength and the pulse duration is 25 ns. The angle formed by the laser beam with respect to the normal to the target surface is 70°. During our measurements, the laser spot area onto the target surface is fixed at 0.005 cm^2. The laser beam strikes the solid targets and it generates plasma in an expansion chamber (EC), Fig. 1.
This chamber is closed around to the target support (T) and it is put at a positive high voltage (HV) of +40 kV. The length of the expansion chamber (18 cm) is sufficient to decrease the plasma density [5]. The plasma expands inside the EC and, since there is no electric field, breakdowns are absent. Thanks to the plasma expansion, the charges reach the extremity of the expansion chamber, which is drilled with a 1.5 cm hole to allow ion extraction. A pierced ground electrode (GE) is placed at 3 cm distance from the EC. In this way it is possible to generate an intense accelerating electric field between EC and GE. Four capacitors of 1 nF, between EC and ground, stabilize the accelerating voltage during fast ion extraction.
After GE, a third electrode (TE) is placed 2 cm from it, and it is connected to a power supply of negative bias voltage −20 kV. It is also utilized as a Faraday cup collector.
TE is connected to the oscilloscope by an HV capacitor (2 nF) and a voltage attenuator, ×20, in order to separate the oscilloscope from the HV and to suit the electric signal to the oscilloscope input voltage. The value of the capacitors (4 nF) applied to stabilize the accelerating voltage and the value of the capacitors (2 nF) used to separate the oscilloscope from the HV are calculated assuming a storage charge higher than the extracted one. Under this condition, the accelerating voltages during charge extraction are constant and the oscilloscope is able to record the real signal.
TE is not able to support the suppressing electrode on the cup collector, and secondary electron emission, caused by high ion energy, is therefore present.In this configuration, we are aware that the output current values are about 20 % higher than the real values [11].
Results and discussion
The value of the laser irradiance used to produce the ion beams was 1.0 × 10^8 W cm^−2 and the ablated target was a pure (99.99%) disk of Cu.
Figure 2 shows the time of flight (TOF) spectra obtained at 60, 40 and 30 kV of total accelerating voltage from the Cu target, detecting the ion emission with the Faraday cup placed 23 cm from the target. The vertical axis represents the output current. The maximum output current is reached applying 40 and 20 kV, respectively, on the first gap (EC-GE) and on the second gap (GE-TE).
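TOF spectra of this kind are commonly converted to kinetic energies via E = ½m(L/t)²; the sketch below applies this free-flight estimate for the 23 cm target-detector distance, with the arrival time chosen purely for illustration and post-acceleration fields neglected.

```python
AMU_KG = 1.66053906660e-27   # atomic mass unit in kg
EV_J = 1.602176634e-19       # electron volt in J

def tof_energy_eV(t_s, flight_m=0.23, mass_amu=63.5):
    """Kinetic energy from time of flight, E = m (L/t)^2 / 2, for a Cu ion
    (m ≈ 63.5 u) reaching the Faraday cup 23 cm from the target."""
    v = flight_m / t_s
    return 0.5 * mass_amu * AMU_KG * v**2 / EV_J

# An ion arriving 540 ns after the laser pulse carries roughly 60 keV.
print(f"{tof_energy_eV(540e-9):.3e} eV")
```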
The space charge effects are more evident at the lowest accelerating voltage applied on the first gap, although they are present anyway. So, considering only the TOF curves out of the charge domination effects, we obtain the behaviour of the accelerated charge with respect to the accelerating voltage. From these results, we can observe the absence of a saturation phase. In fact the curves in Fig. 3 have a growing trend with respect to the applied voltage. The growing trend on the first gap voltage is larger than the one dependent on the second gap voltage. Theoretically, we can expect to observe a constant trend for the curves dependent principally on the voltages of the second acceleration gap, because the charge is already extracted and its value should be constant. We can ascribe this behaviour to the secondary emission of electrons from the cup collector, because we are not able to prevent this emission, owing to the absence of a suppressing electrode on the Faraday cup. So we expect that the real charge must be increased by about 20% for the used voltage values.
Fixing the voltage at 20 kV in the second gap, the extraction current increased with the change of the voltage applied on the first gap, reaching 150% at the maximum voltage of 40 kV with respect to the value obtained at 0 kV. This result implies strong dependence of the extraction efficiency on the first stage voltage. In effect, simulation results performed in previous works have shown that the electric field strength near the EC hole increases with respect to the applied voltage. This fact can enlarge the extracting volume inside the anode and, as a consequence, the extraction efficiency. The Cu target at zero voltage produced ion beams containing 1.2 × 10^11 ions/pulse (0.7 × 10^11 ions cm^−2). Instead, applying accelerating voltages of 40 and 20 kV in the first and second accelerating gap, respectively, we obtained an increase of the ion dose up to 3.4 × 10^11 ions/pulse (2 × 10^11 ions cm^−2).
For a geometric characterization of the beam, we performed emittance measurements by the pepper pot technique. We can assume the propagation direction of the beam along the z axis. The x-plane emittance ε_x is 1/π times the area A_x in the x-x' trace plane (TP_x) occupied by the points representing the beam particles at a given value of z, namely

ε_x = A_x / π.    (1)

In the phase plane (PP_x), spanned by x and the transverse momentum p_x = m_0 c βγ x', the area occupied by the particles is defined as [11]

A_px = ∫∫_{f ≥ f_0} dx dp_x,    (2)

where f_0 is the distribution function of the particles, m_0 is the rest mass of the particles, β = v/c and γ = (1 − β²)^(−1/2). Actually, it is necessary to define an invariant quantity of the motion called normalized emittance ε_nx in the TP_x. By Liouville's theorem, it is known that the area occupied by the particle beam in PP_x is an invariant quantity, and the normalised emittance can assume the form

ε_nx = βγ ε_x.    (3)

Figure 4 shows a sketch of the system used to measure the emittance value. The mask we used has 5 holes of 1 mm in diameter, and it was fixed on the GE. One hole is in the centre of the mask and 4 holes are at 3.5 mm from the centre. We used Gafchromic EBT radiochromic films (R) as photo-sensible screen, placed on the TE.
Radiochromic detectors take advantage of the direct impression of a material by the absorption of energetic radiation, without requiring latent chemical, optical, or thermal development or amplification. A radiochromic film changes its optical density as a function of the absorbed dose. This property, and the relative ease of use, led to the adoption of these detectors as simple diagnostic tools for the transverse properties of ion beams.
So, the ion beam after the mask imprinted the radiochromic film, and it was then possible to measure the divergence of all beamlets. The divergence values allowed us to determine the beam area A_x in TP_x. We applied 250 laser shots to imprint the radiochromic films.
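A minimal sketch of turning such beamlet measurements into the trace-plane area and emittance is shown below; the positions and divergences are invented, and the area is taken as the convex hull of the sampled points, which is one possible reading of the area-based definition in Eq. 1.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical beamlet data: hole position x (mm) and measured divergence
# x' (mrad) for rays reconstructed from the radiochromic film spots.
x = np.array([-3.5, -3.5, 0.0, 0.0, 3.5, 3.5])          # mm
xp = np.array([-12.0, -7.0, -2.5, 2.5, 7.0, 12.0])      # mrad

# Area occupied in the x-x' trace plane; in 2D, ConvexHull.volume is the
# enclosed area, so the emittance of Eq. 1 is A_x / pi.
a_x = ConvexHull(np.column_stack([x, xp])).volume        # mm·mrad
print(f"emittance ≈ {a_x / np.pi:.1f} mm mrad")
```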
We measured the emittance for different accelerating voltage values. We fixed the accelerating voltage of the second gap at 20 kV, while the accelerating voltage of the first gap was put at 10, 20 and 40 kV. So, the obtained values of the area in the TP_x resulted in 613, 545 and 435 mm mrad for 30, 40 and 60 kV of total accelerating field, respectively (Fig. 5).
Considering Eq. 3, we found the normalized emittance values.
For all the applied voltage values, the normalized emittance resulted constant: ε_nx = 0.2 π mm mrad.
Therefore, to estimate the total properties of the delivered beams, it is necessary to introduce the concept of brightness. Brightness is the ratio between the current and the emittance along the x- and the y-axis. Generally assuming ε_x equal to ε_y, the normalised brightness becomes

B_n = I / (ε_nx ε_ny) = I / ε_nx².    (4)

By Eq. 4, at the current peak (5.5 mA), the brightness resulted in 137 mA (π mm mrad)^−2 at 60 kV of accelerating voltage.
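The reported value follows directly from the two measured quantities, as the following check shows.

```python
# Normalized brightness B_n = I / (eps_nx * eps_ny) with eps_nx = eps_ny.
i_peak = 5.5          # mA, output current peak
eps_n = 0.2           # pi mm mrad, normalized emittance
print(f"{i_peak / eps_n**2:.1f} mA (pi mm mrad)^-2")  # 137.5, reported as 137
```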
Due to low emittance and high current, this apparatus is very promising for use in feeding large accelerators. The challenge of the moment is to get accelerators with dimensions so small that they can be easily deployed in small laboratories and hospitals.
Conclusions
Post-acceleration of ions emitted from laser-generated plasma can be developed to obtain small and compact accelerating machines. The output current can easily be increased by raising the accelerating voltage. The applied voltage can cause breakdowns, and for this reason the design of the chamber is very important (primarily its dimensions and morphology). We have also shown that two gaps of acceleration can efficiently increase the ion energy. By increasing the voltage of the first accelerating gap, we substantially increased the efficiency of the extracted current due to the rise of the electric field and the extracting volume inside the EC. The charge extracted without the electric fields was 0.7 × 10^11 ions/cm^2. At the maximum accelerating voltage the ion dose was 2 × 10^11 ions/cm^2, and the corresponding peak current was 5.5 mA. We measured the geometric characteristics of the beam utilising the pepper pot method. We found a low value for the normalized emittance of our beams, ε_nx = 0.2 π mm mrad. The resulting brightness was therefore 137 mA (π mm mrad)^−2.
This study has demonstrated that our apparatus can produce ion beams of good quality, e.g. with a low emittance value and high current. For this reason it is very promising for use in feeding large accelerators.
Figure 2.
Figure 2. Waveforms of the output current at different accelerating voltages for a Cu target; laser irradiance: 0.1 GW cm^−2.
Figure 4.
Figure 4. Sketch of the pepper pot system used to measure the emittance.
Figure 5.
Figure 5. Emittance diagram in the trace plane for different accelerating voltage values.
"Physics"
] |
HBIM and augmented information: towards a wider user community of image and range-based reconstructions
Abstract. This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims at demonstrating that a complex HBIM can be managed in portable devices to extract useful information not only for expert operators, but also towards a wider user community interested in cultural tourism.
INTRODUCTION
The use of handheld mobile devices (mobile phones, tablets, …) is no longer limited to personal and recreational purposes. Mobile devices are used for productive work in different disciplines (medicine, education, simulated training, …) for their high flexibility and real-time access to information. Nowadays, it is normal to digitize notes, send and receive invoices, manage parts, scan barcodes or make payments. The aim of this work is to investigate the possibility to handle accurate HBIM in portable devices for cultural heritage conservation and preservation policies with the aid of new technologies. The different operators (architects, archaeologists, restorers, historians, conservators, engineers, etc.) can exploit the advantage of the improved collaboration of BIM projects through the use of portable tools. On the other hand, practical applications are not only confined to professional operators. In this work particular attention is paid to a wider user community with different purposes, including built environment education, interactive learning, cultural tourism, and gamification, among others. Mobile devices can have a fundamental role in stimulating the interaction between people and digital cultural heritage in order to (i) connect people to heritage, (ii) create knowledge, and (iii) preserve the cultural heritage resource. There is growing interest in surveying, modelling, and visualization techniques for a better exploitation of the cultural content with the development of new digital applications and tools towards a more reflective society (Cuca et al., 2014). Although the integrated use of augmented information (including virtual reality and augmented reality) in mobile devices is not a new concept in the construction industry (Park and Wakefield, 2003; Osello, 2012), the integration of detailed HBIM poses new challenges right from the first phases of the reconstruction process. One of the big issues concerns the need of a very detailed model and its operational use in handheld mobile devices. This could require HBIM with a better level of detail than models for modern construction projects with predefined object libraries. Additional issues could arise in terms of memory occupation, making the exploitation of the model more complicated in mobile devices. The generation of an accurate and complete HBIM of a historic construction is a complex task for the lack of commercial software able to manage the geometric complexity (Fai et al., 2011; Murphy et al., 2013; Brumana et al., 2013, 2014; Oreni et al., 2014a) revealed by laser scanning and photogrammetric point clouds. Building Information Modelling is based on intelligent objects with relations to other objects, attributes, and parametric modelling tools. The use of advanced NURBS surfaces turned into complete HBIM objects was proposed by Oreni et al.
(2014b) to avoid excessive simplifications resulting in models not useful for conservation.On the other hand, the information encapsulated into the model can be used to extract different kinds of solutions for different kinds of users.The advantages of BIM managed in mobile devices is very attractive not only for expert operators of the Architecture, Engineering and Construction (AEC) industry.The extension towards a complete HBIM can open up new opportunities and alternative approaches to data representation, organisation and interaction, also for the operators in the field of cultural heritage.Some mobile applications integrating BIM technology were proposed in Waugh et al. (2012), where a web-based augmented panoramic environment was developed to document construction progress.Park et al. (2013) presented a conceptual framework that integrates augmented reality with BIM to detect construction defects.Dunston and Wang (2005) proposed augmented reality as a tool to support all phases of the facility life cycle.As mentioned, this paper try to obtain a wider diffusion of the digital reconstruction.Augmented information is already used for built environment education, in particular with the provision of real knowledge that can be used in the real world (Tatum, 1987).Several examples have shown that augmented reality can aid tourist organizations and professionals, reaching a wider audience by serving as the delivery technology of advanced multimedia contents.Augmented reality information systems can help tourists with an easy access to valuable information, improving their knowledge regarding a touristic attraction or a destination, enhancing the tourist experience, and offering increased levels of entertainment throughout the process (Fritz et al., 2005).The work investigates the full pipeline for HBIM generation, starting form data acquisition till data delivery.Some mobile applications were tested to take into considerations both professional and casual users.The case study is Castel Masegra, a castle located in Sondrio (Italy).A detailed historical HBIM (500 MB in Revit) was derived from laser scanning point clouds (7.5 billion points).Building information modelling was carried out by dividing the different structural objects and their constructive logic.Chronological, material, and stratigraphic aspects were also taken into account.This step is not only useful for architectural purposes, but also for further static and dynamic simulations where the temporal evolution of the castle provides additional data about its logic of constructions (Barazzetti et a., 2015).The model was then exported and converted into several formats to exploit the possibility offered by the available portable applications.Two different mobile products were generated from the original HBIM.The first one is a complete HBIM integrated with technical data to support restoration and conservation projects.The second product is instead more oriented towards a wider user community.It aims at stimulating a larger interest in historical resources with remote navigations and on-site immersive visualizations.The obtained results showed that modern mobile devices are sufficient to handle advanced BIM reconstructions not only for technical purposes, but also for promoting cultural tourism.Fig. 2. The creation of a HBIM can be exploited for different purposes and user communities.
DATA ACQUISITION AND PROCESSING
The geometric survey of Castel Masegra was carried out with laser scanning and photogrammetric techniques. The complexity and the size of the castle required 176 scans registered with the support of a geodetic network. A Leica TS30 total station was used to create a robust geodetic network made up of 68 stations. The measurement phase took 4 days and included not only station points and some fixed points (mainly retro-reflective targets), but also chessboard targets for scan registration. Geodetic tripods were not moved during the survey to avoid repositioning errors. In all, 4,622 observations and 1,402 unknowns gave 3,220 degrees of freedom. Least squares adjustment provided an average point precision of about ±1.2 mm. The network provided a robust reference system for scan registration. The instrument used for scanning is a Faro Focus 3D, and the final point cloud is made up of 7.5 billion points. Scan acquisition took 5 days. Scans were registered with an average precision of ±3 mm by using chessboard targets measured with the total station and additional scan-to-scan correspondences (spherical targets). The survey was then integrated with some additional scans to capture the areas still occluded after the first surveying phase. The limited time for scan acquisition (less than a week for the large and complex case study presented in this paper) proves the level of maturity reached by modern laser scanners. The time needed for data processing (especially HBIM generation) is instead much longer, especially in the case of a very detailed reconstruction.
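As a rough illustration of the adjustment statistics quoted above, the following sketch recomputes the network redundancy and the a posteriori standard deviation of unit weight; the observation and unknown counts are taken from the text, while the residuals and weights are simulated placeholders rather than the actual survey data.

```python
import numpy as np

# Network statistics reported for the Castel Masegra survey
n_obs, n_unknowns = 4622, 1402
dof = n_obs - n_unknowns          # degrees of freedom (redundancy)
assert dof == 3220

# Simulated residuals v and weights P standing in for the real adjustment,
# where x = (A'PA)^-1 A'Pl would be solved for the network unknowns.
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.0012, size=n_obs)   # residuals in metres (~1.2 mm)
P = np.ones(n_obs)                         # unit weights for simplicity

# A posteriori standard deviation of unit weight: sigma0 = sqrt(v'Pv / dof)
sigma0 = np.sqrt((v * P) @ v / dof)
print(f"redundancy: {dof}, a posteriori sigma0: {sigma0 * 1000:.2f} mm")
```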
Photogrammetric techniques were used to complete the reconstruction of some elements. Different digital cameras with different lenses were used after a preliminary photogrammetric calibration. Images were extremely useful to extract dense point clouds which integrate the laser scans. Photogrammetry was also used for some elements that required a representation with orthophotos (e.g., the umbrella vault in Fig. 4). The use of total station data allowed one to obtain a common reference system for the different acquisition techniques. A survey with a drone (Asctec Falcon 8) equipped with an RGB camera provided a set of images for the inspection of the roof and the small surrounding hill. As the goal of the project is a HBIM useful for architectural and structural purposes, the surveying phase cannot be limited to the reconstruction of the shape. The presented measurement techniques can reveal the external layer of construction elements, whereas a HBIM is made up of objects with an internal structure. As the goal is the creation of an interoperable HBIM and its distribution among the different operators that work on the castle (engineers, architects, historians, archaeologists, restorers, etc.), the survey included a historical analysis, materials, construction phases, technological aspects, stratigraphic analysis, and information from other inspections such as infrared thermography or structural tests (flat-jacks, coring, etc.) (Binda and Tiraboschi, 1999; Colla et al., 2008; Gregorczyk and Lourenco, 2000; Rosina and Grinzato, 2001).
POINT CLOUDS TURNED INTO HBIM
The starting point for the generation of the HBIM is the set of dense laser scanning point clouds which reveal the geometric complexity of the castle. HBIM reconstruction must be carried out by considering the basic requirements of BIM projects. Detailed building information modelling cannot be carried out with the interpolation of the point cloud using the mesh-based algorithms often used in photogrammetry and computer vision. Additional information (e.g., materials, construction stages, stratigraphy, etc.) has to be taken into account to create intelligent parametric objects with relations to other objects and attributes. Photogrammetric and laser scanning measurements are powerful tools for the generation of object surfaces. However, BIM projects are composed of solid elements with additional information about the internal layers. As point clouds reveal the external surface of the object, the use of other techniques (e.g., IRT), the inspection and analysis of the constructive logic, and architectural/structural interpretation are mandatory to correctly represent the different elements and their connections.
The methodology for parametric BIM generation was based on a preliminary separation between simple and complex shapes. In the case of simple objects, the tools of most commercial software (Autodesk Revit in this case) were sufficient for an accurate reconstruction. However, the case of irregular objects (e.g., vaults, arches, etc.) was much more complicated because of the lack of commercial solutions for BIM generation able to take into account the level of detail encapsulated in laser point clouds (Eastman et al., 2008; Lee et al., 2006). For this reason, the procedure proposed by Oreni et al. (2014b) was used to create parametric BIM objects based on surfaces made up of NURBS curves and NURBS surfaces (Piegl and Tiller, 1997; 1999). Shown in Fig. 5 are some BIM objects used in the project: walls, vaults, columns, ceilings, beams and trusses, stairs, and decorations. Structural elements were classified following the predefined structure of the software database: category, family, type, and instance.
3D modelling was carried out from slices and 2D drawings created from the laser cloud. The preliminary use of 2D drawings is a valid tool to distinguish areas where accurate 3D modelling is required from parts that can be simplified with predefined objects. Plans, sections, and elevations correctly positioned in space provided the reference frame for the reconstruction of the model. This is a fundamental point towards the creation of an accurate BIM consistent with the preliminary products of the geometric survey. Starting from the sections, the main deviations from verticality of the exterior walls were identified, whereas the interior walls appeared reasonably vertical. Revit tools for openings ("windows" or "doors") were not directly used. The basic functions of the software allow one to define a large number of predefined parameters; however, only a limited correspondence was found for the complex openings of the castle. The accurate modelling of the wooden frames required the definition of ad hoc families for the different types of openings. As things stand at the moment, the openings were modelled as "voids" in the BIM.
One of the advantages of BIM is the opportunity to include different kinds of information in a dynamic common platform for the different operators involved in the project. Historical research was carried out to understand the different construction stages and their interconnections, which were represented in the final HBIM with multi-temporal elements. This means that the HBIM can provide a visualization of the changes and modifications that occurred in the past, up to the current shape. Obviously, the historical research cannot be assumed as a final solution. From this point of view, the use of a HBIM is a valid tool for the possibility of modifying an initial hypothesis without redrawing (different construction stages can be set in the database).
The 3D visualization shown in Fig. 6 allows an immediate evaluation of the modifications that occurred. This information can be used for different purposes. However, this paper focuses on cultural tourism (see the next section) with the availability of historical data on mobile platforms. Another application was presented in Barazzetti et al. (2015), where the structural analysis via finite element models was illustrated. The simulation included the different construction stages, which are fundamental to understand the different connections between the elements of the castle. These examples confirm the suitability of HBIM technology as a common environment for both casual and expert operators.
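As a simplified illustration of the curve-fitting idea behind the NURBS-based procedure mentioned above, the sketch below fits a smoothing B-spline (a special case of NURBS with uniform weights) to a 2D slice of a point cloud, such as a vault profile; the slice points are synthetic placeholders, not data from the Castel Masegra survey.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic stand-in for a noisy 2D profile sliced from the laser point cloud
t = np.linspace(0.0, np.pi, 200)
rng = np.random.default_rng(1)
x = np.cos(t) + rng.normal(0.0, 0.005, t.size)
y = np.sin(t) + rng.normal(0.0, 0.005, t.size)

# Fit a parametric smoothing B-spline to the ordered slice points;
# s trades fidelity to the scan against smoothness of the curve.
tck, u = splprep([x, y], s=0.01)

# Evaluate the continuous curve: such profiles can drive a sweep/extrusion
# in the BIM environment to build the corresponding parametric object.
xs, ys = splev(np.linspace(0.0, 1.0, 500), tck)
print(f"fitted profile sampled at {len(xs)} points")
```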
FROM HBIM TO AR & VR
Different apps for mobile phones and tablets were taken into consideration for an exhaustive exploitation of the reconstruction carried out from images and laser scans. These applications are not limited to specific BIM solutions for professional operators. Casual and non-expert users, more interested in interactive visualizations coupled with usable information, can exploit some of the advantages of these representations. Augmented and virtual reality can be very important technologies for the collection, preservation, exploration and diffusion of cultural heritage.
BIM in portable devices: 3D models with information
Autodesk 360 (A360) is a mobile application directly connected to BIM projects generated in Autodesk Revit (Fig. 7). Different professional operators can exploit the advantages of such an application, which offers a dynamic visualization of 2D and 3D drawings. On the other hand, the app is not only a viewer of geometric reconstructions. An efficient and simple visualization is provided for BIM projects, including object properties and reports of the different activities. A HBIM project in Revit can be saved in the DWF file format, which preserves object information. Different visualizations (3D views, sections, plans) can be set in the project to facilitate access to the different parts of the model. Although some problems were found with object textures, A360 allowed an efficient visualization of the huge Revit file of the castle (more than 500 MB) thanks to a preliminary conversion into DWF without losing object information.
BIMx is another application able to preserve some information during the conversion of the BIM project into a portable version (Fig. 8). The visualization of the model of the castle was very fluid, with the opportunity to create dynamic sections. On the other hand, BIMx cannot open Revit models. For this reason, the model of the castle was converted into the interoperable IFC format. The IFC file was imported into ArchiCAD, from which the file for BIMx can be generated. The exchange of information between two BIM software packages resulted in some errors during the conversion of particular objects, including an information loss especially for CAD blocks, object textures and some material properties.
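One practical way to quantify the information loss observed in the Revit to IFC to ArchiCAD to BIMx chain is to inspect the intermediate IFC file programmatically. A minimal sketch using the open-source ifcopenshell library is given below; the file name and the checked entity classes are illustrative assumptions, not items from the original workflow.

```python
import ifcopenshell
import ifcopenshell.util.element

# Open the IFC file exported from the BIM authoring tool (hypothetical path)
model = ifcopenshell.open("castel_masegra.ifc")

# Count the entities that survived the conversion, per IFC class
for ifc_class in ("IfcWall", "IfcColumn", "IfcSlab", "IfcBeam"):
    print(ifc_class, len(model.by_type(ifc_class)))

# Spot-check whether attributes and property sets were preserved
for wall in model.by_type("IfcWall")[:5]:
    psets = ifcopenshell.util.element.get_psets(wall)
    print(wall.GlobalId, wall.Name, sorted(psets.keys()))
```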
Virtual visit of the castle
iVisit3D is a user-friendly application able to create virtual visits with static images and panoramic views. A 3D model can be interactively navigated by setting links between the predefined images. The use of static visualizations makes the navigation very fluid also on platforms with moderate performance (Fig. 9). The Revit file cannot be opened directly with iVisit3D and needed a preliminary conversion into a polygonal format. Rhinoceros was used to handle the geometric model. However, this generated a complete information loss, including parametric modelling functions, object relations, and attributes. This is a fundamental difference between models created in BIM packages (Revit, ArchiCAD, Tekla, etc.) and software for 3D modelling (Rhinoceros, 3DS Max, SketchUp, etc.). After the generation of the 3D model, iVisit3D uses the powerful rendering engine Artlantis to create virtual tours, where textures can be applied to objects. Different illumination conditions can be simulated as well.
Fig. 9. The virtual tour generated in iVisit3D required the preliminary conversion of the BIM with a complete information loss.
Augmented reality from BIM
AR-media creates 3D visualizations based on augmented reality on mobile phones and tablets (Fig. 10). Two plugins are provided: the first one allows the conversion of a 3D model directly from a project generated in other 3D modelling environments; the second one allows the visualization by means of augmented reality techniques. The first plugin is available for different software packages, which are mainly pure modellers (Maya, SketchUp, etc.). The BIM model in Revit cannot be directly used by this package, and a preliminary conversion into FBX is needed. This means that there is information loss. Finally, a marker can be set so that the package will show the corresponding 3D model when the camera captures the marker.
Fig. 10. A 3D model without information can be imported into augmented reality applications.
Discussion
Table 1 shows a general comparison of the proposed applications. Some applications are mainly developed for BIM projects and are more oriented towards expert operators in the field of construction. A very important aspect concerns the lack of "perfect" interoperability during the exchange of information between different BIM packages. Conversion errors (information loss, attribute removal, modification of spatial location) were discovered for some objects, notwithstanding the availability of interoperable file formats (IFC in this case) for input and output. The HBIM can be intended as the central point of data processing. If the project requires an accurate BIM, technical products (plans, sections, etc.) and additional products (3D models for virtual reality, etc.) can be derived from the original BIM file. This means that the BIM and its sub-products can be used by both expert and casual users. Future work is needed to better understand the potential of BIM technology not only for expert operators, but also for a wider user community. The integration of other existing technologies and disciplines (e.g., QR codes, positioning systems integrated in mobile devices, advanced visual recognition systems, multimedia video glasses, etc.) is a very promising field of research to promote the interaction between users and cultural heritage. The main idea is an approach towards a reflective society which supports the sustainable management, preservation and valorisation of built heritage (Fig. 11).
Fig. 11. The closed loop aims at stimulating a better interaction between people (expert operators, tourists, etc.) and built heritage through multimedia.
CONCLUSION
The growing interest in Building Information Modelling is a great opportunity for expert operators in the field of photogrammetry, laser scanning, and 3D modelling. Accurate as-built BIM of existing constructions, including HBIM of historic constructions, can play a new role in digital documentation. This paper demonstrated that a detailed HBIM can be derived from laser scanning and photogrammetric point clouds, obtaining rigorous models that reveal the geometric complexity of the building. In other words, the generation of accurate models that reveal the geometry is not limited to the few software packages for "pure" geometric modelling, whose output is not a BIM. On the other hand, new tools and functions were developed and implemented to overcome the lack of commercial software packages able to reconstruct complex shapes. Parametric modelling tools, relations, and attributes can be added to obtain a complete HBIM and not only a geometric reconstruction achievable with software for "pure" 3D modelling. The final model must therefore be an interoperable product for common BIM packages (Revit, ArchiCAD, Tekla, etc.), where parametric modelling tools, relations and attributes can be preserved through interoperable file formats. On the other hand, this work demonstrated that IFC files derived from commercial packages have compatibility issues, including modifications or information loss. This means that portable BIM viewers need a (partial) correction of the file saved in different formats.
The conversion of the model into pure geometric formats was mandatory for the virtual reality and augmented reality packages. In this case, a complete information loss is inevitable. On the other hand, the first results with huge BIM files demonstrated that portable devices are sufficient to handle complex and detailed models (Fig. 12).
Mobile BIM applications can be a valid tool for expert operators interested in the conservation process, whereas 3D models and virtual tours can be generated from the HBIM, without redrawing a new model for these specific purposes.
The BIM can therefore be intended as a central tool for the accurate investigation of the building. Different sub-products can then be obtained without splitting data processing into different modelling projects for different purposes. A HBIM can therefore be the starting point for expert operators mainly interested in preservation and restoration, or for a wider user community interested in a product derived from the original HBIM.
Fig. 12. The use of the HBIM with an iPad: interactive visualization and queries can be easily carried out.
Fig. 3. Top: the geodetic network measured with a total station (the average precision is ±1.2 mm) and a 3D view of laser scans (7.5 billion points). Bottom: panoramic visualization of a single scan.
Fig. 5. The final BIM is made up of a combination of objects with variable geometric complexity.
Fig. 6. A detail of the BIM in Revit, where different colours correspond to different construction stages.
Fig. 7. The BIM of Castel Masegra inspected with A360 on a tablet. The software preserves object information.
Fig. 8. The object-based visualization with BIMx is affected by an information loss due to the preliminary file exchange from Revit to ArchiCAD.
Table 1. Comparison of the four applications. | 4,928 | 2015-08-11T00:00:00.000 | [
"Computer Science"
] |
Effect of Dimple on Aerodynamic Behaviour of Airfoil
In order to boost the efficiency of an airfoil, its surface can be altered. A two-dimensional airfoil was analysed with and without dimples on the upper surface using CFD software. The NACA0012 non-cambered airfoil, with and without dimples, was analysed with the k-ε turbulence model. Both configurations were compared in terms of the coefficients of lift and drag. Dimples were located at four different positions and compared with each other and with the smooth airfoil. The flow velocity was kept constant for the different angles of attack. In CFD analysis, results fluctuate with the size of the grid, so to eliminate this fluctuation a grid independence test was carried out before the final analysis. During the grid independence test, the number of nodes was increased until the results became constant.
A separation bubble and fluctuation of the lift coefficient with time were observed during the study. The laminar separation bubble became unstable and developed primary and secondary vortices; the secondary vortex was much stronger than the primary one. The analysis was done from 0° to 10° angles of attack. As the angle of attack increased, the fluctuation also increased. The laminar separation bubble moved forward at increased angles of attack and started to reattach to the surface of the airfoil; hence, the lift coefficient increased suddenly.
Dimples delay flow separation on bluff bodies (C.K. Chear & Dol, 2015). The authors simulated a car model with different dimple ratios using the k-ε turbulence model, where the ratio of a dimple was taken as its depth to diameter. At a zero-degree angle of attack, dimples on an airfoil show no change in drag compared to a smooth airfoil (Mustak & Harun, 2017). At high angles of attack, however, the airfoil behaves like a bluff body: dimples lead to a delay in separation and wake formation and also increase the angle of stall. In that work, a NACA4415 airfoil was used, with the drawing first made in SolidWorks. A hexagonal outward-dimpled profile was compared with the smooth profile of the airfoil. A physical model was prepared in wood and analysed in a wind tunnel. The hexagonal surface delays the start of flow separation by a 4° angle of attack: for the smooth surface it starts at 12°, whereas for the outward-dimpled surface it happens at a 16° angle of attack. The velocity of the air was taken as 43 m/s.
III. TURBULENT MODEL
Turbulence models are used because of the limitations of solving the Navier-Stokes equations directly. There are many turbulence models used in CFD analysis; generally, the k-ε and k-ω models are used in fluid flow, for both streamlined and bluff bodies. The kinetic energy of turbulent fluid flow is solved by the k-ε turbulence model. This model is less complicated than others, its computation time is also lower, and it can be used on low-memory computers.
A. k-ε turbulence model equations (Muralidhar & Sundrarajan, 2008)
The grid distribution scheme has many limitations: the global error cannot be controlled, but the local errors can. To control them, the number of nodes was increased until a constant result was obtained.
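The model equations themselves do not survive in the text; for reference, the standard high-Reynolds-number form of the k-ε transport equations, which is presumably the form used here, reads:

$$\frac{\partial(\rho k)}{\partial t}+\frac{\partial(\rho k u_i)}{\partial x_i}=\frac{\partial}{\partial x_j}\left[\left(\mu+\frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]+P_k-\rho\varepsilon$$

$$\frac{\partial(\rho\varepsilon)}{\partial t}+\frac{\partial(\rho\varepsilon u_i)}{\partial x_i}=\frac{\partial}{\partial x_j}\left[\left(\mu+\frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial\varepsilon}{\partial x_j}\right]+C_{1\varepsilon}\frac{\varepsilon}{k}P_k-C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}$$

with turbulent viscosity $\mu_t=\rho C_\mu k^2/\varepsilon$, production term $P_k$, and the usual constants $C_\mu=0.09$, $C_{1\varepsilon}=1.44$, $C_{2\varepsilon}=1.92$, $\sigma_k=1.0$, $\sigma_\varepsilon=1.3$.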
IV. GRID INDEPENDENCE TEST
As the number of nodes increases, the result varies accordingly, but a stage is reached where the results become fixed. This indicates the required number of nodes for the analysis, as shown in Fig. 1. In this work, 102,180 nodes have been used.
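A grid independence study of this kind can be scripted around any CFD solver. The sketch below shows the generic loop, with run_case standing in for a hypothetical call that meshes the domain with a given node count and returns the computed lift coefficient; the node counts and tolerance are illustrative choices, not values from this work (apart from the 102,180-node grid).

```python
def run_case(n_nodes: int) -> float:
    """Hypothetical wrapper: mesh with n_nodes, run the CFD solver, return Cl."""
    raise NotImplementedError  # replace with a call to the actual CFD tool

def grid_independence(node_counts, tol=1e-3):
    """Increase the node count until the lift coefficient stops changing."""
    previous = None
    for n in node_counts:
        cl = run_case(n)
        print(f"nodes={n}: Cl={cl:.5f}")
        if previous is not None and abs(cl - previous) < tol:
            return n  # results are grid-independent at this resolution
        previous = cl
    raise RuntimeError("no grid-independent solution in the tested range")

# Illustrative progression around the 102,180 nodes used in this work:
# grid_independence([25_000, 50_000, 75_000, 102_180, 150_000])
```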
V. COMPUTATION METHOD
The NACA0012 smooth airfoil profile and the dimpled airfoil were used to study the aerodynamic behaviour of the airfoils. The shapes of the airfoil models are shown in Fig. 2, and the far field and meshing used for computation in the CFD software are shown in Fig. 3. The diameter of the dimple was taken as 0.02% of the chord.
Practical data were taken from airfoiltools.com (2016); these data were used to validate the accuracy of the work. The coordinates of the NACA0012 airfoil were downloaded from Confluence (2015).
The computed results for the smooth airfoil were compared with the practical data. This confirmed that the calculation procedure was correct. After that, dimples were created on the airfoil at different locations, since the dimple location affects the results. In this work, five airfoils were used: one smooth, and the remaining four with dimples at 10%, 25%, 50% and 75% of the chord length, respectively. The results of the smooth airfoil were compared with the outcomes of the dimpled airfoils. The flow of air was taken as 7.3 m/s and the density as 1.225 kg/m³.
The flow behaviour at different angles of attack is shown in Figs. 9-13. At a 10° angle of attack, the fluid starts separating and generates wakes, which leads to pressure drag. As the angle of attack reaches 16°, separation reaches its maximum value, after which lift starts decreasing. The pressure distributions in Figs. 14-18(A) show that the pressure on both surfaces is approximately similar at low angles of attack. At higher angles, the difference of pressure near the leading edge is wider, which indicates that lift starts from the leading edge; this was also confirmed by the pressure contour diagrams. Flow separation on the smooth airfoil can be seen in Fig. 19(B), whereas flow separation is delayed in the airfoil dimpled at 75% of the chord, as can be seen in Fig. 23(B). This proves that the dimpled airfoil performs better than the smooth airfoil.
It was seen that the dimple at 10% of the chord showed worse results than the smooth airfoil. The coefficient of lift was increased and the drag was decreased for the airfoil dimpled at 75% of the chord. Dimples at 25% and 50% of the chord length also did not perform well. Fig. 24 shows that the coefficient of lift increased by 7% for the airfoil having a dimple at 75% of the chord length, compared to the smooth airfoil. In the same manner, as shown in Fig. 25, the coefficient of drag was reduced by 3% for the same airfoil. The location of the dimple on the airfoil plays an important role; in this work, a dimple at 75% of the chord length proved to be the best location.
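For reference, the lift and drag coefficients reported in Figs. 24 and 25 follow from the standard normalization of the computed forces by the dynamic pressure. A small worked sketch with the free-stream values used in this study (V = 7.3 m/s, ρ = 1.225 kg/m³) is given below; the force values and reference area are illustrative placeholders, not results from the paper.

```python
# Standard normalization of the computed aerodynamic forces:
#   Cl = L / (0.5 * rho * V**2 * A),  Cd = D / (0.5 * rho * V**2 * A)
rho = 1.225  # air density used in this work [kg/m^3]
V = 7.3      # free-stream velocity used in this work [m/s]
A = 1.0      # reference area (chord x unit span), placeholder [m^2]

q = 0.5 * rho * V**2  # dynamic pressure [Pa]

def coefficients(lift, drag):
    """Convert dimensional forces [N] into Cl and Cd."""
    return lift / (q * A), drag / (q * A)

cl, cd = coefficients(lift=20.0, drag=1.5)  # illustrative forces only
print(f"q = {q:.2f} Pa, Cl = {cl:.3f}, Cd = {cd:.3f}")
```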
ACKNOWLEDGMENT
I take this opportunity to express my gratitude to God, my parents and everyone who supported me throughout the course of this research paper. I am thankful for their aspiring guidance, invaluably constructive criticism and friendly advice during the work. I am sincerely grateful to them for sharing their truthful and illuminating views on a number of issues related to the project.
I express my warm thanks to Dr. M.P. Singh, Jagannath University, and Dr. Tej Singh Chouhan for their support and guidance in this work.
We would also like to show our gratitude to Dr. Vivek Sharma, Jagannath University, for sharing his pearls of wisdom with us during the course of this research. | 1,523.2 | 2017-06-30T00:00:00.000 | [
"Engineering",
"Physics"
] |
On-line Detection and Classification of PMSM Stator Winding Faults Based on Stator Current Symmetrical Components Analysis and the KNN Algorithm
: The significant advantages of permanent magnet synchronous motors, such as very good dynamic properties, high efficiency and power density, have led to their frequent use in many drive systems today. However, like other types of electric motors, they are exposed to various types of faults, including stator winding faults. Stator winding faults are mainly inter-turn short circuits and are among the most common faults in electric motors. In this paper, the possibility of using the spectral analysis of symmetrical current components to extract fault symptoms and the machine-learning-based K-Nearest Neighbors (KNN) algorithm for the detection and classification of the PMSM stator winding fault is presented. The impact of the key parameters of this classifier on the effectiveness of stator winding fault detection and classification is presented and discussed in detail, which has not been researched in the literature so far. The proposed solution was verified experimentally using a 2.5 kW PMSM, the construction of which was specially prepared for carrying out controlled inter-turn short circuits.
Introduction
The popularity of Permanent Magnet Synchronous Motors (PMSMs) has continued to increase in recent years. This is due to the fact that they are characterized by very good properties such as very high efficiency, high reliability, control of a wide range of rotational speeds and a low rotor moment of inertia [1,2]. Because of this, PMSMs are largely applied to automotive motors, home appliances and other industrial automatic control applications, gradually replacing induction motors [3,4].
In general, electric motors, even when operated under normal conditions, are exposed to various types of damages. The most common faults of electric machines are bearing (41%), stator (36%) and rotor (9%) faults, whereas 14% correspond to other failures [5]. This also applies to highly efficient and durable PMSMs. The stator winding fault is one of the most common faults of PMSMs. Apart from the wrong connection of windings, stator faults include various types of short circuits ( Figure 1): inter-turn short circuits, short circuits between the coils in one phase, phase-to-phase short circuits, phase-to-ground short circuits and open circuits (breaks in phases) [6]. However, the most common situation is that a stator winding fault starts with an inter-turn short circuit, which is very difficult to detect at an early stage.
Inter-turn short circuits are mainly caused by stator winding insulation damage due to electrical stresses, mechanical stresses and overload [7]. This type of failure is very destructive. An imperceptible short circuit between adjacent turns can spread very quickly over the whole winding, causing the main short circuit and leading to an emergency stop of the drive system [8]. This spreading is the result of a large circulating fault current induced in the faulted loop, which is associated with a significant temperature increase in a given part of the winding, rapidly degrading the winding insulation [9]. Moreover, stator winding faults can have a negative impact on the rotor permanent magnets. Due to the high temperature in the shorted part of the stator winding and a magnetic field value amplified to greater than the magnet coercivity, partial or complete irreversible demagnetization may occur [10].
Taking into account the aforementioned increasing popularity of PMSMs, with the nature of stator winding faults and the constant pursuit of the most reliable solutions in mind, new methods of detecting and classifying this type of failure with the highest possible efficiency and at an early stage are still being sought. The development of such methods may prevent the complete and costly failure of the drive system. Emergency downtimes may also cause long delays in the industrial process. Moreover, an effective diagnostic system guarantees safe operation and extends the lifetime of the motor [11,12].
There are many methods used for electric motor fault detection, including PMSMs [13][14][15][16][17]. Diagnostic methods are mostly based on the processed signal. Signal processing allows for the extraction of fault features [18]. Mathematical apparatuses used for symptom extraction from the stator phase current signal include those that perform frequency and time-frequency domain analysis. The phase current signal is the most commonly used signal in the process of stator winding fault detection [19]. One of the most popular fault diagnosis techniques based on motor current analysis is Motor Current Signature Analysis (MCSA). Fast Fourier Transform (FFT) is also a powerful and simple MCSA technique [20]. The effectiveness of the application of this method for the detection of inter-turn short circuits was confirmed among others in [8] and [21]. The group of methods that performs time-frequency domain analysis is dominated by Continuous Wavelet Transform (CWT) [22,23], Discrete Wavelet Transform (DWT) [24,25], Short-Time Fourier Transform (STFT) [26] and Hilbert-Huang Transform [27]. Signal processing methods based on High-Order Transforms (HOTs) are also used in PMSM stator winding fault diagnostics. HOTs that have been applied in diagnostics are bispectrum [28,29], MUltiple Signal Classification (MUSIC) [30] and Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) [31]. In addition to the processing of the stator phase current signal, attempts have also been made to use the symmetrical components of the stator current for stator winding fault detection of induction motors [32] and PMSMs [33].
Apart from the extraction of fault symptoms from the signal carrying diagnostic information, it is extremely important to develop an algorithm that infers the condition of the motor and classifies the degree of damage. In recent years, this function has been entrusted more and more often to fault classifiers that are based on Machine Learning (ML) algorithms. These algorithms are used in knowledge-based approaches, and they are constantly being improved. Therefore, this seems to be a promising research direction in the field of fault diagnostics [34].
ML has become a very popular technique and is an inherent part of the Artificial Intelligence (AI) field. Its subcategories include classic ML algorithms, such as Decision Tree (DT), Support Vector Machine (SVM) and K-Nearest Neighbors (KNN), as well as algorithms whose operation is inspired by the operating principle of the human brain: Artificial Neural Networks (ANNs) and Deep Neural Networks (DNNs).
The usage of the above-mentioned methods can minimize human participation in fault diagnosis and help in automating this process. Therefore, the usage of selected ML-based classifiers, shallow and deep neural networks, has been verified to detect various types of electric motor faults [10,[35][36][37][38][39][40][41][42][43][44][45]. For electric motor faults other than mechanical failures, there are still very few scientific papers in which the usage of simple machine learning algorithms to detect PMSM stator winding faults is presented, especially ones taking into account the analysis of the impact of the key parameter selection of fault classifiers on their effectiveness.
It is also important that due to the increasing requirements for the reliability of drive systems, classical diagnostic methods are not sufficient. In order to meet these requirements, nowadays, it is recommended to use intelligent diagnostic methods. An extensive review of AI-based fault diagnostic methods for PMSMs is presented in [46]. The authors discussed methods that use artificial knowledge technology such as neural networks, expert systems and fuzzy logic to realize complex motor fault detection and condition monitoring. Moreover, the idea of Industry 4.0, the popularity of which has increased rapidly in recent years, is also closely connected with the condition monitoring of drives systems or even whole industrial processes. More and more often, the solutions ensuring the wireless transmission of information about machine conditions and other promising smart approaches are being proposed [47][48][49], as well as those that demonstrate an advanced embedded online monitoring algorithm [50].
The main goal of this article is inter-turn short-circuit detection and classification in PMSM stator windings using the spectral analysis of symmetrical current components to extract the fault symptoms and a simple ML-based classifier (KNN). Furthermore, the impact of the key parameters of this classifier on the effectiveness of stator winding fault detection and classification during off-line and on-line verification is presented and discussed in detail. The efficiency of the KNN algorithm to detect various faults of induction motors has been proven in recent years among others in [34,[51][52][53]. Nonetheless, there is a visible gap in current research with regard to the usage of simple AI-based algorithms such as KNN to PMSM stator winding fault detection and classification. In particular, there is a lack of solutions that allow for the detection of this type of fault at a very early stage, with just one shorted turn in the stator winding coil. Widely discussed in the diagnostic literature, artificial neural networks require a long training time, while there are relatively few solutions guaranteeing both a short training time and effective classification with a resolution to one turn.
The novelty of the solution presented in this paper results from:
• A detailed examination of the impact of key parameter (hyper-parameter) changes of the tested classifier on its effectiveness, and a proposal of the best solution;
• The proposal of a solution that allows the detection of failure at a very early stage, with one shorted turn in a stator winding and under various motor operating conditions.
The article is divided into seven sections. After this introduction, Section 2 discusses the proposed KNN-based fault classifiers. Successively, the extraction of the stator fault features using the spectral analysis of stator current symmetrical components is presented. In Section 4, the test stand and methodology of the experimental research are presented. Next, in Section 5, the training process of the proposed fault classifier is discussed. In Section 6, the experimental verification of its effectiveness during off-line and on-line tests is presented. Final conclusions from the conducted research are discussed in Section 7.
K-Nearest Neighbors
The KNN algorithm is one of the most fundamental, simple and effective machine learning algorithms used for data classification [54,55]. To classify unknown data represented by the feature vector as a point in the feature space, the KNN calculates the distance between the new point and points that were used in the training process-the training data set. Then, this classifier assigns the point to the class among its K-nearest neighbors, where K is a pre-determined integer value [56,57].
This concept is shown in Figure 2. The new data point is represented as *. If K is equal to 3, then there are two neighbors in Class A and one in Class B, hence this new data point must belong to Class A. However, if K = 5, two points in the neighborhood are in Class A, and three are in Class B, so the new data point will be classified as Class B. It follows that the choice of the value of K has a big impact on the accuracy of the trained model [58]. There is no specific way to determine the best K value, so it is necessary to try different values to find the best one.
Various distance metrics for calculating the distance between adjacent points are presented in the literature [56]. In this work, apart from the impact of the number of K closest neighbors on the accuracy of the classifier, the impact of different distance metrics is also verified.
Let A and B be feature vectors: A = (x1, x2, . . . , xn) and B = (y1, y2, . . . , yn), where n is the dimensionality of the feature space [48]. The most common functions used to calculate the distance are the Euclidean, Minkowski, Mahalanobis and Correlation metrics. These distance metrics are expressed by Equations (1)-(4), respectively. The most popular distance metric is the Minkowski distance [59]:

$$d_E(A,B)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2} \qquad (1)$$

$$d_M(A,B)=\left(\sum_{i=1}^{n}|x_i-y_i|^r\right)^{1/r} \qquad (2)$$

$$d_{Mah}(A,B)=\sqrt{\sum_{i=1}^{n}\frac{(x_i-y_i)^2}{\sigma_i^2}} \qquad (3)$$

$$d_C(A,B)=1-\frac{\sum_{i=1}^{n}(x_i-\overline{x})(y_i-\overline{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\overline{x})^2}\sqrt{\sum_{i=1}^{n}(y_i-\overline{y})^2}} \qquad (4)$$

where: xi, yi are the elements of the A and B feature vectors, respectively; n is the feature space dimension; r is the order of the Minkowski distance metric; σi is the standard deviation of the xi and yi over the data set; x̄, ȳ are the mean values of the xi and yi elements (for i = 1 to i = n) of the A and B feature vectors, respectively.
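All four metrics of Equations (1)-(4) are available in SciPy, so the distance computations are easy to reproduce; a short sketch with made-up feature vectors follows. Note that SciPy's mahalanobis expects the inverse covariance matrix, i.e., the full-covariance generalization of Equation (3).

```python
import numpy as np
from scipy.spatial.distance import euclidean, minkowski, mahalanobis, correlation

# Two illustrative feature vectors
A = np.array([0.12, 0.05, 0.33, 0.80, 0.41])
B = np.array([0.10, 0.04, 0.35, 0.75, 0.44])

print("Euclidean  :", euclidean(A, B))        # Eq. (1)
print("Minkowski  :", minkowski(A, B, p=3))   # Eq. (2), order r = 3

# SciPy's Mahalanobis distance takes the inverse covariance matrix VI,
# estimated here from a made-up training set
train = np.random.default_rng(2).normal(size=(100, 5))
VI = np.linalg.inv(np.cov(train, rowvar=False))
print("Mahalanobis:", mahalanobis(A, B, VI))  # full-covariance form of Eq. (3)

print("Correlation:", correlation(A, B))      # Eq. (4)
```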
Algorithm 1, presented below, defines the basic steps of the KNN classifier in detail [60].

Algorithm 1 The basic KNN algorithm
Data: D = {(Xi, ci), for i = 1 to n}, where Xi is an observation that belongs to class ci, n is the number of elements in the data set and m is the number of features in the input vector
Data: Z = (z1, z2, . . . , zm), new data to be classified
Result: the class to which the new input data Z belongs
Calculate the distances {di = d(Xi, Z), for i = 1 to n};
Sort the distances {di, for i = 1 to n} in ascending order;
Get the first K cases closest to Z (with the shortest distances), DKZ;
class ← the most frequent class in DKZ

However, despite its many advantages, this algorithm has some disadvantages:
• The KNN algorithm is generally not recommended for analyzing very large data sets;
• To obtain the proper and effective operation of this algorithm, it is necessary to choose the optimal number of K, which involves the need to test the algorithm several times during the training process;
• It can be challenging to apply the KNN algorithm to high-dimensional data (a high number of features).
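A direct transcription of Algorithm 1 into Python takes only a few lines. The sketch below uses the Euclidean metric of Equation (1) and majority voting among the K nearest training points; the toy data set is illustrative.

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, z, k=3):
    """Basic KNN following Algorithm 1: distances -> sort -> vote among K."""
    d = np.linalg.norm(X_train - z, axis=1)  # Euclidean distances d_i, Eq. (1)
    nearest = np.argsort(d)[:k]              # first K cases closest to z
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy training set: two classes in a 2D feature space
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
              [0.80, 0.90], [0.90, 0.80], [0.85, 0.95]])
y = np.array(["A", "A", "A", "B", "B", "B"])

print(knn_classify(X, y, np.array([0.2, 0.2]), k=3))  # -> "A"
print(knn_classify(X, y, np.array([0.9, 0.9]), k=3))  # -> "B"
```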
Spectral Analysis of Symmetrical Components of Stator Phase Currents
The effectiveness of fault classifiers strongly depends on the selected fault features. Therefore, it is essential to select those that are most susceptible to damage. Symptoms of inter-turn short circuits that allow for their detection at an early stage are still being searched for. In this paper, the spectral analysis of the positive and negative sequence symmetrical components of the stator phase currents is proposed to extract inter-turn short-circuit symptoms.
A negative sequence component value of the phase current is a significant indicator of unbalance in the motor phases. This unbalance may be caused by short circuits in the stator winding [7]. The zero, positive and negative sequence components of the phase current can be calculated as follows:

$$\begin{bmatrix} I_0 \\ I_1 \\ I_2 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 1 & 1 & 1 \\ 1 & a & a^2 \\ 1 & a^2 & a \end{bmatrix}\begin{bmatrix} I_{sA} \\ I_{sB} \\ I_{sC} \end{bmatrix} \qquad (5)$$

where: I0, I1, I2 are the zero, positive and negative stator phase current components in the steady state, respectively; IsA, IsB, IsC are the Phase A, B and C stator currents, respectively; and:

$$a = e^{j\frac{2\pi}{3}} \qquad (6)$$
In the case of three-phase PMSMs, the I0 component does not exist; therefore, it is necessary to calculate only the positive and negative symmetrical current components.
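Equation (5) can be evaluated directly on the phase current phasors; a minimal sketch with illustrative, slightly unbalanced phasors is shown below.

```python
import numpy as np

a = np.exp(1j * 2 * np.pi / 3)  # rotation operator of Eq. (6)

# Fortescue transformation matrix of Eq. (5)
F = (1 / 3) * np.array([[1, 1,    1],
                        [1, a,    a**2],
                        [1, a**2, a]])

# Illustrative, slightly unbalanced stator current phasors [A]
I_abc = np.array([5.00 * np.exp(1j * 0.0),
                  4.85 * np.exp(-1j * 2 * np.pi / 3),
                  5.10 * np.exp(1j * 2 * np.pi / 3)])

I0, I1, I2 = F @ I_abc
print(f"|I0| = {abs(I0):.3f} A, |I1| = {abs(I1):.3f} A, |I2| = {abs(I2):.3f} A")
```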
The matrix Equation (5) concerns the sinusoidal signals of the stator phase currents in the steady state. However, supplying motors from Voltage Source Inverters (VSIs) introduces a number of additional harmonics that cause the distortion of voltages and currents. In such cases, in order to use the classic symmetrical component calculation method, it is necessary to filter out the disturbing harmonics or extract the fundamental components of the supply voltage (fs). In this paper, the second approach is proposed. This approach is based on the calculation of the instantaneous values of the stator current symmetrical components using the 90° shift operator in the time domain, according to [61]:

$$i_1 = \frac{1}{3}\left[i_{sA} - \frac{1}{2}(i_{sB}+i_{sC}) + \frac{\sqrt{3}}{2}\,S_{90}(i_{sB}-i_{sC})\right], \qquad i_2 = \frac{1}{3}\left[i_{sA} - \frac{1}{2}(i_{sB}+i_{sC}) - \frac{\sqrt{3}}{2}\,S_{90}(i_{sB}-i_{sC})\right] \qquad (7)$$

where: i1, i2 are the instantaneous values of the positive and negative stator current components, respectively; isA, isB, isC are the Phase A, B and C instantaneous values of the stator current, respectively; S90 is the operator of a phase shift by an angle of 90° in the time domain.
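In a sampled-data implementation, one possible realization of the S90 operator is the Hilbert transform, since the imaginary part of the analytic signal is the input shifted by 90°. The sketch below applies this idea to Equation (7); the simulated three-phase currents and the choice of the Hilbert transform as the shift operator are illustrative assumptions, not necessarily the implementation used in [61].

```python
import numpy as np
from scipy.signal import hilbert

f_samp, f_s = 8192, 100.0          # sampling rate and supply frequency [Hz]
t = np.arange(0, 1.0, 1 / f_samp)

# Simulated stator currents; the small unbalance stands in for a winding fault
i_A = 5.00 * np.cos(2 * np.pi * f_s * t)
i_B = 4.85 * np.cos(2 * np.pi * f_s * t - 2 * np.pi / 3)
i_C = 5.10 * np.cos(2 * np.pi * f_s * t + 2 * np.pi / 3)

def S90(x):
    # +90 degree phase lead; the imaginary part of the analytic signal is a
    # 90 degree lag, so its negation gives the lead (check the sign against
    # the adopted definition of S90).
    return -np.imag(hilbert(x))

# Instantaneous symmetrical components, Eq. (7)
common = i_A - 0.5 * (i_B + i_C)
quad = (np.sqrt(3) / 2) * S90(i_B - i_C)
i1 = (common + quad) / 3
i2 = (common - quad) / 3
```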
In the next step, spectral analysis of the instantaneous values of the symmetrical current components calculated according to Equation (7) is performed. In Figure 3, the spectra of the stator phase current's positive sequence component (Figure 3a) and negative sequence component (Figure 3b) are shown for the PMSM, the parameters of which are grouped in Appendix A. These spectra concern the operation of the motor at fs = 100 Hz (n = nN = 1500 rpm), with nominal load torque (TL = TN), for an undamaged winding and for a different number of shorted turns (Nsh).
In these spectra, an increase in the amplitude of the fs fundamental frequency components after an inter-turn short circuit in the stator winding can be observed. It is clearly visible that the increase in the amplitude of this component due to the inter-turn short circuit is greater for the negative sequence component analysis. In order to clearly define the symbols and avoid misunderstanding, the fs frequency component in the i1 spectrum will hereinafter be denoted as fsi1, and in the i2 spectrum as fsi2.
The effect of the number of shorted turns Nsh and the load torque TL on the amplitude value of the fsi1 component is shown in Figure 4a, whereas the dependence on the supply voltage frequency value fs is illustrated in Figure 4b. It can be concluded from the presented results that the load torque changes have an impact on the value of the fsi1 component amplitude, but the frequency of the supply voltage does not affect these values.
Figure 5a,b shows the effect of the number of shorted turns Nsh, the load torque TL and the supply voltage frequency fs on the fsi2 component amplitude. The presented results show that TL does not have a significant impact on the value of the fsi2 component amplitude. Moreover, this value increases as a result of inter-turn short circuits over a wide range of the supply frequency fs (rotational speeds n). Therefore, it can be concluded that the fsi2 component, because of its changes due to damage to the stator winding, is a very good diagnostic indicator. The greatest sensitivity to the increasing number of shorted turns Nsh occurs when the motor is operating at high rotational speeds, close to the rated value. Based on these observations, it was decided to use the values of the amplitudes of the fsi1 and fsi2 components as input features of the KNN model.
It has to be noted that despite the insensitivity of the fsi2 amplitude to changes in the TL value, its increase as a result of an inter-turn short circuit drops with the decreasing frequency of the supply voltage, which is a minor limitation. However, even at a lower speed (supply frequency), the changes due to the stator winding damage are still visible.
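The input features Afsi1 and Afsi2 are then simply the magnitudes of the fs bins in the spectra of i1 and i2. Continuing the previous sketch (with its simulated signals), the extraction could look as follows.

```python
import numpy as np

def fs_component_amplitude(signal, f_samp, f_s):
    """Amplitude of the f_s frequency component from the one-sided FFT."""
    n = len(signal)
    spectrum = 2 * np.abs(np.fft.rfft(signal)) / n   # one-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1 / f_samp)
    return spectrum[np.argmin(np.abs(freqs - f_s))]  # nearest bin to f_s

# i1, i2, f_samp and f_s as computed in the previous sketch
Af_si1 = fs_component_amplitude(i1, f_samp, f_s)
Af_si2 = fs_component_amplitude(i2, f_samp, f_s)
X_input = (Af_si1, Af_si2)   # the two-element KNN input vector
print(f"Af_si1 = {Af_si1:.3f} A, Af_si2 = {Af_si2:.3f} A")
```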
Experimental Setup
The experimental verification of the proposed KNN-based stator winding fault classifier was carried out on a specially designed laboratory setup with PMSM with nominal power equal to 2.5 kW, operating in a closed-loop structure and powered by a VSI. The loading machine was a second PMSM with nominal power equal to 4.7 kW. The laboratory stand is shown in Figure 6. The main parameters of the tested PMSM are grouped in Appendix A.
The construction of the tested PMSM was specially prepared to allow the physical modeling of inter-turn short circuits of a selected number of turns in a phase. Each of the three phases of the stator winding consists of two coils, 125 turns each. An illustrative schema of the tested PMSM stator winding is shown in Figure 7a. One of the two winding coils in each of the three phases was modified to provide controlled short circuits. This modification consisted of leading out a group of coils to the terminal board. The diagram of the terminal board with the derived phases of the PMSM stator winding is shown in Figure 7b. During the experimental verification, a maximum of three turns in Phase A were short-circuited, which accounted for 1.2% of all turns in one phase. Direct short circuits were performed by connecting the taps on the terminal board with a wire, without limiting the current in the short-circuit loop with an additional resistor.
The block diagram of the experimental setup is shown in Figure 8. The tested PMSM was fed from an industrial VSI by Lenze. The diagnostic signals used, the stator phase currents, were measured with LEM LA 25-NP transducers. The output signals from the transducers were passed to the data acquisition measurement card (DAQ NI PXI-4492) by National Instruments (NI, Austin, TX, USA) and then pre-processed in the LabVIEW programming environment. The sampling frequency of the phase current measurement was equal to 8192 Hz. The DAQ card was placed in the NI PXI 1082 industrial computer. The control of the tested motor was performed in Lenze Engineer software, whereas the load torque was set in VeriStand.
The described experimental setup was used to collect the measurement data, which were used for training the proposed KNN classifier and its off-line verification, but also for further on-line tests. The experimental studies were carried out for various load torque values in the range of (0 ÷ 1)TN with a 0.2TN step and for various rotational speeds (supply voltage frequencies of (60 ÷ 100) Hz). This allowed evaluating the influence of the motor operating conditions on the effectiveness of the fault classifier.
Training Process of the Proposed KNN Fault Classifier
The effectiveness of the classifier model depends on the appropriate selection of its input vector elements, so they have to be carefully selected. This article proposes a spectral analysis of the symmetrical components of the stator phase current for inter-turn short-circuit symptom extraction, which allowed the selection of a two-element input vector consisting of the f_si1 and f_si2 component amplitudes, X_i = (A_fsi1, A_fsi2). In ML, the input data set is typically split into training data and test data. In the training process of the proposed KNN fault classifier model, 240 input vectors corresponding to different stator winding states and operating conditions were used. These conditions are grouped in Table 1. The distribution of the training data points is shown in Figure 9. Because the classifier input vector has two elements, it can be represented in the Cartesian coordinate system (two-dimensional space). Based on the analysis of the scatter chart (Figure 9), it can be concluded that these input features are promising fault symptoms, as there is a clear separation of the classes. These classes correspond to the stator winding states of the tested PMSM.
To choose the best KNN model, the accuracy of the classifiers was verified for four different distance metrics, described by Equations (1)-(4), and for different numbers of K nearest neighbors. The impact of these parameters is very often overlooked in papers on the application of KNN, especially with regard to electric motor fault diagnosis. The classifier's accuracy for the different configurations is shown in Figure 10 and grouped in Table 2. Based on these values, it is concluded that 100% accuracy of the KNN model on the training data set was achieved with the Euclidean, Minkowski and Mahalanobis distance metrics, both for K = 3 and K = 5. The verification of the influence of the K value on the accuracy of the model led to the conclusion that a value that is too large may cause a significant decrease in the accuracy of the classifier: a growing number of nearest neighbors oversmooths the decision boundary and increases bias, and it also significantly increases the computational complexity of the algorithm. Conversely, a K that is too low makes the classifier sensitive to noise and causes misclassifications [62]; in the analyzed case, this degradation is visible for K = 1 and K = 2. Furthermore, setting K to an odd value helps to avoid ties in the majority vote and the resulting invalid classifications.
Nonetheless, the inverse trend is characteristic of KNN with the Correlation distance metric. In this case, for low values of the K parameter the classifier accuracy is lowest, and it gradually increases with increasing K. This is typical for a metric that takes into account the correlation between two points.
To choose the best configuration from those characterized by 100% accuracy, the times needed to train each type are compared in Figure 11. The fastest training time was obtained for KNN with the Euclidean distance metric and K = 3. Based on this detailed analysis, the authors decided to carry out the off-line and on-line experimental verification tests for this classifier.
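A hedged sketch of this metric/K comparison is given below, using scikit-learn in place of the authors' MATLAB implementation. The random training data stand in for the 240 measured vectors of Table 1, and the Minkowski order p = 3 is an assumption, since the exact metric definitions are those of the paper's Equations (1)-(4).

```python
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: 240 two-element input vectors (A_fsi1, A_fsi2) and
# stator winding state labels (0-3 shorted turns).
rng = np.random.default_rng(0)
X_train = rng.random((240, 2))
y_train = rng.integers(0, 4, size=240)

configs = [
    ("euclidean",   {}),
    ("minkowski",   {"p": 3}),  # assumed order p
    ("mahalanobis", {"metric_params":
                     {"VI": np.linalg.inv(np.cov(X_train.T))}}),
    ("correlation", {}),
]

for name, extra in configs:
    for k in (1, 2, 3, 5):
        knn = KNeighborsClassifier(n_neighbors=k, metric=name,
                                   algorithm="brute", **extra)
        t0 = time.perf_counter()
        knn.fit(X_train, y_train)
        train_time = time.perf_counter() - t0
        acc = knn.score(X_train, y_train)  # accuracy on the training set
        print(f"{name:12s} K={k}: accuracy={acc:.3f}, "
              f"training time={train_time:.4f} s")
```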
The Off-Line and On-Line Verification of the KNN-Based Fault Classifier
In the process of verifying the classifier's operation during the off-line tests, a set of test data was used. This set consisted of 120 input vectors that were not involved in the KNN training process and corresponded to different stator winding states (N_sh) and operating conditions (T_L). These conditions are grouped in Table 3.
In order to assess the effectiveness of the proposed stator winding fault classifier, the C_EFF index was introduced, which determines the ratio of correctly classified stator winding states to the number of input vectors (the sum of the correct classifications and misclassifications). This index is defined by the following equation:
C_EFF = (Y_C / (Y_C + Y_M)) · 100%
where Y_C is the number of correct stator winding state classifications performed by the proposed KNN model, and Y_M is the number of stator winding state misclassifications performed by the proposed KNN model.
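In code, the index is a one-line ratio; the sketch below simply restates the equation above.

```python
def c_eff(y_c: int, y_m: int) -> float:
    """C_EFF [%]: share of correct stator-winding-state classifications
    among all classified input vectors."""
    return 100.0 * y_c / (y_c + y_m)

print(c_eff(119, 1))  # e.g., 119 of 120 test vectors correct -> ~99.17%
```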
The KNN classifier's response to the test data set is shown in Figure 12. The C_EFF value of this classifier for the vectors that were not used in the learning process is equal to 100%. This means that the classifier's response was correct for each of the investigated PMSM stator winding states, including a single shorted turn in a coil at a very early stage of the failure.
Off-line verification tests showed that this classifier provides high efficiency in the detection and classification of inter-turn short circuits. For this reason, it was decided to continue experimental tests during the on-line operation of the drive system.
The flowchart of the proposed on-line fault classification algorithm is shown in Figure 13. The diagnostic application responsible for the data acquisition and signal pre-processing (calculation and spectral analysis of i_1 and i_2) was developed in the LabVIEW programming environment. The script calling the pre-trained KNN stator winding state classifier model was prepared in MATLAB.
The first on-line verification scenario (Test 1) was carried out for motor operation under the conditions for which the KNN classifier model was trained, i.e., T_L = (0 ÷ 1)T_N with a 0.2T_N step and f_s = f_sN = 100 Hz. In this scenario, one, two and three turns were short circuited for several seconds each; this is referred to hereinafter as steady short circuits. The efficiency of the classifier C_EFF for this condition was as high as 99.4%. The classifier responses and the actual states of the stator winding during this scenario are shown in Figure 14.
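A minimal sketch of one pass of the on-line algorithm of Figure 13 is given below; acquire_window() is a placeholder for the LabVIEW data acquisition, input_vector() is the hypothetical helper from the earlier sketch, and knn is a pre-trained classifier, so this only mirrors the processing flow, not the actual diagnostic application.

```python
import numpy as np

def acquire_window(n_samples=8192, fs=8192.0, f_supply=100.0):
    """Placeholder for one second of three-phase currents from the DAQ."""
    t = np.arange(n_samples) / fs
    ia = np.sin(2 * np.pi * f_supply * t)
    ib = np.sin(2 * np.pi * f_supply * t - 2 * np.pi / 3)
    ic = np.sin(2 * np.pi * f_supply * t + 2 * np.pi / 3)
    return ia, ib, ic

def online_step(knn, f_si1, f_si2):
    """One classification step: acquire, extract features, classify."""
    ia, ib, ic = acquire_window()
    x = input_vector(ia, ib, ic, f_si1, f_si2)    # two-element vector
    return int(knn.predict(x.reshape(1, -1))[0])  # predicted N_sh (0-3)
```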
In the next scenario (Test 2), the operation of the proposed KNN classifier was verified during the momentary (1 ÷ 2 s) short circuiting of one, two and three turns, respectively. This test was also carried out for different load torques T_L = (0 ÷ 1)T_N and f_s = f_sN = 100 Hz. In this case, C_EFF was equal to 98.6%, which confirmed the satisfying properties of this solution (Figure 15).
Finally, the last test (Test 3) was carried out to verify the classifier operation for supply frequencies (rotational speeds) different from the rated value (f_sN). Before the test, the training data set was extended with vectors corresponding to motor operation at frequencies lower than the rated one, f_s = {90 Hz; 80 Hz; 70 Hz; 60 Hz}. With this set, the classifier was re-trained without changing its parameters, and an on-line verification test was performed. The classifier responses and the actual states of the stator winding for such motor operating conditions are shown in Figure 16. As can be seen, the supply voltage frequency was reduced with a step of 10 Hz down to the value of 60 Hz. In this test, the KNN correctly recognized the stator winding state in 99.5% of all cases.
The confusion matrices for each of the test scenarios are shown in Figure 17. The analysis of these matrices shows that the most misclassifications (7.4%) were found in the case of distinguishing between an undamaged PMSM stator winding and one shorted turn in the coil during Test 2. Nonetheless, it should be emphasized that in each of the discussed cases the effectiveness of the winding state classification is very high, especially since it was achieved on a real drive system during on-line operation, where disturbances and changes of motor parameters such as temperature also have a negative influence.
Moreover, in order to summarize the on-line tests and clarify the scenarios, they are described in Table 4, while the classifier's key parameters, properties and C_EFF values are grouped in Table 5. The analysis of Table 5 leads to the conclusion that the proposed construction of the classifier input vector and its parameters allow achieving very good efficiency in the detection of inter-turn short circuits with a resolution of one turn at an early stage of the damage.
Table 4. Details of the test scenarios.
Test 1 - One, two and three turns were short circuited for several seconds (steady short circuits). Operating conditions: f_S = 100 Hz (f_sN); N_sh = 0; 1; 2; 3; T_L variable, (0 ÷ 1)T_N with a 0.2T_N step.
Test 2 - Momentary (1 ÷ 2 s) short circuiting of one, two and three turns. Operating conditions: f_S = 100 Hz (f_sN); N_sh = 0; 1; 2; 3; T_L variable, (0 ÷ 1)T_N with a 0.2T_N step.
Test 3 - One, two and three turns were short circuited for several seconds (steady short circuits). Operating conditions: f_S = (60 ÷ 100) Hz with a 10 Hz step (variable); N_sh = 0; 1; 2; 3; T_L variable.
The analysis of the results of all three tests showed that in most cases, when a misclassification occurs, it occurs for the condition with one shorted turn in the PMSM stator winding coil. However, these misclassifications do not occur often, so they cannot be considered a significant limitation of the proposed method.
Conclusions
This paper focuses on two important elements of electric motor diagnosis: the extraction of failure symptoms and fault classification. For the first issue, the spectral analysis of the negative and positive symmetrical components was proposed. For the detection and classification of inter-turn short circuits in the PMSM stator winding, a simple machine learning algorithm (KNN) was successfully implemented. The presented experimental results confirm the effectiveness of this solution, even during on-line operation of the drive system under different motor operating conditions. The influence of the key parameters of the KNN classifier on its effectiveness, an aspect that has not been analyzed in the diagnostic literature, was discussed and compared in detail. To evaluate the classifier's effectiveness, the C_EFF index was introduced; its average value during the on-line tests was equal to 99.1%. Moreover, the proposed classifier achieves very good efficiency in inter-turn short-circuit detection with a resolution of one turn at a very early stage of the winding damage.
The original virtual diagnostic tool developed in the LabVIEW and MATLAB environments performed the functions of data acquisition, diagnostic signal pre-processing, extraction of stator winding failure-sensitive symptoms, and fault detection and classification. In addition to very good fault classification effectiveness, the training time of only 1.024 s should be highlighted as an important advantage of the proposed solution. Compared to artificial neural networks, especially those with a deep structure, which can take up to several hours to train, this is a clear advantage. Due to the low computational complexity of the KNN classifier, the algorithm described in this paper will be easy to implement even on a low-cost microcontroller.
Author Contributions: All of the authors contributed equally to the concept of the paper, and proposed the methodology; investigation and formal analyses, P.P. and M.W.; software and data curation, P.P.; measurements, P.P. and M.W.; proposed the paper organization, P.P. and M.W.; validated the obtained results, M.W. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Table A1. Rated parameters of the tested PMSM.
"Engineering"
] |
Cross-document Event Coreference Resolution based on Cross-media Features
In this paper we focus on a new problem of event coreference resolution across television news videos. Based on the observation that the contents from multiple data modalities are complementary, we develop a novel approach to jointly encode effective features from both closed captions and video key frames. Experiment results demonstrate that visual features provided a 7.2% absolute F-score gain over state-of-the-art text-based event extraction and coreference resolution.
Introduction
TV news is the medium that broadcasts events, stories and other information via television. The broadcast is conducted in programs called "newscasts". Typically, newscasts require one or several anchors who introduce stories and coordinate transitions among topics, reporters or journalists who present events in the field, and scenes captured by cameramen. Similar to newspapers, the same stories are often reported by multiple newscast agents. Moreover, in order to increase the impact on the audience, the same stories and events are reported multiple times. The TV audience passively receives redundant information and often has difficulty in obtaining a clear and useful digest of ongoing events. These properties lead to the need for automatic methods to cluster information and remove redundancy. We propose a new research problem of event coreference resolution across multiple news videos.
To tackle this problem, a good starting point is processing the Closed Captions (CC) accompanying videos in newscasts. The CC is either generated by automatic speech recognition (ASR) systems or transcribed by a human stenotype operator who inputs phonetics that are instantly and automatically translated into texts, from which events can be extracted.
Figure 1: Similar visual contents improve detection of a coreferential event pair which has a low text-based confidence score. Closed Captions: "It 's not clear when it was killed."; "Jordan just executed two ISIS prisoners, direct retaliation for the capture of the killing Jordanian pilot."
Some previous event coreference resolution work exists, such as (Chen and Ji, 2009b; Chen et al., 2009; Lee et al., 2012; Bejan and Harabagiu, 2010). However, it only focused on formally written newswire articles and utilized textual features. Such approaches do not perform well on CC due to (1) the propagated errors from upstream components (e.g., automatic speech/stenotype recognition and event extraction) and (2) the incompleteness of information. Different from written news, newscasts are often limited in time due to fixed TV program schedules; thus, anchors and journalists are trained and expected to organize reports that are comprehensively informative, with complementary visual and CC descriptions, within a short time. These two sides have minimal overlapping information while being inter-dependent. For example, anchors and reporters introduce the background story, which is not presented in the videos, and thus the events extracted from CC often lack information about participants.
For example, as shown in Figure 1, these two Conflict.Attack event mentions are coreferential. However, in the first event mention, a mistake in the Closed Captions ("he was killed" → "it was killed") makes event extraction and text-based coreference systems unable to detect and link "it" to the entity "Jordanian pilot". Fortunately, videos often illustrate brief descriptions with vivid visual contents. Moreover, diverse anchors, reporters and TV channels tend to use similar or identical video contents to describe the same story, even though they usually use different words and phrases. Therefore, the challenges in coreference resolution methods based on text information can be addressed by incorporating visual similarity. In this example, the visual similarity between the corresponding video frames is high because both of them show the scene of the Jordanian pilot.
Similar work such as (Kong et al., 2014), (Ramanathan et al., 2014), (Motwani and Mooney, 2012) and (Ramanathan et al., 2013) has explored methods of linking visual materials with texts. However, these methods mainly focus on connecting image concepts with entities in text mentions, and some of them do not clearly distinguish entities and events in the documents, since the definition of visual concepts often requires both. Moreover, the aforementioned work mainly focuses on improving visual content recognition by introducing text features, while our work takes the opposite route: it takes advantage of visual information to improve event coreference resolution.
In this paper, we propose to jointly incorporate features from both speech (textual) and video (visual) channels for the first time. We also build a newscast crawling system that can automatically accumulate video records and transcribe closed captions. With the crawler, we created a benchmark dataset that is fully annotated with cross-document coreferential events. The dataset can be found at http://www.ee.columbia.edu/dvmm/newDownloads.htm.
Approach
Event Extraction
Given unstructured transcribed CC, we extract entities and events and present them in structured forms. We follow the terminologies used in ACE (Automatic Content Extraction) (NIST, 2005):
• Entity: an object or set of objects in the world, such as a person, organization or facility.
• Entity mention: words or phrases in the texts that mention an entity.
• Event: a specific occurrence involving participants.
• Event trigger: the word that most clearly expresses an event's occurrence.
• Event argument: an entity, temporal expression or value that has a certain role (e.g., Time-Within, Place) in an event.
• Event mention: a sentence (or a text span extent) that mentions an event, including a distinct trigger and the arguments involved.
Text based Event Coreference Resolution
Coreferential events are defined as the same specific occurrence mentioned in different sentences, documents and transcript texts. Coreferential events should happen in the same place and within the same time period, and the entities involved and their roles should be identical. From the perspective of extracted events, each specific attribute and argument from those events should match. However, mentions for the same event may appear in forms of diverse words and phrases; and they do not always cover all arguments or attributes.
To tackle these challenges, we adopt a Maximum Entropy (MaxEnt) model as in (Chen and Ji, 2009b). We consider every pair of event mentions that share the same event type as a candidate and exploit features proposed in (Chen and Ji, 2009b; Chen et al., 2009). Note that the goal in (Chen and Ji, 2009b; Chen et al., 2009) was to resolve event coreference within the same document, whereas our scenario yields a cross-document/video transcript setting, so we remove some improper and invalid features. We also investigated the approaches by (Lee et al., 2012) and (Bejan and Harabagiu, 2010), but the confidence estimation results from these alternative methods are not reliable. Moreover, the input of event coreference consists of automatic results from event extraction instead of gold standard annotations, so the noise and errors significantly impact the coreference performance, especially for unsupervised approaches (Bejan and Harabagiu, 2010). Nevertheless, we still incorporate features from the aforementioned methods. Table 1 shows the features that constitute the input of the MaxEnt model.
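A MaxEnt model over mention pairs is equivalent to (multinomial) logistic regression, so the pairwise setup can be sketched with scikit-learn as below. The feature function is schematic; the real feature set follows Chen and Ji (2009b) and Table 1.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each candidate is a pair of event mentions sharing the same event type.
def pair_features(m1, m2):
    """Schematic stand-in for the features of Table 1."""
    return {
        "trigger_match": m1["trigger"] == m2["trigger"],
        "type_pair": f'{m1["type"]}|{m2["type"]}',
        "arg_overlap": len(set(m1["args"]) & set(m2["args"])),
    }

def train_maxent(pairs, labels):
    vec = DictVectorizer()
    X = vec.fit_transform(pair_features(a, b) for a, b in pairs)
    clf = LogisticRegression(max_iter=1000)  # MaxEnt classifier
    clf.fit(X, labels)
    return vec, clf

def coref_confidence(vec, clf, m1, m2):
    """P(coreferential) for one candidate pair, used later as W_ij."""
    X = vec.transform([pair_features(m1, m2)])
    return clf.predict_proba(X)[0, 1]
```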
Visual Similarity
Visual content provides useful cues complementary to those used in the text-based approach to event coreference resolution. For example, two coreferential events typically show similar or even duplicate scenes, objects and activities in the visual channel. Coherence of such visual content has been used in grouping multiple video shots into the same video story, but it has not been used for event coreference resolution. Recent work in computer vision has demonstrated tremendous progress in large-scale visual content recognition. In this work, we adopt the state-of-the-art techniques (Krizhevsky et al., 2012) and (Simonyan and Zisserman, 2014), which train robust convolutional neural networks (CNN) over millions of web images, to detect the 20,000 semantic categories defined in ImageNet (Deng et al., 2009) from each image. The features from the 2nd-to-last layer of such a deep network can be considered a high-level visual representation that can discriminate various semantic classes (scenes, objects, activities). They have been found effective for computing visual similarity between images, either by directly computing the L2 distance of such features or through further metric learning.
To compute the similarity between videos associated with two candidate event mentions, we sample multiple frames from each video and aggregate the similarity scores of the few most similar image pairs between the videos. Let {f_1^i, f_2^i, ..., f_l^i} be the key frames sampled from video V_i and {f_1^j, f_2^j, ..., f_l^j} be the key frames sampled from video V_j. All frames are resized to a fixed resolution of 256 × 256 and fed into our pretrained CNN model. We get the high-level visual representation F_m = FC7(f_m) for each frame f_m from the output of the 2nd-to-last fully connected layer (FC7) of the CNN model. F_m is a 4096-dimensional vector. The visual distance of frames f_m and f_n is defined by the L2 distance, d(f_m, f_n) = ||F_m - F_n||_2. The distance of the video pair (V_i, V_j) is computed by aggregating d(f_m, f_n) over the top k most similar frame pairs (f_m, f_n); in our experiment, we use k = 3. Such aggregation among the top matches is intended to capture similarity between videos that share only partially overlapping content.
Each news video story typically starts with an introduction by an anchor person, followed by news footage showing the visual scenes or activities of the event. Therefore, when computing visual similarity, it is important to exclude the anchor shots and focus on the story-related clips. Anchor frame detection is a well-studied problem. In order to detect anchor frames automatically, a face detector is applied to all I-frames of a video, which yields the location and size of each detected face. After checking the temporal consistency of the detected faces within each shot, we get a set of candidate anchor faces. The detected face regions are further extended to regions of interest that may include hair and the upper body. All the candidate faces detected from the same video are clustered based on their HSV color histograms. It is reasonable to assume that the most frequent face cluster corresponds to the anchor faces. Once the anchor frames are detected, they are excluded, and only the non-anchor frames are used to compute the visual similarity between videos associated with event mentions.
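The video-level distance can be sketched as below; features() stands in for the pretrained CNN's FC7 output, and averaging the top-k pair distances is one natural reading of "aggregate", since the exact aggregation formula is not spelled out in the text.

```python
import numpy as np

def video_distance(frames_i, frames_j, features, k=3):
    """Distance between two videos: mean L2 distance over the k most
    similar (non-anchor) frame pairs, on FC7-style 4096-d features.
    `features` maps a frame to its feature vector (a placeholder for
    the pretrained CNN's FC7 layer)."""
    F_i = np.stack([features(f) for f in frames_i])
    F_j = np.stack([features(f) for f in frames_j])
    # pairwise L2 distances between all frame pairs of the two videos
    d = np.linalg.norm(F_i[:, None, :] - F_j[None, :, :], axis=-1)
    return np.sort(d.ravel())[:k].mean()  # k most similar pairs
```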
Joint Re-ranking
Using the visual distance calculated in Section 2.3, we rerank the confidence values produced in Section 2.2 by the text-based MaxEnt model. We adjust the confidence using an empirical equation (Equation 3), in which W_ij denotes the original coreference confidence between event mentions i and j, D_ij denotes the visual distance between the corresponding video frames where the event mentions were spoken, and α is a parameter used to adjust the impact of the visual distance. In the current implementation, we empirically set α as the average of the pairwise visual distances between the videos of all event coreference candidates.
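The exact form of Equation 3 is not reproduced in this text, so the sketch below is only an illustrative stand-in: it implements one plausible adjustment in which the confidence W_ij decays with the visual distance D_ij at a rate set by α, and should not be read as the authors' actual formula.

```python
import numpy as np

def rerank(W, D, alpha):
    """Illustrative only: assumed exponential decay of the text-based
    confidence W with visual distance D, scaled by alpha. The paper's
    Equation 3 may differ in form."""
    return W * np.exp(-D / alpha)

# alpha as in the paper: average pairwise visual distance over candidates
# alpha = D[np.triu_indices_from(D, k=1)].mean()
```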
Data and Setting
We establish a system that actively monitors over 100 U.S. major broadcast TV channels such as ABC, CNN and FOX, and has crawled newscasts from these channels for more than two years (Li et al., 2013a). With this crawler, we retrieve 100 videos and their corresponding transcribed CC on the topic of "ISIS" (Islamic State of Iraq and Syria). This system also temporally aligns the CC text with the transcribed text from automatic speech recognition, which provides accurate time alignment between the CC text and the video frames. As CC consists of capitalized letters, we apply the true-casing tool from Stanford CoreNLP (Manning et al., 2014) on the CC. Then we apply a state-of-the-art event extraction system (Li et al., 2013b) to extract event mentions from the CC. We asked two human annotators to investigate all event pairs and annotate coreferential pairs as the ground truth. The Kappa coefficient measuring inter-annotator agreement is 74.11%. In order to evaluate our system's performance, we rank the confidence scores of all event mention pairs and present the results as a Precision vs. Detection Depth curve. Finally, we find the video frames corresponding to the event mentions, remove the anchor frames and calculate the visual similarity between the videos. Our final dataset consists of 85 videos, 207 events and 848 event pairs, of which 47 pairs are considered coreferential. We adopt the MaxEnt-based coreference resolution system from (Chen and Ji, 2009b; Chen et al., 2009) as our baseline and use the ACE 2005 English Corpus as the training set for the model. A 5-fold cross-validation was conducted on the training set, and the average F-score is 56%. It is lower than the results from (Chen and Ji, 2009a) since we remove some features which are not available in the cross-document scenario.
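For reference, the inter-annotator agreement reported above can be computed as Cohen's kappa over the two annotators' binary judgments per event pair; the labels below are toy values, while the real annotation covers all 848 pairs.

```python
from sklearn.metrics import cohen_kappa_score

annotator_1 = [1, 0, 0, 1, 0, 1]   # toy coreference judgments
annotator_2 = [1, 0, 1, 1, 0, 1]
print(cohen_kappa_score(annotator_1, annotator_2))  # kappa in [-1, 1]
```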
Results
The peak F-score for the baseline system is 44.23%, while our cross-media method boosts it to 51.43%. Figure 2 shows the improvement after incorporating the visual information. We adopt the Wilcoxon signed-rank test to determine the significance between the pairs of precision scores at the same depth. The z-ratio is 3.22, which shows the improvement is significant.
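The significance test pairs the two systems' precision values at identical detection depths; a sketch with SciPy is below, using toy precision values (the paper reports z = 3.22).

```python
from scipy.stats import wilcoxon

# Paired precision values at the same detection depths (toy numbers).
precision_text = [0.80, 0.72, 0.65, 0.60, 0.55, 0.52, 0.50, 0.48]
precision_xmedia = [0.88, 0.80, 0.74, 0.68, 0.62, 0.58, 0.55, 0.52]
stat, p_value = wilcoxon(precision_text, precision_xmedia)
print(stat, p_value)
```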
For example, the event pair "So why hasn't U.S. air strikes targeted Kobani within the city limits" and "Our strikes continue alongside our partners." was mistakenly considered coreferential by the text features. In fact, the former "strikes" mentions the airstrike while the latter refers to the war or battle; therefore, they are not coreferential. The corresponding video shots demonstrate two different scenes: the former shows bombing, while the latter shows the president giving a speech about the strike. Thus the visual distance successfully corrected this error.
Error Analysis
However, from Figure 2 we can also notice that there are still some errors caused by the visual features. One major error type resides in negative pairs with both relatively high textual coreference confidence scores and relatively high visual similarity. From the text side, the event pair contains similar events, for example: "The Penn(tagon) says coalition air strikes in and around the Syrian city of Kobani have kill hundreds of ISIS fighters but more are streaming in even as the air campaign intensifies." and "Throughout the day, explosions from coalition air strikes sent plums of smoke towering into the sky.". They talk about two airstrikes during different time periods and are not coreferential, but the baseline system produces a high rank. Our current approach limits the image frames to those overlapping with the speech of an event mention, and in this error, both videos show a "battle" scene, yielding a small visual distance. The aforementioned assumption that anchors and journalists tend to use similar videos when describing the same events may thus introduce a risk of error for similar text event mentions paired with similar video shots. For such errors, one potential solution is to expand the video frame windows to capture more events and concepts from the videos. Expanding the detection range to include visual events in the temporal neighborhood can also help differentiate the events.
Discussion
A systematic way of choosing α in Equation 3 would be useful. One idea is to adapt the α value to different types of events; e.g., we expect some event types to be more visually oriented than others and would thus use a smaller α value for them.
We also notice the impact of the errors from the upstream event extraction system. The F-score of event trigger labeling is 65.3%, and that of event argument labeling is 45%. Missing arguments in events is a main problem; thus the performance on automatically extracted event mentions is significantly worse. About 20 more coreferential pairs could be detected if events and arguments were perfectly extracted.
Conclusions and Future Work
In this paper, we improved event coreference resolution on newscast speech by incorporating visual similarity. We also built a crawler that provides a benchmark dataset of videos with aligned closed captions. This system can also help create more datasets for research on video description generation. In the future, we will focus on improving event extraction from texts by introducing more fine-grained cross-media information such as object, concept and event detection results from videos. Moreover, joint detection of events from both sides is our ultimate goal; however, we still need to explore the mapping between events on the text and visual sides, and the automatic detection of a wide range of objects and events from the news video itself remains challenging.
Acknowledgement
This work was supported by the U.S. DARPA DEFT Program No. FA8750-13-2-0041, ARL NS-CTA No. W911NF-09-2-0053, NSF CAREER Award IIS-1523198, the AFRL DREAM project, and gift awards from IBM, Google, Disney and Bosch. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
"Computer Science"
] |
Fabrication of Niobium Nanobridge Josephson Junctions
To realize antenna-coupled Josephson detectors for microwave and millimeter-wave radiation, planar-type Nb nanobridge Josephson junctions were fabricated. Nb thin films, whose thickness, root mean square roughness and critical temperature were 20.0 nm, 0.109 nm and 8.4 K, respectively, were deposited using DC magnetron sputtering at a substrate temperature of 700 °C. Nanobridges were obtained from the film using 80-kV electron beam lithography and reactive ion-beam etching in CF4 (90%) + O2 (10%) gases. The minimum bridge area was 65 nm wide and 60 nm long. For nanobridges whose width and length were less than 110 nm, the I-V characteristics showed resistively-shunted-junction behaviour near the critical temperature. Moreover, Shapiro steps were observed in the nanobridge under microwave irradiation at frequencies of 6-30 GHz. The Nb nanobridges can be used as detectors in antenna-coupled devices.
Introduction
It is well known that the Dayem bridge has one of the simplest device structures showing the Josephson effect. The bridge width and length should be comparable to the coherence length of the superconducting material. Since the coherence length is 38 nm for Nb, this bridge is called a nanobridge [1]. Nowadays, nanobridges can be successfully obtained because electron beam lithography (EBL) technology is highly developed, e.g., the minimum width of a line pattern is < 10 nm [2].
The nanobridge Josephson junction is a planar junction. This junction is more suitable for fabricating antenna-coupled detectors than a conventional vertical junction such as a superconductor-insulator-superconductor junction, because a high-frequency current induced in a thin-film planar antenna, such as a slot, bow-tie or spiral antenna, is strongly coupled to the Josephson current flowing along the planar junction [3]. Using the antenna-coupled detector, it is expected to realize a harmonic mixer that detects microwaves and millimeter-waves with low local oscillator (LO) power and low LO frequency.
To fabricate nanobridges from Nb thin films, the thickness of the film should be less than 20 nm, because the aspect ratio of the thickness to the bridge width (or length) can hardly exceed 0.5 with reactive ion etching. In this study, deposition conditions for 20-nm-thick Nb-sputtered thin films were evaluated. Nb nanobridges were then fabricated from the thin films, and their electrical properties were evaluated for evidence that the nanobridges acted as Josephson junctions.
Fabrication of 20-nm-thick Nb thin films
Superconducting Nb thin films thinner than 20 nm with a flat surface morphology are required to fabricate Nb nanobridges. The thin films were deposited onto 10×10-mm² SiO2/Si substrates by a DC magnetron sputtering system with a DC power of 200 W and a 100-mm target-substrate distance in an Ar atmosphere of 0.8 Pa. The base pressure of the system was 7.0×10⁻⁶ Pa. Under these conditions, the deposition rate was 0.2 nm/s, and 20-nm-thick Nb films having a critical temperature T_c of 7.3 K were obtained. In order to improve T_c, the substrate temperature T_s was increased from room temperature to 700 °C [4]. Figure 1 shows the dependencies of T_c and the residual resistance ratio (RRR) on T_s. RRR is defined as R(273 K) / R(10 K). T_c increased with increasing T_s and almost saturated around 700 °C. The maximum T_c of 8.4 K was obtained at T_s = 700 °C. Therefore, the films have a sufficient value of T_c even if they are slightly degraded during the fabrication process of the nanobridge described below. Since the T_s dependence of T_c is similar to that of RRR, the origin of the T_c degradation is residual impurities such as niobium oxide in the films. Figure 2 shows an atomic force microscope (AFM) image of the Nb film deposited at T_s = 700 °C. Dense Nb grains with diameters of 40-100 nm were formed, as shown in the figure. The root mean square (RMS) surface roughness of the film was 0.109 nm. This value is half of the RMS of a film deposited at room temperature.
From these results, the obtained films are applicable to the nanobridge Josephson junctions.
Fabrication and electrical properties of Nb nanobridges
Nb nanobridges were fabricated from the 20-nm-thick Nb thin films mentioned above. The device structure of the nanobridge is shown in Figure 3(a). The bridge length and width w are defined in the magnified view in the figure. The taper angle of the Nb banks, 45°, was selected to avoid the microloading effect [5] in reactive ion-beam etching (RIBE), i.e., a decrease in the etching rate of the Nb film around the nanobridge.
Device fabrication
Prior to the nanobridge fabrication, Nb contact pads and wires, as shown in Figure 3(a), were patterned; the EBL acceleration voltage and dose were 80 kV and 204 µC/cm², respectively. After the EBL process, the film was etched using RIBE under the same conditions as the first RIBE process. A minimum bridge area of 65 nm in width and 60 nm in length was obtained. These bridge sizes are comparable to the coherence length of Nb near T_c. This means that these nanobridges can be regarded as weak links in the Nb films and act as Josephson junctions [6]. The nanobridges whose width and length were less than 110 nm actually showed the Josephson effect, as described below. An SEM image of a typical Nb nanobridge is shown in Figure 3(b). The width and length of this nanobridge were 110 nm and 51 nm, respectively.
I-V and microwave response characteristics
Figure 4 shows typical I-V characteristics of the fabricated nanobridge shown in Figure 3(b) at 7.1 K (a) without and (b) with microwave irradiation at 6.2 GHz. As shown in Figure 4(a), the I-V characteristic displays resistively shunted junction (RSJ)-like behaviour. For this junction, the critical current I_c and the normal resistance R_n were 450 µA and 1.2 Ω, respectively. From these values, the I_c·R_n product was estimated to be 540 µV.
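For orientation, the time-averaged I-V curve of an overdamped RSJ junction follows V = R_n·sqrt(I^2 - I_c^2) above the critical current; a minimal sketch with the reported values (I_c = 450 µA, R_n = 1.2 Ω) is below.

```python
import numpy as np

def rsj_voltage(i, i_c=450e-6, r_n=1.2):
    """Time-averaged DC voltage of an overdamped RSJ junction:
    V = R_n * sqrt(I^2 - I_c^2) for |I| > I_c, otherwise 0."""
    i = np.asarray(i, dtype=float)
    v = np.zeros_like(i)
    m = np.abs(i) > i_c
    v[m] = np.sign(i[m]) * r_n * np.sqrt(i[m]**2 - i_c**2)
    return v

print(450e-6 * 1.2)  # IcRn product: 5.4e-4 V = 540 uV, as reported
```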
Under microwave irradiation at frequencies in the range from 6 to 30 GHz, constant-current steps were observed. The microwaves were directed onto the nanobridge from a semi-rigid cable with an open end, driven by a signal generator (SG). Each irradiation frequency f and the voltage interval ΔV between the steps satisfied the Josephson voltage-frequency relation well. This implies that these steps are Shapiro steps and that the nanobridge acts as a Josephson junction. The highest order of the step was 7 at f = 6.2 GHz and an SG power of -0.3 dBm, as shown in Figure 4(b). However, the order can increase, and the maximum response frequency f_max can be much higher than 7 × 6.2 GHz = 43.4 GHz, if the coupling efficiency between the junction and the microwaves is improved by introducing the antenna-coupled structure into the device.
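The numbers quoted here and in the next paragraph follow directly from the Josephson voltage-frequency relation f = (2e/h)·V; a short numerical check:

```python
h = 6.62607015e-34   # Planck constant [J s]
e = 1.602176634e-19  # elementary charge [C]

f = 6.2e9                 # irradiation frequency [Hz]
print(h * f / (2 * e))    # Shapiro step spacing: ~12.8 uV

ic_rn = 540e-6            # measured IcRn product [V]
print(2 * e * ic_rn / h)  # characteristic frequency f_c: ~261 GHz
```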
Although f_max can increase by improving the microwave coupling, f_max is probably lower than the characteristic frequency f_c = (2e/h)·I_c·R_n = 260 GHz. This implies that the observed I_c includes an excess current, i.e., a supercurrent other than the Josephson current. An EB resist pattern for a nanobridge whose width and length are less than 40 nm can be formed by our EBL system, whereas it
Conclusion
Nb nanobridge Josephson junctions were fabricated from 20-nm-thick Nb thin films in order to realize antenna-coupled Josephson detectors for microwave and millimeter-wave radiation. Nb-sputtered thin films, whose thickness, RMS roughness and T_c were 20.0 nm, 0.109 nm and 8.4 K, respectively, were obtained at T_s = 700 °C. Nanobridges were obtained from the film using 80-kV EBL and RIBE in CF4 + O2 gases. The minimum bridge area was 65 nm wide and 60 nm long. For the nanobridges whose width and length were less than 110 nm, the I-V characteristics showed RSJ-like behaviour near T_c. Moreover, Shapiro steps were observed in the nanobridge under microwave irradiation at frequencies in the range from 6 to 30 GHz. These results are useful for realizing Josephson detectors in antenna-coupled devices. As another application, the junctions are applicable to nano-SQUIDs [7].
"Physics",
"Engineering"
] |
GDP-Mannose Pyrophosphorylase: A Biologically Validated Target for Drug Development Against Leishmaniasis
Leishmaniases are neglected tropical diseases that threaten about 350 million people in 98 countries around the world. In order to find new antileishmanial drugs, an original approach consists in reducing the pathogenic effect of the parasite by impairing the glycoconjugate biosynthesis, necessary for parasite recognition and internalization by the macrophage. Some proteins appear to be critical in this way, and one of them, the GDP-Mannose Pyrophosphorylase (GDP-MP), is an attractive target for the design of specific inhibitors as it is essential for Leishmania survival and it presents significant differences with the host counterpart. Two GDP-MP inhibitors, compounds A and B, have been identified in two distinct studies by high throughput screening and by a rational approach based on molecular modeling, respectively. Compound B was found to be the most promising as it exhibited specific competitive inhibition of leishmanial GDP-MP and antileishmanial activities at the micromolar range with interesting selectivity indexes, as opposed to compound A. Therefore, compound B can be used as a pharmacological tool for the development of new specific antileishmanial drugs.
INTRODUCTION
Leishmaniases are vector-borne neglected tropical diseases caused by protozoan parasites of the genus Leishmania and transmitted by hematophagous female phlebotomine sandflies. During its life cycle, the parasite alternates between a motile promastigote form within the phlebotome and an intracellular amastigote form in mammalian host macrophages. Leishmaniases can be classified in three main groups according to their clinical manifestations: cutaneous, which is the most common form; muco-cutaneous, leading to nasal and oropharyngeal lesions and marked disfigurements; and visceral, the most severe form, always fatal in the absence of adequate treatment. These clinical manifestations can be provoked by several Leishmania species: for instance, L. major or L. mexicana will give rise to cutaneous leishmaniasis, and L. donovani or L. infantum to visceral leishmaniasis. Only a few drugs are currently available for the treatment of leishmaniases. Antimonials, which have been used since the 1920s, generate strong toxicity at the cardiac, renal and hepatic levels and select for drug resistance. The other classical drugs, namely oral miltefosine, injectable liposomal amphotericin B and paromomycin, display some deleterious effects and now represent a potential threat of drug resistance as well (Croft et al., 2006; Sundar and Singh, 2016; Ponte-Sucre et al., 2017). The development of new antileishmanial treatments is thus crucial in this context. In order to overcome the limitations of the existing treatments, rational approaches have been used to develop new specific therapies for leishmaniases (Zulfiqar et al., 2017). Among the different strategies elaborated, the identification of new targets that are essential for parasite viability or virulence is an attractive approach for the development of specific antileishmanial compounds (Jiang et al., 1999; Burchmore et al., 2003; Jain and Jain, 2018). Indeed, these essential targets can be exploited by chemical screening in order to characterize inhibitor scaffolds whose specificities are optimized by pharmacomodulation, based on target three-dimensional structures. In this way, targets from Leishmania energy metabolism (i.e., glycolysis, folate or redox metabolism) were first intensively studied (Aronov et al., 1999; Chowdhury et al., 1999; Verlinde et al., 2001; Olin-Sandoval et al., 2010; Colotti et al., 2013; Leroux and Krauth-Siegel, 2016). Other biochemical pathways were also investigated, but the characterized inhibitors met some limitations such as parasite specificity, inhibitor synthesis cost and lack of in vivo activity (Croft and Coombs, 2003).
TARGETING MEMBRANE GLYCOCONJUGATE METABOLISM
There are two main ways to impair parasite development within the host, considering proteins expressed in the amastigote form as therapeutic targets. The first one relies on targeting biochemical pathways so as to produce an unbalanced metabolism that is toxic for the parasite. Many proteins have been considered for this purpose (Aronov et al., 1999; Chowdhury et al., 1999; Verlinde et al., 2001; Olin-Sandoval et al., 2010). The second one considers that a relevant Achilles' heel consists in preventing macrophage-parasite interactions (Descoteaux et al., 1995; Descoteaux and Turco, 1999; Podinovskaia and Descoteaux, 2015; Lamotte et al., 2017). As host-Leishmania interactions mainly rely on glycoconjugate recognition, an inhibition of glycoconjugate biosynthesis could affect this molecular recognition and therefore reduce the parasite burden. Furthermore, as glycosylation is a crucial pathway for macrophage infection (Descoteaux et al., 1995; Descoteaux and Turco, 1999; Pomel and Loiseau, 2013; Podinovskaia and Descoteaux, 2015), we hypothesize that an alteration of glycoconjugate structures would not easily select for drug resistance.
Mannose-containing glycoconjugates represent a large proportion of the carbohydrates displayed at the surface of a eukaryotic cell and are involved in many biological processes such as intercellular recognition, adhesion or signaling (Varki, 2007; Colley et al., 2017). In Leishmania, a wide range of unusual mannose-containing glycoconjugates [e.g., GlycosylPhosphatidylInositol (GPI) anchors, LipoPhosphoGlycans (LPG), ProteoPhosphoGlycans (PPG) or GlycosylInositolPhosphoLipids (GIPLs)] are synthesized and are essential for parasite virulence (Descoteaux and Turco, 1999; Pomel and Loiseau, 2013). The biosynthesis of these glycoconjugates initially requires the conversion of mannose into GDP-mannose. The mannose moiety of this nucleotide sugar is then transferred onto nascent glycoconjugates in the mannosylation reaction. In eukaryotic cells, mannose can either be imported via membrane transporters or be generated by the reaction catalyzed by PhosphoMannose Isomerase (PMI) on fructose-6-phosphate originating from glycolysis, producing mannose-6-phosphate (Figure 1A). In the mannosylation pathway, PhosphoMannoMutase (PMM) converts mannose-6-phosphate into mannose-1-phosphate (Figure 1). The activated form of mannose, GDP-mannose, is then produced by the action of GDP-Mannose Pyrophosphorylase (GDP-MP) according to the following reversible enzymatic reaction (Ning and Elbein, 2000):
mannose-1-phosphate + GTP ⇌ GDP-mannose + PPi
The GDP-MP is a ubiquitous enzyme found in bacteria, fungi, plants and animals, belonging to the family of nucleotidyltransferases. In mammalian organisms, GDP-MP has mainly been studied in swine (Szumilo et al., 1993; Ning and Elbein, 2000). The swine native enzyme is a complex of about 450 kDa with two distinct subunits: α (43 kDa) and β (37 kDa). In pig, as well as in human, the β subunit displays the enzymatic activity, while the α subunit would have a regulatory function (Szumilo et al., 1993; Ning and Elbein, 2000; Carss et al., 2013; Koehler et al., 2013). In human, the α and β subunits share 32% identity. Mutations in the genes coding for the α or β subunits in human lead to glycosylation disorders characterized notably by neurological deficits and muscular dystrophies (Carss et al., 2013; Koehler et al., 2013). Two β isoforms, named β1 and β2, have been characterized in the human genome, displaying 90 and 97% identity with the porcine β subunit, respectively. The human β2 isoform is strongly expressed in a wide range of tissues, in opposition to β1, which is only weakly expressed, especially in liver, heart and kidney (Carss et al., 2013). Additionally, the β2 isoform shows a better homology with Leishmania mexicana GDP-MP compared to β1 (49% for β2 vs. 46% for β1). In bacteria, GDP-MPs are mostly dimeric, either mono- or bifunctional, the latter displaying both GDP-MP and PMI activities in separate domains of a single enzyme (Shinabarger et al., 1991; May et al., 1994; Ning and Elbein, 1999; Wu et al., 2002; Asencion Diez et al., 2010; Pelissier et al., 2010; Akutsu et al., 2015). Unlike in other organisms, leishmanial GDP-MP has been shown to assemble as a hexamer of 240 kDa in several Leishmania species (Davis et al., 2004; Mao et al., 2017). As this hexamer can dissociate at low ionic strength and low protein concentration, a mixture of the three forms may be present in the reaction medium in vitro.
Both human and leishmanial GDP-MP have been reported to display a high substrate specificity (Mao et al., 2017), in agreement with previous studies performed on bacterial, trypanosomal and swine GDP-MP (Ning and Elbein, 2000; Denton et al., 2010; Pelissier et al., 2010). The investigation of the mechanism of reaction has shown a sequential ordered mechanism in most bacterial GDP-MPs, as in some other nucleotidyl-transferases, with GTP fixation prior to mannose-1-phosphate (Barton et al., 2001; Zuccotti et al., 2001; Asencion Diez et al., 2010; Pelissier et al., 2010; Boehlein et al., 2013). However, leishmanial and human GDP-MP have been characterized by a sequential random mechanism (Mao et al., 2017), in which the substrate binding order is not defined, in agreement with a mammalian nucleotidyl-transferase (Persat et al., 1983), suggesting that the GDP-MP mechanism of reaction differs from bacteria to Leishmania and human.
A knockout of the gene encoding GDP-MP in L. mexicana led to an absence of development in the macrophage in vitro and an absence of parasite persistence in vivo (Garami and Ilg, 2001; Stewart et al., 2005). These results show that GDP-MP is critical for amastigote survival and is therefore an interesting therapeutic target to be exploited for antileishmanial drug development. Likewise, GDP-MP has been described as essential for cell integrity and survival in other microorganisms such as Trypanosoma brucei, Aspergillus fumigatus and Candida albicans, providing biological validation of this enzyme as a potential therapeutic target in several kinetoplastids and fungi (Warit et al., 2000; Jiang et al., 2008; Denton et al., 2010). Additionally, a High-Throughput Screening (HTS) assay allowed the selection of leishmanial GDP-MP inhibitors (Lackovic et al., 2010). From this study, the most potent inhibitor identified was a piperazinyl quinoline derivative (compound A; Figure 1B), demonstrating in vitro activity on L. major GDP-MP and on intracellular parasite proliferation with IC50 values of 0.58 and 21.9 µM, respectively.
COMPUTATIONAL AND TARGET-BASED DRUG DESIGN
A molecular model of the GDP-MP quaternary structure has been generated in L. mexicana, confirming the hexameric structure of the enzyme (Perugini et al., 2005). Based on this model, GDP-MP hexamers would be assembled by a contact between trimer structures in a head-to-head manner involving only the N-terminal end of the protein. These results are however in opposition to crystallography studies of other GDP-MP or nucleotidyl-transferases, showing a tail-to-tail arrangement of the C-terminal β-helices in their quaternary structures (Cupp-Vickery et al., 2005;Jin et al., 2005;Pelissier et al., 2010;Führing et al., 2015).
As no GDP-MP crystal could be obtained in Leishmania, molecular models of L. infantum and L. donovani GDP-MP were generated using distinct sequence alignment strategies and were compared with the human counterpart (Pomel et al., 2012;Daligaux et al., 2016a). Both analyses showed a structural conservation of a consensus sequence GXGXRX n K in leishmanial and human GDP-MP corresponding to a pyrophosphorylase signature motif, as well as the F(V)EKP sequence previously described to be part of the GDP-MP active site (Sousa et al., 2008). Interestingly, several specific residues have been identified in the catalytic site of both L. infantum and L. donovani GDP-MP compared to the human counterpart (Pomel et al., 2012;Daligaux et al., 2016a). Moreover, GDP-MP sequences share more than 85% of identity in the Leishmania genus. Therefore, the differences identified between the leishmanial and human catalytic sites could potentially be exploited to design specific antileishmanial agents.
The GDP-mannose, as a substrate or a product of the GDP-MP, was selected as the basis for inhibitor design because its steric volume presents the maximum of interactions within the enzyme catalytic pocket (Mao et al., 2017). In this work, the chemical approach to designing leishmanial GDP-MP inhibitors relied on the pharmacomodulation of GDP-mannose guided by the analysis of enzyme molecular models: substituting, for example, the mannose moiety by a phenyl group, the pyrophosphate by a triazole or a phosphonate, the ribose by an ether oxide group or a deoxyribose, and the guanine by different heterocycles such as purine analogs or quinolines. In particular, two-substituted quinolines have previously been described to display promising in vitro and in vivo antileishmanial activities (Fournet et al., 1993, 1994, 1996; Nakayama et al., 2005, 2007; Campos-Vieira et al., 2008; Loiseau et al., 2011). Therefore, the presence of two-substituted quinolines in the designed compounds could potentiate their antileishmanial activities through GDP-MP inhibition.
CELL-FREE IN VITRO AND IN SILICO EVALUATION OF COMPOUNDS ON PURIFIED GDP-MPs
From the analysis of GDP-MP structural models, a library of 100 compounds was designed and synthesized (Daligaux et al., 2016b; Mao et al., 2017). These compounds were evaluated on recombinant GDP-MP purified from L. donovani (LdGDP-MP), L. mexicana (LmGDP-MP) and human (hGDP-MP). In this work, the hGDP-MP corresponded to the β2 subunit, which displays the enzyme activity and shows the highest homology with leishmanial GDP-MP (see above). This evaluation allowed the identification of compound B, a quinoline derivative substituted in position 2 with a methoxy-ethyltriazol-butyn-diisopropylphosphonate group (Figure 1B), as a specific competitive inhibitor of LdGDP-MP with a Ki of 7 µM. In comparison, compound A, previously identified by HTS (Lackovic et al., 2010), displayed competitive inhibition of both LdGDP-MP and hGDP-MP with Ki values of 62 and 20 µM, respectively, reflecting a lower affinity for the leishmanial enzyme compared to the human counterpart.
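Competitive inhibition of this kind leaves Vmax unchanged while raising the apparent Km by a factor (1 + [I]/Ki). A brief numerical sketch using the reported Ki = 7 µM for compound B is given below; the Km and Vmax values are placeholders, not measured parameters of the enzyme.

```python
import numpy as np

def v_competitive(s, i, vmax=1.0, km=50.0, ki=7.0):
    """Michaelis-Menten rate with a competitive inhibitor (uM units):
    v = Vmax*S / (Km*(1 + I/Ki) + S). Km and Vmax are placeholders;
    Ki = 7 uM is the reported value for compound B on LdGDP-MP."""
    return vmax * s / (km * (1.0 + i / ki) + s)

s = np.array([10.0, 50.0, 200.0])
print(v_competitive(s, i=0.0))   # uninhibited rates
print(v_competitive(s, i=20.0))  # apparent Km raised, Vmax unchanged
```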
A docking study of the identified competitive inhibitors on GDP-MP structural models showed that compound A binds to both LdGDP-MP and hGDP-MP with similar potency and binding modes: the quinoline, piperazine, and tert-butyl groups occupy the same positions as the GDP-mannose nucleotide, ribose, and mannose moieties, respectively, in both catalytic sites (Daligaux et al., 2016a; Figure 1C). In contrast, compound B was found to bind more strongly to LdGDP-MP than to hGDP-MP, with the diisopropylphosphonate group located more deeply in the leishmanial enzyme catalytic pocket than in the human one (Mao et al., 2017; Figure 1C). These in silico data are in agreement with the non-selective inhibition of both leishmanial and human GDP-MP by compound A and with the specific competitive inhibition of LdGDP-MP observed with compound B.
CELLULAR IN VITRO ANTILEISHMANIAL ACTIVITY AND CYTOTOXICITY OF COMPOUNDS A AND B
Both compounds have been evaluated on L. donovani and L. mexicana axenic and intramacrophage amastigotes in two host cell models: the RAW264.7 macrophage cell line and primary Bone Marrow Derived Macrophages (BMDM; Mao et al., 2017). Compound A showed moderate antileishmanial activity on both L. mexicana and L. donovani, with IC50 values between 30 and 50 µM on axenic amastigotes and between 12 and 28 µM on intramacrophage amastigotes (Mao et al., 2017; Table 1). These data are in agreement with the IC50 of 21.9 µM previously reported on L. major intramacrophage amastigotes by Lackovic et al. (2010). Moreover, this GDP-MP inhibitor showed some cytotoxicity on both RAW264.7 and BMDM macrophages, giving a low Selectivity Index (SI) in both host cell models. On the other hand, compound B exhibited a very interesting activity on L. donovani axenic amastigotes, with an IC50 in the micromolar range (Mao et al., 2017; Table 1). However, it was inactive on L. mexicana axenic amastigotes, in line with the data obtained on the purified enzyme showing specific competitive inhibition of LdGDP-MP. In L. donovani intramacrophage amastigotes, the activity of compound B was maintained with an IC50 in the micromolar range in both host cell models (Mao et al., 2017; Table 1). Interestingly, this compound was also active on L. mexicana intramacrophage amastigotes, with IC50 values of 1.5 and 8.6 µM in the RAW264.7 and BMDM cell models, respectively, suggesting that an additional mechanism of action, distinct from parasite GDP-MP inhibition, may be involved. Moreover, no cytotoxicity was observed with compound B on BMDM, giving promising SI values above 94 and 12 in L. donovani and L. mexicana, respectively (Mao et al., 2017; Table 1). Nevertheless, some cytotoxicity was observed on RAW264.7 macrophages, giving a low SI in this cell model. These differences could be due to distinct mechanisms of drug uptake and accumulation between host cell models, the BMDM being closer to physiological and clinical conditions as they are primary macrophages. (Table 1 was adapted from Mao et al. (2017); the results correspond to the mean of three independent experiments ± SD.)
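The Selectivity Index is taken here in its usual sense; assuming the standard definition (not restated in the source),

$$\mathrm{SI} = \frac{\mathrm{CC}_{50}\ (\text{host macrophages})}{\mathrm{IC}_{50}\ (\text{intramacrophage amastigotes})},$$

an SI above 94 for compound B on L. donovani in BMDM indicates that killing the parasite requires a far lower concentration than harming the host cell.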
CONCLUSION AND FUTURE DIRECTIONS
The mannose activation enzyme systems leading to GDP-mannose biosynthesis are essential for host-parasite interactions. Thus GDP-MP, as well as PMI and PMM, are interesting targets for inhibition in order to impair glycoconjugate biosynthesis.
In this review, we focus on GDP-MP, the enzyme responsible for GDP-mannose biosynthesis. GDP-MP is a druggable protein involved in host-cell/parasite interactions that has now been biologically and pharmacologically validated. Although the enzyme is ubiquitous, molecular modeling of both leishmanial and human GDP-MPs strongly suggests that specific inhibitors can be designed. From a rational design of 100 compounds based on leishmanial and human GDP-MP tertiary structural models, compound B appeared to be the most promising. In comparison with compound A, which displayed competitive inhibition of both leishmanial and human GDP-MP with moderate antileishmanial activities and a low SI, compound B showed specific competitive inhibition of LdGDP-MP and micromolar-range activity on both L. donovani and L. mexicana intramacrophage amastigotes, giving an interesting SI above 10 in the BMDM host cell model. Therefore, the in vivo antileishmanial activity of this compound should be analyzed in order to determine its potency for the treatment of leishmaniasis. Further investigations will address the in vivo antileishmanial evaluation, pharmacokinetics, and pharmacodynamics of compound B to confirm its status as a hit. Furthermore, the pathways altered in the parasite by compound B could be investigated in future work through glycomics analysis, in order to study the impact of this inhibitor on the membrane glycoconjugate composition. Pharmacomodulation of compound B would also make it possible to optimize its selectivity and affinity for the target in Leishmania. However, the large molecular volume of this compound, required to fill the GDP-MP catalytic pocket (Mao et al., 2017), as well as its high polarity, could present challenges for downstream optimization. In order to assess the relative importance of GDP-MP in the most pathogenic leishmanial species, comparative functional analyses should be performed to optimize the inhibitor strategy.
In conclusion, compound B can be considered an original and interesting hit to be optimized, proving that GDP-MP inhibition is a promising strategy to impair host-parasite interactions. However, the capacity of this specific metabolic alteration to prevent the emergence of drug resistance remains to be proven.
AUTHOR CONTRIBUTIONS
SP wrote the manuscript. WM, TH-D, CC, and PL contributed to manuscript revision and read and approved the submitted version.
ACKNOWLEDGMENTS
We are grateful to DIM Malinf (Région Île de France) for WM Ph.D. funding. | 4,152.4 | 2019-05-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
A hybrid algorithm based on TDOA and DOA for underwater target localization
To address the problems of complex scenarios, the low accuracy of any single localization algorithm, and the high equipment requirements of localization systems, this paper proposes a Taylor-weighted least squares algorithm based on a joint Time Difference of Arrival (TDOA) and Direction of Arrival (DOA) localization method. The method first performs DOA estimation of the target by a Bayesian algorithm with off-grid sparse reconstruction, converts the obtained bearing information into position coordinates, and finds the final position of the target through iterative operations. The proposed algorithm is compared, in a simulation environment, with the existing TDOA method based on the Chan algorithm, the TDOA-based Taylor algorithm initialized with the Chan algorithm, and the Two-Step Weighted Least Squares (TSWLS) algorithm. The results show that the proposed algorithm has better performance in terms of localization accuracy, noise immunity, and stability, and is suitable for underwater target localization.
Introduction
TDOA localization [1, 2] has received much attention from scholars in underwater target localization because it does not require time synchronization between the target and the sensors. A Taylor-weighted least squares algorithm based on TDOA and FDOA was proposed in the literature [3] and applied underwater to estimate the position and velocity of the target. In the literature [4], the authors proposed a hybrid TDOA- and AOA-based localization algorithm using two base stations for target localization, which achieved good results with small measurement errors and was able to attain the Cramer-Rao Lower Bound (CRLB).
This paper proposes a hybrid TDOA and DOA localization method based on the Taylor-weighted least squares algorithm. Firstly, the off-grid sparse-reconstruction Bayesian estimation method is used for DOA estimation of the submerged target; secondly, the obtained bearing information is converted into position coordinates, which are then used as the initial value of the TDOA-based Taylor-weighted least-squares algorithm. The Taylor expansion of the TDOA measurements is used to construct the localization error equation, and the local least-squares solution of the error equation is derived to iteratively update the target position until the set iteration threshold is reached and the algorithm stops. Finally, the exact position of the target is obtained.
Cross-domain co-location scenarios
The cross-domain cooperative communication network architecture based on the joint positioning method of TDOA and DOA consists of a shore-based command centre, Unmanned Aerial Vehicle
Taylor series expansion method
Assuming that the initial position of the localization target is $(x_0, y_0)$, the Taylor series expansion of the localization function about this point is given in (1). Also, based on the principle of the TDOA positioning algorithm, the following set of equations, (2) and (3), can be obtained.
Here $(x, y)$ denotes the coordinates of the target, and $(x_i, y_i)$ is the location of the $i$-th hydrophone. It is assumed that the first hydrophone is the reference hydrophone and that the distance from the source to the $i$-th hydrophone is $R_i$. Then $R_{i1} = R_i - R_1$ denotes the difference between the distance from the target to the $i$-th hydrophone and the distance to the first hydrophone. A function is constructed from the above equations as in (4).
Linearizing this function about the initial estimate $(x_0, y_0)$ yields the localization error equation; the weighted least-squares solution of this equation is given in (7).
Here $Q$ denotes the covariance matrix of the TDOA measurements. In the next recursive operation, the estimate is updated as $x_0 \leftarrow x_0 + \Delta x$, $y_0 \leftarrow y_0 + \Delta y$, and the updated coordinate values of the target are used in the next iteration.
The above process is repeated until the correction satisfies $|\Delta x| + |\Delta y| < \varepsilon$ for a preset threshold $\varepsilon$, finally yielding the corrected estimate of the target position.
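To make the iteration above concrete, here is a minimal Python sketch of a Taylor-series weighted least-squares TDOA solver; the function name, array layout, and stopping rule are illustrative choices, not the paper's implementation.

```python
import numpy as np

def taylor_wls_tdoa(sensors, r_diff, x0, Q=None, eps=1e-6, max_iter=50):
    """Taylor-series weighted least-squares TDOA solver (illustrative sketch).

    sensors : (N, 2) hydrophone positions; sensors[0] is the reference.
    r_diff  : (N-1,) measured range differences R_i - R_1 for i = 2..N.
    x0      : (2,) initial target position, e.g. converted from a DOA fix.
    Q       : (N-1, N-1) covariance matrix of the TDOA measurements.
    """
    x = np.asarray(x0, dtype=float)
    sensors = np.asarray(sensors, dtype=float)
    W = np.linalg.inv(Q) if Q is not None else np.eye(len(sensors) - 1)
    for _ in range(max_iter):
        d = np.linalg.norm(sensors - x, axis=1)            # distances R_i
        h = r_diff - (d[1:] - d[0])                        # TDOA residuals
        # Jacobian of (R_i - R_1) with respect to (x, y)
        G = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        delta = np.linalg.solve(G.T @ W @ G, G.T @ W @ h)  # WLS correction step
        x = x + delta
        if np.abs(delta).sum() < eps:                      # stopping criterion
            break
    return x
```

A typical call would initialize `x0` from the DOA-based position estimate described earlier and pass the measured range differences converted from time delays.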
TDOA and DOA hybrid positioning method
According to the DOA estimation model, we can obtain (8).
$$Y = A(\theta)X + E \qquad (8)$$
Here $Y = [y(1), y(2), y(3), \ldots]^T$ is the received signal vector, $A(\theta)$ is the array manifold matrix, $X$ is the sparse signal vector, and $E$ is the noise. According to the literature [5-7], the fourth-order cumulant solution of the received signal can be calculated as in (9), where $(\cdot)^*$ denotes the conjugate, $a_{ik}$ denotes the element in the $i$-th row and $k$-th column of the array manifold matrix $A(\theta)$, and $\mathrm{cum}(\cdot)$ denotes the cumulant. The cumulant matrix is constructed twice to reduce its dimensionality, removing the redundant elements and keeping only the informative non-zero elements, which yields the sparse representation of the model in (10); the resulting overcomplete manifold matrix spans the spatial domain, and the sparse vector contains few non-zero elements, corresponding to the grid points of the target positions. A first-order Taylor formula is used to approximate the true off-grid points, constructing a new overcomplete array manifold matrix as in (11). The grid correction vector $\beta = [\beta_1, \beta_2, \ldots]^T$, with entries in $[-1/2, 1/2]$, is defined as the mismatch error between the target position and the nearest grid point, where the division distance between adjacent grid points is $r$; $\beta$ is nonzero only at grid points associated with targets and can be obtained as in (12). Thus, the off-grid sparse DOA estimation model is obtained as in (13). Finally, sparse Bayesian learning is used to solve for the globally optimal DOA and obtain the DOA estimate. The estimated DOA angle is converted into target position coordinates, which serve as the initial value for the iteration of the TDOA-based Taylor method.
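The off-grid correction of (11) can be illustrated with a short sketch that builds the overcomplete manifold for a uniform line array; the steering-vector convention, function name, and default parameters are assumptions for illustration only.

```python
import numpy as np

def offgrid_manifold(grid_deg, beta, m=6, d_over_lambda=0.5):
    """Overcomplete array manifold with a first-order off-grid correction.

    grid_deg : coarse angle grid (degrees); beta[g] in [-1/2, 1/2] is the
    mismatch of a source from grid point g in units of the grid spacing r.
    """
    theta = np.deg2rad(np.asarray(grid_deg, dtype=float))
    k = np.arange(m)[:, None]                       # array element index
    phase = 2j * np.pi * d_over_lambda * k          # per-element phase factor
    A = np.exp(phase * np.sin(theta)[None, :])      # on-grid steering matrix
    B = A * (phase * np.cos(theta)[None, :])        # dA/dtheta, first-order term
    r = np.deg2rad(grid_deg[1] - grid_deg[0])       # grid division distance
    return A + B * (np.asarray(beta) * r)[None, :]  # corrected manifold, cf. (11)
```

Sparse Bayesian learning then alternates between estimating the sparse vector under this manifold and refining the correction vector `beta`.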
Theoretical model of Acoustic propagation
According to the literature [8], the normal-mode solution for the sound pressure of the shallow-water acoustic field is obtained in (14).
The eigenfunctions of the normal modes satisfy the characteristic equation of the normal modes. In order to obtain the propagation time difference of the received signal, a Warping-transform-based time-frequency analysis method is used to separate the normal modes and obtain each order of simple normal mode.
The result of the Warping transform is given in (15), where $r$ is the propagation distance and $c$ is the average speed of sound in the waveguide.
Normal mode separation based on the improved Warping transform
The simulation uses the following ocean environment: the seafloor is a liquid bottom with a sound speed of 1700 m/s, a density of 1.5 g/cm³ and an absorption of 0.5 dB; the water depth is 25 m. The sound source is located at a depth of 20 m underwater, and the hydrophone is situated at a depth of 24 m. An amplitude-modulated LFM signal with a Blackman window, 3 s long at 800-1200 Hz, is used, and the sound field is calculated with the KRAKEN normal-mode model. The transmitted signal and its spectrum used in the simulation are shown in figure 2. Figure 3 gives the separation results of the 1st- and 2nd-order normal modes. From figure 4, the correlation peak between the 1st-order and 2nd-order normal modes occurs at 0.8821 s; the time delay difference between the two mode orders is therefore obtained as 0.8821 s.
DOA estimation results
The number of uniform line array elements is $M = 6$. The DOA estimation results are shown in figure 5(a); the estimated angles of the measured bearings are [44.5071°, 63.4708°]. The estimated DOA azimuths are then converted into the position coordinates of the target, where the two angles are the azimuths of the target as seen from the two hydrophones, $(x, y)$ is the true position of the target, and $(x_{BS1}, y_{BS1})$ and $(x_{BS3}, y_{BS3})$ are the position coordinates of the two hydrophones. In turn, the estimated position of the target is obtained, as shown in figure 5(b); a sketch of this bearing-to-position conversion is given below.
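As an illustration of the conversion step, the following Python sketch intersects two azimuth lines from known hydrophone positions to recover a position estimate; the angle convention (measured from the x-axis) and the function name are our own assumptions, since the paper's conversion equation is not legible in this excerpt.

```python
import numpy as np

def bearings_to_position(p1, p2, theta1_deg, theta2_deg):
    """Intersect two azimuth lines from hydrophones at p1 and p2.

    Angles are measured from the x-axis (assumed convention); the solve
    fails if the two bearings are parallel (no unique intersection).
    """
    t1, t2 = np.deg2rad([theta1_deg, theta2_deg])
    u1 = np.array([np.cos(t1), np.sin(t1)])   # unit direction from p1
    u2 = np.array([np.cos(t2), np.sin(t2)])   # unit direction from p2
    # Solve p1 + s1*u1 = p2 + s2*u2 for the line parameters (s1, s2).
    A = np.column_stack([u1, -u2])
    s = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + s[0] * u1
```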
Hybrid localization algorithm results
The RMSE is defined as
$$\mathrm{RMSE} = \sqrt{\frac{1}{L}\sum_{l=1}^{L}\left\| \widehat{MS}_l - MS \right\|^2},$$
where $L$ is the number of Monte Carlo experiments, $MS$ is the true coordinate of the target, and $\widehat{MS}_l$ is the estimated position coordinate in the $l$-th experiment.
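Computed over a set of Monte Carlo estimates, this is simply (a sketch with illustrative names):

```python
import numpy as np

def rmse(true_pos, estimates):
    """RMSE over L Monte Carlo estimates of a fixed true position."""
    err = np.linalg.norm(np.asarray(estimates, float) - np.asarray(true_pos, float), axis=1)
    return np.sqrt(np.mean(err ** 2))
```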
As shown in figure 6(a), the RMSE of all the algorithms gradually increases as the measurement noise standard deviation ranges from 1 m to 10 m. The RMSE of the Chan algorithm is the largest, with the TSWLS algorithm second. The RMSE of the Taylor algorithm initialized with the Chan algorithm is close to that of the proposed algorithm, but the RMSE of the proposed algorithm remains smaller than that of the Chan-initialized Taylor algorithm as the measurement noise increases. To observe the localization accuracy of the algorithms more intuitively, 1000 Monte Carlo random experiments were conducted with a measurement noise standard deviation of 10 m. From figure 6(b), the estimated position of the proposed algorithm is (99.9053, 199.7864), which is close to the real target position (100, 200).
Conclusion
This paper introduces a hybrid TDOA- and DOA-based localization algorithm. The method integrates ranging and bearing information: it uses a Bayesian algorithm with off-grid sparse reconstruction to estimate the DOA, converts the bearing information into position information to serve as the initial value of the Taylor-based algorithm, and finds the final position of the target through iterative operations.
Simulation experiments show that the proposed algorithm outperforms the existing algorithms in localization accuracy, noise resilience and stability. This offers a novel approach for accurate and stable underwater target positioning, contributing to joint positioning technology research. Finally, this paper acknowledges support from the Kunming AI Computing Centre.
Figure 1. The architecture diagram of the cross-domain collaborative communication system.
Figure 5. DOA estimation and estimated position. As shown in figure 5(b), the target position estimated by the DOA estimation algorithm proposed in this paper is (98.7191, 197.6131), which is close to the real target position. It is therefore chosen as the initial target location for the iterative calculation of the Taylor algorithm.
Figure 6. The results of several algorithms for varying measurement noise. | 2,266.6 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Genetic control of resistance to rosette virus disease in groundnut (Arachis hypogaea L.)
Groundnut rosette disease is one of the most damaging diseases militating against groundnut production in sub-Saharan Africa; it causes up to 100% yield loss whenever an epidemic occurs. The most effective, economic and environmentally friendly method of controlling the disease is through genetic resistance. Knowledge of the inheritance of resistance to the rosette disease is required to accelerate the breeding of resistant varieties. A study was conducted to understand the nature and magnitude of gene effects for resistance to the disease. F1, RF1, F2, RF2, BC1, RBC1, BC2, and RBC2 progenies derived from the crosses Otuhia × Manipintar, Otuhia × Shitaochi, ICGV 01276 × Manipintar, and ICGV 01276 × Shitaochi, along with their parents, were evaluated in a randomized complete block design at CSIR-CRI, Fumesua, under artificial infection. Generation mean analysis revealed that additive gene action was predominant for resistance to the disease in all the crosses. Additive × dominance was the only form of non-allelic interaction observed to be significant, in the ICGV 01276 × Manipintar cross. Reciprocal differences suggested the presence of maternal effects in the inheritance of resistance to groundnut rosette disease. Estimates of broad and narrow sense heritability indicated that genetic effects were larger than environmental effects in this study. Disease diagnosis using TAS-ELISA revealed the presence of groundnut rosette assistor virus (GRAV) antigens in the resistant samples analyzed. Resistant genotypes containing GRAV were considered resistant to GRV and its Sat-RNA, but not to GRAV, which causes no obvious symptoms by itself. Pure line breeding with selection from early generations is suggested for the improvement of resistance to rosette virus disease, because additive genetic effects contributed significantly to the inheritance of resistance to groundnut rosette disease (GRD).
INTRODUCTION
Groundnut is an important food crop, providing income and livelihoods to many farmers in Africa. Its production is constrained by several biotic and abiotic factors, such as diseases, pests, aflatoxin and drought. Groundnut rosette disease (GRD) has been described as the most damaging disease of groundnut in sub-Saharan Africa, causing yield losses approaching 100% whenever an epidemic occurs (Ntare et al., 2002). Yield loss due to GRD depends on the growth stage at which infection occurs: seedling-stage infection leads to 100% yield loss, whilst infection at the pod-filling stage causes negligible effects (Naidu et al., 1999a; Waliyar et al., 2007). The disease contributes to an annual loss of US$156 million across Africa (Nigam et al., 2012).
The disease is caused by three agents: groundnut rosette virus (GRV), a satellite RNA (Sat-RNA) of GRV, and groundnut rosette assistor virus (GRAV) (Bock et al., 1990). Aphis craccivora Koch transmits the virus complex in a persistent and circulative manner (Okusanya and Watson, 1966). Variants of the Sat-RNA have been shown to be responsible for different rosette symptoms, such as green and chlorotic rosette (Murant and Kumar, 1990; Taliansky and Robinson, 1997). The complex interaction of these agents in causing the disease makes it a unique and fascinating virus disease whose origin and perpetuation in nature, in spite of substantial advances in knowledge, remain inconclusive (Waliyar et al., 2007).
Farmers have adopted several cultural, biological and chemical methods to curb the spread of the disease, but the adoption rate of these methods has been very low because they are neither economical nor effective (Olorunju et al., 2001). The most economic, ecological and environmentally friendly method of control is the use of rosette-resistant lines (Adu-Dapaah et al., 2004). Recent reports have shown that resistance is directed towards GRV and the Sat-RNA, but not GRAV (Waliyar et al., 2007). Breeding for resistance to diseases remains a principal focus of the groundnut breeding programme in Ghana. Although the genetics of resistance to the disease has been reported, the mechanism of resistance may differ among parental sources. To facilitate the design of breeding strategies for developing GRD-resistant cultivars, it would be beneficial to understand more completely the mode of inheritance of resistance to the disease. The objectives of the study were to determine the mode of gene action, the contribution of maternal effects, and the heritability of resistance to GRD.
Genetic materials
Parent materials were selected based on germplasm screening for resistance to the disease by the Council for Scientific and Industrial Research (CSIR)-Crops Research Institute, Kumasi, Ghana. The resistant genotypes were Otuhia and ICGV 01276, while the susceptible genotypes were Manipintar and Shitaochi.
Disease evaluation of parental and progenies
Disease evaluation of the parents and the progenies was done under a high disease-pressure environment created through aphid infestation in the field of the Council for Scientific and Industrial Research (CSIR)-Crops Research Institute, Kumasi, Ghana. The trials were laid out in a randomized complete block design with 3 replications. Each replicate consisted of one plot of each of the parents, F1, RF1 and backcross generations, and two plots of each of the F2 and RF2 generations. Each plot was a single row, 2 m long, with 0.4 m between rows and 0.2 m between plants, giving 10 plants per row. Plants were sown at a rate of 1 seed per hill. Aphid colonies were reared on the highly infested genotype Manipintar in netted cages prior to planting of the experiments. Triple antibody sandwich enzyme-linked immunosorbent assay (TAS-ELISA) tests were done to detect the presence of GRAV on the infested plants before the aphid colonies were collected from them. Five wingless viruliferous aphids were transferred onto 7- to 14-day-old seedlings of the test materials using a wet camel-hair brush, as described by Naidu and Kimmins (2007), to ensure effective inoculation by the vectors. Each of the test plants was evaluated for GRD symptoms at weekly intervals for the first four weeks and every two weeks thereafter, using a 1-5 rating scale (Pande et al., 1997; Olorunju et al., 2001) as follows: 1 = No visible symptoms on leaves (Highly Resistant); 2 = Rosette symptoms on 1-20% of leaves, but no obvious stunting (Resistant); 3 = Rosette symptoms on 21-50% of leaves, with stunting (Moderately Resistant); 4 = Severe symptoms on 51-70% of leaves, with stunting (Susceptible); and 5 = Severe symptoms on 71-100% of leaves, with stunting (Highly Susceptible).
Disease diagnosis
Leaf samples for serological testing were taken from field plants rated 1 to 4 (susceptible and resistant). An indirect triple antibody sandwich enzyme-linked immunosorbent assay, which uses Beet western yellows virus (Luteovirus) antiserum, was used for the detection of GRAV antigen in the samples (Naidu et al., 1999b). Purified polyclonal antiserum (IgG) (AS-0049), raised against a purified preparation of GRAV, was diluted at the recommended dilution in coating buffer, and 200 µl was added to each well of a microtiter plate; each sample was allotted two wells in the 96-well microplate. 200 µl of the monoclonal antibodies (MAb) (AS-0049/1) to Beet western yellows virus, which react with the GRAV coat protein, was used as the secondary antibody. Since the monoclonal antibodies were not labeled, a secondary, animal-species (mouse) antibody was used to react with the bound MAb. This anti-mouse (RAM) antibody was labeled with alkaline phosphatase (AP) as the reporter group for detecting the antibody. A 200 µl aliquot of freshly prepared substrate [10 mg p-nitrophenyl phosphate (Sigma, Fluka) dissolved in 10 ml of substrate buffer] was added to each well of the plates. The plates were incubated at room temperature for 30-50 min to obtain a clear reaction. Samples were then assessed visually and by spectrophotometric measurement of absorbance at 405 nm. All TAS-ELISA kits and protocols used were supplied by the Leibniz Institute DSMZ, Germany.
Statistical and genetic analyses
Data collected were subjected to analysis of variance (ANOVA) using the GENSTAT statistical package (Discovery Edition 4). Means were separated using the least significant difference (LSD) at 5%. Generation mean analysis (GMA) (Mather and Jinks, 1982) was carried out to determine the types of gene action influencing the expression of the groundnut rosette virus disease resistance trait. Gene effects based on a six-parameter model were estimated using PBTools, version 1.4, 2014. A weighted regression approach was used for the generation mean analysis. Two full models were fitted to the data: the first was "mean = 0 + m + a + d + aa + ad" and the other was "mean = 0 + m + a + d + aa + dd". For each model, a backward regression procedure was used to obtain the best model. Broad-sense (h²b) and narrow-sense (h²n) heritabilities were estimated using the variance component method (Wright, 1968) and the variances of the F2 and backcross generations (Warner, 1952), respectively, as given below.
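The formulas themselves are not reproduced in the source text; the standard forms from Wright (1968) and Warner (1952) matching this description are:

$$h^2_b = \frac{\sigma^2_{F_2} - \sigma^2_E}{\sigma^2_{F_2}}, \qquad h^2_n = \frac{2\sigma^2_{F_2} - \left(\sigma^2_{BC_1} + \sigma^2_{BC_2}\right)}{\sigma^2_{F_2}},$$

where $\sigma^2_{F_2}$, $\sigma^2_{BC_1}$ and $\sigma^2_{BC_2}$ are the phenotypic variances of the F2 and the two backcross generations, and $\sigma^2_E$ is the environmental variance estimated from the non-segregating generations (parents and F1).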
RESULTS
Mean values, standard errors and variances for resistance to GRD in the four crosses are presented in Table 1. The parents used in this research showed significant differences in rosette virus reactions: Otuhia (P1) was the most resistant, followed by ICGV 01276, while Manipintar (P4) was highly susceptible. Means of the direct and reciprocal first filial generations (F1) were significantly different in three of the crosses, the exception being the Otuhia × Manipintar cross. The F1 and F2 were more resistant than their respective reciprocal crosses.
The mean of the F1s was less than the mid-parent value but higher than the mean of the parent with the lowest disease score (P1). Significant mean differences were detected in all the backcrosses except the Otuhia × Manipintar backcross, with the reciprocal cross of Otuhia × Manipintar recording the highest mean score (Table 1).
Mean scores for BC1 and BC2 were significantly different from each other in two of the crosses, i.e., Otuhia × Shitaochi and Otuhia × Manipintar.
Results from the TAS-ELISA showed that all 23 susceptible and 9 resistant samples tested positive for GRAV, indicating that GRAV antigen was present in all the samples analyzed (Table 2).
Generation mean analysis of the gene effects controlling inheritance of resistance to the groundnut rosette disease is presented in Table 3. The results provide estimates of the main effects and first-order gene interactions. The mid-parent value ranged from 0.14 to 4.01, lowest in the Otuhia × Shitaochi cross and highest in ICGV 01276 × Shitaochi. Additive gene action was the only significant gene action in the crosses Otuhia × Shitaochi, Otuhia × Manipintar and ICGV 01276 × Shitaochi (Table 3). In contrast, both additive and additive × dominance effects were significant in the ICGV 01276 × Manipintar cross; additive × dominance was the only non-allelic interaction found to be significant in any of the crosses.
Table 4 shows broad sense and narrow sense heritability (based on mid-parent values) for groundnut rosette resistance in the four crosses. Heritability estimates varied between crosses. Broad sense heritability ranged from 76 to 95%, with the Otuhia × Manipintar cross recording the highest value. Mean broad sense heritability across the four crosses was 83%, whilst mean narrow sense heritability was 43%. The highest narrow sense heritability recorded was 67%, for the Otuhia × Manipintar cross, in sharp contrast with the 4% recorded for the Otuhia × Shitaochi cross (Table 4).
DISCUSSION
The significantly different mean GRD resistance scores detected among some of the direct and reciprocal crosses indicate that maternal effects played a major role in GRD resistance; the inheritance of resistance to the rosette virus disease can therefore not be attributed solely to nuclear gene control. This suggests that the choice of maternal parent is relevant in hybridization programmes focused on improving groundnut resistance to the disease. Generation mean analysis using the weighted regression approach was adequate to explain the genetic control of resistance to groundnut rosette disease in the four crosses involving two resistant and two susceptible parents.
Additive gene effects were of the greatest importance for resistance to GRD in the crosses Otuhia × Shitaochi, Otuhia × Manipintar and ICGV 01276 × Shitaochi. On the other hand, both additive and additive × dominance gene effects were important for the inheritance of rosette resistance in the ICGV 01276 × Manipintar cross. With respect to epistatic effects, additive × dominance was the only non-allelic interaction observed to play a significant role in the inheritance of resistance to GRD. In contrast, Nalugo et al. (2013) found the interaction of dominance by dominance, with duplicate epistatic effect, to be the only type of epistatic effect on resistance to groundnut rosette disease. This contradiction probably arises from differences in the parent genotypes used in the two studies. The presence of epistasis has important implications for any plant breeding program: because of epistasis, selection has to be delayed for several cycles of crossing until a high level of gene fixation is attained. The sign of the additive effects depends on which parent is chosen as P1 (Cukadar-Olmedo and Miller, 1997; Edwards et al., 1975; Azizi et al., 2006). The sign of the dominance effect is a function of the F1 mean value in relation to the mid-parental value and indicates which parent contributes to the dominance effect (Cukadar-Olmedo and Miller, 1997). Pure line selection at early generations would therefore be appropriate, given the large, significant contribution of additive gene effects to the inheritance of resistance to GRD, whereas selection at later generations would be appropriate for the additive × dominance type of epistasis, since it allows favorable gene combinations to reach a homozygous state before final selection (Azizi et al., 2006). Detection of GRAV antigen in the resistant plants tested agrees with the results of Bock and Nigam (1988), who observed GRAV antigen in all plants of six rosette-resistant groundnut lines that had been exposed to aphid inoculation in Malawi (RG 1, RMP 91, RMP 40, RMP 93, RRI/24 and RRI/16). Similar findings were reported by Olorunju et al. (1992), who detected GRAV in 11 of 15 symptomless plants of R × R and RMP × M1204.781 crosses. Detection of GRAV antigen in resistant genotypes can be attributed to the lower concentration of GRV (Sat-RNA) in these genotypes, resulting in no symptom expression compared with the susceptible ones (Olorunju et al., 1992). Naidu and Kimmins (2007) reported that GRV and its Sat-RNA may not always occur in the same tissue together with GRAV, which explains transmissions of GRAV alone. All resistant samples tested positive, indicating that genes conferring resistance to GRV and its Sat-RNA were successfully introgressed into those varieties, but these genes did not confer resistance to GRAV. These observations imply that symptoms alone cannot be a reliable basis for screening groundnut plants for resistance to the causal agents of the disease, as demonstrated by this study.
The high average broad sense heritability of 86% observed in this study for the trait indicates that genetic variation was high and that the trait will respond readily to selection. This finding agrees with the high realized broad sense heritability reported by Kayondo et al. (2014).
The generally high broad sense heritability estimates in all four crosses indicate that the environment in which the plants were evaluated had a small effect on the expression of resistance to GRD. The high narrow sense heritability recorded for the Otuhia × Manipintar and ICGV 01276 × Manipintar crosses suggests that the trait is largely governed by additive genes and may not require many cycles of selection.
Conclusions
Detection of GRAV antigens in the resistant samples suggests that the introgressed genes conferred resistance to GRV and its Sat-RNA, but not to GRAV. The significant difference between the direct and reciprocal crosses suggests that maternal effects contributed significantly to the inheritance of resistance to the rosette disease. This indicates that, when developing breeding populations for resistance to GRD, the choice of the maternal parent is very important. Additionally, generation mean analysis revealed that inheritance of resistance to the disease is controlled by both additive and non-additive gene action, with the additive component predominant over the non-additive. Additive × dominance was the only form of non-allelic interaction revealed in this study. Because of the significant additive gene action, selection from early generations is suggested to be effective. High heritability estimates suggest low environmental influence on resistance to GRD. The six-parameter model describes the phenotype in terms of the mid-parental value [m], additive effects [a], dominance effects [d], additive × additive [aa], additive × dominance [ad], and dominance × dominance [dd] epistatic interaction effects.
Table 1. Mean rosette resistance scores, standard error and variance in ten generations of direct and reciprocal crosses in groundnut.
Table 2. Detection of groundnut rosette assistor virus (GRAV) by ELISA in groundnut genotypes resistant and susceptible to groundnut rosette virus (GRV).
Table 3. Estimates of gene effects for groundnut rosette resistance in the Otuhia × Shitaochi cross.
Table 4. Percentage broad and narrow sense heritability of rosette virus disease resistance in groundnut crosses. | 3,928.4 | 2016-06-30T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Kinematics of Non-axially Positioned Vesicles through a Pore
We employ the finite element method to investigate the kinematics of non-axially positioned vesicles passing through a pore. To complete the coupling between the fluid flow and the vesicle membrane, we use fluid-structure interactions with the arbitrary Lagrangian-Eulerian method. Our results demonstrate that the vesicles undergo different deformations during migration: in turn, an oblique ellipse shape, a slipper shape, and an oval shape. We find that the rotation angle of non-axially positioned vesicles mainly shows an increasing trend, apart from small fluctuations induced by deformation relaxation. Moreover, when the vesicles move towards the axis of the channel, the rotation angle exhibits a decrease because of the decrease of the shear force. Rotation of axially positioned vesicles, however, hardly occurs, owing to the symmetrical shear force. Our results further indicate that rotation is faster near the pore for non-axially positioned vesicles. Our work establishes the mapping between the positions of vesicles and their deformed states, as well as the change of rotation angle and rotation velocity, which can provide helpful information for the utilization of vesicles in pharmaceutical, chemical, and physiological processes.
INTRODUCTION
Vesicles consist of an internal liquid medium protected by a thin deformable membrane, which can be deformed easily by external forces. Thus, the behavior of vesicles in external flow field is determined by a complex interplay between membrane elasticity and hydrodynamic forces. Studying the resulting rich phenomenology is fundamental for understanding the flow dynamics of this paradigmatic soft matter system, which possesses great application potential in gene therapy [1−3] and drug delivery. [1,4,5] In addition, even though biological cells have a more complex architecture, vesicles have often served as a model system to explore the mechanical properties for anucleate cells such as red blood cells. [6−9] Therefore, studying the deformation and migration of vesicles in microfluidic channels can provide valuable information on the utilization of vesicles in biomedical, chemical and physiological processes.
Much work has been carried out to elucidate the deformation and migration of vesicles under planar hyperbolic, shear and Poiseuille flow [10−16] or in constrictions of microchannels [17−24]. The dynamics of a vesicle is complicated by the nonlinear coupling between the vesicle membrane and the surrounding fluid. Previous studies have indicated that in shear flow an initially spherical vesicle takes an ellipsoidal shape and moves with an inclination angle with respect to the flow direction [11,13,14,16,25,26]. Meanwhile, the shear rate has a significant effect on vesicle dynamics. For instance, the vesicle wrinkles within a specific range of shear rates, an effect induced by compression forces acting on the membrane [13]. Moreover, the vesicle breaks when the shear flow strength exceeds its tolerance capacity [11,27]. Furthermore, nonspherical red blood cells (RBCs) and spherical vesicles move in a variety of ways at different shear rates, such as steady tank-treading, swinging, and unsteady tumbling motions [28−30]. Abkarian et al. obtained a cosinusoidal variation of the angular velocity of the inclination of vesicles and of RBCs versus the inclination angle [29]. In addition, a vesicle exhibits special dynamics in Poiseuille flow, as the fluid velocity has a parabolic distribution [15,31]: vesicles initially placed away from the tube axis exhibit dynamics similar to those in shear flow, but a decrease in the deviation distance from the axis leads to a decrease of the shear force on the vesicle, resulting in a steady nonspherical shape. Moreover, Abkarian et al. indicated that the presence of a wall significantly changes the deformations and motion of vesicles [29]. Hence, vesicle dynamics is even more intricate in constrictions of microchannels, owing to the nonlinear velocity distribution and the interaction between vesicle and wall. However, the description of the flow-induced kinematics of vesicles through a pore has not been well established, especially regarding rotation angle and rotation velocity, which is essential in various engineering and biomedical applications.
In this research, we numerically investigate non-axially positioned vesicles passing through a pore, in particular from the viewpoint of the motion mechanism, using the finite element method (FEM), where fluid-structure interactions (FSIs) are employed to complete the coupling between the fluid flow and the vesicle membrane. We aim to monitor the motions and deformations of non-axially positioned vesicles through the pore to yield the mapping between the positions of vesicles and their deformed states. Moreover, we calculate the rotated states in this process, which has a variety of applications, such as cell sorting and characterization. In the second section, we present the FEM and the corresponding details. In the third section, we illustrate the shape transitions of vesicles and the fluctuation of the rotation angle; we demonstrate that the rotation velocity reaches its maximum near the pore, while the peak of the horizontal velocity decreases with increasing deviation distance. Finally, we draw conclusions and provide an overview in the last section.
SIMULATION MODEL AND METHOD
We construct a 2D numerical model, since a non-axially positioned vesicle passing through a pore remains in a symmetry plane. A circular vesicle is placed in a microchannel (radius 15 μm, length 120 μm) with a pore (height 2 μm, length 2 μm) in the middle of the channel, as illustrated in Fig. 1. Initially, the vesicle is fixed on the left side of the pore; the flowing fluid then induces deformation and migration of the vesicle.
We consider the membrane of the vesicle to be isotropically viscoelastic, [32] with its deformation and motion described by
$$\rho_{\mathrm{solid}} \frac{\partial^2 \mathbf{u}_{\mathrm{solid}}}{\partial t^2} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{F}_V,$$
where ρ_solid denotes the density of the membrane, u_solid is the displacement vector, σ is the stress tensor, and F_V represents the force per unit volume. The Kelvin-Voigt model is used to describe the viscoelastic behavior of the membrane, in which the stress is related to the elastic strain and the strain rate by
$$\boldsymbol{\sigma} = \mathbf{C}(E, \nu) : \boldsymbol{\varepsilon} + \eta\, \dot{\boldsymbol{\varepsilon}},$$
where η = 0.022 Pa·s is the viscosity of the membrane, ε is the strain tensor of the membrane, G = E/[2(1 + ν)] represents the shear modulus, ν = 0.45 is Poisson's ratio and E = 180 Pa represents Young's modulus.
The exterior and interior liquids of the vesicle have the same density ρ_fluid = 1000 kg/m³ and viscosity μ = 0.005 Pa·s, and are treated as incompressible Newtonian fluids in laminar flow. The left-hand boundary condition is an inflow with a prescribed average inlet velocity, and the right-hand boundary is an outflow at fixed pressure, of the same order as the velocity in the relevant experiment. [33] The mass and momentum conservation equations are
$$\nabla \cdot \mathbf{U}_{\mathrm{fluid}} = 0, \qquad \rho_{\mathrm{fluid}}\left(\frac{\partial \mathbf{U}_{\mathrm{fluid}}}{\partial t} + \mathbf{U}_{\mathrm{fluid}} \cdot \nabla \mathbf{U}_{\mathrm{fluid}}\right) = -\nabla p + \mu \nabla^2 \mathbf{U}_{\mathrm{fluid}} + \mathbf{F},$$
where U_fluid denotes the velocity vector and F is the external force.
The fluid-solid interface is the boundary through which the fluid and the membrane interact: the fluid flow loads the solid, and the displacement and deformation of the solid act back on the fluid flow. These conditions read
$$\mathbf{U}_{\mathrm{fluid}} = \mathbf{v}_w, \qquad \boldsymbol{\Gamma} = \mathbf{n} \cdot \left[-p\mathbf{I} + \mu\left(\nabla \mathbf{U}_{\mathrm{fluid}} + (\nabla \mathbf{U}_{\mathrm{fluid}})^{T}\right)\right],$$
where n denotes the unit normal vector of the fluid-solid interface and v_w is the solid velocity. Γ represents the total force, i.e., the fluid load on the solid boundary and the negative of the reaction force on the fluid. The rotation and motion of the vesicle in the non-uniform shear flow are characterized by
$$\theta = \arctan\!\left(\frac{y_2 - y_1}{x_2 - x_1}\right), \qquad \omega = \frac{d\theta}{dt}, \qquad v_x = \frac{\partial u_{\mathrm{solid},x}}{\partial t},$$
where θ denotes the rotation angle of a point on the membrane around the center of mass, (x_1, y_1) and (x_2, y_2) are the coordinates of the center of mass and of the point on the membrane, respectively, ω is the angular velocity of the vesicle, and u_solid,x and v_x represent the horizontal displacement and horizontal velocity of the vesicle, respectively. In this work, the corners are smoothed to improve mesh quality, and automatic remeshing is switched on to enhance the model convergence and computing precision. The finite element method (FEM) is used to solve the governing equations on unstructured meshes, where FSIs complete the coupling between the vesicle membrane and the fluid flow. The arbitrary Lagrangian-Eulerian (ALE) method is employed to describe the movement of the meshes associated with the FSIs, in which the nodes of the computational mesh have freedom of motion, [34] that is, the new coordinates of the deformed grid attached to the solid are calculated from the moving geometry boundaries and mesh smoothing in the ALE method. The Newton iteration method is used to carry out the numerical iterations. In addition, we introduce a repulsive zone of thickness 0.1 μm on the wall boundaries, consisting of a distribution of springs, to avoid topological changes of the geometric model and singularities when the solid wall boundaries and the vesicle membrane come too close or even into contact. [35] Whenever the vesicle membrane enters the repulsive zone, it is pushed away along the normal direction of the boundary. Finally, we use a transient study and the PARDISO solver to compute the governing equations with the COMSOL Multiphysics 5.3a package.
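As a concrete illustration of the θ and ω definitions above, this Python sketch computes the rotation angle of a tracked membrane point about the center of mass and its angular velocity from simulated time series; the function name and the phase-unwrapping step are our own choices, not part of the COMSOL model.

```python
import numpy as np

def rotation_kinematics(t, com, point):
    """Rotation angle and angular velocity of a tracked membrane point
    about the vesicle's center of mass, from 2D trajectories."""
    com = np.asarray(com, float)      # (T, 2) center-of-mass trajectory
    point = np.asarray(point, float)  # (T, 2) membrane-point trajectory
    rel = point - com
    theta = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))  # continuous angle
    omega = np.gradient(theta, t)                        # d(theta)/dt
    return theta, omega
```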
RESULTS AND DISCUSSION
As shown in Fig. 2, we monitor the rotations, deformations and migrations of vesicles through the pore in Poiseuille flow to investigate the kinematics of non-axially positioned vesicles. For a vesicle with r = 1 μm placed away from the middle of the microchannel with deviation distance h = 10 μm, we find that as the vesicle gradually approaches the pore, its shape changes from a circle to an ellipse inclined with respect to the axis of the microchannel. Then, when the vesicle enters the pore, it exhibits a slipper-like shape and its location is close to the bottom of the pore (depending on the original deviation direction). After passing through the pore, the vesicle keeps an oval shape and moves towards the bottom of the channel (also depending on the original deviation direction). Throughout the crossing, the vesicle's membrane rotates around it, which can be visualized clearly via the red dot in Fig. 2. A similar phenomenon is found for the smaller original deviation distance h = 5 μm, while the extent of deformation and rotation decreases. When the vesicle is originally positioned at the middle of the microchannel, it is only deformed, without rotation, during the crossing process. In addition, a larger vesicle (r = 3 μm) exhibits similar kinematics. Moreover, the rotation of the vesicle is always clockwise for non-axially positioned vesicles (Fig. 2), which is attributed to the Poiseuille flow in the microchannel: the flow velocity is greater in the middle and decreases towards the wall, resulting in an inhomogeneous stress on the vesicle membrane. Therefore, if we place the vesicle in the lower part of the microchannel, it rotates clockwise; on the contrary, it shows counterclockwise rotation.
Theoretically, when the vesicles are positioned away from the axis, they bear an unbalanced shear force owing to the asymmetric distribution of the flow field (as demonstrated in Fig. 1), which further drives the deformation and rotation of the vesicles. Moreover, the unbalanced shear force is closely related to the deviation distance and the vesicle size, which is why the extent of deformation and rotation differs for different deviation distances and vesicle sizes.
To investigate the effect of the nonuniform flow field on the migration of vesicles, we obtain the horizontal velocity of the vesicle's center of mass as a function of horizontal position, where x = 0 μm indicates the midpoint of the pore. For a vesicle of radius r = 1 μm (Fig. 3a1), we find that the horizontal velocity far from the pore is almost constant; for non-axially positioned vesicles it is always greater after the vesicle passes through the pore than before, and it increases rapidly near the pore, reaching a peak at about x = 0 μm. The horizontal velocity of vesicles near the middle of the microchannel is always higher than that of the others. The velocity difference after passing the pore decreases with increasing vesicle radius; in particular, the horizontal velocities for vesicle radii r = 3 and 4 μm are almost the same. For vesicle radii r = 2, 3, 4 μm, we obtain three fluctuation curves similar to that for r = 1 μm. These observations are attributed to the fact that the fluid flow velocity has a stable parabolic distribution far from the pore but increases rapidly near the pore to keep the flux constant. After a non-axially positioned vesicle passes through the pore, it enters the region of higher velocity, because the deformations and rotations of the vesicle reduce the vertical pushing force from the flow. While passing the pore, the pushing force decreases with increasing vesicle size because of the symmetrical flow field distribution. An increase of the inlet velocity obviously enhances the horizontal velocity of the vesicle (Fig. 3b). The horizontal velocity of the vesicle before reaching the pore decreases when the vesicle is placed closer to the pore (Fig. 3c), because the flow velocity decreases on approaching the pore. The horizontal velocity exhibits a small fluctuation for vesicle radii r = 2, 3, 4 μm after passing through the pore; this phenomenon is induced by the deformation relaxation of the vesicles.
To quantitatively describe the rotation of the vesicle, we calculate the rotation angle of a specified point on the vesicle membrane around the vesicle's center of mass. In fact, the selection of the observation point has only a slight influence on the results, because of the deformations of the vesicle shape. Fig. 4(a1) presents the variation of the rotation angle along the horizontal position for radius r = 1 μm. The rotation angle increases monotonically as the vesicle passes through the pore for deviation distance h = 10 μm; the change of rotation angle is small when the vesicle is far from the pore, but the angle increases dramatically near the narrow pore. The rotation angle also increases monotonically through the pore for deviation distance h = 5 μm, similar to the h = 10 μm case, although the rotation angle for h = 5 μm is smaller than that for h = 10 μm. The vesicle does not rotate when it is placed on the axis of the microchannel (h = 0 μm); only a tiny fluctuation of the rotation angle is obtained. This is because the flow strength is weak far from the pore but intense near the pore, to ensure a constant flux: the vesicle bears a higher shear force near the pore. The shear rate around the vesicle decreases as the deviation distance decreases, and for deviation distance h = 0 μm a symmetrical shear force is exerted on the vesicle; it is therefore only deformed, without rotation, and the deformation induces the tiny fluctuation of the rotation angle. For vesicle radii r = 2, 3, 4 μm, we obtain a similar evolution of the rotation angle; however, at the same position the rotation angle decreases with increasing vesicle size, since the shear force induces greater deformation for a bigger vesicle, so more of the work done by the shear forces is stored as strain energy. The deformations and relaxations of the vesicles induce a decrease of the rotation angle near the pore for radii r = 2, 3, 4 μm at deviation distance h = 10 μm; decreasing the deviation distance reduces this influence. For deviation distance h = 10 μm and vesicle radius r = 4 μm, the effect of inlet velocity on the rotation angle is shown in Fig. 4(b): different inlet velocities have a noticeable effect near the pore, where an increase of inlet velocity leads to a decrease of the rotation angle. In Fig. 4(c), the vesicle is fixed at different starting horizontal positions for radius r = 4 μm and deviation distance h = 10 μm. When the vesicle starts far from the pore, the rotation angle increases continuously as the vesicle approaches the pore: the farther the vesicle starts from the pore, the longer the rotation time, so the accumulated rotation angle decreases with increasing starting position x, except for x = −10 μm, where the vesicle bears a shear force strong enough to drive the rotation angle higher than for x = −20 μm. The rotation angle of the vesicle membrane thus shows a complex variation, and we therefore calculate the rotation velocity of the vesicle membrane to characterize the angle change. In Fig. 5(a1) (r = 1 μm), the rotation velocity of the vesicle is almost constant when the vesicle is far from the pore, but shows dramatic fluctuations near the narrow pore. It increases with increasing deviation distance, reaching up to 28 rad/s near the pore for deviation distance h = 10 μm.
That is much higher than the rotation velocity for deviation distance h = 5 μm. Because the flow strength is stronger near the narrow pore than far away from it, vesicle rotation is faster near the pore; and since the flow field distribution is parabolic, the shear rate is stronger near the wall of the microchannel. When the vesicle is placed on the axis of the microchannel, the rotation velocity equals 0 away from the pore, and the rotation velocity near the pore is attributed to the deformation and relaxation of the vesicle. We obtain similar results for the larger vesicles, where the rotation velocity decreases with increasing vesicle radius: for vesicle radius r = 4 μm the rotation velocity is reduced to 15 rad/s at deviation distance h = 10 μm. The rotation velocity for vesicle radii r = 2, 3, 4 μm takes negative values for deviation distance h = 10 μm, a result of the deformation and relaxation of the vesicle. Fig. 5(b) shows the effect of inlet velocity on the rotation velocity for deviation distance h = 10 μm: the rotation velocity increases with increasing inlet velocity, up to 28 rad/s for a mean inlet velocity of 20 μm/s, because the increase of inlet velocity enhances the flow strength. Moreover, the presence of the pore creates a non-uniform flow field distribution; we therefore study the influence of the starting horizontal position on the rotation velocity for deviation distance h = 10 μm. The result shows that the maximum rotation velocity increases as the vesicle approaches the pore. Our simple theoretical model can describe the microscopic picture of the translocation dynamics of vesicles or cells, which is very helpful for understanding the relevant processes. For example, our simulations find that when the initial position of the same vesicle differs, the dynamic behavior exhibits great differences; the strain energy also differs greatly, which shows that, to improve the accuracy of cell selection, the initial positions of the cells are also very important.
CONCLUSIONS
In this work, using the finite element method, we numerically investigate the motion mechanism of non-axially positioned vesicles passing through a pore. Fluid-structure interactions are employed to complete the coupling between the vesicle membrane and the fluid flow, with the arbitrary Lagrangian-Eulerian method. We monitor the motions, rotations and deformations of vesicles through the pore to yield the mapping between the positions and rotations of the vesicle and its deformed states. Our results demonstrate that the non-axially positioned vesicle shows a similar shape change from ellipse to slipper shape, and from slipper to oval shape. In addition, the horizontal velocity decreases with increasing deviation distance. Furthermore, we find that the rotation angle of a non-axially positioned vesicle varies in a complex way when passing through the pore: it increases linearly far away from the pore, but increases rapidly near the pore, and it increases with increasing deviation distance. Our results further indicate that the rotation velocity is constant far from the pore but increases drastically near the pore, where an increase of deviation distance enhances the rotation velocity. These behaviors are attributed to the parabolic distribution of the flow field and the presence of the pore. Our work not only establishes the mapping between the positions of the vesicles and their deformed states, but also displays the change of horizontal velocity, rotation angle and rotation velocity. In this context, we expect that extensive studies of nonspherical vesicle structures or of the non-Newtonian and compressible behaviors of fluids would be of particular interest, [10,36,37] which can provide an extensive understanding of the utilization of vesicles in pharmaceutical, physiological, and chemical processes. | 4,657.8 | 2019-12-31T00:00:00.000 | [
"Physics",
"Engineering"
] |
A Hierarchical Convolutional Neural Network (CNN)-Based Ship Target Detection Method in Spaceborne SAR Imagery
The ghost phenomenon in synthetic aperture radar (SAR) imaging is primarily caused by azimuth or range ambiguities, which cause difficulties in SAR target detection applications. To mitigate this influence, we propose a ship target detection method for spaceborne SAR imagery using a hierarchical convolutional neural network (H-CNN). Based on the nature of ghost replicas and the typical target classes, a two-stage CNN model is built to detect ship targets against sea clutter and ghosts. First, regions of interest (ROIs) are extracted from a large imaged scene during the coarse-detection stage. Unwanted ghost replicas represent the major residual interference source in the ROIs; therefore, a second CNN stage is executed during the fine-detection stage. Finally, comparative experiments and analyses, using Sentinel-1 SAR data and various assessment criteria, were conducted to validate H-CNN. Our results show that the proposed method outperforms the conventional constant false-alarm rate technique and other CNN-based models.
Introduction
Synthetic aperture radar (SAR) is an active microwave sensor whose resolution, both in range and azimuth, can be improved via the pulse compression technique and the synthetic aperture principle to obtain high-resolution remote sensing images. Moreover, another advantage of SAR imaging is its ability to operate on an all-weather, all-day-and-night basis [1]. Its application has been of interest in a variety of fields [2,3]; e.g., SAR-based ocean remote sensing is widely used for environmental monitoring, search and rescue, target recognition, etc. [4]. Spaceborne SAR can also be operated over long periods in wide-area and real-time observations. In this context, it has become a fundamental system for ship target recognition [5-8].
Typical ship target recognition using SAR imagery involves land-sea segmentation, target detection, target recognition, etc. In a large SAR image, target detection can be based on the feature differences between targets and backgrounds. In this process, a minimum region containing the whole target is confirmed within a target chip [9], and the remaining part is considered background. Obvious feature differences normally exist between target and background regions, e.g., in grayscale, multi-resolution, polarization, and phase characteristics, which form the basis for the design of many target detection methods. Hu et al. analyzed multidimensional SAR information using a linear time-frequency (TF) decomposition approach [10]. Yuan et al. extracted the gradient ratio pattern for each pixel based on Weber's law, and used the local gradient ratio pattern histogram (LGRPH) for SAR target recognition [11]. In addition, the conventional constant false alarm rate (CFAR) technique is a typical detection method based on the grayscale feature; however, complicated and cluttered backgrounds severely affect CFAR detection performance [12].
In recent years, ship target detection based on deep learning (DL) has been widely studied [13,14], typically with convolutional neural network (CNN) models [15]. Liu et al. [16] presented a ship detection method, the sea-land segmentation-based convolutional neural network (SLS-CNN), which combines an SLS-CNN detector, saliency computation, and corner features. Furthermore, Zhao et al. [17] proposed a spaceborne SAR ship detection algorithm based on a low complexity CNN. Other well-known CNN-based target detection methods include the faster region-CNN (Faster R-CNN) and the you-only-look-once (YOLO) family. For example, Li et al. [18,19] improved detection performance with Faster R-CNN and provided a densely connected multi-scale neural network [19]; this method addresses multi-scale and multi-scene problems in SAR ship detection by densely connecting different feature-map layers, i.e., fusing top-to-down feature-map connections rather than relying on single feature maps. The R-CNN method has also been used for target recognition in large-scene SAR images [20]. Furthermore, Hamza and Cai used YOLOv2 for ship detection [21], which introduced a multitude of enhancements over the original YOLO model.
However, these methods may no longer be effective when ghost replicas exist in the imaged scene. The ghost phenomenon is an intrinsic effect of SAR ambiguity, both in azimuth and in range [22,23]. Range ambiguity occurs when different backscattered echoes, one related to a transmitted pulse and the other due to a previous transmission, overlap temporarily during the receiving operation [24]. Azimuth ambiguity, on the other hand, is caused by aliasing of each target's Doppler phase history: Doppler frequencies higher than the pulse repetition frequency (PRF) lead to azimuth ambiguity [25]. This phenomenon is particularly relevant for high-reflectivity targets, which appear in SAR images as ghosts in low-reflectivity areas [26]. Moreover, by the ghost-generating principle, a ghost is similar to its real target, rendering discrimination difficult. Azimuth ambiguity is prominent in spaceborne SAR because of the fast platform velocity and wide azimuth Doppler bandwidth.
Based on the ghost-generating principle and characteristics, we propose a hierarchical CNN-based ship target detection method for spaceborne SAR imagery, H-CNN. The hierarchical processing includes two stages: coarse detection and fine detection. First, regions of interest (ROIs) are extracted from a large imaged scene in the coarse-detection stage. Although most land- and sea-background clutter is removed, ghost replicas remain in the ROIs; the fine-detection stage is therefore introduced to further refine target detection against ghost replicas. In the experiments, H-CNN was trained and tested using Sentinel-1 SAR data [27]. In the following sections, we first discuss the H-CNN parameter configuration for optimal detection results. Then, the feature extraction quality is analyzed: detailed texture and abstract semantic information are extracted by different convolutional layer operations. Finally, we conduct detection experiments to validate H-CNN and compare it with the conventional CFAR technique and CNN models.
Ghost Phenomenon in Spaceborne SAR
Spaceborne SAR is SAR operated from a space platform. It has some characteristic differences compared with airborne SAR [28-31]; e.g., spaceborne images normally have a large data size due to the large antenna beam irradiation range.
Ghost replicas are the image representation of SAR ambiguity in the range or azimuth direction. When the PRF is too high, successive pulses may be aliased within one pulse period [32]; the distance between a target and its range ambiguity ghost can then be calculated by Equation (1) [33,34], where n is the ambiguity index, indicating the spatial location of the ghost replicas, λ is the radar wavelength, f_PRF is the PRF, f_DR is the Doppler rate, and f_DC is the Doppler centroid.
If the PRF is excessively low, the part of the Doppler spectrum above the PRF is folded into the azimuth spectrum, resulting in azimuth ambiguity. Figure 1 illustrates azimuth ambiguity formation with the azimuth antenna pattern and the PRF. B_D is the Doppler bandwidth, with B_D ≈ 2V/L_a, where V is the SAR platform velocity and L_a is the antenna size in the azimuth direction. When B_D is greater than the PRF, as shown in Figure 1, undersampling causes aliasing in the azimuth spectrum; the blue and red dashed curves denote the first left and right replicas due to the sampling, respectively. The distance between an azimuth ambiguity ghost and its target can be calculated by Equation (2), Δx = n λ R' f_PRF / (2V) [33,34], where R' is the slant range and V is the SAR platform velocity. Moreover, when ships move on a smooth sea surface, bright targets against a dark background are present in the SAR image; in such cases the ghosts are clearly visible and may impose severe difficulties on ship target detection.
According to the spaceborne SAR parameters, the theoretical range and azimuth ambiguity distances can be estimated by Equations (1) and (2), respectively. Taking Sentinel-1 SAR data as an example, we analyze the azimuth ambiguity in several SAR images. The imaging geometry is shown in Figure 2a; although Sentinel-1 provides four imaging modes, only the interferometric wide (IW) swath mode is shown.
Moreover, the Sentinel-1 SAR system parameters play a significant role in the imaging; they include the platform speed, the altitude of the satellite above the ground R, the elevation angle β, the PRF, etc. Table 1a,b list the Sentinel-1 satellite SAR system parameters and an example ship's parameters, respectively. Three different PRFs exist in one group of Sentinel-1 data. Furthermore, owing to the characteristics of spaceborne SAR, the slant range is influenced by the Earth's curvature and the distance from the ground to the satellite; their relationship is shown in Figure 2b. The slant range can therefore be calculated from the satellite's altitude above the ground, the radius of the Earth R_earth, the elevation angle, and the incidence angle θ.
The theoretical azimuth ambiguity distance can be obtained using Equation (2). For n = 1, the results in the cases of the three PRFs are approximately 5031.4 m, 4254.9 m, and 4940.6 m, respectively. The right graph of Figure 3 depicts the SAR image of the example ship and the corresponding ghost replicas. We then extracted the azimuth direction sequence in one fixed range cell and expressed it in decibels to reduce the dynamic range of the azimuth amplitudes; the resulting sequence is shown in the left graph of Figure 3. The distances between the two ghosts and their target are estimated to be approximately 4630 m and 4970 m, respectively, close to the theoretical values given above.
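As a rough numerical cross-check of these distances, the sketch below evaluates the first-order azimuth ghost offset Δx = n λ R' f_PRF / (2V); the wavelength, slant range, platform velocity, and the three PRF values are representative assumptions, not the exact Table 1 entries.

```python
# Sketch: first-order azimuth ghost offset, using the form of Equation (2),
# delta_x = n * lambda * R' * f_PRF / (2 * V). All parameter values below are
# representative assumptions, not the exact Table 1 entries.

WAVELENGTH = 0.05546   # C-band radar wavelength (m)
R_SLANT = 8.0e5        # slant range R' (m), assumed ~800 km for IW mode
V_PLATFORM = 7590.0    # SAR platform velocity (m/s)

def azimuth_ghost_offset(f_prf: float, n: int = 1) -> float:
    """Offset (m) of the n-th azimuth ambiguity ghost from its target."""
    return n * WAVELENGTH * R_SLANT * f_prf / (2.0 * V_PLATFORM)

for f_prf in (1717.13, 1451.93, 1685.82):   # assumed IW sub-swath PRFs (Hz)
    print(f"PRF = {f_prf:7.2f} Hz -> offset = {azimuth_ghost_offset(f_prf):6.0f} m")
```

With these assumed values the offsets come out near 5 km, in line with the distances quoted above.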
Discrimination is difficult because some traditional characteristics of a target and its corresponding ghost are similar, e.g., length-width ratio, area, and shape complexity [35-37]. A dedicated discrimination step between target and ghost is therefore needed to eliminate the negative effects of ghosts on detection performance.
Property Analyses of Ship Target and Ghost Replica
Some traditional characteristics are similar between a target and its corresponding ghost, e.g., length-width ratio, area, and shape complexity, so it is necessary to analyze their differences. The proposed method is designed based on amplitude information in the spatial dimension; thus, we examine the amplitude statistics of target chips and their ghost replicas. The amplitude distribution difference between target and ghost indicates their degree of distinction: a more obvious difference makes the discrimination easier. First, one-to-one pairs of target chips and ghost replicas, 100 groups in total, were collected from Sentinel-1 SAR data. Amplitudes were normalized for convenient comparison. For the ghost replicas and target chips, the ratio of the number of pixels in each amplitude range to the overall pixel number was calculated, as shown in Figure 4. We also enlarged the local distribution in the normalized amplitude range from 0 to 0.02, which demonstrates that most pixel amplitudes lie in this region. The two distributions have a similar shape: they first increase and then decline. For normalized amplitudes above 0.02, the difference between the two distributions decreases and both values approach zero.
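The per-bin pixel fractions behind a Figure-4-style comparison can be computed as in the following sketch; the chip arrays are random stand-ins, since the actual Sentinel-1 chips are not reproduced here.

```python
import numpy as np

# Sketch of the Figure 4 statistic: per-bin fraction of normalized pixel
# amplitudes for target chips vs. ghost replicas. The chip arrays are
# hypothetical stand-ins of shape (n_chips, 40, 40).

def amplitude_distribution(chips: np.ndarray, n_bins: int = 100) -> np.ndarray:
    amp = chips.astype(float).ravel()
    amp = amp / amp.max()                      # amplitude normalization to [0, 1]
    counts, _ = np.histogram(amp, bins=n_bins, range=(0.0, 1.0))
    return counts / amp.size                   # fraction of pixels per bin

rng = np.random.default_rng(0)
target_chips = rng.rayleigh(0.02, size=(100, 40, 40))   # stand-in data
ghost_chips = rng.rayleigh(0.01, size=(100, 40, 40))    # stand-in data
p_target = amplitude_distribution(target_chips)
p_ghost = amplitude_distribution(ghost_chips)
print(p_target[:3], p_ghost[:3])
```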
Architecture of the H-CNN Model
A traditional CNN consists of convolutional, pooling, and fully connected layers. The convolutional layer is used for feature extraction. Each convolutional layer contains many kernels; every kernel element carries a weight, and each kernel has a bias. Each neuron in the convolutional layer is connected to a neighboring region of the front layer, whose size is decided by the kernel size. In the convolutional operation, kernels slide regularly over the whole feature map, and feature extraction is realized as

Z^{l+1}_{i,j} = f( Σ_{m,n} w^l_{m,n} · Z^l_{i+m, j+n} + b ),

where Z^l_{i,j} and Z^{l+1}_{i,j} are the input and output at pixel (i, j) of the l-th convolutional layer, respectively (both are called feature maps), w^l and b are the weight and bias of the convolutional kernel in layer l, and f(·) is an activation function, usually designed as a sigmoid or the rectified linear unit (ReLU) [38]. In this paper, ReLU is selected, defined by f(x) = max(0, x). After convolutional feature extraction, the feature maps are passed to the next pooling layer. Pooling selects a few points to represent a whole region of the feature map; classic pooling methods include max pooling, which we applied in this paper, and mean pooling.
Finally, the feature maps are fully connected in the last layer, which is similar to the hidden layer of a traditional feedforward neural network. In this layer, the multi-dimensional feature-map structures are reshaped.
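As a toy illustration of the convolution-plus-ReLU operation described above, the sketch below applies a single kernel to one feature map; the shapes, weights, and bias are arbitrary.

```python
import numpy as np

# Toy illustration of the convolutional-layer formula: one kernel slides over a
# feature map, followed by ReLU. Shapes and values are arbitrary examples.

def conv2d_relu(z: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    kh, kw = w.shape
    out_h, out_w = z.shape[0] - kh + 1, z.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(w * z[i:i + kh, j:j + kw]) + b
    return np.maximum(out, 0.0)                # ReLU: f(x) = max(0, x)

z = np.arange(36, dtype=float).reshape(6, 6)   # input feature map
w = np.array([[1.0, 0.0], [0.0, -1.0]])        # 2x2 kernel weights
print(conv2d_relu(z, w, b=0.5).shape)          # (5, 5) output feature map
```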
A traditional CNN is a supervised network. It is usually optimized by the well-known stochastic gradient descent (SGD) algorithm [39,40], an improved version of the batch gradient descent (BGD) method. In BGD, all samples are used in every iteration, which makes the updates slow. To solve this problem, SGD stochastically selects a group of samples to determine the gradient direction in each iteration, and a new stochastically selected group is used for the parameter update in the next iteration. When the loss function reaches its minimum value and remains stable, all parameters, i.e., the weights and biases, are fixed.
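A minimal mini-batch SGD loop of the kind described above might look as follows; the linear model, learning rate, and batch size are arbitrary stand-ins for the actual CNN training.

```python
import numpy as np

# Minimal mini-batch SGD sketch on a linear least-squares problem, illustrating
# the update scheme described above. Model and hyperparameters are stand-ins.

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 8))
y = x @ np.arange(1.0, 9.0) + rng.normal(scale=0.1, size=1000)

w = np.zeros(8)
lr, batch = 0.05, 32
for step in range(500):
    idx = rng.choice(len(x), size=batch, replace=False)  # stochastic sample group
    grad = 2.0 * x[idx].T @ (x[idx] @ w - y[idx]) / batch
    w -= lr * grad                                       # parameter update
print(np.round(w, 2))   # approaches [1, 2, ..., 8]
```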
In this paper, we propose the H-CNN method for ship target detection in spaceborne SAR imagery, with a hierarchical training pattern. The first, coarse-detection stage of H-CNN discriminates ROIs from background; the ship targets are then separated from the interference of ghost replicas during the fine-detection stage. In the test phase, the whole SAR image is cut into several chips and processed by the coarse- and fine-detection stages in turn, extracting the ship targets from the whole scene. All SAR chips are input into the coarse-detection stage, and the chips that differ from background are extracted. To further mitigate ghost interference, the chips extracted after the coarse-detection stage are discriminated during the fine-detection stage for the ship target detection. Note that large quantities of sea chips are always present, so the coarse detection eases the computational burden of the following step by removing most background chips, while the fine-detection stage focuses on eliminating ghost interference. However, since the sliding step is smaller than the chip size, overlapping detections may occur; we use non-maximum suppression (NMS) [41] to post-process the coarse-detection results. The architecture of the H-CNN model is shown in Figure 5.
During the coarse-detection stage, the network is trained with target and background samples; this part of the network focuses on ROI extraction from a large imaged scene. Since unwanted ghost replicas are the major interference sources remaining in the ROIs, the coarse-detection outputs are fed into the fine-detection network, which facilitates the discrimination between real targets and ghosts; the fine-detection network is trained with target and ghost samples. NMS is applied to all ROIs extracted during the coarse-detection stage. Based on this process, ship target detection in spaceborne SAR imagery is realized.
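The two-stage inference flow can be sketched as below; `coarse_net`, `fine_net`, the chip size, the stride, and the thresholds are hypothetical stand-ins for the trained stages and their settings, and the NMS shown is a simple greedy variant rather than necessarily the one of [41].

```python
import numpy as np

# Minimal sketch of the two-stage H-CNN inference flow. `coarse_net`
# (target vs. background) and `fine_net` (target vs. ghost) are hypothetical
# stand-ins for the trained CNN stages: callables returning a score in [0, 1].

def slide_chips(image, chip=40, step=20):
    h, w = image.shape
    for i in range(0, h - chip + 1, step):
        for j in range(0, w - chip + 1, step):
            yield i, j, image[i:i + chip, j:j + chip]

def nms(dets, min_sep=40):
    """Greedy non-maximum suppression on (score, i, j) detections."""
    kept = []
    for s, i, j in sorted(dets, reverse=True):
        if all(abs(i - ki) >= min_sep or abs(j - kj) >= min_sep
               for _, ki, kj in kept):
            kept.append((s, i, j))
    return kept

def detect_ships(image, coarse_net, fine_net, chip=40, thr=0.5):
    dets = []
    for i, j, patch in slide_chips(image, chip):
        score = coarse_net(patch)
        if score > thr:                         # stage 1: reject sea/land chips
            dets.append((score, i, j))
    dets = nms(dets)                            # merge overlapping ROIs
    return [(i, j) for _, i, j in dets          # stage 2: reject ghost replicas
            if fine_net(image[i:i + chip, j:j + chip]) > thr]

# Toy usage with threshold "networks" on mean chip brightness.
img = np.zeros((200, 200)); img[60:80, 100:120] = 5.0
ships = detect_ships(img, coarse_net=lambda p: float(p.mean() > 1.0),
                     fine_net=lambda p: float(p.mean() > 1.0))
print(ships)
```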
Dataset
In order to verify the effectiveness of the proposed method, we applied it to Sentinel-1 SAR data [27]. Sentinel-1 is an Earth observation mission of the European Space Agency's Copernicus programme. It consists of two satellites, Sentinel-1A and Sentinel-1B, carrying C-band SAR, which provides continuous imaging in all-weather, day-and-night conditions. A series of operational services are based on Sentinel-1 SAR data, including mapping of arctic and daily sea ice, marine environment monitoring, ground motion risk monitoring, forest mapping, etc. In this study, we collected data in the IW mode, as shown in Figure 2; its resolution is 5 m × 20 m, the imaging swath width is 250 km, and the orbit altitude is 693 km.
To further ensure the reliability of the training samples, each ship in the target sample set was verified using the Australian Maritime Safety Authority's (AMSA) information [42]. The ship samples were collected in three Australian regions (North West, Great Australian Bight, and Bass Strait), indicated by white rectangles in Figure 6. To guarantee a high diversity of ship types, we categorized the ships using information provided on the AMSA website; six ship types are confirmed in the SAR data, i.e., cargo, tanker, dredging ship, fishing ship, tug, and other. Some samples of SAR target images and their corresponding optical images are shown in Figure 7. In most cases, cargo ships and tankers are larger than the other ships, and thus their structures are obvious in the SAR images; the dredging ship, by contrast, is small, as indicated by both the SAR and optical images.
Ghost samples are extracted based on the corresponding target positions. Figure 8 shows a SAR image used in the test, where target chips and ghost replicas are highlighted by blue and yellow squares, respectively. To present the correspondence, target chip i is labeled T-i and ghost replica i is labeled G-i. Twenty-three ship targets and four ghost replicas can be identified in this image. On this basis, target chips, ghost replicas, and background chips were collected, 350 samples of 40 pixels × 40 pixels each, and used for H-CNN training. An additional 149 Sentinel-1 SAR images of 670 pixels × 643 pixels, containing altogether 480 ship chips and 304 ghost replicas, were used to test the performance of the proposed network. To verify the effectiveness of H-CNN, the training samples and test SAR images were acquired from different Sentinel-1 SAR data. The ship targets were confirmed using the maritime information on the AMSA website. Furthermore, approximate ghost positions were obtained from spaceborne SAR imaging theory, the Sentinel-1 system parameters, and the maritime information; the ghost confirmation method is described in Section 2.
Discussion of Parameter Configuration of H-CNN
The key point of the proposed method is to mitigate the influence of ghost replicas on the detection performance of CNN models. In particular, the hyperparameters of the convolutional kernels play a key role in H-CNN performance. In this part, we studied H-CNN configurations with a variety of kernel hyperparameters to obtain the optimal detection performance. Details of the kernel hyperparameters involved in H-CNN are shown in Table 2. To conveniently present the different parameter configurations, we use the brief notation H-i-j for a network with structure case i in the coarse-detection stage and case j in the fine-detection stage. We discuss the influence of the kernel numbers and sizes in the coarse- and fine-detection stages on detection performance. Moreover, in each layer, the structure is given in the form A@B × B-Maxpool C × C, where A is the kernel number, B × B the kernel size, and max pooling is operated in each C × C region. The different networks were trained on the same samples. We first changed only the kernel numbers of the coarse-detection stage with all other parameters fixed, as shown in Table 2a. From the detection results, we confirmed the optimal kernel numbers and, using the network comparisons shown in Table 2b, the kernel sizes of the coarse-detection stage. Similarly, the kernel numbers and sizes of the fine-detection stage were confirmed using the network comparisons shown in Table 2c,d. Detection performance was evaluated with four typical measures: figure of merit (FoM), precision, recall, and F-measure [19,43]. They are defined as

FoM = TP / (TP + FP + FN), precision = TP / (TP + FP), recall = TP / (TP + FN), F-measure = 2 · precision · recall / (precision + recall),

where TP is the number of correctly detected targets, FP the number of falsely detected targets, and FN the number of undetected targets.

We first compared cases H-1-1 to H-6-1. According to Table 2a, these differ only in the stage-1 kernel numbers, while the other parameters are identical; hence, this comparison confirms the kernel numbers of the coarse-detection stage. The results in terms of FoM, precision, and F-measure show that the optimal configuration is H-1-1; in recall, H-1-1 is the second best, but very close to H-5-1, which has the highest value in Figure 9a. This illustrates that, compared with 4, 6, 8, 9, and 12, 3 is the best kernel-number choice for the coarse-detection stage; the kernel number of the coarse-detection stage was therefore set to 3 for the following comparison experiments. According to Table 2b, only the kernel sizes of the coarse-detection stage differ among H-1-1, H-7-1, H-8-1, H-9-1, H-10-1, and H-11-1, so this parameter can be confirmed from the detection results shown in Figure 9b. The results of H-10-1 are slightly superior, i.e., the best kernel sizes in the two layers of the coarse-detection stage are 11 × 11 and 8 × 8.
On this basis, we further compared the results using different kernel numbers during the fine-detection stage, as shown in Table 2c. The results of H-10-1 are the best, indicating 3 as the kernel number for the fine-detection stage.
Finally, we discuss the influence of the kernel size during the fine-detection stage on detection performance. According to Figure 9d, H-10-1 shows the best result, indicating that the kernel size in both layers of the fine-detection stage should be designed as 9 × 9.
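Given TP/FP/FN counts, the four assessment criteria defined above can be computed as in the following sketch; the counts used are purely illustrative.

```python
# Sketch of the four assessment criteria from TP/FP/FN counts, using the
# definitions given above (FoM = TP / (TP + FP + FN)).

def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "FoM": tp / (tp + fp + fn),
        "precision": precision,
        "recall": recall,
        "F-measure": 2 * precision * recall / (precision + recall),
    }

print(detection_metrics(tp=450, fp=40, fn=30))  # illustrative counts
```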
Analyses of Feature Extraction by H-CNN
To analyze the feature extraction quality, we first observed the feature maps of target, background, and ghost chips. Figure 10 shows some feature-map examples of H-CNN during the test. The target and ghost chips have clear boundaries compared to the background chip, so the first-layer feature maps in both the coarse- and fine-detection stages show obvious texture for the target and ghost chips. The feature maps in the last layer, by contrast, carry abstract semantic information. We found that the last-layer feature maps of target and ghost are hardly distinguishable in the coarse-detection stage, whereas the feature-map differences between target and background are obvious, making target-background discrimination easy at that stage. In the fine-detection stage, the last-layer feature-map differences between target and ghost become more obvious, reducing their discrimination difficulty.

We then investigated the feature extraction quality for target chips and ghost replicas: if the extracted features differ significantly, the degree of distinction between the two chip types improves. The chips introduced in Section 3 were processed by H-CNN and their feature maps collected; the amplitude distributions were obtained by the same method as before and are shown in Figure 11, with the two regions where most amplitudes concentrate, 0-0.02 and 0.96-1, enlarged. The two distributions are dissimilar, especially in these two enlarged parts. Compared with Figure 4, the distribution differences in Figure 11 are obvious, indicating that the features extracted by H-CNN are more distinguishable than the original chips.

To further quantitatively analyze the feature extraction quality, we introduce linear discriminant analysis (LDA) theory. It is well known that LDA aims to maximize the ratio of the between-class to the within-class scatter matrix. The two scatter matrices are defined as [44]

S_w = Σ_{i=1}^{c} P(ω_i) E{(X_i − M_i)(X_i − M_i)^T}, S_b = Σ_{i=1}^{c} P(ω_i) (M_i − M_0)(M_i − M_0)^T,

where S_w is the within-class scatter matrix, S_b the between-class scatter matrix, X_i the samples of class ω_i, E{·} the mean-value operator, c the number of classes, P(ω_i) the ratio of the number of ω_i samples to the total number of samples, M_i the mean matrix of the ω_i samples, and M_0 the mean matrix of all samples. Two criteria based on these matrices, J_1 and J_2, evaluate the feature extraction quality, where trace(·) denotes the matrix trace. According to LDA theory, the larger J_1 and J_2 are, the stronger the distinguishability. Taking the fine-detection stage as an example, we calculated J_1 and J_2 of the feature maps in the two layers, as shown in Table 3. The criteria values of the L2 layer are clearly larger than those of the L1 layer; in other words, the features in the L2 layer are more distinguishable than those in the L1 layer.
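A sketch of the scatter-matrix computation is given below. Since the exact J_1 and J_2 of [44] are not reproduced in the text, two common trace-based criteria, trace(S_w⁻¹S_b) and trace(S_b)/trace(S_w), are assumed for illustration; the feature vectors are random stand-ins.

```python
import numpy as np

# Sketch of the LDA scatter matrices. The J_1/J_2 forms below are assumed
# common trace-based choices, not necessarily the exact criteria of [44].

def scatter_matrices(features):
    """features[i]: (n_i, d) array of feature vectors for class i."""
    all_x = np.vstack(features)
    m0 = all_x.mean(axis=0)
    n_total, d = all_x.shape
    sw = np.zeros((d, d))
    sb = np.zeros((d, d))
    for x in features:
        p = len(x) / n_total                  # class prior P(w_i)
        mi = x.mean(axis=0)
        xc = x - mi
        sw += p * (xc.T @ xc) / len(x)        # within-class scatter
        diff = (mi - m0)[:, None]
        sb += p * (diff @ diff.T)             # between-class scatter
    return sw, sb

rng = np.random.default_rng(2)
target_feats = rng.normal(0.0, 1.0, size=(100, 16))
ghost_feats = rng.normal(0.8, 1.0, size=(100, 16))
sw, sb = scatter_matrices([target_feats, ghost_feats])
j1 = np.trace(np.linalg.solve(sw, sb))        # assumed form of J_1
j2 = np.trace(sb) / np.trace(sw)              # assumed form of J_2
print(round(j1, 3), round(j2, 3))
```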
Detection Result Comparison
Comparative analyses of CFAR, traditional CNN, low complexity CNN, and the proposed network are presented to validate H-CNN. For the CFAR method, we used cell-averaging CFAR (CA-CFAR) to detect the above SAR images [45], with the false-alarm rate set to 1 × 10⁻³. The traditional CNN model consists of two convolutional layers, two pooling layers, and one fully connected layer; its parameter configuration was confirmed by comparing the detection results of multiple networks. The low complexity CNN was introduced in [17]. The H-CNN parameter configuration was set to the aforementioned H-10-1.
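For reference, a minimal one-dimensional CA-CFAR of the kind used as the baseline might be sketched as follows; the window sizes are assumptions, and the threshold factor assumes exponentially distributed (square-law-detected) clutter.

```python
import numpy as np

# Minimal 1-D cell-averaging CFAR (CA-CFAR) sketch: each cell is compared with
# a threshold scaled from the mean of surrounding training cells. Window sizes
# are assumptions; the alpha formula assumes exponential clutter statistics.

def ca_cfar_1d(x: np.ndarray, n_train: int = 16, n_guard: int = 2,
               pfa: float = 1e-3) -> np.ndarray:
    n = len(x)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # CA-CFAR scaling factor
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2
    for i in range(half + n_guard, n - half - n_guard):
        lead = x[i - n_guard - half:i - n_guard]         # leading training cells
        lag = x[i + n_guard + 1:i + n_guard + 1 + half]  # lagging training cells
        noise = (lead.sum() + lag.sum()) / n_train
        detections[i] = x[i] > alpha * noise
    return detections

rng = np.random.default_rng(3)
signal = rng.exponential(1.0, size=512)
signal[200] += 40.0                                      # injected bright target
print(np.flatnonzero(ca_cfar_1d(signal)))
```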
In order to intuitively observe the detection results of the different methods, one example is provided in Figure 12. It shows the detection results for the SAR image of Figure 8, with targets and ghosts labeled. All targets were detected, but the performance on ghosts differs, so we focus on the detection results for the ghost replicas. G-2 was accurately identified as a ghost replica by all four methods, whereas the other ghosts may be falsely detected by CFAR, traditional CNN, or low complexity CNN: for example, G-4 was identified as a target by CFAR, and G-7 by both CFAR and traditional CNN. Only H-CNN discriminated G-19 as a ghost replica. In other words, H-CNN can resist the interference of ghost replicas, and its detection performance outperforms the other detection methods.
Furthermore, we calculated statistical results to accurately illustrate the detection performance. The detection results of CFAR, traditional CNN, low complexity CNN, and the proposed H-CNN are presented in Table 4. All the test data, consisting of 149 Sentinel-1 SAR images with 480 ship targets and 304 ghost replicas, are used here. The superiority of the proposed H-CNN is evident.
More specifically, the proposed method achieves more than 13.83% and 4.57% improvement over the CFAR technique and the traditional CNN model, respectively. In addition, compared with the low complexity CNN, H-CNN achieves increases of 3.51%, 3.47%, and 2.54% in FoM, recall, and F-measure, respectively.
Conclusions
A ship target detection method based on a hierarchical CNN in spaceborne SAR imagery was proposed in this paper. Its major contributions are twofold. First, a hierarchical pattern was designed so that each stage can attend to ship target detection against a different interference source, i.e., sea clutter and ghost replicas. Second, we adopted statistical analyses of the last-layer feature maps, which may facilitate the understanding of the abstract features of ship targets and ghosts in spaceborne SAR images. Specifically, in the coarse-detection stage of H-CNN, ROIs are extracted from whole images; ship targets are then detected against ghosts in the fine-detection stage. Based on the characteristics of spaceborne SAR, we analyzed the ghost-generating principle, which conforms to the actual data. The H-CNN design is based on the amplitude information of SAR image chips in the spatial dimension, and the amplitude distribution differences between target and ghost were discussed: the amplitude proportion differences were obvious, but the envelope forms of the two distributions were similar.
Figure 1 .
Figure 1. Illustration of azimuth ambiguity formation with the azimuth antenna pattern and PRF.
Figure 2 .
Figure 2. Geometry of the Sentinel-1 SAR satellite operation in interferometric wide (IW) mode: (a) imaging geometry of the Sentinel-1 SAR system; (b) interpretation of Sentinel-1 satellite in orbit.
Figure 3 .
Figure 3. Illustration of a ship target and corresponding ghost replicas in a Sentinel-1 SAR image.
Figure 4 .
Figure 4. Comparison of statistical amplitudes of ship target and ghost replica pixels in Sentinel-1 SAR images.
Figure 5 .
Figure 5. Architecture of the hierarchical convolutional neural network (H-CNN) model.
Figure 6 .
Figure 6. Location illustration: North West, Great Australian Bight, and Bass Strait of Australia, where Sentinel-1 SAR images are collected for the experiments.
Figure 7 .
Figure 7. Samples of SAR and optical image chips of various types of ship targets.
Figure 8 .
Figure 8. Ship targets and ghost replicas in a SAR image sample in test. Twenty-three ships and four azimuth ghosts are indicated by blue and yellow squares, respectively.
Figure 9 .
Figure 9. H-CNN detection performance comparison with respect to different kernel hyperparameters in terms of various assessment criteria: (a) number of kernels in the coarse-detection stage; (b) kernel size in the coarse-detection stage; (c) number of kernels in the fine-detection stage; and (d) kernel size in the fine-detection stage.
Figure 10 .
Figure 10. Feature map examples in two stages of H-CNN for ship target, ghost replica, and sea clutter background in test.
Figure 11 .
Figure 11. Comparison of statistical amplitudes of ship target and ghost feature maps.
Table 1 .
Some parameters of the Sentinel-1 satellite SAR system and a ship target in an imaged scene:
Table 2 .
Configurations of H-CNN with different network hyperparameters with respect to convolutional kernels: (a) number of kernels in the coarse-detection stage; (b) kernel size in the coarse-detection stage; (c) number of kernels in the fine-detection stage; and (d) kernel size in the fine-detection stage.
Table 3 .
Quantitative evaluation of feature extraction for fine-detection in different layers.
Table 4 .
Comparison of statistical detection results in terms of various assessment criteria.
| 10,535 | 2019-03-14T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Deuteron form factor measurements at low momentum transfers
A precise measurement of the elastic electron-deuteron scattering cross section at four-momentum transfers of 0.24 fm−1 ≤ Q ≤ 2.7 fm−1 has been performed at the Mainz Microtron. In this paper we describe the utilized experimental setup and the necessary analysis procedure to precisely determine the deuteron charge form factor from these data. Finally, the deuteron charge radius rd can be extracted from an extrapolation of that form factor to Q2 = 0.
Introduction
Electromagnetic form factors of light nuclei can be measured in elastic electron scattering. The results can be compared to predictions obtained by different theoretical approaches (for instance, see [1]) and therefore offer a unique opportunity to test our understanding of nuclear dynamics. In addition, charge radii can be determined from precise measurements of the charge form factors at low momentum transfers Q.
Assuming single-photon exchange, the cross section for unpolarized electron-deuteron scattering can be written as

dσ/dΩ = σ_NS [ A(Q²) + B(Q²) tan²(θ_e/2) ],

with the structure functions

A(Q²) = G_C²(Q²) + (8/9) η² G_Q²(Q²) + (2/3) η G_M²(Q²), B(Q²) = (4/3) η (1 + η) G_M²(Q²),

where σ_NS is the Mott differential cross section multiplied by the deuteron recoil factor, θ_e the electron scattering angle, η = Q²/(4m_d²) with the deuteron mass m_d, and G_C, G_Q, and G_M the charge, quadrupole, and magnetic form factors, respectively. In the kinematic regime investigated in the experiment, the cross section is dominated by the contribution of the deuteron charge form factor G_C(Q²). Taking into account corrections due to the small contributions of the quadrupole and magnetic form factors, the cross-section measurements allow a determination of the charge form factor as a function of the momentum transfer. From the extrapolation of G_C(Q²) to Q² = 0, the deuteron radius can then be extracted from the slope dG_C/dQ². Interest in the deuteron radius has been renewed in the context of the proton radius puzzle, i.e., the inconsistency between proton radius values obtained using different experimental techniques: the recommended deuteron radius value from the CODATA group is dominated by the determined proton radius combined with isotope shift data relating the proton and deuteron radii. Including the result from muonic hydrogen measurements [2] changes r_d drastically. An overview of deuteron charge radius results is given in Fig. 1 [3-5]. The uncertainty of the radius determination from electron-deuteron scattering is too large to reasonably distinguish between the two quoted CODATA values with and without inclusion of the muonic hydrogen result.
Experiment
Deuteron form factor measurements have been carried out by the A1 Collaboration at the spectrometer facility [6] at MAMI [7] to considerably improve the current form factor data base and to provide an extraction of the deuteron radius with a significantly smaller uncertainty. Data were taken in 2014 during three weeks of beam time. Cross-section measurements of the elastic d(e, e′)d reaction were performed for three different beam energies (E_e = 180, 315, and 450 MeV). The scattered electrons were detected with three high-resolution magnetic spectrometers simultaneously (spectrometers A, B, and C) to maximize the covered angular range and the internal redundancy of the data. The spectrometers have a relative momentum resolution of ∼10⁻⁴ and an angular resolution at the target of ∼3 mrad [6]. The angular acceptance of the spectrometers is well defined by collimators. The central electron scattering angles were varied in the range between 15° and 107° by changing the positions of the spectrometers. The chosen kinematic settings reach down in Q² as far as possible with the available setup (down to Q = 0.24 fm⁻¹, similar to the lowest published data) and have high redundancy in the region 0.5 fm⁻¹ ≤ Q ≤ 1.2 fm⁻¹, where the sensitivity to the rms radius is high [1]. Altogether, almost 200 different kinematic settings were investigated during the beam time.
The electron beam impinged on a liquid deuterium target with beam currents between ∼1 nA and 4 μA, measured with Foerster probes and a pA-meter. Vertical drift chambers (VDCs) were used for tracking, scintillation detectors for trigger and timing purposes, and a threshold gas Cherenkov detector for electron identification in the offline analysis. Due to the large cross sections, the low background, and the relatively high data-taking rate of about 500 Hz per spectrometer, all kinematic settings could be measured with reasonable statistics during the available time. Hence, the statistical errors of the resulting cross-section determinations are well below the 1% level for most of the settings. Usually only one spectrometer angle was varied at a time, allowing changes in the luminosity to be monitored with the other two spectrometers. The beam delivery was generally not stopped during angle changes, feasible through the use of a remote spectrometer movement control, to minimize possible changes of the beam parameters.
Data analysis
For each recorded event, the raw data of the individual detectors are used to reconstruct kinematic quantities such as the angle and momentum of the detected particle and the reaction vertex. The VDC and scintillator data are used for track reconstruction of the detected particles in the focal plane of the spectrometers, and the well-known spectrometer optics are used to reconstruct the particle tracks at the target position. Identification of elastic electron-deuteron scattering events is then accomplished by comparing the detected energy E_exp with the energy calculated from the detected scattering angle, E(θ_exp), see Fig. 2. Comparison of the experimental spectra with the spectra from a simulation then yields the experimental cross sections relative to the cross sections employed in the simulation. The simulation accounts for the leading radiative processes beyond the one-photon exchange approximation, including the generation of the radiative tail, and for energy loss and straggling of the electrons in the target and window materials. Not all of the necessary corrections have been applied yet. For instance, there is a contribution from background electrons originating from the target foils beneath the elastic deuteron peak, which is in general larger at smaller angles. Since the deuteron charge radius will be determined from the slope of the extracted charge form factor with respect to the momentum transfer, a careful study of such systematic effects is mandatory. Empty-cell data have been taken for the purpose of background subtraction. Due to a slightly different energy-loss situation when measuring on an empty cell without deuterium gas inside, there is a slight mismatch between the empty-cell spectrum and the spectrum measured with the deuterium target, which needs to be accounted for. To avoid an erroneous background subtraction, a simulation of the target wall contribution, including the contribution from excited states, is being developed to study this effect in detail. To illustrate the statistical quality of the data, a preliminary cross-section result for one particular data set is shown in Fig. 3.
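The elastic-event identification can be illustrated with standard two-body kinematics: the energy expected for elastic scattering at angle θ is E′(θ) = E / (1 + (2E/m_d) sin²(θ/2)), which is compared with the detected energy. In the sketch below, the 2.2 MeV breakup threshold is taken from the text, while the detected energy is a hypothetical value.

```python
import numpy as np

# Sketch of the elastic-event identification variable: the elastic scattered
# energy E'(theta) = E / (1 + (2E/m_d) sin^2(theta/2)) is compared with the
# detected energy. The 2.2 MeV window follows the text; E_detected is invented.

M_D = 1875.612  # deuteron mass (MeV)

def elastic_energy(e_beam: float, theta_deg: float) -> float:
    s2 = np.sin(np.radians(theta_deg) / 2.0) ** 2
    return e_beam / (1.0 + 2.0 * e_beam * s2 / M_D)

e_beam, theta = 315.0, 23.6          # MeV, degrees (one setting from the text)
delta_e = elastic_energy(e_beam, theta) - 312.0   # hypothetical detected energy
print(f"dE = {delta_e:.2f} MeV -> elastic" if delta_e < 2.2
      else f"dE = {delta_e:.2f} MeV -> breakup/background")
```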
Outlook
The extraction of the individual cross-section values is clearly the central part of the analysis. Several crucial corrections have not been applied yet, for instance Coulomb corrections or the mentioned background subtraction. Also, the modelling of the spectrometer resolutions, needed for a precise comparison between data and simulation, has to be finalized. Moreover, the determination of the luminosity is currently based solely on the measured target parameters and the beam current; it can be improved by comparing the count rates of the spectrometers that were not moved between setting changes of the third spectrometer. Once all the necessary corrections and cross-checks have been performed, the obtained cross sections can be used to extract the charge form factor of the deuteron, G_C²(Q²). An appropriate fitting procedure will be applied for this purpose. From
the slope of G 2 C (Q 2 ) at Q 2 = 0, the deuteron radius can then be determined, with the aim to provide additional insight to the proton radius puzzle.
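For reference, the relation used in such an extraction can be written out explicitly. Assuming the standard normalization G_C(0) = 1, the low-Q² expansion reads:

```latex
G_C(Q^2) \;=\; 1 - \tfrac{1}{6}\, r_d^2\, Q^2 + \mathcal{O}(Q^4)
\;\;\Longrightarrow\;\;
G_C^2(Q^2) \;=\; 1 - \tfrac{1}{3}\, r_d^2\, Q^2 + \mathcal{O}(Q^4),
\qquad
r_d^2 \;=\; -3 \left. \frac{\mathrm{d}\,G_C^2(Q^2)}{\mathrm{d}Q^2} \right|_{Q^2=0}
```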
Figure 1. Various determinations of the deuteron charge radius r_d [3][4][5]. The uncertainty of the radius determination from electron-deuteron scattering is too large to reasonably distinguish between the two quoted CODATA values, with and without inclusion of the muonic hydrogen result.
Figure 2. Identification of elastic electron-deuteron scattering events. Data (blue) and simulation (black) for spectrometer A at θ_e = 23.6° and E_e = 315 MeV. The additional large contribution for ΔE = E(θ_exp) − E_exp > 2.2 MeV from the deuteron breakup reaction can be clearly separated from the elastic peak, which is located around zero. Furthermore, there is a contribution from electrons originating from the foils surrounding the deuterium target. Empty-cell data, taken for the purpose of background subtraction, are shown in green.
Figure 3. Top: Experimental cross sections divided by the cross sections calculated with the form factor parametrization of [8] (sum-of-Gaussians) for one particular data set (red circles) at E_e = 315 MeV (statistical errors only). The reevaluated (see [4]) most precise existing low-Q data sets [9][10][11] are shown for comparison. The beam current, which enters the luminosity determination, was measured with a pA-meter that has not been calibrated at this stage of the analysis. This can result in a systematic shift of our data compared to the previous data, as can be observed in the figure. Finally, the absolute normalization will be fixed by the known values of the form factors at Q² = 0. There is an additional rise of our data at smaller momentum transfers Q, essentially caused by background produced at the target foils, which has not been subtracted yet. Bottom: Projected statistical errors of the other data sets of this experiment.
Measurement of VH, H → bb̄ production as a function of the vector-boson transverse momentum in 13 TeV pp collisions with the ATLAS detector
Abstract
Cross-sections of associated production of a Higgs boson decaying into bottom-quark pairs and an electroweak gauge boson, W or Z, decaying into leptons are measured as a function of the gauge boson transverse momentum. The measurements are performed in kinematic fiducial volumes defined in the 'simplified template cross-section' framework. The results are obtained using 79.8 fb⁻¹ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at a centre-of-mass energy of 13 TeV. All measurements are found to be in agreement with the Standard Model predictions, and limits are set on the parameters of an effective Lagrangian sensitive to modifications of the Higgs boson couplings to the electroweak gauge bosons.
Introduction
The H → bb decay channel, with its large branching ratio of 58%, plays a central role in these measurements. This paper presents a measurement of 'reduced' stage-1 V H STXS (defined in Section 3) using H → bb decays with 79.8 fb⁻¹ of 13 TeV pp collisions collected by ATLAS between 2015 and 2017. The results are used to investigate the strength and tensor structure of the interactions of the Higgs boson with vector bosons using an effective Lagrangian approach [22].
Data and simulation samples
The data were collected with the ATLAS detector [23,24] between 2015 and 2017, triggered by isolated charged leptons or large transverse momentum imbalance, E_T^miss. Only events with good data quality were kept.
The Monte Carlo simulation samples used for the measurements presented here are identical to those used for the measurement of the inclusive V H, H → bb signal strength [15]. Several samples of simulated events were produced for the signal (qq → W H, qq → Z H and gg → Z H) and main background (tt, single-top, V+jets and diboson) processes. They were used to optimise the analysis criteria and to determine the expected signal and background distributions of the discriminating variables used in the final fit to the data. The multijet background is largely suppressed by the selection criteria and is estimated using data-driven techniques.
The signal templates in each STXS region were obtained from simulated qq → W H and qq → Z H events with zero or one additional jet, calculated at next-to-leading order (NLO) and generated with the Powheg-Box v2 + GoSam + MiNLO generators [25][26][27][28]. The contribution from loop-induced gg → Z H production was simulated at leading order (LO) using the Powheg-Box v2 generator [25]. Additional scale factors were applied to the qq → V H processes as a function of the generated vector-boson transverse momentum (p_T^V) to account for electroweak (EW) corrections at NLO. These factors were determined from the ratio between the V H differential cross-sections computed with and without these corrections by the Hawk program [29,30]. The mass of the Higgs boson was fixed at 125 GeV.
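The reweighting step described above amounts to a per-event weight looked up as a function of the generated p_T^V. A minimal sketch follows; the binning and numerical factors are hypothetical placeholders, not the Hawk results.

```python
import numpy as np

# Hypothetical NLO-EW/LO correction factors binned in generated p_T^V (GeV).
# The real factors are the Hawk-derived ratios described in the text.
pt_edges = np.array([0.0, 75.0, 150.0, 250.0, 400.0])
k_ew = np.array([0.97, 0.95, 0.93, 0.90, 0.87])   # one illustrative factor per bin

def ew_weight(pt_v):
    """Per-event weight from the p_T^V-dependent electroweak correction."""
    idx = np.minimum(np.searchsorted(pt_edges, pt_v, side="right") - 1,
                     len(k_ew) - 1)
    return k_ew[idx]

weights = ew_weight(np.array([60.0, 180.0, 500.0]))  # -> [0.97, 0.93, 0.87]
```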
Event selection and categorisation
The object reconstruction, event selection and classification into categories used for the measurements are identical to those described in Ref. [15]. The selection and the event categories are briefly summarised below.
Events are retained if they are consistent with one of the typical signatures of V H, H → bb production and decay, with Z → νν, W → ℓν or Z → ℓℓ (ℓ = e, µ). Vector-boson decays into τ-leptons are not targeted explicitly. However, they satisfy the selection criteria with reduced efficiency in the case of leptonic τ-lepton decays.
In particular, events are kept if they contain at most two isolated electrons or muons, and two good-quality high-p_T (> 45, 20 GeV) jets with |η| < 2.5 satisfying b-jet identification ('b-tagging') requirements (which have an average efficiency of 70% for jets containing b-hadrons produced in inclusive tt events [43]). The two b-jet candidates are used to reconstruct the Higgs boson candidate; their invariant mass is denoted by m_bb. Additional jets are required to have p_T > 20 GeV for |η| < 2.5 or p_T > 30 GeV for 2.5 < |η| < 4.5, and must not be identified as b-jets.
Events with either zero, one or two isolated electrons or muons are classified as '0-lepton', '1-lepton' or '2-lepton' events, respectively. The 0-lepton and 1-lepton events are required to have a transverse momentum imbalance, as expected from the neutrinos in Z → νν or W → ℓν decays; in the 2-lepton events, the leptons must have the same flavour and an invariant mass close to the Z boson mass.
Additional requirements are applied to suppress background from QCD production of multijet events in the 0-lepton and 1-lepton channels. To suppress the large tt background, events with four or more jets are discarded in the 0-lepton and 1-lepton channels. Finally, a requirement on the reconstructed transverse momentum p_T^{V,r} of the vector boson V is applied. It is computed, depending on the number, N_lep, of selected electrons and muons, as either the missing transverse momentum E_T^miss (N_lep = 0), the magnitude of the vector sum of the missing transverse momentum and the lepton p_T (N_lep = 1), or the dilepton p_T (N_lep = 2). The minimum value of p_T^{V,r} is 150 GeV in the 0- and 1-lepton channels, and 75 GeV in the 2-lepton channel.
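The channel-dependent definition of p_T^{V,r} can be summarized in a short sketch; the variable names and the simple per-event interface are illustrative, not the ATLAS software API.

```python
import numpy as np

def pt_v_reconstructed(n_lep, met_x, met_y, lep_px, lep_py):
    """Reconstructed vector-boson transverse momentum p_T^{V,r}.

    n_lep          : number of selected electrons/muons (0, 1 or 2)
    met_x, met_y   : missing-transverse-momentum components (GeV)
    lep_px, lep_py : per-lepton transverse-momentum components (GeV)
    """
    if n_lep == 0:       # Z -> nunu: p_T^{V,r} = E_T^miss
        return float(np.hypot(met_x, met_y))
    if n_lep == 1:       # W -> lnu: vector sum of E_T^miss and the lepton p_T
        return float(np.hypot(met_x + lep_px[0], met_y + lep_py[0]))
    # Z -> ll: dilepton transverse momentum
    return float(np.hypot(lep_px[0] + lep_px[1], lep_py[0] + lep_py[1]))

def passes_ptv_cut(n_lep, ptv):
    """Minimum p_T^{V,r}: 150 GeV in the 0- and 1-lepton channels, 75 GeV in
    the 2-lepton channel."""
    return ptv > (75.0 if n_lep == 2 else 150.0)
```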
Events satisfying the previous criteria are classified into eight categories (also called signal regions in the following), shown in Table 1, with different signal-to-background ratios. These categories are defined by the number of jets, N_jet (including the two b-jet candidates), N_lep, and p_T^{V,r}. Additional categories (also called control regions in the following) containing events satisfying alternative selections are introduced to constrain some background processes, such as W boson production in association with jets containing heavy-flavour hadrons, or top-quark pair production. The signal contribution in such categories is expected to be negligible.

Table 1: Summary of the reconstructed-event categories. Categories with relatively large fractions of the total expected signal yields are referred to as 'signal regions' (SR), while those with negligible expected signal yield, mainly designed to constrain some background processes, are called 'control regions' (CR). The quantity m_top is the reconstructed mass of a semileptonically decaying top-quark candidate in the 1-lepton channel. The calculation of m_top uses the four-momenta of one of the two b-jet candidates, the lepton, and the hypothetical neutrino produced in the event. The neutrino four-momentum is derived using the W boson mass constraint [15], and m_top is then reconstructed from the combination of the b-jet candidate and neutrino longitudinal momentum that yields the smallest top-quark candidate mass.
Cross-section measurements
The reduced V H, V → leptons stage-1 STXS regions used in this paper are summarised in Table 2, which also indicates which reconstructed-event categories are most sensitive in each region. All leptonic decays of the weak gauge bosons (including Z → ττ and W → τν) are considered for the STXS definition.
To avoid theory uncertainties from extrapolations to a phase space not accessible to the measurement, the p_T^Z < 150 GeV stage-1 regions are split into two subregions, p_T^Z < 75 GeV and 75 < p_T^Z < 150 GeV. Currently, there are not enough data events to distinguish qq → Z H from gluon-induced Z H production, despite their different kinematic properties. As the gg → Z H cross-section is only 16% of that of qq → Z H, no attempt is made to measure the qq- and gg-initiated processes separately. The qq → Z H and gg → Z H regions are thus merged, after modifying the gg → Z H fiducial region definition to match that of qq → Z H. Specifically, the gg → Z H, p_T^Z > 150 GeV stage-1 regions (with zero or at least one extra particle-level jet) are modified by adding a p_T^Z < 250 GeV requirement, and events with p_T^Z > 250 GeV and any number of particle-level jets are placed in a separate gg → Z H, p_T^Z > 250 GeV region, leading to a total of 14 modified stage-1 regions. These regions are then merged into reduced stage-1 regions, chosen to keep the total uncertainty in the measurements near or below 100%.
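The resulting region-assignment logic can be sketched as follows for the 5-POI scheme described next; the labels and the function interface are invented for illustration.

```python
def reduced_stage1_region_5poi(process, pt_v):
    """Assign a generated event to a 5-POI reduced stage-1 region.

    process : 'WH' or 'ZH' (qq- and gg-initiated ZH merged, as in the text)
    pt_v    : generated vector-boson transverse momentum in GeV
    Returns a region label, or None for the regions in which the analysis is
    not sensitive (WH with p_T^W < 150 GeV, ZH with p_T^Z < 75 GeV), whose
    cross-sections are fixed to the SM prediction in the fit.
    """
    if process == "WH":
        if pt_v < 150.0:
            return None
        return "WH_150_250" if pt_v < 250.0 else "WH_gt250"
    if pt_v < 75.0:
        return None
    if pt_v < 150.0:
        return "ZH_75_150"
    return "ZH_150_250" if pt_v < 250.0 else "ZH_gt250"
```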
Two sets of reduced stage-1 regions are considered. In one, called the '5-POI (parameters of interest)' scheme, five cross-sections are measured: three for Z H production (75 < p_T^Z < 150 GeV, 150 < p_T^Z < 250 GeV and p_T^Z > 250 GeV) and two for W H production (150 < p_T^W < 250 GeV and p_T^W > 250 GeV). In the other, called the '3-POI' scheme, three cross-sections are measured: two for Z H (75 < p_T^Z < 150 GeV and p_T^Z > 150 GeV) and one for W H (p_T^W > 150 GeV). The 5-POI scheme leads to measurements with larger total uncertainties than the 3-POI scheme, but these are more sensitive to enhancements at high p_T^V from potential anomalous interactions between the Higgs boson and the EW gauge bosons. The reconstructed-event categories do not distinguish between events with generated p_T^V below or above 250 GeV. Discrimination between the two p_T^V regions, 150-250 GeV and > 250 GeV, is instead provided by the different shapes of the boosted-decision-tree discriminant (BDT_VH) used in the final fit to the data, as illustrated in Figure 1 for the 1-lepton, 2-jet category. This arises from the fact that the reconstructed p_T^{V,r} is largely correlated with the BDT_VH output, for which it constitutes one of the most discriminating input variables together with m_bb and the angular separation of the two b-jets.
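As an illustration of the role of such a discriminant, the sketch below trains a gradient-boosted classifier on toy data; the three inputs mimic the variables named in the text (p_T^{V,r}, m_bb and the b-jet angular separation), but all distributions and settings are invented — the actual BDT_VH is the one described in Ref. [15].

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for the discriminating variables named in the text
n = 5000
x_sig = np.column_stack([rng.normal(220.0, 60.0, n),   # p_T^{V,r} (GeV)
                         rng.normal(125.0, 15.0, n),   # m_bb (GeV)
                         rng.normal(1.0, 0.3, n)])     # dR(b, b)
x_bkg = np.column_stack([rng.normal(180.0, 70.0, n),
                         rng.normal(100.0, 40.0, n),
                         rng.normal(1.8, 0.6, n)])
X = np.vstack([x_sig, x_bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)
scores = bdt.decision_function(X)   # the shape of this output enters the fit
```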
The product of the signal cross-section times the H → bb branching ratio and the total leptonic decay branching ratio for W or Z bosons is determined in each of the reduced stage-1 regions by a binned maximum-likelihood fit to the data. The cross-sections are not constrained to be positive in the fit. Signal and background templates of the discriminating variables, determined from the simulation or data control regions, are used to extract the signal and background yields. A simultaneous fit is performed to all the signal and control regions. Systematic uncertainties are included in the likelihood function as nuisance parameters.
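A schematic of such a binned maximum-likelihood fit is shown below, with toy templates and yields; nuisance parameters, control regions and the full set of categories are omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# Toy binned maximum-likelihood fit: one signal template per STXS region with
# a freely floating normalization mu_i (not constrained to be positive, as in
# the text) on top of a fixed background expectation. All numbers are invented.
sig_templates = np.array([[5.0, 12.0, 20.0],    # e.g. 150-250 GeV region
                          [1.0,  4.0, 15.0]])   # e.g. >250 GeV region
bkg = np.array([100.0, 60.0, 30.0])
data = np.array([104.0, 74.0, 62.0])

def nll(mu):
    """Poisson negative log-likelihood, up to a mu-independent constant."""
    expected = np.clip(bkg + mu @ sig_templates, 1e-9, None)  # guard positivity
    return float(np.sum(expected - data * np.log(expected)))

fit = minimize(nll, x0=np.ones(2), method="Nelder-Mead")
mu_hat = fit.x   # fitted signal normalizations relative to the SM templates
```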
The likelihood function is very similar to that described in Ref. [15]. In particular, the same observables are used, namely BDT_VH in the signal regions and either the invariant mass m_bb of the two b-jets or the event yield in the control regions. The treatment of the background and of its uncertainties is also unchanged. The only differences relative to the likelihood function in Ref. [15] concern the treatment of the signal:

• Instead of a single signal shape (for BDT_VH or m_bb) or yield per category, multiple shapes or yields are introduced, one for each reduced stage-1 STXS region under study.

Table 2: The 3-POI and 5-POI 'reduced stage-1' sets of merged regions used for the measurements, the corresponding kinematic regions of the stage-1 V H simplified template cross-sections, and the reconstructed-event categories that are most sensitive in each merged region. The stage-1 regions are modified (i) by splitting the two Z H, p_T^Z < 150 GeV regions (from qq and gg) into four regions, based on whether p_T^Z < 75 GeV or 75 < p_T^Z < 150 GeV; (ii) by adding a p_T^Z < 250 GeV requirement to the gg → Z H, p_T^Z > 150 GeV regions (with zero or at least one extra particle-level jet); and (iii) by adding a separate gg → Z H, p_T^Z > 250 GeV region. The three regions W H, p_T^W < 150 GeV; qq → Z H, p_T^Z < 75 GeV; and gg → Z H, p_T^Z < 75 GeV, in which the current analysis is not sensitive and whose corresponding cross-sections are fixed to the SM prediction in the fit, are not shown. (Table columns: merged region in the 3-POI scheme; merged region in the 5-POI scheme; stage-1 (modified) STXS region; reconstructed-event categories with largest sensitivity.)

• Instead of a single parameter of interest, the inclusive signal strength, the fit has multiple parameters of interest, i.e. the cross-sections of the reduced stage-1 regions, multiplied by the H → bb and V → leptons branching ratios.
• Overall theoretical cross-section and branching ratio uncertainties, which affect the signal strength measurements but not the STXS measurements, are not included in the likelihood function.
The expected signal shapes of the discriminating variable distributions and the acceptance times efficiency (referred to as 'acceptance' in the following) in each reduced stage-1 region are determined from simulated samples of SM V H, V → leptons, H → bb events. The acceptance of each reconstructed-event category for signal events from the different regions of the 5-POI reduced stage-1 scheme is shown in Figure 2(a). The fraction of signal events in each reconstructed-event category originating from the different regions in the same scheme is shown in Figure 2(b). As shown in Figure 2(a), the current analysis is not sensitive to W H events with p_T^W < 150 GeV or to Z H events with p_T^Z < 75 GeV, since their acceptance in each category is at the level of 0.1% or smaller. Therefore, in the fits the signal cross-section in these regions is constrained to the SM prediction, within the theoretical uncertainties. Since these regions contribute only marginally to the selected event sample, the impact on the final results is negligible. A cross-check in which the relative signal cross-section uncertainty for the p_T^W < 150 GeV and p_T^Z < 75 GeV regions is conservatively set to 70% of the prediction (i.e. about seven times the nominal uncertainty) leads to variations of the measured STXS below 1%.

The sources of systematic uncertainty are identical to those described in Ref. [15], except for those associated with the Higgs boson signal simulation, which are re-evaluated [44]. In this re-evaluation the uncertainties are separated into two groups:

• uncertainties affecting signal modelling - i.e. acceptance and shape of kinematic distributions - in each of the three or five reduced stage-1 regions (hereafter referred to as theoretical modelling uncertainties), and

• uncertainties in the prediction of the production cross-section for each of these regions (hereafter referred to as theoretical cross-section uncertainties).
While theoretical modelling uncertainties enter the measurement of the STXS, theoretical cross-section uncertainties do not affect the results, but only the predictions with which they are compared. The consequent reduction of the impact of the theoretical uncertainties on the results with respect to the signal strength measurements is one of the main advantages of measuring STXS.
The two groups of systematic uncertainties are estimated for high-granularity STXS regions, and then merged into the reduced scheme under consideration. This approach makes it easy to compute the systematic uncertainties for merging schemes different from those presented here. The uncertainties are evaluated by dividing the phase space into five p_T^V regions (with the following lower edges: 0 GeV, 75 GeV, 150 GeV, 250 GeV and 400 GeV), and each p_T^V region into three bins depending on the number of particle-level jets (zero, one, or at least two), independently for the qq → V H and gg → Z H processes. When two STXS regions are merged, their relative theoretical cross-section uncertainties lead to a modelling uncertainty. These uncertainties are evaluated as the remnant of the theoretical cross-section uncertainties for the high-granularity regions after the subtraction of the theoretical cross-section uncertainty for the merged region.
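One possible reading of this subtraction can be sketched numerically; the cross-sections, the uncertainty values and the assumption of fully correlated uncertainties across the merged fine regions are all illustrative.

```python
import numpy as np

# Hypothetical high-granularity regions entering one merged region, with SM
# cross-sections and relative theoretical cross-section uncertainties.
xsec = np.array([30.0, 12.0])       # fb, e.g. 150-250 GeV and >250 GeV bins
rel_unc = np.array([0.03, 0.05])    # relative uncertainties per fine region

# Cross-section uncertainty of the merged region, treating the fine-region
# uncertainties as fully correlated (an assumption made for this sketch).
merged_rel = float((rel_unc * xsec).sum() / xsec.sum())

# Remnant per fine region after subtracting the merged-region uncertainty:
# this acts as a modelling (migration) uncertainty inside the merged region.
remnant = rel_unc - merged_rel
```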
The high-granularity regions are used to calculate theoretical cross-section uncertainties for the missing higher-order terms in the QCD perturbative expansion and for the uncertainties induced by the choices of the parton distribution function (PDF) and α_S. Fourteen independent sources of uncertainty due to the missing higher-order terms lead to total uncertainties of 3%-4% for qq → V H and 40%-50% for gg → Z H with p_T^V > 75 GeV [44]. Thirty-one independent sources of PDF and α_S uncertainties, each of them usually smaller than 1%, lead to a total quadrature sum between 2% and 3% depending on the STXS region. The theoretical modelling uncertainties change the shapes of the reconstructed p_T^{V,r} and m_bb distributions in the same way as described in Ref. [15]. Four independent sources for the QCD expansion and two independent sources for the PDF and α_S choices are considered.
Systematic uncertainties in the signal acceptance and in the shape of the p_T^{V,r} and m_bb distributions due to the parton shower (PS) and underlying event (UE) models are estimated from the variations of acceptance and shapes of simulated events after changing the Pythia 8 PS parameters or after replacing Pythia 8 with Herwig 7 for the PS and UE models [15]. The signal acceptance uncertainties due to the PS and UE models (five independent sources) are typically of the order of 1% (5%-15%), with a maximum of 10% (30%), for the qq → V H (gg → Z H) production mode. Two independent nuisance parameters account for the systematic uncertainties induced by the PS and UE models in the p_T^{V,r} and m_bb distributions. In addition, a systematic uncertainty due to the EW corrections is parameterised as a change in shape of the p_T^V distributions for the qq → V H processes [15].
Results
The measured reduced stage-1 V H cross-sections times the H → bb and V → leptons branching ratios, σ × B, in the 5-POI and 3-POI schemes, together with the SM predictions, are summarised in Table 3. The results of the 5-POI scheme are also illustrated in Figure 3. The SM predictions are shown together with the theoretical cross-section uncertainty for the merged regions computed as described in the previous section. The measurements are in agreement with the SM predictions.
The cross-sections measured in the p_T^V > 150 GeV intervals are not equal to the sum of those measured for 150 < p_T^V < 250 GeV and p_T^V > 250 GeV. This is because the signal template for p_T^V > 150 GeV in the 3-POI fit is computed from the sum of the templates of the two regions, assuming that the ratio of yields in those regions is that predicted by the SM, while in the 5-POI fit the normalisations of the two templates are floated independently.
The cross-sections are measured with relative uncertainties varying between 50% and 125% in the 5-POI case, and between 29% and 56% in the 3-POI case. The largest uncertainties are statistical, except for the W H cross-sections with p_T^W > 150 GeV in the 3-POI case and with 150 < p_T^W < 250 GeV in the 5-POI case. In the 5-POI case, an anti-correlation of the order of 40%-60% is observed between the cross-sections in the ranges p_T^V > 250 GeV and 150 < p_T^V < 250 GeV, which are measured with the same reconstructed-event categories.
The dominant systematic uncertainties are due to the limited number of simulated background events and the theoretical modelling of the background processes. The uncertainties due to the theoretical modelling of the V H signal are small, with relative values ranging between 6% and 12%. The uncertainties in the predictions are 2-3 times larger for Z H than for W H in the same p_T^V interval, due to the limited precision of the theoretical calculations of the gg → Z H process.

Table 3: Best-fit values and uncertainties for the V H, V → leptons reduced stage-1 simplified template cross-sections times the H → bb branching ratio, in the 5-POI (top five rows) and 3-POI (bottom three rows) schemes. The SM predictions for each region, computed using the inclusive cross-section calculations and the simulated event samples described in Section 2, are also shown. The contributions to the total uncertainty in the measurements from statistical (Stat. unc.) or systematic uncertainties (Syst. unc.) in the signal modelling (Th. sig.), background modelling (Th. bkg.), and experimental performance (Exp.) are given separately. All leptonic decays of the V bosons (including those to τ-leptons, ℓ = e, µ, τ) are considered.
Constraints on anomalous Higgs boson interactions
Higher-dimension operators O_i, where the c_i are numerical coefficients and Λ is the scale of new physics, are added to the SM Lagrangian to obtain an effective Lagrangian inspired by that in Ref. [45]. Only dimension D = 6 operators are considered in this study, since dimension D = 5 operators violate lepton or baryon number, while dimension D > 6 operators are further suppressed by powers of Λ.
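Schematically, an effective Lagrangian of this type takes the form below (a generic dimension-6 sketch consistent with the description above; the specific operator basis is that of Ref. [45]):

```latex
\mathcal{L}_{\mathrm{eff}} \;=\; \mathcal{L}_{\mathrm{SM}}
  \;+\; \sum_i \frac{c_i}{\Lambda^{2}}\, \mathcal{O}_i^{(6)}
```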
The results presented in this paper focus on the coefficients of the operators in the 'Strongly Interacting Light Higgs' formulation [46]. This formalism is defined as the effective theory of a strongly interacting sector in which a light composite Higgs boson arises as a pseudo-Goldstone boson and is responsible for electroweak symmetry breaking. The CP-even operators O_HW, O_HB, O_W and O_B are considered; the corresponding CP-odd operators Õ_HW, Õ_HB, Õ_W, and Õ_B are not.
Modifications of the gg → Z H production cross-section are only introduced by either higher-dimension (D ≥ 8) operators or corrections that are formally at NNLO in QCD, and are not included in this study, in which the expected gg → Z H contribution is kept fixed to the SM prediction.
The operator O_d = y_d |H|² Q̄_L H d_R (plus Hermitian conjugate), with Yukawa coupling strength y_d, which modifies the coupling between the Higgs boson and down-type quarks, induces variations of the partial width Γ_H^bb and of the total Higgs boson width Γ_H, and therefore of the H → bb branching ratio. This operator affects the measured cross-sections in the same way in each region. Constraints on the coefficients are derived in the 'Higgs Effective Lagrangian' (HEL) implementation [47], using the known relations between such coefficients and the stage-1 STXS based on leading-order predictions [48]. Such relations include interference terms between the SM and non-SM amplitudes that are linear in the coefficients and of order 1/Λ², and the SM-independent contributions that are quadratic in the coefficients and of order 1/Λ⁴. In the HEL implementation, the coefficients c_i of interest are recast into dimensionless coefficients c̄_HW, c̄_HB, c̄_W, c̄_B and c̄_d, defined in terms of the SU(2) and U(1) SM gauge couplings g and g′ and the vacuum expectation value v of the Higgs boson field. These dimensionless coefficients are equal to zero in the SM.
The sum c̄_W + c̄_B is strongly constrained by precision EW data [49] and is thus assumed here to be zero, and constraints are set on c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d. The relations between the HEL coefficients and the reduced STXS measured in this paper are obtained by averaging the relations for the regions that are merged, with weights proportional to their respective cross-sections.
Simultaneous maximum-likelihood fits to the five STXS measured in the 5-POI scheme are performed to determine c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d. Due to the large sensitivity to the Higgs boson anomalous couplings to vector bosons provided by the p_T^V > 250 GeV cross-sections, the 5-POI results place tighter constraints on these coefficients (e.g. approximately a factor of two for c̄_HW) than do the 3-POI results. For this reason, constraints obtained with the 3-POI results are not shown here.
In each fit, all coefficients but one are assumed to vanish, and 68% and 95% confidence level (CL) one-dimensional intervals are inferred for the remaining coefficient. The negative-log-likelihood one-dimensional projections are shown in Figure 4, and the 68% and 95% CL intervals for c̄_HW, c̄_HB, c̄_W − c̄_B and c̄_d are summarised in Table 4. The parameters c̄_HW and c̄_W − c̄_B are constrained at 95% CL to be no more than a few percent, while the constraint on c̄_HB is about five times worse, and the constraint on c̄_d is of order unity. For comparison, Table 4 also shows the 68% and 95% CL intervals for the dimensionless coefficients when the SM-independent contributions, which are of the same order (1/Λ⁴) as the neglected dimension-8 operators, are not considered. The constraints are typically 50% stronger when the SM-independent contributions are retained.
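The interval extraction can be illustrated with a toy one-dimensional likelihood scan; the parabolic shape and numerical values below are invented, and the thresholds correspond to the usual 68% and 95% CL for one parameter.

```python
import numpy as np

# Toy one-dimensional profile-likelihood scan for a single coefficient,
# with all other coefficients fixed to zero.
c = np.linspace(-0.1, 0.1, 2001)
nll2 = ((c - 0.01) / 0.02) ** 2       # -2*Delta(lnL): best fit 0.01, sigma 0.02

def cl_interval(c, nll2, threshold):
    """Interval where -2*Delta(lnL) stays below the given threshold."""
    inside = c[nll2 < threshold]
    return float(inside.min()), float(inside.max())

ci_68 = cl_interval(c, nll2, 1.0)     # 68% CL threshold for one parameter
ci_95 = cl_interval(c, nll2, 3.84)    # 95% CL threshold for one parameter
```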
Conclusion
Using 79.8 fb⁻¹ of √s = 13 TeV proton-proton collisions collected by the ATLAS detector at the LHC, the cross-sections for the associated production of a Higgs boson decaying into bottom-quark pairs and an electroweak gauge boson W or Z decaying into leptons are measured as functions of the vector-boson transverse momentum p_T^V. The cross-sections are measured for Higgs bosons in a fiducial volume with rapidity |y_H| < 2.5, in the 'simplified template cross-section' framework.
The measurements are performed for two different choices of the number of p_T^V intervals. The results have relative uncertainties varying between 50% and 125% in one case, and between 29% and 56% in the other. The measurements are in agreement with the Standard Model predictions, even in the high-p_T^V (> 250 GeV) regions that are most sensitive to enhancements from potential anomalous interactions between the Higgs boson and the electroweak gauge bosons.
One-dimensional limits on four linear combinations of the coefficients of effective Lagrangian operators affecting the Higgs boson couplings to the electroweak gauge bosons and to down-type quarks have also been set. For two of these parameters the constraint has a precision of a few percent.
Advances in friction stir welding of Ti6Al4V alloy complex geometries: T-butt joint with complete penetration
In this work, the friction stir weldability of Ti6Al4V T-joints has been investigated. Its aims are: (i) to study the influence of tool and welding parameters on weld quality, (ii) to assess the joints' mechanical strength to foresee future applications, and (iii) to characterize the wear of Co-based FSW tools by following it during the tests. Weld defectiveness is studied by analyzing cross-section macrographs. Independently of welding parameters and tools, internal voids are avoided, and a suitable weldability window is identified. Microstructure observations have corroborated temperatures below the β-transus point even in the nugget zone, guaranteeing maximum joint mechanical strengths of 96% and 87% of the base material for UTS and Y, respectively. Conversely, elongation at break is very low, not reaching 20% of the base material value. The failure is linked to section thinning and kissing bond defects at the joints' corners. Additionally, tool wear proved to be a critical issue while friction stir welding Ti6Al4V. The inner part of the shoulder is the most sensitive to wear. The consequent high wear rate might be a problem for mass production. The work established the pertinence of assembling complex geometries of Ti6Al4V using friction stir welding, considering the weld quality and mechanical strength achieved. However, critical factors such as section thinning, kissing bonds, and tool wear must be carefully addressed to avoid low elongation at break and to guarantee the joints' mechanical strength.
Introduction
Friction stir welding (FSW) is finding many industrial applications, slowly replacing traditional fusion welding technologies. The process is now firmly established for aluminum alloys, which still account for most FSW applications [1]. However, as argued by Brassington et al. [2], new possible applications involving titanium alloys can be conceived in the space industry due to their unique characteristics (i.e., high strength-to-weight ratio, excellent corrosion resistance, and compatibility with most propellants). Ti6Al4V is among the titanium alloys whose friction stir weldability has been studied the most. As reported by Mironov et al. [3], friction stir welding of this titanium alloy presents several issues, such as tool wear, the need for an inert-gas shield to minimize oxidation, a cooling system to avoid overheating the tool and spindle system, and a welding machine capable of sustaining very high axial forces (i.e., three times those required for aluminum alloys). An overview of the friction stir weldability of Ti6Al4V was proposed by Edwards and Ramulu [4], with welding speeds ranging between 65 and 100 mm/min and rotational speeds between 170 and 300 rpm. Additionally, in a follow-up study, they measured temperatures in the nugget zone (NZ) between 1000 and 1200 °C while welding Ti6Al4V sheets, above the β-transus temperature (980 °C) [5].
Concerning the Ti6Al4V microstructure induced by friction stir welding, Kitamura et al. [6] found a microstructure of equiaxed and elongated grains in the NZ when the temperature is below the β-transus point. Instead, when the temperature in the NZ overcomes the β-transus temperature, the microstructure is fully transformed, with a basket-weave lamellar α/β structure, as demonstrated by Su et al. [7]. Wu et al. [8] focused on the final microstructure obtained in the thermo-mechanically affected zone (TMAZ). According to the authors, a complex process determines the final microstructure. First, the dislocations generated by the deformation arrange themselves, forming subgrain boundaries due to dynamic recovery. Then the high temperature induces dynamic recrystallization (DRX) by transforming low-angle boundaries into high-angle boundaries, generating newly refined grains. In the end, new dislocations are introduced in the refined recrystallized grains. The weakest zone of the joint is instead the heat affected zone (HAZ), as stated by Fall et al. [9]. The authors found a modified microstructure compared to the base material, with a bimodal structure of lamellar and equiaxed grains, justified by temperatures reaching above the β-transus temperature.
Research over the past decade has provided an overview of FSW joints of Ti6Al4V, at least in the traditional butt configuration. However, complex geometries, such as T-welds, are essential for structures in transportation, such as light ships and aircraft fuselages [10]. As reported by Tavares et al. [11], different types of T-welds can be realized. The main configurations are the following:

• T-lap joint, with two plates, one horizontal (skin) and one vertical (stringer) (Fig. 1a);
• T-butt joint with complete penetration, i.e., the horizontal sheets' lateral surfaces faying on the vertical one in the middle (Fig. 1b);
• T-butt joint without complete penetration, i.e., the horizontal sheets' lateral surfaces faying between them (Fig. 1c).

Tavares et al. [11] showed the excellent mechanical properties of T-butt joints with the stringer between the two horizontal slabs (Fig. 1b), whereas Zhao et al. [12] highlighted the critical issues in the T-joint configuration: the support's radius fillet and the choice of tool geometry. Both aspects play a role in kissing bond defects at the radius between the skin and the stringer, affecting the joint's mechanical strength. Derazkola et al. [13] pointed out the importance of the tilt angle in friction stir welding in the T configuration, identifying the optimal value to maximize mechanical properties at 2°. However, while we are beginning to have relevant information on friction stir welding T-joints of aluminum alloys, more research still needs to be conducted on T-welds of titanium alloys. The authors acknowledge two research groups working on this issue in the past years, whose first papers were published in 2020. Su et al. [14] first studied the weldability of Ti-4Al-0.005B alloy in the T-butt joint configuration with complete penetration, performing two welds along the two interfaces and obtaining a defect-free joint. In their second work, Su et al. [15] compared the mechanical properties of joints obtained by welding either both interfaces or only once, centering the tool at the mid-thickness of the vertical slab. The authors achieved sound welds with both strategies, reaching good tensile strength, while the elongation was always around 2% against the 20% reached in the BM. The welding strategy affected the fracture position, with cracks occurring at the corner for single-weld and at the HAZ for double-weld T-joints. Higher mechanical properties were found with the single-weld strategy. The second research group, Campanella et al. [16], began work instead on T-lap welding of the Ti6Al4V alloy. In their first work, they demonstrated the feasibility of T-lap joints in this alloy through friction stir welding, which until then had always been welded by melting-based welding technologies. In their subsequent work [17], the authors proposed a numerical model to simulate the final microstructure of the T-joint using the same tests already carried out in the previous work.
In this investigation, for the first time, Ti6Al4V T-joints with complete penetration are manufactured to assess their friction stir weldability and mechanical properties. Two tools and different combinations of welding parameters are employed to study their influence on the joint quality (i.e., defectiveness and mechanical strength). Thus, the goal is to understand the microstructure and mechanical strength of the joints and how their weldability is affected by the welding conditions and tools, improving the state-of-the-art in FSW in more complex geometry welds of Ti6Al4V alloy to identify new potential applications.
Friction stir welding
Three-part friction stir welded T-joints with complete penetration are produced on a dedicated FSW machine, an MTS ISTIR PDS. The welds, 140 mm long along the rolling direction, are executed on Ti6Al4V sheets. The alloy's chemical composition and the mechanical properties obtained by characterizing the base material with three tensile tests and five HV0.3 hardness indentations are listed in Table 1. Figure 2a displays the base material microstructure, consisting of β phases placed along the α phases' grain boundaries. Two types of frustum-shaped cobalt-based alloy tools are used [18]. The pins are featureless, with tapered profiles that are cylindrical and triangular with rounded edges, while the shoulder is concave with an angle of 5°. Three welding conditions are tested for both tools, carrying out two welds for each configuration. Figure 2b displays the tools, their geometrical information, and the employed welding parameters.
The mild steel support features rounded corner fillets in the fixture to ensure good formability of the corners in T-joints. The radius of the rounded edges is fixed at 1 mm. A ZrO2 coating is applied on the support surface to avoid sticking between the titanium alloy sheets and the steel support due to the locally high temperature possibly reached during the process. Additionally, argon protects the welding zone during stirring to avoid oxidation of the weld seam.
Welds characterization
The welds' strength is characterized by tensile tests along the skin and by Vickers hardness measurements. An Instron electromechanical testing machine equipped with a 100-kN load cell is used for uniaxial tensile tests at room temperature with a cross-head speed of 1 mm/min. Subsize specimens are tested according to ASTM E8M [19]. From each T-joint, three tensile samples and one sample for hardness and microstructure analysis are obtained through water-jet cutting, as displayed in Fig. 4. Five tensile tests are performed for each configuration. The indentation pattern in the weld cross-section is displayed in Fig. 4 and is performed on the advancing and retreating sides. After polishing and etching with Kroll's reagent, the same samples are used for microstructural observation. The etched samples are inspected using an optical microscope (Olympus PMG3) for microstructural detail and a digital microscope (Keyence VHX6000) for macroscopic observations. One specimen's microstructure is characterized using a Zeiss Ultra Plus FESEM microscope equipped with an X-Max Oxford Instruments system and an X-ray silicon drift detector (SDD) of 20 mm² to acquire high-magnification micrographs (5000X to 25000X) and to perform semiquantitative analysis of the chemical composition by energy dispersive spectroscopy (EDS). For tool wear analysis, 3D profiles of the tools (new and after welding) and of the weld exit holes are rebuilt with an Alicona Infinite Focus 5 G. The tools' profiles are analyzed with a laser scanner (Riftek model RF627) to automatically compare the volumes of the as-manufactured and worn tools.
Results
The feasibility of Ti6Al4V friction stir T-welds is presented in the following subsections. First, tool wear is studied to provide insight into its evolution and preferential sites using two Co-based tools. Afterwards, the weld quality is assessed by analyzing the macrographs to identify the weldability window. Mechanical properties and fractography of the samples are then discussed to establish the strength achievable when friction stir welding Ti6Al4V complex geometries. In the end, the microstructure of one selected sample is analyzed to evaluate the microstructural modifications in the different zones and to infer the maximum temperatures reached.
Tool wear
Among the problems linked to friction stir welding of titanium alloys, tool wear is one of the most critical due to its severity and rapidity, as reported by Wang et al. [20]. Materials able to resist the thermomechanical stresses developed during FSW of titanium are often costly, difficult to machine, and wear rapidly [21]. The tools used in this work are made of a Co-based alloy. They are easily machinable and could represent a cost-effective FSW tool material solution for high-strength alloys such as titanium [22]. However, there is little information regarding the wear of this type of tool during FSW of titanium alloys. Park et al. [23] analyzed cobalt-based tool wear during friction stir welding of Ti6Al4V alloy. According to the authors, these tools are cheap, easily castable, and machinable while retaining good mechanical properties above 1000 °C. At the same time, the wear rate is high when welding titanium alloys due to intermittent sticking on the tool surfaces, causing adhesion wear. Recently, Du et al. [24] characterized the wear mechanisms of a Co-based tool when friction stir welding TA5 alloy. The flat pin tip wears mainly by abrasion, while at the pin root and shoulder, wear is governed by adhesion and diffusion.
A qualitative analysis of tool wear is given by following the evolution of the weld exit hole, inferring tool wear by comparing the traces left by the pin when retracted between welds. Changes in the tool shape are then revealed by the mark left by the tool. Figure 5a compares a new tool A and the same tool after 720 mm of welds.
From the top view, the complete transformation of the pin features is remarkable. The surface of the pin tip is significantly reduced after the welds. Additionally, the tapered lateral surface profile of the pin is difficult to recognize due to wear, with its shape approaching a straight cylinder. The important parameter for the next steps is h_o, represented in Fig. 5b. This height represents the distance between the pin tip and the tangent to the shoulder concavity. The h_o parameter is selected because the same quantity can be found by analyzing the tool exit-hole profiles, as displayed in Fig. 5c and d. By rebuilding the 3D profiles of the mark left by the tool after retracting, including the hole and the shoulder-workpiece interaction zone, it is possible to obtain the parameter h_WC (Fig. 5d). It represents the same quantity as h_o after the distances traveled by the tool. The height h_WC is calculated for several tool exit holes to follow the wear, and its evolution for both tools is displayed in Fig. 5e. While the height h_o for both as-manufactured tools is around 1.25 mm, it increases fast and continuously with the millimeters traveled, reaching 1.6 mm after 720 mm of welds, a growth of more than 30%. Considering that the pin's tip is less sensitive to wear, as visible in Fig. 5b, this increase is mainly attributable to shoulder consumption. The wear drastically changes between shoulder and pin. In the shoulder, the maximum height difference measured between the new and worn tool ranges between 0.5 and 0.6 mm. For the pin, both the bottom and lateral surfaces should be considered. On the bottom surface, the wear is low in the center, with values below 0.1 mm, and it increases in the outer part towards the edges. Additionally, while for tool A (Fig. 6a) the lateral wear of the pin is significant, i.e., 0.15 mm, the same does not apply to tool B (Fig. 6b). The difference is due to the different pin profiles. The rounded trigonal edges characterizing the as-manufactured tool A disappear during welding due to wear, explaining the 0.15 mm of wear. Conversely, the lateral surface of the tapered cylindrical pin B is less sensitive to wear, as it has no stress concentration points (i.e., edges). Wear of Co-based FSW tools when welding Ti6Al4V is prominent and quick, and its preferred site is the inner part of the shoulder. Pin features are mechanical and thermal stress concentration spots that wear quickly, completely modifying the initial tool shape.
Weld quality
Weld cross-sections are displayed in Fig. 7. Independently of the tool and the welding parameters, no flow-related defects are found. The thermo-mechanical conditions led to a proper material flow, guaranteeing the correct filling of the cavity left by the tool while advancing.
Hence, using a tilt angle of 2°, welding speeds between 60 and 100 mm/min, and rotational speeds between 150 and 200 rpm, sound Ti6Al4V T-joints are obtained. The weldability window is valid for both tool shapes, tapered cylindrical and trigonal with rounded edges.
The features of the weld cross-sections are shown in Fig. 8. Due to the severe tool wear occurring during each weld, black traces can be observed in the stirred zone, as illustrated in Fig. 8; they are present to a different extent in all the cross-sections (Fig. 7). The traces can be found at the shoulder-workpiece interface and on the advancing side in the transition between the shoulder affected zone (SAZ) and the NZ. In some welding configurations, those traces can also be found in the nugget zone (Fig. 7b-d and f). EDS analysis within the NZ of sample A-WC2 is carried out to clarify the nature of the observed dark traces. Figure 9a shows the macro area of the EDS spot (red oval solid line), while Fig. 9b displays the exact point corresponding to the spectrum (red cross). Observing the elements revealed by the analysis in Fig. 9c, in addition to titanium and the two major alloying elements, i.e., aluminum and vanadium, four extra elements appeared: tungsten (2%), nickel (1.9%), cobalt (1.3%), and chromium (0.5%). All of them are contained within the Co-based alloy FSW tool according to the patent [18].
Hence, the four extra chemical components prove the presence of tool particles within the welds. When analyzing the cross-sections, the variations in the dark traces observed in the stirred zone are explained by the welding parameter combinations leading to different amounts of tool wear and material flow. Another critical aspect to highlight is the thinning of the weld cross-section due to the shoulder forging action. The thinning gradually evolves from the limit of the weld seam to the center of the nugget zone. By measuring the thickness of the sheet at the middle point between those two extremes, as illustrated in Fig. 8, it is possible to estimate the thinning at 0.1 mm. Similar thinning was observed for all the joints. Concerning the weld corners, two aspects are worth noting. Firstly, in all joints, both corners are not perfectly smooth and rounded because of the slight lateral movement of the horizontal sheets during the process due to unsuitable lateral clamping. Secondly, kissing bond defects are observed on both sides of the weld, as highlighted by the dotted and dashed red squares in Fig. 8. The kissing bond defect has been well addressed by Sato et al. [25]; in friction stir butt welding it represents a partial remnant of the unwelded butt surface below the stir zone, attributed to insufficient plunging of the welding tool during FSW. As assessed by Tavares et al. [11], these defects can occur at the corners of friction stir T-welds because of insufficient pressure applied by the tool, due to insufficient plunge depth or an unsuitable pin-radius fillet combination. In the T-joints presented, kissing bonds on both sides always occurred, independently of welding parameters and tools. This result suggests an unsatisfactory combination of pin geometry and support radius leading to partially unwelded surfaces. The main difference between the two sides is the notably thicker oxide layer on the retreating side. The asymmetry is explainable by the tool shifting a few tenths of a millimeter towards the advancing side, as observable from the cross-section.
Hence, while the main problem of internal voids is avoided, kissing bonds at the corners and cross-section thinning of the joints occur during welding. Also, tool particles are observed in the joint, starting from the shoulder-workpiece interface and spreading within the nugget zone.
Mechanical properties
Vickers hardness measurements across the welds are shown in Fig. 10a and b, for tools A and B, respectively.
Analyzing the profiles, the low influence of welding configurations and tools on the hardness is worth noting. As reported by Kitamura et al. [6], the main factors affecting hardness in Ti6Al4V friction stir welds are the thermal cycle, intended as cooling rate and peak temperature. These two elements are affected by the welding and rotational speeds. In the welding configurations employed, the range of welding parameters is limited to minimize tool wear and maximize tool life. Therefore, the similarities in hardness arise from very similar thermal cycles, guaranteeing similar microstructures in the different zones for all the welds. The higher hardness values are found in the SZ, the narrow decreasing zone may be linked to the TMAZ, and in the HAZ the hardness is slightly lower than or equal to that of the BM. The temperature may explain this result, with the peak temperature in the HAZ below the β-transus temperature and a resulting microstructure similar to the BM. Additionally, the asymmetrical hardness in the heat affected zones has already been reported by different authors [26,27] and may be justified by differences in material plastic deformation and peak temperature between the AS and RS.
The mechanical strength of the T-joints is analyzed through engineering stress-strain curves, estimating the ultimate tensile strength (UTS), yield strength (Y_0.2%), and elongation at break (ε_b). The engineering stress is calculated using the area obtained by multiplying the base material thickness of 1.2 mm by the width resulting from the water-jet cutting. Engineering stress-strain curves for the different T-joints, organized by tool and compared to the BM, are shown in Fig. 11, displaying just one of the five tensile tests performed for each configuration.
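A sketch of how UTS, the 0.2% offset yield strength, and the elongation at break can be read off a measured engineering stress-strain curve is given below; the assumed elastic modulus and the simple crossing search are illustrative, not the procedure of the testing software.

```python
import numpy as np

def tensile_metrics(strain, stress, e_mod=110e3):
    """UTS (MPa), 0.2%-offset yield strength (MPa) and elongation at break (%)
    from an engineering stress-strain curve. e_mod is an assumed elastic
    modulus for Ti6Al4V (about 110 GPa); the offset construction is standard."""
    uts = float(stress.max())
    offset_line = e_mod * (strain - 0.002)          # 0.2% offset line
    crossings = np.where(stress <= offset_line)[0]  # first point below the line
    y_02 = float(stress[crossings[0]]) if crossings.size else float("nan")
    elongation = float(strain[-1] * 100.0)          # last point taken as break
    return uts, y_02, elongation

# Synthetic curve purely for illustration (elastic ramp plus a flat plastic part)
strain = np.linspace(0.0, 0.02, 400)
stress = np.minimum(110e3 * strain, 950.0)
print(tensile_metrics(strain, stress))
```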
On the one hand, the influence of the welding configuration on the joints' mechanical properties is negligible, with the curves almost overlapping, as displayed in Fig. 11a and b. On the other hand, differences between the curves in the two figures can be observed, with the joints performed with tool B reaching slightly higher maximum stresses and deformations. While UTS and Y approach, to different extents, the base material values, the elongation at break for all joints is low, not reaching 2%. Hence, the T-joints can reach high UTS and Y values while breaking as soon as they enter the plastic regime, as illustrated in Fig. 11. The remarkable UTS and Y result from a proper material flow induced by the pin, avoiding internal defects and consequent premature failure of the samples. Conversely, the surprisingly low deformation reached at failure may be explained by the flaws highlighted in the previous sections, such as the kissing bond at the corners and the cross-section thinning. The mean values and standard deviations of the T-joints' mechanical properties, according to tool and welding configuration, are summarized in Table 2. Considering the standard deviations, the negligible differences in mechanical properties obtained with different welding parameters are worth noting. Joints executed with tool B are more stable than those obtained with tool A, for which WC2 maximizes the mechanical properties.
The critical issue raised by analyzing the T-joint strength is the low elongation at break reached in all configurations. The T-joint fractography may elucidate this aspect. Despite very similar mechanical properties and hardness, two fracture types characterized the T-joints. The joints obtained through WC1 and WC2, independently of the tool, all failed in the heat affected zone on the retreating side. Instead, only 50% of the B-WC3 joints followed the same failure path. To clarify the fracture paths, frames before and after fracture during the tensile tests and the longitudinal strain field before fracture are shown in Fig. 12a-c. The strain is concentrated on the retreating side before fracture, as shown in Fig. 12b, and the failure occurs in the HAZ below the shoulder. The second fracture type occurred at the kissing bond on the retreating side. All T-joints welded in WC3 with tool A failed at the kissing bond, against 50% of B-WC3. Frames before and after fracture and the strain field before fracture are shown in Fig. 12d-f. In this case, the fracture occurs at the corner even if the longitudinal strain distribution is similar to Fig. 12b, i.e., concentrated in the HAZ on the RS. Hence, the joints' behavior under traction is similar independently of the welding configuration and tool, but in WC3 the fracture path can change. Differences in the fracture behavior of the samples can be clarified by analyzing the fracture surfaces through SEM observations. Figure 13a shows the fractured surfaces of specimen B-WC2, which failed between the shoulder affected zone (SAZ), i.e., the zone affected mainly by the shoulder, and the HAZ. Three distinct fracture surfaces can be distinguished, in agreement with [15] on the fracture behavior of Ti-4Al-0.005B. They can be divided into shear and flat fracture zones and are illustrated in Fig. 13a.
The flat zone includes zone A, corresponding to the crack initiation zone located in the upper part of the joint. Zone B, the crack propagation area, is also included in the flat surfaces and is shared between the SAZ and the HAZ. Zone C represents the shear zone and corresponds to the rapid fracture zone, containing only the HAZ. Zones B and C are displayed in Fig. 13b and c, while details at higher magnification are illustrated in Fig. 13d and e. Both zones are characterized by deep dimples (red arrows), typical features of ductile fracture surfaces [28]. Dimple ridges and valleys are similar between zones B and C, while they increase in size in zone C. The size difference is justified by the microstructures characterizing the SAZ and the HAZ, with a smaller grain size in the SAZ due to the stirring and consequent grain refinement. Ridges and valleys are instead similar because of the significant plastic deformation undergone by the two zones due to strain localization. The macroscopic fracture surface of B-WC3, fractured in the kissing bond zone, is shown in Fig. 14a. The same fracture zones can be identified, but the path is inverted, starting at the bottom and ending at the upper surface. The fracture begins at the kissing bond (zone A), develops in the center (zone B), and ends in the upper part of the joint. The authors of [29] demonstrated the proportionality between grain size and dimple diameter. It is known that the stirring action of pin and shoulder in friction stir welding is responsible for a refined equiaxed microstructure in the stirred zone. Consequently, the smaller dimples found on the B-WC3 fracture surfaces and in zone B of B-WC2 (Fig. 13b and d) are explained by the microstructural differences in grain size between the stirred zones (by pin and shoulder) and the heat affected zone. However, ridges and valleys are more pronounced, i.e., the dimples are deeper, on the fracture surfaces of the specimen that failed between the SAZ and HAZ. The difference in dimple depth is explained by the similar elongation at break and strain field before fracture displayed in Fig. 12, but different fracture path and zone. For sample B-WC3, while the strain localizes during the test in the weaker and thinner zone, i.e., the heat affected zone, the crack, due to the kissing bond defects, suddenly starts to propagate at the kissing bond tip instead of failure occurring in the HAZ by necking and void coalescence in the dimpled zone. Hence, the fracture in B-WC3 differs from that in B-WC2, with shallow dimples because the strain localized in the HAZ during the tensile test, while the fracture propagated at the kissing bond tip, characterized by lower plastic deformation in the failed zone. The deeper dimples in Fig. 13d and e, due to the substantial amount of plastic deformation occurring before final separation [30], against the small and shallow dimples found in Fig. 14d and c, confirm this hypothesis. Therefore, although there are differences in fracture surfaces between samples, the elongations at break are similar due to the initial strain localization in the heat affected zone, which in certain configurations is then overtaken by sudden crack propagation at the kissing bond tip.
Microstructure
To infer the microstructure in the different zones composing the Ti6Al4V T-joints, SEM observations of the base material (BM), heat affected zone (HAZ), and nugget zone (NZ) of sample A-WC2 are performed. Two observations at different magnifications for each zone are shown in Fig. 15. The base material is characterized by a bimodal microstructure of equiaxed α-phase and intergranular β-phase, highlighted in Fig. 15d with red and black arrows, respectively.
Approaching the NZ, within the heat affected zone the microstructure begins to transform due to the thermal cycle undergone by the base material. The impact of the rapid heating and subsequent cooling induced by friction stir welding on the advancing side (AS) is revealed in Fig. 15b and e. The HAZ microstructure still resembles the BM, but some modifications have occurred. Simultaneously, the intergranular β-phases grow (black solid arrow), and intragranular β-phases (black dotted arrow) appear in the α matrix due to the temperature reached during the process. Those observations suggest that the maximum temperature reached within the HAZ did not overcome the β-transus temperature. Otherwise, the microstructure in the HAZ should have presented both lamellar α/β grains and equiaxed α-phases, as reported by Li et al. [31]. However, the increased size of the intergranular β and the higher density of intragranular β phases in the α grains suggest a temperature in the α-β transition range, with a consequent percentage increase of β relative to α phases. Additionally, the slight differences in hardness values observed between the HAZ on the AS and the BM (Fig. 10) are justified by the growth of the intergranular β phases in the HAZ, leading to the few-point reduction observed.
Focusing on the NZ, as always occurs in FSW, the microstructure is wholly transformed compared to the other zones composing the joints due to the high temperature and strain rate reached during the stirring. Figure 15c and f display the transformed microstructure, characterized by equiaxed (EA) and elongated (E) grains. As discussed for the HAZ, the absence of the lamellar structure suggests that the peak temperature in the nugget zone never exceeded the β-transus point; the findings of Kitamura et al. [6] reinforce this hypothesis. The stir-zone microstructure obtained below the β-transus point results from the dynamic recrystallization achieved in friction stir welding, i.e., from the high temperature and strain rate in the stirred zone, which generate the refined and equiaxed grain structure.
Discussion
Tool wear analysis revealed a massive reduction in the shoulder height even after less than a meter of welding. However, the worn tools did not negatively affect the defectivity or the mechanical properties of the FSW joints. The tools ensured correct material mixing in all samples, avoiding internal voids in the selected welding configurations, and the static mechanical properties are not influenced by the tool wear state. These positive results prove the reliability of worn FSW tools, at least for the wear levels encountered in our tests. Of course, defining a critical limit beyond which the weld standards are no longer met (i.e., internal voids appear or mechanical properties decrease) would be helpful in future works. Furthermore, wear should be quantified to understand the impact of welding on the base material employed for our tools. For this purpose, the tool volumes before and after welding are quantified: the difference in volume divided by the welded distance provides the wear rate. To simplify the volume estimation, we assume axisymmetric wear, i.e., that the whole surface wears evenly. The pin of tool B is axisymmetric, while that of tool A is not, because of its trigonal external surface, which makes the volume estimation not straightforward. However, most wear is localized on the shoulder; hence, we neglect the pins' surfaces when comparing wear, bypassing the non-axisymmetry of pin A. The areas considered for the volume calculation are highlighted in red and orange in Fig. 16a. The wear rate is similar for both tools, considering the same base material, weld conditions, and millimeters traveled. Based on the bar graph displayed in Fig. 16b, the average shoulder wear rate between the two tools is around 0.045 mm³ per millimeter of welding when friction stir welding Ti6Al4V with a Co-based alloy tool. Such a high wear rate might become a financial problem, i.e., the unit cost of the tool, and a time-wise problem, i.e., frequent tool changes, if FSW were adopted to weld titanium T-joints in mass production.
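To illustrate the wear-rate estimate just described, the following minimal sketch integrates an axisymmetric shoulder height profile before and after welding and divides the lost volume by the welded distance. The profiles and radii are hypothetical placeholders; only the axisymmetric assumption, the volume-difference-per-distance logic, and the 720 mm welded length come from the text.

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal integration (avoids any dependence on a specific NumPy version)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def shoulder_volume(r, h):
    """Volume of the axisymmetric solid with height profile h(r):
    V = integral of 2*pi*r*h(r) dr (shell integration)."""
    return trapezoid(2.0 * np.pi * r * h, r)

r = np.linspace(3.0, 9.0, 200)               # hypothetical shoulder radii (mm)
h_new = np.full_like(r, 1.0)                 # hypothetical unworn height profile (mm)
h_worn = h_new - 0.5 * np.exp(-(r - r[0]))   # hypothetical ~0.5 mm loss near the pin root

welded_mm = 720.0                            # welded distance reported in the text
wear_rate = (shoulder_volume(r, h_new) - shoulder_volume(r, h_worn)) / welded_mm
print(f"shoulder wear rate ≈ {wear_rate:.3f} mm^3 per mm of weld")
```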
Friction stir welding of T-joints with complete penetration, obtained with the investigated combinations of process parameters and tools, does not present critical internal voids. Consequently, welding speeds between 60 and 100 mm/min and rotational speeds between 150 and 200 rpm are suitable for avoiding internal defects related to material flow. To quantify the mechanical strength of the joints, it is helpful to compare it directly to the base material through the efficiency parameter, i.e., the ratio between the joint and base material (BM) mechanical properties. Because of the similar values obtained with the different welding configurations, and to get an overall view of the joints' strength, the efficiencies are calculated by averaging the mean values of UTS, YS, and elongation at break over the welding configurations. The efficiencies are reported in Table 2. The tapered cylindrical pin (tool B) gives better mechanical properties, with efficiencies of 94% and 89% for UTS and YS, respectively, while the maximum efficiencies of the tapered trigonal rounded-edges pin are 88% and 86%. Nevertheless, the hardness measurements are very similar between all welding configurations, as displayed in Fig. 11, and do not corroborate the tensile testing results. The discrepancies in the UTS and YS average values can be explained by the fact that they are calculated using the theoretical section of 1.2 mm of the base material. However, when observing the cross-sections in Fig. 7, non-uniform thicknesses are evident due to the arc pattern generated by the shoulder-workpiece interaction; the variations can therefore be linked to the differences between the true and theoretical sections. Nonetheless, UTS and YS efficiencies consistently above 80%, computed with the conservative thickness that does not account for the thinning, are a comforting result for this first comprehensive report on the mechanical properties of complex friction stir welds of Ti6Al4V. The efficiencies of elongation at break are very low for both tools, staying below 20% of the base material. The samples failed either in the HAZ or at the corner, always on the RS. The good mechanical properties, UTS and YS, comparable with the base material, are confirmed by the microstructural observations in the HAZ and NZ: the heat affected zone presents a microstructure similar to the base material, with the growth of intergranular β phases, while the NZ is characterized by refined equiaxed and elongated grains. The microstructural observations also justify the hardness measurements, with hardness values in the HAZ very similar to the BM and higher values reached in the NZ due to the significantly smaller grain size, according to the Hall-Petch relation (σ_y = σ_0 + k_y·d^(−1/2), i.e., strength increases as the grain size d decreases). The microstructural modifications induced by friction stir welding suggest that the β-transus temperature was never exceeded during the tests, avoiding the typical lamellar structure reported by Liu et al. [32]. As demonstrated by Kitamura et al. [6], the best mechanical properties of Ti6Al4V are achieved when the stirred and heat affected zone temperatures remain below the β-transus point, avoiding the formation of the lamellar structure. Similar mechanical properties and hardness measurements are achieved in the three welding configurations. Hence, the weldability of friction stir T-joints below the transition temperature has been assessed, and the mechanical properties can reach 94% and 89% of the base material for UTS and YS, respectively.
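A minimal sketch of the efficiency computation described above, assuming it is simply the per-configuration mean of each property, averaged and normalized by the base-material value. All numerical values below are hypothetical placeholders, not the paper's data.

```python
# Hypothetical base-material reference values.
BM = {"UTS_MPa": 1000.0, "YS_MPa": 950.0, "eb_pct": 14.0}

# Hypothetical mean joint values per welding configuration for one tool.
joint_means = {
    "WC1": {"UTS_MPa": 930.0, "YS_MPa": 840.0, "eb_pct": 2.1},
    "WC2": {"UTS_MPa": 945.0, "YS_MPa": 850.0, "eb_pct": 2.0},
    "WC3": {"UTS_MPa": 940.0, "YS_MPa": 845.0, "eb_pct": 2.2},
}

for prop in BM:
    # Average the per-configuration means, then normalize by the BM value.
    avg = sum(cfg[prop] for cfg in joint_means.values()) / len(joint_means)
    efficiency = 100.0 * avg / BM[prop]
    print(f"{prop}: efficiency = {efficiency:.1f}% of base material")
```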
The critical issues are the low elongation at break and the strain concentration and fracture on the RS. Hardness measurements show negligible differences between RS and AS and cannot explain why the fracture always occurs on the RS. Low elongations at break, below 20% of the BM, are also reported by Su et al. [15], with failures similar to the ones we encountered, i.e., in the HAZ. According to the authors, the rupture in the HAZ started at the upper surface of the T-joints, which is characterized by an arc pattern considered a stress concentration feature and a likely point of crack initiation. The arc pattern can be observed in our welds in Fig. 7 and linked to the thinning of the section. Liu et al. [27] also encountered elongations at break comparable to our results when friction stir welding Ti6Al4V in the butt configuration. The authors explained the early rupture through the section thinning and the consequent yielding only in the zone below the shoulder, with the deformation not evenly distributed along the samples. Hence, as already reported by other authors, the joints mainly failed in the heat affected zone, below the shoulder-workpiece interaction zone, because of the section thinning: the smaller section in the center of the weld leads to strain concentration in this area instead of an even distribution along the whole sample length. Additionally, all the samples fractured on the retreating side in our tensile tests. To understand the strain concentration on the RS, highlighted in Fig. 12b and e, it is worth analyzing the upper surface of the joints in Fig. 17.
The shoulder forges the sheets' upper surface, generating a dissymmetrical arc pattern in the cross-sections. The arc shape of the weld seam is linked to the tilt angle used, i.e., 2°, and to the tool plunge depth. The non-symmetry, however, is associated with the increased flash formation on the retreating side, detected in all welds. The excessive flash on the RS is driven by non-optimal material flow below the shoulder: the shoulder can only partially keep the material beneath its concave surface, resulting in material ejection on the RS rather than its containment and release to ensure a uniform and symmetrical surface between AS and RS. Additionally, tool particles (Co, W, Ni and Cr) mixed with the workpiece are visible in the stirred zone and deposited on the upper surface, mainly on the AS. These particles, released by tool wear cyclically at the end of each rotation, could favour the dissymmetry. Therefore, the excessive flash on the RS and the tool particles concentrated on the upper surface of the AS lead to different true sections between the two sides of the friction stir welds before the tensile tests even start. Consequently, a slightly smaller area on the RS might be the cause of the 100% rate of failure on the RS for the joints breaking in the HAZ. The cross-section heights are measured at different points on the AS and RS, between the nugget zone and the shoulder diameter mark, in the areas represented by the dashed double arrows in Fig. 17c, to identify the minimum height on both sides. The measurements revealed a difference (Δh = h_min,AS − h_min,RS) ranging from 0.04 to 0.10 mm between AS and RS, confirming that the minimum cross-section lies on the retreating side in all configurations.
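In practice, the Δh evaluation reduces to comparing the minimum measured heights on the two sides. A small sketch follows, with hypothetical height measurements; only the definition Δh = h_min,AS − h_min,RS comes from the text.

```python
# Hypothetical cross-section heights (mm) measured between the nugget zone
# and the shoulder diameter mark on each side; not the paper's data.
h_AS = [1.16, 1.14, 1.12, 1.15]   # advancing side
h_RS = [1.10, 1.07, 1.09, 1.11]   # retreating side

delta_h = min(h_AS) - min(h_RS)   # Δh = h_min,AS − h_min,RS
print(f"Δh = {delta_h:.2f} mm")   # positive Δh → thinner section on the RS
```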
To understand why some WC3 configurations failed at the corner, it is worth studying the evolution of the kissing bond defects with the welding parameters. Details of the kissing bond defects on the retreating side of the T-joint obtained with the trigonal pin are displayed in Fig. 18.
It is remarkable how the extension and direction of the defect evolve with the welding parameters. Based on the measurements illustrated in Fig. 18, on the one hand, WC3, i.e., the highest welding and rotational speeds, leads to the longest and most inclined oxide layer, a possible critical nucleation site for crack propagation during tensile tests. On the other hand, the hottest configuration, WC2 (highest rotational speed and lowest welding speed), leads to a shorter and almost vertical oxide layer. The welding parameters thus affect the extension and direction of the kissing bond defect. Similar results are obtained with tool B (tapered cylindrical pin), with thicker oxide layers on the retreating side and the welding parameters influencing their length and direction; WC3 gives the most critical kissing bond defect in length and orientation (i.e., towards 45°). Hence, the joints behave similarly under traction independently of the welding configuration and tool. Nevertheless, if the oxide layer is long, thick, and inclined towards the traction direction, the fracture path can change, with a sudden and instantaneous crack growth starting from the kissing bond defect on the RS and reaching the upper surface. The differences in fracture behavior have been shown through SEM observations in the crack propagation and rapid growth zones. Dimples in the samples fractured in the HAZ are large in diameter and deep, i.e., with a significant distance between ridges and valleys, as shown in Fig. 13. Contrarily, dimples in the sample fractured in the NZ (Fig. 14), with the crack propagating from the kissing bond tip, are small and shallow. The size differences are linked to the microstructure of the fractured zones, while the smaller extension of the ridges is caused by the smaller amount of plastic deformation undergone by the material [30] in the fractured zone, considering the previous strain concentration in the HAZ. This phenomenon explains why the samples present similar elongations at break and strain field distributions before fracture, even though they fracture in two different zones.
In conclusion, in the welding configurations tested in this work, independently of process parameters and tools, all the joints presented cross-section thinning and kissing bonds while avoiding internal defects. The chosen tilt angle and plunge depth induce the thinning, independently of the welding and rotational speeds. In addition to the section thinning, kissing bonds always occurred, and their extension and orientation are affected by the process parameters. A longer pin could have reduced or completely removed the oxide layer at the interface. However, given the steep vertical thermal gradient existing when friction stir welding Ti6Al4V [33], a longer pin could wear quickly because of the more resistant material stirred at the bottom, soon leading to a shorter pin and the same problem again. Hence, finding the optimal FSW T-welding setup is undoubtedly challenging due to the several factors affecting the final result. Although the kissing bonds and section thinning encountered in all configurations lead to poor elongation at break (∼15% of BM), in the best case the joints' yield strength and ultimate tensile strength reached 87% and 96% of the base material, respectively. These excellent mechanical properties suggest that, by solving the kissing bond and thinning problems, complex assemblies that are certainly competitive with melting-based welding technologies may be achieved, taking advantage of all the environmental and automation benefits of friction stir welding.
Conclusions
This work demonstrated the friction stir weldability of Ti6Al4V T-joints with complete penetration. Based on the present results, the following conclusions are drawn.
1. The investigated weldability window (rotational speed between 150 and 200 rpm and welding speed between 60 and 100 mm/min) avoided critical internal voids, independently of the pin shape. However, minor flaws, such as small kissing bonds at the corners and section thinning, are observed in all joints.
2. The similar hardness distributions observed in the different configurations suggest a uniform microstructure in the investigated weldability window. Based on the microstructural observations, the temperatures in the various zones remained below the β-transus point.
3. Tensile tests have demonstrated the outstanding strength of Ti6Al4V friction stir T-welds. They reached very high mechanical properties, with best-case efficiencies of 96% and 87% for UTS and YS, respectively. However, the critical problem of an elongation at break not higher than 15% of the base material is established.
4. The samples always showed strain concentration in the HAZ on the retreating side due to cross-section thinning, leading to early fracture in the HAZ. Instead, some joints with the highest rotational and welding speeds are characterized by a different fracture path, starting from the kissing bond because of a more pronounced defect (i.e., longer and inclined towards the traction direction). Hence, both factors, pronounced kissing bonds and section thinning on one side, are responsible for the low elongation at break. They can be avoided by adopting the correct plunge depth and combination of pin and support fillet radius.
5. Co-based alloy tool wear in Ti6Al4V friction stir welding has proved to be a critical issue, with relevant amounts of Co, W, Ni and Cr found within the nugget zone. The wear rate is very high, estimated at around 0.045 mm³ per millimeter of welding, with a reduction in the shoulder height of about 0.5 mm after 720 mm of welding in its inner part, closer to the pin root. Nevertheless, apart from the tool particles within the weld cross-section, the tool modifications induced by wear affected neither the weld quality nor the mechanical properties.
"Materials Science"
] |