The relationship between project management and digital transformation: Systematic literature review
ABSTRACT Purpose: This article aims to investigate the relationship between project management (PM) and digital transformation (DT) in organizations. Originality/value: This article contributes to expanding the knowledge of the relationship between PM and DT, indicating that PM and its different approaches are used strategically to enable DT implementation in organizations. In addition, it is evidenced that DT demands individuals with technical and behavioral competencies to work in innovative and rapid organizational, cultural, and technological contexts arising from adopting new digital technologies. Design/methodology/approach: The research is characterized as exploratory with a qualitative approach. The methodology adopted was the systematic literature review and sought to understand the relationship and convergence between PM and DT. The research was carried out broadly, and the articles were selected on the Web of Science, Scopus, and Google Scholar bases, forming the analysis corpus with 104 articles published from 2015 to 2020. Findings: The results converged in the composition of four factors: competencies; strategy; digital technologies; and portfolio, programs, and projects, demonstrating the evolutionary and adaptive capacity of PM to support major changes such as DT.
INTRODUCTION
Digital transformation (DT) presents several challenges for organizations and is considered a critical issue (Fitzgerald et al., 2014). However, its characteristics and how they contribute to the practice of performing DT at the organizational level are still not widely understood (Berghaus & Back, 2016; Chanias et al., 2019). Besides, DT is a subject that arouses the community's interest in strategic issues related to business digitalization, which require leadership involvement and support to make changes, promote capabilities, and leverage competitiveness and innovation amid risks (Kane et al., 2015).
Research conducted by Bilgeri et al. (2017) identified six main reasons why DT would affect the overall organizational structure in large manufacturing companies. The most highlighted reason lies in executives' uncertainty about where and how to allocate and align digital capabilities within their organizational structures. Ismail Abdelaal et al. (2017) also approached the factors that challenge leaders to perform DT from a strategic and managerial point of view. They underlined factors such as lack of clarity, sense of urgency, vision, and direction, organizational aspects related to workers' attitudes, legacy technology, politics, and other factors that challenge leaders to perform DT.
Company dynamics are often affected by the changes arising from DT. The convergence and adoption of new digital technologies address performance increases, competitive advantages, and the transformation of various organizational aspects such as the business model, the customer experience, the operational model and its processes, and social factors related to competencies, talents, culture, and the value system (Downes & Nunes, 2013). This context supports a growing research interest that highlights the use of projects to generate organizational changes (Crawford & Nahmias, 2010). Organizations must use projects to deliver change and bring success (Parker et al., 2013), incorporating elements that require effective change management and influential leadership to successfully implement projects and initiatives (Müller & Jugdev, 2012).
The essence of project management (PM) is to support the execution of an organization's strategy to achieve results beyond its competitors (Milosevic & Srivannaboon, 2006). Svejvig and Andersen (2015) show the importance of PM for topics related to strategic changes, innovation to generate competitive advantages, resource optimization, and increased efficiency. These needs suggest the PM's evolution as a discipline supporting organizational, financial, technological, scientific, and social needs (Geraldi & Söderlund, 2018). Due to DT being broad and demanding the involvement of the entire organization (Gimpel et al., 2018), the use of projects could be necessary to change organizational and technological aspects as well as redefine the value proposition (Wessel et al., 2021). In the context of DT, not all PM practices and approaches are adequate (Shenhar et al., 2001). This makes it necessary to examine PM's existing methods, focusing on DT, to recommend the most important contributions, suiting them to the diversity of projects that require different management practices (Henriette et al., 2015).
Therefore, considering that DT associates aspects beyond the use of technology and includes organizational and strategic changes during DT projects, this study investigates the relationship between PM and DT in organizations. The following research question is proposed: • What is the relationship between project management and digital transformation in organizations?
To achieve the proposed objective and answer the research question, we performed a systematic literature review (SLR) to explore this relationship and identify the theme's most treated subjects, current issues, and future perspectives.
METHODOLOGICAL PROCEDURES
This article aims to understand the relationship between PM and DT in organizations based on information acquired through literature mapping. According to Kitchenham and Charters (2007), literature mapping uses a broad review of studies already conducted on a given topic to raise evidence and help answer a specific research question. This study uses a qualitative approach of exploratory-descriptive type to elaborate on the research corpus and bring up the patterns discovered in the studies surveyed (Tranfield et al., 2003).
This research followed the stages of the protocol prescribed by Pollock and Berge (2018) to perform an SLR, being: 1. clarify research goals and objectives; 2. seek relevant research; 3. collect data; 4. assess the quality of studies; 5. synthesize the evidence; 6. interpret the findings. The first stage, guided by the research question, aims to answer the question: "What is the relationship between project management and digital transformation in organizations?". Pollock and Berge (2018) prescribe a flow to assemble the final corpus of analysis, which consists of four phases: 1. identification; 2. screening; 3. eligibility; and 4. included (Figure 1). The identification phase, which comprises stage two of the protocol, used the electronic databases Scopus, Web of Science, and Google Scholar to seek relevant research. The definition process of the string used other terms associated with DT and PM, but the results showed few relevant studies. The final form of the string was then a simple structure extended using an asterisk text mask (*). It was composed of the terms "Digit* transformat*" AND "project* manag*" to perform the search. We considered the operator "AND" and the asterisk text mask (*) to give amplitude to the investigation and bring up the junction of the terms and variability sought by the string.
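To make the wildcard mechanics concrete, the sketch below shows how such an asterisk mask could be screened against titles or abstracts; it is an illustrative Python fragment, not part of the original protocol, and the pattern and function names are hypothetical.

```python
import re

# The string "Digit* transformat*" AND "project* manag*" is approximated here by
# two regular expressions that must both match the same record.
dt_pattern = re.compile(r"\bdigit\w*\s+transformat\w*", re.IGNORECASE)
pm_pattern = re.compile(r"\bproject\w*\s+manag\w*", re.IGNORECASE)

def matches_search_string(text: str) -> bool:
    return bool(dt_pattern.search(text)) and bool(pm_pattern.search(text))

print(matches_search_string("Digital transformation demands agile project management"))  # True
print(matches_search_string("Digitization of public services"))                          # False
```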
The Scopus, Web of Science, and Google Scholar databases were chosen to collect data (stage three); the Publish or Perish software was used to operationalize data collection in the Google Scholar database. The search for articles was conducted in September 2020.
Since it is a topic of recent interest, no time restrictions were used; the search returned 1,152 articles without restricting the publication period. The objective was to gather as many articles as possible to form the analysis corpus. Additionally, articles not written in English were excluded from the corpus; thus, 751 articles remained. We included conference papers and academic journal articles, as the subject is a topic of recent publications. The objective was to build an analysis corpus with the largest number of studies on the topic. It is worth noting that conference articles were covered by the inclusion and exclusion criteria highlighted in the protocol of this literature review.
Figure 1 presents the steps followed for the final selection of articles that compose the corpus of analysis for this SLR and that used the orientations and stages shown in the work of Pollock and Berge (2018).
The fourth stage of the protocol comprises the screening and eligibility phases. The screening phase aims to delimit the study to the research purpose. It was performed by reading the titles and abstracts and removing the repeated articles from the corpus, leaving 204 articles. In the third phase, eligibility, a complete reading of each article was done in order to remove those papers whose content did not address the scope of the research, leaving a corpus formed by 104 articles.
The last phase, included, was a thorough reading of the 104 articles that form the final corpus of analysis, addressing the research objective of verifying PM and DT's relationship in organizations. The article's analysis used a Microsoft Excel spreadsheet to consolidate the findings, categorize them, and gather aspects to give meaning to the reviews, such as methodological procedures, objectives, research approach, results, limitations, and future studies. The activities carried out reflect the stages and the sequence of the protocol prescribed by Pollock and Berge (2018), that is: 5. synthesize the evidence and 6. interpret the findings, prioritizing the treatment and qualitative analysis of the articles to consolidate the findings of the studies. The application of the method was audited by two researchers who took part in the study.
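As a rough illustration of the bookkeeping involved in these phases, the following pandas sketch mimics the deduplication and screening flags; the records, column names, and flag values are hypothetical stand-ins for the exported search results.

```python
import pandas as pd

# Toy stand-in for the merged exports from Scopus, Web of Science, and Google Scholar.
records = pd.DataFrame({
    "title":    ["Paper A", "Paper A", "Paper B", "Artigo C"],
    "language": ["English", "English", "English", "Portuguese"],
    "source":   ["Scopus", "Web of Science", "Scopus", "Google Scholar"],
})

records = records[records["language"] == "English"]          # exclusion criterion: non-English
records = records.drop_duplicates(subset="title").copy()     # screening: remove repeated articles
records["included_after_full_reading"] = [True, False]       # decided during the eligibility phase
corpus = records[records["included_after_full_reading"]]
print(len(corpus))  # size of the final corpus of analysis
```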
Mapping articles
The corpus of analysis was generated for the beginning of the analyses after the filtering process of the articles. It is important to point out that there was no date filter in the search for articles. The first article within the context of this research was published in 2005. The selected articles comprise 104 articles published between 2015 and 2020. Out of this total, 60 appear in 47 journals, and the remaining 44 were presented at 38 different events, congresses, conventions, symposia, and conferences.
After the constitution of the database, the repeated articles were removed to guarantee the homogeneity of the analysis corpus. Next, the database was treated with the aid of Excel spreadsheets. This phase of the research also allowed us to present a relevant descriptive analysis of the study carried out.
We emphasize that the applied content analysis is in line with Bardin's prescriptions (2011). The selected articles were read in their entirety. Categories of analysis were sought that could help to understand the phenomenon researched. The categorization criteria were validated by all researchers and included in the Excel spreadsheet. The generated spreadsheet contains the metadata of the selected articles, as well as information about the categorization. After analyzing all the articles, the researchers sought convergences through a clustering process that allowed them to reach the categories presented in the next section.
When analyzing the number of studies published per year, we can see the growing interest in the theme: the most significant number of studies was published in 2019 (Figure 2). We point out that the 2020 period does not cover a full year, so it does not allow direct comparison with other periods. The selected studies were published between 2015 and 2020. They show a growing interest in the subject by scholars. After mapping the articles and performing a careful reading of each of them, it was possible to identify the connections between the contents and the authors, resulting in four categories: 1. competencies; 2. digital technologies; 3. portfolio, programs, and projects; and 4. strategy. We emphasize that the categories presented emerged from the process of reading and categorizing the articles, as presented in Table 1.
We emphasize that the categorization process is understood as an abstraction of the contents observed in the analysis corpus. Each category translates a set of meanings representing elements that will be explored in the presentation and discussion phase in the next section.
Four factors were found after performing a complete reading of the articles. The classification of the studies was tabulated in a spreadsheet. It considered a cross-analysis of several sections of each article: title, abstract, covered domains, objectives, research approach, results, limitations, and future studies. This allowed the identification of the main aspects for categorization into the four factors. The following section presents the discussion of the identified factors, explaining this research's findings and their relationships.
Competencies
Making a company fit for DT demands that competencies be incorporated at a strategic level (Azzouz & Papadonikolaki, 2020) as a prerequisite in managers' training (Azarenko et al., 2018;Bygstad et al., 2017;Malmelin & Virta, 2016) preparing these managers to be responsible for DT and guide the company in methods, tools, and expert processes (Wolff et al., 2020).
Individuals trained in managerial, personal, and behavioral topics are perceived as having a differential to conduct DT projects (Rojas & Mejía-Moncayo, 2019), even with the participation of internal and external agents (Braun & Sydow, 2019). Technical skills and training in new technologies, such as the internet of things (IoT), are complementary to DT and pressing for companies that aim to expand in highly competitive and innovative businesses (Assante et al., 2020).
The organizational culture achieves digital excellence through the new technological capacities and changes undertaken to enable innovation, a new way of thinking, and the possibility of people venturing into new challenges with leadership support (Ngereja et al., 2020). Professionals who work with multiple PM frameworks to meet technological change projects are perceived as adaptable individuals with specific skills to work in high-performance teams, combining their talents to create innovative jobs (Demirkan & Spohrer, 2018).
New digital skills demanded by the market may also impact DT processes (Betz et al., 2016; Papadonikolaki et al., 2019). These skills are demonstrated in the adoption of building information modeling (BIM) through information sharing in engineering projects that adopt agile project management (APM) and in the need to inspire an agile culture in teams to gain acceptance and synergy (Silvaggi & Pesce, 2018). The different PM approaches, such as APM, show that the lack of individuals with an adequate profile to act in the positions demanded by the DT process may impact some sectors (Brunet-Thornton et al., 2019). However, the impact of changes in the work environment and future trends in the PM field in terms of knowledge, skills, attributes, and experiences tend to be more appreciated (Walker & Lloyd-Walker, 2019).
The proposition of frameworks appears with various purposes. Regarding maturity assessment in DT, a framework is proposed for companies that need educational programs and certifications to manage DT, evaluating the competencies required based on a competencies model elaborated for DT (Wolff et al., 2020). Another proposition assesses digital excellence in hospitals, evaluating technological capabilities and non-technological aspects as enablers for digital change in each specific health system, aiming to develop PM skills and digital skills to support hospital teams (Krasuska et al., 2020). We emphasize that these skills are associated with the competencies necessary for the DT process. Such competencies must be present in the people and, in some way, in the company's culture.
Strategy
In strategic matters, the process of organizational change aligned with technological needs passes through new PM perspectives. These perspectives bring simplicity (Chowdhury & Lamacchia, 2019; Gerster et al., 2019; Ignat, 2017) to attend to the central strategy of DT (Kouroubali & Katehakis, 2019) and bring new business perspectives and innovations to organizations in many industries (Benzerga et al., 2017; Krumay et al., 2019). However, many companies misunderstand the objective of DT. DT means combining the use of technologies, leadership, culture, and skills to integrate and explore the fundamental aspects of DT and meet the business's digital needs (Guinan et al., 2019; Karimi & Walter, 2015; Vukšić et al., 2018).
The disruptive and innovative conjuncture of DT (Barthel & Hess, 2019) is pressing digital leadership to act as an agent of change to disseminate and cultivate the cultural aspects of DT (Jacobi & Brenner, 2018). Managers evaluate and measure the maturity of DT to manage change, develop PM capabilities, and outline the future of organizational relationships, technological changes, culture, people, and processes (Akatkin et al., 2016; Bierwolf et al., 2017; Ilin et al., 2019; Berghaus & Back, 2016; Schuh et al., 2017).
Digital technologies provide spaces for co-creation in the context of innovation with adequate resources and partners influenced by DT (Eikebrokk et al., 2018; Williams & Schubert, 2018). Another example is the virtual environment required for people to interact with digital technologies, generating intelligence and collective agility to innovate (Moreira et al., 2018). The educational area approaches digital technologies to develop, strategically, new skills in the individual and address the dynamics of DT projects and social needs (Barsukov et al., 2018). Artificial intelligence (AI) is also applied to monitor construction employees' performance electronically, aiming to create intelligent contracts in the DT context (Calvetti et al., 2020).
Other aspects show the strategic design to drive DT changes, such as using Lean to map the value stream and support decisions about the potential use of technologies and solutions (Wagner et al., 2018). Similarly, portfolio management broadly outlines the value chain's strategy and organizes PM resources and areas to optimize the conduct of projects (Bierwolf, 2017). Technological change, on the other hand, proves beneficial in the process of transitioning projects that incorporate new technologies to obtain transformative organizational results in the DT context (Hartl & Hess, 2019).
Another well-explored point concerns the proposition of frameworks with the prescription of scripts to assess maturity and help companies face the challenges of DT (Parviainen et al., 2017). Some examples of frameworks are those designed to manage and characterize projects, aiming at effectiveness in the results of the project (Dombrowski et al., 2020); to formulate DT's strategy for the company's business processes (Borremans et al., 2018); and to explain the resources needed for small and medium-sized enterprises (SMEs) to face the DT process (Sanchez & Zuntini, 2018).
These strategies should allow the DT process to occur clearly and objectively and to follow the companies' business rules. In order to achieve these results, the competencies must be disseminated among people, as presented in the previous subsection.
Portfolio, programs, and projects
The portfolio, programs, and projects factor shows their importance in organizing and addressing DT initiatives. Portfolio management connects different organization levels with project governance practices allied to management and leadership skills (Lappi et al., 2019), treating projects in a strategic context to maximize results (Barata et al., 2018; Isikli et al., 2018). At another level of abstraction, program management is used to organize interrelated projects and addresses the various aspects of organizational transformation to enable DT (Perides et al., 2020; Tchana et al., 2019; Teubner, 2019).
Companies that need a quick response to business and community demands will take advantage of APM (Nerurkar & Das, 2017). This advantage arises because APM integrates the stakeholders to address PM matters (Nerurkar & Das, 2017) and values aspects of leadership, teams, and the ability to innovate (Gurusamy et al., 2016; Mergel, 2016; Shamim et al., 2016). APM brings benefits when used with DevOps (development and operations), adjusting projects to an ongoing model, continuous deliveries of value, no end date for the project, and management based on products (Wiedemann et al., 2019).
The exercise of leadership is a subject present in DT research, especially the project manager's leadership. Outstanding leadership is collaborative and capable of bringing together internal and external actors to generate innovations through multifunctional teams, collective learning, and agile processes for decision-making and problem-solving (Crowley et al., 2017; Scott et al., 2019). The specificities of projects challenge project managers and leadership with complexity, constraints, and critical success factors to achieve high performance and ensure success in DT projects (Kolasa, 2017; Priambodo et al., 2019).
PM is a strategic priority for organizations and their leaders to address the opportunities and challenges regarding new emergent digital technologies (Hassani et al., 2018). Exploring diverse digital technologies requires deep analysis under an optic that integrates technology, processes, and people (Hassani & El Idrissi, 2020). It should start with a technological transformation that supports DT (Tsurkan et al., 2019) and brings clear benefits to the organization and interdisciplinary projects (Ganis & Waszkiewicz, 2018).
In the DT process, the projects start focused on strategy and need to be aligned with the customer value proposition.This scenario is supported by the APM and the development team, always in line with the business strategy.
Digital technologies
Areas that develop projects using technologies such as Robotic Process Automation (RPA), BIM, Blockchain, IoT, Big Data, Analytics, and Cyber-Physical Systems (CPS), among others, will benefit when the technology is used in conjunction with APM. For example, projects that use APM to implement RPA automate routine tasks and digitize processes, bring technical innovations, and connect the business areas to the organization's IT (Marek et al., 2019; Schmitz et al., 2019). Other studies point to the importance of creating a reliable environment when integrating RPA project teams (Mishra et al., 2019b) and of technical management's support to build the workers' skills (Mishra et al., 2019a). Within the field of AI, BIM is increasingly used to address complex problems in the Architecture, Engineering, and Construction (AEC) industry and to address optimization, simulation, PM, and uncertainty treatment (Darko et al., 2020). Among the issues addressed by digital technologies, we highlight the use of APM in projects that need the ability to design new features, scale resources (Diaz et al., 2020), improve production processes, and add new technical resources. APM can add value by integrating and managing those involved during the construction and delivery of the assets (Aibinu & Papadonikolaki, 2020; Chan, 2020; Dremel et al., 2017; Ochara et al., 2018; Woodhead et al., 2018).
An example of BIM projects is using the Lean method as a support tool to help with the difficulties faced for adaptation to DT and to improve processes, productivity, efficiency, and quality in construction (Çelik, 2019; Koseoglu & Nurtan-Gunes, 2018). Another approach proposes product lifecycle management best practices to indicate improvements in BIM results using an information-centric management approach in construction projects (Boton et al., 2016).
BIM shows the prominence of digital technologies such as IoT, which brings real-time project situations, using intelligent systems to assist decision-making processes and avoid risks in AEC projects. It will also benefit the work's performance by providing accurate data integrated with a cloud-based BIM data platform (Teizer et al., 2017). The significant increase in the use of agile tools in conjunction with digital tools demonstrates the degree of importance and relevance they acquire to address organizations' future priorities in implementing DT (Durão et al., 2019).
The delay in adopting blockchain in the industry has several origins: lack of collaboration and exchange of information, mistrust between the parties, operational and management problems, and intellectual property rights (Li et al., 2019). Projects that jointly use BIM and blockchain prove complex by bringing together products, processes, people, technologies, and policies (Singh, 2016). The use of blockchain can bring together parties from different sectors to support the design and construction phases, increasing performance and improving quality and transparency in projects (Bataev, 2019; Li et al., 2019).
Digital technologies have an important role in the DT process, ensuring that projects are aligned with the companies' business rules and strategy, driven by people's technical skills, and disseminated throughout the organization.
Model proposition
Figure 3 summarizes the study's findings, demonstrating the main aspects and the relationship between the four factors. The elements of competencies appear in all factors, demonstrating their scope and importance in carrying out the DT.
The strategy factor relates to the definition and planning to remodel and create new business models, digitize services, maintain competitiveness amid major changes, generate value, and outline and drive the organizational and cultural changes involved. Management and business competencies are highlighted in this factor.
The factor portfolio, programs, and projects appears to structure the enabling factors of DT. The portfolio aspect connects different levels of the organization, governance, and leadership aspects. The programs aspect is used to organize and interrelate projects with common purposes. The projects aspect performs deliveries through agile, hybrid, and traditional approaches. In this factor, competencies are fundamental to leading the delivery of products and services within the new context of DT.
The digital technologies factor demonstrates the use of these new technologies in the challenges involved in automating and optimizing processes, simulating scenarios and models, generating operational efficiency, assisting in the decision-making process, and generating collaboration between individuals and teams. This factor requires individuals and teams proficient in technical skills to use new digital technologies. The competencies factor participates in the other factors and is considered at the individual and collective level. In this sense, the competencies factor covers the entire DT adoption process, being necessary to plan the strategy and conduct the process, operationalize digitization through digital technologies, and support change and operationalization through PM frameworks.
Considering the four factors involved in the relationship between DT and PM, all factors are interrelated to the objective of performing DT. Therefore, DT is a process that needs to be in line with the company's strategy regarding its digitalization planning. Thus, to make DT operational, individuals with competencies in digital technologies are required to operationalize the digitization, as well as PM competencies to support the process of change.
FINAL CONSIDERATIONS
This research analyzed the relationship between PM and DT to identify convergence points. The analysis corpus comprised 104 articles published in journals, congresses, symposiums, and conferences between 2015 and 2020. For the most part, the articles came from the engineering, telecommunications, health, management, media, technology, and strategy areas.
The corpus of analysis was categorized into four factors: 1. competencies; 2. strategy; 3. portfolio, programs, and projects; and 4. digital technologies. The factors identified present aspects related to the challenge of performing a DT. Using new digital technologies is the basis for performing DT, and it brings the company to a new competitive level. In this sense, the articles explore DT applicability in various situations. However, some articles show that digital technologies need alignment with a strategy that adapts and shapes the organization to a digital market context.
Considering the research question proposed in this study, which verified the relationship between DT and PM, we can infer that the DT process has a strong relationship with PM, it being necessary to understand the introduction of digital technologies aligned with business strategies supported by PM. In addition, an important aspect of the relationship is the relevance of people management, such as identifying and using competencies that facilitate the realization of DT.
New digital technologies and strategies, allied to DT initiatives, demand competencies in technical and behavioral aspects suitable for operating in environments of rapid change, volatility, and high risks. In this context, PM presents adaptability, supporting DT transitions with adaptive PM approaches (agile and hybrid methods). It is hoped that this study will contribute to deepening the discussion on the themes of PM and DT and the relationship between them. The research question of this study was answered with the indication that the relationship between DT and PM starts from a strategic direction to plan and execute DT, using PM to support the strategy, digital technologies as a means for implementation, and the development and maintenance of skills for the entire change process. Additionally, this article proposes in Figure 3 an interaction flow between the factors that make DT viable.
The model is composed of factors identified in the study and considers strategic premises to develop the DT strategy. The model is composed in such a way that the DT is reviewed and analyzed throughout the life cycle of the DT project. In addition, it considers that the organization must address aspects of competitiveness, innovation, and rapid changes in the market.
The model proposes that the DT strategy rethink business models and services for the digital context and undertake organizational and cultural changes to execute the transformation. The model considers the digital technologies factor a fundamental means to execute DT, providing the strategy execution to reach a new level of competitiveness for the organization. In this scenario, PM frameworks are a supporting factor that helps operationalize the DT strategy and maintain alignment with the organizational strategy. Thus, competencies appear to be essential across the model's factors, enabling the organization to face the challenges arising from DT by developing and maintaining technical, relational, and managerial competencies for the entire DT process. It is expected that this study will contribute to deepening the discussion on the themes of PM and DT in organizations; moreover, the database provided, which records the works analyzed, journals, authors, and sub-theme categories, is a facilitating tool for researchers.
Therefore, for future research, we recommend that studies be carried out to understand the influence of competencies on DT, which play a strategic role in operationalizing and improving the DT process. Furthermore, DT is an evolving topic with great potential for further studies.
"Computer Science",
"Business"
] |
A New Algorithm for Generalized Least Squares Factor Analysis with a Majorization Technique
Factor analysis (FA) is a time-honored multivariate analysis procedure for exploring the factors underlying observed variables. In this paper, we propose a new algorithm for the generalized least squares (GLS) estimation in FA. In the algorithm, a majorization step and diagonal steps are alternately iterated until convergence is reached, where Kiers and ten Berge's (1992) majorization technique is used for the former step, and the latter ones are formulated as minimizing simple quadratic functions of diagonal matrices. This procedure is named a majorizing-diagonal (MD) algorithm. In contrast to the existing gradient approaches, differential calculus is not used and only elementary matrix computations are required in the MD algorithm. A simulation study shows that the proposed MD algorithm recovers parameters better than the existing algorithms.
Introduction
Using y for a 1 × p observation vector whose expectation E(y) equals the 1 × p zero vector 0_p, the factor analysis (FA) model is expressed as y = xΛ′ + e (1), with Λ a p-variables × m-factors loading matrix, x a 1 × m latent factor score vector, e a 1 × p error vector, and p > m. The expectations for x and e are assumed to satisfy E(x) = 0_m, E(e) = 0_p, E(x′x) = I_m, E(e′e) = Ψ, and E(x′e) = O (2). Here, O is the m × p matrix of zeros, I_m is the m × m identity matrix, and Ψ is the p × p diagonal matrix whose diagonal elements are called unique variances. The FA model (1) with the assumptions in (2) implies that the covariance matrix Σ = E(y′y) can be modeled as Σ = ΛΛ′ + Ψ (3) [1] [2]. A main purpose of FA is to estimate the parameter matrices Λ and Ψ from the inter-variable sample covariance matrix S (p × p) corresponding to (3). Some authors classify FA as exploratory (EFA) or confirmatory (CFA) [2], where Λ is unconstrained in EFA, while some elements of Λ are constrained in CFA. In this paper, we refer to EFA simply as FA.
Three major approaches for the parameter estimation are least squares (LS), generalized least squares (GLS), and maximum likelihood (ML) procedures [3]. They differ in the definition of the loss function f(Θ) to be minimized over the parameters Θ comprising Λ and Ψ. The functions for the LS and GLS estimation procedures are defined as least squares functions of the difference between S and ΛΛ′ + Ψ, unweighted in LS and weighted by S⁻¹ in GLS (the latter referred to as (4) below), respectively, while f(Θ) is defined as the negative of the log-likelihood derived under the normality assumption for x and e in the ML estimation [3] [4].
In all estimation procedures, iterative algorithms are needed for minimizing the loss function f(Θ). They can be roughly classified into gradient and inequality-based algorithms. Here, the gradient ones refer to the algorithms using Newton and related methods [5], in which the partial differentiation of f(Θ) with respect to Θ is used for updating it. On the other hand, the term "inequality-based algorithms" is not a popular one. We use the term for the algorithms in which differentiation is not used and an inequality of the form f(Θ_new) ≤ f(Θ_old) underlies the updates, which guarantees the weakly monotone decrease in the loss function value when Θ is updated to the new Θ. A similar dichotomization of minimization methodology is also found in [6].
For all of the LS, GLS, and ML estimation, gradient algorithms have been developed: those with the Fletcher-Powell and Newton-Raphson methods have been proposed for the ML estimation [7] [8], while the algorithms using the Newton-Raphson and Gauss-Newton methods have been developed for GLS [9] [10], with the gradient algorithms for GLS also used for LS. On the other hand, inequality-based algorithms have been developed for the LS and ML estimation, but not for GLS. Such an algorithm for LS is MINRES [11], in which Θ is partitioned into subsets of parameters, Θ = {θ_1, …, θ_q}, and the minimization of f(Θ) is attained by minimizing it over each subset in turn. The inequality-based one for the ML estimation is the EM algorithm for FA [12], in which f(Θ) decreases monotonically with the alternate iteration of so-called E- and M-steps [13]. A feature of MINRES and the EM algorithm is that only simple matrix computations such as the inversion of matrices are required and their computer programs are easily formed. In contrast, the gradient algorithms require more complicated computations such as obtaining or numerically approximating the second derivatives of f(Θ). As found in the above discussion, an inequality-based algorithm has not been developed for the GLS estimation, in which (4) is minimized over Θ. To propose it is the purpose of this paper. The algorithm to be proposed is also computationally simple as in the existing inequality-based ones: only elementary matrix computations are required, such as the inversion and singular value decomposition (SVD) of matrices. A feature of the proposed algorithm to be addressed is the use of majorization in one of its steps. Majorization generally refers to a class of techniques in which a majorizing function h(Φ, Ξ), satisfying g(Φ) ≤ h(Φ, Ξ) with equality when Φ = Ξ, is utilized for minimizing a function g(Φ). Here, Φ* denotes the minimizer of the majorizing function h(·, Φ) over its first argument, with its latter argument matrix Φ kept fixed [14]. It shows that g(Φ) decreases with the update of Φ into Φ*. As described in the next section, the step with a majorization technique and the steps for minimizing the functions of diagonal matrices form the algorithm to be presented. It is thus called the majorizing-diagonal (MD) algorithm in this paper.
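In symbols, the guarantee behind such an update can be stated as the following chain of relations (a standard statement of the majorization principle, written out here in LaTeX for clarity; it is not reproduced from the original paper):

g(\Phi^{*}) \;\le\; h(\Phi^{*}, \Phi) \;\le\; h(\Phi, \Phi) \;=\; g(\Phi),
\qquad \Phi^{*} = \arg\min_{\Xi} h(\Xi, \Phi),

where the first inequality holds because h majorizes g, the second because \Phi^{*} minimizes h(\cdot, \Phi), and the equality is the touching condition of the majorizing function.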
The MD algorithm is not the first one with majorization in FA. Indeed, the above EM algorithm [12] can be regarded as a majorization procedure with its majorizing function being the full log-likelihood derived by supposing that the latent factor scores in x were observed. [15] has also proposed an FA algorithm with a majorization technique. However, in that algorithm, the estimation of a new type [16] [17] is considered, which is different from the LS, GLS, and ML estimation treated as the major procedures in this paper: [15] is beyond the scope of this paper.
The remaining parts of this paper are organized as follows: the MD algorithm is detailed in the next section, and it is illustrated with a real data set in Section 3. A simulation study for assessing the algorithm is reported in Section 4, which is followed by discussions.
Proposed Algorithm
We propose the MD algorithm for minimizing the GLS loss function (4) over the loadings in Λ (p × m) and the unique variances in the diagonal matrix Ψ (p × p). Here, it is supposed that the sample covariance matrix S is positive-definite and Λ is of full column rank, i.e., its rank is m with p > m. This supposition and the covariance matrix being modeled as (3) imply that, without loss of generality, we can reparameterize Λ as Λ = L∆ (5), where L is a p × m matrix satisfying the constraint in (6) and ∆ is an m × m positive-definite diagonal matrix. By substituting (5) into the GLS loss function (4), it is rewritten as the function of L, ∆, and Ψ denoted by (7), a sum of trace terms in these matrices. This function is minimized over L, ∆, and Ψ subject to (6) and the latter two matrices being diagonal ones, by alternately iterating the majorizing and diagonal steps described in the next subsections.
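Assuming that the constraint referred to as (6) is columnwise orthonormality of L (an assumption made here for illustration, since (6) itself is not reproduced above), the reparameterization and its effect on the modeled covariance matrix can be written in LaTeX as

\Lambda = L\Delta, \qquad L'L = I_m, \qquad \Lambda\Lambda' + \Psi = L\Delta^{2}L' + \Psi,

so that the GLS loss becomes a function of the orthonormal matrix L and the two diagonal matrices \Delta and \Psi, which is what the alternating steps below exploit.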
Majorization Step
Let us consider minimizing (7) over L subject to (6) while ∆ and Ψ are kept fixed. Summarizing the parts irrelevant to L in (7) into a constant, the resulting function of L is denoted (8). Though the optimal L that minimizes (8) under (6) is not given explicitly, the solution can be obtained using Kiers and ten Berge's [18] majorization technique, whose earlier version is also found in [19]. This technique purposes to minimize a function φ(L) expressed in a particular form; comparing this form with (8), we can find (8) to be a special case of the above φ(L) with A being the zero matrix, a second matrix defined in terms of ∆, and q = 2. Therefore, the update formula in [18] (pp. 374-375) can be straightforwardly used for (8).
According to the formula, the update of L by L = PQ′ (9) decreases the value of (8). Here, L_OLD stands for the matrix L before the update; P and Q are the column-orthonormal matrices obtained from the SVD of a matrix formed from L_OLD, with Θ the diagonal matrix including the singular values of the matrix in the left-hand side, and λ_a, λ_b, and λ_c the largest eigenvalues of the matrices appearing in that expression.
Diagonal Steps
In this section, we describe updating each of the diagonal matrices ∆ and Ψ. First, let us consider minimizing the loss function (7) over ∆ while keeping L and Ψ fixed. Since the terms relevant to ∆ in the loss function (7) are the same as those relevant to L, the expression (8), into which (7) is rewritten, is to be noted again. By taking account of the fact that ∆ is a diagonal matrix, (8) can be rewritten as (11), with diag(·) denoting the diagonal matrix whose diagonal elements are those of the parenthesized matrix. Further, we can rewrite (11) as a squared Frobenius-norm expression, with ||·|| denoting the Frobenius norm. It shows that the function (11) is minimized by an explicitly given diagonal ∆ for fixed L and Ψ. Next, we consider minimizing (7) over Ψ with L and ∆ fixed. Summarizing the parts irrelevant to Ψ in (7) into a constant, the resulting function of Ψ is denoted (13). We can find that (13) is minimized by an explicitly given diagonal Ψ for fixed L and ∆.
Whole Algorithm
The results in the last two subsections show that the proposed MD algorithm can be listed as follows: Step 1. Initialize L, ∆, and Ψ. Step 2. Update L by (9) with ∆ and Ψ kept fixed. Step 3. Update ∆ by its minimizer in Section 2.2 with L and Ψ kept fixed. Step 4. Update Ψ by its minimizer in Section 2.2 with L and ∆ kept fixed.
Step 5. Finish with Λ set to (5) if convergence is reached; otherwise, return to Step 2. It should be noted in Step 2 that the update of L by (9) does not minimize (7) but only decreases its value, which implies that the update can be replicated (l times) for further decreasing the value of (7). In this paper, we set l = 5. In Step 1, the initialization is performed using the principal component analysis of the sample covariance matrix S. That is, the initial L and ∆ are given by V and Ω^(1/2), respectively, with Ω the m × m diagonal matrix whose diagonal elements are the m largest eigenvalues of S, and the columns of V the corresponding eigenvectors of S. In Step 5, we define convergence as the decrease in the value of (7) or (4) from the previous round falling below a small prescribed threshold.
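A minimal numpy sketch of the Step 1 initialization described above is given below; it only covers the eigenvalue-based starting values for L and ∆ (the update steps themselves are not reproduced), and the function name is illustrative.

```python
import numpy as np

def initialize_L_Delta(S, m):
    """Step 1: principal-component initialization of L and Delta from the p x p matrix S."""
    eigvals, eigvecs = np.linalg.eigh(S)          # eigendecomposition of the sample covariance
    order = np.argsort(eigvals)[::-1][:m]         # indices of the m largest eigenvalues
    V = eigvecs[:, order]                         # initial L: corresponding eigenvectors
    Delta = np.diag(np.sqrt(eigvals[order]))      # initial Delta: square roots of eigenvalues
    return V, Delta

# Example with a random positive-definite matrix standing in for S.
rng = np.random.default_rng(1)
A = rng.normal(size=(25, 25))
S = A @ A.T / 25
L0, Delta0 = initialize_L_Delta(S, m=5)
print(L0.shape, np.diag(Delta0))
```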
Illustration
In this section, we illustrate the performance of the MD algorithm with a 190-person × 25-item data matrix, which was collected by the author and is publicly available at http://bm.osaka-u.ac.jp/data/big5/. This data set contains the self-ratings of the persons (university students) for to what extent they are characterized by the personalities described by the 25 items. According to a theory in personality psychology [20], the items can be classified into the five groups shown in the first column of Table 1. The 25 × 25 matrix of the correlation coefficients among those items was obtained from the data set. We carried out the MD algorithm for the correlation matrix with the number of factors m set to five. Figure 1 shows the change in the value of loss function (4) until the steps in Section 2.3 were iterated ten times and the change after the tenth iteration. There, we can find that the function value decreased monotonically with iteration, finally reaching convergence at the 542nd iteration. As the resulting loading matrix has rotational freedom, that is, Λ post-multiplied by an arbitrary orthonormal matrix satisfies (1) and (2), the loading matrix was rotated by the varimax method [21]. The solution is presented in Table 1. There, bold font is used for the loadings whose absolute values are greater than 0.35. They show that the 25 items are clearly classified into the five groups as predicted by the theory in personality psychology [20], which demonstrates that the MD algorithm provided a reasonable solution.
Simulation Study
A simulation study was performed in order to assess how well parameter matrices are recovered by the proposed MD algorithm and compare it with the existing algorithms for the GLS estimation in the goodness of the recovery. We first describe the procedure for synthesizing the data to be analyzed, which is followed by results.
An n-observations × p-variables data matrix Y was synthesized according to the matrix versions of the FA model (1) and the assumptions in (2), which are denoted (15) and (16), respectively. In (15), Y is expressed with the loading matrix and the square roots of the unique variances, and each row of Y, X, and E corresponds to y, x, and e in (1); the five equations in (2) can be summarized into the two matrix expressions in (16). The data synthesis procedure follows the next steps: Step 1. Draw m from a discrete uniform distribution. Step 2. Draw each loading in Λ from a uniform distribution. Step 3. Draw each element of Z in (15) from a uniform distribution, which is followed by centering Z and post-multiplying it by the matrix that allows the resulting Z to satisfy (16).
Step 4. Form Y with (15) and obtain the covariance matrix S = n⁻¹Y′Y. In Step 3 we have used a uniform distribution for Z, rather than the normal distribution typically used for such a matrix, as a feature of the GLS estimation is that it does not need the normality assumption required in the ML estimation. We replicated the above steps to have 2000 sets of S. For them, the MD and the existing algorithms were carried out, where the latter are the two gradient algorithms [9] [10], as described in Section 1. We refer to the ones in [9] and [10] as the Newton-Raphson (NR) and Gauss-Newton (GN) algorithms, respectively. In the NR one, we obtained the gradient vector in [9], Equation (32), by pre-multiplying the vector of first derivatives by the Moore-Penrose inverse of the corresponding Hessian matrix. Also in the NR and GN algorithms, we used the same initialization and definition of convergence as in Section 2. Let us write Λ# and Ψ# for the solution given by the NR, GN, or MD algorithm. For assessing the recovery of the loading matrix, the averaged absolute difference (AAD) of the elements in Λ to the corresponding estimates, i.e., AAD(Λ, Λ#) = ||Λ − Λ#||_ℓ1/(pm) (17), can be used, with ℓ1 denoting the ℓ1 norm. Here, it should be noted that Λ# has rotational freedom and must be rotated so that the resulting Λ# is optimally matched to Λ. Such a rotated Λ# can be obtained by the orthogonal Procrustes method [22] with Λ as a target matrix. The loading matrix Λ# in (17) thus stands for the one rotated by the Procrustes method. The recovery of unique variances can also be assessed with the AAD index AAD(Ψ, Ψ#), where the unique variances are uniquely determined; thus, the additional procedure as for Λ# is unnecessary. Smaller values of those AAD indices stand for better recovery. The statistics of AAD values over the 2000 data sets are presented in Table 2. There, the averages show that the recovery by the MD algorithm is the best and that by the NR one is the worst. It should be noted that the 50th and 75th percentiles for the NR algorithm are zero, while the maximum and the 99th percentile are very large. That is, the recovery by the NR algorithm was perfect for more than 75 percent of the 2000 data sets, but for a few percent of them, recovery was considerably bad, which increased the averages for the NR one. In contrast, the maximum AAD of loadings and unique variances for the MD algorithm are 0.0041 and 0.0013, respectively, which are small enough to be ignored. That is, the proposed MD algorithm well recovered the true parameter values for all of the 2000 data sets. We can thus conclude that the MD algorithm is superior to the existing ones in the goodness of recovery.
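As an illustration of how the rotated estimate and the AAD index described above might be computed, the following numpy sketch applies an orthogonal Procrustes rotation of the estimated loadings toward the true ones and then averages the absolute element-wise differences; the function and variable names are illustrative and the snippet is not taken from the original paper.

```python
import numpy as np

def aad_loadings(Lambda_true, Lambda_est):
    # Orthogonal Procrustes rotation: the orthonormal T minimizing ||Lambda_est @ T - Lambda_true||_F.
    U, _, Vt = np.linalg.svd(Lambda_est.T @ Lambda_true)
    rotated = Lambda_est @ (U @ Vt)
    # Averaged absolute difference between corresponding elements.
    return np.abs(Lambda_true - rotated).mean()

# Example: an estimate differing from the target only by an orthonormal rotation gives AAD near zero.
rng = np.random.default_rng(0)
Lam = rng.normal(size=(25, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
print(aad_loadings(Lam, Lam @ Q))
```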
Figure 1. Change in the GLS loss function value as a function of the number of iterations.
Table 1. Loadings and unique variances for the personality rating data.
Table 2. Statistics for the differences between the true parameter values and their estimated counterparts.
"Computer Science"
] |
Simulation-Optimization Approach for the Production and Distribution Planning Problem in the Green Closed-Loop Supply Chain
With the growth of multinational companies, increasing international and domestic competition between companies, upgrading of information technology, and increasing customer expectations, accurate supply chain (SC) planning is essential. In such an environment, pollution has become more severe in recent decades, and with the weakening of the environment and global warming, green SC management (GSCM) strategies have received more attention. In this research, we consider the integrated production and distribution (PD) planning problem of a multi-level green closed-loop SC (GCLSC) system, which includes multiple recycling, manufacturing/remanufacturing, and distribution centers. We present a three-level bi-objective programming model to maximize profit and minimize the amount of greenhouse gas emissions. A hierarchical iterative approach utilizing the LP-metric method and the non-dominated sorting genetic algorithm (NSGA-II) is introduced to solve the proposed model. Also, the Taguchi approach is applied to find optimum control parameters of NSGA-II. Moreover, Monte Carlo (MC) simulation is applied to tackle uncertainty in demand, and the NSGA-II algorithm is fused with MC simulation (MCNSGA-II). The results obtained show that the simulation-optimization approach produced better results than the deterministic approach.
The analysis and design of PD systems have been a vital area of research over the years. Geoffrion and Graves (1974) studied a single-period PD problem and suggested a solution approach based on the Benders decomposition algorithm. This is probably the first article to suggest a general mixed-integer programming (MIP) model for the mentioned problem. Haq et al. (1991) utilized an integrated production-inventory-distribution planning formulation in a large fertilizer industry in North India that incorporates many realistic conditions such as lead times, set-up cost, and recycling of production losses as well as backlogging. Chen and Wang (1997) suggested an integrated framework for steel PD planning at a major steel producer company in Canada. Ozdamar and Yazgac (1999) introduced a hierarchical PD planning method for a multinational plant with multiple warehouses. Lee and Kim (2000) presented a general PD model in the literature and offered a solution method using a hybrid approach combining analytical and simulation techniques. Varthanan et al. (2012) investigated a PD problem with stochastic demand to minimize regular, overtime, and outsourced production costs. They used a simulation-based heuristic discrete particle swarm optimization (DPSO) algorithm to solve the problem. Nasiri et al. (2014) suggested a PD model for a three-level program with uncertain demands. They presented a solution framework based on the Lagrangian Relaxation algorithm, which is developed by a heuristic to solve sub-problems. Niknamfar et al. (2015) considered a three-level SC with multiple production and distribution centers, and different customer areas. They presented a robust counterpart model in PD planning to minimize the total cost. Sarrafha et al. (2015) suggested a bi-objective mixed-integer non-linear programming (MINLP) formulation to design an SC network that includes suppliers, production and distribution centers, and retailers. They used the multi-objective biogeography-based optimization (MOBBO) algorithm for solving the problem. Seyedhosseini and Ghoreyshi (2015) introduced a mathematical formulation for integrated PD planning for perishable products. An innovative framework is used to solve the formulation, in which the suggested method first solves the production problem and afterward the distribution problem. Devapriya et al. (2017) considered a PD planning problem of a perishable product and provided an MIP model to minimize cost. They presented a solution approach using evolutionary algorithms to solve the model. Zamarripa et al. (2016) introduced a rolling horizon approach for coordinating the PD of an industrial gas SC. Zheng et al. (2016) presented a penalty function-based method to solve a risk-averse PD planning problem. The method changes the formulation into some optimization problems which can be solved by traditional optimization software. Ma et al. (2016) proposed a PD planning model applying bi-level programming for SCM and developed a genetic algorithm to solve the model. Rezaeian et al. (2016) suggested an MINLP model for the integrated PD and inventory planning for perishable products with a fixed lifetime all over a two-echelon SC by integrating production centers and distributors. Moon et al. (2016) introduced a bi-objective MIP model to design a four-stage distribution system under a carbon emission constraint. They proposed a two-phase method to solve the model and find a non-dominated solution. Osorio et al. 
(2017) suggested a simulation-optimization formulation to make strategic and operational decisions in production planning. They used discrete event simulation to demonstrate the flows across the SC, and an MIP model to assist daily decisions. Wei et al. (2017) studied the integrated PD planning problem and presented a model with a two-stage production structure. They applied relax-and-fix and fix-and-optimize approaches to solve the problem. Ensafian and Yaghoubi (2017) considered an SC that consists of procurement, production, and distribution of platelets. They presented a bi-objective mathematical model to maximize the freshness of the platelets and minimize the total cost. Moreover, a robust optimization approach is used to tackle uncertain demand. Farahani and Rahmani (2017) proposed an MIP model that includes production planning, allocation-location facilitation, and distribution planning to maximize the profit of a crude oil network. Nourifar et al. (2018) introduced a multi-period decentralized SC network model with uncertainty. Uncertainty parameters such as demand and final product prices were defined by stochastic and fuzzy numbers. They presented a solution framework based on the Kth-best algorithm, chance constraint approach, and fuzzy approach. Rafiei et al. (2018) investigated a PD planning problem within a four-echelon SC. The problem is modeled in two non-competitive and competitive markets to minimize total chain cost and maximize service level. Casas-Ramírez et al. (2018) studied an SC including factories and depots. They proposed an MIP model to balance the total workload and minimize the total cost of the SC. To solve this model, they used an adapted bi-objective GRASP to find non-dominated solutions. Jing and Li (2018) considered a multi-echelon closed-loop SC planning problem involving a joint recycling center, multiple remanufacturing/manufacturing centers, and multiple distribution centers decentralized to various areas. The solution framework was designed by a hierarchical iterative approach based on the self-adaptive genetic algorithm. Goodarzian et al. (2021) devised a novel multi-objective formulation for the PD problem of a supply chain that consists of several suppliers, manufacturers, distributors, and different customers. Due to the NP-hardness of the problem, the NSGA-II and Fast PGA algorithms were applied. Pant et al. (2021) developed a bi-objective CLSC model for the paper industry under an uncertain environment. The first objective of the proposed model is to maximize SC surplus, and the second objective is to incorporate sustainability through minimizing carbon content by reducing the number of trucks between various echelons of the CLSC network. They considered uncertainty at demand points and applied MC simulation to handle it.
The reviewed articles on integrated production-distribution planning are summarized in Table 1. The last row of Table 1 is for the present study. Specifically, the significant contributions of this paper are the following: a three-level bi-objective programming model is presented to maximize profit and minimize the amount of greenhouse gas emissions; the NSGA-II algorithm is developed to solve the bi-objective model at each level; a hierarchical iterative approach is applied to solve the three-level model; and the NSGA-II algorithm is combined with MC simulation (MCNSGA-II) to tackle uncertainty in demand.
Table 1. Summary of the reviewed articles on integrated production-distribution planning (entries from Geoffrion and Graves, 1974, through the present study, with their modeling features and solution methods).
3-Mathematical modeling
In this paper, a three-level bi-objective programming model is presented for a GCLSC, which includes multiple recycling centers, multiple manufacturing/remanufacturing factories, and multiple distributors.
Multi-level programming is an optimization approach that has a multi-layer hierarchical form. In this form, decision-makers at different levels have different decision-making authorities and objectives. The set of strategies and the goal attainment of lower levels can be affected by decisions from the upper levels. Nevertheless, lower levels have considerable autonomy, so the upper levels cannot totally control them (Jing and Li 2018). Regarding the basic concept of multi-level programming, each subordinate level in this model is situated in a different region and has its own policies and strategies. The first-level decision maker (the recycling centers) sets its own goals and/or decisions and then asks each subordinate level of the organization for its optimum, which is calculated in isolation. For this reason, multi-level programming is used.
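As a generic illustration of this hierarchy (not the specific GCLSC formulation of this paper), a two-level instance can be written in LaTeX as

\min_{x \in X} F_{1}\bigl(x, y^{*}(x)\bigr)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y \in Y(x)} F_{2}(x, y),

where the upper-level decision maker chooses x while anticipating the optimal reaction y^{*}(x) of the lower level; the three-level model presented below nests one further layer of this structure.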
In the considered GCLSC, returned products are disassembled into components in the multiple recycling centers. Testing activities to identify defective components are performed in these centers. The components are saved in the inventory of eligible components if they meet a remanufacturing standard; otherwise, they are discarded. Eligible components are carried to the various remanufacturing/manufacturing centers.
In the remanufacturing/manufacturing centers, the quality and efficiency of eligible components are tested and these components are reprocessed, while raw materials are converted into new components. Remanufactured and new components are used to produce remanufactured and new products, respectively. The products are added to the inventories of remanufactured and new products and are carried to the various distribution centers according to orders.
Each distribution center can select one or more remanufacturing/manufacturing centers for satisfying demand, and provide remanufactured or new products to retailers for sales. Moreover, the returned products from retailers are gathered and recycled by distribution centers and carried to multiple recycling centers. Note that there are several kinds of vehicles to transport parts and products between the GCLSC components.
Furthermore, the assumptions for the considered problem are as follows:
1. The SC is multi-product, multi-period, and multi-level.
2. Demand for remanufactured and new products is uncertain and can only be met by the corresponding product type; remanufactured products have lower selling prices than new products.
3. The production cost of each factory is different.
4. Vehicles have variable capacities at each level.
5. The transfer cost between nodes is different.
6. Processing and reprocessing parts in the manufacturing/remanufacturing factories leads to greenhouse gas emissions into the environment.
7. The transportation of vehicles at each level in each period has a greenhouse gas emissions limit.
The notations are introduced in Table 13 to Table 19 in the Appendix. The model of each level is given below.
3-1-The recycling centers, the first level
The bi-objective MIP model at the first level (FLM) maximizes profit and minimizes adverse environmental effects; its objective functions (1)-(2) and constraint sets (3)-(12) are described below.
Expression (1) maximizes the profit of the recycling centers. The profit is equal to revenue minus total costs, including recycling, disassembly, testing, disposal, inventory, and transportation costs. The objective function (2) minimizes the amount of greenhouse gas emissions of each vehicle per kilometer for a unit of eligible components between the recycling centers and remanufacturing/manufacturing factories. Constraint sets (3) and (4) are the inventory equations for returned products and eligible components. Constraint set (5) shows the quantity constraint for disposed of components. Constraint sets (6) and (7) ensure that the inventory rate of returned products and eligible components do not exceed the maximum level. Constraint set (8) represents that the quantity of returned products to be disassembled and tested in a recycling center does not exceed the maximum amount. Constraint set (9) guarantees that the quantity of eligible components that are transported by a vehicle from the recycling centers to the remanufacturing/manufacturing factories does not exceed the capacity of the vehicle. Constraint set (10) ensures the sum of greenhouse gas emissions for the transportation system between recycling centers and factories does not exceed the maximum allowance. Constraint sets (11) and (12) describe the value ranges of the variables.
3-2-The manufacturing/remanufacturing centers, the second level
The bi-objective MIP model at the second level (SLM) maximizes profit and minimizes adverse environmental effects; its objective functions (13)-(14) and constraint sets (15)-(33) are described below. Expression (13) maximizes the profit of the remanufacturing/manufacturing factories. The profit is equal to sales revenue minus total costs, including manufacturing, remanufacturing, inventory, purchase, and transportation costs. The objective function (14) minimizes the amount of greenhouse gas emissions of each vehicle per kilometer for a unit of new products and remanufactured products between remanufacturing/manufacturing factories and distribution centers, as well as the amount of greenhouse gas emissions during processing and reprocessing. Constraint sets (15)-(19) represent the inventory equations for eligible, new, and remanufactured components and products. Constraint sets (20)-(24) ensure that the inventory quantities of eligible, new, and remanufactured components and products do not exceed the maximum quantity. Constraint sets (25)-(28) guarantee that the quantity of processed and reprocessed components, as well as the quantity of new and remanufactured products, do not exceed the capacity of the relevant factories. Constraint set (29) ensures that the amount of new and remanufactured products transported by a vehicle from the manufacturing/remanufacturing factories to distribution centers does not exceed the capacity of the vehicle. Constraint set (30) ensures that the sum of greenhouse gas emissions for the transportation system between factories and distribution centers does not exceed the maximum allowance. Constraint set (31) represents that the sum of greenhouse gas emissions for processing and reprocessing operations in factories must be less than the maximum allowance. Constraint sets (32) and (33) describe the value ranges of the variables.
3-3-The distribution centers, the third level
The bi-objective MIP model at the third level (TLM) maximizes profit and minimizes adverse environmental effects; its objective functions (34)-(35) and constraint sets (36)-(47) are described below. Expression (34) maximizes the profit of the distribution centers. The profit is equal to sales revenue minus total costs, including the purchase of new and returned products, shortage, inventory, and transportation costs. The objective function (35) minimizes the amount of greenhouse gas emissions of each vehicle per kilometer for a unit of returned product between distribution and recycling centers. Constraint sets (36)-(38) show the inventory equations for new, remanufactured, and returned products. Constraint set (39) ensures that the number of collected returned products does not exceed the number of returned products available in the market. Constraint sets (40) and (41) represent the quantity of shortage for new and remanufactured products. Constraint sets (42)-(44) guarantee that the inventory levels of remanufactured, new, and returned products do not exceed the maximum level. Constraint set (45) ensures that the quantity of returned products transported by a vehicle from distribution centers to recycling centers does not exceed the capacity of the vehicle. Constraint set (46) guarantees that the sum of greenhouse gas emissions for the transportation system between distribution and recycling centers does not exceed the maximum allowance. Constraint set (47) defines the value ranges of the variables.
4-Solution approach
Multi-level linear programming is NP-hard (Hansen et al. 1992), so solving it is usually difficult. In this paper, the proposed three-level bi-objective programming formulation is solved based on a hierarchical iterative approach, using the LP-metric method for small-size instances and the NSGA-II algorithm for large-size instances. Moreover, the NSGA-II algorithm is combined with MC simulation (MCNSGA-II) to tackle uncertainty in demand.
4-1-LP-metric method
LP-metric is a classical approach that is applied to solve multi-objective models. The method attempts to find the best solution, i.e., the one with the shortest distance from the ideal solution. Therefore, the LP-metric function (48) is applied to compute the distance of an available solution from the ideal solution.
f_j(x) and w_j represent the jth objective function and its importance degree (weight), respectively.
p determines the degree of emphasis placed on the deviations. To calculate the ideal value of the jth objective function, f_j*, the model is solved with the jth objective function and the existing constraints. Moreover, to find the anti-ideal value of the jth objective function, f_j^-, the objective function is reversed. Finally, we minimize the LP-metric function (48) subject to the constraints to obtain the solution.
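Since equation (48) is not reproduced in this extraction, the following is a common form of the LP-metric (compromise programming) distance consistent with the description above; the exact expression and normalization used in the paper may differ.

```latex
% Common LP-metric form; f_j^* and f_j^- are the ideal and anti-ideal values
% of the jth objective, w_j its weight, and p the emphasis on deviations.
LP(x) \;=\; \left[\, \sum_{j=1}^{J} w_j^{\,p}
      \left( \frac{f_j^{*} - f_j(x)}{\,f_j^{*} - f_j^{-}\,} \right)^{p} \right]^{1/p}
```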
4-2-The NSGA-II algorithm
In this paper, the NSGA-II algorithm (Deb et al. 2002) is applied for solving the considered bi-objective models at each level. Note that the algorithm is adopted to solve medium- and large-scale instances.
4-2-2-Crossover
The crossover operator is used to transfer the parents' characteristics to next-generation children. The simplest types of crossover are single-point, two-point, and uniform crossover. In this paper, the two-point crossover is applied. First, two parents are selected, and then two numbers are randomly selected to determine the cut-off area. The first child inherits the cut region from the second parent, and the second child inherits it from the first parent. Figure 2 shows the crossover operator in this study.
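A minimal sketch of the two-point crossover described above is given below. The list-based encoding is a simplification (the actual chromosome structure of the model is more involved), and the function name is an assumption for illustration only.

```python
import random

def two_point_crossover(parent1, parent2):
    """Two-point crossover: two random cut points define a region; the first
    child inherits that region from the second parent and vice versa."""
    n = len(parent1)
    c1, c2 = sorted(random.sample(range(n + 1), 2))  # cut-off area [c1, c2)
    child1 = parent1[:c1] + parent2[c1:c2] + parent1[c2:]
    child2 = parent2[:c1] + parent1[c1:c2] + parent2[c2:]
    return child1, child2

# Example usage with toy chromosomes
p1 = [1, 2, 3, 4, 5, 6]
p2 = [7, 8, 9, 10, 11, 12]
print(two_point_crossover(p1, p2))
```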
4-2-3-Mutation
The mutation operator is usually performed with a very low probability. In the present study, two genes of a parent chromosome are randomly selected and their positions are swapped, as shown in Figure 3.
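The following is a minimal sketch of such a swap mutation, again with a simplified list encoding; the function name and the default probability value are assumptions, not taken from the paper.

```python
import random

def swap_mutation(chromosome, pm=0.05):
    """Swap mutation: with a small probability pm, two gene positions are
    selected at random and their values are exchanged."""
    child = list(chromosome)
    if random.random() < pm:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```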
4-3-The hierarchical iterative approach
The proposed three-level model is solved based on a hierarchical iterative algorithm. The description of the algorithm is as below: Step 1: Determine the initial value of the variable da_jkpvt. It is randomly generated from formulas (45) and (46) at iteration 0 (Iter = 0).
Step 2: Solve the FLM using the current values of da_jkpvt. Steps 3 and 4: Solve the SLM using the results of the FLM, and then solve the TLM using the results of the SLM; the value of da_jkpvt obtained from the TLM will be considered for the FLM as a parameter in a conceivable new iteration.
Step 5: Check whether the stopping condition has been met, i.e., whether the change in the level objectives between two successive iterations is smaller than w*, where w* is an iteration accuracy given in advance. If so, stop the iteration; if not, return to Step 2. Note that the NSGA-II algorithm is implemented in the context of the above procedure. The NSGA-II is a population-based algorithm, so the Mean Ideal Distance (MID) criterion is applied to calculate w̄_1, w̄_2, and w̄_3. Equation (51) is used to calculate MID, and equation (52) is then used to calculate w̄_1, w̄_2, and w̄_3 in Step 5.
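A minimal sketch of this hierarchical iterative loop is given below. The level solvers, the MID function, and the exact stopping rule are assumed interfaces used only to illustrate the flow of Steps 1-5; they are not the authors' code.

```python
def hierarchical_iteration(solve_flm, solve_slm, solve_tlm, mid, init_da,
                           w_star=1e-3, max_iter=50):
    """Hierarchical iterative approach sketch. solve_flm/solve_slm/solve_tlm
    are assumed to return (non-dominated front, linking variables) for each
    level; mid(front) computes the Mean Ideal Distance of a front."""
    da = init_da                               # Step 1: initial da_jkpvt
    w_prev = None
    for it in range(max_iter):
        front1, link12 = solve_flm(da)         # Step 2: recycling centers
        front2, link23 = solve_slm(link12)     # Step 3: factories
        front3, da = solve_tlm(link23)         # Step 4: distribution centers
        w_bar = (mid(front1), mid(front2), mid(front3))
        # Step 5: stop when the MID values change less than the accuracy w*
        if w_prev is not None and all(abs(a - b) <= w_star
                                      for a, b in zip(w_bar, w_prev)):
            break
        w_prev = w_bar
    return front1, front2, front3
```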
4-4-Monte Carlo Simulation
MC simulation is preferable to a deterministic simulation of a system when the input variables of the system are random. In a deterministic simulation, a single value is selected for each input variable based on the best guess of the modeler; the model is then run, and the output is considered. This output is a single value or a single set of values based on the selected input. MC simulation instead randomly samples values from every input variable distribution and applies the sample to compute the model's output. This procedure is iterated many times until the modeler achieves a sense of how the output changes given the random input values (Sokolowski and Banks 2010).
In this paper, it is assumed that the demand parameter in the third level (distribution centers) is uncertain. We use MC simulation to deal with this uncertainty within the NSGA-II algorithm. Therefore, MC simulation is applied to appraise the objective functions of each chromosome, considering the probability distribution of the demand, which is obtained from historical data. The expected value of each objective function related to a chromosome is estimated by simulation. Moreover, we calculate the number of simulation replications using equation (53).
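The sketch below illustrates this Monte Carlo evaluation of a chromosome and a sample-size rule in the spirit of equation (53). The objective-function and demand-sampler interfaces, and the exact form of the replication formula, are assumptions for illustration; the paper's equation (53) may differ.

```python
import numpy as np
from scipy.stats import norm

def mc_evaluate(chromosome, objective_fns, demand_sampler, n_reps=8):
    """Estimate the expected objective values of a chromosome by sampling
    demand scenarios and averaging each objective over the replications."""
    samples = [[f(chromosome, demand_sampler()) for f in objective_fns]
               for _ in range(n_reps)]
    return np.mean(samples, axis=0)

def n_replications(std, error=0.1, confidence=0.95):
    """Assumed sample-size rule n = (z * s / e)^2, a common form for choosing
    the number of Monte Carlo replications from a pilot run."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return int(np.ceil((z * std / error) ** 2))
```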
4-5-Comparison of multi-objective solution methods
Two metrics, set coverage and spacing, are usually applied to evaluate the convergence and diversity of a multi-objective solution method. We apply the following metrics to assess the non-dominated solutions obtained by the NSGA-II and MCNSGA-II algorithms. Zitzler and Thiele (1999) proposed the set coverage metric C(A, B), which computes the fraction of solutions in B that are weakly dominated by solutions in A.
4-5-1-Set coverage metric
C(A, B) = 1 means that all members of B are weakly dominated by A, whereas C(A, B) = 0 shows that no member of B is weakly dominated by A. Schott (1995) presented the spacing metric to calculate the comparative distance between successive solutions in the non-dominated set Q. Based on the above-mentioned metrics, an algorithm with a greater C and a smaller S is preferred.
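A short sketch of both metrics is given below, assuming minimization objectives and that each non-dominated set is a list of objective vectors; the function names are illustrative only.

```python
import numpy as np

def weakly_dominates(a, b):
    """a weakly dominates b when it is no worse in every objective (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def set_coverage(A, B):
    """C(A, B): fraction of solutions in B weakly dominated by at least one
    solution in A (Zitzler and Thiele, 1999)."""
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)

def spacing(Q):
    """Spacing metric: variability of the minimal Manhattan distance of each
    solution in the non-dominated set Q to the other solutions."""
    Q = np.asarray(Q, dtype=float)
    d = np.array([min(np.abs(Q[i] - Q[j]).sum()
                      for j in range(len(Q)) if j != i)
                  for i in range(len(Q))])
    return np.sqrt(((d.mean() - d) ** 2).sum() / (len(Q) - 1))
```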
5-Computational results
In this section, numerical experiments are conducted to investigate the performance of the three-level bi-objective MIP model, the NSGA-II algorithm, and the MCNSGA-II algorithm based on random data. First, we present how the test problems are generated. The Taguchi approach is then employed to tune the parameters of the NSGA-II algorithm, and the best level of each parameter is obtained by the signal-to-noise diagram in the Minitab software. Sensitivity analysis is then performed for different weights of the LP-metric function, and the best weight for each level is selected. The results obtained by the LP-metric method and the NSGA-II algorithm are compared, and the relative distance of each sample is determined. Note that the LP-metric method and the NSGA-II algorithm are implemented in GAMS and MATLAB, respectively, and are tested on a computer with a 2.3 GHz CPU and 8.0 GB of RAM. Finally, the MCNSGA-II algorithm is implemented on two instances, and the results are compared with the NSGA-II algorithm.
5-1-Data generation
We test the suggested mathematical model and the solution approach using randomly generated test problems. The dimensions of the test problems, which are categorized into small, medium, and large-scale, are introduced in Table 5. Moreover, the values of the model parameters are randomly generated based on uniform distributions, as shown in Table 6.
5-2-Parameter tuning of NSGA-II
The Taguchi experimental design approach is utilized to calibrate the parameters of the NSGA-II algorithm. The approach minimizes the effect of noise and specifies the optimal level of the signal factors. To do so, the signal-to-noise ratio (S/N), which quantifies the variation of the response, is used; the method then aims to maximize the S/N ratio (Peace 1993). In this paper, the MID and maximum scattering (MS) metrics are used to calculate the response based on equation (56). Table 7 exhibits the different levels of the factors for the NSGA-II algorithm parameters at the recycling centers level used to run the Taguchi method, where Pc and Pm are the crossover and mutation probabilities, respectively. Then, applying Minitab software, the L9 design is applied for the NSGA-II algorithm, and the experimental results are illustrated in Table 8. The same approach is applied to the parameter tuning of the NSGA-II algorithm at the second and the third levels. Figure 5 indicates the effect plot of the S/N ratio related to each level. According to Figure 5, the best parameter values of the NSGA-II algorithm for each level are indicated in Table 9.
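For reference, a commonly used smaller-the-better form of the S/N ratio is sketched below; whether equation (56) uses exactly this form, and how MID and MS are combined into the response y_i, is an assumption here.

```latex
% Smaller-the-better S/N ratio over n observed responses y_i.
S/N \;=\; -10 \,\log_{10}\!\left( \frac{1}{n} \sum_{i=1}^{n} y_i^{2} \right)
```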
5-3-The weight coefficients in LP-metric method
It is necessary to determine suitable weight coefficients for the objective functions in order to execute the proposed model in GAMS software based on the LP-metric method. Different weight combinations are solved for a small-size deterministic problem, and the appropriate weight is generalized to the other problems. Figure 6 illustrates how the values of the objective functions are affected by various weights at the different levels of the problem. The most significant variations in the objective functions happen for weight coefficients between 0.4 and 0.9; for the other weights, the objective function value becomes zero. So, it is better to weight the goals between these limits. According to Figure 6, the weight coefficients for each level are selected within this range.
5-4-Validation of the NSGA-II algorithm
The LP-metric method is used for model validation. Thus, six small-scale problems are solved in GAMS software with the LP-metric method. Note that this is not possible for large-scale problems. Moreover, for comparison, the six small problems are solved with the NSGA-II algorithm. Since the NSGA-II starts from a randomly generated initial population, each test problem is solved ten times. The best non-dominated solution is selected from the ten runs, and its LP-metric measure is calculated. The measured value is compared with the value obtained from the exact method run in GAMS. In Table 10, the relative gap between the values of the two LP-metric measures, which are achieved from the exact method and the NSGA-II algorithm, is calculated. According to Table 10, the average of the relative gaps is 7%, 7%, and 1% for the three levels, respectively. Therefore, we can conclude that the NSGA-II algorithm is suitable to solve the considered problem.
5-5-Analysis of MC Simulation
In this research, it is assumed that the demand for remanufactured and new products in distribution centers at different periods is stochastic. The probability distribution function associated with the value of demand is uniform. The MC simulation approach is used to deal with this uncertainty. Equation (53) is used to calculate the number of simulation replications. First, a pilot run of MC simulation is implemented, considering an initial sample size of 10 replications, to calculate the mean and the standard deviation of the two objective functions for a chromosome. By considering a confidence level of 95% and an error of 0.1, the number of simulation replications is obtained as eight based on equation (53).
The medium and large-size problems are solved to compare the NSGA-II and MCNSGA-II algorithms. Figure 7 and Figure 8 present the corresponding results, and Table 11 and Table 12 indicate the values of the convergence (C) and spacing (S) metrics of the different optimization methods. As can be seen, the MCNSGA-II algorithm presents better non-dominated solutions than the NSGA-II algorithm.
6-Conclusions
In this paper, we considered an integrated PD planning problem of a multi-level GCLSC system that includes multiple recycling centers, multiple remanufacturing/manufacturing centers, and multiple distribution centers. A three-level bi-objective MIP model is presented to maximize profit and minimize the amount of greenhouse gas emissions. A hierarchical iterative approach using the LP-metric method and the NSGA-II algorithm is suggested to solve the proposed model. The Taguchi experimental design approach is applied to find the optimum control parameters of NSGA-II. Moreover, the NSGA-II algorithm is combined with MC simulation (MCNSGA-II) to deal with the uncertainty of product demand in distribution centers. The results show that the simulation-optimization approach produced better results than the deterministic approach. Furthermore, there are several opportunities for future research. First, to better reflect real environments, it is important to consider uncertainty in other influential parameters such as purchase costs, shipping costs, and so on. Second, the literature on multi-level production-distribution planning shows that goals such as customer satisfaction and the reduction of delay time have rarely been considered; adding these goals could be of interest for future research. Third, in this and other studies, multi-level planning has been solved at two or three levels. Adding a fourth level, the supplier level, can be an appropriate area for future research.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest regarding the publication of this paper.
Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.
[Notation fragment from the Appendix tables: x, inventory quantity of remanufactured product p in distributor j at the end of period t; D_jpt, inventory quantity of returned product p in distributor j at the end of period t; da_jkpvt takes its value in the FLM; fdn_ijpvt and fdr_ijpvt take their values in the SLM.]
| 6,342 | 2021-10-21T00:00:00.000 | [
"Environmental Science",
"Business",
"Engineering"
] |
Interdependent linear complexion structure and dislocation mechanics in Fe-Ni
Using large-scale atomistic simulations, dislocation mechanics in the presence of linear complexions are investigated in an Fe-Ni alloy, where the complexions appear as nanoparticle arrays along edge dislocation lines. When mechanical shear stress is applied to drive dislocation motion, a strong pinning effect is observed where the defects are restricted by their own linear complexion structures. This pinning effect becomes weaker after the first dislocation break-away event, leading to a stress-strain curve with a profound initial yield point, similar to the static strain ageing behavior observed experimentally for Fe-Mn alloys with the same type of linear complexions. The existence of such a response can be explained by local diffusion-less and lattice distortive transformations corresponding to L10-to-B2 phase transitions within the linear complexion nanoparticles. As such, an interdependence between linear complexion structure and dislocation mechanics is found.
Introduction
Linear complexions are thermodynamically-stable nanoscale phases recently discovered at dislocations in the Fe-9 at.% Mn alloy [1]. Similar to interfacial complexions confined to grain boundary regions [2][3][4], linear complexions are defined by a structure and chemistry that are different from the matrix yet can only exist in the presence of crystalline defects, with dislocations serving that role for linear complexions. Using atomistic simulations, the authors of this work recently predicted a wide variety of linear complexions in body centered cubic (BCC) and face centered cubic (FCC) metals [5][6][7]. One interesting feature of linear complexions in BCC Fe-based alloys is the presence of a metastable phase in the dislocation segregation zone, which maintains coherent interfaces with the matrix phase. These metastable phases have been reported for the Fe-Ni system with simulations [5] and for the Fe-Mn system with experiments [8]. Other interesting features have been predicted for FCC alloys, such as the formation of 2D platelet phases which can form platelet arrays along partial dislocations or replace the dislocation stacking fault [7]. While some of these complexion types still require experimental validation, it is clear that linear complexions at dislocations represent a new and exciting materials research area for crystalline solids, as this topic has the potential to enable new materials with unique properties.
While the effects of grain boundary complexions on various material properties have been studied extensively [9][10][11][12], similar research on the influence of linear complexions is limited to the work of Kwiatkovski de Silva et al. [13], who demonstrated a static strain aging effect in single crystal Fe-Mn samples containing linear complexions. Specifically, the atomic-scale details of dislocation-linear complexion interactions and the associated mechanical behavior are not known. Atomistic simulations have proven to be a powerful tool for investigating the nanoscale mechanics involving dislocation interactions with alloying elements [14], grain boundaries and grain boundary complexions [15,16], nanoscale precipitates [17,18], ceramic nanoparticles [19], Guinier-Preston zones [20,21], and vacancy clusters [22]. Atomistic simulations act in these situations as a digital microscope, providing a great level of detail on structural and chemical transitions as well as deformation mechanisms at the nanoscale. For example, multi-principal element alloys have intriguing mechanical properties yet their deformation physics are complicated by the compositional complexity of the lattice. Jian et al. [23] reported on the roles of lattice distortion and chemical short range order on dislocation behavior, finding that these factors can result in enhanced glide resistance. Xu et al. [24] explored the local slip resistances in a BCC multi-principal element alloy on a variety of slip planes and with a variety of Burgers vectors, observing that these alloys could deform by a multiplicity of slip modes. The work of Wang et al. [14] provided experimental validation of such a plasticity mechanism and connected this behavior to the observation of a strength plateau at intermediate temperatures, rather than the rapidly decreasing strength of traditional BCC alloys with increasing testing temperature.
Due to the recent discovery of linear complexions, a comprehensive investigation of their effect on dislocation propagation and pinning is missing in the literature. In this paper, we provide the first mechanistic insight into the effect of nanoparticle array linear complexions in a BCC Fe-Ni alloy on mechanical behavior, with a solid solution of the same composition providing a point for comparison. The atomistic mechanisms associated with dislocation pinning and unpinning events during the shear deformation are investigated in detail, and connected to the shape of the stress-strain curve. Finally, the structure of the linear complexion is found to change as the dislocation and its local stress field moves away, resulting in an interdependence of dislocation behavior and complexion structure. The results shown here highlight that linear complexions are defect states that both alter and react to the dislocation environment, providing a pathway for the direct manipulation of mechanical behavior.
Materials and Methods
Atomistic simulations, including molecular statics, molecular dynamics (MD), and hybrid Monte Carlo (MC)/MD, were performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) software [25], with an embedded-atom method (EAM) potential parametrized to reproduce the binary Fe-Ni system used to model atomic interactions [26]. All MD simulations used a 1 fs integration timestep. Atomic snapshots were analyzed and visualized using the OVITO software [27]. Crystalline structure and chemical ordering were analyzed using the Polyhedral Template Matching method [28], while the positions of dislocation lines were identified using the Dislocation Extraction Algorithm [29].
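A minimal sketch of such an analysis workflow using the OVITO Python interface is shown below. The input file name is hypothetical, and the attribute name follows recent OVITO versions; this is not the authors' actual script.

```python
from ovito.io import import_file
from ovito.modifiers import (PolyhedralTemplateMatchingModifier,
                             DislocationAnalysisModifier)

# Hypothetical trajectory of LAMMPS dump files from the shear simulation
pipeline = import_file("dump.shear.*.gz")

# Polyhedral Template Matching, with ordering output to distinguish B2/L10-like order
pipeline.modifiers.append(PolyhedralTemplateMatchingModifier(output_ordering=True))

# Dislocation Extraction Algorithm for a BCC matrix
pipeline.modifiers.append(DislocationAnalysisModifier(
    input_crystal_structure=DislocationAnalysisModifier.Lattice.BCC))

for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    # Total dislocation line length per frame (attribute name per recent OVITO)
    print(frame, data.attributes['DislocationAnalysis.total_line_length'])
```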
The Fe-Ni system and the given potential were chosen for several reasons. First, the Fe-Ni phase diagram is similar to the phase diagram of the Fe-Mn system, in which linear complexions were first discovered experimentally. This is important because only two Fe-Mn potentials currently can be found in the literature, but neither is appropriate for the goals of this study. Bonny et al. [30] developed an Fe-Mn EAM potential while Kim et al. [31] developed an Fe-Mn modified embedded atom method (MEAM) potential but neither was rigorously fitted to reproduce the bulk phase diagram. Moreover, MEAM potentials are not currently compatible with the hybrid MC/MD code used here. Since it is critical to reproduce the relevant phases for the alloy system, the Fe-Ni potential from Bonny et al. [26] was chosen for this work as it was fitted based on the experimental phase diagram for Fe-Ni and was found to reproduce the stable intermetallic phases (L10-FeNi and L12-FeNi3). The main weakness of the potential is that the solubility limit of Ni in BCC Fe is overestimated, meaning that exact composition values for complexion transitions should not be compared with experiments. In addition, Domain and Becquart [32] performed density functional theory (DFT) calculations of segregation to self-interstitial atom clusters, finding that Ni may be able to segregate to sites under both local compressive and tensile stresses, while the EAM potential only predicted segregation to the tensile regions. This suggests that some segregation of Ni dopants to both sides of the dislocation may occur during the initial segregation stages. However, the final linear complexion structure is not expected to be affected by this, and in fact the simulated linear complexions resemble those observed experimentally in Fe-Mn [5,6]. Strong variations of Ni composition along the dislocation line were observed at the compression side in the dislocation segregation zone, with the composition near precipitates approaching ~50 at.% Ni while in between precipitates the composition was near the global composition in the system. Based on these observations, we conclude that the final form of the complexion and solute distribution around the dislocation core is controlled by the second-phase precipitation and growth, and not by the initial solute segregation. Next, we provide the simulation details for a representative computational cell containing linear complexions, prepared for mechanical testing.
First, an initial simulation cell with two edge dislocations was prepared, as shown in Figure 1(a). The dislocations were inserted by removing one-half of the atomic plane in the middle of the sample and relaxing the atomic structure using the conjugate gradient descent method implemented in LAMMPS. Next, 2 at.% Ni atoms were randomly distributed within the sample by replacing Fe atoms, followed by MD equilibration with an NPT ensemble (constant number of atoms, constant pressure, and constant temperature) at zero pressure and 300 K temperature for 20 ps. Three thermodynamically-equivalent initial solid solution configurations with different random distributions of solutes were prepared with this procedure, providing a baseline for comparison against samples containing linear complexions. To induce linear complexion formation, the alloy samples were equilibrated at 500 K using the hybrid MC/MD method which allows for both chemical segregation as well as local structural relaxations. The MC steps were performed after every 0.1 ps of MD relaxation time, using a variance-constrained semi-grand canonical ensemble that can stabilize alloy systems with coexisting phases [33,34]. This method has been used in a number of recent modeling studies to capture complexion transitions in metallic alloys [35][36][37]. The MC/MD procedure led to Ni segregation to the compressive side of the dislocations and then formation of linear complexions in the form of nanoscale precipitate arrays composed of metastable B2-FeNi and stable L10-FeNi phases, as shown in Figure 1. The presence of nanoscale precipitates reduces the compressive stresses on the side of the dislocation with the extra half-plane of atoms (see the zoomed view of the bottom dislocation in Figure 2). More details about the MC/MD procedure for linear complexions in the Fe-Ni system can be found in our previous studies of equilibrium complexion states [38,39]. The MC/MD simulations were determined to be in equilibrium once the rate of evolution of total simulation cell energy fell below 1 eV/ns, although the procedure was continued for additional time to sample three distinct yet thermodynamically equivalent samples. These samples were then cooled to 300 K over 20 ps with MD for subsequent mechanical testing. The solid solution specimens and the samples containing linear complexions were then deformed to promote dislocation slip by applying XY shear deformation to the simulation cell. The deformation of the cell was performed with the non-equilibrium MD method [40,41], using an engineering strain rate of 10^8 s^-1 at 300 K. To test the effect of deformation rate, an additional simulation was performed at an engineering strain rate of 10^7 s^-1. The deformation of the cell was stopped after 10% applied shear strain, and the stress-strain curves were extracted and analyzed in the context of atomic-scale deformation mechanisms and the local structure of the linear complexions.
Results and Discussion
Figure 3(a) shows the obtained stress-strain curves for the samples with linear complexions. We observe profound initial stress peaks for all the samples, indicated by black circles. The initial plastic events are followed by smaller peaks, indicated by black triangles, that represent the flow stress of the dislocations passing by the linear complexions. Similar stress-strain curves with a profound first peak have been previously reported in experimental work on linear complexions in an Fe-9 at.% Mn alloy [13]. Substantial strain aging is observed in both the experimental Fe-Mn and modeled Fe-Ni systems with linear complexions. The average values for both the initial break-away stress and the flow stress for the simulated Fe-Ni samples with linear complexions are presented in Figure 3(b). The mean initial break-away stress of 586 MPa is almost 50% higher than the mean flow stress of 404 MPa. We note that these values should not be directly compared to experimental measurements, as other aspects of alloy strengthening from defects such as grain boundaries are not present in the simulation cell, which isolates the dislocation pair. To understand how deformation differs between the solid solution and linear complexion states, Figure 4 presents the initial flow events for representative examples of each of the two sample types. Figure 4(a) shows the elastic loading and initial flow event, where it is clearly observed that the dislocation can move much more easily in the solid solution sample. Figure 4(b) shows the dislocation pair at three different simulation times (or, equivalently, applied strains since the strain rate is controlled). The two dislocations in the pair remain relatively straight during the motion, with the top dislocation shifting to the right and the bottom dislocation moving to the left. The solutes in the solution act as obstacles that must only be locally overcome, leading to very little change in the dislocation shape. The bottom dislocation is shown from a top view in the lower half of Figure 4(b), demonstrating the progressive migration that leads to a temporary stress drop as a set of local, solute obstacles are overcome. The local obstacles are easily overcome, which is why the initial peak stress for the solid solution specimens is relatively low. In general, the dislocation in the solid solution sample behaves in a "textbook" fashion, with few features of interest. The mechanism of the dislocation motion is drastically different for the specimen with linear complexions, with dislocation bowing and unpinning from the nanoscale precipitates one by one, starting in the region with the largest distance between particles, controlling the yield event shown in Figure 4(c). It is important to note that only one dislocation, the defect at the bottom of the cell, is moving in the linear complexion sample. The other dislocation at the top of the cell bows under increasing stress but remains pinned by the nanoparticle array linear complexion. One dislocation is favored to move over the other because the nanoparticle arrays, while similar, are not exactly identical. A bowing mechanism is easiest in the location where there is the largest spacing between obstacles, which occurs near the middle of the lower dislocation. While dislocation bowing is a common mechanism for overcoming precipitates in conventional alloys, an important distinction is found for the linear complexion state: the obstacle is not in the dislocation's slip plane.
Traditional Orowan bowing occurs when an impenetrable obstacle (e.g., a precipitate with a different crystal structure than the matrix) impedes the dislocation's slip path, requiring bowing to move past the obstacle in a way that leaves dislocation loops around the obstacle. However, in the case of linear complexions, the nanoparticles are primarily above or below the dislocation slip plane (depending on whether one is looking at the top or bottom dislocation). Even the small portion of the B2-FeNi intermetallic phase that crosses the slip plane in Figure 1 has the same BCC crystal structure as the matrix phase and a lattice parameter that is similar. The linear complexion is not an impenetrable obstacle and in fact no new dislocation loops or segments are formed as the dislocation pulls away. The strong initial pinning can be explained from the same energetic perspective that is used to describe the complexion nucleation. Segregation of Ni occurs to the compressed region near the edge dislocation, until the local composition is enriched enough that a complexion transition can occur [5,38]. Although the Gibbs free energy of the L10 phase is lower, which is why this structure appears on the bulk phase diagram, the restriction to a nanoscale region and the large energy cost for an incoherent BCC-L10 interface results in the formation of a "shell" of a metastable B2 phase surrounding the L10 core [38]. Fundamentally, the local stress field near the dislocation is relaxed by this transformation and the driving force for motion under shear stress (i.e., the Peach-Koehler force [42,43]) is therefore reduced. We do note that segregation of Ni to the dislocation is needed to create the linear complexion states, thus lowering the solute composition in the matrix solid solution and removing some weak obstacles. However, the net effect is still a notable strengthening increment, as the strong linear complexion obstacles more than make up for losing some amount of solid solution strengthening.
Since periodic boundary conditions are used here, the dislocation exits the cell after it breaks away from the linear complexion and then reenters on the other side, where it eventually interacts with the complexion again. Therefore, subsequent dislocation-complexion interactions can be observed by continuing the simulation and investigating the flow events at larger strains. A detailed look at multiple flow events is shown in Figure 5 for one of the linear complexion samples. In addition to the stress-strain curve shown in black, measurements of the dislocation length at any given time are also extracted and presented as the blue curve. Three separate events are labeled A-B, C-D, and E-F and isolated into separate parts of the figure. Similar to the initial yield event in Figure 4, only one dislocation (bottom) is moving while the other (top) remains pinned by the nanoscale precipitates. Figure 5 shows that there are cyclic undulations in the shear stress, which correlate with the observation of repeated bursts in the dislocation density that are associated with dislocation bowing. In fact, the shapes of the dislocation length bursts are extremely similar for each cycle, suggesting that the physical events are similar as well. Snapshots A-B show the final pinned segment of the dislocation breaking away from a large particle in the complexion array, then re-entering from the other side and actually being attracted to the particles, as evidenced by the fact that the dislocation is pulled closer to the precipitates first in frame B. Snapshots C-D show the first bowing event in a new cycle, which occurs at the location with the largest spacing between particles and is reminiscent of the events presented in Figure 4(c). We do remind the reader that this bowing does occur at lower stresses than the initial yield event, perhaps suggesting that something about the complexion has evolved. Finally, snapshots E-F show an intermediate bowing event, neither the first nor the last in a cycle. To investigate complexion structure during the deformation simulation, the numbers of B2 and L10 atoms in the simulation cell were tracked and are shown in Figure 6(a). A cyclic pattern is again observed, suggesting a connection to the repetitive dislocation motion through the cell. Most notable is that reductions in L10 atoms are generally aligned with increases in the number of B2 atoms (and vice versa). Figures 6(b)-(e) show atomistic snapshots of important dislocation interactions with a linear complexion particle, in an effort to understand this cyclic behavior. In Figure 6(b), the dislocation is still pinned next to the L10 precipitate, so the complexion contains the expected L10 core and B2 shell. The dislocation started to pull away from the particle in Figure 6(c), and close inspection of the complexion particle shows a reduction in the size of the green L10 region. However, the transition to the B2 structure does not happen immediately, as some time is apparently needed, although short since it is captured on MD time scales. Figure 6(d) shows a later time when the dislocation has moved fully away from the particle, and the linear complexion is almost entirely composed of B2 structure in this frame (a very small number of green L10 atoms remain but the number is dramatically reduced). This observation provides further support for the concept that the two-phase complexion structure is caused by the dislocation's hydrostatic stress field.
When that stress field is no longer present, a diffusion-less and lattice distortive transformation from an FCC-like structure (L10) to a BCC-like structure (B2) occurs. Figure 6(e) shows that the L10 region of the precipitate starts to be recovered as the dislocation, and its stress field, arrives at the other side. We note that while a cyclic behavior is observed in Figure 6(a), the number of B2 atoms trends downward generally as the process continues. This could be a sign that the linear complexion particles are becoming smaller in subsequent cycles (or at least during the transitions from the first few cycles to the steady-state cycling), which would provide an explanation for why the initial yield event is more difficult than later dislocation flow. Figure 6 generally highlights the close connection between the structure of the linear complexion and its interactions with the dislocation. The complexions restrict the dislocation and make it harder to move, while the structure of the complexion relies on the local stress field from the dislocation to find a local equilibrium structure. There is hence an interdependence of these two defect structures, which truly sets linear complexions apart from traditional obstacles to dislocation motion. Finally, to show that this local transition within the complexion is not dependent on the deformation rate used here, an additional deformation simulation was run at a one order of magnitude slower strain rate (10^7 s^-1). The results of this simulation are presented in Figure 7, capturing all of the same important features described above. Loading is initially elastic until a high stress is reached, when the dislocation is able to bow out at the region with the largest spacing between linear complexion particles. The dislocation re-enters the simulation cell and becomes pinned, and subsequent dislocation unpinning and motion follows the same mechanism. A reduction of the number of L10 atoms in the system and a corresponding increase in the number of B2 atoms is observed each time the dislocation is able to break away. These trends agree with the L10-to-B2 lattice distortive transformation shown in Figure 6.
Conclusions
This paper presents the first atomistic study of the effect of linear complexions on the mechanical behavior of metallic alloys, with Fe-Ni used as a model system. A strong pinning effect of linear complexions on their host dislocations is observed, which is connected to the alteration of the dislocation's stress field in the crystal. This pinning effect leads to a substantial increase in the initial breakaway stress, with a pronounced initial peak stress, in agreement with experimental observations from alloys with similar complexion structures. Dislocation motion away from the nanoparticle arrays leads to an L10-to-B2 lattice distortive transformation as the stress field which stabilized the original complexion structure is removed. These findings provide additional understanding and context for nanoscale phase transformations induced by dislocation stress fields and their effect on the mechanical properties of materials. Additional study to obtain a complete understanding of linear complexion thermodynamics and the deformation physics associated with dislocation-complexion interactions is needed to enable "defects-by-design" which can be used to tailor mechanical response.
Conflicts of Interest:
The authors declare no conflict of interest. | 5,139.4 | 2020-11-13T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Impact of the Endocardium in a Parameter Optimization to Solve the Inverse Problem of Electrocardiography
Electrocardiographic imaging aims at reconstructing cardiac electrical events from electrical signals measured on the body surface. The most common approach relies on the inverse solution of the Laplace equation in the torso to reconstruct epicardial potential maps from body surface potential maps. Here we apply a method based on a parameter identification problem to reconstruct both activation and repolarization times. From an ansatz of action potential, based on the Mitchell-Schaeffer ionic model, we compute body surface potential signals. The inverse problem is reduced to the identification of the parameters of the Mitchell-Schaeffer model. We investigate whether solving the inverse problem with the endocardium improves the results or not. We solved the parameter identification problem on two different meshes: one with only the epicardium, and one with both the epicardium and the endocardium. We compared the results on both the heart (activation and repolarization times) and the torso. The comparison was done on validation data of sinus rhythm and ventricular pacing. We found similar results with both meshes in 6 cases out of 7: the presence of the endocardium slightly improved the activation times. This was the most visible on a sinus beat, leading to the conclusion that inclusion of the endocardium would be useful in situations where endo-epicardial gradients in activation or repolarization times play an important role.
INTRODUCTION
Electrocardiographic imaging aims at reconstructing cardiac electrical events from electrical signals measured on the body surface. The most common approach relies on the inverse solution of the Laplace equation in the torso to reconstruct epicardial potential maps from the body surface electrical potential maps (BSPM) (Wang and Rudy, 2006). This technique requires a regularization strategy to deal with the ill-posedness of the problem, for example Tikhonov regularization. However, as this regularization is applied to potential patterns, it suppresses the steep voltage gradients that characterize activation wavefronts. This leads to prominent errors such as artefactual block lines in the reconstructed activation map (Duchateau et al., 2017;Ravon et al., 2017).
Other methods have been designed to reconstruct directly the activation times (van Oosterom and Oostendorp, 1992; Liu et al., 2006). While Liu et al. (2006) look for the three-dimensional activation sequence in the ventricular muscle, van Oosterom and Oostendorp (1992) reconstruct activation on both the epicardium and the endocardium. van Dam et al. (2009) proposed a method that solved both the activation and the repolarization. Based on an equivalent double layer model, it updates activation and repolarization times alternatingly. Ghodrati et al. (2006) developed two methods to reconstruct epicardial information. One optimizes the position of the depolarization front at each time. The second reconstructs epicardial potentials with a regularization term based on the estimation of the wavefront behavior. These approaches still rely on a Tikhonov-like regularization technique. Recently, studies that reconstruct both the activation and the recovery, with a novel regularization technique, have been published (Cluitmans et al., 2017, 2018). The regularization is done through an electrophysiological input and the potentials on the torso are sparsely represented to deal with the ill-posedness of the problem. Others used a probabilistic approach to find parameters (Rahimi et al., 2016; Dhamala et al., 2018). The former used the two-variable Aliev-Panfilov model (Aliev and Panfilov, 1996) to model the AP. Their aim was to probabilistically personalize a model parameter using machine learning methods. The estimation was made on a whole-heart 3D model, from BSPMs or extracellular potentials. In the latter, the parameters of the model are assumed and the behavior of the wavefront is optimized. The same group worked on regularizing both the spatial and the temporal propagation of the action potential (Wang et al., 2010). The method relies on a two-variable propagation model with fixed parameters in a volumetric myocardium. It was then improved in Ghimire et al. (2017). Note that in these studies constraints on the spatial distribution are considered.
In a previous study (Ravon et al., 2017) we introduced a new technique that aims at recovering directly both the activation and repolarization maps on the epicardium. The general idea consists in looking for an ansatz of an action potential (AP) under the form of a function v(P; t) parameterized by a small number of parameters P, e.g., less than three. The upstroke of this AP is supposed to be at t = 0. From the knowledge of the activation times τ (x) on the heart, we can map the AP to a space-and time-dependent function V m (t, x) = v(P; t − τ ). In addition, the parameters P may have space-dependent values distributed on the surface, which enriches the model, but increases the number of unknown parameters. Then this transmembrane voltage function V m (t, x) is projected to body surface potential signals. The method searches for the parameters P and activation map τ that realize the best fit to the target body surface signals on a given time interval. It amounts to solving a nonlinear least squares parameter identification problem with a small number of (possibly distributed) parameters. We previously represented the action potential as the product of two logistic functions, as proposed by Van Oosterom and Jacquemet (2005). The final parameter identification problem (Ravon et al., 2017) consisted of identifying three distributed parameters, given the BSPM of a complete ventricular activation and repolarization sequence (i.e., a QRST waveform). This method was demonstrated to give a better range of activation times (ATs) and a smoother AT distribution than a solution based on the Laplace equation with Tikhonov regularization of order zero. However, it only reconstructed APs on the epicardium. In general, large and physiologically very relevant differences in AT and repolarization time (RT) can exist across the wall. Therefore, in this study we investigated whether including the endocardium improves the results.
To this aim, we tested our method on in silico data with and without important transmural gradients. The parameter identification problem was solved either on the epicardium only, or on both the epicardium and endocardium. We found that the quality of the reconstructed activation and repolarization maps (in terms of correlation coefficients) was similar when transmural gradients were small, but that inclusion of the endocardium improved the solution in a case where these gradients were important.
As compared to Ravon et al. (2017), we also changed the representation of the AP from the product of two logistic functions to the solution of the two-variable ionic model of Mitchell and Schaeffer (2003), to have a more relevant AP shape without increasing the number of parameters.
We resorted to a discretize-then-optimize strategy: we first set the direct problem that maps the parameters P and activation map τ to the voltage V m (t, x), and then to the BSPM φ T . This problem was discretized using triangulated surfaces. The parameters were identified in the discrete problem using a gradient descent method on a discrete least squares cost function.
Mapping the Parameters to the Transmembrane Voltage
The parameterization was based on the two-current model proposed by Mitchell and Schaeffer (2003). This model describes the dynamics of two functions: the voltage v and an auxiliary variable h. Both quantities are dimensionless and scaled between 0 and 1, and solve the ordinary differential equations (1)-(2) of the model. The five parameters were originally chosen as (Mitchell and Schaeffer, 2003): τ_in = 0.3 ms, τ_out = 6 ms, τ_open = 120 ms, τ_close = 150 ms, and v_gate = 0.13. The steady state for this model is (v, h) = (0, 1). The voltage v takes the shape of an AP if we set the initial condition as (v(0), h(0)) = (0.15, 1); see the red curve in Figure 1.
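For reference, the Mitchell-Schaeffer equations (1)-(2) referred to above take the following standard form; the stimulus current is omitted here because activation is triggered through the initial condition (v(0), h(0)) = (0.15, 1).

```latex
\frac{dv}{dt} \;=\; \frac{h\,v^{2}\,(1-v)}{\tau_{in}} \;-\; \frac{v}{\tau_{out}},
\qquad
\frac{dh}{dt} \;=\;
\begin{cases}
\dfrac{1-h}{\tau_{open}}, & v < v_{gate},\\[6pt]
\dfrac{-h}{\tau_{close}}, & v \geq v_{gate}.
\end{cases}
```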
The function v(t), defined for t ≥ 0 as the solution of the initial value problem (1)-(2) with (v(0), h(0)) = (0.15, 1), was extended by 0 for t < 0. It was our ansatz of an AP, denoted by v(P; t) for t ∈ R, and in general P = {τ_in, τ_out, τ_open, τ_close, v_gate}. For instance, the blue curve in Figure 1 is the graph of v(P; t − τ) for an activation time τ = 50 ms and the default values for P stated above.
In practice, the parameters τ in and τ open define the upstroke of the AP, and were fixed with their default values τ in = 0.3 ms and τ open = 120 ms. Similarly, the parameter v gate defines the excitability threshold and was fixed at v gate = 0.13. Hence, only the parameters τ out and τ close were searched as unknown parameters, because they are directly related to the AP duration. τ close can be seen as the plateau phase duration whereas τ out is linked to the speed of the repolarization. τ out also has a small impact on the amplitude of the voltage v.
In addition, we rescaled the voltage v by a factor A, so as to fit the scaling of the measured BSPM. Hence, we considered the amplitude A as an additional parameter.
[Figure 1 | Red curve: voltage v(P; t) with the default parameters P. Blue curve: TMP V_m(t) = v(P; t − τ) with τ = 50 ms.]
The parameter τ was distributed on the heart surface by the design of the method. Meanwhile, the parameters A, τ out , and τ close may be constant or distributed. Since AP duration varies across the heart surface, we would rather consider varying distributed parameters τ out and τ close .
Projecting the Transmembrane Voltage to the Body Surface Potential Map
Afterwards, we mapped the transmembrane voltage V_m(x, t) to extracellular potentials φ_e(x, t) as in Potse et al. (2009) (equation (4)), where V̄_m(t) was a fixed spatial average of V_m over the heart surface S (epicardium only, or epicardium and endocardium). The rationale of the formula is a rewriting of the bidomain model coupled with the hypothesis that the conductivity tensor fields in both the extra- and intra-cellular domains are homogeneous and isotropic. Here the ratio of conductivities was hidden in the factor A. Finally, we projected the extracellular potentials φ_e(x, t) to the body surface potentials φ_T(y, t) for any point y on the torso surface (equation (5)). This amounted to approximating the solution of the Laplace equation outside the heart domain, assuming it is an infinite homogeneous medium (Malmivuo and Plonsey, 1995; Macfarlane et al., 2010).
Discrete Surfaces and Approximations
In practice, the endocardial and epicardial surfaces were discretized by two separate triangular meshes (Figure 2) with N_H vertices in total. For the sake of computational simplicity, the mappings (4) and (5) were replaced by their discrete counterparts (equations (6)), where V_m(x_i, t) was given by the mapping (3) for the given parameter values at each vertex. Hence there are 1 + 3N_H parameters to be identified.
The Parameter Identification Problem
We looked for the parameter set P = (A, τ_out, τ_close, τ) ∈ R^(1+3N_H) that minimized the least squares error (7), where (y_j)_{j=1...N_T} were the N_T electrode locations on the body surface, (t_k)_{k=1...T_max} was the time sequence of interest, (φ*(y_j, t_k)) were the measured BSPMs, and (φ_T(y_j, t_k)) were the BSPMs computed according to equations (6). For each time t_k, the spatial averages φ̄_T(t_k) and φ̄*(t_k) were subtracted. Potentials are given up to a constant. This constant can be a reference electrode on the torso, the WCT, or the mean of all the electrodes. We chose the mean. Like Wilson's Central Terminal, it was also a way to reduce noise. Moreover, it rescaled the data around their mean value.
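The expression below is a plausible form of the least squares error (7), consistent with the mean-referencing described above; the exact expression used in the original may differ.

```latex
J(\mathcal{P}) \;=\; \sum_{j=1}^{N_T} \sum_{k=1}^{T_{max}}
  \Big[ \big(\phi_T(y_j,t_k) - \bar{\phi}_T(t_k)\big)
      - \big(\phi^{\star}(y_j,t_k) - \bar{\phi}^{\star}(t_k)\big) \Big]^{2}
```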
The total number of data elements is finally T_max N_T, which may be compared to the number of unknown parameters 1 + 3N_H. This nonlinear least squares problem was solved by the gradient descent method with the RMSprop update (Tieleman and Hinton, 2012). This is an adaptive learning rate method: at each iteration, the update reads as in equations (8) and (9), with κ ∈ R^(1+3N_H) an intermediate variable, η ∈ R the learning rate, and γ = 0.9. The learning rate was not fixed; an optimal value for η was chosen at each iteration in the range [10^-5, 10^2]. In equations (8) and (9) the operators ⊗, ⊘, and • denote the Hadamard product, division, and power, respectively. The gradient of the cost function J with respect to the unknown parameters P was calculated analytically. For the gradient descent method, an initial guess was required. We arbitrarily chose A = 10, the default values τ_out,i = 6 ms and τ_close,i = 150 ms for all i, and τ_i constant, τ_i = τ_0 ∈ R. Since the initialization was the same for all the nodes, the initial torso potentials were zero. The optimization ended when the cost function J and its gradient remained constant. The code was written in Matlab and was not parallel. Computational time was quite long and similar for all the cases, namely about one day. A more flexible stopping criterion and parallelism would reduce the computational time.
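A minimal sketch of one RMSprop iteration consistent with this description (equations (8)-(9)) is shown below; the small epsilon safeguard is a standard numerical addition and an assumption here, as is the function name.

```python
import numpy as np

def rmsprop_step(params, grad, kappa, eta, gamma=0.9, eps=1e-8):
    """One RMSprop update: kappa accumulates the element-wise (Hadamard)
    square of the gradient, and the parameters are moved by the element-wise
    ratio of the gradient to sqrt(kappa), scaled by the learning rate eta."""
    kappa = gamma * kappa + (1.0 - gamma) * grad**2        # Hadamard square
    params = params - eta * grad / (np.sqrt(kappa) + eps)  # Hadamard division
    return params, kappa
```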
Validation Data
In order to create testing data, simulations were run on an anatomically realistic 3D geometry of the torso, including heart, blood vessels, lungs, and skeletal muscle (Figure 3). Each organ had its own conductivity. Propagating APs were generated using a monodomain reaction-diffusion model with a TNNP membrane model (Ten Tusscher et al., 2004) on an anisotropic heart model at 0.2 mm resolution. To compute φ T , the computed transmembrane current density in the myocardium was projected on an inhomogeneous heart-torso model with an anisotropic skeletal muscle layer at 1 mm resolution, and the potential field φ T was found by solving an anisotropic Laplace problem using a finite-difference method (Potse, 2018). Boundary conditions did not match between the monodomain model and the Laplace equation; this approach leads to slightly different extracellular potentials, but only within a few hundred micrometers from the surface (Potse et al., 2006). All simulations were performed with a recent version of the Propag-5 software (Krause et al., 2012) on a BullX cluster machine.
FIGURE 3 | Heart-torso mesh used for the computation of validation data. The 252-electrode body surface mapping set is shown. Red electrodes mark two locations used in Figure 8.
We had access to the activation times on the epicardium and the endocardium (named reference ATs in the following). Repolarization times were computed from extracellular potentials as the time of highest positive slope during the repolarization phase.
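As a concrete illustration of this repolarization-time criterion, a minimal Python sketch is given below; the search window restricting the detection to the repolarization phase is an illustrative assumption.

```python
import numpy as np

def repolarization_time(phi_e, t, window=(0.3, 1.0)):
    """Repolarization time of one electrogram: instant of steepest positive
    upslope of the extracellular potential within a repolarization window.

    phi_e  : 1-D array, extracellular potential at one heart node
    t      : 1-D array of sample times (ms)
    window : fraction of the recording searched for the T wave (assumed)
    """
    dphi_dt = np.gradient(np.asarray(phi_e, float), np.asarray(t, float))
    lo, hi = (int(f * len(t)) for f in window)
    k = lo + int(np.argmax(dphi_dt[lo:hi]))
    return t[k]
```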
RESULTS
On the same model anatomy, seven different simulations were run: one sinus rhythm (SR) and six different pacing cases. The description of the cases can be found in Table 1. For all the cases, we solved the parameter identification problem on the epicardium-only mesh (Mesh1) and on the epicardium and endocardium mesh (Mesh2). Mesh1 and Mesh2 had 641 and 534 vertices respectively. We will describe the results in detail for two cases: right-ventricular pacing and sinus rhythm.
Epicardial Ventricular Pacing
The reconstructed activation maps in the case of right-ventricular pacing were of the same quality on both meshes. In particular, the late ATs were not well reconstructed in either case (first row, dark blue part in Figure 4). The correlation coefficient (CC) and relative error (RE) between ATs were close for both meshes, about 0.7 and 0.3 respectively. However, Figure 5 shows that a part of the reference ATs between 120 and 160 ms was less well reconstructed with Mesh1 than with Mesh2. For both meshes, some reference ATs between 100 and 150 ms were not well reconstructed (Figure 5, left, black box). These points were located between the two valves, where the reconstruction is more difficult. The pacing site was better localized with Mesh1 (11.4 mm from the actual position, geodesic distance) than with Mesh2 (16 mm), as shown in Figure 12. For Mesh2 we also calculated the CC separately for the points on the epicardium (CC = 0.72) and on the endocardium (CC = 0.77). Including the endocardium did not improve the accuracy on the epicardium compared to the results with the epicardium only.
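The agreement metrics used throughout the results can be computed in a few lines; the exact definition of the relative error is not spelled out in the text, so the Euclidean-norm form used here is an assumption.

```python
import numpy as np

def cc_and_re(reconstructed, reference):
    """Correlation coefficient and relative error between two maps,
    e.g. activation times at matching mesh vertices."""
    rec = np.asarray(reconstructed, float)
    ref = np.asarray(reference, float)
    cc = np.corrcoef(rec, ref)[0, 1]
    re = np.linalg.norm(rec - ref) / np.linalg.norm(ref)  # assumed definition
    return cc, re
```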
The benefit of considering the endocardium was the possibility to look for gradients of depolarization between the endocardium and epicardium. For each point on the epicardium, we selected the closest point on the endocardium and computed the delay in activation. Figure 6 presents box plots of these delays for the 7 cases. Delays existed in the reference ATs (first box), and the delays we obtained were smaller on average. We also obtained large delays (more than 20 ms and up to 135 ms) that were not consistent with the actual ones. On both meshes, the quality of the repolarization maps was lower than that of the activation maps (Figure 4, second row). The CC was slightly better with Mesh1 (0.55 vs. 0.51). This was highlighted on the scatter plot, especially for the earlier RTs (Figure 5, right). Figure 7 shows the evolution in time of the CC between the measured BSPMs and the reconstructed ones. Reconstructed torso potentials were computed from equations (3), (4), and (5) with the optimized parameters and the corresponding mesh, Mesh1 or Mesh2. On both meshes the behavior was similar: at the beginning and the end of the simulation the reconstruction was less accurate. As shown by Figure 8, after 400 ms the measured and reconstructed BSPMs are close to zero, which explains why the CC dropped. On average, the CC was 0.88 with Mesh1 and 0.9 with Mesh2. On both electrodes, the depolarization and repolarization phases were quite well fitted for the two meshes, with only slight differences between the reconstructed BSPMs. We also calculated the root mean square error (RMSE) between the measured BSPMs and the reconstructed ones (Figure 9). Two peaks can be seen, one corresponding to the depolarization phase and the second to the repolarization phase. They were mainly due to the amplitude: the optimized amplitude did not allow fitting the signals on all the electrodes (Figure 8). The RMSE was similar for the two meshes.
Sinus Rhythm
It is well known that the QRS duration is shorter in a sinus beat than in a paced beat. Moreover, there were multiple breakthroughs in the myocardium. For these reasons it was harder to obtain a satisfying reconstruction than in the pacing cases. For both meshes, the reconstructed total activation time was longer than the actual one. The CC and RE were better with the endocardium than without, but still not as good as in the pacing cases (Figure 10, left). For Mesh2 we also calculated the CC separately for the points on the epicardium (CC = 0.64) and on the endocardium (CC = 0.57). Including the endocardium improved the accuracy on the epicardium (CC = 0.64) compared to the results with Mesh1 (CC = 0.49).
We also looked at the delays between endocardium and epicardium (Figure 6). These were similar in the reference ATs for the SR and RV pacing cases (first and third boxes). Since the total activation time (TAT) is smaller in a sinus beat, these gradients were relatively more important with respect to the TAT than in RV pacing. We reconstructed different delays for these two cases, and the delays were not reconstructed as well for SR as for the pacing cases. Indeed, as shown in Figure 11, there was a gradient of activation on the left ventricular free wall that we did not recover. Similarly, there were delays in the activation of the septum that we did not reconstruct.
The CC and RE for the repolarization times were better with Mesh1: 0.68 and 0.1 respectively without the endocardium, versus 0.51 and 0.18 with it (Figure 10, right). Indeed, with the endocardium the range of reconstructed RTs was much larger, from t = 108 ms to t = 628 ms, whereas the actual range was from t = 259 ms to t = 393 ms.
Finally we compared the signals on the torso. As in the pacing case, CC and RMSE evolved in the same way for both meshes, with close values over time. In both cases the CC dropped after 350 ms because reconstructed T waves sometimes ended later than the real ones. In the simulation the heart was almost at rest after 350 ms, which was not the case with our optimized parameters. On average, the CC was 0.83 with Mesh1, and 0.87 with Mesh2.
Sensitivity to the Initialization
In order to test whether the method was sensitive to the initialization, we solved the inverse problem with two other initial triplets. The results presented previously were obtained from the triplet (τ i , τ out,i , τ close,i ) = (60, 6, 150). The second and third triplets were (75, 5, 130) and (75, 6, 15), respectively. The results are presented in Table 2. The three initializations ended with very close results: the CCs for ATs and RTs were in the same range, as were those for the BSPMs. Moreover, for all three triplets, the method gave better accuracy for the ATs with Mesh2, while the RTs were better reconstructed with Mesh1. Changing the initial ATs did not improve the accuracy of the reconstructed ATs. Finally, the reconstructed torso potentials were very close to each other for the three initializations (CC between 0.83 and 0.9); in particular, the QRS complex and the T wave were fitted in the same way.
All the Cases
We present the results for all the cases in Table 3. A box plot representation can be found in the Supplementary Material, as well as activation, repolarization, and APD90 maps for all the cases. In cases 4 and 6, the CC of the ATs was better with Mesh2. In all other cases, the CCs were similar for both meshes. In all cases, solving the inverse problem with Mesh2 gave ATs on the epicardium at least as accurate as with Mesh1. Optimized RTs were better with Mesh2 in only two cases: pacing at the base of the pulmonary vein (case 6) and pacing on the septum (case 7). Figure 6 shows the delays in activation; on average we reconstructed smaller delays in all cases. Concerning the reconstructed BSPMs, the averaged CC and RMSE are given in Table 3. Except in case 7, the averaged CCs were very similar for both meshes and kept very close values over time. We observed the same behavior for the RMSE in all the cases. The lower averaged CC in case 7 with Mesh2 was due to a shorter total activation time: late ATs were not well reconstructed.
A statistical t-test was performed on the CCs for ATs, RTs, and BSPMs. The resulting p-values were 0.5, 0.41, and 0.28 respectively, showing no significant differences between the two meshes.
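One natural way to carry out this comparison is a paired t-test over the per-case CC values, since the same seven cases are solved on both meshes; whether the test used was paired or unpaired is not stated, so the paired form below is an assumption.

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_meshes(cc_mesh1, cc_mesh2):
    """Paired t-test of per-case correlation coefficients obtained with
    Mesh1 and Mesh2 (one CC value per simulated case)."""
    t_stat, p_value = ttest_rel(np.asarray(cc_mesh1, float),
                                np.asarray(cc_mesh2, float))
    return t_stat, p_value
```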
We computed the geodesic distance between the actual pacing site locations and the one given by the inverse solution for cases 1, 4, and 6 (epicardial pacing). For endocardial pacing (cases 3, 5, and 7) we computed the distance between actual and reconstructed breakthrough on the epicardium.
From the optimization, the pacing site (or breakthrough) was identified as the mesh node with the earliest AT (for the breakthrough, the earliest AT on the epicardium). We added a visual validation to exclude irrelevant, isolated early ATs. Results can be found in Figure 12. In most of the cases, the distance was smaller with Mesh1 than with Mesh2. However, except for case 6, the site identified with Mesh2 was a neighbor of the actual site, so the difference in mesh density could explain the smaller distances obtained with Mesh1.
We looked at the AP duration. For the 7 cases the reference APD90 varied between 225 and 285 ms. A difference was clearly visible between the endocardium and the epicardium. We were not able to reproduce this difference with Mesh2. However, APD90 were similar on the epicardium for both meshes. Our method tended to reconstruct maximal APD90s much higher than 285 ms, especially in cases 1, 2, and 7.
DISCUSSION
We presented a new ECGI method designed to recover both the depolarization and the repolarization sequence by solving a parameter identification problem. We hypothesized that this method would work better when both the endocardium and the epicardium are included in the model, since important and physiologically relevant differences in both depolarization and repolarization timing exist between these surfaces. Therefore, we tested the method on two different heart meshes: one a closed surface of the epicardium alone, and the other including both epicardium and endocardium. Tests were performed using in silico data for a sinus beat and six different ventricularly paced beats. Results were very similar for both meshes in 6 cases: all the characteristics we looked at were of the same good quality. The presence of the endocardium slightly improved the ATs on the epicardium. In contrast, for the RTs the effect of including the endocardium was variable.
FIGURE 10 | Scatter plot of the ATs (left) and RTs (right) for the SR case. For each point, the x coordinate is the reference AT (resp. RT) and the y coordinate is the corresponding reconstructed AT (resp. RT). The dashed lines represent the linear fitting.
In two other cases (sinus rhythm, case 2, and septal pacing case 7), the reconstruction of AT with Mesh1 was poor. In the sinus rhythm case, inclusion of the endocardium (Mesh2) improved the reconstruction substantially. This was the only case where endo-epicardial gradients, with respect to the total activation time, were significant. In all cases, the repolarization times were better reconstructed with the epicardium only.
We showed that our method was not sensitive to the initialization. In particular, the choice of τ out and τ close did not impact the reconstruction of the ATs, since these two parameters play a role only during repolarization. Similarly, imposing global instead of distributed parameters would not worsen the AT reconstruction. The quality of the estimation of the Mitchell-Schaeffer parameters can only be assessed through the RT and APD90 reconstructions. The CCs for RTs were smaller than those for ATs, which may suggest that the reconstruction of τ out and τ close was less precise than the AT reconstruction. The APD90 maps confirmed that, in a same case, we can both overestimate and underestimate the APD90 over large areas.
In general, our method underestimated AT delays between endocardium and epicardium (Figure 6). A possible explanation is that from the torso surface the two heart surfaces are too close to be seen separately. The endocardial activity is masked by the epicardial one, even in the case of endocardial pacing. The problems we solved, with Mesh1 or Mesh2, were actually the same; we ended with similar results. It may also explain why we did not reconstruct APD differences between the epicardium and the endocardium. Another possible explanation is the difference in density between the two meshes. We chose to have about the same number of nodes in each mesh, so that the difference in the number of parameters to identify could not alone explain the results. However, it implied that Mesh2 was coarser than Mesh1. A test was made on a refined mesh of Mesh2 (Figure 2, right). This third mesh had 1328 nodes and a density similar to the one of Mesh1. We solved the inverse problem on this mesh for the ventricular pacing case 1. The results we obtained were very similar to those with Mesh2: the CC for ATs was 0.79 (0.77 for Mesh2) and the average CC for the BSPM was 0.86 (0.9 for Mesh2). This test may suggest that the density of the mesh does not have an impact on the results.
We solved the inverse problem with a factor A that was constant over the whole heart. However, this factor (proportional to the amplitude of the AP) may not be constant, e.g., in the case of ischemia. We attempted to use a distributed factor, which is more relevant from a physiological point of view; in that case the method either did not converge or converged to a mixture of positive and negative amplitudes.
So far we have not added noise to the testing data. Even though the models used to create the data and to solve the inverse problem are different, adding noise would be helpful to assess the robustness of the method.
Validation data were created from a volumetric heart mesh with a much higher density than Mesh1 and Mesh2. The reference values (AT, RT) were the values on the mesh nodes. In contrast, the inverse problem on a surface leads to values that contain information averaged over a considerable volume. This may explain why the delays between reconstructed ATs were smaller than the delays between the reference ATs.
Comparison With Other Methods
Currently, most ECGI methods are based on a Laplace problem for the potential in the torso. Using the MFS (Wang and Rudy, 2006) or boundary-element models (Sapp et al., 2012; Bear et al., 2018), these methods reconstruct instantaneous potential patterns on the surface of the heart. They use Tikhonov or similar forms of regularization to counter the ill-posedness of this problem. This form of regularization leads to smooth solutions for the potential distribution, while the actual pattern, especially in the case of an activation wavefront, is characterized by steep gradients. This leads to unrealistic solutions for the activation pattern, featuring large areas that appear to be activated nearly simultaneously, separated by artefactual lines of conduction block (Duchateau et al., 2017; Ravon et al., 2017). Various methods have been proposed to counter this effect, e.g., by reconstructing AT maps from local delays estimated from the whole signal morphology (Duchateau et al., 2017) or by simply smoothing the activation map (Bear et al., 2018). The latter method is claimed to remove the artefactual block lines without wiping out the true ones, although this has not yet been validated. The method that we proposed here does not require such postprocessing. It imposes a predefined action potential waveform, parameterized in terms of AT and the parameters of the Mitchell-Schaeffer model, and does not require further regularization. We have previously shown that our method leads to more realistic activation maps than the MFS (Ravon et al., 2017). In the larger sample of this study we also did not observe the clustering of ATs that is typical of MFS methods.
A similar parameter optimization approach, also in terms of endocardial and epicardial AT and RT, was used by van Dam et al. (2009). In contrast to our method, it still relied on a (Laplacian) regularization of the AT field, and ahead of the parameter estimation phase it performed an initial estimate based on an exhaustive search. On the other hand, it used a more realistic volume conductor model that took the boundedness and inhomogeneity of the torso into account. Unlike for our method, the authors showed that the choice of the initial estimates had an impact on the quality of the inverse procedure. The importance of initialization had also been reported by Potyagaylo et al. (2016) and Erem et al. (2014).
Others have worked on the impact of the endocardium in the case of atrial fibrillation (Schuler et al., 2017). Considering that the atria are very thin, they imposed similar TMP values on the epicardium and the endocardium. Given the greater thickness of the ventricles, this hypothesis would not be suitable in our study. In a previous study (Potyagaylo et al., 2014), the same group proposed a local regularization of the two surfaces to localize ectopic beats, in which the regularization parameter can differ between the endocardium and the epicardium. It was a way to better distinguish endocardial events from epicardial ones. This approach might be applicable in our case with two different factors A.
Conclusion
Our parameter optimization method reconstructs accurate activation times and, to a lesser extent, repolarization times. In some cases inclusion of the endocardium in the solution helps to improve the reconstruction of activation times, while in general it does not improve the reconstruction of repolarization times.
AUTHOR CONTRIBUTIONS
All authors have made substantial contributions to this study. GR designed the study, implemented the algorithms, analyzed, and interpreted the results, and drafted the manuscript. YC and RD helped conceive the study, provided feedback about the implementation of the methods and the interpretation of the results, and revised the manuscript. MP provided the validation data and feedback about the results, and revised the manuscript.
FUNDING
This study received financial support from the French Government as part of the Investments for the Future program managed by the National Research Agency (ANR), Grant reference ANR-10-IAHU-04. This work was granted access to the HPC resources of TGCC under the allocation x2016037379 made by GENCI.
Epitaxial growth and magnetic properties of Mn5Ge3/Ge and Mn5Ge3Cx/Ge heterostructures for spintronic applications
The development of active spintronic devices, such as spin-transistors and spin-diodes, calls for new materials that are able to efficiently inject spin-polarized current into group-IV semiconductors (Ge and Si). In this paper we review recent achievements in the synthesis and the magnetic properties of Mn5Ge3/Ge and carbon-doped Mn5Ge3/Ge heterostructures. We show that high-crystalline-quality, threading-dislocation-free Mn5Ge3 films can be epitaxially grown on Ge(111) substrates despite the existence of a misfit as high as 3.7% between the two materials. We have investigated the effect of carbon doping in epitaxial Mn5Ge3 films and show that incorporation of carbon into interstitial sites of Mn5Ge3 allows not only an enhancement of the magnetic properties but also an increase of the thermal stability of Mn5Ge3. Finally, toward the perspective of realizing Ge/Mn5Ge3/Ge multilayers for spintronic applications, we shall show how carbon can be used to prevent Mn out-diffusion from Mn5Ge3 during Ge overgrowth on top of Mn5Ge3/Ge heterostructures. The above results open the route to developing spintronic devices based on Mn5Ge3Cx/Ge heterostructures using a Schottky contact, without the need for an oxide tunnel barrier at the interface.
Introduction
Spintronics is an emerging field, and one of the key requirements for its development rests on obtaining spin injectors that not only have a high Curie temperature (T C ) and a high spin polarization but also are compatible with the existing Si complementary metal-oxide semiconductor (CMOS) technology. Silicon- or germanium-based diluted ferromagnetic semiconductors (DMS) would be ideal candidates since they exhibit a natural impedance match to group-IV semiconductors. Unfortunately, Si 1−x Mn x alloys are not ferromagnetic and the use of Ge 1−x Mn x alloys would be hampered by their low T C , which, in most cases, does not exceed 150 K [1]. As a result, to make advances in applications, much effort has been devoted in recent years to epitaxial ferromagnetic compounds that can be directly grown on Si and Ge substrates, such as Heusler alloys [2] or Mn 5 Ge 3 [3][4][5]. Besides the fact that these compounds are fully compatible with the mainstream Si-based technology, they open the possibility of spin injection via the tunnel effect through the Schottky barrier at the interface. Among these compounds, Mn 5 Ge 3 is of particular interest since the bulk Mn 5 Ge 3 compound is intermetallic and ferromagnetic at room temperature [6], and theoretical calculations have predicted efficient spin injection along its c-axis [7], thus opening the possibility of spin injection without an external applied magnetic field, i.e. in remanent magnetic states. In addition, a spin polarization up to 42% has been demonstrated from Andreev reflection [8].
However, the Mn 5 Ge 3 compound exhibits some drawbacks that need to be overcome for device applications. (i) Firstly, according to the Ge-Mn bulk phase diagram, there are four phases at standard temperature and pressure conditions: Mn 3 Ge, Mn 5 Ge 2 , Mn 5 Ge 3 and Mn 11 Ge 8 [6,9]. The first two phases are ferrimagnetic, Mn 5 Ge 3 is the unique ferromagnetic phase and Mn 11 Ge 8 is antiferromagnetic. Thus, starting from a system consisting of a thin Mn layer deposited on a Ge substrate, when thermal annealing is carried out to activate Ge/Mn interdiffusion, the most stable phase, which should be formed at high annealing temperatures, is the antiferromagnetic Mn 11 Ge 8 , which has the highest Ge concentration. However, Mn 5 Ge 3 is the only phase whose hexagonal structure matches the threefold symmetry of the Ge(111) plane; it can therefore be stabilized on Ge(111) by epitaxy. (ii) Secondly, the Curie temperature of Mn 5 Ge 3 is limited to about room temperature (∼296 K), while for device applications it is desirable that spin injectors have a magnetic order well above room temperature. Here, we shall show that incorporation of carbon atoms into interstitial sites of the Mn 5 Ge 3 lattice greatly enhances the magnetic ordering of Mn 5 Ge 3 films. Another important feature of a material for device applications is its thermal stability, a critical parameter for the integration of Mn 5 Ge 3 into CMOS technology. In general, materials must remain stable up to temperatures higher than 700 °C since, in the device fabrication process, numerous thermal anneals are needed, in particular after dopant implantation. We have therefore investigated the stability of Mn 5 Ge 3 and carbon-doped Mn 5 Ge 3 layers during post-growth thermal annealing. (iii) Thirdly, we have recently shown that Mn segregation is a central problem that needs to be handled in order to obtain high-quality Ge/Mn 5 Ge 3 /Ge stacked layers [10,11], which are the basis for numerous applications such as spin valves or the giant magnetoresistance effect. We shall summarize here the use of carbon to suppress Mn segregation during Ge overgrowth on top of Mn 5 Ge 3 /Ge heterostructures.
Experimental
Mn 5 Ge 3 and Mn 5 Ge 3 C x films were grown in a standard molecular-beam epitaxy (MBE) system with a base pressure better than 3 × 10 −10 mbar. The growth system is equipped with reflection high-energy electron diffraction (RHEED) to monitor the film growth mode and Auger spectroscopy to control the film chemical composition. Mn 5 Ge 3 and Mn 5 Ge 3 C x were grown on Ge(111) substrates using the solid phase epitaxy (SPE) technique, which consists of Mn deposition or co-deposition of Mn and C at room temperature followed by thermal annealing at a temperature of ∼450 °C to activate interdiffusion and phase nucleation. Mn and Ge evaporations were carried out using standard effusion cells; the Mn flux, measured with a quartz-crystal microbalance, is ∼2 nm min −1 and the Ge flux, deduced from RHEED intensity oscillations, is in the range of ∼2-5 nm min −1 . Carbon evaporation was carried out using a sublimation source of high-purity pyrolytic graphite; the carbon concentration was estimated using the change of the Si(001) surface reconstruction from (2 × 1) to c(4 × 4) upon adsorption of a carbon submonolayer.
The cleaning of the Ge surfaces was carried out using the HF-last dip method similar to that used for Si substrates [12] to minimize the native oxide. The second step was an in situ thermal cleaning, which consists of outgassing the sample for several hours at 450 °C followed by flash annealing at ∼650 °C to remove the residual Ge surface oxide, which can be formed during sample transfer into the high-vacuum system. After this step, the Ge(111) surface generally exhibits a relatively well-developed c(2 × 4) reconstruction.
Structural characterizations of the grown films were performed by means of high-resolution transmission electron microscopy (HR-TEM) using a JEOL 3010 microscope operating at 300 kV with a spatial resolution of 1.7 Å. Complementary structural characterizations were carried out by means of x-ray diffraction (XRD) using a diffractometer (Philips X'pert MPD) equipped with a copper target for Cu-K α1 radiation (λ = 1.540 59 Å). The angular resolution is ∼0.01°.
The magnetic properties of the films were probed using a superconducting quantum interference device (SQUID) magnetometer with a magnetic field applied both in-plane and out-of-plane of the sample surface. The diamagnetic contribution arising from Ge was subtracted, leaving only the magnetic signal coming from Mn 5 Ge 3 and Mn 5 Ge 3 C x films.
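As an illustration of this background correction, the sketch below estimates the (linear-in-field) diamagnetic slope from the saturated high-field part of the positive branch of the M(H) curve and subtracts it; the 0.8 field threshold and the use of the positive branch only are illustrative assumptions rather than details from the measurement protocol.

```python
import numpy as np

def subtract_diamagnetic_background(H, M, h_fit=0.8):
    """Remove the linear diamagnetic contribution of the Ge substrate
    from a measured M(H) curve of a film that saturates at high field.

    H, M  : 1-D arrays, applied field (Oe) and measured moment (emu)
    h_fit : fraction of the maximum field above which the film is
            assumed saturated (illustrative choice)
    """
    H = np.asarray(H, float)
    M = np.asarray(M, float)
    tail = H > h_fit * H.max()                  # saturated, positive branch
    slope = np.polyfit(H[tail], M[tail], 1)[0]  # diamagnetic susceptibility
    return M - slope * H
```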
Results and discussion
This section will be divided into four subsections in which we present results concerning the epitaxial growth as well as the magnetic properties of Mn 5 Ge 3 grown on Ge(111) substrates, the effect of carbon doping on the magnetic properties and thermal stability of Mn 5 Ge 3 films and finally the Mn segregation and the method to suppress it.
Epitaxial growth and magnetic anisotropy of Mn 5 Ge 3 /Ge(111) heterostructures
In a conventional MBE, the growth of an epitaxial film can proceed via two main techniques: solid phase epitaxy (SPE) and reactive deposition epitaxy (RDE). SPE consists of deposition or co-deposition of materials at room temperature, followed by thermal anneals in order to activate diffusion and/or interdiffusion of species. This method therefore involves two successive processes: first, diffusion and/or interdiffusion occur, and then phase nucleation takes place, starting from the interface. In RDE, materials are deposited or co-deposited on the substrate surface, which is kept at high temperature. Depending on the substrate temperature, different phases can be formed, but phase nucleation is the main process, which occurs at the growing surface as the growth advances. Since Mn 5 Ge 3 is not the most stable phase and has a hexagonal structure similar to that of Ge(111), the SPE technique, which allows Mn 5 Ge 3 films to easily adopt the substrate symmetry from the interface, naturally appears more appropriate than RDE to form epitaxial Mn 5 Ge 3 films. Indeed, our unpublished results show that when using the RDE technique, a fraction of the Mn 11 Ge 8 phase may coexist with Mn 5 Ge 3 . Starting from a thin Mn film deposited on a Ge(111) substrate at room temperature, we have shown that Mn 5 Ge 3 is the unique epitaxial phase that can be formed upon annealing in the temperature range of 430-600 °C [4,5]. The Mn 11 Ge 8 phase, which is the richest in Ge, is formed only when annealing at temperatures higher than 650 °C [13]. Below 650 °C, the coexistence of two phases, Mn 5 Ge 3 and Mn 11 Ge 8 , may be observed, but only when the initial Mn film is thick enough (>210 nm) [14] or when films are grown on an amorphous oxide substrate [15].
Figures 1(a) and (b) show typical RHEED patterns observed during epitaxial growth of Mn 5 Ge 3 on Ge(111). Starting from a clean (2 × 4) reconstructed Ge(111) surface, the RHEED patterns indicate that Mn 5 Ge 3 films display a hexagonal symmetry similar to that of the substrate. By combining RHEED patterns with TEM backside electron diffraction analysis, it is found that the hexagonal basal (0001) plane of Mn 5 Ge 3 is parallel to the (111) plane of Ge. The epitaxial relationship thus has the form Mn 5 Ge 3 (0001) ∥ Ge(111). The Mn 5 Ge 3 surface is characterized by a ( √ 3 × √ 3)R30° RHEED reconstruction, defined by the observation of 1 × 1 streaks along the [1-10] azimuth and additional 1/3- and 2/3-ordered streaks along the other azimuth. Interestingly, long streaks are observed in the RHEED patterns for film thicknesses ranging from a few nm up to about 160 nm, indicating that the film surface is highly smooth.
A high-resolution TEM image taken near the interface region of a 25 nm thick Mn 5 Ge 3 film is shown in figure 2. A systematic study of the magnetic properties of Mn 5 Ge 3 has been carried out as a function of the film thickness, ranging from 5 up to 160 nm. We show in figure 3 some representative hysteresis (M-H) loops measured with in-plane and out-of-plane magnetic fields. The insets show a zoom around the positive saturation field of the out-of-plane configuration. The parallel and perpendicular configurations for thick samples apparently lead to similar magnetic reversal; this is surprising since we expect an easy magnetization axis along the c-axis, which is perpendicular to the sample plane. However, the in-plane M-H curves reveal a steady change in the magnetic behavior: for samples thinner than 10 nm, the hysteresis loop exhibits a square shape that unambiguously indicates that the magnetization easy axis lies in-plane; for thicker samples, the M-H curves become increasingly canted as the saturation field increases with thickness, but a hysteresis is still visible around zero field.
In the out-of-plane configuration, at first sight, the hysteresis loops appear similar throughout the range of studied thicknesses: little hysteresis is present and the saturation fields are higher than those observed in the parallel configuration. However, when looking at the variation of the perpendicular saturation field versus the film thickness, two regimes can clearly be distinguished: firstly, below thicknesses of about 20 nm, the perpendicular saturation field increases rapidly with the film thickness; secondly, above this thickness, it is independent of the film thickness and fluctuates around 10 000 ± 1000 Oe. More importantly, the general shape of the hysteresis curves changes. Above a threshold thickness, a singularity appears in all the out-of-plane M-H curves around the saturation field, as demonstrated in the inset of figure 3(c) for a 25 nm thick sample. When describing the M-H curve from positive saturation toward lower field values, a characteristic opening of the hysteresis loop appears around the saturation field over a narrow range of fields. As the field decreases, the two M-H branches return to a similar field dependence and this singularity disappears. This feature is not present in the hysteresis loops of layers thinner than 10 nm, where the magnetization rises linearly with the applied field and almost reversibly. In [16], we provided a detailed analysis of the evolution of the Mn 5 Ge 3 magnetic properties versus the film thickness and showed that the reorientation of the magnetization from in-plane to out-of-plane occurs for a film thickness lying between 10 and 25 nm. This result is strongly supported by theoretical calculations based on an improved version of Kittel's model to describe the magnetic behavior of domains in uniaxial thin films. Of particular interest, the size of the magnetic domains in Mn 5 Ge 3 is shown to be considerably smaller than in any other known magnetic system and can, in addition, be tailored by the film thickness.
Enhancement of the Curie temperature in carbon-doped Mn 5 Ge 3 films
The enhancement of the magnetic properties of polycrystalline Mn 5 Ge 3 films induced by carbon doping was first demonstrated by Gajdzik et al [17] using a sputtering deposition technique. Later, Slipukhina et al [18] calculated the exchange-coupling constants in Mn 5 Ge 3 C alloys and showed that the enhanced ferromagnetic stability in the alloy mainly results from interactions between Mn atoms mediated by carbon incorporated in the octahedral voids of the hexagonal Mn 5 Ge 3 cell. To incorporate as many carbon atoms as possible into interstitial sites of the Mn 5 Ge 3 cell, we have implemented the SPE technique in order to promote carbon diffusion. Indeed, since the atomic radius of carbon is almost twice as small as that of Mn and Ge, it follows that in a growth process where carbon atoms can diffuse, it becomes easier for them to be incorporated into interstitial sites.
The effects of the carbon concentration on the magnetic properties of Mn 5 Ge 3 C x films are depicted in figure 4. Figure 4(a) displays hysteresis loops of C-free Mn 5 Ge 3 and Mn 5 Ge 3 C x films with various carbon concentrations, measured by SQUID at 300 K with a magnetic field of 0.5 T applied in the film plane. At 300 K the hysteresis loop of the C-free Mn 5 Ge 3 film exhibits a paramagnetic character, as expected, because C-free Mn 5 Ge 3 is ferromagnetic only up to 296 K. For C-doped Mn 5 Ge 3 , the hysteresis loops measured at 300 K clearly indicate that the materials are ferromagnetic over the whole range of carbon concentrations. It is worth noting that while the measurements at 300 K confirm that at x = 0.9 the film remains ferromagnetic, the hysteresis loops measured at 5 K (not shown here) display an oblate shape and show a great reduction of the net magnetization [19,20]. The temperature dependence of the normalized magnetization of Mn 5 Ge 3 C x with various carbon concentrations is presented in figure 4(b).
For comparison, we also show the curve of a C-free Mn 5 Ge 3 film of the same thickness. The figure clearly indicates that the addition of carbon strongly enhances the magnetization of Mn 5 Ge 3 , and this enhancement increases continuously with the carbon concentration up to 0.7. The T C , measured at the inflection point of the M versus T curve, reaches a value of ∼430 K for x = 0.6 and 0.7. We note that if we determine the T C from the extrapolation of the M(T ) data to M(T C ) = 0, the Mn 5 Ge 3 C 0.6 curve yields a value of up to 460 K.
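The inflection-point criterion for T C can be applied directly to digitized M(T ) data; the sketch below assumes a monotonically sampled temperature axis and a magnetization that decreases through the transition.

```python
import numpy as np

def curie_temperature(T, M):
    """Estimate T_C as the inflection point of M(T): the temperature at
    which dM/dT is most negative (steepest drop of the magnetization)."""
    T = np.asarray(T, float)
    M = np.asarray(M, float)
    dM_dT = np.gradient(M, T)
    return T[int(np.argmin(dM_dT))]
```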
We report in figure 5(a) the variations of T C and of the saturation magnetization (M S ) as a function of the carbon concentration. The variation of T C with x occurs in two distinct regions: T C first increases linearly with x up to 0.6-0.7, then falls for larger values of x. Note that this behavior is different from that previously reported for polycrystalline films [17]. The evolution of M S , measured at 5 K, is well correlated with the T C variation. The saturation magnetization is found to decrease linearly with x, and at x = 0.6-0.7 an abrupt change in the slope is observed. We note that in transition metals and their alloys it is generally observed that the saturation magnetization increases with increasing Curie temperature. However, as previously mentioned, in Mn 5 Ge 3 C x the ferromagnetic enhancement results from Mn II -Mn II interactions mediated by carbon atoms inserted into the voids of the Mn octahedra of the hexagonal structure. Consequently, carbon incorporation into Mn 5 Ge 3 changes the 3d states of the neighboring Mn atoms, and the hybridization between the C 2p and Mn II 3d states leads to a decrease of the saturation magnetization as well as of the magnetic moment on Mn II .
These results indicate that the saturation concentration of carbon that can be inserted into interstitial sites of the Mn 5 Ge 3 lattice is around x ∼ 0.6-0.7. From measurements of the magnetic moment at saturation and the film thickness deduced from TEM images, an average saturated Mn moment of ∼1.9 µ B /Mn is deduced for x = 0.7. This value is close to the one obtained in C-implanted films [21] and to theoretical calculations [18], but deviates from the values obtained for sputtered films, where the highest moment observed was 1.1 µ B /Mn [17].
To understand the decrease of T C for x ≥ 0.7, we show in figure 5(b) a typical TEM image of a sample corresponding to x = 0.7. It is worth noting that for x ≤ 0.6, TEM images of carbon-doped Mn 5 Ge 3 C x alloys are similar to that of carbon-free Mn 5 Ge 3 shown in figure 2(a), in which the film is of high crystalline quality and the interface is atomically smooth. The presence of clusters or precipitates in the grown film can be clearly seen in figure 5(b). To identify the nature of the clusters formed for x ≥ 0.7, thermodynamic calculations of the formation energy of carbon defects in Mn 5 Ge 3 were carried out; they show that the Mn 5 Ge 3 C 0.5 alloy is a stable ternary alloy and that additional carbon atoms cannot be inserted into interstitial sites but rather form clusters of manganese carbides (MnC) [19]. Two main phases, Mn 7 C 3 and/or Mn 5 C 2 , appear to be energetically favorable when the carbon concentration becomes higher than 0.5.
Thermal stability of Mn 5 Ge 3 and carbon-doped Mn 5 Ge 3 films
The thermal stability of an active material is one of the crucial parameters that need to be determined and controlled for its integration into the device fabrication process. We display in figure 6(a) the change in magnetization of a 100 nm thick Mn 5 Ge 3 film after annealing at 650 °C. Before annealing, the hysteresis loops of Mn 5 Ge 3 exhibit the expected ferromagnetic behavior. The saturation magnetization (M S ) is ∼1300 emu cm −3 and the average magnetic moment per Mn atom (µ s ) is ∼3.2 µ B . These values are close to those reported in the literature for thin films [3,4] and bulk materials [22]. After annealing at 650 °C, M S is found to decrease to 6 emu cm −3 and the remanent magnetization drops from 125 to 1 emu cm −3 .
To get a better insight into the origin of the above drastic change in the magnetic properties of Mn 5 Ge 3 upon annealing, we present in figure 6(b) a comparison of the magnetization versus temperature before and after annealing. The as-grown sample clearly displays a ferromagnetic behavior characteristic of Mn 5 Ge 3 ; the T C , measured at the inflection point of the M(T ) curve, is ∼296 K. After annealing at 650 °C, the M(T ) curve reveals two distinct transitions: an antiferromagnetic/ferromagnetic transition at ∼150 K and a ferromagnetic/paramagnetic transition at ∼270 K. Such a magnetic signature can be unambiguously attributed to the antiferromagnetic Mn 11 Ge 8 compound [23].
Thus, the above results confirm that Mn 5 Ge 3 , which is not a stable phase, can be stabilized on Ge(111) by epitaxy owing to the similarity of its hexagonal structure to that of Ge(111). As dictated by thermodynamics, post-growth thermal annealing of the films should bring the system toward a more equilibrium state, i.e. toward the formation of Mn 11 Ge 8 , which is the most stable phase in the Mn/Ge phase diagram.
Since the Ge concentration in Mn 11 Ge 8 is ∼42% compared to ∼37.5% in Mn 5 Ge 3 , such a phase transformation should require long-range diffusion of Ge from the substrate. Regarding the thermal stability of C-doped Mn 5 Ge 3 , as mentioned above, doping Mn 5 Ge 3 with carbon allows increasing its T C , and such an enhancement has been attributed to Mn II -Mn II interactions mediated by carbon atoms [18]. Figure 7(a) shows the magnetization enhancement induced by carbon doping (black curve corresponding to Mn 5 Ge 3 and green one to Mn 5 Ge 3 C 0.6 ). It can be seen that the Mn 5 Ge 3 C 0.6 curve exhibits a T C up to ∼460 K, compared to 296 K for Mn 5 Ge 3 . Figure 7(b) shows the evolution of the hysteresis loops of the Mn 5 Ge 3 C 0.6 film upon annealing at 750 and 850 °C. The most interesting feature is that the carbon-doped Mn 5 Ge 3 layers remain ferromagnetic even after annealing at 850 °C. The hysteresis loops conserve their squareness up to 750 °C, beyond which an increase of the coercive field occurs, which can probably be attributed to the formation of point defects at high annealing temperatures. Thus, the above results provide evidence that inserting carbon into interstitial sites of Mn 5 Ge 3 greatly improves its thermal stability. Another interesting feature that can be seen in figure 7(a) is a reversible transition of T C upon carbon doping and annealing. Doping Mn 5 Ge 3 with carbon enhances T C from 296 up to 460 K, which is then found to decrease to 350 and 307 K after annealing at 750 °C (red curve) and 850 °C (blue curve), respectively. Other magnetic parameters, such as M sat and µ s , also exhibit a similar reversible behavior. For example, in Mn 5 Ge 3 the measured value of µ s is ∼3.2 µ B , which decreases to ∼1.9 µ B in Mn 5 Ge 3 C 0.6 . Upon annealing, µ s progressively increases with increasing annealing temperature and almost recovers the initial value of ∼3.2 µ B after annealing at 850 °C. Such results imply that the carbon atoms, which had been incorporated into interstitial sites of Mn 5 Ge 3 , are progressively extracted during annealing.
Mn segregation and its suppression induced by carbon adsorption
In numerous spintronic applications, such as spin valves or giant magnetoresistance (GMR) superlattices, high-quality Ge overgrowth on top of Mn 5 Ge 3 films is needed. One of the main difficulties inhibiting the realization of Ge/Mn 5 Ge 3 heterostructures with abrupt interfaces is probably Mn segregation. As mentioned above, since epitaxial Mn 5 Ge 3 films give rise to additional RHEED streaks compared to the Ge surface, we have used RHEED to monitor, in real time and in situ, the Mn segregation process by measuring the intensity evolution of these additional streaks versus the Ge deposition time or thickness [10]. A typical result of intensity measurements of a 2/3 streak at three different substrate temperatures, 200, 450 and 550 °C, is reported in figure 8.
Two distinct behaviors on the scale of the deposition time of the Ge overlayers are clearly observed. At 200 °C, the intensity of the 2/3 streak vanishes at around 95 s, while at 450 and 550 °C it persists to much longer deposition times and completely disappears only after a Ge deposition of more than 800 s. The corresponding Ge thickness is ∼8 nm at 200 °C and larger than 70 nm at 450 and 550 °C. It is worth noting that surface segregation of an element during the growth of heterostructures or multilayers has been observed in many systems, including III-V materials [24] and Si on SiGe [25]. However, at usual growth temperatures (∼600 °C), the segregation length does not, in general, exceed a dozen nanometers.
To understand the different behaviors of Ge overgrowth observed at low and high temperatures described above, we systematically performed TEM analyses of samples grown at 450 and 550 °C. Figure 9(a) displays a typical cross-sectional TEM image taken after the deposition of 60 nm of Ge at 450 °C. To see the evolution of the different layers in the final structure more clearly, we show in figure 9(b) a schematic of the designed sample in which the thickness of each layer is indicated.
Contrary to the designed structure, the TEM image reveals that the sample surface is terminated by a Mn 5 Ge 3 layer and no trace of Ge overlayers is detectable. Indeed, the observation in this TEM image of well-defined atomic rows, all aligned perpendicularly to the interface, can be unambiguously attributed to the hexagonal (0001) plane of Mn 5 Ge 3 , which is parallel to the (111) plane of Ge [4,19]. Thus, TEM analyses indicate that upon Ge deposition at 450 °C, it is not a Ge layer that is progressively formed on the surface as expected; instead, the deposited Ge reacts with Mn to form the Mn 5 Ge 3 compound. Interestingly, the Mn 5 Ge 3 surface layer has a thickness of ∼20 nm, slightly thinner than the initial thickness of the starting Mn 5 Ge 3 layer. It can also be seen that the whole Mn 5 Ge 3 layer continuously floats on the surface as the Ge deposition progresses, leaving behind newly formed Ge layers. In other words, the Mn 5 Ge 3 film behaves here as a surfactant, which floats up with the growing surface, similar to the case of a monolayer-thick Mn film adsorbed on Ge(001) [26]. An interesting feature that can be seen from the image is the presence of Mn-rich clusters embedded inside the lower Ge layers. We note that in standard Mn 5 Ge 3 /Ge heterostructures, Mn-Ge clusters are never observed in the Ge substrate after Mn 5 Ge 3 growth at 450 °C [4,5,16] or even after post-annealing up to 650 °C [13]. The formation of such clusters can be attributed to the fact that, during Ge deposition on Mn 5 Ge 3 at 450 °C, the initial epitaxial Mn 5 Ge 3 film, because of its metastable state, is destabilized even at its interface with the substrate, which is 25 nm from the surface. Consequently, a part of the Mn detaches around the interface region and diffuses into the substrate, resulting in the formation of Mn-rich clusters. This explains why the thickness of the final floating Mn 5 Ge 3 film is slightly thinner than the initial one.
To prevent out-diffusion of an element, it is common to use a diffusion barrier; such materials must be not only nonreactive but also able to adhere strongly to the adjacent materials. In electronic or memory devices, multilayers of metals, WN 2 , RuTiN or RuTiO are usually used to prevent out-diffusion of dopants (B and P) or oxidation of devices [27][28][29]. Such materials are, however, difficult to insert in a heterostructure where epitaxial growth is needed. Since any segregation process should involve rapid and long-range diffusion of elements, which in general occurs via interstitial diffusion, our approach consists of filling the interstitial sites of Mn 5 Ge 3 prior to Ge deposition. For this filling we chose carbon because of its small atomic radius. The principle of the experiments is described in figure 10(a): it consists of depositing a few carbon MLs on Mn 5 Ge 3 prior to Ge growth.
The amount of adsorbed carbon is an important parameter, and it was chosen according to the change in the RHEED patterns. Upon carbon adsorption, the √ 3 RHEED pattern characteristic of Mn 5 Ge 3 remains almost unchanged up to a carbon coverage of 4 ML, beyond which only a faint pattern is observed. Since we seek a high degree of filling of the Mn 5 Ge 3 interstitial sites, a carbon amount of 4 ML was chosen. It is worth noting that carbon adsorption at room temperature or at 250 °C produces almost identical results. Figure 10(b) shows a typical structure of a sample containing 4 ML of carbon adsorbed at 250 °C.
Even if the Ge overlayers are far from perfect, the image clearly reveals that the Ge/Mn 5 Ge 3 interface has become much smoother. A much smaller Mn segregation length is confirmed by RHEED analyses, which reveal that a c(2 × 4) reconstruction characteristic of a clean Ge(111) surface quickly appears after the deposition of only a few Ge MLs. The suppression of Mn out-diffusion is also confirmed by Auger measurements, which reveal that the Mn transitions, located at 537, 584 and 631 eV, almost disappear after deposition of 3 nm of Ge, whereas on C-free samples the Mn signals persist for Ge thicknesses larger than 10 nm. Shown in the inset is an atomically resolved image of the interface region. Clearly, no Mn-rich clusters are present and, more importantly, well-ordered (111) planes of the Ge overlayers are found to be perpendicular to the atomic rows produced by the Mn arrangement along the [0001] direction of the underlying Mn 5 Ge 3 . However, it is worth noting that when carbon adsorption on the Mn 5 Ge 3 surface is carried out at temperatures of 450 °C or higher, manganese carbides can be formed and the resulting Ge overlayer changes its orientation from (001) to (111), which has a higher surface energy [30].
Conclusion
To summarize, we have investigated the epitaxial growth of Mn 5 Ge 3 and carbon-doped Mn 5 Ge 3 films on Ge(111) and evidenced numerous features which may make these materials highly promising for the development of spintronic devices compatible with group-IV semiconductors. High crystalline quality Mn 5 Ge 3 films can be obtained despite the existence of a lattice mismatch as high as 3.7%. Of particular interest, the epitaxial Mn 5 Ge 3 film is strain relaxed but displays an extremely low density of threading dislocations. This feature can be attributed to the high value of the elastic modulus of Mn 5 Ge 3 (110 GPa, compared to 77.2 GPa for Ge), which allows Mn 5 Ge 3 films to be easily elastically deformed on Ge. We have shown that Mn 5 Ge 3 is not a stable phase but can be stabilized on Ge(111) thanks to the similarity of its crystal structure to that of Ge(111). Upon annealing at 650 °C, Mn 5 Ge 3 transforms into the antiferromagnetic Mn 11 Ge 8 phase. The reorientation of the magnetization in Mn 5 Ge 3 films from in-plane to out-of-plane is found to occur at film thicknesses between 10 and 25 nm, much smaller than in other uniaxial thin films.
In an effort to insert as much carbon as possible into the octahedral voids of the hexagonal Mn 5 Ge 3 lattice, we have implemented the solid phase epitaxy technique, which allowed us to insert carbon up to a saturation concentration of ∼0.6-0.7. As the carbon concentration increases from 0 to the saturation value, the Curie temperature of the alloys increases linearly with x, reaching a value as high as 460 K. When the carbon concentration exceeds the saturation value, the formation of manganese carbides becomes thermodynamically more favorable. Doping Mn 5 Ge 3 with carbon also greatly enhances its thermal stability: the material remains ferromagnetic even after annealing at temperatures as high as 850 °C. We have also shown that the realization of Ge/Mn 5 Ge 3 multilayers is hampered by Mn segregation toward the growing Ge surface, and that adsorption of a few monolayers of carbon on top of the Mn 5 Ge 3 surface prior to Ge deposition greatly reduces this segregation.
Protein phosphatase 4 is involved in tumor necrosis factor-alpha-induced activation of c-Jun N-terminal kinase.
Protein phosphatase 4 (PP4, previously named protein phosphatase X (PPX)), a PP2A-related serine/threonine phosphatase, has been shown to be involved in essential cellular processes, such as microtubule growth and nuclear factor kappa B activation. We provide evidence that PP4 is involved in tumor necrosis factor (TNF)-alpha signaling in human embryonic kidney 293T (HEK293T) cells. Treatment of HEK293T cells with TNF-alpha resulted in time-dependent activation of endogenous PP4, peaking at 10 min, as well as increased serine and threonine phosphorylation of PP4. We also found that PP4 is involved in relaying the TNF-alpha signal to c-Jun N-terminal kinase (JNK) as indicated by the ability of PP4-RL, a dominant-negative PP4 mutant, to block TNF-alpha-induced JNK activation. Moreover, the response of JNK to TNF-alpha was inhibited in HEK293 cells stably expressing PP4-RL in comparison to parental HEK293 cells. The involvement of PP4 in JNK signaling was further demonstrated by the specific activation of JNK, but not p38 and ERK2, by PP4 in transient transfection assays. However, no direct PP4-JNK interaction was detected, suggesting that PP4 exerts its positive regulatory effect on JNK in an indirect manner. Taken together, these data indicate that PP4 is a signaling component of the JNK cascade and involved in relaying the TNF-alpha signal to the JNK pathway.
Mitogen-activated protein kinases (MAPKs), including extracellular-signal-regulated kinase (ERK), c-Jun N-terminal kinase (JNK)/stress-activated protein kinase (SAPK), and p38, play essential roles in many important biological processes such as the stress response, cell proliferation, apoptosis, and tumorigenesis (7-9). MAPK activation involves sequential protein kinase reactions within a three-kinase module (MAP3K-MAP2K-MAPK), whereby a MAP3K phosphorylates and activates a MAP2K, a dual-specificity kinase, that then phosphorylates and activates a MAPK (7,8,10). In vivo MAPK phosphorylation is a reversible process, indicating that protein phosphatases provide an additional level of regulation of MAPKs. In fact, the magnitude and duration of MAPK activation are tightly controlled by the coordinate actions of protein kinases and protein phosphatases. A large number of mammalian MAPK phosphatases have been identified, including dual-specificity phosphatases and tyrosine-specific phosphatases (11,12). There is evidence that serine/threonine-specific phosphatases also regulate MAPKs (13,14). MAPK phosphatases inactivate MAPKs by directly dephosphorylating both threonine and tyrosine residues of MAPKs (12). The coordinate regulation by protein kinases and phosphatases also occurs at many other points within the three-kinase module. For example, MKP-1, a dual-specificity phosphatase, inhibits ERK but positively regulates Raf-1 and MKK in an ERK-independent manner (15). PP2A also acts on multiple components of the ERK pathway (12).
Protein phosphatase 4 (PP4, previously named protein phosphatase X (PPX)) is a novel protein serine/threonine phosphatase that is a member of the PP2A family of phosphatases (16). PP4 is highly conserved during evolution, with human and Drosophila PP4 sharing 91% amino acid identity (16). It has been shown that PP4 is localized at the centrosomes in mammalian cells and Drosophila embryos, and that PP4 is involved in the regulation of microtubule growth/organization at centrosomes (17,18). Our previous studies showed that PP4 interacts with members of the nuclear factor κB (NF-κB) family (such as c-Rel, p50, and RelA), stimulates the DNA binding activity of c-Rel, and activates NF-κB-mediated transcription (19). The high degree of conservation of PP4 suggests that PP4 may be involved in many more essential cellular processes and is tightly controlled in vivo. It has been shown that PP4 is carboxymethylated (20). Furthermore, three potential regulatory subunits have been identified for PP4: α4 (21,22), PP4 R1 (23), and PP4 R2 (18). In an effort to further investigate the cellular function of PP4, we found that PP4 acts as a specific positive regulator of the JNK pathway and that PP4 is required to relay the TNF-α signal to the JNK pathway.
MATERIALS AND METHODS
Reagents-[γ-32P]ATP and [32P]orthophosphate were purchased from ICN Biomedicals (Irvine, CA). An enhanced chemiluminescence system was purchased from Amersham Biosciences, Inc. Ser/Thr phosphatase assay kit 1 was purchased from Upstate Biotechnology, Inc. (Waltham, MA). TNF-α was purchased from R&D Systems. Anti-HA antibody (12CA5) was purchased from Roche Molecular Biochemicals. Monoclonal anti-Flag (M2) and anti-γ-tubulin antibodies were purchased from Sigma. Monoclonal anti-PP1 and anti-c-Myc (9E10) antibodies, and goat anti-Bcl-X L antibody were purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Goat anti-aldolase antibody was purchased from Biodesign (Saco, ME). Monoclonal anti-golgin-97 was purchased from Molecular Probes (Eugene, OR). Rabbit anti-GRP78 polyclonal antibody was purchased from StressGen (Victoria, British Columbia, Canada). Monoclonal anti-lamin B 1 was purchased from Zymed Laboratories Inc. (South San Francisco, CA). Goat anti-human and -rabbit IgG (H+L) conjugated to fluorescein isothiocyanate and Texas Red were purchased from Jackson Immunoresearch Laboratories, Inc. (West Grove, PA). Rabbit anti-PP4 polyclonal antibodies Ab104 and Ab6101 were raised against the C-terminal regions of PP4, residues 287-305 (EAAPQETRGIPSKKPVADY) and 291-303 (QETRGIPSKKPVA), respectively. Ab104 and Ab6101 were peptide-purified using the Sulfolink kit from Pierce. Rabbit anti-JNK1 polyclonal antibody (Ab101) was described previously (24). Human autoimmune serum (no. 4171, 1:2000) specific for proteins of the pericentriolar matrix has been described previously (25). All other chemical reagents were purchased from Sigma unless otherwise noted.
Cells and Transfection-Human HeLa cells, human embryonic kidney 293T (HEK293T) and 293 (HEK293) cells were obtained from the American Type Culture Collection (Rockville, MD) and grown in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal calf serum and 100 units/ml streptomycin/penicillin at 37°C in a humidified atmosphere of 5% CO2. HEK293T cells were plated at a density of either 1.5 × 10^5 cells/well of a 35-mm plate or 1.0 × 10^6 cells/100-mm dish and transfected the next day using the modified calcium phosphate precipitation protocol (Specialty Media, Inc., Lavallette, NJ). Cells were transfected with plasmids encoding β-galactosidase (0.15 µg) in combination with an empty vector or various amounts of plasmids encoding phosphatases, phosphatase mutants, kinases, or kinase mutants as indicated in the figure legends.
Coimmunoprecipitation, Immunocomplex Kinase Assays, and Western Blot Analysis-Coimmunoprecipitation and immunocomplex kinase assays were performed as described previously (28-31). Western blot analysis was performed using an enhanced chemiluminescence detection kit according to the manufacturer's protocols (Amersham Biosciences, Inc.).
Phosphatase Assays-HEK293T cells were lysed in buffer containing 50 mM Tris-HCl (pH 8.0), 1% Nonidet P-40, 120 mM NaCl, 1 mM EDTA, 6 mM EGTA, 1 mM dithiothreitol, 50 µM p-amidinophenylmethanesulfonyl fluoride, and 2 µg/ml aprotinin. Endogenous PP4 was immunoprecipitated with an anti-PP4 (Ab104) antibody. Overexpressed Flag-PP4 and HA-PP4-RL were immunoprecipitated with anti-Flag (M2) and anti-HA (12CA5) antibodies, respectively. The immunoprecipitates were washed three times with buffer containing 50 mM HEPES (pH 7.4), 0.1% Triton X-100, and 500 mM NaCl. Phosphatase assays were performed using Ser/Thr phosphatase assay kit 1, according to the manufacturer's protocol (Upstate Biotechnology, Inc., Waltham, MA). The immunoprecipitates were incubated with 4 µM KTpIRR peptide in 40 µl of assay buffer (50 mM Tris (pH 7.0), 0.1 mM CaCl2, and 1 mM MnCl2) at 30°C for 30 min (unless otherwise indicated in the figure legend). Buffer plus peptide was used as a negative control. The immunoprecipitates were then pelleted, and the assay buffer was transferred to a 96-well, half-volume plate. The assay was terminated by the addition of 100 µl of Malachite Green solution (one volume of 4.2% (w/v) ammonium molybdate in 4 M HCl, three volumes of 0.045% (w/v) Malachite Green in water, and 1 µl/ml 10% Tween 20 added fresh). After 15 min at room temperature, the assay was read at 650 nm on a PerkinElmer Life Sciences Bioassay Reader (HTS 7000 Plus).
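As a rough illustration of how a Malachite Green readout is quantified, the sketch below converts background-corrected A650 readings into picomoles of released phosphate via a linear standard curve. All numerical values (standards and the sample reading) are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical phosphate standard curve: blank-corrected A650 readings
# for known amounts of free phosphate (pmol), of the kind supplied with
# Malachite Green assay kits. These numbers are placeholders.
std_pmol = np.array([0.0, 100.0, 250.0, 500.0, 1000.0, 2000.0])
std_a650 = np.array([0.00, 0.07, 0.18, 0.36, 0.72, 1.43])

# Linear least-squares fit: A650 = slope * pmol + intercept.
slope, intercept = np.polyfit(std_pmol, std_a650, 1)

def phosphate_released_pmol(a650_blank_corrected: float) -> float:
    """Invert the standard curve for a blank-corrected sample reading."""
    return (a650_blank_corrected - intercept) / slope

# Example: a 30-min reaction reading of A650 = 0.41 corresponds to
# roughly 570 pmol phosphate, i.e. ~19 pmol released per minute.
pmol = phosphate_released_pmol(0.41)
print(f"{pmol:.0f} pmol released, {pmol / 30.0:.1f} pmol/min")
```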
In Vitro Binding Assays-GST and GST-SAPK fusion proteins were immobilized on glutathione-Sepharose 4B beads equilibrated in incubation buffer containing 20 mM Tris-HCl (pH 8.0), 100 mM NaCl, 1 mM EDTA, 0.5% Nonidet P-40, 1 mM dithiothreitol, 0.5 mM phenylmethylsulfonyl fluoride, 1 µg/ml leupeptin, and 2 µg/ml aprotinin. Cell lysates (600 µg) from HEK293 cells stably transfected with Flag-PP4 or HEK293T cells transiently transfected with Myc-M3/6 were incubated with GST-JNK fusion protein or GST-4T-Sepharose beads in incubation buffer containing 3 mg/ml bovine serum albumin at 4°C for 2 h. The beads were washed five times with the incubation buffer, boiled in SDS-PAGE loading buffer for 5 min, resolved by 10% SDS-PAGE, transferred to nitrocellulose membranes, and then subjected to Western blotting with an anti-Flag (M2) or an anti-Myc antibody. The membrane was then stripped with stripping buffer (62.5 mM Tris-HCl (pH 6.7), 100 mM 2-mercaptoethanol, 2% SDS) and reprobed with an anti-GST antibody.
Centrosome Isolation-Centrosomes were purified from HeLa cells by a standard protocol (32,33). Briefly, 6 × 10^7 HeLa cells were incubated with 0.2 µM nocodazole and 1 µg/ml cytochalasin D at 37°C for 60 min. After trypsinization, the cells were pelleted and washed once with 1× TBS (50 mM Tris (pH 7.6), 150 mM NaCl) and once with 0.1× TBS + 8% sucrose. The cells were then resuspended in 2 ml of 0.1× TBS + 8% sucrose and lysed by adding 8 ml of fractionation lysis buffer (1 mM HEPES (pH 7.2), 0.5% Nonidet P-40, 0.5 mM MgCl2, 0.1% β-mercaptoethanol, 1 µg/ml leupeptin, 1 µg/ml aprotinin, 1 mM p-amidinophenylmethanesulfonyl fluoride, 1 mM Na3VO4, and 0.5 mM NaF). The lysate was spun at 2,500 × g for 10 min. The supernatant was collected and spun again at 2,500 × g for 10 min. The supernatant was transferred into a new tube through a 70-µm nylon filter (Falcon 2350). The resulting supernatant was incubated with 10 mM HEPES and 1 µg/ml DNase on ice for 30 min, transferred to a 15-ml ultracentrifuge tube, underlaid with 1 ml of 60% sucrose in sucrose dilution buffer (10 mM PIPES (pH 7.2), 0.1% Triton X-100, and 0.1% β-mercaptoethanol), and spun at 10,000 × g for 1.5 h. The bottom 3 ml was transferred to a new tube containing a discontinuous 40/50/70% sucrose gradient in sucrose dilution buffer. After spinning at 120,000 × g for 1.5 h, 0.5-ml fractions from the top were taken and diluted to 1 ml with 0.5 ml of PEM buffer (80 mM PIPES (pH 6.8), 5 mM EGTA, 2 mM MgCl2). After mixing, the solution was spun at 15,000 rpm in a tabletop centrifuge for 30 min. The pellet was resuspended in 1 ml of PEM buffer and spun at 15,000 rpm in a tabletop centrifuge for 30 min. The final pellet containing centrosomes was washed twice with PEM buffer and then resuspended in Laemmli sample buffer (Bio-Rad) with 5% β-mercaptoethanol.
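The protocol above mixes speeds given in × g and in rpm. For readers reproducing it on a different rotor, the standard conversion RCF = 1.118 × 10^-5 · r[cm] · rpm^2 can be used; the sketch below assumes a typical tabletop rotor radius of ~8 cm, which is not specified in the original protocol.

```python
import math

# Relative centrifugal force (x g): RCF = 1.118e-5 * r_cm * rpm^2.
def rcf_from_rpm(rpm: float, radius_cm: float) -> float:
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_from_rcf(rcf: float, radius_cm: float) -> float:
    return math.sqrt(rcf / (1.118e-5 * radius_cm))

RADIUS_CM = 8.0  # assumed tabletop-rotor radius; adjust to your rotor
print(f"15,000 rpm ~ {rcf_from_rpm(15000, RADIUS_CM):,.0f} x g")
print(f"2,500 x g  ~ {rpm_from_rcf(2500, RADIUS_CM):,.0f} rpm")
```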
Immunofluorescence-As recently described in detail (34), cells were grown on coverslips, washed in 0.5% Triton X-100 for 2 min, and fixed in cold 4% ultrapure formaldehyde (Polysciences, Inc.) in PEM buffer (80 mM K-PIPES (pH 7.0), 5 mM EGTA, 2 mM MgCl2) for 10-20 min. For the immunofluorescence of γ-tubulin, 4% polyethylene glycol was added to the PEM buffer during the permeabilization and fixation steps. After they were fixed, the coverslips were washed with PEM buffer and permeabilized in 0.5% Triton X-100 in PEM buffer for 30 min. Then, the coverslips were washed with PEM buffer and blocked in 2.5% nonfat dry milk in TBST buffer (50 mM Tris (pH 7.6), 150 mM NaCl, 0.1% Tween 20) overnight. The next day, the coverslips were incubated for 1 h at 37°C with primary antibodies diluted in TBST, washed in TBST, and incubated for 1 h at 37°C with secondary antibodies diluted 1:200 in TBST. After washing in TBST, the coverslips were counterstained with 0.4 µg/ml 4′,6-diamidino-2-phenylindole (DAPI; Molecular Probes, Eugene, OR) in TBST and mounted with Vectashield antifade medium (Vector Laboratories, Burlingame, CA) or ProLong antifade medium (Molecular Probes, Eugene, OR). The figures are composite images obtained with a DeltaVision deconvolution-based optical workstation (Applied Precision, Issaquah, WA). Z-series stacks of multiple focal planes were used to render three-dimensional volumes.
Establishment of HEK293 Cell Clones Stably Transfected with Flag-PP4 and HA-PP4-RL-HEK293 cells were grown in complete DMEM containing 10% fetal calf serum supplemented with 12.5 mM HEPES, 50 µg/ml gentamicin, and 100 units/ml penicillin-streptomycin (Invitrogen). We transfected the HEK293 cells with Flag-PP4 by the Fugene 6 method according to the manufacturer's protocol (Roche Molecular Biochemicals). The transfected cells were selected with Geneticin (G418; Invitrogen) at a concentration of 750 µg/ml or 1 mg/ml. The cells were replated at a 1:15 dilution whenever they reached 80% confluence. After 10-14 days, the T-75 flasks were trypsinized, and the drug-resistant cells were replated at limiting dilution to obtain independent clones. Each clone was tested for Flag-PP4 expression by Western blotting. A similar approach was used to establish an HEK293 cell line stably expressing HA-PP4-RL.
In Vivo Labeling of PP4 and Phosphoamino Acid Analysis-HEK293T cells (1 × 10^6 cells in 100-mm dishes) were transfected with 5 µg of Flag-PP4. After 40 h, the cells were maintained in phosphate-free DMEM containing 5% dialyzed serum for 1 h at 37°C. The cells were then labeled in phosphate-free DMEM supplemented with 5% dialyzed serum and 100 µCi/ml [32P]orthophosphate for 4 h at 37°C. After TNF-α treatment, the cells were washed twice with PBS to remove free [32P]orthophosphate. Flag-PP4 was immunoprecipitated with an anti-Flag antibody (M2) and separated by SDS-PAGE. The separated proteins were transferred to PVDF, and autoradiography was performed. The membrane was then subjected to immunoblotting using an anti-Flag (M2) antibody. The corresponding PP4 bands were cut out and subjected to phosphoamino acid analysis (35,36).
RESULTS
PP4 Is Activated by TNF-α-In an effort to investigate which signaling pathway(s) PP4 may be involved in, we examined the effect of TNF-α on PP4 phosphatase activity. We first generated an anti-PP4 antibody, Ab104, which recognizes the C-terminal region of PP4. Western blot analysis indicated that Ab104 specifically recognized PP4, but not the most highly homologous phosphatases PP2A and PP6 (Fig. 1A). Previously, PP4 had been shown to localize to the centrosomes by immunofluorescence staining (17,18). To confirm the specificity of Ab104, we isolated centrosomes from HeLa cells and performed Western blotting with antibodies to PP4 (Ab104), as well as markers for various subcellular compartments. PP4 localized to centrosome fractions, and these fractions were shown to be free of contamination from other subcellular compartments (Fig. 1B). We noticed that PP4 did not peak with γ-tubulin. Considering that γ-tubulin is a component of the centrioles of the centrosomes and that PP4 has been previously reported to be a component of the pericentriolar matrix of the centrosomes (17), the slight difference in the Western blot detection may be the result of slight differences in the densities of the two centrosomal structures. The association of PP4 with the centrosome was further confirmed by immunofluorescence staining using a peptide-purified anti-PP4 antibody (Ab104). As shown in Fig. 1C, PP4 co-localized with proteins of the pericentriolar matrix (PCM). Taken together, these data show that PP4 is a component of the centrosome.
We then measured the phosphatase activity of PP4 before and after TNF-α treatment. PP4 phosphatase assays were established by using a synthetic peptide substrate, KTpIRR. We first wanted to ensure that the PP4 phosphatase assay is able to measure PP4 phosphatase activity. Thus, we tested the assay to determine its linear range and to show that increasing amounts of PP4 correlate with increasing PP4 activity. PP4 showed a time-dependent increase in its phosphatase activity over a time period of 1-50 min of incubation of PP4 with the peptide substrate (Fig. 2A, upper panel). Within this time frame, PP4 activity increased with increased amounts of PP4 (Fig. 2A, lower panel). HEK293T cells were treated with TNF-α (10 ng/ml), and endogenous PP4 was immunoprecipitated from the cells with the PP4-specific antibody, Ab104. The PP4 phosphatase activity was measured by incubating the immunoprecipitated PP4 with the peptide substrate, KTpIRR, for 30 min. PP4 phosphatase activity was increased following TNF-α stimulation in a time-dependent fashion, peaking at 10 min (Fig. 2B, upper panel). PP4 activity was decreased after 10 min, indicating that TNF-α-induced PP4 activation was a transient event. The increased phosphatase activity was not caused by variation in levels of PP4 because the amounts of PP4 immunoprecipitated were comparable (Fig. 2B, lower panel). Therefore, PP4 was activated in response to TNF-α in HEK293T cells.

FIG. 1. Characterization of a PP4-specific antibody, Ab104. A, the anti-PP4 antibody, Ab104, specifically recognizes PP4, but not PP2A and PP6. HEK293T cells were transfected with 2 µg of empty vector (lanes 1), 2 µg of Flag-PP4 (lanes 2), 2 µg of Flag-PP2A (lanes 3), or 2 µg of Flag-PP6 (lanes 4). Cells were harvested 36 h after transfection and subjected to SDS-PAGE. Western blotting was performed with 1 µg/ml Ab104. The experiments were repeated four times with similar results. B, PP4 co-purifies with centrosomes. Centrosomes were prepared from 6 × 10^7 HeLa cells and purified on a discontinuous sucrose gradient. 10% of the protein recovered from each fraction and 5 µg of HeLa whole cell lysate (W) were Western-blotted for the presence of PP4 (Ab104) and subcellular compartment markers: γ-tubulin (centrosome), aldolase (cytosol), lamin B1 (nucleus), GRP78 (endoplasmic reticulum), golgin-97 (Golgi), and Bcl-XL (mitochondria). C, PP4 is a component of the centrosome. HeLa cells were grown on polylysine-coated coverslips, extracted in 0.5% Triton X-100 for 2 min, and fixed in 4% ultrapure formaldehyde. Fixed cells were incubated with DAPI DNA stain (DAPI; blue), human autoimmune serum 4171 (PCM; red), and the peptide-purified anti-PP4 antibody Ab104 (PP4; green; panels a-d) or normal preimmune serum from the same rabbit used to generate Ab104, before immunization with peptide (n.s.; green; panels e-h). Panels PCM, PP4, and DAPI were merged (merged; panel d) to identify areas of colocalization of PP4 and PCM staining (yellow). Arrows indicate the position of centrosomes. The experiments were repeated at least three times with similar results.
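As a side note, the linear-range check described for Fig. 2A can be summarized numerically by a least-squares fit of absorbance versus incubation time. The sketch below uses invented readings purely to illustrate the computation; none of the values are from the study.

```python
import numpy as np

# Linearity check for the assay time course (1-50 min): fit A650 vs.
# incubation time and inspect R^2. All readings here are hypothetical.
t = np.array([1, 5, 10, 20, 30, 50], dtype=float)       # minutes
a650 = np.array([0.03, 0.10, 0.19, 0.37, 0.55, 0.90])   # blank-corrected

slope, intercept = np.polyfit(t, a650, 1)
pred = slope * t + intercept
r2 = 1.0 - np.sum((a650 - pred) ** 2) / np.sum((a650 - a650.mean()) ** 2)
print(f"slope = {slope:.4f} A650/min, R^2 = {r2:.3f}")
```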
It is known that TNF-α is a potent activator of the JNK pathway. To establish a possible link between PP4 and the JNK pathway in response to TNF-α, endogenous JNK was immunoprecipitated with an anti-JNK1 antibody (Ab101) from HEK293T cells, and its kinase activity was determined by an immunocomplex kinase assay using GST-c-Jun-(1-79) as substrate. As shown in Fig. 2C, JNK was activated by TNF-α with kinetics similar to that of PP4 in HEK293T cells. Thus, PP4 was activated concomitant with JNK activation in response to TNF-α in HEK293T cells.
TNF-α Induces Serine and Threonine Phosphorylation of PP4-To further confirm the involvement of PP4 in TNF-α signaling, we examined the effect of TNF-α on the phosphorylation state of PP4, because PP2A, the phosphatase most homologous to PP4, is regulated by phosphorylation. HEK293T cells were transfected with Flag-PP4, labeled in vivo with [32P]orthophosphate, and treated with TNF-α (10 ng/ml). Flag-PP4 was then immunoprecipitated with an anti-Flag antibody (M2). We found that TNF-α treatment induced phosphorylation of PP4 in a time-dependent manner, peaking at 5 min (Fig. 3A). Phosphoamino acid analysis showed that TNF-α-induced phosphorylation of PP4 occurred on serine and threonine residues (Fig. 3B). These results indicate that PP4 is inducibly phosphorylated in response to TNF-α in HEK293T cells.

FIG. 2. TNF-α activates both PP4 and JNK in HEK293T cells. A, establishment of PP4 phosphatase assays. 800 µg of HEK293 cell lysate was immunoprecipitated with either anti-PP4 antibody (Ab104) or protein A beads alone. The immunoprecipitates were washed and incubated with assay buffer and KTpIRR peptide at 30°C for various times, from 0 to 120 min, as indicated (upper panel). 200, 400, 600, or 800 µg of HEK293 cell lysate was immunoprecipitated with either anti-PP4 antibody (Ab104) or beads alone. The immunoprecipitates were washed and incubated with assay buffer and KTpIRR peptide at 30°C for 30 min (lower panel). The phosphatase assays were read at 650 nm. The readings are the average and standard deviation of three separate immunoprecipitations (PP4) or two separate immunoprecipitations (beads). B, TNF-α activates PP4 phosphatase activity. HEK293T cells were seeded at a density of 3.5 × 10^6 cells/100-mm dish. After 24 h, the cells were treated with TNF-α (10 ng/ml) for various times as indicated. PP4 was immunoprecipitated with an anti-PP4 antibody (Ab104). The PP4 phosphatase activity was determined by using a synthetic peptide, KTpIRR, as a substrate (upper panel). The amounts of PP4 immunoprecipitated were monitored by Western blotting using an anti-PP4 antibody (Ab6101; lower panel). The experiments were repeated at least three times with similar results. C, TNF-α activates JNK kinase activity. HEK293T cells were seeded at a density of 3.5 × 10^6 cells/100-mm dish. After 24 h, the cells were treated with TNF-α (10 ng/ml) for various times as indicated. JNK was immunoprecipitated with an anti-JNK antibody (Ab101). The JNK kinase activity was determined by using GST-c-Jun-(1-79) as a substrate. The experiments were repeated three times with similar results.
JNK Activation by TNF-α Is Blocked by PP4-RL-To investigate the functional involvement of PP4 in TNF-α signaling, we examined the contribution of PP4 to TNF-α-induced JNK activation. We first constructed a PP4 mutant, PP4-RL, in which the replacement of arginine 236 with leucine resulted in the loss of its phosphatase activity (Fig. 4B). We then examined the effect of PP4-RL on JNK activation by TNF-α. HEK293T cells were transfected with HA-JNK1 alone or HA-JNK1 plus PP4-RL. The transfected cells were treated with TNF-α (10 ng/ml) for 10 min. We found that TNF-α-induced JNK activation was blocked by PP4-RL (Fig. 4A, upper panel), indicating that PP4-RL may be a dominant-negative mutant and that PP4 plays a role in JNK activation by TNF-α.
We also established a HEK293 cell clone, called HEK293-PP4-RL, that stably expresses HA-PP4-RL (Fig. 4C, right panel). HEK293-PP4-RL cells were treated with TNF-α (10 ng/ml) for various times (0 to 60 min), and endogenous JNK was immunoprecipitated from the cells with an anti-JNK antibody (Ab101). The JNK kinase activity was measured by immunocomplex kinase assays using GST-c-Jun-(1-79) as a substrate. As shown in Fig. 4C (left panel), a decrease in JNK activation by TNF-α in HEK293-PP4-RL cells was detected, in comparison to the parental HEK293 cells. Although JNK activation by TNF-α peaked at 10 min in HEK293T cells (Fig. 2C), TNF-α-induced JNK activation peaked at 20 min in HEK293 cells (Fig. 4C, left panel). This kinetic difference between HEK293 and HEK293T cells may be the result of the presence of SV40 large T antigen in HEK293T cells. Taken together, these data indicate that PP4 is required for transducing TNF-α signals to the JNK pathway.

PP4 Specifically Activates JNK, but Not p38 and ERK2-To confirm the involvement of PP4 in the JNK signaling pathway, we tested whether expression of PP4 had any effect on the activity of JNK. Hemagglutinin (HA)-tagged JNK1 was cotransfected in HEK293T cells with PP4, PP1 (another serine/threonine phosphatase), or M3/6 (a dual-specificity MAPK phosphatase). HA-JNK1 was immunoprecipitated, and its kinase activity was determined in vitro using GST-c-Jun-(1-79) as a substrate. Cotransfection of PP4 resulted in activation of JNK1 (Fig. 5, lanes 1 and 2), whereas PP1 and M3/6 had no such activating effect on JNK1 (Fig. 5, lanes 1, 3, and 4). M3/6 is a known JNK-inactivating dual-specificity phosphatase, which dephosphorylates the TPY motif of JNK (27,37). Transiently transfected JNK is partially activated even in the absence of stimulation; therefore, cotransfection of M3/6 with JNK resulted in inhibition of JNK activity, as expected. The nature of the inhibition of JNK by PP1 is not known at this point. It is likely that PP1 dephosphorylates the threonine residue of the TPY motif of JNK and thus inhibits JNK activity. These data indicate that PP4 exerted a positive regulatory effect on JNK1.
To determine whether PP4's effect on JNK1 is specific, we also examined the effect of PP4 on p38 and ERK2. HEK293T cells were transfected with various amounts of the PP4 expression plasmid together with the HA-tagged MAPK constructs HA-JNK1, HA-p38, and HA-ERK2. The HA-tagged MAPKs were immunoprecipitated, and their kinase activities were determined in vitro using the appropriate substrates (GST-c-Jun for JNK1, GST-ATF2 for p38, and myelin basic protein for ERK2). JNK1 was activated by PP4 in a dose-dependent manner (Fig. 6A). In contrast, PP4 had no significant effect on the activities of either p38 (Fig. 6B) or ERK2 (Fig. 6C). These data indicate that PP4 serves as a specific positive regulator for the JNK signaling pathway. We also found that PP4-RL had no effect on PKC-induced ERK activation or MKK6-induced p38 activation (data not shown).
We next wanted to determine whether PP4 and JNK1 interact directly with each other. We incubated GST-JNK fusion protein with cell lysates from untreated or TNF-α-treated HEK293 cells stably expressing Flag-PP4. The potential PP4-JNK interaction was analyzed by SDS-PAGE and Western blotting using an anti-Flag antibody (M2). Similar to transiently transfected Flag-PP4 in HEK293T cells (Fig. 3), stably expressed Flag-PP4 in HEK293 cells was also inducibly phosphorylated after 5 min of TNF-α treatment (Fig. 7A, lower panel). Association of PP4 with GST-JNK was not detectable (Fig. 7A, upper panel) in the absence or presence of TNF-α. We also found that PP4 had no phosphatase activity toward in vitro phosphorylated GST-JNK (data not shown). Under the same conditions, however, M3/6, a dual-specificity phosphatase known to target JNK directly, interacted with GST-JNK (Fig. 7B). Taken together, these data suggest that PP4 affects the JNK pathway in an indirect manner.

FIG. 6. [...] HA-JNK1 was immunoprecipitated with an anti-HA antibody (12CA5), and immunocomplex kinase assays were performed using GST-c-Jun-(1-79) as a substrate. Expression levels of HA-JNK1 and PP4 were monitored by immunoblotting using anti-HA (12CA5) and anti-PP4 antibodies, respectively (bottom panels). The experiments were repeated at least 10 times with similar results. B, HEK293T cells (1.5 × 10^5 cells in 35-mm wells) were transfected with HA-p38 (1 µg) alone, HA-p38 plus various amounts of PP4, or HA-p38 plus 2 µg of HA-MKK6, as indicated. Empty vector was used to normalize the amount of transfected DNA. 36 h after transfection, the cell lysates were prepared. HA-p38 was precipitated with an anti-HA antibody (12CA5), and immunocomplex kinase assays were performed using GST-ATF2-(1-96) as a substrate. Expression levels of HA-p38, HA-MKK6, and PP4 were monitored by immunoblotting using anti-HA (12CA5) and anti-PP4 antibodies, respectively (bottom panels). C, HEK293T cells (1.5 × 10^5 cells in 35-mm wells) were transfected with HA-ERK2 (1 µg) alone, HA-ERK2 plus various amounts of PP4, or HA-ERK2 plus 1 µg of HA-PKC-, as indicated. Empty vector was used to normalize the amount of transfected DNA. 36 h after transfection, the cell lysates were prepared. HA-ERK2 was precipitated with an anti-HA antibody (12CA5), and immunocomplex kinase assays were performed using myelin basic protein as a substrate. Expression levels of HA-ERK2, HA-PKC-, and PP4 were monitored by immunoblotting using anti-HA (12CA5) and anti-PP4 (Ab104) antibodies, respectively (bottom panels).

DISCUSSION

TNF-α is an important effector cytokine for inflammatory and immune responses and is involved in many important cellular processes, such as proliferation, differentiation, and apoptosis (38). A variety of protein phosphatases have been implicated in TNF-α signaling. For example, calcineurin, a calcium-dependent serine/threonine phosphatase, participates in TNF-α-mediated apoptosis in rat hepatoma cells (39), and SHP-2, a Src homology 2-containing phosphotyrosine phosphatase, mediates the induction of interleukin-6 by TNF-α through modulation of the NF-κB pathway (40). Another phosphotyrosine phosphatase, SHP-1, has been shown to mediate TNF-α's inhibitory effect on vascular endothelial cell growth factor-induced endothelial cell proliferation (41). PP2A has also been shown to be involved in many TNF-α-induced cellular processes (42-45). However, many of these studies relied on the use of okadaic acid, an inhibitor of PP1 and PP2A. Because okadaic acid inhibits PP4 with an IC50 comparable with that of PP2A (17), it is necessary to reexamine some of the functions assigned to PP2A. We provide evidence here that PP4, a novel member of the PP2A family, was activated by TNF-α in HEK293T cells, as indicated by increased phosphatase activity and increased serine and threonine phosphorylation of PP4 itself. The involvement of PP4 in TNF-α signaling was further demonstrated by the observation that a PP4 mutant blocked TNF-α-induced JNK activation. Demonstration of the involvement of PP4 in TNF-α signaling will help in exploring the molecular mechanism by which TNF-α regulates cellular processes.
We found that the activation of PP4 by TNF-α was accompanied by an increase in the serine and threonine phosphorylation of PP4. These results provide the novel finding that a member of the PP2A family is subject to regulation by serine phosphorylation. It has been known that the catalytic subunit of PP2A is subject to phosphorylation on a conserved tyrosine and an as yet unidentified threonine (46-48), and that phosphorylation of either the tyrosine or the threonine site inhibits the phosphatase activity of PP2A in vitro. However, in human hepatoma Hep3B cells, interleukin-6 induced an increase in both the phosphorylation and the phosphatase activity of PP2A (39). The nature of PP4 serine and threonine phosphorylation in response to TNF-α remains unknown at this point. We noted that PP4 phosphorylation preceded PP4 activation in response to TNF-α (5 min versus 10 min). Considering the existence of multiple potential phosphorylation sites on PP4, we speculate that PP4 may be subject to multiple phosphorylation events in response to TNF-α, and that it is the phosphorylation occurring at 10 min, but not at 5 min, that contributes to the activation of PP4. Further study, including identification of the phosphorylation site(s) and characterization of site-directed mutants of PP4, is required to understand the relationship between the phosphorylation that occurred at 5 min and PP4 activation. Alternatively, we cannot exclude the possibility that PP4 phosphorylation precedes PP4 activation by inducing conformational change(s) and/or recruiting regulatory subunits required for the activation of PP4.
Phosphorylation-dependent inactivation is characteristic of many types of protein kinases, such as DNA-dependent protein kinase (49), phosphoinositide 3-kinase (50), Raf-1 (51-53), and CLK1 (54). It has been shown that PP2A dephosphorylates the inhibitory phosphoserine residue 259 of Raf-1 and thus serves as a positive regulator for Raf-1, an upstream activating kinase for the ERK pathway (55). Raf-1 and MEK1/2, another upstream activating kinase for the ERK pathway, are positively regulated by MAPK phosphatase 1, a dual-specificity phosphatase, in an ERK-independent manner (15). We provide evidence here that PP4 acts as a specific positive regulator for the JNK pathway. However, we did not detect a direct interaction between PP4 and JNK1, strongly suggesting that PP4 exerts its positive regulatory effect on the JNK pathway in an indirect manner. Given the fact that the core of the JNK signaling pathway is a multiple-kinase module that is assembled by scaffold proteins to act as a stimulus-specific signaling complex (7-9), and that the magnitude and duration of JNK activation are tightly controlled by the coordinate actions of protein kinases and protein phosphatases (12), we speculate that PP4 may target and activate the JNK upstream activating kinase(s), which is negatively regulated by phosphorylation, and subsequently leads to JNK activation. The target for PP4 could be a kinase at one or multiple levels of the JNK signaling cascade.

FIG. 7. PP4 does not interact with JNK in vitro. A, no PP4-JNK association was detected in vitro. HEK293 cells stably transfected with Flag-PP4 (10F1 clone) were seeded at 4 × 10^6 cells/100-mm dish and treated with TNF-α (10 ng/ml) for 5 min the next day. 600 µg of lysate from either TNF-α-treated or untreated 10F1 HEK293 cells was incubated with GST or GST-JNK fusion protein immobilized onto glutathione-agarose beads for 2 h at 4°C. The PP4-JNK interaction was analyzed by immunoblotting with an anti-Flag antibody (M2) to detect Flag-PP4 bound to GST-JNK after SDS-PAGE (upper panel). The GST and GST-JNK were monitored by immunoblotting with an anti-GST antibody (middle panel). The experiments were repeated three times with similar results. To assure Flag-PP4 was in a phosphorylated state, 10F1 HEK293 cells were labeled in phosphate-free DMEM supplemented with 5% dialyzed serum and 100 µCi/ml [32P]orthophosphate for 4 h at 37°C and treated with TNF-α (10 ng/ml) for 5 min. Flag-PP4 was immunoprecipitated with an anti-Flag antibody (M2) and subjected to SDS-PAGE and autoradiography. The experiments were repeated two times with similar results. B, M3/6 associates with JNK in vitro. GST or GST-JNK fusion protein was immobilized on glutathione-agarose beads and incubated with 600 µg of lysate from HEK293T cells transiently transfected with Myc-M3/6 for 2 h at 4°C. The M3/6-JNK interaction was analyzed by immunoblotting with an anti-Myc antibody to detect Myc-M3/6 bound to GST-SAPK after SDS-PAGE (upper panel). The GST and GST-SAPK were monitored by immunoblotting with an anti-GST antibody (lower panel).
In addition to regulation of upstream activating kinases, we cannot exclude the possibility that PP4 may target a phosphatase which inhibits JNK, and thus exert an indirect positive effect on the JNK pathway. This putative JNK phosphatase may be activated by phosphorylation, and hence inactivated by dephosphorylation. Because only JNK, but not p38 or ERK, is activated by PP4, the putative phosphatase should also be JNK-specific. Inhibition of this JNK-specific phosphatase by PP4-mediated dephosphorylation would then lead to JNK activation. Therefore, some JNK-specific, dual-specificity phosphatases, such as M3/6 (37), may be good candidates for PP4 targets. | 7,670.6 | 2002-02-22T00:00:00.000 | [
"Biology"
] |
Novel Higgsino Dark Matter Signatures at the LHC
In the LHC searches for gluinos it is usually assumed that they decay predominantly into the lightest neutralino plus jets. In this work we perform a proof-of-concept collider analysis of a novel supersymmetric signal in which gluinos decay mostly into jets and the bino-like neutralino ($\tilde\chi_3^0$), which in turn decays into the lightest Higgsino-like neutralino ($\tilde\chi_1^0$), considered the dark matter candidate, together with the SM-like Higgs boson ($h$). This new physics signal then consists of an LHC final state made up by four light jets, four $b$-jets, and a large amount of missing transverse energy. We identify $t \bar t$, $V$+jets ($V$= $W$, $Z$), and $t \bar t + X$ ($X$ = $W$, $Z$, $\gamma^*$, $h$) productions as the most problematic backgrounds, and develop a search strategy for the high luminosity phase of the LHC, reaching signal significances at the evidence level for a luminosity of 1000 fb$^{-1}$. The prospects for a luminosity of 3000 fb$^{-1}$ are even more promising, with discovery-level significances.
Introduction
After the Higgs boson discovery [1,2] at the LHC, much of the effort of the CMS and ATLAS collaborations has been devoted to searches for physics beyond the Standard Model (BSM). So far the results have been null, so bounds have been placed on popular models, albeit with caveats. The reinterpretation of the searches is normally done in the context of simplified models, where it is easier to draw conclusions. An example of this situation is the gluino searches performed at the LHC (for a recent summary see, for instance, [3] and [4]). In most cases it is assumed that the gluino decays with a branching ratio equal to 1 to the lightest neutralino plus jets, which in fact makes an implicit assumption about the supersymmetric (SUSY) spectrum and couplings. If this assumption is not fulfilled, many experimental bounds could be evaded. It is thus interesting to explore other (less conventional) possibilities, as they are very often theoretically well motivated, as is the case we explore in this paper.
In this work we develop a search strategy for a novel LHC signature of Higgsino dark matter, proposed in [5], in which the gluino does not decay predominantly to the lightest neutralino plus jets. Under very general conditions, explained in section 2, there can be several electroweakinos lighter than the gluino, which changes the LHC signatures dramatically. The aim of our analysis is to give a proof of principle rather than an elaborate strategy, and to show which kinematic variables and cuts may be effective for this kind of scenario. Let us finally emphasize that, in general, it is very important for the next run of the LHC to go beyond the usual simplified models: to design searches, identify kinematic variables, and optimize cuts so as to be sensitive to more scenarios than just the ones captured by simplified models or spectra.
The rest of the paper is organized as follows. The general theoretical framework for the model we will consider is provided in Sec. 2. In this framework our guideline will be the possibility of having a 1.1 TeV Higgsino as dark matter. The collider analysis will be done in Sec. 3 while our conclusions will be drawn in Sec. 4.
Theoretical Framework
Identifying the lightest neutralino $\tilde\chi_1^0$ as the lightest supersymmetric particle (LSP), and thus a dark matter candidate in the presence of R parity [6], is one of the most appealing features of the minimal supersymmetric extension of the Standard Model (MSSM) [7-9]. Given the strong LHC bounds on the mass of supersymmetric particles, and the plethora of null results in dark matter direct searches, there remains a preferred supersymmetric scenario: an almost pure Higgsino with a mass ∼ 1.1 TeV [10,11]. This requirement (almost) fixes the theoretical framework in the electroweakino (neutralino/chargino) sector, as it generically requires that $\mu \sim 1.1$ TeV (where $\mu$ is the supersymmetric Higgsino mass) while $M_1, M_2 \gg \mu$ (where $M_1$ and $M_2$ are soft supersymmetry breaking Majorana masses for the fermionic partners of the $U(1)_Y$ and $SU(2)$ gauge bosons, the bino and wino, respectively).
The Majorana masses $M_{1,2}$ are defined at the low scale and their values depend on the mechanism of supersymmetry breaking. While the requirement of the Higgsino being the LSP rules out gauge mediation (for which the gravitino is the LSP) as the transmission mechanism for supersymmetry breaking, gravity mediation seems to be the preferred one, as there is room for the lightest neutralino to be the LSP and, moreover, the supersymmetric mass $\mu$ can be generated by the Giudice-Masiero mechanism [12]. In gravity mediation, all supersymmetry breaking parameters, and in particular $M_{1,2}$, are generated at the high (unification) scale, i.e. $M^0_{1,2}$, and their values at the low scale are obtained by means of the renormalization group equation (RGE) running. Unification conditions are usually assumed, i.e. $M^0_1 = M^0_2$, but even just assuming that $M^0_1 \sim M^0_2$, after the RGE running we have $M_2 \sim 2 M_1$, so that the bino $\tilde\chi_3^0$ is lighter than the wino $\tilde\chi_4^0$. Under these circumstances the neutralino sector is almost completely fixed: i) there are two (almost) purely Higgsino states, $\tilde\chi_1^0$ and $\tilde\chi_2^0$, with masses ∼ 1.1 TeV and a mass separation of a few GeV; ii) there is a bino $\tilde\chi_3^0$ with a mass $m_{\tilde\chi_3^0} \sim M_1$ and a wino with a mass $m_{\tilde\chi_4^0} \sim 2 m_{\tilde\chi_3^0}$. At the same time, the constraints from the XENON1T experiment on direct detection [13], analyzed in Ref. [11], impose, for the case of equal masses at the unification scale, $M^0_1 = M^0_2 \gtrsim 3.2$ TeV, which translates into the lower bounds $m_{\tilde\chi_3^0} \gtrsim 1.5$ TeV and $m_{\tilde\chi_4^0} \gtrsim 2.7$ TeV [5]. As for the chargino sector, the lightest state $\tilde\chi_1^\pm$ is almost degenerate with the LSP, with a few-GeV gap, while the heaviest chargino is almost degenerate with the heavy neutralino, so that $m_{\tilde\chi_2^\pm} \gtrsim 2.7$ TeV. On the other hand, the gluino ($\tilde g$) mass $M_{\tilde g}$ is also fixed by the breaking mass $M^0_3$ at the unification scale. In our theoretical framework the gluino mass is not unified with the electroweak masses $M^0_{1,2}$, so it will be considered a free parameter. This is a safe assumption, as the gluino mass does not enter the process of electroweak breaking at tree level. We will assume that the gluino mass is close to its present experimental bound, $M_{\tilde g} \sim 2$ TeV. Moreover we are going to assume, for simplicity, that all other sparticles, including squarks, are more massive than the gluino; nonetheless, all decays are assumed to be prompt. In this case the possible channels for the gluino decay are $\tilde g \to \tilde\chi^0_{1,2}\,jj$, $\tilde g \to \tilde\chi^\pm_1\,jj$, and $\tilde g \to \tilde\chi^0_3\,jj$, mediated by the decay $\tilde g \to \tilde q^*_a q_a$, where $a$ is a generation label, and followed by $\tilde q^*_a \to \tilde\chi^0_{1,2} q_a$, $\tilde q^*_a \to \tilde\chi^\pm_1 q_a$ (induced by the Yukawa coupling $y_{q_a}$), and $\tilde q^*_a \to \tilde\chi^0_3 q_a$ (induced by the $U(1)$ gauge coupling $g_1$), respectively. The typical situation that current analyses consider and cover is that the direct decay to the nearly degenerate Higgsinos ($\tilde\chi^0_{1,2}$, $\tilde\chi^\pm_1$) dominates. If, instead, the gluino decays predominantly to $\tilde\chi^0_3$, one will get a final state with several energetic jets and b-quarks that evades current bounds. The decay channels of the gluino depend on the details of the squark spectrum: if the first two generations of squarks are less massive than the third generation, then the decay to $\tilde\chi^0_3$ is favored, being of electroweak nature, as opposed to the decay to the Higgsino, which is proportional to the corresponding Yukawa coupling. In Fig. 1 we show a schematic view of the spectrum and decays that are going to be analyzed in the next section.
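To make the statement $M_2 \sim 2M_1$ concrete, the following back-of-the-envelope sketch applies the one-loop scaling $M_i(Q)/M_i^0 = g_i^2(Q)/g_{GUT}^2$. The coupling values are rough TeV-scale inputs and illustrative assumptions, not the precise RGE solution behind the bounds quoted above.

```python
# One-loop, leading-log illustration: gaugino masses scale with the
# squared gauge couplings, so a common M^0 at the unification scale
# splits into M2 ~ 2*M1 at the TeV scale.
alpha_GUT = 1.0 / 24.0  # unified coupling (approximate assumption)
alpha_1 = 0.017         # U(1)_Y, GUT-normalized, ~TeV scale (assumed)
alpha_2 = 0.034         # SU(2)_L, ~TeV scale (assumed)

M0 = 3.2  # TeV, common gaugino mass at the unification scale

M1 = (alpha_1 / alpha_GUT) * M0
M2 = (alpha_2 / alpha_GUT) * M0
print(f"M1 ~ {M1:.1f} TeV, M2 ~ {M2:.1f} TeV, M2/M1 ~ {M2 / M1:.1f}")
# -> M1 ~ 1.3 TeV, M2 ~ 2.6 TeV, ratio ~ 2: the bino comes out roughly
#    half as heavy as the wino, as stated in the text.
```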
Collider Analysis
The experimental signature under study at the LHC comes from the SUSY production of a pair of gluinos, $pp \to \tilde g \tilde g$, that decay into $\tilde\chi_3^0$ and two light jets ($\tilde g \to \tilde\chi_3^0 jj$). We consider then that each $\tilde\chi_3^0$ decays into the LSP ($\tilde\chi_1^0$) and the lightest MSSM Higgs boson, $h$, identified as the 125-GeV SM-like Higgs boson discovered at the LHC, which decays into a pair of b-quarks. Therefore, the final state is made of four light jets, four b-jets, and a large amount of missing transverse energy ($4j + 4b + E_T^{miss}$), whose main SM backgrounds are QCD multijet; $Z$+jets and $W$+jets production; $t\bar t$ production; $t\bar t$ production in association with electroweak or Higgs bosons, $t\bar t + X$ ($X$ = $W$, $Z$, $\gamma^*$, $h$); and diboson production ($WW$, $ZZ$, $WZ$, $Wh$, and $Zh$) plus jets.
We develop our search strategy for an LHC center-of-mass energy of $\sqrt{s} = 14$ TeV and a total integrated luminosity of $L = 1000$ fb$^{-1}$, compatible with the high-luminosity LHC (HL-LHC) phase. We make use of MadGraph5_aMC@NLO 2.7 [14] for the Monte Carlo generation of both signal and background events, whose parton shower and hadronization are performed with PYTHIA 8 [15], while the detector response simulation is achieved with Delphes 3 [16]. From the proposed new physics signal, one would expect very energetic light jets and b-jets in the final state, coming from the decays of the gluinos and Higgs bosons, respectively. Therefore, with the intention of reducing the large background cross sections and making event generation more efficient, we impose generator-level cuts on the $p_T$ of the light jets and b-jets for the background simulation, where $j_1 \ldots j_4$ ($b_1 \ldots b_4$) runs from the most to the least energetic light (b-)jet. Dealing with many jets in the final state, the MLM algorithm [17,18] was implemented for jet matching and merging.
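A minimal sketch of how such a generation chain might be scripted is shown below. The MG5_aMC commands follow standard MadGraph conventions (MSSM_SLHA2 model, gluino-pair production, Pythia 8 shower, Delphes detector simulation), but the spectrum file name, output directory, and executable path are assumptions; the actual run cards and cuts used in the paper are not reproduced here.

```python
import subprocess
from pathlib import Path

# Minimal MG5_aMC command card for gluino-pair production in the MSSM.
# 'benchmark.slha' and the output directory name are placeholders.
card = """\
import model MSSM_SLHA2
generate p p > go go
output gluino_pair
launch gluino_pair
  shower=Pythia8
  detector=Delphes
  set ebeam1 7000.0
  set ebeam2 7000.0
  benchmark.slha
"""
Path("proc_card.dat").write_text(card)

# The path to the mg5_aMC executable depends on the installation.
subprocess.run(["./bin/mg5_aMC", "proc_card.dat"], check=True)
```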
In order to optimize the simulation and to check that the jet-related distributions are smooth, the xqcut was set to 20 for all simulated samples, and the qcut was set to 550, 50, and 30 for the signal, $t\bar t$-like backgrounds, and backgrounds with bosons, respectively. With this in mind, the following comments on the signal and backgrounds are pertinent:
• The SUSY spectrum and branching ratios for the signal benchmark have been computed with SOFTSUSY [19-25], while the production cross section of a pair of gluinos is obtained from [26].
• The QCD multijet background is unmanageable with our computational capacity and is usually treated with data-driven techniques. In our case, taking into account that our signal will have a large amount of $E_T^{miss}$, variables related to this observable, such as the $E_T^{miss}$ significance, greatly reduce this class of backgrounds with instrumental missing transverse energy, bringing the number of expected events practically to zero. Therefore, we can consider the QCD multijet background negligible, and it will not be included in our analysis.
• Regarding the $V$+jets production, including both $Z$+jets and $W$+jets, we considered a pair of b-jets and a pair of light jets leading to four extra jets, with a genuine source of missing energy through neutrinos coming from the decay of the gauge bosons (with BR($Z \to \nu\nu$) = 0.2 and BR($W \to l\nu$) = 0.21). Other combinations of extra jets do not provide enough b-jets or light jets, and more than four extra jets are beyond our simulation capacity. Then, taking into account the generator setup, we expect 5.6 × 10^4 events for $Z$+jets and 3 × 10^5 events for $W$+jets with $L = 1000$ fb$^{-1}$.
• Related to the $V$+jets background, the diboson production can be safely neglected in this analysis, since it is subdominant, amounting to roughly 10^-3 times the $V$+jets background (which, as we will see, is already under control).
• The $t\bar t$ production, with both fully hadronic and semileptonic decay channels, is the most dangerous background. The corresponding branching fractions are BR($t\bar t_{had}$) = 0.457 and BR($t\bar t_{semilep}$) = 0.438. After the generator-level cuts, we expect 1.36 × 10^6 and 0.42 × 10^6 events, respectively. We also consider one extra jet in the simulation, resulting in 0.83 × 10^6 and 0.25 × 10^6 additional events for the hadronic and semileptonic channels, respectively.
• Concerning the $t\bar t + X$ backgrounds, even though they are much smaller than the $t\bar t$ ones, the extra boson provides a genuine source of missing energy (more b-jets) for the hadronic (semileptonic) top-quark pair. Explicitly, we consider $t\bar t_{had}$ + ($Z \to \nu\nu$), $t\bar t_{had}$ + ($W \to l\nu$), $t\bar t_{semilep}$ + ($Z \to b\bar b$), $t\bar t_{semilep}$ + ($\gamma^* \to b\bar b$), and $t\bar t_{semilep}$ + ($h \to b\bar b$). We also include one extra jet in each process, leading to 2.9 × 10^3 expected events in this category.
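The expected yields quoted above follow from the elementary relation N = σ × BR × L. A minimal sketch is given below; the cross-section value is an illustrative placeholder, not the one used in the paper's simulation, and the quoted yields additionally fold in the generator-level cut efficiencies.

```python
# Expected number of events: N = sigma [fb] * BR * L [fb^-1].
LUMI_FB = 1000.0  # integrated luminosity, fb^-1

def expected_events(sigma_fb: float, br: float = 1.0) -> float:
    return sigma_fb * br * LUMI_FB

# Illustration with a placeholder ttbar cross section of ~990 pb
# (9.9e5 fb) and the hadronic branching fraction quoted in the text:
print(f"{expected_events(9.9e5, br=0.457):.2e} hadronic ttbar events")
```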
Next we will perform a characterization of the signal against the dominant SM backgrounds in order to define the most promising signal regions for our search strategy. In our analysis, the previously defined backgrounds are separated into four categories: $t\bar t_{had}$ + j (inclusive), $t\bar t_{semilep}$ + j (inclusive), $V$+jets, and $t\bar t + X$ + j (inclusive). In Fig. 2 we depict the distributions of the fraction of signal and background events as a function of the number of identified b-jets, $N_b$ (left panel), and the number of light jets, $N_j$ (right panel). In order to suppress one of the most dangerous backgrounds, the semileptonic $t\bar t$ production, we first set a lepton veto ($N_\ell = 0$), which has already been imposed on the distributions in both plots of Fig. 2. One of the most challenging tasks of the proposed signature is the identification of b-jets, since the signal is characterized by four bottom quarks coming from the Higgs boson decays. It is clear from the left panel of Fig. 2 that requiring the identification of 4 b-jets would reduce the number of signal events to less than half. Therefore, we impose two classes of selection cuts related to the number of identified b-jets: a loose cut with at least 2 b-jets in the final state ($N_b \geq 2$) and a tight cut requiring at least 3 reconstructed b-jets ($N_b \geq 3$). The signal also contains 4 light jets, so we add to the selection-cut set the requirement of having at least 4 light jets in the final state ($N_j \geq 4$). Thus, the selection cuts that characterize our signal are the lepton veto, $N_b \geq 2$ (loose) or $N_b \geq 3$ (tight), and $N_j \geq 4$. The $p_T^{b_1}$ distributions for the background events have their maximum around 100 GeV, with a sharp drop after that. It is also easy to check that the $p_T^{b_1}$ distribution for the signal falls much more slowly, with its maximum around 500 GeV. Recall also that the simulation of the backgrounds has been performed with the generator-level cuts, while the signal events have been simulated with only the default cuts. Therefore, a severe cut on $p_T^{b_1}$ will help to greatly reduce the background events without affecting the signal events too much. On the other hand, a priori no similar conclusion can be drawn from the $p_T^{j_1}$ distributions of the backgrounds, which mimic the signal distribution very well. However, we will see later, when we define our search strategy, that the cuts on the $p_T$ of the four leading light jets remove a large number of background events. The $E_T^{miss}$ distribution for the signal is practically flat (in the range from 200 GeV to 600 GeV, more or less), while for the backgrounds it peaks below 100 GeV and drops sharply thereafter, with a very small fraction of events above 200 GeV. It is therefore to be expected that a cut around this value eliminates much of the background without much change in the number of signal events. In addition, our signal presents a significant peak around 1500 GeV in the hadronic activity distribution, while the peaks of the $H_T$ distributions for the backgrounds are below 1000 GeV, with a very small fraction of events above this value. Again, an $H_T$ cut at 1000 GeV or above should be very useful for removing the backgrounds while keeping a large proportion of signal events. The $E_T^{miss}$ significance distributions for the backgrounds are mostly below 5, with peaks around values of 2-3. The signal distribution, however, is much less steep, being more or less flat between 5 and 15. From this we can also conclude that an $E_T^{miss}$ significance cut above 5 should be very helpful in reducing the backgrounds without affecting the signal.
Finally, the effective mass $m_{eff}$ also appears to be a very efficient variable for separating signal from background. The signal distribution peaks around 1800 GeV, while the background ones peak around 700-800 GeV, with very few events beyond 1300 GeV.
All these six kinematic variables, shown in Fig. 3, together with the transverse momenta of the subleading light jets and b-jets, not shown here to save space, indicate in general a very distinct behavior between signal and background. This motivates the definition of our search strategy, through the cuts shown below, separated into two signal regions: a first signal region (SR1) in which we ask for at least two b-jets in the final state, and another one (SR2) with at least three reconstructed b-jets. Both signal regions also require at least four light jets, and detector-level $p_T$ cuts are applied to all the jets (Eq. (3)). Based on the above, we define the SR1 search strategy with the following cuts (a schematic implementation of this selection is sketched below):
• loose selection cuts of Eq. (2),
• loose $p_T$ cuts of Eq. (3),
• $E_T^{miss} > 150$ GeV,
• and $m_{eff} > 1800$ GeV,
with the corresponding significances computed from Eqs. (4) and (5), the latter with a background systematic uncertainty of 30%.
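The sketch below implements the SR1 selection on per-event kinematics. Since the exact thresholds of Eqs. (2) and (3) are not recoverable here, the per-jet $p_T$ values are placeholders; only the $E_T^{miss}$ and $m_{eff}$ cuts and the multiplicity requirements come from the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    n_leptons: int
    jet_pt: List[float]   # light-jet pT, descending order [GeV]
    bjet_pt: List[float]  # b-jet pT, descending order [GeV]
    met: float            # missing transverse energy [GeV]
    ht: float             # hadronic activity H_T [GeV]

# Placeholder thresholds standing in for Eqs. (2)-(3):
JET_PT_MIN = [100.0, 80.0, 60.0, 40.0]  # four leading light jets
B1_PT_MIN = 200.0                       # leading b-jet

def m_eff(ev: Event) -> float:
    """Effective mass: MET plus the hadronic activity H_T."""
    return ev.met + ev.ht

def passes_sr1(ev: Event) -> bool:
    if ev.n_leptons != 0:                          # lepton veto
        return False
    if len(ev.bjet_pt) < 2 or len(ev.jet_pt) < 4:  # N_b >= 2, N_j >= 4
        return False
    if any(pt < cut for pt, cut in zip(ev.jet_pt[:4], JET_PT_MIN)):
        return False
    if ev.bjet_pt[0] < B1_PT_MIN:                  # hard leading b-jet
        return False
    return ev.met > 150.0 and m_eff(ev) > 1800.0
```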
In order to study the potential of our search strategies, we make use of the following expression for the statistical significance of the number of signal events, S, with respect to the number of background events, B [27,28]:

$S_{sta} = \sqrt{2\left[(S+B)\ln\left(1+\frac{S}{B}\right)-S\right]}$. (4)

In addition, to obtain a more realistic estimate of the significances, we can take background systematic uncertainties into account by modifying Eq. (4) as follows [27,28]:

$S_{sys} = \sqrt{2\left[(S+B)\ln\frac{(S+B)(B+\sigma_B^2)}{B^2+(S+B)\sigma_B^2}-\frac{B^2}{\sigma_B^2}\ln\left(1+\frac{\sigma_B^2\,S}{B(B+\sigma_B^2)}\right)\right]}$, (5)

where $\sigma_B = \Delta B \cdot B$, with $\Delta B$ being the relative systematic uncertainty, which we choose, in a conservative way, to be 30%.
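Eqs. (4) and (5) are straightforward to implement. The sketch below reproduces the SR1 significances quoted later in the text for 3000 fb^-1 (S = 16.5 and B = 8.1, obtained by scaling the 1000 fb^-1 yields of Tab. 1 by a factor of three):

```python
import math

def s_sta(S: float, B: float) -> float:
    """Median significance of Eq. (4) (Asimov approximation)."""
    return math.sqrt(2.0 * ((S + B) * math.log(1.0 + S / B) - S))

def s_sys(S: float, B: float, dB: float = 0.30) -> float:
    """Eq. (5): significance with a relative background systematic dB."""
    sb2 = (dB * B) ** 2  # sigma_B^2
    t1 = (S + B) * math.log((S + B) * (B + sb2) / (B * B + (S + B) * sb2))
    t2 = (B * B / sb2) * math.log(1.0 + sb2 * S / (B * (B + sb2)))
    return math.sqrt(2.0 * (t1 - t2))

# SR1 at 3000 fb^-1: S = 3 x 5.5, B = 3 x 2.7.
print(f"S_sta = {s_sta(16.5, 8.1):.2f}")  # -> 4.66, as quoted
print(f"S_sys = {s_sys(16.5, 8.1):.2f}")  # -> 3.23, as quoted
```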
We are now in a position to apply our search strategies to the signal and background events generated for an LHC energy of 14 TeV and a total integrated luminosity of 1000 fb$^{-1}$.
In Tabs. 1 and 2 the cut flows of the SR1 and SR2 signal regions are shown, respectively, together with their corresponding significances as each of the cuts is applied. In the SR1 case (Tab. 1), we see that the selection cuts reduce all the background events by more than one order of magnitude, while keeping 75% of the signal events. In this signal region, the loose $p_T$ cuts are very efficient, reducing the backgrounds by more than two orders of magnitude and the signal by only half. The $E_T^{miss}$ cut is also very useful, eliminating most of the $t\bar t$ and $t\bar t + X$ events and bringing the $V$+jets background to zero, while barely affecting the signal events. Finally, the $m_{eff}$ variable eliminates most of the $t\bar t$-like events, leaving only 2.7 events of the total $t\bar t$ background and keeping 5.5 signal events, more than 25% of those initially expected. This all adds up to a final statistical significance close to the evidence level, and somewhat greater than 2 when considering 30% systematic uncertainties in the background. The results for the SR2 search strategy are more encouraging, as shown in the cut flow of Tab. 2. The tight selection cuts reduce the hadronic $t\bar t$ background by two orders of magnitude and all other backgrounds by more than three orders of magnitude, while keeping half of the signal events. The $p_T$ cuts eliminate the $V$+jets background and again reduce the remaining backgrounds by more than two orders of magnitude, with half of the remaining signal events surviving. The $E_T^{miss}$ cut again hardly affects the signal and reduces the hadronic $t\bar t$ background by two orders of magnitude; these events are finally removed by the $m_{eff}$ cut, which hardly modifies the signal, eliminates the $t\bar t + X$ events, and leaves the only surviving background in this signal region, semileptonic $t\bar t$, at 0.36 events. In the end, in this signal region we obtain values above the evidence level for both significance estimates. At this point, it is important to note that in both signal regions the cuts can be further tightened, preserving at least three signal events while killing all the simulated backgrounds at the same time. For instance, for SR1 (SR2) with $m_{eff} > 2100$ GeV ($m_{eff} > 1500$ GeV), 3.2 (3.7) signal events remain and the background events vanish. Notice that this kinematic variable summarizes the main feature of our signal, with several energetic light jets and b-jets, which differs from the more conventional ones (with full decays to the LSP).
The projections for a luminosity of 3000 fb$^{-1}$, assuming that the numbers of signal and background events scale in the same way, are very promising. For the SR1 search strategy we obtain $S_{sta} = 4.66$ and $S_{sys} = 3.23$, and $S_{sta} = 6.08$ and $S_{sys} = 5.32$ for the SR2 case. That is, for the future high-luminosity phase of the LHC, one could expect significances above the evidence level in the SR1 signal region and reach significances larger than the discovery level with the SR2 search strategy, which shows that this class of experimental signatures at the LHC deserves special attention and dedicated searches.
Conclusions
In this work we have developed a proof-of-concept collider analysis at the HL-LHC for a new SUSY signal (whose spectrum evades current LHC searches): $pp \to \tilde g \tilde g \to (\tilde\chi_3^0 jj)(\tilde\chi_3^0 jj) \to (\tilde\chi_1^0 h jj)(\tilde\chi_1^0 h jj) \to 4j + 4b + E_T^{miss}$. The most problematic SM backgrounds of this experimental signature are $t\bar t$, $V$+jets ($V = W$, $Z$), and $t\bar t + X$ ($X$ = $W$, $Z$, $\gamma^*$, $h$), which all turn out to be under control after the cuts of our search strategy. The selection cuts define two signal regions, SR1 with $N_b \geq 2$ and SR2 with $N_b \geq 3$, to which we subsequently applied cuts on the most relevant kinematic variables: the transverse momenta of light jets and b-jets, $E_T^{miss}$, and $m_{eff}$, which is the sum of $E_T^{miss}$ plus the hadronic activity, $H_T$. With a center-of-mass energy of 14 TeV and a total integrated luminosity of 1000 fb$^{-1}$ we reach signal significances close to the evidence level (3σ) for SR1 and above this value for SR2. The prospects for 3000 fb$^{-1}$ are very encouraging, with significances greater than 3σ for SR1 and above the discovery level (5σ) for SR2, indicating that this novel signature deserves the development of dedicated searches by the LHC experiments. | 5,972.4 | 2021-04-28T00:00:00.000 | [
"Physics"
] |
A Review on Sources and Pharmacological Aspects of Sakuranetin
Sakuranetin belongs to the group of methoxylated flavanones. It is widely distributed in Polymnia fruticosa and rice, where it acts as a phytoalexin. Other natural sources of this compound include grass trees, shrubs, flowering plants, cherry, and some herbal drugs, where it has been found in the form of glycosides (mainly sakuranin). Sakuranetin has antiproliferative activity against human cell lines typical for B16BL6 melanoma, esophageal squamous cell carcinoma (ESCC) and colon cancer (Colo 320). Moreover, sakuranetin shows antiviral activity towards human rhinovirus 3 and influenza B virus and was reported to have antioxidant, antimicrobial, antiinflammatory, antiparasitic, antimutagenic, and antiallergic properties. The aim of this review is to present the current status of knowledge of the pro-health properties of sakuranetin.
Introduction
Flavonoids are natural plant polyphenols. Based on their chemical structures and the type of substituents in the aromatic rings, they are classified into several subclasses, such as flavanones, flavonols, flavones, isoflavones, dihydroflavonols, chalcones, anthocyanidins, and catechins. Among them, a large group comprises natural O-methylated flavones, flavanones and chalcones. In the human organism, they exert numerous beneficial health effects. The best known are anticancer, antioxidant, antiinflammatory, antiviral, antidiabetic, antimutagenic and antimicrobial ones [1-6]. Some of these compounds were also characterised as exerting beneficial physiological effects. Among all the flavonoids, sakuranetin is one of the best characterized plant natural products, and one of the most studied phenolic compounds. In plants, it is present either in the glycosylated form, named sakuranin, or as the aglycone.
In terms of chemical structure, sakuranetin, chemically named 4′,5-dihydroxy-7-methoxyflavanone (Scheme 1), has a molecular weight of 286.27 (C16H14O5) and consists of two fused rings, A and C, and a phenyl ring B, which is attached to the C ring at the C-2 position. This flavanone is characterized by the absence of a double bond between C2-C3 in the C ring, and also by the presence of a 5-hydroxy-7-methoxy substitution pattern in the A ring and a single 4′-hydroxyl group in ring B. Sakuranetin is the O-methylated derivative of the best-known citrus flavanone, naringenin.
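As a quick sanity check, the quoted molecular weight follows directly from the molecular formula using standard IUPAC atomic weights:

```python
# Verify that C16H14O5 reproduces the quoted molecular weight of 286.27.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 16, "H": 14, "O": 5}

mw = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
print(f"M(C16H14O5) = {mw:.2f} g/mol")  # -> 286.28, matching ~286.27
```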
In plants, sakuranetin is produced in response to stress and infections [7]. 7-O-Methyltransferase, an enzyme that catalyzes the synthesis of sakuranetin, can be activated by ultraviolet light or by pathogen infection in Oryza sativa [8-10]. The key enzyme in the phenylpropanoid pathway is phenylalanine ammonia lyase (PAL), which is directly involved in the synthesis of flavonoid-type phytoalexins, including sakuranetin and naringenin [11]. According to Tamogami's study, the amino acid conjugates of jasmonic acid elicit the production of the flavonoid phytoalexin sakuranetin in rice leaves [12]. [...] an evergreen shrub native to the Yunnan and Sichuan provinces in China. As a result of the extraction of 6.5 kg of the powder and the subsequent chromatographic purification, 50 mg of pure sakuranetin was isolated. Sakuranetin was also the main compound obtained by purification of an 80% ethanol:water extract of Dicerothamnus rhinocerotis [35]. Its presence has been confirmed in many other plant species, including Artemisia campestris, Boesenbergia pandurata, Baccharis spp., Betula spp., Juglans spp. and Rhus spp. Due to their health-promoting effects, these plants have been used in folk medicine in the form of herbal supplements for the treatment of diabetes, inflammatory diseases, allergies and cancer. Furthermore, sakuranetin is a phytochemical abundantly present in many plant extracts [36,37] and in honey of different floral and geographic origins [38], both well known for their various biological activities. According to the latest reports, the content of sakuranetin in linden honey was the highest among seven types of honey (Table 2) [38].
Metabolism of Sakuranetin in the Human Body
The bioavailability of flavanones is limited due to presystemic elimination (both in the intestine and the liver). It can also be influenced by concomitantly taken drugs and other biologically active compounds supplied in the daily diet. For this reason, the metabolic pathways of biologically active natural dietary components, including polyphenolic compounds, are being studied [39]. Since flavonoids are unstable compounds, it is likely that the observed effects are related to their degradation products (formed, for example, by microbial degradation in the gut or by intestinal and colonic enzymes) rather than to the parent compounds. Many of the flavonoid metabolites have a wide spectrum of biological activity. Therefore, the metabolites of sakuranetin may also be responsible for its therapeutic effects. The major metabolic pathways of sakuranetin in humans include B-ring hydroxylation, 5-O-demethylation, and conjugation with glutathione or glucuronic acid. The phase I metabolites have been identified as naringenin and eriodictyol. Sakuranetin was also found to be a UDP-glucuronosyltransferase (UGT) 1A9 inhibitor, whereas it induced transactivation of the human pregnane X receptor-mediated cytochrome P450 (CYP) 3A4 gene [40].
Additionally, Katsumata et al. [43] reported that the fungus causing the rice blast disease-Pyricularia oryzae (syn. Magnaporthe oryzae) metabolized sakuranetin to sternbin and naringenin, which have a lower antifungal activity than the substrate.
According to the literature, a wide range of biological activities have been ascribed to naringenin and eriodictyol, which showed, among others, anticancer, antioxidant, antidiabetic, cytoprotective, and anti-inflammatory properties [44][45][46]. Some studies confirmed the pro-health effects of the derivatives with free hydroxyl groups (e.g., eriodictyol), in contrast to their esters with glucuronic acid [47]. Similarly, the biological properties of naringenin metabolites were often different from those of the aglycone (naringenin). Therefore, it is of paramount importance to test compounds obtained in transformation under physiological conditions, e.g., using microorganisms or in vitro experimental models.
Anticancer Effects
Methoxyflavonoids are a group of natural substances that have the ability to control the processes of tumorigenesis of various types of cells [48]. Anticancer activity at the molecular level is associated with the modulation of angiogenesis and the influence on cancer cell proliferation and apoptosis [49,50].
Of particular importance is the role of natural phytoestrogens with antitumor activity, which are also an alternative to hormone therapy. They are effective in alleviating menopausal symptoms in women [51]. The results of the studies indicate the importance of phytoestrogens, which contain a methoxy group in aromatic ring A, in inhibiting the development of estrogen-dependent breast, ovarian or prostate cancer cells [52,53]. The antioxidative properties of phytohormones and their derivatives are important in preventing the development of tumours [54,55].
Several studies confirmed the correlation between phytochemicals present in the daily diet, including flavonoids, and the prevention of lifestyle diseases, including cancer [56]. This group of natural compounds causes few adverse effects, has low systemic toxicity and is safe for human use. Singh et al. noted that the administration of hydroxyflavones such as apigenin, an analog of sakuranetin, improved the antioxidant status during carcinogenesis [57]. It was also evidenced that sakuranetin inhibits tumor growth through the apoptosis pathway both in vitro and in vivo; the primary mechanism of its action is the induction of cell death by apoptosis [58]. Park et al. noted that sakuranetin inhibits the growth of human colon carcinoma (HCT-116) cells with an IC50 value of 68.8 ± 5.2 µg/mL [59].
Drira and Sakamoto observed that sakuranetin at the concentration of 15 µmol/L had cytotoxic effects on B16BL6 melanoma cells (MTT assay, after 72 h of treatment) [60]. The results indicated that sakuranetin influences the enzymatic process of melanin production (melanogenesis), through the modulation of the signaling pathways in the melanoma cell line. It was proved that sakuranetin inhibits the ERK1/2 and PI3K/AKT signaling pathways, which are involved in the regulation of proliferation, differentiation, and apoptosis, in response to extracellular signals. In this study, the authors proved the upregulating effect of sakuranetin on tyrosinase (Tyr), tyrosinase-related protein 1 (TRP1), and tyrosinase-related protein 2 (TRP2).
Additionally, sakuranetin isolated from Artemisia dracunculus was found to have potent effects on the inhibition of cell proliferation in esophageal squamous cell carcinoma (ESCC). This compound induced DNA damage as well as mitochondrial membrane potential loss in esophageal cancer cells [37].
Ugocsai et al. studied the effect of flavonoids and other natural compounds on the reversal of multi-drug resistance (MDR) and apoptosis induction in colon cancer cells [58]. Sakuranetin had only a marginal influence on Rhodamine 123 accumulation in multidrug-resistant Colo 320 human colon cancer cells expressing MDR1/LRP, whereas sakuranin, the glucosylated derivative of sakuranetin, was ineffective. Furthermore, there is also some evidence that sakuranetin, being a component of many herbal medicines, may enhance their antiproliferative activity against various human cancer cells, e.g., HT-29 and SGC-7901 [36]. There are also studies on the relationship between the content of bioactive substances, such as flavonoids (including sakuranetin) and free amino acids, and the biological activity of honeys of various origins [38].
Antimicrobial Activity
Compounds of natural origin, due to their high application potential, currently play a significant role in research focused on antimicrobial agents [61][62][63]. With the development of biological methods of treatment, natural substances started to be used as standards or as substrates for the production of more active derivatives [64]. Grecco et al. showed that sakuranetin, extracted from twigs of Baccharis retusa, could be employed as a tool for designing novel and more efficient antifungal agents [18]. The minimum inhibitory concentration (MIC) values of the isolated compound were determined for pathogenic yeasts belonging to the genera Candida (six species) and Cryptococcus (two species/four serotypes), and for S. cerevisiae BY4742 (S288c background). The results showed that sakuranetin at a concentration of 0.63 µg/µL inhibited the growth of all the tested Candida strains by 98-99%, except for C. albicans, which was more sensitive, being inhibited by 99% at 0.32 µg/µL of sakuranetin. The Cryptococcus species displayed a similar behavior: the C. neoformans serotype A (var. grubii) and C. gattii (R265) strains were inhibited by 99% and 97%, respectively, in the presence of 0.32 µg/µL of sakuranetin. The most sensitive was the strain C. neoformans serotype D (JEC21), which showed 98% inhibition at a sakuranetin concentration of 0.08 µg/µL.
Previously, the antifungal activity of sakuranetin was also observed for the phytopathogenic strain Cladosporium sp. [65] and for the clinical strains Trichophyton rubrum and T. mentagrophytes [66].
In order to design a new, highly effective compound with a good affinity for the site of action, changes in the physical properties of the compound that affect distribution, metabolism and interaction with a particular receptor must be taken into consideration. Thus, strategies to improve the biological impact of a bioactive substance relate to replacement of substituents, stiffening or simplifying the molecular structure, and modifications within the side groups. In the study by Aida et al., sakuranetin was prepared in 75% yield from the main citrus flavanone, naringenin, by treatment with ethereal diazomethane under anhydrous conditions [67]. After acetylation of the hydroxy groups, sakuranetin was converted to 7-methoxyapigeninidin by NaBH4 reduction, followed by chloranil dehydrogenation. The results obtained in biological studies suggested that 7-methoxyapigeninidin had a higher antifungal activity than apigeninidin. The results indicated that the presence of the methoxy group at C-7 is important with respect to the antifungal activity against the plant pathogen Gloeocercospora sorghi. Apigeninidin, which has no methoxy group, showed only 25% growth inhibition at a two-fold higher concentration (100 ppm).
In turn, Zhang et al. reported sakuranetin as a competitive inhibitor of the β-hydroxyacyl-acyl carrier protein dehydratase from Helicobacter pylori (HpFabZ) [68]. The authors suggested that sakuranetin functions as an inhibitor of HpFabZ by competing with the substrate crotonoyl-CoA. It was observed that this activity is strongly correlated with the presence of the methoxy group in sakuranetin, which does not occur in the structurally similar flavonoids quercetin and apigenin, used for comparison. The inhibitory activity of the above-mentioned flavonoids against HpFabZ is as follows (IC50, µM): (S)-sakuranetin (2.0 ± 0) > apigenin (11.0 ± 2.5) > quercetin (39.3 ± 2.7). Furthermore, the results obtained using the standard agar dilution method showed that sakuranetin inhibited the growth of Helicobacter pylori ATCC 43504 with a minimum inhibitory concentration (MIC) of 92.5 µM.
Antiprotozoal Properties
As has been described, sakuranetin may be helpful for the development of new therapeutic agents to treat leishmaniasis and Chagas disease in the future. These are parasitic protozoan diseases that affect the poorest populations in the world, causing high mortality and morbidity.
In the Grecco et al. study, sakuranetin was tested in vitro against Leishmania spp. promastigotes and amastigotes and against Trypanosoma cruzi trypomastigotes and amastigotes [69]. It was confirmed that sakuranetin is active against Leishmania (L.) amazonensis, Leishmania (V.) braziliensis, Leishmania (L.) major, and Leishmania (L.) chagasi, with IC50 values in the range of 43-52 µg/mL, and against T. cruzi trypomastigotes (IC50 = 20.17 µg/mL). The results indicated that the presence of both the hydroxyl group at C-4′ and the methoxy group at C-7 is of paramount importance for the antiparasitic activity. Despite the chemical similarity, naringenin, containing three free hydroxyl groups, did not show antiparasitic activity (not active at 150 µg/mL against promastigotes and trypomastigotes or at 300 µg/mL against amastigotes). Additionally, methylation of sakuranetin to sakuranetin 4′-methyl ether made the compound inactive against both Leishmania spp. and T. cruzi (IC50 = 265.6 µg/mL against Leishmania (L.) chagasi promastigotes).
Antiviral Activity
Sakuranetin also shows antiviral activity. Kwon et al. [70] proved that sakuranetin has a strong activity against the influenza B/Lee/40 virus. This activity was shown to be dose- and temperature-dependent. The researchers observed a decrease in the cytopathic effect caused by viral invasion, with a 50% inhibitory concentration (IC50) of 7.21 µg/mL. The therapeutic index (TI) was over 13.87. A considerable inhibitory effect of sakuranetin on viral RNA synthesis, with no visible cytotoxicity, was observed at a concentration of 100 µg/mL.
Moreover, Choi reported sakuranetin to be effective against human rhinovirus HRV3 obtained from the ATCC (American Type Culture Collection, Manassas, VA, USA) and propagated in human epithelioid cervix carcinoma (HeLa) cells [71]. Viruses of this type cause the common cold and are associated with the exacerbation of chronic inflammatory respiratory diseases. In that study, sakuranetin exhibited excellent antiviral activity of approximately 67% against HRV3 at 100 mg/mL and of approximately 41% at 10 mg/mL.
Anti-inflammatory Activity
Since sakuranetin modulates oxidative stress, the NF-κB pathway, and lung function, it may be a candidate for a novel therapeutic agent to prevent and treat acute lung injury (ALI). Bittencourt-Mernak et al. investigated the preventive and therapeutic effects of sakuranetin on lipopolysaccharide (LPS)-induced ALI in mice that were treated with this compound 30 min before or 6 h after instillation of LPS [72]. It was observed that the animals began to show lung alterations 6 h after LPS instillation and that these changes persisted until 24 h after LPS administration. Treatment with sakuranetin reduced the neutrophils in the peripheral blood and in the bronchoalveolar lavage. Sakuranetin treatment also reduced macrophage populations, particularly that of M1-like macrophages. In addition, sakuranetin treatment reduced keratinocyte-derived chemokine (an IL-8 homolog) and NF-κB levels, collagen fiber formation, MMP-9- and TIMP-1-positive cells, and oxidative stress in lung tissues compared with LPS-exposed animals treated with vehicle. Finally, sakuranetin treatment also reduced total protein and the levels of TNF-α and IL-1β in the lung. Mernak et al. have shown a similar effect of sakuranetin, which reduced inflammation and collagen deposition in a murine ALI model [73].
In Kim's study, the mechanism of the anti-inflammatory activity of sakuranetin was investigated using an experimental model of macrophages stimulated with lipopolysaccharide (LPS) and/or interferon-γ [74]. In the cells stimulated with LPS/IFN-γ, sakuranetin inhibited the synthesis of iNOS and COX2. In the case of single stimulation with LPS, sakuranetin inhibited the secretion of TNF-α, IL-6, and IL-12. The expression of the co-stimulatory molecules CD86 and CD40 was also inhibited. At concentrations of 50 and 100 µM, a decrease in proinflammatory cytokine (TNF-α, IL-6, and IL-12) levels was observed as early as after 6 h of incubation.
Sakoda et al. hypothesized that sakuranetin may be a good candidate for the treatment of allergic asthma, which is caused by inflammation of the airways. In a murine experimental asthma model, in vivo sakuranetin treatment at a dose of 20 mg/kg in BALB/c mice reduced serum IgE levels, lung inflammation (eosinophils, neutrophils, and Th2/Th17 cytokines), and respiratory epithelial mucus production in ovalbumin-sensitized (for 30 days) animals. Considering the possible mechanisms, sakuranetin acts by inhibiting ERK1/2, JNK, p38, and STAT3 activation in the lungs. No alterations were found in the livers of treated animals [75].
Santana et al. clarified how sakuranetin treatment (at a dose of 20 mg/kg per day; 10 µL intranasally) affects the mitogen-activated protein kinase (MAPK) and STAT3-SOCS3 pathways in a murine experimental asthma model [76]. Mice were submitted to an ovalbumin asthma-induction protocol and were treated with vehicle, sakuranetin, or dexamethasone. In addition, sakuranetin did not modify cell viability in RAW 264.7 cells in vitro and reduced the LPS-induced NO release and gene expression of IL-1β and IL-6 in these cells. These data show that the inhibitory effects of sakuranetin on eosinophilic lung inflammation may be due to the inhibition of Th2 and Th17 cytokines and the inhibition of the MAPK and STAT3 pathways, reinforcing the idea that sakuranetin may be considered a relevant candidate for the treatment of inflammatory allergic airway disease.
Another research team investigated the anti-inflammatory and antioxidant effects of sakuranetin in lung disease using an experimental model of emphysema induced by the instillation of elastase into C57BL6 mice. Sakuranetin at a dose of 20 mg/kg was diluted in 10 µL of a mixture of DMSO and physiological salt solution (1:4) and delivered intranasally. In the sakuranetin-treated emphysematous animals, reductions in lung inflammation, associated with attenuated lung parenchymal remodeling and alveolar destruction, were observed. Sakuranetin treatment reduced lung inflammation and pro-inflammatory cytokine levels (M-CSF, TNF-α, IL-1β, MCP-1 and MIP-2) in lung homogenates [77].
In another study, Yamauchi et al. noted that sakuranetin may be responsible for the anti-inflammatory effects of Pruni Cortex, a Japanese herbal drug [78]. Sakuranetin, which was present in the ethyl acetate-soluble fraction of the bark extract, significantly inhibited NO induction and inducible nitric oxide synthase (iNOS) expression in rat hepatocytes. Furthermore, this compound decreased the expression of the type 1 IL-1 receptor gene and the phosphorylation of Akt, also known as protein kinase B, which is regulated by phosphatidylinositol-4,5-bisphosphate 3-kinase (PI3K). Additionally, sakuranetin decreased the phosphorylation of activator isoforms of the CCAAT/enhancer-binding protein β (C/EBPβ), which synergistically activates transcription of the iNOS gene with nuclear factor κB (NF-κB). Therefore, sakuranetin inhibited the co-activating activity of C/EBPβ with NF-κB, leading to the suppression of iNOS gene expression in hepatocytes.
Toledo et al. noted that sakuranetin obtained from B. retusa decreased specific IgE antibodies, eosinophilic inflammation, AHR and airway remodelling by reducing oxidative stress, Th2 pro-inflammatory cytokines and chemokines, and NF-κB activation in inflammatory cells in an experimental asthma model [79]. Its effects were similar to those observed in animals treated with corticosteroids for the majority of the parameters evaluated.
Beneficial Role of Sakuranetin in Alzheimer's Disease (AD)
Li et al. evaluated the effect of sakuranetin on spatial discrimination in a rat model of cognitive dysfunction induced by exposure to D-galactose, investigating its effect on malondialdehyde (MDA), superoxide dismutase (SOD) and glutathione peroxidase (GPx) levels, as well as on the expression of interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α) and nuclear factor-κB inhibitory factor-α (IκBα) in the hippocampus of rats [81]. The results obtained suggested that sakuranetin may exert protective effects on brain cells through an antioxidation mechanism. Moreover, the improvement in learning and memory impairment by sakuranetin may also be related to the inhibition of inflammatory mediators in brain tissue.
Other Effects of Sakuranetin
Furthermore, sakuranetin was also reported to enhance adipogenesis and the insulin sensitivity of 3T3-L1 cells through the upregulation of peroxisome proliferator-activated receptor γ2 (PPARγ2). Saito et al. [82] demonstrated that sakuranetin induces the differentiation of 3T3-L1 preadipocytes, as evidenced by increased triglyceride accumulation and glycerol-3-phosphate dehydrogenase (GPDH) activity. Moreover, it was observed that sakuranetin stimulated glucose uptake in differentiated 3T3-L1 adipocytes and may sensitize adipocytes to insulin, which suggests that it may contribute to maintaining correct glucose homeostasis in animals.
Hernández et al. proved that sakuranetin inhibits the production of leukotrienes, which are among the strongest inflammatory mediators. It acts as a selective inhibitor of 5-lipoxygenase, the enzyme responsible for their synthesis [83].
Moreover, as described, some natural products bearing a methoxy group, as well as antioxidants, may activate or inhibit DNA repair, which may have an effect on inhibiting cancer processes. Double-strand breaks (DSBs), which may be caused, e.g., by reactive oxygen species, disrupt the integrity of DNA in human cells. Failed or improper DSB repair may lead to genomic instability and, eventually, to mutations, cancer, or cell death. Charles et al. noted that sakuranetin in vitro activated non-homologous end-joining (NHEJ), which is the major pathway used by higher eukaryotic cells to repair these lesions [84].
Flavonoids, including sakuranetin and its derivatives, qualify as anti-aging agents due to their capability to absorb UV radiation and their well-documented antioxidant activity. It has been proven that they inhibit the expression of ultraviolet radiation-mediated matrix metalloproteinases (MMPs).
Jung et al. studied the influence of methoxyflavonoids, including sakuranetin, isosakuranetin, homoeriodictyol, genkwanin, chrysoeriol and syringetin on skin photodamage caused by UV-B irradiation [85]. Among all tested substances, the most active was isosakuranetin, which is the isomer of sakuranetin. Isosakuranetin inhibited UV-B-induced phosphorylation of mitogen-activated protein kinase (MAPK) signaling components, ERK1/2, JNK1/2 and p38 proteins. This result suggests that the ERK1/2 kinase pathways likely contribute to the inhibitory effects of isosakuranetin on UV-induced MMP-1 production in human keratinocytes. According to these results, isosakuranetin also prevented UV-B-induced degradation of type-1 collagen in human dermal fibroblast cells. In contrast, sakuranetin was inactive. The other methoxyflavonoids showed no significant inhibition effect on UV-B-induced MMP-1 mRNA expression.
In order to evaluate the bitter-masking potential of sakuranetin, an in vitro study was performed using HEK-293T cells in which a chimeric G-protein α-subunit was expressed [86]. In addition, the cells were transfected with the human bitter receptor hTAS2R31, which is coupled with the G-protein and responsible for bitter taste perception. Sakuranetin at a concentration of 25 µM inhibited the activation of hTAS2R31 by saccharin (1 mM) by over 50% (IC50 5.5 ± 2.5 µM). In order to verify the in vitro results, the activity of a 1% ethanolic solution of sakuranetin was confirmed in a bitterness-masking test with the participation of four qualified testers. The test was performed in the presence of acesulfame-K, which is a known hTAS2R31 receptor activator. However, the low water solubility of sakuranetin is a drawback for broader research. Thus, new methods of its functionalization are needed in order to improve its bioavailability in vivo.
Conclusions
To summarize, the multidirectional biological effects of sakuranetin are very promising and predispose this compound to further multifaceted research on its use as a drug in many areas of medicine. Although various antimicrobial effects have been described, the biological application of this property remains unclear, since the compound also has cytotoxic effects. Further studies on its pharmacological activity are therefore needed, together with more efficient production methods and new delivery methods. These studies should include detailed in vivo tests, further research on anticancer activity involving new cell lines, and investigation of the modulatory role of sakuranetin in biochemical pathways (biotransformations). All of this may contribute to a better understanding of the mechanisms of action of natural methoxyflavones (including sakuranetin) and their potential medical use. The aim is to achieve feasible sakuranetin-based clinical formulations.

| 5,384.4 | 2020-02-01T00:00:00.000 | ["Biology"] |
Two-year follow-up of a direct pulp capping and dental fragment bonding with self-adhesive cement – Case report
Fractures in permanent teeth due to trauma have become an increasingly frequent problem, and these fractures often affect the dentin-pulp complex. Direct pulp capping with MTA and tooth fragment reattachment with a self-adhesive resin cement proved to be a minimally invasive procedure with a high success rate over a one-year follow-up. The objective of this article is to discuss relevant aspects of tooth fragment reattachment and direct pulp capping, reporting a clinical case of anterior tooth fracture. MTA was selected as the direct pulp capping material, and the tooth fragment was bonded with a self-adhesive resin cement. Clinical examination at the one-year recall showed excellent function and esthetics, pulp vitality and periodontal health.
INTRODUCTION
The largest number of coronal fractures occurs in anterior teeth, mainly in children and adolescents (ANDREASEN; ANDREASEN, 2007; DIETSCHI et al., 2000). The tooth most affected by this type of injury is the maxillary central incisor, due to its more anterior position (ZUHAL et al., 2005; BRUSCHI-ALONSO et al., 2010). One conservative and aesthetic way to rehabilitate traumatized teeth is bonding the original tooth fragment to the fractured substrate, the so-called tooth fragment reattachment (FARIK et al., 2002; CORRÊA-FARIA et al., 2010). With this approach, clinical time is decreased, and there is less wear and more predictable long-term results compared with composite restorations (FARIK et al., 2002). Authors have reported a 10-year follow-up case of a tooth fragment reattachment in a lower canine, showing the longevity that this technique may achieve (RESTON et al., 2014). In another study, Moura et al. (2013) reported 18 years of success in bonding a homogeneous tooth fragment.
When dental trauma results in pulp exposure, it is necessary to protect the exposed remnant tissue. Direct pulp capping is indicated when the pulp is accidentally exposed during cavity preparation or by trauma at least 24 h after the accident (ANDREASEN; ANDREASEN, 1991). Several materials are available for use in this technique, such as mineral trioxide aggregate (MTA) and calcium hydroxide (CH). Current biocompatibility technology allows the application of these materials in direct contact with the pulp, in cases of small exposures and absence of bleeding, in order to stimulate dentin bridge formation (ANDREASEN et al., 1995; SAWICKI et al., 2008). Direct pulp capping using MTA has proved to be more effective and longer-lasting than calcium hydroxide (WITHERSPOON, 2008; MENTE et al., 2014).
According to Reis et al. (2009), several materials may be used for bonding dental fragments, such as resin-modified glass ionomers, flowable composites and resin cements. In another study, the authors used a conventional microhybrid composite resin to reattach the tooth fragment (MACEDO et al., 2008). The use of an adhesive system alone to bond the tooth fragment has also been reported in the literature (VADINI et al., 2011). In recent years, a new generation of materials has been developed with the main purpose of reducing the number of clinical steps, consequently reducing the overall clinical time required. This is the case of self-adhesive resin cements (RADOVIC et al., 2008). However, there are few studies in the literature on the use of these resin cements to reattach tooth fragments. Thus, the objective of this case report is to describe the treatment and two-year follow-up of a crown fracture with pulp exposure, treated with MTA as the direct pulp-capping agent and restored by fragment reattachment combined with a self-adhesive resin cement.
II. METHOD
A 22-year-old male patient suffered a fall and fractured the crown of the upper right central incisor (tooth 11 in the ISO system or tooth 8 in the universal numbering system) (Fig. 1). The tooth fragment was recovered by the patient and kept in water until his appointment at the Clinic of the State University of Ponta Grossa, PR, Brazil (Fig. 2). During clinical examination, the patient reported only a slight sensitivity in the dental element. Intraoral examination revealed an oblique fracture line and pulp exposure (Fig. 3), but no alveolar bone fracture. The initial radiograph indicated complete root formation and a closed apex with no periapical radiolucency (Fig. 4). A vitality test was conducted (Endo Ice, Maquira, Maringá, PR, Brazil) and the tooth responded positively. There was no pain on percussion. The other teeth were not affected by the trauma. The treatment plan consisted of direct pulp capping, as the exposure was recent, and tooth fragment reattachment. The patient agreed with the treatment plan and signed the written consent form.
Prophylaxis and infiltrative anesthesia (3% Citanest, prilocaine hydrochloride and felypressin) were performed. The operating field was isolated with a rubber dam and the dental fragment was placed in position to evaluate its adaptation. After obtaining hemostasis, the exposed area was cleaned with copious irrigation with saline solution and air-dried. The dentin-pulp complex was then protected by direct pulp capping with MTA (Fig. 5), and the tooth fragment was bonded with the self-adhesive resin cement. In order to mask the fracture line, the enamel/fragment interface was beveled with a spherical diamond bur (#1014, KG Sorensen, São Paulo, SP, Brazil), acid-etched for 30 s (Fig. 7) (37% phosphoric acid, Condac, FGM, Joinville, SC, Brazil) and cleaned with air/water spray. A single coat of adhesive system (Adper Single Bond 2, 3M ESPE, St. Paul, MN, USA) was applied and light-cured for 10 s. A composite resin (shade A2, IPS Empress Direct, Ivoclar Vivadent, Schaan, Liechtenstein) was placed over the fracture line (Fig. 8) and light-cured for 40 s (Radii-Cal). After that, the resin was finished with abrasive discs (Sof-Lex, 3M ESPE, St. Paul, MN, USA), polished with abrasive silicone tips (Optimize, TDV, Pomerode, SC, Brazil) and diamond polishing paste (Diamond Gloss, TDV) (Fig. 9). The occlusion was carefully checked and adjusted in all excursive movements.
III. RESULTS
The patient was recalled at 1 week, 1 month, and 6 months. At the recall appointments, new radiographs were taken, and the pulp-capped tooth was tested with a cold stimulus, responding positively at every session. The two-year recall also showed adequate results in terms of aesthetics and functionality (Fig. 10), and another radiograph was taken (Fig. 11).
IV. DISCUSSION
Dental trauma is considered a public health problem, causing psychological, aesthetic and physical damages (DIAZ et al., 2010). Reattachment of a fragment to the fractured tooth may result in a positive psychological response by the patient (MAIA et al., 2003).
Dental fragment reattachment is one of the suitable techniques to re-establish aesthetics and function in a fractured dental element (REIS et al., 2004). In contrast to a conventional composite resin restoration, which may not fully reproduce the original color and natural contours, fragment bonding preserves the color, texture, incisal translucency and original tooth anatomy; in addition, the clinical time required for a fragment reattachment is shorter than for a composite resin reconstruction, and it is a low-cost technique for the patient (GOENKA et al., 2010).
However, in more severe coronal fractures, where there is pulp exposure, it is necessary to establish the degree of pulp involvement. Thus, collection of subjective and objective data is required to reach a well-conducted diagnosis (DIANGELIS et al., 2012). In the present case report, periapical radiographs and a pulp sensitivity test were used to achieve the correct diagnosis and, consequently, a correct treatment plan. It is important to take control radiographs at the recall appointments to verify the success of the treatment, as was done in the present study. Radiographs are very important when dental trauma occurs, in order to analyze whether there are periapical lesions, root fractures, invasion of the biological space or involvement of other teeth.
The sensitivity test was done to evaluate the degree of pulp involvement. A thermal test (cold or hot) or an electric test may be selected (DIANGELIS et al., 2012). In this study, the cold test was chosen as it presents higher sensitivity and specificity compared to other tests (DIANGELIS et al., 2012). Once the tooth responded positively, the most recognized procedure to protect the dentin-pulp complex is direct pulp capping. The main purpose of this conservative approach is to place a biocompatible material directly on the exposed pulp to promote the formation of a dentin bridge (ASGARY; AHMADYAR, 2013; ROTSTEIN; INGLE, 2019). This procedure aims to maintain and preserve dental pulp vitality, function and health, besides being a minimally invasive therapy compared with conventional endodontic treatment, such as pulpotomy and biopulpectomy (BERMAN; HARGREAVES, 2015). The most commonly employed materials in this technique are MTA and calcium hydroxide. Although calcium hydroxide is the most frequently used material for direct pulp capping, having been considered the gold standard for this technique for many years, MTA may also be used for this procedure, and satisfactory results have been reported in recent years (WITHERSPOON, 2008; TORABINEJAD; PARIROKH, 2010; MENTE et al., 2014; LI et al., 2015; ROTSTEIN; INGLE, 2019). MTA is a calcium silicate-based cement composed of tricalcium silicate, tricalcium aluminate, tricalcium oxide, silicate oxide and other mineral oxides. Like calcium hydroxide, MTA induces the formation of dentinal bridges (TORABINEJAD; PARIROKH, 2010; ROTSTEIN; INGLE, 2019).
Several studies compared the efficacy of MTA and calcium hydroxide in direct pulp capping (WITHERSPOON, 2008; GOENKA et al., 2010; MENTE et al., 2014; LI et al., 2015). Mente et al. (2014), in a cohort study, observed that MTA is the better option for direct pulp capping compared to calcium hydroxide when a definitive restoration is made immediately after the conservative pulp therapy. Witherspoon (2008) concluded that MTA is an excellent material for vital pulp therapy and better than calcium hydroxide in terms of clinical outcomes, such as a high success rate and long-term sealing capacity.
The materials based on calcium hydroxide tend to dissolve over time and leave a gap between tooth and restoration (HILTON, 2009). Other studies show that both materials induce the formation of hard tissue; however, the hard tissue formed in teeth treated with MTA is more homogeneous and thicker than that produced in teeth treated with calcium hydroxide, so the MTA was the material of choice for this case report (SAWICKI et al., 2008;LI et al., 2015).
The pH of both materials is similar (10 and 12, respectively, for calcium hydroxide and MTA) (PARIROKH; TORABINEJAD, 2010). The antimicrobial potential of MTA is greater than that of calcium hydroxide, because the pH of calcium hydroxide falls rapidly, while the pH of MTA remains alkaline for longer periods. Yadav et al. (2013), using a self-adhesive resin cement to bond the tooth fragment, concluded that it is a conservative and less time-consuming treatment option. The application mode of self-adhesive resin cements is significantly simplified, eliminating the etching and adhesive application steps. These materials combine the characteristics of composite resins, self-etching adhesives and, in some cases, luting agents. Positive results reported in the literature for this category of resin cements include lower susceptibility to moisture compared with zinc phosphate cement and conventional resin cements (RADOVIC et al., 2008; GUARDA et al., 2010). Regarding microleakage between the dental substrate and resin cements, the results are controversial. Ibarra et al. (2007) showed decreased microleakage between dentin and conventional total-etch cements compared to self-adhesive cements. On the other hand, Behr et al. (2004) obtained similar marginal adaptation to dentin and enamel with both conventional and self-adhesive resin cements. In terms of clinical outcomes, reduced post-operative sensitivity and color stability over time were also found for the self-adhesive resin cements (BEHR et al., 2004; COSTA et al., 2006). Dental fragment bonding is an extremely conservative procedure, fast and with low cost for the patient, with a very satisfactory, almost imperceptible aesthetic result, making it an excellent treatment choice.
V. CONCLUSION
It is possible to conclude that, with a correct diagnosis, appropriate materials and monitoring over time, a high success rate may be achieved in cases of dental trauma with pulp exposure. Direct pulp capping with MTA and tooth fragment reattachment with a self-adhesive resin cement is a simple and fast procedure, which preserved the tooth integrity and promoted excellent aesthetics after a two-year follow-up.
| 2,794.4 | 2020-08-22T00:00:00.000 | ["Medicine", "Materials Science"] |
Traumatic brain injury and the risk of dementia diagnosis: A nationwide cohort study
Background Traumatic brain injury (TBI) has been associated with dementia. The questions of whether the risk of dementia decreases over time after TBI, whether it is similar for different TBI types, and whether it is influenced by familial aggregation are not well studied. Methods and findings The cohort considered for inclusion comprised all individuals in Sweden aged ≥50 years on December 31, 2005 (n = 3,329,360). Diagnoses of dementia and TBI were tracked through nationwide databases from 1964 until December 31, 2012. In a first cohort, individuals diagnosed with TBI (n = 164,334) were matched with up to two controls. A second cohort consisted of subjects diagnosed with dementia during follow-up (n = 136,233) matched with up to two controls. A third cohort consisted of 46,970 full sibling pairs with discordant TBI status. During a mean follow-up period of 15.3 (range, 0–49) years, 21,963 individuals in the first cohort (6.3% with TBI, 3.6% without TBI) were diagnosed with dementia (adjusted odds ratio [OR], 1.81; 95% confidence interval [CI], 1.75–1.86). The association was strongest in the first year after TBI (OR, 3.52; 95% CI, 3.23–3.84), but the risk remained significant >30 years (OR, 1.25; 95% CI, 1.11–1.41). Single mild TBI showed a weaker association with dementia (OR, 1.63; 95% CI, 1.57–1.70) than did more severe TBI (OR, 2.06; 95% CI, 1.95–2.19) and multiple TBIs (OR, 2.81; 95% CI, 2.51–3.15). These results were in general confirmed in the nested case-control cohort. TBI was also associated with an increased risk of dementia diagnosis in sibling pairs with discordant TBI status (OR, 1.89; 95% CI, 1.62–2.21). A main limitation of the present study is the observational design. Thus, no causal inferences can be made based on the associations found. Conclusions The risk of dementia diagnosis decreased over time after TBI, but it was still evident >30 years after the trauma. The association was stronger for more severe TBI and multiple TBIs, and it persisted after adjustment for familial factors.
Introduction Traumatic brain injury (TBI) is a leading cause of death and disability in individuals aged <45 years in industrialized countries, and it is associated with developing a broad spectrum of pathophysiological symptoms, followed by long-term disability [1]. Accumulating evidence suggests that TBI is also associated with risk of developing dementia [2], a neurodegenerative disease with far-reaching social and medical implications.
Two meta-analyses of retrospective case-control studies have suggested that the risk of Alzheimer disease (AD) is doubled in men, but not in women, after TBI resulting in loss of consciousness [3,4]. Furthermore, a grading system for the risk of developing dementia based on TBI severity has been proposed for non-AD dementia [5] and for AD [6]. The retrospective MIRAGE study documented a fourfold increased risk of AD associated with TBI resulting in loss of consciousness and a twofold increased risk for TBI not resulting in loss of consciousness [6]. Other studies have yielded conflicting results, suggesting no association between previous TBI resulting in loss of consciousness and the development of AD or other types of dementia [7,8].
A previous study from our research group showed that the risk of developing young-onset dementia (YOD) after TBI was low for AD but strongly related to non-AD dementia, in a nationwide population-based cohort of 811,622 men and more than 30 years of follow-up [9]. Other researchers have suggested that a history of TBI accelerates the development of minimal cognitive impairment and AD; a retrospective study showed that the onset of dementia may occur ≥2 years earlier in individuals with TBI [10].
Thus, the details of how TBI is associated with the development, the time of onset, and the different types of dementia remain unclear. The aim of the present study was to examine whether different types of dementia diagnoses are associated with previous TBI and whether any observed association is time dependent, in a nationwide cohort.
Materials
The cohort considered for inclusion in the present study included all men and women aged ≥50 years who lived in Sweden on December 31, 2005 (n = 3,329,360). Using data from Statistics Sweden (www.scb.se), information about early disability pension, civil status, and educational attainment in 2005 was linked to each individual in the cohort. From the total cohort, three component cohorts were formed. In the first retrospective cohort, individuals with TBI diagnoses and no prior diagnosis of dementia were each matched with two individuals without TBI during follow-up, based on birth year and sex. The baseline date for TBI cases and controls was the date of TBI. Controls who died or had a diagnosis of dementia prior to baseline were excluded. This procedure was repeated up to three times for each case. The National Patient Register was searched from 1964 through 2012, to identify prospective diagnoses of dementia.
The second cohort consisted of all full sibling pairs from the total cohort with discordant TBI status during follow-up. The baseline date for each pair was the date of TBI. Sibling pairs with death of the sibling without TBI or a diagnosis of dementia in at least one sibling before baseline were excluded. Prospective diagnoses of dementia from 1964 through 2012 were identified using the National Patient Register. The purpose of the sibling cohort was to adjust for potential uncontrolled confounding due to familial factors that would not be captured in the medical record.
In the third cohort, all subjects diagnosed with dementia during follow-up were matched with up to two controls with no dementia diagnosis during follow-up, based on birth year and sex. Controls who died prior to the date of dementia for the corresponding cases were excluded. This procedure was repeated up to three times for each case. The baseline date in this case-control cohort was the date of dementia for each case and corresponding controls. In this cohort, a retrospective search of the National Patient Register through 1964 was conducted to identify TBIs occurring before baseline. The purpose of this nested case-control cohort was to evaluate the results from the cohort study. The Regional Ethical Review Board in Umeå and the National Board of Health and Welfare approved this study. There was no written prospective research protocol for the analyses presented in the present study. However, the analyses presented were preplanned, with the exception of Fig 1, Fig 2 and Fig 3, which were constructed during the revision process in response to reviewer comments.
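The matching logic described for these cohorts can be sketched in a few lines of code. The sketch below is illustrative only and is not the authors' implementation; the data layout and column names (id, sex, birth_year, tbi_date, death_date, dementia_date) are assumptions made for the example, and it encodes only the eligibility rules stated above (same sex and birth year, no TBI during follow-up, alive and dementia-free at the case's baseline date, up to two controls per case).

```python
# Illustrative sketch of the matched-cohort construction described above (not the authors' code).
import pandas as pd

def match_controls(cases: pd.DataFrame, population: pd.DataFrame,
                   n_controls: int = 2, seed: int = 1) -> pd.DataFrame:
    """Match each TBI case with up to two TBI-free controls of the same sex and birth year."""
    rows = []
    for case in cases.itertuples():
        eligible = population[
            (population["sex"] == case.sex)
            & (population["birth_year"] == case.birth_year)
            & (population["tbi_date"].isna())                                        # no TBI during follow-up
            & (population["death_date"].isna() | (population["death_date"] > case.tbi_date))
            & (population["dementia_date"].isna() | (population["dementia_date"] > case.tbi_date))
        ]
        controls = eligible.sample(min(n_controls, len(eligible)), random_state=seed)
        rows.append({"set_id": case.id, "subject_id": case.id,
                     "baseline": case.tbi_date, "tbi": 1})
        for ctrl in controls.itertuples():
            rows.append({"set_id": case.id, "subject_id": ctrl.id,
                         "baseline": case.tbi_date, "tbi": 0})
    return pd.DataFrame(rows)
```

Each resulting set_id then corresponds to one matched set, which is the grouping unit that the conditional analysis described later conditions on.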
Diagnoses of dementia and TBI, death, and other covariates of interest
The Swedish National Patient Register (SNPR), controlled by the National Board of Health and Welfare, was searched through December 31, 2012, to identify diagnoses of dementia and TBI using appropriate International Classification of Disease (ICD; 8th, 9th, and 10th revisions) codes. Diagnoses of dementia were categorized as AD (ICD-10 code F00), vascular dementia (ICD-10 code F01), and dementia of unspecified type (ICD-10 code F039). For the diagnosis of dementia, the ICD-8 and ICD-9 code 290 was also included. TBI was coded as mild (ICD-10 code S060, ICD-8 and ICD-9 code 850) and more severe (ICD-10 code S06x, excluding S060, ICD-8 and ICD-9 code 851). A second TBI was defined as a new diagnosis recorded at least 6 months after the first diagnosis. Other diagnoses were selected based on known associations with the main exposure, outcome, or death; these included myocardial infarction, stroke, cancer, kidney failure, chronic pulmonary disease, atrial fibrillation, alcohol intoxication, depression, and diabetes. Diagnoses recorded in the SNPR have been validated, with positive predictive values of 85%-95% [11]. The SNPR has a national coverage rate of about 90% for inpatient care from 1970, and all specialized outpatient care has been included since 2001. Diagnoses of death were collected from the National Death Register, also controlled by the National Board of Health and Welfare.
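The diagnostic groupings above amount to a simple code lookup. The following snippet is a hedged illustration based only on the ICD codes listed in this paragraph; it is not the authors' code, and real register extractions handle subcodes and revision-specific formats more carefully.

```python
# Illustrative mapping of the ICD codes listed above to the study categories (simplified).
from typing import Optional

def classify_dementia(icd_code: str) -> Optional[str]:
    if icd_code.startswith("F00"):
        return "Alzheimer disease"
    if icd_code.startswith("F01"):
        return "vascular dementia"
    if icd_code.startswith("F039"):
        return "dementia of unspecified type"
    if icd_code.startswith("290"):           # ICD-8 / ICD-9
        return "dementia (ICD-8/9 code 290)"
    return None

def classify_tbi(icd_code: str) -> Optional[str]:
    if icd_code.startswith("S060") or icd_code.startswith("850"):
        return "mild TBI"
    if icd_code.startswith("S06") or icd_code.startswith("851"):
        return "more severe TBI"             # S06x other than S060
    return None
```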
Statistical analysis
To test whether the association between TBI and the risk of subsequent dementia was time dependent in the prospective cohort, we evaluated Schoenfeld's residuals using the estat phtest command in the Stata software (version 12.1; StataCorp LP, TX). As the test indicated that the proportional hazards assumption was violated, the association between TBI and the risk of dementia was analyzed in intervals of follow-up using multivariable adjusted conditional logistic regression in all three cohorts. For this purpose, the clogit command in the Stata software was used to fit maximum likelihood (fixed-effect) models with the dichotomous dependent variable of interest, i.e., dementia in the retrospective cohort study and sibling cohort, and TBI in the case-control study. The likelihood was then calculated relative to each group, i.e., conditional likelihood was used. The first model was unadjusted, although adjusted for age and sex by design. The second model was additionally adjusted for age at baseline, civil status, education, early retirement pension, and 10 diagnoses at baseline (Table 1). To further illustrate the nonlinear association over time in the cohort study and sibling cohort, restricted cubic splines with four knots were used (resulting in three degrees of freedom), followed by fitting a proportional hazards model [12]. The Stata software and SPSS (version 23; IBM, NY) were used to fit the statistical models and graphically illustrate the results.
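As a rough analogue of this analysis, the sketch below fits a conditional logistic regression to matched sets in Python (the authors used Stata's clogit); the file name, column names and covariate subset are hypothetical, and categorical covariates such as civil status and education would need dummy coding first.

```python
# Hypothetical sketch of the conditional (matched-set) logistic regression described above.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("matched_cohort.csv")       # one row per subject, hypothetical layout

covariates = ["tbi", "age_at_baseline", "early_retirement",
              "diabetes", "depression"]      # illustrative subset of the adjustment set
model = ConditionalLogit(df["dementia"], df[covariates], groups=df["matched_set_id"])
result = model.fit()

print(np.exp(result.params))                 # adjusted odds ratios
print(np.exp(result.conf_int()))             # 95% confidence intervals on the OR scale
```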
Baseline characteristics
Characteristics of the retrospective cohort study and the sibling cohort, in which the risk of dementia diagnoses after baseline was investigated, and characteristics of the case-control cohort, in which the risk of TBI before baseline was investigated, are presented in Table 1. In the retrospective cohort study, more individuals with than without TBI were divorced, received early retirement pensions, and had diagnoses of diabetes and depression. Similar characteristics were found in the sibling cohort, although the mean age at baseline was lower and diagnoses were less common. In the case-control study, individuals with dementia more often had myocardial infarction, stroke, atrial fibrillation, depression, and diabetes at baseline than did those without dementia.
Prospective risk of dementia diagnosis in the cohort study
During a mean follow-up period of 15.3 (range, 0-49) years, 21,963 individuals in the total cohort (6.3% of those diagnosed with TBI, 3.6% of the rest of the cohort) were diagnosed with dementia (fully adjusted odds ratio [aOR], 1.81; 95% confidence interval [CI], 1.75-1.86; Table 2, Fig 4). Other strong risk factors at baseline (p < 0.001 for all) included higher age (aOR, 1.13), early retirement pension (aOR, 3.10), alcohol intoxication (aOR, 1.75), and depression (aOR, 1.41). The association was similar in men (aOR, 1.88; 95% CI, 1.80-1.97) and in women (aOR, 1.75; 95% CI, 1.68-1.82). The risk of dementia diagnosis after TBI decreased rapidly in the first year (Fig 4). Thus, the association between TBI and subsequent dementia was strongest in the first year after TBI (aOR, 3.52; 95% CI, 3.23-3.84; Table 2) but was still increased more than 30 years after TBI (aOR, 1.25; 95% CI, 1.11-1.41; Table 2). The association between TBI and dementia was weaker for the outcome of AD (aOR, 1.58; 95% CI, 1.49-1.69) than for vascular dementia (aOR, 2.17; 95% CI, 2.02-2.32) and unspecified dementia (aOR, 1.78; 95% CI, 1.71-1.85). The risk of dementia diagnosis associated with one mild TBI and one more severe TBI both decreased rapidly in the first years after TBI (Fig 2 and Fig 3).

Table 1. Cohort characteristics at baseline. The first (prospective) cohort was matched according to TBI at baseline, and the risk of dementia was investigated during follow-up; the second cohort consisted of siblings with discordant TBI at baseline; the third (retrospective) cohort was matched according to dementia during follow-up, and diagnoses of TBI were investigated.
Prospective risk of dementia diagnosis in the sibling cohort
During a mean follow-up period of 18.8 (range, 0-49) years, 1,204 individuals in the sibling cohort (1.8% of siblings with TBI at baseline, 0.8% of unaffected siblings) were diagnosed with dementia (aOR, 1.89; 95% CI, 1.62-2.21; Table 3 and Fig 5). The association between TBI and the risk of subsequent dementia was similar to that in the total cohort not consisting of siblings.
Retrospective risk of TBI diagnosis in the case-control cohort
In total, 21,276 individuals in this cohort (7.7% of individuals with dementia, 4.0% of the rest of the cohort; p < 0.001) had a history of at least one TBI before baseline. After adjustment for all confounders, the risk of dementia was highest in the first year after TBI (OR, 3.89).
Discussion
In the present nationwide cohort, with up to 50 years of follow-up, a clear association was observed between previous TBI and the risk of being diagnosed with dementia later in life. The risk of dementia was highest in the first years after TBI, but it was sustained more than 30 years thereafter. The association was also similar in a large cohort of full siblings and similar in men and in women. Finally, the risk of developing dementia appeared to have a dose-response relationship with regard to TBI severity and number of TBIs. The link between TBI and developing dementia has been controversial for some time, due to the conflicting nature of available data. In the present study, the risk of dementia diagnosis was increased by about 80% during a mean follow-up period of 15 years for individuals diagnosed with TBI, compared with the rest of the cohort. The investigation of TBI as a risk factor for dementia entails the risk of reversed causality [13][14][15] or misdiagnosis due to post-concussive symptoms; thus, data from studies with short follow-up periods [16] should be interpreted with caution. In the aging population, dementia can be an underlying risk factor for accidents resulting in TBI, such as car accidents and injurious falls [17,18]. In the present study, the risk of being diagnosed with dementia in the first year after TBI was four to six times higher, compared with individuals with no TBI. Thereafter, this risk declined rapidly. The development of dementia, with impaired executive function and an increased risk of falling, likely began before the time of TBI in some individuals in these cohorts; thus, TBI may have been influenced by reduced cognitive function, with resulting reversed causality. Nevertheless, the significant association observed more than 30 years after TBI cannot be explained by reversed causality. Still, an unknown confounder may explain the increased risk of dementia also with longer followup. In a previous study, the strength of the association between YOD and TBI was reduced markedly after adjustment for confounders [9]. As in the present study, a previous TBI showed stronger associations with non-AD forms. To our knowledge, no previous prospective study with similar power and follow-up time has been reported, preventing direct comparisons to our data. The relation of TBI severity to the risk of developing dementia is a matter of much debate. Evidence from previous studies is not conclusive; several studies have suggested that moderate to severe TBI is an important risk factor for subsequent dementia [4,6,[19][20][21], but others have failed to confirm these results [14,22,23]. The lack of association could be due to limited statistical power, as severe and multiple TBIs are less common than single mild TBI, including in the present study. Data from the present study suggest a clear dose-response relationship, with single mild TBI showing a weaker association with dementia diagnosis than did more severe and multiple TBIs. In support, a recent nationwide Finnish study found that persons with more severe TBI were at increased risk of dementia, compared to those with mild TBI [24]. These graded associations may support the existence of a causal relationship. Another explanation could be that subjects with more severe or multiple TBIs more often have other risk factors for dementia, such as lower cognitive function before TBI [25].
The associations found between TBI and the risk of subsequent dementia diagnosis could also be influenced by familial factors, such as upbringing conditions, education, and genetic factors. To our knowledge, no previous population-based study with a long follow-up period has evaluated these potential influences. In the present study, we thus examined the association in about 47,000 full sibling pairs with discordant TBI status during follow-up. The risk of dementia diagnosis during follow-up was almost doubled in siblings with TBI compared with their counterparts without TBI, and it remained increased more than 10 years after TBI. These results are similar to those obtained for the other cohorts, suggesting that familial factors cannot explain the association between TBI and dementia.
The present study has several limitations that should be considered. Most importantly, no causal inference should be made based on observational data, although the time-dependent associations more than 30 years after TBI and the dose-dependent relationship according to TBI severity and number may support such causality. The strong association demonstrated for individuals with short follow-up is most likely subject to different forms of bias, e.g., reversed causality, as discussed previously. In addition, it is likely that individuals with TBI are subject to a more rigorous control from healthcare and relatives initially after TBI, increasing the chance of being diagnosed with dementia. The results of the present study are based on diagnoses made in specialist care; diagnoses made in primary care were not included, which may have affected the numbers of TBI and dementia diagnoses included in analyses. Furthermore, data were obtained from registers and diagnoses could not be confirmed clinically, although all diagnoses were recorded in the context of specialist healthcare. Nevertheless, a lower sensitivity with respect to these outcomes would, if anything, attenuate the associations found. The main strengths of the present study include the large body of recorded data covering a long period, which provided superior statistical power for the performance of reliable analyses, and a long follow-up period that was not subject to recall bias. The main results of the present study were also evaluated in a cohort of siblings, and the findings were consistent. Thus, this sensitivity analysis supported the validity of the main findings, and because the cohort was nationwide, the external validity of the results is likely to be high.
In summary, the findings of this study suggest the existence of a time- and dose-dependent risk of developing dementia more than 30 years after TBI. The association was stronger for more severe and multiple TBIs than for single mild TBI. The association was also present after adjustment for familial factors in a sibling analysis. Overall, the results may support a causal association between TBI and the risks of different types of dementia. However, given the observational study design, we cannot exclude the possibility that other factors explain the observed associations.

| 4,302.2 | 2018-01-01T00:00:00.000 | ["Medicine", "Biology", "Psychology"] |
Inferring B cell specificity for vaccines using a Bayesian mixture model
Background Vaccines have greatly reduced the burden of infectious disease, ranking in their impact on global health second only to clean water. Most vaccines confer protection through the production of antibodies with binding affinity for the antigen, which is the main effector function of B cells. This results in short-term changes in the B cell receptor (BCR) repertoire when an immune response is launched, and long-term changes when immunity is conferred. Analysis of antibodies in serum is usually used to evaluate vaccine response; however, this is limited, and investigation of the BCR repertoire therefore provides far more detail for the analysis of vaccine response. Results Here, we introduce a novel Bayesian model to describe the observed distribution of BCR sequences and the pattern of sharing across time and between individuals, with the goal of identifying vaccine-specific BCRs. We use data from two studies to assess the model and estimate that we can identify vaccine-specific BCRs with 69% sensitivity. Conclusion Our results demonstrate that statistical modelling can capture patterns associated with vaccine response and identify vaccine-specific B cells in a range of different data sets. Additionally, the B cells we identify as vaccine-specific show greater levels of sequence similarity than expected, suggesting that there are additional signals of vaccine response, not currently considered, which could improve the identification of vaccine-specific B cells.
For a dataset x consisting of clonal abundances in subjects s at time points t, the joint probability of the model is built up conditionally over three latent quantities: γ, the latent allocation vector denoting the allocation of BCR clones to classes (background, vaccine-specific or non-vaccine-specific); z, a binary variable indicating the presence or absence of a clone within an individual; and e, the latent allocation vector denoting the underlying distribution from which the clonal abundances are generated.
The parameter e is not of primary interest, so we marginalise over it and obtain a posterior that is equivalent to a mixture model. The vector θ = (θ_{1,1}, ..., θ_{S,T}) contains the sample-specific parameters associated with the underlying clonal abundance distributions, where NB denotes the density of the negative binomial distribution and dGPD the density of the discretised generalised Pareto distribution [1]. These parameters are subject and time point dependent, allowing for differences between the samples, in particular sequencing depths. The dGPD has a threshold parameter and only assigns probability to values above this threshold. This ensures that it captures only the tail of the distribution (those clones seen in high abundance) and provides the intuitive interpretation that only clones seen at abundances above this threshold could be considered clonal.
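A minimal numerical sketch of one such per-sample abundance distribution is shown below, assuming a three-part mixture (a point mass at zero, a negative binomial body, and a discretised GPD tail above a threshold u). The weights, parameter values and the exact discretisation are illustrative assumptions, not the parameterisation used in the paper.

```python
# Minimal sketch of a per-sample clonal-abundance mixture (illustrative parameterisation only).
import numpy as np
from scipy.stats import nbinom, genpareto

def dgpd_pmf(k, xi, sigma, u):
    """Discretised generalised Pareto: probability of an integer count k >= threshold u."""
    if k < u:
        return 0.0
    return genpareto.cdf(k - u + 1, c=xi, scale=sigma) - genpareto.cdf(k - u, c=xi, scale=sigma)

def abundance_pmf(k, w_zero, w_nb, w_tail, nb_n, nb_p, xi, sigma, u):
    """Mixture of a point mass at zero, a negative-binomial body and a dGPD tail for one sample."""
    p = w_zero * float(k == 0)
    p += w_nb * nbinom.pmf(k, nb_n, nb_p)
    p += w_tail * dgpd_pmf(k, xi, sigma, u)
    return p

# Probability of observing a clone 0, 3 or 500 times in a single sample (toy parameters).
for k in (0, 3, 500):
    print(k, abundance_pmf(k, w_zero=0.3, w_nb=0.65, w_tail=0.05,
                           nb_n=2.0, nb_p=0.5, xi=0.4, sigma=50.0, u=100))
```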
We adopt a flexible approach allowing the model to be applied to a range of data sets, and therefore we use non-informative priors and seek to learn parameters from the data as much as possible. We choose Dirichlet priors for the distributions of γ_i and e_ist, and a Beta prior for z_is; more precisely, p(γ_i = class) = Γ_class for 1 ≤ i ≤ K, class ∈ {bg, vs, ns}, where K is the number of clones and Dir is the symmetric Dirichlet distribution. We set G = W = 1 to give the flat Dirichlet distribution, α = β = 1 to give a uniform distribution, and Θ defines the space of all possible parameter values. The full model is illustrated in plate notation in Figure 1.
Figure 1: Full graphical representation of the model using plate notation.
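To make the abundance component concrete, the sketch below evaluates a per-clone, per-sample likelihood built from the three ingredients described above: a point mass at zero when the clone is absent, a negative-binomial body, and a discretised generalised Pareto tail above the threshold (the tail weight standing in for the marginalised e). The function names, the exact discretisation, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import nbinom, genpareto

def dgpd_pmf(k, shape, scale, threshold):
    """Discretised generalised Pareto pmf on integers k >= threshold,
    obtained by differencing the continuous GPD cdf (an illustrative discretisation)."""
    k = np.asarray(k, dtype=float)
    upper = genpareto.cdf(k + 1.0, c=shape, loc=threshold, scale=scale)
    lower = genpareto.cdf(k, c=shape, loc=threshold, scale=scale)
    return np.where(k >= threshold, upper - lower, 0.0)

def abundance_density(x, present, params):
    """Per-sample clonal-abundance density: point mass at zero if the clone is absent,
    otherwise a mixture of a negative-binomial body and a dGPD tail (weights assumed)."""
    if not present:
        return 1.0 if x == 0 else 0.0
    w_tail = params["w_tail"]  # assumed mixture weight of the high-abundance tail
    body = nbinom.pmf(x, params["nb_size"], params["nb_prob"])
    tail = dgpd_pmf(x, params["gpd_shape"], params["gpd_scale"], params["threshold"])
    return (1.0 - w_tail) * body + w_tail * tail

# Example: likelihood of observing a clone 12 times in one sample, assuming it is present.
theta_st = {"nb_size": 2.0, "nb_prob": 0.6, "w_tail": 0.05,
            "gpd_shape": 0.3, "gpd_scale": 4.0, "threshold": 10}
print(abundance_density(12, present=True, params=theta_st))
```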
Inference
The parameters are fitted to the data sets using an E-M algorithm. Initial parameter values are based on prior belief that vaccine specific clones will be rare, seen at high frequency and shared between multiple samples, and the results are robust to different initial parameter values which maintain these properties. This choice of initial parameters was seen to prevent problems of label switching and to identify clones with properties typically associated with vaccine response, whilst allowing the data to inform the final parameter values.
Restrictions on parameter values allow us to encode additional structure and to link parameters hierarchically. First, we assume no structure in the time profile for the B cell abundances which are not responding to the vaccine, so that ω_bg,t = ω_bg and ω_ns,t = ω_ns for all t. The time profile assumed for the vaccine-specific cells takes the pre-vaccination abundances of vaccine-specific cells to have the same distribution as the background cells (ω_vs,0 = ω_bg), and the post-vaccination abundances to have the same distribution as B cells responding to a stimulus other than the vaccine (ω_vs,t = ω_ns, for t > 0). We also assume that the probability of a clone being observed in a subject is the same for B cells classified as background and those classified as a non-specific response, that is, p_bg = p_ns. Finally, z_is = 0 indicates an absence of B cells in subject s, so in this case we restrict the B cell abundance to being generated by the point mass at zero by defining p(e_ist = 1 | γ_i, z_is = 0, t) = 1.
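A minimal sketch of the E-M structure described in this section, for the three-class allocation (bg, vs, ns): the E-step computes clone-level responsibilities from fixed per-class log-likelihoods, and the M-step re-estimates the class proportions Γ, with a small floor (as noted just below) to avoid degenerate solutions. The per-clone likelihoods and all settings are placeholders rather than the authors' code.

```python
import numpy as np

CLASSES = ("bg", "vs", "ns")

def em_fit(clone_loglik, n_iter=100, floor=1e-3):
    """Simple E-M for the class proportions Gamma.
    clone_loglik: array of shape (K, 3) with log p(x_i | gamma_i = class),
    assumed to come from the abundance/sharing model (here they are given)."""
    K = clone_loglik.shape[0]
    gamma_props = np.full(3, 1.0 / 3.0)  # initial class proportions
    for _ in range(n_iter):
        # E-step: responsibilities r[i, class] proportional to Gamma_class * p(x_i | class)
        log_r = clone_loglik + np.log(gamma_props)
        log_r -= log_r.max(axis=1, keepdims=True)  # stabilise before exponentiating
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class proportions, keeping each above a small floor
        gamma_props = r.sum(axis=0) / K
        gamma_props = np.clip(gamma_props, floor, None)
        gamma_props /= gamma_props.sum()
    return gamma_props, r

# Toy usage with random per-clone log-likelihoods (placeholders, not real data).
rng = np.random.default_rng(0)
props, resp = em_fit(rng.normal(size=(500, 3)))
print(dict(zip(CLASSES, np.round(props, 3))))
```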
In order to prevent convergence to degenerate local maxima, we restrict Γ_class ≥ 0.001, so that there is always some small probability of a clone belonging to any class. | 1,290.8 | 2020-02-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
Transverse single-spin asymmetry of very forward neutral pion
We present in this talk a recent work on the transverse single-spin asymmetry of the very forward neutral pion in polarized p + p collisions at √s = 510 GeV. The triple-Regge formalism describes the RHICf data remarkably well at p_T < 1 GeV. We found that the neutral pion production at low p_T can be interpreted as a diffractive one.
The Regge approach has been successfully applied to soft processes, where the center-of-mass energy s is very large and the transverse momentum is small. In 1970, Mueller found the generalized optical theorem [14], which allows one to relate two-body inclusive scattering at high energies to the relevant Reggeon exchange processes of a three-body reaction. The triple-Regge process is one of the kinematic boundaries of the three-body Mueller amplitude, which can be applied to the kinematic range of the RHICf experiment.
In the present talk, we show how to compute the TSSA of the neutral pion in the very forward direction by using the triple-Regge exchange diagram. The N, N*(1520), ∆(1232), and ∆(1600) baryon trajectories are introduced. The interference between the trajectories yields the very forward TSSA. This might indicate that the TSSA of the neutral pion in the very forward direction can be interpreted as a diffractive one.
2 The differential cross section in the p↑ + p → π0 + X reaction in the triple-Regge limit
The differential cross section in the inclusive pp↑ → π0X collision at high energies is expressed in terms of the missing mass M_X, defined as M_X^2 = (p_p + p_p↑ − p_π0)^2, where X denotes the summation over the phase space of X. Since the RHICf energy is sufficiently large, √s = 510 GeV, the scattering amplitude can be described in terms of Reggeon exchanges. The square of the two-body Reggeon exchange process is then analytically continued to the 3 → 3 reaction in the forward direction by Mueller's generalized optical theorem [14], with β_{i↑λ} a residue function in the Reggeized amplitude and λ the helicity of the baryon. Since the Regge approach does not provide any information on the vertex structure, we adopt effective Lagrangians to compute the residue function. The propagator of the Regge trajectory is defined as in [15,16], where α_i(t) stands for the Regge trajectory of particle i and x_F is the Feynman variable, defined as the longitudinal momentum fraction between the polarized proton and its fragment π0. In the M_X^2 → ∞ limit, the discontinuity on the M_X^2 plane can also be replaced by an appropriate Regge trajectory, as shown in Fig. 1. Then dσ↑ is written as the sum of the triple-Regge diagrams, where s_0 is an energy scaling factor fixed at 1 GeV^2. One could calculate the ppk vertex on the top side of the triple-Regge diagram by using the effective Lagrangian method. However, since we utilize the generalized optical theorem to derive the triple-Regge exchange process, the k trajectories do not carry momentum (t_k = 0). Thus the residue function of this vertex can be written as γ^k_pp = Σ_λ β^k_λλ. The couplings of the three trajectories, G^k_ij, are parametrized as functions of t. The transverse single-spin asymmetry is given as the ratio of the spin-dependent (d∆σ) and spin-averaged (dσ) differential cross sections. In the course of the calculation, one can see that the ij diagonal terms do not contribute to the spin-dependent cross section. This implies that the interference between the i and j exchanges (more specifically, between their signature factors) yields the TSSA in the very forward direction. dσ can be simplified by the parity invariance of the residue function β [17]. First, dσ↑ vanishes when the trajectory k is an unnatural-parity state, so we introduce the Pomeron for the k exchange as the leading trajectory, because it has the largest Regge intercept α(0) among the natural-parity states. Second, d∆σ vanishes if the i and j trajectories have opposite naturalities. The leading contributions are the N–N*(1520) and ∆–∆(1600) interferences for the natural and unnatural parts, respectively. We define the triple-Regge coupling and, in order to fit the RHICf data, a set of new parameters, and finally obtain the TSSA in terms of the triple-Regge amplitude, where P indicates the Pomeron. We list the values of the parameters in Table 1.
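The display equations referenced in this paragraph are not preserved here. As a hedged illustration only, the standard triple-Regge expansion and the asymmetry defined in the text take the schematic form below; the paper's residue functions, couplings G^k_ij(t), and fitted parameters are not reproduced.

```latex
% Schematic triple-Regge form (standard notation and conventions vary;
% this is not the paper's exact expression):
\frac{d\sigma^{\uparrow}}{dt\,dM_X^2}
  \;\propto\; \sum_{i,j,k} G^{k}_{ij}(t)
  \left(\frac{s}{M_X^2}\right)^{\alpha_i(t)+\alpha_j(t)}
  \left(\frac{M_X^2}{s_0}\right)^{\alpha_k(0)},
\qquad
A_N = \frac{d\Delta\sigma}{d\sigma}.
```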
Results and Discussion
Our main interest lies in describing the A_N for the soft diffractive π0 production in the large x_F region, employing the triple-Regge formalism. The numerical results show a remarkable agreement with the RHICf experimental data up to p_T ≈ 0.8 GeV/c, as shown in Fig. 2. The very forward π0 A_N exhibits a rising trend as p_T increases. In Fig. 2, A_N is almost saturated in the region 0.25 ≲ p_T ≲ 0.45 GeV/c. The √|t| factor in the triple-Regge coupling can explain this behavior, since t becomes zero at a point between the third and fourth data points. For smaller p_T, A_N comes from the NN* interference term. As p_T increases, the NN* contribution becomes negative and the large ∆∆* term compensates it, so that the total contribution yields positive values at p_T > 0.5 GeV/c. In addition, A_N starts to increase rapidly and reaches over 20% at about p_T = 0.8 GeV/c. The next-to-leading pole contributions moderate A_N at higher p_T. The N*N* contribution is not sufficient to suppress the large value of A_N, so the ∆*∆* contribution is required to describe the RHICf data.
Figure 1. The triple-Regge approximation of the differential cross section. i, j and k indicate the Regge trajectories.
Figure 2. p_T distribution of A_N with 0.58 < x_F < 1. The square symbol denotes the numerical result of the present work. The circles and solid lines are the RHICf experimental results and errors.
Table 1. Parameter values in A_N. | 1,442 | 2023-01-01T00:00:00.000 | [
"Physics"
] |
From Farm-to-Fork: E. Coli from an Intensive Pig Production System in South Africa Shows High Resistance to Critically Important Antibiotics for Human and Animal Use
Antibiotic resistance profiles of Escherichia coli were investigated in an intensive pig production system in the uMgungundlovu District, South Africa, using the ‘farm-to-fork’ approach. Four hundred seventeen (417) samples were collected from pig and pig products at different points (farm, transport, and abattoir). E. coli was isolated and enumerated using the Colilert® 18/Quanti-Tray® 2000 system. Ten isolates from each Quanti-tray were selected randomly and putatively identified on eosin methylene blue agar. Real-time PCR targeting the uidA gene was used to confirm isolates to the genus level. The Kirby–Bauer disc diffusion method was used to determine the isolates’ antibiotic susceptibility profiles against 20 antibiotics. A total of 1044 confirmed E. coli isolates were obtained across the three critical points in the food chain. Resistance was observed to all the antibiotics tested with the highest and lowest rates obtained against tetracycline (88.5%) and meropenem (0.2%), respectively. Resistance was also observed to chloramphenicol (71.4%), ampicillin (71.1%), trimethoprim-sulfamethoxazole (61.3%), amoxicillin-clavulanate (43.8%), cephalexin (34.3%), azithromycin (23.9%), nalidixic acid (22.1%), cefoxitin (21.1%), ceftriaxone (18.9%), ciprofloxacin (17.3%), cefotaxime (16.9%), gentamicin (15.5%), cefepime (13.8%), ceftazidime (9.8%), amikacin (3.4%), piperacillin-tazobactam (1.2%), tigecycline (0.9%), and imipenem (0.3%). Multidrug resistance (MDR) was observed in 71.2% of the resistant isolates with an overall multiple antibiotic resistance (MAR) index of 0.25, indicating exposure to high antibiotic use environments at the farm level. A high percentage of resistance was observed to growth promoters and antibiotics approved for veterinary medicine in South Africa. Of concern was resistance to critically important antibiotics for animal and human use and the watch and reserve categories of antibiotics. This could have adverse animal and human health consequences from a food safety perspective, necessitating efficient antibiotic stewardship and guidelines to streamline antibiotic use in the food-animal production chain.
Introduction
Animal-based protein consumption continues to increase worldwide due to economic development and urbanization [1]. In 2013, the Food and Agriculture Organization (FAO) of the United Nations published estimates of the average annual consumption of meat per person. Although the South African pork industry is not the largest in terms of the country's overall food animal production sector [32], when combined with the poultry industry, these two sectors are considered the highest consumers of antimicrobials for treatment, disease prevention, and growth promotion [33]. In a previous study, we demonstrated that over 60% of E. coli isolated from an intensive poultry system was resistant to numerous antibiotics, including clinically relevant ones [23]. Nevertheless, that study was limited by the number of isolates tested. Additionally, compared to poultry, pigs have a more extended growth period that typically results in the administration of greater quantities of a broader range of antimicrobials [34]. Despite this, there are limited data on antibiotic resistance in E. coli in intensive pig production in South Africa [35]. Therefore, it is essential to investigate the use of antimicrobials in such sectors and their impact on the emergence and escalation of resistance, as transmission to humans could lead to infectious disease treatment failures. Using the 'farm-to-fork' approach, the current study aimed to investigate the antibiotic resistance profiles of E. coli isolated from an intensive pig production system in the uMgungundlovu District, KwaZulu-Natal, South Africa. Most other pig studies focus on a single point along the continuum, such as the farm alone or the slaughterhouse. However, the 'farm-to-fork' approach used in the current study gives a better picture of the distribution of resistance along the entire continuum by allowing samples to be collected from all points, from the farm, through transport, to the final packaged product, usually following the same batch of animals from their introduction into the farm to slaughter.
Prevalence of E. coli along the Pig Production Chain
E. coli was obtained at each critical point across the "farm to fork" continuum. The distribution of isolates along the continuum was 80.5% (n = 840), 4.1% (n = 43), and 15.4% (n = 161) for farm, transport, and abattoir, respectively (Figure 1).
Figure 1.
Distribution of Escherichia coli isolates along the pig production system, according to source and site.
Antimicrobial Resistance Profile of E. coli Isolated from the Pig Production Chain
Ninety-eight percent (98.3%, n = 1027) of the isolates were resistant to at least one of the antibiotics tested. Resistance was observed to all the antibiotics and classes tested, with the highest and lowest rates obtained against tetracycline (88.5%, n = 924) and meropenem (0.2%, n = 2), respectively (Figure 2). When stratified by site of collection, the highest percentage resistance was observed in the farm isolates. No tigecycline resistance was recorded in either the truck or abattoir isolates (Figure 3).
The overall multidrug resistance (MDR) rate was 71.2% (n = 743). The highest prevalence of MDR was found on the farm, 74.6% (n = 723), while the truck and abattoir MDR rates were 51.1% (n = 43) and 50.3% (n = 161), respectively (Table 1). The difference in the MDR rate between the different sampling sites was statistically significant (Table 1). None of the isolates were pan-drug resistant (Table 1). Two hundred and two (202) MDR patterns were recorded in the study, the most common pattern being AMP-SXT-TET-CHL, found in 4.7% (n = 49) of the isolates (Table 1).
MAR Phenotypes of E. coli
The MAR indices ranged from 0.1 to 0.9. The lowest MAR index recorded among all the isolates was 0.1 (resistance to two antibiotics), and this was found in 15.2% (n = 157) of the total isolates. The highest MAR index was 0.9 (resistance to 18 antibiotics), recorded in 0.1% (n = 1) of the farm isolates. The highest MAR indices recorded in transport and abattoir were 0.7 and 0.75, respectively (Figure 4). There was an overall statistically significant difference (p = 0.000; p < 0.05) between the MAR indices from the different sampling points (Table S1). The multiple pairwise comparison revealed statistically significant differences in the MAR indices between the farm and transport (p = 0.000; p < 0.05), and the farm and abattoir (p = 0.004; p < 0.05). However, there was no statistically significant difference (p = 0.092) between the transport and abattoir MAR indices (Table S1).
Discussion
We investigated the antibiotic resistance profiles of E. coli in an intensive pig production system in the uMgungundlovu District, KwaZulu-Natal, South Africa using the farm-to-fork approach to determine the nature and extent of antibiotic resistance in food animal production. All the samples collected along the continuum were E. coli positive. The antibiotic sensitivity tests revealed high resistance toward commonly used growth promoter analogues such as tetracycline, ampicillin, chloramphenicol, and sulfamethoxazole/trimethoprim. Of note, 71.2% of the E. coli isolates were multidrug-resistant (MDR). The MAR index analysis indicated high exposure of isolates to antibiotics at the farm level.
Prevalence of E. coli across the Pig Production System
Escherichia coli was found across the farm-to-fork continuum with the highest number of positive samples observed on the farm. This was expected since farm samples constituted the largest proportion of the total sample, and the vast majority of samples were fecal. It was also unsurprising that E. coli was isolated from the transport vehicle as the pigs defecated during the transport. However, samples collected from the truck before the pigs' transportation were also positive, indicating that the transport vehicle was not properly cleaned after the initial transport round. This could lead to the transfer of bacteria from one farm to another and the abattoir contributing to food contamination. It was not surprising to find E. coli in the cecal samples and the carcass rinsate at the abattoir. Contamination of carcasses by E. coli during the slaughter of animals leading to bacterial transmission through the food-chain has been reported [12,[36][37][38]. The identification of E. coli in the meat samples in the meat processing area in the current study corroborates the findings of Schwaiger et al. [39]. They reported that almost 50% of the pork samples in their study were positive for E. coli, indicating that fecal contamination during the slaughter process could not be prevented entirely. Despite the careful removal of the animal's internal organs, contamination from the intestinal contents would be challenging and unavoidable, likely due to many animals being processed. Therefore, stricter sanitary conditions should be observed in the abattoir to avoid transmitting these bacteria to consumers. The most appropriate approach would be to ensure proper rinsing of the carcass post-evisceration before sending them to the abattoir's meat portion sections.
Antimicrobial Resistance Profile of E. coli Isolated from the Pig Production Chain
Although resistance to antibiotics is a natural phenomenon [34], their overuse and misuse in humans and animals have significantly escalated antibiotic resistance levels [40]. In the current study, E. coli showed the highest resistance to tetracycline, chloramphenicol, ampicillin, and sulfamethoxazole-trimethoprim, correlating with the use of amoxicillin sodium and trimethoprim-sulphamethoxazole reported by the farm (personal communication). Ampicillin, sulphonamides, and tetracycline have a long history of use in animals [41]. As noted in the OIE Annual Report on Antimicrobial Agents Intended for Use in Animals (2019), the largest proportion of antibiotics used in animal production was tetracyclines, followed by ampicillin and macrolides [42]. In South Africa, tetracyclines are the most commonly used antibiotics in animals after the macrolides; they are registered as a growth promoter in the Fertilizers, Farm Feeds, Agricultural Remedies and Stock Remedies Act (Act 36 of 1947), providing a possible explanation for the resistance observed [43].
Although many studies conducted in other countries have reported similar percentage resistance in E. coli against the antibiotics tested, the resistance varies between countries based on antibiotics and the country's regulations. For example, an Australian study [44] reported the highest resistance was toward tetracycline (68.2%), ampicillin (60.2%), chloramphenicol (47.8%), and trimethoprim/sulfamethoxazole (34.3%), respectively. However, these rates were relatively lower than in our study and reflected the strict use of antibiotics in Australian livestock production. Australia has been reported as one of the five lowest antibiotic users in livestock production globally [45]. On the contrary, a surveillance study in China investigating the antibiotic resistance trends in E. coli originating from food animals during 2008-2015 [41] reported a high resistance rate to tetracycline, sulfamethoxazole, and ampicillin at 94%, 88.36%, and 81.44%, respectively. This could be explained by the fact that in China, using antibiotics both for animal disease treatment and growth promotion is unmonitored [46].
The factors behind the emergence and spread of resistant bacteria are complex. They may be due to coselection, whereby using one antibiotic selects for resistance to other substances [47]. Such has been reported for chlortetracycline, penicillin, and sulfamethazine coselecting for antimicrobials of other classes that were not administered, like aminoglycoside [11]. Indeed, chloramphenicol is not registered for use in food animals in South Africa [48] but, high percentage resistance was still observed, possibly due to the horizontal transfer of chloramphenicol resistome and the coselection of resistance because of the use of other compounds [48,49]. Moreover, it has been found that resistance to sulfamethoxazole, tetracycline, and kanamycin is frequently transferred along with chloramphenicol resistance as the cmA1 gene conferring resistance to chloramphenicol is cocarried with genes encoding resistance to other antimicrobials that are currently approved for use in food animals [50]. Although cephalosporins and quinolones showed a relatively low resistance rate in comparison to other antibiotics, they still need monitoring due to their clinical importance [51]. The WHO has classified both cephalosporins and quinolones as critically important antibiotics for human medicine [20].
Notwithstanding the low percentage resistance to carbapenems, the emergence of carbapenem resistance is grave as the WHO classifies them as critically important antibiotics [20]. Moreover, carbapenems are the last-resort antibiotics for treating a wide range of infections caused by multidrug-resistant Gram-negative bacteria [52].
Prevalence of Multidrug-Resistant and Estimation of MAR Index of E. coli Isolates in the Pig Production Chain
Food animal husbandry is considered an important factor contributing to the distribution of multidrug-resistant bacteria [53]. In this study, multidrug resistance was found in 71% of the total isolates. Almost 200 different antibiogram patterns were found along the continuum. The highest diversity in antibiogram patterns was found on the farm compared to both the transport and abattoir. The most prevalent MDR pattern reported mostly in the three different sampling areas included resistance to commonly used growth promoters and antibiotics in veterinary and human medicine belonging to the same class of antibiotics in different permutations and combinations, indicating the possibility of transmission along the pork production chain.
The development of resistance to these antibiotic classes is a major concern in human and animal medicine because these drugs are commonly used in both practices [54], especially as infections with MDR strains will limit treatment options [55]. The increasing prevalence of MDR E. coli is challenging because E. coli can occupy multiple niches, including humans and animals, thereby acquiring or transmitting antimicrobial resistance genes horizontally and vertically [56].
The multiple antibiotic resistance (MAR) index is used to determine the health risk associated with the spread of resistance in a specified location [54,57]. A MAR index of 0.2 differentiates between low-and high-risk, and a MAR index greater than 0.2 suggests that bacteria were exposed to high antibiotic use environments [58]. The mean MAR index of 0.29 on the farm affirmed the high antibiotic use and high selective pressure in the farm, which was statistically significant (p-value < 0.01) compared to the other sampling points. This could also indicate the possible transfer of the resistant bacteria along the production chain. Worryingly, results in this study revealed an increasing resistance to all antibiotic classes, including critically important antibiotics for human use and the watch and reserve categories of antibiotics leading to serious concern to human health [59,60]. Hence, this reiterates the calls for a holistic review on the use of these antibiotics and growth promoters in the food-animal production chain.
Study Clearance and Ethical Consideration
Ethical approval was received from the Animal Research Ethics Committee (Reference: AREC 073/016PD) and the Biomedical Research Ethics Committee (Reference: BCA444/16) of the University of KwaZulu-Natal. A section 20A permit was further obtained from the South African National Department of Agriculture, Forestry, and Fisheries (Reference: 12/11/1/5).
Study Site and Sample Collection
This longitudinal study was conducted over 18 weeks (September 2018 to January 2019) from birth to slaughter. The collection points consisted of the pig farm, transport system (truck), and the associated abattoir. A total of 417 samples were collected from these points following the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance (WHO-AGISAR) guidelines as follows:
• Farm: Two groups of newborn pigs in two fences labeled A and B were selected for the study. Five fresh pig feces samples were randomly collected per fence, ensuring that each sample was from a different location. Samples were collected twice a month for 18 weeks. Additionally, slurry samples were collected in triplicate from the pipes draining the pig house at each sampling period.
• Transport: After 18 weeks, when pigs reached maturity and slaughter readiness, swab samples were taken from the transport vehicle (truck) before and after loading the pigs for transportation to the abattoir.
• Abattoir: Swabs were collected throughout the slaughter chain, viz. carcass, carcass rinsate, caeca, and pork portions before packaging (head, body, and thigh).
All samples were transported on ice packs to the Antimicrobial Research Unit's microbiology laboratory, University of KwaZulu-Natal, and processed within 4 h from the time of collection.
Sample Processing and Isolation of E. coli
E. coli was isolated using the Colilert ® 18/Quanti-Tray ® 2000 system (IDEXX Laboratories (Pty) Ltd., Johannesburg, South Africa) according to the manufacturer's instructions.
Fecal and Slurry Samples from the Farm
Each fecal sample (1 g) was weighed and transferred into 9 mL of distilled water and vortexed briefly. A total of 100 µL of the supernatant from the resuspended sample was transferred into a 120 mL sterile plastic bottle, and the bottle topped up to the 100 mL mark with sterile distilled water. The 100 mL sample was then processed as per the IDEXX protocol for water samples (IDEXX Laboratories (Pty) Ltd., Johannesburg, South Africa). For the slurry samples, 20 µL of the slurry were analyzed directly without any prior processing.
Rinsate and Swabs (Transport and Abattoir)
For the carcass rinsate, 48 samples were pooled into four equal samples of 12 each, after which 1 mL dilutions were extracted and processed in the same way as the slurry samples. Carcass and meat cut swabs from each site were pooled into four sets of 12 and transferred into 10 mL sterile distilled water. The mixture was vortexed for 1 min to separate the bacteria from the swabs. The subsequent supernatant was then processed further as with the slurry samples. For the cecal samples, 48 different ceca samples were pooled into four samples of 12 each and mixed properly. Twenty-five grams of each mixture was transferred into a sterile container containing 225 mL of sterile distilled water. The mixture was vigorously shaken manually, and then 20 µL of the supernatant was extracted and processed like the slurry samples. A flow diagram depicting the sampling frame used in the farm-to-fork approach is shown in Figure 5.
Molecular Confirmation of E. coli
All the samples collected along the continuum were positive for E. coli, using the Colilert ® 18/Quanti-Tray ® (data not shown) as mentioned previously. Ten isolates from each Quanti-tray were selected randomly and putatively identified on eosin methylene blue agar for further confirmation by PCR, yielding a final sample size of 1044 E. coli isolates. DNA was extracted from these isolates using the boiling method [61]. The extracted DNA was used as a template to confirm E. coli using real-time polymerase chain reaction (PCR), targeting the uidA gene. The reactions were performed in a total volume of 10 µL consisting of 5 µL of Luna ® universal qPCR master mix (New England Biolabs, Ipswich, MA, USA), 0.5 µL from each primer (forward-AAAACGGCAAGAAAAAGCAG and reverse-ACGCGTGGTTAACAGTCTTGCG; final concentration 0.5 µM (Inqaba Biotechnical Industries (Pty) Ltd., Pretoria, South Africa)), 3 µL DNA, and 1 µL of nuclease-free water. The thermal cycling conditions were as previously described [62]. After the final extension step, a melt curve was generated and analyzed as previously described [63]. All reactions were performed on a Quant Studio ® 5 Real-time PCR system (Thermo Fischer Scientific, Waltham, MA, USA). DNA from E. coli ATCC ® 25922 was used as a positive control, while the reaction mixture with no DNA (replaced with nuclease-free water) was used as a no template control.
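As a small illustration of the stated per-reaction volumes (5 µL master mix, 0.5 µL of each primer, 3 µL DNA template, and 1 µL nuclease-free water for a 10 µL reaction), the sketch below scales the shared components to an arbitrary number of reactions; the ~10% pipetting overage is an assumption, not part of the protocol.

```python
PER_REACTION_UL = {
    "Luna universal qPCR master mix": 5.0,
    "forward primer": 0.5,
    "reverse primer": 0.5,
    "DNA template": 3.0,      # added per reaction, not to the shared mix
    "nuclease-free water": 1.0,
}

def master_mix(n_reactions, overage=1.10):
    """Scale the shared components (everything except template) for n reactions,
    with an assumed pipetting overage factor (commonly ~10%)."""
    shared = {k: v for k, v in PER_REACTION_UL.items() if k != "DNA template"}
    return {k: round(v * n_reactions * overage, 1) for k, v in shared.items()}

print(master_mix(96))  # volumes in microlitres for a 96-reaction plate
```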
Determination of Multidrug-Resistant (MDR) and Multiple Antibiotic Resistance (MAR) Index
Isolates showing resistance to ≥1 agent in ≥3 antibiotic classes were considered multidrug-resistant (MDR) [64]. The Multiple Antibiotic Resistance (MAR) index was calculated as a/b, where 'a' was the number of antibiotics to which an isolate was resistant, and 'b' was the total number of antibiotics tested [65].
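The two definitions above translate directly into a short helper; the class names, the example resistance profile, and the numbers below are hypothetical and only illustrate the arithmetic.

```python
from typing import Mapping

def is_mdr(resistant_by_class: Mapping[str, int], min_classes: int = 3) -> bool:
    """MDR: resistant to at least one agent in >= min_classes antibiotic classes.
    resistant_by_class maps each antibiotic class to the number of agents in that
    class to which the isolate is resistant."""
    classes_hit = sum(1 for n in resistant_by_class.values() if n >= 1)
    return classes_hit >= min_classes

def mar_index(n_resistant: int, n_tested: int) -> float:
    """Multiple antibiotic resistance index: a / b."""
    return n_resistant / n_tested

# Hypothetical isolate resistant to 5 of 20 antibiotics, spread over 4 classes.
profile = {"tetracyclines": 1, "phenicols": 1, "penicillins": 2, "sulfonamides": 1}
print(is_mdr(profile), round(mar_index(5, 20), 2))   # True 0.25
```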
Statistical Analysis and Interpretation
The data were analyzed using the Statistical Package for the Social Sciences, SPSS v26 (IBM, Armonk, NY, USA). Descriptive statistics were used to describe the frequency of E. coli isolated from the different sources. The prevalence of MDR isolates and the MAR index of E. coli from the different sampling sources were compared using ANOVA with the Tukey test, and a p-value < 0.05 was considered statistically significant.
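The original analysis was run in SPSS; as an illustrative counterpart only, the sketch below performs a one-way ANOVA across sampling points followed by Tukey's HSD in Python, using made-up MAR index values as placeholders for the study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Placeholder MAR indices for the three sampling points (not the study's data).
farm = rng.normal(0.29, 0.10, 200).clip(0, 1)
transport = rng.normal(0.20, 0.08, 40).clip(0, 1)
abattoir = rng.normal(0.22, 0.08, 60).clip(0, 1)

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(farm, transport, abattoir)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post-hoc comparison (Tukey's HSD).
values = np.concatenate([farm, transport, abattoir])
groups = ["farm"] * len(farm) + ["transport"] * len(transport) + ["abattoir"] * len(abattoir)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```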
Conclusions
To our knowledge, this is the first study in South Africa investigating antibiotic-resistant E. coli in intensive pig farming using the "farm-to-fork" approach. E. coli showed high percentage and multidrug resistance with high MAR indices, suggesting that the isolates originated from high antibiotic use/exposure areas. Of note, high percentage resistance was observed to growth promoters and antibiotics approved for use in veterinary medicine in South Africa. This could have adverse human health consequences from a food safety perspective, necessitating efficient antibiotic stewardship and guidelines to streamline antibiotic use. Therefore, it is recommended that strict regulations regarding the use of antibiotics in food animals in South Africa be developed and implemented to curb this issue. | 5,888.4 | 2021-02-01T00:00:00.000 | [
"Biology",
"Agricultural And Food Sciences"
] |
Biomolecular condensates form spatially inhomogeneous network fluids
The functions of biomolecular condensates are thought to be influenced by their material properties, and these will be determined by the internal organization of molecules within condensates. However, structural characterizations of condensates are challenging, and rarely reported. Here, we deploy a combination of small angle neutron scattering, fluorescence recovery after photobleaching, and coarse-grained molecular dynamics simulations to provide structural descriptions of model condensates that are formed by macromolecules from nucleolar granular components (GCs). We show that these minimal facsimiles of GCs form condensates that are network fluids featuring spatial inhomogeneities across different length scales that reflect the contributions of distinct protein and peptide domains. The network-like inhomogeneous organization is characterized by a coexistence of liquid- and gas-like macromolecular densities that engenders bimodality of internal molecular dynamics. These insights suggest that condensates formed by multivalent proteins share features with network fluids formed by systems such as patchy or hairy colloids.
| 985.6 | 2024-04-22T00:00:00.000 | [
"Materials Science",
"Physics",
"Biology",
"Chemistry"
] |
Vasomotor Reaction to Cyclooxygenase-1-Mediated Prostacyclin Synthesis in Carotid Arteries from Two-Kidney-One-Clip Hypertensive Mice
This study tested the hypothesis that in hypertensive arteries cyclooxygenase-1 (COX-1) remains as a major form, mediating prostacyclin (prostaglandin I2; PGI2) synthesis that may evoke a vasoconstrictor response in the presence of functional vasodilator PGI2 (IP) receptors. Two-kidney-one-clip (2K1C) hypertension was induced in wild-type (WT) mice and/or those with COX-1 deficiency (COX-1-/-). Carotid arteries were isolated for analyses 4 weeks after. Results showed that as in normotensive mice, the muscarinic receptor agonist ACh evoked a production of the PGI2 metabolite 6-keto-PGF1α and an endothelium-dependent vasoconstrictor response; both of them were abolished by COX-1 inhibition. At the same time, PGI2, which evokes contraction of hypertensive vessels, caused relaxation after thromboxane-prostanoid (TP) receptor antagonism that abolished the contraction evoked by ACh. Antagonizing IP receptors enhanced the contraction to the COX substrate arachidonic acid (AA). Also, COX-1-/- mice was noted to develop hypertension; however, their increase of blood pressure and/or heart mass was not to a level achieved with WT mice. In addition, we found that either the contraction in response to ACh or that evoked by AA was abolished in COX-1-/- hypertensive mice. These results demonstrate that as in normotensive conditions, COX-1 is a major contributor of PGI2 synthesis in 2K1C hypertensive carotid arteries, which leads to a vasoconstrictor response resulting from opposing dilator and vasoconstrictor activities of IP and TP receptors, respectively. Also, our data suggest that COX-1-/- attenuates the development of 2K1C hypertension in mice, reflecting a net adverse role yielded from all COX-1-mediated activities under the pathological condition.
Mice, induction of 2K1C hypertension, and tissue preparation
All procedures performed on mice were in conformance with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996) and approved by The Institutional Animal Research and Use Committee of Shantou University (Permit Number: STUMC2012-039 and STUMC2014-095). All surgery was performed under sodium pentobarbital anesthesia, and all efforts were made to minimize suffering of the mice.
C57BL/6 and COX-1-/- mice were obtained as described previously [10]. The surgery to induce 2K1C hypertension was performed using a procedure modified from that reported previously [41]. Briefly, male C57BL/6 or COX-1-/- mice (10-12 weeks) were anesthetized with pentobarbital (100 mg/kg, i.p.) and a horizontal incision was made on the right back below the lowest rib. The right renal artery was isolated, cleared of fat with a micro glass rod, and then clipped with a silver clip with a 1.2-mm-wide notch in the middle of the main stem.
On the 28 th day after surgery, 2K1C mice with systolic blood pressure (SBP) > 140 mm Hg or an increase of > 20 mm Hg compared to the preoperative value and age-matched C57BL/6 mice (to serve as positive controls for the COX-1-mediated response and to contrast the deviation of pathologies developed in 2K1C mice) were euthanized by CO 2 inhalation. With the help of a binocular microscope, carotid arteries were isolated and dissected free of adherent tissues for biochemical and/or functional analyses. Also, the heart was cut, cleared of blood and adherent tissues, and weighed for the examination of the heart to body weight ratio. For functional studies, vessels were cut into 1 mm rings as described previously [10,20].
Blood pressure measurements
SBP was measured on the day of surgery preoperatively and weekly thereafter, using a noninvasive computerized tail-cuff system (ALC-NIBP, ALCBIO; Shanghai, China). Mice were accustomed to the tail-cuff blood pressure measurements for 3 days before the day of surgery, or for 1 day before each measurement thereafter. SBP was obtained from 3 consecutive measurements that gave a constant value.
Analyses of vasomotor reactions
Analyses of vascular function were performed as described elsewhere [11,20]. Briefly, the vascular ring was mounted between two tungsten wires in an organ bath filled with PSS aerated with 95%O 2 -5% CO 2 and maintained at 37°C. One wire was stationary, whereas the other was connected to an AE801 force transducer (Kronex, Oakland, USA). In some experiments, the endothelium was removed by rotating the vessel ring around the two tungsten wires with the passive tension kept at 100 mg. Thereafter, vessels were stimulated with 60 mM K + every 15 minutes, and the resting tension was adjusted stepwise to an optimal level (~250 mg), at which point the response to 60 mM K + was maximal and reproducible.
To remove the influence of NO, some vessels were treated with the NO synthase (NOS) inhibitor L-NAME (1 mM), under which the response of arteries appears similar to that of eNOS-/- mice [42]. Inhibitors were added 30 min before the vessel was contracted with an agonist and were kept in the solution throughout the experiment. For control responses, the inhibitor was replaced with the vehicle alone. The response elicited by an agent under baseline conditions was expressed relative to that of 60 mM K+, while that during the contraction evoked by PE (at the concentration indicated) was expressed relative to the value immediately prior to the application of the agent.
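A minimal sketch of the normalisation convention described above: responses are expressed as a percentage of a reference contraction, either the 60 mM K+ response (baseline conditions) or the tension immediately before the agent is applied (PE pre-contracted conditions). The function name and example numbers are illustrative.

```python
def percent_of_reference(delta_tension_mg, reference_tension_mg):
    """Express a force change as a percentage of a reference contraction
    (60 mM K+ for baseline responses, or the pre-agent tension for responses
    measured during a PE-evoked contraction)."""
    return 100.0 * delta_tension_mg / reference_tension_mg

# Example: a 120 mg force increase against a 250 mg K+ reference contraction.
print(percent_of_reference(120, 250))   # 48.0 (% of the K+ response)
```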
Assay of 6-keto-PGF 1α
The PGI 2 metabolite 6-keto-PGF 1α was measured with an EIA kit [10]. Briefly, after being cut open and rinsed of blood components, carotid arteries (vessels from both sides were pooled for each single experiment) were incubated with PSS at 37°C for 30 min, followed by exposure to PSS (100 μl) and ACh (10 μM) in 100 μl PSS (37°C) for 15 min each. In some experiments, FR122047 (1 μM) was added to the incubating buffer, and was kept in reaction solution throughout the experiment. Thereafter, vessels were taken out, and the reaction solution was diluted with PSS (1:10). Measurement of 6-keto-PGF 1α was performed with 50 μl of final diluted solution, using a protocol described in instructions of the manufacturer. The amount of 6-keto-PGF 1α was expressed as ng per mg of wet tissue.
Real-time PCR
Expressions of IP, TP receptors, and β-actin (internal controls) were detected by real-time PCR. Vessel specimens (pooled from 2 mice for each single set of experiments) were cut open and rinsed of blood components, followed by mincing and homogenizing in an ice-cold RNAiso Plus solution (TaKaRa, Dalian, China), using a glass homogenizer. In some experiments, the opened vessel strips were further denuded of endothelium by rubbing with a moistened cotton swab, which was made around one-tip of a micro-dissecting forceps, under a binocular microscope. Total RNA was prepared according to the manufacturer's instructions. First-strand cDNA was synthesized using total RNA (250 ng) and oligo(dT)15 primers (TaKaRa). The PCR primers for IP, TP receptors, and β-actin were described previously (19). Real-time PCR was performed using a SYBR PrimScript RT-PCR kit (TaKaRa).
Data analysis
Data were expressed as means ± SEM from n numbers or pools of vessels from different animals. For statistical evaluation, a Student's t-test (unpaired; two tails) was performed to compare two means. When more than two means were compared, a one-way or two-way ANOVA followed by Bonferroni's post-hoc test was used. P<0.05 was considered to be statistically significant.
Effect of COX-1 inhibition on PGI 2 synthesis and relaxation evoked by ACh
The effect of COX-1 inhibition on the in vitro production of the PGI2 metabolite 6-keto-PGF1α and on the relaxation evoked by the maximal concentration of the muscarinic receptor agonist ACh (10 μM) was first examined [35,42]. As shown in Fig 1A, in hypertensive carotid arteries ACh evoked a >10-fold increase in the production of 6-keto-PGF1α compared to resting conditions. However, the addition of the selective COX-1 inhibitor FR122047 (1 μM) reduced 6-keto-PGF1α to a level close to resting conditions, similar to results observed in the control normotensive mice. No difference was found in levels of 6-keto-PGF1α between hypertensive and normotensive vessels (Fig 1A).
Also, in hypertensive carotid arteries pre-contracted with 10 μM PE (to yield a sustained contraction of 80-100% of that evoked by 60 mM K+), the relaxation evoked by ACh (10 μM) was blunted by a biphasic force development (Fig 1B & 1C). Interestingly, FR122047 (1 μM) abolished the biphasic force development and resulted in an enhanced relaxation, in which the tension within 6 min after the application of ACh was significantly lower than that of control hypertensive vessels (Fig 1B and 1C). In addition, we noted that the overall time course of the response evoked by 10 μM ACh in such treated hypertensive vessels was similar to that of normotensive mice (Fig 1C).
ACh-evoked response in hypertensive carotid arteries with NOS inhibited
The above functional analyses suggest that COX-1 in fact mediates a vasoconstrictor response in hypertensive carotid arteries, which might be of a similar extent to that of normotensive mice revealed previously [10]. To substantiate this, responses evoked by ACh in NOS-inhibited conditions were examined. As shown in Fig 2A, in L-NAME-treated hypertensive arteries either 0.3 or 10 μM ACh evoked contraction under baseline conditions comparable to that of normotensive mice. However, FR122047 (1 μM) abolished the contraction evoked by 10 μM ACh in both normotensive and hypertensive vessels ( Fig 2B). Also, the contraction to ACh (10 μM) in hypertensive arteries was removed either by the TP receptor antagonist SQ29548 (3 μM) or by endothelial denudation (Fig 2C & 2D), in a manner similar to that of some other normotensive mouse arteries reported previously [10,42]. In contrast, the TXAS inhibitor ozagrel (IC 50 : 11 nM) did not show any effect at a concentration of 3 μM (Fig 2C and 2D).
Response to PGI 2 in hypertensive carotid arteries
Next, we determined the effect of PGI 2 on hypertensive carotid arteries. As shown in Fig 3A, even in NOS intact hypertensive carotid arteries pre-contracted with 1 μM PE (to reach 40-60% contraction evoked by 60 mM K + , at which point, a dilator activity could be readily detected) PGI 2 did not evoke any relaxation up to 1 μM. Moreover, in those treated with the NOS inhibitor L-NAME, PGI 2 (> 1 μM) evoked contraction under baseline conditions, similar to that of normotensive vessels (Fig 3B & 3C). Notably, the TP receptor antagonist SQ29548 not only abolished the contraction evoked by PGI 2 under baseline conditions (Fig 3C), but also resulted in relaxation (18.2 ± 1.99% and 38.0 ± 5.53% decrease of force vs. 3.2 ± 2.29% and 5.5 ± 3.60% increase of force evoked by 0.1 and 1 μM PGI 2 in control hypertensive vessels, respectively; n = 5, P<0.01) in response to the agonist in vessels pre-contracted with PE (refer to Fig 3A for representative traces). Again, the TXAS inhibitor ozagrel (3 μM) did not show any effect (P>0.05) on the sub-maximal contraction evoked by 10 μM PGI 2 (Fig 3D).
Expressions and functions of TP and IP receptors in hypertensive carotid arteries
Since PGI 2 evoked relaxation after TP receptor antagonism, the expression levels and functions of IP and TP receptors in hypertensive carotid arteries were determined. Real-time PCR showed that amounts of either IP or TP receptor mRNAs were similar between hypertensive and normotensive carotid arteries (Fig 4A). In addition, in hypertensive arteries, mRNAs of both IP and TP receptors were not altered by endothelial denudation (Fig 4B), consistent with a generally proposed major functional presence of these two receptors in medial smooth muscle [36].
At the same time, we noted that after antagonizing TP receptors with SQ29548 (3 μM), 0.1-1 μM iloprost (a stable PGI2 analogue and IP receptor agonist, which is less effective than PGI2 on TP receptors [11]) evoked relaxation in L-NAME-treated hypertensive carotid arteries pre-contracted with 0.3 μM PE (Fig 5A & 5B), which was similar to that of normotensive mice and concurs with the NO-independent property of IP receptor-mediated relaxation [22]. In addition, in hypertensive carotid arteries the IP receptor antagonist CAY10441 (1 μM) increased the sub-maximal contraction evoked by the COX substrate AA (3 μM), though it did not show such an effect on a similar response evoked by 0.3 μM ACh (Fig 5C). Also, the contractions evoked by the TP receptor agonist U46619 were similar between hypertensive and normotensive vessels (Fig 5D).
Effect of COX-1-/- on the vasoconstrictor response to AA or ACh
Lastly, the effect of COX-1-/- on the contraction to the COX substrate AA or that evoked by ACh in 2K1C hypertensive carotid arteries was examined. As shown in Table 1, COX-1-/- 2K1C mice (preoperative SBP: 106 ± 2.4 vs. 109 ± 2.1 mmHg in WT mice, n = 6, P>0.05) developed hypertension; however, their SBP was lower, and the increase of heart mass did not reach the extent seen in their WT counterparts (Table 1). Notably, in L-NAME-treated COX-1-/- 2K1C hypertensive arteries, both the contraction to the COX substrate AA (10 μM) and that to ACh (10 μM) disappeared (Fig 6A & 6B). Moreover, the increase of force evoked by ACh (10 μM) in L-NAME-treated WT hypertensive vessels pre-contracted with PE (2 μM; to reach 80-100% of the contraction evoked by 60 mM K+) was reversed into relaxation in vessels from COX-1-/- 2K1C hypertensive mice, similar to that of WT hypertensive vessels treated with 10 μM of the non-selective inhibitor indomethacin (Fig 6C & 6D).
Discussion
In this study, we demonstrated that the muscarinic receptor agonist ACh stimulated a production of the PGI2 metabolite 6-keto-PGF1α and an endothelium-dependent contraction in 2K1C hypertensive carotid arteries; both of them were sensitive to COX-1 inhibition. Meanwhile, PGI2 was noted to evoke contraction that was reversed into relaxation after TP receptor antagonism. Antagonizing IP receptors enhanced the contraction evoked by the COX substrate AA. Also, we noted that in COX-1-/- 2K1C hypertensive mice, not only was the contraction to ACh or arachidonic acid (AA) abolished, but also the increase of SBP and/or heart mass did not reach the extent seen in WT counterparts. Therefore, COX-1, whose activities altogether appear to adversely influence the development of hypertension, remains a major isoform in carotid arteries from 2K1C hypertensive mice, mediating PGI2 synthesis that evokes a vasoconstrictor response in the functional presence of dilator IP receptors.
The measurements of 6-keto-PGF1α production clearly indicate that in 2K1C hypertensive carotid arteries, ACh stimulates PGI2 synthesis in a manner similar to that of control normotensive mice. Furthermore, FR122047, a COX-1 selective inhibitor, abolished 6-keto-PGF1α production evoked by ACh both in normotensive and in 2K1C hypertensive arteries, suggesting a critical role for COX-1 in PGI2 synthesis under normotensive and hypertensive conditions. Also, our functional analyses revealed that FR122047 not only prevented a biphasic force development, resulting in enhanced relaxation to ACh in NOS-intact 2K1C hypertensive carotid arteries, but also abolished a contraction evoked by the agonist in L-NAME-treated vessels under baseline conditions. This further suggests a vasoconstrictor role for COX-1 under the hypertensive condition. It should be noted that the contraction to ACh in hypertensive carotid arteries was similar to that of normotensive mice (which was sensitive to FR122047 as well). Also, concurring with a major expression of COX-1 in the endothelium of mouse arteries [10,43], endothelial denudation removed the contraction to ACh. In addition, we noted that in COX-1-/- hypertensive arteries, not only did the contractions evoked by ACh and the COX substrate AA disappear, but the increase of force evoked by ACh under PE pre-contracted conditions was also reversed into relaxation, as with non-selective COX inhibition in WT hypertensive mice. These results indicate that COX-1, whose function remains unaltered, acts as the major isoform mediating both PGI2 synthesis and a vasoconstrictor response in 2K1C hypertensive carotid arteries.
At the same time, we noted that PGI2, which caused contraction of the hypertensive arteries, evoked relaxation after TP receptor antagonism. This not only verifies a vasoconstrictor effect of PGI2 via TP receptors, but also suggests a dilator role for IP receptors in 2K1C hypertensive carotid arteries, where both COX-1 and PGI2 mediate contraction. Indeed, in such vessels the relaxation evoked by the stable PGI2 analogue iloprost (following TP receptor antagonism) and the mRNA level of IP receptors were comparable to those of normotensive mice. In addition, the mRNA levels and contractions evoked by U46619 suggest that TP receptors were also unaltered under the hypertensive conditions. As a result, the vasoconstrictor response to PGI2 implies that the vasoconstrictor effect of TP receptors overcomes the IP receptor-mediated dilator activity, as we previously showed under normotensive conditions [22,43]. In fact, the amount of PGI2 produced by ACh-evoked, COX-1-mediated metabolism is well above the level (1 μM) needed to evoke contraction (1.0 ng/mg 6-keto-PGF1α, which equals 2.7 μmol PGI2 per kg of vessel tissue). In addition, PGH2, an intermediate of PGI2 synthesis, can activate TP receptors before being converted to PGI2 in the medial smooth muscle [35,43,44]. Therefore, PGI2 synthesis would eventually lead to a vasoconstrictor response and contribute significantly to the contraction evoked by ACh (which was also sensitive to TP receptor antagonism) in 2K1C hypertensive carotid arteries. In contrast, the involvement of TxA2, an originally proposed EDCF [36,42], could be largely excluded by the experiments with TXAS inhibition, which did not reduce the contraction to ACh or that evoked by PGI2.
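As a back-of-the-envelope check (ours, not the authors'; it assumes a molar mass of roughly 370 g/mol for 6-keto-PGF1α and 1:1 stoichiometry between PGI2 and its stable metabolite), the reported unit conversion works out as:

```latex
\frac{1.0\ \text{ng 6-keto-PGF}_{1\alpha}/\text{mg tissue}}{\approx 370\ \text{g/mol}}
  \;=\; \frac{1.0\ \text{mg/kg}}{370\ \text{g/mol}}
  \;\approx\; 2.7\ \mu\text{mol PGI}_2\ \text{per kg of vessel tissue}.
```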
Also of interest is that the IP receptor antagonist CAY10441, which enhances contraction but inhibits relaxation to PGI 2 [5,22], increased the contraction evoked by the COX substrate AA (which was COX-1-dependent as discussed above). We have previously shown that in mouse arteries, AA stimulates PGI 2 synthesis [22,43]. Thus, the above enhancement of AA contraction substantiates the presence of PGI 2 synthesis in 2K1C hypertensive carotid arteries and implies that under the pathological condition, the native COX-1-mediated AA metabolism activates both TP and IP receptors to result in a net response of contraction. On the other hand, CAY10441 did not increase the contraction to ACh under similar conditions. It has already been known that unlike that of AA, the response evoked by ACh implicates an endothelium-derived hyperpolarizing factor (EDHF)-mediated dilator activity [43], which we also verified by a relaxing response to ACh in NOS-inhibited COX-1 -/-2K1C hypertensive carotid arteries. It should be noted that EDHF mediates relaxation via pathways similar to those of IP receptors [45]. Thus, there exists a possibility that the effect of IP receptor antagonism on the ACh-evoked response is compensated by the redundancy of EDHF-mediated dilation activity. However, this remains speculative; the exact reason(s) for the inability of IP receptor antagonism to enhance the contraction to ACh still requires further investigation.
To date, there has been considerable inconsistency regarding the COX isoforms mediating PGI2 synthesis and/or endothelium-derived vasoconstrictor activity under hypertensive conditions [12, 28, 29, 31-33, 35, 46, 47]. An important reason for this could be that some of the COX inhibitors used in prior studies may have effects independent of their intended targets [38,48,49]. In the present study using C57BL/6 and COX-1-/- mice, our results clearly indicate that COX-1 remains the major isoform in 2K1C hypertensive carotid arteries, mediating PGI2 synthesis that leads to a vasoconstrictor response as under normotensive conditions. Moreover, our results from SBP and heart mass measurements further suggest that COX-1 deletion attenuates the development of 2K1C hypertension in mice. Indeed, a similar beneficial effect of COX-1 deletion or COX-1 inhibition has previously been obtained in diabetic mice and in a rat model of angiotensin II/salt-induced hypertension, respectively [37,50]. On the other hand, in 2K1C hypertensive carotid arteries the contraction evoked by ACh was comparable to that of normotensive mice. In addition, COX-1 also mediates TxA2 synthesis in platelets, which has been suggested to contribute to the pressor response of angiotensin II that plays an essential role in the development of 2K1C hypertension [41,51]. Also, COX-1 may be linked to oxidative stress, which impairs endothelial function and influences vascular remodeling [36,52]. Therefore, the attenuation of hypertension in COX-1-/- 2K1C mice could reflect a net adverse role resulting from all COX-1-mediated activities rather than altered endothelium-derived vasoconstrictor activity under the pathological condition.
Also, IP receptors have previously been thought to become dysfunctional, leading to PGI2 acting as an EDCF under hypertensive conditions [5,36]. However, our results showed that IP receptor-mediated dilator function was preserved in the 2K1C hypertensive carotid arteries in which PGI2 evoked contraction. In some other vascular beds, the dilator function of IP receptors outweighs the effect of TP receptors [22,23,53], and hence a vasodilator response to PGI2 synthesis would be expected in such vessels even under hypertensive conditions. These results thus further imply a diversity of vasomotor reactions evoked by endothelial COX-1-mediated AA metabolism in hypertension; therefore, TP receptors (which mediate the vasoconstrictor activity of COX-1), rather than COX-1 itself, should be considered as a target for pharmacological intervention in this disorder under clinical conditions.
In summary, our results explicitly demonstrate that COX-1 remains the major isoform in 2K1C hypertensive carotid arteries, mediating PGI2 synthesis that evokes a vasoconstrictor response resulting from the opposing dilator and constrictor activities of IP and TP receptors, respectively. In addition, our data suggest that COX-1 deletion attenuates the development of 2K1C hypertension in mice, reflecting a net adverse role of all COX-1-mediated activities under the pathological condition.
"Biology",
"Medicine"
] |
Peroxisome Proliferator‐Activated Receptor‐γ in Capillary Endothelia Promotes Fatty Acid Uptake by Heart During Long‐Term Fasting
Background Endothelium is a crucial blood–tissue interface controlling energy supply according to organ needs. We investigated whether peroxisome proliferator‐activated receptor‐γ (PPARγ) induces expression of fatty acid–binding protein 4 (FABP4) and fatty acid translocase (FAT)/CD36 in capillary endothelial cells (ECs) to promote FA transport into the heart. Methods and Results Expression of FABP4 and CD36 was induced by the PPARγ agonist pioglitazone in human cardiac microvessel ECs (HCMECs), but not in human umbilical vein ECs. Real‐time PCR and immunohistochemistry of heart tissue from control (Ppargfl/null) mice showed an increase in expression of FABP4 and CD36 in capillary ECs after either pioglitazone treatment or 48 hours of fasting, and these effects were not found in mice deficient in endothelial PPARγ (Pparg∆EC/null). Luciferase reporter constructs of the Fabp4 and CD36 promoters were markedly activated by pioglitazone in HCMECs through canonical PPAR‐responsive elements. Activation of PPARγ facilitated FA uptake by HCMECs, which was partially inhibited by knockdown of either FABP4 or CD36. Uptake of an FA analogue, 125I‐BMIPP, was significantly reduced in heart, red skeletal muscle, and adipose tissue of Pparg∆EC/null mice as compared with Ppargfl/null mice after olive oil loading, whereas these values were comparable between Ppargfl/null and Pparg∆EC/null mice on standard chow and a high‐fat diet. Furthermore, Pparg∆EC/null mice displayed slower triglyceride clearance after olive oil loading. Conclusions These findings identify a novel role for capillary endothelial PPARγ as a regulator of FA handling in FA‐metabolizing organs, including the heart, in the postprandial state after long‐term fasting.
Endothelium is a crucial blood-tissue interface controlling energy supply according to organ needs. Obesity-related metabolic disorders such as type 2 diabetes and metabolic syndrome cause endothelial dysfunction. Emerging evidence indicates that endothelial cells (ECs) play an important role in fatty acid (FA) transport from the blood into fat-utilizing tissues such as heart, red skeletal muscle, and adipose tissue. Because shuttling through the endothelial layer is the first rate-limiting step in the utilization of long-chain FAs as fuels, the mechanism of their endothelial transport in heart and skeletal muscle, which contain continuous, nonfenestrated endothelium, 1,2 has been the subject of intense research. Although several pathways including interendothelial passive diffusion, transcytosis (combination of uptake by endocytosis and discharge by exocytosis) and a facilitated protein-mediated process have been proposed, 3,4 the precise mechanisms by which FAs are taken up by muscle tissues are not well understood.
Cytoplasmic fatty acid-binding proteins (FABPs) are a family of 14-to 15-kDa proteins that bind with high affinity to hydrophobic molecules such as long-chain FAs and eicosanoids. 5 As lipid chaperones, FABPs may actively facilitate the transport of lipids to specific compartments in the cells, such as to lipid droplets for storage; to the endoplasmic reticulum for signaling, trafficking, and membrane synthesis; and to mitochondria or peroxisomes for oxidation. FABP4, also known as aP2/ALBP/A-FABP, is expressed highly in adipocytes and much less in macrophages. 6 Accordingly, the molecular mechanisms regulating FABP4 expression have been extensively studied in adipocytes and macrophages. FABP4 is expressed in the capillary ECs in mouse and human hearts, 7,8 but it remains to be determined whether FABP4 expression is regulated in ECs by mechanisms similar to those in adipocytes and macrophages. More importantly, the fundamental question of whether endothelial FABP4 contributes to vascular FA transport into heart, skeletal muscle, and adipose tissue has yet to be determined.
Evidence obtained from isolated cells indicates that fatty acid translocase (FAT)/CD36 plays an important role in the membrane transport of long-chain FAs in heart and skeletal muscle as well as adipose tissue. 9,10 Like FABP4, CD36 is also expressed in microvascular ECs, 11-13 thus suggesting that CD36 is involved in FA transport across the endothelium. Mice lacking CD36 had reduced FA uptake in the heart, skeletal muscle, and adipose tissue, whereas glucose uptake was markedly induced in heart and skeletal muscle, presumably to compensate for the resultant shortage of FA supply. 14,15 The CD36-deficient mice also showed an insulin-tolerant phenotype with increased levels of nonesterified FAs (NEFAs) and triacylglycerol (TG) under a standard chow diet, 16,17 thus suggesting impaired FA utilization and reciprocally enhanced glucose consumption. Human studies with CD36 mutations showed findings similar to those in the CD36-deficient mice. Whereas the impact of CD36 mutation on metabolic phenotype is variable in human subjects, myocardial uptake of long-chain FAs was markedly reduced, with enhanced myocardial glucose use, in patients with CD36 mutations. 18-20 However, the relative contribution of endothelial and myocardial CD36 to myocardial FA uptake is not known.
Systemic and cellular lipid metabolism is regulated by peroxisome proliferator-activated receptors (PPARs): PPARα, PPARβ/δ, and PPARγ. Among these isoforms, PPARγ is primarily a regulator of lipid storage and transport in adipocytes and macrophages, where its constitutive expression is high. 21 PPARγ2 expression is mainly limited to adipose tissue, whereas PPARγ1 is expressed in various tissues including vascular ECs. 22,23 Transcriptional activity of PPARγ is modulated by direct binding of small molecules such as long-chain FAs and synthetic PPARγ agonists such as thiazolidinediones (TZDs), which are clinically utilized as insulin sensitizers. 21,24 Although induction of FABP4 and CD36 expression by PPARγ agonists was reported in adipocytes and macrophages, regulation of PPARγ target genes in ECs has not been extensively studied.
In the present study, the possibility that PPARγ facilitates FA transport by direct induction of FABP4 and CD36 gene expression in capillary ECs was investigated. By using mice deficient in endothelial PPARγ, evidence is provided that endothelial PPARγ increases FA uptake in heart, skeletal muscle, and adipose tissue under conditions in which the oral lipid load is rapidly increased. Disturbance of transendothelial FA transport regulated by PPARγ results in a remarkable increase in serum levels of TG and NEFAs after olive oil loading, at least partly because of impaired FA uptake by peripheral organs such as heart, red skeletal muscle, and adipose tissue. Thus, capillary endothelial PPARγ is activated during fasting, resulting in more efficient FA transport into FA-utilizing organs such as heart, red skeletal muscle, and adipose tissue when a meal is taken.
Cell Culture
Human cardiac microvessel ECs (HCMECs) were purchased from Lonza (Switzerland) and cultured on collagen-coated dishes with EBM-2 medium (Lonza). Human umbilical vein ECs (HUVECs) were obtained from Cell Systems (USA) and cultured on collagen-coated dishes with CS-C Complete Medium (Cell Systems).
Mice
Male and female PPARγ-floxed mice with or without Cre-recombinase driven by the Tie2 promoter were generated as described previously. 25 These mice are of mixed C57BL6/N, Sv129, FVB/N background. The genotype of the mice was Pparg fl/null, and the floxed allele was successfully disrupted when Cre-recombinase was induced by the Tie2 promoter (Figure 1). 26 All mice were housed on a 12-hour light/dark cycle. Before the study, all mice were fed a standard pellet diet (CE-2, Clea Japan, Inc). Obesity was induced using a high-fat diet for 12 to 16 weeks, beginning at 6 weeks of age (High Fat Diet 32, Clea Japan, Inc). Low-fat diet-fed mice received the standard pellet diet throughout life. Pioglitazone (Takeda Pharmaceutical, Co Ltd) was administered to mice by oral gavage for 14 days (25 mg/kg per day). Animal care and experimentation were approved by the Gunma University Animal Care and Use Committee.
RNA Isolation and Reverse-Transcription Polymerase Chain Reaction (RT-PCR)
Total RNA was isolated from cultured cells and isolated hearts using TRIzol Reagent (Invitrogen). Semiquantitative RT-PCR was performed with an RT-PCR kit (TAKARA, Japan) according to the manufacturer's protocol. The gene-specific primers for cDNA are listed on Table 1. Quantitative real-time PCR was performed with SYBR Green PCR Master Mix (Applied Biosystems) according to the manufacturer's instructions. Expression of the target gene was normalized to the Gapdh mRNA level.
Immunohistochemical Analysis
Hearts of mice were fixed with 4% paraformaldehyde and embedded in paraffin. Immunohistochemistry was performed with antibody directed against FABP4 using an ABC kit (Vector) according to the manufacturer's protocol. Nuclei were stained with hematoxylin. For immunofluorescence, the cells were labeled with anti-FABP4 and cy3-conjugated antirabbit IgG antibody (Sigma).
Reporter Gene Assays
The Fabp4 (aP2) promoter (bp 1-5491) was a generous gift from Dr. Bruce Spiegelman. The Gateway system (Invitrogen) was used to generate Fabp4 promoter-luciferase reporter constructs in adenovirus. A pDONR vector was constructed with multiple cloning sites from the pBluescript (pDONR-MCS) vector. The SV40 promoter plus the luciferase coding region […] GATCAGAGTT to mutated ARE7; CCCCCGGGGG). Adenovirus-reporter constructs with these Fabp4 promoters were produced by using the Gateway system as described previously. 27 The CD36 V1/V3 promoter (1510 bp) was a generous gift from Dr Kiyoto Motojima (P2 reporter plasmid). 28 The following V1/V3 promoter fragments were subcloned upstream of the SV40 promoter of pDONR-SV40luc by PCR: bp 1 to 1510, 1 to 730, 710 to 1510, 710 to 1150, and 1131 to 1510. To generate the mutated CD36 reporter construct, a PPARγ-responsive element (PPRE) in the V1/V3 promoter (bp 710 to 1150) was changed by PCR (original PPRE, TGGCCTCTGACTT, to mutated PPRE, ACCTAAGCTTGAA). We also isolated the V2 (2466 bp) and V4/V5 (2351 bp) promoters from the human genome by PCR using primers 5′-ACATGGGAAGTGCTGGGTAG-3′ and 5′-GAAATGAGGCACAGGCTCTC-3′ for V2 and 5′-AGGGCAGGGAAAGCTATTGT-3′ and 5′-CGTATCATTTTGCCCGTTCT-3′ for V4/V5 and subcloned them into pDONR-SV40-luc. CD36 promoter-luciferase reporter constructs of the adenovirus were generated as described above. The cells were infected with adenovirus-reporter constructs at an m.o.i. of 20. Luciferase assays were performed at least twice using a luciferase assay system (Promega).

[Figure 1 legend: The presence or absence of Cre-recombinase and the genotype of Pparg were evaluated by PCR. The product of the 2F-1R primers (285 bp) is the floxed allele, whereas the product of 2F-5H (450 bp) is the null allele. 26 Detection of both PCR products in a mouse implies that it is Pparg fl/null; all mice showed the Pparg fl/null genotype irrespective of Cre expression. Pparg mRNA in hearts was also determined with 1F-3R primers detecting both wild-type (353 bp) and null (214 bp) transcripts. As reported by de Lange, even Cre-negative mice showed modest expression of the null transcript; 26 however, expression of the null transcript was higher, and that of the wild-type transcript lower, in Cre-positive than in Cre-negative mice. Taken together, the basal genotype of these mice is Pparg fl/null, and they become Pparg ΔEC/null when Cre-recombinase is expressed in ECs.]
Fatty Acid Uptake
For the fatty acid uptake experiments, HCMECs were starved in DMEM without glucose and fetal bovine serum for 30 minutes. Ten minutes after adding a mixture of 14C-palmitic acid (Perkin Elmer, USA) plus bovine serum albumin, cells were washed with ice-cold stop buffer (PBS containing 0.1% bovine serum albumin and 0.2 mmol/L phloretin) and lysed with lysis buffer (0.1N NaOH and 0.2% SDS). Radioactivity of the lysate in the scintillation cocktail (Aquasol2; Perkin Elmer) was measured by a liquid scintillation counter (LCS-3000; Aloka). Experiments were done in triplicate and repeated 3 times.
Biodistribution of 125 I-BMIPP and 18 F-FDG
Biodistribution of 15-[p-iodophenyl]-3-[R,S]-methyl pentadecanoic acid (125I-BMIPP) and 2-fluorodeoxyglucose (18F-FDG) was determined as described previously. 14,15 Mice received intravenous injections of 125I-BMIPP (5 kBq) and 18F-FDG (100 kBq) via the lateral tail vein in a volume of 100 μL. 125I-BMIPP was a gift from Nihon Medi-Physics Co Ltd, and 18F-FDG was obtained from batches prepared for clinical PET imaging at Gunma University. The animals were euthanized 2 hours after injection. The isolated tissues were weighed and counted in a well-type gamma counter (ARC-7001; Aloka). Each experiment was performed at least twice.
Statistical Analysis
Statistical analysis was performed in SPSS 20.0. Data are presented as dot plots or mean±SD. Statistical comparisons were performed using nonparametric analysis (Mann-Whitney U test) when there were two groups. Statistical significance was tested by the Kruskal-Wallis test with the Bonferroni post hoc test when experiments included three or more groups. The level of significance was set at a probability value of <0.05. Effect size (ES) is reported as η² and ω².
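For readers who want to reproduce this style of analysis outside SPSS, a minimal sketch in Python is shown below (our own illustration, assuming SciPy is available; the group labels and values are hypothetical and not from the study).

```python
# Minimal sketch of the nonparametric workflow described above (illustrative only).
from itertools import combinations
from scipy import stats

# Hypothetical data: one list of measurements per experimental group
groups = {
    "vehicle": [1.2, 1.4, 1.1, 1.3],
    "pioglitazone": [2.1, 1.9, 2.4, 2.2],
    "pioglitazone_high": [2.8, 3.0, 2.7, 2.9],
}

if len(groups) == 2:
    a, b = groups.values()
    stat, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"Mann-Whitney U = {stat:.2f}, p = {p:.3f}")
else:
    stat, p = stats.kruskal(*groups.values())
    print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
    # Bonferroni-corrected pairwise Mann-Whitney comparisons as a post hoc test
    pairs = list(combinations(groups, 2))
    for name_a, name_b in pairs:
        _, p_pair = stats.mannwhitneyu(groups[name_a], groups[name_b], alternative="two-sided")
        print(f"{name_a} vs {name_b}: corrected p = {min(1.0, p_pair * len(pairs)):.3f}")
```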
PPARγ Regulated Expression of FABP4 and CD36 in Capillary ECs In Vitro
To explore the role of PPARγ in ECs, the effects of pioglitazone, a PPARγ ligand, were examined in HCMECs and HUVECs. Expression of FABP4 and CD36 mRNA was induced by pioglitazone in HCMECs, which express PPARγ, but not in HUVECs, which have no detectable PPARγ mRNA (Figure 2A). When HCMECs were pretreated with siRNA for PPARγ, the increase in FABP4 and CD36 mRNA levels by pioglitazone was completely abolished (Figure 2B). Adenoviral overexpression of PPARγ in HUVECs and HCMECs increased pioglitazone-induced expression of FABP4 and CD36 mRNA (Figure 2C) and protein (Figure 2D). In contrast, mRNA encoding lipoprotein lipase (LPL), glycosylphosphatidylinositol-anchored high-density lipoprotein-binding protein (GPIHBP1), CPT1/2, ACS, FATP1/3/4, and FABP5 was not increased by pioglitazone (Figure 2E). These results suggest that the genes encoding FABP4 and CD36 are the most highly responsive to PPARγ.

[Figure 2 legend (partial): A, Human cardiac microvessel ECs (HCMECs, HC) and human umbilical vein ECs (HUVECs, HU) were treated with pioglitazone (Pio, 10 μmol/L) or DMSO; two days later total RNA was extracted for PCR (n=6 per group; *P<0.05; effect sizes η²/ω²: 0.85/0.82 for FABP4, 0.75/0.70 for CD36, 0.82/0.79 for PPARG). B, After pretreatment with siGFP, siLamin, or siPPARγ, HCMECs were treated with pioglitazone (10 μmol/L) before RNA isolation. C, HUVECs were infected with Ad-lacZ or Ad-PPARγ at an m.o.i. of 20 in the presence or absence of pioglitazone (n=6 per group; *P<0.05; η²/ω²: 0.99/0.99 for FABP4, 0.92/0.91 for CD36). D, Western blot of HCMECs infected with Ad-lacZ or Ad-PPARγ, with GAPDH as internal control. E, RT-PCR of HCMECs treated with pioglitazone (10 μmol/L) or DMSO for the indicated FA-handling genes: LPL, which hydrolyses TG in chylomicrons and VLDL at the EC surface; GPIHBP1, which transports LPL across ECs to the capillary lumen; CPT1a, CPT1b, and CPT2, rate-limiting enzymes for carnitine-dependent mitochondrial transport and FA β-oxidation; ACS, which converts NEFAs into fatty acyl-CoA esters; FATP1/3/4, which translocate long-chain FAs across the plasma membrane (FATP3 and FATP4 are induced in capillary ECs by VEGF-B); and FABP5 (epidermal FABP/mal1), strongly expressed in capillary ECs. Note that none of these genes was induced by pioglitazone.]
PPARγ Regulated Expression of FABP4 and CD36 in Capillary ECs In Vivo
We next examined whether the Fabp4 and Cd36 genes are targets of PPARγ in vivo by using PPARγ endothelial null (Pparg ΔEC/null) mice. 25 When mice were treated with pioglitazone, expression of Fabp4 mRNA was induced in control mice (Pparg fl/null), but not in the Pparg ΔEC/null mice (Figure 3A). In contrast, Cd36 mRNA was not induced by pioglitazone. Enhancement of Fabp4 expression by pioglitazone was observed in capillary ECs in the Pparg fl/null mice (Figure 3B). 29 These findings demonstrate that Fabp4 was induced by pioglitazone in capillary ECs, and not in other cell types, in a PPARγ-dependent manner. We next examined whether the expression of FA-handling genes is induced by fasting, given that PPARγ is activated by fasting. Quantitative real-time PCR showed that mRNA encoding Fabp4 and Cd36, as well as Pparg, was increased after 24 hours of fasting in the Pparg fl/null mice (Figure 3C). Among these, induction of Fabp4 expression was completely abolished in Pparg ΔEC/null mice, whereas Cd36 and Pparg expression remained inducible, although to a lesser extent, in the Pparg ΔEC/null mice (Figure 3C). These data suggest that both CD36 and PPARγ are expressed in ECs as well as other cell types including cardiomyocytes, and that lack of PPARγ in ECs leads to a slight reduction in PPARγ expression in whole hearts of the Pparg ΔEC/null mice, resulting in a slight reduction in CD36 expression. Increased immunoreactivity against FABP4 was exclusively observed in capillary ECs in Pparg fl/null mice (Figure 3D), thus indicating that capillary-specific expression of FABP4 is enhanced by fasting in a PPARγ-dependent manner.
To determine whether the induction of FABP4 and CD36 after fasting depends on serum components changed by fasting, HCMECs were cultured in medium supplemented with 10% mouse serum derived from fed or 48-hour fasted mice. There was no significant difference in their expression between the groups ( Figure 4A). Transcriptional activity of the Fabp4 promoter (described below in detail) was not enhanced by the serum either ( Figure 4B), suggesting that serum alone is not sufficient to induce expression of FABP4 and CD36 mRNA.
FABP4 Promoter Was Transactivated Via 2 Canonical PPREs in Capillary ECs
To examine the transcriptional regulation of the Fabp4 promoter in ECs, adenovirus-reporter constructs were generated because standard lipofection severely damaged ECs, resulting in poor transfection efficiency. The Fabp4 promoter (bp 1 to 5491) was divided into 6 fragments: bp 1 to 929, 1 to 240, 221 to 929, 862 to 2300, 2130 to 3896, and 3171 to 5491. Pioglitazone induced transcriptional activity of Fabp4 promoters containing 2 canonical PPREs (bp 1 to 929 and 1 to 240) in HCMECs, but not in HUVECs (Figure 5A). Further analysis utilizing the bp 1 to 240 fragment revealed that both canonical PPREs are required for transactivation of the Fabp4 promoter (Figure 5B). Knockdown of PPARγ expression by siRNA abolished pioglitazone-induced Fabp4 expression (Figure 5C). These findings lent further support to the hypothesis that pioglitazone induces FABP4 expression via PPARγ in capillary ECs, where PPARγ is expressed.
CD36 Promoter Was Transactivated via a PPRE in Capillary ECs
Transcriptional regulation of the CD36 promoter in ECs was also examined. Because CD36 mRNA is transcribed from several different promoters, 3 adenovirus-reporter constructs of CD36 promoters, termed V1/V3, V2, and V4/V5, were generated. V1/V3 contains a putative PPRE that has questionable biological significance. 28,30 Transcriptional activity of the V1/V3 promoter was increased by pioglitazone in HCMECs, but not in HUVECs (Figure 6A), whereas activity of the V2 and V4/V5 promoters was not enhanced in either HCMECs or HUVECs. Constructs that lacked the PPRE within the V1/V3 promoter showed no responsiveness to pioglitazone (Figure 6B). PPARγ knockdown by siRNA completely abolished pioglitazone-induced activation of the V1/V3 promoter (Figure 6C). As described previously, 29 these promoters were not responsive to PPARγ in 3T3L1 adipocytes despite the abundant expression of Pparg2 as well as Cd36 (Figure 7A and 7B). Collectively, these findings indicate that the V1/V3 PPRE mediates pioglitazone-induced expression of CD36 in capillary ECs and suggest that both FABP4 and CD36 are direct targets of PPARγ in capillary ECs.
FA Uptake Was Promoted by PPARγ Stimulation Via Induction of FABP4 and CD36
To determine the role of FABP4 and CD36 in capillary ECs, FA uptake was examined using 14C-palmitic acid. FA uptake was increased by pioglitazone and further enhanced by overexpression of PPARγ (Figure 8A and 8B). Pretreatment of HCMECs with siFABP4 or siCD36 diminished the FA uptake induced by pioglitazone plus PPARγ overexpression, suggesting that FABP4 and CD36 contribute to FA uptake (Figure 8B). Interestingly, FA uptake was increased only when both FABP4 and CD36 were overexpressed, and this increase was smaller than that produced by PPARγ overexpression plus pioglitazone (Figure 8C). These findings suggest that both FABP4 and CD36 play a role in FA uptake, whereas either FABP4 or CD36 alone is not sufficient. Our data also suggest that other target genes of PPARγ are likely to be involved in FA uptake in combination with FABP4 and FAT/CD36.

[Figure 3 legend (partial; beginning truncated in the source): … Pparg ΔEC/null mice were treated with pioglitazone (25 mg/kg per day) or vehicle for 2 weeks; total RNA from hearts was analysed by qPCR with Gapdh as internal control (n=7 to 9 per group; *P<0.05; η²/ω²: 0.86/0.84 for Fabp4, 0.07/−0.07 for Cd36). B, Immunofluorescence of FABP4 in hearts of Pparg fl/null and Pparg ΔEC/null mice with or without pioglitazone (25 mg/kg per day) for 2 weeks (scale bar: 200 μm); the intensity of the FABP4-positive area was quantified with ImageJ software (National Institutes of Health) as previously described 29 (n=6 per group; *P<0.05). C, qPCR of heart tissue from Pparg fl/null and Pparg ΔEC/null mice after 0, 24, or 48 hours of fasting (n=7 to 9 per group; *P<0.05; η²/ω²: 0.84/0.81 for Fabp4, 0.66/0.61 for Cd36; 0.76/… for Pparg, truncated in the source).]
FA Uptake Was Impaired in Heart, Red Skeletal Muscle, and Adipose Tissue in PpargΔEC/null Mice After Olive Oil Gavage
To determine the physiological relevance of the regulation of FABP4 and CD36 gene expression by PPARγ in ECs, the biodistribution of the slowly oxidized FA analogue 125I-BMIPP and the metabolically trapped glucose analogue 18F-FDG was compared. Uptake of 125I-BMIPP by heart, adipose tissue, and red skeletal muscle was comparable between standard chow-fed Pparg fl/null and Pparg ΔEC/null mice after 24 hours of fasting (Figure 9A) and even after refeeding with standard chow (data not shown), whereas uptake of 125I-BMIPP by liver was higher in the Pparg ΔEC/null mice. We then tested whether endothelial PPARγ disruption interferes with lipid metabolism in the endothelium when serum FA levels are excessively increased. Standard chow-fed Pparg fl/null and Pparg ΔEC/null mice were subjected to a 24-hour fasting period followed by olive oil gavage. Interestingly, uptake of 125I-BMIPP was significantly lower in heart, adipose tissue, and red skeletal muscle in the Pparg ΔEC/null mice (Figure 9A). Uptake of 125I-BMIPP in the liver was higher after olive oil gavage (Figure 9A), which probably reflects a compensatory influx of FAs into the liver. These findings suggest that although endothelial PPARγ is dispensable for FA supply to heart, adipose tissue, and red skeletal muscle via the capillary endothelium in the fasting state, endothelial PPARγ is required for the uptake of excess FAs after a lipid-rich meal following fasting.
Next, the impact of endothelial PPARγ deficiency on FA uptake in the state of overnutrition was examined. Pparg fl/null and Pparg ΔEC/null mice fed a high-fat diet (HFD) for 4 months were subjected to 24 hours of fasting followed by olive oil gavage, and the uptake of 125I-BMIPP and 18F-FDG was measured. The Pparg fl/null and Pparg ΔEC/null mice displayed no significant difference in either the fasted or the olive oil-fed state (Figure 9B). 125I-BMIPP uptake by heart and adipose tissue was markedly reduced in both Pparg fl/null and Pparg ΔEC/null mice (Figure 9C and 9D), and this reduction was also observed after olive oil gavage (Figure 9C and 9D). These results indicate that a deficiency in endothelial PPARγ does not lead to a reduction in FA transport into heart and adipose tissue when mice are fed an HFD, likely because the FA-uptake capacity of parenchymal cells is already saturated regardless of the status of capillary endothelial function.
We further studied how pioglitazone affects FA uptake by peripheral organs. When mice were fed the HFD, pioglitazone did not alter the uptake of either 125I-BMIPP or 18F-FDG in heart and adipose tissue (Figure 9E). However, the uptake of 125I-BMIPP in liver and its serum level were lower in the Pparg fl/null mice (Figure 9E), suggesting that uptake of 125I-BMIPP via capillary ECs was improved by pioglitazone at the whole-body level in these mice. Although the effect of pioglitazone on capillary ECs after HFD feeding is unclear at the level of individual organs, it is likely that pioglitazone improves whole-body metabolism at least in part through its effects on capillary ECs.
TG and NEFA Clearance Was Slower in Pparg ΔEC/null Than in Pparg fl/null Mice

Serum levels of TG, NEFAs, glucose, insulin, and ketone bodies after olive oil gavage to Pparg ΔEC/null and Pparg fl/null mice were examined. Serum TG (Figure 10A) and NEFAs (Figure 10B) were markedly increased in the Pparg ΔEC/null mice 2 and 4 hours after olive oil gavage, whereas serum glucose (Figure 10C), insulin (Figure 10D), and ketone bodies (data not shown) were not affected. The influence of hepatic VLDL production plus lipid absorption from the gastrointestinal (GI) tract was determined by using tyloxapol (WR-1339), an LPL inhibitor (Figure 10E and 10F). Pparg fl/null and Pparg ΔEC/null mice were divided into 4 groups: (1) no treatment, (2) olive oil gavage, (3) tyloxapol treatment (hepatic VLDL production), and (4) tyloxapol treatment plus olive oil gavage (hepatic VLDL production plus lipid absorption from the GI tract) (Figure 10E and 10F). Although serum levels of TG and NEFAs were higher in the Pparg ΔEC/null mice after olive oil gavage (group 2), the difference disappeared upon treatment with tyloxapol (group 4), thus suggesting that the sum of hepatic VLDL production plus lipid absorption from the GI tract is comparable between the Pparg fl/null and Pparg ΔEC/null mice. Accordingly, the difference in TG and NEFA levels after olive oil gavage (group 2) is likely the result of a disturbance in FA uptake by peripheral organs. Taken together, these data suggest that PPARγ in capillary ECs enables efficient FA uptake by peripheral FA-consuming organs such as heart, red skeletal muscle, and adipose tissue when serum lipid levels are rapidly increased in the postprandial state.
Discussion
Despite intense research, the mechanisms underlying FA uptake and transport remain elusive, particularly in heart and skeletal muscle, where energy substrates are supplied through muscle-type continuous capillaries. Here, by using mice deficient in endothelial PPARγ and systemic administration of 125I-BMIPP, a long-chain FA analogue that allows FA uptake to be evaluated in tissues throughout the body, capillary endothelial PPARγ was demonstrated to facilitate FA transport in heart and skeletal muscle through induction of FABP4 and CD36. In addition, this study has provided definitive evidence that ligand activation of PPARγ leads to transcriptional activation of the FABP4 and CD36 genes through canonical PPREs in cardiac microvessel ECs. Furthermore, these experiments showed that endothelial disruption of PPARγ increases plasma levels of TG and NEFAs, indicating that ECs play a role in controlling systemic lipid metabolism in the postprandial state. These results corroborate earlier findings demonstrating that PPARγ deficiency in ECs caused marked dyslipidemia after a high-fat diet or olive oil gavage. 31
Physiological Relevance of PPARγ Regulation of FABP4 and CD36 in Muscle-Type Capillary ECs
It is well known that PPARγ exerts its control of metabolic activities by increasing white adipose tissue mass, leading to efficient energy conservation and storage by adipocytes along with improved glucose homeostasis. These activities were evolutionarily beneficial for mammals surviving food shortage or famine. 24 Thus, from an evolutionary perspective, PPARγ activation favors survival. In this regard, the present findings indicating PPARγ-driven induction of FABP4 and CD36 by long-term fasting are especially noteworthy, because this mechanism may represent a novel aspect of the thrifty activities of PPARγ in heart, red skeletal muscle, and adipose tissue. Indeed, cardiac muscle is the most energy-requiring tissue in the body and primarily uses FAs, to a great extent lipoprotein-derived FAs. During long-term fasting, circulating NEFA levels are elevated by an increase in lipolysis in adipose tissue. Under this condition, activation of PPARγ expression may be adaptive, allowing the myocardium to take up FAs vigorously to support ATP synthesis.
When fed an HFD, however, the difference in FA uptake between the Pparg fl/null and Pparg ΔEC/null mice was reduced in all organs tested, before as well as after olive oil loading. This is likely because FA uptake by the parenchymal cells of individual organs was markedly and equally decreased in both Pparg fl/null and Pparg ΔEC/null mice as a result of long-term lipid overload. This is consistent with a previous report that fat storage in adipose tissue after meals was substantially depressed in obese men. 32 In our model, the capacity for FA uptake by parenchymal cells (myocytes, adipocytes, and hepatocytes) was already saturated before olive oil loading, resulting in no obvious enhancement of FA uptake. Moreover, when mice were treated with pioglitazone under HFD, no difference in FA transport into heart and adipose tissue appeared either. The 125I-BMIPP remaining in the circulation, however, was decreased in Pparg fl/null mice, suggesting that the sum of small improvements in capillary function produced by pioglitazone leads to a significant effect on whole-body metabolism. Consistent with this finding, it was reported that rosiglitazone significantly lowered NEFA and TG levels in Pparg fl/null mice receiving a lipid load, whereas rosiglitazone had no effect on either NEFA or TG levels in the Pparg ΔEC/null mice. 31 Thus, impaired transendothelial FA transport caused by loss of PPARγ in capillary ECs modestly affects diet-induced dyslipidemia. In this regard, capillary endothelial PPARγ can be a therapeutic target for dyslipidemia induced by an HFD.
Upstream Signal That Stimulates PPARγ
What are the factors responsible for the induction of PPARγ target genes in the fasting response? Since the initial observation by Amri et al, 33 there has been increasing evidence that FAs are potent regulators of lipid metabolism. It is well established that PPARs are FA-responsive transcription factors, and FAs serve as ligands for the 3 PPARs. Although many studies have described FA derivatives of arachidonic acid, such as the eicosanoids leukotriene B4 and carbaprostacyclin, and unsaturated FAs as PPAR activators, most of these ligands were identified by in vitro approaches, and thus the bona fide endogenous PPAR ligands remain to be discovered. 34 In a recent study, Chakravarthy et al 35 identified a phosphatidylcholine species, 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (16:0/18:1-GPC), as a physiologically relevant endogenous PPARα ligand. However, we observed no significant difference in FABP4 and CD36 expression in HCMECs treated with serum from fed versus 48-hour-fasted mice. In addition, no endogenous ligands of PPARγ relevant to the fasting response have emerged to date.
FABP4 Promoter Was Active in Capillary ECs
In the present study, FABP4 was found to be expressed in capillary ECs, and its promoter was strongly activated by PPARγ and pioglitazone via 2 canonical PPREs. These findings are rather surprising because Fabp4 (also called aP2) has long been considered an adipocyte- and macrophage-specific gene, and its promoter has been widely used to generate "fat-specific" expression and disruption in transgenic mouse studies. In these experiments, "fat-specific" Cre expression was achieved by placing the Cre cDNA under the control of the 5.4-kb promoter fragment of the Fabp4 gene. 36 Therefore, it is likely that Cre expression is induced in ECs and that the gene of interest is disrupted by recombination in ECs as well as in adipocytes and macrophages. Accordingly, caution should be advised when analyzing and interpreting phenotypes of such knockout mice, because these phenotypes may result at least partly from deficiency of the gene of interest in the ECs.

[Figure 7 legend: The CD36 promoter did not respond to pioglitazone stimulation in 3T3L1 adipocytes. A, 3T3L1 fibroblasts and adipocytes were infected with reporter-adenoviruses containing the indicated CD36 promoter constructs at an m.o.i. of 20, with or without adipogenic medium (1 μg/mL insulin, 1 μmol/L dexamethasone, and 500 μmol/L 3-isobutyl-1-methylxanthine); three days later, cells were lysed for the luciferase assay (n=3; *P<0.05). B, RT-PCR of 3T3L1 cells treated with or without adipogenic medium; note that endogenous Fabp4, Cd36, and Pparg2 were induced in 3T3L1 adipocytes treated with adipogenic medium.]
Differential Regulation of the CD36 Promoter Between Adipocytes and Capillary ECs
CD36 is a multifunctional membrane glycoprotein expressed in various cells including adipocytes, striated muscle, cardiomyocytes, smooth muscle, microvessel endothelium, platelets, macrophages, and hepatocytes. 9,37 CD36 has ≥5 spliced variants that are differentially regulated by divergent promoters. Among them, the present study revealed that the V1/V3 promoter is responsible for capillary-endothelial expression of CD36 in a PPARγ-dependent manner. This promoter contains a putative PPRE that was first reported by Tontonoz et al, 30 who showed transactivation of the minimal-length promoter by a synthetic PPARγ ligand together with overexpression of exogenous PPARγ1 and RXR in CV-1 cells (monkey kidney fibroblasts). 30 In contrast, others reported that mouse and human proximal CD36 promoters containing a PPRE did not respond to PPARα and PPARγ ligands in rat hepatoma Fao cells and 3T3L1 cells. 28 However, the present study clearly showed that the PPRE contained in the V1/V3 promoter is functionally important for PPARγ-dependent expression of CD36 in capillary ECs. On the other hand, these promoters are not responsive to PPARγ in 3T3L1 adipocytes, despite the abundant expression of endogenous Pparg2 as well as Cd36.
These results indicate that the V1/V3 promoter of the CD36 gene is differentially regulated between ECs and adipocytes, and suggest that cofactors associated with the PPARγ/RXR heterodimer, and/or interactions between PPARγ/RXR and tissue-specific transcription factors, may underlie the cell-type-specific function of PPARγ. Further study is warranted.
Involvement of Endothelial PPARγ in Hydrolysis of TG-Rich Lipoproteins
It is well known that hearts utilize long-chain FAs associated with albumin or derived from LPL-mediated hydrolysis of triglyceride-rich lipoproteins. Lipase activity of LPL requires a newly recognized partner protein, GPIHBP1, which is expressed exclusively in capillary ECs and transports LPL across ECs to the capillary lumen. 38 Because the TG level was markedly increased after olive oil loading in Pparg ΔEC/null mice, we suspected that LPL activity might be impaired by decreased expression of LPL or GPIHBP1. However, neither LPL nor GPIHBP1 gene expression was induced by pioglitazone in HCMECs, whereas both were induced in both Pparg fl/null and Pparg ΔEC/null mice after 24 hours of fasting (data not shown). Given that PPARγ controls multiple FA-handling genes, hydrolysis of TG-rich lipoproteins through the induction of LPL activity may also be regulated by PPARγ through genes or mechanisms yet to be identified.
In conclusion, PPARγ induces FABP4 and CD36 expression in capillary ECs of the heart and thereby plays a role in FA uptake by the heart, which heavily utilizes FAs as its main substrate for energy conversion. The present study has revealed a novel role for the PPARγ-mediated physiological response during severe fasting.

[Figure 9 legend (partial; beginning truncated in the source): … Pparg ΔEC/null mice with or without olive oil gavage, measured as described in Methods (n=6 per group; *P<0.05). A and B, Mice were fed (A) standard chow (SC) or (B) a high-fat diet (HFD) and fasted for 24 hours before experiments. C and D, Uptake of 125I-BMIPP and 18F-FDG by organs from Pparg fl/null (C) or Pparg ΔEC/null (D) mice fed SC or HFD and fasted for 24 hours. E, Mice fed the HFD were treated with pioglitazone (25 mg/kg per day) for 14 days and fasted for 24 hours before experiments. Bld, blood; Hrt, heart; Liv, liver; Fat, gonadal fat pad; W Mus, white skeletal muscle; R Mus, red skeletal muscle.]
"Medicine",
"Environmental Science",
"Biology"
] |
Brainrender: a Python-based software for visualizing anatomically registered data
The recent development of high-resolution three-dimensional (3D) digital brain atlases and high-throughput brain wide imaging techniques has fueled the generation of large datasets that can be registered to a common reference frame. This registration facilitates integrating data from different sources and resolutions to assemble rich multidimensional datasets. Generating insights from these new types of datasets depends critically on the ability to easily visualize and explore the data in an interactive manner. This is, however, a challenging task. Currently available software is dedicated to single atlases, model species or data types, and generating 3D renderings that merge anatomically registered data from diverse sources requires extensive development and programming skills. To address this challenge, we have developed brainrender : a generic, open-source Python package for simultaneous and interactive visualization of multidimensional datasets registered to brain atlases. Brainrender has been designed to facilitate the creation of complex custom renderings and can be used programmatically or through a graphical user interface. It can easily render different data types in the same visualization, including user-generated data, and enables seamless use of different brain atlases using the same code base. In addition, brainrender generates high-quality visualizations that can be used interactively and exported as high-resolution figures and animated videos. By facilitating the visualization of anatomically registered data, brainrender should accelerate the analysis, interpretation, and dissemination of brain-wide multidimensional data.
Introduction
Understanding how nervous systems generate behavior benefits from gathering multi-dimensional data from different individual animals. These data range from neural activity recordings and anatomical connectivity, to cellular and subcellular information such as morphology and gene expression profiles. These different types of data should ideally all be in register so that, for example, neural activity in one brain region can be interpreted in light of the connectivity of that region or the cell types it contains. Such registration, however, is challenging. Often it is not technically feasible to obtain multi-dimensional data in a single experiment, and registration to a common reference frame must be performed post-hoc. Even for the same experiment type, registration is necessary to allow comparisons across individual animals.
While different types of references can in principle be used, neuroanatomical location is a natural and most commonly used reference frame (Chon et al. 2019;Oh et al. 2014;Arganda-Carreras et al. 2018;Kunst et al. 2019). In recent years, several high-resolution threedimensional electronic brain atlases have been generated for model species commonly used in neuroscience (e.g.: Wang et al. 2020;Kunst et al. 2019;Arganda-Carreras et al. 2018). These atlases provide a framework for registering different types of data across macro-and microscopic scales. A key output of this process is the visualiza-tion of all datasets in register. Given the intrinsically three-dimensional (3D) geometry of brain structures and individual neurons, 3D renderings are more readily understandable and can provide more information when compared to two dimensional images. Exploring interactive 3D visualizations of the brain gives an overview of the relationship between datasets and brain regions and helps generating intuitive insights about these relationships. This is particularly important for large-scale datasets such as the ones generated by open-science projects like MouseLight (Winnubst et al. 2019) and the Allen Mouse Connectome (Oh et al. 2014). In addition, high-quality 3D visualizations facilitate the communication of experimental results registered to brain anatomy.
Generating custom 3D visualizations of atlas data requires programmatic access to the atlas. While some of the recently developed atlases provide an API (Application Programming Interface) for accessing atlas data (Wang et al. 2020;Kunst et al. 2019), rendering these data in 3D remains a demanding and time-consuming task that requires significant programming skills. Moreover, visualization of user-generated data registered onto the atlas requires an interface between the user data and the atlas data, which further requires advanced programming knowledge and extensive development. There is therefore the need for software that can simplify the process of visualizing 3D anatomical data from available atlases and from new experimental datasets.
Currently, existing software packages such as cocoframer (Lein et al. 2007), BrainMesh (Yaoyao-Hao 2020) and SHARPTRACK (Shamash et al. 2018) provide some functionality for 3D rendering of anatomical data. These packages, however, are only compatible with a single atlas and cannot be used to render data from different atlases or different animal species. Achieving this requires adapting the existing software to the different atlas datasets or developing new dedicated software altogether, at the cost of significant additional effort, often duplicated. An important limitation of the currently available software is that it frequently does not support rendering of non-atlas data, such as data from publicly available datasets (e.g.: MouseLight) or produced by individual laboratories. This capability is essential for easily mapping newly generated data onto brain anatomy at high resolution and for producing visualizations of multidimensional datasets. More advanced software such as natverse (Bates et al. 2020) offers extensive data visualization and analysis functionality but is currently mostly restricted to data obtained from the drosophila brain. Simple Neurite Tracer (Arshadi et al. 2020), an ImageJ-based software, can render neuronal morphological data from public and user-generated datasets and is compatible with several reference atlases. However, this software does not support visualization of data other than neuronal morphological reconstructions, nor can it be easily adapted to work with different or new atlases beyond the ones already supported. Finally, software such as MagellanMapper (Young et al. 2020) can be used to visualize and analyze large 3D brain imaging datasets, but the visualization is restricted to one data item (i.e. images from one individual brain); it is therefore not possible to combine data from different sources into a single visualization. Ideally, a rendering software should work with 3D mesh data instead of 3D voxel image data to allow the creation of high-quality renderings and facilitate the integration of data from different sources.
An additional consideration is that existing software tools for programmatic neuroanatomical renderings have been developed in programming languages such as R and Matlab, and there is currently no available alternative in Python. The popularity of Python within the neuroscientific community has grown tremendously in recent years (Muller et al. 2015). Building on Python's simple syntax and free, high-quality data processing and analysis packages, several open-source tools directly aimed at neuroscientists have been written in Python and are increasingly used (e.g. see Mathis et al. 2018;Pachitariu et al. 2017;Tyson et al. 2020). Developing a python-based software for universal generation of 3D renderings of anatomically registered data can therefore take advantage of the increasing strength and depth of the python neuroscience community for testing and further development.
For these reasons we have developed brainrender: an open-source Python package for creating high-resolution, interactive 3D renderings of anatomically registered data. Brainrender is written in Python and integrated with BrainGlobe's AtlasAPI (Claudi, Tyson, Petrucco et al. 2020) to interface natively with different atlases without need for modification. Brainrender supports the visualization of data acquired with different techniques and at different scales. Data from multiple sources can be combined in a single rendering to produce rich and informative visualizations of multi-dimensional data. Brainrender can also be used to create high-resolution, publication-ready images and videos (see Tyson et al. 2020; Adkins et al. 2020), as well as interactive online visualizations to facilitate the dissemination of anatomically registered data. Finally, using brainrender requires minimal programming skills, which should accelerate the adoption of this new software by the research community. All brainrender code is available at the GitHub repository together with extensive online documentation and examples.
Design principles and implementation
A core design goal for brainrender was to generate a visualization software compatible with any reference atlas, thus providing a generic and flexible tool (Figure 1A). To achieve this goal, brainrender has been developed as part of BrainGlobe's computational neuroanatomy software suite. In particular, we integrated brainrender directly with BrainGlobe's AtlasAPI (Claudi, Tyson, Petrucco et al. 2020). The AtlasAPI can download and access atlas data from several supported atlases in a unified format, and new atlases can easily be adapted to work with the API. Brainrender uses the AtlasAPI to access 3D mesh data for individual brain regions as well as metadata about the hierarchical organization of the brain's structures (Figure 1B). Thus, the same programming interface can be used to access data from any atlas (see code examples in Figure 2), including recently developed ones (e.g.: the enhanced and unified mouse brain atlas, Chon et al. 2019).
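To make this interface concrete, the sketch below queries an atlas through BrainGlobe's AtlasAPI. It is our own minimal example (not the paper's Figure 2), assuming the bg-atlasapi package (since renamed brainglobe-atlasapi) and the atlas name "allen_mouse_25um"; the atlas files are downloaded automatically on first use.

```python
# Minimal sketch: accessing atlas meshes and hierarchy metadata via BrainGlobe's AtlasAPI.
from bg_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("allen_mouse_25um")

# 3D mesh of a single brain region (here the motor superior colliculus, "SCm")
scm_mesh = atlas.mesh_from_structure("SCm")

# Metadata about the hierarchical organization of brain structures
print(atlas.get_structure_ancestors("SCm"))  # parent structures up to 'root'
print(atlas.structures["SCm"])               # acronym, id, mesh filename, ...
```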
The second major design principle was to enable rendering of any data type that can be registered to a reference atlas, either from publicly available datasets or from individual laboratories. To achieve this, all data loaded in brainrender is represented as 3D mesh information, which enables the use of state-of-the-art rendering tools (Hanwell et al. 2015;Musy, Dalmasso, and Sullivan 2019). Converting data into a 3D mesh data format is a non-trivial and time-consuming task that requires significant programming expertise. To facilitate this process, brainrender provides functionality for easily loading and visualizing commonly used data types, such as the location of labelled cells or 3D mesh data from .obj and -3/9 Figure 1. Brainrender design principles A) Schematic illustration of how different types of data can be loaded into brainrender using either brainrender 's own functions, software packages from the BrainGlobe suite or custom Python scripts. All data loaded into brainrender is converted onto a unified format, which simplifies the process of visualizing data from different sources. B) Using brainrender with different atlases. Visualization of brain atlas data from three different atlases using brainrender. Left, Allen atlas of the mouse brain showing the superficial (SCs) and motor (SCm) subdivisions of the superior colliculus and the Zona Incerta (data from Wang et al. 2020). Middle, visualization of the cerebellum and tectum in the larval zebrafish brain (data from Kunst et al. 2019). Right, visualization of the precentral gyrus, postcentral gyrus and temporal lobe of the human brain (data from Ding et al. 2016 .stl files. In addition, brainrender can visualize data produced with any analysis software from the BrainGlobe suite, including cellfinder ) and brainreg (Tyson, Rousseau, and Margrie 2020). The existing loading functionality can be easily expanded to support user-specific needs by directly plugging in custom user code into the brainrender interface ( Figure 1A).
One of the goals of brainrender is to facilitate the creation of high-resolution images, animated videos and interactive online visualizations from any anatomically registered data. Brainrender uses vedo as the rendering engine (Musy, Dalmasso, and Sullivan 2019), a state-of-the-art tool that enables fast, high-quality rendering with minimal hardware requirements (e.g.: no dedicated GPU is needed). Animated videos and online visualizations can be produced with a few lines of code in brainrender. Several options are provided for easily customizing the appearance of rendered objects, thus enabling high-quality, rich data visualizations that combine multiple data sources.
Finally, we aimed for brainrender to empower scientists with little or no programming experience to generate advanced visualizations of their anatomically registered data. To make brainrender as user-friendly as possible, we have produced extensive documentation, tutorials and examples for installing and using the software. We have also developed a Graphical User Interface (GUI) to access most of brainrender's core functionality. This GUI can be used to perform actions such as rendering brain regions and labelled cells (e.g.: from cellfinder) and creating images of the rendered data, without writing custom Python code (Figure 1C).
Visualizing brain regions and other structures
A key element of any neuroanatomical visualization is the rendering of the entire outline of the brain as well as the borders of brain regions of interest. In brainrender this can easily be achieved by specifying which brain regions to include in the rendering. The software then uses BrainGlobe's AtlasAPI to load the 3D data and subsequently renders it (Figure 1B).
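A minimal sketch of this workflow is shown below; exact method names (e.g. add_brain_region) may differ slightly between brainrender versions, so treat this as illustrative rather than authoritative.

from brainrender import Scene

# Create a scene for a given atlas and add two brain regions by acronym;
# the region meshes are fetched through BrainGlobe's AtlasAPI behind the scenes.
scene = Scene(atlas_name="allen_mouse_25um", title="Superior colliculus")
scene.add_brain_region("SCm", "SCs", alpha=0.4)

scene.render()  # opens an interactive 3D window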
brainrender can also render brain areas defined by factors other than anatomical location, such as gene expression levels or functional properties. These can be loaded either directly as 3D mesh data after processing with dedicated software (e.g. Tyson, Rousseau, and Margrie 2020; Song et al. 2020; Jin et al. 2019) (Figure 3A), or as 3D volumetric data (Figure 3E). For the latter, brainrender takes care of converting the voxels into a 3D mesh for rendering. Furthermore, custom 3D meshes can be created to visualize different types of data. For example, brainrender can import JSON files with tractography connectivity data and create 'streamlines' to visualize efferent projections from a brain region of interest (Figure 3B).
Brainrender also simplifies visualizing the location of devices implanted in the brain for neural activity recordings or manipulations, such as electrodes or optical fibers. Post-hoc histological images taken to confirm the correct placement of the device can be registered to a reference atlas using appropriate software, and the registered data can be imported into brainrender (Figure 3C). This type of visualization greatly facilitates cross-animal comparisons and helps data interpretation within and across research groups.
Finally, brainrender can be used to visualize any object represented by the most commonly used file formats for three-dimensional design (e.g.: .obj, .stl), thus ensuring that brainrender can flexibly adapt to the visualization needs of the user (Figure 3D).
Individual neurons and mesoscale connectomics
Recent advances in large field of view and whole-brain imaging allow the generation of brain-wide data at single neuron resolution. Having a platform for visualizing these datasets with ease is critical for exploratory data analyses. Several open-source software packages are available for registering large amounts of such imaging data (e.g.: Fürth et al. 2018; Goubran et al. 2019; Renier et al. 2016) and for automatically identifying labelled cells (e.g.: expressing fluorescent proteins). This processing step outputs a table of coordinates for a set of labelled cells, which can be directly imported into brainrender to visualize a wealth of anatomical data at cellular resolution (Figure 4A).
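For instance, a table of detected cell coordinates (e.g. a cellfinder export) can be rendered alongside the atlas as in the hedged sketch below; the file name, column names and the Points actor signature are assumptions based on the brainrender documentation.

import pandas as pd
from brainrender import Scene
from brainrender.actors import Points

# Hypothetical table with one row per detected cell and x/y/z columns
cells = pd.read_csv("labelled_cells.csv")
coords = cells[["x", "y", "z"]].to_numpy()

scene = Scene(atlas_name="allen_mouse_25um")
scene.add(Points(coords, radius=20, colors="salmon"))  # one sphere per cell
scene.render()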
Beyond the location of cell bodies, visualizing the entire dendritic and axonal arbors of single neurons registered to a reference atlas is important for understanding the distribution of neuronal signals across the brain. Single cell morphologies are often complex three-dimensional structures and are therefore poorly represented in two-dimensional images. Generating three-dimensional interactive renderings is thus important to facilitate the exploration of this type of data. brainrender can be used to parse and render .swc files containing morphological data, and it is fully integrated with morphapi, a software package for downloading morphological data from publicly available datasets (e.g.: from neuromorpho.org) (Figure 4B).
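Assuming brainrender's Neuron actor accepts a path to an .swc reconstruction (as its documentation suggests), loading a single morphology might look like the sketch below; the file name is a placeholder.

from brainrender import Scene
from brainrender.actors import Neuron

scene = Scene(atlas_name="allen_mouse_25um")
scene.add(Neuron("reconstruction.swc"))  # hypothetical .swc file registered to the atlas
scene.render()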
Producing figures, videos and interactive visualizations with brainrender
A core goal of brainrender is to facilitate the production of high-quality images, videos and interactive visualizations of anatomical data. brainrender leverages the functionality provided by vedo (Musy, Dalmasso, and Sullivan 2019) to create images directly from the rendered scene. Renderings can also be exported to HTML files to create interactive visualizations that can be hosted online. Finally, functionality is provided to easily export videos from rendered scenes. Animated videos can be created by specifying parameters (e.g.: the position of the camera or the transparency of a mesh) at selected keyframes. brainrender then creates a video by animating the rendering between the keyframes. This approach facilitates the creation of videos while retaining the flexibility necessary to produce richly animated sequences (Videos 1-4). All example figures and videos in this article were generated directly in brainrender, with no further editing.
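The keyframe-based video workflow described above could look roughly like the following sketch; the Animation class and its add_keyframe/make_video signatures are taken from brainrender's documentation as we recall it and may differ slightly across versions.

from brainrender import Scene, Animation

scene = Scene(atlas_name="allen_mouse_25um")
scene.add_brain_region("TH", alpha=0.3)

# Define keyframes; brainrender interpolates the rendering between them
anim = Animation(scene, save_fld="./videos", name="thalamus_spin")
anim.add_keyframe(0, camera="top", zoom=1.0)
anim.add_keyframe(3, camera="sagittal", zoom=1.5)
anim.make_video(duration=3, fps=15)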
Conclusions
In this article we have presented brainrender, a Python software package for creating three-dimensional renderings of anatomically registered data. brainrender builds on BrainGlobe's AtlasAPI to provide a user-friendly, yet powerful visualization tool. The integration with the BrainGlobe ecosystem enables seamless use of different brain atlases with the same code base. This feature allows the production of generic code that can be shared and re-used for different purposes, thereby speeding up the development of visualizations. The integration with BrainGlobe also allows direct visualization of data generated with tools such as brainreg and cellfinder, as well as data downloaded from publicly available datasets through morphapi. The interoperability of multiple software packages dedicated to different tasks facilitates the development of analysis pipelines (Bates et al. 2020) and is one of the core principles motivating the development of brainrender. By making the rendering process as easy as possible, including access to most of the functionality through a GUI, we have tried to facilitate the adoption of brainrender for generating high-quality visualizations of anatomically registered data.
We have demonstrated the use of brainrender for rendering different types of data, including brain-wide gene expression profiles, mesoscale connectivity patterns and single neuron morphologies. These include user-generated data and data from large-scale projects such as MouseLight and the Allen Institute's Mouse Connectome projects. While we have aimed to make the visualization process as easy as possible, this is not at the cost of flexibility. Rendering different types of data and advanced custom visualizations can be achieved by plugging custom code directly into the brainrender engine. Our software is accompanied by extensive online documentation, tutorials and examples to facilitate adoption and achieve the goal of providing a useful, open-source tool for the community. In addition, the code has been written following best practices (e.g.: thorough testing and documentation) and therefore provides a solid base for future developments.
Limitations and future directions
While we have designed brainrender to require minimal programming expertise, installing Python and brainrender may still prove challenging for some users. In the future, we aim to make brainrender a stand-alone application that can simply be downloaded and installed locally.
In addition to images and videos, brainrender can be used to export renderings as HTML files and generate online 3D interactive renderings. Currently, however, embedding renderings into a web page remains far from a trivial task. Further developments on this front should make it possible to easily host interactive renderings online, therefore improving how anatomically registered data are disseminated both in scientific publications and other media.
Methods
Brainrender is written in Python 3 and depends on standard Python packages (Harris et al. 2020), on vedo (Musy, Dalmasso, and Sullivan 2019) and on BrainGlobe's AtlasAPI (Claudi, Tyson, Petrucco et al. 2020). Extensive documentation on how to install and use brainrender can be found at docs.brainrender.info, and we provide here only a brief overview of the workflow in brainrender. The GitHub repository also contains detailed examples of Python scripts and Jupyter notebooks. All of brainrender's code is open source and has been deposited in full in the GitHub repository and at PyPI (a repository of Python software) under a permissive BSD 3-Clause license. We welcome any user to download and inspect the source code, modify it as needed, or contribute directly to brainrender's development.

Figure 3. Visualizing different types of data in brainrender. A) Spread of fluorescence labelling following viral injection of AAV2-CRE-eGFP in the superior colliculus of two FLEX-TdTomato mice. 3D objects showing the injection sites were created using custom Python scripts following acquisition of a 3D image of the entire brain with serial 2-photon tomography and registration of the image data to the atlas' template (with brainreg, Tyson, Rousseau, and Margrie 2020). B) Streamlines visualization of efferent projections from the mouse primary motor cortex following injection of an anterogradely transported virus expressing fluorescent proteins (original data from Oh et al. 2014, downloaded from Neuroinformatics NL with brainrender). C) Visualization of the location of several implanted Neuropixels probes from multiple mice (data from Steinmetz et al. 2019). Dark salmon colored tracks show probes going through both primary/anterior visual cortex (VISp/VISa) and the dorsal lateral geniculate nucleus of the thalamus. D) Single periaqueductal gray (PAG) neuron. The PAG and superior colliculus are also shown. The neuron's morphology was reconstructed by targeting the expression of fluorescent proteins in excitatory neurons in the PAG via an intersectional viral strategy, followed by imaging of cleared tissue and manual reconstruction of the neuron's morphology with the Vaa3D software. Data were registered to the Allen atlas with SHARPTRACK (Shamash et al. 2018). The 3D data was saved as a .stl file and loaded directly into brainrender. E) Gene expression data. Left, expression of the genes 'brn3c' and 'nk1688CGt' in the tectum of the larval zebrafish brain (gene expression data from fishatlas.neuro.mpg.de, 3D objects created with custom Python scripts). Right, expression of the gene 'Gpr161' in the mouse hippocampus (gene expression data from Wang et al. 2020, downloaded with brainrender; 3D objects created with brainrender). Colored voxels show voxels with high gene expression. The CA1 field of the hippocampus is also shown.

Figure 4 (caption recovered only in part). A) [...] Tyson et al. 2020). Right, visualization of functionally defined clusters of regions of interest in the brain of a zebrafish larva during a visuomotor task (data from Markov et al. 2020). B) Visualizing neuronal morphology data. Left, three secondary motor cortex neurons projecting to the thalamus (data from Winnubst et al. 2019, downloaded with morphapi from neuromorpho.org). Right, morphology of cerebellar neurons in larval zebrafish (data from Kunst et al. 2019, downloaded with morphapi). In the left panels of A) and B), the brain's outline was sliced along the midline to expose the data.
Brainrender's workflow
The central element of any visualization produced by brainrender is the Scene. A Scene controls which elements (Actors) are visualized and coordinates the rendering, the position of the camera's point of view, the generation of screenshots and animations from the rendered scene and other important actions.
Actors can be added to the scene in several ways. When loading data directly from a file with 3D mesh information (e.g.: .obj), an Actor is generated automatically to represent the mesh in the rendering. When rendering data from other sources (e.g.: from a .swc file with neuronal morphology or from a table of coordinates of labelled cells), dedicated functions in brainrender parse the input data and generate the corresponding Actors. Actors in brainrender have properties, such as color and transparency, that can be used to specify the appearance of a rendered actor according to the user's aesthetic preferences. Brainrender's Scene and Actor functionality uses vedo as the rendering engine (GitHub repository; Musy, Dalmasso, and Sullivan 2019).
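A short sketch of this Scene/Actor workflow is given below; the appearance-setting methods are inherited from vedo as far as we understand, so their exact names should be treated as assumptions.

from brainrender import Scene

scene = Scene(atlas_name="allen_mouse_25um")

# Adding a brain region returns the Actor wrapping the region's mesh
region = scene.add_brain_region("MOs")
region.alpha(0.5)        # vedo-style appearance controls on the Actor
region.color("skyblue")

scene.render(interactive=False)
scene.screenshot(name="mos_overview")  # export a high-resolution image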
In addition to data loaded from external files, brainrender can directly load atlas data containing, for example, the 3D meshes of individual brain regions. This is done via BrainGlobe's AtlasAPI to allow the same programming interface in brainrender to visualize data from any atlas supported by the AtlasAPI. Brainrender also provides additional functionality to interface with data available from projects that are part of the Allen Institute Mouse Atlas and Mouse Connectome projects (Wang et al. 2020;Oh et al. 2014). These projects provide an SDK (Software Development Kit) to directly download data from their database and brainrender provides a simple interface for downloading gene-expression and connectomics (streamlines) data. All atlas and connectomics data downloaded by brainrender can be loaded directly into a Scene as Actors.
Visualizing morphological data with reconstructions of individual neurons can be done by loading this type of data directly from .swc files, or by downloading it in Python using morphapi, a software package from the BrainGlobe suite that provides a simple and unified interface to several databases of neuron morphologies (e.g.: neuromorpho.org). Data downloaded with morphapi can be loaded directly into a brainrender scene for visualization.
Example code
As a demonstration of how easily renderings can be created in brainrender, the following Python code illustrates how to create a Scene and add Actors by loading 3D data from an .obj file and then adding brain regions to the visualization (Figure 5).
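The original listing appears in Figure 5 of the paper; the following is a hedged reconstruction of that kind of example, with placeholder file and region names.

from brainrender import Scene

scene = Scene(atlas_name="allen_mouse_25um")

# Load a 3D mesh from file and add atlas-defined brain regions
scene.add("my_neuron.obj", color="tomato")        # placeholder .obj file
scene.add_brain_region("PAG", "SCm", alpha=0.3)

scene.render()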
The code used to generate the figures and videos in this article is made freely available at a dedicated GitHub repository.
| 5,481 | 2020-02-25T00:00:00.000 | [ "Computer Science" ] |
Collision Avoidance in Mobile Wireless Ad-Hoc Networks with Enhanced MACAW Protocol Suite
A jamming attack is a serious threat to mobile networks that can collapse all necessary communication infrastructure. Since mobile nodes in Mobile Ad Hoc Networks (MANETs) communicate in a multi-hop mode, there is always a possibility for an intruder to launch a jamming attack in order to intercept communication among communicating nodes. In this study, a network simulation has been carried out in order to explore and evaluate the possible impacts of a jamming attack on the MACAW protocol. Ad-hoc network modelling is used to provide the communication infrastructure among mobile nodes when modelling the simulation scenarios. In the simulation model, these nodes use the AODV routing protocol, which is designed for MANETs, while the second scenario contains simulated MACAW node models for comparison. This paper is also the first study that addresses the performance evaluation of the MACAW protocol under a constant jamming attack. The performance of the MACAW protocol is simulated with the OPNET Modeler 14.5 software.
Introduction
Wireless networks occupy an important place in the world of communication. Today, a great number of people such as businessmen, managers, students and employees can easily access the internet or corporate networks through wireless connections. Although wireless technologies expand the limits of the communication area, they are exposed to some problems due to their nature. These problems degrade the quality of wireless communication.
Collision, one of these problems, occurs when two nodes in the same network attempt to transmit data at the exact same time [1]-[4]. This problem results in a loss of communication quality. Especially in mobile wireless networks, collision avoidance becomes more difficult due to the transmission environment. Up to now, a considerable number of solutions addressing this problem have been proposed in a variety of studies.
MACAW, one of these solutions, provides effective collision avoidance mechanisms. The MACAW protocol is generally used in mobile wireless networks [4]-[6].
On the other hand, security attacks, which are another cause of collisions, also result in a loss of communication quality. In this study, a network simulation has been carried out in order to evaluate the performance of the MACAW protocol. During this simulation, the MACAW protocol has been exposed to a constant jamming attack, which results in a high collision occurrence rate in the network. The entire network mechanism is simulated with the OPNET Modeler 14.5 simulation software, which is widely used in the network industry to estimate the behavior of network components in a virtual environment. The importance of this study lies in it being the first simulation case that addresses the performance evaluation of the MACAW protocol under a jamming attack.
Collision in Mobile Wireless Networks
In computer networks, there are many nodes and they have to transmit data packets over the same carrier. This carrier can be an optical cable in wired networks, while it is a frequency in wireless networks. Owing to this networking principle, if two nodes in the same network attempt to send data packets onto the communication line at the exact same time, a collision occurs. Collisions are important problems for networks because they disrupt data transmission and result in loss of information. When a collision occurs in the network, communication stops and, ultimately, data packets are dropped. Collisions always result in lower network throughput, higher network load, higher delay and a higher data drop rate [7]-[9].
Collision Avoidance Protocol in Mobile Wireless Networks
As mentioned previously, owing to collisions, network nodes face a loss of packet integrity. This means that proper communication cannot be established in the network. In the seven-layer OSI model [10]-[12], the Media Access Control (MAC) layer is responsible for avoiding packet collisions. The MAC sublayer performs this task through avoidance protocols. These protocols play a critical role in preventing data collisions; they aim to rule out situations in which multiple nodes access the network at the exact same time and to provide packet transmission for any node without collision. Several protocols have been developed and are widely used to prevent collisions in networks, such as ALOHA, CSMA, MACA and MACAW.
MACAW Protocol
Multiple Access with Collision Avoidance for Wireless (MACAW) is a widely used MAC sublayer protocol. MACAW is useful for mobile ad-hoc networks and contains new collision avoidance mechanisms. With these mechanisms, data transmission is completed in five steps: Request-to-Send (RTS), Clear-to-Send (CTS), Data Sending (DS), data packets and Acknowledgement (ACK). RTS is a message sent from the sender node to the receiver node, notifying it that the sender wants to transmit data. The CTS message is the response to the transmission request: if the receiver node is available for transmission, it sends a CTS message. The DS frame informs the receiver node about the size of the data packet. After that, data transmission starts. When it completes properly, the receiver node sends an ACK message to the sender node, notifying it that the data transmission was completed successfully [13]-[16].
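The five-step exchange can be illustrated with the toy Python sketch below (it is not OPNET code and not part of the original study); it simply models the sequence of frames exchanged for one successful transfer.

from dataclasses import dataclass

@dataclass
class Frame:
    kind: str          # "RTS", "CTS", "DS", "DATA" or "ACK"
    size_bits: int = 0

def macaw_exchange(payload_bits, receiver_idle):
    """Return the sequence of frame types exchanged for one data transfer."""
    trace = [Frame("RTS")]                      # sender requests the channel
    if not receiver_idle:
        return [f.kind for f in trace]          # no CTS received: sender backs off
    trace.append(Frame("CTS"))                  # receiver grants the channel
    trace.append(Frame("DS", payload_bits))     # sender announces the payload size
    trace.append(Frame("DATA", payload_bits))   # payload transmission
    trace.append(Frame("ACK"))                  # receiver confirms successful delivery
    return [f.kind for f in trace]

print(macaw_exchange(10_000, receiver_idle=True))
# ['RTS', 'CTS', 'DS', 'DATA', 'ACK']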
Network Simulation
In the computer networking field, testing a complete network's behavior in a real environment is a costly process. In this case, network simulation techniques provide an opportunity to test network equipment such as routers, servers and cables in an inexpensive way. Besides that, network protocols, network services and other network features can be tested to observe the behavior of nodes. Network simulations are performed by network simulators in a virtual environment. A network simulator is a software application that estimates the behavior of the nodes, equipment and protocols of a modelled network. Simulators typically support commonly used networking technologies such as WiMAX, WLAN and ZigBee. Most of these simulators have a Graphical User Interface (GUI); Command Line Interface (CLI) simulators are also available. Some network simulation software is open source, while other packages are proprietary. Commonly used simulators are GNS3, ns, OPNET, NetSim and OMNeT++ [17] [18].
Simulated Node Models
In this simulation experiment, mobile nodes have been used while evaluating the effects of collisions on the network. These mobile nodes form an ad-hoc network among themselves. In the OPNET simulator, these types of nodes are called "manet_station_adv". Fifty nodes were used while simulating the scenarios.
Simulation Model and Experiment Environment
OPNET Modeler 14.5 has been used to run the simulation scenarios. In this simulation, two different scenarios were designed. The simulation was performed in a 1000 × 1000 meters campus area with 50 mobile nodes. These nodes share common parameter attributes. In Table 1, all global simulation parameters are shown in detail.
In this simulation model, the MACAW and AODV protocols are used. The performance evaluation and contents of these protocols have been covered by researchers in the literature before [19] [20]. MACAW, as mentioned before, is a powerful collision avoidance protocol and is used in this simulation model for that specific purpose. On the other hand, Ad hoc On-Demand Distance Vector (AODV) [2] is a routing protocol used in mobile ad-hoc networks when nodes determine their destination paths for data transmission. The Trajectory parameter was set as Vector, which means that mobile nodes change their location asymmetrically. Finally, the Seed value, which is the number of network events performed in one second, was set as 40,000. Successful simulation scenarios have previously been conducted on different simulated contention-based or contention-free protocols through OPNET in the literature [19] [20], so it is reliable to conduct this simulation scenario through the OPNET simulation package.
Simulation Scenario 1
In the first scenario, there are 50 mobile nodes that form an ad-hoc network among themselves. They move at a constant speed of 10 meters per second. Figure 1 below illustrates these nodes distributed randomly in a 1000 × 1000 meters area.
In this scenario, the Application profile, Profile configuration and Mobility configuration are defined to meet the network requirements specified in Table 1. The network model has two scenarios. In the first scenario, nodes communicate with each other properly; there is no malicious node and no security attack. One of these nodes also acts as an Access Point. The OPNET simulator evaluated this scenario for one hour, and the simulation results were measured and evaluated according to the network performance metrics. The main purpose of this scenario is to determine the status of the network under normal conditions. This scenario is useful when comparing the effects of collisions and security attacks on network performance.
Simulation Scenario 2
In this scenario, 50 mobile nodes have again been used. Unlike Scenario 1, three mobile jammer nodes have also been added; the resulting topology is shown in the corresponding figure. While the ordinary nodes attempt to communicate with each other properly, the jammer nodes disrupt communication. They constantly send large data packets into the network, which causes lower network throughput, collision occurrences and high network traffic. The jammer nodes were specified according to the requirements of the project: each jammer constantly sends data packets of 10,000 bits. In the simulation model, this jamming attack continues for as long as the simulation runs; therefore, network communication is adversely affected.
All these circumstances directly affect network throughput. In scenario 2, these conditions have been simulated and evaluated. The comparison of the two scenarios clearly shows how jamming attacks cause lower throughput and a higher collision occurrence rate.
Performance Metrics
Simulation results are evaluated according to the determined network performance criteria. In this experiment, four performance metrics are considered: Network Throughput, Network Load, WLAN Delay and Data Dropped. Network throughput refers to the amount of bits forwarded successfully from one network layer to another in a given time. Network throughput is typically measured in bits per second (bps), megabits per second (Mbps) or gigabits per second (Gbps). Network Load is described as a measurement of the total data traffic on a WLAN Base Station Subsystem (BSS); it shows the BSS load statistics of a network separately. The WLAN Delay metric represents the latency of packets while they travel from one device to another. Finally, the Data Dropped statistic shows the total amount of data packets that are discarded by a higher network layer due to excessive packet buffer size.
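To make the four metrics concrete, the following illustrative Python snippet (not OPNET output; field names and values are invented for the example) computes them from a hypothetical per-packet trace.

def summarize(trace, duration_s):
    delivered = [p for p in trace if p["delivered"]]
    throughput_bps = sum(p["bits"] for p in delivered) / duration_s
    load_bps = sum(p["bits"] for p in trace) / duration_s
    avg_delay_s = (sum(p["recv_t"] - p["send_t"] for p in delivered) / len(delivered)
                   if delivered else 0.0)
    dropped_bits = sum(p["bits"] for p in trace if not p["delivered"])
    return throughput_bps, load_bps, avg_delay_s, dropped_bits

# Two hypothetical packets observed during one second of simulation
trace = [
    {"bits": 11_000, "send_t": 0.0, "recv_t": 0.4, "delivered": True},
    {"bits": 10_000, "send_t": 0.1, "recv_t": None, "delivered": False},  # jammed
]
print(summarize(trace, duration_s=1.0))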
Simulation Results
The two scenarios have been simulated for one hour. In the first scenario, 50 mobile nodes communicated properly with each other; there were no malicious nodes or security attacks. These nodes used the MACAW protocol for collision avoidance as well as the AODV protocol for mobile ad-hoc network routing. Like scenario 1, scenario 2 used the same protocols and an equal number of mobile nodes. Unlike scenario 1, however, scenario 2 also contained three mobile constant jammer nodes, and the network model in scenario 2 was exposed to a powerful and constant jamming attack in which the jammer nodes sent large data packets into the network. The simulation results show the performance of the MACAW protocol under this jamming attack condition. The two scenarios were simulated within a Discrete Event Simulation (DES) environment, and the simulation outcomes and statistics were generated by OPNET Modeler 14.5 as graphical charts according to the mentioned conditions.
Average WLAN Throughput Statistics
As stated before, Network Throughput refers to the number of bits that are forwarded successfully from one layer to another in a given time, and it is measured in bits per second (bps). Here, the throughput of the two scenarios is compared. It is expected that the throughput of scenario 1 will be higher than that of scenario 2 because, as mentioned in previous sections, malicious nodes and security attacks directly affect overall network performance. OPNET Modeler 14.5 provides the throughput comparison of the two scenarios as a result of the one-hour simulation. The following figure shows the WLAN Throughput comparison of the two scenarios.
Figure 3 illustrates the average Wireless LAN Throughput comparison of the two scenarios. In the first scenario, which has no malicious nodes, it can easily be seen that the bit transfer rate is above 8,000,000 bits per second. Under normal network conditions, network throughput reaches up to approximately 7.7 Mbit/s. The second scenario, represented by a red line in the figure, shows that when the network is exposed to a jamming attack, its overall throughput rapidly decreases below 3,000,000 bits per second. It can clearly be seen that the jamming attack has a significant impact on overall network performance: it decreases throughput by approximately a factor of three.
Average Wireless LAN Delay Statistics
Wireless LAN Delay statistics represent packet latency while packets are transferred from one layer to another. When network performance is low, packet transmission slows down and the total network delay becomes high. Figure 4 shows the comparison of the two scenarios for the Wireless LAN Delay statistics.
The blue line, which represents Scenario 1, shows that the WLAN Delay is close to zero seconds: in the normal network state, packets are delivered from one layer to another without much delay. In the second scenario, however, packet delay increases rapidly; the jamming attack causes a significant packet latency.
Average Wireless Data Dropped Statistics
As discussed previously, the data drop rate represents the data packets that are discarded by a higher network layer. When the buffer size of a data packet is higher than the determined acceptable value, the network automatically drops the packet. As is known, in Denial of Service attacks malicious nodes constantly send large packets to make network resources unavailable. As a countermeasure, network administrators configure server nodes to drop these large data packets. In the simulation scenarios, the "Large Packet Processing" option is set to "Drop" in order to protect the network against the possible damage of large packets. Figure 5 below shows the average Wireless LAN Data Dropped statistics.
Figure 5 shows the data drop rate comparison of the two scenarios. As can be seen, the red line, which represents Scenario 2, is higher than the blue line, because the jammer nodes send packets of 10,000 bits and the network directly drops them.
Average Wireless LAN Network Load Statistics
Network Load represents the total amount of data carried over the entire network. Figure 6 shows the Wireless LAN Network Load comparison of the two scenarios. There is a direct relationship between the average WLAN Data Dropped rate in Figure 5 and the average WLAN Network Load in Figure 6: scenario 2 has a higher data dropped rate due to the jamming attack, and its network load is higher because of the packets injected into the network by the jammers. The large number of injected packets is then dropped from the network, which is confirmed by the higher data dropped rate of scenario 2 in Figure 5 and its higher network load value in Figure 6.
Conclusion
In this simulation case study, the performance of the MACAW protocol is evaluated. During this simulation, the MACAW protocol has been exposed to a constant jamming attack, and the main goal of this study is to observe the possible impacts of such an attack on MACAW. MACAW showed good performance until it was exposed to the jamming attack. The simulation results show that a jamming attack in a mobile ad-hoc network leads to a loss of performance of MACAW. Based on the simulation results, it can be claimed that jamming attacks cause an approximately threefold loss of network throughput where the MACAW protocol is implemented. The delay in the network increased significantly, up to 800 seconds, during the jamming attack, while it is close to zero seconds under normal network conditions. The Data Dropped statistics show that 600,000 packets are discarded when MACAW is exposed to the attack; under normal network conditions, this statistic is stable at around 200,000 dropped data packets. In the jamming scenario, Network Load, the final performance criterion, is at a rate of 4,500,000 bits per second at the beginning of the simulation and stabilizes at approximately 3,500,000 bits per second by the end of the simulation, whereas in the normal scenario the Network Load statistic is stable at around 1,000,000 bits per second. The jamming attack thus causes not only a threefold decrease in network throughput but also a roughly threefold increase in network load. This simulation experiment is the first study that deals with the performance evaluation of the MACAW protocol under a constant jamming attack. Based on the results of our simulation experiment, we strongly recommend that other researchers simulate the performance of the MACAW protocol under different security attacks, such as Man-in-the-Middle, Distributed Denial of Service and spoofing attacks. It is also recommended that precautions against such attacks be incorporated into the MACAW protocol.
Table 1. Simulation scenario parameters. The simulation was carried out for 1 hour in a 1000 × 1000 meters area; the Mobility Model was set as Simple Random Waypoint with a constant speed of 10 meters/second; Network Throughput, Network Load and Delay were taken as performance parameters; the Data Rate was set as 11 Mbps, which is the maximum data rate for IEEE 802.11b; and the Trajectory was set as Vector.
| 3,728.2 | 2015-12-30T00:00:00.000 | [ "Computer Science" ] |
Systematizing Modeler Experience (MX) in Model-Driven Engineering Success Stories
Modeling is often associated with complex and heavy tooling, leading to a negative perception among practitioners. However, alternative paradigms, such as everything-as-code or low-code, are gaining acceptance due to their perceived ease of use. This paper explores the dichotomy between these perceptions through the lens of "modeler experience" (MX). MX includes factors such as user experience, motivation, integration, collaboration & versioning, and language complexity. We examine the relationships between these factors and their impact on different modeling usage scenarios. Our findings highlight the importance of considering MX when understanding how developers interact with modeling tools and the complexities of modeling and associated tooling.
Introduction
Model-driven engineering (MDE) is recognized as an established approach for developing complex software systems [1], bringing advantages during a software system's development. However, its adoption is currently hindered by a range of factors [2], including poor tool support and its usability [3], social and organizational issues, and a mismatch between technical and research requirements [2].
As developers engage in the intricate task of modeling, whether for data, systems, or simulations, evidence from practice shows that their experiences diverge from conventional academic practices or guidelines. In [4], the authors first introduce the term User eXperience (UX) for MDE, or Modeler Experience (MX). MX goes beyond the traditional definition of usability to include individual feelings, such as emotions, affects, motivations, and values in the process of modeling, which play a crucial role in the success and adoption of MDE approaches. This paper takes the initial steps towards developing a theory of MX, as called for in the original work by Abrahão et al. [4].
Taking the above definition one step further, we argue that MX is not only about the modeling language, tools or individual perspectives, but also about how modeling is embedded in the organization and the mindset of the individuals. It can, therefore, be a tool to address the mindset barriers reported by Kalantari and Lethbridge [5].
A fundamental insight is that modeler experience depends on the concrete context and circumstances in which modeling is used. This is in line with the principles of Context-Driven Software Engineering, where Briand et al. [6] advocate for the importance of considering contextual factors, whether human (e.g., modelers' background and experience), organizational (e.g., time and cost constraints) or domain-related (e.g., level of criticality, compliance with standards), when introducing a specific approach in an industrial setting. For example, organizations that use model-based systems engineering (MBSE), e.g., to build automotive products, have a very different approach and mindset towards modeling than organizations where modeling is only done informally and not enforced by process descriptions or similar measures. Therefore, another objective of this work is to identify scenarios where modeling has proven successful, based on existing literature and our collective experience on the topic, and to distill future usage scenarios based on successful industrial practices.
Based on the work of Bucchiarone et al. [7], we regard the modeling success stories of MBSE, low-code, and informal modeling and add infrastructure-as-code as an additional success story. To gain a better understanding of informal modeling, we differentiate semi-formal modeling from it. Overall, we thus regard five success stories in this paper:
• Infrastructure-as-Code (IaC) aligns with the definition of modeling as the creation of system representations, where the models are embedded within the code itself. According to Madni et al. [8], a model can take various forms, including modeling languages, algorithms, equations, and parametric curves. This perspective offers a unified approach to both modeling and programming, suggesting that "programs are models," where code is viewed as a less abstract, textual model [9]. In this context, users are not necessarily aware that they are modeling. Domain-Specific Languages (DSLs) are mostly implemented as textual languages and take advantage of features of the usual tools/IDEs for programming, like automation and version control. Although these languages are not agnostic of the specificities of different tools, one might argue that IaC is already an abstraction with inherent structure and intentionality, reflecting a specific system deployment. We add IaC as it is a success story for the use of domain-specific languages, especially in a programming context.
• Low-code is an emerging paradigm combining modeling and graphical programming. There are claims that low-code is not MDE [10], or that it is [11]; in this work, we follow the latter. Low-code platforms are modeling environments in which a user combines different pre-defined building blocks into an overall workflow. They are often used to model and automate simple processes or to create dashboards to visualise data. Examples of such platforms are Node-RED or Outsystems.
• MBSE in the Automotive Domain uses models intensively for systematically developing and analyzing complex automotive systems architectures with dedicated modeling frameworks. In this domain, the most commonly used languages are UML and SysML, to represent the structure of the software architecture at a high level of abstraction and to describe the system's behaviour via state machines. Stateflow/Simulink models are also used to represent subsystems which involve I/O and control design.
• Informal Modeling produces models which are sketches with little or no structure that have no direct or automatic translation to code. The main objective is to gain a collective understanding of some problem domain, while engaging in intensive communication.
• Semi-formal Modeling produces models which are possibly created with CASE tools or diagrammatic tools such as draw.io, sometimes with limited automatic analysis or code generation capabilities.
The primary objective of this expert voice is to introduce factors that contribute to MX and their interrelations across the identified modeling success stories. Moreover, it outlines how the identified factors impact the modeling success stories. This work results from the week-long GI-Dagstuhl Seminar on "Human Factors in Model-driven Engineering", attended by researchers and practitioners who are experts in the topics of model-driven engineering and human factors [12]. The ultimate objective of this work is to contribute to designing and developing better MDE tools by understanding the MX factors that ultimately hinder MDE adoption.
This paper is structured as follows. In Section 2, we define and detail the five modelling success stories mentioned before. In Section 3, we introduce the relevant factors that influence MX. In Section 4, we analyze the five success stories in light of the factors identified. Finally, in Section 5, we discuss the results and outline the research challenges and opportunities related to MX.
Selected Modeling Success Stories
Bucchiarone et al. [7] introduce three modeling success stories as diverse but archetypal instances of successful applications of modeling. Due to their diversity, they exhibit a broad set of characteristics that influence the modeler's experience differently. Therefore, it is important to elicit the characteristics of each success story and analyze the extent to which they influence MX. As stated above, we also treat infrastructure-as-code as a modeling success story in the following.
Table 1 presents the characteristics that define each modeling success story. These characteristics help to understand the differences and commonalities of the identified success stories. At this stage, we do not consider the list of characteristics to be complete, but it is sufficient to show that the success stories are different enough to warrant a separate treatment.
The mapping in Table 1 shows that the chosen success stories already provide a significant diversity in terms of the characteristics we have selected. This means they provide a sufficient difference regarding what modeler experience means in each of them and, ultimately, how they impact the factors of modeler experience, as we describe in Section 4.
Factors of Modeler Experience
In this section, we first describe the methodology we have followed to identify a set of factors that affect the modeler's experience. We then present these factors grouped into inherent factors, technical factors, and non-technical factors.
Methodology
We conducted a focus group with ten modeling experts from academia and industry to better understand the factors contributing to MX. The starting point of the focus group was a discussion of modeling success stories that can be observed in industry. These success stories were defined in smaller groups consisting of two to three focus group participants. While these characteristics were useful to distinguish the success stories and are used in Section 2, the group quickly realized that they are not suitable as descriptors of modeler experience, since they are not focused on the modeler. Therefore, as a second step, the group engaged in a discussion that yielded factors more tailored towards the modeler: required training, maintainability, immediate benefits, integration in the programming ecosystem, and reduced friction between modeling and programming. Upon further reflection, the group identified that some of them were goals and others were very hard to measure, even qualitatively.
Therefore, the group engaged in a brainstorming session using the "1-2-4-all" technique: first, individuals brainstormed relevant factors on their own; second, two individuals came together as a group, discussed their respective factors, and consolidated them into a new list; third, four participants got together and consolidated again; and finally, the entire group discussed the different lists of factors and agreed on a final list of factors.
As an additional step, the focus group participants then discussed how the different factors relate to each other (cf. Figure 1). While originally trying to identify positive and negative influences, this idea was abandoned at one point in favor of a more generic notion of relationship. It is unnecessary to distinguish cases in which a factor can positively or negatively influence another.
The result of these intense discussions during the focus groups is the set of factors described below.
Inherent Factors
An inherent factor is based on the characteristics of the problem to address. The complexity of the language is inherent in the complexity of the problem. Language choice is also based on the domain and, ultimately, on the problem that needs to be solved. The chosen language needs to be able to address that problem and needs to have the necessary complexity to describe relevant issues in the problem domain.
• Language Complexity is a measure of the effort required to solve a modeling problem using a modeling language. Perceived language complexity has been listed as an impacting factor in [4]. In their seminal work, France et al. [13] highlight the challenge of Managing Language Complexity faced by practitioners. Researchers have subsequently employed this classification to assess and prioritize challenges practitioners encounter in software modeling [14].
• Language Choice is the process of selecting, extending, or defining a set of modeling languages that provide suitable domain-specific abstractions. Vendor lock-in creates barriers to MX since it limits interoperability; it should, therefore, be included as a criterion in the selection process. Language choice is also influenced by the system domain, "a sphere of knowledge, influence, or activity. The subject area to which the user applies a program is the domain of the software." [15] According to a literature review [16], the language used directly influences MX. Modeling languages usually also cover multiple viewpoints [17]. The need for modeling viewpoints has been emphasized for practitioners, and a language is evaluated based on its support for multiple viewpoints and large viewpoint management [18].
Technical Factors
These factors concern the way modelers work with the chosen languages and integrate them into their workflows. Specifically, we identified four technical factors:
• Integration is the conceptual and technical capability of the modeling approach to fit into existing development processes and development platforms. Whittle et al. [19] categorize this issue under "Practical Applicability/Challenges of Applying Tools in Practice." This is captured in two sub-factors: 1) chaining tools together, addressing the ease or difficulty of using multiple tools for end-to-end functionalities, and 2) flexibility of tools, evaluating a tool's adaptability to various processes, tools, and working methods without imposing strict processes or requiring additional tools [19]. Mohagheghi et al. [20] report that three of four companies mentioned significant efforts to integrate MDE into their existing processes. The domain was identified as a contributing factor towards adoption: participants voiced that MDE seems most beneficial for bigger companies, projects, or companies with product lines or similar projects. The vision paper [21] identifies integration into DevOps workflows as an important step towards adopting MDE; it focuses specifically on the domain of cyber-physical systems, which would include the automotive industry.
• Tool UX: "A person's perceptions and responses that result from the use and/or anticipated use" of the modeling tool [22]. Ease of use and maturity of tools are two aspects mentioned by Mohagheghi et al. [20]. Both aspects can be categorized as Tool UX, where tool maturity is likewise closely related to integration and technical features. Interestingly, the authors found that companies with lower adoption of MDE considered the maturity of tools to be worse than those with higher adoption. Some participants suggested enhancing the UI of modeling software to improve Tool UX. In their systematic literature review, Kalantari and Lethbridge [16] classified and documented MX issues into five distinct categories: utility, usability, reliability, emotional, and marketing. Utility and usability are related to ease of use, while reliability corresponds with the maturity of tools.
• Versioning is the ability to track and merge model changes, facilitating model maintenance, and is one of the main enablers of Collaboration, as stated by Pietron [23]. This suggests that versioning is also an important factor of MX. The inadequacy of version management support has been identified as a drawback of Model-Based Engineering (MBE) tools within the embedded systems domain [24]. The technical challenge of combining versioning with blended modeling is addressed by Exelmans et al. [25]. Collaboration typically alternates between two modes: asynchronous collaboration, where the work is divided, and synchronous collaboration, to obtain a shared understanding [26]. In the literature, support for collaboration has consistently emerged as a desired attribute of modeling tools for practitioners [18, 27-30]. In the more general field of software engineering, collaborative tools positively impact productivity [31, 32].
• Technical Capabilities result from the combination of modeling language and modeling tool. They describe how the models can be used in the development process, e.g., to reason about properties of the modeled system via simulation or formal analysis, or to generate downstream artifacts. For example, Liebel et al. [33] present results from a survey about the technical capabilities used in model-based engineering in the embedded systems domain. Störrle [34] presents results from a survey on the purposes for which models are used when modeling informally.
Non-technical Factors
These factors relate to modeling outside the problem domain and its technical use. Specifically, we identified four non-technical factors:
• Modeler-Intrinsic Motivation covers the factors that lead modelers to use modeling, in particular the perceived benefits and positive emotions they experience. Intrinsic motivation is the inherent motivational source that drives individuals to engage in activities they personally find compelling. This stands in contrast to extrinsic motivation, where the incentive originates from an external source rather than the inherent appeal of the task [35]. In software development, empirical research suggests that intrinsic motivation significantly predicts developers' overall experience [36]. On the other hand, a lack of perceived benefit of modeling has been noted as one of the key mindset barriers to adopting MDE [5].
• Organization-Intrinsic Motivation covers the factors inside the organization that lead the organization and the modelers to adopt and foster modeling, in particular because of perceived benefits for productivity, product quality, cost, or collective well-being. The benefits of software modeling practices have been widely discussed in the literature. A survey on modeling practices in the embedded software industry underscores cost savings, shortened development time, reusability, and improved quality as key motivations for adopting MDE [37]. An empirical assessment noted the benefits of MDE in communication and control within a project [38]. The assessment also noted that MDE adoption is affected by culture, expertise, and evangelism within the organization. MDE adoption requires organizational changes, notably the need for a modeling 'champion' and carefully choosing initial projects for applying MDE [38]. Vogelsang et al. [39] noted the importance of managing expectations when adopting MDE.
• Organization-Extrinsic Motivation covers the factors outside the organization which influence the adoption, maturity, and approach of/to modeling, e.g., existing standards, regulations, tool availability and maturity, or customer demands. Adhering to regulations is one of the strong drivers for MDE adoption in the embedded systems industry [39]. The Unified Modeling Language (UML) has been cited as the de facto standard for software modeling, with many tools available that support not only model creation and code generation but also viewpoint management, verification, etc. [18]. However, it has been noted that existing research on modeling does not address quality issues reported in industrial contexts [40].
• Training includes all factors that are related to the skills and knowledge of modelers in using the selected modeling languages and the tools used to create, manipulate, analyze, and use the models. Training and education have been mentioned as an important factor affecting MDE adoption [38]. Insufficient training resources and support on the one hand [5], and the substantial effort required for developer training on the other [39, 41], are two significant factors in this context. A lack of fundamentals in MDE and education issues have been noted as major current problems in MDE [1]. Moreover, perceived competence, defined as an individual's subjective judgment regarding their own skills and performance, is identified as a crucial factor contributing to intrinsic motivation [36]. This recognition underlines the interconnected nature of factors in this domain, emphasizing that effective training impacts proficiency and influences individuals' intrinsic motivation in model-driven practices.
An important aspect of the different motivations is the willingness to take risks at the individual and organizational level. A small organization might be more willing to take a risk with its modeling approach and the tools used than a larger organization, because it is more driven by the need to innovate and cannot afford a less risky but more expensive solution.
Applying MX to the Modeling Success Stories
In this section, we highlight how the different MX factors apply to the modeling success stories introduced previously. We structure each subsection according to the different groups of factors. Table 2 also shows the importance of the different factors to the different success stories.
Infrastructure as Code
Inherent Factors. Infrastructure as Code is typically used at the beginning of the development process, when an infrastructure is needed to execute the developed software. This infrastructure is often managed by a single or only a few DevOps engineers and evolves as the development process continues. The language depends on the chosen cloud provider or container orchestration tool, so the choice is limited. While it is simple to configure a CRUD web application, due to re-usability, community support, and extensive documentation, configuring specialized distributed systems may require more complex features of IaC languages.
Technical Factors. As IaC is usually written in an IDE (with additional language support), it directly benefits from many features such as versioning, auto-completion, formatting, and testability. IaC is, therefore, highly integrated into the development workflow. Command line interfaces or simple graphical interfaces are then used for provisioning and setting up infrastructure from IaC configurations, which provides developers with a familiar tool UX.
Non-Technical Factors. Infrastructure as Code is adopted by DevOps engineers to reduce repeated work, improve debugging capabilities, and improve consistency across similar software systems. Organizations further aim to reduce costs, gain flexibility, and improve the documentation of their systems. However, depending on previous workflows and infrastructure management, the change to IaC can be demanding and requires training for operations engineers and developers.
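To illustrate the "code as model" character of IaC, the following hedged Python sketch uses the Pulumi infrastructure-as-code framework; the resource type and argument names follow the pulumi-aws provider as we recall it and are meant purely as an illustration, not as the tooling discussed by the authors.

import pulumi
from pulumi_aws import s3

# The declaration below is the model of the deployment: it is versioned,
# reviewable and repeatable like any other source file.
assets = s3.Bucket("app-assets", acl="private", tags={"env": "staging"})

pulumi.export("bucket_name", assets.id)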
Low-Code
Inherent Factors. Mature low-code platforms support the entire application lifecycle management. They do use proprietary modeling languages, though, so language choice is limited by the chosen tool. Different platforms specialize in different use cases: Node-RED, e.g., is marketed as a tool for data processing and visualization for embedded systems, whereas Microsoft's PowerBI platform is geared towards business process automation and visualization. With these use cases come different feature sets, different language complexity, different levels of extensibility and different support options that modelers have to take into consideration.
Technical Factors. Low-code platforms provide their own development environment. In many cases, these environments include collaboration and versioning features as well as features to automatically run, test, provision, and deploy the application using cloud-native concepts. They are separate from other IDEs developers might be using for other parts of the system, however. Usability of these tools is generally quite good [42], since the visual aspect of the environment is one of the most important features and many low-code platforms are specifically aiming at non-professionals.
Non-Technical Factors. The switch to low-code platforms is often motivated by organization-wide decisions, e.g., to move to Microsoft PowerBI to modernize legacy systems. We cannot discern situations in which organisations would be pushed to adopt such systems due to external circumstances. Depending on the use case, the barrier to entry for such platforms can be significant, but documentation and training material are exhaustive and there is a variety of training options.
MBSE
Inherent Factors. The modeling success story in the Model-Based Systems Engineering domain is characterized by extensively using highly sophisticated and standardized modeling languages. Examples are AUTOSAR for modeling the hardware/software architecture, and Matlab/Simulink/Stateflow for modeling the software's behavior.
Due to the ability to generate production code from the models, the modeling languages are highly complex and provide many technical features. Language choice in this domain is restricted, as complex value networks between OEMs and suppliers require extensive artifact exchange and, thus, language standardization. Technical Factors In addition to code generation capabilities, the tools provide extensive analysis and simulation capabilities. Furthermore, the tools support versioning and integrating software artifacts, e.g., an OEM integrates components from various suppliers. Tool UX is often rather poor, as these are expert, niche tools that combine complex languages with a large number of technical features.
Non-Technical Factors The restrictions of the domain highly influence the nontechnical factors of the modeling experience in the MBSE success story.Modeling languages and the modeling success story are restricted by organization-extrinsic motivation, e.g., standards or clients require companies to use certain modeling languages and corresponding tools.Furthermore, the company-internal and company-external collaboration is dependent on using the same modeling languages and conforming modeling tools.
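As a toy illustration of the model-to-code and simulation capabilities mentioned above, the sketch below encodes a tiny state-machine model, replays an event trace against it, and emits executable code from it. The model, its naming, and the generated target are invented stand-ins for what industrial MBSE toolchains (e.g., Simulink/Stateflow code generators) do at far greater scale and rigor.

```python
# Toy model-to-code pipeline: a tiny state machine (standing in for a behavioural
# model such as a Stateflow chart) is both simulated and turned into plain code.
# Model content and naming are invented for illustration only.
MODEL = {
    "name": "WiperController",
    "initial": "Off",
    "transitions": {                      # (state, event) -> next state
        ("Off", "rain_detected"): "Slow",
        ("Slow", "heavy_rain"): "Fast",
        ("Fast", "rain_stopped"): "Off",
        ("Slow", "rain_stopped"): "Off",
    },
}

def simulate(model: dict, events: list[str]) -> list[str]:
    """Analysis/simulation capability: replay an event trace against the model."""
    state, trace = model["initial"], [model["initial"]]
    for ev in events:
        state = model["transitions"].get((state, ev), state)
        trace.append(state)
    return trace

def generate_code(model: dict) -> str:
    """Code-generation capability: emit a plain Python class from the model."""
    lines = [f"class {model['name']}:",
             f"    def __init__(self): self.state = {model['initial']!r}",
             "    def step(self, event):"]
    for (src, ev), dst in model["transitions"].items():
        lines.append(
            f"        if self.state == {src!r} and event == {ev!r}: self.state = {dst!r}; return")
    return "\n".join(lines)

if __name__ == "__main__":
    print(simulate(MODEL, ["rain_detected", "heavy_rain", "rain_stopped"]))
    print(generate_code(MODEL))
```

The same artifact serves analysis, simulation, and generation, which is precisely why the MBSE languages and tools carry the complexity described above.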
Informal and Semi-Formal Modeling
Inherent Factors Informal modeling mostly occurs early in the design process, where conformance to any standardized notation is of minimal concern compared with (creative) expressiveness and a free flow of ideas. Often, the created sketches evolve into (semi-)formal models later on. Technical Factors Much of the informal modeling occurs with analog media such as pen and paper or whiteboards. Although digital alternatives (e.g., electronic whiteboards) exist, their adoption rate remains low in practice, despite technical advantages such as automated persistence and remote collaboration. We attribute this to the poor Tool UX (e.g., responsiveness, viewing angles, accuracy) of these solutions compared to analog media, and we conclude that this factor is of utmost importance for informal modeling. Non-Technical Factors Informal modeling, with its low-tech nature, comes naturally to engineers and has been practiced since before it was even considered a form of 'modeling'. The workforce is intrinsically motivated, and this motivation does not depend on organization-intrinsic or -extrinsic factors. It is often a collaborative activity.
Comparing the success stories
When comparing the different modeling success stories, we see that they occupy different places on the spectrum of MX factors. As an example, we illustrate the diversity of modeling success stories with respect to language complexity and the technical capability of the tooling. We believe there are different sweet spots for different modeling workflows: for instance, model-based systems engineering reaps benefits from being formal and, therefore, having high language complexity; this requires tools with high technical capability, which are also very complex to use. Informal modeling, on the other hand, uses languages with very low complexity and requires tools with fewer technical capabilities, e.g., to reason about models. Both modeling success stories stand in stark contrast to UML and its tools in the early days, when the language was complex yet rather informal, and the tools had low technical capability. When using other factors, the different success stories will be positioned differently. For instance, when contrasting language choice and organization-external motivation, MBSE, with high motivation and few available languages, poses fewer issues than informal modeling, where languages are often made up on the spot and the syntax might not be understood across companies or even across teams. This illustrates that our factors are able to capture interesting trade-offs and that a more systematic investigation of these trade-offs is necessary in future research.
Conclusion and Future Work
We defined and explored the concept of Modeler Experience (MX) in the context of different modeling success stories: infrastructure-as-code, low-code, model-based systems engineering (MBSE), and informal and semi-formal modeling. MX, which encompasses factors such as usability, motivation, integration, and language complexity, highlights the dynamics between practitioners and modeling tools. One contribution of this paper is the delineation of technical and non-technical MX factors and the characterization of these factors in different modeling success stories. In addition, we show how the proposed MX factors differ between those success stories. By examining these success stories in the light of MX factors, the paper underscores the contextual nature of modeling practices and the need for tailored approaches to effectively address modelers' needs.
Although this paper lays a solid foundation for understanding MX, there are still many potential research paths to be explored in the future: The interplay between MX factors and the success of modeling activities suggests the need for a holistic approach to tool design and workflow integration.Moving forward, research should focus on empirically validating the identified MX factors across diverse contexts and exploring the impact of different workflows on adopting and sustaining modeling practices.
Further empirical studies are essential to quantify the relative importance of each MX factor and how they interact with one another.These studies could take the form of longitudinal research within organizations that are transitioning to or already utilizing model-driven approaches, as well as experimental studies comparing different tool sets and methodologies.Additionally, there is an opportunity to conduct cross-sectional analyses comparing MX across domains, such as automotive, aerospace, healthcare, and finance, where modeling plays a crucial role.
Another promising direction is the development and refinement of modeling tools that incorporate the principles of human-centered design.By focusing on the end-user experience, tool developers can create more intuitive interfaces, improve the integration with existing development ecosystems, and enhance collaborative features.This direction also calls for an iterative design process, where feedback from modelers is continuously incorporated into tool enhancements.
Case studies can provide in-depth insights into the practical challenges of addressing MX in specific contexts and the benefits of modeling in industry.By examining specific cases in detail, researchers can identify best practices, common pitfalls, and innovative uses of modeling tools that may not be evident through other research methods.
In conclusion, by continuing to explore and address the various dimensions of MX, we will serve both academic research and practical tool development, guiding efforts toward powerful models and tools that are a pleasure to use, thereby enhancing productivity and satisfaction for modelers worldwide.
Fig. 1: MX factors and their relations
Table 1: Overview of the characteristics of the Modeling Success Stories based on Bucchiarone et al. [7]
Table 2: Mapping of the identified factors to the modelling success stories. The table also shows the importance of the different technical and non-technical factors for the individual success story. | 6,209 | 2024-06-28T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
A Common Origin for the QCD Axion and Sterile Neutrinos from SU (5) Strong Dynamics
We identify the QCD axion and right-handed (sterile) neutrinos as bound states of an SU (5) chiral gauge theory with Peccei-Quinn (PQ) symmetry arising as a global symmetry of the strong dynamics. The strong dynamics is assumed to spontaneously break the PQ symmetry, producing a high-quality axion and naturally generating Majorana masses for the right-handed neutrinos at the PQ scale. The composite sterile neutrinos can directly couple to the left-handed (active) neutrinos, realizing a standard see-saw mechanism. Alternatively, the sterile neutrinos can couple to the active neutrinos via a naturally small mass mixing with additional elementary states, leading to light sterile neutrino eigenstates. The SU (5) strong dynamics therefore provides a common origin for a high-quality QCD axion and sterile neutrinos.
I. INTRODUCTION
The QCD axion and right-handed neutrinos are well-motivated new particle states beyond the Standard Model (SM). The QCD axion elegantly solves the strong CP problem via the Peccei-Quinn mechanism [1], where an (anomalous) U(1)_PQ symmetry is spontaneously broken, giving rise to a pseudo-Nambu-Goldstone boson [2,3] that dynamically cancels the strong CP phase. Experimental bounds limit the U(1)_PQ symmetry breaking scale to be f_PQ ≳ 10^8 GeV [4]. The QCD axion can also provide the missing dark matter component of the Universe [5][6][7], thereby solving two problems in the Standard Model. However, explicit violation of the U(1)_PQ global symmetry must be entirely dominated by QCD dynamics, with any other violation, in particular from gravity, highly suppressed [8][9][10][11][12]. An attractive solution to this axion quality problem is to realize the PQ symmetry as an accidental symmetry in the low-energy theory, similar to baryon and lepton number in the Standard Model. In particular, if new strong dynamics is introduced around the PQ breaking scale, then gauge and Lorentz symmetry can be used to accidentally preserve U(1)_PQ up to very high dimension terms in the Lagrangian [13].
The seesaw mechanism [14][15][16][17][18] provides a similarly elegant explanation of the hierarchy between the masses of neutrinos and charged leptons. Assuming order one Yukawa couplings, this is simply achieved by introducing right-handed neutrinos with masses ≳ 10^10 GeV. As is well-known, this mass scale is similar to the PQ scale and the two can be related [19,20]. Given this coincidence of scales, and the possibility of realizing the PQ symmetry as an accidental symmetry, we seek a solution that relates these two scales via strong dynamics. The strong dynamics also has the advantage of naturally generating the PQ scale via dimensional transmutation, obviating the need to introduce explicit mass scales in the scalar potentials that are normally used to UV complete the axion Lagrangian, as in the KSVZ or DFSZ scenarios.
A particularly interesting strong dynamics based on an SU(5) gauge theory with massless, chiral fermions was recently considered in Ref. [21] to realize a high-quality QCD axion. The PQ symmetry was identified as a global symmetry of the strong dynamics and assumed to be spontaneously broken at a scale f_PQ. This realises a low-energy QCD axion as a composite pseudo-Nambu-Goldstone (NG) boson, thereby solving the strong CP problem in a similar manner to the original dynamical axion [22,23]. Furthermore, the local gauge and Lorentz symmetry accidentally preserves the U(1)_PQ global symmetry up to dimension-nine terms in the low-energy effective Lagrangian, thereby ameliorating the axion quality problem.
In this paper, we build upon the SU (5) model in Ref. [21] to realize both a high-quality QCD axion and right-handed (sterile) neutrinos as bound states of the same UV dynamics.In particular, we show that the spontaneous breakdown of the PQ symmetry leads to QCD singlet states with Majorana masses of order the PQ scale, f PQ , which can be identified as composite sterile neutrinos.To generate the left-handed (active) neutrino masses, the composite sterile neutrinos are then coupled either directly or indirectly to the active neutrinos, realizing heavy or light sterile neutrino mass eigenstates, respectively.
In the case where the composite sterile neutrinos couple directly to the active neutrinos (with a dimension-seven Higgs-fermion coupling), light Majorana active neutrinos are obtained via a see-saw mechanism. The interaction is generated by integrating out PQ-charged scalar fields in a UV completion. Importantly, the quality of the PQ symmetry is not affected. Alternatively, an indirect coupling to the active neutrinos can occur via a naturally generated small mass mixing between an elementary, right-handed neutrino and the composite sterile neutrinos. This leads to sterile states with naturally suppressed (sub-TeV) Majorana masses and, depending on the scale of the UV completion, can realize the neutrinos as pseudo-Dirac states [24,25]. The strong dynamics, therefore, plays a pivotal role not only in addressing the axion quality problem, but also in relating the axion and neutrino masses.
The composite sterile neutrinos share features similar to those previously studied in Refs.[24][25][26][27][28], however the connection with a composite QCD axion was not previously considered.The chiral UV gauge theory also provides an explicit 4D realization of the holographic 5D setups considered in Refs.[29][30][31], which solved the axion quality problem with a composite axion and partial compositeness in the SM fermion sector.Finally, previous work in Refs.[32][33][34][35] also addressed the axion quality problem with an accidental PQ symmetry of strong dynamics, although without any connection to neutrino masses.
The outline of our paper is as follows.In Section II we review the matter content of the SU (5) gauge theory, together with the global symmetry structure and IR dynamics.We then discuss the resulting bound state spectrum, which includes a composite, high-quality axion as well as QCD singlet bound states that are identified as composite right-handed neutrinos.The generation of neutrino masses is discussed in Section III, where we present models with both heavy and light sterile neutrino mass eigenstates.A holographic connection to the light sterile neutrino case is also discussed.Our concluding remarks are presented in Section IV.The Appendices contain supplementary material related to the QCD anomaly factor (App. A), representations of the NG bosons (App.B), solution of the axion quality problem (App.C), implications for axion dark matter (App.D), and mass-mixing in the light sterile neutrino scenario (App.E).
In the limit that the QCD coupling α s → 0, the SU (5) gauge theory has an SU (n f )5 × SU (n f ) 10 global symmetry, where n f = dim R ψ .In addition, there is a single SU (5)-anomaly-free global U (1) (analogous to B − L symmetry in SU (5) grand unified theories) for which the charges of ψ5 and ψ 10 satisfy Q5 = −3Q 10 .This is identified as the PQ symmetry.The representations of the fermions under the full flavor symmetry are shown in Table II.The QCD gauge group, SU (3) c , is a subgroup of the non-abelian flavor symmetry, and the latter is explicitly broken for α s ̸ = 0. Importantly, the PQ symmetry is anomalous with respect to QCD (see App. A), which will eventually lead to the composite axion obtaining a mass from non-perturbative QCD effects in the usual way.On the other hand, the fact that U (1) PQ has no SU (5) anomaly is important to ensure that the axion remains light and provides a solution to the strong CP problem.
B. IR Dynamics and Symmetry Breaking
Given the fermion content in Table I, the SU(5) gauge theory is asymptotically free in the UV and becomes strongly coupled in the IR. The dynamics of strongly coupled, non-supersymmetric chiral gauge theories are not well understood. Techniques such as 't Hooft anomaly matching, large-N, and the a-theorem can be used to place restrictions on the dynamics but do not, in general, single out a unique IR phase. (For a recent discussion of the IR dynamics of SU(N) theories with a single flavor of antisymmetric + anti-fundamental chiral fermions see [36].) It was pointed out in Ref. [21] that for the current SU(5) model with n_f flavors it is impossible to match the [SU(n_f)_5]^3 and [SU(n_f)_10]^3 anomalies if the SU(5) confines in the IR. The global symmetry is therefore spontaneously broken by the SU(5) gauge dynamics; however, there remain several possible IR phases with different unbroken global symmetry groups. Furthermore, there is the possibility of forming bilinear condensates (i.e. ⟨ψ_5 ψ_10⟩) that dynamically break the SU(5) gauge theory (see e.g. [37]). Following [21], we assume that (i) the gauge theory confines and no bilinear condensates form, and (ii) the flavor breaking condensate preserves at least an SU(3) subgroup, which is the weakly gauged SU(3)_c.
C. Composite axion
The spontaneous PQ breaking by the condensate (3) gives rise to a composite pseudo-NG boson, which is the axion.We parameterise the Goldstone field containing the axion, a, as where a/f PQ ∈ [0, 2π).Under the PQ transformation ψ → e iQ ψ α ψ, with α an arbitrary phase parameter, the axion transforms as a → a + αf P Q , where f PQ is the axion decay constant.This constant obeys the relation f PQ = Λ 5 /g * , where g * is a typical coupling between the composite bound states (with mass scale Λ 5 ), satisfying As usual, QCD instantons generate a potential for the axion, providing a dynamical solution to the strong CP problem.The axion mass is then given by the standard expression [38,39], where The solution to the strong CP problem is spoiled if there are additional sources of explicit PQ violation, which will modify the axion potential.As is well known [9][10][11][12], gravity is not expected to preserve the global PQ symmetry and may induce higher-dimensional, Planck scale suppressed operators that contain the PQcharged fermions.Below the scale where SU (5) confines, these operators give additional contributions to the axion potential.Importantly, the combination of Lorentz symmetry and the SU (5) gauge symmetry restricts these operators to have dimension-9 or greater, such that U (1) PQ remains an approximate accidental symmetry at low energies.Planck scale induced contributions to the axion potential are then sufficiently suppressed provided that f a ≲ 10 9 GeV [21] (see App. C).This bound also has consequences for axion dark matter, which is further discussed in App.D.
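For reference, the standard QCD axion mass relation used in the axion literature (cf. the references cited as [38,39]) can be written as below; any model-dependent O(1) factors specific to this construction are not captured here, and f_a ≡ f_PQ/N with N the QCD anomaly factor of App. A.

```latex
m_a \;\simeq\; \frac{\sqrt{m_u m_d}}{m_u+m_d}\,\frac{m_\pi f_\pi}{f_a}
    \;\simeq\; 5.7\,\mu\mathrm{eV}\left(\frac{10^{12}\,\mathrm{GeV}}{f_a}\right),
\qquad f_a \equiv \frac{f_{\mathrm{PQ}}}{N}.
```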
D. Composite Fermion Bound States
In addition to the composite axion, there are massive fermionic bound states.As we now show, these include SM singlets that, as will be discussed further in Section III, can act as massive sterile neutrinos.We restrict our discussion to the 3-fermion, spin-1 2 bound states, for which there are two distinct SU (5) singlets, where we have written the bound states as right-handed Weyl fermions for convenience.These bound states all have PQ charge +1, and due to the spontaneous breaking of U (1) PQ by the condensate (3) obtain Majorana masses of order the resonance scale Λ 5 .
The bound states decompose into irreducible representations of SU (3) c that depend on the representation R ψ of the constituent fermions.For example, with R ψ = 3 ⊕ 3, we have: Note that in both cases Ψ 1 and Ψ 2 each contain two QCD singlet bound states3 , which can be identified as righthanded neutrino candidates.We denote these bound states by N 1,j and N 2,j , where the index j = 1, 2 represents the two different SU (3) c singlets: It was shown in [21] that the 't Hooft anomaly matching condition for U (1) PQ can be satisfied if the bound state in Ψ 1 that transforms in the R ψ representation is massless.Thus, anomaly matching provides no guidance as to whether or not U (1) PQ is spontaneously broken when the theory confines.As discussed in Section II B, we assume that it is spontaneously broken, which provides the necessary mechanism to realize a QCD axion in the low-energy theory.
III. NEUTRINO MASS
A particularly interesting feature of the model is that the same dynamics that spontaneously breaks the PQ symmetry and generates a composite, high-quality axion also produces composite, QCD singlet fermions. These can be identified as composite sterile neutrinos if there exist additional couplings that connect the strongly interacting SU(5) sector with the electroweak sector of the SM. The active neutrinos then mix with the spectrum of composite sterile neutrino states which, together with the effect of the PQ breaking condensate, leads to the generation of Majorana masses for the active neutrinos. The PQ symmetry therefore serves as a generalised lepton number. In the following sections we present two explicit realisations of this idea. First, we consider a model that, after integrating out the SU(5) sector (containing the heavy sterile neutrinos), reduces to the Weinberg operator at low energies. Second, we consider an alternative model that contains elementary sterile neutrinos which mix with the composite states, resulting in light sterile mass eigenstates.
A. Heavy Sterile Neutrino Model
Before presenting a renormalisable UV model, we first consider how the SU (5) and SM lepton sectors can be connected within an effective field theory (EFT) framework.The lowest dimension operators that can achieve this are dimension-7: where L i = (ν L,i , e L,i ) T are the SM SU (2) lepton doublets, which carry PQ charge +1, and H is the Higgs doublet 4 which obtains the VEV ⟨H⟩ ≡ 1 √ 2 (0, v) T , with v ≈ 246 GeV.The index j enumerates the SU (3) c singlets in (ψψψ), with j ∈ {1, 2} for either R ψ = 3 ⊕ 3 or R ψ = 8, as discussed in section II D. We assume that the scale Λ L satisfies Λ 5 < Λ L < M Pl , with the dimensionless couplings ξij , ξ′ ij allowing for flavor-dependent masses for the different neutrino flavors denoted by the index i.
The relevant low-energy effective theory below the SU(5) resonance scale, Λ_5, contains only the SM degrees of freedom and the pseudo-NG axion, with the PQ symmetry non-linearly realised. (The heavy composite resonances, including the singlet fermion bound states, have been integrated out.) The leading term, consistent with the symmetries, that is induced by the operators in Eq. (12) is where ξ_ij ≃ O(1) × ξij and, for simplicity, we have taken ξ′_ij = 0. The factors of f_PQ and Λ_5 have been determined using dimensional analysis, assuming the strong dynamics can be described by a single mass scale and coupling (see e.g. [40]). After electroweak symmetry breaking, the above term generates Majorana masses for the neutrinos, where λ_ν,i are the eigenvalues of ξ_ik ξ_jk in (13). Notice that the neutrino masses feature the usual see-saw factor v^2/Λ_5 (with Λ_5 identified as the scale of the heavy sterile neutrinos), but also an additional suppression by the ratio of the resonance scale to the EFT scale Λ_L. Consequently, reproducing the measured neutrino masses requires Λ_5 to be lower than the usual Type-I see-saw scale. A lower Λ_5 is also desirable to address the axion quality problem. Notice also that Eq. (13) leads to axion-neutrino couplings.
Comparing Eqs. ( 5) and ( 14), the ratios between the active neutrino masses and the axion mass are estimated to be This shows that if the EFT scale is close to the PQ scale, specifically when Λ L ≃ (13/g 5/6 * ) Λ 5 , the axion and neutrino masses are in approximately the same range.
The expression (14) is fitted to the observed neutrino mass spectrum to determine the viable parameter space. Assuming the neutrinos are normal ordered, and fixing λ_ν,3 = 1, we use the mass of the heaviest neutrino to constrain Λ_L in terms of Λ_5. The lighter neutrino masses are then simply obtained by choosing appropriate λ_ν,1, λ_ν,2. The combination of neutrino oscillation measurements [41,42] and the upper bound on the sum of neutrino masses from cosmology [43][44][45] (Σ m_ν,i < 0.13 eV) leads to the 2σ range 0.05 eV ≤ m^active_ν,3 ≤ 0.06 eV. This restricts the allowed values of f_a and Λ_L, as shown in Fig. 1 for R_ψ = 3 ⊕ 3 (left panel) and R_ψ = 8 (right panel). Within the green band an active neutrino mass m_ν,3 = 0.05 eV can be obtained with a strong sector coupling in the range 1 ≤ g_* ≤ 4π. The lower edge of the band corresponds to g_* = 1 and the upper edge to g_* = min(4π, Λ_L/f_PQ), such that Λ_L > Λ_5 is always satisfied within the band. The range of f_a excluded by SN1987A [4] is shown in red. Values of f_a to the right of the dashed (g_* = 1) or dotted (g_* = 4π) blue line are disfavoured, since Planck-suppressed contributions to the axion potential can destabilise the solution to the strong CP problem (see App. C). The EFT description in Eq. (12) should remain valid up to the energy scale of the composite resonances, otherwise the new degrees of freedom in the UV completion will, in general, modify the flavor and PQ symmetry breaking dynamics discussed in Section II B. (While such a scenario could also be viable, we do not consider it here.) This corresponds to the requirement Λ_L > Λ_5. This condition is violated in the dark (light) grey regions in Fig. 1 for g_* = 1 (4π). Taking into account the lower bound on f_a from SN1987A, we then find that in most of the parameter space small values of the strong sector coupling, g_* ≃ 1, are needed to generate the active neutrino masses in this scenario.
UV completion
A UV completion of the operators in Eq. ( 12) can be obtained by introducing two massive complex scalar fields ϕ, ϕ 2 .
We take these fields to have masses m_ϕ, m_ϕ2 > Λ_5, so that they do not affect the confinement and symmetry breaking of the SU(5) strong dynamics discussed in Section II.

TABLE III: Representations of the fields in the UV completion of the heavy sterile neutrino scenario.

In addition, the scalars
do not obtain VEVs and therefore do not reintroduce the axion quality problem (or affect electroweak symmetry breaking).The relevant interaction Lagrangian is5 where y5, y 10 , y 2 are dimensionless couplings and m 12 ≲ m ϕ , m ϕ2 is a mass parameter.Note that we have suppressed the indices on the Yukawa couplings y5, y 10 that enumerate the different SU (3) c contractions, as well as the generation indices of the lepton doublet and coupling y 2 .The charges of the fields are listed in Table III.
Integrating out ϕ and ϕ 2 , as shown diagrammatically in Fig. 2, yields the effective Lagrangian The two terms correspond to the dimension-7 operators in Eq. (12).Assuming all dimensionless couplings are O(1), the energy scale of the effective operators is approximately given by It was shown in Fig. 1 that to generate the observed active neutrino masses Λ L cannot be significantly larger than Λ 5 .In the UV completion this corresponds to the requirement that m ϕ ∼ m ϕ2 ∼ m 12 ≳ Λ 5 .An alternative possibility is that the UV theory contains elementary, massless, right-handed neutrinos ν R with PQ charge −1.The PQ symmetry forbids explicit Majorana mass terms for the ν R , but they form the usual Dirac masses with the ν L (which here also have PQ charge −1).Tiny Yukawa couplings, y ν , would then normally be required to explain the active neutrino masses.However, the spontaneous PQ breaking in the SU (5) sector can generate Majorana masses, m R , for ν R , if there is mixing between the elementary ν R and composite operators.As we shall show, m R can be hierarchically smaller than Λ 5 , providing a means to naturally generate light sterile states and pseudo-Dirac neutrinos.
The elementary ν R can mix with the 3-fermion, dimension- 9 2 composite operators: where Λ R is the EFT scale satisfying Λ 5 < Λ R < M Pl , and ζij , ζ′ ij are dimensionless couplings, with i the neutrino flavor index and j ∈ {1, 2} enumerating the SU (3) c singlets in (ψψψ).After the SU (5) theory confines, these operators give rise to a (PQ-invariant) mass mixing between the ν R,i and the composite fermions N j .This effect is discussed further in App.E in the context of a toy model with a single composite resonance.In the lowenergy effective theory below the resonance scale Λ 5 , the effect of the above operators is to generate a Majorana mass term for ν R .Including also the Dirac mass term for the neutrinos, we obtain with where ζ ij ≃ O(1) × ζij and we have set ζ′ ij = 0 for simplicity.Notice that m R (which has again been estimated using dimensional analysis) is suppressed relative to the resonance scale, and if Λ R ≫ f PQ there will be light sterile neutrino states.Hence, both the see-saw and pseudo-Dirac limits of the neutrino mass matrix in (20) can be naturally obtained, depending on the ratio f PQ /Λ R .In the following, we assume diagonal couplings for simplicity: y ij ν = y ν,i δ ij and ζ ik ζ jk = δ ij .The see-saw limit of the neutrino masses is then given by and m sterile ν is given by (21). Figure 3 shows (green) contours of the sterile neutrino mass 6 in the f a -Λ R plane.We have fixed m active ν,3 = 0.05 eV, such that the contours correspond to different values of y ν,3 .The left and right panels are for R ψ = 3⊕ 3 and R ψ = 8, respectively.The red region is excluded by SN1987A and the region to the left of the blue lines is favoured to obtain a high-quality axion.The requirement that the EFT scale is above the resonance scale, Λ R ≳ Λ 5 , imposes an upper bound on m sterile ν (or equivalently y ν,3 ), as shown by the grey region.
The sterile neutrino masses can naturally be hierarchically smaller than the underlying scales f_a and Λ_R. However, similar to the standard type-I seesaw, small Yukawa couplings, y_ν, are then needed to obtain the active neutrino masses. With sterile neutrino masses of order the eV scale, a coupling y_ν ∼ 10^−12 is needed, while TeV-scale sterile neutrino states correspond to y_ν ∼ 10^−6. In the pure Dirac mass limit (m_R → 0), an active neutrino mass of m^active_ν = 0.05 eV corresponds to y_ν ≃ 10^−13.
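The couplings quoted above can be checked with a quick order-of-magnitude computation, assuming the standard see-saw relation m_active ≈ (y_ν v/√2)²/m_sterile and the Dirac relation m ≈ y_ν v/√2; possible O(1) factors in the paper's expressions are ignored in this sketch.

```python
# Quick order-of-magnitude check of the quoted Yukawa couplings, using the standard
# see-saw relation m_active ~ (y_nu*v/sqrt(2))**2 / m_sterile and the Dirac relation
# m ~ y_nu*v/sqrt(2). O(1) factors from the strong dynamics are ignored.
from math import sqrt

V_EW = 246e9  # electroweak VEV in eV

def dirac_mass(y_nu: float) -> float:
    return y_nu * V_EW / sqrt(2)

def seesaw_active_mass(y_nu: float, m_sterile: float) -> float:
    """Both masses in eV."""
    return dirac_mass(y_nu) ** 2 / m_sterile

print(seesaw_active_mass(1e-12, 1.0))   # eV-scale sterile state  -> ~0.03 eV active mass
print(seesaw_active_mass(1e-6, 1e12))   # TeV-scale sterile state -> ~0.03 eV active mass
print(dirac_mass(1e-13))                # pure Dirac limit: ~0.02 eV, so y_nu ~ 1e-13
                                        # matches m_active ~ 0.05 eV at the order-of-magnitude level
```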
UV completion
A tree-level UV completion for the operators in Eq. (19) is obtained by introducing a massive, complex scalar field ϕ with mass m_ϕ > Λ_5. Again, the scalar does not obtain a VEV and therefore does not affect the axion quality. The charge assignments of the fields are listed in Table IV, leading to the interaction terms where y_5, y_10, y_R are dimensionless couplings and we have suppressed the indices that enumerate the SU(3)_c contractions. For simplicity, we consider just one active neutrino flavor and one ν_R flavor. The analysis can be straightforwardly generalized to three active neutrino flavors. Integrating out the massive scalar ϕ in (23), as shown diagrammatically in Fig. 4, yields the dimension-6 operators of Eq. (24), of the schematic form (...)(ψ_10 ψ_10) + h.c., which are the operators in Eq. (19), with Λ_R ∼ m_ϕ for O(1) couplings.

FIG. 4: UV-completion of the dimension-6 operators in Eq. (24) arising from tree-level exchange of the heavy scalar ϕ. (The arrows represent fermion number.)
Holographic connection
Our light sterile neutrino scenario provides a possible holographic realization of the 5D model considered in Ref. [31].In the 5D model, both the axion and right-handed neutrinos are bulk fields charged under a bulk U (1) PQ gauge symmetry, which allows for hierarchically small sterile neutrino masses.According to the AdS/CFT dictionary (see e.g.Ref. [46]), the bulk U (1) PQ gauge symmetry is dual to a global symmetry in the 4D gauge theory (CFT), while the Kaluza-Klein mass eigenstates can be understood as due to a mixing between an elementary and composite sector.
For the right-handed neutrino, this would imply a mixing term ν R O, where ν R is an elementary fermion and O is an operator in the dual (CFT) gauge theory.A naturally small mixing can be generated when dim O > 4.
This is similar to what occurs in the light sterile neutrino case.This can be seen from the UV Lagrangian in (24), where the operator is O = ψ † 5ψ † 5ψ † 10 (assuming y 10 = 0); the dual 4D theory is then identified as the SU (5) gauge theory.Since dim O = 9 2 , the mixing is small (as seen in Section III B) and therefore the sterile neutrino partner of the active neutrino can be naturally light.Thus, the UV completion considered in the light sterile neutrino case provides a specific holographic realization of the 5D model.This holographic realization is not in perfect agreement with the 5D model of Ref. [31] because the left-handed (active) neutrinos were also bulk fields in the 5D model.This means that the dual theory should also feature mixing between the elementary ν L and composite operators.It would be interesting to generalize our UV completion to also incorporate this feature.Finally, the axion in the 5D model has exponentially suppressed couplings on the UV brane.This corresponds to essentially a purely composite (and high-quality) axion in the dual theory, as also occurs in the SU (5) gauge theory.
IV. CONCLUSION
The axion and right-handed neutrinos are motivated by two seemingly unrelated puzzles of the Standard Model.In this work, we have provided a common origin for the QCD axion and right-handed neutrinos as bound states arising from strong dynamics.This builds upon the chiral SU (5) gauge theory in Ref. [21], which contains a high-quality composite axion, to also include composite neutrino states.This solution also provides a possible UV description of the holographic models considered in Refs.[29,31].
Interestingly, the strong dynamics gives rise to composite sterile neutrino masses of order the PQ breaking scale. Depending on the origin of the coupling between the composite sterile neutrinos and the left-handed neutrinos, either pseudo-Dirac or Majorana neutrinos are possible. When the composite sterile neutrinos directly couple to left-handed neutrinos in a dimension-seven interaction with the Higgs, Majorana active neutrinos are obtained via a seesaw mechanism. The dimension-seven interaction can be generated by integrating out two PQ-charged, massive complex scalar fields in a UV completion that preserves the quality of the PQ symmetry. Alternatively, the composite sterile neutrinos can mix with elementary right-handed neutrinos, via dimension-9/2 operators, to induce naturally small couplings to the active neutrinos. This leads to sterile states that are hierarchically lighter than the PQ scale and can realise pseudo-Dirac neutrinos. The PQ symmetry plays the role of a generalised lepton number in realizing either the heavy or light sterile neutrino scenario.
There are a number of phenomenological features of our model that could be tested in future experiments. In the pseudo-Dirac limit there is a contribution to the number of effective neutrino species; for reheating temperatures above the SU(5) confining phase transition, ∆N_eff ∼ 0.1, which can be tested in upcoming CMB experiments [28,47,48]. Alternatively, if the light sterile neutrinos have sub-TeV masses they could be detected at collider experiments. In the post-inflationary scenario (assuming the residual discrete PQ symmetry is broken to avoid stable domain walls), a first-order SU(5) phase transition could give rise to a gravitational wave signal associated with the PQ scale (see e.g. [49]), which is worthy of further study. Our model also predicts axion-neutrino couplings that could lead to effects in neutrino oscillations within the local DM axion halo [50]. Finally, baryogenesis mechanisms can be straightforwardly incorporated into our model, such as the usual leptogenesis mechanism, or a cogenesis mechanism, as considered in Ref. [28].
The coincidence between the axion decay constant and seesaw mass scales can therefore be explained by strong dynamics, which naturally connects the axion and neutrinos in a way that can also address the axion quality problem.
• R_ψ = 3 ⊕ 3: One of the adjoint representations (i.e. 8) of SU(3)_c in the RHS of both (B5) and (B6) corresponds to the generators of the unbroken SU(3)_c subgroup. The remainder gives the SU(3)_c representations of the NG bosons of the spontaneous symmetry breaking SU(n_f)_5 × SU(n_f)_10 → SU(3)_c (i.e. when the group G in Eq. (2) is trivial). Importantly, as discussed in Section II B, all of the NG bosons are colored in the R_ψ = 8 case and hence obtain masses of order √α_s Λ_5. For R_ψ = 3 ⊕ 3, there are two massless, QCD singlet NG bosons. This corresponds to the fact that, in this case, the gauging of SU(3)_c preserves two residual U(1) flavor symmetries. Due to the cosmological bounds on additional relativistic degrees of freedom, this scenario is only viable if the composite sector remains out of equilibrium with, and colder than, the SM bath. On the other hand, if these U(1) symmetries are instead contained in G (i.e. not spontaneously broken by the strong dynamics), then there are no QCD singlet NG bosons, which would satisfy the cosmological requirements.
Appendix C: Explicit PQ Breaking and Axion Quality
A feature of the SU (5) chiral gauge theory is that the PQ symmetry is accidentally preserved up to very high order [21].As discussed in Section II B, the lowest dimension SU (5) and SU (3) c gauge invariant, Lorentz scalar operators that have non-zero PQ charge contain six fermion fields.This implies that the PQ symmetry is accidentally preserved up to (gravitationally-induced) dimension nine terms in the Lagrangian.The relevant operators are listed in Eq. (3).
While the leading PQ-breaking operators contain six fermion fields in the present SU(5) model, it is interesting to consider whether there are generalisations of this model in which PQ-breaking terms arise from eight-fermion (or higher) operators, since this would provide a more robust solution to the axion quality problem. A simple extension is to consider SU(N) gauge groups with N ≥ 6 and fermions in the antifundamental and antisymmetric irreps. Using the Mathematica package GroupMath [51], we find that there always exist PQ-breaking operators with either four or six fermions for all 6 ≤ N ≤ 16.
Displacement of the axion potential minimum
Planck suppressed, PQ-charged operators cause a displacement of the axion potential minimum from its CP conserving value.To determine this displacement, we consider the following Lagrangian containing the (leading) dimension nine operators, where the c i are dimensionless constants and the overall prefactor has been estimated using naive dimensional analysis (NDA) [52,53], assuming the gravitational EFT scale Λ Pl = 4πM Pl and M Pl ≈ 10 19 GeV.All of the above operators have PQ charge −2 and each term represents multiple gauge singlet combinations.For example, the decomposition of each operator into irreducible representations of SU (5) is showing that Φ PQ,1 includes six SU (5) invariant combinations, and similarly for Φ PQ,2 , Φ PQ,3 and Φ PQ,4 .There is an analogous decomposition for SU (3) c , further increasing the number of gauge singlet combinations.The Lagrangian (C1) gives rise to the following term in the low-energy effective theory below the SU (5) resonance scale, where c PQ is a constant that depends on the c i in (C1) and is assumed to be O(1).The resulting axion potential is then approximately given by where N is the QCD anomaly factor (see App. A), and f a ≡ f PQ /N .The first term is the usual QCD contribution and the second term arises from (C6), with c PQ = |c PQ | e iδ and δ representing an arbitrary phase from gravity that is not necessarily aligned with the phases in the SM.
The displacement of the axion potential minimum with respect to the CP conserving minimum is then found to be .These bounds are similar to those given in Ref. [21]; however, we have included all the dimension nine PQ-breaking operators in our analysis.
Appendix D: Axion Dark Matter
As is well-known, the axion provides one of the best-motivated dark matter candidates. The production of axion dark matter in the early universe depends on whether the PQ-breaking occurs before or after the inflationary era.
If PQ-breaking occurs before (or during) inflation, then axion dark matter is produced via the misalignment mechanism.The relic axion abundance Ω a is then given by [55,56] where θ i = a i /f a is the initial misalignment angle, and h ≃ 0.68 is the present-day Hubble parameter (in units of 100 km s −1 Mpc −1 ).If the total dark matter relic density, Ω DM h 2 ≃ 0.12 [43], is due to axions, the required range of f a for an initial misalignment angle in the range θ i ∈ (0.1, 3) is This range is in tension with realizing the PQ symmetry as a high-quality accidental symmetry of the SU (5) chiral gauge theory.A modification of the misalignment mechanism is to assume that the initial velocity of the axion is nonzero -the so-called kinetic misalignment mechanism [57].For an elementary axion, this mechanism can produce the correct relic abundance for any decay constant in the range 10 8 GeV ≲ f a ≲ 1.5 × 10 11 GeV.It would be interesting to explore whether this mechanism can be implemented in a composite axion scenario, particularly for values of the decay constant f a ∼ 10 8 GeV that ameliorate the axion quality problem.Both of the above scenarios assume that the universe is not reheated to temperatures above the SU (5) de-confinement transition, which would restore the PQ symmetry.
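For reference, the misalignment relic abundance of Refs. [55,56] is commonly quoted in a form like the one below; the O(1) prefactor depends on the treatment of the QCD topological susceptibility and of anharmonicities, so only the scaling should be read off. Inverting it for Ω_a h² ≃ 0.12 with θ_i between 0.1 and 3 places f_a roughly between 10^11 GeV and a few × 10^13 GeV, well above the f_a ≲ 10^9 GeV preferred for a high-quality accidental PQ symmetry, consistent with the tension noted above.

```latex
\Omega_a h^2 \;\sim\; 0.1\,\theta_i^2
   \left(\frac{f_a}{10^{12}\,\mathrm{GeV}}\right)^{7/6}.
```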
Alternatively, in the post-inflationary PQ-breaking scenario the universe is reheated to temperatures above the PQ-breaking scale.As the universe cools, topological defects form which, depending on the domain wall number, may include stable domain walls.To determine the domain wall number, we first note that the QCD contribution to the axion potential in Eq. (C7) preserves the discrete symmetry, The physical domain of the axion field is a/f a ∈ [0, 2π|N |), with the anomaly coefficient N = −2 and N = −6 for the R ψ = 3 ⊕ 3 and R ψ = 8 models, respectively (see App. A).Therefore, the number of degenerate minima of the QCD potential in the physical domain, which is equivalent to the number of domain walls, is Since N DW > 1, explicit violation of the discrete PQ symmetry, which would allow domain walls to decay, is required if the post-inflationary scenario is to be viable.In principle, such a "bias" potential [58] which lifts the vacuum degeneracy could simply arise from Plancksuppressed, higher-dimension operators that explicitly violate the PQ symmetry, as considered in (C2)-(C5) or of higher dimension.For instance, 6 fermion, dimension-9 terms reduce the domain wall number in the octet model to N DW = 2.However, going beyond 6-fermion terms does not reduce the domain wall number further and therefore a new source of breaking would be required to lift the remaining degeneracy.This may arise, for example, if SU (3) c is embedded in a larger gauge group, such as recently considered in Ref. [59].
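For reference, the usual counting of degenerate minima presumably reduces to the expression below: with the axion field range a/f_a ∈ [0, 2π|N|) and a QCD-induced potential of period 2π in a/f_a, the domain wall number equals the magnitude of the anomaly factor (the bar on the anti-triplet in the first case is assumed here).

```latex
N_{\rm DW} = |N| =
\begin{cases}
2, & R_\psi = 3 \oplus \bar{3},\\[2pt]
6, & R_\psi = 8.
\end{cases}
```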
Assuming such a bias potential, the decays of cosmic strings and domain walls contribute to the axion dark matter density. With the present state-of-the-art calculations (see e.g. Refs. [60,61] for the case of axion strings and Ref. [62] for N_DW > 1), there remains significant uncertainty in the quantitative estimation of the axion abundance from the decay of topological defects, which can dominate over the misalignment contribution. Therefore, the lower bound on f_a arising from the dark matter relic density in the pre-inflationary scenario (in Eq. (D2)) could be significantly relaxed in the post-inflationary scenario. In fact, a robust upper bound of f_a ≲ 5.4 × 10^8 GeV (or m_a ≳ 11 meV) [63] can be derived from the requirement that domain wall decay produces the required axion relic abundance.

Appendix E: Mass Mixing in the Light Sterile Neutrino Scenario

In this Appendix, we present a toy model of the mass mixing in the light-sterile neutrino scenario by including a single right-handed neutrino resonance N_1 in the effective low-energy theory. The Lagrangian for this simplified model of the effects of the strong dynamics is given by where ∆_R is a Dirac mass mixing between the elementary field ν_R and the composite resonance N_1, whose value depends on the parameters of the UV completion. Note that in terms of the constituent fields, the operator corresponding to the Majorana mass term for N_1 is Φ_PQ,1 in Eq. (3), which can be split into two PQ-charged fermionic operators.
The neutrino mass eigenvalues of (E2) are determined numerically and shown in Fig. 6 as a function of the Yukawa coupling y ν , assuming f PQ = 10 10 GeV, g * = 1, and with the active neutrino mass set to 0.05 eV.In the pure Dirac mass limit for the active neutrinos (∆ R → 0), a mass of m active ν = 0.05 eV corresponds to y ν ≃ 10 −13 .As the mass m ϕ of the scalar field in the UV completion (or Λ R in the EFT) is lowered, the sterile partner of the active neutrino increases in mass via the mixing with N 1 .More generally, the sterile partner of the active neutrino will mix with all the resonances of the strong dynamics.
FIG. 1: EFT scale Λ L versus f a in the heavy sterile neutrino model for R ψ = 3 ⊕ 3 (left) and R ψ = 8 (right).Within the green band an active neutrino mass m active ν,3 = 0.05 eV is obtained with 1 ≲ g * ≲ 4π and Λ L > Λ 5 , assuming λ ν,3 = 1.The red shaded region is excluded by the SN1987A bound on f a [4].The estimated upper limit on f a for a high-quality axion consistent with the neutron EDM bound (assuming |Im (c PQ )| ≳ 10 −3 , see App.C) is shown by the blue dashed (dotted) line for g * = 1 (4π).The breakdown of the EFT validity when Λ L ≲ Λ 5 (= g * f PQ ) is depicted by the dark (light) grey shaded region for g * = 1 (4π).
FIG. 3: EFT scale Λ R versus f a in the light sterile neutrino model for R ψ = 3 ⊕ 3 (left) and R ψ = 8 (right).The green bands depict contours of m sterile ν , with the lower (upper) edges of the bands corresponding to g * = 1 (4π); the associated y ν,3 values for an active neutrino mass m active ν,3 = 0.05 eV are shown in the legend.The red shaded region is excluded by the SN1987A bound on f a [4].The estimated upper limit on f a for a high-quality axion consistent with the neutron EDM bound (assuming |Im (c PQ )| ≳ 10 −3 , see App.C) is shown by the blue dashed (dotted) line for g * = 1 (4π).The breakdown of the EFT validity when Λ R ≲ Λ 5 (= g * f PQ ) is depicted by the dark (light) grey shaded region for g * = 1 (4π)
FIG. 5: Displacement from the CP conserving minimum, |∆ θeff |, due to Planck suppressed operators as a function of f a .The contours show different values of |Im (c PQ )|, assuming g * = 1.The solid (dashed) lines correspond to the QCD representation R ψ = 3 ⊕ 3 (R ψ = 8).The blue and red regions are excluded by the upper bound on the neutron EDM and SN1987A, respectively.
TABLE I :
Representations of the chiral fermions charged under the SU (5) × SU (3) c gauge symmetry.
TABLE II :
Representations of the SU (5) chiral fermions under the global flavor symmetry SU
TABLE IV :
Representations of the fields in the UV completion of the light sterile neutrino scenario.(Note that L has opposite PQ charge compared to the heavy sterile neutrino scenario.) | 9,430.4 | 2023-10-12T00:00:00.000 | [
"Physics"
] |
Mosquito community influences West Nile virus seroprevalence in wild birds: implications for the risk of spillover into human populations
Mosquito community composition plays a central role in the transmission of zoonotic vector-borne pathogens. We evaluated how the mosquito community affects the seroprevalence of West Nile virus (WNV) in house sparrows along an urbanisation gradient in an area with the endemic circulation of this virus. We sampled 2544 birds and 340829 mosquitoes in 45 localities, analysed in 15 groups, each containing one urban, one rural and one natural area. WNV seroprevalence was evaluated using an epitope-blocking ELISA kit and a micro virus-neutralization test (VNT). The presence of WNV antibodies was confirmed in 1.96% and 0.67% of birds by ELISA and VNT, respectively. The VNT-seropositive birds were captured in rural and natural areas, but not in urban areas. Human population density was zero in all the localities where VNT-positive birds were captured, which potentially explains the low incidence of human WNV cases in the area. The prevalence of neutralizing antibodies against WNV was positively correlated with the abundance of the ornithophilic Culex perexiguus but negatively associated with the abundance of the mammophilic Ochlerotatus caspius and Anopheles atroparvus. These results suggest that the enzootic circulation of WNV in Spain occurs in areas with larger populations of Cx. perexiguus and low human population densities.
Results
In all, 340829 female mosquitoes belonging to 13 species and five genera were trapped. The commonest species was Culex theileri Theobald (n = 282891), followed in descending order by Ochlerotatus caspius Pallas (n = 21155), Culex pipiens Linnaeus (n = 19268), Culex perexiguus Theobald (n = 5939) and Anopheles atroparvus Van Thiel (n = 5387). In addition, 1237 females of the potential WNV vector Culex modestus Ficalbi were captured. The other species were trapped in relatively low numbers and for this reason-and also because they are not involved in the transmission of WNV-were not considered in any of the analyses (with the exception of the species richness calculation). A positive relationship was found between the overall abundance of mosquitoes and the richness of vector species (est = 2.45, z = 6.05, p < 0.001).
Sera obtained from 2544 house sparrows were analysed to detect WNV antibodies. According to the ELISA tests, 50 birds (1.96%) from 18 different localities tested positively (Table 1), while 113 (4.44%) provided doubtful results. Of these birds, 17 (0.67% of the total individuals sampled) had neutralizing antibodies against WNV as confirmed by VNT (Table 1). These 17 WNV-positive birds were captured in five of the 45 studied localities, all of them in rural and natural areas in Huelva province (Fig. 1). WNV seroprevalence in these five localities ranged from 1.6% to 8.5%. Specific USUV-neutralizing antibodies were detected in a single bird (0.04%) captured in a natural area in Seville province. The human population density tended to be lower (0 in all cases) in areas with VNT-positive birds than in areas with negative cases (mean human population = 77.6, range: 0-1,424) (est = −1.90, z = 1.88, p = 0.06).
The relationships between ELISA and VNT seroprevalence rates and the number of mosquitoes captured and species richness are summarised in Tables 2 and 3, respectively. Only those variables included in the selected models (those with ∆AIC ≤ 2 compared to the best model) are shown. WNV seroprevalences estimated by ELISA were positively related to mosquito richness and the number of Cx. perexiguus captured but negatively related to the number of Oc. caspius and Cx. theileri captured. Similarly, for the case of the model based on the WNV seroprevalence according to the VNT, the prevalence of neutralizing antibodies against WNV was positively related to the number of Cx. perexiguus captured (Fig. 2) but negatively associated with the number of both the Oc. caspius and An. atroparvus.
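The model-selection step summarised above (seroprevalence modelled against mosquito variables, with candidate models retained when within ∆AIC ≤ 2 of the best model) can be sketched as follows. This is an illustrative reconstruction only: the data frame and column names are invented, and the authors' actual models likely include additional structure (e.g., random effects for the locality triplets and provinces) that is omitted here.

```python
# Illustrative sketch of the described model selection: binomial GLMs of seroprevalence
# against mosquito predictors, compared by AIC (models within delta-AIC <= 2 retained).
# Hypothetical data; not the authors' code or dataset.
import itertools
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "positives":     [2, 1, 1, 5, 1, 3, 0, 2],
    "sampled":       [60, 55, 48, 70, 52, 66, 40, 58],
    "cx_perexiguus": [120, 5, 30, 400, 2, 150, 10, 90],
    "oc_caspius":    [800, 2000, 100, 50, 2500, 300, 1200, 400],
    "richness":      [6, 4, 5, 8, 3, 7, 4, 6],
})
df["negatives"] = df["sampled"] - df["positives"]

predictors = ["cx_perexiguus", "oc_caspius", "richness"]
fits = []
for k in range(1, len(predictors) + 1):
    for combo in itertools.combinations(predictors, k):
        formula = "positives + negatives ~ " + " + ".join(combo)  # two-column binomial response
        res = smf.glm(formula, data=df, family=sm.families.Binomial()).fit()
        fits.append((res.aic, combo))

best_aic = min(aic for aic, _ in fits)
selected = [(round(aic, 1), combo) for aic, combo in fits if aic - best_aic <= 2]
print(selected)  # candidate models within delta-AIC <= 2 of the best model
```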
Discussion
Both West Nile virus and USUV antibodies were found in wild house sparrows from southern Spain. The seroprevalence of WNV in house sparrows estimated by VNT was positively related to the abundance of Cx. perexiguus 17 . It is important to note that WNV has been detected in Spain mainly in Cx. perexiguus and Cx. pipiens pools 14,19 . Moreover, Cx. perexiguus is an abundant ornithophilic mosquito that commonly uses house sparrows as hosts 17,27,28 . Interestingly, we found negative relationships between the abundance of two common mosquito species, An. atroparvus and Oc. caspius, and the prevalence of WNV antibodies in wild house sparrows. Both species have a mammal-biased feeding pattern, even though they can feed on birds 17,27 . Although WNV has been detected in wild-collected Oc. caspius 29 , this species is described as an inefficient vector of WNV by the only experimental study of the vector competence of Oc. caspius conducted to date in Europe 13 . At least two factors help explain the negative association between these two mosquito species and WNV. Firstly, Oc. caspius prefers saltmarshes as larval breeding sites and An. atroparvus is commonest in sand dunes and scrubland, while Cx. perexiguus is frequently found in rice fields 30 . Consequently, Oc. caspius and An. atroparvus are probably more abundant in areas where Cx. perexiguus and/or other potential vector species for WNV such as Cx. pipiens and Cx. modestus are rarer. Secondly, the greater abundance of these mosquito species in the study area, where they feed mainly on mammals that are non-competent hosts for WNV, could lead to a reduction in the overall prevalence of WNV in birds. However, we were not able to identify any mechanisms that might support this hypothesis. Due to their mammal-biased diet and low vector competence, we would expect their abundance to have a small, but not negative, effect on WNV amplification. This is mainly because WNV transmission may be maintained by other vector-competent mosquito species present in the area. In addition, we observed a positive association between mosquito species richness and the seroprevalence detected by ELISA. The same non-significant tendency was found for WNV neutralizing antibodies detected by VNT. ELISA is a less specific technique than VNT and, consequently, individuals with positive sera for ELISA but negative for VNT have probably been exposed to other unidentified flaviviruses antigenically related to WNV. Using a SIR model, Roche et al. 15 concluded that mosquito species richness may increase the transmission success of vector-borne pathogens. However, such an association has never been tested empirically and could be the product of the assumption made in the model that species richness and vector abundance are positively related, a conjecture that, in fact, was supported by our data (see below). Consequently, our results support both the assumption of a positive relationship between vector richness and abundance, and the prediction of a positive relationship between vector richness and pathogen prevalence. Although Cx. perexiguus is the main vector of WNV in the area, other species such as Cx. pipiens and Cx. modestus may contribute significantly to WNV transmission 31,32 . These mosquito species, in addition to the others that co-exist in the area, could play a role in the transmission of certain flaviviruses. A number of flaviviruses have been isolated from mosquitoes (including Cx. 
pipiens) in Spain 19 , which potentially explains the positive correlation found between ELISA seroprevalence and mosquito species richness.
All positive cases of WNV-specific antibodies by VNT in bird sera were found in Huelva province, where evidence of WNV active circulation has existed since 2003, as demonstrated by the molecular detection of the virus in mosquitoes and the seroprevalence found in birds 33 . In addition, birds with WNV-specific antibodies by VNT were only detected in rural and natural habitats; none of the birds sampled in urban areas (n = 956) were seropositive. Moreover, the negative, marginally significant relationship we found between WNV seroprevalence and human population density may explain why WNV cases in humans are so uncommon in the study area despite the active circulation of the virus between vectors and avian hosts. Our results suggest that WNV, its main vector (Cx. perexiguus) and humans are not all present together in the same places. The seroprevalence of WNV in humans in southern Spain is very low (0.6%) and, mirroring the results for house sparrows in our study, a higher seroprevalence was detected in humans in rural areas than in suburban and urban areas 34 . Moreover, greater numbers of Cx. perexiguus were captured in natural and rural areas than in urban ones; likewise, the abundance of this species decreases as the percentage of land covered by built-up areas increases 35 . Indeed, only Cx. pipiens represents a risk for the transmission of WNV in urban areas 35 .
In conclusion, this study provides evidence of the central role of Cx. perexiguus in the enzootic circulation of WNV in southern Spain. The fact that WNV seropositive birds were found in both natural and rural areas, and tended to be present in areas with lower human densities, may explain the low incidence of WNV in humans in the area despite the local circulation of this virus between mosquitoes and wild birds.
Materials and Methods
Study area. This study was conducted in Andalusia, southern Spain (Fig. 1). This area is characterized by a Mediterranean climate with most precipitations concentrated during winter, while summer represents a long dry season. The study was conducted in 2013 at 45 different sites in Cadiz, Huelva and Seville provinces (southern Spain). The sampling sites (15 in each province) were situated in geographically close groups of three, each with one locality in a natural habitat, one in a rural habitat and one in an urban habitat (Fig. 1). The mean distance between localities within the same triplet was 5,740 m. Selection of the three habitat categories was performed after visual inspection of the areas based on the following criteria: urban habitats contained more densely human-populated areas than the other two habitat types; rural habitats had higher density of livestock than urban and natural areas; and natural habitats were selected on the basis of both lower human and livestock densities than in the other two habitat types, and a generally better conserved landscape.
Mosquito and bird sampling and identification. Mosquitoes were captured at the 45 sampling sites in
April-December, the period with maximum mosquito activity in southern Spain 30,36. We used BG-sentinel traps baited with BG-lure and dry ice as a source of CO 2, which is considered an effective method for characterizing mosquito diversity and abundance 35. At each site, once every 45 days, three traps were operated for 24 hours in each of the three localities of the same triplet. Overall, 135 traps (3 traps x 45 localities), with a mean distance between traps of 119 m (range 20-636 m), were employed during each mosquito trapping session, for a total trapping effort of 810 trap nights. Mosquito sampling was conducted on days with favourable weather conditions (e.g. clear nights without rain). This procedure was repeated during 5-6 trapping sessions throughout the study period. Female mosquitoes were identified to species level following the morphological keys in Schaffner et al. 37 and Becker et al. 38. Mosquitoes belonging to the univittatus complex were identified as Culex perexiguus based on male genitalia (see Harbach 39). For samples comprising several thousand mosquitoes captured per trap per night, we visually identified 500 individuals. These 500 mosquitoes were separated into five groups of 100 individuals, which were weighed to the nearest 0.001 g. This approach was used to estimate the proportion of individuals of each species in the rest of the sample based on the weight of the total number of mosquitoes captured 35. Mosquito species richness, which ranged from 2 to 10, was calculated as the number of different species captured at each locality during the sampling period 35. For each locality, the mean number of captures of the five commonest mosquito species in the study area (Anopheles atroparvus, Ochlerotatus caspius, Culex theileri, Culex pipiens and Cx. perexiguus) and of Cx. modestus, a potential WNV vector in the area 14,17,31, was calculated.
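Purely as an illustration of the extrapolation step described above (identify 500 individuals, weigh five groups of 100, and scale the species proportions by the total catch weight), a minimal sketch with hypothetical weights and counts could look like this:

```python
# Hypothetical illustration of extrapolating species counts from a 500-mosquito
# subsample; all numbers below are made up for the example.
group_weights_g = [0.152, 0.148, 0.150, 0.155, 0.149]          # five groups of 100 individuals
mean_weight_per_mosquito = sum(group_weights_g) / (len(group_weights_g) * 100)

total_sample_weight_g = 4.8                                     # weight of the whole catch
estimated_total = total_sample_weight_g / mean_weight_per_mosquito

identified_counts = {"Cx. perexiguus": 210, "Cx. pipiens": 140,
                     "Oc. caspius": 90, "An. atroparvus": 60}   # out of the 500 identified

estimated_by_species = {sp: round(n / 500 * estimated_total)
                        for sp, n in identified_counts.items()}
print(round(estimated_total), estimated_by_species)
```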
House sparrows were sampled using mist-nets at the same localities during capture sessions in July-October, i.e. immediately after the breeding season, to maximize the capture of juvenile birds and to better reflect virus circulation during the season from hatching until capture. Each bird was individually marked with a metal ring, sexed and aged 40. A blood sample was taken from the jugular vein of each bird using a sterile syringe and preserved in a cool-box during the fieldwork session. In the laboratory, blood was allowed to clot at 4 °C overnight and was then centrifuged for 10 minutes at 4,000 rpm to separate the serum from the cellular fractions. Serum samples were frozen at −80 °C until further analysis.
WNV antibody detection. Serum samples from birds were analysed with the epitope-blocking ELISA kit Ingezim West Nile Compac (INGENASA, Madrid, Spain) to determine the presence of WNV antibodies 41. Positive results from ELISA may reflect past infections by WNV or even by other unidentified flaviviruses circulating in the area. The cut-off value of this commercial ELISA test is set at a 30% percentage of inhibition, while samples showing a percentage of inhibition between 30% and 40% are considered doubtful, as established by the manufacturer 41. All samples producing ELISA-positive and/or doubtful results were subsequently analysed by a comparative micro virus-neutralization test (VNT) using WNV (strain Eg-101) and Usutu virus (USUV; strain SAAR1776), since the circulation of these flaviviruses has been demonstrated in the study area 42. This confirmatory test allows the specific antibodies against WNV to be differentiated from those elicited by other related flaviviruses. Neutralization titres were assigned based on the highest dilution of each serum capable of neutralizing the infection in vitro. Separate VNTs were performed using serial (two-fold) dilutions (1:10-1:1280) of each serum sample using a micro-VNT method 21. For a given sample, a WNV-specific antibody response was assigned when the observed VNT titre against WNV was at least four times higher than that observed against USUV 43.
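The four-fold titre rule used to assign WNV-specific responses can be expressed as a small decision function. The sketch below is only illustrative; the converse USUV branch and the handling of intermediate titres are assumptions, since only the WNV criterion is stated above.

```python
def classify_vnt(wnv_titre, usuv_titre):
    """Classify a serum from its WNV and USUV neutralization titres
    (reciprocal dilutions, e.g. 10-1280; 0 = no neutralization detected)."""
    if wnv_titre == 0 and usuv_titre == 0:
        return "negative"
    if usuv_titre == 0 or wnv_titre >= 4 * usuv_titre:
        return "WNV-specific"
    if wnv_titre == 0 or usuv_titre >= 4 * wnv_titre:
        return "USUV-specific"
    return "undetermined flavivirus"

print(classify_vnt(160, 20))   # WNV-specific (at least four-fold higher)
print(classify_vnt(40, 40))    # undetermined flavivirus
```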
Human density quantification. We estimated the density of human population in the studied areas as the number of people living in a grid of 250 × 250 m. This information was obtained from the Andalusian Institute of Statistics and Cartography based on the number of residents registered in the local population census on 1 January 2013 (Base de Datos Longitudinal de Población de Andalucía). This variable was log-transformed to normalize its distribution.
Statistical analyses.
To estimate WNV seroprevalence we controlled for variables that operate at the individual level (i.e. age, sex and date of capture) and others that operate at the locality level (i.e. mosquito species richness and abundance of the different mosquito species). For this reason, the analyses were performed in three steps. First, we fitted a generalized linear model to the seroprevalence of WNV using binomially distributed errors and including bird sex (fixed factor: male or female), age (fixed factor: juvenile or adult), month (continuous variable) and locality (fixed factor) as independent variables. Two different models were fitted using the results of ELISA and VNT as the dependent variable, respectively. Second, least square means (lsmeans) were calculated by retaining bird age and sampling locality, the only two significant factors explaining variance between individuals in WNV seroprevalence according to the previous models. This procedure allowed us to calculate both the ELISA and VNT seroprevalences for each of the 45 localities while controlling for the potential confounding effect of bird age. Third, two Linear Mixed-effects Models (LMM) were fitted using the lsmeans for ELISA and VNT seroprevalences as dependent variables. 'Province' and 'triplet' were included as random factors to account for the geographical stratification of the sampling design, and models were fitted using maximum likelihood and normally distributed errors. The independent variables included in these models were the three habitat categories (fixed factor: urban, rural or natural), the number of captures of each of the six main mosquito species found, and species richness (continuous variables). Variance Inflation Factors (VIF) were checked to exclude collinearity between independent variables 44, and Akaike's Information Criterion (AIC) was used to select the best final models for the ELISA and VNT LMMs. Parameters were estimated by model averaging of all models with ∆AIC ≤ 2 45, which were considered to have similar support from the data. To normalize their distribution, the numbers of each mosquito species captured were log-transformed, and the distributions of all predictors and model residuals were checked using qq plots in R software. We calculated the marginal coefficients of determination (R 2) for the fixed and random effects of the models according to Nakagawa & Schielzeth 46.
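The analyses above were carried out in R with the packages listed in the next paragraph. Purely to illustrate the same two-stage logic (an individual-level binomial GLM, then a locality-level mixed model on age-adjusted prevalences), a simplified Python sketch using statsmodels is given below; the data frame, column names and covariates are hypothetical, and the least-square-means and multi-model-averaging steps of the original workflow are only roughly approximated.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per sampled sparrow, with hypothetical columns seropos (0/1),
# sex, age, month, locality, plus locality-level covariates province, habitat,
# richness and cx_perexiguus.
df = pd.read_csv("sparrows.csv")

# Stage 1: individual-level binomial GLM.
glm = smf.glm("seropos ~ C(sex) + C(age) + month + C(locality)",
              data=df, family=sm.families.Binomial()).fit()

# Crude stand-in for least-square means: predicted prevalence per locality,
# averaged over the two age classes at the mean month (sex fixed for simplicity).
grid = pd.DataFrame([{"sex": "male", "age": a, "month": df.month.mean(), "locality": loc}
                     for loc in df.locality.unique() for a in ("juvenile", "adult")])
grid["pred"] = glm.predict(grid)
lsm = grid.groupby("locality", as_index=False)["pred"].mean()

# Stage 2: locality-level mixed model with a random intercept for province.
site_info = df.groupby("locality", as_index=False).first()[
    ["locality", "province", "habitat", "richness", "cx_perexiguus"]]
site = lsm.merge(site_info, on="locality")
lmm = smf.mixedlm("pred ~ C(habitat) + richness + cx_perexiguus",
                  data=site, groups=site["province"]).fit()
print(lmm.summary())
```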
Finally, two additional LMMs were fitted: one to test the model assumption of Roche et al. 15 of a positive correlation between mosquito richness and total abundance, and the other to compare the density of the human population, as measured in Ferraguti et al. 35, at sampling sites with and without VNT-positive birds. All statistical analyses were conducted in R (v. 2.14.2; R Development Core Team 2005) using the packages vegan, lme4, car, arm, Matrix, Rcpp, MASS, MuMIn and lsmeans.
Ethics statement. Bird sampling and mosquito trapping were performed with the necessary permits issued by the regional Department of the Environment (Consejería de Medio Ambiente, Junta de Andalucía) and in accordance with relevant guidelines and regulations. Procedures were approved by the Ethical Committee of CSIC and complied with current Spanish laws. Surveys and sampling on private land and in private residential areas were conducted with all the necessary permits and consent, and in the presence of owners. This study did not affect any endangered species. | 4,104.8 | 2018-02-08T00:00:00.000 | [
"Medicine",
"Biology"
] |
Factors Affecting Chemo-physical and Rheological Behaviour of Zr44-Ti11-Cu10-Ni10-Be25 Metal Glassy Alloy Supercooled Liquids
Corresponding Author: Apicella Antonio, Advanced Materials Lab, Second University of Naples, Aversa, Italy. Email: <EMAIL_ADDRESS>
Abstract: Segregation by selective cold crystallization and glass transition changes in the Zr44-Ti11-Cu10-Ni10-Be25 metal supercooled metastable liquid annealed at different temperatures have been theoretically correlated with melt viscosity modifications. The crystallization behavior has been found to be principally related to the smaller, high-mobility Be, Cu and Ni atoms. Multiple exothermic peaks have been observed in isothermal DSC annealings of these bulk metal glassy supercooled liquids. A significant increase of the glass transition temperatures was experimentally measured in cold-crystallized samples. Isothermal and temperature scans by Differential Scanning Calorimetry have shown that the three smaller elements present in the alloy (namely Be, Ni and Cu) are involved in the recrystallization process in the temperature interval from the glass transition to 470°C. Isothermal annealings at temperatures ranging from 400° to 450°C have been considered. Glass transitions and crystallization kinetics in the supercooled liquid have been measured.
Introduction
The potential properties of metal glasses have been exploited since the 1960s (Klement et al., 1960), but it is only in recent years that their processing properties have received more attention. Recent research and development on Bulk Metal Glass (BMG) thermoplastic forming processes have allowed these materials to be considered as high-strength metal alloys that can be processed like polymers. Thermoforming of bulk metallic glasses requires deeper study of the phenomena occurring in the metastable plastic liquid state. A metal glass alloy can be used in its metastable liquid state for thermoplastic forming. The rate of quenching plays a relevant role in determining the mechanical properties of the part. Surface hardening, in fact, has been observed to occur at the mold surface of these processed parts and can be related to the faster cooling there compared to the bulk (Aversa et al., 2015).
However, although metallic glasses are still characterized by high costs that reduce their use in structural applications, these materials remain valid choices for high-added-value products where both performance and aesthetics are mandatory. This paper describes the physical factors influencing the thermoforming processing properties of metal glasses.
Research on these metallic glasses started several decades ago (Klement et al., 1960), leading to the development of alloys with high glass-forming ability (Schroers, 2010). Nevertheless, thermoforming of these glass-forming metal alloys is still far from being completely understood (Busch, 2000;Geyer et al., 1995). Rheological and solidification issues of metal glasses present some critical aspects in the processing of the molten and relaxed state above the glass transition (Trachenko, 2008). Understanding the solidification phenomena in a cast melt requires investigation of the melt rheology as well as of the crystallization processes occurring in the forming metal glass.
In particular, the critical melt cooling rates in all parts of the processed component should be taken seriously into account in order to predict the occurrence of undesired localized crystallization and inhomogeneous glass formation during mold filling. Even casting of metal glasses in simple geometries requires careful selection of adequate processing parameters, such as the rate of mold filling and local temperature control. The choice of processing parameters is even more critical when fast cooling is required (Lewandowski et al., 2005;Morito and Egami, 1984). Processing technologies applied to thermoplastic polymers are being increasingly transferred to BMG processing (Schroers et al., 1999). Plasticization of metal glasses above their glass transition leads to the formation of a supercooled liquid that can be easily processed for thermoplastic forming. The metal glasses should hence be heated to temperatures above their characteristic relaxation temperatures, in the region where they soften sufficiently into their metastable liquid, while avoiding any undesired crystallization (Geyer et al., 1995).
The development of new high-strength metal glass alloy formulations that can be easily processed like polymers may further favor the utilization of these materials. However, some process criticalities due to the occurrence of undesired phase separation alter the viscosity of the molten metal glass alloy while strongly reducing its final strength and resilience (Lewandowski et al., 2005).
The crystallization of metal glass alloys in their supercooled molten state has been extensively investigated by Busch (2000) using Time Temperature Transformation diagrams. Cold crystallization from the melt is described as showing faster kinetics at temperatures between the melting and the glass transition temperatures: the crystallization kinetics during an isothermal annealing are governed by the competition between the thermodynamic crystallization driving force of the crystal-forming atoms (which increases as temperature is lowered) and their diffusion-controlled kinetics (which increase as temperature is raised).
Above its glass transition, the glassy metal attains a viscous softened state in which all the alloy atoms differentially regain their mobility as temperature is increased. This higher differential mobility, reached when the alloy is brought from the glassy to the supercooled metastable liquid state, allows the glass-forming atoms that did not individually crystallize when quenched from the melt to rearrange into configurations leading to undesired selective crystallization (selective cold crystallization), according to their relative diffusivities.
The crystallization mechanism of the completely amorphous alloy, and of the alloy in the presence of partial crystallization, has been investigated in order to fully account for and describe the physical phenomena that could occur in the supercooled melt during the moulding process and how these phenomena could influence the melt viscosity and the final mechanical properties of the processed part. Thermal analysis by Differential Scanning Calorimetry (DSC) in isothermal crystallizations has been used to measure and quantify the kinetics and heat of crystallization of our glassy metal. Isothermal tests on the supercooled melt at progressively higher temperatures above the alloy glass transition have been run.
Materials and Procedures
The kinetics of crystallization of the amorphous metal alloy, both in isothermal and temperature scans, have been investigated by means of differential scanning calorimetry analysis (Aversa et al., 2015). In the isothermal scans, the sample is heated above the amorphous metal glass transition temperature and the kinetics of heat release are monitored versus time. In the temperature scan method, conversely, the sample is heated at a fixed rate and the enthalpy changes are recorded as a function of temperature.
Materials
A plate of Zr 44 -Ti 11 -Cu 10 -Ni 10 -Be 25 of 1 mm thickness (LM001B) was received from Liquid Metals Technologies Inc, Ca USA. The plate was water-jet cut and tested in a Differential Scanning Calorimeter. The plate supplied by Liquid Metals Technologies Inc. had been prepared using an Engel injection molding machine operating at 1050-1100°C.
Procedures
A Mettler ADSC Differential Scanning Calorimeter was used in all temperature and isothermal scans.
Temperature scans were run at 1°C/min and 20°C/min for the untreated samples and at 10°C/min for the post annealing samples (namely, after ADSC isothermal tests).
Samples of liquid metal glass weighing from 15 to 50 mg were placed in aluminum pans, loaded in the DSC at 200°C and brought to the final annealing temperature at a rate of 50 K/min. Depending on the corresponding annealing temperature, samples were annealed for times varying from 20 to 300 min, until no further exothermic event was recorded.
The initial signal transients are not reported but were taken into account in the calculation of the real time spent at the annealing temperature.
DSC Temperature Scans
The first DSC run at 1°C/min is shown in Fig. 1. Glass state relaxation is identified as the glass transition and is evident as a step in the DSC thermogram reported in Fig. 1. This step has been observed between 375°C (T i ) and 390°C (T f ). The glass transition temperature T g has been calculated, as conventionally done for glassy polymeric materials, at the midpoint of this interval (382°C in our case). Above this glass transition, the higher mobility of the atoms in the metastable supercooled metal liquid gives rise to selective crystallization. During the transitions from the liquid state of higher energy to the lower-energy crystalline state, heat is released and an exothermic crystallization peak is observed. The DSC thermal scan of Fig. 1 shows the complex, multiple-peak crystallization behavior of our metal glass alloy. A first crystallization starts at 405°C, just above the end of the glass transition step (T f = 390°C). The exothermic process has a complex shape with a main peak located at 428°C and a shoulder occurring at 417°C. This complex behavior is due to the overlapping of the exotherms associated with crystallizations of alloy metal atoms with different diffusivities and sizes.
Two apparently lower-intensity crystallizations, with their maxima at 485°C and 520°C respectively, may be associated with the crystallization of metal atoms with lower diffusivity and mobility.
Presumably, the first observed thermal event involves the smaller beryllium atoms (atomic radius 105 pm, hexagonal close-packed (HCP) lattice) and the copper and nickel atoms (which are characterized by very similar atomic radii, 145 and 149 pm, and by the same crystallization habit, face-centered cubic (FCC)). The two exothermic peaks observed at higher temperatures in Fig. 1 may be associated with the crystallization of the bigger zirconium and titanium atoms, which are characterized by atomic radii of 176 and 206 pm, respectively, and which both crystallize in the HCP lattice structure (Geyer et al., 1995).
Slow heating allows selective crystallization and segregation in these alloys. Schroers (2010) found that heating rates below 200°C/s are not sufficient to avoid crystallization.
The thermal behavior and crystallization kinetics of our BMG have first been investigated through isothermal DSC experiments.
Isothermal DSC Annealing
Differential scanning calorimetry (Fig. 1) suggests that the smaller, higher-mobility atoms present in the metastable liquid metal alloy could be involved in the recrystallization process between 400°C and 470°C.
Isothermal annealings at temperatures ranging from 400°C to 450°C were then chosen to further investigate the kinetics of the thermal events occurring in the supercooled melts. These DSC tests further confirmed that two exothermic crystallization events characterize each isothermal annealing curve. Isothermal DSC scans run at 405°, 410° and 420°C are compared in Fig. 2. The curve for the annealing at 405°C in Fig. 2 presents two exothermic peaks at 27 and 89 min. The exotherm maximum has been taken as a measure of the crystallization kinetics, associated with the half-crystallization time. According to this procedure, the half-crystallization times for the two thermal events shown on the DSC curve for the test at 410°C are 27 and 62 min, while those for the test at 420°C are 17 and 31 min. The equilibrium enthalpy change of crystallization (∆H e ) and the half-crystallization times derived from isothermal DSC tests run at 450, 440, 430, 420, 410, 405, and 400°C are reported in Fig. 3 and 4, respectively. In the data reported in Fig. 3 it has been assumed that no recrystallization occurred at temperatures below the glass transition (0 J/g) and that the maximum heat of crystallization attainable for the complete crystallization of all Be, Cu and Ni atoms was 102 J/g. This value has been evaluated from the ∆H rel data of the Be, Cu and Ni atoms reported in Table 1, which lists the BMG composition and single-element thermodynamic properties.
The equilibrium heat of crystallization measured experimentally from isothermal annealing strongly depends on the treatment temperature; at higher annealing temperatures, higher heats of crystallization are observed. This suggests that different kinetic and thermodynamic phenomena could be involved in determining the heat released by the crystallization.
Busch (2000) and Geyer et al. (1995) have described that the diffusivities of the smaller atoms in a metastable supercooled metal liquid at lower temperatures are higher than those expected from the viscosity. It can then be inferred that other thermodynamic factors may influence it. A plausible hypothesis is that changes in the alloy composition follow the selective segregation and alter the chemo-physical properties of the still-amorphous supercooled metastable liquid.
Moreover, the kinetics of the crystallization process indicate that a higher annealing temperature leads to a shorter time to reach the equilibrium crystallization.
The increase of the glass transition temperature, which is observed for the annealed metal glass alloy, is a consequence of the lower diffusivity of the residual heaviest alloy atoms. This occurrence results in an increase of the viscosity of the residual supercooled liquid (Busch, 2000;Trachenko, 2008) confirming that at lower temperatures the diffusion cannot be described by the Vogel-Fulcher-Tammann law but an Arrhenius type law is more adequate.
Kinetic data for the glass-forming metal alloy samples have been plotted in Fig. 4 as Arrhenius-type curves.
The two distinct crystallization half-times are well separated at lower annealing temperatures, but they increasingly overlap as temperature is progressively raised up to 450°C, where only a single exotherm is visible. The temperature dependency of the kinetics of these two crystallizations is represented in the plot of Fig. 4 as straight lines with different slopes. The activation energies for the two crystallization processes were evaluated from Fig. 4 and are 180 kJ/mol for the first crystallization and 260 kJ/mol for the second.
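As an illustration of how such activation energies can be extracted from half-crystallization times, the data can be fitted to an Arrhenius relation, ln(1/t1/2) = ln A − Ea/(RT). The sketch below demonstrates the fitting procedure only, using the three first-exotherm half-times quoted above; the published values were obtained from the full dataset of Fig. 4, so the numerical result here is merely indicative.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Half-crystallization times for the first exotherm quoted in the text.
T_C = np.array([405.0, 410.0, 420.0])        # annealing temperatures, degrees C
t_half_min = np.array([27.0, 27.0, 17.0])    # half-crystallization times, minutes

# Arrhenius fit: ln(1/t_half) = ln(A) - Ea / (R * T)
T_K = T_C + 273.15
slope, intercept = np.polyfit(1.0 / T_K, np.log(1.0 / (t_half_min * 60.0)), 1)
Ea_kJ_per_mol = -slope * R / 1e3

print(f"apparent activation energy ~ {Ea_kJ_per_mol:.0f} kJ/mol")
```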
The first crystallization process, which is characterized by a lower activation energy, could be attributed to the higher-mobility beryllium atoms (crystallizing in HCP), since they are smaller than the nickel or copper ones (i.e., 105 vs 125-128 pm). Copper and nickel are reciprocally soluble in any amount (namely, unlimited solid solubility) due to their common crystallization habit (FCC crystal structure), similar electronegativity (1.9 and 1.8, respectively), and atomic radii (125-128 pm).
It can then be inferred that the first crystallization is related to the nucleation and growth of beryllium in its HCP structure, while the second process is due to the formation of the Cu/Ni solid-solution FCC crystalline lattice. This progressive segregation process induced by the thermal annealing alters the relative ratios between the glass-forming atoms of the BMG alloy, leading to modifications of the physical as well as the thermodynamic material properties.
Post Annealing DSC Temperature Scans
Additional second DSC runs at a heating rate of 10°C/min have been carried out on the samples previously annealed at different temperatures. Although an apparent equilibrium complete crystallization was reached during each annealing test (each isothermal DSC experiment was stopped when the exothermic heat flow was zero), the new DSC thermograms show that the crystallization process reactivated when the sample was brought above its previous annealing temperature (Fig. 5).
The bulk metal glass compositional changes following the enrichment in the heaviest atoms, induced by the selective crystallization of the smaller Be, Cu and Ni atoms, progressively reduce the relative mobility of the remaining atoms in the metastable liquid. This occurrence leads to a progressive increase of the glass transition of the supercooled metastable liquid. The crystallization of the smaller atoms stops when the glass transition overcomes the isothermal annealing temperature. In the post-annealing DSC temperature scan reported in Fig. 5, relative to the sample previously held at 405°C, the glass transition rose from 382.5°C to 420°C after annealing (as indicated by an arrow in Fig. 5). It can be noticed that the glass transition indeed occurred just above the previous annealing temperature. Once the glass transition is exceeded, in fact, the residual smaller BMG-forming atoms regain sufficient mobility to undergo additional cold crystallization. The shoulder on the thermogram at 465°C may be associated with the crystallization of the residual Cu and Ni atoms, while the crystallization peak at 510°C is due to Ti crystallization. The evident intense final exotherm at 550°C is associated with the crystallization of the heaviest Zr atoms.
The annealing, then, induces a significant increase of the temperature needed to promote metal glass relaxation, which is higher for samples held at higher annealing temperatures. This glass transition temperature increase is a consequence of the selective crystallization of the smaller and more mobile atoms. The DSC temperature scans on the alloys annealed at 405°C and 420°C are compared in Fig. 6. The glass transition of the metastable BMG liquid after the annealing performed at 420°C is significantly higher than the corresponding glass transition of the sample annealed at 405°C. This is presumably due to the more intense segregation of the more mobile Be, Ni and Cu atoms. The end of the transition (T f ) moves from 390°C, for the as-received amorphous metal glass alloy, up to 435°C when held at 405°C and up to 460°C for the sample held at 420°C.
It can be inferred that the additional crystallization, which is evident as an exotherm in the thermogram, is reactivated only when the metastable metal liquid state is reached during the temperature scan. At temperatures higher than the annealing one (420°C), the nucleation and crystallization of the atoms characterized by lower mobility in the metal metastable melt (i.e., Cu, Ni, Zr and Ti) restart.
The increase of the glass transition temperatures is due to the lower diffusivity and higher viscosity of the heaviest remaining atoms present in the metastable melt (Busch, 2000;Trachenko, 2008).
Chemorheological Model
Atom movement and relaxation in supercooled liquid metals have been described by Trachenko (2008) as elementary Local Relaxation Events (LREs) in which atoms transfer from their initial to a new equilibrium position. The number of occurrences of these LREs increases at higher temperatures but is lower for atoms of increasing size. Following annealing and segregation of the small atoms, higher temperatures are then needed to increase the number of local relaxation events (and hence the glass transition rises) in an alloy whose composition has become enriched in atoms of larger sizes.
The selective crystallization of some of the metal glass-forming atoms is the result of the competition between the increasing driving force for crystallization and the different relative effective diffusivities (mobilities) of the single atoms. Atomic mobility in liquids is assumed to be related to viscosity via the Stokes-Einstein relation. In a crystallizing liquid, assuming steady-state nucleation, the nucleation rate is determined by the product of a thermodynamic contribution and a kinetic contribution (Equation 1), where D eff is the effective diffusivity of the atoms, T is the temperature and A is a constant. At sufficiently high temperatures, diffusivity is proportional to the inverse of the viscosity, D eff ∝ 1/η. The equilibrium viscosity in the supercooled liquid can be described by the Vogel-Fulcher-Tammann (VFT) relation (Equation 2), where T 0 is a characteristic temperature related to the bulk metal glass relaxations (the glass transition). The thermodynamic parameters present in Equations 1 and 2 are influenced by the changes in BMG composition induced by the selective crystallization of the small atoms.
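The relations referred to as Equations 1 and 2 are not reproduced above. For orientation only, standard textbook forms consistent with the quantities defined there (a steady-state nucleation rate proportional to the effective diffusivity, and the VFT viscosity law with characteristic temperature T0) can be written as follows; the nucleation barrier ΔG* and the constants B and η0 are additional symbols not defined in the text, and the authors' exact expressions may differ.

```latex
% Standard forms, quoted only for reference (Eq. 1: steady-state nucleation
% rate; Eq. 2: Vogel-Fulcher-Tammann viscosity).
\begin{equation}
  I_v \;\propto\; A \, D_{\mathrm{eff}}\,
      \exp\!\left(-\frac{\Delta G^{*}}{k_B T}\right)
\end{equation}
\begin{equation}
  \eta(T) \;=\; \eta_0 \exp\!\left(\frac{B}{T - T_0}\right)
\end{equation}
```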
Viscosity changes in a crystallizing BMG metastable liquid can be modeled by modifying the parameter T 0 in Equation 2. This reference temperature for our completely amorphous BMG metastable liquid is initially 395°C, while it is set to 435°C when the crystallization of the small Be, Ni and Cu atoms is about 50% of the maximum achievable for an annealing at 405°C (evaluated from the ∆H crystallization data of Fig. 3). When the crystallization of the Be, Cu and Ni atoms reaches 80% for an annealing at 420°C (evaluated from the crystallization ∆H data of Fig. 3), the value of the characteristic temperature T 0 is 460°C.
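To illustrate numerically how raising T0 in the VFT relation stiffens the melt at a fixed processing temperature, a small sketch is given below; η0 and B are hypothetical illustrative constants, while the three T0 values are those quoted in the paragraph above.

```python
import numpy as np

def vft_viscosity(T_C, T0_C, eta0=1e-3, B=2000.0):
    """VFT viscosity eta = eta0 * exp(B / (T - T0)); eta0 (Pa*s) and B (K) are
    hypothetical values, and the temperature difference is the same in C and K."""
    assert T_C > T0_C, "VFT is only defined above T0"
    return eta0 * np.exp(B / (T_C - T0_C))

T_process = 480.0  # hypothetical processing temperature, degrees C
for T0 in (395.0, 435.0, 460.0):   # T0 values quoted above for 0/50/80 % crystallization
    print(f"T0 = {T0:.0f} C -> eta ~ {vft_viscosity(T_process, T0):.2e} Pa*s")
```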
Conclusion
The development of metal glass alloys and of their thermoplastic forming processes requires the investigation and modeling of the viscosity changes in the bulk metastable liquid. In order to correctly process high-strength metal glass alloys like polymers, more rheological investigation and viscosity modeling are necessary to attain a deeper understanding of the physical phenomena occurring in the metal metastable molten state. Softening of the metal glass alloy in its metastable liquid state can then be correctly used for thermo-forming processes. Reheating the metal glass from the glassy state allows these materials to be processed and shaped in the correct temperature range, so that the glass relaxes into a metastable liquid with minimal segregation and modification of the thermodynamic parameters, i.e. without crystallization and segregation of the smaller BMG atoms. The selective crystallization of the smaller alloy atoms may, in fact, induce an uncontrolled rise of the viscosity, leading to an inhomogeneous flow of the melt in the mold. The experimental results presented here describe the thermal events modifying the viscosity and glass transition of thermoformed liquid metals. | 4,701.2 | 2016-02-25T00:00:00.000 | [
"Materials Science"
] |
Molecular detection of human Plasmodium species in Sabah using PlasmoNex™ multiplex PCR and hydrolysis probes real-time PCR
Background Malaria is a vector-borne parasitic disease transmitted through the bite of infective female Anopheles mosquitoes. Five Plasmodium species have been recognized by the World Health Organization (WHO) as the causative agents of human malaria. Generally, microscopic examination is the gold standard for routine malaria diagnosis. However, molecular PCR assays have in many cases shown improved sensitivity and specificity over microscopic or immunochromatographic assays. Methods The present study screened 207 suspected malaria samples from patients seeking treatment in clinics around Sabah state, Malaysia, using two multiplex PCR panels: a conventional PCR system (PlasmoNex™) and a real-time PCR based on hydrolysis probe technology. Discordant results between the two PCR assays were further confirmed by sequencing using 18S ssu rRNA species-specific primers. Results Of the 207 malaria samples, Plasmodium knowlesi (73.4% vs 72.0%) was the most prevalent species based on the two PCR assays, followed by Plasmodium falciparum (15.9% vs 17.9%) and Plasmodium vivax (9.7% vs 7.7%), respectively. Neither Plasmodium malariae nor Plasmodium ovale was detected in this study. Nine discrepant species identifications between the two PCR assays were further confirmed through DNA sequencing. Species-specific real-time PCR accurately diagnosed only 198 of 207 (95.7%) malaria samples to species level, in contrast to the PlasmoNex™ assay, which had 100% sensitivity and specificity based on the sequencing results. Conclusions Multiplex PCR accelerates the diagnosis of malaria. The PlasmoNex™ PCR assay appears to be more accurate than real-time PCR in the speciation of all five human malaria parasites. The present study also showed a significant increase of potentially fatal P. knowlesi infection in Sabah state, as revealed by the molecular PCR assays.
Background
Malaria is a mosquito-borne parasitic disease caused by unicellular, eukaryotic protozoan parasites of the genus Plasmodium, with infective female Anopheles mosquitoes as the sole vector of human-to-human transmission. Malaria continues to be one of the most severe global public health problems and affects many of the poorest tropical and subtropical nations. Five causative Plasmodium parasites have been recognized by the World Health Organization (WHO) as able to infect humans [1].
Malaysia is situated in the hot, humid equatorial region and is therefore receptive and vulnerable to the transmission of malaria. The main malaria focal regions in Malaysia include Sabah and Sarawak states, situated on the island of Borneo, and the central interior regions of Peninsular Malaysia. These areas are also home to a majority of the isolated indigenous populations. Despite the significant reduction in malaria cases over the centuries, the surge of P. knowlesi infections across Malaysia, especially Malaysian Borneo, poses a challenge to malaria control programmes, which aim to eliminate malaria in Peninsular Malaysia by 2015 and in Malaysian Borneo by 2020 [1,2].
Empirical clinical diagnosis, based on the observation of the clinical features of the disease, remains the most common method of diagnosing malaria. However, the accuracy of this presumptive clinical diagnosis is poor due to the extremely wide spectrum of clinical signs and symptoms, ranging from mild to severe malaria. Basically, microscopy (identification of parasite morphology), immunochromatographic-based rapid diagnostic tests (antigen detection), and molecular PCR assays (parasite nucleic acid detection) are the three main malaria diagnostic methods; they target the parasites in the peripheral blood with wide ranges of sensitivity and specificity, as reviewed by Moody [3].
Overall, the advent of molecular PCR-based diagnostics has produced higher specificity and sensitivity in the identification and differentiation of all five human malaria parasites to species level. As a whole, the currently described molecular nucleic acid amplification PCR assays can be subdivided into three categories, namely: i) conventional PCR assays, such as nested PCR [4][5][6], semi-nested PCR [7], and single-step multiplex PCR [8,9]; ii) real-time or quantitative PCR (qPCR) assays based on fluorescence dyes (SYBR Green, high resolution melting) or hydrolysis probe technologies [10][11][12]; and iii) the simplest and least technically demanding loop-mediated isothermal amplification (LAMP) assay [13]. Overall, PCR is able to detect parasites at low titre, generally below 5 parasites/μl of blood, for all five human Plasmodium parasites [3,[7][8][9]11]. Amongst these molecular assays, nested PCR [4][5][6] targeting the 18S ssu rRNA genes of all five human malaria parasites has been considered the molecular gold standard for malaria detection. However, due to the cumbersome, multiple amplification reactions needed in the nested PCR assay (at least six PCRs conducted to differentiate all five human Plasmodium species), many researchers have attempted to develop a simpler, single-step multiplex PCR system, which allows simultaneous identification of malaria parasites in a single-tube reaction [8][9][10][11][12]. Multiplex PCR undoubtedly shortens the time and may be a useful diagnostic adjunct for diseases, such as malaria, that require prompt and effective treatment.
In the present study, 207 patient samples suspected of malaria were screened using two multiplex PCR assays, both targeting the 18S ssu rRNA gene of human Plasmodium species: a single-step multiplex PCR assay (PlasmoNex™) [9] and a combination of two real-time PCR assays based on the hydrolysis probe technique [10,12,14]. Due to the lack of P. ovale cases in the Malaysian setting, real-time primers and a probe specific for this species were not included in the present study. The results obtained from the two PCR assays (PlasmoNex™ PCR and real-time PCR) were then compared. Discordances at the species level between the two PCR assays were then confirmed by DNA sequencing. Overall, the aim of the present study was to test the application of these two published multiplex PCR platforms in the clinical diagnosis of malaria.
Study site and sample collection
The 207 clinically suspected malaria blood samples for the present study were collected between June 2012 and January 2013 from patients seeking medical care at government clinics around Sabah state, Malaysia. Approximately 3 ml of whole blood were collected in EDTA tubes. Standard Giemsa-stained thick and thin blood films were prepared in the field and Plasmodium infection was determined by a field microscopist; the films were then sent together with the blood tubes to the Sabah State Health Department. Genomic DNA was extracted from 200 μl of blood sample using the QIAamp DNA Mini Kit (Qiagen, Germany), according to the manufacturer's instructions.
Hexaplex PCR (PlasmoNex™)
Multiplex PCR was carried out as described by Chew et al. [9]. Generally, 15 μl of PCR reagent mixture containing 20 mM of Tris-HCl, 20 mM of KCl, 5 mM of (NH 4 ) 2 SO 4 , 3.0 mM of MgCl 2 , 0.2 μM of each dNTP, pooled primers mixture, 1 U of Maxima® Hot Start Taq DNA polymerase (Thermo Scientific, USA), and 1.5 μl (~10 ng) of template DNA were used in the detection study. PCR amplification was carried out with an initial denaturation step at 95°C for 5 min; 35 repeated cycles at 95°C for 30 sec, 56°C for 30 sec, 65°C for 40 sec, followed by a final extension at 65°C for 10 min using Mastercycler® Gradient 5331 (Eppendorf, Hamburg, Germany). The amplified products were visualized on ethidium bromide stained 3% (w/v) agarose gel (Promega, Madison, WI) and gel image was captured using Gel Doc™ 2000 Gel Documentation System (Bio-Rad, USA).
Real-time PCR
Real-time PCR was performed using the primers, probes, and reaction conditions described previously by Shokoples et al. [12] and Divis et al. [14], with the following modifications: the fluorophores for the P. falciparum probe were changed to Cy5-BHQ-1 and for P. vivax to Texas Red-BHQ-2. Primers and probes were synthesized by Bioneer Corporation (South Korea) and are listed in Table 1 with the respective concentrations for each reaction. Three separate reactions were performed: (1) a screening reaction for the presence of Plasmodium species with the Plasmodium genus-conserved primer pair (Plasmo1 and Plasmo2) and the Plasprobe, detecting a conserved region of the Plasmodium 18S ssu rRNA gene of all five human malaria parasites [10]; (2) a multiplex PCR for the detection of three Plasmodium species, i.e., P. falciparum, P. vivax, and P. malariae, using species-specific forward primers paired with Plasmo2, and species-specific probes [12]; and (3) a monoplex PCR for the detection of P. knowlesi with the Plasmo1 and Plasmo2 primers and a Pk probe [14]. Briefly, the monoplex and multiplex assays for Plasmodium speciation were performed in a final volume of 25 μL containing 5 μL of template DNA, 12.5 μL of QuantiFast Multiplex PCR master mix (Qiagen, Germany), and 7.5 μL of pooled primer and probe mix. All assays were performed under standard conditions (1 cycle of 95°C for 5 min; 45 repeated cycles of 95°C for 30 sec and 60°C for 30 sec) with the CFX96 Real-time PCR machine (Bio-Rad, USA). A cut-off of 40 cycles was used to define positive samples. The test panel included a number of controls: a negative sample extraction as a negative control, β2-macroglobulin (β2M) target amplification as a positive extraction control for the sample, a positive reference control to detect any variation between runs, and a non-template control for each of the master mixes.
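Only as an illustration of how the three reactions and the 40-cycle cut-off might be combined into a species call (the exact reporting rules are not spelled out above, so the helper below is an assumption), consider:

```python
CT_CUTOFF = 40.0

def call_species(ct):
    """ct: dict of Ct values keyed by probe name; a missing value or Ct >= cutoff
    counts as negative. Probes assumed: 'Plasprobe' (genus screen), 'Falcprobe',
    'Vivprobe', 'Malaprobe', 'Pkprobe'."""
    pos = {probe for probe, c in ct.items() if c is not None and c < CT_CUTOFF}
    if "Plasprobe" not in pos:
        return "Plasmodium negative"
    species = {"Falcprobe": "P. falciparum", "Vivprobe": "P. vivax",
               "Malaprobe": "P. malariae", "Pkprobe": "P. knowlesi"}
    hits = sorted(species[p] for p in pos if p in species)
    return " + ".join(hits) if hits else "Plasmodium sp. (species undetermined)"

print(call_species({"Plasprobe": 28.4, "Pkprobe": 30.1}))    # P. knowlesi
print(call_species({"Plasprobe": 31.0, "Falcprobe": 44.2}))  # species undetermined
```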
Sequencing
Sequencing was only performed on the samples for which PlasmoNex™ and real-time PCR gave different speciation results. Sequencing was carried out with ABI Prism BigDye terminator cycle sequencing kits and ABI Prism 310 automated sequencer (Applied Biosystems, USA). Sequencing results were then BLAST searched on GenBank database for species determination.
Diagnostic sensitivity and specificity for three species
The diagnostic sensitivity (true positive rate), specificity (true negative rate), positive predictive value (PPV) (probability that the disease is present when the test is positive), negative predictive value (NPV) (probability that the disease is not present when the test is negative), and disease prevalence (DP) for three species, i.e., P. vivax, P. falciparum, and P. knowlesi, were calculated based on the 207 malaria-positive samples, using PlasmoNex™ as the standard. The 95% confidence intervals (95% CI) were also calculated using the MedCalc diagnostic test evaluation [16]. The calculations are expressed as percentages for ease of interpretation.
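These quantities follow directly from a 2×2 contingency table against the reference test. A minimal sketch is given below; the counts are hypothetical (chosen so that the output is roughly in line with the P. falciparum figures reported later, although the underlying table is not given in the text), and the confidence intervals use a simple normal approximation rather than MedCalc's exact method.

```python
import math

def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, PPV, NPV and prevalence with simple Wald 95% CIs."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n) if n else float("nan")
        return p, max(0.0, p - half), min(1.0, p + half)
    return {
        "sensitivity": prop_ci(tp, tp + fn),
        "specificity": prop_ci(tn, tn + fp),
        "PPV": prop_ci(tp, tp + fp),
        "NPV": prop_ci(tn, tn + fn),
        "prevalence": prop_ci(tp + fn, tp + fp + fn + tn),
    }

# Hypothetical 2x2 counts for one species, with PlasmoNex(TM) as the reference.
for name, (p, lo, hi) in diagnostic_metrics(tp=35, fp=2, fn=0, tn=170).items():
    print(f"{name}: {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
```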
Real-time PCR
The real-time PCR results indicated that all 207 malaria samples were positive for Plasmodium infection based on the genus-conserved primers and probe, i.e., Plasmo1, Plasmo2, and Plasprobe. Species-specific real-time PCR indicated that 202 malaria samples were caused by single-species infections, i.e., 16 (7.7%), 37 (17.9%), and 149 (72.0%) by P. vivax, P. falciparum, and P. knowlesi, respectively, while determination to species level based on species-specific primers and probes failed for the remaining five samples. No P. malariae infection was detected by the real-time PCR assay ( Table 2).
Sequencing result
Nine discordant results between the two PCR assays were further confirmed via sequencing using 18S ssu rRNA species-specific primers. Of the five samples that failed speciation with the species-specific real-time PCR primers and hydrolysis probes, four were actually single infections with P. vivax and one with P. knowlesi. Two samples diagnosed as P. falciparum infections by the multiplex real-time PCR assay were actually infected with P. knowlesi based on the sequencing results and BLAST data, in agreement with the results obtained from the PlasmoNex™ assay. Another two Falcprobe-positive samples, suspected of being mixed infections based on the PlasmoNex™ results, were then sequenced, which confirmed that both samples were actually triple-species mixed infections with P. falciparum, P. vivax, and P. knowlesi. All of the above sequencing results (n = 9) were in agreement with the results obtained from PlasmoNex™.
Diagnostic sensitivity and specificity for three species
The sensitivity and specificity of the real-time PCR in detecting P. vivax, P. knowlesi and P. falciparum were 72.7% and 100%, 96.8% and 100%, and 100% and 98.8%, respectively, in species diagnosis. For P. falciparum, the probability that the infection was truly present when the real-time PCR was positive (PPV) was 94.6%, while the probabilities that P. vivax and P. knowlesi were truly absent when the corresponding tests were negative (NPV) were 96.9% and 91.4%, respectively, when compared to PlasmoNex™. The data indicated that P. knowlesi (74.4%) was the most prevalent Plasmodium species, followed by P. falciparum (16.9%) and P. vivax (10.6%), in Sabah (Table 3).
Discussion
PlasmoNex™ is a conventional multiplex PCR system developed for the simultaneous identification and differentiation of all five human malaria parasites in a single-tube reaction together with an internal control. The system has been shown to be highly accurate (in sensitivity and specificity) in the identification and differentiation of all five human Plasmodium species in both single- and mixed-species infections [9] and is applicable to epidemiological studies [17]. The real-time PCR applied in the present study was adapted from three published studies [10,12,14]. The genus-conserved primers, i.e., Plasmo1 and Plasmo2, and the Plasprobe used to detect the presence of Plasmodium species originated from Rougemont et al. [10]. In their study, four major human Plasmodium species-specific probes, i.e., Falcprobe, Vivprobe, Ovaprobe, and Malaprobe, were developed in order to further discriminate malaria parasites to species level [10]. Basically, the species-specific real-time PCR described by Rougemont et al. was designed to simultaneously identify all four species in two separate multiplex PCR mixtures, i.e., Falcprobe multiplexed with Vivprobe and Malaprobe multiplexed with Ovaprobe. The Pk probe specific for P. knowlesi detection was later developed as a complement to this Plasmodium screening assay [14]. One of the limitations of the Rougemont et al. method is the inability of the assay to detect mixed infections, which is likely due to competition of the conserved primers (Plasmo1 and Plasmo2) for the different templates and to bias in the amplification of the species with the higher level of infection [12,18]. Several years later, Shokoples et al. improved this method by using a set of species-specific forward primers targeting the four major Plasmodium species (excluding P. knowlesi) in replacement of the genus-conserved forward primer (Plasmo1). In combination with a conserved reverse primer (Plasmo2) and species-specific probes, this real-time PCR assay was optimized for the multiplex assay in a single-tube reaction, which also included the careful validation of single- and mixed-species infections [12]. (Table 2 compares the diagnosis of Plasmodium species by PlasmoNex™ PCR and hydrolysis probe real-time PCR for the samples collected from Sabah, n = 207.) Generally, there are several advantages of real-time PCR over conventional PCR. Real-time PCR is considered a rapid assay and the result is obtained in a straightforward manner on completion of amplification, without any post-PCR downstream analysis such as gel electrophoresis for result interpretation. The ability to quantify DNA copy number, which correlates with the parasite density assessed by microscopic examination and cannot be achieved by conventional PCR approaches, is the major strength of real-time PCR assays; however, the costs of reagents and equipment are much higher than those of any conventional PCR assay. Quantitative analysis of parasitaemia by real-time PCR does correspond with the clinical presentation of the disease and is useful in the post-treatment detection of Plasmodium DNA to monitor response to therapy and/or to predict treatment failure, possibly due to parasite resistance [19].
In the present study, no diagnostic divergence was assumed in the experimental design, as the DNA samples were from the same source and both PCR assays described here targeted the Plasmodium 18S ssu rRNA gene. All DNA samples used here were successfully extracted, as indicated by the presence of the internal positive control band (i.e., human β-haemoglobin in the PlasmoNex™ assay) and fluorescence signal (i.e., human β2-macroglobulin in the real-time PCR).
The PlasmoNex™ PCR assay is the only multiplex system that allows simultaneous identification and differentiation of all five human Plasmodium parasites in a single-tube reaction. The accuracy of the assay was also observed in the present study, in which two triple-species mixed infections were successfully diagnosed and further confirmed by sequencing data. Of the 207 infected samples, nine had discrepant species identifications between the two PCR assays. Two samples positive for P. falciparum as determined by real-time PCR were actually P. knowlesi single-species infections as determined by PlasmoNex™ and confirmed by DNA sequencing. Five Plasmodium-positive (Plasprobe-positive) samples that could not be determined to species level (species-specific probes negative) by real-time PCR were actually single infections with P. vivax (n = 4) and P. knowlesi (n = 1) based on the PlasmoNex™ assay and sequencing results. The major finding of the present study was that the species-specific real-time PCR did not appear to be as specific as the PlasmoNex™ assay, especially in the detection of mixed infections. In two samples with triple-species infections by P. vivax, P. falciparum, and P. knowlesi, the multiplex real-time PCR (for P. vivax, P. falciparum, and P. malariae) and the monoplex real-time PCR (for P. knowlesi) only successfully picked up the P. falciparum infection. Failure of the multiplex real-time PCR indicated that there were possibly some internal diagnostic constraints, perhaps due to competition for the genus-conserved reverse primer or for PCR reagents. Inter-laboratory variation, such as differences in the PCR reagents used, the source of the hydrolysis probes, the type of thermocycler used, etc., might also be contributing factors (not tested here). The failure of the monoplex real-time PCR to determine P. knowlesi in cases with mixed infections can be explained by the possibility of a diagnostic constraint in the real-time PCR, as noted for the real-time PCR developed by Rougemont et al. [12,18]. Plasmo1 and Plasmo2, adopted in the monoplex real-time PCR for P. knowlesi detection, are genus-conserved primers for all five human Plasmodium species; therefore, in cases of mixed infections, P. falciparum and P. vivax fragments may also be co-amplified with the P. knowlesi fragment, and this certainly lowered the concentration of the P. knowlesi amplicon, possibly to a level below the threshold of the Pk probe. Furthermore, in mixed infections parasite densities vary substantially and there is a possibility of bias in the amplification of the species with high loads. (Table 3 reports the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and disease prevalence (DP) of the real-time PCR compared to the PlasmoNex™ PCR.) In contrast, this diagnostic constraint (primer competition) was not observed in the PlasmoNex™ assay, because the sensitivity and specificity of the hexaplex PCR assay were tested empirically against all five human Plasmodium species. From the results, the sensitivity and specificity of the multiplex real-time PCR utilized in the present study seem to be limited, especially in cases of mixed infections. Furthermore, this real-time PCR assay was optimized and tested on four human Plasmodium species, excluding P. knowlesi. Further validation of the sensitivity and specificity of the assay is needed before adopting it as a routine malaria diagnostic tool. In Malaysia, P. knowlesi is recognized as a common cause of severe and potentially fatal human malaria.
To date, 19 knowlesi malaria deaths have been reported in Malaysian Borneo, 12 cases in Sabah state [20,21] and seven cases in Sarawak [22][23][24], all confirmed by PCR. Again, 72% of the malaria cases in this study were confirmed as caused by P. knowlesi using both molecular approaches (Table 2). This further emphasizes the necessity of including a molecular assay specific for P. knowlesi in diagnosis as well as in surveillance and epidemiological studies.
Conclusions
Malaria is predominantly widespread in tropical and subtropical regions and exerts immense health and economic burdens in many economically disadvantaged countries. Microscopic examination is the globally accepted gold standard for routine laboratory diagnosis of malaria. The advent of molecular PCR diagnostic tools can be useful for prospective and retrospective analysis of samples in surveillance and epidemiological studies. Of the currently available PCR assays, a straightforward single-step multiplex PCR speeds up the time to results compared with the conventional molecular gold standard, the nested PCR assay. The PlasmoNex™ PCR assay seems to be more accurate than species-specific real-time PCR in the identification and differentiation of all five human malaria parasites to species level in single- as well as mixed-species infections. This assay successfully detected two triple-species mixed infections, which were misdiagnosed as P. falciparum single-species infections by real-time PCR. This suggests that the PlasmoNex™ PCR assay may serve as an ideal adjunct method for the accurate and effective diagnosis of patients presenting with malaria symptoms. The present study again provides evidence that P. knowlesi infections appear to be on an increasing trend, with the species now accounting for the majority of malaria cases in Sabah state after the state successfully controlled malaria caused by P. falciparum and P. vivax. Potentially lethal P. knowlesi infection is now not only widespread in Malaysia but also shows a trend of emergence in many other countries of Southeast Asia. The growing impact of ecotourism and economic development in Malaysia is expected to lead to a further increase in cases among locals and among travellers. Clinicians and laboratory personnel should be alert to this emerging species because it can be confused with the benign P. malariae when diagnosed solely by microscopy. | 4,753 | 2015-01-28T00:00:00.000 | [
"Medicine",
"Biology"
] |
Formation of phase clusters and chimera states in hierarchical networks of Josephson junctions
We demonstrate regimes such as cluster synchronization and chimera states in a hierarchical network based on Josephson junctions. We find that the fractal spectrum of possible frequencies of the system forms a devil's staircase. Hierarchical networks have interesting fractal features and could be used for scaling some properties of fractal networks. In our structure, Josephson junctions are connected through electric currents in the form of a Cayley tree.
Introduction
The study of large networks consisting of mutually coupled nonlinear oscillators is attractive and important in various fields of science. Among the various types of oscillators, such as van der Pol, Rössler, Chua, etc., special attention is paid to rotators made of Josephson Junctions (JJ) [1]. Ensembles based on JJs have been extensively investigated since they are widely used in various fields of science, from stable-frequency self-oscillators [2] to neuromorphic processors [3]. An essential feature of these systems is synchronization. Synchronization is a fundamental phenomenon of nature, observed when a large number of different rhythmic objects interact with each other. Examples include circadian rhythms and the activity of neurons, as well as swarm behavior in birds and fish. In engineering, the Josephson voltage standard, power combining, and phased antenna arrays realized with van der Pol oscillators also depend significantly on synchronization processes. Various effects have been studied for oscillator ensembles, for example the influence of a load on the stability of synchrony [4], different topologies of oscillator arrays [5]- [8], and hysteresis phenomena [9].
Chimera states have been discovered relatively recently, but they attracted great attention immediately. Since the first reports [10] on the phenomenon of the simultaneous coexistence of synchronous and non-synchronous regimes, they have been observed and described in physical, biological, and chemical systems [11,12]. They have been found, in particular, in SQUID metamaterials [13] and neural networks [14]. Investigations of chimera states could be important for improving some technical characteristics of physical systems such as semiconductor lasers [15] and for the diagnostics of brain pathologies because of the strong connection of chimera states with Parkinson's disease, Alzheimer's disease, schizophrenia, brain tumors, and epileptic seizures [16].
In the present work we introduce dynamics of hierarchical networks based on JJ and connected by phase locking and demonstrate different regimes and global states of the systems. We present
Construction & Model
The model we investigate is presented in Fig. 1. The structure is an ensemble of coupled JJs in the form of a Cayley tree. The external source consists of a direct current and an alternating current of the form I source = I dc + I ac cos(Ωt). When the electric current I 0 arrives at the first node, it divides into two currents I 1 and I 2 , which in turn divide further into two currents, and so on. We thus form a fractal-like structure in which the expression for the k-th electrical current follows from this splitting. The current flowing through the k-th element is given by the junction equation, where ϕ j is the phase of the j-th JJ, i cj is the critical current of the j-th JJ, and β c is the McCumber parameter.
Figure 1. The investigated structure.
Figure 2. A spectrum of normal modes of the investigated system in a linear regime.
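The junction and current-splitting expressions themselves are not reproduced in the text above. As a point of reference only, the standard dimensionless RCSJ (resistively and capacitively shunted junction) equation consistent with the quantities named here (phase ϕ_j, critical current i_cj, McCumber parameter β_c), together with current conservation at a branching node, is commonly written as follows; the binary-tree labelling of the two child branches is an assumed convention and the authors' exact expressions may differ.

```latex
% Standard reference forms only; dots denote derivatives with respect to
% dimensionless time.
\begin{equation}
  \beta_c \,\ddot{\varphi}_j + \dot{\varphi}_j + i_{cj}\sin\varphi_j = i_j
\end{equation}
\begin{equation}
  i_k = i_{2k+1} + i_{2k+2}
\end{equation}
```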
Results
Initially we examined the system in the linear regime in order to determine all possible states of the system. We found that the spectrum of normal modes of the network is a Cantor set. The form of the spectrum is presented in Fig. 2. Subsequently we investigated the system in the nonlinear regime and observed the formation of phase clusters, which are shown in Fig. 3. When the distribution of initial phases is uniform we found oscillator synchronization, provided a threshold in nonlinearity is exceeded; in this case the oscillators located on equal levels are locked. Chimera states, shown in Fig. 4, also exist in hierarchical networks in other parameter regimes.
Acknowledgement
Figure 3. Cluster synchronization. Figure 4. A chimera state. | 952 | 2020-03-01T00:00:00.000 | [
"Physics"
] |
LE CORBUSIER’S APARTMENT-STUDIO : 3D MODEL DATA OF PRELIMINARY RESEARCH FOR THE RESTORATION
Fondation Le Corbusier and A-BIME have collaborated to collect, analyse and organize preliminary research data for the restoration of Le Corbusier's apartment-studio in Paris using a methodology based on a 3D model. This work, developed by A-BIME since 2014, has been financed by Fondation Le Corbusier thanks to a 'Keeping It Modern' grant given by the Getty Foundation. The aim is to have faster and easier access to the various data of the building by using a clear and documented digital graphic document. For each step, the archives provided by the architect and the companies have been studied to understand the nature and quality of the materials, the technical arrangements and their on-site implementation. A study of the technical installations and insulation has also been carried out through investigation in the apartment. By switching between the documents and the images, the know-how increases along with the development of the 3D model; the architectural elements have been drawn and modelled from the archive data and from the survey and diagnostic phase. The numerical construction, based on precast elements, follows the pioneering efforts of specialized architects with BIM (Building Information Model) technology. Thanks to the .IFC format the model has been imported into several software packages in order to carry out structural and hygrothermal simulations. The results have been used as guidelines by the Fondation Le Corbusier to forecast the consequences of restoration choices.
Le Corbusier's apartment-studio
Le Corbusier's studio-apartment occupies the last two floors of the Molitor apartment block, located at 24, rue Nungesser et Coli. Designed and built between 1931 and 1934 by Le Corbusier and Pierre Jeanneret, his cousin and associate, the building called "24 N.C." is situated in the 16th arrondissement at the border between Paris and Boulogne. As a project for a rental building, it offered the architect the opportunity to test the validity of his urban proposals. Radically renewing the Haussmann typology, the apartment block was built for a private developer. For Le Corbusier this was the beginning of a demonstration that his Radiant City project could provide the city dweller with air, light and greenery. Not overlooked, it benefited from fully glazed facades, constituting a radical novelty and contrasting with the surrounding buildings. Similarly, its reinforced concrete frame structure allowed the "free plan" to be implemented. As a result, the apartments, numbering two or three per floor, were delivered with just the sanitary facilities, each occupier being free to partition his apartment as he pleased. Modern comfort included both personal and service lifts, central heating, a laundry and drying room, cellars and garages in the basement and servants' rooms on the ground floor.
The architect was 44 years old when he received the commission for the Nungesser-et-Coli building. As a leader in the Modern Movement's battle against conventional architecture, the early 1930s were for him a time of great productivity since he had already received numerous commissions and was engaged in a number of urban planning projects.
In order to build his own apartment, Le Corbusier negotiated possession of the 7th and 8th floors, undertaking to build the roof of the property at his own expense. He had just married Yvonne Gallis, whom he met in 1922, and was living with her in an old, cluttered apartment in Saint-Germain-des-Prés. Le Corbusier wanted a family living environment for Yvonne and himself, the housekeeper and the dog Pinceau, as well as space for his painting and writing activities. He used the entire width of the building plot, an area of 240 m2 on two levels, to lay out the four main spaces making up the studio, the apartment, the guest room and the roof garden. All are exceptionally bright thanks to the glazed façades, the windows overlooking the courtyard and the skylights, for which the architect used the full range of Saint-Gobain products, including the famous Nevada glass bricks. The architect would inhabit this apartment studio from 1934 until his death in 1965. Access to the apartment is via a passageway reached by a service staircase and equipped with a service lift. The seventh floor contains the entrance, living room, kitchen, and atelier. The eighth and last floor contains a guest room and access to the roof garden.
The volumes of the studio-apartment were structured by the polychromy of the walls, while spatial continuity was emphasized by the grid-pattern tiles covering the floor. The main entrance is at the epicentre of the apartment's four areas. The handrail-free helical staircase leads up through a glass cube to the guest room and roof garden.
The large, pivoting wooden doors make it possible to open and close the various spaces of the apartment. Marked by a striking contrast between traditional architecture and modern technology, the Studio, this "atelier of patient research", extends under a curving arch 12 meters long.
Figure 1. Le Corbusier in his atelier -© FLC-ADAGP
Situated at the end of the corridor leading to the studio, the servant's room is endowed with real comfort for the time: a picture window looking onto the courtyard, electric lighting, a cupboard and even a water tap. The living room was arranged around the casing enclosing the lift machinery and the space taken up by the service lift and the chimney. The walls were faced with panels of oak-veneered plywood and the room furnished with the Canapé and Grand Confort armchair, co-designed by Le Corbusier, Pierre Jeanneret and Charlotte Perriand. As in the rest of the apartment, works of art (by Le Corbusier himself, but also paintings by Fernand Léger or sculptures by Jacques Lipchitz) and "poetic reaction" objects (shells, bones, pebbles) were displayed in the niches and on the picture rails. Their arrangement was frequently varied. The kitchen communicates with the dining room. It is equipped with built-in furniture, a total innovation for the time. Two storage units structure the space and support worktops overlaid with pewter. In the area for preparing meals, the double sink receives light from a small courtyard. The walls are faced with white earthenware tiles. The stove and refrigerator are housed in their own niches. The service door opens on to a passageway leading to the servant's room. This is located on the far side of the apartment, thus preserving the couple's intimacy. The dining room has a sweeping view of Boulogne from a large picture window, which was remodelled several times, and from a balcony-loggia. The geometrical stained-glass window was made in Reims by the artist Brigitte Simon and added in 1949. A red woollen rug, woven in Tlemcen, Algeria, sets off the marble table designed by Le Corbusier and surrounded by four Thonet armchairs. Le Corbusier was fascinated by ocean liners and used their cabins as inspiration for the layout of his marital bedroom. He invented a raised bed resting on two feet and with a headrest fitted to the wall, its height allowing the couple to admire a view of Boulogne over the "dizziness-free" balcony balustrade. Madame had a vanity and her own bathroom with a hip bath; Monsieur had his shower and wash-hand basin; toilet and bidet were shared. Clothes were stored in ingeniously designed furniture, part of a particularly elaborate piece of domestic economy. The guest room was intended mainly for stays made in Paris by Le Corbusier's mother. It is equipped with a shower and wash-hand basin and divided up by a storage cabinet at mid-height grandly surmounted by a central heating device. In his projects Le Corbusier conceived a green space on the rooftops of Paris, blending into the surrounding urban environment. Between the two rounded vaults at the top of the building, he laid out a roof garden offering a breathtaking view of Boulogne and Paris. Figure 3. Sherwood, 1978. The apartment-studio, property of the Le Corbusier Foundation and awarded the Maison des Illustres seal of approval, was listed as a historic monument in 1972, and the entire building in 2017. Since 2016, as the world's first apartment building with entirely glazed façades, it has been part of a UNESCO World Heritage Site comprising a series of 17 works by Le Corbusier.
The restoration works
The apartment reopened its doors to the public in 2018, following two years of restoration works led by the Fondation Le Corbusier. Despite its status as an icon of twentieth-century architecture, together with an aura of memory of the main architect of modernity, Le Corbusier's studio-apartment has been little studied. Therefore, the Le Corbusier Foundation decided that this campaign should be an opportunity to increase the historical and material knowledge of the apartment, through preliminary studies, but also by paying the greatest attention to the discoveries and observations made during the construction work. These studies also shed light on restoration project options that had not yet been defined. The restoration work was based on historical and scientific studies carried out by Franz Graf and Giulia Marino (Graf, Marino, 2014).
Construction method
The 3D model has been built element by element according to the information found in the archives, completed and verified with in situ non-destructive investigations.
Archive
Several archive centers have been searched in order to collect as many documents as possible. In the end, more than 7,500 documents were collected, documents of various natures: letters, bills, drawings, plans, etc. A first Excel database was created to store this information, reusing some work already done by the Foundation and adding other documents afterwards (structural works plans, etc.). In the end the database links each document with the part of the apartment it describes.
Research in situ (camera, georadar)
The apartment has been analyzed using an infrared camera and georadar. The first device has been used to observe the construction details of the apartment: the exact position of the posts and beams, the exact dimensions of the bricks, etc., verifying the archive information and revealing some details never seen before (such as the specific arrangement of the bricks used to build the shower).
On-site measurement
In addition, multiple thermal and hygrothermal sensors have been placed inside and outside the apartment. The data have been acquired over two years and have been used to compare and calibrate the numerical simulations.
3D model construction
The model has been built with the software Revit using all the information collected before. Each element has been constructed thanks to the archive information and the on-site investigations. The informed elements have then been placed in the 3D model thanks to a point cloud of the apartment giving the exact position where each element must be. By querying the model we verified that the amount of material used to build the numerical model is very close to the amount of materials paid for by Le Corbusier for his apartment: Figure 9. Parallel between materials paid for by Le Corbusier and quantities exported from the 3D model.
The IFC format (Industry Foundation Classes) is an open, object-oriented file format used for BIM (Building Information Modeling) projects. It allows interoperability between different software. It is therefore possible to export the digital MockUp built with the software Revit (which has its own format, .rvt) to the IFC format. For each object, the file contains its geometry and different information such as the materials, the construction date, etc. However, loss of data is possible during the export from the .rvt file to the .ifc file. That is why, during the building of the digital MockUp in Revit, it is very important to establish a precise correspondence between the categories used in Revit and the IFC classes in order to limit this loss. The software has the tools needed to define this correspondence. Research is ongoing to eliminate the remaining loss of data.
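As a concrete illustration of working with an exported .ifc file, the following sketch uses the open-source ifcopenshell library to open a model and list elements by IFC class; the file name is hypothetical and the element classes queried are only examples, not the actual contents of the MockUp.

```python
import ifcopenshell  # open-source IFC toolkit

# Hypothetical export of the Revit MockUp to IFC.
model = ifcopenshell.open("apartment_studio.ifc")

# Count the elements of a few IFC classes to check what survived the .rvt -> .ifc export.
for ifc_class in ("IfcWall", "IfcSlab", "IfcDoor", "IfcWindow", "IfcFurnishingElement"):
    print(ifc_class, len(model.by_type(ifc_class)))

# List the identifier and name of every door, e.g. the large pivoting wooden doors.
for door in model.by_type("IfcDoor"):
    print(door.GlobalId, door.Name)
```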
Thanks to the IFC format, it is possible to publish the digital MockUp, which is a graphical representation of the data, on the internet.
Archive documents database
The IFC format allows data to be exported to a database. This database will store data ranging from the building as a whole down to all the elements of a room. It will also contain all the historical archives used to build the MockUp. In the future, the database will also be populated through software used by the different contributors to a project (from the archaeologist to the structural and materials engineer, for instance).
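A minimal sketch of how such a database could link modelled elements to archive documents is shown below; the table layout, column names, and the example GlobalId are hypothetical, not the schema actually used in the project.

```python
import sqlite3

con = sqlite3.connect("archives.db")       # hypothetical database file
con.executescript("""
CREATE TABLE IF NOT EXISTS element (
    global_id TEXT PRIMARY KEY,            -- IFC GlobalId of the modelled element
    ifc_class TEXT,
    name      TEXT
);
CREATE TABLE IF NOT EXISTS document (
    doc_id    INTEGER PRIMARY KEY,
    kind      TEXT,                         -- letter, bill, drawing, plan, ...
    archive   TEXT,                         -- holding institution
    reference TEXT
);
CREATE TABLE IF NOT EXISTS element_document (
    global_id TEXT REFERENCES element(global_id),
    doc_id    INTEGER REFERENCES document(doc_id),
    PRIMARY KEY (global_id, doc_id)
);
""")

# Example query: all archive documents describing a given element.
rows = con.execute("""
    SELECT d.kind, d.archive, d.reference
    FROM document d
    JOIN element_document ed ON ed.doc_id = d.doc_id
    WHERE ed.global_id = ?
""", ("2O2Fr$t4X7Zf8NOew3FLOH",))            # illustrative GlobalId
print(rows.fetchall())
```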
Numerical simulation
Thanks to the .IFC format the model has been analysed with several software packages in order to provide different types of simulations to guide the restoration choices.
Structural simulations
Simulations have been made in order to verify the stability of the concrete vaults.
Light simulation
The model has also been used to create a light simulation resulting in a map of the apartment showing the average natural illumination over a year. This cartography indicates the areas where the works of art will not be damaged by direct sunlight. Figure 14. Cartography of the natural illumination of the apartment.
Cultural mediation
By linking the MockUp and the database in a web project, it will be possible for people to interact with the MockUp: different data about an element of the building will be accessible online, such as photos and archive documents about the selected element. The second stage of the cultural mediation project is to create a collaborative platform between the MockUp and visitors: for instance, if someone has information about an element of the building which does not appear in the MockUp, they will be able to add it. This will allow the best possible knowledge of the building to be built up. | 3,145.8 | 2019-05-04T00:00:00.000 | [
"Computer Science"
] |
Contextual fear memory impairment in Angelman syndrome model mice is associated with altered transcriptional responses
Angelman syndrome (AS) is a rare neurogenetic disorder caused by UBE3A deficiency and characterized by severe developmental delay, cognitive impairment, and motor dysfunction. In the present study, we performed RNA-seq on hippocampal samples from both wildtype (WT) and AS male mice, with or without contextual fear memory recall. There were 281 recall-associated differentially expressed genes (DEGs) in WT mice and 268 DEGs in AS mice, with 129 shared by the two genotypes. Gene ontology analysis showed that extracellular matrix and stimulation-induced response genes were prominently enriched in recall-associated DEGs in WT mice, while nucleic acid metabolism and tissue development genes were highly enriched in those from AS mice. Further analyses showed that the 129 shared DEGs belonged to nucleic acid metabolism and tissue development genes. Unique recall DEGs in WT mice were enriched in biological processes critical for synaptic plasticity and learning and memory, including the extracellular matrix network clustered around fibronectin 1 and collagens. In contrast, AS-specific DEGs were not enriched in any known pathways. These results suggest that memory recall in AS mice, while altering the transcriptome, fails to recruit memory-associated transcriptional programs, which could be responsible for the memory impairment in AS mice.
(WT) and AS mice under control conditions and following contextual fear memory recall. Our results suggest that contextual memory recall engages different transcriptional programs in AS mice than in WT mice, which could be responsible for the recall failure.
Memory recall further segregated hippocampal gene expression in AS mice from WT mice
To determine memory recall-induced changes in gene expression, we used 3 groups of either WT or AS mice for this study: control home-cage mice (control wildtype, CWT and control Angelman syndrome, CAS; to acquire basal gene expression in hippocampus of WT and AS mice), mice trained in the fear conditioning paradigm with no recall (trained wildtype, TWT and trained Angelman syndrome, TAS; to provide control for recall and detect long-lasting changes in gene expression induced by training), and fear conditioned mice with contextual recall at 24 h (recall wildtype, RWT and recall Angelman syndrome, RAS).Hippocampal tissues from these mice were collected at separate time points, either 24 h after training for the conditioned/training groups or 1.5 h post memory recall exposure for the Recall groups (Fig. 1A).Like previously reported 36 , there was no genotype difference in freezing time in the pre-conditioning period, while AS mice exhibited less freezing time in context-dependent memory recall (Fig. 1B), as expected from learning impairment.To examine the main sources of variance in the RNA-seq data, we first performed principal component analysis (PCA).On the second principal component, recall mice were segregated away from the rest of the mice including no-recall-Trained mice (Fig. 1C), indicating that memory recall is a driver of the variability between the samples.In addition, there was a clear separation by genotype in the Recall groups, which was driving the first component of the variability (Fig. 1C).On the other hand, there was no significant distribution separation between control home-cage mice, CWT and CAS, and no recall-Trained mice, TWT and TAS, suggesting that memory recall further differentiates the transcriptome of AS mice from WT mice.
Differential gene expression in hippocampus of WT and AS mice with or without memory recall
We first compared gene expression in hippocampus from WT and AS mice under control conditions.There were 62 DEGs, with 58 genes up-regulated while only 4 genes were down-regulated in CAS, as compared to CWT mice (Fig. 2A; Supplemental Table 1).The annotation of biological processes with gene ontology (GO) indicated that most of the genes under differential regulation were related to wound healing and organ development, extracellular structure and matrix organization, and nutrient and sulfur transporters (Fig. 2B).KEGG pathway enrichment analysis showed that the pathways with the highest number of inputs included "protein digestion and absorption", "focal adhesion, " "AGE-RAGE signaling pathway in diabetic complications", and "PI3K-AKT pathway" (Fig. 2C).
We next determined DEGs induced by either fear conditioning and/or memory recall by performing pairwise comparisons. When compared to CWT, TWT mice only showed 4 DEGs 24 h after fear conditioning training. Of these, 3 genes showed higher expression and 1 showed lower expression (Fig. 3A). More DEGs were identified when TAS mice were compared to CAS; 3 genes showed higher expression and 23 genes showed lower expression (Fig. 3B). Of note, there was no overlap between DEGs identified in TWT and TAS, suggesting that fear conditioning training triggers different transcriptional programs in WT and AS mice, at least for long-lasting (24 h) changes. One possible reason for the low number of DEGs in the "Training" group is that the samples were collected 24 h after training. By this time, expression levels of most learning-related genes have returned to basal levels 38. The biological roles of the few long-lasting DEGs after training remain to be determined.
Similar pairwise comparison was used to assess contextual memory recall-associated changes in gene expression.When comparing RWT to CWT, there were 281 DEGs with 193 genes exhibiting higher expression after recall and 88 genes showing lower expression (Fig. 3C).Similarly, when comparing RAS mice to CAS, there were 225 genes with higher expression and 43 genes with lower expression in RAS mice (Fig. 3D).Of the identified DEGs, only 129 DEGs were shared between RWT and RAS, while the rest (152 in RWT and 139 in RAS) were genotype specific.
Functional annotation of DEGs with GO and KEGG revealed different pathways in WT and AS mice
To further understand the biological function of the DEGs, we performed Gene Ontology (GO) enrichment analysis using the recall-associated DEGs.According to their false discovery rate (FDR) values, DEGs from the RWT vs CWT comparison were significantly enriched mostly in biological processes that are related to extracellular matrix organization and responses to specific signals (Fig. 4A).In contrast, the DEGs from RAS vs CAS comparison were mostly enriched in nucleic acid and RNA metabolic processes (Fig. 4B).A detailed list of genes and GO terms ranked by FDR can be seen in Supplemental Table 2.
Next, we analyzed the DEGs from recall mice compared with control mice with the KEGG enrichment, which was used to describe molecular interactions and relationship networks.KEGG analysis revealed that, among the top 10 enriched pathways, 5 were shared between WT and AS mice, including "Protein digestion and absorption", "PI3K signaling", "Human papillomavirus infection", "Focal adhesion", and "ECM-receptor interaction pathways" (Fig. 4C).Another 5 pathways were uniquely altered either in RWT or RAS.For instance, "MAPK signaling pathway" and "Proteoglycan in cancer pathway" were only found with the RAS vs CAS comparison while "FoxO signaling pathway" and "Small cell lung cancer pathway" were only found with the RWT vs CWT comparison (Fig. 4C).A detailed summary of the KEGG signaling pathways is presented in Supplemental Table 3.
We also performed cell type analysis using cell type-specific markers previously identified in five purified brain cell types 39, namely neurons, astrocytes, endothelial cells, microglia, and oligodendrocytes. Cell type analysis revealed that the DEGs were distributed not only in neurons but also in astrocytes, endothelial cells, microglia, and oligodendrocytes. Although recall-associated DEGs in neurons were similar in WT and AS mice, AS mice had more DEGs attributed to astrocytes and WT had more attributed to endothelial cells (Fig. 4D). These findings need to be further verified by single-cell profiling, including spatial transcriptomics.
Shared and distinct DEGs induced by memory recall in WT and AS mice
We next investigated in greater detail the recall-related DEGs, including those shared between WT and AS mice and those specific to each genotype. Most of the 129 shared DEGs changed in the same direction with a similar magnitude of change in both genotypes (Fig. 5A, Supplemental Table 4). There were 3 exceptions: Car14 (carbonic anhydrase 14), Dnah6 (dynein axonemal heavy chain 6), and Wdr86 (WD repeat domain 86); these genes were decreased in WT and increased in AS after memory recall (Supplemental Table 4). The top biological processes of the shared DEGs included nucleic acid and RNA metabolic processes and cell differentiation and tissue development (Fig. 5B). These DEGs most likely respond to a variety of activities. The top enriched GO terms of the 152 WT unique DEGs included cell responses to various signals, extracellular matrix, and developmental cues (Fig. 5C; Supplemental Tables 5, 6). Of note, a few genes were present across different terms, such as Sox9, Nr4a1, Col3a, and Fn1. Intriguingly, the 139 AS unique recall DEGs (Supplemental Table 5) were not significantly enriched in any GO biological processes, indicating a non-specific/random gene activation.
STRING analysis of the 152 WT unique DEGs revealed a few webs with both strong and weak connections (Fig. 6B).The largest network clustered around fibronectin1, collagen proteins, and their modifying proteins (circled with blue dashed line).Of note, this web also interacts with the transcription factor SOX9. Another interesting cluster consisted of the triangle with the 3 learning and memory associated proteins, ARC, Nr4a1, and Egr2 (circled with green dashed line).
Validation of the memory recall-induced DEGs
To validate the changes in gene expression obtained by RNA-seq, the expression of 17 genes (Supplemental Table 7), which showed distinct expression patterns in three comparisons according to RNA-seq, was analyzed by RT-qPCR. We found a strong correlation between RNA-seq and RT-qPCR data (R² = 0.8313; P < 0.0001; Fig. 7A), demonstrating the reliability of the results. Since gene expression in the ECM network was differentially altered in WT vs AS mice after recall, we performed RT-qPCR on Fn1 and Col6a1. Expression of Fn1 was significantly increased in RWT mice, as compared to CWT (Fig. 7B) when compared with an unpaired t-test, which is consistent with the RNA-seq data. However, it was not statistically significant when using two-way ANOVA, possibly due to the small number of animals used. Fn1 level was higher in CAS than in CWT, and higher in RAS than in RWT mice, although neither comparison reached statistical significance. Expression of Col6a1 was increased after recall in both WT and AS mice (Fig. 7C). Immunostaining showed that FN1 was expressed mostly in the pyramidal cell body and dentate granular layers in hippocampus (Fig. 7D), with the highest expression in the subiculum (Fig. S1), which is consistent with the in-situ hybridization data from the Allen Brain Atlas (http://mouse.brain-map.org/experiment/show/72119593). Quantitative analysis indicated that recall significantly increased FN1 immunoreactivity in CA1, CA3, and dentate gyrus of WT mice, but not of AS mice (Fig. 7E).
Discussion
Learning and memory impairment in the fear-conditioning paradigm has been widely reported in AS mice 26,28,33 ; yet the underlying mechanism is not completely understood.Using RNA-seq, the present study showed that contextual memory recall-induced transcriptional changes in hippocampus of AS mice were significantly different from that of WT mice.In particular, principal component analysis showed that gene expression segregation between WT and AS mice was clearer after memory recall than under control condition, suggesting that UBE3A deficiency significantly modifies cellular responses to fear memory recall.Further data analyses revealed that memory recall activated distinct biological processes/pathways in WT and AS mice, although there were some shared pathways.The shared pathways with the largest DEG numbers included nucleic acid metabolic processes and tissue development, indicating that these processes are commonly activated by a variety of environmental cues.The WT unique pathways included cellular responses to various stimuli and extracellular structure organization.The functions of these pathways in synaptic plasticity and memory consolidation have been widely reported [40][41][42][43][44][45] .No significant enrichment was found in these pathways in AS mice, suggesting that the lack of UBE3A reduced their activation.No significant enrichment in GO biological process was found with the 139 AS unique recall DEGs, which again suggests a lack of an orchestrated response to fear memory recall.
Several groups, including ours, have reported reduced dendritic spine density, especially for mature spines, in the hippocampus of AS mice 33 .The extracellular matrix plays critical roles in brain development and in spine and synaptic maturation and plasticity 40,41 .ECM can influence the formation and differentiation of neurons and other cells in the brain, thereby affecting the establishment of neural circuits and the formation of memories 40,[42][43][44][45] .ECM stabilizes and remodels dendritic spines by a variety of mechanisms, including structural restriction, adhesion, ligand/receptor-driven intracellular signaling and epitope unmasking by proteases 41 .Our RNA-seq study showed that the ECM ontology was highly enriched with DEGs between WT and AS mice and the expression of these genes was upregulated in AS mice.Whether such upregulated ECM gene expression maintains spines/synapses in the immature stage and prevents their maturation and function in memory consolidation is an interesting question.
Of note, 5 of the 10 ECM DEGs encode fibronectin and collagens. Our RT-qPCR and immunostaining results indicated that while recall significantly increased FN1 expression in WT mice, no similar response was observed in AS mice. FN1 is involved in several key processes during brain development, including neuronal migration, differentiation, and synapse formation 46. In particular, FN1 has been shown to regulate the activity of various signaling pathways involved in brain development, including extracellular signal-regulated kinase (ERK) pathways 47,48. Recently, it has been shown that irisin, an exercise-linked hormone produced by cleavage of the fibronectin type III domain containing protein 5, provides neuroprotection in a brain derived neurotrophic factor (BDNF)-dependent manner 49. Irisin induces BDNF accumulation in hippocampal neuronal cultures and stimulates transient ERK activation, thereby preventing amyloid-β oligomer-induced oxidative stress in primary hippocampal neurons 50. We previously showed that upregulation of BDNF by treatment with an AMPA receptor modulator (ampakine) significantly improved synaptic plasticity and fear conditioning performance in AS mice 28. It is thus conceivable that changes in ECM expression affect spine and synaptic plasticity by altering signaling from various trophic factors, hormones, and neuromodulators. On the other hand, BDNF has been shown to regulate the expression and function of ECM 51. The altered ECM gene expression may also be linked to inappropriate BDNF signaling in AS mice. Many of the ECM functions are through binding to integrins, since fibronectin and collagens contain the tripeptide arginine-glycine-aspartic acid (RGD), which is the recognition site for adhesive binding with integrins 52,53. RGD peptides have been shown to evoke changes in synaptic plasticity and structural stability [54][55][56][57][58], suggesting that the RGD-containing proteins found in our study (Fn1, Col18a1, Col1a1, Col3a1, Col1a2, Col6a1) may be involved in synaptic plasticity. Whether abnormal RGD-mediated integrin signaling also contributes to spine pathology in AS remains to be determined.
Another prominent DEG, which is also a FN1 interacting partner, is Col6a1 of collagen type VI (ColVI).Recent studies have shown that ColVI proteins assemble into a unique supermolecular structure to form the characteristic beaded collagen microfilaments present in ECM 59 .This super structure enables its interactions with not only other ECM filaments but also various signaling molecules and membrane-localized receptors and channels through which it regulates multiple cellular functions, such as mitochondrial integrity, autophagy, cell differentiation and survival, and tumor growth and migration [59][60][61][62][63] .Genetic mutations of COL6A1-6 or their abnormal expressions have been linked to various diseases, including muscular dystrophies 64,65 .A recent study has implicated ColVI in Alzheimer's disease (AD); in this case, both mRNA and protein levels of Col6a1 were increased in hippocampus of a mouse model of AD 66 .Furthermore, deletion of Col6a1 enhanced ß-amyloid toxicity, while treatment with ColVI prevented ß-amyloid-induced cell death 66 .Another recent study has linked COL6A2 gene mutations to progressive myoclonus epilepsy 67 , while polymorphisms of several COL6A genes are identified as rare risk factors for schizophrenia and bipolar disorders 68 .A recent behavioral study showed that Col6a1 deficiency results in social memory and object recognition impairment, which is associated with decreased brain dopamine and 5-HT levels 69 .It is thus conceivable that changes in collagen gene expression may contribute to abnormal spine/synapse maturation, circuitry wiring, and cognitive functions in AS.
Besides ECM, some other pathways may also contribute to AS pathogenesis.We noted higher expression in RAS mice of the Dimt1 (DIM1 dimethyladenosine transferase 1-like) gene coding for a methyltransferase that is involved in protein translation and is essential for ribosome biogenesis 70 , thus regulating cell proliferation and growth 71 .We also observed upregulation of the Sox18 gene and downregulation of the Sox9 gene in WT recall; both are components of the SOX transcription factor group involved in the regulation of embryonic and adult neurogenesis 72 .It has been reported that Sox9 downregulation is required for neurogenesis 73 , whereas the Sox18 is significantly downregulated in hippocampus of mice exhibiting cognitive impairment following sevoflurane exposure, suggesting a potential role of Sox18 in cognition 74 .Thus, changes of Sox9 and Sox18 in the WT Recall group could be related to normal learning ability, while the lack of similar changes in AS mice may contribute to memory deficits.
As mentioned earlier, principal component analysis showed that gene expression segregation between WT and AS mice was clearer after memory recall than under control conditions. One interpretation of these results could be that fear-conditioning learning in AS mice failed to trigger changes in expression of genes that are essential for memory formation. Along this line, we previously showed that stronger or repetitive stimulations could rescue synaptic plasticity impairment in AS mice when a single stimulation failed 75. Likewise, treatment with an ampakine, which enhances AMPA receptor function, or a SK2 potassium channel blocker, which facilitates NMDA receptor opening, also rescued long-term potentiation and memory impairment in fear-conditioning in AS mice 28,29. It is thus tempting to speculate that the fear-conditioning training used in the current study, while appropriate to trigger the biological processes leading to memory coding in WT mice, was not sufficient in AS mice.
Figure 7. Validation of RNA-seq results. (A) Relation between expression levels acquired by RNA-seq and RT-qPCR, for three comparisons (CWT vs CAS, RWT vs CWT and RAS vs CAS). Means (n = 3) of selected genes were plotted as log2FoldChange values from RNA-seq and RT-qPCR. A strong Pearson correlation is shown between the expression levels measured using RNA-seq and RT-qPCR (R² = 0.8313). The dots of different colors represent different genes (see Supplemental Table 7). (B) The expression of Fn1 was detected by RT-qPCR. Two-way ANOVA followed by Bonferroni's test, (Genotype x contextual memory recall) interaction: F(1, 8) = 0.07372; post hoc linear contrast: WT control vs. AS control, t(8) = 11.88, p = 0.1812; WT control vs. WT recall, t(8) = 10.69, p = 0.2204; AS control vs. AS recall, t(8) = 10.69, p = 0.4011. n = 3. (C) The expression of Col6a1 was detected by RT-qPCR. Two-way ANOVA followed by Bonferroni's test, (Genotype x contextual memory recall) interaction: F(1, 8) = 2.899; post hoc linear contrast: WT control vs. AS control, t(8) = 33.11, p = 0.1259; WT control vs. WT recall, t(8) = 74.97, p = 0.007; AS control vs. AS recall, t(8) = 74.97, p = 0.0005. n = 3. (D) Representative images of Fn1 immunostaining in hippocampus of control WT and AS (home cage), WT Recall and AS Recall mice. Images were acquired with a 20X objective. Scale bar, 50
In conclusion, our results indicate that memory recall in WT and AS mice activates multiple transcriptional programs.Some pathways are shared by both genotypes, and most likely represent common transcriptional activities in response to various activities.The pathways specifically found in WT mice, including the ECM network clustered around fibronectin 1 and collagens, and pathways involved in responses to various stimuli, most likely play important roles in memory encoding.In contrast, the lack of such an orchestrated activation in AS mice may be one of the reasons for their memory deficits.Additionally, whether the AS unique DEGs, which are not enriched in any known biological processes, simply create "noise" background, or further hinder memory encoding remains to be determined.
One potential weakness is the small number of animals used in this study.Future research with a larger number of animals may further expand our findings.
Mice
Animal procedures were approved by the Institutional Animal Care and Use Committee of Western University of Health Sciences and were conducted in accordance with the guidelines of the NIH. Experiments were performed in 2- to 4-month-old male wildtype (WT) and UBE3A-deficient (AS) mice housed in a 12-h light/dark cycle with food and water ad libitum. Original AS mice were obtained from the Jackson Laboratory, strain B6.129S7-Ube3a^tm1Alb/J (stock no. 016590), and a breeding colony was established as previously described 28. For RNA-seq, a total of 18 mice were used (3/group).
Fear conditioning
Fear conditioning was performed as previously described 33 .Briefly, WT and AS mice were handled for 5 days before being subjected to contextual fear conditioning.Mice were placed in the fear conditioning chamber (H10-11 M-TC, Coulbourn Instruments).After a 2 min exploration period, three tone-footshock pairings separated by 1 min intervals were delivered.The 85 dB 2 kHz tone lasted 30 s and co-terminated with a footshock of 0.75 mA and 2 s.Mice remained in the training chamber for another 30 s before being returned to home cages.Contextual memory recall was tested 1 day after training in the original conditioning chamber with 5 min recording.
Tissue collection
Ninety minutes after recall, mice were anesthetized using isoflurane and were decapitated, and the hippocampus was rapidly dissected from the brain.Bilateral hippocampi were combined and flash-frozen in liquid nitrogen.The tissues were then transferred to a − 80 °C freezer before RNA extraction.
RNA extraction and sequencing
Total RNA was extracted from dissected tissue using RNeasy Mini Kit (QIAGEN) and genomic DNA was removed with RNase-Free DNase Set (QIAGEN) following the manufacturer's instruction.Total RNA concentration and quality were determined using a NanoDrop 2000 spectrophotometer (ThermoFisher Scientific), and integrity was evaluated on 1% agarose gels.
A total of 1 μg RNA was used as input for RNA sample preparation.The sequencing libraries were constructed by using NEBNext Ultra™ RNA Library Prep Kit following the manufacturer's protocol (New England Biolabs), and sequenced at Novogene Corporation (Beijing, China).
Sequencing data processing and bio-informatics analysis
Sequenced reads were trimmed for adapter sequences and filtered to remove low-quality reads, and then mapped to the mouse mm10 genome version using STAR. Gene expression was then estimated by using FeatureCounts to compute read counts. To identify altered gene expression due to either genotype or fear conditioning, we performed differential gene expression analysis using DESeq2. Genes with a false discovery rate (FDR) ≤ 0.05 and log2FoldChange ≥ 0.5 (up-regulated genes) or log2FoldChange ≤ −0.5 (down-regulated genes) were considered significantly differentially expressed genes (DEGs). Genes for which fold changes did not reach this threshold were not included in the functional analysis. For the functional analysis, we focused on genes with relatively high expression levels, with a normalized read count of 100 or more in at least one of the groups (3 biological replicates). Enrichment in GO and KEGG pathways (https://www.kegg.jp/kegg/kegg1.html) of the selected DEGs was analyzed using ShinyGO v0.741 76 and KOBAS tools [77][78][79][80], respectively. FDR ≤ 0.05 was used as the threshold to identify significant functional categories and metabolic pathways. Protein-protein interaction analysis of DEGs was based on the STRING database 81.
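The DEG selection criteria described above (FDR ≤ 0.05, |log2FoldChange| ≥ 0.5, and a normalized count of at least 100 in one group) can be expressed as a simple filter. The sketch below assumes a hypothetical results table with the listed column names; it is an illustration of the thresholds, not the authors' pipeline.

```python
import pandas as pd

# Hypothetical DESeq2 result table with per-group mean normalized counts.
res = pd.read_csv("deseq2_RWT_vs_CWT.csv")  # columns assumed: gene, log2FoldChange, padj,
                                            # mean_RWT, mean_CWT

fdr_ok  = res["padj"] <= 0.05
fc_ok   = res["log2FoldChange"].abs() >= 0.5
expr_ok = res[["mean_RWT", "mean_CWT"]].max(axis=1) >= 100   # expressed in at least one group

degs        = res[fdr_ok & fc_ok]             # differentially expressed genes
degs_for_go = res[fdr_ok & fc_ok & expr_ok]   # subset used for the functional analysis

up   = degs_for_go[degs_for_go["log2FoldChange"] > 0]
down = degs_for_go[degs_for_go["log2FoldChange"] < 0]
print(len(up), "up-regulated,", len(down), "down-regulated")
```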
Cell type composition analysis
Taking advantage of recently reported cell type specific genes 39 , we estimated specific cell type contributions to the DEGs, using genes specific to neurons, astrocytes, microglia, endothelial cells, oligodendrocytes, and oligodendrocyte precursor cells.
RT-qPCR
Reverse transcription was performed with 1 μg of total RNA using the High-Capacity cDNA Reverse Transcription Kit (Thermo Fisher Scientific). Quantitative PCR was performed using Fast SYBR Green Master Mix (Thermo Fisher Scientific) on a CFX96 Real-time thermocycler (Bio-Rad). The Gapdh gene was used as a reference gene for 2^-ΔΔCt quantification. Primer sequences are listed in Supplemental Table 7.
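For reference, the 2^-ΔΔCt quantification with Gapdh as the reference gene amounts to the following small calculation; the Ct values in the example are made up for illustration and are not measured data.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (reference gene: e.g. Gapdh)."""
    d_ct_sample  = ct_target_sample  - ct_ref_sample    # dCt in the recall sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt in the home-cage control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: a target gene vs Gapdh in a recall mouse and a control mouse.
print(fold_change_ddct(24.1, 18.0, 25.0, 18.2))  # > 1 means higher expression after recall
```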
Immunofluorescence
Immunofluorescence was performed as previously described 82. Briefly, sections were blocked in 0.1 M PBS containing 10% goat serum and 0.3% Triton X-100, and then incubated in primary antibody rabbit anti-Fibronectin (1:100, ab2413, Abcam) at 4 °C. Sections were washed three times in PBS and incubated in Alexa Fluor 594 goat anti-Rabbit IgG (A-11037, Life Technologies) for 2 h at room temperature. Sections were washed again three times in PBS before being mounted onto glass slides using VECTASHIELD mounting medium with DAPI (Vector Laboratories). To ensure similar antibody exposure, brain sections from different experimental groups were processed simultaneously, including using the same antibody solution with the same incubation time and conditions. Mean fluorescence intensity was calculated in the pyramidal layer and the granular cell body layer by drawing a box along the cell body layer in the same anatomic locations across different sections of different animals in all experimental groups. In each region (CA1, CA3, DG), the mean intensity of three areas was used as a measurement of Fn1 staining intensity.
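The intensity quantification described above (mean fluorescence in boxes drawn along the cell body layer, averaged over three areas per region) can be sketched as follows; the image, box coordinates, and array shapes are placeholders, not the actual analysis code.

```python
import numpy as np

def mean_intensity(image, boxes):
    """Average pixel intensity over rectangular ROIs drawn along the cell-body layer.

    image : 2D array of the FN1 channel
    boxes : list of (row_start, row_stop, col_start, col_stop) tuples
    """
    values = [image[r0:r1, c0:c1].mean() for r0, r1, c0, c1 in boxes]
    return float(np.mean(values))   # mean of the three areas per region (CA1, CA3 or DG)

# Illustrative use with a random image and three hypothetical boxes.
img = np.random.rand(512, 512)
print(mean_intensity(img, [(100, 130, 50, 300), (200, 230, 50, 300), (300, 330, 50, 300)]))
```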
Statistical analyses
Results are reported as means ± SEM and compared using two-way ANOVA followed by Bonferroni's test (Graph-Pad Prism 6) or Student's t test (for pairwise comparison of RNA-seq data); statistical significance was set at p < 0.05, n represents the number of animals.
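A sketch of the two-way (genotype × recall) ANOVA described above, using statsmodels, is shown below; the data frame values are invented placeholders and the formula interface is one possible way to set up the design, not the GraphPad Prism analysis actually used by the authors.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical tidy table: one row per animal with genotype, condition and the measurement.
df = pd.DataFrame({
    "genotype":  ["WT"] * 6 + ["AS"] * 6,
    "condition": (["control"] * 3 + ["recall"] * 3) * 2,
    "value":     [1.0, 1.1, 0.9, 1.6, 1.5, 1.7, 1.2, 1.0, 1.1, 1.3, 1.2, 1.4],
})

model = ols("value ~ C(genotype) * C(condition)", data=df).fit()
print(anova_lm(model, typ=2))   # two-way ANOVA table with the genotype x recall interaction
```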
The study is reported under the guidance of ARRIVE guidelines.
Figure 2. Differential gene expression analysis in hippocampus of WT and AS mice.(A) Heatmap showing the expression profiles of DEGs between WT and AS mice.CWT/CAS: home cage WT and AS mice.The cluster heatmap was generated using the ClustVis 37 .Each row represents an individual gene.Red color indicates increased expression and blue indicates decreased expression.(B) Tree view showing the 10 most significantly enriched GO (biological process) terms for DEGs.The size of the solid blue dots corresponds to the enrichment false discovery rate (FDR) with bigger dots indicating more significant changes.Terms with many shared genes are clustered together.(C) Top 10 enriched KEGG pathways of DEGs.Dot size indicates the number of genes annotated as participants of any KEGG pathway.Dot color corresponds to the p-value.(D) Protein association network visualization generated with STRING from DEGs.Edges represent the protein-protein associations supported by interaction sources.Dashed lines delimit protein clustering according to functional roles (see text for details).
Figure 3. Differential gene expression analysis in hippocampus of WT and AS mice following fear conditioning and contextual memory recall. (A-D) Volcano plots depicting up-regulated (right) and down-regulated (left) genes induced by training (A,B) or memory recall (C,D). The x-axis represents the Log2FoldChange, and the y-axis shows the −Log10 adjusted p-value. The red dots represent highly up-regulated and down-regulated genes (adjusted p-value ≤ 0.05 and |Log2FoldChange| ≥ 0.5). Blue dots indicate those genes with an adjusted p-value ≤ 0.05 and a |Log2FoldChange| ≥ 0.1 and ≤ 0.5. Grey and green dots represent no significant change in gene expression. The top 10 DEGs with adjusted p-value ≤ 0.05 are labeled. CWT/CAS: home cage WT and AS mice; TWT/TAS: fear-conditioning trained WT and AS mice; RWT/RAS: fear-conditioning trained and recalled WT and AS mice.
Figure 4. GO and KEGG pathway analysis of recall-induced DEGs in WT and AS mice. (A,B) Tree view showing the 10 most significantly enriched GO (biological process) terms for recall-induced DEGs in WT (A) and AS (B) mice. The size of the solid blue dots corresponds to the enrichment FDR, with bigger dots indicating more significant enrichment. Terms with many shared genes are clustered together. (C) Top 10 enriched KEGG pathways of DEGs. Dot size indicates the number of genes annotated as participants of any KEGG pathway. CWT/CAS: home cage WT and AS mice; RWT/RAS: fear-conditioning trained and recalled WT and AS mice. Dot color corresponds to the p-value. (D) Potential contributions of different types of brain cells (neuron, astrocyte, endothelia, microglia, oligodendrocyte) to recall-induced DEGs. Cell type analysis was performed using cell type-specific markers previously identified in five purified brain cell types.
Figure 5. Further analysis of recall-induced transcriptomic changes in WT and AS mice. (A) Venn diagram (same as in Fig. 3E) showing recall-induced DEGs shared by or unique to WT and/or AS mice. CWT/CAS: home cage WT and AS mice; RWT/RAS: fear-conditioning trained and recalled WT and AS mice. The overlap of the different circles represents the number of DEGs shared by the two genotypes, whose GO enrichment is shown in panel B, while that for WT unique DEGs is in C. NA (not available) indicates that AS unique recall DEGs are not enriched in any pathways. (B) Significantly enriched GO terms of recall-induced DEGs shared by WT and AS. (C) Significantly enriched GO terms of recall-induced DEGs unique to WT mice.
"Biology",
"Medicine"
] |
Machine learning dislocation density correlations and solute effects in Mg-based alloys
Magnesium alloys, among the lightest structural materials, represent excellent candidates for lightweight applications. However, industrial applications remain limited due to relatively low strength and ductility. Solid solution alloying has been shown to enhance Mg ductility and formability at relatively low concentrations. Zn solutes are significantly cost effective and common. However, the intrinsic mechanisms by which the addition of solutes leads to ductility improvement remain controversial. Here, by using a high throughput analysis of intragranular characteristics through data science approaches, we study the evolution of dislocation density in polycrystalline Mg and also Mg–Zn alloys. We apply machine learning techniques in comparing electron back-scatter diffraction (EBSD) images of the samples before/after alloying and before/after deformation to extract the strain history of individual grains, and to predict the dislocation density level after alloying and after deformation. Our results are promising given that moderate predictions (coefficient of determination R² ranging from 0.25 to 0.32) are achieved already with a relatively small dataset (∼5000 sub-millimeter grains).
Supplementary information to "Machine learning dislocation density correlations and solute effects in Mg-based alloys" Salmenjoki et al.
Supplementary Note 1: Data acquisition
To collect the corresponding grain data before and after loading, we started by finding the correct coordinates of the same position in both images. We first computed a map of the mean squared difference (MSD) between the pixel orientations of 50 × 50 pixel sub-images of the before- and after-loading data. We then turned the MSD map into a coordinate map by retrieving, for every sub-image, the coordinates of the minimum in the MSD map and then (median) filtering the coordinate map to reduce noise and achieve a smooth coordinate map between the two images. Obviously, the correspondence of the grains is not perfect, as the after image has been deformed relative to the before image, but in this way we get adequate matches, as illustrated by the examples in Supplementary Fig. 1. Thus we were able to collect a dataset where we have the features of the initial grain and the ρGND in the pixels of the post-deformation image corresponding to the initial grain. However, as the figure shows, we lose some pixels that go missing due to newly forming grain boundaries, inaccuracy in finding the correct pixels, or the changing shape of the grains. To counter the first, i.e. new grain boundaries, we took twinning into account by merging grains with grain boundaries matching the twinning misorientation in the after image (following the procedure documented in the MTEX manual [1]). In this way we were able to reduce the number of missing pixels in some cases but, as noted in the original publication presenting the data, twinning is not the dominant deformation mechanism in the studied samples [2]. One example of a twinning grain is illustrated in the Supplementary Figures.
The grain features used were: size s; circumference; max width; max height; ρGND/s; dρGND,1(4 µm); dρGND,2(4 µm); dρGND,3(4 µm); orientation parameters φ1, Φ, φ2; Grain Average Misorientation (GAM); sum of misorientations between neighbors θn; number of neighbors n_nbr; average misorientation θn/n_nbr; Grain Orientation Spread (GOS).
Supplementary Note 3: SVM training details
The SVM was implemented with the scikit-learn library [3]. Our SVM used the radial basis function kernel. The "tube" hyperparameter was set to 0.25 and the penalty parameter C ≈ 1.5 was deduced by finding the optimal value according to the loss on validation grains, as illustrated in Supplementary Fig. 3.
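A minimal sketch of the SVR setup described above is given below, assuming per-grain feature vectors and log(ρGND/s) targets; the random placeholder data, the train/validation split, and the feature dimensionality are illustrative, while the RBF kernel, ε = 0.25 (the "tube") and C ≈ 1.5 follow the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# X: per-grain features (size, circumference, orientation, GAM, ..., as listed above),
# y: log(rho_GND / s) after loading. Random placeholders stand in for the real EBSD data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 12))
y = rng.normal(size=5000)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

svr = SVR(kernel="rbf", epsilon=0.25, C=1.5)   # epsilon = the "tube" width, C from validation
svr.fit(X_tr, y_tr)
print("validation R^2:", svr.score(X_val, y_val))
```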
Supplementary Figure 3. MSE error of SVM prediction for training and validation sets as a function of hyperparameter C.
Supplementary Note 4: Missing pixels in the EBSD image and impact on the SVM predictions. As discussed in the main text and above in Supplementary Note 1, finding a one-to-one correspondence of grains in the images before and after loading has several challenges. One is the increasing amount of noisy pixels in the after image. This arises at least partially from new, forming grain boundaries [4], but drastic local orientation differences also lead to missing values of ρGND in the after image. Due to the missing pixels, the target value onto which the SVM mapping is done contains noise, and this affects the success of the SVM prediction. To elaborate, Supplementary Fig. 4 shows the absolute error versus the fraction of missing pixels in the image after loading (i.e. 0 = all pixels found, 1 = all pixels missing) for single-grain predictions, along with the average curve. As expected, the error of the SVM output increases with the number of missing pixels, and does so drastically when approaching 1.
Supplementary Figure 4. Absolute error between the true and predicted values versus the fraction of missing pixels in the grain after loading, for single grains (points) and the mean (line).
Supplementary Note 5: grain boundary features used
The grain boundary features used were: misorientation θ; length of the grain boundary; direction vector g = (g1, g2) pointing from one grain center to another.
Supplementary Note 6: GN training details
For the GN architecture, we used the encode-process-decode scheme [5]. The idea of the model is to use the features of nodes (grains) and edges (grain boundaries between neighboring nodes) to predict a node-wise variable (the target is log ρGND/s after loading). First, the model encodes the data, both node and edge features, into a latent space and then processes the data by passing the latent representation of the features to neighboring nodes. The number of processing steps then determines how 'far' the interactions are considered to apply (e.g. with three processing steps a node receives the data of all nodes reachable in three steps via the message-passing). Finally, after the processing steps, the processed data is decoded from the latent space back to the desired form (i.e. the target).
We implemented the GN with DeepMind's Graph Nets library [5]. For training, we applied the same idea as e.g. in [6], where the number of processing steps is fixed to some value but the GN is encouraged to solve the problem with as few steps as possible. We used five steps, which is quite many considering that grains that many jumps apart do not necessarily affect each other. And indeed, Supplementary Fig. 5a shows the training and validation loss during training, and the smallest validation loss is achieved with three processing steps. Other parameter values used for our model: 32 latent parameters for nodes, 16 latent parameters for edges, two hidden layers in the encoder, decoder and processor. The learning rate was set to 5 · 10⁻⁶ and the training was conducted with the early-stopping criterion, i.e., according to Supplementary Fig. 5a, the optimal GN was obtained after approximately 5000 epochs, where the validation loss was at its minimum.
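To make the message-passing idea concrete, the following plain-NumPy sketch performs a single simplified processing step on a small grain graph; it is not the Graph Nets implementation (which uses learned MLPs for the node, edge, and global updates), and all shapes and the update rule are illustrative.

```python
import numpy as np

def message_passing_step(node_h, edge_h, edges):
    """One processing step: each node aggregates the latent states of its neighbours.

    node_h : (n_nodes, d_node) latent node features
    edge_h : (n_edges, d_edge) latent edge features
    edges  : (n_edges, 2) array of (sender, receiver) grain indices
    """
    n_nodes, d_node = node_h.shape
    incoming = np.zeros_like(node_h)
    for (s, r), e in zip(edges, edge_h):
        # message = sender state modulated by the edge state (simplified; a real GN
        # would pass [sender, receiver, edge] through a learned MLP here)
        incoming[r] += node_h[s] * e[:d_node]
    return node_h + incoming        # residual update of the node states

# Tiny illustrative graph: 3 grains, 2 grain boundaries.
nodes = np.random.rand(3, 4)
edges = np.array([[0, 1], [1, 2]])
edge_feats = np.random.rand(2, 4)
print(message_passing_step(nodes, edge_feats, edges))
```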
To further elaborate on what the GN has learned by the end of the training, Supplementary Figs. 5b-c show the latent representation of the nodes after three processing steps in a space of reduced dimensions (t-SNE). In Supplementary Fig. 5b, the coloring is according to the target value the GN is trying to learn, and there are clearly separated regions with high-ρGND/s and low-ρGND/s grains. Furthermore, in Supplementary Fig. 5c, the coloring is according to twinning / not twinning, and again some separation can be seen in the latent space (i.e. grains that are more inclined to twinning).
"Materials Science",
"Engineering",
"Computer Science"
] |
Efficiency Evaluation of Agricultural Informatization Based on CCR and Super-Efficiency DEA Model
In this research, we evaluate the input/output efficiency of agricultural informatization (AI) and the redundancy of AI by using the DEA method. An index system evaluating the inputs and outputs of agricultural informatization was built with the support of the CCR model and the super-efficiency DEA model, containing 9 indices. We processed data on agricultural informatization in Huaihua and Xiangxi, two areas in Hunan province, with DEMP and EMS software, analyzed the efficiency of agricultural informatization in 5 different years, and identified the development trend of AI from 2009 to 2013. The results show that the CRSTE, VRSTE, and SCALE efficiencies of AI in the two areas are efficient, and that the inputs and outputs of AI in 2009 and 2011 have room for improvement. In general, the development trend of AI efficiency in the two areas has been stable over these years, even though they are not developed areas in terms of agricultural informatization. The index system of AI evaluation in this research could be made better and more reliable in the future if more data were obtained and more indices added, but it is currently hard to obtain more data because of the incompleteness of the statistical data released by local governments.
Introduction
With the development of information communication technologies (ICTs), the trend of informatization is spreading rapidly around the world. Agricultural informatization is an important component of the information society, and it is the inevitable outcome of combining modern information technology with agriculture's internal demands.
Everett Rogers (2000) defines informatization as the process through which new communication technologies are used as means for furthering development as a nation becomes more and more an information society [1]. As a description of development, informatization refers to the extent to which a geographical area, economy or society continues to develop on the basis of information and information communication technologies; in other words, it refers to the degree to which the workforce is enhanced by information and information communication technologies [2]. Agricultural informatization refers to the comprehensive application of information communication technologies (ICTs) in agriculture, penetrating the whole process of agricultural production, marketing, and consumption, as well as rural social, economic, technological, and other specific aspects [3].
The efficiency evaluation of AI in this research refers to the benefit of production inputs, which is assessed by calculating indices describing the ICTs supporting agriculture and the outputs in agriculture. The indices needed in the evaluation differ from those of other studies because the research aims differ.
Methods for informatization measurement and evaluation abroad [4] are usually the Machlup method, the Porat method, and the Informatization Index method. These methods aim mainly to determine the informatization level of a certain society or economic unit; however, they do not focus on efficiency and cannot tell whether the inputs for the indices related to AI are reasonable.
Methods for informatization measurement and evaluation in China include the Delphi method and the Analytic Hierarchy Process (AHP) method. The method for the measurement of informatization level used in China, proposed by the National Information Center, combines the Machlup and Porat methods and includes 6 factors and 21 indices [5]. When applied to domestic rural or agricultural informatization research, these methods mostly concentrate on measuring the level and service ability of AI.
Compared with research on evaluating the level, development status, and service ability of AI, research on the efficiency of AI is rare.
The Data Envelopment Analysis (DEA) method is seldom used in research on the evaluation of AI, where the Machlup, Porat, AHP, and factor analysis methods are common. The DEA method needs fewer indices than the methods discussed above to evaluate the efficiency of AI, and it mitigates the difficulty of data collection.
CCR Model
Data envelopment analysis (DEA) is a nonparametric method in operations research and economics for the estimation of production frontiers [8]. It is used to empirically measure the productive efficiency of decision-making units (DMUs). Non-parametric approaches have the benefit of not assuming a particular functional form or shape for the frontier; however, they do not provide a general relationship (equation) relating outputs to inputs [9]. Description of the CCR efficiency model:
1) Suppose there are n DMUs: DMU_1, DMU_2, ..., DMU_n.
2) Suppose m input items and s output items are selected.
3) Let the input and output data for DMU_j be (x_1j, x_2j, ..., x_mj) and (y_1j, y_2j, ..., y_sj).
4) The calculation of the total efficiency of DMU_l can then be formulated as a linear programming problem: minimize θ subject to Σ_j λ_j x_ij ≤ θ x_il for every input i, Σ_j λ_j y_rj ≥ y_rl for every output r, and λ_j ≥ 0.
5) Let X_l = (x_1l, x_2l, ..., x_ml) and Y_l = (y_1l, y_2l, ..., y_sl).
6) The returns to scale in the CCR model are assumed to be constant.
7) θ is the efficiency rate, with 0 ≤ θ ≤ 1. Economic meaning of θ: when the output Y_l is replaced by a linear combination of the DMUs' outputs, θ is the compressibility of the input X_l; θ is also known as the efficiency rate value.
1) If θ = 1, the DMU being examined lies on the efficient frontier and is in an efficient status.
2) If θ < 1, the DMU being examined is in an inefficient status, and 1 − θ measures how much more input it uses than it would in an efficient status.
Economic meaning of DEA efficiency: output cannot be increased unless one or more inputs are increased or other outputs are reduced; under the same conditions, input cannot be decreased.
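To make the formulation concrete, the following is a minimal sketch of the input-oriented CCR model solved as a linear program with SciPy. This is our own illustration, not the DEAP or EMS implementation used later; the function name, the toy data, and the exclude_self flag (used for super-efficiency in the next subsection) are assumptions made for the example.

```python
# Input-oriented CCR efficiency as a linear program:
#   min theta  s.t.  sum_j lam_j x_ij <= theta x_il,  sum_j lam_j y_rj >= y_rl,  lam_j >= 0
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, l, exclude_self=False):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs), l: index of the evaluated DMU."""
    m, n = X.shape
    s, _ = Y.shape
    ref = [j for j in range(n) if not (exclude_self and j == l)]
    k = len(ref)
    # decision variables: [theta, lambda_1, ..., lambda_k]
    c = np.zeros(1 + k); c[0] = 1.0                      # minimise theta
    A_in = np.hstack([-X[:, [l]], X[:, ref]])            # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y[:, ref]])    # output constraints
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, l]])
    bounds = [(None, None)] + [(0, None)] * k            # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0] if res.success else float("nan")

# tiny made-up example: 2 inputs, 1 output, 4 DMUs
X = np.array([[2., 4., 3., 5.], [3., 2., 4., 6.]])
Y = np.array([[1., 1., 1., 1.]])
print([round(ccr_efficiency(X, Y, l), 3) for l in range(4)])
```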
Super-efficiency DEA Model
When many DMUs are efficient and their evaluated efficiency is 1 (θ = 1), the CCR model cannot tell which of them is better. Andersen and Petersen (1993) proposed the super-efficiency DEA model to sort efficient DMUs and rank them. For an efficient decision-making unit, the inputs can be increased proportionally while the unit remains efficient; the maximum proportion by which its inputs can be increased is its super-efficiency evaluation value.
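With the hypothetical ccr_efficiency helper and the toy data X, Y sketched above, the super-efficiency score is obtained simply by excluding the evaluated DMU from the reference set, so efficient units receive scores above 1 and can be ranked:

```python
# Super-efficiency variant: the evaluated DMU is removed from its own reference set,
# so efficient DMUs obtain scores greater than 1 and become comparable.
print([round(ccr_efficiency(X, Y, l, exclude_self=True), 3) for l in range(4)])
```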
Data
Using the DEA and super-efficiency DEA models, we evaluate the inputs and outputs of agricultural informatization (AI) in Huaihua and Xiangxi based on data from 2009 to 2013 collected from the statistical bulletins of national economic and social development in Hunan Province. We analyze the input and output data of agricultural informatization in the two areas, find the aspects of AI investment that can be improved, and identify the development trend of AI in the two areas.
We built an index system containing 9 indices, considering feasibility and accuracy, and the data were collected from the statistical bulletins of national economic and social development published by the Statistical Bureau of Hunan Province.
Data on the inputs and outputs of AI in Huaihua and Xiangxi are listed in Table 1.
Empirical Results and Discussion
The data were processed with DEAP (Data Envelopment Analysis Program, Version 2.1) using the CCR model, and the results are listed in Table 2. We also processed the data with the EMS software (Efficiency Measurement System, Version 1.3) using the super-efficiency CCR model, and the results are listed in Table 3. 2) The CRSTE values of Huaihua in 2009 and 2011 are below 1 but greater than 0.9, which means the inputs and outputs of AI in those years have room for improvement.
3) The scale merit of Huaihua increased in 2009 and 2011, which means the technical efficiency from CRS DEA can be increased by expanding the scale of AI investment. Analysis of Table 3 and Figure 1: 1) In 2011, the inputs and outputs of AI in Huaihua were inefficient.
Results
2) From 2010 to 2013, the efficiency of AI in Huaihua gradually increased, while the efficiency in Xiangxi was relatively stable and did not change much.
3) The efficiency decreased in both areas from 2009 to 2010, but for different reasons. The increase in electricity for rural use led to the decrease in efficiency in Huaihua, whereas the decrease in rural radio coverage led to the decrease in efficiency in Xiangxi.
Discussion
An efficient status (θ = 1) does not mean that AI is developed or developing rapidly; it only tells us that, considering the outputs in those years, the inputs were not redundant.
Even though the efficiency status of AI in Xiangxi is efficient, the degree of development in this area is lower than in Huaihua, which can be seen from the data in Table 1.
If we want to describe the development status of AI in undeveloped areas, more indices should be taken into account in the statistical bulletins.
Conclusions
The CCR model and the super-efficiency CCR model are suitable for describing whether there is any redundancy when comparing inputs with outputs in AI, but they cannot determine the development level of AI.
The index system can be improved if data for more indices are collected, so that it describes the efficiency of inputs and outputs more comprehensively.
Even though the efficiency status of AI in the two areas is efficient, the development level of AI is still low, especially in the penetration rate of the Internet. | 2,128 | 2014-09-16T00:00:00.000 | [
"Computer Science"
] |
SOME RESULTS ON LIGHTWEIGHT STREAM CIPHERS FOUNTAIN V1 & LIZARD
In this paper, we propose cryptanalytic results on two lightweight stream ciphers: Fountain v1 and Lizard. The main results of this paper are the following: - We propose a zero-sum distinguisher on reduced-round Fountain v1. In this context, we study the non-randomness of the cipher with a careful selection of cube variables. Our obtained cube provides a zero sum on Fountain v1 up to 188 initialization rounds and significant non-randomness up to 189 rounds. This results in a distinguishing attack on Fountain v1 with 189 initialization rounds. - Further, we find that the same cipher has a weakness against a conditional Time-Memory-Data Tradeoff (TMDTO) attack. We show that a TMDTO attack using sampling resistance has online complexity 2^110 and offline complexity 2^146. - Finally, we revisit the Time-Memory-Data Tradeoff attack on Lizard by Maitra et al. (IEEE Transactions on Computers, 2018) and provide our observations on their work. We show that, instead of choosing any random string, some particular strings provide better results in their proposed attack technique.
Introduction
Design and analysis of lightweight cryptographic algorithms have become one of the most important directions of research in cryptology. In the last two decades, the use of low powered and resource-constrained devices has increased rapidly, which resulted in the requirement of lightweight stream ciphers. The reduction of the state size reduces the power consumption of the cipher. So, designing ciphers with small states has become a challenging task in the cryptographic community. In the last few years, several new stream ciphers with smaller state sizes have been designed. The design principle of these lightweight stream ciphers differs significantly from the design principle of the standard stream ciphers.
In 2019, NIST initiated a standardization project on lightweight cryptographic algorithms (LWC [1]). The LWC project received 57 submissions, of which 56 were selected as Round 1 candidates; among these, 32 were selected for Round 2. Fountain v1 [32], designed by Zhang, is one such cipher among the 56 Round 1 candidates. It is an authenticated cipher with a state size of 256 bits and a key size of 128 bits. The complete state (i.e., 256 bits) is divided into four LFSRs, where each LFSR is 64 bits long. A more detailed design specification of this cipher is provided in Section 2.6.
Sprout [2] was the first lightweight cipher, which involved a state size equal to the key size. In the design principle of Sprout, the key bits are repeatedly used to update its state. This repeated involvement of the key bits protects the cipher from the birthday attack. However, Sprout was immediately attacked in [21,5,23,11,33] because of several weaknesses in the design rationale. Later, two more lightweight stream ciphers were designed, namely Plantlet [27] and Fruit [13]. Plantlet [27] is based on a state of size 101-bit and a key of size 80-bit, whereas the state size and the key size of Fruit [13] is 80-bit. Another variant of Fruit with 128-bit state size was also proposed in [14]. A fault attack on Plantlet was proposed by Maitra et al. [24]. The 80-bit version of Fruit was cryptanalyzed by Dey et al. [9], Zhang et al. [34], and Hamann et al. [17].
Lizard is one such lightweight stream cipher designed by Hamann et al. [18] in 2016. The cipher is based on a 120-bit key size and 121-bit state size. The design specification of Lizard is very similar to Grain-like [19] stream cipher although it does not have any LFSR. In the case of standard stream ciphers (such as Grain, Trivium, etc.) the key can be recovered if the state of the cipher can be recovered at any round of the keystream generation phase. This is not the case for Lizard. In the case of Lizard, the key can not be recovered by inverting the cipher even if an attacker can recover the state of the cipher at any round of the keystream generation phase.
The first analysis of Fountain v1 was presented in [28]. The authors presented a slide attack using 32 relations on key bits with a time complexity of 17 × 2^80 and a claimed success rate of 98%. They also studied some internal-state transition properties of Fountain v1 that allow input data (key-IV-ad) producing identical ciphertexts with probability 2^-32.
The first analysis of Lizard was done by Banik et al. [6]. Later, a TMDTO attack was suggested by Maitra et al. [22], where the time, memory, and data complexities are all less than 2^60.
Our contribution. The main contributions of this paper are as follows.
-In this paper, we propose a zero-sum distinguishing attack on Fountain v1 with reduced initialization rounds. We show that Fountain v1 with 189 initialization rounds can be distinguished from a random source with a very high confidence level. -Further, we analyze the security of Fountain v1 under a conditional TMDTO attack using sampling resistance. Here we show that the state of the cipher can be recovered with online time, data, and memory complexity 2^110 and preprocessing complexity 2^146. -Finally, we revisit the TMDTO attack on Lizard proposed by Maitra et al. [22]. In their attack model, the adversary chooses a random binary sequence of size ψ and searches for it in the output keystream bits. However, since the probability of occurrence of any random binary sequence of length ψ is 1/2^ψ, the authors assumed that all patterns would provide the same data and time complexity. As they mention in their paper: "The probability of getting a ψ-bit keystream pattern is 1/2^ψ ... the data complexity will be D = D × 2^(ψ+τ)." However, in this paper, we show that not all patterns provide equal results: the expected numbers of keystream bits needed before different patterns occur are different. There are some patterns for which the expected number of keystream bits required is 2^(ψ+1) − 2, whereas for some other patterns the same expected number is 2^ψ. Therefore, if the patterns are not suitably chosen, the data complexity doubles for some patterns. Organization of the article:
-Section 2 is a preliminary section where we define terminologies and describe the design specification of Fountain v1. -In Section 3, we present our distinguisher on reduced round Fountain v1.
-In Section 4, we describe our observations related to TMDTO attack on Fountain v1. -The design specification of Lizard is described in Section 5.1. -Section 5.2 revisits the attack of Maitra et al. [22] on Lizard.
-In Section 5.3, we explain our observation on the result proposed by Maitra et al. [22]. -Section 6 concludes the paper.
Preliminaries
This section is a preliminary section; here we describe some definitions, terminologies, and the design specification of Fountain v1 [32].
2.1. Boolean function. An n-variable Boolean function f is a mapping from F_2^n = {0,1}^n to F_2 = {0,1}, i.e., f : F_2^n → F_2. The set of all Boolean functions in n variables is denoted by B_n. A Boolean function f in n variables can be represented in different forms, such as a truth table or the algebraic normal form. Here we define only the algebraic normal form of an n-variable Boolean function. The algebraic normal form (ANF) of f ∈ B_n is the following multivariate polynomial over F_2:
(1) f(x_0, x_1, ..., x_{n−1}) = Σ_{a ∈ F_2^n} λ_a x_0^{a_0} x_1^{a_1} ··· x_{n−1}^{a_{n−1}}.
Here a = (a_0, a_1, ..., a_{n−1}) ∈ F_2^n, and the λ_a ∈ F_2 are the coefficients of the monomials of the multivariate polynomial. The degree of an n-variable Boolean function f is the number of variables present in the highest-order monomial with λ_a ≠ 0 in the ANF of f (as in Equation (1)). For a random Boolean function f, the ANF contains a monomial of the highest possible degree with probability 1/2.
2.2. Cube variable and superpoly. From the definition of the ANF of f ∈ B_n (as described in Subsection 2.1), f can be represented as a multivariate polynomial (as in Equation (1)). For any index set I = {i_1, i_2, ..., i_k} ⊂ {0, 1, ..., n−1} we define the cube variables C_I as the set of variables C_I = {x_{i_1}, x_{i_2}, ..., x_{i_k}} and the term t_I as the monomial t_I = x_{i_1} x_{i_2} ··· x_{i_k}. With this C_I and t_I, Equation (1) can be rewritten as
(2) f(x_0, x_1, ..., x_{n−1}) = t_I · P_{s(I)}(v_0, v_1, ..., v_{n−1−k}) + Q_I(x_0, x_1, ..., x_{n−1}),
where v_i ∈ {x_0, x_1, ..., x_{n−1}} \ C_I, i = 0, 1, ..., n−1−k, and t_I does not divide any monomial of Q_I. The polynomial P_{s(I)} is known as the superpoly corresponding to the cube variables C_I. We also use f_I to denote the XOR of all outputs of f over all 2^k assignments of C_I. In 2009, Dinur and Shamir [10] introduced the concept of cube variables and the superpoly. In the same paper, they proved the following theorem.
Theorem 1 (Dinur and Shamir [10]). For any Boolean function f and an index set I, f_I ≡ P_{s(I)} (mod 2).
Hence, from Theorem 1, one can observe that the XOR of the outputs corresponding to all 2^k values of the cube variables yields the corresponding superpoly. This idea has been exploited to analyze several stream ciphers.
In the case of a stream cipher, a keystream bit can be represented as a function of key and IV. As the IV bits are considered as public parameters, an attacker has the freedom to choose IV according to his/her choice. Among the complete set of IV bits, the attacker carefully selects some of the IV bits as cube variables and (s)he is allowed to get the keystream bits corresponding to all possible 0/1 values of all his/her cube variables for the unknown secret key. Further (s)he computes the sum on these obtained keystream bits to observe the presence of non-randomness in the respective superpoly. If there is significant non-randomness present in the superpoly, then (s)he can distinguish the stream cipher (pseudorandom bit generator) from a random source. If this sum (i.e., superpoly) is highly biased towards zero (or towards one), then the distinguisher is often called a zero-sum distinguisher. Here the prime goal of an attacker is to select cube variables in such a way that the corresponding superpoly becomes nonrandom.
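As a toy illustration of this zero-sum cube test (not Fountain v1; the keystream function, the key size, and the cube below are made up for the example), the following sketch sums a small Boolean "keystream" function over all assignments of three chosen cube variables for many random keys. Since no monomial of the toy function contains all three cube variables, the superpoly is identically zero and the cube sum is always 0.

```python
# Toy zero-sum cube tester: XOR the output over all 2^|cube| cube assignments.
import itertools, random

def toy_keystream_bit(key, iv):
    """Hypothetical low-degree function standing in for a reduced-round cipher."""
    k0, k1, k2 = key
    v0, v1, v2, v3 = iv
    return (k0 & v0 & v1) ^ (k1 & v2) ^ (k2 & v3) ^ (v0 & v3) ^ k1

cube = [0, 1, 3]                     # indices of the chosen cube (IV) variables
random.seed(1)
trials, zero_sums = 256, 0
for _ in range(trials):
    key = [random.randint(0, 1) for _ in range(3)]
    acc = 0
    for bits in itertools.product([0, 1], repeat=len(cube)):
        iv = [0, 0, 0, 0]            # non-cube IV bits fixed to 0
        for idx, b in zip(cube, bits):
            iv[idx] = b
        acc ^= toy_keystream_bit(key, iv)
    zero_sums += (acc == 0)          # acc equals the superpoly of the cube
print(f"Pr[cube sum = 0] ≈ {zero_sums / trials:.2f}")
```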
2.3. Cube tester. Cube tester is an algorithm based on careful selection of cube variables which can test the non-randomness of a cipher (or a Boolean function). The concept of cube tester was first introduced by Aumasson et al. [3] in 2009. The main idea behind designing a cube tester on a cipher (or a Boolean function) is to select the cube variables in such a way that if there is any non-randomness in the cipher (or a Boolean function) the non-randomness must be reflected in the corresponding superpoly. By using this kind of testing procedure one can check several properties of a function such as the presence of any monomial in the function, presence of neutral variables in the function, upper bound of the degree of the function, and balancedness of the function, etc. In the following section, we discuss one such cube tester.
2.4. Upper bound on the degree of a reduced function. Let f : {0,1}^n → {0,1} be an n-variable Boolean function. We first fix a few variables of the function f. Without loss of generality, we assume that the first k variables x_0, x_1, ..., x_{k−1} are fixed to zero. After fixing the k variables, the reduced function is a function of n − k variables, so the maximum possible degree of this reduced function is n − k. To check whether the degree of the reduced function is n − k or not, we select x_k, x_{k+1}, ..., x_{n−1} as cube variables. Further, we compute the sum of the output over all possible 0/1 values of x_k, x_{k+1}, ..., x_{n−1}. If the cube sum (i.e., the superpoly) is zero, then the degree of the reduced function is strictly less than n − k.
2.5. TMDTO on stream ciphers. The Time-Memory-Data tradeoff (TMDTO) attack on stream ciphers was first introduced by Biryukov and Shamir to recover the internal state of a stream cipher [7]. However, the initial idea of such a tradeoff attack on a block cipher was introduced by Hellman [20]. A TMDTO attack is a chosen-plaintext attack where the adversary treats the cipher as a black box without considering its internal architecture in detail. The complete attack consists of two phases: (1) a preprocessing phase and (2) an online phase. In the preprocessing phase, the adversary prepares the precomputed tables, and later, in the online phase, those tables are used to recover the secret key. The following five parameters determine a complete TMDTO attack: (1) the size of the search space (N), (2) the required preprocessing time (P), (3) the online time complexity (T), (4) the space complexity (M), and (5) the data available in the online phase (D). Hellman [20] derived the tradeoff relation TM^2 = N^2 with P = N, where T ≥ D^2 and N represents the size of the total keyspace of a block cipher. In the case of a stream cipher, Babbage [4] and Golić [15] independently proposed a Time-Memory tradeoff attack that recovers the internal state by inverting the keystream sequences available in the online phase. This attack is also known as the BG attack.
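The BG idea can be illustrated with a toy sketch on a 20-bit "state" (our own example, not tied to any real cipher): store M random output-to-state pairs offline and, online, look up D observed outputs; by the birthday paradox roughly M·D/N of them collide with the table.

```python
# Toy Babbage-Golic tradeoff: M precomputed states vs. D online data segments.
import hashlib, random

N_BITS = 20
def output_of(state):
    """Stand-in for 'the first N_BITS keystream bits produced from this state'."""
    digest = hashlib.sha256(state.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big") & ((1 << N_BITS) - 1)

rng = random.Random(0)
M, D = 1 << 12, 1 << 9                       # memory and data; M*D = 2^21 > N = 2^20
table = {}
for _ in range(M):                           # offline phase: output -> state
    s = rng.getrandbits(N_BITS)
    table[output_of(s)] = s

secret_states = [rng.getrandbits(N_BITS) for _ in range(D)]
observed = [output_of(s) for s in secret_states]      # online data (keystream windows)
# a hit yields a state producing the same output window (usually the actual state)
hits = [(i, table[o]) for i, o in enumerate(observed) if o in table]
print(f"recovered {len(hits)} of {D} unknown states "
      f"(expected about M*D/N = {M * D / 2**N_BITS:.1f})")
```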
Further, Biryukov and Shamir [7] introduced the Time-Memory-Data tradeoff attack on stream ciphers by combining the ideas of Hellman's attack on block ciphers and BG attack on stream ciphers.
For a stream cipher, we do not need to cover the whole search space N, as D data segments are available in the online phase. By the birthday paradox, covering only N/D internal states in the preprocessing tables is enough to invert at least one of the D segments available in the online phase. To ensure that this inversion is successful, the dimension of each preprocessing table must be k × t such that kt^2 = N, which is also called the matrix stopping rule. Thus, the total number of preprocessing tables that must be constructed is t/D. The preprocessing time is P = N/D and the memory requirement is M = kt/D. One should note that the adversary needs to prepare a single table when t = D; otherwise, (s)he is required to construct multiple tables. For constructing multiple tables, the adversary needs to modify the random function φ(·) as suggested in [7]. Now, the total time required to perform the actual attack is the time required to invert any one of the D data segments. In the worst-case scenario, for each table and each data segment, the adversary needs to undertake t attempts at inverting φ, so the total online time complexity becomes T = D · (t/D) · t = t^2. In this way, by considering kt^2 = N, the tradeoff curve between T, M, and D becomes
(3) TM^2D^2 = N^2, where T ≥ D^2 and P = N/D.
2.5.1. BSW sampling resistance and TMDTO attack. Biryukov, Shamir, and Wagner introduced the idea of sampling resistance [8] to derive better tradeoff parameters for the TMDTO attack. This approach may have reasonable complexity even for stream ciphers with a large internal state. They showed that if a stream cipher has sampling resistance R = 2^-l, then one can efficiently enumerate a set of special internal states that generate keystream bits with a particular prefix of length l. More simply, if an adversary has access to a predetermined pattern of keystream bits of length l, then the adversary can recover l specific internal-state bits of the cipher by using that predetermined keystream prefix of length l and guessing the remaining bits of the internal state. Thus, due to sampling resistance, the size of the search space N is reduced to N_R = N·2^-l, and, given D bits of the actual keystream, the expected number of special states encountered is D_R = D·2^-l. We attempt to invert only those data segments that are generated from the special internal states, so only the special states are covered in the preprocessing tables. For this, the random function φ used to prepare the preprocessing tables must be changed so that it maps a special state to keystream bits having the particular length-l pattern as a prefix. Here the attacker assumes that during the online phase there exists at least one data segment with the required prefix, i.e., D_R ≥ 1, and only those data segments are attempted to be inverted.
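A small worked example of these parameters, with illustrative numbers only and assuming the standard Biryukov-Shamir relations as reconstructed above:

```python
# Toy numeric check of the tradeoff bookkeeping: N = 2^40 search space, D = 2^10 data.
N, D = 2**40, 2**10
t = 2**15                       # chain length (chosen so that t >= D)
k = N // t**2                   # matrix stopping rule: k * t^2 = N  ->  k = 2^10
tables = t // D                 # t/D = 2^5 tables
P = N // D                      # preprocessing time 2^30
M = k * t // D                  # memory 2^15
T = t * t                       # online time t^2 = 2^30 (worst case over all D segments)
assert T * M**2 * D**2 == N**2  # the tradeoff curve TM^2 D^2 = N^2
print(f"P=2^{P.bit_length()-1}, M=2^{M.bit_length()-1}, "
      f"T=2^{T.bit_length()-1}, tables=2^{tables.bit_length()-1}")
```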
By substituting D_R and N_R in place of D and N in the tradeoff equation (3), the following tradeoff relation is derived: TM^2(D_R)^2 = (N_R)^2, where T ≥ (D_R)^2.
Remarkably, with the TMDTO attack based on sampling resistance, the lower bound on the online time complexity is reduced from T ≥ D^2 to T ≥ (D_R)^2 compared with the traditional TMDTO attack. The computation of the sampling resistance of a stream cipher is a cipher-specific problem. The tap positions of the filter function, the feedback functions, and the size of the internal state have a significant influence on the calculation of the sampling resistance. A conditional sampling resistance together with a TMDTO attack has been proposed for several stream ciphers such as Grain-v1, Grain-128, and Lizard in [25,26,22].
2.6. Design specification of Fountain v1. Fountain v1 [32] is a lightweight authenticated stream cipher, designed by Zhang. This cipher is one of the candidates of Round 1 of the NIST competition [1]. Fountain v1 is based on three main components: one key of 128-bit length, one IV of 96-bit length and a variablelength associated piece of data. This 128-bit secret key and 96-bit IV are used to initialize the state of the cipher. After initializing the state by the secret key and IV, the cipher loads the associated data into its state. After that, the cipher generates keystream bits to encrypt plaintext bits.
The design of Fountain v1 is based on four LFSRs each of length 64-bit, one lightweight 4-bit to 4-bit S-box, one MDS matrix, one nonlinear filter function and one output function. The S-box used in the design of the Fountain v1 will be different for different phases. Here Figure 1 provides a pictorial description of Fountain v1. As our distinguisher works on the first 189 rounds, we only consider the key-IV initialization phase of the cipher. Regarding other phases of the cipher, one can go through the full design specification of Fountain v1 in [32]. In the following subsections, we describe the design specification and key-IV initialization phase of the Fountain v1.
State update function of key-IV initialization of Fountain v1.
Fountain v1 [32] is based on four LFSRs, each LFSR is of length 64-bit. The connection polynomial related to each LFSR is provided below.
LFSR 1: 1 + x^12 + x^25 + x^31 + x^64; LFSR 2: 1 + x^9 + x^19 + x^31 + x^64. The complete 256-bit state is divided into these four LFSRs. In each round, the state bits of the LFSRs are shifted following the usual procedure, and the feedback bit of each LFSR is computed using its linear feedback function and a nonlinear function. In each round, one 4-bit to 4-bit S-box operates on 4 bits of the current state of the cipher. The output of the S-box is then multiplied by an MDS matrix, which generates 4 bits (y_0, y_1, y_2, y_3). These bits y_i are XOR'ed with the linear feedback bit of LFSR i + 1, where i = 0, 1, 2, 3. The S-box used in the key-IV initialization phase is described in Table 1.
Table 1. S-box for the key-IV initialization phase
x    : 0 1 2 3 4 5 6 7 8 9 A B C D E F
S(x) : 1 A 4 C 6 F 3 9 2 D B 7 5 0 8 E
The MDS matrix by which the state is multiplied after the application of the S-box is defined over GF(2^2). For more detail regarding this matrix multiplication and the expression of the MDS matrix, one can go through the original article [32].
To describe the state update process during the initialization phase, we use some notation. Let the selected state bits be taken as the input to the S-box, and let x_1, x_2, x_3, x_4 be the output of the S-box; then the input to the MDS matrix multiplication is the two elements x_4||x_3 and x_1||x_2. These two elements are considered as elements of the finite field GF(2^2); more detail regarding this finite field can be found in [32]. Let y_1, y_2, y_3, y_4 be the output bits after the multiplication by the MDS matrix.
The complete process, i.e., first applying the S-box and then multiplying by the MDS matrix, can be represented as an integrated S-box. The description of this integrated S-box is provided in Table 2.
Table 2. Integrated S-box for the key-IV initialization phase
The final output bits y_1, y_2, y_3, y_4 are further XOR'ed with the linear feedback bits of the four LFSRs. Hence, the feedback functions of the LFSRs are as follows.
10+i + s^(4)_{31+i} + y_4 for 0 ≤ i ≤ 63. The output bit of Fountain v1 is computed using a nonlinear filter function h and a linear function involving seven state bits; the algebraic normal form of the nonlinear filter function h(x) is defined in [32].
Key-IV initialization phase.
In this phase, the state of the cipher is initialized by the 128-bit secret key and the 96-bit IV. The secret key, the IV, and 32 padding bits are loaded into the state of the cipher in the following way. We denote the key bits by k_i, i = 0, 1, ..., 127, and the IV bits by v_i, i = 0, 1, ..., 95.
Here 0 ≤ i ≤ 7, and the constants C_i (1 ≤ i ≤ 4) are defined as C_1 = 0xff, C_2 = 0x3f, C_3 = 0x00, C_4 = 0x80. After loading the key and IV into the state of the cipher, it runs for 384 rounds without generating any keystream bit as output. Instead, these bits z_i, for i = 0, ..., 383, are XOR'ed with the feedback bits of the four LFSRs. Hence, the feedback functions of the four LFSRs for these 384 rounds are as follows.
After the initialization phase the cipher enters into the associated data processing phase. The detailed description of associated data processing phase of Fountain v1 can be found in [32]. As our distinguisher works on reduced round Fountain v1 (specifically ≤ 384), we assume that the cipher starts generating keystream bits just after r initialization rounds (r ≤ 384). Several cryptanalytic results on reduced round ciphers [12,30] are also based on similar assumptions.
Our distinguisher on Fountain v1
In this section, we describe our zero-sum distinguisher on Fountain v1 with 189 initialization rounds. Our distinguisher is based on the careful selection of the cube variables (which are IV variables). We perform a cube sum on the output bits corresponding to all possible values of the selected cube variables for randomly many secret keys. This process checks the non-randomness in the corresponding superpoly. So our experimental work is based on two phases: (1) selection of the cube variables, (2) check of the randomness of the corresponding superpoly.
3.1. Experiments on Fountain v1. To select the cube variables, we followed a technique very similar to that in [29]. Our goal is to select the cube variables (i.e., IV variables) in such a way that the cube sum of the output bits over all possible values of the cube variables is zero for the maximal number of initialization rounds. We start with an empty cube set, say C_I (i.e., C_I = ∅), and then add one good cube bit to the set. A cube bit is called a good cube bit when, among all the IV bits, it produces a zero sum for the maximum number of initialization rounds. Algorithm 1 describes the cube variable selection process; this selection approach is also known as the greedy approach. If we run Algorithm 1 with the empty set C_I = ∅, the first cube bit added to the set is v_95, because for this cube bit we achieve a zero cube sum for the highest number of rounds among all IV bits. The main reason is that v_95 appears in the keystream generation function much later than the other IV bits (see Equation (8)). If Algorithm 1 produces a zero sum for the same number of rounds for all the IV bits, then we modify the update of C_I: in this scenario we select the two best cube bits at a time, i.e., C_I = C_I ∪ {temp_1, temp_2}. During this cube selection process, several good cube bits (one or two at a time) may produce a zero sum for the same number of rounds; in that case, we select the cube bit(s) randomly. We continue adding good cube bits using Algorithm 1. For cubes of size 26 we obtain two cubes, C_{I_1} and C_{I_2}, whose indices are listed in Table 3. From Table 3 it can be observed that the cube sum is zero for up to 188 initialization rounds and there is a significant bias in the superpoly at 189 rounds (i.e., Pr[superpoly = 0] = 0.93). These biases are computed using 256 random keys. One may note that this number of random keys is sufficient for our distinguishing attack (see Remark 1).
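The greedy loop can be sketched schematically as follows. This is an illustrative re-implementation, not Algorithm 1 itself; zero_sum_rounds is a hypothetical stand-in scoring oracle, which for the real cipher would run the cube test described above and return the last round with a zero cube sum.

```python
# Schematic greedy cube-variable selection: repeatedly add the IV bit that keeps
# the cube sum zero for the largest number of initialization rounds.
import random

IV_BITS = list(range(96))

def zero_sum_rounds(cube):
    """Hypothetical oracle; here a deterministic toy score for illustration only."""
    random.seed(hash(tuple(sorted(cube))))
    return 100 + 2 * len(cube) + random.randint(0, 5)

cube, target_size = [], 8
while len(cube) < target_size:
    candidates = [b for b in IV_BITS if b not in cube]
    scores = {b: zero_sum_rounds(cube + [b]) for b in candidates}
    best = max(scores, key=scores.get)       # ties would be broken randomly
    cube.append(best)
    print(f"added v_{best}; zero-sum up to round {scores[best]}")
```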
Remark 1. Let A and B be two distributions, and let an event E happen in A with probability p and in B with probability p(1 + q). Then, to distinguish these two distributions, one requires O(1/(pq^2)) random samples (see [29] for more detail). To achieve a significant confidence level for our distinguisher, we performed our experiment with 256 random keys. For our distinguisher on 189 initialization rounds, we have p = 0.5 and p(1 + q) = 0.93, hence 32/(pq^2) ≈ 87. Since we repeated our experiment for 256 random keys, the confidence level of our distinguisher is more than 99.7%.
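The arithmetic behind Remark 1 can be checked directly (illustrative snippet):

```python
# Sample-size estimate for distinguishing the two distributions in Remark 1.
p, p1q = 0.5, 0.93            # Pr[event] under the two distributions
q = p1q / p - 1               # = 0.86
samples_needed = 32 / (p * q * q)
print(round(samples_needed))  # ~87, well below the 256 keys actually used
```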
Description of our distinguisher.
Here we provide a compact description of our distinguisher on Fountain v1 with 189 initialization rounds.
1. Consider all possible values of the IV bits whose indices belong to the chosen cube set I.
2. Obtain the keystream bit corresponding to each such IV, for a random key (which is unknown); this gives 2^31 keystream bits for each random key.
3. XOR all 2^31 keystream bits for that random key and observe whether the value obtained is 0 or not.
4. Repeat Step 2 and Step 3 for sufficiently many random keys.
5. If the value obtained in Step 3 is zero for a high proportion of the random keys (≈ 0.93), then the source of the keystream bits is Fountain v1 with 189 initialization rounds.
6. Otherwise, the source is not Fountain v1 with 189 initialization rounds.
Sampling resistance of Fountain v1
In this section, we present an analysis of Fountain v1 using conditional BSW sampling. We first describe how we can recover some state bits of Fountain v1 after the complete initialization phase by fixing certain state bits. Then, using the process described in [22], we find the best parameters for a TMDTO attack on Fountain v1. It must be noted that the designer of Fountain v1 [32] stated the following in the design specification of the cipher: "the 256-bit size internal state also eliminates the threat of the known form of the time / memory / data tradeoff attacks with respect to 112-bit security, when taking into account the pre-computation / memory / time / data complexities." Fountain v1 has a state size of 256 bits, which is more than twice the 112-bit security claimed by the author. The output bit of Fountain v1 is computed using a nonlinear filter function h and a linear function involving seven state bits (see Equation (8)). The nonlinear filter function of Fountain v1 is described in Equation (7). We wish to exploit the distance between two tap positions: the bits s^(1)_{11+i}, i = 0, ..., 17, can be recovered from the keystream equation, which involves the terms s_{16+i}, s^(4)_{7+i}, s^(4)_{29+i}, and h, for i = 0, ..., 17. One can observe that we cannot proceed further, as s^(1)_{29} needs to be guessed to recover s^(1)_{11}. The remaining 18 state bits, s^(1)_{11+i}, i = 0, ..., 17, are recovered from 18 keystream bits.
We can see that if we fix s^(2)_{24+i} to 0, we do not need to guess s^(1)_{29+i}; hence we can recover more state bits. However, it should be noted that after 34 bits have been recovered, the next bit recovery involves the feedback bit of LFSR 4, so we need to guess some more state bits that are involved in that feedback bit. We can use Equation (10) to recover 36 state bits by fixing 36 state bits and guessing 158 state bits. All the equations required to recover the state bits are given in Appendix A.
4.1. Trade-off parameters for Fountain v1. In Section 4 of [22] it is mentioned that the best parameters for a TMDTO attack under conditional BSW sampling are obtained when 5ψ + 2τ = n, where ψ is the number of bits recovered, τ is the number of bits fixed, and n is the state size. For Fountain v1 the state size is n = 256. The online complexity of the attack is T = M = D = 2^((n−ψ)/2), and the preprocessing table is of size P = N/D = 2^((n+ψ)/2). For Fountain v1, we find the values of ψ and τ that achieve the best online complexity: ψ = 36 and τ = 38. Note that we can recover 36 state bits by fixing 36 state bits; however, we need to fix two more state bits to satisfy 5·36 + 2·38 = 256 = n. Hence, the best possible parameters for a TMDTO attack on Fountain v1 are those described in Table 4. From the above discussion, it can be seen that the online complexity of our attack is 2^110 and the pre-processing complexity is 2^146. The online complexity is less than the complexity of exhaustive key search, but the offline complexity is much higher than that of exhaustive key search.
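The parameter arithmetic can be verified directly (illustrative snippet; n, ψ, and τ are the values chosen above):

```python
# Check of the chosen TMDTO parameters for Fountain v1.
n, psi, tau = 256, 36, 38
assert 5 * psi + 2 * tau == n                  # condition from [22] for the best parameters
T = M = D = 2 ** ((n - psi) // 2)              # online time / memory / data
P = 2 ** ((n + psi) // 2)                      # preprocessing
print(T.bit_length() - 1, P.bit_length() - 1)  # -> 110 146
```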
In this analysis, we show that sampling is possible on the cipher and that, if the assumption on the precomputation cost is relaxed, the online time of the TMDTO attack reduces to 2^110. We do not view this as a break of Fountain v1, but we point out that the cipher has a weakness against a conditional TMDTO attack.
A note on a TMDTO attack on Lizard
In this section we revisit the TMDTO attack on Lizard proposed by Maitra et al. [22]. Further, we describe our observation on the same attack. For a better understanding of the TMDTO attack and our observation we first look into the design specification of Lizard [18], which is described in Section 5.1.
Short description of Lizard.
Structure: Unlike several popular stream ciphers, Lizard does not have any LFSR. It has two NFSRs, namely NFSR1 and NFSR2, and the total state size is 121 bits. NFSR1 is 31 bits long, and its state is denoted by (S^t_0, ..., S^t_30). NFSR2 is 90 bits long, and its state is denoted by (B^t_0, ..., B^t_89). Feedback functions of the NFSRs: the feedback function of NFSR1 is described in Equation (11).
The feedback function of the second register NFSR2 is described in Equation (12).
Output function: The output bit z_t is computed by the nonlinear output function given in Equation (13). The initialization process is done in four phases; below we describe each of these phases one by one.
1. Key-IV loading: The key and IV size of Lizard are 120-bit and 64-bit respectively. The key and IV are denoted by K = (K 0 , . . . , K 119 ) and IV = (IV 0 , . . . , IV 63 ) respectively. The initial state of NFSR1 is as follows.
The 30 bits of NFSR2 are initialized as follows.
for 0 ≤ j ≤ 28 K 119 + 1, for j = 29 1, for j = 30 2. Grain like mixing: In this phase the output bit z t is computed, but not produced as output. Rather it is fed back into both the NFSRs for 0 ≤ t ≤ 127 rounds i.e., z t is XOR'ed with the feedback bits of both the NFSRs. The complete process of this mixing phase for 0 ≤ t ≤ 127 rounds is described below.
Here z t is computed by using Equation (13). 3. Second time key addition: In this phase, the key is XOR'ed bitwise with the feedback bits of both the NFSRs. The process of key addition is described below.
for j = 30 4. Final diffusion: In this final phase, both the NFSRs are clocked for 128 rounds without computing any keystream bit. The complete procedure which needs to be followed for t = 129, . . . , 256 rounds is described below.
The final states of the registers NFSR1, NFSR2 after finishing all these phases will be (S 257 0 , . . ., S 257 30 ) and (B 257 0 , . . ., B 257 89 ) respectively. After the final diffusion phase, Lizard becomes ready to produce keystream bits as output and enters into the keystream generation phase. During the keystream generation phase, the cipher updates its state by following the usual shifting and feedback mechanism and the keystream bit will be computed by using Equation (13). For more detailed design specifications of Lizard, one may look into the original article [18].
5.2.
The attack idea proposed in [22]. Here, in short, we explain the attack idea proposed in [22]. For a more detailed explanation, one can go through that paper.
Consider a cipher with state size ν, i.e., the total search space is N = 2^ν. We then try to deduce some bits of the secret state by fixing a certain keystream-bit pattern. So, by
- fixing τ bits of the state to a specific pattern,
- assuming a specific pattern of ψ keystream bits, and
- assigning values to the rest of the ν − τ − ψ state bits,
we try to deduce ψ bits of the state. The total search space now reduces to N = 2^(ν−τ−ψ).
Preprocessing stage: In the preprocessing step, we generate the table to be used in the online phase. First we take a random string of length ν − τ − ψ; let us call it ω. Then we fix τ bits of the state to the specific pattern decided previously; this gives ν − τ − ψ + τ = ν − ψ bits of the state. Now we obtain the remaining ψ bits of the state using the ψ keystream bits, which follow the specific pattern. So now we have the complete state. If this state is clocked, the first ψ bits of the keystream form the pattern we fixed earlier. We run the cipher for ν − τ − ψ more steps, and this pseudo-random string is considered the next element of the table, referred to as f(ω). We repeat this t times to obtain a row. This whole process is repeated for m randomly chosen (ν − τ − ψ)-bit strings to obtain m such rows. Thus, the table contains mt elements, of which only the first (SP) and last (EP) element of each row are stored. According to the birthday paradox, with proper parameters, this table will have negligible collisions. At the same time, the online data (here, keystream) should be of such an amount that the attack becomes successful, i.e., we find the intended pattern in the table.
Processing stage: In the output keystream, the attacker searches for the specific ψ-bit pattern. Upon a hit, the next ν − τ − ψ bits (call this string ξ) are taken as the string to be searched for among the end points (EPs) of the offline table. If a match is found, it means the required (ν − τ − ψ)-bit state is stored in the (t − 1)-th position of that row for some t. To obtain this state, the adversary must apply the cipher (t − 1) times to the SP of the row where the match was found. If a match is not found, then f is applied to ξ to obtain f(ξ), and this string is searched for in the table. This process is repeated until a hit is obtained; in the worst case, the adversary has to apply f to ξ t times. If no match is found even after applying f for t times, this attempt is rejected and another block of the specified ψ bits is searched for in the output keystream. 5.3. Our observation. The authors in [22] suggested fixing any random string of size ψ and creating the table in the preprocessing stage based on that fixed string being the output keystream. Also, in the processing stage, we look for the same fixed pattern in the output keystream z. They did not mention any particular pattern for those ψ bits that might yield a better result than other possible strings.
Since the probability of the occurrence of any pattern is 1/2^ψ, it apparently seems that all patterns have an equal effect on the final complexity; the authors also used the expression 2^ψ in their complexity calculation. However, we observe that some particular patterns provide better complexity in this attack than other strings. Let us explain this with a simple example of tossing an unbiased coin.
Suppose we toss a coin repeatedly and look for a particular chosen pattern in two consecutive tosses. Two consecutive tosses have four possible outcomes: HH, TT, HT, TH. Since the coin is unbiased, the probability of each of these four outcomes in two consecutive tosses is 1/4. But if we look at the expected number of tosses required until our desired pattern first appears, we see that the four patterns do not all give the same result. A simple calculation shows that the expected number of tosses required to see HT or TH is lower than that for HH or TT. This kind of problem can be treated using the idea of an absorbing Markov chain [16]. Here we give a short description of the procedure; a detailed explanation can be found in [16].
We consider all the possible intermediate states that occur on the way to our desired pattern. For example, if we want to achieve the pattern HTHT, the intermediate states are H, HT, and HTH. The initial state is denoted by φ and the final state is HTHT. To reach the final state from the initial state, each of the intermediate states must be passed through.
φ → H → HT → HTH → HTHT. Now, from any state, the next state can be attained with probability 1/2; otherwise, the chain falls back to one of the earlier states or stays in the same state. For example, from the state HTH, the next state HTHT is attained if the next toss gives a tail (probability 1/2); otherwise, the chain goes back to the state H. Now, we construct a 5 × 5 matrix M where the r-th row and column correspond to the r-th state from the end. The entry m_{i,j} is the probability of reaching the state corresponding to column j from the state corresponding to row i. We then ignore the first row and column and call the remaining matrix Q, and we compute (I − Q)^{-1}. The sum of the entries of the last row of (I − Q)^{-1} gives the expected number of tosses. For a detailed explanation, one can go through Chapter 11 of [16].
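The computation can be sketched in a few lines. The snippet below (our own illustration) builds the transient-state transition matrix Q for a given pattern over unbiased bits and returns the expected waiting time as the row sum of (I − Q)^{-1} for the starting state. For example, it gives 6 for "11" (HH), 4 for "10" (HT), 30 for "0000" (= 2^5 − 2), and 16 for "1000" (= 2^4), consistent with the lemmas below.

```python
# Expected number of unbiased bits until a pattern first appears (absorbing Markov chain).
import numpy as np

def expected_wait(pattern):
    k = len(pattern)
    prefixes = [pattern[:i] for i in range(k)]      # transient states: proper prefixes
    def next_state(prefix, bit):
        s = prefix + bit
        if s == pattern:
            return k                                # absorbing state
        while s and not pattern.startswith(s):      # longest suffix of s that is a prefix
            s = s[1:]
        return len(s)
    Q = np.zeros((k, k))
    for i, pre in enumerate(prefixes):
        for bit in "01":
            j = next_state(pre, bit)
            if j < k:
                Q[i, j] += 0.5
    fundamental = np.linalg.inv(np.eye(k) - Q)
    return fundamental[0].sum()                     # expected steps from the empty prefix

for pat in ["10", "11", "1000", "0000"]:
    print(pat, expected_wait(pat))
```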
When we look for a fixed pattern in the output keystream, not all patterns of size ψ have the same expected number of keystream bits required before the pattern appears. Usually, patterns consisting of long runs of 0's or 1's have a higher expected value for the number of keystream bits required, compared to strings where the numbers of 0's and 1's are balanced.
Experimentally, we have observed that the maximum expectation for a ψ-bit pattern is obtained for the patterns of all zeros or all ones. This expectation is 2^(ψ+1) − 2; we prove this in Lemma 2.
The minimum expectation is obtained for the pattern 1 followed by all zeros, or 0 followed by all ones. This value is 2^ψ; we provide the theoretical justification in Lemma 1. These kinds of problems can be treated using the idea of an absorbing Markov chain [16].
Lemma 1. For the patterns of the form 100···0 or 011···1, the expected number of keystream bits required is 2^ψ. Proof. Since the two patterns are symmetric, we prove the assertion only for 100···0; the other case is proved similarly.
So, the states in this case are 100···0, ..., 100, 10, 1, φ. The transition matrix corresponding to this string is M = (m_{i,j}). Ignoring the first row and first column, we obtain the matrix Q of size ψ × ψ.
Subtracting Q from the identity matrix I_{ψ×ψ}, we obtain X = I − Q. Now, we compute the sum of the last row of X^{-1}. Suppose X^{-1} = Y and focus on the last row of YX = I; let the last row of Y be (y_1, y_2, ..., y_ψ). The r-th element of the last row of YX is Σ_i y_i x_{i,r}. Since the last element of the last row of YX = I is 1, we have y_ψ = 2.
Since all the elements of the last row of Y X = I are zeros, except the last element, we have the following equations.
Lemma 2. For the patterns consisting of all 0's or all 1's, the expected number of keystream bits required is 2^(ψ+1) − 2.
Proof. We prove this only for the pattern 00···0. In this case, the state matrix is M = (m_{i,j}). Removing the first row and column and then subtracting from I, we have X = I − Q, defined entrywise for 1 ≤ i ≤ ψ − 1 and with x_{i,j} = 0 otherwise.
From [31], it is clear that this expectation cannot be less than 2^ψ. So, 2^ψ is the minimum possible expectation, which is achieved by the patterns 100···0 or 011···1. Hence, selecting one of these patterned strings provides an improved result.
Conclusion
In this paper, we have studied two lightweight stream ciphers: Fountain v1 and Lizard. We have proposed a zero-sum distinguishing attack on Fountain v1 with 189 initialization rounds. Further, we have observed a weakness of Fountain v1 under a TMDTO attack: the TMDTO attack on Fountain v1 has online complexity 2^110 and offline complexity 2^146. Although the online complexity of our TMDTO attack is less than the security level claimed by the designer (2^112), its offline complexity is quite high; hence our observation does not create a major security threat to Fountain v1. However, we believe that Fountain v1 requires some modifications to provide a full level of security. Finally, we have revisited the TMDTO attack on Lizard by Maitra et al. (IEEE Transactions on Computers, 2018) and shown that particular strings provide better results than the selection of an arbitrary random string.
Acknowledgment
We would like to thank the reviewers for their valuable suggestions and comments, which considerably improved the quality of our paper. | 10,946.6 | 2023-01-01T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Thermal and Radiation Stability in Nanocrystalline Cu
Nanocrystalline metals have presented intriguing possibilities for use in radiation environments due to their high grain boundary volume, serving as enhanced irradiation-induced defect sinks. Their promise has been lessened due to the propensity for nanocrystalline metals to suffer deleterious grain growth from combinations of irradiation and/or elevated homologous temperature. While approaches for stabilizing such materials against grain growth are the subject of current research, there is still a lack of central knowledge on the irradiation–grain boundary interactions in pure metals despite many studies on the same. Due to the breadth of available reports, we have critically reviewed studies on irradiation and thermal stability in pure, nanocrystalline copper (Cu) as a model FCC material, and on a few dilute Cu-based alloys. Our study has shown that, viewed collectively, there are large differences in interpretation of irradiation–grain boundary interactions, primarily due to a wide range of irradiation environments and variability in materials processing. We discuss the sources of these differences and analyses herein. Then, with the goal of gaining a more overarching mechanistic understanding of grain size stability in pure materials under irradiation, we provide several key recommendations for making meaningful evaluations across materials with different processing and under variable irradiation conditions.
Introduction
During irradiation, atomic displacements within a metallic lattice result in a variety of microstructural changes, such as dislocation loop and network formation, stacking fault tetrahedra (in FCC metals), precipitation, partitioning, and void formation [1][2][3]. These changes affect the material properties and eventually lead to material failure [3,4]. For example, exposing metals to irradiation results in hardening and embrittlement due to the production of defects which impede the motion of dislocations [3,5,6] and degrade the thermal conductivity [7].
The resistance of a material to radiation damage is determined by its ability to accommodate radiation-induced point defects (vacancies and interstitials) [8]. Radiation damage tolerance can be enhanced by controlling the point defect mobility. One approach for controlling defect mobility is via chemical stabilization, for example, through alloying additions. Mao et al. demonstrated that adding W to Cu increases the migration energy for vacancy and the threshold displacement energy, leading to lower point defect diffusivity [9]. Another approach of limiting defect mobility is through the introduction of point, planar, or volumetric defect sinks, such as grain boundaries, phase boundaries, twin boundaries, nanopores, nanoparticles, and nanoclusters, as trapping sites [1,2,[10][11][12].
The sink efficiency of these microstructural features has been used to describe the ability of an interface to reduce radiation damage by absorbing nearby defects [1,[13][14][15].
It is defined as the ratio of defects absorbed by a boundary to defects absorbed by a perfect sink [14,16]. Sink strength describes the effect of defect sinks spread throughout the material [15]. Evaluating and comparing sink efficiencies and strengths is key to designing radiation tolerant materials [17]; however, they are experimentally challenging to measure. Sink efficiency has been quantified by measuring the defect denuded zones [18], but denuded zones are not always observed [14]. Their presence is a consequence of defect trapping at the interfaces, and in the case of grain boundaries it may vary depending on the grain boundary character [19][20][21][22][23], the strength of other sinks [13,24], grain size [9], defect recombination rate [13,14], and irradiation conditions [9,14,22]. Nanostructured materials have spurred interest due to the increased grain boundary volume with decreasing grain size [25], and therefore have been identified as promising candidates for radiation-tolerant materials due to their high sink density [9,26]. This benefit is offset by the propensity for nanocrystalline materials to suffer from detrimental grain growth at low homologous temperatures [27][28][29]. Nanocrystalline metals without kinetic stabilization tend to undergo grain growth to minimize the high grain boundary energy present. As such, they are not thermodynamically stable and can lose their ability to tolerate damage during prolonged irradiation. Grain growth during irradiation has been observed in various materials even at low temperatures where no thermally induced grain growth would be expected [30][31][32][33][34][35][36][37]. Further, at higher homologous temperatures this irradiation-induced grain growth couples with thermally driven grain growth.
In response, efforts have been made to design materials with a combination of various defect sinks to enhance both thermodynamic and radiation stabilities. For example, nanotwinned Cu with nanovoids has been studied under ion irradiations and has shown good damage tolerance and better thermal stability than nanocrystalline Cu [38,39]. However, more widespread progress has been limited by the lack of a knowledge of the fundamental interaction mechanisms between radiation-induced defects and grain boundary sinks, which is needed to quantify the performance of unalloyed nanocrystalline materials as a function of irradiation conditions. This lack of collective understanding can be in part explained by the large variations of study parameters complicating a comparison of results combined with the lack of in-situ/operando experimental capabilities.
This review paper seeks to unravel the extant literature on the defect stability of nanocrystalline, primarily FCC, materials through an examination of unalloyed Cu as a model system, and further explorations of a handful of dilute Cu alloys. This system is chosen based on the breadth of literature across grain size regimes, processing conditions, irradiation conditions, and temperatures. Further, these conditions have led to complexity in analyses and often incongruent reported findings. Additionally, pure metals are desirable model systems to study radiation damage as there is no influence of secondary phases [2]. The behaviors of Cu under irradiation conditions are certainly not representative of all materials classes, and therefore this review focuses primarily on how the variation of theoretical and experimental analyses can lead to a lack of cohesive knowledge on Cu, which would have a similar effect in other materials classes. With a more holistic picture of the span of the literature, we propose some guidelines for the generation of consistent, valid information and conclusions in this space.
The structure of this paper will sequentially review the basis of the literature for pure Cu. In Section 2, how grain size affects the radiation damage tolerance will be summarized. Section 3 focuses on the grain growth regimes in irradiation and/or thermal environments. In Section 4, the implications of synthesis and processing on grain boundary character and thus the response to irradiation are discussed. The effect of irradiation on material properties as a function of grain size is presented in Section 5. In Section 6, the impact of the radiation environment and why it is critical to consider it while comparing and analyzing data is addressed. Finally, Section 7 will discuss the impact of the collective findings from the literature on factors that cloud conclusive observations and findings.
Grain Size Impact on Stability
With grain boundaries serving as effective defect sinks, one can wonder if a smaller grain size always results in higher damage tolerance due to higher sink density or if there is a limit. Numerous studies have shown enhanced radiation damage tolerance in nanocrystalline materials [4,22,23,40]. Lower defect densities have been measured in nanocrystalline (NC) Cu compared to coarse-grained (CG) Cu after He ion bombardment at high temperatures [9,22]. Similar results are reported for NC-Ni after in situ ion irradiation [40]. Room-temperature Kr ion irradiation of NC-Pd demonstrates a decrease in defect density as the grain size decreases from 80 to 10 nm [23]. Improved radiation tolerance in nanocrystalline materials has also been measured in terms of defect size (i.e., vacancy loops and stacking fault tetrahedra). An increase in cavity size with grain size was observed in NC-Cu under ion irradiation [22]. Similarly, Barr et al. reported an increase in the maximum size of dislocation loops with increasing grain size in the range 20-100 nm in NC-Pt. They ascribed this increase to the ability of dislocation loops to grow and coalesce inside larger grains [18].
Using molecular dynamics (MD) simulations, Bai et al. explained this enhanced radiation tolerance by a "loading-unloading" mechanism. In this proposed mechanism, interstitials migrate to the grain boundaries and are absorbed in them. The interstitial-loaded grain boundaries then emit interstitials to annihilate bulk vacancies. This recombination mechanism has a lower energy barrier than vacancy diffusion, allowing for the removal of less-mobile vacancies [41]. Other mechanisms have been identified by Chen et al. through MD simulations in α-Fe: bulk-chain absorption and grain boundary chain absorption models [42]. Radiation resistance mechanisms in nanocrystalline materials are reviewed in more detail elsewhere [43]. These different models attempt to mechanistically rationalize why nanocrystalline materials, with their high grain boundary density, have shown better irradiation tolerance.
On the other hand, some studies have shown that a smaller grain size has no effect on the radiation tolerance or that it can be detrimental under certain conditions. For example, Barr et al. reported an independence of dislocation loop density with grain size in NC-Pt thin films during in situ heavy ion irradiation at 300 °C, showing no improved radiation tolerance with regard to defect density with reduced grain size in the studied NC regime [18]. Chimi et al. observed a larger defect accumulation rate at −258 °C (15 K) under ion irradiation in NC-Au than in CG-Au but a lower rate at room temperature [44]. The authors attribute this behavior to the lower threshold energy for defect production near the grain boundaries at low temperature [44,45]. Moreover, detrimental radiation-induced amorphization at the grain boundaries has been reported in NC-Si during ion irradiation at high temperatures [46]. When grain boundaries absorb interstitials, they leave an excess vacancy concentration in their vicinity, resulting in amorphization. Indeed, nucleation of amorphous Si occurs when the vacancy concentration reaches a critical value.
Experimental studies have shown that smaller grains do not always result in better radiation tolerance, depending on the irradiation conditions [18,[44][45][46][47]. The work reported by Shen uses an energetic approach to explain the difference in radiation tolerance [48]. In their assessment, there are two opposing effects on the energy of an irradiated material: a smaller grain size (1) results in higher grain boundary energy and (2) decreases the free energy resulting from defects, as defect accumulation in the grain interior is suppressed. In this analysis, the grain size needs to be carefully optimized to balance the two effects [48]. While in theory nanocrystalline materials have high radiation tolerance, the conditions under which nanograins are beneficial to the radiation tolerance are not well understood. Models developed to understand the improved radiation damage tolerance consider only simple grain boundary structures under specific irradiation conditions [41][42][43].
While nanocrystalline materials appear very promising in terms of radiation damage tolerance under certain energetic conditions, they suffer from a lack of microstructural stability and are highly susceptible to grain growth even at low homologous temperatures [29,49]. Irradiation-induced grain growth has been observed in many materials at temperatures as low as −223 °C [9,12,[31][32][33][34][35][36], a temperature at which thermally driven grain growth would not be expected. Table 1 summarizes published results for irradiation-induced grain growth in pure Cu. These data are compared to data for pure thin-film NC-Cu without irradiation, where purely thermally driven grain growth is observed [27,28]. For this review, Cu was chosen based on the breadth of literature available compared to other FCC materials. While the recommendations offered in Section 8 are applicable to other FCC material systems, summarizing the performance of all FCC materials is beyond the scope of this review. The studies given in Table 1 span a wide range of test conditions and fundamentally indicate that, outside of the presence of extrinsic stabilizing mechanisms, nanocrystalline Cu grains will grow under most irradiation and thermal conditions, which are often imposed separately. Therefore, a comprehensive understanding of their radiation tolerance requires nuanced control of the experimental conditions and, more specifically, an understanding of grain growth in environments combining irradiation and thermal effects. The next section probes these combined environments, as this is essential to improving the grain structure stability and thereby maintaining the high sink density of nanocrystalline materials.
Grain Growth Regimes in Combined Irradiation/Thermal Environments
Data from Table 1 are plotted in Figure 1 to indicate the breadth of trends observed in the irradiation of NC-Cu conducted at various temperatures. Some general trends can be deduced from Figure 1. The grain size increases with irradiation dose, and grain growth stagnation is observed at high irradiation doses. Additionally, the grain growth rate rises with temperature. Significant contributions from temperature to the grain growth are expected at 400-500 °C, as shown in Table 1 for the unirradiated Cu materials [27,28]. To deconvolute the complex grain growth phenomena caused by combinations of irradiation and thermal exposure, Kaoumi et al. identified three grain growth regimes for nanocrystalline materials under irradiation: (1) a purely thermal regime at temperatures above recrystallization, (2) a thermally assisted regime where both irradiation and thermal effects contribute to the grain growth, and (3) an athermal regime where irradiation effects dominate [31,32]. The first regime has been well covered in the literature [52]. In this regime (thermally activated grain growth), the growth is driven by the reduction in grain boundary free energy and can be described using a power-law-based equation (R^2 − R_0^2 = αt, where R is the mean grain radius, R_0 the mean initial grain radius, α a temperature-dependent constant, and t the time) [53][54][55].
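To make the thermally activated regime concrete, the short sketch below evaluates the power law R^2 − R_0^2 = αt with an assumed Arrhenius form for α; the prefactor, activation energy, and initial radius are illustrative placeholders, not values fitted to the Cu data in Table 1.

```python
import numpy as np

# Sketch of purely thermal (regime 1) grain growth, R^2 - R0^2 = alpha * t, with an
# assumed Arrhenius temperature dependence for alpha. Prefactor and activation
# energy are placeholders, not fitted values for Cu.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def thermal_grain_radius(t_s, temp_k, r0_nm=10.0, alpha0_nm2_per_s=1e8, q_ev=1.1):
    """Mean grain radius (nm) after annealing for t_s seconds at temp_k kelvin."""
    alpha = alpha0_nm2_per_s * np.exp(-q_ev / (K_B * temp_k))  # nm^2/s
    return np.sqrt(r0_nm**2 + alpha * t_s)

for temp in (573, 673, 773):  # roughly 300, 400, 500 C
    print(temp, "K ->", round(float(thermal_grain_radius(3600.0, temp)), 1), "nm after 1 h")
```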
In the second (irradiation and thermal effects) and third (primarily irradiation-driven grain growth) regimes, the irradiation-defect interactions come into play. Focusing on the third grain growth regime (irradiation-induced), grain growth has been explained using a thermal spike (e.g., thermal event) approach [32,35,50,56,57]. In this theory, when the collision event ends, the energy of the remaining recoil atoms is thermalized within the lattice, resulting in a localized temperature increase, called a thermal spike. Some studies in the literature use the terminology thermal event to distinguish the thermalized kinetic energy caused by a keV-MeV strike from the thermal spike resulting from predominately electronic energy loss associated with 100 MeV-10 GeV strikes. Liu et al. were the first to suggest thermal spike diffusion phenomenon [50]. If the thermal spike occurs on or near a grain boundary, the atoms are thermally activated and can jump across the boundary [32,[57][58][59], resulting in grain boundary migration and thus grain growth.
Similarly to the thermally activated grain growth, power law equations for irradiation-induced grain growth have been developed over the years (equations of the type D^n − D_0^n = KΦ, where D is the mean grain diameter, D_0 the initial mean grain diameter, Φ the ion dose, and K and n are experimental constants) [32,50,60,61]; however, the models do not explain the growth stagnation observed at high irradiation doses [23,30,31,62,63]. The grain growth stagnation has been attributed to the fact that thermal events occur too far from the boundaries to induce boundary motion [35,57,62]. Grain growth only occurs if the cascade volume is larger than the grain volume and overlaps the boundaries [57,62]. In a parallel theory, Singh et al. ascribe grain growth stagnation to the loss of high-mobility grain boundaries during grain growth [63]. However, most irradiation-induced grain growth data have been collected on thin films, and an inconvenience of using thin films is the specimen thickness effect [64]. It has been shown that grain growth may stagnate when the grain size approaches the dimension of the film thickness due to surface grooving at the intersections of the boundaries and the film surface [63][64][65].
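For comparison, a minimal sketch of the irradiation-induced power law D^n − D_0^n = KΦ is given below; K, n, and D_0 are arbitrary placeholder constants chosen only to show the functional form, and, as noted above, this expression does not reproduce the growth stagnation observed at high doses.

```python
import numpy as np

# Sketch of the irradiation-induced grain growth power law D^n - D0^n = K * phi.
# K and n are experimental constants; the values below are arbitrary placeholders
# used only to illustrate the functional form (dose-driven growth with no stagnation).
def irradiated_grain_diameter(phi_ions_per_cm2, d0_nm=20.0, k=1e-11, n=3.0):
    return (d0_nm**n + k * phi_ions_per_cm2) ** (1.0 / n)

for phi in (1e14, 1e15, 1e16):
    print(f"{phi:.0e} ions/cm^2 -> D ~ {irradiated_grain_diameter(phi):.1f} nm")
```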
Modeling and simulation studies have shown that the grain growth kinetics are faster during annealing and irradiation as compared to thermal exposure alone. Using atomistic simulations of a high-angle Σ5(210) grain boundary in a Cu bicrystal, Jin et al. showed that irradiated grain boundaries are about twelve times more mobile than unirradiated boundaries [66]. They surmise this is due to the more frequent rearrangements and migration of atoms. Similarly, MD simulations in NC-Ni comparing thermally and irradiation-induced grain growth have shown that the latter is much faster during the same simulation time (100 ps) [57].
Diffusion plays a significant role in defect annihilation and can partially explain why nanocrystalline metals are theorized to have good radiation tolerance. Smaller grains result in shorter diffusion lengths to nearby sinks, allowing for easier vacancy annihilation at the grain boundaries [67,68]. In larger grains, only vacancies within a certain diffusion distance from the boundaries will migrate and get annihilated, leaving some in the grain interior [67]. Moreover, grain boundaries are known to be "short-circuit" diffusional paths due to their lower atom packing [54]. The diffusivity along grain boundaries increases as the grain size decreases, but also as the misorientation angle increases [3,54]. High-angle grain boundaries typically have lower activation energy for diffusion and therefore higher diffusivities [54]. Grain boundary character as well as the defect cluster size also affect the defect mobility.
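As a purely geometric illustration of the diffusion-length argument above, the toy estimate below computes the volume fraction of a spherical grain lying within one vacancy diffusion length of its boundary; the spherical grain shape, the 10 nm diffusion length, and the neglect of boundary character are all simplifying assumptions.

```python
# Toy geometric estimate of the fraction of a (spherical) grain interior lying within
# one vacancy diffusion length L of the grain boundary, illustrating why smaller grains
# leave fewer "stranded" vacancies. Purely schematic; real sink efficiency depends on
# boundary character, temperature, and defect type.
def fraction_within_reach(grain_diameter_nm, diffusion_length_nm):
    r = grain_diameter_nm / 2.0
    if diffusion_length_nm >= r:
        return 1.0  # the whole grain interior can reach the boundary
    core = (r - diffusion_length_nm) / r
    return 1.0 - core**3  # volume fraction of the outer shell within reach

for d in (20, 100, 1000):  # grain diameters in nm
    print(d, "nm ->", round(fraction_within_reach(d, diffusion_length_nm=10.0), 3))
```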
Atomistic simulations in Cu have demonstrated that mobility decreases as the boundary character complexity and defect cluster size increase [16]. Moreover, irradiation-enhanced diffusivities tend to be much larger than thermal diffusion coefficients (by several orders of magnitude) due to the greater concentration of vacancies and interstitials generated during irradiation [3,69].
Temperature also plays an important role in diffusion. Five material-dependent defect mobility regimes/recovery stages have been defined [1,70,71], with Stage III being the primary regime for the experiments cited in this work. During Stage III, both interstitials and vacancies have enough thermally driven mobility to migrate. Details about the other defect mobility regimes can be found elsewhere [1,70].
While grain growth under combined thermal and irradiation conditions is critical for understanding the evolution of the microstructure and radiation damage tolerance during service, the nature of the grain boundary structure plays a main role in the accommodation of irradiation-induced defects. Considering the grain size alone is not sufficient, it is important to also study grains as a function of distribution, grain boundary character, and chemistry, which will be covered next in Section 4.
Grain Boundary Character Controlled through Synthesis and Processing
Atomistic simulations in Cu have shown that the interaction between grain boundaries and defects is sensitive to the boundary microstructure [16,21]. Room-temperature heavy ion irradiation of bicrystal Cu shows a higher defect absorption rate in low-angle grain boundaries (LAGBs) due to the cooling-induced lattice strain attracting more point defects [19]. Density functional theory (DFT) calculations conducted on Cu confirm that LAGBs are stronger sinks than high-angle ones [13]. At low angles, the boundary sink strength is high due to the local stress field of the neighboring dislocations, and it increases with the misorientation angle (i.e., higher dislocation density). However, as the misorientation further increases, the dislocation stress fields tend to cancel each other out, decreasing the boundary sink strength [13]. Additionally, Vetterick et al. have shown experimentally and via MD simulations that non-equilibrium grain boundaries are stronger sinks for point defects compared to equilibrium boundaries, due to their higher energy and free volume [72]. In turn, nanocrystalline materials are typically produced by non-equilibrium processes [73], such as severe plastic deformation (SPD) [72][73][74], and thin-film synthesis methods, such as physical vapor deposition (PVD) [72,73]. These approaches have enabled grain boundary engineering attempts to enhance their sink strength [47,51,[75][76][77].
Another way of processing nanomaterials is SPD. It enables production of dense bulk specimens, removing the issue of the specimen thickness effect. However, the smallest grain size achievable by SPD is typically larger than what can be obtained via thin-film deposition. Mechanical milling can produce grain sizes between 5 and 50 nm [84,85]. Similarly, high-pressure torsion (HPT) can achieve grain sizes as low as 10 nm. Equal-channel angular pressing (ECAP) and accumulative roll bonding (ARB), for example, produce ultrafine-grained materials (grain size < 1 µm) [76,86]. Impurities can also be more difficult to control than in thin-film processing due to potential extraction or refining remnants or surface contamination [74,86,87]. Impurities are known to decrease grain boundary mobility due to solute drag [63,88], and can act as grain pinners and retard grain growth [86]. In addition, impurities can trap interstitials and vacancies, delaying the formation of clusters [71]. The published experimental data on irradiation-induced grain growth in Cu have nearly exclusively been obtained from materials processed via thin-film deposition (Table 1), and therefore high-purity specimens. Thus, the impurity effect on the irradiation-induced grain growth cannot be confirmed.
Limited irradiation studies on SPD-processed materials have been reported, with most studies reporting on steels [89]. Nita et al. studied NC-Ni and NC Cu-0.5Al2O3 processed by ECAP followed by HPT and confirmed that nanograins produced via SPD successfully suppress the irradiation-induced damage [51,77]. Consequently, there is ample opportunity to study the irradiation tolerance of broader nanocrystalline classes of metals produced by SPD methods; however, many caveats must be considered when analyzing the resulting data. Table 2 compares thin-film deposition and severe plastic deformation in terms of specimen purity, grain size, grain structure, and process scalability. While thin films have some limitations, they allow better control of the grain structure compared to SPD-processed materials. The multitude of techniques available to produce nanocrystalline materials and the processing variables within each synthesis method lead to a lack of consistency, complicating the comparison and analysis of experimental data. For example, when comparing materials with different grain sizes that are processed differently, one should also consider the difference in microstructure (grain boundary misorientation, initial defect density, impurities, etc.). Table 2. Comparison of thin films and SPD-processed bulk materials.
Impact on Mechanical Properties
Irradiating metals at low temperature (<300 °C) usually results in hardening and embrittlement [3,5]. Many studies have reported an increase in yield strength as well as a decrease in uniform elongation with increasing damage level in neutron-irradiated FCC materials [7,34,77,[90][91][92][93][94][95][96]. Figure 2 compiles yield strength and uniform elongation data from the literature for neutron-irradiated Cu as a function of irradiation dose. Most of the tests have been performed on micro-grained specimens (20-40 µm grain size) and irradiated at low damage levels (<0.5 dpa). Mohamed et al. [34] compared the irradiation-induced hardening in coarse-grained and NC-Cu during neutron irradiation between 0.0034 and 2 dpa. Radiation hardening was observed at all damage levels for the micro-grained material; however, the NC-Cu showed some softening for doses up to 1 dpa, due to irradiation-induced grain growth. Grain size measurements indicate that grain growth saturation occurs above 1 dpa and despite the levelling off of grain size, hardening was observed at 2 dpa [34].
Irradiation hardening has two causes: source hardening and friction hardening [3]. Source hardening is hypothesized to be a result of the irradiation-produced cluster defects providing back stresses on dislocation sources, often modeled by Frank-Read sources, which raise the stress required to enable source multiplication [5,34,96]. Singh et al. [96] as well as Fabrietsiev et al. [91] observed dislocation segments decorated by cluster defects. The unpinning of the defect-bound dislocations translates into a yield drop in the tensile curves [91,92,96]. In addition to source hardening, friction hardening is also responsible for the increase in yield strength: irradiation-produced defects impede the motion of the dislocations [5,34]. The increase in yield strength due to irradiation hardening is proportional to the square root of the number density of obstacles, which is directly proportional to the total fluence. Once the microstructure saturates, the radiation hardening slows down [5]. It is worth noting that the irradiation-induced hardening decreases as the irradiation temperature increases. Fabrietsiev and Pokrovsky compared the properties of Cu irradiated at 80 °C and 150 °C and observed a lower (about 50 MPa) increase in strength between the unirradiated and irradiated conditions for the material irradiated at 150 °C compared to the material irradiated at 80 °C. They ascribe this difference to the higher defect mobility at elevated temperatures; it is easier for dislocations to overcome obstacles at higher temperatures [92]. Multiple studies have shown that post-irradiation annealing can recover some yield strength [91,96].
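The square-root dependence of hardening on obstacle density can be illustrated with a dispersed-barrier-type estimate, Δσ ≈ M·α·μ·b·√(N·d). The snippet below uses assumed, order-of-magnitude inputs for Cu (Taylor factor, barrier strength, cluster size, saturation density, and saturation dose), not values taken from the cited irradiation studies.

```python
import numpy as np

# Illustrative dispersed-barrier hardening estimate, consistent with the sqrt(obstacle
# density) dependence noted above. All numbers here are assumptions for illustration.
M, ALPHA = 3.06, 0.2          # Taylor factor, barrier strength (dimensionless)
MU, B = 48e9, 0.256e-9        # shear modulus (Pa) and Burgers vector (m) for Cu
D_OBST = 2.5e-9               # assumed mean obstacle (cluster/SFT) diameter, m

def hardening_mpa(n_obstacles_per_m3):
    return M * ALPHA * MU * B * np.sqrt(n_obstacles_per_m3 * D_OBST) / 1e6

# Assume the defect density grows ~linearly with dose and saturates near 0.5 dpa.
for dose_dpa in (0.01, 0.1, 0.5, 2.0):
    n = min(1e23 * dose_dpa / 0.5, 1e23)
    print(f"{dose_dpa:5.2f} dpa -> yield strength increase ~ {hardening_mpa(n):4.0f} MPa")
```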
In addition to the mechanical properties, both the electrical and thermal conductivities are reduced by the presence of irradiation-induced defects and transmutation products as reported in the case of neutron irradiation [7]. Overall, the effect of irradiation on microstructure and properties is highly dependent on the radiation conditions, and it is important to consider these conditions while studying and comparing irradiation damage in materials. The considerations will be expanded upon in Section 6.
Impact of Radiation Environment
In addition to the dependence on microstructure, the radiation environment also impacts the observed irradiation damage, which complicates comparisons across experimental reports. Ion irradiation has been used to study radiation damage in materials and emulate neutron irradiation [97]. It is considerably more affordable, enables irradiated material handling, requires shorter cycles [98], and allows better control of the irradiation conditions than neutron irradiation [98,99]. However, the correlation between the two is not straightforward. One main difference between ion and neutron irradiation is the particle energy spectrum: the ion energy spectrum is very narrow, while that for neutrons extends over several orders of magnitude [98]. There is also a large difference in weighted recoil spectra (recoil spectra weighted by number of defects or damage energy produced) between the irradiation species [3,98,99]. Furthermore, the penetration depth is much lower in the case of ion irradiation. While travelling through the lattice, ions undergo electronic excitation (unlike neutrons); they quickly lose energy, resulting in a shorter penetration depth (nm to 100 µm [98,100] compared to greater than 1 mm for neutron irradiation [98][99][100]) and higher damage rate [3,98,99]. This can impact the microstructural evolution [101] and makes the measurement of bulk properties difficult [99]; however, the higher damage rate can be partially compensated for by increasing the irradiation temperature [102][103][104]. For example, to reproduce the effects of neutron irradiation at 300 °C, ion irradiation needs to be conducted at 500 °C [97]; however, the higher ion irradiation temperature can lead to thermal annealing, affecting the microstructure.
Another difference is the type of defects observed in the material after irradiation. Light ions produce isolated damage or small clusters while heavy ions and neutrons create large defect clusters [98,99]. Although heavy ions can reproduce features observed during neutron irradiation, ion irradiation lacks nuclear transmutation products, which can play a significant role in the development of damage [97,100,101,105]. Multiple beam systems have been used to co-implant H and He in addition to heavy ions to more accurately emulate H, He, and knock-on damage production expected in a neutron-irradiated material [97,100,106]. The irradiation species and particle energy will also affect the cascade size and morphology [62,107]. Experimental studies have shown that heavier ions resulted in higher grain growth rates in NC-Ni and NC-Pd [30,62], and from Figure 1 this would appear to be the same in NC-Cu for Kr or Cu ions. The size of the thermal spike/event is also dependent on the recoil energy as well as the target material properties [3,33,36]. Li et al. measured greater grain growth in NC-Au than NC-Pt after room-temperature 200 keV Ar+ irradiation, and this difference was explained by a lower grain boundary activation energy for Au [62].
Apart from the thermal spike/event caused by the cascade, beam heating can also occur, leading to temperature increases [108]. The heat input is proportional to the beam current; therefore, the beam heating can be limited by limiting beam current. However, this results in longer irradiation times needed to achieve a specific irradiation dose [97]. It is important to note that for the room temperature irradiation-induced grain growth data plotted in Figure 1, the temperature rise from beam heating was negligible [9,31,32,50].
Another important environmental aspect to mention is the mode of irradiation. Irradiation can be conducted using a rastered beam or a broad beam [100]. The raster-scanning mode is considered as pulsed irradiation while the broad beam is steady/continuous irradiation [109]. The irradiation mode affects the material differently due to the different time scales implemented. In the case of pulsed irradiation, during a cycle, a given volume element is under the beam for only a fraction of time. This means the immediate dose rate is much higher than the average one, leading to a high defect production rate. Furthermore, during pulsed irradiation, defects have time to anneal out before the beam passes through again, resulting in lower effective defect production than during continuous irradiation [97]. Experiments have shown that pulsed irradiation suppresses swelling [109,110], but the impact on other microstructural features is less known [100]. In addition, low-frequency (<2 Hz) pulsing can result in local heating, and thus thermal annealing, which limits defect accumulation [109].
Impact of the Collective Findings on Generating New Knowledge
In the prior sections, we have presented the fundamental mechanisms for grain size stability under irradiation, and the breadth of literature providing reports on these findings. The reports cover a wide span of starting grain sizes and irradiation conditions, many of which do not decouple interlinked thermal and irradiation drivers. These processing and testing variations, in turn, affect the resultant mechanical property findings. Unravelling these findings is not trivial, but some important implications emerge from this review. Firstly, processing definitively impacts the microstructure in ways that affect irradiation damage tolerance. For example, the features of the grain structure, such as grain size distribution, energetics (e.g., LAGBs vs. HAGBs), grain morphologies (equiaxed vs. columnar), and alloy and grain boundary chemistry (thin-film vs. bulk processing), must all be carefully documented and parametrically controlled to reveal valid irradiation grain growth effects under specific irradiation conditions and temperatures. Secondly, the irradiation conditions, such as the type of irradiation (ion, neutron, electron, or others), the applied or generated temperature, the cycle time length, the bombarding species mass, and the beam application (pulsed vs. continuous), all correlate with different energy-materials interactions and thus defect-generation conditions, and therefore must also be carefully controlled within a given measurement. Careful consideration and control of these parameters will allow for the generation and validation of experimental findings, and more confident implementation and validation of computational models. The new knowledge generated from such studies will underpin the design of new materials for nuclear power generation and transmission, such as high-strength, high-conductivity radiation-stable conductors in fusion machines [111].
Summary and Recommendations
Nanocrystalline materials, with their high sink density, have demonstrated some promise for increased radiation damage tolerance. However, their lack of thermal stability makes them highly prone to grain growth, reducing their sink density and thus their capacity to accommodate irradiation damage.
In this paper, we illuminate how nuance is critical in predicting and understanding grain size stability under irradiation. The large range of radiation environments can lead to significantly different radiation damage, complicating the analysis and comparison of radiation damage effects. In addition, the various processing methods for synthesizing nanocrystalline materials alter the microstructure and therefore the response to irradiation. Notably, grain structure and the impurity content significantly impact the interaction between irradiation defects and sinks.
The extant literature on Cu grain size stability under irradiation reports a range of irradiation conditions and microstructures, complicating one-to-one comparisons and necessitating continued experiments and modelling to advance the understanding of nanostructured materials tailored for use in irradiation environments. We identify multiple thrusts crucial for meaningful comparisons across grain sizes and irradiation conditions:
(a) In-depth material preparation studies to understand the effect of the processing method on the damage tolerance. This includes deeper explorations into bulk processing methods that might be suitable for specific radiation environments. Most irradiation-induced grain growth studies have been conducted on thin-film materials; studying irradiated bulk materials would allow the effect of impurities to be investigated, as well as removing the specimen thickness effect.
(b) Deeper studies of impurity content effects to decipher chemical variations on the damage tolerance, focusing on the difference between lab-grown and commercially processed materials.
(c) Exploratory studies on the interplay of primary knock-on atom (PKA) energy, damage cascade, and irradiation temperature effects.
(d) Higher-throughput in situ and ex situ testing to study grain growth effects under a wider span of irradiation doses and/or temperatures on the same starting material, such that trends can be reported with higher confidence, at least for the chosen irradiation type (ion vs. neutron vs. electron).
(e) Round-robin experiments probing single-sourced Cu samples (with a constant range of grain sizes) exposed to the same energy and species, to help the community focus on specific irradiation condition effects.
"Materials Science",
"Physics"
] |
Fault diagnosis for PV system using a deep learning optimized via PSO heuristic combination technique
A heuristic particle swarm optimization combined with a back propagation neural network (BPNN-PSO) technique is proposed in this paper to improve the convergence and the prediction accuracy of fault diagnosis for a photovoltaic (PV) array system. The technique combines the classification and prediction ability of deep learning with the ability of particle swarm optimization to find the best solution in the search space. Parameters extracted from the output of the PV array are used as identification features for fault diagnosis of the system. The results of the back propagation neural network method alone and of the back propagation heuristic combination technique are compared. The back propagation algorithm converges after 350 steps, while the proposed BP-PSO algorithm converges after only 250 steps in the training phase. The prediction accuracy of the BP algorithm is about 87.8%, while the proposed BP-PSO algorithm achieves 95% correct predictions. The back propagation heuristic combination technique therefore yields better convergence in the simulation as well as higher accuracy in the fault diagnosis of the PV system.
Introduction
With the fast development of renewable energy technology, it has become a basic source of electricity in many countries. Renewable energy accounted for 18.18% of worldwide electricity generation in 2016 according to recent reports [1]. Due to the recent increase in PV generation and its wide use worldwide, PV faults have arisen and attracted considerable attention. These faults strongly affect the reliability and performance of the PV system. The causes of these faults may be partial shading, temperature faults, module aging, cell damage, and short circuits or open circuits of the PV modules [2][3][4][5][6]. Temperature faults arise from the high temperature of the surface of the PV panels after sunlight absorption. Partial shading faults occur due to the presence of clouds, fallen leaves, or dust. Both short-circuit and open-circuit faults occur after long operation times of the PV due to module aging. Detecting the occurrence of faults in a PV system helps greatly in preventing system degradation and in maintaining the system's reliability. The problems caused by fault occurrence in PV systems affect the operating efficiency, may damage system components, and may also result in dangerous fire threats and safety hazards.
Bilal Taghezouit et al. [7] presented a fault detection strategy based on a double exponential technique. This method proved efficient and applicable for detecting different faults, but it also had drawbacks: it works for one scale only and is not suitable for multiple scales. As stated in that work, using deep learning techniques would greatly improve the results. Bilal Taghezouit et al. [8] also designed an efficient method using a principal component analysis model, in which multivariate monitoring schemes were used for fault detection. Despite the good results achieved, the designed method is suitable only for detection at one scale (time), not for multiscale systems. Zhicong Chen et al. [9] proposed the random forest (RF) ensemble learning algorithm for the detection and diagnosis of early PV faults. The RF method takes fault features such as the real-time operating string voltages and currents, and applies a grid-search method for optimizing the RF model parameters. The fault types studied were degradation, open circuit, line-line fault, and partial shading. This method reached a high prediction accuracy, making it a good method. In the technique proposed here, different types of faults are predicted.
Haizheng Wang et al. [10] applied an uncertainty analysis method to PV fault diagnosis. A probabilistic modeling method for the PV array parameter distribution is presented, which could handle the nonlinearity as well as the uncertainty of the PV output interval. This method needs more verification because of the diversity of the characteristics of different faults.
Yuanliang Li et al. [11] proposed a method based on the identification of fault parameters for diagnosing faults of the PV array. It can recognize faults and describe them quantitatively by identifying the fault parameters from the I-V curve of the PV array. Its drawback is that it is suitable for use under good irradiance, so if partial shading occurs the method does not achieve the same performance.
Ling Chen et al. [12] presented a fault diagnosis method for PV modules using a back propagation neural network with the Levenberg-Marquardt (L-M) algorithm. The fault diagnosis for PV modules is designed on the basis of long-distance wireless fault diagnosis using Zigbee technology. This method was able to detect four kinds of faults: short circuit, partial shading, open circuit, and abnormal degradation.
Qiang Zhao et al. [13] proposed a PV fault diagnosis method using fuzzy C-means clustering for clustering the PV fault samples. A fuzzy membership algorithm was also used in that work for the final fault diagnosis. It has the advantage of separating fault data from normal data without prior knowledge.
Jingna Pan et al. [14] suggested a fault diagnosis method using an uncertainty analysis based on nonparametric statistical modeling. A method for acquiring the fault diagnosis threshold is proposed, which was a new idea for setting a dynamic threshold for fault diagnosis.
The artificial neural network [15][16][17][18][19] is a method that mimics the way the human brain solves problems. The BP neural network [20][21][22][23][24][25] is a commonly used approach for fault diagnosis; it is a multilayer feed-forward network composed of three or more layers. The training of the forward network is done by back-propagating the calculated error. Different weights connect the neurons of adjacent layers, and the input and output layers are connected via the hidden layer. The connection weights are revised until the values with the least error between the actual and expected outputs are reached; the accuracy of the response to the input increases as the error is corrected. A drawback of the BP neural network is that a large database and long training times are required for convergence. These long training periods, as well as improperly chosen samples, may lead to low accuracy when predicting faults in PV systems. For this reason, combining particle swarm optimization with the BP neural network is proposed to overcome these disadvantages.
The main motivation for the work presented in this paper is the large impact of the different faults that can occur in a PV system, which directly degrade the performance of the system and lead to its malfunction. This motivated the idea of applying a new technique for fault diagnosis in PV systems. The contributions of this work include studying the PV performance under various fault occurrences by using recognition features such as the short-circuit current (Isc), the open-circuit voltage (Voc), the voltage at the maximum power point (Vm), and the maximum power (Pm). The method reduces the running and execution time of the diagnostic procedure. The performance of the proposed algorithm, a back propagation neural network combined with the PSO method, is evaluated for fault diagnosis in PV systems. This method combines the global search ability of the PSO algorithm with the local search ability of the back propagation neural network. The BPNN-PSO technique improves the convergence of the diagnostic method and increases the prediction accuracy of fault diagnosis for photovoltaic systems. When comparing the contribution of this method with previous works, this work can efficiently predict several types of faults that degrade the PV system, whereas other works can predict only one or two of these fault types [26]. A further contribution is the combination of a heuristic optimization technique with a deep learning neural network technique, which provides a better learning method for obtaining correct fault predictions.
To compare the presented work with state-of-the-art methods, note that many methods have been proposed for the same purpose of fault diagnosis using different techniques. Considering in particular the methods previously applied using different deep learning techniques, a summarized comparison is shown in Table 1, which provides a comparative analysis based on the state of the art of different deep learning techniques. This paper is organized as follows: Sect. 2 describes the fault diagnosis system that was used and the parameters for fault recognition. Section 3 presents the proposed hybrid algorithm combining the back propagation neural network and particle swarm optimization (BP-PSO) for fault detection in PV systems. Section 4 describes the data used in applying this technique. The fault prediction and diagnosis were studied using MATLAB, and the results are shown in Sect. 5. A conclusion is given in Sect. 6.
The proposed system configuration
Figure 1 shows the schematic diagram of the system to be fault diagnosed. It is composed of a PV array with four modules in series and three in parallel (4 × 3), a DC load, system alarms, and some modules for recording the PV system states. A BP-PSO network is the tool used for diagnosis. The change of the PV parameters under different fault occurrences is analyzed by simulating a PV module in MATLAB/Simulink, referring to the mathematical model given in [30]. This model is selected for simulation because it can be applied in practice, so the outcome can be easily shown in real time.
Some specifications of the PV module are given in Table 2 under standard test conditions: irradiance = 1000 W/m², temperature = 298 K. These conditions are selected because they are the optimum conditions available in the external environment of the PV system. Previous studies have pursued the same idea of fault diagnosis for PV systems using nearly the same parameters but with different techniques, such as voltage and current observation and evaluation [31] and string-level monitoring for fault diagnosis [32]. Different faults were considered during the simulation: a) partial shading, b) temperature faults, and c) cell aging faults, which appear when different series resistances are used. The current-voltage (I-V) and power-voltage (P-V) curves are shown in Figs. 2-4. As the parallel resistance of the PV cells increases, the values of Voc and Isc change only slightly while Vm and Pm decrease, as shown in Fig. 2(a) and (b).
In Fig. 3(a) and (b), the effect of changing the value of the series resistance is illustrated. When the series resistance decreases, the values of Voc and Isc change only slightly while Vm and Pm increase.
As the temperature of the cell increases, as shown in Fig. 4(a) and (b), Isc increases while the values of Vm, Pm, and Voc decrease. These changes occur because the bandgap is negatively correlated with the ambient temperature. As the ambient temperature increases, the Fermi level of the PV gradually approaches the center of the forbidden band. The diffusion coefficient of the PV is positively related to the Fermi energy and to Isc.
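The trends in the preceding paragraphs can be reproduced qualitatively with a generic single-diode model. The sketch below is not the MATLAB/Simulink model of [30]; every parameter value (photocurrent, saturation current, ideality factor, bandgap, cell count, temperature coefficient) is an assumption, and it illustrates only the series-resistance and temperature trends (partial-shading/parallel-resistance effects would need a string-level model).

```python
import numpy as np

# Minimal single-diode PV sketch used only to illustrate how series resistance and
# cell temperature shift the I-V and P-V curves. All parameter values are assumptions.
Q, K = 1.602e-19, 1.381e-23   # elementary charge (C), Boltzmann constant (J/K)

def iv_curve(t_k=298.0, rs=0.2, rp=300.0, iph_ref=8.0, i0_ref=1e-9,
             n_ideal=1.3, n_cells=60, e_gap=1.12, alpha_isc=5e-4):
    iph = iph_ref * (1.0 + alpha_isc * (t_k - 298.0))            # photocurrent vs T
    i0 = i0_ref * (t_k / 298.0) ** 3 * np.exp(
        Q * e_gap / (n_ideal * K) * (1.0 / 298.0 - 1.0 / t_k))   # saturation current vs T
    vt = n_ideal * n_cells * K * t_k / Q                         # string thermal voltage
    vd = np.linspace(0.0, vt * np.log(iph / i0 + 1.0), 500)      # diode-voltage sweep
    i = iph - i0 * (np.exp(vd / vt) - 1.0) - vd / rp             # explicit current
    v = vd - i * rs                                              # terminal voltage
    keep = (i >= 0) & (v >= 0)
    return v[keep], i[keep]

for label, kwargs in [("baseline", {}), ("lower Rs", {"rs": 0.05}), ("hotter", {"t_k": 338.0})]:
    v, i = iv_curve(**kwargs)
    p = v * i
    print(f"{label:9s} Isc~{i[0]:.2f} A  Voc~{v[-1]:.1f} V  Pm~{p.max():.0f} W  Vm~{v[p.argmax()]:.1f} V")
```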
The above results indicate that Isc, Voc, Vm, and Pm can be used to show whether a fault has occurred as well as the type of this fault. Their values are used as identification parameters and are arranged as an input matrix. This paper is concerned with six kinds of faults, as listed in Table 3.
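A sketch of how the four identification features could be read off a sampled I-V curve is given below; the crude synthetic curve and the column order [Isc, Voc, Vm, Pm] are assumptions for illustration only.

```python
import numpy as np

# Sketch of extracting the four identification features (Isc, Voc, Vm, Pm) from a
# sampled I-V curve; a crude synthetic curve stands in for the simulator output.
def extract_features(v, i):
    p = v * i
    k = int(np.argmax(p))
    return np.array([i[0], v[-1], v[k], p[k]])   # [Isc, Voc, Vm, Pm]

v = np.linspace(0.0, 40.0, 200)
i = 8.0 * (1.0 - (v / 40.0) ** 12)               # stand-in I-V shape, not a PV model
print(extract_features(v, i))                    # one row of the input matrix X
```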
Back propagation-particle swarm optimization algorithm
The BP-PSO technique used in this paper has the major advantage of combining the local search ability of the BP neural network with the global search ability of the PSO [33][34][35]. Using this hybrid technique quickly yields faster solutions for predicting faults in the PV array. This combination results in highly efficient prediction of fault types, which is an important contribution, as is the idea of merging the PSO into the deep learning technique. The test data are normalized and used as inputs to the input layer; training then takes place, and the sigmoid function is applied in the training layer, where the learning and classification mechanisms operate. The fault data of the PV system are then passed to the PSO layer for fault classification, and the optimization results are obtained at the output layer. The methodology thus consists of first training the BP neural network, where the sigmoid function is applied, and then running the PSO algorithm for fault classification. The schematic diagram of this fault diagnosis method is shown in Fig. 5.
The steps of the proposed BPNN-PSO method are as follows: a) The recorded fault data are normalized, as they have different magnitudes.
Due to the different magnitudes of the identification parameters Isc, Voc, Vm, and Pm, a linear transformation is used to normalize the input matrix X [36].
where x_ij is an element of the initial input matrix, z_ij is the corresponding element of the normalized input matrix, x_min is the minimum value of each row of matrix X, x_max is the maximum value of each row of matrix X, y_min is the minimum value of each row of the normalized matrix, and y_max is the maximum value of each row of the normalized matrix.
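A minimal sketch of this linear (min-max) normalization is given below, assuming per-row scaling into a target range of [-1, 1] consistent with the variable definitions above (the actual target range used in the paper is not specified here).

```python
import numpy as np

# Sketch of the linear (min-max) normalization of step (a): each row of the raw
# feature matrix X is mapped into [y_min, y_max]. The target range is an assumption.
def normalize_rows(x, y_min=-1.0, y_max=1.0):
    x_min = x.min(axis=1, keepdims=True)
    x_max = x.max(axis=1, keepdims=True)
    return y_min + (y_max - y_min) * (x - x_min) / (x_max - x_min)

X = np.array([[8.1, 7.9, 6.2, 8.0],       # e.g. Isc samples
              [37.2, 36.8, 30.1, 36.9]])  # e.g. Voc samples
print(normalize_rows(X))
```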
b) The sigmoid function is applied to the normalized fault data, and the optimal results are used as the particles in the search space of the PSO.
Using a suitable number of neurons and suitable activation functions for the BP-PSO neural network makes the training process converge faster and take less time [37][38][39][40].
The applied sigmoid function is f(x) = 1/(1 + e^(−x)), and the linear activation function is f(x) = x. The number of hidden neurons required is calculated from a rule of the form n = floor(√(n_i + n_0)) + a, where floor(y) is a function used for rounding down, such that floor(3.2) = 3; n is the number of neurons in the hidden layer; n_i is the number of neurons in the input layer; n_0 is the number of neurons in the output layer; and a is a constant. The weight C_ij is a connection weight that connects neuron i of the hidden layer with neuron j of the input layer in the back propagation neural network using the sigmoid function. The weight W_ij is a connection weight that connects the output layer with the hidden layer using the linear function. By examination, it was clear that increasing the number of neurons in the hidden layer increased the training accuracy of the back propagation neural network.
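The following sketch implements the forward pass just described (sigmoid hidden layer, linear output layer) together with the floor-based sizing rule; the random weight initialization, the constant a = 4, and the exact form of the sizing rule are assumptions.

```python
import numpy as np

# Sketch of the forward pass of step (b): sigmoid hidden layer, linear output layer,
# plus the floor-based rule of thumb for the hidden-layer size. Weight initialization
# and the constant a are assumptions for illustration.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_neurons(n_inputs, n_outputs, a=4):
    return int(np.floor(np.sqrt(n_inputs + n_outputs))) + a   # assumed form of the rule

rng = np.random.default_rng(0)
n_in, n_out = 4, 6                        # [Isc, Voc, Vm, Pm] -> 6 fault classes
n_hidden = hidden_neurons(n_in, n_out)
C = rng.normal(size=(n_hidden, n_in))     # input -> hidden weights (C_ij)
W = rng.normal(size=(n_out, n_hidden))    # hidden -> output weights (W_ij)

def forward(z_in):
    z_out = sigmoid(C @ z_in)             # hidden-layer response
    return W @ z_out                      # linear output layer

print(forward(np.array([0.2, -0.5, 0.8, -0.1])))
```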
The matrices at the successive stages are related by L_in = P_out (9), where Z_in is the normalized input matrix; Z_out is the output matrix after applying the sigmoid function; P_in is the input matrix to the particle swarm optimization layer; P_out is the output matrix after applying the particle swarm optimization; f_1 denotes the processing performed by the particle swarm optimization; L_in is the input matrix (after applying the PSO) to the output layer; and L_out is the output matrix processed by the linear function at the output layer. c) Update the position and the velocity of the particles after applying the sigmoid function, using the particle swarm optimization algorithm.
The particles of the PSO algorithm are initialized by taking the optimal results of training as the initial particles in the search space. Each particle has an initial position x_i and an initial velocity v_i. The best local position is denoted p_i and the best global position of the whole swarm is denoted p_g [41][42][43]. The velocity and position of the particles are updated using the standard PSO equations v_i^(t+1) = w·v_i^t + c_1·r_1·(p_i^t − x_i^t) + c_2·r_2·(p_g^t − x_i^t) and x_i^(t+1) = x_i^t + v_i^(t+1), where w is the inertia weight factor ∈ (0, 1); t is the iteration number; c_1 and c_2 are the cognitive and social components, respectively; r_1 and r_2 are independent random numbers between 0 and 1; p_i^t is the best previous local position of the ith particle at iteration t; p_g^t is the best previous global position among all particles at iteration t; x_i^t is the ith particle's current position at iteration t; x_i^(t+1) is the ith particle's position in the next iteration; v_i^t is the ith particle's current velocity; and v_i^(t+1) is the ith particle's velocity in the next iteration.
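A direct transcription of these update equations in vectorized form is shown below; the hyperparameter values (w, c1, c2) and the toy particle setup are illustrative assumptions.

```python
import numpy as np

# Standard PSO velocity/position update of step (c). Hyperparameter values are
# illustrative assumptions, not the paper's settings.
def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(1)):
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_next = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_next = x + v_next
    return x_next, v_next

x = np.zeros((5, 3)); v = np.zeros((5, 3))   # 5 particles in a 3-D search space
p_best = np.ones((5, 3)); g_best = np.ones(3)
x, v = pso_step(x, v, p_best, g_best)
print(x)
```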
d) Evaluate the fitness function of the particles in the particle swarm. The assigned fitness function is the mean square error of the neural network, of the usual form MSE_j = (1/N) Σ_i (y_ij,des − y_ij,out)², with N the number of training samples, where j indexes the fault states occurring in the PV system, j = 1, 2, ..., 6; MSE_j is the mean square error for fault j; y_ij,des is the desired output value; and y_ij,out is the actual output value of the jth neuron.
When a particle's new position is better than its current local best position, the local best position is updated. If this particle's position is also better than the global best position, the global best position is updated to the new particle's position. e) Stop the algorithm when the maximum number of iterations is reached or when a sufficiently small error is achieved.
If the maximum number of iterations is reached or a small error is achieved, stop the algorithm and output the results; otherwise return to step c) until these requirements are met.
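Steps c) to e) can be tied together in a loop like the sketch below, where each particle's fitness is the network's mean square error; the toy fitness function, swarm size, hyperparameters, and stopping thresholds are all assumptions standing in for the paper's actual training setup.

```python
import numpy as np

# Sketch of steps (c)-(e): update particles, evaluate the MSE-based fitness, update
# personal/global bests, and stop at the iteration cap or an error target. The
# decode/forward details are stand-ins; only the loop structure follows the text.
def mse_fitness(y_desired, y_actual):
    return float(np.mean((y_desired - y_actual) ** 2))

def run_pso(fitness, dim, n_particles=20, max_iter=100, target_error=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    p_best, p_val = x.copy(), np.array([fitness(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()
    for _ in range(max_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        val = np.array([fitness(p) for p in x])
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
        if p_val.min() <= target_error:
            break
    return g_best, p_val.min()

# Toy fitness: distance to an arbitrary "ideal" weight vector, standing in for the
# MSE of the BP network evaluated with the particle's candidate weights.
ideal = np.linspace(-0.5, 0.5, 10)
best, err = run_pso(lambda p: mse_fitness(ideal, p), dim=10)
print(err)
```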
Computational complexity
The complexity of the proposed BP-PSO is an important factor that should be taken into consideration. If the hidden layer has M neurons, the BP-PSO algorithm requires approximately 5M + 3 multiplications and 5M + 2 additions per iteration. The number of iterations must also be taken into account, since it increases the computational complexity: if the number of iterations is P, the complexity increases approximately P times. As an example, if M = 3 and P = 2, the proposed algorithm requires only 36 multiplications and 34 additions.
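A quick check of the stated operation counts (treating 5M + 3 multiplications and 5M + 2 additions as per-iteration costs scaled by P):

```python
# Verify the worked example above: M hidden neurons, P iterations.
def bp_pso_ops(m_hidden, p_iterations):
    return p_iterations * (5 * m_hidden + 3), p_iterations * (5 * m_hidden + 2)

print(bp_pso_ops(3, 2))   # -> (36, 34), matching the example in the text
```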
Data analysis
The reliability of the PV array used is verified by building it in MATLAB/Simulink, where the simulation was carried out under the standard conditions. The dataset used is from [30], based on the Sunpower SPR-X20-250-BLK module, with parameters obtained from the National Renewable Energy Laboratory (NREL). The number of data sample sets used for the BP-PSO NN is 300, with irradiance ∈ [100 W/m², 2000 W/m²] and temperature ∈ [273.15 K, …], as listed in Table 4.
Results and discussion
The data collected were applied to BPNN and to the BPNN-PSO used in this paper using MATLAB. The mean square error of the 240 training samples of the fault diagnosis of the PV system is shown in Fig. 6.
The solid blue line in Fig. 6 represents the mean square error during the training process, and the dotted line is the targeted mean square error. Convergence is achieved after 350 steps using the BP neural network, while convergence occurs after only 250 steps using BPNN-PSO. It is also shown that the mean square error of the BPNN-PSO is lower than that of the BP neural network. This confirms that the BPNN-PSO used in this paper results in faster convergence, a highly efficient training process, and high accuracy in fault diagnosis. However, the complexity of this method arises from the time the algorithm takes to reach the results: theoretically this time is not very long, but in practice it takes longer to obtain the fault types that have occurred.
The error histogram with 20 bins for the BPNN-PSO fault diagnosis of the PV system is shown in Fig. 7. The bins are the vertical bars observed on the graph, and each vertical bar represents the number of samples from the dataset. The histogram shows the error between the target and the predicted values just after training the neural network. The zero-error line represents the zero-error value on the error axis; here, the zero error lies in the bin centered at -0.00292.
Ten samples of the test data were selected to compare the fault-type prediction ability of both BP and BPNN-PSO. The results of this comparison are shown in Table 5. To emphasize this comparison and show that the performance of BPNN-PSO is better than that of BP, Fig. 8 shows a comparison graph of the performance of both algorithms. The last two columns in Table 5 show whether the predictions are correct or wrong for each algorithm; a √ indicates a correct prediction and an × indicates a wrong prediction. Among the samples, one prediction was wrong: sample 7 was predicted as fault index 4 instead of 5. This is common in deep learning, as the learning process can never give 100% accuracy all the time; it can be accounted for during system design.
Comparing the accuracy achieved in this work with previous work in [9], the prediction accuracy for faults in the PV array system described here is 95%, while that in "Random forest based intelligent fault diagnosis for PV arrays using array voltage and string currents" by Chen, Zhicong, et al. [9] is 85%. The prediction accuracy in "Assessment of machine learning and ensemble methods for fault diagnosis of photovoltaic systems" by Adel Mellit, et al. [44] is 81.73%, and that in "Cost-effective fault diagnosis of nearby photovoltaic systems using graph neural networks" by Jonas Van Gompel, et al. [45] is 87.5%.
With reference to all the above work, the proposed BPNN-PSO algorithm shows dominance and higher classification performance. This demonstrates the superiority of the applied BPNN-PSO over the back propagation (BP) neural network with the Levenberg-Marquardt (L-M) algorithm proposed for PV fault diagnosis [9], the ensemble learning (EL) method proposed in [44], and the graph neural networks (GNN) in [45]. The overall performance is typically measured by the success rate, defined as the ratio of correctly classified instances to all instances. One such measure is the F-score. The weighted F-score, defined as the average of the F-scores of each class (F-inspect, F-monitor, and F-running), is used as a reference. In its standard form it is calculated as F = 2 × precision × recall / (precision + recall), with precision = A / (A + C), where A is the number of correctly classified instances and C is the number of incorrectly classified instances, and recall = A / (A + B), where B is the number of correct instances that were not classified. When calculating the F-score for the presented BP-PSO neural network, it was found to be 0.973, indicating high performance in fault occurrence classification.
Other performance metrics that can characterize the performance of the proposed BP-PSO algorithm are precision and recall. Precision is defined as the ratio of correct positive observations to all observations predicted as positive. Recall is the ratio of correct positive observations to all observations in the actual class.
By calculating the precision and recall of the proposed BP-PSO neural network as a performance indication, the precision was found to be 90.75% and the recall 88.56%, indicating a high accuracy of fault diagnosis.
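For reference, the metrics discussed above can be computed from the counts defined in the text (A correctly classified, C incorrectly classified, B correct but not classified); the counts in the example below are illustrative, not the paper's data.

```python
# Precision, recall and F-score from the count definitions used in the text:
# A = correctly classified, C = incorrectly classified, B = correct but not classified.
def precision(a, c):
    return a / (a + c)

def recall(a, b):
    return a / (a + b)

def f_score(p, r):
    return 2 * p * r / (p + r)

p, r = precision(a=90, c=9), recall(a=90, b=12)   # illustrative counts only
print(round(p, 3), round(r, 3), round(f_score(p, r), 3))
```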
The applied algorithm shows outstanding results for fault detection. This has practical implications for PV systems, as detecting faults greatly improves the efficiency, reliability, and safety of the whole system. If these faults are not detected, a high cost will be associated with the power lost from the PV module. The staff responsible for
Conclusion
The parameters Isc, Voc, Vm, and Pm were chosen as the identification parameters for the system's fault diagnosis after analysis of the PV output. The proposed BP-PSO neural network algorithm was applied to predict the type of fault occurring in the PV system. These types include temperature faults, cell aging, partial shading faults, the combination of temperature and partial shading faults, and the combination of temperature and cell aging faults. The simulation results show that the proposed algorithm significantly improves the convergence and achieves higher prediction accuracy for the fault type. The back-propagation algorithm converges after 350 steps, while the proposed BP-PSO algorithm converges after only 250 steps in the training phase. The prediction accuracy of the BP algorithm is about 87.8%, while the proposed BP-PSO algorithm achieves 95% correct predictions. This algorithm can intelligently predict the type of fault in real time without additional hardware support. The impact of applying it to various PV systems is a significant contribution: fault detection with this accuracy increases the lifetime, reliability, and safe functionality of the system. Although many methods have been introduced previously, this method offers high accuracy, a classification advantage, and quick detection. This fault detection capability can make the maintenance of PV systems easier, especially for large-scale systems, so that no effort or time is wasted in determining the fault type. Consequently, the technique can provide a solution for the sudden reduction of power that occurs due to unexpected failures. These results encourage the energy and power communities to make greater use of AI techniques for the classification and detection of faults, which could substantially increase power production in energy systems by avoiding failures. The algorithm also makes the task of maintenance much easier. Governments should also support this goal by increasing investment in the development of monitoring techniques, so that data can be gathered quickly and with high accuracy. The PV solar industry, and large-scale PV systems in particular, will benefit from using the proposed algorithm. The promising fault detection solution achieved here can yield much better optimization of cost, time, and maintenance effort. The prediction was not correct at some points, which may be considered a limitation. This work can be applied to other PV systems to test its performance. In the future, the PSO algorithm can be modified so that the values of c_1 and c_2 change according to a certain equation rather than remaining constant; this may change the optimum results and increase the prediction accuracy. It can also be applied in practice as future work.
Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Funding for this study was personal, with no other outside support.
Availability of data and materials All data used in the paper are referred to in the references used in the paper.
Conflict of interests
The authors declare that they have no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 6,788.6 | 2023-03-30T00:00:00.000 | [
"Computer Science"
] |
Applications of Machine Learning in Real-Life Digital Health Interventions: Review of the Literature
Background Machine learning has attracted considerable research interest toward developing smart digital health interventions. These interventions have the potential to revolutionize health care and lead to substantial outcomes for patients and medical professionals. Objective Our objective was to review the literature on applications of machine learning in real-life digital health interventions, aiming to improve the understanding of researchers, clinicians, engineers, and policy makers in developing robust and impactful data-driven interventions in the health care domain. Methods We searched the PubMed and Scopus bibliographic databases with terms related to machine learning, to identify real-life studies of digital health interventions incorporating machine learning algorithms. We grouped those interventions according to their target (ie, target condition), study design, number of enrolled participants, follow-up duration, primary outcome and whether this had been statistically significant, machine learning algorithms used in the intervention, and outcome of the algorithms (eg, prediction). Results Our literature search identified 8 interventions incorporating machine learning in a real-life research setting, of which 3 (37%) were evaluated in a randomized controlled trial and 5 (63%) in a pilot or experimental single-group study. The interventions targeted depression prediction and management, speech recognition for people with speech disabilities, self-efficacy for weight loss, detection of changes in biopsychosocial condition of patients with multiple morbidity, stress management, treatment of phantom limb pain, smoking cessation, and personalized nutrition based on glycemic response. The average number of enrolled participants in the studies was 71 (range 8-214), and the average follow-up study duration was 69 days (range 3-180). Of the 8 interventions, 6 (75%) showed statistical significance (at the P=.05 level) in health outcomes. Conclusions This review found that digital health interventions incorporating machine learning algorithms in real-life studies can be useful and effective. Given the low number of studies identified in this review and that they did not follow a rigorous machine learning evaluation methodology, we urge the research community to conduct further studies in intervention settings following evaluation principles and demonstrating the potential of machine learning in clinical practice.
Introduction
Background Digital health interventions [1], including modalities such as telemedicine, Web-based strategies, email, mobile phones, mobile apps, text messaging, and monitoring sensors, have enormous potential to support independent living and self-management [2], and reduce health care costs [3]. They have also shown great promise in improving health [4]. With the advent of new tools and algorithms for machine learning, a new class of smart digital health interventions can be developed, which could revolutionize effective health care delivery [5].
The term machine learning is widely used across disciplines but has no universally accepted definition [6]. This is in part explained by the breadth of the areas it covers and because researchers from diverse disciplines have historically contributed (and still contribute) to its development. Broadly, it refers to an algorithmic framework that can provide insights into data, while facilitating inference and providing a tentative setting to determine functional relationships.
Machine learning has been applied in multiple health care domains, including diabetes [7], cancer [8], cardiology [9], and mental health [10]. Most of the developed machine learning models and tools in research settings have investigated the potential of prognosis [11], diagnosis [12], or differentiation of clinical groups (eg, a group with a pathology and a healthy control group or groups with pathologies) [13], thus demonstrating promise toward the development of computerized decision support tools [14]. The key requirements for the development of these tools are sufficiently large datasets (in terms of both number of participants and explanatory variables to explore) and accurate labels, typically provided by expert clinicians. The premise is the identification of those data structures or variables (eg, clinical, behavioral, or demographic variables) that are associated with the target outcome (eg, whether a person has cancer). In this regard, useful knowledge can be derived from the available data, which can empower patients to monitor their health status longitudinally and support health professionals in decision making with regard to management, treatment, and follow-up interventions where required.
Despite a considerably growing body of research literature in the use of machine learning in health care applications [15], it is astonishing how few of these suggestions are actually translated into clinical practice [16]. There is remarkably limited empirical evidence of the effectiveness of machine learning applications in digital health interventions. This is rather surprising, since any proposed health care solutions would reach their full potential only if they are embraced by the medical community, becoming integrated within properly designed digital health interventions and tested in real-life studies with patients and health professionals.
Objective
Considering that machine learning models and tools have not been widely and reliably used in clinical practice, whereas the peer-reviewed literature in the field is growing exponentially, we wanted to assess the progress made in smart data-driven health interventions applied in real-life research settings-that is, the real world in which constraints in available resources or opportunities to collect reliable data may exist, as opposed to simulation or laboratory-based studies [17]. In this direction, we present a systematic literature review of digital health interventions incorporating machine learning algorithms, by identifying and mapping their features and outcomes, with the aim to improve our knowledge of the design and development of impactful intelligent interventions.
Inclusion and Exclusion Criteria
We sought to identify digital health nonpharmacological interventions incorporating machine learning that were assessed in pragmatic studies. In this context, the inclusion criteria for study selection were (1) the study should be conducted with patients or health professionals, or both, in a real-life setting, (2) machine learning algorithms or models were used in the digital health intervention (rather than merely reporting statistical hypothesis testing results or statistical associations), (3) quantitative outcomes of the study were presented, and (4) the article describing the study was written in English. We excluded retrospective studies, case reports, ongoing studies, surveys or reviews, laboratory or simulation studies, studies describing protocols, qualitative studies, and all studies published before 2008 from the review because we wanted to determine the status of recent research developments in the field that have been used in clinical interventional settings.
Literature Search and Screening
We searched the PubMed and Scopus bibliographic databases for studies published after 2008 using the string "(machine learning) OR (data mining) OR (artificial intelligence) AND health" for search within the title, abstract, and keywords of the articles. We limited "Species" in PubMed to humans.
Both authors independently screened the identified articles following the literature search to minimize bias in the selection process. Any disagreements were resolved by discussion between the authors and reaching a consensus. We screened the abstracts of the candidate articles for inclusion and subsequently read the full text of the articles deemed eligible according to the inclusion criteria. Subsequently, we excluded articles not providing sufficient information about the application of machine learning or for being ineligible. We used the Effective Public Health Practice Project (EPHPP) tool to assess the methodological quality of the included studies, which has been found to be reliable [18]. The studies that focused on interventions were synthesized (AKT) according to their target (ie, target condition), study design, number of enrolled participants, follow-up duration, primary outcome and whether this was significantly positive, machine learning algorithms used in the intervention, and outcome of the algorithms (eg, prediction of a target outcome).
The systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [19]. Multimedia Appendix 1 shows a completed PRISMA checklist.
Literature Search Outcomes
Our last search in November 2018 returned 1386 articles from the PubMed database and 7024 articles from Scopus. We imported all the retrieved records into Mendeley (version 1.19.3) bibliography management software (Mendeley Ltd) [20], which identified 1093 duplicates. We screened the abstracts of the remaining 7317 results according to our inclusion and exclusion criteria and identified 21 eligible articles. The reviewers read the full text of the 21 articles and agreed on 8 for inclusion as eligible articles. The flow diagram in Figure 1 summarizes the reasons for excluding research articles for study inclusion following the PRISMA format ( Figure 1).
Quality Assessment
On the basis of the EPHPP criteria for selection bias, design, confounders, blinding, data collection, and dropouts, we found the methodological quality to be moderate for 2 of the 8 (25%) studies [21,22] and weak for the remaining 6 (75%) studies [23][24][25][26][27][28] (Table 1). Most studies were poorly rated because of selection bias, insufficient care in controlling for confounders, and the high percentage of withdrawals or dropouts (or the absence of their description). The design of a randomized or controlled clinical trial was described in 3 (37%) studies [21,25,28], and 5 (63%) interventions were evaluated in a pilot or experimental single-group study (Multimedia Appendix 2).
Type of Intervention and Target Population
The interventions targeted depression prediction and management [23], speech recognition for people with speech disabilities [24], self-efficacy for weight loss [22], detection of changes in biopsychosocial condition of patients with multiple morbidity [25], stress management [26], treatment of phantom limb pain [27], smoking cessation [21], and personalized nutrition based on glycemic response [28] (Multimedia Appendix 2).
Of the 8 interventions, 3 (37%) targeted patients: individuals with a diagnosis of depression [23], those with multiple morbidities such as lung disease and cardiovascular disease [25], and those with phantom limb pain [27]. One (13%) intervention targeted people with speech disabilities [24]; 4 (50%) interventions targeted individuals who had no explicit diagnosis of a disease or impairment [21,22,26,28]. All target groups comprised adults. The average number of enrolled participants in the studies was 71 (range 8-214), and the average follow-up study duration was 69 days (range 3-180).
Applications of Machine Learning and Outcomes
Overall, 6 of the 8 (75%) real-life studies of digital health interventions aided by machine learning algorithms showed statistical significance (at the P=.05 level) in health outcomes. Different summary measures were used in the identified studies to assess primary outcomes, which reflects the lack of standardization both in methodology and in the metrics used in the research fields. Where possible, we aimed to use the accuracy of the algorithms used and the P value (eg, for showing statistical significance of outcomes in an intervention group compared with a control group) as the principal summary measures. We briefly describe all included studies below in terms of intervention purpose and content, evaluation outcomes, and implications for clinical practice.
Burns et al [23] described a multicomponent mobile-based intervention that used machine learning models to predict the mood, emotions, cognitive and motivational states, activities, and environmental and social context of patients with depression, along with feedback graphs for self-reflection on behavior and coaching provided by caregivers. The predictive models were based on phone sensor-derived variables (eg, global positioning system, ambient light, phone calls), and regression along with decision trees was used. The accuracy of the models was promising for location prediction (60%-91%), but prediction was very poor for emotions such as sadness. Overall, the 8 participants in the study became less likely to meet the criteria for a diagnosis of major depressive disorder (P=.03), and their symptoms of depression and anxiety were decreased by the end of the study (P<.001). Patients were also satisfied with the intervention (5.71 average rating on a scale 1 to 7), and 6 of 7 treatment completers (86%) indicated that the intervention was helpful in understanding triggers for negative moods. Despite the benefits of self-reflection on behavior through the use of a multicomponent mobile health monitoring system and the clinical improvements shown in the study, the authors reported that the clinical utility of the prediction models they used should be improved, since the prediction outcomes (eg, location and mood) were merely displayed to the users, and there were no direct interventions based on them.
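To make the modeling pattern in such studies concrete, the following is a minimal, hypothetical sketch of predicting a location category from phone-sensor-derived features with a decision tree, in the spirit of the models described by Burns et al; the feature set, synthetic data, and model settings are illustrative assumptions and not those of the original intervention.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-window phone-sensor features: GPS cluster id, ambient light,
# call count, screen-on minutes; label = self-reported location category.
rng = np.random.default_rng(7)
X = np.column_stack([rng.integers(0, 5, 600),        # GPS cluster id
                     rng.uniform(0, 1000, 600),      # ambient light (lux)
                     rng.poisson(2, 600),            # calls in window
                     rng.uniform(0, 60, 600)])       # screen-on minutes
# Synthetic label: location mostly follows the GPS cluster, with some noise
y = (X[:, 0].astype(int) + (rng.random(600) < 0.1)) % 5

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
print("cross-validated accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```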
Hawley et al [24] described the use of a device capable of recognizing the speech of people with dysarthria and generating voice messages. The authors used hidden Markov models to determine the proximity of a spoken word to a personalized speech model for that individual. However, only 67% recognition accuracy was achieved in this real-life observational study with 9 participants. Participants noticed that ease of communication was reduced through the device compared with their usual communication method of either speaking or speaking supported by a conventional voice-output communication aid, mainly due to the low accuracy of speech recognition. Nevertheless, feedback from participants was positive about the device's concept, given that speech recognition was improved.
Manuvinakurike et al [22] focused on changes in self-efficacy for weight loss through the provision of personal health behavior change stories found on the internet. An algorithm based on adaptive boosting was developed to find the most relevant story based on the stage of change and the demographic characteristics of a user, along with the emotional tone and overall quality of the story (accuracy between 84% and 98% for the classification of 5 stages of change). Testing of the algorithm with 103 users revealed significantly greater increases in self-efficacy for weight loss (P=.02) and a statistically insignificant effect on change in decisional balance (P=.83). In addition, the medium used to tell the stories, being either text or an animated conversational agent, had no effect on health behavior change. The authors concluded that their approach could maximize participants' engagement in longitudinal health behavior change interventions.
Martin et al [25] used a system in which decision trees could predict unplanned hospital visits of patients with multiple morbidities such as lung disease or cardiovascular disease. Alerts were sent to health professionals, who acted on the alerts according to agreed guidelines. The system was based on information received via patient phone calls with lay care guides. Linguistic and metalinguistic features were extracted, together with the patient's status, to train the prediction models (positive predictive value of 70% for predicting unplanned events). A randomized controlled trial with 214 patients for 6 months (the largest trial we found in the review in terms of number of enrolled participants and duration) showed a reduction of 50% in the number of unplanned hospital events of participants in the intervention group compared with control. The most common response to an alert indicating that a patient needed attention (red alert) was to phone the patient the next day to reassess the situation and contact their general practitioner (3% of calls), suggest or plan a visit to their general practitioner (11% of calls), or call an ambulance (<0.01% of calls). In summary, the authors reported that predictive analytics on an ongoing basis could be used to signify risk of hospitalization and guide the health care system to take appropriate actions.
Morrison et al [26] used push notifications to enhance engagement of smartphone users for stress management. They used a naïve Bayes classifier to predict whether a user would respond to a notification, thereby building a personalized intelligent mechanism for notification delivery, based on the times within a day a user was more likely to view and react to the received messages. However, this exploratory study with 77 participants showed no statistically significant difference between participants receiving the messages sent "intelligently" and those receiving a message daily or occasionally within 72 hours (Cohen d=0.14 for intelligent vs daily group and d=0.5 for intelligent vs occasional group, for actions taken in response to messages). Although notification delivery based on time had no effect on the study groups (ie, response to notifications was no different), the authors concluded that frequent daily messages may not deter users from engaging with digital health interventions.
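A minimal sketch of the idea behind "intelligent" notification timing is shown below: a naïve Bayes classifier is fitted on a hypothetical log of past notifications and then scores candidate delivery times, so the prompt is sent when the user is most likely to respond. All features, data, and thresholds are invented for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical notification log: hour of day, day of week, minutes since last
# phone use; label = whether the user opened the notification.
rng = np.random.default_rng(1)
X = np.column_stack([rng.integers(0, 24, 500),
                     rng.integers(0, 7, 500),
                     rng.exponential(30, 500)])
y = (X[:, 0] > 17) & (X[:, 2] < 20)      # synthetic "responds in the evening" rule

clf = GaussianNB().fit(X, y)

# Score candidate delivery hours for a given day and pick the most promising slot
candidates = np.array([[h, 2, 10.0] for h in range(8, 22)])
best_hour = candidates[np.argmax(clf.predict_proba(candidates)[:, 1]), 0]
print("send stress-management prompt at hour:", int(best_hour))
```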
Ortiz-Catalan et al [27] applied myoelectric pattern recognition algorithms for the control of a virtual limb in patients with phantom limb pain and used gaming along with augmented and virtual reality for treatment. This single-group study with 14 participants revealed that patients' symptoms of phantom limb pain were significantly decreased (by about 50%) at the end of the provided treatment for 6 months (P=.0001 for reduction in intensity and quality of pain). The authors suggested that their novel treatment could be used after failure of evidence-based treatments such as mirror therapy and before proceeding with invasive or pharmacological approaches.

Sadasivam et al [21] used a recommender system to send motivational messages to individuals, targeting smoking cessation. The system was based on Bayesian probabilistic matrix factorization to predict message rating, through the processing of data from the user's previous ratings of messages, along with other users' ratings. This randomized controlled trial with 120 users showed that the system was more effective at influencing people to quit smoking than were standard tailored messages (rule-based system) with proven effectiveness (P=.02) and resulted in a similar cessation rate. The authors concluded that their recommender system could be used instead of standard systems for influencing smoking cessation because it was more personalized (it learned and adapted to a person's behavior) and could incorporate a considerably greater number of variables; however, larger trials would be needed to demonstrate the system's effectiveness.
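The following is a rough sketch of the matrix-factorization idea underlying such message recommenders. The original system used Bayesian probabilistic matrix factorization; this simplified variant uses plain stochastic gradient descent on synthetic ratings, so it only illustrates the concept of predicting message ratings from latent user and message factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_messages, k = 50, 200, 8
ratings = rng.integers(1, 6, size=(n_users, n_messages)).astype(float)
observed = rng.random((n_users, n_messages)) < 0.05       # ~5% of ratings known

U = 0.1 * rng.standard_normal((n_users, k))               # latent user factors
M = 0.1 * rng.standard_normal((n_messages, k))            # latent message factors
lr, reg = 0.01, 0.05

for epoch in range(50):
    for i, j in zip(*np.nonzero(observed)):
        err = ratings[i, j] - U[i] @ M[j]
        U[i] += lr * (err * M[j] - reg * U[i])
        M[j] += lr * (err * U[i] - reg * M[j])

# Recommend the unrated message with the highest predicted rating for user 0
pred = U[0] @ M.T
pred[observed[0]] = -np.inf
print("next message index:", int(np.argmax(pred)))
```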
Zeevi et al [28] used gradient boosting regression to predict the postmeal glycemic response of individuals in real life, according to blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota. The results from this randomized controlled study with 24 participants showed that a personalized diet based on postmeal glycemic predictions could statistically significantly modify elevated postprandial blood glucose (P<.05 for predicting low levels of blood glucose ["good diet"] vs high levels of blood glucose ["bad diet"], which was comparable with diets selected by experts). The authors reported that their approach could be used in nutritional interventions for controlling or preventing disorders associated with poor glycemic control, such as obesity, diabetes, and nonalcoholic fatty liver disease. However, evaluation periods of months or even years would be needed first to clearly indicate the effectiveness of the proposed algorithm.
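As an illustration of this modeling approach (not the authors' actual pipeline or data), a gradient boosting regressor can be trained on meal-level features to predict a glycemic response; the feature matrix and target below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical per-meal features: blood parameters, dietary habits, anthropometrics,
# activity and microbiome summaries; target: post-meal glycemic response.
rng = np.random.default_rng(42)
X = rng.normal(size=(800, 20))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=800)  # synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print("R^2 on held-out meals:", round(model.score(X_te, y_te), 3))
```

A personalized diet could then be composed by ranking candidate meals for an individual by their predicted response, which is the design choice that makes such models directly actionable in an intervention.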
Principal Findings
This review is, to our knowledge, the first to systematically examine the features and outcomes of digital health interventions incorporating machine learning that were implemented and assessed in real-life studies [17]. With this aim in mind, we differentiated our review from previous investigations that focused only on the broader use of artificial intelligence in medicine in the context of specific diseases [29,30], machine learning techniques [31,32], or risk prediction models, such as through mining of electronic health records [33,34], and did not consider real-life evaluation of the respective interventions. The need to demonstrate evidence of an intervention's effectiveness in the real world has been highlighted in several other studies [35][36][37]. Our main finding is that most of the digital health interventions showed significantly positive health outcomes for patients or healthy individuals, which demonstrates the virtue of machine learning applications in actual clinical practice. However, given the small number of studies identified in this review and their considerable limitations highlighted above, further work is warranted to demonstrate the effectiveness of digital interventions relying on machine learning applications in real-life medical care.
Our review found 8 different cases of machine learning applications in a real-life setting: depression prediction and management, speech recognition for people with speech disabilities, self-efficacy for weight loss, detection of changes in biopsychosocial condition of patients with multiple morbidity, stress management, treatment of phantom limb pain, smoking cessation, and personalized nutrition based on glycemic response. The reviewed studies had several implications for clinical practice, such as better engagement of patients with interventions [22], the identification of risk for hospitalization [25], or the introduction of novel treatment methods [27]. Among the studies, those for speech recognition of people with speech disabilities [24] and notification delivery for stress management [26] clearly reported insignificant outcomes, whereas 6 studies showed significant outcomes, but they were of low to moderate methodological quality. Only 3 studies were in the form of a randomized controlled trial, which limited the ability to fully identify the added value of machine learning-enabled interventions compared with standard care. To this end, further rigorous studies with adequately powered samples (recruiting considerably more participants than the average number of 71 participants found in this review) are needed, which would generate the evidence base for the effectiveness of machine learning in clinical practice. To that effect, large trials and publicly accessible databases that have become available over the last few years, such as the UK BioBank and the Physionet database, are providing rich resources that could facilitate insights.
The delivery of motivational messages [21,26] or stories [22] for health behavior change and engagement seems to be an emerging area of digital health interventions incorporating machine learning. These studies also demonstrated the latest efforts to promote individuals' personalized self-management and to put them at the center of health care [38]. Considering the effectiveness of tailored messaging in influencing health behavior change [39], further research in this area is warranted.
The surprisingly small number of identified pragmatic studies in our review might raise some concerns and indicates the substantial challenge of systematically evaluating digital health interventions that incorporate machine learning [40]. In this context, the retrospective validation of algorithms and models, given the availability of one or more datasets, constitutes only the first step in the evaluation process [28]. The second step involves the integration of the algorithms and models within a digital health tool, such as mobile phone-based tools [23], internet-based tools [14], or an aid device [24]. The third step requires the assessment of the developed tool as a digital health intervention in a real-life research setting (eg, through a randomized controlled trial), together with patients or health professionals, or both [28,41]. The final step would be the monitoring of actual uptake and use of the intervention in real-world settings and outside of a research setting [42], which is, however, rarely reported [43]. Admittedly, this process is challenging and anything but trivial. It requires a significant amount of time and resources, which might not always be available, and multidisciplinary collaboration among experts in different fields, such as engineering, computer science, behavioral science, and medicine, which might not be straightforward. However, such synergistic collaborative approaches are likely necessary in the development of evidence-based, sustainable, and impactful digital health interventions [44,45].
Limitations
We used the term machine learning, along with broader terms such as data mining and artificial intelligence, for our literature search, rather than keywords for specific machine learning algorithms or domains relevant to digital health, such as telemedicine. This might have inadvertently omitted studies that could have contributed to the progress made in machine learning applications for digital health. We combined the aforementioned terms with the generic term health, aiming to conduct a broad search within the provided boundaries and to include the most pertinent articles relevant to digital health. We searched for articles in a limited number of databases (ie, PubMed and Scopus), which nevertheless are two of the most widely used databases internationally [46]. We did not hand search any studies reported in other reviews or the included studies, and we did not assess the interrater reliability. A meta-analysis was not possible due to the heterogeneity of the included studies.
Conclusion
Our review showed that real-life digital health interventions incorporating machine learning can be useful and effective. Considering the small number of studies examined in this review and their limitations, further evidence of the clinical usefulness of machine learning in health service delivery is needed. We encourage researchers to move beyond the retrospective validation of machine learning models, by integrating their models within appropriately designed digital health tools and evaluating their tools in rigorous studies conducted in real-life settings. | 5,444.6 | 2019-04-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
APPLICATION OF THE RATIONAL (G′/G)-EXPANSION METHOD FOR SOLVING SOME COUPLED AND COMBINED WAVE EQUATIONS
In this paper, we explore the travelling wave solutions for some nonlinear partial differential equations by using the recently established rational (G′/G)-expansion method. We apply this method to the combined KdV-mKdV equation, the reaction-diffusion equation and the coupled Hirota-Satsuma KdV equations. The travelling wave solutions are expressed by hyperbolic functions, trigonometric functions and rational functions. When the parameters are taken as special values, the solitary waves are also derived from the travelling waves. We have also given some figures for the solutions.
Introduction
Over the past decades, travelling wave solutions of nonlinear partial differential equations (NLPDEs) have played an important role in physics, engineering and applied mathematics. The mathematical models in these fields give important information about the behaviour of the underlying physical events, so it is very important to obtain travelling wave solutions of NLPDEs [32]. NLPDEs have rich structures that describe many phenomena in physics, chemistry and engineering, for example fluid flow, plasma waves, mechanics, solid state physics, and oceanic and atmospheric phenomena. Since there is no single method that can solve all types of nonlinear evolution equations, many researchers have proposed various methods to find solutions of nonlinear partial differential equations and nonlinear fractional differential equations [36][37][38][39][40], such as the inverse scattering transform method [1], Hirota's bilinear method [2], the truncated Painlevé expansion method [3], the tanh-function expansion method [4], the Jacobi elliptic function expansion method [5], the homogeneous balance method [6][7][8], the trial function method [9], the exp-function method [10,34], the differential transform method [33], the Bäcklund transform method [11], the generalized Riccati equation method [12][13][14][15], the sub-ODE method [17][18][19][20], the original (G′/G)-expansion method [16,29], the double (G′/G, 1/G)-expansion method [35], etc.
Some researchers have established several powerful and direct methods. Wang et al. [16] first introduced the (G′/G)-expansion method to find travelling wave solutions of nonlinear evolution equations. Later, Islam et al. [21] proposed the rational (G′/G)-expansion method, which aims to derive closed-form travelling wave solutions. In this paper, we use the rational (G′/G)-expansion method and apply it to the combined KdV-mKdV equation, the reaction-diffusion equation, and the coupled Hirota-Satsuma KdV equations. We derive abundant solutions for each equation that are different from the solutions in the literature.
Description of the Method
Suppose that u = u(x, t) is an unknown function of the variables x and t, and consider a polynomial P in u(x, t), its various-order partial derivatives, and its nonlinear terms:

P(u, u_t, u_x, u_tt, u_xt, u_xx, ...) = 0.   (1)

We use the following steps to solve Eq. (1) by means of the rational (G′/G)-expansion method.
Step 1: We introduce a new variable U(ξ) in terms of x and t through the transformation

u(x, t) = U(ξ),  ξ = x − st + ξ_0,   (2)

where ξ_0 is a constant and s is the velocity of the wave. The transformation in Eq. (2) converts Eq. (1) into an ordinary differential equation (ODE) for u = U(ξ),
Q(U, U′, U″, ...) = 0,   (3)

where U and its derivatives with respect to ξ are the elements of the polynomial Q of U(ξ).
Step 2: Next, we integrate Eq. (3) once or twice where possible. Suppose that the solution of Eq. (3) can be written in the following form:

U(ξ) = (Σ_{j=0}^{n} a_j (G′/G)^j) / (Σ_{j=0}^{n} b_j (G′/G)^j),   (4)

where a_j and b_j (j = 0, 1, 2, ..., n), with a_n ≠ 0 and b_n ≠ 0, are arbitrary coefficients to be found later. Next, we introduce the function G = G(ξ), which satisfies the following second-order ODE:

G″ + λG′ + μG = 0,   (5)

where λ and μ are real constants. We convert Eq. (5) into (G′/G) form:

(G′/G)′ = −(G′/G)² − λ(G′/G) − μ.   (6)

From Eq. (5) or Eq. (6), the solution for (G′/G) follows (Eq. (7)), where c_1 and c_2 are constants.
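For reference, the general solution of Eq. (5) expressed through (G′/G) takes the standard three-family form below; the arrangement of the integration constants c_1 and c_2 may be written differently in [21]:

$$
\frac{G'}{G}=
\begin{cases}
-\dfrac{\lambda}{2}+\dfrac{\sqrt{\lambda^{2}-4\mu}}{2}\,
\dfrac{c_{1}\sinh\!\left(\frac{\sqrt{\lambda^{2}-4\mu}}{2}\xi\right)+c_{2}\cosh\!\left(\frac{\sqrt{\lambda^{2}-4\mu}}{2}\xi\right)}
      {c_{1}\cosh\!\left(\frac{\sqrt{\lambda^{2}-4\mu}}{2}\xi\right)+c_{2}\sinh\!\left(\frac{\sqrt{\lambda^{2}-4\mu}}{2}\xi\right)}, & \lambda^{2}-4\mu>0,\\[1.5ex]
-\dfrac{\lambda}{2}+\dfrac{\sqrt{4\mu-\lambda^{2}}}{2}\,
\dfrac{-c_{1}\sin\!\left(\frac{\sqrt{4\mu-\lambda^{2}}}{2}\xi\right)+c_{2}\cos\!\left(\frac{\sqrt{4\mu-\lambda^{2}}}{2}\xi\right)}
      {c_{1}\cos\!\left(\frac{\sqrt{4\mu-\lambda^{2}}}{2}\xi\right)+c_{2}\sin\!\left(\frac{\sqrt{4\mu-\lambda^{2}}}{2}\xi\right)}, & \lambda^{2}-4\mu<0,\\[1.5ex]
-\dfrac{\lambda}{2}+\dfrac{c_{2}}{c_{1}+c_{2}\,\xi}, & \lambda^{2}-4\mu=0.
\end{cases}
$$

These three branches correspond to the hyperbolic, trigonometric and rational families of travelling wave solutions mentioned in the abstract.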
Step 3: To determine the value of n, the degree of U(ξ) in Step 2, we apply the homogeneous balance method, that is, balancing the highest-order nonlinear terms against the highest-order derivatives in Eq. (3). The degrees of the terms in Eq. (3) can be written as [21]

deg[U(ξ)] = n,  deg[d^qU/dξ^q] = n + q,  deg[U^p (d^qU/dξ^q)^s] = np + s(n + q),

where deg[U(ξ)] is the degree of U(ξ).
Step 4: After determining the value of n, we substitute Eq. (4) along with Eq. (5) into Eq. (3). Equating the coefficients of each power of (G′/G) to zero gives a system of algebraic equations. To solve these equations we use computer algebra software such as Maple or Mathematica. If a solution exists, we obtain values for a_i, b_i, λ, μ and s (i = 0, 1, 2, ..., n).
Step 5: Finally, we substitute the values of a_i, b_i (i = 0, 1, 2, ..., n), λ, μ and s, together with the solutions for (G′/G) given in Eq. (7), into Eq. (4) to obtain the travelling wave solutions.
Example 1. The combined KdV-mKdV equation
The KdV and mKdV equations are widely studied soliton equations. The nonlinear terms appearing in the KdV and mKdV equations often arise in applied science and engineering, for example in plasma physics, ocean dynamics and quantum field theory [22][23][24]. If we combine the quadratic nonlinear term of the KdV equation and the cubic nonlinear term of the mKdV equation, we obtain the combined KdV-mKdV equation, or Gardner equation [25],

u_t + α u u_x + β u² u_x + u_xxx = 0,   (8)

where α and β are nonzero parameters. This equation describes the wave propagation of bound particles, sound waves and thermal pulses [26][27][28].
Using the travelling wave transformation in Eq. (2), Eq. (8) reduces to

−sU′ + αUU′ + βU²U′ + U‴ = 0,   (9)

where s is the velocity of the wave and the primes denote derivatives of U with respect to ξ. Next, we integrate Eq. (9) once and deduce

U″ + (α/2)U² + (β/3)U³ − sU + C = 0,   (10)

where C is an integration constant to be found later. Applying the homogeneous balance method, i.e., balancing the terms U″ and U³ in Eq. (10), we get n = 1, so Eq. (4) becomes

U(ξ) = (a_0 + a_1(G′/G)) / (b_0 + b_1(G′/G)).   (11)

Next, we substitute Eq. (11) into Eq. (10) and organize the equation in terms of powers of (G′/G). Equating the coefficients of (G′/G) and its powers to zero gives a system of algebraic equations for a_0, b_0, a_1, b_1, s and C. Solving this set of equations with the computer algebra program Maple, we obtain the corresponding sets of solutions.
Example 2. The reaction-diffusion equation
We have the reaction-diffusion equation [30]

u_tt + αu_xx + βu + γu³ = 0,   (22)

where α, β and γ are nonzero constants. The travelling wave transformation in Eq. (2) reduces Eq. (22) to the ODE

(s² + α)U″ + βU + γU³ = 0,   (23)

where s is the velocity of the wave. Next, we express the solution of Eq. (23) in terms of (G′/G) as written in Eq. (4), where G = G(ξ) satisfies the second-order linear ODE in Eq. (5).
Hence equating the coefficients of the resulting equation to zero, gives a system of algebraic equations for a 0 , b 0 , a 1 , b 1 and s. Solving the set of equations by using the computer programme, we get the following set of solutions: where b 1 , λ and µ are all arbitrary constants. Substituting Eq.(25) into Eq.(24) we get the following solution and (G ′ /G) is given in Eq. (7). Substituting Eq. (7) into Eq.(26), we deduce the following travelling wave solutions.
Example 3. The coupled Hirota-Satsuma KdV equations
The coupled Hirota-Satsuma KdV equations (CHSK) describe an interaction of two long waves with different dispersion relations [31]. We consider the CHSK equations in the following form. Making the travelling wave transformations, where s is the velocity of the wave to be determined later, we get the CHSK equations in the following form. By balancing the highest-order derivatives and nonlinear terms in Eq. (34), we get n = 2, and from Eq. (4) we write the solutions of Eq. (33). Substituting Eq. (35) into Eq. (34), we convert Eq. (34) into a polynomial in (G′/G). Equating the coefficients of the same powers of (G′/G) to zero yields a set of simultaneous algebraic equations. Solving this set of equations for a_i, b_i, e_i, d_i (i = 0, 1, 2) and s using the computer algebra program, we get the following set of solutions, where a_0, b_0, e_2, λ and μ are constants. Substituting Eq. (36) into Eq. (35), we reach the following solutions, and (G′/G) is given in Eq. (7). Substituting Eq. (7) into Eq. (37), we deduce the travelling wave solutions.
Conclusion
In this paper, we have obtained various types of travelling wave solutions for the combined KdV-mKdV equation, the reaction-diffusion equation, and the coupled Hirota-Satsuma KdV equations by using the rational (G′/G)-expansion method. The main idea of this method is to reduce the partial differential equation to an ODE by means of the travelling wave transformation (Eq. (2)); after integrating the ODE in Eq. (3) once or twice, the ODE is expressed in a compact form. This ODE can then be written as an n-th degree polynomial in terms of (G′/G), where G = G(ξ) is the general solution of the second-order linear ODE in Eq. (5). To find the positive integer n, we use the homogeneous balance method, that is, balancing the highest-order derivative term against the nonlinear term. The coefficients of the polynomials are obtained by solving a set of algebraic equations. Generally, the resulting algebraic equations can be solved using the Maple software program. It is usually possible to find a solution of the algebraic equations, but the existence of a solution cannot in general be guaranteed. Despite this, the rational (G′/G)-expansion method remains a powerful method for finding travelling wave solutions of nonlinear evolution equations. It is also direct, concise and elementary, since the general solution of the second-order ODE in Eq. (5) is well known, and it is effective in that it can be applied to many other nonlinear evolution equations, such as the generalized shallow water wave equation, the compound KdV-Burgers equations, the Klein-Gordon equation, the generalized KPP equation, the approximate long water wave equations, the coupled nonlinear Klein-Gordon-Zakharov equations, and so on. Therefore, various explicit solutions of these nonlinear evolution equations can be obtained by this method.
Author Contribution Statements Authors contributed equally and they read and approved the final manuscript.
Declaration of Competing Interests The authors declare that they have no competing interest. | 2,531.2 | 2022-03-31T00:00:00.000 | [
"Mathematics"
] |
High-Temperature Nano-Indentation Creep of Reduced Activity High Entropy Alloys Based on 4-5-6 Elemental Palette
There is a strong demand for materials with inherently high creep resistance in the harsh environment of next-generation nuclear reactors. High entropy alloys have drawn intense attention in this regard due to their excellent elevated temperature properties and irradiation resistance. Here, the time-dependent plastic deformation behavior of two refractory high entropy alloys was investigated, namely HfTaTiVZr and TaTiVWZr. These alloys are based on reduced activity metals from the 4-5-6 elemental palette that would allow easy post-service recycling after use in nuclear reactors. The creep behavior was investigated using nano-indentation over the temperature range of 298 K to 573 K under static and dynamic loads up to 5 N. The creep stress exponent for HfTaTiVZr and TaTiVWZr was found to be in the range of 20–140 and the activation volume was ~16–20b³, indicating a dislocation-dominated mechanism. The stress exponent increased with increasing indentation depth due to a higher density of dislocations and their entanglement at larger depth, and the exponent decreased with increasing temperature due to thermally activated dislocations. Smaller creep displacement and higher activation energy for the two high entropy alloys indicate superior creep resistance compared to refractory pure metals like tungsten.
Introduction
There is a strong demand for refractory metals for use in the harsh environment of next-generation nuclear reactors because of their high melting point, elevated temperature strength, and creep resistance [1,2]. High entropy alloys (HEAs) or complex concentrated alloys (CCAs), composed of multiple principal elements in equimolar or near equimolar proportions, have drawn intense attention as structural materials for the nuclear industry due to their excellent elevated temperature properties and irradiation resistance [3][4][5]. These systems consist of a single phase in HEAs or multiple solid solution phases in CCAs [3,4]. In particular, refractory high entropy alloys (R-HEAs) constituting refractory elements promise a gamut of excellent elevated temperature properties [6][7][8]. HfTaTiVZr and TaTiVWZr are two newly developed HEAs consisting of 4-5-6 refractory elements [5,9]. These alloys were found to have higher strength at elevated temperatures, superior to conventional nuclear reactor materials such as P92 and SS316 steels [9]. The inherently low activity of the constituent elements makes them attractive for next-generation nuclear reactor applications due to the short "hands-on" time and easier post-service recycling prospect [9].
Here, the nano-indentation creep behavior was evaluated for two novel refractory high entropy alloys, HfTaTiVZr and TaTiVWZr, at high load and elevated temperature. We performed two sets of experiments in the current study: (i) static constant load hold (CLH) protocol was adopted for one set of experiments to exclude possible surface effects and (ii) dynamic mechanical analysis (DMA) mode was used to study oscillatory load and indentation size effect. This allowed the comparison of response from static and dynamic loads, for which there is no systematic study in literature. The effect of temperature on the creep behavior of the R-HEAs was also investigated systematically. Stress exponent, activation volume, and activation energy of alloys at different combinations of temperature and loads were compared with pure tungsten (W), a reference metal used in nuclear reactors [33].
Experimental
Alloys with a nominal composition of HfTaTiVZr (named as Ta-Hf alloy hereafter) and TaTiVWZr (named as Ta-W alloy hereafter) with an equimolar proportion of constituent elements were prepared by arc-melting high purity elements (>99.9%) in a Ti-gettered argon atmosphere. The ingots were flipped and remelted at least four times to ensure chemical homogeneity. The as-cast alloys and pure W were polished with silicon carbide papers and diamond suspension to a mirror finish for microstructural characterization and nano-mechanical tests. The structure of the alloys was characterized using the Rigaku III Ultima X-ray diffractometer (XRD Rigaku Corporation, Tokyo, Japan) with a 1.54 Å wavelength Cu-Kα radiation. Scanning electron microscopy (SEM) was done using FEI Quanta ESEM (FEI Company, Hillsboro, OR, USA) with in-built energy-dispersive spectroscopy (EDS) to analyze the grain size and microstructure. Transmission electron microscopy (TEM) of as-cast alloys was performed on FEI Tecnai F20 operating at 200 kV. Thin foils for transmission electron microscopy were made using FEI Nova NanoLab 200 focused ion beam SEM (FIB-SEM).
Nano-indentation creep tests were done using Triboindenter TI-Premier (Bruker, Minneapolis, MN, USA) equipped with the XSol600 heating stage to heat samples up to 600 °C. The tests were done in an Ar + 5% H2 gas environment to avoid oxidation. A Berkovich sapphire tip was used for all the creep tests. A standard fused quartz reference sample was used for the initial tip calibration. For each material, two different types of tests were performed: (i) static constant load hold (CLH) and (ii) dynamic load hold in DMA mode. The static creep tests were done by ramping the load to 1 N and 5 N at temperatures of 298 K, 423 K, and 573 K, and then held at maximum load for 120 s to determine creep response before unloading. High loads were used to avoid surface effects. In dynamic load tests, 100 mN, 500 mN, and 1000 mN loads were used to study the creep behavior. The frequency and amplitude were set to 100 Hz and 10% of the peak load, respectively. In all tests, a high loading rate of 20 mN/s was chosen to minimize plastic deformation during the loading segment. The samples were held at a prescribed temperature for at least 20 min to reduce the temperature gradient between the tip and the sample and to allow the indenter tip to reach a steady state. In addition, thermal drift was automatically corrected by the Triboindenter software and was between 0.05 and 0.1 nm/s during testing. At least 12 indents were made for each condition to get statistical variation. The distance between two indents was kept larger than 100 µm to avoid overlapping of their plastic zones.
Results
All constituent elements of the two current refractory high entropy alloys belong to the 4-5-6 elemental palette with high melting points, as shown in Figure 1a. The relatively short period of time required for the refractory elements found in the current HEAs (i.e., Hf, W, Ta, Ti, V, and Zr), compared to other elements such as Mo, Nb, and Ni, to reach "hands-on" level after irradiation is shown in Figure 1b [34]. X-ray diffraction patterns of Ta-Hf and Ta-W HEAs in as-cast and annealed (at 723 K for 2 h) conditions are shown in Figure 1c,d, respectively. The peaks were indexed to a single-phase BCC structure for Ta-Hf without any secondary phases or precipitates. Ta-W, on the other hand, showed a BCC1 major phase and a BCC2 minor phase. Both HEAs annealed at 723 K for 2 h showed an identical structure to the cast alloy, confirming their good microstructural stability. Backscattered SEM images of the as-cast Ta-Hf and Ta-W alloys are shown in Figure 1e,f. The representative load-displacement curves from the nano-indentation creep of the three systems are shown in Figure 2a,b at a load of 1 N and temperatures of 298 K and 573 K. The plots for other conditions were very similar and are not shown here. An array of 2 × 6 indents was made with 100 µm separation, covering several grains and therefore representative of several grain orientations. Overall, under the same load and temperature, the indentation depth in the Ta-Hf HEA was greater than that in Ta-W and W, indicating the lowest hardness among the three systems studied. Figure 2c,d show the change of indentation depth versus holding time at the peak load of 1 N (at 298 K and 573 K). There was an initial sharp rise in creep depth followed by a slowing rate of increase. Creep displacement varied with the alloy composition, load, and temperature. Figure 2e,f represent the magnitude of total creep displacement as a function of temperature for W, Ta-Hf, and Ta-W alloys at 1 N and 5 N. The total creep displacement was in the range of 50 nm to 450 nm depending on alloy composition, holding load, and temperature. A minimum of 12 indents were made in each condition for each alloy and the mean value was considered for comparison; the error bars are likely attributable to the different crystal orientations. The creep displacement was larger at higher temperature and load. Creep is a thermally activated process and partly depends on dislocation mobility, which increases at higher temperature and load. The reduction in creep displacement for the two HEAs at 423 K may be explained in terms of dislocation mobility limitation by phonon-drag or the secondary Peierls barrier [35]. The creep displacement for the Ta-Hf and Ta-W high entropy alloys was roughly half of that of pure W, which may be attributed to the sluggish diffusion and highly distorted lattice structure in HEAs.
Figure 1. (a) Refractory elements belonging to the 4-5-6 group/period; (b) time in years required for group 4-5-6 refractory elements to reach "hands-on" level after exposure [34]. X-ray diffraction analysis of (c) HfTaTiVZr (Ta-Hf) and (d) TaTiVWZr (Ta-W) refractory high entropy alloys in as-cast and annealed conditions showing single-phase body-centered cubic (BCC) crystal structure for Ta-Hf and a BCC1 major phase and BCC2 minor phase for Ta-W; backscattered scanning electron microscopy image of (e) Ta-Hf and (f) Ta-W alloys showing equiaxed grains with an average grain size of ~250 µm for Ta-Hf and formation of two phases in Ta-W; insets showing selected area diffraction pattern of the alloys. Energy-dispersive X-ray spectroscopy of (g) Ta-Hf and (h) Ta-W alloys confirming a homogeneous distribution of elements in Ta-Hf alloy and partitioning of Ta and W into dendrite phase and Ti, V, and Zr into the matrix in Ta-W alloy.
The creep behavior of Ta-Hf and Ta-W HEAs was also studied under dynamic load and compared with W. The samples were loaded to predefined loads of 100 mN, 500 mN, and 1000 mN and held for 120 s under oscillatory loads. The amplitude of load was fixed at 10% of the peak load. Figure 3a,b show the creep displacement versus holding time as a function of load for Ta-W at 298 K and 423 K. The maximum creep depth increased with increasing load and the same trend was observed for Ta-Hf and W. Figure 3c,d show the maximum creep displacement as a function of load during dwell time for all studied alloys at 298 K and 423 K, respectively. Increasing peak load and temperature led to an increase in creep displacement. Creep displacement was found to be similar for the two HEAs and lower than that of W, in agreement with the static load data. The effect of surface oxidation was minimal (i.e., the sample maintained a shiny appearance) because of the Ar + H2 gas environment and high loads used for the creep tests.
Discussion
During holding at constant load, the creep displacement h is expressed as a function of time t as [28]

h(t) = h_0 + a(t − t_0)^p + kt,   (1)

where h_0 and t_0 are the indentation depth and time at the beginning of the holding segment, and a, p, and k are fitting constants. Equation (1) showed a correlation coefficient R² > 0.95 with the experimental data, as shown in Figure 2c,d with dashed lines. For the self-similar Berkovich indentation tip, the following were used to obtain the indentation strain rate (Equation (2)) and hardness (Equation (3)) [28]:

ε̇ = (1/h)(dh/dt),   (2)

H = P / (24.56 h_c²),   (3)

where dh/dt is the time derivative of the fitted displacement-time curve (i.e., Equation (1)), P is the applied load, h_c is the contact depth, given by h_c = h_max − 0.75 P/S for a Berkovich indenter, and h_max and S are the maximum penetration depth and material stiffness, respectively.
The stress exponent n, which provides valuable insight into the creep deformation process, was calculated as [28]

n = 1/m = ∂(ln ε̇)/∂(ln H),   (4)

where m is the strain rate sensitivity. Figure 4a,b show the stress exponent for W, Ta-Hf, and Ta-W and its dependence on temperature at static loads of 1 N and 5 N. The stress exponent as a function of dynamic load for the selected alloys at 298 K and 423 K is shown in Figure 4c,d. Each value of n was averaged over 12 independent indentations and fell in the range of 20 to 140 depending on the alloy composition, temperature, and load. The creep stress exponent typically indicates the creep mechanism: n = 1 is associated with diffusion creep, n = 2 with grain boundary sliding, and n ≥ 3 with dislocation creep. Therefore, for the three current systems, despite the wide range of n, dislocation creep was the dominant mechanism. The stress exponent of the HEAs at 1 N decreased significantly from 130-140 to 30-50 with an increase in temperature due to thermally activated dislocations at elevated temperature; however, the reduction was not as sharp in the case of pure W (from ~50 to ~30). The magnitude of the stress exponent depends on the density of dislocations involved during deformation and on whether the deformation is dominated by generation or annihilation of dislocations [36]. The decrease of the n value with increasing temperature is due to the enhancement of dislocation movement and more thermal recovery at higher temperature compared to their generation [17]. A similar trend was reported for other alloys [28,37]. The significantly larger drop in stress exponent with increasing temperature for the HEAs may be due to less dislocation movement in their highly distorted lattice structure at room temperature, leading to a high n value. With an increase in temperature, thermally activated processes and dislocation movement were more significant and the stress exponent decreased sharply, whereas for W, even at room temperature, the annihilation rate may be high. At a load of 5 N, the same behavior was observed for W (i.e., a decrease of n at higher temperature); however, the two HEAs demonstrated slightly different behavior. There was an increase of n with increasing temperature from 298 K to 423 K and then a reduction at 573 K. This is consistent with their lower creep displacement at the intermediate temperature shown in Figure 2f. The increase of the stress exponent at 423 K for the high entropy alloys may be the result of the generation of more dislocations, which act as barriers for their movement [11]. Thermally activated cross slip of dislocations restricted their movement and resulted in an increase of n [38]. However, with a further increase in temperature to 573 K, thermally activated movement of dislocations was easier and caused a drop in n. This trend was seen only at the maximum load of 5 N, which may be due to a higher generation rate of dislocations at 5 N compared to 1 N [39], which increased the possibility of entanglement. Moreover, in BCC metals/alloys, the strain rate sensitivity and thermally activated deformation are controlled by kink-pair nucleation and screw dislocation propagation. The mobility of screw dislocations in BCC-HEAs may be restricted due to the lattice distortion. With increasing temperature, as a result of thermal fluctuations, the migration of a screw dislocation line from one Peierls valley to an adjacent one increases.
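The analysis pipeline implied by Equations (1)–(4) can be summarized in a short numerical sketch: fit the holding-segment displacement, derive the strain rate and hardness, and take the stress exponent as the inverse slope of ln(H) versus ln(strain rate). The synthetic data, the 24.56 Berkovich area constant, the approximation of contact depth by total depth, and t_0 = 0 are illustrative assumptions, not values extracted from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic holding-segment data: time (s) and indentation depth (nm)
t = np.linspace(0, 120, 400)
h = 2000 + 120 * t**0.35 + 0.2 * t + np.random.default_rng(0).normal(0, 2, t.size)

def creep_fit(t, h0, a, p, k):          # Eq. (1) with t0 = 0
    return h0 + a * np.clip(t, 0, None)**p + k * t

popt, _ = curve_fit(creep_fit, t, h, p0=[2000, 100, 0.5, 0.1], maxfev=20000)
h0, a, p, k = popt
h_fit = creep_fit(t, *popt)
dh_dt = a * p * np.maximum(t, 1e-6)**(p - 1) + k   # analytic derivative of the fit

P_load = 1.0                                       # applied load, 1 N
strain_rate = dh_dt / h_fit                        # Eq. (2)
hardness = P_load / (24.56 * (h_fit * 1e-9)**2)    # Eq. (3), contact depth ~ total depth

mask = t > 5                                       # skip the initial transient
m = np.polyfit(np.log(strain_rate[mask]), np.log(hardness[mask]), 1)[0]
n = 1.0 / m                                        # Eq. (4): apparent stress exponent
print("apparent stress exponent n ~", round(n, 1))
```

Because only the log-log slope enters Equation (4), the absolute units of hardness drop out, which is why the simplified contact-depth assumption still illustrates the method.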
In certain cases, upon propagation of dislocations in opposite directions under applied stress, their mobility is limited by phonon drag or the secondary Peierls barrier, which results in a higher stress exponent. In contrast, at relatively higher temperature, the recombination of kinks with opposite line directions may reduce the stress exponent [35]. The indentation size effect (ISE) on the creep stress exponent of the BCC HEAs was studied by nano-indentation using the DMA method. DMA was used to study the behavior of the alloys under oscillating loads, and the results were compared with the previous data obtained at static loads. Figure 4c,d show the variation of n as a function of load for the Ta-Hf and Ta-W HEAs along with W in the DMA tests. As the peak load increased from 100 mN to 1000 mN (i.e., contact depth increased from 800 nm to 2500 nm), the n value increased for all samples at 298 K. This indicates an apparent size effect on the stress exponent, though all n values remained in the range of the dislocation-dominated deformation mechanism. An indentation size effect for the stress exponent has been reported in previous studies [18,24,40]. During the loading process, dislocations are generated in the plastic deformation zone beneath the indenter, and their density is in direct proportion to the load/depth [19,39]. At low applied load, the dislocation generation rate may be slower than the dislocation annihilation rate, so a lower stress exponent is obtained. At high load, the dislocation generation rate becomes very fast. During the holding time, the stress leads to dislocation propagation, and the dislocations may interact and entangle with one another. Entanglement of dislocations may result in an increase of the stress exponent up to 100 at higher applied load or indentation depth [29,41]. At lower depth, the dislocations have higher mobility and diffusion rate because they are closer to the free surface, and as a result the value of the stress exponent is lower [42]. The mobility of screw dislocations in BCC metals (like Ta, Mo, and Fe) has been reported to be significantly enhanced near the free surface [39,42]. The indentation size effect on the stress exponent was less significant at 423 K than at 298 K. This may be due to the counterbalance of the generation and annihilation rates of dislocations at elevated temperature. A less pronounced size effect for the hardness of materials at elevated temperatures has also been reported [43]. The value of n for the two HEAs at 423 K and the intermediate load of 500 mN was lower than that at 100 mN and 1000 mN. At 500 mN and elevated temperature, it is likely that the diffusion rate of dislocations was much higher than their generation rate, which led to lower n; however, the mechanism remained unchanged. The high value of the stress exponent obtained using nano-indentation may be attributed to the complex stress state below the indenter [15,17,32,40]. The stress exponent is a strong function of the composition and microstructure of the alloy, since the mobile dislocation density and activation area may vary significantly [44]. The Ta-W HEA showed a higher n value than Ta-Hf in most conditions, which may be attributed to the dendrites in Ta-W acting as barriers for dislocation movement.
From Figure 4, it is evident that the stress exponent obtained during the DMA test is slightly lower than in the static test. This may be due to oscillatory load which may result in better dislocation propagation and a lower n value.
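As a companion to the fitting sketch above, the stress exponent can be estimated as the slope of ln(strain rate) versus ln(hardness) over the hold, i.e., the inverse of the strain rate sensitivity. The snippet below reuses the strain_rate and H arrays from the earlier sketch and is illustrative only; it does not reproduce the measured exponents.

# Stress exponent as the slope of ln(strain rate) versus ln(hardness) over the hold
# (n = 1/m, with m the strain rate sensitivity d ln H / d ln eps_dot)
n_fit, _ = np.polyfit(np.log(H), np.log(strain_rate), 1)
m_fit = 1.0 / n_fit
print(f"stress exponent n = {n_fit:.1f}, strain rate sensitivity m = {m_fit:.3f}")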
The activation volume (V*), which depends on the stress exponent and hardness (H), was calculated for further insight into the creep mechanism as in [17], where k is the Boltzmann constant and T is the temperature. The average hardness value over the holding stage at each temperature was used to calculate the activation volume. The averaged activation volume was 0.25 ± 0.1 nm^3 (~13b^3), 0.5 ± 0.11 nm^3 (~20b^3), and 0.4 ± 0.09 nm^3 (~16b^3) for W, Ta-Hf, and Ta-W, respectively. The lattice parameter (a0) is ~0.34 nm for the current HEAs [9] and ~0.31 nm for W, and b = 1/2 a0 [111] is the Burgers vector of BCC alloys/metals. The activation volumes for all three systems were in the range for kink-pair nucleation and movement of screw dislocations in BCC alloys/metals [35]. The V* value is in the range of 10b^3-1000b^3 for dislocation creep, while diffusion-mediated creep is typically associated with lower values of V*. Tungsten showed a slightly smaller activation volume compared to the two HEAs. The activation volume describes the ease of dislocation nucleation, a smaller activation volume indicating easier nucleation of dislocations [35]. The activation volume for a BCC CoCrFeNiCuAl2.5 thin-film HEA measured using a Berkovich nano-indenter was reported to be ~0.5 nm^3 [15], while an FCC CoCrFeCuNi thin film and coarse-grained CoCrFeMnNi showed one order of magnitude lower activation volumes of 0.08 nm^3 and 0.05 nm^3, respectively [15,27]. The temperature dependence of the indentation creep rate is empirically correlated through a power-law relation [45]:

ε̇ = A H^n exp(−Q/RT)

where A and R are a structure-dependent constant and the universal gas constant, respectively, and Q is the activation energy. A plot of ln(ε̇/H^n) versus 1/T yields a slope of −Q/R [46,47], as plotted in Figure 5 for all the studied alloys. The average strain rate and hardness over the holding time at each temperature were used for the analysis. Linear regression indicated creep activation energies of 352 ± 10 kJ/mol, 925 ± 100 kJ/mol, and 1000 ± 50 kJ/mol for W, Ta-Hf, and Ta-W, respectively. The higher calculated activation energy of 900-1000 kJ/mol for the HEAs compared to pure W may be associated with severe lattice distortion and sluggish diffusion in HEAs, resulting in a greater degree of dislocation interaction and supporting their higher creep resistance [48]. Activation energies for CoCrFeNiMn [12] and precipitation-hardened (FeCoNiCr)94Ti2Al4 [49] HEAs were reported to be ~300-400 kJ/mol and ~300-800 kJ/mol, respectively, from tensile tests. However, to the best of the authors' knowledge, there are no reports on the activation energy of HEAs from nano-indentation creep tests. In summary, the high activation energies for Ta-Hf and Ta-W compared to pure refractory metals like tungsten support their excellent creep resistance. This suggests the potential use of these alloys in next-generation nuclear reactors as well as fossil fuel power plants where refractory metals are currently used.
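A compact sketch of this activation-energy regression is given below. The temperatures match those of the tests, but the strain rates, hardness values, and stress exponent are placeholders rather than the measured averages, so the printed Q is illustrative only.

import numpy as np

# Activation energy from ln(eps_dot / H^n) versus 1/T; the slope equals -Q/R
T = np.array([298.0, 423.0, 573.0])            # K
eps_dot = np.array([1.0e-3, 3.0e-3, 2.0e-2])   # 1/s, placeholder averages
H = np.array([6.0, 5.2, 4.3])                  # GPa, placeholder averages
n = 40.0                                       # representative stress exponent (assumed)

R = 8.314                                      # J/(mol K)
slope, _ = np.polyfit(1.0 / T, np.log(eps_dot / H**n), 1)
Q = -slope * R / 1000.0                        # kJ/mol
print(f"apparent creep activation energy Q = {Q:.0f} kJ/mol")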
Figure 5. Plots of ln(ε̇/H^n) versus 1/T for W and the HfTaTiVZr and TaTiVWZr high entropy alloys. The activation energies for the current refractory high entropy alloys were higher than tungsten by almost a factor of three.
Conclusions
In summary, indentation creep tests on the reduced-activity HfTaTiVZr and TaTiVWZr HEAs were performed under static and dynamic loads at 298 K, 423 K, and 573 K. The creep mechanisms of the alloys were compared in terms of stress exponent and activation volume, and the creep resistance of the three systems was compared based on creep displacement and activation energy. The following conclusions were drawn: (1) The creep exponent was in the range of 20-140 and the activation volume was in the range of 13-20b^3, indicating that the time-dependent deformation of all alloys was dislocation dominated. (2) The stress exponent decreased with increasing temperature owing to thermally activated dislocations, and the reduction was sharper for the HEAs compared to pure W. (3) The creep exponent increased with increasing load (depth), leading to an apparent size effect due to a higher dislocation generation rate and entanglement at larger penetration depth.
A higher diffusion/annihilation rate of dislocations near the free surface at a smaller depth may be another possible explanation. (4) HEAs showed smaller creep displacement and higher activation energy compared to pure tungsten, which may be attributed to sluggish diffusion and severe lattice strains. | 7,197.4 | 2020-02-01T00:00:00.000 | [
"Materials Science"
] |
Analytical model for assessing the investments in connectivity for small airports
The purpose of the research is to develop a model for the analysis of investments in connectivity. The developed model allows a more accurate capture of the impact of sources of fluctuations in investment projects aimed at increasing connectivity, using Wiener Processes and Poisson distribution. The results are solutions for improved decision-making in aviation infrastructure to the benefit of the regional economy.
Introduction
Interest in investment impact assessment theories has increased in recent years due to the high degree of uncertainty faced by organizations (private and public) in which the decision to make a strategic investment is either required by the competitive environment or imposed from outside the organization (e.g., by new standards).
Designing an investment in air transport architecture should take into account: direct costs (e.g. acquisition, operation, maintenance), indirect and intangible costs (e.g. increasing waiting time, affecting the right to privacy) and benefits (e.g. increasing number of passengers).
The principle of risk-neutral valuation can be used to evaluate an investment project that depends on an extensive set of variables. For each variable, the expected growth rate is adjusted to reflect the price risk resulting from market mechanisms. Impairment of investments is inherent and leads to the following dilemma: small and frequent investments, or major and rare ones. Uncertainty, irreversibility, growth potential, and competition are factors that influence investment behavior and decisions [1].
By analogy with the market, the air transport system can be viewed as a network of interdependent airport organizations (system elements), where the motivation to invest increases if the other elements of the network succeed in doing the same thing. The possibility of investment contagion is based on two arguments: proximity contagion of benefits (important economic benefits can be obtained from being well connected to commercial air services); and proximity contagion of security (the critical mass of network elements that must invest in order to persuade others to do the same). From an economic perspective, small regional airports frequently suffer from limited traffic, fixed infrastructure requirements, and insufficient revenues to cover their costs [2]. In this context, the research question is: how should connectivity sustain these small airports so that they survive in the presence of multiple sources of jump risk?
The paper is organized as follows. First, we offer a brief overview of the existing airport network in Romania. Then, the functioning of airports is examined with regard to the connectivity parameters. In Section 4, the impact of sources of fluctuation in investment projects is analyzed, considering jumps that are independent in size and time of occurrence. The results of the analysis, with a particular focus on newly developed regional airports, are presented in Section 5. Finally, we discuss the model's limitations and possible directions for future research.
Overview of airport network in Romania
As of 2018, Romania has 17 certified airports [3], twelve of which are part of the trans-European transport network (TEN-T). Air transport in Romania has experienced rapid growth in recent years, and the volume of goods transported has followed a similar trend, increasing by over 60%. However, air travel represents only 3% of total passenger traffic and 1% of freight traffic [3]. Of the 13 airports included in the TEN-T, 2 are under government control, and 11 were transferred to the ownership and control of local authorities (county councils). The argument for decentralization was the reorganization and liberalization of the air transport market. This transfer was a challenge, especially a financial one, for the local authorities, since the infrastructure is outdated and requires significant investment in order to satisfy current and near-future requirements. Due to the high potential of airport infrastructure in attracting investments and business activity, the Brasov county council has decided to build an airport based on a public-private partnership. Brasov is the only major city in Romania without an airport, even though a number of indicators recommend building one: it has a sufficiently large catchment area (160 km to Bucharest, 140 km to Sibiu, 170 km to Târgu Mureș); it is the most important tourist city in Romania with great growth potential (e.g., tourist accommodation capacity increased in 2013 by 60% compared to 2009, and it is a candidate for the title of European Capital of Culture); it has a long tradition in the aviation industry (the IAR factory was established in 1925); and the area has demographic potential.
Dimensions of connectivity
A large number of international studies confirm the importance of connectivity in air transport networks for regional economic development and for boosting national and international investments [6,7,8,9]. The concept of connectivity is defined in terms of the integration of an airport or regional network into the global air transport network. Table 1 lists the most frequently used and cited connectivity indicators. According to some specialists, the definition of connectivity should satisfy four properties: realistic, intensive, dimensionless and normalized, and global [12]. Jenkins (2011) defines connectivity as a supply-side measure that indicates how well a specific airport is integrated into a larger network. The International Air Transport Association (IATA) developed in 2007 a connectivity indicator based on the number of available seats per flight, weighted by the size of the destination airport [16].
Recent research [5] defines airport connectivity in terms of direct and indirect connectivity. The potential for growth at the national level was highlighted by the evolution of the airport connectivity index, which increased by 38% compared to 2007 (in monetary value, about 160 million EUR). Improvements in connectivity have brought benefits both to users (e.g., reduced time spent in transit, increased frequency of service, improved quality of service) and to the wider economy (e.g., a larger domestic market, stronger territorial, economic, and social cohesion, a higher level of productivity, and increased foreign direct investment) [17].
According to Oxford Economic Forecasting (2005), a 10% increase in connectivity (relative to GDP) will raise the level of productivity in the economy by a little under 0.5% in the long run [18]. Other specialists estimate an elasticity of 0.07% between connectivity and long-run productivity [19].
Model for assessing the investments in connectivity
The model addresses investment situations with multiple jump sources and identifies management solutions that increase the value of the investment opportunity. These jumps are considered independent of each other, each having a random size and timing (Poisson distribution). Stochastic processes with a single source of discontinuity [20,21] or with multiple sources of fluctuation, including specific cases of catastrophic events [22], are all approaches that influence the investment.
The proposed model continues the analysis of security investments [23], introducing the Poisson distribution, which is an appropriate model for the distribution of events with a significant impact on the organization in a given period of time (corresponding to the investment). Thus, we can simulate different values of the input parameters, the outcome of both investment projects, and the optimal time to change the strategy. Positive and negative jumps originate in significant events that can be uncertain in timing and consequences: new information on technological advances, legislative changes, macroeconomic developments (e.g., inflation, oil prices), or trends in the connectivity index of road and rail.
To assess the benefits (B), we propose a model in which stochastic jumps arrive with a constant probability per unit time. When a jump occurs, the benefit is changed by the connectivity index CI, calculated using the Smyth and Pearce model.
In the benefit equation (1), σ is the volatility and υ is a random number generated using the standard normal distribution N(0,1), and q is the frequency of the rare event.
The bracketed factor containing the summation represents the Wiener process term that is scaled by the volatility σ. The rare events were modeled using the Poisson distribution, and the sum represents the discretization of the integral that characterizes the continuous-time process. Because the standard Wiener process is governed by the same normal statistics that describe the molecular motion of particles, such processes are also called Brownian motion.
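A minimal simulation sketch of the kind of benefit path described here is given below, assuming geometric Brownian motion for the continuous part (volatility σ, standard normal draws υ) and Poisson-arriving jumps of frequency q that rescale the benefit by the connectivity index CI. The drift value, time grid, and the multiplicative interpretation of the jump are assumptions for illustration, not parameters taken from the paper.

import numpy as np

def simulate_benefit(B0=1.0, mu=0.03, sigma=0.2, q=0.5, CI=0.15,
                     years=10.0, steps_per_year=252, seed=0):
    """One benefit path: Wiener (Brownian) diffusion plus Poisson jumps
    that rescale the benefit by the connectivity index CI."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(years * steps_per_year)
    B = np.empty(n_steps + 1)
    B[0] = B0
    for i in range(n_steps):
        upsilon = rng.standard_normal()                 # υ ~ N(0, 1)
        diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * upsilon
        n_jumps = rng.poisson(q * dt)                   # rare events in this step
        B[i + 1] = B[i] * np.exp(diffusion) * (1.0 + CI) ** n_jumps
    return B

path = simulate_benefit()
print(f"benefit after 10 years (one path): {path[-1]:.3f}")

Averaging many such paths, or sweeping sigma and CI, reproduces the kind of sensitivity analysis discussed below for the Mathcad implementation.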
The developed model allows a more accurate capture of the impact of sources of fluctuation in investment projects aimed at increasing connectivity. Neglecting jump risk can lead to a significant underestimation of the real value of investment opportunities, with negative consequences for decision-making. The model provides a realistic picture of the problems of air transport infrastructure investments for the following reasons: it uses collective knowledge and experience, it incorporates an understanding of the "virtual marketplace" interactions, it introduces parameters associated with financial risk (volatility), and it depends on the level of progressive connectivity.
A case study for Brasov airport
The model is illustrated for the Brasov airport, based on estimates from the feasibility studies. To determine the evolution of the connectivity index, the formula for the probability of complementary events is used (Equation (2)); the connectivity index calculated for the next 5 years, based on the KPMG and Mott MacDonald estimates, is 0.15.
Equation (2) allows the estimation of connectivity index values for time intervals shorter or longer than 5 years. Thus, the connectivity index for the next 3 or 10 years is 0.09 and 0.28, respectively. The resulting benefits of the investment project were calculated using the Mathcad program. The numerical computation highlights the investment opportunities for different values of the connectivity index and volatility. A summary of the results is shown in Table 2.
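The complementary-events relation itself is not printed in the extracted text, but the reported figures are consistent with CI(t) = 1 − (1 − CI_5)^(t/5). The snippet below is only a consistency check under that assumption.

# Consistency check, assuming equation (2) is CI(t) = 1 - (1 - CI_5)**(t/5)
CI_5 = 0.15
for t in (3, 5, 10):
    print(t, round(1 - (1 - CI_5) ** (t / 5), 2))   # -> 0.09, 0.15, 0.28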
The normalized values obtained range between 0.212 and 1.84, for a normalized initial investment cost of 0.5. The results are highly sensitive to the asymmetry of the jump size, which increases uncertainty and affects the value of the investment opportunity.
The model was implemented as a simple, fast, and efficient tool in the Mathcad program, in which data can be simulated based on two parameters: the connectivity index and the Wiener process factor. Running the tool successively allows a sensitivity assessment and a significant reduction of the uncertainties in the complex field of possible investments in air connectivity.
The model provides the decision maker a complete picture of the critical elements for this type of investment due to its agility and versatility.
The model can also be adapted to other benefit assessments, such as airport security investments, where the connectivity index is replaced with a security risk index.
Conclusions
Investment in infrastructure projects is an extremely complex problem for small airports because revenues are insufficient to cover the high operating costs and traffic is limited. However, important economic benefits can be obtained from a high level of aviation connectivity. The research leads to the development of a framework for investment planning that allows a better use of funds by reducing operating costs and uncertainty, exploiting potential gains, and substantiating predictions about the performance of the strategy. Limitations stem from the modeling assumptions (a complete system of markets and geometric Brownian motion for asset prices) and from the difficulty of quantifying political, economic, industrial, and technological uncertainties.
The inclusion of multiple sources of jump risk in a dynamic and realistic model, in which the investment is treated as a stochastic process, is an original contribution that provides strategic decision makers with particularly useful information both on the outcome of the investment project and on the optimal time to change the strategy.
The analysis of the results obtained by applying the model to a scenario involving the development of a new aviation infrastructure (Brasov airport) highlighted the following: it confirmed the forecasts of investment performance and showed how uncertainties in the competitive market can be exploited for potential gains.
Because very few studies or international projects have included data on Romanian airports, and a national study on the measurement of connectivity has not been carried out so far, amid the rapid development of the national air transport system (the emergence of new airports and the development of existing ones), a program should be initiated to determine both an investment efficiency indicator based on the level of benefits and the costs of connectivity.
"Economics"
] |
An FPGA Scalable Software Defined Radio Platform Design for Educational and Research Purposes
Marcos Hervás 1,*, Rosa Ma Alsina-Pagès 1,† and Martí Salvador 2,† 1 GTM—Grup de Recerca en Tecnologies Mèdia, La Salle—Universitat Ramon Llull, C/Quatre Camins, 30, 08022 Barcelona, Spain<EMAIL_ADDRESS>2 KAL, John Cotton Building, Sunnyside, Edinburgh EH7 5RA, United Kingdom<EMAIL_ADDRESS>* Correspondence<EMAIL_ADDRESS>Tel.: +34-932-902-445 † These authors contributed equally to this work.
Introduction
It is well known that Software Defined Radio (SDR) platforms have great versatility: the system features can be changed just by updating the firmware and changing a few components such as the antenna, the filters or the amplifier. La Salle R & D has been working on SDR platform design and performance for the last decade in the framework of a long haul ionospheric radiolink sounder and modem, and the physical layer design [1]. In this environment, the need for a compact platform to conduct both the sounding and the data transmission for a remote sensing application was considered; this project led us to the design of the IRIS platform, which is presented in this paper.
The project, which used the High Frequency (HF) band, consisted of the transmission of sounding and sensor data from Antarctica to Spain over a 12,760 km ionospheric radiolink, for which we have used several SDR-based platforms designed over the last decade. Part of the study consisted of sounding the channel [2][3][4] in order to evaluate channel performance and characteristics. Once the sounding was performed and analysed, the physical layer tests could be conducted [5][6][7], in order to reach the final frame proposal [8]. This 11-year project gave us the knowledge and the requirements needed to design the optimum SDR platform for a project with such characteristics. In 2015, after closing the physical layer modem design for the long distance radiolink, the group started a new remote sensing HF project, this time using Near Vertical Incidence Sounding (NVIS), a technique the group had already worked with in the past in terms of ionosphere sounding [9]. The first prototype of this project has been developed with the IRIS platform [10], exploiting all its advantages to implement a configurable SDR system.
IRIS is a platform that was designed with maximum flexibility in order to be used for both educational [11] and research purposes. Several platforms have been implemented for educational purposes previously, such as [12][13][14]. In order to perform all the desired SDR applications, which involve Multiple-Input Multiple-Output (MIMO), the platform requires 2 high speed Analog to Digital Converters (ADCs), 2 high speed Digital to Analog Converters (DACs), and supports different clock input signals and the different communication ports enumerated below:
• Universal Serial Bus (USB) 2.0 On-The-Go (OTG).
• A 10/100/1000 Ethernet connectivity.
• A Universal Asynchronous Receiver Transmitter (UART) for low speed communications such as a console.
• Peripheral Component Interconnect Express (PCIe).
Educational and research purposes have different requirements and restrictions. The educational platforms will be distributed to every pair of students. For this reason, a low cost platform is mandatory, while a minimum performance should still be obtained. Research purposes usually do not have such a cost restriction; however, the platform should permit adjusting the performance of the components to the application requirements. Table 1 summarises the hardware requirements for both educational and research purposes. It should be noticed that these requirements must be met for educational purposes, while for research they are desirable. Finally, IRIS is VITA 57 compliant, in order to add the possibility of expanding it with new hardware, e.g., Digital Signal Processor (DSP) units, through a Field Programmable Gate Array (FPGA) Mezzanine Card (FMC) connector. The platform has been designed following Electromagnetic Interference (EMI) rules to obtain the best performance [15]. These rules have direct implications for the Effective Number of Bits (ENOB) in a design with ADCs or DACs.
The advantage of the versatile design of the platform is that, if it is used for educational purposes, the students are provided with a lite version of the platform, which they can use to develop real software radio applications previously explained conceptually in class. In the research field, the full version of the platform is used, with the highest design features.
In Section 2, the state of the art of standard platforms that satisfy our requirements is reviewed, Section 3 provides the system description of IRIS, Section 4 shows the measurements made of the system performance, and Sections 5 and 6 present the applications and the conclusions, respectively.
Existing Platforms
Current technology has led to the design of high-performance radio modems that work digitally with bandwidths up to 60 MHz. Therefore, they can directly process Radio Frequency (RF) signals without an external mixer for the HF band. The leading FPGA manufacturers, Xilinx and Altera, mainly propose the use of evaluation boards [16] as high performance SDR systems, with additional subsystems based on FMC VITA-compliant connectors [17] from third parties such as Terasic, MVD Cores, 4DSP, HiTech Global and Nutaq Innovation, or other manufacturers such as Analog Devices [18] and Texas Instruments. This solution presents drawbacks in terms of scalability, size and unit cost. The cost can be up to some thousands of dollars, which is prohibitive for educational purposes. Moreover, the use of multiple boards makes the size bigger than that of integrated solutions.
Integrated commercial platforms can be divided into two categories: high performance SDR systems such as USRP, 4DSP or Nallatech [19][20][21], or low cost SDR platforms for amateurs and RF enthusiasts such as bladeRF [22]. These solutions have been developed by non-FPGA manufacturers and partially fit our requirements. They integrate the analog front-end, the processing core and some communication ports. The analog front-end is composed of high speed ADCs and DACs with a throughput ranging from 40 to 125 MSPS and a resolution ranging from 12 to 16 bits.
The higher performance platforms cover our needs in terms of signal integrity and throughput for research applications, but their price is too high for an educational platform. Moreover, these platforms usually have a limited number of communication ports such as PCIe, Ethernet or Universal Serial Bus (USB) 2.0 On-The-Go (OTG). The platform presented in this work integrates all the communication ports previously mentioned in order to have more flexibility for some applications, such as connecting the platform to a laptop or to an Ethernet network.
The low cost SDR platforms cover our requirements for educational applications, with low throughput and low resolution converters, for example 40 MSPS and 12 bits. The price of these platforms ranges from $450 up to $1000, while the manufacturing price of the IRIS platform is around $250 for the lite or educational version. Moreover, these platforms do not provide a PCIe port for applications with higher throughput than those presented in this work.
The platform presented in this work has a good trade-off between performance and cost. To achieve this, the main components or subsystems, such as the ADC, DAC, FPGA and clock, are pin-compatible with other families of the same manufacturer in order to adjust the performance and price to the application. This allows us to decide which ones are assembled.
System Description
The IRIS platform design takes into account both the requirement of scalability in performance and the unit cost, minimizing the latter for educational applications and easily migrating to a higher performance design when the requirements of the application need it. For this reason, IRIS has been designed with pin-compatible components such as the FPGA, DAC, ADC and the clock input oscillators of the clock manager. To reduce the unit cost, the platform can be built without some additional hardware included in the design, i.e., the Ethernet transceiver, the Synchronous Dynamic Random-Access Memory (SDRAM), the secondary Serial Peripheral Interface (SPI) flash memory which is used for specific applications, or the USB OTG (see Figure 1). The IRIS platform has two SPI flash memories: one of them is used to store the FPGA bitstream program file, and the other is used to store additional information such as a file system or MicroBlaze software [23].
MicroBlaze [23] is an embedded soft microprocessor that can be configured as part of the hardware design in Xilinx FPGAs. This embedded microcontroller allows us to control peripherals that have an Intellectual Property (IP) core, such as Ethernet or USB OTG, through the C programming language with a stand-alone application or a Linux operating system, which allows easier protocol stack programming than in HDL. Moreover, the system has a volatile SDRAM Double Data Rate type three (DDR3) memory, which is used to store and access data when the application is running at high speed.
The system has 5 communication interfaces: (i) two UARTs, one of them implemented as a USB to UART bridge, which can be used to send data at low speed or as a MicroBlaze console when it is configured on the board; (ii) an Ethernet transceiver that can be configured as 10/100/1000 Mbps; (iii) a USB OTG with a throughput of 480 Mbps; and (iv) a PCIe with a throughput of 2.5 Gbps. These last interfaces can be used for applications with higher throughput requirements.
The analog front-end is composed of a dual ADC and a dual DAC; the input and output analog signals, respectively, are routed via SubMiniature version A (SMA) connectors. Both signals are single-ended, and they are differentially coupled through operational amplifiers. The high performance analog front-end permits the processing of RF signals with bandwidths ranging from Direct Current (DC) up to 60 MHz.
The clock signal of the converters is distributed via a clock manager; this solution minimizes the jitter and allows dividing the clock frequency of each output. The unused FPGA pins are routed to the FMC connector for future applications.
The IRIS platform has been designed to be versatile; for this reason the platform has a Samtec FMC connector routed following the VITA 57 standard to connect the FPGA to optional hardware, mainly a Texas Instruments DSP. The interface between the FPGA and the DSP through a 64-bit bus [24] is planned with 69 pins of 3.3 V CMOS routed to the FMC connector. DSPs are commonly used in radio modems for operations such as the Fast Fourier Transform (FFT), coding, and baseband applications.
All components with a SPI port (ADC, DAC, clock distributor/manager, SPI memories, and the SPI routed to the FMC) have been multiplexed with 3 chip select signals, minimizing the number of pins used. The platform has a Real Time Clock (RTC) connected to the FPGA through an Inter-Integrated Circuit (I2C) port, with the port also routed to the FMC connector. These standard ports (I2C and SPI) of the FMC connector will enable future communications between the FPGA and other components.
Core Processing
The signal processing core is a Xilinx FPGA of the Spartan-6 family. The reasons for choosing this integrated circuit are the need for Gigabit Transceiver Ports (GTPs) for PCIe and the ability to migrate within the same package from low cost to higher performance devices. Table 2 shows an overview of the Spartan-6 family. Spartan-6 FPGAs with a part number ending in the letter T have GTPs; for example, the XC6SLX45T has 4 GTP ports. The 4 GTP ports are used as follows: (i) 1 lane of the PCIe port, which allows connectivity between the PC and the IRIS; (ii) 1 is routed to the FMC connector for future applications; and (iii) the other 2 are routed to SMA connectors.
The chosen package is the Spartan-6 FG(G)484 because the 45T and 150T versions, with 45 k and 150 k logic cells, allow us to switch between them without any Printed Circuit Board (PCB) changes. Larger packages with GTPs than the FG(G)484 have been discarded because of their high price.
The chosen devices are the XC6SLX45T and XC6SLX150T in the FG(G)484 package, which provides 296 I/O pins and 4 GTPs in a Ball Grid Array (BGA) with a size of 23 × 23 mm.
ADC
The manufacturers with the highest market impact are Analog Devices, Intersil, Linear Technology, Maxim and Texas Instruments. The company with the highest market share is Analog Devices, with 48.5%. For this reason, and because of previous experience in other projects, Analog Devices is the high speed ADC manufacturer chosen for the IRIS platform.
The filter parameters applied to the search were a resolution higher than 10 bits and a sampling frequency higher than 60 MSPS (see Table 3). The design is implemented with the family having the highest number of pin-compatible ADCs; the other families do not allow migrating a design from a resolution of 10 bits to 16 bits. The AD9204 [25] is pin-compatible with the AD9268 [26], AD9251, AD9258 and AD9231 families, permitting a migration of resolution ranging from 10 to 16 bits and a sampling frequency ranging from 60 to 125 MSPS. Particular emphasis is placed on the AD9204 and AD9268 families because they help to fit the trade-off between price and performance. The assembled AD converter for educational purposes is the AD9204, which has the lowest performance and the lowest cost. For research applications the assembled ADC is the AD9268, which has the highest resolution and the highest sampling frequency. The ADC inputs are differential, so an operational amplifier has been used to make the conversion from single-ended to differential; the integrated circuit used is the ADA4938-2, which is a dual ultra-low distortion differential ADC driver [27].
AD9204
The AD9204 is a dual-channel ADC which is powered with 1.8 V, has a resolution of 10 bits, and supports sampling frequencies of 20/40/65/80 MSPS. It is supplied with a high performance sample-and-hold circuit and an on-chip reference voltage. The converter can correct errors of each code with internal logic, providing 10-bit precision at 80 MSPS and mitigating the error at higher temperatures. The clock input signals are differential. Optionally, an internal duty cycle stabilizer (DCS) can be used to compensate for high variations of the duty cycle. The digital output data can be formatted as binary, Gray code or two's complement. It is supplied with a Data Clock Output (DCO), which is used to register the data in reception. The digital outputs support 1.8 V and 3.3 V CMOS, depending on the digital power supply voltage. The AD9204 is supplied in a Leadframe Chip Scale Package (LFCSP) of 64 pins.
The sample-and-hold circuit provides excellent performance for input frequencies up to 200 MHz and is designed for low cost and low consumption applications. The standard serial port interface supports features such as the digital output data format, an internal clock divider, power-down, DCO/DATA timing, offset adjustment and different reference voltage modes.
AD9268
The AD9268 is a dual-channel ADC with 16 bits of resolution. It works at sampling frequencies of 80/105/125 MSPS. It is designed for communication applications, providing high performance, low cost and reduced size. The input dynamic range can be configured via the SPI port. The ADC also has a DCS to compensate for clock input signal variations while maintaining the converter features. The output data bus uses 1.8 V CMOS or LVDS technology. The device programmability and control are handled via a SPI communication port.
The AD9268 is supplied in a LFCSP package of 64 pins. Optional on-chip dither can increase the Spurious Free Dynamic Range (SFDR). The converter has an excellent SNR over the entire frequency band. The digital output drivers accommodate the signal to 1.8 V CMOS or LVDS, which allows the use of a single 1.8 V power supply. The standard serial port interface supports features such as the digital output data format, an internal clock divider, power-down, DCO/DATA timing, offset adjustment and different reference voltage modes.
DAC
As stated before, Analog Devices is the company with the highest market share in analog to digital and digital to analog converters. The search filter parameters were a resolution higher than 12 bits and a sampling frequency higher than 100 MSPS; a brief summary is shown in Table 4.
The 911X and 971X families are cheaper than the other families. However, only the resolution can be modified, and the clock frequency is fixed at 125 MSPS. The last 3 families, AD9125, AD9122 and AD9148, have a high clock frequency of 1 GSPS with a resolution of 16 bits, but there are no cheaper pin-compatible converters with lower specifications.
The chosen DAC families are the AD9745, AD9746, AD9747, AD9780, AD9781 and AD9783 [28,29]. These are pin-compatible among themselves and offer the possibility of using converters ranging from 12 bits and a throughput of 250 MSPS up to 16 bits and 500 MSPS. The assembled DA converter for educational purposes is the AD9745, which has the lowest resolution, the lowest maximum sampling frequency and the lowest unit cost. For research applications the assembled DAC is the AD9783, which has the highest resolution and the highest maximum sampling frequency. The DAC outputs are differential and the expected output signal for the antenna has to be single-ended, so an operational amplifier has been used to make the conversion from differential to single-ended; the integrated circuit used is the AD8045, which is an ultra-low distortion high speed operational amplifier [30]. The main features of these converters are explained below.
Family AD974X
The converters AD9741/AD9743/AD9745/AD9746/AD9747 of the AD974X family have resolutions of 8/10/12/14/16 bits, respectively, and these devices are pin-compatible, allowing the migration from a low resolution of 12 bits to higher resolutions, up to 16 bits. This migration is used to fit the ratio between cost and performance. These dual-channel DACs have a sampling frequency of 250 MSPS and include gain and offset compensation. They are fully programmable through a SPI port.
Family AD978X
The converters AD9780/AD9781/AD9783 of the AD978X family have a great dynamic range, and the devices are pin-compatible. These devices are dual-channel DACs with resolutions of 12/14/16 bits and a maximum clock frequency of 500 MSPS. These converters have special features such as gain and offset compensation and a proprietary architecture that allows the synthesis of analog frequencies above the Nyquist frequency by moving the energy of the fundamental frequency to the image frequency. These components are also fully programmable via the SPI port.
Clocking System
The clocking system has been designed with the AD9511 [31] clock distributor from Analog Devices. This clock manager is recommended by Analog Devices for the highest performance converters of our design. The AD9511 distributes the clock input signal to multiple outputs with a core that has an on-chip PLL. The maximum achievable SNR is limited by the jitter at high frequencies, as can be observed in Equation (1), where σ is the jitter, f_in is the input frequency and SNR(dBFS) is the SNR when the input signal is at full scale. For this reason, the design emphasizes minimizing both the jitter and the phase noise in order to maximize the signal integrity.
SNR(dBFS) = −20 log(2π f_in σ) (1)
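A quick numeric illustration of Equation (1) is given below; the 60 MHz input (the top of the supported analog bandwidth) and the 1 ps rms jitter are assumed values, not platform specifications.

import math

f_in = 60e6        # assumed input frequency at the top of the processed band, Hz
sigma = 1e-12      # assumed rms clock jitter, s
snr_dbfs = -20.0 * math.log10(2.0 * math.pi * f_in * sigma)
print(f"jitter-limited SNR = {snr_dbfs:.1f} dBFS")   # ~68.5 dBFS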
There are 3 input clock signals that can be configured via the SPI port. These inputs can work at frequencies of up to 1.6 GHz. The input voltage level has to be between 150 mV peak-to-peak and 2 V peak-to-peak. These inputs are differential, but they can be used as single-ended by connecting a capacitor between the negative input and ground. The inputs are CLK1, CLK2 and REFIN, the latter acting as the reference of the PLL circuit. The CLK1 or CLK2 input that is not used should be in power-down mode to avoid crosstalk between the inputs. The PLL circuit is composed of a programmable reference divider, a low noise Phase Frequency Detector (PFD), a precision charge pump and a programmable feedback divider. The PLL can synthesize frequencies of up to 1.6 GHz by connecting an external VCXO or VCO to the CLK2/CLK2B inputs together with the REFIN input.
The AD9511 has 5 independent clock outputs. Three of them are standard LVPECL and are able to work at frequencies up to 1.2 GHz; the others can be configured as CMOS or LVDS, working at frequencies up to 250 MHz and 800 MHz, respectively. Each output has a configurable integer divider of up to 32. The relative phase between 2 outputs can be configured with a phase divider.
The clock source for the analog front-end converters can be taken from an external crystal oscillator, a Phase Locked Loop (PLL) frequency synthesizer or an external oscillator, as can be seen in Figure 2. The clock input has been designed to work with 3 different topologies: a crystal oscillator, a VCO using a Temperature Compensated Crystal Oscillator (TCXO) as the PLL reference input, and a TCXO at the working frequency. The components of these inputs are pin-compatible, and by soldering jumpers on the PCB the desired clock input configuration can be selected. These pin-compatible components can be single-ended or differential, again selected by soldering jumpers.
By assembling the desired integrated circuit and the correct jumpers, one of the three configurations can be selected. This allows our students to compare and validate how the performance of the clock input signal affects the quality of the analog front-end signals. Moreover, for research applications it enables us to assemble the required clock input source depending on the desired analog front-end performance.
Crystal Oscillator
This is the lowest performance clock input source because the frequency deviation and jitter are higher than those of a TCXO, which is temperature compensated. However, this solution is the cheapest. The scheme followed is shown in Figure 2a, where the crystal oscillator is connected to the CLK2 input and a SMA connector is wired to the CLK1 input to allow the injection of an external clock.
VCO Using a TCXO as PLL Reference Input
This configuration uses an accurate clock source, with a frequency lower than the desired one, as a reference at the REFIN input. The clock signal is generated with a VCO connected to the CLK2 input by comparing this accurate reference with the VCO signal divided by an integer of up to 32. The signal required to adjust the VCO is generated in the PLL and is supplied by the CP pin. This scheme is shown in Figure 2b and has better performance than the one based on the crystal oscillator, because the TCXO has low jitter and low frequency deviation. However, it has a higher cost.
As in the crystal oscillator mode, the CLK1 input is connected to a SMA connector. Another SMA connector is wired to REFIN; it will be used to evaluate the output clock performance as a function of different external clock inputs.
Temperature Compensated Crystal Oscillator (TCXO)
The block diagram is shown in Figure 2c. This is the highest performance and the highest cost solution. The clock input signal is connected to CLK2 with a TCXO at the desired frequency. This configuration allows injecting a signal into the CLK1 input with a SMA connector or soldering a crystal oscillator, which can be used to compare the system performance as a function of the clock input performance.
Communication Ports
This platform can be controlled through 5 different communication ports: 2 UARTs, a USB OTG, a 10/100/1000 Ethernet and a 1-lane PCI Express. This high connectivity allows the student to work from remote places with only a computer and the IRIS. From the academic point of view, it allows us to use the platform in SDR courses in both the Bachelor and the Master degree, as well as in post-graduate programs. A deeper description of the communication ports is given below: UART: The system has 2 UARTs, a pair of pins routed directly to a connector and a USB to UART converter. The integrated circuit for the USB to UART converter is the CP2103 from Silabs.
This integrated circuit has a configurable output voltage, which has been fixed to 2.5 V. For this reason, the converter has been routed to bank 0 of the FPGA, which is powered with 2.5 V.
USB OTG: The USB OTG subsystem can be Host or Device by only changing a pair of jumpers. The integrated circuit is the USB3320 [32] from SMSC, currently Microchip. The USB3320 is a high speed USB 2.0 ULPI transceiver that achieves up to 480 Mbps, where ULPI is the physical interface between the integrated circuit and the FPGA.
A hardware IP core is required to control the peripheral, translating the MicroBlaze AXI bus to ULPI for host or device applications.
The USB peripheral can be used to connect a laptop to our platform for high speed applications up to 480 Mbps or to connect a USB hard drive and save data.
Ethernet: An integrated 10/100/1000 Gigabit Ethernet transceiver has been used, namely the 88E1111 from Marvell [33]. This integrated circuit supports different MAC interfaces to communicate with the FPGA.
It is well known because it has been used and tested by Xilinx in [16], and we had designed some applications using it with a Spartan-6 before its integration in the IRIS. It offers a certain flexibility in choosing a different IP core inside the FPGA to connect it to a MicroBlaze.
PCIe: The PCIe bus is connected to a GTP port; the GTP is a full duplex high speed serial transceiver able to transmit up to 3.125 Gbps. The PCIe used is one lane of generation one, which can achieve up to 2.5 Gbps.
It is the peripheral with the highest throughput and can be used in applications which require communications between a PC with PCIe and the IRIS.
The jitter in this high speed bus can be a problem. To mitigate it, the ICS874001L jitter attenuator from the manufacturer IDT has been added.
EMC and Signal Integrity Design Rules
Special attention has been given to Electromagnetic Compatibility (EMC) and signal integrity for a proper design of high speed signals and analog signals.
This section summarizes the best practices studied and applied in the PCB layout. In Section 3.6.1 the power supply scheme and the filtering are described, in Section 3.6.2 the impedance matching of microstrip and stripline waveguides is studied, in Section 3.6.3 the grounding best practice and the improvement it represents are introduced, in Section 3.6.4 the PCB stack-up is commented on, and finally in Section 3.6.5 the track equalization of the high speed signals between the DRAM and the FPGA is presented.
Power Supply and Filtering
The power supply has been separated into two groups: analog and digital sources. These power supply outputs have been low-pass filtered taking into account the maximum system frequency. Both analog and digital ground planes have been virtually separated, and all the components have decoupling capacitors on all power supply pins.
The power supply subsystem of the IRIS is very complex because of the great number of different voltage sources. The FPGA needs 1.2 V at up to 3 A for powering the core, VCC_0 needs 2.5 V, and VCC_1, VCC_AUX and VCC_2 need 3.3 V. Moreover, the analog front-end needs 1.8 V, 3.3 V, 5 V and −5 V. These voltages have to be provided from a single source of 9-12 V.
Switched regulators have been used to power the FPGA and the digital parts because of their high current consumption. Moreover, switched regulators are more efficient than linear regulators. The use of linear regulators has been relegated to powering the analog integrated circuits, which demand lower current than the FPGA.
The output of a switched regulator has a voltage ripple that can be considered as interference at the harmonics of the switching frequency. This interference is coupled into the load circuits and can degrade the signal quality when it falls in the frequency band used. For this reason, a low pass filter with capacitors and ferrites following a pi structure has been used at each output. The analog signals are more sensitive to this interference than the digital ones.
The pi filter consists of two input low-ESR ceramic capacitors, a ferrite bead with a resonance frequency of 1 GHz supporting up to 3 A, and two output low-ESR ceramic capacitors.
Additional capacitors are added as decoupling capacitors as close as possible to each power input of each integrated circuit, reducing both the inductance presented by the ground plane and the undesirable noise effect produced by one integrated circuit upon another. There are two methodologies for decoupling capacitors connected in parallel: the first uses different capacitor values, with capacitance variations of decades, and the second uses the same capacitance value. Previous studies [15] demonstrate that using capacitors of the same value decreases the probability of an antiresonance frequency appearing in the decoupling network; for this reason this technique has been applied.
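A small sketch of the effect behind this choice is given below, comparing two parallel decoupling capacitors of the same value against a decade-spaced pair, each modelled as a series R-L-C; the ESR and ESL values are assumptions, not measured data from the IRIS board. The mixed pair shows an antiresonance peak between the two self-resonant frequencies, while the equal-value pair does not, consistent with the technique cited from [15].

import numpy as np

def z_cap(f, C, ESR=10e-3, ESL=0.5e-9):
    """Impedance of one decoupling capacitor modelled as series R-L-C."""
    w = 2.0 * np.pi * f
    return ESR + 1j * (w * ESL - 1.0 / (w * C))

f = np.logspace(7, 9, 400)                       # 10 MHz .. 1 GHz
z_same  = 1.0 / (1.0 / z_cap(f, 100e-9) + 1.0 / z_cap(f, 100e-9))
z_mixed = 1.0 / (1.0 / z_cap(f, 100e-9) + 1.0 / z_cap(f, 1e-9))
print(f"peak |Z|, two equal 100 nF caps : {np.abs(z_same).max():.2f} ohm")
print(f"peak |Z|, 100 nF + 1 nF mix     : {np.abs(z_mixed).max():.2f} ohm")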
The linear regulator is capable of rejecting the power supply ripple as a function of the load capacitance and the output current; the ripple rejection of the ADP3333 linear regulator is obtained from [34]. As mentioned before, the analog signals are more sensitive to noise and interference than the digital ones; for this reason, linear regulators are used in the analog power supply circuits, adding this extra interference rejection (the lower the frequency, the better the rejection). The analog integrated circuits are the ADC, the DAC, the operational amplifiers and the clock distributor.
In order to avoid crosstalk between integrated circuits through the power supply due to current transients, the more sensitive parts of the analog components have independent linear regulators, such as the power supply pin of the internal clock circuit of the DAC or the 1.8 V analog and 1.8 V digital power supplies of the ADC.
A full block diagram of the power supply can be seen in Figure 3, where the ADP5052 [35], LM2576 and LT3471 are switched regulators, the ADP3333 [34], ADP3335 [36] and TPS51200 are linear regulators, and LPF is the pi low pass filter.
Impedance Matching
When the signal wavelength is comparable to the length of the signal's track, the track can be seen as a waveguide. In Figure 4 the microstrip, stripline, differential microstrip and differential stripline waveguides used can be seen. The characteristic impedance of the tracks that work as waveguides has been designed to be 50 Ω for single-ended signals and 100 Ω for differential pairs, knowing the dielectric characteristics. These routing considerations have been applied to: the GTP ports, the USB OTG, the ADCs, the DACs, the Ethernet transceiver, the FMC and the DDR memory.
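A minimal sketch of how a 50 Ω single-ended trace can be sized is given below, using the standard Hammerstad closed-form microstrip approximation rather than the exact field solver or stack-up parameters the authors used; the substrate height and relative permittivity are assumptions in the range of a typical FR-4 outer layer.

import math

def microstrip_z0(w, h, er):
    """Approximate microstrip characteristic impedance (Hammerstad formulas)."""
    u = w / h
    eps_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    if u <= 1:
        return 60 / math.sqrt(eps_eff) * math.log(8 / u + u / 4)
    return 120 * math.pi / (math.sqrt(eps_eff) * (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Assumed outer-layer geometry: 0.2 mm dielectric below the top layer, FR-4 er ~= 4.3
for w_mm in (0.25, 0.35, 0.45):
    print(f"w = {w_mm} mm -> Z0 ~= {microstrip_z0(w_mm, 0.2, 4.3):.1f} ohm")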
Grounding
The ground is one of the fundamental means of minimizing noise and has to be carefully designed. A properly designed ground system provides protection against interference and emission. The impedance of a conductor depends on the frequency, as shown in Equation (2).
Any return current flowing through the ground plane therefore produces a voltage difference, given by Equation (3).
The impedance of a copper track or plane is basically inductive, and since digital integrated circuits demand current peaks at each switching instant, these demands are converted into voltage differences that couple into the rest of the circuit. The resulting voltage difference, given by Equation (4), depends on both the inductance and the derivative of the demanded current.
Therefore, an accurate ground design should minimize the impedance Z_g, and more precisely the inductance L_g, and reduce the return current I_g that flows through other paths. The impedance Z_g depends on the geometry of the track or plane: the wider the ground, the lower the inductance. A full ground plane covering the PCB surface presents the lowest impedance, and for this reason it was chosen for this design. All ground connections have been made with via holes placed between the pad and the ground plane, as close to the pad as possible.
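A back-of-the-envelope comparison of the ground bounce v = L·dI/dt for a narrow return trace versus a solid ground plane (all values assumed, not measured on the IRIS board) illustrates why the full plane was chosen:

```python
# Rough estimate of the ground bounce v = L * dI/dt caused by a switching
# transient; inductances and the current step are assumed illustrative values.
cases = {"narrow return trace (10 nH)": 10e-9,
         "solid ground plane via (0.5 nH)": 0.5e-9}
dI, dt = 0.5, 2e-9             # 0.5 A drawn in 2 ns by a switching output bank
for name, L in cases.items():
    print(f"{name}: v = {L * dI / dt * 1e3:.0f} mV")
```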
A stripline waveguide signal propagates between two ground or power planes without discontinuities, which is ensured by using full ground and power layers. Moreover, placing a full power plane in parallel with a full ground plane adds an embedded capacitance that supplements the decoupling capacitors. Additional information about the ground planes is given in the PCB stack-up section.
PCB Stack-up
The number of layers in the PCB stack-up has been selected taking the following considerations into account:
• Internal signals have to be placed between full ground or power planes to be treated as striplines.
• A full power plane in parallel with a ground plane adds embedded plane capacitance.
• The higher-frequency signals, and those more susceptible to disturbance, can be routed on internal layers to reduce undesirable coupling effects.
• At least two internal layers are needed to route some BGA components such as the FPGA, the DDR memory and the FMC connector.
• The PCB thickness has to be 62 mils (1.6 mm) for standard PCIe connectors.
For these reasons, a PCB stack-up of 8 layers has been chosen, as can be seen in Figure 5. The internal Mid-Layer 1, Mid-Layer 4 and Mid-Layer 6 are ground planes, Mid-Layer 3 is the power plane, and Mid-Layer 2 and Mid-Layer 5 are internal signal-routing layers.
Top and bottom layers have been routed as microstrip traces and the internal layers as striplines. The internal routing layers are enclosed between two planes and are therefore isolated from external disturbances. The ground plane and the power plane are contiguous layers because the closer the parallel conductive surfaces are, the greater the desirable parasitic capacitance. The selected manufacturing parameters are shown in Table 5; these parameters have been used to calculate the track widths for the waveguides, and the stack-up ensures the required board thickness. The design of a high-speed digital bus such as the interface between the DDR memory and the FPGA has to ensure that the track lengths are equal, since length differences at high frequencies introduce a non-negligible delay between signals. For this reason, as can be seen in Figure 6, some nets have been extended to obtain the same length; the trace-length equalization technique inserts accordion-shaped track extensions of a defined geometry.
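The following quick estimate (assuming an FR-4 stripline with εr ≈ 4.3, an illustrative value rather than the exact Table 5 parameter) shows the skew introduced by a few millimetres of length mismatch, which is why the DDR nets are length-equalized:

```python
import math

# Skew introduced by a stripline length mismatch, assuming er = 4.3 (illustrative).
c = 3.0e8                      # speed of light, m/s
er = 4.3
v = c / math.sqrt(er)          # propagation velocity in the dielectric
for dl_mm in (1, 5, 10):
    skew_ps = dl_mm * 1e-3 / v * 1e12
    print(f"length mismatch {dl_mm:2d} mm -> skew ~ {skew_ps:.0f} ps")
```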
Results
The educational version of the IRIS platform has been manufactured and assembled with the lowest-performance components. A picture of the platform is shown in Figure 7.
The ADC is the AD9204, with a resolution of 10 bits and a maximum sampling frequency of 65 MHz. The DAC is the AD9745, with a resolution of 12 bits and a maximum sampling frequency of 125 MHz. Both ADC and DAC have been clocked at 50 MHz with a standard crystal oscillator from the manufacturer FOX. Accuracy measurements in terms of ENOB have been carried out on the analog front-end to check that the EMC and signal integrity criteria applied in the design perform correctly.
Expansion
The parameters measured for the ADC are described below:
• SNR, referenced both to the carrier (dBc) and to the converter full scale (dBFS).
• Spurious-Free Dynamic Range (SFDR), the difference between the full-scale signal and the most powerful spurious component.
• ENOB.
• Signal-to-Noise and Distortion Ratio (SINAD).
• Total Harmonic Distortion (THD).
• Total Harmonic Distortion plus Noise (THD + N), which equals the SINAD when the measurement is made from DC to half the sampling frequency.
These six parameters are defined and regarded as the basic measures of ADC dynamic performance in [37]. In the case of the DAC, however, only the SFDR has been measured, because the DDS generated in the FPGA as the source for the DAC has an SNR lower than the desirable 73 dB given by the quantization error formula of the converter, which depends on the number of bits.
ADC Performance Measurement
The ADC measurements require the injection of a pure tone and the analysis of the captured samples through the Discrete Fourier Transform (DFT) in order to observe the power of the different spurious components. The HP 8642B RF signal generator has been used to inject the input signal into the system. This generator performs far better than the AD9204 ADC, guaranteeing non-harmonic distortion below −100 dBc and a phase noise of −138 dBc/Hz at a 20 kHz offset from the carrier; this is better than the roughly 60 dB of SNR that the 10-bit ADC is able to resolve. The data analysis has been done by instantiating the ChipScope IP core, which allows up to 131,072 samples to be stored in the block RAM of the FPGA and sent to the PC via the JTAG port. Finally, the data are exported and analysed with Matlab.
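The sketch below mirrors, in simplified form, the DFT-based analysis performed in Matlab: it estimates SINAD, SFDR and ENOB from a captured single-tone record. The windowing choice, the number of leakage bins and the synthetic 10-bit test signal are assumptions made for this illustration, not details taken from the paper's scripts.

```python
import numpy as np

def adc_dynamic_params(samples):
    """Estimate SINAD, SFDR and ENOB from a single-tone ADC capture
    (simplified version of the DFT analysis performed in Matlab)."""
    n = len(samples)
    win = np.blackman(n)
    spec = np.abs(np.fft.rfft((samples - samples.mean()) * win)) ** 2
    k0 = int(np.argmax(spec))                       # fundamental bin
    sig_bins = list(range(max(k0 - 3, 1), k0 + 4))  # fundamental +/- leakage
    p_sig = spec[sig_bins].sum()
    rest = spec.copy()
    rest[sig_bins] = 0.0
    rest[0] = 0.0                                   # ignore the DC bin
    sinad = 10 * np.log10(p_sig / rest.sum())
    sfdr = 10 * np.log10(p_sig / rest.max())
    enob = (sinad - 1.76) / 6.02
    return sinad, sfdr, enob

# Synthetic check: an ideal 10-bit ADC sampling a 9.68 MHz tone at 50 MHz.
fs, f0, n = 50e6, 9.68e6, 131072
t = np.arange(n) / fs
x = np.round(511 * np.sin(2 * np.pi * f0 * t)) / 511
print("SINAD = %.1f dB, SFDR = %.1f dB, ENOB = %.2f bits" % adc_dynamic_params(x))
```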
The input signal of the IRIS ADC was first analysed with a spectrum analyser to evaluate its quality before sampling. The input spectrum from 1 MHz to 50 MHz is shown in Figure 8a, where the second and third harmonics rise above the noise floor; higher harmonics are below the noise floor because of the anti-aliasing filter. All parameters have been calculated in Matlab from the results in Figure 8b, where the full scale, the quantization noise level and the process gain have been drawn. The quantization noise is 6.02 · Nbits + 1.72 = 61.92 dB below full scale.
The mathematical relationships between ENOB, SINAD, SNR and THD, assuming all are measured with the same input signal amplitude and frequency, are given in Equations (5)-(8) and can be used to obtain all the required parameters. The performance parameters are measured over a bandwidth of 25 MHz, which is the Nyquist bandwidth. The noise power N_0 has been calculated by integrating the noise over a distortion-free portion of the band and extrapolating it to the 25 MHz bandwidth, and the distortion D by integrating the 8 most powerful spurious components.
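For reference, commonly used textbook forms of these relationships (with all quantities in dB, THD expressed as a negative value, and the usual 1.76 dB quantization constant) are the following; they are standard converter formulas and may differ slightly in constants from the exact Equations (5)-(8) of the paper:

```latex
\mathrm{SINAD} = -10\log_{10}\!\left(10^{-\mathrm{SNR}/10} + 10^{\mathrm{THD}/10}\right), \qquad
\mathrm{ENOB} = \frac{\mathrm{SINAD} - 1.76}{6.02}, \qquad
\mathrm{SNR}_{\mathrm{ideal}} = 6.02\,N_{\mathrm{bits}} + 1.76\ \mathrm{dB}.
```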
The values obtained are summarized in Table 6. The ENOB (9.78 bits) is very close to the real number of bits of the converter (10), which demonstrates that the analog front-end has been designed accurately.
DAC Performance Measurement
The DAC performance measurement has been carried out by injecting the digital samples of a sine wave from the FPGA core into the DAC and evaluating the output of the anti-aliasing filter with a spectrum analyser. The sine wave has been generated with the Xilinx DDS IP core, and the spectrum of the signal generated by the DDS before it is passed to the converter has been analysed with Matlab, as shown in Figure 9b. The SFDR has been calculated as the ratio between the signal power and the integrated power of the spurious components. From Figure 9a, the SFDR obtained is 68 dBc and 69.15 dBFS, while the quantization noise is 6.02 · Nbits + 1.72 = 74 dB, about 5 dB greater than the SFDR, i.e., nearly 1 bit in terms of ENOB. Some peaks that appear at the reconstruction filter at the DAC output are generated by the DDS itself, as can be seen in Figure 9b, and the noise level generated by the DDS is higher than the quantization noise; for this reason, we are not able to measure the converter accurately enough to obtain further performance values.
Educational
For educational purposes, the IRIS is being used as a platform for practical SDR cases in the Master of Telecommunication Engineering (MET) and in the Bachelor of Telecommunications. The MET students use the platform to implement a real part of an SDR system: they simulate an ionospheric HF radio link with 5 hops between Antarctica and Spain [1]. The channel presents Doppler shift and inter-symbol interference (ISI) due to the multipath caused by the ionospheric layers; the channel characterization is taken from previous work [3].
The students design a wide-band modulation scheme, such as OFDM or spread spectrum, with a bandwidth of up to 3 kHz to avoid or compensate these undesirable effects, and simulate it in baseband in Matlab. They then implement an upconverter to place the signal in the HF band and a downconverter to bring it back to baseband for demodulation in Matlab. The communications port used to exchange data with the PC is the UART, with a throughput of 921,600 bps, which is enough for the 3 kHz of bandwidth typically used in HF communication. The IP core required for this communication and the Matlab scripts are supplied to the students. This is one of the practical cases studied at MET, with the students working in groups to develop not only the technical content but also transversal competencies.
The Bachelor of Telecommunications students use the platform to study basic concepts of VHDL and programmable logic applied to software defined radio. They are introduced to VHDL and FPGA devices through the development of finite state machines (FSM), FIR filters and correlators in both parallel pipelined and sequential structures, observing that a parallel pipelined structure achieves higher speed at the cost of more logic resources, while a sequential structure is slower but uses fewer resources. The students use Xilinx IP cores such as the Direct Digital Synthesizer (DDS) or the FFT to understand the use of these well-known building blocks in SDR. Finally, they test the effect of undersampling for frequency downconversion and implement practical cases such as an IQ modulator. A basic VHDL project to control the hardware described previously is available in [38].
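As a minimal software model of what a DDS does (a phase accumulator whose top bits address a sine look-up table), the following sketch can be used to predict the samples produced for a given output frequency; the accumulator, LUT and amplitude widths are illustrative choices, not the exact configuration of the Xilinx core.

```python
import numpy as np

def dds(f_out, f_clk, n_samples, acc_bits=32, lut_bits=10, amp_bits=12):
    """Minimal phase-accumulator DDS: the top bits of the accumulator address a
    full-wave sine look-up table (widths are assumed illustrative values)."""
    fcw = int(round(f_out / f_clk * 2**acc_bits))          # frequency control word
    lut = np.round((2**(amp_bits - 1) - 1) *
                   np.sin(2 * np.pi * np.arange(2**lut_bits) / 2**lut_bits))
    phase = (fcw * np.arange(n_samples)) % 2**acc_bits      # phase accumulator
    return lut[phase >> (acc_bits - lut_bits)].astype(int)

# A 9.68 MHz tone sampled at the 125 MHz DAC clock, 12-bit amplitude.
print(dds(9.68e6, 125e6, 8))
```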
Research
In the research field, the platform with the highest-grade components is being used to design a broadband HF radio-modem for both NVIS and oblique single-hop ionospheric transmissions. This modem will be used to deploy a sensor network in remote locations such as the Spanish Antarctic Station (SAS) on Livingston Island [1], gathering data from sensors placed up to 300 km away from the SAS. The baseband signal bandwidth is 100 kHz, which will allow modulation schemes wider than the single 3 kHz HF channel. The platform carries out frequency up- and down-conversion, and, through a MicroBlaze and the gigabit Ethernet port, data are transmitted to and received from an embedded Linux platform, a commercial Raspberry Pi. This platform writes and reads the information on a hard drive, i.e., the data gathered from the remote sensor network. Finally, the embedded platform and the pulse per second (PPS) signal of a GPS receiver synchronize receiver and transmitter.
Our research group has already designed an NVIS radio-modem for emergency communications; see Figure 10a. NVIS communications support larger distances between transmitter and receiver than other systems such as Very High Frequency (VHF) or Ultra High Frequency (UHF) without repeaters or satellites; in fact, when standard telecommunication infrastructures collapsed in recent natural disasters, only amateur radio operators were able to communicate. The acquisition and signal processing are carried out in the IRIS: the programmable logic performs the upsampling and downsampling of 100 ksps baseband signals for transmission and reception, respectively, and the Spartan-6 of the IRIS also contains a MicroBlaze to control the internal peripherals of the platform and to connect the programmable logic to the control system through Ethernet. A Raspberry Pi sets up all the peripherals: (i) a power amplifier to amplify the signal; (ii) a wattmeter measuring the reflection coefficient; (iii) a Hard Disk Drive (HDD) to store or retrieve data files; and (iv) a GPS to synchronize the transmitter and the receiver in time without using the Internet. A more detailed description can be found in [10].
Table 7 shows a comparison between the basic specifications and cost of some platforms comparable to the IRIS. Note that the cost given for the IRIS is its manufacturing price, while the cost of the other platforms is their selling price.
Conclusions
IRIS is a compact and integrated SDR platform with a high degree of scalability and connectivity, and it fulfils all the requirements stated previously for educational and research applications. The platform has an accurate design in terms of signal integrity and EMC, as shown by the ADC performance measurements. IRIS outperforms state-of-the-art platforms of the same unit cost, or offers comparable performance at a much lower unit cost (see Table 7). For example, the XtremeDSP Development Kit-Virtex-4 Edition, with 14-bit ADCs and DACs, costs thousands of dollars; its analog front-end achieves an ENOB of 12.3 bits, about 2.5 bits better than our 10-bit educational version, but its cost is much higher than that of IRIS. The USRP N200/N210 from Ettus Research, with dual ADCs and DACs of 14 and 16 bits of resolution, respectively, has an ADC SFDR of 88 dBc and a DAC SFDR of 80 dBc, higher than our 79.2 dBc for the ADC and 68 dBc for the DAC. Moreover, Ettus platforms come with a powerful software framework and drivers that speed up the implementation of different applications. However, the cost of the Ettus platform is much higher than that of IRIS, and price is our priority for educational purposes: the aim of this platform is that students and researchers of our University program it at a low level, focusing on the optimization of the algorithms.
Currently, the IRIS is being used as an educational platform for putting into practice concepts reviewed in the lectures of both the Bachelor and the Master of Telecommunications. The feedback obtained from the students is very positive because they have the chance to work with a real platform, which trains both transversal competencies and applied skills. The high connectivity of the platform allows students to work far from the university with only a PC and the IRIS. From the academic point of view, it allows us to deliver SDR courses and the Master in Telecommunications in an online format using a real system; the platform can be connected to a network or directly to a laptop using the Ethernet transceiver or the USB OTG.
The IRIS platform can be used in both on-site and online programs. The platform was created to apply a learning-by-doing methodology [11] with a real system that the students can use throughout the year at school or at home. Some schools use virtual labs; however, working with a real system helps students reach transversal competencies and increase their knowledge. Finally, for future applications the system can be expanded through the FMC connector, for example by adding a DSP subsystem.
The IRIS platform offers an excellent trade-off between features and cost, fitting the maximum number of applications in both educational and research areas thanks to the option of assembling or omitting the main pin-compatible components.
Figure 1. Block diagram of the IRIS platform.
Figure 3. Power supply distribution of the IRIS platform, where the separate analog and digital sources can be distinguished.
Figure 5. Stack-up of the 8 layers and distribution of signals and planes on the PCB.
Figure 6. Track equalization between the FPGA and the DRAM to obtain the same length.
Figure 7. Picture of the IRIS platform and its functional distribution.
Figure 8. A single tone of 9.68 MHz injected into the IRIS to determine the ADC performance parameters: (a) signal generated by the HP 8642B measured with a spectrum analyser; (b) the ADC signal captured by ChipScope in the FPGA and analysed with Matlab.
Figure 9. DAC performance analysis carried out by injecting a single tone of 9.68 MHz with a DDS and measuring the output with a spectrum analyser: (a) DDS signal injected into the DAC inputs by the FPGA; (b) ideal DDS signal generated by the FPGA to analyse the DAC performance.
Figure 10. Low-cost transmitter installed in Cambrils, Spain, 400 km away from the receiver in Barcelona, to test NVIS radio-communications: (a) the whole digital radio-modem based on the IRIS platform; (b) the wideband HF folded dipole placed at the transmitter side; (c) the wideband HF folded antenna with the balun placed at the receiver side.
Table 1. Requirements of the platform for different applications.
Table 3. Analog Devices ADC converters with a resolution higher than 10 bits and throughput higher than 60 MSPS.
Table 4. Analog Devices DAC converters with a resolution higher than 12 bits and throughput higher than 100 MSPS.
Table 5. Manufacturing parameters of the PCB stack-up.
Table 6. Performance parameters of the ADC.
"Computer Science",
"Engineering",
"Education"
] |
Robust Least-Square Localization Based on Relative Angular Matrix in Wireless Sensor Networks
Accurate position information plays an important role in wireless sensor networks (WSN), and cooperative positioning based on cooperation among agents is a promising methodology of providing such information. Conventional cooperative positioning algorithms, such as least squares (LS), rely on approximate position estimates obtained from prior measurements. This paper explores the fundamental mechanism underlying the least squares algorithm’s sensitivity to the initial position selection and approaches to dealing with such sensitivity. This topic plays an essential role in cooperative positioning, as it determines whether a cooperative positioning algorithm can be implemented ubiquitously. In particular, a sufficient and unnecessary condition for the least squares cost function to be convex is found and proven. We then propose a robust algorithm for wireless sensor network positioning that transforms the cost function into a globally convex function by detecting the null space of the relative angle matrix when all the targets are located inside the convex polygon formed by its neighboring nodes. Furthermore, we advance one step further and improve the algorithm to apply it in both the time of arrival (TOA) and angle of arrival/time of arrival (AOA/TOA) scenarios. Finally, the performance of the proposed approach is quantified via simulations, and the results show that the proposed method has a high positioning accuracy and is robust in both line-of-sight (LOS) and non-line-of-sight (NLOS) positioning environments.
Introduction
In recent years, positioning and navigation technology has been playing an increasingly important role in many applications, such as public safety, law enforcement, rescue operations, traffic management, inventory tracking, home automation, etc. At the same time, location-based services also have significant commercial value [1]. The Global Navigation Satellite System (GNSS) is the most widely-used navigation and positioning technology, providing services that are suitable for most applications in an open environment [2]. However, the GNSS might fail to provide reliable services due to interference and in some challenging environments such as cities, forests and indoors, due to the weakness of GNSS signals.
An effective way of solving this problem is to supplement and enhance GNSS with terrestrial positioning systems. At present, there are several positioning systems, including cellular-based positioning, WiFi positioning, and ultra-wideband-based (UWB-based) positioning systems. In particular, with the development of large-scale multiple-input and multiple-output (MIMO) systems, positioning based on mmWave communication that is becoming an emerging research focus has also received increasing attention [3,4].
The sensors in the terrestrial system constitute a positioning network, and we are interested in locating the sensors based solely on measurements from the multi-target scene. Based on the nodes' exchange measurements and other data, the WSN positioning scenarios can be divided into cooperative and noncooperative. Compared with traditional positioning methods, cooperative (also known as collaborative) positioning has important research and application significance. For instance, cooperative positioning of connected vehicles that effectively utilizes the relative observations from the vehicle-to-vehicle (V2V) devices has become a significant trend of future cooperative intelligent transportation system (ITS) applications [5]. Its advantages have been confirmed theoretically and algorithmically [6,7]. The analysis of Fisher information can show that nodes can obtain better positioning accuracy and availability in cooperative scenarios [8].
For various terrestrial systems, the primary positioning methods can be divided into four categories, among which the distance-based time of arrival (TOA) and time difference of arrival (TDOA) methods are the most common [9]. At present, various classic algorithms for cooperative positioning are available, such as maximum likelihood (ML) estimation [10], the extended Kalman filter (EKF) [11], the particle filter (PF) [12], etc. These algorithms use the minimum mean squared error (MMSE) as the estimation criterion and, under certain conditions, are essentially equivalent to the LS estimator. For instance, when the ranging error has a Gaussian distribution, the ML algorithm is equivalent to the LS estimator.
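A brief sketch of this equivalence, assuming independent, zero-mean Gaussian ranging errors with a common variance σ² (an assumption made here only for the illustration):

```latex
\hat{\mathbf{x}} \;=\; \arg\max_{\mathbf{x}} \prod_{i=1}^{L}
\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\!\left(-\frac{\left(\rho_{i}-d_{i}(\mathbf{x})\right)^{2}}{2\sigma^{2}}\right)
\;=\; \arg\min_{\mathbf{x}} \sum_{i=1}^{L}\left(\rho_{i}-d_{i}(\mathbf{x})\right)^{2}.
```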
Related Works
When estimating the location of user nodes, it is necessary to select a reasonable initial value. Unfortunately, sometimes, there is a problem with initial value sensitivity when the algorithms are used. In addition, the NLOS propagation error of the signal is also an important factor affecting the positioning accuracy. Thus far, some efforts have been made to solve these problems. By way of observation, if measurements based on the received signal strength (RSS) and angle of arrival (AOA) are considered as the auxiliary positioning methods, the NLOS effect can be suppressed by the algorithms, and the positioning accuracy can be improved [13][14][15][16]. In general, hybrid positioning methods with multiple measurement methods tend to have higher positioning accuracy and be more robust than those based on a single measurement. The localization based on mmWave communication is a hybrid positioning method, but differs from the traditional approach. Its advantage lies in the unique structure of the MIMO systems, so it can determine the position of targets using only one base station by measuring the angles of both the transmitted and received signals [17]. This scheme reduces the cost of base station deployment, but also has apparent disadvantages. One major problem is the diffuse reflection effect of the signal, which causes a higher angle measurement error. For this reason, the positioning results will not be accurate; this topic remains to be studied, and the related improvements are yet to be attained [18,19].
As for the algorithms, a variety of robust positioning algorithms are available. The localization technique based on multidimensional scaling (MDS), which was first proposed by Shang et al. [20], offers a new solution of node localization. Recently, several localization methods related to the classical MDS method have been applied to sensor networks. Forero and Giannakis presented a robust multidimensional scaling based on regularized least squares [21]. Focusing on the problem of localization in mixed LOS and NLOS scenarios, a novel localization algorithm called the Gaussian mixed model based non-metric multidimensional (GMDS) was proposed [22]. The advantages of MDS are that one can obtain actual positions between nodes by setting only a few anchor nodes; besides, the anchor nodes' deployment has no strict restriction. However, it is unreliable in large-scale networks with sparse connectivity. The work of Destino and Abreu transformed the original WLS function into a convex one by introducing the Gaussian kernel function and optimizing the smoothing parameters [23]. In the literature [24,25], the problem has been formulated by applying robust statistics techniques on squared range measurements. This provides the opportunity to find the estimate efficiently. However, this formulation is not optimal in the ML sense [23]. Another class of methods is based on the convex relaxation technique. The paper [26] derived a maximum likelihood estimator for Laplacian noise and relaxed it to a convex program by linearizing and dropping a rank constraint. Soares et al. [27] set forth a convex underestimator of the maximum likelihood cost for the sensor network localization based on the convex envelopes of its parcels. At the same time they capitalized on the robust estimation properties of the Huber function and derived a convex relaxation [28]. It is known that the semidefinite programming (SDP) algorithm is one of the most used methods, which also transforms the position model into a convex optimization problem by applying the convex relaxation technique [29][30][31]. These approaches can not only limit the errors caused by the NLOS effect, but also make the cost function convex. In other words, they are insensitive to the initial value's selection. However, the downside of these algorithms is that the estimation accuracy will decrease slightly. Besides, other approaches such as the parallel projection method (PPM) [19,32], projection onto convex sets (POCS) [33,34], etc., have been proposed. These two methods turn the LS cost function into a convex one. At the same time, the PPM can be used in the distributed cooperative scenario, which significantly reduces the pressure of information interaction between the two nodes. An outlier detection method was proposed by Wang et al. [35] based on the maximum entropy principle and fuzzy set theory.
Contributions
It is known that the LS cost function is nonlinear and nonconvex over the global region. When the function has more than one local optimum, the result of an iterative algorithm depends on the selected initial value, and this sensitivity to initialization appears to be tied to the convexity of the cost function. Existing studies analyzed the LS source localization problem to determine the condition under which the function is convex [36,37]. Similarly, for WSN localization, the convexity, or the number of local extrema, appears to be related to the number of targets and to the ranging error. To explore this further, we study the LS model for WSN localization to understand theoretically how the targets and the ranging error affect the extremum points. The major contributions of the paper are as follows:
• A sufficient and unnecessary condition for the LS cost function to be convex is proposed and proven for WSN positioning.
• We define the relative angle matrix for both noncooperative and cooperative scenarios and show that the LS function can be transformed into a globally convex function if all the targets are located inside the convex polygon formed by their adjacent nodes.
• A robust algorithm that detects the null space of the relative angular matrix is proposed for WSN localization. Additionally, we improve the algorithm by using angle constraints so that it can be used in both the AOA/TOA and TOA positioning methods, which extends its applicability.
It is worth noting that with the development of MIMO technology, the acquisition of ranging information and angle information between two nodes becomes easier, as the technology provides hardware and technical support for the measurement of the relative angular matrix. On the other hand, the position of the virtual anchors can be calculated, so mmWave positioning can be transformed into an AOA/TOA positioning model in the traditional sense, and the null space algorithm improved in this paper can be used to solve the position.
The rest of the paper is organized as follows. In Section 2, some basic definitions and model descriptions are given. In Section 3, we analyze the convexity of the unconstrained LS positioning model and derive a sufficient and unnecessary condition for the function to be convex. In Section 4, we first provide the definition of the relative angular matrix, then subsequently prove some important properties, and propose a novel null space algorithm by adding the angle constraint. We perform a numerical simulation that aims to verify the correctness of the proposition and evaluate the performance of the proposed algorithm in Section 5. Finally, we conclude the paper in Section 6.
Definition of the Nodes and Links
In the wireless network location scenario based on the TOA method, we assume there are M targets and use the set F_c = {T_1, T_2, ..., T_M} to enumerate them. The real coordinates of a target are x_j ∈ R^η, 1 ≤ j ≤ M, j ∈ N+, where η ∈ N+ is the Euclidean dimension of the location scene. There are N anchors, represented by the set F_a = {A_1, A_2, ..., A_N}, with real coordinates s_i ∈ R^η, 1 ≤ i ≤ N, i ∈ N+. Let the set of all nodes be F_t; then F_t = F_a ∪ F_c. Assume that there is a total of L ranging links in the positioning scenario and that the set of links is L = {l_1, l_2, ..., l_L}. In the cooperative scenario, the ranging links can be divided into two categories: ranging links between targets and anchors (AT links) and cooperative ranging links between two targets (TT links). The set of all distance observations is denoted by D_t, the set of AT links by D_a, and the set of TT links by D_c. Obviously, we have D_t = D_a ∪ D_c, D_a ∩ D_c = ∅, |D_t| = L, and |D_a| = N. Let d_{K_i K_j} and d̂_{K_i K_j} be the real and estimated distances between the nodes K_i and K_j.
Definition of the Errors
In the TOA positioning method, since the signal is affected by noise, the multipath effect, and the NLOS effect during propagation, the observed value is not the real distance between the two nodes, and there usually exists a ranging error. Let ε = [ε 1 ε 2 ... ε L ] T ∈ R L be the ranging error vector, where ε i represents the error on the ith link. In the LOS environment, the ranging error is caused entirely by noise, which usually follows a Gaussian distribution with a mean of zero and a constant variance. Here, we denote it by ε los . In the NLOS environment, besides the noise, there also exists a positive deviation that follows the Gaussian distribution with both the mean and variance being constant. Here, we denote it by ε nlos . Then, the ranging error can be modeled as follows: where ε los ∼ N 0, σ 2 los , ε nlos ∼ N µ nlos , σ 2 nlos , and µ nlos > 0. In the AOA method, we assume that there are observation errors in the relative angle matrix Ω, which will be defined in the fourth part of this paper. Hence, whereΩ represents the observation of the relative angular matrix and ∆ is the error matrix, an element δ ij of which follows the Gaussian distribution δ ij ∼ N µ α , σ 2 α .
Noncooperative Scenario Description
In the noncooperative localization scenario, there are M targets, and there is no link between any two targets. Consider target T j ; the target's real and estimated positions are x j andx j ; then, the distance can be given by: where · 2 represents the Euclidean distance between two nodes. In the noncooperative scenario, there are N ranging links that are denoted by l 1 , l 2 , ..., l N , respectively. Let d i andd i be the real and estimated distances, corresponding to link l i . Then, we can assign the values as follows: where 1 ≤ i ≤ N, i ∈ N + and q ja is the quantity of all links directly connected to node T j . Assume that the distance observations are ρ i ∈ D a , 1 ≤ i ≤ N, i ∈ N + . If the observation error is considered, the relationship between the distance observation and the true distance is given by: Accordingly, the unconstrained LS estimation model in the noncooperative scenario can be expressed as follows: arg min
Cooperative Scenario Description
The cooperative scenario is very similar to the noncooperative one. The difference is that there exist ranging and information interactions between two targets. Let x j andx j be the real and estimated coordinates of node T j ; then, In the cooperative scenario, there are L ranging links in total, which are denoted by l 1 , l 2 , ..., l L . Let d i andd i be the real and estimated distances corresponding to link l i . Next, we can assign the values as follows. If 1 ≤ i ≤ N, i ∈ N + , each l i is an AT link, and we assign the values according to Formulas (4) and (5). If N < i ≤ L, i ∈ N + , l i is a TT link, and we assign the values as follows: Above, q jc is the quantity of TT links with endpoints T j , T k such that k > j. Additionally, there also are T k ∈ U j , and U j is the set of targets that have ranging links with T j . In set U j , the elements' subscripts are arranged in increasing order, and T k is the gth element, the index of which is calculated by: Let ρ i ∈ D t , 1 ≤ i ≤ L, i ∈ N + be distance observations; then, the relationship between ρ i and d i satisfies Formula (6). Accordingly, in the cooperative scenario, the unconstrained LS model is constructed as: In this paper, we refer to F s and F c together as the LS positioning cost function and denote it by F.
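To make the model concrete, the sketch below builds the noncooperative residuals ρ_i − d̂_i(x̂) and minimizes their sum of squares with SciPy's least-squares solver (a trust-region method, rather than the plain Gauss-Newton iteration used later in the paper); the anchor coordinates, noise level and initial guess are illustrative values, not those of Table 1.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative anchors and target (not the coordinates of Table 1).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
x_true = np.array([2.0, 4.0])

rng = np.random.default_rng(0)
rho = np.linalg.norm(anchors - x_true, axis=1) + rng.normal(0.0, 0.3, len(anchors))

def residuals(x_hat):
    """rho_i - d_hat_i(x_hat): the terms whose squares form the cost F_s."""
    return rho - np.linalg.norm(anchors - x_hat, axis=1)

x0 = np.array([8.0, 8.0])                      # arbitrary initial guess
estimate = least_squares(residuals, x0).x
print("estimate:", estimate, " position error:", np.linalg.norm(estimate - x_true))
```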
Convex Analysis of the Model
In the previous part, we presented the unconstrained LS localization model in the cooperative and noncooperative scenarios. For the positioning problem, we are interested in finding the global optimum of the cost function. Due to the nonconvex property of the LS function, there may be multiple local optima or stagnation points in most scenes. As a result, a local minimum solution is obtained in the iterative process, which affects the positioning accuracy significantly. In this part, we discuss the nonconvex property of the unconstrained LS localization model to study when the cost function is convex and when it is nonconvex. To simplify the analysis, it may be worth considering the positioning problem on the two-dimensional plane, i.e., for η = 2. It is convenient to generalize the conclusion to a higher dimensional space. Let s j = [x s i y s i ] T be the coordinates of the anchor and x j = x j y j T be the coordinates of the target. At the same time, the corresponding estimated position isx j = x jŷj T .
Analysis of the Noncooperative Scenario
It is known that the convex property of a function is directly related to its Hessian matrix, and the following theorem holds: Theorem 1. The second-order condition ensures the function is convex. Assume that function f of x is second-order differentiable, and let its domain be dom f . If dom f is a convex set and the Hessian matrix exists, then a sufficient and necessary condition for the function to be convex in dom f is that, for ∀x ∈ dom f , its Hessian matrix is a semipositive definite matrix [38].
To simplify the problem, we first consider the scenario of only one target, and the target number is T 1 . In this case, there are N links between T 1 and the anchors. We try to compute the gradient (the first-order differential) and the Hessian matrix (the second-order differential) of the cost function F s . The calculation results are shown in Equations (13) and (14).
Considering Theorem 1, we obtain the following corollary: If dom F s of the LS cost function is R 2 , it obviously is a convex set. Assume that the set of allx 1 , which satisfy condition ∇ 2 Corollary 1 is a sufficient and necessary condition for F s to be convex. Finding a set A that satisfies the condition is equivalent to dividing the domain in R 2 so that F s is convex in each divided subinterval.
Next, the condition of ∇ 2 x 1 0 will be further analyzed. Note that ∇ 2 x 1 is a real symmetric matrix, and the two following lemmas hold: Lemma 1. The eigenvalues of a real symmetric matrix are all real numbers.
Lemma 2.
A real symmetric matrix is a semipositive definite matrix if and only if all its eigenvalues are nonnegative [39].
Lemma 2 shows that the assessment of whether a matrix is positive semidefinite can be transformed into the determination of whether the eigenvalues are positive or negative. Hence, we consider calculating the eigenvalues of the Hessian matrix. Let J = ∇ 2 x 1 ∈ R 2×2 , and let λ be the eigenvalue of J; then, the eigenvalue polynomial of J is: where J ij are the elements of J. Let G = 0; then, the following characteristic polynomial equation can be obtained: It is a quadratic equation of one variable. From Lemma 1, we observe that there must be two real roots, and the discriminant of roots ∆ ≥ 0 is invariable. From the distribution relation of the two roots, the sufficient and necessary condition of having two nonnegative real roots is: Therefore, we can derive the following corollary: Corollary 2. Set A consists of all the sets ofx 1 that satisfy the condition of the inequality group (15).
Corollary 2 is an equivalent condition of the LS cost function F s being convex. Compared with Corollary 1, Corollary 2 transforms the condition that the Hessian matrix is semipositive definite into the solution of the inequality system, which provides a feasible method for finding the set satisfying the requirements. However, due to the nonlinear characteristics of Equation (17), it is difficult to obtain the analytic solutions of inequalities. In the next part, we discuss a particular case and try to provide the proof.
Proposition 1.
If there exists a set C in which all the estimatesx 1 satisfyd i − ρ i ≥ 0, then the LS cost function F s is convex and C ⊆ A.
The proof of Proposition 1 is given in Appendix A.
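A small numerical check of Proposition 1 can be run by approximating the Hessian of F_s with finite differences and inspecting its smallest eigenvalue at points where the condition d̂_i − ρ_i ≥ 0 does or does not hold; the anchors and observations below are assumed values chosen only for illustration.

```python
import numpy as np

# Assumed anchors and distance observations (not those of Table 1).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rho = np.array([4.0, 8.5, 9.5, 6.0])

def F(x):
    """Noncooperative LS cost F_s evaluated at the candidate position x."""
    return np.sum((rho - np.linalg.norm(anchors - x, axis=1)) ** 2)

def hessian(x, h=1e-4):
    """Central-difference approximation of the 2x2 Hessian of F at x."""
    H = np.zeros((2, 2))
    E = np.eye(2) * h
    for i in range(2):
        for j in range(2):
            H[i, j] = (F(x + E[i] + E[j]) - F(x + E[i] - E[j])
                       - F(x - E[i] + E[j]) + F(x - E[i] - E[j])) / (4 * h * h)
    return H

for p in ([2.0, 4.0], [5.0, 5.0], [9.0, 1.0]):
    x = np.array(p)
    cond = np.all(np.linalg.norm(anchors - x, axis=1) - rho >= 0)
    min_eig = np.linalg.eigvalsh(hessian(x)).min()
    # Proposition 1 guarantees min_eig >= 0 only where the condition holds.
    print(f"x = {p}: all d_hat - rho >= 0: {bool(cond)}, min eigenvalue = {min_eig:.2f}")
```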
If there is more than one target and their number is M, the Hessian matrix J has 2M rows and 2M columns, i.e., J ∈ R 2M×2M . Consider a decomposition of J, and let J = ∑ N i=1 Q i . In the preceding formula, Q i is the Hessian matrix corresponding to the ranging link l i . It is easy to observe that there are four elements, and the elements of Q i are located on the diagonal or adjacent positions of J. An example is shown in Figure 1b, where D k ij represents the corresponding elements in J, which are the second-order partial derivatives of F s with respect to the coordinates of a target. We translate the four elements in Q i to the upper left corner via the elementary row-column transformation of matrices. Let the transformed matrix be Q i , as shown in Figure 1c; we observe that Q i and Q i are similar matrices. The following lemma holds for similar matrices: This shows that Q i and Q i have the same eigenvalues. Let G i = λI − Q i ; to obtain the eigenvalues of G i , we construct the following block for G i : Above, , where g kl is the element at the kth row and lth column of G i . From the determinant theorem of block matrices, we know that: Let |G i | = 0. According to the previous analysis, the two solutions of |G 11 | = 0 are λ = 1 and Proposition 1 still holds if there is more than one target, and it is a sufficient and unnecessary condition for F s to be locally convex. The nonconvex interval of F s decreases with the increasing ε i and the number of links that satisfy ε i > 0.
Analysis of the Cooperative Scenario
Similarly, we try to find a set ofx j for the condition that the LS cost function is convex in the cooperative scenario. We guess that Proposition 1 is also valid in that scenario and try to prove it. In this scenario, the ranging link consists of two parts, an AT link and a TT link. Inspired by Lemma 3, we decompose Hessian matrix J into several submatrices Q i . Each Q i is actually a function related tô The element of Q i is the second-order partial derivative of function f d i with respect to the target coordinatesx j . In Section 3.1, it has been proven that Proposition 1 is satisfied if l i is an AT link. Next, we will show that Proposition 1 is also satisfied for TT links. To facilitate the analysis, we select one TT link l k in the cooperative scenario and denote the targets at the end of the link as T m and T n , the estimates of which are (x m ,ŷ m ) and (x n ,ŷ n ), respectively. If the distance measurement is ρ k , then the LS cost function can be written as: Computing the partial derivatives of F c with respect tox m ,x n ,ŷ m ,ŷ n , the gradient vector (the first-order differential) can be obtained as follows: Hence, the Hessian matrix is: We are interested in the eigenvalue of J; the characteristic polynomial can be calculated as follows: Then, the four eigenvalues can be solved as follows: Ifd k − ρ k ≥ 0, thus a TT link has similar properties to those of an AT link. In the following, we try to generalize this further. If there are two kinds of ranging links in the LS cost function, this property remains unchanged. Consider the case of M targets in the cooperative scenario; the Hessian matrix of F c is J ∈ R 2M×2M . For example, the elements of the matrix are shown in Figure 1a, where D k ij represents the corresponding element that is the second-order partial derivative of F c with respect to the coordinates of a target. Consider a decomposition of J, and let J = ∑ L i=1 Q i . In the preceding formula, Q i is the Hessian matrix corresponding to the ranging link l i . If l i is an AT link, the analysis and result shown in Section 3.1 apply. If the ranging link is a TT link, there are sixteen elements in each Q i . An example is shown in Figure 1d. All the elements in Q i are translated to the upper left corner, as shown in Figure 1e. We denote the transformed matrix by Q i and let G i = λI − Q i . Similarly, we construct the following block for G i : Above, , and g kl is the element at the kth row and the lth column of G i . Similar to Formula (23), we have: Let |G i | = 0. Reviewing the analysis for TT links, the four solutions of |G 11 | = 0 are λ 1 = λ 2 = 0, The analysis above shows that Proposition 1 is still valid in the cooperative scenario. Similar to the noncooperative scenario, the nonconvex interval of F c decreases with the increasing ε i and the number of links that satisfy ε i > 0. Proposition 1 shows that if the conditiond i − ρ i ≥ 0 is satisfied, an appropriate interval in which the LS cost function F is convex can always be found. Based on this, we can describe the condition for F to be convex globally.
Proposition 2 is a sufficient and unnecessary condition for the global convexity of the LS cost function. In a practical scenario of the ranging error being greater than zero, the distance observation ρ i is positive. In this case, the condition of global convexity is not satisfied. If ε i ≤ −d i , we have ρ i ≤ 0; then,d i − ρ i ≥ 0 is invariable, and F is convex in the global range. Although the global convexity condition is satisfied if all ε i ≤ −d i , it is a low probability event that all the observation values will be negative because of the independence between the distance measurement errors. Therefore, in an actual location determination scenario, it is not a common phenomenon for the LS cost function to be convex in the global range.
Null Space of the Relative Angle Matrix
From the analysis in Section 3, it is known that a positive ranging error will cause the LS cost function to be nonconvex. In this condition, when the iterative method is used to search for the optimal solution, it may fall into a local minimum, causing the result obtained to not be the optimal global solution. There are generally two ways of solving this problem. The first is to divide the domain R 2M into several intervals so that the cost function is convex in each subinterval. Then, the appropriate initial value is selected in each subinterval, and the result is obtained by the iterative method. Proposition 1 shows that such subintervals must exist, so this method is feasible in any case. The second method is to modify the original LS localization model to make it convex in the global range. The advantage of this method is that the convexity weakens the requirement of initial value selection. That is, the solution is insensitive to initial value selection. Based on Proposition 2, we propose a robust method using the relative angle matrix for WSN positioning. The basic idea of this method is to transform the LS cost function into a globally convex function by calculating the relative angle matrix. This algorithm ensures that the minimum obtained by the Gauss-Newton iteration method will be the optimal global solution. Some further analysis will also be performed in this part.
Definition of the Relative Angle Matrix
Definition 1. The relative angle matrix in the noncooperative scenario is defined as: where Ω s ∈ R N×N . If θ iTj is the angle between l i and l j , then θ iTj is given by: Assume the following formulation: where P s ∈ R N×2M and ε s ∈ R N . Let ∇x 1 = 0; this condition is equivalent to: Further, the relationship between relative angle Ω s and P s is as follows: Substituting (30) into (31), we obtain: Imitating the noncooperative scenario, we can define the relative angle matrix in the cooperative scenario.
Definition 2.
The relative angle matrix in the cooperative scenario is defined as: where Ω c ∈ R L×L ; when i = j, if θ iTj is the angle between l i and l j , then θ iTj is defined by Formula (28). If i = j, let ω ij represent the element of the ith row and jth column in Ω c . If l i is an AT link, then ω ij = 1; otherwise, l i is a TT link, and then, ω ij = 2. Assume that: where P c ∈ R L×2M and ε c ∈ R L . Then, ∇x j = 0 is equivalent to: Multiplying both ends of the equation by P c results in: In the cooperative scenario, it still holds that: Substituting Formula (36) into Formula (37), we obtain: We refer to the relative angle matrices Ω s and Ω c collectively as Ω. The following property of Ω is established generally.
Property 1.
In the two-dimensional plane, let r be the rank of Ω, i.e., r = rank (Ω). Assume that there are M targets in the localization scenario, and the number of unknown variables is τ = 2M; then, the inequality r ≤ τ is satisfied.
The proof of Property 1 is given in Appendix B.
Null Space Algorithms
From Formulas (32) and (38), we can conclude that the ranging error is the null space of Ω. It is also known from Property 1 that once Ω has been determined, if the number of nontrivial solutions of ranging error ε is N B , we have N B ≥ 1. If and only if r = τ, then N B = 1. If r < τ, the solution satisfying the equation should be a set, i.e., there is an infinite number of ε satisfying the equation. Hence, if a basic solution of ε has been obtained, ε is the linear space formed by Φ, where Φ ∈ R N A ×(τ−r) . Let Φ = ϕ 1 ϕ 2 ...ϕ τ−r ; then, the general solution of ε can be expressed as: where k i ∈ R and ϕ i ∈ R N A . Equation (39) indicates that for ε in the same linear space, the LS cost function F has the same local optimal point. However, convexity will vary with ε. If ε > 0, F is likely to be a nonconvex function; thus, multiple local optima will exist. If ε 0, it is known from Proposition 2 that F can be transformed into a convex function in the global range. Assuming that Ω is known, we can determine Φ for a given ε. If there exists a linear combination of column vectors in Φ that results in ν 0, then by the same property of the local optima, we can change the LS cost function F and make it convex in the global range.
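A minimal sketch of the null-space step, assuming the common case of a one-dimensional null space: the basis Φ is obtained from the SVD of Ω, and a direction with all entries non-positive is returned if one exists (higher-dimensional null spaces would require a small linear program, which is omitted here). The toy matrix is illustrative and is not a real relative angle matrix.

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Orthonormal basis of the null space of A, obtained from the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s.max()))
    return vt[rank:].T                       # columns span null(A)

def nonpositive_direction(A):
    """Try to find nu in null(A) with all entries <= 0 (handles the common
    one-dimensional case; higher-dimensional null spaces would need an LP)."""
    Phi = nullspace(A)
    for k in range(Phi.shape[1]):
        v = Phi[:, k]
        if np.all(v <= 1e-12):
            return v
        if np.all(v >= -1e-12):
            return -v
    return None

# Toy matrix whose null space is spanned by a strictly positive vector,
# so a valid non-positive nu exists (illustrative only).
A = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
print(nonpositive_direction(A))              # -> multiple of [-1, -1, -1]
```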
Proposition 3.
Let ν be an element of the null space of Ω such that ν 0; then, Equations (7) and (12) can be rewritten as: arg min In this paper, we refer to F s and F c collectively as F . In Proposition 3, F and F have the same local optimal point, i.e., the objective functions are equivalent. Additionally, F is also globally convex, which endows it with the characteristic of large-scale convergence. Thus, using it can reduce the sensitivity to the initial value selection. If we want to apply Proposition 3, the following problems remain to be solved:
• How do we obtain Ω?
• For an arbitrary Ω, does a ν satisfying the non-positivity condition exist?
• When there are errors in Ω, how do we deal with them?
For the first problem, the direct method is to measure the angle, which is similar to the AOA method. In this way, Proposition 3 is transformed into an AOA/TOA hybrid location algorithm. Another method is to transform the distance data into the corresponding angle via the cosine theorem and construct the relative angle matrix. To answer the second question, we will prove that the following properties are valid: Property 2. In both noncooperative and cooperative scenarios, there always exists ν 0 if the node to be located is inside the convex polygon composed of adjacent nodes, whereas there is no ν satisfying this condition if the node is outside the convex polygon.
The proof of Property 2 is given in Appendix C.
According to Property 2, Proposition 3 can be applied in both noncooperative and cooperative scenarios; however, there is a limitation that it can only be applied if all the targets are located in the convex hull composed of adjacent nodes. In contrast, Proposition 3 does not hold if there are targets outside the convex hull formed by neighboring nodes.
In the third problem, the angle measurement errors will influence the final positioning result. In addition, in the process of calculating the null space of Ω, it is possible that no suitable ν 0 exists due to the errors. In this case, Proposition 3 will also be invalid, and it is necessary to eliminate it.
The paper [36] used the method of principal component analysis (PCA) to reduce the deviation of the relative angular matrix in source localization. The main steps are as follows. First, Ω is decomposed by SVD. Then, all the eigenvalues are sorted in descending order, and the large eigenvalues are selected as the main eigenvalues. At the same time, the eigenvectors corresponding to the eigenvalues are selected to reconstruct Ω, which is denoted by Ω . The null space of Ω is determined; one of the vectors is selected as the value of the base vector ϕ 1 and is multiplied by the coefficient k 1 to satisfy ν = k 1 ϕ 1 0. The PCA method is simple and efficient; when the measurement errors are not large, it can calculate ν well. In WSN localization, the PCA method cannot obtain the appropriate ν as the errors and the number of targets increase. If the cosine theorem is used to transform the ranging data into angle data, the ranging measurement error will be converted into the angle measurement error after the calculation. Hence, similar problems will also exist. We consider the main reason for this problem to be that the distance circles formed by the node and the range measurement may not intersect at one point. Then, the sum of radian measures of the relative angles of each node at the same point will not equal 2π. Letα 1 ,α 2 , ...,α n be the estimated values of the angles, formed by the target and its adjacent nodes, and the corresponding angle measurements or calculated values be α 1 , α 2 , ..., α n . We define the angle least squares cost function as follows: Formula (42) is a linear optimization problem with equality constraints and can be transformed into an unconstrained optimization problem by introducing Lagrange multipliers. If the Lagrange multiplier is λ, then Formula (42) is equivalent to: arg min We calculate the partial derivatives of f α : Setting every derivative in (44) to zero, we obtain the following optimal point: Using this method, the relative angle between targets and adjacent nodes can be estimated. After that, the sum of relative angular radians of each target will be 2π. According to the properties of the relative angular matrix, ν 0 must exist. After that, F can be transformed into a globally convex function by Proposition 3 and solved by the Gauss-Newton iteration method.
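Under the constraint that the adjusted angles around a target sum to 2π, the Lagrangian formulation above has a simple closed-form stationary point: the closure error is spread evenly over the measured angles. A sketch of this adjustment, with made-up angle measurements, is given below.

```python
import numpy as np

def adjust_angles(alpha):
    """Spread the closure error (2*pi minus the sum of measured angles) evenly,
    which is the stationary point of the constrained angle least-squares problem."""
    alpha = np.asarray(alpha, dtype=float)
    return alpha + (2 * np.pi - alpha.sum()) / len(alpha)

# Noisy relative angles around a target surrounded by four neighbours
# (illustrative values): they should sum to 2*pi but do not.
alpha_meas = np.array([1.62, 1.49, 1.71, 1.55])
alpha_hat = adjust_angles(alpha_meas)
print("sum before: %.4f  sum after: %.4f" % (alpha_meas.sum(), alpha_hat.sum()))
```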
If the cosine law is used to calculate the angles, it may become inapplicable: because of the ranging errors, the three measured side lengths may not satisfy the conditions of the theorem, so the method needs to be further improved. In this paper, we therefore use a generalized cosine law. Let β denote an angle in a triangle; β is calculated as follows, where t is calculated from the trilateral relationship according to the cosine theorem. We call the respective methods the "angle-based null space algorithm" (A-NLS) and the "cosine law-based null space algorithm" (C-NLS), whereby the angles are obtained by angle measurement or by the cosine law calculation. The main steps of the algorithm are shown in Algorithm 1.
Simulations and Results
In the fifth part of this paper, we will validate the proposition and the proposed algorithm using a numerical simulation. The following simulation chooses two typical scenarios of non-cooperative and cooperative positioning for analysis and verification.
Simulation Scenario Setting
In the noncooperative scenario, we assumed that there were four anchors and one target. In the cooperative scenario, there were two targets and six anchors. The Cartesian coordinate system was established in the two-dimensional plane. The coordinates of each anchor are shown in Table 1. Three kinds of environments were simulated: the NLOS environment, the LOS environment, and the case of negative ranging errors. We can consider the latter case as a particular condition: although it is not common in actual positioning practice, we can regard it as the equivalent ranging error after using Proposition 3. The magnitudes of errors of each scenario in various situations are shown in Tables 2 and 3.
Table 1. Coordinates of the anchors.
Convexity Verification
First, the function F_s was evaluated globally to observe its convexity in the various environments and under various ranging errors. The function surface, the semipositive definite condition, and the estimates produced by the iterative algorithm were obtained; the results are shown in Figure 2. Figure 2c,f,g shows the condition of the Hessian matrix J at each point in the plane: the red regions are semipositive definite, the blue regions seminegative definite, and the green regions indefinite. Theorem 1 states that if J is semipositive definite, the function is convex. It is observed from Figure 2c that when the targets were in the NLOS environment, the semipositive definite area was discontinuous and the combined indefinite and seminegative definite area was larger. When the targets were in the LOS environment, the semipositive definite area was continuous and the combined indefinite and seminegative definite area was smaller. Proposition 2 states that if there is a positive ranging error, the cost function is nonconvex in the global range; therefore, in neither of these two environments can F_s be convex globally. The target position was solved for by the Gauss-Newton iteration method with initial values selected from different directions; the results are shown in Figure 2b,e,h. In the NLOS environment, because of the positive errors, the LS cost function F_s had more than one local optimum due to nonconvexity, so different initial values made the iterative algorithm converge to different minima. In the LOS environment the ranging errors were lower; although F_s was still nonconvex in the global range, the number of local optima did not increase, and the iterative algorithm converged to the same location. Hence, the original LS model remains applicable in the LOS environment.
The results obtained if the ranging error was negative and satisfied the condition for the cost function F s to be convex are shown in Figure 2g,i. It is observed that J was semipositive definite and F s was convex in the global range. Figure 2h shows the results of the Gauss-Newton iteration algorithm with various initial values. The circle formed by the dotted lines in the graph indicates that the ranging error was negative, and its size is the absolute value of the distance observation. According to the results, the algorithm can eventually iterate to the same location regardless of the initial value, which confirms the global convergence in this case.
In the cooperative scenario, various initial values were selected, and the target location was calculated by the Gauss-Newton iterative algorithm. The iterative images are shown in Figure 3a-c. In the NLOS positioning environment, because of the positive ranging errors, the cost function was not convex in the global range. Therefore, various initial values caused the iteration algorithm to converge to different minimums. In the LOS positioning environment, although F c was also nonconvex in the global range, the positive ranging error was lower; hence, the algorithm could still converge to the same local optimal point. If the ranging error was negative, the cost function was convex in the global range. Thus, regardless of the selected initial value, the algorithm would converge to the global optimal point.
Null Space Algorithm Performance
To compare the performance of the algorithms (LS, A-NLS, and C-NLS), the following simulations were performed in both the noncooperative and cooperative scenarios. We assumed that all the targets were located in the polygon composed of their adjacent nodes. The coordinates of each anchor are shown in Table 1. We performed simulations in both the LOS and NLOS environments. The ranging error parameters were set to μ_nlos = 5 m, σ²_los = 3 m², and σ²_nlos = 4.5 m², and in the AOA angle measurement, the parameters of the relative angle error matrix were assumed to be μ_α = 0.1 and σ²_α = 0.5. In the noncooperative scenario, the real location of the target was x1 = [2, 4]^T, while in the cooperative scenario, the real locations of the two targets were x1 = [4, 3]^T and x2 = [−6, −5]^T. For the LS, C-NLS, and A-NLS algorithms, an arbitrary position was selected as the initial iteration value in each run. To cover all directions of the initial value relative to the target, the distribution x0 ∼ N(μ0, Σ0) was used, where μ0 is the mean vector and Σ0 is the covariance matrix; their values were set to μ0 = [50, 50]^T and Σ0 = diag(100, 100). As a reference, we chose the SDP and PPM algorithms to compare their performance with that of the null space algorithm proposed in this paper. For the PPM algorithm, the initial iteration position of each target was set to the average of the coordinates of its adjacent nodes. For each scenario, 100 numerical simulations were performed. The estimated positions obtained by the algorithms were compared with the real positions of the targets, and the root mean squared errors (RMSE) were calculated according to Formula (47).
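Formula (47) is not reproduced in this extract; the sketch below assumes the usual RMSE over Monte Carlo runs and shows how the random initial values described above could be drawn. The plain Gauss-Newton solver is a stand-in for the paper's LS/A-NLS/C-NLS iterations, and the anchor layout is a placeholder rather than the Table 1 values.

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, Sigma0 = np.array([50.0, 50.0]), np.diag([100.0, 100.0])
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # placeholder anchors
x_true = np.array([2.0, 4.0])

def gauss_newton_ls(x0, anchors, d, iters=50):
    """Plain Gauss-Newton on the range LS cost (stand-in for the paper's solvers)."""
    x = x0.astype(float)
    for _ in range(iters):
        diff = x - anchors
        r = np.linalg.norm(diff, axis=1)
        J = diff / r[:, None]              # Jacobian of the residuals r_i - d_i
        res = r - d
        step, *_ = np.linalg.lstsq(J, res, rcond=None)
        x -= step
    return x

runs, sq_errors = 100, []
for _ in range(runs):
    # NLOS-like errors with mean 5 m and variance 4.5 m^2, as in the setup above.
    d = np.linalg.norm(anchors - x_true, axis=1) + rng.normal(5.0, np.sqrt(4.5), len(anchors))
    x0 = rng.multivariate_normal(mu0, Sigma0)  # random initial value x0 ~ N(mu0, Sigma0)
    x_hat = gauss_newton_ls(x0, anchors, d)
    sq_errors.append(np.sum((x_hat - x_true) ** 2))

rmse = np.sqrt(np.mean(sq_errors))  # assumed form of Formula (47)
print(f"RMSE over {runs} runs: {rmse:.2f} m")
```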
We calculated the convergence probability for the different algorithms. The simulation results are shown in Figure 4 and Tables 4 and 5. With a convergence threshold of 5 m, the graphs and tables show that in the LOS environment, each algorithm had a high convergence probability in the noncooperative scenario, while in the cooperative scenario, the convergence probability of the LS and PPM algorithms became lower. In the NLOS environment, the convergence probability of the LS and PPM algorithms decreased severely, especially in the cooperative scenario, where they were almost non-convergent. The convergence performance of the SDP algorithm in the cooperative scenario also decreased considerably. However, the C-NLS algorithm maintained a high convergence probability in all environments and scenarios. The performance of the A-NLS algorithm was similar to that of the C-NLS algorithm, but it decreased slightly in the NLOS environment of the cooperative scenario.
Table 5. Convergence probability in the cooperative scenario.
Afterwards, the corresponding cumulative probability distributions of errors were obtained. The results are shown in Figure 5. In particular, Figure 5a,c shows that in the LOS environment, the differences between the algorithms were not significant in either the noncooperative or the cooperative scenario. In the NLOS environment, as shown in Figure 5b,d, the differences between the algorithms were apparent. The LS and PPM algorithms showed large positioning errors in both the cooperative and noncooperative scenarios. The main reason is that the positive ranging errors increased, which directly led to a sharp decline in positioning performance.
The location precision of the SDP and null space algorithms (A-NLS, C-NLS) did not decrease in the NLOS environment, which reflects the stability of these algorithms in various situations; i.e., the algorithms can obtain better location estimates in both the LOS and NLOS environments. At the same time, the null space algorithm proposed in this paper was slightly better than the traditional SDP algorithm in both noncooperative and cooperative scenarios and achieved the desired goal.
Conclusions
In this paper, a necessary and sufficient condition for the global convexity of the LS cost function was specified for WSN positioning. Generally, when all the ranging errors were negative, the LS cost function was convex. Next, we defined the relative angle matrix in both the noncooperative and cooperative scenarios and proved two of its essential properties. We observed that if all the targets were located in the convex polygon formed by their adjacent nodes, the LS cost function could be transformed into a globally convex function by constructing measurements with a negative distance. Based on this analysis, we proposed a robust algorithm for WSN localization. The proposed method reduced the sensitivity of the Gauss-Newton iteration algorithm to initial value selection and made the function globally convex; in other words, the function had the characteristic of large-scale convergence. In the fifth part of the article, numerical simulations were performed to verify the proposition described in the third part, and the robust algorithm was compared with conventional methods. The results showed that the null space algorithm effectively constrained the error in the NLOS environment and obtained more accurate positioning results in both the LOS and NLOS environments.
In the future, we will carry out a study on the impact of varying the topologies and the number of anchors and targets. Furthermore, we will deal with the problem in which the targets are not located inside the convex polygon formed by their neighboring nodes.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Before proving the proposition, we first give the following lemma: Lemma A1. A finite sum of semipositive definite matrices is still semipositive definite.
Proof of Proposition 1. Consider decomposing J into the representation J = ∑_{i=1}^{N} Q_i. To find the eigenvalues of Q_i, we set its characteristic determinant to zero and simplify; the two roots λ can then be obtained. As Formula (A4) shows, matrix Q_i has two eigenvalues, and λ_1 > 0 always holds. If the other eigenvalue satisfies λ_2 = d_i − ρ_i ≥ 0, then Q_i ⪰ 0. From Lemma A1, we know that if d_i − ρ_i ≥ 0 holds for all 1 ≤ i ≤ N, i ∈ N+, then J ⪰ 0. The set of all estimates x̂_1 that satisfy this condition is C.
Appendix B
Proof of Property 1. We first show that this property holds in the noncooperative scenario with one target. Let the angles between two adjacent anchor nodes and the target be α_1, α_2, ..., α_N; the corresponding matrix can then be computed. An elementary row transformation is performed on Ω_s, yielding the simplified matrix. From this, we observe that the rank of Ω_s with one target satisfies r ≤ 2. Consider the case of more than one target in the noncooperative scenario. For each node, we can perform a similar elementary row transformation, and the number of nonzero rows is no more than two. Hence, the number of nonzero rows of Ω_s will not be greater than τ. In the cooperative scenario, the first to the Nth rows of Ω_c are similar to those of Ω_s. For the rows numbered from N + 1 to L, due to the existence of the TT link, the number of nonzero elements per row is four; if i = j, we have ω_ij = 2. The reason is that the targets are connected by the cooperative link. If each target has at least two known-location nodes connected to it, the elements in the respective row can be reduced to zero by a row transformation similar to that in the noncooperative scenario. In the simplified relative angle matrix Ω_c, each user node is independent of the other nodes, so it must hold that r ≤ τ.
Appendix C
Definition A1. An adjacent node of T_j is defined as the node that has a ranging link with T_j, whether it is an anchor or a target. Definition A2. A point p in a convex set K is said to be an extreme point if it cannot be written in the form p = tx + (1 − t)y, where x and y are distinct points of K and 0 < t < 1; informally, this means p is not between two other points of K.
Lemma A2. Every point in a bounded closed convex set must be a convex combination of its extreme points [40]. Lemma A3. Let K_1, K_2 be convex subsets of R^n and R^m, respectively. Then, K_1 × K_2 ⊂ R^n × R^m ≅ R^{m+n} is convex. Furthermore, a point (P_1, P_2) is an extreme point of K_1 × K_2 if and only if P_1 is an extreme point of K_1 and P_2 is an extreme point of K_2 [40].
Proof of Property 2. We first assume that there is only one node in the scenario, and the error vector is set to ε = [ε_1 ε_2 ... ε_N]^T, which satisfies Formula (32). As Figure A1a shows, if a coordinate system is established with T_1 as the origin, then Formula (32) can be rewritten accordingly. Take T_1 as the center of a circle, consider a unit circle, and suppose that its intersection with l_i occurs at point P_i; these intersections give the coordinates of the points P_1, P_2, ..., P_N in the Cartesian coordinate system. If P_1, P_2, ..., P_N form a convex polygon and P_1, P_2, ..., P_N are the extreme points of that polygon, the origin T_1 must be a point inside the convex polygon. Lemma A2 shows that there exists a convex combination such that the coordinates of T_1 are composed of convex combinations of P_1, P_2, ..., P_N. In other words, ∃ ξ_1, ξ_2, ..., ξ_N with ξ_i ∈ (0, 1) and ∑_{i=1}^{N} ξ_i = 1. Letting ε_i = ξ_i, we can prove that Property 2 holds in the single-target scenario. If more than one node exists, we suppose that the jth user node T_j has a total of r_jt links, including r_ja AT links and r_jc TT links. These links divide the plane into r_jt parts. Every two adjacent links form an angle with T_j as the vertex. Let the radian measures of the angles at T_j be γ_j1, γ_j2, ..., γ_j(r_ja−1), γ_jr_ja, ..., γ_jr_jt. The two adjacent links constituting these angles are numbered l_{s_j1}, l_{s_j2}, ..., l_{s_jr_jt}, l_{s_j1}, where s_jk denotes the indices of the links connected to T_j, which satisfy s_j(k+1) > s_jk.
The error vector is set to ε = [ε_1 ε_2 ... ε_L]^T, which, according to Formula (36), satisfies the corresponding relation. As shown in Figure A1b, consider any T_j, and consider a unit circle centered at T_j that intersects l_i at the points P_{s_j1}, P_{s_j2}, ..., P_{s_jr_jt}. Then, the jth and (j + 1)th lines in Equation (A10) can be written in terms of the coordinates of P_{s_j1}, P_{s_j2}, ..., P_{s_jr_jt} in the Cartesian coordinate system. The points P_{s_j1}, P_{s_j2}, ..., P_{s_jr_jt} form a convex polygon, and they are the extreme points of that polygon. Hence, all the points inside the polygon constitute a closed convex set H_j ⊆ R². If all T_i are located in the corresponding convex polygons, then the set H_j exists for ∀j, 1 ≤ j ≤ M, j ∈ N+.
Figure A1. The coordinate system is established with the user node as the origin, and the unit circle is drawn with the origin as its center; the points and angles defined by each link are as shown. Panel (a) denotes one target T_1 in the noncooperative scenario, and panel (b) denotes the cooperative scenario, where one of the user nodes is T_j.
As can be observed from Lemma A3, the Cartesian product H_t = H_1 × H_2 × ... × H_M ⊆ R^η is also a convex set. Combining the extreme points x̂_j, 1 ≤ j ≤ M, j ∈ N+, of the H_j forms a new extreme point x̂ = [x̂_1 x̂_2 ... x̂_M]^T ∈ R^{2M} of H_t. According to Lemma A2, the origin is a convex combination of extreme points. In other words, ∃ ξ_1, ξ_2, ..., ξ_L with ξ_i ∈ (0, 1) and ∑_{i=1}^{L} ξ_i = 1 satisfying Equation (37). If ε_i = ξ_i, then Property 2 also holds in the multitarget scenario. | 13,042.4 | 2019-06-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Inter-Floor Noise Monitoring System for Multi-Dwelling Houses Using Smartphones
The noise between the floors in apartment buildings is becoming a social problem, and the number of disputes related to it is increasing every year. However, laypersons find it difficult to use sound level meters because they are expensive, delicate, and bulky. Therefore, this study proposes a system to monitor the noise between the floors that measures the sound and estimates the location of the noise using the sensors and applications in smartphones. To evaluate how this system can be used effectively within an apartment building, a case study was performed to verify its validity. The result shows that the mean absolute error (MAE) between the actual noise generating position and the estimated noise source location was 2.8 m, with a minimum error of 1.2 m and a maximum error of 4.3 m. This means that smartphones can, in the future, be used as low-cost monitoring and evaluation devices to measure the noise between the floors in apartment buildings.
Background
Population concentration due to urbanization has led to housing shortages, and many cities opted for the construction of multi-dwelling houses, which can be supplied in large quantities at a relatively low cost, as a solution [1]. In multi-dwelling houses, however, the residents are easily exposed to the noises of neighbors, as the walls and slabs are shared with other households. The continuous exposure to external noises of the residents of multi-dwelling houses may cause physical and mental health problems, such as high blood pressure, annoyance, and sleep disorders [2][3][4]. As such, inter-floor noise has also caused discord amongst neighbors, including an elevated number of disputes, assaults, and even arson [5][6][7].
To address disputes related to inter-floor noise, it is essential to secure objective noise data. Sound level meters are generally used to obtain objective noise data. It is difficult, however, for non-experts to use sound level meters, because they are expensive, delicate, and bulky [8]. The recent technical development of smartphones has opened up a possibility where they can be used as substitutes for sound level meters [9][10][11].
Smartphones are powerful mini-computers with various sensors (e.g., microphones, accelerometers, gyroscopes, and GPS) and are owned by the majority of the population. They can be used as low-cost noise monitoring tools with available broadband internet access [12].
A number of studies have been conducted lately to examine the accuracy of smartphone noise measurement applications (apps). Murphy and King [11] tested the accuracy of several noise measurement apps on two platforms (Android and iOS) using 100 smartphones. The test results showed that one of the apps was very accurate in measuring the noise levels with errors less than ±1 dB from the actual sound levels in the reference value range. The conducted study indicated that noise measurement apps have a potential to be used as sound level meters in the future. Zamora et al. [13] proposed environmental noise-sensing units using smartphones. According to these experimental results, if the smartphone application is well tuned, it is possible to measure noise levels with an accuracy degree comparable to professional devices for the entire dynamic range typically supported by microphones embedded in smartphones. Garg et al. [8] proposed an averaging method for accurately calibrating the noise acquired through a smartphone microphone. This method achieves an accuracy of 0.7 dB.
Smartphones also provide an inexpensive and flexible infrastructure for the measurement of overall environmental noise (e.g., noise and air pollution) in cities. Various related studies have shown that smartphone apps are useful for environmental monitoring evaluation [14][15][16][17]. Although the aforementioned studies verified the accuracy of smartphone noise measurement apps and their potential as environmental monitoring tools, studies on the possibility of using smartphones to address the inter-floor noise problem are not sufficient.
The problem to be solved in relation to inter-floor noise is to identify the noise types and locations of those noise sources [18]. This is important, since some disputes have resulted from misunderstanding of the noise sources by listeners [18]. Most studies on inter-floor noises, however, are focused on noise measurement [3,19], noise reduction measures [20,21], and annoyance measurement [22,23]. If smartphones can identify objectively and reliably the noise source locations and noise types in real time, they can contribute to dispute mediation.
Motivation and Objective
Inter-floor noise is transmitted to neighboring households in multi-dwelling houses, and unpleasant sounds disturb other house residents. In South Korea, where most people live in multi-dwelling houses, 88% of the apartment residents are under stress due to inter-floor noise [24]. In South Korea, most apartments have been constructed in the wall column structure style since the 1980s, due to reasons of constructability, economic efficiency, and a reduction in the construction period. In apartments with the wall column structure, all four apartment sides are made of concrete, with a large vibration transfer coefficient. Thus, the airborne sound that is generated on the upper floor and the vibration that is generated at the bottom of the upper floor are easily transferred to the lower floor [25].
In particular, the wall column structure apartments built before 2005 in Korea generally used a concrete slab thickness ranging from 135 mm to 150 mm, but in recent years, with the emergence of frequent inter-floor noise problems, a new regulation was established to standardize the slab thickness to be at least 210 mm [3]. Despite the legal regulations on the slab thickness, the number of complaints related to inter-floor noise has increased from 8795 in 2012 to 28,231 in 2018 (Figure 1). This phenomenon appears to have occurred because there was no solution for noise mitigation for the existing apartments built before 2005, when the regulations on the slab thickness were enacted. The regulations can be applied only to the newly built apartments because improved construction methods, such as reinforced thicknesses of the walls and floor slabs and application of floating floors, have not been made available for the existing apartments. However, there has been an increase in the number of complaints related to inter-floor noise in new apartments built under new regulations. The study conducted by Park, Lee and Lee [3] verified that the slab thickness did not have any effect in lowering the indoor noise level.
The increase in the inter-floor noise complaints has led to conflicts and disputes among neighbors [26]. Emotional reactions to noise problems even led to a number of retaliatory crimes between neighbors, such as arson and murder [27]. As the conflicts caused by inter-floor noise expanded to a social problem, the South Korean government established a 'center for inter-floor noise mitigation between neighbors' in 2012, to oversee the disputes related to inter-floor noise. The center, however, has no legal rights and on-site investigation for objective noise measurement and shows some limitations in solving the inter-floor noise problem, due to a lack of manpower. The inter-floor noise problem is still unsolved, and thus, more effective measures are required to resolve the occurring disputes.
As noise is judged from a subjective perspective due to its environmental nature, conflicts due to a difference in opinions cannot be avoided. To resolve such conflicts, it is necessary to prove the fact that a noise level higher than the inter-floor noise criterion occurred, state its duration, and the degree of damage caused. Therefore, this study proposes an inter-floor noise monitoring system for measuring the inter-floor noise and estimating the noise time and location, by utilizing sensors and mobile applications of widely available smartphones. The proposed system enables recording various data related to inter-floor noise, and it is expected to be used as an important tool for resolving disputes related to inter-floor noise in the future.
Research Method
In this study, a system to monitor inter-floor noise using smartphones is proposed. To verify the validity of the system, apartment B, completed in 1996 and located in Gyeongsan City, Gyeongsangbuk-do, South Korea, was selected as a case study site. For inter-floor noise monitoring, an inter-floor noise monitoring application was developed using sensors built into smartphones. To this end, the functions of such sensors were identified and used to achieve the target functions for the inter-floor noise monitoring system. Table 1 shows the smartphone sensors and their functions that were used in this study to implement the developed application. The microphone was used to obtain the sound pressure level (SPL). The accelerometer and gyroscope were used to measure the vibration acceleration level (VAL) created by a heavy impact on part of a building. Moreover, GPS was used to locate the smartphone and to measure the timing of the occurring noise. Wi-Fi was used to transfer the obtained inter-floor noise information to a server.

The developed inter-floor noise monitoring application requires a certain level of sound as a baseline for determining inter-floor noise. In this study, the legal criteria existing for the case study site (i.e., for South Korea) were applied. Inter-floor noise is largely divided into floor impact noise (e.g., running and walking sounds), which is generated when energy is applied directly to the floor, and airborne sound (e.g., conversation and musical instrument sounds). Therefore, when a floor impact occurs, inter-floor noise must be determined by measuring the SPL of the lower floor and the vibration acceleration level generated by construction components (e.g., ceilings, walls, and windows). Table 2 shows the criteria for each type of inter-floor noise, as specified by the Ministry of Environment and the Ministry of Land, Infrastructure and Transport of South Korea. In the case of floor impact noises, inter-floor noise is determined when 'LAeq 1 min' exceeds 43 dB in the daytime and 38 dB at night, or when 'LAmax' exceeds 57 dB in the daytime and 52 dB at night. LAeq 1 min corresponds to the average value of noise measured for one minute, using a sound level meter. LAmax denotes the noise with the highest dB value among the noises generated during the measurement period. In the case of airborne sounds, inter-floor noise is determined when 'LAeq 5 min' exceeds 45 dB in the daytime and 40 dB at night. The length of airborne noise detection was extended to five minutes, to reflect the long-lasting characteristics of television noise or musical instrument sounds. Therefore, in this study, inter-floor noise was determined by applying the above-mentioned criteria to the smartphone application, as sketched below.

Figure 2 shows the configuration of the proposed monitoring system for the measurement of inter-floor noise levels and the estimation of noise source locations. In general, the system contains four steps. In the first step (the inter-floor noise sensing step), noise and vibration data are obtained from the place where data acquisition is required. Data is collected using the microphone, gyroscope, and accelerometer embedded in a smartphone. The decibel value and vibration velocity (i.e., noise data) are acquired every second, and the surrounding noise is recorded every minute.
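As a minimal illustration of how the Table 2 criteria could be applied inside the monitoring application (an assumed sketch, not the authors' code), the function below flags floor impact noise from the 1-min equivalent and maximum levels and airborne noise from the 5-min equivalent level, using the daytime/night thresholds quoted above.

```python
def is_inter_floor_noise(laeq_1min, lamax, laeq_5min, daytime=True):
    """Apply the South Korean inter-floor noise criteria described above.

    All inputs are A-weighted levels in dB; thresholds follow Table 2
    (43/38 dB for LAeq 1 min, 57/52 dB for LAmax, 45/40 dB for LAeq 5 min).
    """
    floor_impact = laeq_1min > (43 if daytime else 38) or lamax > (57 if daytime else 52)
    airborne = laeq_5min > (45 if daytime else 40)
    return {"floor_impact": floor_impact, "airborne": airborne}

# Example: a 44 dB one-minute average in the daytime counts as floor impact noise.
print(is_inter_floor_noise(laeq_1min=44, lamax=55, laeq_5min=39, daytime=True))
```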
The acquired noise and vibration data are then transferred to a web server through Wi-Fi wireless communication in the second step (the inter-floor noise data transfer step). In this case, the transferred data consist of the ID and location of the measuring device, the noise acquisition time, the decibel level (dB) values, and the vibration velocity (m/s²). The web server stores the transferred data in a database in real time.

Figure 3 shows a schema of the tables that are stored in the database. The database consists of a number of tables, such as NoiseHistory, DeviceList, and RecordList. Each table contains noise data, information on noise measuring devices, and recorded files. In the NoiseHistory table, the ID of the device that transferred the data, the acquisition time, the decibel values, and the vibration velocity are stored. When the decibel value is higher than the threshold, "1" is recorded in the noise field. In this instance, noise is determined using the criteria displayed in Table 2. Information on the ID and location of each device is stored in the DeviceList table. Information on the files recorded by each device is stored in the RecordList table. In the third step, the developed application estimates the location of the noise source, based on the records stored in the database. The application stores the noise data values in real time, converts them into decibel values, and determines the noise location using the estimation algorithm. In the final step, the acquired inter-floor noise information is visualized on the user's smartphone screen.
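The exact column definitions are not given in this extract, so the following is only a plausible sketch of the three tables described above (NoiseHistory, DeviceList, and RecordList) created with SQLite from Python; the column names and types are assumptions, not the authors' schema.

```python
import sqlite3

conn = sqlite3.connect("inter_floor_noise.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS DeviceList (
    device_id   TEXT PRIMARY KEY,
    location_x  REAL,           -- position of the smartphone on the floor plan
    location_y  REAL
);
CREATE TABLE IF NOT EXISTS NoiseHistory (
    device_id   TEXT REFERENCES DeviceList(device_id),
    acquired_at TEXT,           -- noise acquisition time
    decibel     REAL,           -- sound pressure level (dB)
    vibration   REAL,           -- vibration velocity (m/s^2)
    noise       INTEGER         -- 1 if the decibel value exceeded the threshold
);
CREATE TABLE IF NOT EXISTS RecordList (
    device_id   TEXT REFERENCES DeviceList(device_id),
    recorded_at TEXT,
    file_path   TEXT            -- one-minute recording of the surrounding noise
);
""")
conn.commit()
```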
System Design
Figure 4 shows the application execution screen. The information that can be found in the application includes the timing of occurring noise, the noise measurements at that time, the estimated noise location and the noise type. The location at which the noise occurred is displayed on the floor plan of the measurement site and is located at the bottom of the application. The noise type (e.g., floor impact or airborne noise) can be determined using the recorded vibration values. It is determined as floor impact noise if there is vibration information when the noise occurred, or as airborne noise if there is no vibration information available.
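A minimal sketch of the noise-type rule just described, under the assumption that any vibration reading accompanying the sound marks a floor impact; the threshold is a placeholder, since the text only states that the presence of vibration information distinguishes the two types.

```python
def classify_noise_type(vibration, vibration_threshold=0.0):
    """Classify an inter-floor noise event from the accompanying vibration reading."""
    if vibration is not None and vibration > vibration_threshold:
        return "floor impact noise"   # vibration recorded at the time of the noise
    return "airborne noise"           # sound without vibration information

print(classify_noise_type(vibration=0.4))   # floor impact noise
print(classify_noise_type(vibration=None))  # airborne noise
```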
Noise Source Location Estimation Method Used in This Study
Previous studies on sound source location estimation have been conducted using specialized equipment, such as microphone arrays. Those studies were also arranged for limited experimental environments [28,29]. The proposed system, however, uses only smartphones, thereby providing a method for many people to easily estimate noise source locations. In this study, an attempt was made to estimate noise source locations using differences in the sound intensity. For this method, hardware configuration and operation are very simple, even though it is difficult to calculate the exact distance to the sound source. The purpose of this study is not in finding the exact location of noise, but rather in estimating the approximate noise source occurrence area.
Due to the nature of sound, a lower decibel value is measured as the distance increases. Based on this phenomenon, a method of estimating noise sources using the proportions of the decibel values measured through four smartphones is described. As shown in Figure 5a, it is assumed that noise measurement devices (T = {T 1 , T 2 , T 3 , T 4 }) are placed in the form of a grid in two-dimensional coordinates. Each device has a decibel value (dB) and coordinate information (x, y). In this study, among the noise measuring devices (T), three devices (S 1 , S 2 , S 3 ) are arbitrarily selected according to the decibel level to locate the noise source. As shown by Equation (1), among the devices (T), the device with the largest decibel value (dB) is designated as S 1 .
For example, when a noise or vibration takes place, assuming that the highest decibel value was observed in T 1 among the devices (T), the T 1 device is set as S 1 . Subsequently, as shown by Equation (2), the device (T) located on the horizontal line of S 1 is selected as S 2 .
Here, S 2 is a device which has the same y-coordinate value as, but a different x-coordinate value to, S 1 . Lastly, as expressed by Equation (3), the device having the largest decibel value among the devices other than the devices designated as S 1 and S 2 is selected as S 3 .
When it is assumed that T 1 ·db = 80, T 2 ·db = 40, and T 3 ·db = 60, the placement of S 1 , S 2 , and S 3 can be expressed as shown in Figure 5b. In this case, the approximate values of X and Y that serve as the estimated location coordinates of the noise source are obtained using Equations (4) and (5).
Width means the distance between S 1 and S 2 , and height is calculated as the distance between S 1 and S 3 . Figure 5 shows the estimated noise source locations using Equations (4) and (5).
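Equations (1)-(5) are not reproduced in this extract, so the sketch below only follows the prose description: S1 is the loudest device, S2 the device on S1's horizontal line, and S3 the loudest of the remaining devices; the X and Y coordinates are then interpolated along the width (S1-S2) and height (S1-S3) directions in proportion to the decibel values. The proportional split used here is an assumption, not necessarily the paper's exact Equations (4) and (5).

```python
# Each device is (name, x, y, decibel); the grid layout and values are illustrative.
devices = [("T1", 0.0, 0.0, 80.0), ("T2", 4.0, 0.0, 40.0),
           ("T3", 0.0, 3.0, 60.0), ("T4", 4.0, 3.0, 35.0)]

def estimate_noise_source(devices):
    s1 = max(devices, key=lambda d: d[3])                        # Eq. (1): loudest device
    s2 = max((d for d in devices if d[2] == s1[2] and d[1] != s1[1]),
             key=lambda d: d[3])                                 # Eq. (2): same row as S1
    s3 = max((d for d in devices if d not in (s1, s2)),
             key=lambda d: d[3])                                 # Eq. (3): loudest of the rest
    # Assumed proportional split of the S1-S2 (width) and S1-S3 (height) distances.
    x = s1[1] + (s2[1] - s1[1]) * s2[3] / (s1[3] + s2[3])
    y = s1[2] + (s3[2] - s1[2]) * s3[3] / (s1[3] + s3[3])
    return x, y

print(estimate_noise_source(devices))  # approximate source area near the loudest device
```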
Experiment Overview
In this study, inter-floor noise data were acquired using four smartphones to estimate the noise source locations, and one smartphone was used to display the inter-floor noise data in real time for the user. Thus, a total of five smartphones were used in the experiment. Table 3 shows the software components used in the experiment. In this study, the JSP programming language was used with Apache Tomcat (a web application server, WAS) on a Windows 10 Pro operating system for system development. Moreover, the database was managed by linking Apache Tomcat with MySQL. Android 5.0 was used as the operating system of the smartphones. Table 4 shows the hardware components used in the experiment. As the noise source locations were estimated using the differences in the sound intensity acquired from four measuring devices, only one smartphone model was used to ensure the same conditions. The hardware was easily obtained, and devices with the sensors required for system implementation were selected.
Experimental Environment and Method
To evaluate the performance and applicability of the proposed system to measure inter-floor noise and track the noise source locations, the experiment was performed in an apartment that serves as a representative for the residential type of multi-dwelling houses. Table 5 shows the overview of the experiment site. The floor of the experiment site consisted of a reinforced concrete slab (180 mm), insulating materials (20 mm), lightweight concrete (40 mm), cement mortar (40 mm), and floor finishing materials ( Figure 6). To collect noise and vibration data, smartphones were installed on the ceiling of each room (Figure 7). The exact installation locations can be found on the floor plan ( Figure 4). The smartphone located at the bottom left corner was then designated as the origin, and the scales were marked at 24.2 cm intervals in the horizontal direction and at 23 cm intervals in the vertical direction.
As for the noise generation type, real impact sources (e.g., human footsteps and dropped objects) were used rather than standard impact sources (i.e., impact balls), to create an environment similar to real inter-floor noise in the experiment. At certain points over the ceiling, random noises were generated for over 20 s at a time (i.e., impacts of >70 dB, human voices, musical instrument sounds).
The experiment was repeated 100 times, whilst the noise occurrence locations were randomly changed, and the actual noise occurrence locations were then compared to the estimated locations displayed in the application.
Experimental Evaluation Method and Results
To evaluate the performance of the system, the errors between the actual noise occurrence locations and the estimated noise source locations were obtained using the mean absolute error (MAE). MAE was calculated using Equation (6).
where rPoint i is the epicenter of the i-th actual noise and ePoint i is the estimated location of the i-th noise. Figure 8 shows the distance function used to obtain the absolute error between the actual noise epicenter and the estimated location. Table 6 shows the experiment results. The calculated mean absolute error (MAE) was 2.8 m, while the minimum and maximum errors were 1.2 and 4.3 m, respectively.
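Equation (6) itself is not reproduced here; the sketch below assumes the usual form of the MAE, the mean Euclidean distance between each actual epicenter rPoint_i and its estimate ePoint_i, with made-up coordinates for illustration (the study reports an MAE of 2.8 m over 100 trials).

```python
import math

def mean_absolute_error(actual, estimated):
    """Mean Euclidean distance between paired actual and estimated points (metres)."""
    distances = [math.dist(a, e) for a, e in zip(actual, estimated)]
    return sum(distances) / len(distances)

# Illustrative coordinates only; not the measured data from the case study.
actual = [(1.0, 2.0), (3.5, 1.0), (2.0, 4.0)]
estimated = [(2.2, 3.8), (5.0, 2.5), (4.5, 5.5)]
print(round(mean_absolute_error(actual, estimated), 2))
```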
Exact noise source locations could not be identified with the calculated values, but they were sufficient to distinguish among noise occurrence areas (Room 1, Room 2, Room 3, or Room 4) of the study site. Therefore, the proposed system performed the following four target functions using the smartphone sensors and the developed application: (1) it displayed the degree of inter-floor noise (dB) and recorded its values in the application by using the smartphone microphone devices; (2) it detected vibration using accelerometers and gyroscopes and classified the types of inter-floor noise (e.g., floor impact noise, airborne noise); (3) it estimated the noise source locations using the differences in the sound intensity and visualized the locations on the apartment floor plan; and (4) it provided reports of inter-floor noise on an hourly, daily, and monthly basis. Such reports are generated based on the information stored in the database, so that the recorded data can be accessed if a dispute occurs. | 6,746.6 | 2020-06-22T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Arf GTPase interplay with Rho GTPases in regulation of the actin cytoskeleton
ABSTRACT The Arf and Rho subfamilies of small GTPases are nucleotide-dependent molecular switches that act as master regulators of vesicular trafficking and the actin cytoskeleton organization. Small GTPases control cell processes with high fidelity by acting through distinct repertoires of binding partners called effectors. While we understand a great deal about how these GTPases act individually, relatively little is known about how they cooperate, especially in the control of effectors. This review highlights how Arf GTPases collaborate with Rac1 to regulate actin cytoskeleton dynamics at the membrane via recruiting and activating the Wave Regulatory Complex (WRC), a Rho effector that underpins lamellipodia formation and macropinocytosis. This provides insight into Arf regulation of the actin cytoskeleton, while putting the spotlight on small GTPase cooperation with emerging evidence of its importance in fundamental cell biology and interactions with pathogenic bacteria.
Introduction
The actin cytoskeleton comprises a scaffold of polymeric actin filaments that are assembled and disassembled to organize cell architecture and direct many cell processes. One of the key mediators of actin polymerisation is the ubiquitous actin-related protein 2/3 (Arp2/3) complex, which itself requires activation by nucleation-promoting factors (NPFs). Neural Wiskott-Aldrich syndrome protein (N-WASP) and WASP family verprolin homolog (WAVE) are the best characterized of these proteins, and their regulation is of considerable importance. 1 It is understood that N-WASP exists in an auto-inhibited conformation, with its Arp2/3-activating verprolin homology-cofilin-acidic domain (VCA) shielded by the GTPase binding domain (GBD). Binding of the Rho family GTPase Cdc42 to the GBD releases the VCA domain, enabling it to bind and activate the Arp2/3 complex 2 as indicated in Fig. 1A. Conversely, despite knowing for over 2 decades that the Rho GTPase Rac1 can trigger WAVE-mediated Arp2/3 activation, the precise molecular mechanism of regulation remains elusive. WAVE is part of the heteropentameric WRC, comprising WAVE, Cyfip, Nap1, Abi1 and HSPC300 or their homologues. 3 Rac1 has been shown to directly interact with Cyfip, 4,5 however its affinity for the protein is very low (≈10 µM), suggesting that additional factors likely participate in WRC activation. Recent research has identified such factors that may contribute to WRC regulation. 6 Activation of immunopurified WRC in vitro required an electrostatic interaction between the polybasic domain of WAVE and acidic phospholipids such as phosphatidylinositol (3,4,5) triphosphate (PIP3), in addition to Rac1 binding 7,8 as demonstrated in Fig. 1B. Proteins containing SH3 domains such as IRSp53, Toca1 and WRP interact with proline-rich regions of Abi2 and WAVE, and facilitate membrane recruitment and activation of the WRC. 9 Also, many transmembrane receptors such as GPCRs, neuroligins and protocadherins 5 have been reported to contain a conserved motif termed the WRC-interacting receptor sequence (WIRS) that facilitates the recruitment of WRC to the plasma membrane. 10 WIRS motifs have been demonstrated to directly interact with a composite surface on Sra and Abi in the WRC, which is unique in that it can only interact with the fully formed complex. Furthermore, phosphorylation of WAVE by proteins such as the Abl, Src and Cdk5 kinases is believed to be a key element of WRC regulation and may destabilise interactions between the VCA and Sra, promoting activation. 5,11 Recapitulating WRC activation in vitro has uncovered important aspects of its regulation. Recent efforts modeling WRC activation at phospholipid bilayers showed that Rac1 is required, but not sufficient, for WRC activation in cell-free extracts. The work found an unexpected requirement for ADP-ribosylation factor (Arf) GTPases, 12 further implicating involvement of these proteins in cytoskeletal regulation, while opening up the intriguing possibility of 2 GTPases working together to directly modulate a Rho effector.
Arf driven regulation of actin cytoskeleton dynamics
Arf GTPases are best known for their role in membrane trafficking and vesicle sorting 13 and, like other GTPases, Arfs act as molecular switches by shuttling between their active GTP-bound and inactive GDP-bound conformations. Hydrolysis of bound GTP is stimulated by GTPase-activating proteins (GAPs), whereas the exchange of GDP for GTP is mediated by guanine nucleotide exchange factors (GEFs). The 6 mammalian Arfs are grouped into 3 classes on the basis of sequence homology: class I (Arfs 1-3), class II (Arfs 4-5) and class III (Arf6). While the class I and II Arfs are primarily localized in and around the Golgi apparatus, 14,15 Arf6 is found predominantly at the plasma membrane and on a subset of endosomes. 16 The involvement of Arfs in actin dynamics has primarily been attributed to their ability to activate lipid-modifying enzymes, which alter the membrane microenvironment. Arfs are capable of directly modulating local phosphoinositide synthesis, which has an impact on various actin regulatory proteins. Arfs have also been implicated in indirect activation of Rho GTPases. For example, active Arf6 recruits the bi-partite Rac GEF Dock180-ELMO, 17 likely due to local PI(4,5)P2 generation on the plasma membrane at the leading edge of a cell, stimulating Rac activation.
Arf6 may also modulate Rac activity by controlling the availability of lipid raft components, 18 due to its role in endosomal recycling, which has been shown to be instrumental in the attachment and spreading of anchorage-dependent cells. Furthermore, inhibiting the activity of Arfs has been shown to directly impact Rac-dependent membrane ruffling, 19 phagocytosis 20 and breast cancer cell migration. 21 Arf proteins have also been shown to down-regulate the activity of Rho GTPases. At Golgi membranes, Arf1 down-regulates Cdc42 activity by recruiting ARHGAP21, 22,23 a Cdc42 GAP. Another interesting aspect, which further reinforces the notion that Arf GTPases coordinate regulation of the actin cytoskeleton, is the interaction between Arf GAPs and Rho GEFs. The best-known example of this unique mode of actin remodeling is the interaction between the Arf GAP GIT1 and β-Pix, a Rac GEF. GIT1 forms a complex with β-Pix and inhibits the activity of Rac1 at the leading edge of cells. 24,25

Arf regulation of the WAVE Regulatory Complex

Despite the plethora of research implicating Arf GTPases in cytoskeletal remodelling, there has been little evidence for their direct interaction with actin regulators such as NPFs. Arf1 has, though, been implicated in the recruitment of both Rac1 and the WRC component CYFIP to the trans-Golgi network 26 (TGN). Here it aids in the generation of AP1-clathrin coats, also promoting membrane tubulation as a result of N-WASP-driven, Arp2/3 complex-dependent actin polymerisation. The precise role of Arf in regulating full WRC activation at the plasma membrane, though, has not been outlined.
Reconstitution studies in cell-free extracts showed that both PI(4,5)P2 and PI(3,4,5)P3 recruited the WRC and Rac, yet remarkably the WRC was only activated on PI(3,4,5)P3 12 . The difference was found to be the activation status of Arf GTPases, which although present on both lipids, were only GTP-bound (active) on PI(3,4,5)P3. In vitro binding studies with purified components showed that Rac1 and Arf1 were individually able to bind weakly to recombinant WRC and poorly activate it, but when both GTPases were anchored at the membrane, recruitment and concomitant activation of WRC were dramatically enhanced. This cooperativity between the 2 GTPases was sufficient to polymerize actin filaments in a WRC-dependent manner that propelled phospholipid-coated beads through cell-free extracts.
The recruitment and activation of WRC at the membrane is not restricted to Arf1, as the related Arf5, and Arl1, a distant member of the Arf GTPase family, could also achieve similar activity. These key findings suggest that the Arf GTPase family have overlapping or partially redundant functions. Arf6, which is predominantly found to be associated with the plasma membrane, was also found to regulate actin assembly via the WRC. 27 Unlike other Arf family members, Arf6-mediated actin polymerization was not achieved by direct interaction with the WRC but instead through recruitment of the Arf GEF ARNO, which acts at the plasma membrane where it recruits and activates Arf1 to collaborate with Rac1. This highlights the spatiotemporal coordination between 2 distinct classes of small GTPase that underlies actin polymerization at the plasma membrane, as described in Fig. 2. This biochemical work has been reinforced with evidence demonstrating that Arf plays an important role in WRC regulation in the cell.
Identifying phenotypic changes in actin dynamics in mammalian cells is problematic due to the redundancy of many actin regulatory proteins, which is especially true for the Arf GTPases. Fortunately, Drosophila melanogaster has only one member in each of the Arf classes. Drosophila S2R+ cells form characteristic lamellipodia, with the cells appearing uniformly round when adherent. Depletion of any individual component of the WRC, or of Rac1, has been demonstrated to abolish lamellipodia formation. Depletion of the Arf1 homolog Arf79F 28 in S2R+ cells also abolished lamellipodia formation, with cells appearing spiky, characteristic of loss of a WRC-dependent activity. Interestingly, the expression of human Arf1 resulted in restoration of lamellipodia in Arf79F-depleted Drosophila cells. However, the expression of active Rac1 in these cells failed to restore lamellipodia formation, further signifying the direct importance of Arf. Consistent with this, active Arf79F was critical for Sra1 localization and concomitant generation of lamellipodia both in cells and in vitro. 28 Furthermore, a recent study demonstrated that Arf6 potentiated the formation of Rac1- and Wave-dependent ventral F-actin rosettes in breast cancer cells upon epidermal growth factor (EGF) stimulation. 29 In addition, the authors could demonstrate that interference with ARF6 expression resulted in poor activation and plasma membrane localization of Rac1 in response to EGF treatment. The study highlights a potential role for ARF6 in linking EGF-receptor signaling to Rac1 recruitment and activation at the plasma membrane to promote directed migration of breast cancer cells.
Salmonella manipulates Arf GTPases to activate the WAVE Regulatory Complex
Bacterial pathogens manipulate the cytoskeleton 30 to establish infections and have long been used to better understand how actin dynamics are regulated, including those governing the WRC. Salmonella enterica (hereafter Salmonella) is a Gram-negative facultative intracellular pathogen that infects and colonizes vertebrate hosts with outcomes ranging from sub-clinical infections to life-threatening systemic disease. Upon contact with a host cell, Salmonella translocates a cohort of virulence effector proteins into the host cell via its Type III Secretion System. 31 Some of these effector proteins enter the cytosol where they are able to remodel the actin cytoskeleton, resulting in host membrane ruffling that drives Salmonella entry via macropinocytosis. 32 It is known that Salmonella requires the WRC to generate membrane ruffles, 32 which is mediated by targeting small GTPase signaling pathways. The virulence effectors SopE and SopE2 mimic host cell GEFs 33,34 by triggering the activation of Rac1 and Cdc42, and of Cdc42 alone, respectively. Salmonella utilizes SopE to recruit the WRC in a Rac1-dependent manner. 35 However, as already indicated, Rac1 alone is not sufficient to activate the WRC, 12 and an activated Arf would be needed to drive WRC-dependent actin assembly. Salmonella does not encode any known Arf GEF; therefore, to activate Arf and subsequently the WRC, the pathogen targets the network of host Arf GEFs. 36 Salmonella targets and recruits the host GEF ARNO 35 (also known as cytohesin 2) to pathogen entry foci to activate Arf1, which cooperates with SopE-activated Rac1 to drive WRC-dependent actin assembly. 27 ARNO is maintained in the cytosol in an auto-inhibited conformation, but is recruited and activated at the plasma membrane via Arf6 and acidic phospholipids such as PI(3,4,5)P3. 37 Interestingly, there are 2 splice variants of ARNO that differentially interact with phospholipids: the presence of 3 glycine residues within the PH domain (3G) results in recruitment to PI(4,5)P2, whereas a double-glycine version (2G) is preferentially recruited to PI(3,4,5)P3. 38 It is highly probable that these different variants have distinct biological functions, with only the 2G variant having been shown to promote the production of Rac1-dependent ventral actin structures in Beas-2b and HeLa cells upon phorbol myristate acetate (PMA) stimulation. 39 With Arf and PI(3,4,5)P3 already identified as being important for WRC-driven actin assembly, this result is not surprising.

Figure 2. Collaboration between Arf and Rho GTPases to potentiate actin assembly via the WRC. The Wave regulatory complex (WRC) exists in an inactive state, i.e., the VCA domain of WAVE is not free to bind to the Arp2/3 complex to induce actin polymerization. Upon external stimuli, such as effector protein delivery by Salmonella or EGF stimulation, Arf6 recruits and activates ARNO, which in turn stimulates the exchange of GDP (white circle) bound to Arf1 for GTP (blue circle). Activated Arf1 consequently anchors via its exposed myristoylation moiety (black lines) to the plasma membrane. The Arf1 binding partner remains unclear, but nevertheless membrane-anchored active Arf1 and Rac1 work in cooperation to recruit and activate the WRC (i.e., release the VCA domain), which induces Arp2/3-dependent polymerization of actin filaments (pink).
The recruitment of ARNO to the membrane by Arf6 triggers WRC-dependent actin polymerization and Salmonella uptake via Arf1. 27 ARNO recruitment to invasion sites is also aided by the host Arf6 GEFs EFA6 and BRAG2, as well as by PI(3,4,5)P3 production via the Salmonella effector SopB. Surprisingly, efficient Salmonella entry also requires host Arf GAPs, the inactivators of Arf signaling. 40 This suggests that cycles of GTPase activation and inactivation facilitate the actin polymerization required for pathogen uptake. Salmonella thus exploits a remarkable interplay between both host- and bacteria-derived GEFs and GAPs to subvert the cytoskeleton and force entry into non-phagocytic cells.
Escherichia coli interfere with Arf signaling to block WAVE Regulatory Complex activation
Enteropathogenic and enterohemorrhagic E. coli (EPEC and EHEC) are major global threats to human health that cause acute gastroenteritis and bloody diarrhea, respectively. 41 Unlike Salmonella, EPEC and EHEC are extracellular pathogens. They use their T3SS to secrete numerous virulence effector proteins targeting the actin cytoskeleton to form cell-surface pseudopodia called actin pedestals, where they establish infection. 42 Actin pedestals enable both EPEC and EHEC to colonise the surface of intestinal epithelial cells, resulting in distinctive 'lesions' characterized by the destruction of the brush border microvilli characteristic of these cells. As a result, the pathogen is able to escape into the basolateral region, where the bacteria encounter macrophages. Both EPEC and EHEC use multiple mechanisms to avoid being engulfed by the infiltrating professional phagocytes. 43 Macrophages facilitate uptake of foreign bodies through a process of actin-driven phagocytosis. 44 Phagocytosis is driven in part via WRC-dependent actin assembly, a process that requires cooperating Arf and Rac1 GTPases. 45 To evade this process, EPEC likely interferes with one or more of these components. The effector protein EspG, known to interact with Arf1, was thus an intriguing candidate to investigate. EspG is conserved across EPEC, EHEC and Citrobacter, and was originally described as a homolog of VirA in Shigella. 46 EspG has multiple known functions and acts as a molecular scaffold by simultaneously binding p21-activated kinases (PAK) and GTP-bound Arf GTPases. 47 EspG is also known to act as a Rab GAP and interferes with Golgi signaling. 47,48 EspG was found to incapacitate WRC activation via a dual mechanism. 45 Firstly, EspG binding to Arf1 impedes cooperation with Rac1, thereby inhibiting WRC recruitment and activation. Further investigation of the mechanism by which EspG incapacitates the WRC identified key residues in the Arf1 α1 helix and switch-1 domain, which might be critical for WRC activation. In addition, EspG's interaction with Arf6 sterically hinders its interaction with ARNO, 45 thereby preventing Arf1 activation and consequent WRC-mediated phagocytosis. Another EPEC/EHEC-injected effector protein, EspH, has previously been shown to inhibit actin-driven phagocytosis by disrupting the actin cytoskeleton. EspH inactivates host Rho GTPases 49 such as Rac1 by directly binding to the Rho GEFs needed for their activation.
Thus, manipulation of the WRC underpins diverse virulence strategies where invasive intracellular pathogens activate the WRC while extracellular pathogens inhibit the WRC.
Conclusion
It is well established that Arf GTPases are involved in vesicle trafficking, and they have long been implicated in regulating the actin cytoskeleton. However, until now there has been scant evidence for direct regulation of NPFs. Arf GTPases, in particular Arf1, coordinate with Rac to activate the WRC and facilitate lamellipodia formation. The ability of pathogens to target Arf GTPases in order to manipulate the host actin cytoskeleton and establish infection further strengthens the significance and prime importance of these small GTPases in regulating critical processes at the plasma membrane.
Future perspectives
Despite the work linking Arf GTPases to WRC regulation and underlining their importance, the precise means by which this regulation is achieved remains uncertain. A possible interaction between Arf1 and the WRC component Nap1 has been reported. 12 Even so, to date there is no conclusive evidence of a direct interaction between Arf and any component of the WRC, and an identified binding site remains elusive. It is possible that the regulation of the WRC by Arf GTPases does not depend on any direct interaction whatsoever. Indeed, manipulation of the local environment or, more interestingly, of other key players such as Rac1 may be the most important function of Arf here. Cooperation between GTPases is attracting increasing interest, but understanding how it is achieved is challenging. Whether Arf directly modulates Rac to potentiate its affinity for the WRC, or physically blocks or recruits other proteins involved in activation of the complex, is something that needs to be investigated.
Pathogens, as discussed here, are great tools to enhance our understanding of basic cell biology. They continue to prove invaluable assets in the biologist's quest to better comprehend not only Arf-Rac cooperativity, but also the potential interaction and cooperation of other small GTPases with the actin cytoskeleton. Multiple pathogens have evolved to manipulate host cells intricately and efficiently, and, as with Salmonella and the WRC, they likely hijack numerous as yet unidentified fundamental pathways.
Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed. | 4,127.2 | 2017-11-03T00:00:00.000 | [
"Biology"
] |
Contribution to the Optimization of the Energy Consumption in SDN Networks
With the advent of new technologies such as IoT (Internet of Things) and Big Data, the increase in users and in their communications has led to a significant increase in the energy consumption of network equipment. A new networking technology, SDN (Software Defined Network), has emerged. It aims to make network management easier. SDN decouples the control plane, which is the brain of the network, from the data plane, its muscles. It allows the programmability of network devices as well as the redirection of flows. One or more centralized controllers use algorithms to act remotely on network devices. Because of the way it operates, this technology offers opportunities to improve network performance and to optimize energy consumption. In this paper, we use this technology (SDN) to suspend links or routers when they are not used, while taking into account the congestion that degrades quality of service in the network. We formulate this problem as a linear integer program and propose algorithms to solve it in the normal period and in the peak period. We used the OMNET++ simulator to evaluate our algorithms. Our approach showed that 87.5% of ports and 33.33% of links could be shut down to save energy.
Introduction
Communication networks are progressively evolving in terms of size and performance. There are two types of network equipment: active devices such as routers, switches, etc. (information transmission devices) and passive devices such as cables, fiber optics, etc. (interconnection equipment). Of the two types of equipment, only the active devices are energy intensive. The evolution of the networks leads to an increase of these devices in quantity and in performance, and this increase in number and performance raises the electrical energy they need for their operation.
A study conducted in 2009 [1] shows that information and telecommunications technologies (ICTs) alone account for 2% to 10% of global energy consumption. Hubs, switches and routers consume 6 TWh/year in the USA [2]. The search for a mathematical model of energy consumption in communication networks has therefore become a real concern for today's companies. In traditional networks, when a packet arrives on a port of a switch or router, the device applies the routing or switching rules registered in its operating system. Generally, all packets that have the same destination follow the same path. In high-end models, the hardware is able to recognize the type of application and apply specific rules to it. But this programming is rigid: it can only be changed manually by the administrator, which obviously takes time [3].
The advent of Software Defined Network (SDN) technology appears to be a good alternative for acting remotely and dynamically on equipment in order to model energy consumption.
In SDN technology, a centralized controller is responsible for routing packets in the network via the SDN protocol (OpenFlow), programmatically injecting routing rules provided by the application layer (Figure 1).
In this paper, we use this paradigm to act on network devices by enabling or disabling router ports when they are not in use. A new strategy has been developed which takes into account peak periods (dense traffic) and normal periods (low traffic), minimizing energy while avoiding congestion. The authors of [4], in their approach, minimized energy but at the cost of high delays leading to packet losses. Our model is based on that of these authors [4]. We have implemented an energy minimization model based on a new strategy. Our work is organized around the following points: Section 2 covers previous work. Section 3 describes the mathematical model of our approach. We present the resolution approach in Section 4. In Section 5, we evaluate the performance of the model. Section 6 concludes the article.
Previous Work
In this section, we will present some previous work that has addressed the issue of energy consumption in networks.We also formulate some basic assumptions that our future mathematical model must respect.
Mathematical Models to Reduce Energy Consumption Optimally
The consumption of energy has been treated by several researchers [4] [5] [6] [7] [8]. In [4], the authors minimized the energy consumption of conventional wired networks using on/off technology, under three QoS constraints (delay, packet loss and jitter). A saving of 40% was obtained, but each shutdown caused a delay variation that led to packet losses under heavy traffic. The authors of [5] [8] minimized the energy in SDN networks using a "compression" approach for the routing table. According to the authors, the devices that can implement SDN rules use TCAM (Ternary Content-Addressable Memory). This memory, in which the rules are recorded, is expensive and power hungry, so the routing table has to be compressed. Compression maximizes the available rule space, increasing the number of paths. Because the consumption of a high-capacity link and that of an unsolicited link differ little, some flows can be redirected to other links.
As a solution to the delay, the authors defined a default rule to forward packets to a default port without contacting the SDN controller. An energy saving was observed, but with degradation of the quality of service in terms of congestion in the network. As for the works of [6] and [7], the authors showed that it is the number of router ports in the network that drives consumption.
Modeling the consumption would therefore consist of shutting down router interfaces when they are not in use.
We orient our work in this direction, seeking a mathematical model of energy minimization in SDN networks that satisfies QoS in terms of transmission delay and packet loss.
Basic Assumptions
Each port has the same rate of energy consumption.
Each port can handle multiple services so it can redirect traffic.
Choose, in the graph, the path or paths that offer the greatest energy saving.
Our future mathematical model will be obtained through a set of processes involving theories such as graphs, trees, etc.
Modeling the Problem
This section will cover the description of our mathematical model.
We formulate our problem as a linear integer program contributing to the optimization of energy consumption in SDN networks. Let N be a network and n(t) a sub-network of N. We define a dynamic approach based on two situations from graph theory: 1) the activation and deactivation of the ports of the routers for a given time t, noting the change in energy consumption.
2) We will assume that a router cannot be turned off and therefore must remain awake.
Once switched on, at each given instant t, a link is established between a router i and a router j. The question is therefore to maximize the number of ports to be deactivated in order to obtain energy savings under QoS constraints. In our approach, we enable or disable router ports using a smart strategy.
Let λ(t) denote the number of ports to disable and λmax the total number of ports on a router. The parameter λmax is fixed and depends on the type of router. If the total number of ports on a router is 8, how many should be disabled, and which ones, to save the maximum amount of energy?
Let us consider the function f: the consumption of energy in the sub-networks.
We express the model of our approach in mathematical form as the objective function of the problem, that is to say, equations (1) and (2).
Which amounts to the following. A link is the junction between two router interfaces i, j ∈ N, with a maximum load capacity of C(i, j) packets per second. A link can be in state 0 or 1; let k be the link state variable, with state 0 when the router is off and state 1 when the router is on.
The objective function to be minimized combines f(i, λ(t), t), the energy consumption of router i whose λ(t) ports were deactivated at time t, and f(k, i, j, t), the energy of the link (i, j) at time t.
f(i, λ(t), t) is the function to be made explicit. Let g be the power gain function of router i. The function g is linear and increasing with respect to the number of ports deactivated at time t, and g(i, λ(t)) denotes the energy gain of router i. This gives the final expression of our router "energy consumption" function: the consumption of router i with λ(t) ports deactivated is its total consumption C(i) minus the gain g(i, λ(t)). If all ports are disabled, only the chassis consumption remains. As shown above, the function is linear in λ(t), with C(i) = chassis consumption (C_ch) + consumption of the ports that are on (C_p). From this, the explicit objective function and its constraints follow. In our approach, when a router is turned on, it establishes a link with its neighbors, and so on across the network.
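As a minimal numerical sketch of the consumption model just described (the chassis and per-port power figures and the variable names below are illustrative assumptions, not values taken from the paper), the linear relation between deactivated ports and energy gain can be expressed as follows:

```python
# Minimal sketch of the router energy model: consumption = chassis power plus
# the power of the ports that remain on; the gain g is linear in the number of
# deactivated ports. C_CHASSIS and C_PORT are made-up values for illustration.
C_CHASSIS = 50.0   # C_ch: fixed chassis consumption (assumed watts)
C_PORT = 5.0       # C_p: consumption of one active port (assumed watts)

def router_consumption(total_ports, deactivated):
    """f(i, lambda, t): power drawn by a router with `deactivated` ports off."""
    return C_CHASSIS + C_PORT * (total_ports - deactivated)

def energy_gain(deactivated):
    """g(i, lambda): linear gain obtained by switching `deactivated` ports off."""
    return C_PORT * deactivated

def network_consumption(routers):
    """Sum of the objective over the sub-network; `routers` maps a router id
    to a (total_ports, deactivated_ports) pair."""
    return sum(router_consumption(n, d) for n, d in routers.values())

# Example: 7 routers with 8 ports each, a single port left on per router.
routers = {i: (8, 7) for i in range(7)}
print(network_consumption(routers))   # 7 * (50 + 5) = 385.0
```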
Our network has 7 routers with 8 ports each; only one port is turned on, and the remaining (n − 1) ports, together with their links, are off. Cycles are to be avoided in the choice of favorable paths (those having the lowest weight).
The Modified SPRING Protocol (MSP) is responsible for shutting down the ports and links not solicited by the request (see the algorithms below).
Figure 3 (the minimum-weight spanning tree) is obtained with Kruskal's algorithm. The ports of the adjacent routers, as well as the links, are stopped using a modification of the SPRING (Segment Routing) algorithm (see the algorithms) [9]. Note that SPRING is designed for SDN.
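For reference, the spanning-tree step can be reproduced with a textbook Kruskal implementation; the sketch below uses a toy 7-router topology with made-up link weights, not the topology or weights of the evaluated network:

```python
def kruskal(n, edges):
    """Return a minimum-weight spanning tree of an n-node graph.
    `edges` is a list of (weight, u, v) tuples; links left out of the tree
    are the candidates whose ports the modified SPRING procedure shuts down."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge does not create a cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# Toy 7-router topology with made-up weights.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (1, 3, 4), (2, 4, 5),
         (4, 5, 6), (5, 0, 6), (2, 2, 5)]
mst = kruskal(7, edges)
unused = [(u, v) for w, u, v in edges if (u, v, w) not in mst]
print(mst)      # links kept awake
print(unused)   # links whose ports can be switched off
```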
The resolution approach having been found, we will evaluate the performance of our model.
Evaluation and Performance
In Table 1 below, we present our results.
Table 1 .
Evaluation of our approach. | 2,325.4 | 2018-09-29T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Hot Machining of Hardened Steels with Coated Carbide Inserts
Problem statement: The benefits of easier manufacture of hardened steel components can be substantial in terms of reduced machining costs and lead times compared to the traditional route involving machining of the annealed state followed by heat treatment, grinding/EDM and manual finishing. But the machinability of hard material through conventional machining is hindered by excessive wear of the cutting tools and by difficulty in achieving the desired quality of the machined surface. In end milling the cutting tool is not in constant operation and so undergoes a heat cycle during the intermittent cutting. This alternate heating and cooling of the inserts leads to thermal cracks and subsequently to failure of the tool. Approach: This study was conducted to investigate the effect of preheating through an inductive heating mechanism in end milling (vertical milling center) of AISI D2 hardened steel (56-62 HRC) using coated carbide tool inserts. Apart from preheating, two other machining parameters, cutting speed and feed, were varied while the depth of cut was kept constant. Results: The tool wear phenomenon and the machined surface finish were found to be significantly affected by the preheating temperature and the other two variables. A preheating temperature of 335°C coupled with a cutting speed of 40 m min−1, depth of cut of 1.0 mm and feed of 0.02 mm/tooth resulted in a noticeable reduction in the tool wear rate, leading to a maximum tool life of 188.55 min. In addition, a cutting speed of 56.57 m min−1 together with a feed of 0.044 mm/tooth and depth of cut of 1.0 mm, at which the maximum VMR (9500 mm3) was secured, provides a better surface finish with a minimum surface roughness of 0.25 μm, leaving a possibility of skipping the grinding and polishing operations for certain applications. Conclusion/Recommendation: Through the end milling of preheated AISI D2 hardened steel using a TiAlN coated carbide cutting tool it can be concluded that an overall enhanced machinability is achievable by preventing catastrophic damage of the cutting tool at higher levels of feed and cutting speed.
INTRODUCTION
Hardened steel is one of the difficult-to-cut materials. During the last few years numerous studies have been conducted to improve the machinability of this kind of materials and to explore and develop new techniques to minimize machining costs while maintaining the quality requirements of the machined parts. The benefits of direct manufacture of components from hardened steel are expected to be substantial in terms of reduced machining costs and lead times compared to the traditional route of machining in the annealed state followed by heat treatment, grinding or Electrical Discharge Machining (EDM) and manual finishing [1] . Recent advances in machine tool technologies coupled with improved cutting tool inserts have opened up new opportunities for investigation in machining of hard materials especially for their bulk removal. Hot machining process which includes preheating of work-piece is gaining interest as it results in reduced shear strength creating a condition conducive to metal cutting [2] .
The technology of preheating or hot machining is not new and heat sources such as flame, electrical resistance, induction and plasma arcs were used [3] . Difficult-to-cut materials such as stainless steel, S-816 alloy, X-alloy, Inconel-X, Timken 16-25-6 and Navy Grade V, nickel chromium steel and alloy steels have been hot machined by Tour and Fletcher [4] , Armstrong et al. [5] , Krabacher and Merchant [6] , Schmidt and Roubik [7] and Barrow [8] . Through analyses of their works, an important phenomenon is revealed-tool life increases to a maximum value for an optimum temperature range followed by a diminishing effect. Another important observation is the reduction in strain-hardenability and flow stress of material with increase in preheating temperature. In recent times, hot machining for cutting hard materials has been adopted by several researchers. Dumitrescu et al. [3] applied High-Power Diode Laser (HPDL) in turning of AISI D2 tool steel. HPDL was found to inhibit saw tooth chip formation, suppress chatter, deter catastrophic tool fracture and bring about substantial reduction in tool wear and cutting forces leaving minimal effect on the integrity of the machined surface. It is, therefore, less likely to experience very adverse effects on the machined surface due to preheating.
Maity and Swain [2] adopted plasma assisted heating in turning of high manganese steel using carbide tool and concluded that the effect of increased workpiece temperature would have a very significant effect on tool life. Ozler et al. [9] integrated plasma gas heating in turning of austenitic manganese steel and noticed that tool life would increase with increase in heating temperatures. He concluded that the decrease in the strength of the workpiece is induced by the influence of heat most of which is transferred to the chip-tool interface.
Preheating of workpiece by induction heating has been recently reported to enhance the machinability of other materials. Amin et al. [10] carried out preheated induction heating in end milling of AISI D2 hardened steel using PCBN inserts and observed that machining of preheated material led to surface roughness values well below 0.4 µm, with which grinding and polishing operations could be avoided for certain applications. Preheated machining has been found to reduce the amplitude of the lower frequency mode of chatter by almost 4.5 times at the cutting speed of 50 m min −1 . It was also established by several other earlier studies [11,12] that preheating had great potential in lowering chatter.
It is apparent that preheating enhances the ductility of the material for easier chip formation and flow over the rake surface of the tool. This easier formation and flow of chip is expected to improve the tool life and surface finish of the machined components. Earlier study conducted by Amin et al. [10] was restricted with a lower range of preheating temperature (100-150°C) to avoid a situation where preheating might lead to softening of the hardened work-piece. The current study was initiated to investigate the scope of preheating the work material to a higher level of temperature closer to re-crystallization temperature. Thus for AISI D2 hardened steel work material preheating was performed to a temperature range from 250-450°C by using induction heating approach prior to end milling operation.
MATERIALS AND METHOD
The machining operation was carried out on a Vertical Machining Center (VMC) using a 40 mm diameter tool holder fitted with Sandvik 1030 PVD coated carbide inserts. End milling operation was performed under dry cutting condition with a 5 mm constant radial depth of cut. Experimental set-up for hot machining of AISI D2 hardened steel is shown in Fig. 1. One edge out of the four cutting edges of a tool insert was used for each set of experimental conditions. Thus machining was initiated with a new sharp edge of an insert and continued for a 100 mm pass of cut followed by checking of the flank wear. This procedure was continued until the flank wear of the tool reached a magnitude of 0.30 mm. Olympus tool maker microscope was used to measure the flank wear with a magnification of (20 x). The 0.30 mm flank wear criterion was adopted in accordance with the ISO standard (ISO standard 8688-2, 1989 for tool life testing of end milling).
Selection of machining conditions:
The cutting conditions were selected primarily by considering the recommendations made by the cutting tool manufacturer (Sandvik tools) and the knowledge of practice gathered through contemporary literature on hard machining. Three main parameters, cutting speed, feed and preheating temperature, were changed while the axial depth of cut, d, was kept constant at 1 mm. The ranges of parameters used for experimentation were: feed, f: 0.02-0.044 mm tooth−1; cutting speed: 40-80 m min−1; and preheating temperature: 250-450°C. The recrystallization behavior of the work material [13], as shown in Fig. 2, was taken into consideration to limit the maximum level of preheating temperature. Experimental conditions were set by choosing discrete values lying within the above-mentioned ranges of the three selected parameters. Table 1 shows the 20 sets of experimental conditions corresponding to which the machining operations were conducted. Data on tool life and surface roughness values of the machined surface are also included.
Process of preheating: An induction heating device with a capacity of 25 kVA was used for preheating the work-piece. As shown in Fig. 1, the induction heating coil was mounted just ahead of and in close proximity to the cutting tool (temperature measurement accuracy of ±1%). The work-piece preheating temperature was calibrated by measuring it for a particular current value and feed rate of the machine table as used during actual machining. Thus, to obtain a desired preheating temperature of the work-piece surface during the machining operation, a particular rated current value was set for a specific feed rate of the VMC system.
Work and cutting tool materials:
The work material as received from supplier was in the form of a block hardened by oil quenching and tempered to a hardness range of 56-62 HRC having 300×250×100 mm in dimension. Hardness of work material was verified and found to comply with the supplier's specifications as showed in Table 2.
As mentioned earlier the material used for machining operation was AISI D2, the microstructure (1000× magnification) of which is shown in Fig. 3.
The end milling tool holder was a Sandvik Coromill 390 Endmill: R390-020B20-11L, employing indexable inserts with the code Sandvik 1030 Coromill 290 R290-12T308E-PL. The TiAlN coated carbide inserts, having four cutting edges, were used as received from the supplier. Figure 4 shows a schematic diagram indicating the geometry of the tool insert (Sandvik 1030) as coated through the PVD method by the manufacturer, with the relevant dimensions in Table 3.
However, under the same cutting speed but with a lower feed (f = 0.044 mm tooth−1) the situation improved and the tool wear reached the limiting value of 0.30 mm after about 35 min of machining (Fig. 6). Preheating of the work-piece was found to further increase the tool life by reducing the tool wear rate quite significantly.
Progression of tool wear as a function of machining time with different preheating temperatures is shown in Fig. 6. Enhanced tool life was obtained with the increase in preheating temperature. The longest tool life was achieved at 450°C preheating temperature. As shown in Fig. 2, AISI D2 hardened steel re-crystallizes at a temperature ranging from 850-1050°F which is equivalent to 455-565°C. This is why the maximum preheating temperature applied in this experiment was set at 450°C which is lower than the recrystallization temperature. A preheating temperature higher than 450°C could pose an undesirable effect on the work material especially in the context of hardness.
Tool life was estimated from the plot in Fig. 6 for different cutting speeds (40, 56.57 and 80 m min−1) and two temperatures (room temperature, 30°C, and a preheating temperature of 335°C). It is clearly evident that tool life is enhanced with preheating for the same cutting speed. But the influence of preheating temperature on tool life is found to be high at a cutting speed of 56.57 m min−1, and it becomes less prominent at the highest cutting speed (80 m min−1).
As shown in Fig. 8, the higher the feed, the lower the tool life. These results may be explained in terms of the higher stress encountered by the tool due to the higher feed. However, the metal removed per tool life would be a more appropriate criterion for assessing machinability. Figure 9 shows the Volume of Metal Removed (VMR) per tool life for different cutting conditions. In this case the feed was kept constant at 0.044 mm tooth−1, the preheating temperature was 335°C and the cutting speed was varied over three levels. It is apparent that, compared to room temperature, preheating led to a higher VMR per tool life irrespective of cutting speed. However, at the lower cutting speed the VMR increase due to preheating is marginal, while at the medium speed (56.57 m min−1) it is maximum, with a decline at the higher cutting speed. Surface roughness results are presented in Fig. 10: irrespective of whether machining was performed at room temperature or with preheating, for a constant cutting speed the surface roughness value increased with increasing feed, but with increasing cutting speed there is no such trend.
DISCUSSION
Tool wear following machining at room temperature was very intense, with severe abrasive and notch wear of the cutting edge, as evident in Fig. 11a. These phenomena can be considered the result of the carbide constituents (shown in Fig. 3) that are responsible for the enhanced abrasive wear resistance of D2 tool steel.
Abrasive wear is likely to be a significant wear process with coated carbides due to the high hardness of tungsten carbide. According to Becze et al. [14], the carbide phase thus hampers the machinability of hardened D2 both by increasing the flow stress of the material and by inflicting severe abrasive wear on the tool. Figure 11b, for machining with 250°C preheating, shows a broadly similar trend: the abrasive wear was not as severe as in room-temperature machining, but there is a higher degree of notch wear. This may be due to the temperature being insufficient to induce appreciable softening of the work material.
Preheating of work material at 335 and 450°C led to occurrence of uniform average wear on the cutting edges as shown in Fig. 11c and d. However, preheated machining with 450°C presents a smooth type of wear with features characterizing the diffusion wear process which is temperature dependent.
Diffusion wear is a mechanism whereby a constituent of the workpiece material diffuses into, or forms a solid solution with, the tool or chip material. Hence, an EDAX analysis, shown in Fig. 12, was performed to investigate the diffusion of workpiece constituents into the cutting tool. The analysis shows the significant presence of iron (60.88% Fe), carbon (25.3% C) and chromium (4.9% Cr) on the tool surface, as shown in Fig. 12b.
As shown in Fig. 10 above, at the lower levels of feed (0.02 and 0.044 mm tooth−1) the surface finish improved, with lower roughness values, as the cutting speed was increased. But at the higher feed (f = 1.0 mm tooth−1) the surface finish generally deteriorated, with higher roughness values, as the cutting speed was increased. It is observed from the plot that with preheating of the work material the surface roughness values are close to or below 0.3 µm at any combination of cutting speed and feed. In the case of the cutting speed of 56.57 m min−1, at which the maximum VMR was secured, an even better surface finish can be maintained, with a lower range of roughness values. Thus, with preheating it would be possible to skip the grinding and even the polishing operation in preparing dies and molds for certain applications.
CONCLUSION
Through the end milling of preheated AISI D2 hardened steel using a TiAlN coated carbide cutting tool it can be concluded that an overall enhanced machinability is achievable by preventing catastrophic damage of the cutting tool at higher levels of feed and cutting speed. To be specific, the following conclusions can be drawn from the conducted experiments: • Preheating of the AISI D2 work material enhances the tool life by slowing down the tool wear rate and preventing catastrophic tool failure • Higher cutting speed was found to diminish the positive effect of preheating. A range of 40-60 m min−1 for the cutting speed is expected to be suitable with a preheating temperature of 335°C. The cutting speed of 56.57 m min−1, at which the maximum VMR was secured, provides a better surface finish with roughness values lower than 0.3 µm • Thus, with preheating it would be possible to skip the grinding and even the polishing operation in preparing dies and molds for certain applications • A linear regression equation for tool life has been established for a range of preheating temperatures (30-450°C). This equation would be useful to predict the tool life for a particular preheating temperature lying within this range (a sketch of such a fit is given below) • Since the incorporation of a preheating mechanism obviously incurs costs, a detailed study is necessary to check whether the costs are offset by the benefits obtained through the process
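The last bullet refers to a linear regression equation for tool life over the 30-450°C preheating range; the equation's coefficients are not reproduced above, but a fit of that form could be recomputed from the measured (temperature, tool life) pairs with a sketch such as the following. The function arguments stand in for the experimental data of Table 1; no actual values are embedded here.

```python
def linear_fit(temps, tool_life):
    """Ordinary least-squares fit tool_life ~ a + b * temperature.
    `temps` and `tool_life` must be the measured pairs from the experiments;
    no experimental values are assumed in this sketch."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_l = sum(tool_life) / n
    sxx = sum((t - mean_t) ** 2 for t in temps)
    sxy = sum((t - mean_t) * (l - mean_l) for t, l in zip(temps, tool_life))
    b = sxy / sxx
    return mean_l - b * mean_t, b      # intercept a, slope b

def predict_tool_life(a, b, temperature):
    """Predicted tool life (min) for a preheating temperature within 30-450 C."""
    return a + b * temperature
```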
ACKNOWLEDGEMENT
This study was conducted under the purview of the e-science project (03-01-08-SF0003) entitled "Enhancement of Machinability of Hardened Steel in End Milling Using Advanced Cutting Methods and Tools", funded by the Ministry of Science, Technology and Innovation of Malaysia (MOSTI). The authors gratefully acknowledge the financial support. | 3,796.8 | 2009-06-30T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
FPGA Implementation of Enhanced Montgomery Modular for Fast Multiplication
This paper proposes an enhanced Montgomery algorithm and an efficient implementation of modular multiplication. The cryptographic process is used to provide high information security when data is transferred from transmitter to receiver, using methods such as RSA, ECC and the Digital Signature Algorithm. The proposed Montgomery algorithm, applied to the RSA algorithm of cryptography, is implemented with two different inputs, both of which are 8-bit inputs. The coding has been done in the Verilog language and the results are simulated in the Vivado software. For physical testing, we have used a Nexys 4 DDR FPGA board, produced by Digilent, which carries an Artix-7 FPGA chip. The proposed method shows good results in terms of the number of slice flip-flops, LUTs, number of IOBs and power consumption, and compares favorably with previous methods across these result parameters.
Introduction
A key issue for researchers in recent years has been data encryption. Technology is evolving at a rapid pace, necessitating the use of cutting-edge encryption techniques. Modular multiplication for advanced encryption and cryptography is proposed in this research article using the Montgomery method. Its advantage lies in adding bit-shifted multiples of the modulus to clear out the least significant bits before shifting them out. In regular modular multiplication, the modulus must be repeatedly subtracted from the intermediate result until the result becomes smaller than the modulus. In Montgomery multiplication no such subtractions are needed, since the low-order bits are simply shifted out as the multiplicand is processed. In this study, a unified and dual-radix architecture is used to achieve Montgomery multiplication.
Figure 1 shows an example of the multiplier block layout. Montgomery multiplication is the building block for the modular exponentiation operations of the Diffie-Hellman and RSA public-key cryptosystems. The most compelling reason to investigate fast and inexpensive modular multipliers for long integers is the search for faster and cheaper modular exponentiation processors. Elliptic curve cryptography over the finite field GF(p) has recently been implemented using Montgomery multiplications. Additionally, discrete exponentiation over GF(2^k) and elliptic curve cryptography over GF(2^k) became possible due to the introduction of Montgomery multiplication in GF(2^k), which was first reported by. By developing an extensible Montgomery multiplication architecture, we can look at different parts of the design space and see how different trade-offs affect performance on a small chip. Our design must be scalable, and we explain the broader theoretical issues with Montgomery multiplication afterward.
Fig. 1 Shows the Multiplier block architecture
This is followed by a discussion of the parallel evaluation of a word-based algorithm. This method is used to derive and present the modular multiplier's architecture. Furthermore, simulations are conducted to determine area/time trade-offs, and an evaluation of the multiplier's performance is presented for various operand precisions. Despite more than three decades of research, public-key cryptography (PKC) is still considered computationally demanding, especially when used on embedded processors.
Since operations like exponentiation and scalar multiplication use operands with sizes in the hundreds or thousands of bits, this is most likely the case. RSA and ECC are two types of public-key algorithms that use multi-precision modular arithmetic for their security. This is especially true for modular multiplication on embedded CPUs. For modular multiplication, cryptographers have devised a number of effective reduction algorithms that can be implemented in the most efficient manner. An essential modular reduction technique is the Montgomery algorithm, which was first presented in 1985 [11] and has since been widely used in real-world applications. Further examples of reduction algorithms include Barrett [2] and Quisquater [13,14]
Montgomery module multiplication
The following are the definitions of unified and dual-radix. All the hardware and software needed to work with operands in both prime and binary extension fields is referred to as a unified architecture. The multiplier for GF(p) in [4] was modified in [3] to illustrate that a unified multiplier is possible with relatively modest alterations. If the unified multiplier uses a bigger radix value for GF(2^n) than the radix for GF(p), it is referred to as a dual-radix multiplier. The term "architecture" is often employed for the Montgomery multiplier's hardware. A radix-2^n multiplier processes n bits of the multiplicand every clock cycle. Radix (2,4) multipliers are multipliers that operate in radix 2 for GF(p) and radix 4 for GF(2^n).
The radix (2,4) multiplier design is used in this work. There are important time-area concerns in the design of a dual-radix multiplier: adding an extra radix should have minimal impact on signal propagation time while keeping the silicon area to a minimum. An architecture that multiplies in GF(2^n) with a different radix than in GF(p) is known as a dual-radix architecture. The Montgomery modular multiplication itself is discussed in detail in the preceding section.
In Section II, we go over the different types of Montgomery modular multiplication. Section III focuses on past research presented by other researchers. The proposed strategy is explained in Section IV. In Section V, the proposed method's simulations and outcomes are examined. Section VI presents the conclusion. In this section, we discuss the review of literature on the Montgomery modular multiplier and the different VLSI implementations. Pajuelo-Holguera et al. (2021) created a Montgomery Modular Multiplier using an FPGA device and the HLS programming approach. A parallel multiplier and a parallel adder were built in order to implement the MMM in this manner. The authors tested this parallel hardware proposal against a sequential hardware version, a software version, and fifteen other studies in the literature. Speedups of 8 and 18.5 were obtained over the hardware sequential version and the software implementation, respectively. In addition, the concept outperforms the competition in terms of turnaround time and effectiveness [1]. Gu, Z., et al. (2020) described a new approach to modular multiplication based on Karatsuba-like multiplications. NIST primes, as well as generic moduli based on Montgomery modular multiplication, benefit from this strategy, which reduces the number of steps required to perform integer multiplications from three to one [3]. Parihar et al. (2019) presented the following findings.
Tests show that the newly proposed multiplier requires fewer clock cycles than earlier versions. A complete MM can be executed by the multiplier under consideration in the least possible time. Because of the multiplier's high speed and small number of clock cycles, the system achieves a very high throughput rate. In order to include additional hardware for format conversion, the proposed multiplier requires a larger footprint; nonetheless, its area is comparable to that of other multipliers. Compared with the existing MM CCSA, the proposed multiplier achieves a 44.8 percent reduction in clock cycles and a 50.2 percent reduction in the time it takes to complete an MM.
The study by Verma et al. presented early word-based radix-2 and radix-4 architectures for implementing RSA on FPGAs. In early word-based systems, the most significant bits may be computed using just the most fundamental operations. Because the DSP48Es add 48 bits and run at high frequency, the word size was set at 48 bits for this project. The cycle time of a word-based Montgomery design is mostly dictated by the addition operation. The improvement has been made possible by the use of DSP48Es for addition and an early word-based technique for determining bits on FPGAs [9].
Rabet et al. (2017) presented an algorithm for large prime-characteristic finite fields (Fp). They show the results of their design after placement and routing on Xilinx Artix-7 and Virtex-5 Field Programmable Gate Arrays. Their systolic implementations can be applied to any design that requires modular multiplication and uses cryptographic algorithms such as RSA, ECC or pairing-based cryptography. The architectures and designs were adapted to the features of the Field-Programmable Gate Arrays. A satisfactory latency-area performance was achieved with the NW-8 design: this architecture can run all bit lengths associated with traditional security levels (128, 256, 512 or 1024 bits) in 33 clock cycles or less.
Although it takes 66 clock cycles to complete the same amount of work as the NW-8, the NW-16 offers significant improvements in terms of area compared to that design. Their systolic design, which employs the CIOS method, can accommodate a variety of word counts. Kuang et al. (2016) presented a radix-4 scalable architecture for Montgomery modular multiplications; their experiments showed that the design uses significantly less power and significantly less hardware area than previous work.
The proposed radix-4 multiplier can be used for a wide range of applications [15], provided the requirements on hardware area and power consumption are met. Kuang et al. (2015) noted that FCS-based multipliers keep the input and output operands of the Montgomery MM in carry-save form to avoid format conversion, resulting in fewer clock cycles but a larger area than the SCS-based multiplier, and proposed a low-cost, high-performance Montgomery multiplier based on a modified version of the SCS-based Montgomery multiplication algorithm. Through the use of k-partitions, the multiplier operand is divided into smaller pieces that can be processed in parallel and independently, reducing the overall computational complexity. Another method for modular exponentiation, the Square and Multiply method, was implemented and compared to an ordinary Montgomery multiplier and to the k-partition method for input bit lengths of 128, 256, 512 and 1024 bits; in comparison to the other two methods, the Square and Multiply method uses significantly less power [21]. Kuang et al. (2012) presented high-speed Montgomery modular multipliers that use redundant carry-save formats for all inputs and outputs of the modular multiplication, allowing faster decryption and encryption at the cost of more registers and higher energy consumption.
Proposed Methodology
In this part of the article, we go over the suggested procedure. In reality, many complicated cryptographic algorithms are built on top of relatively straightforward modular arithmetic. Modular arithmetic works with integers, on which addition, subtraction, multiplication and division can be performed. The only significant difference from elementary arithmetic is that all operations are performed with respect to a positive integer, known as the modulus. The proposed approach is based on modular arithmetic computation, specifically Montgomery modular multiplication, more commonly known as Montgomery multiplication.
It is a method for performing modular multiplication quickly. The Montgomery modular multiplication technique employs a specialised representation of numbers known as the Montgomery form. Algorithms compute the Montgomery form of ab mod N from the Montgomery forms of a and b, which is more efficient. The conventional method of modular multiplication reduces the double-width product ab by dividing it by N and keeping only the remainder.
This division necessitates estimating and correcting the quotient digits. In Montgomery multiplication, if R > N is coprime to N, the only division required is by R, and the Montgomery form depends only on the choice of R. Selecting the value of the constant R such that division by R is simple can significantly speed up the computation.
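As an illustrative software sketch of the Montgomery form and the reduction by R described above (the modulus, operand values and bit width below are arbitrary examples, not the datapath parameters of the proposed design; R is chosen as a power of two so that division by R is a shift):

```python
def montgomery_setup(N, k):
    """Choose R = 2**k > N with N odd (so gcd(R, N) = 1) and precompute
    N' = -N^(-1) mod R, which the reduction step needs."""
    R = 1 << k
    n_inv = pow(N, -1, R)          # modular inverse (Python 3.8+)
    return R, (-n_inv) % R

def redc(T, N, R, n_prime, k):
    """Montgomery reduction: returns T * R^(-1) mod N without dividing by N."""
    m = ((T & (R - 1)) * n_prime) & (R - 1)   # m = (T mod R) * N' mod R
    t = (T + m * N) >> k                      # exact division by R, just a shift
    return t - N if t >= N else t

def mont_mul(X, Y, N, R, n_prime, k):
    """Product of two Montgomery-form operands, itself in Montgomery form."""
    return redc(X * Y, N, R, n_prime, k)

# Example: compute x*y mod N through the Montgomery form.
N, k = 239, 8                        # odd modulus, R = 2^8 = 256 > N
R, n_prime = montgomery_setup(N, k)
x, y = 170, 85
X, Y = (x * R) % N, (y * R) % N      # convert into Montgomery form
Z = mont_mul(X, Y, N, R, n_prime, k)
print(redc(Z, N, R, n_prime, k) == (x * y) % N)   # convert back: True
```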
The above discusses the basics of Montgomery multiplication. We now discuss the proposed fast Montgomery multiplication method, which is based on a counter approach, as shown in the algorithm below.
Montgomery Modular Multiplication Algorithm
Let N be a k-bit odd number, and let R be an additional factor defined as 2^k mod N, where 2^(k−1) < N < 2^k. The N-residues of two integers x and y, where x, y < N, with respect to R can be written as
X = x × R (mod N), Y = y × R (mod N). (1)
Based on (1), the Montgomery modular product Z of X and Y can be obtained as
Z = X × Y × R^(−1) (mod N), (2)
where R^(−1) is the inverse of R modulo N, i.e., R × R^(−1) = 1 (mod N). Algorithm 1, designated Algorithm MM, illustrates the Montgomery modular product of X and Y using the radix-2 version of the Montgomery modular multiplication algorithm. Observe that the notation X_i in Algorithm 1 indicates the i-th bit of X in binary form, and a segment of X from the i-th bit to the j-th bit is denoted by X_{i:j}. The intermediate result S of Algorithm MM has the convergence range 0 ≤ S ≤ 2N/2 + 2N/4 + ... + 2N/2^(k−1) < 2N. Algorithm 1 (Algorithm MM52) is the 5-to-2 CSA Montgomery multiplication.
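A behavioural, bit-serial rendering of Algorithm MM is sketched below; it processes one multiplier bit per iteration, mirroring what a radix-2 datapath would do per clock cycle, but it is not the 5-to-2 CSA implementation itself, and the 8-bit example values are assumptions chosen for illustration:

```python
def montgomery_radix2(X, Y, N, k):
    """Radix-2 Montgomery multiplication: returns X * Y * 2^(-k) mod N.
    One multiplier bit X_i is consumed per iteration, as the hardware would
    do per clock cycle. Requires N odd and X, Y < N < 2^k."""
    S = 0
    for i in range(k):
        S += ((X >> i) & 1) * Y      # add Y when bit X_i is set
        if S & 1:                    # q_i = S mod 2
            S += N                   # make S even so the halving is exact
        S >>= 1                      # divide by 2 (right shift)
    return S - N if S >= N else S    # final conditional subtraction

# 8-bit example, matching the two 8-bit inputs of the implemented design.
N, k = 239, 8
x, y = 170, 85
R = (1 << k) % N                     # R = 2^k mod N
X, Y = (x * R) % N, (y * R) % N      # Montgomery forms of x and y
Z = montgomery_radix2(X, Y, N, k)    # Montgomery form of x*y
print(montgomery_radix2(Z, 1, N, k) == (x * y) % N)   # convert back: True
```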
Flow Chart
The next section describes how the proposed Montgomery multiplication design was tested and presents the results. The Montgomery multiplication algorithm [3], [6], [10], [11] can be used to perform modular multiplication efficiently. Two numbers are multiplied by this procedure modulo P; to avoid division by the modulus P, a series of additions is made to obtain the final product. Let the multiplicand, multiplier and modulus be represented by the integers a, B and P, respectively.
Simulation Results
In this section, we describe the implementation details and design issues of the proposed work. Vivado, the well-known Xilinx platform, was chosen to carry out the suggested approach, and the experimental tasks were performed in Verilog code in the Vivado software.
Result Parameters
The RTL schematic of the proposed improved Montgomery multiplication design can be seen in Figure 4 below. Register-transfer-level abstraction is used in hardware description languages (HDLs) [16] [20] such as Verilog and VHDL to construct high-level models of a circuit, from which lower-level views and, eventually, the actual routing can be derived. Figure 4(a) and Figure 4(b) show the proposed design from the RTL and the technology schematic perspectives, respectively.
Implemented Results
Figure 6 shows the Nexys 4 FPGA board, made by Digilent, to which the UCF file is applied. The proposed method is validated on this board and achieves the same results as the simulation of the Montgomery multiplication: the outputs obtained after synthesis on the FPGA board match the simulated waveforms shown in Figure 5. Tables 1 and 2 show the different result parameters calculated in this research work: the design takes a lower number of LUTs (look-up tables) and a lower number of flip-flops. Based on the implementation results, the proposed design requires less area than the existing designs; the performance of the Montgomery multiplication algorithm is improved and, thereby, low space complexity is achieved when performing RSA cryptosystems. On-chip power and various other performance parameters are shown below; power and activity are calculated based on the implemented netlist and other data sources such as constraint and simulation files.
Table 2. Result Of Device Utilization Summary
Figure 7 below shows the input/output synthesized design of the proposed method. The I/O-synthesized Montgomery multiplication design was produced with the help of the UCF file, which is generated after verification of the simulation outcomes.
Figure 8 below shows the outcome of the proposed Montgomery multiplication design in the Isim simulator, in which the outcomes of the proposed design can be clearly verified. The inputs are denoted by A(7:0) and B(7:0) and the output is denoted by S(15:0). Table 3 gives a comparison of the different methods in terms of the number of LUTs and power consumption; the proposed method takes less power than the previous methods.
We also compare in terms of the number of LUTs: the proposed design takes less area than previous methods, with a lower number of flip-flops and registers.
Conclusion
This research work presented an enhanced Montgomery algorithm and an efficient implementation of modular multiplication. The method presented here makes the RSA algorithm more time efficient. The presented Montgomery multiplication algorithm has the benefit of replacing the division operation with a bit-shift operation. The implementation of Montgomery multiplication requires a trade-off between the amount of space on the chip and the time it takes to perform the computation. The advantage of clearing the least significant bits of the partial product before shifting them out outweighs the disadvantages of the earlier approach. The proposed approach produces satisfactory results in terms of the number of slice flip-flops, LUTs and IOBs. In terms of power consumption, the results obtained using the proposed method are superior to those obtained using other, more traditional approaches. The comparison is shown in the third column of the table above. | 4,496.8 | 2023-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
High order algorithms for numerical solution of fractional differential equations
In this paper, two novel high order numerical algorithms are proposed for solving fractional differential equations where the fractional derivative is considered in the Caputo sense. The total domain is discretized into a set of small subdomains and then the unknown functions are approximated using the piecewise Lagrange interpolation polynomial of degree three and degree four. The detailed error analysis is presented, and it is analytically proven that the proposed algorithms are of orders 4 and 5. The stability of the algorithms is rigorously established and the stability region is also achieved. Numerical examples are provided to check the theoretical results and illustrate the efficiency and applicability of the novel algorithms.
Introduction
The subject of fractional calculus (the theory of integration and differentiation of arbitrary order) can be considered an old yet novel topic. It has been an ongoing topic for more than 300 years; however, since the 1970s it has been gaining increasing attention [1]. Initially, there were almost no practical applications of fractional calculus (FC), and it was considered by many as an abstract area containing only mathematical manipulations of little or no use [2,3,4]. Recently, FC has been widely used in various applications in almost every field of science, engineering, and mathematics, and it has gained considerable importance due to its frequent appearance in applications such as fluid flow, polymer rheology, economics, biophysics, control theory, psychology and so on [5,6,7].
The main reason that fractional differential equations (FDEs) are being used to model real phenomena is that they are nonlocal in nature; that is, a realistic model of a physical phenomenon depends not only on the time instant but also on the previous time history [8]. In other words, the fractional derivative provides a perfect tool to describe the memory and hereditary properties of various materials and processes [9,10]. Some of the other main differences between fractional calculus and classical calculus are: (1) FDEs are at least as stable as their integer-order counterparts [11,12]; (2) using FDEs can help to reduce the errors arising from neglected parameters in modeling real-life phenomena [13,14]; (3) in some situations, FDE models seem more consistent with the real phenomena than integer-order models [15,16]; (4) fractional-order models are more general [17], and in the limit the results obtained from FC coincide with those obtained from classical calculus [18]; and so on.
The wide applicability of FC in the fields of science and engineering motivates researchers to try to find analytical or numerical solutions of FDEs. It is well known that analytical and closed-form solutions of FDEs cannot generally be obtained and, when they can, they usually contain some infinite series (such as the Mittag-Leffler function) which make evaluation very expensive [19, 20]. For this reason, an efficient approximate and numerical technique for the solution of FDEs is necessary [21]. Odibat et al. constructed a numerical scheme for the numerical solution of FDEs based on the modified trapezoidal rule and the fractional Euler's method [22]. To obtain a numerical solution scheme for fractional differential equations, the authors of [23] divided the time interval into a set of small subintervals and utilized a quadratic interpolation polynomial between two successive intervals to approximate the unknown functions. Cao and Xu applied a quadratic interpolation polynomial to construct a high order scheme, based on the so-called block-by-block approach, for fractional ordinary differential equations [24]. The convergence order of this scheme is 3 + α for 0 < α ≤ 1, and 4 for α > 1. Diethelm proposed an implicit numerical algorithm for solving FDEs by using piecewise linear interpolation polynomials to approximate the Hadamard finite-part integral [25]. Yan et al. designed a high order numerical scheme for solving a linear fractional differential equation by approximating the Hadamard finite-part integral with quadratic interpolation polynomials [26]. This method is based on a direct discretisation of the fractional differential operator, and the order of convergence of the method is O(h^(3−α)). A high order fractional Adams-type method for solving nonlinear FDEs is also obtained in that paper. Pal et al. designed an extrapolation algorithm for solving linear FDEs based on the direct discretization of the fractional differential operator [27].
In this paper, we introduce two new numerical algorithms for solving nonlinear FDEs expressed in terms of Caputo-type fractional derivatives. In these algorithms, properties of the Caputo derivative are used to reduce the FDE to a Volterra-type integral equation of the second kind. We then use Lagrange interpolation polynomials of degree three and four to approximate the integral, and the proposed numerical algorithms have truncation errors of O(h^4) and O(h^5), respectively, for all α > 0. The stability of the numerical methods is proved based on the properties of the weights in the numerical algorithms, under the assumption that the final time T > 0 is sufficiently small.
Such properties are used here for the first time to prove the stability of numerical methods for solving fractional differential equations. To the best of our knowledge, there is no numerical algorithm in the literature for solving nonlinear fractional differential equations with convergence order greater than 4. We also introduce a new way to analyse the stability of numerical methods for solving fractional differential equations.
The outline of the paper is as follows. Numerical algorithms are presented in Section 2 by using piecewise Lagrange interpolation polynomials of degree three and degree four. Section 3 deals with the error analysis of the presented algorithms, and the stability analysis of these algorithms is given in Section 4. A linear stability analysis of the proposed schemes is given in Section 5 to obtain the stability regions of these methods. To demonstrate the effectiveness and high accuracy of the proposed methods, some numerical examples are provided in Section 6. Finally, some concluding remarks are given in Section 7.
Consider the nonlinear fractional differential equation

C_0 D_t^α y(t) = f(t, y(t)),  t ∈ (0, T],  y^(i)(0) = y_0^(i),  i = 0, 1, …, ⌈α⌉ − 1,   (1)

with α > 0, where C_0 D_t^α denotes the Caputo fractional derivative and f(t, u) satisfies a Lipschitz condition with respect to the second variable, i.e., there exists a constant L > 0 such that

|f(t, u) − f(t, v)| ≤ L |u − v|.   (2)

It is well known that the initial value problem (1) is equivalent to the Volterra integral equation

y(t) = h(t) + (1/Γ(α)) ∫_0^t (t − τ)^(α−1) f(τ, y(τ)) dτ,

where h(t) = Σ_{i=0}^{⌈α⌉−1} y_0^(i) t^i / i!. The approximate solution of (1) at the point t_j is denoted by y_j. For notational convenience, let F(τ) = f(τ, y(τ)) and F_j = f(t_j, y_j).
Numerical algorithm I
We start by computing the values of y(t) at t_1, t_2 and t_3 simultaneously. Consider the corresponding integral for the first three steps (k = 0, 1, 2), where F̂(τ) is chosen to be the piecewise Lagrange cubic interpolation polynomial of F(τ) associated with the nodes t_0, t_1, t_2 and t_3. After some elementary calculations, y_{k+1} for the first three steps k = 0, 1, 2 can then be approximated accordingly. As mentioned above, the first three step solutions y_1, y_2 and y_3 are coupled in (6) and thus need to be solved simultaneously. An explicit solution of these three equations is given in Appendix A.
To construct the scheme for the subsequent steps, the integral I_{k+1}, k ≥ 3, is discretised as follows: as in (4), for the first three integrals (j = 0, 1, 2, 3), F̂ is the piecewise Lagrange cubic interpolation polynomial of F(τ) associated with the nodes t_0, t_1, t_2 and t_3; for the remaining integrals (j = 3, 4, …, k + 1), F̂_{j+1} is chosen to be the piecewise Lagrange cubic interpolation polynomial of F(τ) associated with the nodes t_{j−2}, t_{j−1}, t_j and t_{j+1}. A few special cases have to be excluded. After some explicit calculations, y_{k+1} for k ≥ 3 can be approximated accordingly. To summarise, we obtain the novel scheme (11), where the weights d_j^{k+1} are defined as above.
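As a minimal sketch of the product-integration idea that underlies schemes of this type, the classical fractional Adams predictor-corrector (PECE) method of Diethelm, Ford and Freed can be written down for the same Volterra form; it uses piecewise linear rather than cubic interpolation of F and is therefore not the higher-order scheme (11) itself. The function names and the test problem below are illustrative assumptions.

import math
import numpy as np

def fractional_adams(f, alpha, y0, T, N):
    """Solve C_0 D^alpha y = f(t, y) on [0, T] with N uniform steps.
    y0 holds the initial values y(0), y'(0), ... (ceil(alpha) entries)."""
    m = math.ceil(alpha)
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.zeros(N + 1)
    y[0] = y0[0]
    fval = np.zeros(N + 1)
    fval[0] = f(t[0], y[0])
    for n in range(N):
        # Taylor part coming from the initial conditions (the h(t) term)
        taylor = sum(t[n + 1] ** k / math.factorial(k) * y0[k] for k in range(m))
        j = np.arange(n + 1)
        # predictor weights (product-rectangle rule)
        b = (n + 1 - j) ** alpha - (n - j) ** alpha
        y_pred = taylor + h ** alpha / math.gamma(alpha + 1) * np.dot(b, fval[: n + 1])
        # corrector weights (product-trapezoidal rule)
        a = np.empty(n + 1)
        a[0] = n ** (alpha + 1) - (n - alpha) * (n + 1) ** alpha
        if n >= 1:
            jj = np.arange(1, n + 1)
            a[1:] = ((n - jj + 2) ** (alpha + 1) + (n - jj) ** (alpha + 1)
                     - 2 * (n - jj + 1) ** (alpha + 1))
        y[n + 1] = taylor + h ** alpha / math.gamma(alpha + 2) * (
            np.dot(a, fval[: n + 1]) + f(t[n + 1], y_pred))
        fval[n + 1] = f(t[n + 1], y[n + 1])
    return t, y

# Example: C_0 D^0.5 y = Gamma(5)/Gamma(4.5) * t^3.5 has exact solution y = t^4.
alpha = 0.5
rhs = lambda t, y: math.gamma(5) / math.gamma(4.5) * t ** 3.5
t, y = fractional_adams(rhs, alpha, [0.0], 1.0, 200)
print(abs(y[-1] - 1.0))  # error at t = 1

Replacing the product-trapezoidal weights in the corrector by weights derived from cubic or quartic Lagrange interpolants, as in algorithms I and II, is what raises the attainable order of accuracy.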
Numerical algorithm II
Consider the corresponding integral for the first four steps (k = 0, 1, 2, 3), where F̂(τ) is the piecewise Lagrange interpolation polynomial of degree four associated with the nodes t_0, t_1, t_2, t_3 and t_4. Hence y_{k+1} for the first four steps k = 0, 1, 2, 3 can be determined accordingly. The first four step solutions y_1, y_2, y_3 and y_4 are coupled in (14) and thus need to be solved simultaneously. An explicit solution of these four equations is given in Appendix B.
To design the scheme for the subsequent steps, the integral I_{k+1}, k ≥ 4, is discretised as follows: as in (12), for the first four integrals (j = 0, 1, 2, 3), F̂ is the piecewise Lagrange interpolation polynomial of degree four associated with the nodes t_0, t_1, t_2, t_3 and t_4; for the remaining integrals (j = 4, 5, …, k + 1), F̂_{j+1} is the piecewise Lagrange interpolation polynomial of degree four associated with the nodes t_{j−3}, t_{j−2}, t_{j−1}, t_j and t_{j+1}. In this way, the corresponding weights for k ≥ 4 are obtained, where again a few special cases have to be excluded. Therefore y_{k+1} for k ≥ 4 can be approximated accordingly, and the new numerical algorithm II is described by (14) and (19) with the weights b_j^{k+1} defined as above.
Error analysis
For the numerical algorithm I, the truncation error at step k + 1 is defined by

r_{k+1}(h) = y(t_{k+1}) − ỹ_{k+1},   (20)

where ỹ_{k+1} is an approximation to y(t_{k+1}) evaluated by using algorithm I (11) with the exact previous solutions y(t_j) in place of y_j, for k ≥ 3. For the numerical algorithm II (19), the truncation error is defined in the same way as (20), with ỹ_{k+1} for k ≥ 4 computed analogously.
Theorem 1. Let r_{k+1}(h) be the truncation error defined in (20). If F(τ) ∈ C^4[0, T] for some suitably chosen T, then for the numerical algorithm I (11) there exists a positive constant C > 0, independent of h, such that |r_{k+1}(h)| ≤ C h^4.
Theorem 2. Let r_{k+1}(h) be the truncation error defined in (20). If F(τ) ∈ C^5[0, T] for some suitably chosen T, then for the numerical algorithm II (14) and (19) there exists a positive constant C > 0, independent of h, such that |r_{k+1}(h)| ≤ C h^5.
Proof. The details of the proof are similar to those of Theorem 1 and are therefore omitted.
Stability analysis
The stability of a numerical scheme mainly refers to the property that a perturbation in the initial conditions causes only a correspondingly small error in the numerical solution [28,29]. Suppose that y_{k+1} and ỹ_{k+1} are numerical solutions of (11) with initial conditions y_0^(i) and ỹ_0^(i), respectively. If |y_{k+1} − ỹ_{k+1}| ≤ C max_i |y_0^(i) − ỹ_0^(i)| holds, then the scheme (11) is said to be stable [30]. Stability for the numerical algorithm II (14) and (19) is defined similarly. Throughout, assume that F(τ) is sufficiently smooth and that C_α > 0 is a constant independent of all discretisation parameters. We first introduce two lemmas which will be used in the stability analysis.
Lemma 1. For the weights of the novel scheme (11) we have the bound (24), where C_α depends only on α.
Proof. For d_0^{k+1}, we have an expression involving intermediate points ξ_j with j − 1 ≤ ξ_j ≤ j, j = 1, 2, 3, from which the required estimate follows. Using a similar analysis, it can be shown that for j = 1, 2, 3, k − 1, k, k + 1 there exists a constant C_α, taking a different value in each case, such that the corresponding inequality holds.
For j = 4, 5, …, k − 2 an analogous estimate holds, which can be brought into a simplified form. Combining all of the above results, and choosing C_α sufficiently large and T sufficiently small, one obtains (24), which completes the proof of the lemma.
Lemma 2. For the weights of the novel scheme (19) we have the analogous bound, where C_α depends only on α.
Proof. The idea of the proof is similar to that of Lemma 1 and is therefore omitted.
Theorem 3. Under the above assumptions and for sufficiently small T, the numerical scheme (11) is stable; that is, there exists a constant C_{α,T} > 0 such that |y_{k+1} − ỹ_{k+1}| ≤ C_{α,T} max_i |y_0^(i) − ỹ_0^(i)|.
Proof. Suppose that y_{k+1} and ỹ_{k+1} are the numerical solutions of (11) with initial conditions y_0^(i) and ỹ_0^(i), respectively. We use mathematical induction: assume the stability estimate (27) holds for j = 0, 1, …, k; we must prove that it also holds for j = k + 1. By the assumptions on the given initial conditions, the induction basis (j = 0) is true. Using the Lipschitz condition (2) and Lemma 1, one obtains a bound on |y_{k+1} − ỹ_{k+1}| in terms of the previous differences. For sufficiently small T, choosing the constant C_{α,T} sufficiently large completes the induction (27) and the proof.
Theorem 4. Under the same assumptions, the numerical algorithm II given by (14) and (19) is stable.
Proof. The proof is similar to that of Theorem 3.
Linear stability analysis
Consider the following linear test problem to investigate the stability regions of the presented methods:

C_0 D_t^α y(t) = λ y(t),  y(0) = y_0,  λ ∈ ℂ.   (28)

The new method (11) yields a corresponding iteration formula for solving (28). Denoting z = λ h^α and setting y_j = ξ^j with ξ = e^{iθ}, 0 ≤ θ ≤ 2π, the boundary of the stability region of the scheme (11) is obtained. The stability region of algorithm II (14) and (19) can be derived in a quite similar way. The stability region of the numerical algorithm I is shown in Figs. 1 and 2.
Numerical results
To check the numerical errors between the exact and the numerical solutions, several examples are considered.
Example 1. Consider a fractional differential equation whose exact solution is y(t) = t^4; the errors are evaluated at time t = 1 for different step sizes h and different values of α.
Example 2. Consider a fractional differential equation whose exact solution is y(t) = t^2 − t.
Example 3. Consider a fractional differential equation whose exact solution is y(t) = t^4 − (1/2) t^3. Table 7 shows the absolute errors of the presented schemes and of the method reported in Ref. [27] at time t = 1. From this table it is observed that the error of the presented methods is decreased significantly. In future work, we shall try to follow this idea to construct higher-order schemes for solving nonlinear FDEs.
Appendix A
The idea of solving for y_1, y_2 and y_3 from (6) is as follows. For simplicity, assume that f(t, y) = µy + g(t) in order to illustrate the idea of the numerical method. From (6) we obtain a linear system of equations for y_1, y_2 and y_3. Substituting (35) and (38) into (37) yields an equation in y_3 alone, so y_3 can first be calculated from the given initial conditions and the known function g(t).
Then y_2 and y_1 can be calculated from (38) and (35), respectively.
"Mathematics"
] |
Evaluation of cement-bonded particleboards produced from mixed sawmill residues
This study evaluates the application feasibility and properties of cement-bonded particleboards produced from mixed tropical hardwood species. Wood residues from a typical sawmill were collected, dried and used in the manufacturing of the cement composites. The wood residues used were from Ceiba pentandra and Gmelina arborea timber species. The residues were mixed in seven ratios in the production of the composite samples. Two control experimental samples containing unmixed residues of each species were also produced. The tests carried out on the boards were flexural strength, water uptake properties and wet and dry screw withdrawal resistance. The effect of the wood mix ratio on the board properties was evaluated. The results showed that all properties except the screw withdrawal resistance were significantly influenced by the mix ratios (p < 0.05). The wet and dry screw withdrawal resistance ranged from 1170 to 1770 N and 1360 to 1830 N, respectively. The optimum wood mix ratio for enhancing mechanical performance of the boards was 1:4 of C. pentandra/G. arborea wood residues. Based on the results of this study, the particleboards produced can be used as wood composite ceiling tiles in building applications.
Introduction
Wood processing mills generate millions of tonnes of residues annually. Some of these residues include sawdust, planer shavings and bark, which are often incinerated to generate heat energy in integrated sawmills. In some cases, such residues are collected at municipal plants and incinerated to generate heat for district heating and electricity (Alm and Karlsson 2016). The beneficial use of sawmill residues in green heating systems is only applicable in developed countries, where such processes complement and have almost replaced the fossil-based heating grid. However, in developing countries, much of these residues are treated as waste having no commercial value. Many sawmills do not have the facilities to incinerate residues and usually carry out open burning practices (Fabunmi et al. 2012;Effah et al. 2015). This method is of serious environmental concern with the release of volatile compounds and particulate matter in the atmosphere. Landfilling as an option is also not generally practiced because it increases the total cost of processing, coupled with the shortage of available landfills. It therefore becomes imperative to find alternative ways by which these residues can be utilized within the scope and capability of a developing economy. In Africa, sawmill residues are inevitable but a significant volume may be reduced by careful process optimization. However, these residues are being utilized in other areas of applications including wood composite manufacturing, wood pelletization, animal bedding, etc. One major limitation in the use of this material is that it sometimes contains residues from several tree species that are converted within a period. Some of these residues may not be suitable for the intended application, and sorting may be a difficult option to implement when different timber species are converted together. This study therefore investigates the use of such mixed residues in manufacturing cement-bonded ceiling tiles.
Cement-bonded wood particleboard has been the subject of many researches for many decades. Different wood species, including lignocellulosic fiber materials, have been used to produce composite boards bonded with Portland cement (Savastano et al. 2000;Semple et al. 2002;Fan et al. 2012;Ardanuy et al. 2015). In addition, mixed-wood and non-wood species have been used to produce cement and lime-bonded boards (Badejo 1988;Aigbomian and Fan 2013;Garcez et al. 2016). This study seeks to address the potential of residues of some tropical hardwood species as raw materials for manufacturing cement-bonded ceiling tiles. The species of interest include Ceiba pentandra and Gmelina arborea which are predominant in the area of study. The use of these mixed-wood species in cementbonded composites has not been previously reported in the literature. However, cement composites with G. arborea wood residues have been studied (Aladenola et al. 2008;Amiandamhen and Izekor 2013;Owoyemi and Ogunrinde 2016). The manufacturing of cement-bonded particleboard is faced with some disadvantages, one of which is the inherent incompatibility between some wood species and Portland cement (Na et al. 2014;Hachmi et al. 2017). As a result, inhibiting species are sometimes treated or additives are used to reduce the setting time of cement (Karade et al. 2003;Na et al. 2014). In this study, the wood species were treated with hot water and calcium chloride (CaCl 2 ) was used as an additive. This was necessary based on the results from previous study using G. arborea sawdust (Amiandamhen and Izekor 2013).
The properties of cement-bonded particleboards depend on several parameters including the nature of the biomass material, the biomass blends and the ratio of the cement-biomass mix (Sotannde et al. 2012; Castro et al. 2018). Similarly, several studies have investigated the properties of cement-bonded particleboards produced from mixed biomass species; however, there are very few studies on the performance of controlled mixing of wood species in cement composites. In one study, Garcez et al. (2016) blended Eucalyptus grandis and Pinus ellioti sawdust between 0 and 100% by mass and found intermediate to lower values of compressive strength and intermediate to higher values of dynamic modulus of elasticity. Wood-cement boards with 100% of E. grandis sawdust had higher values of compressive strength compared to boards with P. ellioti sawdust (Garcez et al. 2016). In a blend of hardwoods and softwoods for wood-crete production, Aigbomian and Fan (2013) found lower composite properties compared to unblended composites, which could be a result of an adverse effect of mixing, such as bridging between the particles. Thus, there is a literature gap on the properties of blended hardwood species. Therefore, the objective of this study is to evaluate the properties of cement-bonded particleboard made from mixing two tropical hardwood species. The aim of the study is to demonstrate the feasibility of manufacturing wood-cement ceiling boards from mixed-wood residues obtained from a typical sawmill in Benin City, Nigeria.
Experimental
The wood residues used were Ceiba pentandra (L.) Gaertn. and Gmelina arborea Roxb. The residues were collected from a local sawmill just after primary conversion of the logs. The wet residues were air-dried to a moisture content of 18%. The wood residues were hammer-milled and sieved through a 1-mm mesh. The particles were soaked in hot water maintained at a constant temperature of 100°C in a water bath for 1 h. This was necessary to remove any water-soluble sugars and hemicelluloses which tend to inhibit cement hydration. The leachate was drained and the wood particles were air-dried for 24 h. Thereafter, the particles were conditioned at 20°C and 65% relative humidity (RH) for 96 h to an equilibrium moisture content of 12%.
Sample preparation and testing
The target board density was 1.2 g/cm3 for a cement/wood ratio of 2:1. The actual board size was 350 × 350 × 6 mm. Therefore, the amount of cement and wood particles in each board was 588 and 294 g, respectively. A pre-calculated amount of water, based on the relationship used in a previous study (Amiandamhen et al. 2016), was added to the mixture. A measured quantity of CaCl2 (3% w/w of the cement) was dissolved in the water to accelerate the hydration process of the cement.
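These quantities follow directly from the target density and the nominal board geometry; a short check of the arithmetic is:

board volume = 35 cm × 35 cm × 0.6 cm = 735 cm3
board mass = 1.2 g/cm3 × 735 cm3 = 882 g
cement : wood = 2 : 1, so cement = (2/3) × 882 g = 588 g and wood = (1/3) × 882 g = 294 g.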
The wood particles were mixed together according to Table 1.
The wood particles, cement and water were mixed thoroughly in a planetary style mixer for about 10 min. The mixture was transferred quantitatively into a wooden mold of known dimensions. The preformed mat was covered with metal plates, and a metal bar was placed at the edge of the bottom plate to set the thickness of the board. The plate was cold-pressed at 1.23 MPa for 24 h. The same procedure was repeated for all the samples (A-I). Each sample was replicated four times. Thereafter, the plates were removed and the boards were air-cured for 28 days prior to cutting and testing.
The edges of the boards were trimmed to avoid edge effects. The boards were cut into test sample sizes according to ASTM D1037 (2012) and conditioned at 20°C and 65% RH for 96 h before testing. The tests evaluated were modulus of rupture (MOR), apparent modulus of elasticity (MOE), wet and dry screw withdrawal resistance (WSW/DSW), water absorption (WA) and thickness swelling (TS). These tests were considered due to the proposed ceiling application of the products and were performed according to ASTM D1037 (2012). Preliminary investigations using C. pentandra sawdust in a cement/wood ratio of 2:1 yielded poor screw withdrawal resistance and high water absorption. It is the aim of this study to evaluate these properties and ascertain the potential of the combined residues for manufacturing marketable ceiling boards.
Experimental design and analysis
The experiment was laid out as a completely randomized design with nine treatments (wood mix ratios) and four replicates. Analysis of variance (ANOVA) was conducted to determine the effect of the production variables (mix ratios) on the properties evaluated. Duncan's new multiple range test (DNMRT) was used in the separation of means where significant differences occurred.
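As an illustration of this analysis step, a one-way ANOVA of one board property across the nine mix-ratio treatments could be run as sketched below; the property values are placeholders rather than the measured data, and SciPy is assumed to be available.

# Hypothetical sketch: one-way ANOVA of a board property (e.g. MOR in MPa)
# over the nine mix-ratio treatments A-I with four replicates each.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatments = list("ABCDEFGHI")
data = {t: rng.normal(loc=2.0 + 0.2 * i, scale=0.3, size=4)   # placeholder values
        for i, t in enumerate(treatments)}

f_stat, p_value = stats.f_oneway(*data.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Mix ratio has a significant effect on this property.")

Duncan's new multiple range test itself is not part of SciPy; Tukey's HSD (available in recent SciPy versions as scipy.stats.tukey_hsd) is a commonly used alternative for the mean-separation step.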
Results and discussion
Effect of wood mix ratios on MOE and MOR
Figure 1 shows the effect of the wood mix ratios on the MOE of the cement boards. Sample H with a mix ratio of 1:4 (C. pentandra/G. arborea) had the highest mean MOE of 2.73 GPa. Moderate values were also observed for samples F, A and E with mix ratios of 2.3:1, 1:1 and unmixed C. pentandra boards, respectively. There was no relationship between the mixing ratios and MOE. The average MOE value obtained in this study is in the range reported by Tittelein et al. (2012) for a low-density wood-cement particleboard using a cement/wood ratio of 2:1. Although the stiffness characteristic is a function of the cement-wood ratio (Frybort et al. 2008), the presence of different wood species may cause considerable variation in mechanical properties (Garcez et al. 2016). From the results, sample G with a mix ratio of 1:2.3 (C. pentandra/G. arborea) had the lowest mean MOE.
The MOR of the samples is presented in Fig. 2. The mean MOR values ranged from 0.43 to 4.13 MPa. Samples E and H had the highest MOR values of 4.13 and 4.08 MPa, respectively. This indicates that the wood mix ratio in sample H (1:4, C. pentandra/G. arborea) is the optimum mix for enhancing the flexural property of the wood-cement composites. Sample E with 100% C. pentandra also proved to yield good bending properties. The pattern of variation among the other samples shows that there is no linear relationship between the MOR and the wood mix ratios. Similar MOR values were also observed by several authors, although the studies reported were from single wood species (Sotannde et al. 2012; Tittelein et al. 2012; Amiandamhen and Izekor 2013).
Effect of wood mix ratios on WA and TS
The results of the 24-h submersion of the samples in water for WA and TS are presented in Figs. 3 and 4, respectively. The WA test showed that boards made from unmixed G. arborea had the lowest mean value of 18.63%, while boards produced from unmixed C. pentandra had the highest mean value of 53.64%. The mean WA values for the other mixing ratios, with the exception of 1:1 (C. pentandra/G. arborea), were relatively high. This indicates that the samples absorb too much water during submersion, probably due to an adverse effect of mixing as observed by Aigbomian and Fan (2013). From this observation, it could be explained that inadequate inter-particle mixing results in a porous structure in the matrix, which inevitably leads to high water absorption. On the contrary, unmixed G. arborea sawdust forms a relatively compacted product with cement, and the product is moderately resistant to moisture (Amiandamhen et al. 2016). The TS of the particleboards ranged from 6.43 to 14.32%. Sample F had the least TS, while sample G had the highest TS.
Effect of wood mix ratios on WSW/DSW
Figure 5 shows the WSW and DSW perpendicular to the plane of the tested samples. DSW was carried out on samples at 12% moisture content, while WSW was carried out at an MC of 60%. Static withdrawal was performed at a rate of 6.6 mm/min according to the procedure of Rammer and Zelinka (2004). Sample C with a wood mix ratio of 1:1.5 (C. pentandra/G. arborea) had the lowest mean values of 1360 and 1170 N, while sample I with a ratio of 4:1 (C. pentandra/G. arborea) had the highest mean values of 1830 and 1770 N for DSW and WSW, respectively. Although SW depends on the particleboard density among several factors (Miljković et al. 2007), it was observed that the values vary only slightly irrespective of the sample density. Okino et al. (2004) also reported that the SW perpendicular to the plane of particleboard was between 1500 and 2020 N. These values are higher than the previous values obtained for nail withdrawal resistance of the same type of panels. Other authors also found screw withdrawal resistance to be higher than nail withdrawal resistance. This difference is due to the surface friction, which holds the fastener in a structural panel. The greater the frictional surface, the higher the withdrawal strength (Rammer and Zelinka 2004).
Data analysis and mean comparisons
The result of the ANOVA is presented in Table 2. It was observed that the wood mix ratios have a significant effect on the properties evaluated on the particleboard (p < 0.05), except for the wet and dry screw withdrawal resistance. Table 3 shows the multiple mean comparisons on the board properties.
Conclusions
The increasing interest by many countries in cement-bonded wood products is likely due to the durability of the product, markets and design flexibility. With affordable technology and materials, this product provides the opportunity to increase the economic potential of developing countries. Wood residues will continue to be a significant material for manufacturing cement-bonded products in developing countries, because of the high volume of the residues generated in sawmills. The utilization of these residues will not only reduce the negative environmental effects associated with open burning, but will also help to promote a circular bioeconomy. From a typical sawmill that generates different wood residues, it is almost impossible to sort these residues according to species. This study was conducted to provide a technical assessment of the feasibility of producing cement-bonded particleboards from mixed residues and to evaluate the board properties for ceiling applications. The study revealed that the physical properties of the particleboards produced met the minimum requirements for cement-bonded particleboards according to BS EN 634-2 (2007), for use in humid and dry environments. As a result, the products can be used for the proposed application as ceiling tiles for building constructions. For manufacturing structural panels using these wood residues, increasing the cement content is recommended for durability purposes.
Funding Open access funding provided by Linnaeus University.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Materials Science"
] |
Identifying challenges in maintenance planning for on-demand UAM fleets using agent-based simulations
The novel aircraft architectures for Urban Air Mobility (UAM), combined with pure on-demand operations, mean a significant change in aircraft operation and maintenance compared to traditional airliners. Future flight missions and related variables such as the aircraft position or utilisation are unknown for on-demand operation. Consequently, existing methods to optimise aircraft assignment and maintenance planning cannot be transferred. This study examines the behaviour of an aircraft fleet in an on-demand UAM transport system regarding the interlinking between operation and maintenance. Initially, a potential maintenance schedule for UAM vehicles is deduced. A transport and maintenance simulation is introduced where aircraft are modelled as agents servicing a simple network. As aircraft reach their maintenance intervals, they transfer to one of the maintenance bases and compete for that resource. Since that competition can result in avoidable waiting times, the maintenance costs are extended by running costs for the bases and opportunity costs for missed revenue during these waiting periods. Opportunity costs are cost drivers. To reduce the waiting times, two operational approaches are examined: Extending the opening hours of the maintenance facilities and checking the aircraft earlier to reduce simultaneous maintenance demand. While an extension of operating hours reduces the overall maintenance costs, the adjustment of tasks is more effective to lower waiting times. Thus, an improved system needs to use a combined approach. That combination results in overall maintenance costs of approximately $ 58 per flight hour of which about seven percent account for the opportunity costs.
Introduction
This introduction outlines developments in the field of Urban Air Mobility (UAM), gives a brief overview of how the interlinking between operation and maintenance has been researched, and closes with the structure of this study.
The concept of UAM is not a novelty of the past decade. Helicopter shuttles have existed in New York City since the 1950s 1 and, today, on-demand services are available to high-net-worth individuals in a few global cities.
Upcoming social changes and quickly advancing technologies indicate that UAM will leave its niche. The number of inhabitants of urban areas will increase by one billion within this decade, most of them in Asia [1]. Simultaneously, the disposable incomes in Asia will experience an upswing [2]. Growing cities face transportation challenges and the increasing income of several hundred million inhabitants will broaden their financial possibilities.
At the same time, fundamental obstacles for UAM have been overcome in recent years. The 5G standard for cellular networks provides the foundation for precise positioning of autonomous flying vehicles [3]. Lithium-ion batteries have been constantly improved to specific energy and power rates that enable test flights for a range of Urban Air Mobility Vehicles (UAMVs) [4]. The integration of autonomous controls into personal vehicles is showcased in pilot cities and integrated into serial production 2 . Customers self-evidently use app-based mobility providers transporting millions of passengers a day 3 .
These promising conditions have motivated more than 100 companies to develop UAMVs. Risks, for instance overly optimistic promises by manufacturers or the effects of the COVID-19 pandemic on attitudes towards aviation and personal mobility, call the potential success of UAM into question. 4 While the development of UAMVs is in full swing, the operational aspects of UAM have received less attention. To the best of our knowledge, apart from one paper interviewing maintenance mechanics about challenges with electric UAMVs, no research focusing on maintenance and its operational implications for UAM has been conducted. Nonetheless, the market volume for UAM services such as insurance, maintenance or certification is estimated at a tempting $ 5 billion for 2035 [5].
This study is a first step towards picturing and understanding the mechanisms between on-demand operation of UAM fleets and the requirements for the technical operational ecosystem. Therefore, a transport network simulation consisting of three major elements is developed in MATLAB. Vertiports are airports for UAMVs and are simplified as agglomerations of landing pads on which the aircraft take off and land. The UAMVs are modelled as agents moving through the network servicing flight requests. The maintenance bases are equipped with a limited number of slots that can each maintain one UAMV and have restricted opening hours. Initially, different settings of initial fleet ages are compared. The influence of operational parameters on the interlinking between maintenance and operation is investigated by changing the maintenance operation hours and by checking aircraft prior to the end of their maintenance intervals. The most impactful parameters are combined to identify the best setup regarding maintenance costs and transport capacity within the scope of this simulation. With this work, a simple UAM operation and maintenance simulation is presented. The impacts of operational decisions on the queuing for maintenance and on the overall transport performance are demonstrated. Furthermore, readers will understand the importance of aircraft assignment for on-demand aviation regarding maintenance.
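To make the structure of such a simulation concrete, a heavily simplified skeleton of the three elements described above (vertiports, UAMV agents and maintenance bases) might look as follows in Python; all class names, attributes and the time-stepping loop are illustrative assumptions and not the MATLAB implementation used in this study.

# Illustrative skeleton only: UAMVs fly on-demand missions, accumulate flight
# hours, and queue at a maintenance base once a check becomes due. Check
# durations, costs and vertiport capacities are omitted in this sketch.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Vertiport:
    name: str
    pads: int

@dataclass
class MaintenanceBase:
    slots: int
    open_hours: range                       # e.g. range(8, 18) -> 08:00-18:00
    queue: deque = field(default_factory=deque)

@dataclass
class UAMV:
    tail: str
    flight_hours: float = 0.0
    check_interval_fh: float = 100.0

    def check_due(self) -> bool:
        return self.flight_hours >= self.check_interval_fh

def step(hour: int, fleet, base: MaintenanceBase, demand_fh: float = 0.5):
    """Advance the simulation by one hour."""
    for ac in fleet:
        if ac.check_due() and ac not in base.queue:
            base.queue.append(ac)           # aircraft transfers to the base
        elif not ac.check_due():
            ac.flight_hours += demand_fh    # serve on-demand missions
    if hour % 24 in base.open_hours:
        for _ in range(min(base.slots, len(base.queue))):
            ac = base.queue.popleft()
            ac.flight_hours = 0.0           # check performed, interval reset

fleet = [UAMV(tail=f"D-UAM{i}") for i in range(5)]
base = MaintenanceBase(slots=1, open_hours=range(8, 18))
for hour in range(24 * 7):                  # one simulated week
    step(hour, fleet, base)
print([round(ac.flight_hours, 1) for ac in fleet], len(base.queue))

Even this crude loop already exhibits the behaviour studied later: when several aircraft reach their intervals at the same time, they compete for the single maintenance slot and accumulate waiting time.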
This publication is grouped into five sections as follows: The following Sect. 2 provides the fundamentals of aircraft maintenance and covers literature simulating UAM transport networks as well as aircraft scheduling and assignment for on-demand operation. The identified research gap is presented subsequently.
The UAM maintenance simulation is introduced in Sect. 3. A generic UAM maintenance schedule is deduced as well as the composition of cost modelling is presented, before elementary parts of the simulation are shown.
The results are presented in Sect. 4. They consist of four parts, the initial simulation, evaluating the influence of different fleet ages at simulation begin and two studies to improve the interlinking between operation and maintenance and the best parameter combination.
Lastly, the conclusion and an outlook for future research options is drawn in Sect. 5.
Fundamentals and literature review
This section summarises relevant literature concerning early research on UAMV maintenance and aircraft maintenance in general, maintenance planning for on-demand aviation, and UAM transport simulations. The existing research gap is derived in the final step. The structure is therefore divided into four parts. First, Sect. 2.1 provides a brief overview of the available literature on maintenance of UAMVs and at commercial airlines and on its general scheduling. Second, Sect. 2.2 presents two different approaches to simulating UAM. The consecutive Sect. 2.3 combines both previous sections with literature about maintenance planning and scheduling for different on-demand flight operations and outlines why their optimisation approaches are limited in their applicability to our problem statement. Last, building on this overview of the relevant literature, the research gap is identified in Sect. 2.4.
Aircraft maintenance and its planning
Within this subsection, the literature on maintenance considerations for UAMV is reviewed. Furthermore, a summary on maintenance interval development for civil airlines and their check scheduling procedures is provided.
Maintenance considerations for UAMV
Even with vibrant activity in the field of UAMV, maintenance for those vehicles is a widely untouched field. Thus, only three publications providing challenges or guidelines for maintenance of UAMV could be identified.
First, Holden et al. [6] provide some initial thoughts on what maintenance could look like for the case of Uber Elevate. Compared to a light helicopter, the authors estimate a maintenance cost reduction of approximately 50 % by increasing the check intervals of electric motors to 10,000 FHs. Furthermore, they predict smaller checks at intervals of 100 FHs and a major one once a year.
Second, Rajendran and Srinivas present four major challenges for UAM [7]. One of the challenges is fleet maintenance and the authors highlight the necessity to reach a balanced utilisation of the resources required for maintenance checks. However, they do not include further information how they believe the actual maintenance tasks will look like.
The third work is by Naru and German, who organized a workshop with aircraft mechanics to identify challenges with electric UAMVs [8]. Their observations were grouped into four main blocks: the demand for a training standard for electric power-plant mechanics, battery handling and exchange, the importance of modularity in design for easy maintenance access, and general impressions of the UAMV concepts. Ideas on how maintenance and its tasks for UAMVs could be estimated are not provided either.
Jain et al. do not research maintenance for UAMVs, but map challenges for electric aviation in general [9] that can be transferred to UAM applications. Like Rajendran and Srinivas, the authors also identify a knowledge gap for the maintenance of electric aircraft. As key hurdles for electric aircraft, they identify a suitable battery capacity, the impact of different propulsion system configurations and regulatory uncertainties. Furthermore, they highlight the importance of integrating maintenance experts in early phases of aircraft design.
Aircraft maintenance intervals and scheduling
Maintenance is compulsory for every aircraft to maintain continuing airworthiness. The minimum scheduled maintenance and inspections requirements for a new aircraft type are determined by a maintenance review board consisting of experts from the manufacturer, authorities and operators [10,11]. The resulting time between overhauls can be defined with three measures: Flight Hours (FHs), Flight Cycles (FCs) and calendar days [12]. The Remaining Useful Lifetime (RUL) describes the real or predicted time until a component or the whole aircraft must undergo a maintenance event and can be measured in the three introduced parameters. Calendar days are usually more relevant for low utilisation aircraft, as many task intervals are limited to a certain time or utilisation in FHs or FCs [10]. These three measures apply to aircraft of the transport category and also to powered lift configurations, which include UAMVs [12].
The final maintenance programme is derived from the minimum maintenance requirements issued by the maintenance review board, the aircraft's individually installed equipment and the requirements of the operator and technical operations provider [10,13]. The scheduled maintenance tasks are combined into checks to minimize the necessary downtimes through an appropriate grouping of maintenance activities, e.g., by avoiding duplicate access times. These grouped activities are typically expressed as 'letter checks', starting from A-checks with high frequency but less invasive tasks up to less frequent D-checks with larger work scopes [14].
Furthermore, for traditional airliners, these checks are grouped into line and base maintenance. Line maintenance is integrated into the regular flight plan of the individual aircraft tail signs. Line maintenance is smaller in scope as well as shorter in duration assuring the nominal condition of the aircraft. The A-Checks are part of line maintenance and are conducted at the home base or at designated outstations, when aircraft have scheduled ground times, usually overnight [10].
Base or heavy maintenance has a more far-reaching extent, such as general overhaul of the aircraft, related repairs or updates of the aircraft, and is grouped depending on its scope in C-or D-Checks 5 . For airliners, the duration of these checks can reach multiple weeks and they are usually scheduled in times of lower demand, e.g. the winter [10].
Most airlines operate multiple aircraft. To comply with the maintenance requirements and ensure a high utilisation at the same time, the aircraft must be routed efficiently. Fleet Assignment Problems (FAPs) and routing of aircraft are part of scheduling problems and have been tackled by airlines and researchers in a large variety since the late 1980s [15][16][17].
On the basis of Sherali et al. [18], a simple explanation of FAPs is provided. Typically, FAPs are mixed-integer problems being based on the airline's deterministic flight schedule to minimise the overall aircraft assignment costs as objective and so optimising the overall operational profit. They include at least three basic constraints. First, covering all missions in the flight plan once. Secondly, balancing out the number of arriving and departing aircraft for all airports. Last, the number of available aircraft must not be exceeded. Further constraints can be added, if required. The solution of the FAP can be integrated into further problems as the crew scheduling or the Aircraft Maintenance Routing Problem (AMRP) or they can be combined into one model [16,19,20]. While FAPs aim to optimise the object of operational profit, AMRPs ensure the aircraft are routed to maintain continuous airworthiness. AMRPs mostly include further boundary conditions, e.g. maintenance events that take place at the home base or at special outstations only. A solution to the AMRP exists, when the airlines' flight plan can be split into a number of Euler tours for the individual aircraft. Euler tours are coherent routes that start and end at the same airport. A periodic fleet schedule is deduced from the set of Euler tours with the goal to enable sufficient ground times at the right stations to conduct regular maintenance events. [21] The required time for C-and D-Checks is accounted for by decreasing the number of available aircraft in the FAP's constraint for the check's duration [18].
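Stated compactly, the basic FAP described above can be sketched as the following mixed-integer programme (generic notation, not the exact formulation of [18]):

minimise  Σ_{f ∈ F} Σ_{k ∈ K} c_{f,k} · x_{f,k}
subject to  Σ_{k ∈ K} x_{f,k} = 1  for every flight f ∈ F   (each flight covered exactly once)
            arrivals of fleet k = departures of fleet k at every airport and point in time   (balance)
            Σ_{f airborne at time t} x_{f,k} ≤ N_k  for every fleet type k and time t   (fleet size not exceeded)
            x_{f,k} ∈ {0, 1},

where F is the set of flights in the deterministic schedule, K the set of fleet types, c_{f,k} the cost of assigning fleet type k to flight f, N_k the number of available aircraft of type k, and x_{f,k} the binary assignment decision. It is precisely the requirement of a known flight set F that is missing in pure on-demand UAM operation.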
Simulating UAM transport networks
To research the interlinking between UAM operation and maintenance events, an operational environment for the flight movements must be developed. Such UAM transportation networks are modelled with two different simulation approaches. The first approach uses dedicated transport modelling frameworks. They are developed to compare modal choices and traffic flows, with humans modelled as agents iterating their plans until an equilibrium is achieved. The other approach is a stochastic simulation using a general programming language, with UAMVs usually being the smallest unit. Both options are mostly tailored to a certain problem and have different features specified to the researchers' needs. A short overview of both options is provided within this section.
MATSim is one option for a dedicated transport modelling framework as it is an open-source software that models individual persons with time-dependent travel plans and transport vehicles as agents. During the simulation, an iterative loop modifies the persons' travel plans and scores the overall results until no significant improvements can be detected. For further information about MATSim see [22]. Rothfeld et al. [23] presented an UAM extension to MATSim's ground-based network modelled as a second layer of airborne transportation. It enables travellers to switch between different modes of transportation to fulfil their travel plans. An initial analysis of the UAM extension is published by the same group of authors [24] for a hypothetical use case in Sioux Falls. A network of ten vertiports with one hundred vehicles is defined to analyse the effect of ground-based process time, vehicle speed, passenger capacity, fleet size, and network capacity on the passenger number and trip duration. More detailed research about operational parameters and methods for an automated vertiport placement can be found by Ploetner et al. [25] and Rothfeld et al. [26].
The second option of stochastic models is used by Kohlman and Patterson [27], who presented an objectoriented and stochastic transport network to size and compare different UAMV concepts. Their model is built from four parts accounting for the network and missions, the UAMVs, the vertiports and the demand. Within the network and mission models, the time steps and a rebalancing of vehicles is defined. UAMVs and vertiports are defined as a set of properties for the vehicle's technical details, the number of landing pads at vertiports and their distribution, which form the network layout. They propose a simple and adaptable hexagonal shape suitable for a wide range of cities in the United States. Within that network, they analyse UAMV concepts regarding their fuel consumption, emissions, operating costs and the infrastructure investment costs for alternative fuel concepts.
A subsequent study by Kohlman et al. [28] considers an adjusted network layout for the San Francisco Bay Area (United States) using the adapted UAM simulation environment of [27]. Different types of UAMV designs are sized to serve within this network and compared regarding operational costs, emissions, average load factor and waiting time as passenger pooling is enabled.
Shiva Prakasha et al. [29] follow a similar approach developing an agent-based simulation environment for the design of UAMVs presenting Hamburg (Germany) as test case. Similarities in the simulation are found in the demand modelling and the uniform aircraft fleet compared to Kohlman and Patterson [27]. While Shiva Prakasha et al. model the vertiports without capacity limitation and disregard any detour from the beeline, their aircraft assignment uses a bidding model to assign one or two passengers to one UAMV making it more sophisticated.
The here presented studies focus on the impact of operational parameters on the UAM transport system performance [23,25] or the aircraft sizing [27][28][29]. The impact and the related limitations of maintenance on the UAM transport performance has not been considered.
Fleet assignment problems and maintenance routing problems for on-demand operation
Unlike the approaches presented in the previous section, FAPs are not tackled by airlines with transport simulations, but with mathematical optimisations, as airline flight plans are known in advance and are deterministic. Besides classic operational concepts with a scheduled flight plan, on-demand flight services have been researched as well. The most prominent of these concepts is so-called fractional ownership [30]. Owners are entitled to use a certain number of flight hours of a certain aircraft fleet depending on their bought-in share, while the fractional ownership company covers aspects such as pilot training or aircraft maintenance for annual management fees [31]. Like UAM intends to, fractional ownership companies follow a strategy of providing on-demand aviation. In the 2000s, the idea of "per-seat and on-demand" aviation, a kind of shared flight hailing with affordable small jets, and its operational planning was also researched (see [32,33]). Hence, research on scheduling problems with a focus on AMRPs for fractional ownership companies as well as "per-seat and on-demand" aviation is presented in this section. Keysan et al. [34] researched maintenance scheduling and planning for a fleet of light jets operated on a "per-seat and on-demand" concept. The flight plan is generated the night before operation to determine the aircraft's flight paths using a regularly updated time-space network model. The maintenance checks must be scheduled within a certain tolerance around the maintenance intervals. Their scheduling uses a penalty function for the deviation from the optimal maintenance time, ensuring the aircraft are evenly distributed over the next maintenance checks. In their use case, the aircraft fleet is increased in different steps up to 288 jets, resulting in 86 % to 99 % maintenance capacity utilisation. The more evenly new aircraft are integrated into the fleet, the higher the utilisation.
Munari and Alvarez [35] used a standard mixed-integer programming model to research the optimal operation for fractional ownership aircraft fleets. They integrated maintenance constraints and researched upgrades in the aircraft type for requested missions to avoid (more expensive) repositioning flights. A total operating cost reduction of 1.7 % could be obtained on average by integrating upgrades. Similar to Keysan et al. [34], they scheduled the aircraft paths based on a fixed planning horizon, but with a length of 3 days.
Yang et al. [31] presented a decision support tool for the operation of fractional ownership companies. They investigated a simultaneous aircraft routing combined with maintenance events and crew restrictions for near optimal solutions with a 24 hour planning horizon. The maintenance events were considered as 2.5 hour long checks for randomly selected 20 % of the aircraft. The gap in the decision tool between optimal and near optimal solution was on average 3 %, but the calculation time was faster by two magnitudes. In a further study, Yang et al. [36] studied the dynamically changing environment for fractional ownership companies using three different types of heuristics to keep the changes to existing aircraft routing, generated in the last planning horizon, small. They researched strategies for reserve fleet, altering the ground times for the aircraft and repositioning of the aircraft. The largest operational costs savings of approximately 10 % were achieved using a reserve fleet in the size of 8 to 20 % of the regular fleet. The heuristics were tested in a generic simulation, emulating the behaviour of an operation for a fractional ownership company. The simulation generates random flight requests in a generic network with 100 airports for 36 hours to research the quantitative impact of the heuristics. Their generic approach is similar to the stochastic transport simulations, e.g. of [27] presented in the previous chapter.
The presented studies approach different scheduling problems: the earlier one by Keysan et al. [34] is an example of a pickup and delivery problem [32], while the latter ones are rather traditional airline routing problems [18,37]. Nonetheless, they all share a certain foresight of future flight requests that must be covered. The limited foresight is tackled with rolling-horizon approaches by creating and regularly updating aircraft routings as further flight missions are added to the system [31]. In our use case of pure on-demand UAM, however, there is no knowledge of firm future missions. At the same time, the presented work by Yang et al. [36] shows that a simulation is an appropriate tool for testing heuristics, and therefore a simulation is used in our study as well.
Research gap
As demonstrated in this section, maintenance considerations for UAMVs have hardly been addressed by past research. With the exception of the work by Naru and German [8], maintenance implications for UAM operations have not been covered thus far. To tackle this knowledge gap, our study focuses on the following two aspects: First, a potential maintenance schedule for UAMVs is derived in Sect. 3, as there is hardly any estimate of what maintenance intervals for UAMVs could look like. The only source providing such information does not include references that back up its estimations [6].
Second, a transport and maintenance simulation is developed to understand the interlinking of operation and maintenance for on-demand UAM operations. Civil aviation and its scheduling are different from on-demand UAM operation. Solving AMRPs as part of FAPs requires definite flight plans to create paths for individual aircraft and for their operational optimisation. However, in our use case of pure on-demand UAM there is no information on future missions. Additionally, unlike the maintenance locations within airline networks, the UAM transport system will not necessarily have maintenance bases integrated into its network, but at external sites [6,38]. Consequently, the mathematical optimisations shown in Sect. 2.3 are not applicable to on-demand UAM. Besides understanding the interdependencies for a fleet of UAMVs that require maintenance, operational changes to improve the operation-maintenance interlinking are examined in Sect. 4.
UAM maintenance simulation
Within this section, the generic maintenance schedule for UAMVs is deduced and a presentation of the UAM maintenance as well as operational simulation, the costs and demand modelling is provided.
The constraints of an AMRP impose a different problem which is not transferable to on-demand operations. At the same time, a specific transport simulation with agents modelling individual passengers provides an unnecessarily high depth of detail and complexity. Consequently, a general transport simulation is the selected approach for this study. Prior to the transport modelling, a generic UAMV maintenance schedule is derived from existing Aircraft Maintenance Manuals (AMMs), an expert interview and a conclusion by analogy from the automotive industry, which is presented in Sect. 3.1. The maintenance costs are expanded beyond the actual check costs to account for the different nature of on-demand mobility, including opportunity costs for spilled mission requests, and are shown in Sect. 3.2. The elements of the UAM maintenance model are explained subsequently in Sect. 3.3. An overview of the input and output parameters is given in Sects. 3.4 and 3.5. Last, the testing methods applied to the simulation are presented in Sect. 3.6.
Potential maintenance schedule for UAMV
No UAMV has been certified, nor has a corresponding Certification Specification (CS) been issued. Hence, no UAMV maintenance manuals exist and a UAM maintenance schedule has to be derived. The European Union Aviation Safety Agency Special Condition VTOL-01 [39] provides hints as to which standards UAMVs will probably be certified to and consequently also defines the frame for their future maintenance requirements. Special Condition VTOL-01 features elements of both aeroplanes and helicopters, inherent in the design of UAMVs, and demands the same safety levels as for aircraft of the transport category CS-25. Nonetheless, the special condition is mainly based on CS-23 Amendment 5 for small aeroplanes and integrates elements of CS-27 for small rotorcraft [39]. Consequently, the simple and generic maintenance intervals for UAMVs are condensed from two AMMs, one for a CS-23 and one for a CS-27 aircraft. Both reference aircraft are four-seaters driven by piston engines [40,41]. The maintenance intervals for UAMVs are displayed in Table 1 of Sect. 3.2.1. The FH-triggered checks alternate, meaning that every 100 FHs since the last FH-triggered check, either one 100-FH or one 200-FH check is due. The FC-triggered checks also alternate, so that every 1750 FCs the aircraft undergoes either the 1750-FC- or the 3500-FC-Check. All checks are conducted independently of each other, meaning the shorter FH-based checks are not included in the more extensive FC-triggered checks.
An expert interview with an aircraft mechanic for transport category aircraft was conducted. He is also in charge of the maintenance of single-engine, propeller-driven aircraft with four seats, such as the Cessna 172R or the Piper PA-28, in an aviation club for private pilots. Based on his experience and the maintenance billings of the aviation club, he estimated the number of Maintenance Man Hours (MMHs) for the 100-FH and 200-FH checks of those CS-23 aircraft at 24 and 40 MMHs, respectively, including preparation time. These values are assumed to equal the MMHs for the 100-FH and 200-FH checks of UAMVs.
They are also shown in Table 1. In the absence of further information for the more extensive 1750-FC- and 3500-FC-Checks, the numbers of required MMHs are assumed by the authors to increase by 50 % and 100 % compared to the 200-FH check and are shown in the same table as well.
The MMHs obtained from the expert interview, combined with the maintenance intervals, can be converted into an MMH/FH ratio for the UAMVs. As the maintenance checks for the UAMVs are also FC-driven, the number of FCs per FH must be assumed. A range between 1.5 and 3 flights per FH results in 0.38 to 0.44 MMHs/FH. That range is in line with the information provided by Robinson Helicopter Company, which states an MMH/FH ratio of 0.4 for their light, four-seated, piston-engine-driven Robinson R44 [42]. The close proximity of both ratios indicates that our approach is suitable for identifying maintenance intervals and MMHs.
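This ratio can be reproduced from the figures above; the following short calculation is a sketch under the stated assumptions (24 and 40 MMHs for the alternating FH checks, 60 and 80 MMHs for the alternating FC checks, and 1.5 to 3 FCs per FH):

# Reproduces the 0.38-0.44 MMH/FH range from the assumed check efforts.
mmh_100fh, mmh_200fh = 24, 40      # alternating every 100 FH
mmh_1750fc, mmh_3500fc = 60, 80    # alternating every 1750 FC (+50 %/+100 % of the 200-FH check)

mmh_per_fh_from_fh_checks = (mmh_100fh + mmh_200fh) / 200.0   # 0.32 MMH/FH

for fc_per_fh in (1.5, 3.0):
    fh_per_fc_cycle = 3500 / fc_per_fh                         # FHs flown per 3500-FC cycle
    mmh_per_fh_from_fc_checks = (mmh_1750fc + mmh_3500fc) / fh_per_fc_cycle
    total = mmh_per_fh_from_fh_checks + mmh_per_fh_from_fc_checks
    print(f"{fc_per_fh} FC/FH -> {total:.2f} MMH/FH")
# prints approximately 0.38 and 0.44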
Cost modelling
The goal of the cost modelling is to capture all costs related to the maintenance of UAMVs. In the literature, maintenance costs for traditional airliners often refer to the expenses for material and labour of the actual check costs C_C,i [43,44]. In this study a broader approach is chosen to cover all maintenance-related costs; it therefore includes running costs C_R,j for the infrastructure and equipment of the maintenance sites and their operational expenses. The opportunity costs C_Opp,l are covered as well. The composition of the overall maintenance costs C_Maint is shown in Equation (1).
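In terms of the components named above, the overall maintenance costs can be written as follows (a sketch of Equation (1), assuming a purely additive composition):

C_Maint = Σ_i C_C,i + Σ_j C_R,j + Σ_l C_Opp,l,

i.e. the sum of all check costs, the running costs of all maintenance sites, and all opportunity cost items.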
The basis for the financial figures of this cost modelling is the year 2020. Within this section, we aim for a first detailed maintenance cost estimation for UAMVs and their technical operations system. The subsequent estimations are subject to uncertainties because no UAMV has even been certified yet. With our focus on a matured system, it has to be noted that the integration phase of new technologies for UAMVs may result in higher initial maintenance costs.
Check costs C C, i for maintenance events
The costs C C for maintenance checks are grouped into labour costs C Lab and material costs C Mat [43,44]. The rate for one MMH is set to $ 70, which is in line with the range for an aircraft mechanic of $ 53 to $ 81 (cf. [43,44]) but noticeably lower than one MMH for a rotorcraft with $ 115 [45].
The labour costs are the product of the required MMHs and the rate for one MMH. Knowing the overall maintenance check costs, the material costs can be identified. Based on the maintenance billings, the previously introduced expert places the check costs for a 100-FH-Check of one of the CS-23 aircraft within the range of $ 1,630 to $ 2,170. An average of $ 2,000 for one 100-FH-Check is assumed. The overall check costs minus the labour costs for the required 24 MMHs result in material costs of $ 320 for that check. The material costs for the 200-FH-Check are assumed by the authors to be double those of a 100-FH-Check, resulting in check costs of $ 3,440.
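A short check of these figures at the stated rate of $ 70 per MMH:

100-FH-Check: 24 MMH × $ 70/MMH = $ 1,680 labour; $ 2,000 − $ 1,680 = $ 320 material.
200-FH-Check: 40 MMH × $ 70/MMH = $ 2,800 labour; 2 × $ 320 = $ 640 material; $ 2,800 + $ 640 = $ 3,440 total.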
According to the expert interview, smaller checks for CS-23 aircraft are mainly labour-intensive, while the more extensive checks usually require far more part replacements, which increases their material costs. The expert did not provide information on the costs of the more extensive checks. The 1750-FC-Check material costs are therefore set to $ 6,400 by the authors, which is ten times the material expenses of a 200-FH check. Analogous to an engine overhaul, the 3500-FC-Check is meant to represent the costs for the overhaul of the UAMV's battery and electric propulsion system. An overhaul for a piston engine of a CS-23 aircraft is in the range of $ 18,000 6 to $ 22,000 7 . These material costs are therefore assumed to be $ 20,000.
Fully electric UAMV power plants are expected to have far fewer unique rotating parts due to a reduction of complexity compared to other aircraft designs [6,38,46,47]. Therefore, the material costs cannot be transferred directly but must be scaled down. With the Pipistrel Velis, a first all-electric aircraft has been certified; however, no maintenance costs are available for that aircraft [47]. As of 2022, full-electric aircraft have only been tested thus far and their commercial passenger service is yet to commence [48]. Consequently, a comparison beyond aviation is required, even though it may introduce additional uncertainties. Two studies investigated the operating costs of road vehicles and concluded that the maintenance costs of a battery electric vehicle are 19 % and 25 % lower, respectively, than those of a vehicle with an internal combustion engine [49,50]. Based on their findings, the material cost basis of $ 20,000 is reduced by 20 % to account for the overhaul of the propulsion system. The resulting material costs of $ 16,000 plus the labour expenses lead to overall check costs of $ 21,600 for the 3500-FC-Check.
An overview of all maintenance intervals and the corresponding costs is shown in Table 1.
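As a cross-check of the figures above, the check costs can be recomputed as labour plus material. The sketch below uses the MMH and material values discussed in this section; the MMHs for the 200-FH-, 1750-FC- and 3500-FC-Checks and the resulting 1750-FC total are inferred from the text rather than quoted directly.

```python
# Minimal sketch: check costs C_C = labour (MMHs x rate) + material, per maintenance event.
MMH_RATE = 70.0  # $/MMH

checks = {  # name: (required MMHs, material costs in $); partly inferred values
    "100FH": (24, 320),
    "200FH": (40, 640),
    "1750FC": (60, 6_400),
    "3500FC": (80, 16_000),
}

for name, (mmhs, material) in checks.items():
    labour = mmhs * MMH_RATE
    print(f"{name}: {labour + material:,.0f} $")
# 100FH: 2,000 $; 200FH: 3,440 $; 1750FC: 10,600 $; 3500FC: 21,600 $
```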
Running costs C_R for maintenance sites
To maintain UAMVs, corresponding sites must be established and operated. Independent of their utilisation, running costs and depreciation must be covered. As no UAMV maintenance facilities exist today, their potential costs are deduced by analogy from the costs of existing aircraft maintenance shops. The investment for a new engine shop in Poland with more than 1,000 employees is estimated at a minimum of $ 180 Mio., resulting in an expense of about $ 180,000 per workplace. However, facilities for engine overhauls are costlier than those for other aspects of aircraft maintenance, according to an expert from Lufthansa Technik's business development. Therefore, UAMV maintenance facilities are assumed to be less costly, resulting in an estimated investment of $ 100,000 per workplace for this simulation.
All investments for the UAMV maintenance bases are assumed to be depreciated over 15 years. That time span is a trade-off between six to eleven years for tools and 25 years for shipyards in Germany, which are considered comparable to aircraft hangars regarding their depreciation. Consequently, the annual depreciation C_Invest for the investment in a maintenance base is $ 6,667 per workplace, assuming the above-mentioned investment costs.
In times with slack t_Slack, when mechanics wait for a UAMV to be checked, running costs such as their wages or payment for the administrative overhead still accumulate. As personnel planning can lower these slack costs C_Slack, they are assumed to be half the wrap rate for one MMH, i.e., $ 35/h.
The running costs C_R,j are shown in Equation (2):

C_R,j = C_Invest + C_Slack · t_Slack (2)
Opportunity costs C_Opp
Opportunity costs C_Opp compensate a stakeholder for revenue missed when one choice is made over another. The simulation in this study is maintenance-centric and does not include any revenue earned during paid flight missions. At the same time, UAMVs cannot generate revenue while waiting for a maintenance check or for the duration of the maintenance events themselves. However, maintenance events are essential to maintain airworthiness and to keep the aircraft in a condition to generate revenue. Hence, there is no option to avoid them, and only additional ground times beyond the essential maintenance check times are considered for opportunity costs in this study: only ground times that exceed the minimum maintenance downtimes enter the calculation of the opportunity costs C_Opp, independent of whether the aircraft must wait because the base is occupied or closed. The opportunity costs are calculated similarly to the average revenue during t_Opp. For airlines, revenue is the product of yield and Revenue Passenger Kilometers (RPKs) [51]. Equation (3) is a modification of that approach tailored to UAM:

C_Opp = (C_Fare / d) · n_Seats · PLF_av · (d_av / t) · t_Opp (3)

The ticket price is the transport fare C_Fare per distance d and forms the first factor of the equation. The RPKs form the second factor of Equation (3): they are the product of the available seats n_Seats, the average Passenger Load Factor PLF_av and the average distance d_av covered by a vehicle during operation without maintenance per time t, multiplied by the actual opportunity time t_Opp.
All variables besides d_av are constant and known prior to a simulation's start. Depending on the input changes, the overall transport capacity and thus the average flight distance d_av could change as well. For simplification, d_av is calculated and averaged for one test run of the simulation without any maintenance events and is then kept constant. Hence, the hourly opportunity costs equal the average hourly revenue per aircraft when no maintenance is considered. The hourly opportunity costs are kept constant for all simulations at $ 153.2/h.
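The hourly opportunity-cost rate can be computed directly from Equation (3). The sketch below is illustrative only: the fare, seat count, load factor and average productive distance are placeholder values, not the figures from Tab. 3, so it does not reproduce the $ 153.2/h used in the study.

```python
# Illustrative sketch of Equation (3); all parameter values below are assumptions.
def hourly_opportunity_rate(fare_per_km: float, n_seats: int, plf_av: float,
                            d_av_km_per_hour: float) -> float:
    """Average revenue per vehicle-hour without maintenance = opportunity costs per hour."""
    return fare_per_km * n_seats * plf_av * d_av_km_per_hour

def opportunity_costs(rate_per_hour: float, t_opp_hours: float) -> float:
    return rate_per_hour * t_opp_hours

rate = hourly_opportunity_rate(fare_per_km=3.0, n_seats=4, plf_av=0.5, d_av_km_per_hour=60.0)
print(rate, opportunity_costs(rate, t_opp_hours=2.5))  # 360.0 $/h and 900.0 $ (placeholder numbers)
```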
Transport and maintenance simulation
The four main elements and the mechanisms of the UAM transport and maintenance simulation are presented in this section. Kohlman and Patterson's publication [27] serves as inspiration for the transport modelling and the network. For further and more detailed information regarding the transport simulation, we encourage the reader to consult their publication.
The length of our UAM transport and maintenance simulation for one run is set to 365 days. The length of one time step is 10 s, resulting in 8,640 steps per day (cf. [27]) and accumulating to about 3.2 Mio. time steps over 365 days. A simplified structure of this simulation routine is shown in Fig. 1.
Within the demand and dispatch model, a flight request may be generated for each vertiport per time step. If a flight request is generated, a UAMV is chosen for the mission with the help of the aircraft assignment and operational decisions element, if one is available. It is also checked whether a landing pad at the starting vertiport of the mission is available. For the flight mission, the aircraft parameters such as the number of FHs or FCs are updated. These pieces of information are fed back into the assignment and operational decisions part, as they might render a UAMV unavailable for flight missions when a maintenance check is due. Moreover, the vertiports and maintenance bases are updated and this information is integrated into the aircraft assignment and operational decisions element. These four elements are explained further in the following subsections.
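The interaction of these elements can be pictured with a stripped-down time-stepping loop. The sketch below is a toy illustration of the feedback between flight missions and maintenance availability, not the authors' implementation; all parameter values (demand rate, FHs per flight, check interval and downtime) are placeholders.

```python
import random

random.seed(42)  # fixed random numbers, analogous to the stored set ensuring repeatability

STEPS_PER_DAY = 8_640                     # 10-s time steps
HORIZON = 30 * STEPS_PER_DAY              # shortened horizon (study: 365 days)
FLEET = 10                                # toy fleet size
FH_PER_FLIGHT = 0.25                      # assumed flight hours per mission
CHECK_INTERVAL_FH = 100                   # FH-driven check threshold (toy)
CHECK_DOWNTIME_STEPS = int(2.4 * 360)     # 2.4-h check expressed in 10-s steps

fh_total = [0.0] * FLEET                  # FHs since simulation start (assignment criterion)
fh_since_check = [0.0] * FLEET            # FHs since the last check (maintenance trigger)
down_until = [0] * FLEET                  # step until which a vehicle is in maintenance

served = requested = 0
for step in range(HORIZON):
    if random.random() <= 0.02:                          # demand and dispatch model (toy rate)
        requested += 1
        ready = [i for i in range(FLEET) if down_until[i] <= step]
        if ready:                                        # assignment: least FHs since start
            i = min(ready, key=lambda k: fh_total[k])
            fh_total[i] += FH_PER_FLIGHT                 # flight mission updates aircraft state
            fh_since_check[i] += FH_PER_FLIGHT
            served += 1
            if fh_since_check[i] >= CHECK_INTERVAL_FH:   # feedback: check due -> unavailable
                down_until[i] = step + CHECK_DOWNTIME_STEPS
                fh_since_check[i] = 0.0
print(f"toy network availability: {served / max(requested, 1):.3f}")
```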
Demand and dispatch model
Both the flight demand at a vertiport and its destination are determined stochastically. For each of the 3.2 Mio. time steps and each of the seven vertiports, a uniformly distributed random number between 0 and 1 is compared to the vertiport's demand probability function. If that random number is lower than or equal to the function's value at that time step, a flight is requested for that vertiport. The demand probability functions differ for the central and outer vertiports and are displayed in Fig. 2. For example, at the central hub at 8 am the probability is 0.14, meaning that the random number must be 0.14 or less to trigger a flight request at that vertiport. The destination is also determined with a uniformly distributed random number and depends on the vertiports' demand weightings, which are shown in Tab. 3. Round flights from and to the same vertiport are not possible; the destination is always a different vertiport. In this simulation, all outer vertiports have the same constant probability of becoming the destination, while the central vertiport has double the probability of an outer vertiport. Repositioning flights can be triggered when a flight request cannot be serviced because no UAMV is available; unsuccessful requests are not serviced at a later point in time. If a missing vehicle is the reason and a uniformly distributed random number surpasses the rebalance parameter shown in Tab. 3, a repositioning flight is triggered and a UAMV is ferried from the vertiport with the most available vehicles. A large set of random numbers is stored and used in the same order for every simulation of this study. This ensures repeatability and prevents changing random numbers from masking or exaggerating the effects of parameter modifications.
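The two stochastic decisions, whether a flight is requested and where it goes, can be expressed compactly. In the sketch below the 2:1 weighting of the central versus outer vertiports follows the text, while the concrete weight values and the 8 am example probability are only illustrative.

```python
import random

def flight_requested(p_demand: float) -> bool:
    """A flight is requested if a U(0,1) draw is at or below the demand probability."""
    return random.random() <= p_demand

def draw_destination(origin: int, weights: dict) -> int:
    """Weighted destination choice; the origin is excluded (no round flights)."""
    candidates = [v for v in weights if v != origin]
    return random.choices(candidates, weights=[weights[v] for v in candidates], k=1)[0]

weights = {0: 2.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, 5: 1.0, 6: 1.0}  # vertiport 0 is central
if flight_requested(p_demand=0.14):                                  # e.g., central hub around 8 am
    print("destination:", draw_destination(origin=0, weights=weights))
```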
Aircraft assignment and operational decisions
Whenever a flight mission is requested, one UAMV must be assigned. Only the UAMVs that are instantly available at the departing hub are considered to service the flight. Nonetheless, when two or more UAMVs are available, a choice must be made. An equal wear of the fleet is favourable concerning the long-term fleet planning and uniform maintenance requirements [52,53].
Assigning one of multiple available aircraft to a flight mission may take many parameters into account and become highly complex. Aware of the complexity of FAPs for traditional airlines, a very simple method to assign UAMVs is integrated. Complying with the demand for even usage, the aircraft with the least FHs since the beginning of the simulation is assigned to a flight mission. It is crucial to assign the aircraft according to the FHs since simulation start and not, for example, the overall FHs of the UAMVs. That selection ensures an equal usage of the aircraft fleet; otherwise, the initial differences in FHs at the beginning of the simulation would first be levelled out, resulting in uneven usage. The battery is charged after every flight to avoid any restrictions for future flight missions; the battery level triggering charging is set to 99 %.
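The assignment rule reduces to selecting the available vehicle with the fewest flight hours accumulated since simulation start, as in the short sketch below (field names are illustrative).

```python
def assign_uamv(available: list):
    """Pick the available UAMV with the least FHs accumulated since simulation start."""
    return min(available, key=lambda v: v["fh_since_sim_start"]) if available else None

fleet_at_hub = [
    {"tail": "UAM-01", "fh_since_sim_start": 812.4},
    {"tail": "UAM-02", "fh_since_sim_start": 640.0},
]
print(assign_uamv(fleet_at_hub)["tail"])  # UAM-02
```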
Vertiport and maintenance base model
A simple and adaptable network model is chosen for the purpose of this study. Vertiports are uniformly arranged in a hexagonal pattern; one central vertiport is surrounded by six outer ones, an adaptable approach suitable for many cities with a ring highway [27,54]. The network layout is shown in Fig. 3 including all point-to-point connections. Unlike Fig. 3 indicates, flights are modelled as straight rather than curved lines between two vertiports. Each vertiport is defined by a number of landing pads, its position in the network and a weighting for its flight activities. A landing pad can only be occupied by one UAMV at a time, which is either taking off or landing. Further separation of the vehicles or air traffic management is not included in our simulation. While the number of landing pads is limited for vertiports, the number of parking slots for ready vehicles is assumed to be unlimited.
Maintenance bases are similar in structure to vertiports. Modifications are their unlimited number of landing pads and the limited number of hangar bays in which UAMVs can be checked simultaneously. Each maintenance bay has a number of allocated mechanics who are assumed to work on one UAMV simultaneously. The duration of a maintenance check is the required MMHs divided by the number of simultaneously working mechanics. Moreover, the bases can be either open or closed according to their opening hours shown in Tab. 3. During closure, the maintenance work is paused and resumed when the base opens again.
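Check duration and the effect of the opening hours can be illustrated as follows. The first function directly implements the MMHs-divided-by-mechanics rule; the second is a hypothetical helper showing how a check started shortly before closing stretches over the overnight closure.

```python
def check_duration_hours(required_mmh: float, mechanics_per_bay: int) -> float:
    """Net hangar time of a check: required MMHs split across mechanics working in parallel."""
    return required_mmh / mechanics_per_bay

def elapsed_hours_with_closure(net_hours: float, open_hour: int = 8, close_hour: int = 17,
                               start_hour: float = 15.0) -> float:
    """Wall-clock time until completion when work pauses outside opening hours (assumed helper)."""
    remaining, clock = net_hours, start_hour
    while remaining > 0:
        hour_of_day = clock % 24
        if open_hour <= hour_of_day < close_hour:
            step = min(remaining, close_hour - hour_of_day)   # work until done or until closing
            remaining -= step
            clock += step
        else:
            clock += (open_hour - hour_of_day) % 24            # jump to the next opening time
    return clock - start_hour

print(check_duration_hours(80, 10))                      # 8.0 h net for the largest check
print(elapsed_hours_with_closure(8.0, start_hour=15.0))  # 23.0 h: 2 h work + overnight pause + 6 h
```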
Future vertiports are expected to be located at traffic hubs, airports, business districts or highway cloverleafs [54][55][56]. Of those sites, only an airport might provide enough space for UAMV maintenance. All others cannot accommodate large facilities and hence maintenance bases are expected to be at off-grid locations [6,38].
The two maintenance bases for this simulation are located centrally between the vertiports 1, 2, 3 and 4 and between the vertiports 1, 5, 6 and 7, respectively; they are also marked in Fig. 3. UAMVs reaching the end of an interval limit after a flight proceed to the closer maintenance base. Vehicles at the central vertiport are distributed equally to either base.
UAMV model
UAMVs are the smallest unit in the simulation and are defined by a set of properties, such as their tail sign or cruise velocity. They are modelled as agents servicing the demand within the network. Different mission segments are modelled with a timer-controlled state machine. The timer determines how long a vehicle remains in a state before transitioning to the consecutive one. States can be either of fixed or of variable length; variable state lengths depend on the cruise distance or on whether a landing pad is available. Tab. 2 provides an overview of the state definitions for all types of flight missions. It also includes the consecutive states, their lengths and the required energy rate during each state. Similar states are also defined for repositioning flights and maintenance-related segments. The length of a maintenance check depends on which check of Tab. 1 is due.
The battery is modelled as a black box and its energy level is tracked without units between 1.0 (full) and 0.0 (empty). After each mission, the battery is fully recharged. The recharging time depends on the mission and how deeply the battery has been discharged. The recharging rate is set to 1C, so the charging time scales linearly between zero and one hour depending on the level of discharge. The energy depletion rates for the various states are expressed relative to the cruise consumption rate P_Cruise of 1/5112 per second, indicating that the battery capacity would last 5,112 s in cruise [28]. If multiple UAMVs are in hold for one vertiport, they are prioritised according to their remaining battery level: the vehicle with the lowest battery level is transferred to landing first.
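The battery bookkeeping follows directly from these two rates, as in the small sketch below.

```python
P_CRUISE = 1.0 / 5112.0   # fraction of battery capacity consumed per second in cruise [28]

def battery_after_cruise(level: float, cruise_seconds: float) -> float:
    return max(0.0, level - P_CRUISE * cruise_seconds)

def recharge_time_hours(level: float, c_rate: float = 1.0) -> float:
    """1C charging: a fully depleted battery takes one hour, so the time scales linearly."""
    return (1.0 - level) / c_rate

level = battery_after_cruise(1.0, cruise_seconds=1_278)       # about a quarter of the cruise endurance
print(round(level, 2), round(recharge_time_hours(level), 2))  # 0.75  0.25 h
```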
All UAMV timers are incremented each time step. When a timer reaches the segment length, the UAMV is transferred to the next state. If the cruise segment has finished and all landing pads are occupied, the vehicle is not transferred to Landing but to Hold. UAMVs in hold have priority over all other vehicles and proceed to landing as soon as a landing pad becomes available. The state Unloading is followed by Battery Charge after each flight mission. Also, there are no battery level restrictions regarding future missions, to keep unnecessary complexity low.
Cruise segments are modelled and indicated as straight lines between two vertiports. Restrictions in airspace and air traffic control instructions cause detours from the beeline; as compensation, a routing factor of 1.42 is applied to missions [28]. For repositioning flights and transfers to or back from maintenance checks, the UAMV states 0 to 8 also apply. Preparation times for the actual checks are included in the maintenance checks themselves.
In our simulation, we consider a mature UAM transport system with a fleet of 160 aircraft. Those aircraft are integrated into the fleet stepwise in five tranches of 32 UAMVs each. At the start of the simulation, the tranches have an average age of 500, 750, 1000, 1250 and 1500 FCs, respectively, with a standard deviation of 50 FCs within each tranche. With the average trip length, the corresponding FHs for each aircraft at the start of the simulation are calculated.
Input parameters
An overview of the simulation's input parameters is provided in Tab. 3.
Following the structure of this section, the parameters are grouped into different categories. The assigned values are used for the initial simulation in Sect. 4.2. Numbers before the semicolon apply to the outer vertiports, numbers after it apply to the central vertiport.
Model performance monitoring
The nature of on-demand UAM transport differs from classic airline operation and accordingly its metrics have to be adjusted. An overview of the performance metrics for this study is presented in Tab. 4. The two most significant metrics, the maintenance costs and the network availability, are briefly explained. Maintenance costs are crucial for the operator, whereas for passengers the availability of the transport system is paramount. The network availability is the ratio of fulfilled transport requests divided by all requested flights. If it falls below a certain level, passengers might consider the service unreliable and the operator might be challenged with decreasing passenger numbers and revenue.
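Both headline metrics are simple ratios; the sketch below states them explicitly, with purely illustrative numbers.

```python
def network_availability(fulfilled_requests: int, requested_flights: int) -> float:
    """Ratio of fulfilled transport requests to all requested flights."""
    return fulfilled_requests / requested_flights if requested_flights else 1.0

def maintenance_costs_per_fh(total_maintenance_costs: float, fleet_fh: float) -> float:
    return total_maintenance_costs / fleet_fh

print(f"{network_availability(9_940, 10_000):.1%}")                 # 99.4 % (illustrative)
print(f"{maintenance_costs_per_fh(17_500_000, 300_000):.1f} $/FH")  # 58.3 $/FH (illustrative)
```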
Testing and verification
The simulation was created from scratch and was tested during each development step by different means:
• Tracking every aircraft's path and movement.
• Counting the unserviced requested flights in two different ways and cross-checking the results.
• Comparing the overall number of ready UAMVs to the demand probability function.
• Tracking the energy level in the batteries to picture the vehicles' changes of operational states and comparing them to results of Kohlman and Patterson's publications [28].
• Running a test simulation with one vehicle and comparing the results with the expectations.
Simulation results
Following the introduction of the UAM simulation in the previous chapter, the results of the simulations are presented in this section. Three simulations are shown in detail and examine the impact of changes in one starting condition and two selected operational parameters on the interlinking between maintenance and operation. Initially, a maintenance-free scenario and the baseline help to understand the transport and maintenance simulation and provide references to compare later modifications to. The maintenance-free scenario, used for the scaling of the maintenance bases, is presented in the following subsection.
Set-up for the initial simulation
Initially, a simulation with 160 UAMVs over a period of 365 days is run without any maintenance events to size the maintenance capacity appropriately and to serve as a reference for later comparisons.
This run also determines the upper bound of the network availability with a daily average of 76.8 %. Later simulations with maintenance events will be compared to that theoretical maximum. There are two reasons why, in the maintenance-free case, not all flight requests can be fulfilled: either no UAMV is available at the vertiport at the moment of the flight request, or no landing pad is available. The UAMVs log an average of 5.17 FHs per day and service a daily average of 11.27 flight missions, leading to 2.18 FCs/FH. With the maintenance schedule in Tab. 1, the average maintenance effort can be calculated as 0.41 MMHs/FH for one UAMV. Hence, a fleet of 160 UAMVs with an average of 5.17 FHs a day will require a total of approximately 337 MMHs a day. For the baseline simulation, a total daily maintenance capacity of 360 MMHs is provided by two maintenance bases, which is approximately 107 % of the average daily MMH requirement. Both maintenance bases are alike and are equipped with two hangar bays for two simultaneous checks. In each bay, ten mechanics are assumed to work in parallel, providing 10 MMHs per simulation hour. The bases' opening hours are from 8 am to 5 pm, representing daytime operation.
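The sizing of the maintenance capacity follows from the numbers above; the short calculation below reproduces it (the demand comes out slightly higher than the quoted 337 MMHs/day because the 0.41 MMHs/FH figure is rounded).

```python
FLEET_SIZE = 160
FH_PER_DAY = 5.17
MMH_PER_FH = 0.41                 # rounded value from the maintenance schedule

daily_demand = FLEET_SIZE * FH_PER_DAY * MMH_PER_FH
print(round(daily_demand))         # ~339 MMHs/day (text quotes ~337 with unrounded inputs)

bases, bays_per_base, mechanics_per_bay, open_hours = 2, 2, 10, 9   # 8 am to 5 pm
daily_capacity = bases * bays_per_base * mechanics_per_bay * open_hours
print(daily_capacity, f"{daily_capacity / daily_demand:.0%}")        # 360 MMHs/day, ~106 %
```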
Baseline
The initial simulation is based on the input parameters of Tab. 3 and serves as the reference for subsequent comparisons. In Fig. 4, the daily MMHs of both bases and the daily fleet waiting hours are shown for the simulation time of one year. The average utilisation of the maintenance bases is 85.9 % and varies daily between 14.9 % and 100 %. Even though the overall capacity is designed to be sufficient for the average daily demand, on 149 days of the simulation period the capacity of 360 MMHs is fully tapped.
On every day of the year, vehicles must wait. The highest waiting hours accumulate on days with complete maintenance base utilisation. Waiting hours also occur on days where the utilisation is below 100 %. First, this can be caused by slack time at the beginning of the day followed by the later arrival of too many UAMVs that cannot be serviced at the same time. Second, vehicles arrive at the bases after closing or before opening time and hence need to wait for their maintenance checks. The waiting hours and the conducted maintenance checks are shown in Fig. 5 for the days 170 to 290. The pattern of the smaller A and B checks appears to be evenly distributed. That behaviour can be explained by the standard deviation of 50 FCs in the initial FC distribution at simulation start. However, for the larger D checks a pattern can be observed in the figure (note: C checks are not visible in that excerpt of Fig. 5).
In times when there are no C or D checks, the waiting times are smaller, as there is less overall demand for the maintenance resources. The D checks can be grouped into five blocks. However, the blocks visible in Fig. 4 are mainly due to the fluctuations of the smaller checks.
The development of the absolute maintenance costs and the accumulated fleet FHs over the course of the simulation is shown in Fig. 6.
The accumulated fleet FHs are shown as a solid grey line. The graph increases almost linearly but flattens very slightly on days with high waiting hours, for example on the days 48 to 52 (hardly visible in that plot). The maintenance costs in absolute numbers are displayed as the black dashed line. The absolute costs exhibit a similar but more pronounced behaviour than the waiting hours. Each maintenance event triggers discrete costs, which arise at the time of the check and cause the gradual incline. In periods of low maintenance activity or when mostly the less expensive A and B checks are due, the slope is flatter; that is the case between day 130 and 180. In times of high base utilisation, long waiting hours or when costlier checks are conducted, the slope is steeper. The steeper incline is observed between day 180 and 280.
Both graphs of the previous figure combined form the maintenance costs per FH, which are displayed in Fig. 7. The maximum daily increase in absolute costs is comparable for an early and a late day in the simulation, while the overall fleet FHs constantly increase over time. As a consequence, the impact of the costs for a single check on the maintenance costs/FH is more significant at the beginning of the simulation.
The first peak in the graph is caused by the costs of checks until day 12, before the shop utilisation slightly drops (see Fig. 4), divided by the still comparably low overall fleet FHs. The second peak in the graph has the same reason, this time caused by the reduction in maintenance activities at around day 32. The more overall costs and FHs are accumulated, the smaller the percentual impact of the increasing check costs becomes and, as a consequence, the smaller the variation in the graph. A more detailed view of the maintenance costs and their composition after 365 days is displayed in Fig. 8. Only about $ 44 per FH account for the actual checks (labour and material costs), which is 47 % of the total costs, while another 3 % account for running the maintenance bases and the slack time of the mechanics. The remaining part of approximately $ 46 per FH accounts for the opportunity costs, which make up approximately one half of the complete maintenance costs and are primarily caused by long waiting times for the checks.
Even though the network availability shows that only approximately one in 50 flights is not serviced due to maintenance restrictions, the high opportunity costs underline that the interlocking between operation and maintenance is far from optimal in this baseline scenario. Within the next sections, the influences of changed starting conditions and adaptations in operation are presented.
Parameter studies
Within the scope of this section, different types of parameter studies are presented. The first parameter study is a modification of the simulation's boundary conditions: the initial usage of the UAMVs at simulation start is compared in five different scenarios in Sect. 4.3.1. The second type focuses on operational options to improve the interlocking of operation and maintenance. For that purpose, the maintenance capacity is extended by increasing the opening hours in Sect. 4.3.2. Furthermore, we examine in Sect. 4.3.3 how reassigned, earlier maintenance checks can reduce the fleet waiting hours and hence lower the maintenance costs. Lastly, the increased maintenance capacity and the earlier maintenance checks are combined to find the overall best option regarding maintenance costs and network availability.
Initial UAMV age at simulation start
In the previous section, the influence of the initial UAMV FHs and FCs is noticeable in the frequency of maintenance checks (see Fig. 4). To further research the impact of the UAMV age at the beginning of the simulation, the baseline and four further scenarios are investigated within this subsection. These scenarios are indicated with the numbers 1, 2, 4 and 5; the baseline of Sect. 4.2 is numbered 3 in this section. A visualisation of the FC distributions for the five different scenarios is shown in Fig. 9. In Tab. 5, the most important Key Performance Indicators (KPIs) for the five scenarios are compared. On a macroscopic level, a major trend can be observed: the more evenly the initial FCs of the UAMVs are spread out, the lower the waiting times and the maintenance costs are and, as a consequence, the higher the network availability is. For scenario 1, approximately every eighth request cannot be serviced due to the maintenance-related waiting times; for scenario 5, only one in 83 flight requests remains unserviced.
The maintenance costs are split up into the different cost components in Fig. 10. The most significant difference is in the opportunity costs, which decrease from scenario 1 to 5. The opportunity costs are the consequence of the differences in the vehicles' waiting hours, which are also reduced from scenario 1 to 5. As the base utilisation increases from scenario 1 to 5, the slack time of the mechanics is reduced and hence the running costs decline as well.
The slight variations in the costs for maintenance material and labour are the consequence of the different overall number of checks and a different check distribution during the simulation period. Those changes are caused by the different starting conditions of the five scenarios. For instance, in scenario 3 (baseline) the overall number of checks is higher with 3,106 compared to scenario 2 with 3,077. However, in scenario 2 there are six more C checks with significantly higher material costs and hence the material costs are slightly higher than in scenario 3.
The big differences in the average waiting time and the network availability can be illustrated with a comparison of the daily MMHs and the fleet waiting time. Both KPIs are shown as examples for the two extremes, scenarios 1 and 5, in Fig. 11. The behaviour of the scenarios 2 to 4, which are not shown, lies between the two displayed subfigures. The MMHs are plotted in light grey, while the waiting hours are shown in black.
On the left-hand side of Fig. 11, in subfigure (a), the fleet waiting hours appear periodically, as can be seen in the large amount of waiting hours between approximately day 275 and 340. The periodic appearance of the maintenance checks is the consequence of the vehicles being roughly the same age at the beginning of the simulation and of the equal usage of the vehicles during operation. The consequence is a small time period in which all UAMVs reach the end of their maintenance intervals, so too many UAMVs require maintenance at the same time. As the capacity of the bases is limited, they become bottlenecks and vehicles must wait to be maintained. Especially the time-intensive checks cause thousands of waiting hours, which also decrease the network availability on maintenance-intense days. Between day 290 and 310, only 27 % of the flight requests could be serviced.
In Fig. 11 (b), the waiting time fluctuates and does not follow a certain pattern. At the same time, the utilisation of the bases is comparably constant and hence the maximum daily waiting hours are smaller by approximately one order of magnitude.
The different starting scenarios result in different levels of operation and maintenance interlocking. Especially in scenario 1, a different aircraft assignment or maintenance planning is fundamental for a reliable operation.
Maintenance capacity
From the maintenance provider's point of view, there is the option to influence the transport capacity, and consequently also the overall maintenance costs, by adapting the maintenance capacity. That adaption can be implemented by increasing the number of mechanics working simultaneously or by extending the opening hours of the maintenance bases. In the baseline simulation, approximately 70,000 fleet waiting hours pile up overall; 63 % of them accumulate in times when the maintenance bases are closed, while the remaining 37 % are caused when UAMVs must wait while the base is open and all maintenance bays are already occupied. As the number of waiting hours is larger when the bases are closed, the maintenance capacity is increased by adapting the opening hours of the shops. With the increasing capacity, two opposing trends set in. On the one hand, a higher maintenance capacity reduces the waiting time and hence the related costs are lowered. On the other hand, the extended opening hours reduce the utilisation of the shops, as the initial maintenance capacity already provides 107 % of the theoretically required capacity.
The maintenance capacity is increased in steps of 1 h, resulting in eight closing times from 17:00 to 24:00. In Tab. 6, the most important KPIs are compared.
The overall maintenance costs are reduced for longer opening hours of the maintenance bases. However, the steps in cost reduction decrease for later closing times of the maintenance shops and reach a minimum at a closing time of 23:00. Fig. 12 provides a detailed cost breakdown. The costs for the actual maintenance checks, the material and labour costs, are almost constant, as the number of both check types, the FH-based and the FC-based ones, only varies a little. The low variation is due to the slightly different number of maintenance checks, as displayed in the two rightmost columns of Tab. 6.
The running and opportunity costs show the expected opposing behaviour. The opportunity costs are reduced for longer operating hours of the maintenance bases as the waiting hours decrease. However, with a further increase in the opening hours, that effect starts to level off. The running costs behave in the opposite manner: longer opening hours mean lower utilisation, and the resulting additional slack time of the mechanics increases the costs. The closing time of 23:00 combines the lowest overall costs and the highest network availability. If the maintenance facilities are opened longer, the overall maintenance costs increase again.
In Fig. 13, a comparison of the daily MMHs and the fleet waiting time of the baseline scenario with the scenario with a closing hour of 23:00 is displayed between day 30 and day 130. For the baseline, the maximum shop capacity is fully utilised on 150 of 365 days, while the maximum capacity of 640 MMHs a day is never required when maintenance is run until 23:00.
The fluctuation in the utilisation is also higher for longer opening hours. At the same time, the utilisation of the maintenance shops is just 52 %. That low utilisation indicates that there is room for improvement with a more sophisticated scheduling and maintenance planning approach. Rescheduling maintenance checks to an earlier point in time with the intention of reducing waiting times is one option. That approach is presented in the next section.
Trading remaining useful lifetime for earlier checks
Within this section, the option of performing maintenance checks prior to the end of the UAMV RUL is examined. The factor F_RUL describes how much of an interval can be exchanged for an earlier maintenance check. For example, F_RUL = 10 % indicates that the 100-FH- or 200-FH-Check can be conducted after 90 FHs since the last FH-driven check. Alike, the FC-based checks can be conducted after 1,575 FCs instead of 1,750 FCs since the last FC-driven maintenance event. To do so, UAMVs are transferred to a maintenance base if three conditions are met. If more UAMVs fulfil these conditions at the same time, the vehicle with the highest FHs since the last check is assigned to the earlier maintenance check. Checking the UAMVs earlier means that RUL is given up. Additional checks do not only result in additional material and labour costs, but also add (unnecessary) ground time to the UAMVs. Hence, the time for the additional checks must also be considered as opportunity time and causes additional opportunity costs C_Opp,RUL, which are included for all F_RUL > 0. F_RUL is varied in seven steps between 2.5 % and 20 %. In the baseline simulation (F_RUL = 0 %), no earlier checks are possible. In Tab. 7, the KPIs for the different simulations are displayed.
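A hedged sketch of the earlier-check logic is given below. The three eligibility conditions are not spelled out in this excerpt; the conditions used here (interval almost used up, a free hangar bay, an open base) and the field names are therefore assumptions, while the F_RUL threshold and the tie-breaking by highest FHs since the last check follow the text.

```python
def eligible_for_earlier_check(used_fraction_of_interval: float, f_rul: float,
                               bay_free: bool, base_open: bool) -> bool:
    """Assumed eligibility conditions for pulling a check forward by up to F_RUL of the interval."""
    return used_fraction_of_interval >= (1.0 - f_rul) and bay_free and base_open

def pick_vehicle_for_earlier_check(candidates: list):
    """Tie-break per the text: the vehicle with the highest FHs since its last check goes first."""
    return max(candidates, key=lambda v: v["fh_since_last_check"]) if candidates else None

print(eligible_for_earlier_check(0.92, f_rul=0.10, bay_free=True, base_open=True))  # True (90 % threshold)
print(pick_vehicle_for_earlier_check([{"tail": "A", "fh_since_last_check": 93},
                                      {"tail": "B", "fh_since_last_check": 96}])["tail"])  # B
```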
Three trends are observed in the table. Allowing earlier maintenance checks in general reduces the maintenance costs and lowers the waiting times; as a consequence, the comparable network availability increases as well. The overall number of conducted maintenance checks increases, and so does the base utilisation, with increasing F_RUL. For the maintenance costs, the waiting times and the network availability, an optimum is found between F_RUL = 2.5 % and F_RUL = 7.5 %.
A detailed breakdown of the maintenance costs is displayed in Fig. 14. The overall lowest maintenance costs are found for conducting checks up to 5 % prior to the intended check time. Beyond that, the maintenance costs increase again.
There are two reasons for that. First, the number of checks increases and hence the labour and material costs as well as C_Opp,RUL are higher. The second reason is the increase in waiting hours. Assigning UAMVs to earlier checks, as long as capacity is available, has the side effect that the maintenance bays are occupied. The occupation length depends on the actual check and varies between 2.4 and 8 h. If an aircraft arrives at the maintenance facilities because the regular FH or FC threshold is reached, it must wait. The earlier vehicles are allowed to be maintained, the higher the shop utilisation is. Hence, the chance that UAMVs with regularly assigned maintenance checks cannot be maintained directly after arrival also increases. As a consequence, the waiting time increases as well.
In the last section, a general trend was noticeable: when the waiting hours are reduced, the network availability increases. This observation is only partly valid in this parameter study. Not only waiting UAMVs cause the 'non-availability' of aircraft; maintaining them unnecessarily often also causes additional ground time. Hence, more and earlier checks do not only increase the maintenance costs, but also reduce the availability of the UAMV fleet. That interconnection explains why for F_RUL = 7.5 % the average daily waiting time is lowest, but the network availability is slightly lower compared to F_RUL = 5 %. Fig. 15 is an excerpt of the daily fleet waiting and working hours of both maintenance bases for the days 45-155.
The baseline has higher maximum and average daily waiting times, while the working hours are more balanced for F_RUL = 5 %. The full maintenance capacity of 360 MMHs is utilised less often for F_RUL = 5 %, and the daily minimum working hours are higher compared to the baseline.
When reassigning maintenance checks before they become mandatory, the best option regarding maintenance costs and also network availability is F_RUL = 5 %.
Best parameter simulation
In the previous sections, the influences of individual adaptations in operation were examined. To find the best possible combination within the scope of this simulation, the two parameters, changing the operating hours of the maintenance bases and performing earlier checks, are analysed in combination.
The ranges of the varied parameters are shown in Tab. 8. In total, 30 different combinations are simulated, of which the best 27 regarding low maintenance costs and a high average comparable network availability are displayed in Fig. 16. The same level of F_RUL is indicated with the same symbol in the plot. The size of the symbols indicates the closing time of the maintenance bases: the later the maintenance base closes, the larger the symbol is plotted. Low maintenance costs and a high network availability are both desired goals. A Pareto frontier is shown as a dashed line and connects the best options. Those Pareto-optimal solutions are marked with the letters (a), (b) and (c) in Fig. 16.
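Identifying the Pareto-optimal combinations amounts to filtering out all runs that are dominated in both objectives. The sketch below shows this filter with purely illustrative cost/availability pairs, not the values of the 30 simulated combinations.

```python
def pareto_optimal(runs: list) -> list:
    """Keep runs for which no other run has both lower costs and higher availability."""
    frontier = []
    for cost_i, avail_i in runs:
        dominated = any(cost_j <= cost_i and avail_j >= avail_i and (cost_j, avail_j) != (cost_i, avail_i)
                        for cost_j, avail_j in runs)
        if not dominated:
            frontier.append((cost_i, avail_i))
    return frontier

runs = [(58.1, 0.9965), (59.5, 0.9967), (60.9, 0.9969), (62.0, 0.9966), (61.5, 0.9960)]
print(pareto_optimal(runs))  # [(58.1, 0.9965), (59.5, 0.9967), (60.9, 0.9969)]
```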
The further left and the higher the markers are located in the figure, the more favourable the results are. The six results for F_RUL = 2.5 % show the overall best solutions, followed by the results for F_RUL = 5 % and F_RUL = 7.5 %.
All three Pareto-optimal solutions are obtained for a maintenance base closing hour of 21:00; however, the impact of the closing time is not as significant as the impact of F_RUL. In Fig. 16, (a) indicates the simulation with F_RUL = 2.5 %, (b) represents F_RUL = 5 % and (c) is the result of F_RUL = 7.5 %.
A comparison among them unveils the following trends. First, the comparable network availability varies only slightly: for all Pareto-optimal solutions, the range is between 99.65 and 99.69 %, which is a difference of approximately 0.4 ‰. For that range in network availability, about one in 285 to 323 flights cannot be serviced due to maintenance restrictions. Second, the impact on the maintenance costs is more significant: between (a) and (c) the costs vary between approximately $ 58 and $ 61/FH, a percentual difference of 3.7 %. Third, longer operating hours of the maintenance bases reduce the overall costs up to a closing time of 21:00; for a closing hour of 22:00 the overall costs rise again. As the percentual reduction in maintenance costs is significantly higher than the percentual loss in comparable network availability for the Pareto-optimal solution (a), that scenario is considered the best case. Fig. 17 shows the fleet waiting time and the working hours of the Pareto-optimal solution (a) over the course of the simulation. The average daily fleet waiting time is only 23 hours. At the same time, the full maintenance capacity of 560 MMHs a day is never required and the average maintenance utilisation is 61 %.
The cost breakdown of the Pareto-optimal solution (a) is shown in Fig. 18. In the initial simulation of Sect. 4.2, opportunity costs account for about half of the overall costs; for this parameter set, they account for only 7 % of the overall costs. However, the share of running and slack costs increases from approximately 3 % in the baseline to 16 %, as the opening hours are extended. Approximately three quarters of the maintenance costs of $ 58/FH now reflect material and labour costs for the actual checks, quantifying the improved interlinking between operation and maintenance events.
The utilisation of the maintenance bases is still comparably low, which in turn causes the higher share of running costs in the maintenance costs. It can be concluded that even the best case is far from the overall theoretically possible optimum. To approach the optimum, with no waiting time of the vehicles and a maintenance utilisation of 100 % or very close to it, a more sophisticated maintenance scheduling is necessary. A first idea to achieve that goal is presented among the further research possibilities in the next section.
Conclusion and outlook
This study is the first work to research a potential maintenance schedule for UAMVs and the interlinking between maintenance and on-demand operation for UAM. It is meant as a basis for further explorations in the field of UAM maintenance and its scheduling. A simulation is presented as a feasible approach to picture the interaction between vehicle operation and maintenance events. Initially, a potential maintenance schedule for UAMVs is derived based on literature and an expert interview. It is integrated into an agent-based simulation consisting of three major elements: the vertiports, the UAMVs and the maintenance bases. The simulation demonstrates the interlinking between operation and maintenance for a number of performance parameters; the most important are the serviced flight requests and the maintenance costs. The wider maintenance cost modelling approach showed that opportunity costs for unserviced flight missions have a decisive impact on the maintenance costs for on-demand UAM and must not be disregarded. The quantitative influences of one boundary condition and two operational parameters are analysed in three parameter studies. Extending the opening hours and reassigning maintenance checks to an earlier date are feasible options to improve the interlocking between maintenance and operation. A concluding optimum search identified a heuristic with which 99.7 % of the flight requests, compared to a maintenance-free scenario, could be fulfilled. In that best parameter simulation, the heuristic combines extended base opening hours with earlier maintenance checks and results in maintenance costs of approximately $ 58/FH.
After deducing a potential UAMV maintenance schedule and examining operational boundary conditions and decisions, we want to summarize the main observations (O) of this paper. Under the assumptions made for the UAMV maintenance and transport simulation, these are as follows:
O 1 A transport simulation is a feasible approach to picture the interaction between on-demand operation and maintenance events and to research boundary conditions as well as operational changes.
O 2 The initial distribution of the fleet age has a strong impact on the queuing for maintenance events. It hence influences the maintenance costs and the network availability. Fleets with a strong spread in the initial aircraft age at simulation start face less waiting time for maintenance, while fleets with a similar aircraft age cause long waiting hours.
O 3 Increasing the maintenance capacity by extending the opening hours reduces the waiting times but increases the slack time. For these opposing trends, a cost minimum can be identified.
O 4 The option for earlier checks proved to be effective in reducing the waiting time and hence the maintenance costs. For earlier maintenance checks, there is also an optimum, as checks performed too early increase the number of unnecessary checks and the ground time, and block the maintenance facilities.
O 5 A combination of extended shop working hours and earlier maintenance checks provides a comparable network availability of 99.7 % and maintenance costs of $ 58/FH.
At the same time, we noticed limitations (L) of our settings and results and want to include them in the following list. For all limitations, we propose potential improvements to resolve them. L 1 The assignment of UAMVs to flight missions is preliminarily based on a single parameter (the number of FHs). In Sect. 4.3.3, a first step towards a UAMV 'flow control' for maintenance checks is presented, as three conditions are required for an earlier check. A more elaborate approach would be the implementation of a conflict detection for maintenance checks: if more slots are required than are available at the same time, the conflicting aircraft could be assigned differently to avoid waiting times.
The opportunity costs are assumed to be constant in the simulation. However, opportunity costs are actually demand-dependent. For example, if a UAMV waits for a maintenance check at 2 a.m. and there is only limited demand, which can be serviced with a small number of remaining aircraft, there are actually no opportunity costs. The opposite holds in times of highest demand, when more flight requests cannot be serviced. The opportunity costs can be turned into a time-dependent variable by coupling them with the actual demand distribution curve of Fig. 2, which would make the simulation more realistic. L 4 Maintenance checks cause discrete increases of the costs over the simulation period. At the same time, the maintenance costs per FH converge towards a certain value and hence a certain simulation duration is necessary. One option to reduce the calculation time could be a scaled-down model with a shorter runtime that is able to provide comparable solutions and conclusions.
L 5 The number of parking slots at vertiports and maintenance bases is unlimited. Implementing a limited number of parking slots at the vertiports as well as at the maintenance bases could make the transport simulation more realistic.
L 6 In this study, some boundary conditions have not been changed. For example, only one type of UAMV and one maintenance schedule are researched, the transport network was not altered and the demand curve has not been changed. Diverse UAMV types with different performance parameters and maintenance schedules are believed to be another field for further investigation. Also, changes in the network layout and the demand model are worth researching. The same applies to a ramp-up of operation or a fleet replacement process with a later generation of aircraft requiring a different amount of maintenance.
L 7 Within this research, only scheduled maintenance events are examined. Including unscheduled events with variable lead times, after an unusual finding during a scheduled maintenance check or after an incident during operation, would add another element of uncertainty to the maintenance scheduling. It could be used to quantify the quality of the operation and maintenance interlinking.
L 8 In the simulations, the battery was charged with 1C. Changing the recharging rate for the batteries could show an interesting correlation between the charging speed and maintenance, especially if a battery degradation model that includes the effects of charging speed on the loss of battery capacity and power is implemented.
Funding Open Access funding enabled and organized by Projekt DEAL.
Data availability Not applicable.
Declarations
Conflict of interest The first version of the presented simulation study was developed during the corresponding author's Master Thesis project in cooperation with Lufthansa Technik AG, which provided financial support to this study, between November 2019 and May 2020. We would like to thank the reviewers, the involved colleagues at the DLR Institute of Maintenance, Repair and Overhaul as well as our partners at Lufthansa Technik AG and at the Technical University of Munich.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Engineering",
"Computer Science"
] |
En route to sound coding strategies for optical cochlear implants
Summary Hearing loss is the most common human sensory deficit. Severe-to-complete sensorineural hearing loss is often treated by electrical cochlear implants (eCIs), which bypass dysfunctional or lost hair cells by direct stimulation of the auditory nerve. The wide current spread from each intracochlear electrode array contact activates large sets of tonotopically organized neurons, limiting the spectral selectivity of sound coding. Despite many efforts, an increase in the number of independent eCI stimulation channels seems impossible to achieve. Light, which can be better confined in space than electric current, may help optical cochlear implants (oCIs) to overcome eCI shortcomings. In this review, we present the current state of optogenetic sound encoding. We highlight the development of optical sound coding strategies, which capitalize on optical stimulation and require fine-grained, fast, and power-efficient real-time sound processing controlling dozens of microscale optical emitters, as an emerging research area.
INTRODUCTION
The World Health Organization has been warning of a fast-growing hearing problem for years,1 reporting, as of 2021, more than 30 million people in the world with severe to profound hearing loss and almost another 30 million with profound to complete hearing loss. Most eCI users achieve fair open-set speech perception in the quiet. An eCI system, composed of an external sound processor and an implanted stimulator, converts sound into biphasic electric current pulses stimulating, via an intracochlear electrode array, spiral ganglion neurons (SGNs) tonotopically organized along the spiral anatomy of the cochlea (Figure 1). The external processor, running the sound coding strategy, decomposes sound into frequency bands and extracts the intensity within each band. Extracted intensities serve as scaling factors for electrical pulses delivered in an interleaved fashion to electrode contacts (channels) located at the tonotopic positions corresponding to the respective frequency bands.
The electrically conductive fluid inside the scala tympani of the cochlea, where the intracochlear electrode array is implanted, causes a wide spread of the electric current pulse containing the information of a given frequency band for each of the 12-24 eCI electrode contacts of the array (depending on the manufacturer 6). This leads to activation of a large fraction of SGNs (Figure 2B). Efforts to extend the number of functional channels and reduce channel interactions are based on improving sound coding strategies and stimulators by enabling focused stimulation [16][17][18][19][20][21][22] or current steering using multipolar stimulation (virtual channels).23 Alternatively, to improve the neural interface, neurotrophin gene therapy increasing SGN survival and causing regeneration of spiral ganglion neurites 24,25 or direct stimulation of the auditory nerve 26 has been proposed. Although some of these studies have shown potential to enhance the hearing experience, there remains a major clinical need for improvement.
Alternative SGN stimulation by light has the potential to overcome eCI bottlenecks (for review, see refs. [29][30][31][32]). As light can be spatially confined, future optical cochlear implants (oCIs) could stimulate smaller fractions of SGNs, enabling a higher number of perceptually independent stimulation channels. Two approaches to optical SGN stimulation, namely infrared direct neural stimulation (INS) 33 and optogenetics, 34 have been proposed. It was shown by recordings of midbrain activity that the spectral selectivity of optogenetic SGN stimulation outperforms that of electrical stimulation 27,28,34 (Figure 2). This conclusion was also reached for the human cochlea in computational studies investigating the spread of excitation in a realistic 3D model of the cochlea 44 in comparison to clinical electric field imaging data.45 Moreover, SGN recordings demonstrated that ultrafast channelrhodopsins such as Chronos, f-Chrimson, and vf-Chrimson enable optogenetic stimulation to achieve near-physiological SGN firing rates [46][47][48] (Figure 4).
In parallel, the technological implementation of the oCI has progressed since the proof-of-concept study on flexible multichannel oCIs based on microscale thin-film gallium nitride (GaN) light-emitting diodes (LEDs).49 Optimization of their light extraction and beam shaping with the use of optical concentrators and micro-lenses as well as their technical characterization have been advanced 50,51 and application in animal studies has been shown.28 Furthermore, studies with larger emitters 52,53 and waveguides 54 have been undertaken and followed by the implementation, characterization, and application of a complete proof-of-concept preclinical oCI system.55 Such a low-weight oCI system based on a custom-made sound processor and driver can employ a dedicated real-time optical sound coding strategy taking advantage of an increased number of stimulation channels and/or parallel stimulation.
Yet for optogenetic hearing restoration to be translated, work toward optical sound coding strategies is required in addition to the preclinical development and characterization of viral gene therapy and the oCI. Most currently used sound coding strategies for eCIs are based on filterbank processing and interleaved stimulation at a constant stimulation rate (Figure 3B). In general, the conversion of sound to electric current pulses works as follows. A microphone samples the sound from the surroundings at a rate fast enough for faithful reconstruction of the audible frequencies. Audio samples are processed with a filter bank to decompose the signal into frequency bands, with each band corresponding to a channel of the implant. The amplitude of each band is then extracted, e.g., by Hilbert transformation, and this determines the amplitude of the biphasic electrical pulses for the respective channel. The amplitudes are further adjusted as per patient-specific threshold and comfort levels. These pulses stimulate the SGNs in the cochlea following the place-coding principle, i.e., electrodes transmitting information on high sound frequencies stimulate SGNs toward the base of the cochlea, while those for low frequencies are placed toward the apex.
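The filterbank-plus-envelope front end described above can be sketched in a few lines. The example below is a generic illustration, not the processing chain of any specific implant; the sampling rate, band edges and frame rate are assumed values, and the patient-specific threshold/comfort mapping is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_envelopes(audio, fs, band_edges):
    """Decompose audio into bands and extract each band's envelope (Hilbert magnitude)."""
    envelopes = []
    for low, high in band_edges:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        envelopes.append(np.abs(hilbert(sosfiltfilt(sos, audio))))
    return np.array(envelopes)

fs = 16_000.0
t = np.arange(0, 0.05, 1 / fs)                                   # 50 ms of a two-tone test signal
audio = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 3_000 * t)
env = band_envelopes(audio, fs, band_edges=[(300, 700), (2_500, 3_500)])
pulse_amplitudes = env[:, ::160]   # one stimulation frame every 10 ms per channel (before fitting)
print(pulse_amplitudes.shape)      # (2, 5): 2 channels x 5 frames
```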
The continuous interleaved sampling (CIS) 56 strategy was developed to tackle the problem of channel interaction resulting from the current spread by interleaved (non-simultaneous) stimulation. For a better transfer of temporal information, the CIS strategy uses high stimulation rates (usually about a thousand pulses per second per channel) along with short (∼100 µs) pulses and inter-pulse intervals. To increase the temporal resolution of electrical hearing even more, a number of n-of-m type strategies were developed (e.g., SMSP, 57 SPEAK, 58 ACE, 59 PACE 60 later trademarked as MP3000, TPACE, 61 TIPS 62). All of them are based on the principle of selecting the most significant spectral features and neglecting less important components. Sound input is filtered into m frequency bands and envelope information is extracted from each of them. Out of the m bands, the n bands containing the largest sound pressure are used for stimulation in an interleaved manner within a given time period and at a fixed frame rate. With low n, the spectral representation of the audio input is reduced, but the stimulation rate can be increased, resulting in better temporal resolution. Conversely, for high n, the spectral representation improves but the channel stimulation rate decreases.64,65 However, regardless of the coding strategy, eCIs would need more independent channels in order to better encode spectral information. Increasing the array density would require a substantial size reduction for the single electrode contact, which is limited by the maximum current density tolerated by the material. Intermediate (virtual) channels can instead be created by the current steering technique, [17][18][19][20][21][22][23] where two or more channels are stimulated at different intensities (e.g., the SpecRes strategy and its commercial version, HiRes Fidelity 120 66). Current steering aims at improving the transfer of the original spectrum by enabling different frequencies between fixed channels. Such a strategy requires an independent current source for each electrode. Benefits for native Korean-speaking users were reported. 67 Yet in another study, only 3 out of 10 subjects improved in the perception of one or more spectral cues, while speech understanding in noise was not improved in any subject. 68 A study on 65 European subjects (seven languages) demonstrated no improvement in speech understanding in any of them with HiRes Fidelity 120. Nevertheless, utilizing the increased number of perceptually independent channels predicted for the oCI might capitalize on the input stage of such a strategy and/or its sound processor hardware. Efforts to improve speech recognition include model-based coding strategies, which started with auditory-model-based ACE versions (EZ-ACE and IHC-ACE). 69 Stimulation based on auditory modeling (SAM) 70,71 and bio-inspired coding (BIC) 72 strategies implement the simulation of auditory system properties and subsequent encoding. Mimicking physiological hearing in terms of spread of excitation, cochlear delays, compression, phase locking, neural refractoriness, spike rate facilitation and adaptation, SAM and BIC go beyond traditional strategies. They calculate each pulse individually, account for preceding stimulation results and enable individual inter-pulse intervals. These approaches could serve efforts to achieve near-normal auditory percepts with optical stimulation, which, however, would require substantial computational power and increase the energy budget.
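The core of the n-of-m principle, choosing the n bands with the largest envelope for each stimulation frame, can be written as a one-step selection; the example below is a generic illustration with made-up envelope values, not the implementation of any named strategy.

```python
import numpy as np

def select_n_of_m(band_envelopes: np.ndarray, n: int) -> np.ndarray:
    """Return the indices of the n bands with the largest envelope in this frame."""
    return np.sort(np.argsort(band_envelopes)[-n:])

frame = np.array([0.10, 0.80, 0.05, 0.60, 0.30, 0.02, 0.40, 0.70])  # m = 8 band envelopes
print(select_n_of_m(frame, n=4))  # [1 3 6 7]: only these channels are stimulated in this frame
```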
Implementation of optical emitter arrays
Thanks to developments in miniaturized light-emitting, light-focusing, and light-delivering components, oCIs can be implemented with either an array of intracochlear optoelectronic emitters (''active oCI'', Figure 3A) or a waveguide array with extracochlear optoelectronics (''passive oCI'', Figure 3A). 73,74 Both oCI implementations have properties, advantages and challenges to be considered when designing optical coding strategies.
Both oCI concepts have already been implemented in proof-of-concept studies in animals showing improved spectral selectivity and dynamic range over eCIs. 27,28,34,39,47,48,75 Besides safety and stability considerations, an important emitter property is the spatial radiation pattern or intensity profile that governs the spread of neural activation in the cochlea. A Lambertian profile is characteristic of most LEDs, though the profile can be modified by use of special micro-optics, such as conical concentrators and micro-lenses. 51 Laser-coupled waveguides, depending on the outcoupling structure, can have a Gaussian emission profile, and therefore spatially more confined optical stimulation is straightforward. At first glance, a narrow Gaussian profile seems optimal for oCI operation as it combines high irradiance and spatial selectivity. Yet such emitters will need to be oriented very carefully and stably in the scala tympani, pointing directly toward the site of neural stimulation in the modiolus, as even a slight shift in the orientation could substantially degrade neural excitation. 44 Ultimately, all these factors will play a role in determining the efficiency of stimulation, and these questions need to be addressed by well-constrained modeling to inform the right choice of the emitter properties. Finally, regardless of the implementation, the design might consider electrophysiological recording functionality for validation and quantification of neural excitation in analogy to eCI.
Figure 2. Spectral selectivity of natural acoustic hearing compared to oCI and eCI stimulation. (A) Illustration of an experiment in which multiunit activity in response to (from left to right) acoustic, electrical, or optical stimulation of the cochlea is recorded from the inferior colliculus in adult Mongolian gerbils using a multielectrode array. Tonotopic organization is color coded. (B) Assessing the cochlear spread of excitation (SoE) for (from left to right) acoustic (4 kHz, 100 ms tone burst), optical (''passive'' [single waveguide, mid turn] and ''active'' [block of 4 LEDs, apical] oCI, 1 ms), and electrical (mono- [2nd electrode] and bipolar [2nd-3rd electrode] eCI, 100 µs) stimulation of the gerbil cochlea by multielectrode recordings of multiunit activity (color scale and white lines) from neuronal clusters of the frequency-ordered (or tonotopically organized) auditory midbrain. Confined spatial tuning curves indicate low SoE, i.e., high frequency selectivity of optical stimulation via waveguides or LEDs, which is more similar to acoustic stimulation than the broad SoE of eCI. Comparison of SoE for the different modalities is based on the strength of the multiunit responses (d', color code) elicited by acoustic, optical, and electrical stimulation as detected at a given midbrain electrode (asterisks indicate the best electrode, i.e., the electrode with the lowest response threshold) and analyzed by a signal detection theory measure: lines correspond to d' of 1.5 (small dash), 2 (large dash) and 3 (continuous). Intensities on linear axes for optical and electrical stimulation are provided for better comparability to other experiments and accessibility of energy requirements. Modified from ref. 27,28
Active oCI
The active oCI was the first multichannel oCI that already proved its feasibility for hearing restoration in animal studies. 28,75 This approach is very similar to the one known already from eCI systems: wires from the driver electronics feed to the stimulation channels, which are incorporated inside a tight polymer encapsulation (Figure 3A). Already in 2014, Hernandez et al. showed promising results of optogenetic stimulation of the cochlea using an ancestor of an LED-based active oCI. 34 Application of this single-channel device to a deaf mouse, similar to the first single-channel electrical device prototype implanted in a deaf human in 1957 by Djourno and Eyriès, 76,77 paved the way for the future development of the entire concept. (According to Eisen (2003), before implantation the patient had the cochlea removed and only a remaining stump of the auditory nerve was found, where the active electrode was located. The ground electrode was embedded into the temporal muscle and a monopolar configuration of stimulation was employed. As the auditory nerve fibers were likely nonresponsive several weeks after cochlea removal, the patient's hearing experience could be explained by electrical stimulation of the cochlear nucleus (the next stage in the auditory pathway), making this attempt closer to a brainstem stimulator than to a CI. Although the 1957 attempt of Djourno and Eyriès was not the first attempt to treat deafness with electric stimulation, it was the first avoiding stimulation of the intact cochlea, thereby eliminating the electrophonic hearing effect.) Also in 2014, Gossler et al. presented a proof of concept for wafer-level processed mLED multichannel oCIs. 49 Optimized for the mouse cochlea, linear arrays of mLEDs with a size of 50 × 50 µm were flip-chip-bonded on a flexible substrate carrying lines and contacts, characterized optoelectrically, and inserted successfully into postmortem mouse cochleae. 49 Since then, further developments of mLED-based oCIs have been pursued [50][51][52] and first animal studies on multichannel optogenetic stimulation using microfabricated oCIs based on miniature commercial LEDs and even smaller custom mLEDs were already presented. 28,75 Ten-channel oCIs with the slightly bigger commercial LEDs (220 × 270 µm, C460TR2227-S2100, Cree) 52 and addressing of individual LEDs by separate p-lines and a common n-line for all LEDs were characterized in vitro and in vivo. 75 Combined with a custom-made preclinical sound processor and oCI driver circuitry, these oCIs enabled successful behavioral testing of optogenetic stimulation in rats, providing the first proof of concept for a complete multichannel oCI system. 55,75 Increasing the number of channels was achieved with smaller custom mLEDs (60 × 60 µm) and matrix addressing. There, blocks of emitters are connected with a common contact and each emitter in a block has another common contact with the corresponding emitter of the other blocks. 50 Wiring is minimized but addressing is only partially independent: each mLED within a given block is independently addressable as long as only one block is selected at a given time. Employing this approach, up to 144 mLEDs could be operated with 12 n-contacts and 12 p-contacts. 50 Using mLED-based oCIs, higher spectral selectivity of optogenetic stimulation compared to eCI was reported based on recordings from the inferior colliculus in adult Mongolian gerbils. 28
However, with the typical maximal radiant flux of ~0.8 mW of the individual mLEDs, only a third of them elicited significant neural responses. Therefore, the spatial selectivity was assessed by activating blocks of four neighboring mLEDs, likely underestimating the selectivity achievable with mLED implants. Increased mLED emission and/or using more potent channelrhodopsins, and/or increasing their expression in SGNs will be required for studying the spread of excitation upon stimulation by individual mLEDs. Indeed, efforts toward improved light extraction using microscale optical concentrators and lenses (10 µm in diameter) enhanced the mLED performance in vitro. 51 Recently, advanced concepts of LED addressing for independent operation of a large number of channels were presented. 78 Such a tri-state switching scheme would allow for an increased number of LEDs with minimal wiring in future implant designs.
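The wiring saving of matrix addressing mentioned above is easy to quantify. The sketch below compares the number of feed lines needed for individually addressed emitters with that of a row/column (matrix) scheme; the 144-emitter, 12 + 12 contact configuration is the one quoted in the text, while the other array sizes are illustrative.

```python
def wires_individual(n_emitters):
    # One dedicated line per emitter plus one common return line
    return n_emitters + 1

def wires_matrix(rows, cols):
    # One line per row (e.g., n-contacts) and one per column (p-contacts)
    return rows + cols

for rows, cols in [(4, 4), (8, 8), (12, 12)]:
    n = rows * cols
    print(f"{n:3d} emitters: individual addressing needs {wires_individual(n):3d} lines, "
          f"matrix addressing needs {wires_matrix(rows, cols):2d} lines")
# 144 emitters: individual addressing needs 145 lines, matrix addressing needs 24 lines
```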
The high-efficiency low-voltage organic LED (OLED) is another emitter candidate for the active oCI implementation. Integration of OLEDs on top of complementary metal-oxide-semiconductor substrates (OLED-on-CMOS technology) 79 is an interesting concept for a fully integrated optoelectronic system in oCIs, provided that the achievable irradiance matches the requirements of neural excitation. OLED technology already offers a variety of colors (red, orange, white, green, and blue) and the CMOS architecture would enable the exact and fast control of a large number of emitter elements in arrays with a minimum number of metallic lines (daisy chaining of chip-to-chip interconnects) for mechanical flexibility of the implant. 80 Integrated photodetection would make it possible to obtain direct feedback on the stimulus intensity for closed-loop feedback control of the individual OLED, which could serve the oCI fitting and control of operation. OLED technology has already been shown to stimulate cells placed on top of a two-dimensional array as well as to modulate cortical neurons. 81,82 Nevertheless, OLED technology is yet to be tested for application in the cochlea.
Finally, microscale lasers such as vertical-cavity surface-emitting lasers (VCSELs), which provide narrow beam profiles and should offer sufficient radiant flux for optogenetic stimulation, have been employed for the active oCI. 83 Still, supplying sufficient current to the laser as well as achieving a stable, hermetic yet transparent and flexible encapsulation of the array remain important challenges to tackle.
Passive oCI
Although the active oCI implementation is currently the most technologically ready, passive oCIs represent an attractive option with a favorable safety and stability profile. In this design, all electrically active components are hermetically enclosed in the titanium housing of the implant, in analogy to the current sources in eCI systems 73 (Figure 3A). Although there is no proof-of-concept preclinical animal study with a chronic multichannel implementation yet, first important development goals have already been achieved.
Light delivered from optical fibers chronically implanted into the round window for single-channel stimulation of the cochlea drove a behavioral response in awake gerbils and proved an auditory percept due to optogenetic stimulation. 39 In a follow-up study, the spectral selectivity of the waveguide approach was addressed by recordings from the inferior colliculus in response to stimulation from three independent fibers in three different regions of the gerbil cochlea. 27 Despite the fact that the fibers were placed into cochlear windows opposing the medial wall, near-physiological spectral selectivity was found (Figure 2B). Such high selectivity was not observed with blocks of 4 mLEDs inserted inside the scala tympani, similar to the typical CI position (Figure 2B). Computational modeling studies also confirmed Gaussian-profile waveguides to be more suitable candidates than Lambertian-profile mLEDs for use in oCIs, as they achieve higher irradiance at the same radiant flux with lower spectral spread. 44 Several waveguide-based approaches for optical stimulation of the cochlea have been explored. 85-87 Recent studies on fabrication and characterization of waveguide arrays for oCIs demonstrated successful implantation into the gerbil cochlea, 88 yet functional results remain to be obtained.
Implementation of the optical CI driver hardware
Just like for the eCI, the oCI hardware design determines the features available for the coding strategy. Considerations for oCI design include single vs. multiple wavelength(s), e.g., for diversified excitation or combined excitation and inhibition, as well as the number, type and operation of emitters and their addressing. The latter is relevant for independent operation of channels as required for parallel stimulation of multiple sites. Power consumption is another important consideration as current estimates of the required pulse energy of the oCI exceed that of the eCI (see below). Hence, maximizing the efficiency of emitter operation, such as driving laser diodes with large currents but ultrashort pulses that are then integrated by optogenetically modified SGNs, is important and will also serve the oCI heat management. Efficient integration of injected currents by SGNs has been demonstrated for direct current injection 89 and optogenetic stimulation. 90 Moreover, power-efficient optogenetic emulation of physiological sound coding with parallel stimulation should balance the transfer of spectral and intensity information. For example, optogenetic coding might consider recruiting emitters that neighbor one of the n channels for broadening neural excitation at a specific tonotopic position for encoding loud sounds.
Future oCIs directly trigger firing of SGNs that express channelrhodopsins (ChRs, light-gated ion channels), i.e., bypassing dysfunctional or lost IHCs. Wavelength, kinetics, and energetics of optogenetic stimulation need to be tuned to the action spectrum, light sensitivity, conductance, and gating kinetics of the ChR employed. To a first approximation, the temporal fidelity of optogenetic SGN stimulation is limited by the deactivation time constants of the channelrhodopsins. For example, the blue-light-activated ChR2, 91 which was the first ChR employed for neural stimulation 92 and is widely used in the life sciences, deactivates with a time constant of ~10 ms at room temperature and enables firing up to ~50 Hz in neurons. 92 Likewise, the red-light-activated ChR Chrimson (peak response at 590 nm) has a rather long deactivation time constant of ~25 ms at room temperature. 47 Both ChRs seemed ill-suited for the high temporal fidelity desired for optogenetic sound encoding. Therefore, efforts have been undertaken to engineer ultrafast ChRs. Chrimson variants that, at physiological temperature, have deactivation time constants of 3.2 ms (fast, f-Chrimson) and 1.6 ms (ultrafast, vf-Chrimson) have been generated 47 and characterized. 47,48,93 These variants enable reliable SGN firing with good temporal precision (vector strength of at least 0.5) at stimulation rates of up to ~200 Hz (Figure 4). 47,48,93 An even faster deactivation time constant of 0.8 ms at physiological temperature 46 is exhibited by the ultrafast blue-light-activated Chronos (peak response at 500 nm), 94 enabling stimulation rates beyond 200 Hz. 46 However, the resulting shorter channel opening time provides less charge transfer per photon absorption, which in turn increases the power needed for stimulation. Therefore, oCI strategies working with currently available ChRs will likely limit the stimulation rate to 200 Hz, i.e., lower than contemporary eCI strategies. We consider it likely that potential disadvantages of temporal coding resulting from the lower stimulation rate will be offset by the enhanced spectral coding.
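As a rough illustration of why the ChR deactivation time constant bounds the usable stimulation rate, the sketch below computes the fraction of channels still open at the arrival of the next pulse, assuming simple mono-exponential channel closing. The time constants are the ones quoted above, while the interpretation (residual conductance should have largely decayed between pulses for temporally precise, pulse-locked firing) is an illustrative simplification, not a quantitative model of SGN excitability.

```python
import numpy as np

# Deactivation time constants (s) quoted in the text
tau = {"ChR2 (~10 ms)": 10e-3,
       "Chrimson (~25 ms)": 25e-3,
       "f-Chrimson (3.2 ms)": 3.2e-3,
       "vf-Chrimson (1.6 ms)": 1.6e-3,
       "Chronos (0.8 ms)": 0.8e-3}

for rate in (50, 200, 400):           # stimulation rates in Hz
    dt = 1.0 / rate                   # inter-pulse interval
    print(f"\nStimulation at {rate} Hz (inter-pulse interval {dt*1e3:.1f} ms):")
    for name, t in tau.items():
        residual = np.exp(-dt / t)    # fraction of channels still open at next pulse
        print(f"  {name:20s} residual open fraction {residual:6.1%}")
```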
Ultrafast red-light-activated ChRs such as f-Chrimson and vf-Chrimson seem to be good candidates for clinical translation given their suitable kinetics and the lower risk of phototoxicity for the lower-energy photons. As the energy requirement of optogenetic stimulation is another important consideration and is more favorable for f-Chrimson than for vf-Chrimson, f-Chrimson might be the better choice as it lends itself to comparable temporal fidelity of SGN firing. 47,48 Still, the single-channel energy threshold of 0.5 µJ found by auditory brainstem response (ABR) recordings in response to a 1-ms-long laser light pulse 47 indicates that the energy requirement still exceeds that of the single-channel eCI ABR energy threshold of 0.025 µJ (calculation based on interpolation from an ABR protocol with a 1-ms-long train consisting of ten bipolar pulses of 45 µs phase duration resulting in a threshold of ~60 µA 95 and an electrode impedance of 7 kΩ). The increased number of oCI channels compared to the eCI may lead to higher energy consumption. Also, the influence of the light source type as well as the transduction efficiency on the overall energy budget should be considered (for more discussion see the section on risks and issues of oCI technology). Aside from gene therapeutic efforts to enhance the transduction efficiency and the membrane targeting of the ChR, discovery and engineering of ChRs with larger single-channel currents will be valuable activities to further lower the oCI energy budget. Current preclinical work often involves early postnatal intracochlear injection of adeno-associated virus (AAV), which routinely achieves transduction of ≥70% of the SGNs, [46][47][48] outperforming transduction upon direct intramodiolar pressure injections of AAV in the adult animal (10-40%). 39,93 AAVs are good candidate vectors, not compromising hearing [96][97][98][99] and providing long-term transgene expression in postmitotic target cells 47,100,101 despite not integrating into the host cell genome. 32 For example, stable expression of ChR over ~2 years upon administration of a single AAV dose was demonstrated for mouse SGNs, 43 and stable expression in retinal ganglion cells was shown for non-human primates. 102 In general, AAV-based gene delivery is already in use in numerous clinical trials on the eye. 32,103 First results of a clinical study on AAV-mediated optogenetic manipulation of retinal ganglion cells show favorable safety data. 104 Preclinical efforts are being undertaken to develop efficient AAV-based genetic SGN manipulation of the mature cochlea, including work on non-human primates. 105 Optimized ChRs and their expression in SGNs for energy-efficient optogenetic stimulation are an important requirement for optogenetic hearing restoration.
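The electrical reference value of 0.025 µJ quoted above can be reproduced from the stated stimulus parameters with simple ohmic arithmetic (energy = I²·R·t over the total current-carrying time of ten bipolar pulses with 45 µs per phase). The short script below redoes this calculation and compares it with the ~0.5 µJ single-channel optical ABR threshold; it is a back-of-the-envelope check of the numbers in the text, not an exact model of electrode electrochemistry.

```python
# Electrical single-channel ABR threshold energy (eCI)
current = 60e-6          # ~60 uA threshold current
impedance = 7e3          # 7 kOhm electrode impedance
phase = 45e-6            # 45 us per phase
n_pulses = 10            # ten bipolar pulses in the 1-ms train
on_time = n_pulses * 2 * phase          # two phases per bipolar pulse -> 0.9 ms
e_electric = current**2 * impedance * on_time

# Optical single-channel ABR threshold energy (oCI, 1-ms laser pulse)
e_optical = 0.5e-6       # ~0.5 uJ as quoted in the text

print(f"electrical threshold energy ~ {e_electric*1e6:.3f} uJ")   # ~0.023 uJ
print(f"optical threshold energy    ~ {e_optical*1e6:.3f} uJ")
print(f"ratio optical/electrical    ~ {e_optical/e_electric:.0f}x")
```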
PREDICTIONS FOR OPTICAL SOUND CODING
Optical sound coding strategies need to consider several constraints, such as the limited temporal fidelity of coding reflecting ChR kinetics, the increased number of stimulation channels relative to eCI systems, requirements for power-efficient operation of the optical emitters, maximal current consumption, and battery lifetime. The minimal duration and intensity of a light pulse sufficient for optogenetic SGN activation depend on the level of ChR expression (see above), the single-channel conductance of the ChR, as well as on the ChR's closing kinetics. Taking advantage of the expected greater frequency selectivity of the oCI, parallel stimulation by a number of emitters selected from 50 to ≥100 emitters could be considered, which seems feasible given the current technology. Recruitment of neighboring emitters for increasing the population of activated SGNs at a given tonotopic position will likely help mimic physiological loudness coding, which relies on increasing the number of activated SGNs and their firing rate.
Parallel stimulation
To evaluate advantages of parallel stimulation over interleaved stimulation, we employed custom scripts in MATLAB R2016a (The MathWorks Inc.) to generate electrodograms or ''emittograms'' for a ten-channel implant, a state-of-the-art eCI or, e.g., the LED-based active oCI used in proof-of-concept studies of the oCI system 52,55 (Figure 5). Sound processing stages were implemented similarly to those of the CIS strategy, 56 except that there was no interleaving in the parallel stimulation (for details see STAR Methods in supplemental information). For a fixed pulse rate, parallel stimulation offers a longer pulse duration (Figure 5B) in comparison to interleaved stimulation (Figure 5A), which is advantageous for oCI systems that accommodate the limited temporal fidelity of optogenetic stimulation with state-of-the-art ChRs (see above). Employing ChRs with faster kinetics (hence increasing the temporal fidelity of coding) and larger conductance could enable stimulation at higher rates while keeping the power budget in check (Figure 5C).
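The gain in available pulse duration from parallel stimulation follows directly from the timing budget: with interleaved stimulation the pulses of all channels must fit sequentially into one stimulation period, whereas with parallel stimulation each channel has the whole period to itself. The sketch below works this out for the ten-channel, 500 pps/channel example used in Figure 5; the calculation is an idealization that ignores inter-pulse gaps and hardware switching times.

```python
def max_pulse_duration(n_channels, rate_per_channel_hz, interleaved):
    period = 1.0 / rate_per_channel_hz          # one stimulation cycle per channel
    if interleaved:
        return period / n_channels              # channels share the cycle sequentially
    return period                               # parallel: each channel owns the cycle

n, rate = 10, 500
print(f"interleaved: up to {max_pulse_duration(n, rate, True)*1e6:.0f} us per pulse")   # 200 us
print(f"parallel:    up to {max_pulse_duration(n, rate, False)*1e3:.1f} ms per pulse")  # 2.0 ms
```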
Increased number of emitters
Since oCI development is still at the preclinical stage, it is not yet possible to test sound coding strategies in studies with human patients. However, in silico evaluation approaches will be beneficial for the future development of the entire system and a smooth translation into clinical trials. They ideally complement animal experiments and help to focus these studies, thereby reducing the number of animals required for preclinical evidence. Existing speech intelligibility metrics could be used for analysis of in silico results (for an extensive review and comparison see ref. 106). In general, most of them are based on the articulation index (AI) framework developed already in 1947 at the Bell Telephone Laboratories for solving problems not related to CIs. 107 AI allowed for quantitative analysis of the sound recognition capability of human hearing regarding fundamental characteristics of the input sound. The fractional articulation index (fAI), 108 based on AI, and the short-time objective intelligibility measure (STOI) 109 are the metrics most commonly used to study intelligibility of eCI coding strategies as they are free from prediction bias 106 and their scores show high correlation with speech intelligibility. 108,109 Although both of them could be good candidates for evaluation of oCI coding strategies, for the following analysis we selected fAI, which tends to underestimate intelligibility as compared to STOI and, hence, represents a lower boundary of what can be achieved. 106 Speech recognition of eCI users, however, typically saturates with only a modest number of active channels. 9-11 The reason for this limitation has been attributed to electrode interactions 10 as well as to CI technology and implantation techniques at the time of these studies. 112 A prospective oCI with high spectro-temporal resolution is expected to overcome these limitations of CI hearing. In this context, we performed an in silico evaluation of speech intelligibility with different numbers of channels (up to 64 spectral bands). The evaluation was implemented in MATLAB R2016a (The MathWorks Inc.) and used publicly available code of the Analysis & Resynthesis Sound Spectrograph (ARSS) 113 and fAI. 108 An open speech corpus, 114 comprising 10 speakers with 3842 utterances in total, was used for the study (for details see STAR Methods in supplemental information). As real-world scenarios rarely take place in completely quiet environments, we added white noise to each file at a signal-to-noise ratio (SNR) of +5 dB. The ARSS algorithm is based on filter-bank analysis and is quite similar to the CIS coding strategy. 56 The resynthesized audio from ARSS was compared to the original audio by calculating the objective intelligibility measure, fAI. The fAI scores corroborate the hypothesis that a higher number of spectral channels is beneficial for speech understanding of CI users, provided there is no channel interaction (Figure 6, where 0 represents poor and 1 high intelligibility). This result is intuitive and supported by the fact that the speech recognition of normal-hearing listeners continues to improve with an increasing number of spectral bands, even when the eCI listeners' performance does not show a significant improvement. 10 Provided that scaling the number of non-overlapping stimulation channels is feasible for the oCI, it is expected to offer better speech intelligibility.
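The noise condition used in this in silico evaluation (white noise added at +5 dB SNR) corresponds to a standard power-based mixing rule. A minimal Python equivalent of that preprocessing step is sketched below; the original analysis was done in MATLAB with ARSS and fAI, so this is only a generic illustration of how a signal is degraded to a chosen SNR, not a reimplementation of that pipeline.

```python
import numpy as np

def add_white_noise(signal, snr_db):
    """Return signal + white Gaussian noise scaled to the requested SNR (in dB)."""
    signal_power = np.mean(signal**2)
    noise = np.random.randn(len(signal))
    # Scale the noise so that 10*log10(signal_power / noise_power) == snr_db
    target_noise_power = signal_power / (10 ** (snr_db / 10))
    noise *= np.sqrt(target_noise_power / np.mean(noise**2))
    return signal + noise

# Example: degrade 1 s of a synthetic tone to +5 dB SNR
fs = 16000
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noisy = add_white_noise(clean, snr_db=5.0)
measured = 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2))
print(f"measured SNR ~ {measured:.2f} dB")
```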
Another option is to combine optical and electrical stimulation. 42,43 Combined sub-threshold optical followed by electrical stimuli resulted in a reduction of thresholds below those needed for each modality alone. It could also be worth investigating whether optogenetic stimulation would benefit from preceding sub-threshold electric stimuli, as reported for the infrared neural stimulation (INS) hybrid approach. 115 In any case, however, this would require more advanced and elaborate hardware, resulting in technologically more complex CIs with a larger form factor of the implant and yet further implementations of sound coding strategies that could possibly take advantage of, e.g., lowered stimulation thresholds and a reduced overall power budget of the system.
TOWARD A CLINICAL SYSTEM
Following the pioneering single-channel eCI implantations, 117,118 teams in Melbourne (Graham Clark), San Francisco (Robin Michelson and Michael Merzenich), Vienna (Erwin Hochmair, Ingeborg Desoyer, and Kurt Burian), and Paris (Claude Henry Chouard and Patrick MacLeod) soon joined the race to deliver a commercial eCI system. By the end of the 1980s, the eCI had become a standard hearing rehabilitation device. 119 Now, around 60 years after the first CI implantation, the oCI concept has started on a similar path, and it might be another uphill battle, as the oCI must compete with the well-established eCI in terms of hearing quality, stability, risks, and costs to make it attractive to patients and health systems. In this case, not only does the device need to be tested in clinical trials on humans and approved, but also the optogenetic modification of the SGNs via AAV-mediated gene therapy. However, as the number of physical channels in the eCI, as well as speech recognition in quiet, has reached a steady state for over a decade, 6,120 it might be a good time to shift the focus of engineers toward oCIs. In a German survey study, hearing-impaired patients (26% bilateral) revealed a demand for improved CI hearing beyond their current experience with the eCI, mainly in terms of speech recognition in background noise, greater music appreciation, and a more natural sound impression. 15 Overcoming these shortcomings by increasing the number of channels using the oCI and implementation of new sound coding strategies seems worth the effort. As mentioned, substantial parts of the current technology can likely be adapted to work in oCIs. Hence, the focus of oCI development needs to be put mostly on the optical stimulation module and the implant driver (hardware development) and a new sound coding strategy (software development). Given the combination of gene therapy and a new medical device, the costs of the future oCI system will likely exceed those of today's eCI systems (average lifetime cost of €53,000 in the case of unilateral implantation 121). Nevertheless, if achieved, a better quality of life through improved speech and music perception could justify the cost difference. Aside from possible benefits for hearing restoration, other factors such as awareness and apprehension of gene therapy, aesthetics, or convenience in terms of battery lifetime will play a role for the patient's choice, too. Building trust based on good gene therapy education combined with the use of minimally invasive methods should clearly highlight the risk-to-benefit ratio of the oCI for future patients. 122 As the oCI would follow the form of the current eCI, aesthetics, important for many patients, would not be compromised relative to eCI systems. 123 Depending on the final decision on the emitter type (active or passive) to be used in the oCI, battery lifetime may vary. Nevertheless, careful design of the sound coding strategy in terms of stimulation patterns will help to extend this time. High demand beyond hearing aids and CIs will speed up the development of high-capacity rechargeable batteries.
RISKS AND ISSUES OF OPTICAL COCHLEAR IMPLANT TECHNOLOGY
The main objective of oCI development is an increased number of non-overlapping stimulation channels inside the cochlea to enable higher spectral resolution than in the eCI. Technological feasibility is suggested for active oCIs that could integrate 144 mLEDs on 12 mm (a third of the length of the human scala tympani). The insertion of active oCIs into the explanted mouse cochlea (5-fold smaller than the human cochlea) with 93 mLEDs inside the scala tympani, covering a tonotopic frequency range of 72.2 kHz (base) to 2.5 kHz (apex), was reported. 124 Moreover, functional SGN stimulation by oCI versions with fewer mLEDs was demonstrated in Mongolian gerbils. 28,50 Given the roughly 3 times larger human cochlea, it would seem amenable to implant oCIs of similar design. 124 Yet, the challenge for clinical translation of such active oCIs is a stable encapsulation that provides hermetic sealing of the optoelectronics and yet maintains optical transparency and mechanical flexibility. Also, due to the limited optoelectronic conversion efficiency of LEDs, active oCIs were shown to heat up in their core in the worst-case scenario, when LEDs were operated at much higher currents than necessary to elicit a behavioral response in rats. 75 Although the temperature rise around these 10-channel oCIs never exceeded 1 K at a distance of ~100 µm (less than the distance of the oCI to the SGNs), which is below the 2 K limit of the ISO 14708-1 standard for implantable medical devices, heat dissipation is an important concern. These problems do not plague the waveguide-based passive oCI implementation, and this advantage could make them a favorable choice for the first generation of clinical oCIs.
Due to the trade-off between ChR open time and energy requirements, sound coding strategies of the first-generation oCIs will likely operate at lower stimulation rates than eCIs. This could compromise the temporal fidelity of sound encoding, such as for sound localization, which, however, is also limited for the eCI (mostly due to the lack of synchronization of stimulation in bilateral eCIs). With a vector strength of 0.5 for stimulation rates of ≥200 Hz, we would expect the temporal precision of oCI coding to enable temporal fine structure coding, which has been implemented for low-frequency eCI channels with subtle improvements in speech understanding. 125 Encoding the sound envelope has been shown to operate well also with lower eCI rates, [126][127][128] which would seem amenable to oCI coding.
In order to meet current standards, oCIs will need to function for many years without failure, ideally covering the lifespan of the patient. In reality, the survival rate of eCIs strongly depends on device and patient age; device survival of up to 30 years has been observed in a 30-year analysis window, with MED-EL devices reaching a 10-year cumulative survival rate of 99% in adults and 97% in children. 129 Due to possible similarities in design between contemporary eCIs and future oCIs (Figure 3A), similar protection of internal components should be achievable. In such a case, lower longevity of the device could be expected mostly due to failure of the light-emitting elements. In the case of LEDs, the expected lifetime can reach up to 100,000 h (depending on construction 130,131), while for laser diodes up to 70,000 h (at around 40 °C in continuous-wave operation 132). Assuming continuous stimulation at a rate of 300 pps with 1-ms-long pulses, the oCI could reach decades-long operation: 40 years for LEDs or 25 years for laser diodes. Nevertheless, failures of individual emitters are to be expected in decades-long operation. In our unpublished preclinical work, we use oCIs with various extents of mLED failures, which in most cases were scattered randomly along the length of the oCI. Extended dropouts potentially leading to ''dead zones'' of stimulation were not observed, but their occurrence during decades-long operation cannot be excluded. This would reduce the oCI's capacity for gapless stimulation of the tonotopic array of SGNs. However, dropouts of electrodes are quite common in clinical eCIs and, if individual, can typically be coped with by the patients. Nonetheless, reimplantation is undertaken if electrode array failure is more substantial, and this also presents an option with future oCIs.
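The decades-long operation figures quoted above presumably follow from scaling the rated continuous-operation lifetime by the stimulation duty cycle (at 300 pps with 1-ms pulses the emitter is on 30% of the time). The short calculation below reproduces that estimate; treating emitter aging as a pure function of accumulated on-time is, of course, a simplification.

```python
rate_pps = 300            # pulses per second per channel
pulse_s = 1e-3            # 1-ms-long pulses
duty_cycle = rate_pps * pulse_s          # fraction of time the emitter is on (0.3)

rated_hours = {"LED": 100_000, "laser diode": 70_000}   # continuous-operation lifetimes
hours_per_year = 24 * 365

for emitter, hours in rated_hours.items():
    years = hours / duty_cycle / hours_per_year
    print(f"{emitter}: ~{years:.0f} years of continuous stimulation")
# LED: ~38 years, laser diode: ~27 years (consistent with the ~40 and ~25 years quoted)
```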
Longevity of the entire concept of optogenetic hearing restoration also depends on stable expression of ChRs. Unleashing the full potential of optogenetic hearing restoration will be best served by a rich complement of light-sensitive auditory nerve fibers. 43 Ideally, most if not all of the still available SGNs should be transduced. From the vision restoration efforts it seems that 30% ChR-expressing retinal ganglion cells are realistic and sufficient. 133 Nevertheless, efforts should be taken to increase the transduction rates by more efficient means of virus administration. Ideally, a standardized non-invasive method will help assess the functional ChR expression in the SGNs, e.g., by optical ABR measurements, as heavily used in preclinical studies. In case of insufficient transduction, redosing of AAV could be considered, and a procedure has been suggested based on a temporal bone study. 134 This might also be considered for patients who experience reduced efficiency of oCI function that cannot be fixed by refitting of the oCI.
Another critical point concerning oCI function is the orientation of the emitters for optimal optical stimulation. Irradiance at the level of the SGN somata in Rosenthal's canal was recently investigated as a function of emitter distance and orientation in an in silico study of intracochlear light propagation. 44 The results show that the irradiance follows the inverse-square law of optics when shifting the emitter toward or away from Rosenthal's canal and that the emitter orientation has a greater impact on irradiance for waveguide- than for mLED-based implants. If waveguide outcoupling with a low numerical aperture is chosen, the irradiance in the ganglion decreases with a change of the emitter orientation by ±30° relative to the direct vector from the emitter surface to Rosenthal's canal. Here, a good balance between efficiency of stimulation and robustness toward axial CI rotation should be considered. Although most solutions for preclinical oCIs are based on biocompatible materials, or at least encapsulated in such, insertion of a foreign body into the cochlea will cause growth of scar tissue. Computational modeling shows an attenuation of irradiance at the SGN somata, but due to forward scattering by scar tissue the spread of excitation was not significantly greater. 44 Efforts toward hearing preservation surgery and mitigation of the foreign body reaction will help to maintain high efficiency of optical stimulation. 136,137 In terms of design, reliable stimulation with less optical spread might be achieved using perimodiolar implants designed to stay in close proximity to the SGNs. 138,139 Preoperative imaging to plan a safe trajectory and robotic insertion are among future techniques that could help the oCI to achieve better performance. 140,141 Considering energy consumption, a custom-built preclinical CI system managed to achieve a similar time of operation for single-channel stimulation with eCI and mLED-based oCI (8 vs. 7 h, respectively) using general-purpose commercial off-the-shelf components. 55 Assuming twice as many channels for the oCI than for the eCI, this would result in half of the battery life for the oCI compared to the eCI. By using dedicated components and optimizing the system for clinical use, this should still stay comparable. Nevertheless, a waveguide-based oCI system using f-Chrimson 47 as the ChR would, for comparable coding strategies, require a 2-fold greater energy consumption than current eCIs. This could be improved by the development of less power-hungry laser diodes and/or ChRs with 5-10-times larger conductance that have been described recently. 142,143
Figure 1 .
Figure 1. Illustration of the cochlear implant (CI) system comparing the electrical CI (eCI) and the optical CI (oCI). The wide spread of electric current from each of the eCI electrodes is indicated, in comparison to the spatially confined optical stimulation with the ''active'' or ''passive'' oCI. The expected increased number of perceptually independent channels of the oCI and the possibility to use multiple channels at the very same moment in time are also depicted. Main parts of the system: (1) external behind-the-ear (BTE) sound processor, (2) inductive radio frequency (RF) link between external and internal part, (3) internal stimulator, (4) implant, (5) intracochlear array.
Figure 3 .
Figure 3. Hardware and software building blocks of CI systems. (A) Illustration of the contemporary two-piece design of eCI and future ''active'' or ''passive'' oCI systems showing similarities in design. Main parts of the system: (1) external behind-the-ear (BTE) sound processor, (2) magnetic radio frequency (RF) transfer link between external and internal part, (3) internal stimulator, (4) implant, (5) stimulation array. Electrically active components are highlighted in red for the eCI and the active (LED-based) as well as passive (waveguide-based) oCIs. (B) Scheme of sound coding into the artificial stimulation of the cochlea with CI systems.
Figure 4 .
Figure 4. Extracellular recordings of firing rates from single putative SGNs in response to optogenetic stimulation. (A) Spiking activity from a putative SGN unit expressing Chronos in response to 400-ms-long trains of laser pulses (30 mW, 1 ms for <700 Hz or 500 µs for ≥700 Hz) at different frequencies. (B) Activity of an SGN unit expressing f-Chrimson in response to 900-ms-long trains of laser pulses (1 ms). (C) Traces from a putative SGN unit expressing vf-Chrimson in response to 400-ms-long trains of laser pulses (43 mW, 1 ms). Modified from ref. [46][47][48]
Figure 5 .
Figure 5. Emittograms representing the output signals of sound coding strategies with different stimulation modes for eCI or oCI. (A) Interleaved stimulation at 500 pps/channel. (B) Parallel stimulation at 500 pps/channel. (C) Parallel stimulation at 5000 pps/channel. The top panels show the activation patterns for an audio sample of the word ''choice''; the middle and bottom panels are zoomed-in plots. The sound processing was based on the CIS strategy for ten channels. The vertical lines within the plots represent the onset of the eCI or oCI pulses and not the actual pulses. Parallel stimulation offers a longer pulse duration, or a higher pulse rate, or a combination of both.
Figure 6 .
Figure 6. Objective intelligibility measure for different numbers of spectral channels. The fractional articulation index (fAI) score was calculated from a comparison of audio resynthesized using the Analysis & Resynthesis Sound Spectrograph (ARSS) to the original audio from 10 speakers with 3842 utterances in total. White noise at a signal-to-noise ratio (SNR) of +5 dB was added to each file. The stimulation rate was set to 500 pps/channel. Each boxplot displays the median, the lower and upper quartiles, and the minimum and maximum values that are not outliers. Outliers (not shown) were computed using the interquartile range (points above the upper quartile +1.5 times the distance between the upper and lower quartile, or below the lower quartile −1.5 times that distance). | 9,835 | 2023-08-01T00:00:00.000 | [
"Physics"
] |
The rate of convergence of a generalization of Post–Widder operators and Rathore operators
In this paper, we study local approximation properties of certain gamma-type operators. They generalize the Post–Widder operators and the Rathore operators, and approximate locally integrable functions satisfying a certain growth condition on the infinite interval [0, ∞). We derive the complete asymptotic expansion for these operators and prove a localization result. Also, we estimate the rate of convergence for functions of bounded variation.
The Post–Widder operators P_n were intensively studied by several authors [3,4,9]. In recent years, several authors defined and studied variants of the Post–Widder operator which preserve several test functions [5-7, 13, 16]. In order to include the similar operator by Rathore [12] (see below), we study in this paper a more general gamma-type operator depending on a positive parameter, which includes both the Post–Widder operators and the Rathore operators as special cases.
Let E be the class of all locally integrable functions of exponential type on [0, +∞) with the property |f(t)| ≤ M e^{At} (t ≥ 0) for some finite constants M, A > 0. The gamma-type operators P_{n,c} (cf. [10, Eq. (3.3)]) associate to each f ∈ E the function P_{n,c} f, where c is a positive parameter. We emphasize the fact that c may depend on the variable x. Note that the integral exists if nc > A. The definition can be rewritten in an integral form with the kernel function φ_{n,c}(x, t) defined in (1.3). In the special case c = 1 these operators reduce to the Rathore operators R_n ≡ P_{n,1}, given by [10, Eq. (3.6)]. If we substitute c = 1/x, we obtain the Post–Widder operators (1.1).
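A reconstruction of the operator that is consistent with the moment identities quoted later in the paper (P_{n,c} e_0 = e_0, P_{n,c} e_1 = e_1, P_{n,c} e_2(x) = x² + x/(nc)) and with the stated special cases c = 1 (Rathore) and c = 1/x (Post–Widder) reads as follows in LaTeX; the paper's own equations (1.1)-(1.3) may use a different but equivalent normalization.

```latex
% Reconstructed gamma-type operator (consistent with the quoted moments):
\[
  P_{n,c}f(x) \;=\; \frac{(nc)^{ncx}}{\Gamma(ncx)} \int_0^{\infty} t^{\,ncx-1}\, e^{-nct}\, f(t)\, \mathrm{d}t ,
  \qquad x>0,\; nc>A,
\]
% equivalently, with kernel
\[
  P_{n,c}f(x) \;=\; \int_0^{\infty} \varphi_{n,c}(x,t)\, f(t)\, \mathrm{d}t ,
  \qquad
  \varphi_{n,c}(x,t) \;=\; \frac{(nc)^{ncx}}{\Gamma(ncx)}\, t^{\,ncx-1} e^{-nct}.
\]
% Special cases:
% c = 1   : Rathore operators      R_n f(x) = \frac{n^{nx}}{\Gamma(nx)} \int_0^\infty t^{nx-1} e^{-nt} f(t)\,dt
% c = 1/x : Post--Widder operators P_n f(x) = \frac{(n/x)^{n}}{(n-1)!} \int_0^\infty t^{n-1} e^{-nt/x} f(t)\,dt
```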
In this paper we derive the complete asymptotic expansion for the sequence of operators P_{n,c} in the form (1.4). The coefficients a_k(f, c, x), which are independent of n, will be given in an explicit form. It turns out that associated Stirling numbers of the first kind play an important role. As a special case we obtain the complete asymptotic expansion for the Rathore operators R_n and for the Post–Widder operators P_n. Secondly, we study the rate of convergence of the sequence P_{n,c} f(x) as n → ∞ for functions of bounded variation. More precisely, we present an estimate of the difference P_{n,c} f(x) − (f(x+) + f(x−))/2.
Main results
For q ∈ N and x ∈ (0, ∞), let K[q; x] be the class of all functions f ∈ E which are q times differentiable at x. The following theorem presents, as our main result, the complete asymptotic expansion for the operators P_{n,c}. Theorem 2.1. Let q ∈ N and x ∈ (0, ∞). For each function f ∈ K[2q; x], the operators P_{n,c} possess an asymptotic expansion as n → ∞, where s_2(j, i) denote the associated Stirling numbers of the first kind.
The associated Stirling numbers of the first kind can be defined by their double generating function. In particular, we obtain a Voronovskaja-type formula. In the special case c = 1 we have the complete asymptotic expansion for the Rathore operators, and in the special case c = 1/x we have the complete asymptotic expansion for the Post–Widder operators. Our second main result is an estimate of the rate of convergence for functions f ∈ E which are of bounded variation (BV) on each finite subinterval of (0, ∞).
Theorem 2.2
Let f ∈ E be a function of bounded variation on each finite subinterval of (0, ∞). Then, for each x > 0, we have an estimate of the difference P_{n,c} f(x) − (f(x+) + f(x−))/2. For the proofs of Theorems 2.1 and 2.2 we need a localization result for the operators P_{n,c}. Since it is interesting in itself, we state it as a theorem.
The constant β can be chosen to be
Auxiliary results and proofs
Firstly, we study the moments of the operators P_{n,c}. Throughout the paper, let e_r denote the monomials given by e_r(x) = x^r (r = 0, 1, 2, ...). Furthermore, define ψ_x(t) = t − x. In the following, the quantities $\genfrac{[}{]}{0pt}{}{m}{j}$ denote the unsigned Stirling numbers of the first kind. We recall some known facts about Stirling numbers which will be useful in the sequel. The Stirling numbers of the first kind possess the representation (3.1). In particular, we have P_{n,c} e_0 = e_0, P_{n,c} e_1 = e_1 and
\[
  P_{n,c}e_2(x) = x^2 + \frac{x}{nc}.
\]
Application of formula (3.1) yields
\[
  P_{n,c}e_r(x) = \frac{1}{(nc)^r} \sum_{j=0}^{r} \genfrac{[}{]}{0pt}{}{r}{j} (ncx)^j ,
\]
and the index transform j = r − k completes the proof.
Lemma 3.2
The central moments of the operators P_{n,c} admit an explicit representation. In particular, we have P_{n,c} ψ_x^0(x) = 1, P_{n,c} ψ_x^1(x) = 0 and P_{n,c} ψ_x^2(x) = x/(nc). Proof. Application of the binomial formula to the central moments and an index shift r → r + k yields the desired representation.
Proof. Taking advantage of formula (3.2) we obtain the desired representation. Note that $\binom{r+k}{i+k} = 0$ for i > r. Using the binomial identity, the inner sum is to be read as zero if i > j − k, which completes the proof.
In order to derive Theorem 2.1, a general approximation theorem due to Sikkema [14, Theorem 3] (see also [15]) will be applied. For j ∈ N and x > 0, let H^{(j)}(x) denote the class of all locally bounded real functions f : [0, ∞) → R which are j times differentiable at x and satisfy the additional condition f(t) = O(t^{-j}) as t → +∞. An inspection of the proof of Sikkema's result reveals that it can be stated in the following form, which is more appropriate for our purposes.
In the application used in the proof of Theorem 2.1, we restrict H ( j) (x) to consist only of locally integrable functions. We proceed with the proof of the localization result (Theorem 2.3), which will be applied in the proofs of Theorems 2.1 and 2.2.
Proof of Theorem 2.3 Let
say, where s = nc > 0, and where γ and Γ denote the lower and the upper incomplete gamma function, respectively. We use the well-known asymptotic behaviour of the incomplete gamma functions for large parameters z and b. It holds [17, Eq. (7.3.18)], as z, b → ∞ such that the ratio λ = b/z is bounded away from unity, i.e., λ ≤ λ_0 < 1, where λ_0 is a fixed number in (0, 1). Similarly, it holds [17, Eq. (7.4.43)], as z, b → ∞ such that the ratio α = z/b is bounded away from unity, i.e., α ≤ α_0 < 1. If δ = x the integral I_1 vanishes. Let us consider the case δ < x. The asymptotic relation above yields an estimate for I_1, and application of Stirling's formula leads to the required bound. The latter inequality is equivalent to the obvious inequality 2t ≤ log((1+t)/(1−t)) = 2(t + t³/3 + t⁵/5 + t⁷/7 + ···), for t = δ/x ∈ [0, 1). Combining the above results we obtain the desired estimate with the constant β = β_2.
Proof of Theorem 2.1 Let x > 0 and put
Interchanging the order of summation, we obtain the desired expansion (1.4) with the associated Stirling numbers of the first kind s_2(i, j) as defined in Eq. (3.2).
Now we turn to the estimate of the rate of convergence for BV functions. For the proof of Theorem 2.2 we apply the following properties of the kernel function φ_{n,c}(x, t) as defined in (1.3).
Lemma 3.5
The kernel function φ n,c (x, t) satisfies the following estimates: The second estimate is obtained in an analogous manner.
Lemma 3.6
For fixed x > 0,
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License; to view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2,121 | 2023-05-25T00:00:00.000 | [
"Mathematics"
] |
TIMELESS promotes reprogramming of glucose metabolism in oral squamous cell carcinoma
Background Oral squamous cell carcinoma (OSCC), the predominant malignancy of the oral cavity, is characterized by high incidence and low survival rates. Emerging evidence suggests a link between circadian rhythm disruptions and cancer development. The circadian gene TIMELESS, known for its specific expression in various tumors, has not been extensively studied in the context of OSCC. This study aims to explore the influence of TIMELESS on OSCC, focusing on cell growth and metabolic alterations. Methods We analyzed TIMELESS expression in OSCC using western blot, immunohistochemistry, qRT-PCR, and data from The Cancer Genome Atlas (TCGA) and the Cancer Cell Line Encyclopedia (CCLE). The role of TIMELESS in OSCC was examined through clone formation, MTS, cell cycle, and EdU assays, alongside subcutaneous tumor growth experiments in nude mice. We also assessed the metabolic impact of TIMELESS by measuring glucose uptake, lactate production, oxygen consumption, and medium pH, and investigated its effect on key metabolic proteins including silent information regulator 1 (SIRT1), hexokinase 2 (HK2), pyruvate kinase isozyme type M2 (PKM2), recombinant lactate dehydrogenase A (LDHA) and glucose transporter-1 (GLUT1). Results Elevated TIMELESS expression in OSCC tissues and cell lines was observed, correlating with reduced patient survival. TIMELESS overexpression enhanced OSCC cell proliferation, increased glycolytic activity (glucose uptake and lactate production), and suppressed oxidative phosphorylation (evidenced by reduced oxygen consumption and altered pH levels). Conversely, TIMELESS knockdown inhibited these cellular and metabolic processes, an effect mirrored by manipulating SIRT1 levels. Additionally, SIRT1 was positively associated with TIMELESS expression. The expression of SIRT1, HK2, PKM2, LDHA and GLUT1 increased with the overexpression of TIMELESS levels and decreased with the knockdown of TIMELESS. Conclusion TIMELESS exacerbates OSCC progression by modulating cellular proliferation and metabolic pathways, specifically by enhancing glycolysis and reducing oxidative phosphorylation, largely mediated through the SIRT1 pathway. This highlights TIMELESS as a potential target for OSCC therapeutic strategies. Supplementary Information The online version contains supplementary material available at 10.1186/s12967-023-04791-3.
Introduction
Head and neck squamous cell carcinoma (HNSC), a significant global health concern, is most commonly and severely manifested as oral squamous cell carcinoma (OSCC) [1][2][3]. Alcohol, tobacco, betel, human papillomavirus, poor hygiene, and diet are the best-known risk factors for OSCC [3][4][5]. Presently, treatment options for OSCC predominantly include surgery, radiotherapy, chemotherapy, and immunotherapy. However, for patients suffering from recurrent, metastatic, and advanced stages of OSCC, treatment efficacy is notably poor, primarily involving radiotherapy and chemotherapy [6]. Furthermore, these treatment modalities are often associated with substantial side effects, complicating the management of OSCC patients [7]. Despite advancements in OSCC therapies, the overall prognosis for OSCC patients remains grim. Consequently, early diagnosis and the identification of critical molecular pathways in OSCC are imperative for enhancing patient prognosis and guiding treatment strategies.
Cancer development is a complex, multi-faceted process influenced by a network of genetic and metabolic factors. Intracellular metabolic disturbances, involving a variety of kinases, metabolic pathways, and epigenetic modulators, play a central role in the initiation and progression of cancer [8]. A hallmark of many cancers, including OSCC, is the upregulation of glycolysis, driven by the aberrant regulation of glycolytic enzymes and glucose metabolism pathways [9].
Circadian rhythms, inherent in most organisms, regulate an array of cellular, metabolic, physiological, and behavioral activities in mammals [10]. These rhythms are orchestrated at the molecular level by a series of core circadian genes, including CLOCK, ARNTL, CRY1, TIMELESS, PER1, and NPAS2, forming a transcriptional and translational feedback loop [11]. The link between disrupted circadian rhythms and cancer pathogenesis has emerged as a focal area of research. Studies have shown that anomalies in circadian rhythm can disrupt normal physiological processes, potentially leading to tumorigenesis and cancer progression [12]. For instance, diminished PER1 expression in OSCC has been associated with advanced disease stages and decreased 5-year survival rates [13,14]. PER1's role extends to influencing cellular metabolic pathways, such as glycolysis, via the PI3K/AKT signaling pathway, thereby impacting OSCC development and progression [15].
Several studies have highlighted the prominent expression of the TIMELESS gene in various cancer types. Specifically, high expression of TIMELESS has been observed in human breast cancer tissues, with two associated SNPs (rs2291738 and rs7302060) linked to an increased risk of breast cancer [16]. Functional studies further reveal that reducing TIMELESS levels significantly curtails the proliferation of the breast cancer cell line MCF-7 [17]. Similarly, in cervical cancer, TIMELESS overexpression is associated with a higher risk of recurrence and poorer recurrence-free survival rates, suggesting its potential as an independent prognostic marker in the early stages of this cancer [18]. Moreover, TIMELESS has been implicated in the development and progression of various other cancers, including nasopharyngeal carcinoma, prostate cancer, lung cancer, colorectal cancer, and kidney cancer [19][20][21][22][23]. In line with these findings, our preliminary studies indicate a specific upregulation of TIMELESS in OSCC tissues, correlated with a negative impact on prognosis. The objective of our current research is to explore the role of TIMELESS in OSCC. We aim to delineate the relationship between TIMELESS expression and OSCC development, thereby understanding its potential influence on the progression of this malignancy.
Public data and clinical samples collection
We utilized the TCGA and the CCLE databases to study the expression patterns of core circadian genes in HNSC patients and corresponding cell lines. In addition, with ethical clearance from the Air Force Medical University's Ethics Committee, we systematically collected 133 tissue samples of OSCC and their adjacent noncancerous tissues. These samples were obtained from patients treated at the Second Affiliated Hospital of Air Force Medical University between January 2021 and December 2022. Informed consent was obtained from all patients before sample collection. We then analyzed the relationship between the expression of TIMELESS and the prognosis of OSCC patients by collecting prognosis data for all patients.
To assess the role of TIMELESS, SCC-15 cells were infected with a TIMELESS overexpression lentivirus, while SCC-9 cells were transfected with a TIMELESS knockdown lentivirus. Corresponding control groups were also established for comparative analysis. The lentiviruses were developed by Shanghai OBiO Biotechnology Co. Ltd., and the SIRT1 overexpression vector was created by Gene Pharma (Shanghai, China). We utilized the GL401 vector (pcSLenti-U6-shRNA-CMV-puro-WPRE) for generating the TIMELESS shRNA lentivirus and control lentivirus, and the GL186 vector (pcSLenti-CMV-MCS-3xflag-PGK-Puro-WPRE) for the TIMELESS overexpression lentivirus and its empty vector control. Transfection procedures were carried out according to the manufacturer's instructions. Post-transfection, cells stably expressing the desired genes were selected using puromycin (Beyotime, China) for five days. The transfection efficiency was verified using quantitative real-time PCR (qPCR) and Western blot analysis.
Each experiment was replicated three times. The relative expression levels of the target genes were calculated using the 2^−ΔΔCt method.
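For readers unfamiliar with the 2^−ΔΔCt calculation referenced here, a minimal sketch is given below; the Ct values, the reference gene, and the calibrator sample are placeholders for illustration, not data from this study.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene), computed for sample and control;
    ddCt = dCt(sample) - dCt(control); fold change = 2 ** (-ddCt).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a target gene vs. beta-actin in tumor and adjacent tissue
print(fold_change_ddct(ct_target_sample=24.1, ct_ref_sample=16.0,
                       ct_target_control=26.6, ct_ref_control=16.2))
# ~4.9-fold higher relative expression in the tumor sample (illustrative numbers only)
```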
Hematoxylin and Eosin (HE) and Immunohistochemistry (IHC) Staining
For histological examination, all collected tissue specimens underwent formalin fixation and were embedded in paraffin. Hematoxylin and eosin (HE) staining was performed using the Solarbio kit (G1120, China) to assess the general tissue morphology. For immunohistochemical analysis, we adhered to the established staining protocols as outlined in previous literature [24] and the manufacturer's instructions provided by MXB (KIT-9730, China).
The processed tissue sections were first deparaffinized and rehydrated. Antigen retrieval was then performed using a citrate-based solution. Following this, sections were incubated in a sequential manner with appropriate primary and secondary antibodies. The visualization of the target antigens was achieved using diaminobenzidine (DAB) as the chromogen, followed by counterstaining with hematoxylin to highlight the nuclei. Quantification of immunostaining was based on two key criteria: the proportion of positively stained cells and the intensity of staining. The proportion of positive cells was scored on a scale from 0 to 4: 0 for less than 10% positive cells, 1 for 10-25%, 2 for 26-50%, 3 for 51-75%, and 4 for over 75% positive cells. Staining intensity was graded as follows: 0 indicating no staining, 1 for weak staining, 2 for moderate staining, and 3 for strong staining. A composite staining score, ranging from 0 to 12, was then calculated by multiplying the intensity score by the proportion score. For the immunohistochemistry assays, we used the Ki67 antibody (AF0198) from Affinity Biosciences, and the PCNA antibody (60097-1-Ig), SIRT1 antibody (13161-1-AP), HK2 antibody (66974-1-Ig), PKM2 antibody (60268-1-Ig), GLUT1 antibody (66290-1-Ig), LDHA antibody (66287-1-Ig), and TIMELESS antibody (14421-1-AP) from Proteintech Group, Inc.
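The composite immunostaining score described above (intensity grade multiplied by proportion grade, giving 0-12) is simple to encode; the snippet below implements exactly the bins given in the text, with an example percentage and intensity chosen purely for illustration.

```python
def proportion_score(percent_positive):
    """Map % positively stained cells to the 0-4 proportion grade used in the text."""
    if percent_positive < 10:
        return 0
    if percent_positive <= 25:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 75:
        return 3
    return 4

def composite_ihc_score(percent_positive, intensity):
    """Composite score (0-12) = intensity grade (0-3) x proportion grade (0-4)."""
    assert intensity in (0, 1, 2, 3)
    return intensity * proportion_score(percent_positive)

# Example: 60% positive cells with moderate staining intensity
print(composite_ihc_score(60, intensity=2))  # 3 * 2 = 6
```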
Western blot analysis
Cellular proteins were isolated using RIPA lysis buffer (Beyotime, P0013B, China), supplemented with phenylmethylsulfonyl fluoride (PMSF, Beyotime, ST506, China) for protease inhibition.The total protein content was quantified using the BCA Protein Assay Kit (Beyotime, P0012S, China).These proteins were then resolved by SDS-PAGE and electrotransferred onto PVDF membranes, with the transfer tailored to the molecular weights of the proteins of interest.β-actin served as an internal reference for protein loading and transfer efficiency.
Protein bands were visualized using the ECL luminescent reagent (Beyotime, P0018FS, China).The Image J software was employed for the development and quantitative analysis of the bands, specifically assessing their gray value intensities.This process enabled an accurate determination of the relative expression levels of the targeted proteins in the samples.
MTS cell viability assay
For the MTS assay, we seeded (0.5-1) × 10 4 cells per well in a 96-well microplate, each well containing 100 μL of culture medium, and incubated them under standard conditions.After 24 h, 10 μL of MTS solution (Bestbio, BB-4204, China) was added to each well.The plates were then incubated at 37 °C for a duration ranging from 1 to 4 h.Cell viability was determined by measuring the absorbance at 490 nm, representing the optical density (OD) of each well.
Clonal formation experiment
In the clonal formation assay, we plated 1000 cells in each well of 6-well plates.These cells were cultured for approximately two weeks, after which they were stained with crystal violet (Beyotime, C0121, China) to visualize and count the colonies.Each experimental condition was replicated in three separate wells to ensure consistency and reliability of the results.
5-Ethynyl-2′-deoxyuridine (EdU) assay
For the EdU assay, a cell density of 4 × 10 3 to 1 × 10 5 cells per well was maintained in 96-well plates.The cells were treated with 50 μM EdU (Ribobio, C10310-1, China) for 2 h, then fixed and stained using the Apollo and Hoechst solutions.The percentage of EdU-positive cells, indicating active DNA synthesis, was assessed under a live-cell imaging system.
Cell cycle analysis
To analyze the cell cycle, (5-10) × 10 6 cells were harvested, washed twice with cold PBS, and fixed in 70-90% ethanol at − 20 °C overnight.Post-fixation, cells were washed and resuspended in PBS, treated with 20 μL of RNase A for 30 min at 37 °C, and stained with 400 μL of propidium iodide (PI) solution (Bestbio, BB-4101, China) for 30 min in the dark at 4 °C.The stained cells were then subjected to flow cytometric analysis to determine the distribution of cells across different phases of the cell cycle.
Measurement of glucose uptake, lactate production, pH, and oxygen consumption rate
Cells were seeded in 96-well plates at a density of 2000 cells per well.After washing thrice with PBS, glucose uptake was assessed using the Glucose Uptake Colorimetric Assay Kit (Sigma, MAK083, USA).Lactate secretion in the cell culture medium was quantified using the Lactate Assay Kit (Sigma, MAK065, USA).Both assays' results were normalized to the total protein content of each sample.The pH of the medium was measured with a pH meter (Bohlertech Technology, China), and the oxygen consumption rate (OCR) of the cells was determined using the Hansatech Oxytherm system (Hansatech, UK).These experiments were performed in triplicate and replicated three times to ensure consistency.
Subcutaneous tumor xenografts in nude mice
Male nude mice (BALB/c), aged 4-6 weeks, were acquired from the Experimental Animal Center of the Air Force Medical University. These mice were housed in a specific pathogen-free facility and randomly divided into two groups, each comprising five mice. Following a week of acclimatization, SCC-9 cells, either with TIMELESS knockdown or control cells (1 × 10^7), suspended in diluted Matrigel Matrix (BD, 354234, USA), were subcutaneously injected into the flanks of the mice. The growth of the tumors was monitored every five days, with tumor volume measurements recorded. Tumor volume was calculated using the formula: width² × length × 0.5.
The health and mortality of the mice were monitored throughout the experiment.At the study's conclusion, tumors were excised, weighed, photographed, and processed for formalin fixation and paraffin embedding.This experimental protocol was approved by the Ethics Committee of the Air Force Medical University.
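For illustration, the tumour volume formula used above (width² × length × 0.5) is shown below as a short sketch; the caliper measurements are hypothetical.

```python
def tumor_volume(width_mm: float, length_mm: float) -> float:
    """Tumour volume (mm^3) from caliper measurements: width^2 x length x 0.5."""
    return width_mm ** 2 * length_mm * 0.5

# Hypothetical series of (width, length) measurements taken every five days.
measurements = [(3.1, 4.0), (4.2, 5.5), (5.0, 6.8), (6.1, 8.0)]
volumes = [tumor_volume(w, l) for w, l in measurements]
print([round(v, 1) for v in volumes])
```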
Statistical analysis
The analysis of all collected data was conducted using GraphPad Prism version 9.0 software (GraphPad Software Inc., La Jolla, CA, USA). Results were presented as mean values ± standard error of the mean (SEM). For comparisons within the same group, the paired t-test was employed, while the unpaired t-test was utilized for analyzing differences between two separate groups. In instances involving more than two groups, a one-way analysis of variance (ANOVA) was applied. For survival analyses, the log-rank test was used to compare survival curves. Correlation analysis was performed using Pearson's correlation. A p-value of less than 0.05 was set as the threshold for statistical significance.
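The comparisons described above can be reproduced in outline with standard SciPy routines; the sketch below uses placeholder arrays, and the log-rank survival comparison is omitted because it requires a dedicated survival-analysis package rather than SciPy.

```python
# Sketch of the statistical comparisons described above, with placeholder data only.
import numpy as np
from scipy import stats

tumor = np.array([2.1, 2.8, 3.0, 2.5, 2.9])      # e.g. expression in tumour tissue
adjacent = np.array([1.2, 1.5, 1.1, 1.4, 1.3])   # matched adjacent tissue

# Paired t-test for matched within-group comparisons (tumour vs. adjacent tissue).
t_paired, p_paired = stats.ttest_rel(tumor, adjacent)

# Unpaired t-test for two independent groups (e.g. knockdown vs. control).
group_a = np.array([0.9, 1.1, 1.0, 1.2])
group_b = np.array([1.8, 2.1, 1.9, 2.2])
t_unpaired, p_unpaired = stats.ttest_ind(group_a, group_b)

# One-way ANOVA for more than two groups.
f_stat, p_anova = stats.f_oneway(group_a, group_b, np.array([2.9, 3.1, 3.0, 3.3]))

# Pearson correlation (e.g. between two mRNA levels across samples).
r, p_corr = stats.pearsonr(tumor, adjacent)

print(p_paired < 0.05, p_unpaired < 0.05, p_anova < 0.05, round(r, 3))
```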
Upregulation of TIMELESS in OSCC and its correlation with poor prognosis
Our initial investigation focused on the expression of core circadian genes in head and neck squamous cell carcinoma (HNSC) using data from the TCGA database. This analysis revealed a notable increase in the expression levels of ARNTL, CRY1, and TIMELESS, contrasted with decreased expression of NPAS2 and PER2 in HNSC. Notably, TIMELESS exhibited the most significant upregulation among these genes (P = 1.624E-12) (Fig. 1A). To further validate these findings, we examined OSCC patient samples. Through qRT-PCR, Western blot, and IHC analyses, we observed a consistent pattern: TIMELESS expression was markedly higher in OSCC tissues than in adjacent non-cancerous tissues (Fig. 1B-E). Based on these observations, the role of TIMELESS expression in the prognosis of OSCC was investigated. TIMELESS expression was positively associated with clinical stage (P = 0.037), but not with other factors, including sex, age, and neoplasm histologic grade (Additional file 1: Table S1). The survival probability of patients with high TIMELESS expression was significantly lower than that of patients with low TIMELESS expression (P = 0.025) (Fig. 1F). This indicates a potential prognostic significance of TIMELESS for patients with OSCC.
TIMELESS Enhances OSCC cell survival in vitro
We embarked on examining the role of TIMELESS in OSCC both in vitro and in vivo. Initially, we assessed the mRNA expression of TIMELESS in various OSCC cell lines using the CCLE database, which revealed heightened expression levels (Fig. 2A). Subsequent Western blot analysis confirmed that TIMELESS protein expression was significantly higher in five OSCC cell lines (SCC-9, CAL-27, SCC-4, SCC-15, and SCC-25) compared to normal oral keratinocytes (NOK) (Fig. 2B). Based on these findings, SCC-15 cells, with lower native expression, were selected for overexpression studies, while SCC-9 cells, exhibiting higher TIMELESS expression, were used for knockdown experiments. The resulting stably transfected cell lines were validated using qRT-PCR and Western blot analysis (Fig. 2C and D).
To assess cell proliferation, we conducted colony formation, MTS, and EdU assays on OSCC cell lines. Our findings revealed that knocking down TIMELESS in SCC-9 cells led to a significant reduction in cell growth, as evidenced by diminished colony formation and a lower number of EdU-positive cells, in comparison to control groups. Conversely, overexpressing TIMELESS in SCC-15 cells resulted in enhanced colony formation, increased cell proliferation, and a higher count of EdU-positive cells (Fig. 2E, G). Moreover, we also overexpressed TIMELESS in SCC-9 cells and knocked down TIMELESS in SCC-15 cells. The stably transfected cell lines were screened out successfully and verified by qRT-PCR and Western blot (Additional file 1: Fig. S1A and B). To assess proliferative capacity, colony formation and MTS assays were performed. These results showed that, compared to the corresponding control groups, overexpression of TIMELESS promoted colony formation and cell growth in SCC-9 cells, whereas knockdown of TIMELESS decreased cell growth and weakened colony formation in SCC-15 cells (Additional file 1: Fig. S1C and D).
The influence of TIMELESS modulation was also evident in the cell cycle distribution. Specifically, TIMELESS knockdown in SCC-9 cells increased the proportion of cells in the G1 phase while decreasing those in the S phase. In contrast, overexpressing TIMELESS in SCC-15 cells showed an inverse effect, with a decreased G1 phase population and an increased S phase population (Fig. 2H). This shift in cell cycle phases was accompanied by corresponding changes in the expression of cell cycle-related proteins. In SCC-9 cells, TIMELESS knockdown led to decreased levels of Cyclin D1, Cyclin E1, CDK2, and CDK4. On the other hand, TIMELESS overexpression in SCC-15 cells upregulated these proteins (Fig. 2I). These results suggest that TIMELESS may play a role in accelerating cell division and facilitating the transition from the G1 to the S phase of the cell cycle. Additionally, we evaluated the effect of TIMELESS modulation on apoptosis in these cell lines. Interestingly, no significant changes were observed in the number of apoptotic cells, indicating that the influence of TIMELESS on OSCC cells primarily pertains to proliferation rather than apoptosis (Additional file 1: Fig. S2).
Inhibition of OSCC tumor growth by TIMELESS knockdown in vivo
In vivo studies were conducted using a nude mouse model to assess the effects of TIMELESS knockdown on OSCC growth. We injected SCC-9 control cells and SCC-9 cells with stable TIMELESS knockdown subcutaneously into nude mice. This approach resulted in a 100% tumor formation rate in all 10 mice, with no instances of spontaneous mortality observed during the experiment. The growth of these tumor cells under the skin led to the formation of visible tumor masses (Fig. 3A). Comparative analysis between the control group and the TIMELESS knockdown group revealed a notable reduction in tumor size and weight in the latter, indicating that TIMELESS knockdown effectively inhibited tumor growth in the xenograft model (Fig. 3B-C). Additionally, the lifespan of the mice subjected to TIMELESS knockdown was significantly extended (P = 0.033) (Fig. 3D). Notably, there were no substantial differences in the overall body weight of the mice across the treatment groups (Fig. 3E), suggesting that the effects of TIMELESS knockdown were specific to tumor growth rather than general health. Histological examination of the xenografts further supported these findings. Tumors developed from SCC-9 cells with stable TIMELESS knockdown showed a marked decrease in Ki67 and PCNA staining, reflecting a lower cellular proliferation rate in these tumors (Fig. 3F-H). This decrease in proliferative markers confirms the critical role of TIMELESS in driving OSCC tumor growth in vivo.
TIMELESS modulates glucose metabolism in OSCC cells
In exploring the role of TIMELESS in cellular metabolism, we focused on its impact on glucose metabolism, a key aspect of tumor cell proliferation.We conducted glucose metabolic phenotype assays on stably transfected OSCC cells, measuring glucose uptake, lactate production, cell media pH, and oxygen consumption rate as per the manufacturer's protocols.Our results indicated that TIMELESS knockdown in SCC-9 cells led to a decrease in both glucose uptake and lactate production.In contrast, overexpressing TIMELESS in SCC-15 cells enhanced these glycolytic processes (Fig. 4A, B).Additionally, TIMELESS knockdown was associated with an increase in oxygen consumption rate and pH in SCC-9 cells, suggesting a shift towards oxidative phosphorylation.Conversely, overexpressing TIMELESS in SCC-15 cells resulted in reduced oxygen consumption and lower pH levels, indicative of enhanced glycolysis (Fig. 4C, D).We also assessed the metabolic phenotype of fatty acids in OSCC cells but did not observe significant differences in the levels of free fatty acids, cholesterol, or phospholipids (Additional file 1: Fig S3).Further analysis revealed that overexpressing TIMELESS in SCC-9 cells promoted glucose uptake and lactate production, while its knockdown in SCC-15 cells had the opposite effect (Additional file 1: FigS4A, B).Similarly, TIMELESS overexpression in SCC-9 cells reduced oxygen consumption and pH, whereas its knockdown in SCC-15 cells led to increased oxygen consumption and pH (Additional file 1: FigS4C, D).These findings suggest that TIMELESS plays a significant role in promoting glycolysis and inhibiting mitochondrial oxidative phosphorylation in OSCC cells, thereby contributing to the metabolic reprogramming associated with tumor growth and survival.
TIMELESS augments glycolysis via upregulation of key metabolic molecules
Having established TIMELESS's role in altering glucose metabolism in OSCC cells, we next sought to understand the underlying mechanisms.We analyzed the correlation between TIMELESS expression and key metabolic genes, including SIRT1, HIF1A, and MYC, using data from the TCGA HNSC database.This analysis revealed a significant positive correlation between TIMELESS and SIRT1 levels (P = 0, r = 0.48) (Fig. 5A).Further analysis of OSCC tissues reinforced this correlation, showing a significant positive relationship between TIMELESS and SIRT1 mRNA levels (P = 0.0106, r = 0.4392) (Fig. 5B).Western blot and qRT-PCR analyses further supported these findings, demonstrating that SIRT1 expression increased with TIMELESS overexpression and decreased following its knockdown (Fig. 5C and D).Immunohistochemistry (IHC) in tumor tissues from nude mice also showed reduced SIRT1 expression upon TIMELESS knockdown (Fig. 5E).These observations led us to hypothesize that TIMELESS might regulate glucose metabolism predominantly through SIRT1 modulation.
To test this hypothesis, we overexpressed SIRT1 in TIMELESS knockdown SCC-9 cells, confirming the transfection efficiency by Western blot (Fig. 5F).Results showed that TIMELESS knockdown decreased glucose uptake and lactate production in OSCC cells, which was reversed by SIRT1 overexpression (Fig. 5G and H).Similarly, assays measuring oxygen consumption rate and pH indicated that TIMELESS knockdown increased these parameters in OSCC cells, whereas SIRT1 overexpression attenuated these effects (Fig. 5I and J).
We further investigated whether TIMELESS influences the expression of glycolytic genes such as HK2, PKM2, LDHA, and GLUT1 through SIRT1.Our data showed that overexpression of TIMELESS significantly increased the expression of these glycolytic enzymes, which was reduced upon TIMELESS knockdown (Fig. 5C-E).These results collectively suggest that TIMELESS plays a crucial role in the regulation of glycolysis in OSCC cells, predominantly by modulating SIRT1 and thereby influencing the expression of key glycolytic enzymes.
Discussion
The significance of circadian genes in cancer development has increasingly come into focus in recent years. A body of evidence highlights that the circadian gene TIMELESS is notably upregulated in various cancers, including breast, cervical, nasopharyngeal, and prostate cancers, where its elevated expression is linked to enhanced tumor growth [16][17][18][19][20]. Our study aligns with these findings, revealing a marked increase in TIMELESS expression in OSCC tissues and cells, which correlates with a poorer prognosis. Additionally, our results extend this understanding by demonstrating that TIMELESS not only promotes OSCC cell proliferation but also contributes to tumor growth both in vitro and in vivo. These outcomes reinforce the potential of TIMELESS as a prognostic marker in OSCC. While the connection between circadian rhythm disruption and cancer genesis is well documented, the specific role of TIMELESS in the metabolic reprogramming of OSCC cells has not been thoroughly explored. Metabolic reprogramming, particularly the shift towards glycolysis, is a defining characteristic of tumor cells and plays a crucial role in cancer development and progression. Our research sheds light on this aspect by illustrating how TIMELESS influences glucose metabolism in OSCC, promoting glycolytic processes and inhibiting oxidative phosphorylation. This insight into TIMELESS's role in metabolic alteration provides a deeper understanding of its contribution to OSCC pathogenesis and highlights the gene's potential as a target for therapeutic interventions.
Core circadian genes are integral in regulating a wide array of metabolic processes, including glucose and lipid metabolism, oxidative phosphorylation, and mitochondrial dynamics [25]. Disruptions in these circadian mechanisms have been implicated in metabolic disorders such as hyperlipidemia. For instance, BMAL1 regulates the synthesis of primordial lipoproteins and their expansion into larger forms by controlling MTP and ApoAIV through the transcription factors Shp and Crebh [26]. Similarly, altered transcriptional cycling of core clock genes such as BMAL1, CLOCK, and PER3 in skeletal muscle is associated with disrupted circadian rhythms in type 2 diabetes [27]. Furthermore, REV-ERB nuclear receptors, crucial components of the molecular clock, play a significant role in controlling the circadian period and modulating metabolic responses, such as diet-induced obesity [28]. One of the hallmark features of malignant tumors is the reprogramming of energy metabolism. Typically, eukaryotic cells utilize mitochondrial oxidative phosphorylation to fully oxidize glucose, their primary energy source, in the presence of oxygen [29]. However, cancer cells often diverge from this pathway, preferring aerobic glycolysis even under sufficient oxygen levels, a phenomenon known as the Warburg effect [30]. The circadian system exerts a profound influence on glucose metabolism and regulates a vast array of physiological and metabolic functions in mammals [31]. The extent to which circadian genes modulate metabolic reprogramming in OSCC is not fully understood. Our study reveals that in OSCC cells, the overexpression of TIMELESS increases glycolytic capacity while reducing oxidative phosphorylation, with TIMELESS knockdown cells exhibiting the opposite trend. This shift toward aerobic glycolysis, which is characterized by increased lactate production and reduced pH in the tumor microenvironment, aligns with the activation of glycolytic genes. We observed that the expression of SIRT1, HK2, PKM2, LDHA, and GLUT1 is upregulated with increased TIMELESS expression and downregulated upon its knockdown. These findings indicate that TIMELESS plays a pivotal role in promoting the glycolytic phenotype in OSCC cells, likely through the regulation of SIRT1 and key glycolytic enzymes, thereby contributing to the progression of OSCC.
Silent information regulator 1 (SIRT1), an NAD⁺-dependent histone deacetylase, plays a crucial role in various cellular processes, including gene silencing, cell cycle regulation, fat and glucose metabolism, oxidative stress response, and cellular senescence [32,33]. It is known for deacetylating the tumor suppressor protein p53, its first identified nonhistone substrate. This action of SIRT1 on p53 results in the modulation of cellular metabolism, particularly by enhancing mitochondrial oxidative phosphorylation and suppressing aerobic glycolysis [34]. Research in pancreatic cancer has demonstrated that a loss of SIRT1 correlates with reduced expression of glycolytic pathway proteins such as GLUT1 and decreased cancer cell proliferation [35]. Conversely, SIRT1 overexpression has been found to upregulate GLUT1 transcription and promote both cell proliferation and glycolysis in bladder cancer cells [36]. Additionally, the AMPK/SIRT1 pathway significantly influences glycolysis regulation in response to follicle-stimulating hormone in follicular granulosa cells, with SIRT1 activation leading to increased glycolytic protein expression and lactic acid production [37]. Given SIRT1's pivotal role as a metabolic sensor, we investigated its interaction with TIMELESS in the context of OSCC. Our analysis indicates a positive correlation between the expression of TIMELESS and SIRT1 in head and neck squamous cell carcinoma. We found that knocking down TIMELESS reduces SIRT1 activity, thereby inhibiting glycolysis and promoting oxidative phosphorylation. Conversely, overexpression of SIRT1 reverses these metabolic effects. Notably, changes in TIMELESS expression were paralleled by similar trends in the expression of downstream glycolytic targets such as HK2, PKM2, LDHA, and GLUT1 in OSCC cells. These findings lead us to propose that TIMELESS contributes to OSCC progression by enhancing aerobic glycolysis. This effect is likely mediated through SIRT1, which, in turn, regulates the expression of key glycolytic enzymes and GLUT1.
Current research increasingly links circadian gene dysregulation to cancer development.Our study reveals that in OSCC, TIMELESS is overexpressed, leading to suppressed oxidative phosphorylation and increased glycolysis and cell proliferation.Notably, knocking down TIMELESS and subsequently overexpressing SIRT1 in OSCC cells reversed the glycolytic changes observed with TIMELESS knockdown alone.These findings suggest that TIMELESS's tumor-promoting role is mediated through SIRT1 activation and glycolysis regulation, positioning TIMELESS as a potential therapeutic target for addressing metabolic anomalies in OSCC.
Our study offers novel insights into the critical regulatory role of TIMELESS in the glucose metabolism and progression of OSCC.These findings enhance our understanding of the biological mechanisms driving OSCC progression.Given its influence on tumor growth, TIMELESS emerges as a promising biomarker for OSCC diagnosis and a potential therapeutic target.
Conclusions
TIMELESS facilitates OSCC cell growth primarily by enhancing glycolysis and suppressing oxidative phosphorylation, a process mediated through the activation of SIRT1.
Fig. 1
Fig. 1 Upregulation of TIMELESS in OSCC and its correlation with poor prognosis. A The expression of core clock genes in HNSC from the TCGA database. B qRT-PCR analysis of TIMELESS mRNA expression in 33 OSCC tissues. C, D Western blot analysis of TIMELESS protein expression in 18 OSCC tissues. E IHC analysis of TIMELESS expression in 133 OSCC tissues. Scale bar, 100 μm. *P < 0.05; **P < 0.01. F Prognostic analysis of TIMELESS expression in 133 OSCC patients
Fig. 3
Fig. 3 Inhibition of OSCC tumor growth by TIMELESS knockdown in vivo. A. The resected tumors from nude mice.B The weight of tumors in nude mice.*P < 0.05; **P < 0.01.C Tumor volume changes in nude mice.D Survival analysis on nude mice.E The body weight of nude mice.F HE staining, IHC staining of TIMELESS, Ki67 and PCNA in nude mice tumor tissues.Scale bar, 50 μm.G Comparison of Ki-67-positive cells in tumor tissues of nude mice xenograft model with different treatment as indicated.H Comparison of PCNA-positive cells in tumor tissues of nude mice xenograft model with different treatment as indicated
Fig. 4
Fig. 4 TIMELESS modulates glucose metabolism in OSCC cells.A The level of glucose uptake was examined.B Lactate production was examined.C Cell medium pH.D Oxygen consumption level of cell.Data shown were the mean ± S.E.M. from three independent experiments.*P < 0.05; **P < 0.01
Fig. 5
Fig. 5 TIMELESS augments glycolysis via upregulation of key metabolic molecules. A Correlation analysis between TIMELESS expression and SIRT1, HIF1A and MYC in HNSC tissues from the TCGA database. B Correlation analysis between TIMELESS mRNA levels and SIRT1 mRNA levels in 33 OSCC tissues. C qRT-PCR analysis of the expression levels of TIMELESS and key glycolysis genes in OSCC cells. D Western blot analysis of the expression levels of TIMELESS and key glycolysis genes in OSCC cells. E IHC staining of TIMELESS, SIRT1, HK2, PKM2, LDHA and GLUT1 in nude mice tumors. F The transfection efficiency was examined by Western blot. G The level of glucose uptake was examined. H Lactate production was examined. I Cell medium pH. J Oxygen consumption level of cells. Data shown are the mean ± S.E.M. from three independent experiments. *P < 0.05; **P < 0.01 | 6,767.6 | 2024-01-04T00:00:00.000 | [
"Medicine",
"Biology"
] |
Urban Systems, Urbanization Dynamics and Land Use in Italy: Evidence from a Spatial Analysis
Sustainability of agriculture is challenged by increasing sprawl in urban agglomerations. Under increasing agglomeration economies in large and even medium-sized cities, more and more soil is being subtracted from agriculture, depriving agricultural activities of their main production factor. The extent to which expanding urbanization threatens agricultural development depends, however, on the urban spatial structure. In this work it is empirically investigated how the relationship between soil use and soil consumption is shaped by the compactness of a city. For the population of LAU1 (province) main cities in an Italian region (Lombardy), compactness is measured as the density gradient and estimated using Central Business District models. It is found that more compact cities exhibit relatively lower-than-expected soil consumption in the period 1999-2007. Results suggest that agglomeration economies are not enemies of agricultural activities per se. Nonetheless, urbanization needs to be accompanied by urban fringe containment.
Introduction
The increased awareness of issues related to competition over soil use has drawn the attention of policy makers to the future challenges for European agriculture in the period after the ongoing reform. Sustainability of agriculture is challenged by land take, which is in turn driven by increasing urbanization pressures. Such pressures respond not only to socio-demographic trends but also to the need of local administrations to balance current expenditures with land use rights (Pareglio, 2013). In this respect, the recent crisis might consolidate and even stimulate this trend of expanding urbanization, especially in small and medium-sized cities, where land is a comparatively less scarce resource.
Alongside the sustainability of agriculture in terms of natural resources and, hence, land, consideration is given to the promotion of agricultural diversification with the objective of preserving territory-specific characters of local agriculture. It is not surprising that academic and policy discussion about land use policies becomes central in the debate about the future of rural development actions in Europe (MIPAAF, 2011; European Commission, 2012). In fact, different modes of urban expansion generate differentiated land use patterns with related consequences for agriculture, especially in rural territories at the margins of large urban agglomerations. It therefore appears central to establish a connection between the spatial structure of a city and the use of land. Unfortunately, land use data are scarce, on the one side. On the other side, it is not easy to provide a classification of urban spatial structure allowing comparison across heterogeneous territories.
In an attempt to produce a territorial characterization of local agricultures, the traditional approach followed in the agricultural economics literature has been based on multivariate statistical analysis (Cannata, 1989, 1995; Anania & Tarsitano, 1995; Cannata & Forleo, 1998). In these studies, at a national level, the territorial characterization has been pursued by introducing socio-environmental and economic variables into the statistical analysis. In this way, the synthetic output was capable of representing the rural dimension of territories alongside other dimensions closely related to agriculture. Building on this framework, some other studies have proposed detailed classifications of territories at a more local, usually regional scale (Esposti, 2000; Gallego, 2004; Vard et al., 2005; Anania & Tenuta, 2008; Asciuto et al., 2008), frequently with more emphasis on particular variables to capture local specificities of that territory. The focus of this stream of literature is, however, more on the territorial characterization of local agro-economic systems. Little is said about the relationship between urban structure and the use of resources in general and, more specifically, of land. This is because, on the one side, the methodological approach (multivariate analysis) does not allow moving beyond the evidence suggested by statistical association. In other words, no causal link can be established between socio-agri-economic characters and land use. On the other side, the output of a multivariate statistical analysis is usually an indicator expressing the degree of urbanization. The relationship between urbanization and land use is then implicitly assumed and not further investigated.
In this paper the issue is approached from a different perspective. By focusing solely on land as a production factor, the work aims at constructing a link between land use patterns and urban spatial structure. Although the analysis belongs, in methods and contents, to the urban and regional studies literature, the issues discussed in this work are closely connected with agriculture. Indeed, land is a primary input in agricultural production and, within the more general discussion on the sustainability of rural development, urbanization density is likely the best predictor of a variety of territorial characters ultimately connected with agriculture and with rural development, such as, for instance, population density, income, and provision of services. In fact, recent research has shown that land use patterns provide the best characterization of territories in relation to their urban/rural structure (Pareglio & Pozzi, 2013). Finally, it is worth recalling that land is the most important resource for which the urban and the rural economy compete in the same territory.
With respect to the methodology, the paper aims at detecting a clear relationship between urban spatial structure and land use/take.Admittedly, the most noteworthy effort in this work is the attempt to estimate the urban structure as the density degree of urbanized area in available land.
Thus, the methodological approach is arranged in a two-step procedure.To approximate urban spatial structure the framework of Central Business District Theory is used and the density gradient estimated.The gradient, a measure of how much urbanization follows a distance-from-the-centre decay is taken and interpreted as a measure of city compactness: when the density gradient is high the city is compact and when the density gradient is low the urban spatial structure is more characterized by sprawl.In the CBD literature, the optimal size of the city is in fact determined by, among others, the consumer preferences on income, housing space and travel time.The optimal size of the city defines an urban fringe separating rural territories from the urbanized area.A lower gradient indicates that the market for agricultural land clears at a lower distance from the centre and hence that, ceteris paribus, the fringe will be located closer to the city, saving peri-urban agricultural land from the urbanization pressures.The measure is also preferred to standard indicators such as population density for two main reasons.Firstly population density says very little about how urbanized areas are distributed in the geographical space of the territory as the same level of population density may in fact correspond to very different geographies of urbanization.Secondly, population density highly correlates with urbanization density, which is used as a measure of land use in this research.
Once the density gradient is estimated, in the second step, a relationship is built between land use, measured as the urban to total land ratio at the municipality level, land take, measured as the change of this ratio between two periods, and urban spatial structure, described by the density gradient. This is a simple linear relationship and does not in fact allow assessing any causality between spatial structure and land use/take. Concerning the relevance of this analysis for the agricultural sector, the change in land use over time is considered the best proxy of land take based on available data. On the one hand, it clearly accounts for the change in urbanization patterns. On the other hand, since the urbanized area is the complement of the agricultural area (and assuming that forestry area remains constant), it is also capable of accounting for the dynamics of Usable Agricultural Area (UAA).
The analysis is based on municipality data for the Lombardy region. All data come from the statistical office of the Region and are made available to the public through its geoportal. Available data provide measures of urbanized land and hence allow determining the urbanized to total land ratio and its change over time. More precisely, data are available for the years 1999 and 2007. The density gradient is estimated at the provincial level (LAU1 in the Eurostat classification) using shares of urbanized area as the dependent variable and, for robustness check only, using population density.
Results provide clear evidence that a lower than expected land take is associated with a more compact urban structure. The remainder of the paper is organized as follows. The next section describes the methodology used to estimate the density gradient. Estimation results are presented and discussed in section three, together with the figure relating urban structure to land use and land take. Conclusions follow.
Methodology
This section briefly introduces econometric methodologies used for the empirical analyses.Presented methods are considered the standard workhorse of the economic literature investigating urban spatial structure and the objective is that of measuring the linkage between the degree of urbanization and the distance from the main centre of economic activity (Central Business District).
After the seminal works of Alonso (1964), Mills (1967) and Muth (1969), the Central Business District (CBD) model has become the main reference for theoretical as well as empirical analysis of urbanization patterns. From a theoretical viewpoint the model provides a simple and tractable tool to explain urbanization dynamics and, at the same time, it offers meaningful insights that can be easily tested empirically. Probably this is the reason why, after 40 years, the CBD model continues to represent a key reference for the analysis of urban structure (McMillen 2006; Paulsen, 2012).
The main implication of a mono-centric urban structure is that urbanization density exponentially declines with increasing distance from CBD.Such prediction can be used, in fact, to explain a variety of phenomena related to urbanization such as, for instance, variation in housing prices, in land values and in population and employment densities.
Studies have rarely considered the share of urbanized area as a proxy for urbanization density. This is probably to be attributed to the lack of consistent and comparable measures of land use. This might appear surprising, at least considering that land use conversion pushed by urbanization pressures is a considerably worrying phenomenon related to population and income growth (Brueckner, 2000).
The main objective of CBD literature is the estimation of density gradient, hence the model parameter describing how urbanization density varies at varying distances from CBD.This is a simple but meaningful indicator of urban spatial structure.The coefficient is expected negative and, the larger its magnitude in absolute value, the more compact is the urban structure.Hence low values of the coefficient can be interpreted as evidence of urban sprawl.Following the standard empirical specification in Equation (1), the density gradient can be measured as the absolute value of the b parameter.
ln(U_i) = a + b D_i + e_i (1)

This empirical specification has become very common, given that the complex non-linear dynamic predicted by the theoretical model is simplified into a linear model that can be estimated with common methodologies. In Equation (1), U_i is the urban density, in this case the ratio of urbanized to total area, while D_i is the geographical distance separating the municipality from the CBD (assumed to be the main city in the province). Finally, a is the estimate of the log of urbanization density in the CBD and e_i is the stochastic disturbance.
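To make the estimation of Equation (1) concrete, the sketch below fits the log-linear specification by ordinary least squares on synthetic municipality data; it ignores the spatial-error correction introduced later, and all values are placeholders rather than the Lombardy data used in the paper.

```python
# Sketch: OLS estimation of ln(U_i) = a + b*D_i + e_i on synthetic municipality data.
import numpy as np

rng = np.random.default_rng(0)
n = 150                                         # municipalities in a hypothetical province
distance_km = rng.uniform(1, 60, size=n)        # distance from the CBD
true_a, true_b = -0.5, -0.05                    # illustrative parameters only
log_share = true_a + true_b * distance_km + rng.normal(0, 0.3, size=n)

X = np.column_stack([np.ones(n), distance_km])  # intercept + distance
coef, *_ = np.linalg.lstsq(X, log_share, rcond=None)
a_hat, b_hat = coef
print(f"a = {a_hat:.3f}, density gradient |b| = {abs(b_hat):.3f}")
```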
Following this stream of literature, this work is concerned with estimation of density gradients for the provinces in the Lombardy region.The availability of a unique data source at the regional level allows the comparison of results and, hence, of the different urban structures dominating urbanization patterns in the provinces of Lombardy.For the purpose of our empirical analysis, the estimated density gradients are used, in a second step, to relate urban structure to land use and land take.
Concerning estimation of the density gradient, a common problem in cross-sectional studies is the violation of the independence hypothesis made about the error term (Anselin, 1988a, 1988b). This is particularly the case for urbanization density, since land use decisions are known to be affected by external environmental conditions which are usually unobservable and therefore omitted, causing spatially correlated regression residuals. Based on Anselin's works, a correction can be implemented by either assuming that the dependent variable follows a mixed regressive-spatial autoregressive process (Spatial Lag model) or by allowing a spatial structure in the error term (Spatial Error model). In the specific case of density gradient estimation, McMillen (2003) has shown that the Spatial Error specification is to be preferred, given that spatially autocorrelated residuals are likely caused by the omission of information related to neighbourhood characteristics.
The model in Equation (1) is modified accordingly and the final specification is expressed as follows:

ln(U_i) = a + b D_i + u_i, with u = λ W u + e (2)

The W matrix in Equation (2) is a row-standardized contiguity matrix expressing the contiguity relationships between municipalities in the same province. The contiguity relationships are defined using the threshold distance criterion, according to which municipalities are said to be neighbours if separated by a distance below a pre-determined threshold. In each province the threshold is defined so that every municipality has at least one neighbour.
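The construction of the row-standardized threshold-distance contiguity matrix W can be illustrated as follows; the coordinates are synthetic, and maximum-likelihood estimation of the spatial error model itself (not shown) would typically rely on a dedicated spatial-econometrics package.

```python
# Sketch: building a row-standardized contiguity matrix W from a distance threshold.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 50, size=(20, 2))          # synthetic municipality centroids (km)

# Pairwise Euclidean distances between municipalities.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Smallest threshold guaranteeing at least one neighbour per municipality.
np.fill_diagonal(dist, np.inf)
threshold = dist.min(axis=1).max()

W = (dist <= threshold).astype(float)              # 1 if neighbours, 0 otherwise
W /= W.sum(axis=1, keepdims=True)                  # row-standardization
print(round(threshold, 2), W.shape)
```

Choosing the threshold as the largest nearest-neighbour distance is one simple way to guarantee that no row of W is empty, which mirrors the criterion described above.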
Results
Estimation results based on Equation (2) are summarized in Table 1. Results are obtained by using the urban to total land ratio as a proxy of urbanization density. Since the largest part of the related literature makes use of population density to proxy urbanization density (Baumont et al., 2004; Rodríguez-Gámez & Dall'erba, 2012), our results are also replicated using population density to ensure consistency with previous literature. These results are not shown but are available upon request from the authors.
Model parameters have been estimated for both the available years in the dataset, 1999 and 2007.The coefficient a, the model intercept, is to be interpreted as the log of urbanization density in the main municipality.Results, in first and fourth columns of Table 1, indicate a high value of the parameter in the cases of Milano, Monza, Varese, Como and Bergamo.This suggests the good predictive capacity of the model.As a further indication of the model capacity to fit the data, the estimated values of 2007 are always larger than those in 1999.This is coherent with evidence that average share of urbanized area has increased over time in all these cities.
By focusing attention on the coefficient b it is possible to observe that results largely confirm the model's prediction. The expected value of the coefficient is in fact negative, reflecting a distance decay effect. Estimates show that this is the case in a majority of provinces, with the exclusion of Cremona, where the relationship appears inverted, and of Lecco, Mantova and Sondrio, where the estimated coefficient is not significant.
Both in terms of coefficient slopes and statistical significance, the relationship between urbanization density and distance from the CBD is unchanged when comparing 1999 and 2007. In both years, high density gradients appear in the cases of Bergamo, Brescia, Como and Varese, while especially low values appear in Milano, Lodi and Monza e Brianza.
Finally, for what concerns the spatial autocorrelation coefficient λ, this is always positive and significant, with the only exception of Mantova. Based on existing evidence, the results of this empirical analysis confirm the prominence of spatial relationships for the urbanization process. It is possible to infer that the urbanization pattern at the municipality level is also affected by forces operating at a larger spatial scale. Once the density gradient, a summary measure of the compactness of cities, has been estimated, it is possible to construct a relationship between land use (the urbanized to total land ratio in 1999), land take (the variation in the urbanized to total land ratio between 1999 and 2007) and urban structure (compactness). The purpose of this part of the analysis is to see whether spatial structure can affect land use and its change. The three measures have been plotted together and the result is presented in Figure 1.
The vertical axis indicates the average (at the provincial level) land take between 1999 and 2007, while average land use (at the provincial level) is indicated on the horizontal axis. Again, based on average data, the provinces of Milano and Monza e Brianza, and to a lower extent also Como, Bergamo, Lecco and Varese, exhibit a high value of land use. By contrast, the highest land take has occurred in the provinces of Mantova, Lodi and Cremona, and to a lower extent in Milano.
The negative line has been obtained by interpolation of average data. The relationship between land take and land use immediately appears negative, meaning that a large land take has characterized areas with lower scarcity of land. In fact, the phenomenon of land use change, and hence the reconversion of land from agricultural to urban use, is a feature of areas in which urbanization density was lower in 1999.
In the figure, the dimension of the circle associated to each province represents the value of the density gradient previously estimated.The larger the circle, the more compact is the urban spatial structure.
Given the negative relationship between land take and land use, it is worth noting that more compact urban structures have generated a lower than expected value of land take. This appears clearly by noting that provinces where the density gradient is high reported values of land take lower than, or at least equal to, the value expected from the negative relationship. Among territories where land use was initially high, this is the case of Bergamo, Como and Varese, which are located under the black line in the figure; by contrast, Milano and Monza e Brianza are located above the line. Among territories where land use was initially low, this is the case of Pavia, which is below the line, whereas Cremona, Lodi and Mantova are above the line.
Conclusions
This empirical work has been concerned with the relationship between land use, land take and urban spatial structure. The issue is of particular relevance for agricultural policies to the extent that a correct use of land, scarce by definition, nowadays represents a necessary condition for the effective implementation of sustainable rural development programs. Development cannot be considered sustainable if economic and population growth subtract resources from agriculture, hence impeding rural development. Subtraction of resources from agriculture is a central issue in the debate about the future of agricultural and rural policies in Europe. In particular, given that economic growth requires a certain degree of urban development, which urban structure is more effective with respect to the objective of preserving land?
In an attempt to provide a different viewpoint on the issue, this paper presents empirical analysis on the relationship between urban spatial structure and land use/take.Differently from the traditional approach to the territorial analysis of urbanization, based on multivariate statistical analysis on a number of different indicators, this work focuses on a single variable (urbanized area) for the territorial analysis.This variable is related to the urban spatial structure, as measured by the compactness degree of the urbanized area within the territory of the province.
To describe the urbanization pattern, a density gradient is estimated, following the empirical literature on CBD.A higher value of the density gradient identifies a more compact urban spatial structure while low values indicate sprawl.
The picture emerging from the results of the analysis, using data for 1999 and 2007, is the following. Urban agglomerations structured in a compact manner around a single economic centre have prevented excessive consumption of agricultural land through its conversion for urbanization purposes. This general result needs to be further qualified considering the different levels of urbanization in the region. Hence, in the most urbanized part of the region, territories characterized by urban sprawl, such as Milano and Monza e Brianza, have experienced a larger than expected land take, while a lower than expected land take has been observed in more compact cities.
Figure 1.
Figure 1. Land use, land consumption and compactness of urban spatial structure.
Table 1.
CBD Estimates (1999 and 2007) based on the share of urbanized area. | 4,516.4 | 2014-09-18T00:00:00.000 | [
"Economics"
] |
The Membership Problem for Hypergeometric Sequences with Quadratic Parameters
Hypergeometric sequences are rational-valued sequences that satisfy first-order linear recurrence relations with polynomial coefficients; that is, a hypergeometric sequence $\langle u_n \rangle_{n=0}^{\infty}$ is one that satisfies a recurrence of the form $f(n)u_n = g(n)u_{n-1}$ where $f,g \in \mathbb{Z}[x]$. In this paper, we consider the Membership Problem for hypergeometric sequences: given a hypergeometric sequence $\langle u_n \rangle_{n=0}^{\infty}$ and a target value $t\in \mathbb{Q}$, determine whether $u_n=t$ for some index $n$. We establish decidability of the Membership Problem under the assumption that either (i) $f$ and $g$ have distinct splitting fields or (ii) $f$ and $g$ are monic polynomials that both split over a quadratic extension of $\mathbb{Q}$. Our results are based on an analysis of the prime divisors of polynomial sequences $\langle f(n) \rangle_{n=1}^\infty$ and $\langle g(n) \rangle_{n=1}^\infty$ appearing in the recurrence relation.
Introduction
Background and Motivation.Recursively defined sequences are ubiquitous in mathematics and computer science.A fundamental open problem in this context is the decidability of the Membership Problem, which asks to determine whether a given value is an element of a given sequence.The Skolem Problem for C-finite sequences (those sequences that satisfy a linear recurrence relation with constant coefficients) is the best known variant of the Membership Problem.The Skolem Problem asks to determine whether a given C-finite sequence vanishes at some index [4].Decidability of this problem is known for recurrences of order at most four [17,26] but is open in general.Proving decidability of the Skolem Problem would be equivalent to giving an effective proof of the celebrated Skolem-Mahler-Lech Theorem, which states that every non-degenerate C-finite sequence that is not identically zero has a finite set of zeros.
In this paper we consider the most basic case of the Membership Problem for a class of P-finite sequences (those sequences that satisfy a linear recurrence with polynomial coefficients). Specifically, we consider the Membership Problem for the class of hypergeometric sequences. A rational-valued sequence ⟨u_n⟩_{n=0}^∞ is hypergeometric if it satisfies a recurrence relation of the form

f(n) u_n = g(n) u_{n−1}, (1)

where f, g ∈ Z[x] are polynomials, and f(x) has no non-negative integer zeros. By the latter assumption on f(x), the recurrence relation (1) uniquely defines an infinite sequence of rational numbers once the initial value u_0 ∈ Q is specified. The term hypergeometric was introduced by John Wallis in the 17th century [27], and hypergeometric sequences and their associated generating functions, the hypergeometric series, have a long and illustrious history in the mathematics literature. In particular, hypergeometric series encompass many of the common mathematical functions and have numerous applications in analytic combinatorics [5,10].
The Membership Problem for hypergeometric sequences asks, given a recurrence (1), initial value u_0 ∈ Q, and target t ∈ Q, whether t lies in the sequence ⟨u_n⟩_{n=0}^∞. At first glance, this problem may seem easy to decide. Without loss of generality we can assume that the sequence ⟨u_n⟩_{n=0}^∞ either diverges to infinity or converges to a finite limit. If the sequence does not converge to t then one can compute a bound B such that u_n ≠ t for all n > B. Such a bound can also be computed in case one is promised that ⟨u_n⟩_{n=0}^∞ converges to t, by using the fact that the convergence to t is ultimately monotonic. However the above case distinction does not suffice to show decidability of the Membership Problem! The problem is that it is not known how to decide whether a hypergeometric sequence converges to a given rational limit. The latter is related to deep conjectures about the gamma function (see the discussion below).
Contributions.We approach the Membership Problem by considering the prime divisors of the values of a hypergeometric sequence ⟨u n ⟩ ∞ n=0 .The overall strategy is to exhibit an effective threshold B such that for all n > B there is a prime divisor of u n that is not a divisor of the target t.Our two main contributions are as follows: • The Membership Problem for hypergeometric sequences whose polynomial coefficients (as in ( 1)) have distinct splitting fields is decidable (Theorem 11).• The Membership Problem for hypergeometric sequences whose polynomial coefficients are monic and split over a quadratic field is decidable (Theorem 13).The proofs of our main results involve two different implementations of our general strategy.The proof of Theorem 11 applies the Chebotarev density theorem to find a single prime p ∈ Z that does not divide the target t but divides all members of an infinite tail of the sequence.Meanwhile, the proof of Theorem 13 shows that for all sufficiently large n there exists a prime p, that is allowed to depend on n, such that p divides u n but not t.To find such a prime we rely on (a mild generalisation of) a result of [3] concerning prime divisors of the values of a quadratic polynomial.
Theorem 11 expands the class of sequences for which the Membership Problem can be solved and further isolates its hard instances.The paper [22] handles perhaps the easiest sub-case of the Membership Problem that does not fall under Theorem 11, namely when the polynomial coefficients both split over Q.The second main result of the present paper handles another naturally occurring sub-case: when the polynomial coefficients split over the ring of integers of a quadratic field K.A common refinement of these two cases-that the polynomial coefficients split over K-is the subject of current research.Generalisations of the results of [3] to higher-degree polynomials are a subject of ongoing research in number theory and potentially would allow us to extend our approach beyond the quadratic case.
Related Work.There is a growing body of work that addresses membership and threshold problems for sequences satisfying low-order polynomial recurrences.Here the Threshold Problem asks to determine whether every term in a sequence lies above a given threshold, for example, whether every term is non-negative.
The recent preprint [12] establishes decidability results (some conditional on Schanuel's Conjecture) for both the Membership and Threshold Problems for hypergeometric sequences.The approach of [12] relies on transcendence theory for the gamma function (as well as underlying properties of modular functions established by Nesterenko [19]).By contrast, the algebraic techniques of the present paper seem appropriate only for the Membership Problem.We note that the approach of [12] requires certain restrictions, e.g., decidability is only unconditional when the parameters are drawn from imaginary quadratic fields.
The problem of deciding positivity of order-two P-finite sequences and of deciding the existence of zeros in such sequences is considered in [11,14,21,23].These works all place syntactic restrictions on the degrees of the polynomial coefficients involved in the recurrences, and all four give algorithms that are not guaranteed to terminate for all initial values of a given recurrence.For example, in [11] the termination proof of the algorithm for determining positivity of order-two sequences requires that the characteristic roots of the recurrence be distinct and that one is working with a generic solution of the recurrence (in which the asymptotic rate of growth corresponds to the dominant characteristic root of the recurrence).Simple manipulations show that the Membership Problem considered in this paper is equivalent to the problem of finding a zero term in an order-two P-finite sequence ⟨u n ⟩ ∞ n=0 arising as a sum of two hypergeometric sequences.Links between the Membership and Threshold Problems and the Rohrlich-Lang Conjecture appear in previous works [13,22].Here the Rohrlich-Lang Conjecture concerns multiplicative relations for the gamma function evaluated at rational points.
The p-adic techniques used in the present paper bear many similarities with work on developing criteria for hypergeometric sequences to be integer valued.For example, work by Landau in 1900 [15] uses p-adic analysis to establish a necessary and sufficient condition for integrality in the so-called class of factorial hypergeometric sequences.In more recent work, Hong and Wang [9] establish a criterion for the integrality of hypergeometric series with parameters from quadratic fields.We observe that some of the intermediate asymptotic results in Hong and Wang's note are close to [1, Corollary 3.1] (Proposition 4 herein).
Structure.The remainder of this paper is structured as follows.We briefly review preliminary material in Section 2, including some standard assumptions about instances of the Membership Problem that can be made without loss of generality.In Section 3, we recall useful technical results on the prime divisors of hypergeometric sequences that satisfy monic recurrence relations (see (2)).In Section 4, we prove Theorem 11.The proof of Theorem 13 is given in Section 5. We discuss ideas for future research in Section 6.The remaining appendices prove technical results omitted from the main text.
Preliminaries
Hypergeometric Sequences.A hypergeometric sequence ⟨u n ⟩ ∞ n=0 is a sequence of rational numbers that satisfies a recurrence of the form (1) where f, g ∈ Z[x] are polynomials, and f (x) has no non-negative integer zeros.By the latter requirement on f (x), the recurrence (1) uniquely defines an infinite sequence of rational numbers once the initial element u 0 is specified.
An instance of the Membership Problem for hypergeometric sequences consists of a recurrence (1), an initial value u 0 ∈ Q, and a target t ∈ Q.The problem asks to decide whether there exists n ∈ N such that u n = t.We say that such an instance is in standard form if (S1) the initial condition is u 0 = 1; (S2) the polynomial g(x) has no positive integer root; (S3) the target t is non-zero; (S4) the polynomials f and g have the same degree and leading coefficient.
For the purposes of deciding the Membership Problem, we can assume without loss of generality that all instances are in standard form. An arbitrary instance can be transformed into one satisfying Condition (S1) by multiplying the sequence and target by a suitable constant. Instances of the Membership Problem that fail to satisfy Conditions (S2) and (S3) are trivially solvable. The positive integer roots of g can be computed and for any such root n_0, we have u_n = 0 for all n ≥ n_0. Finally, for recurrences that fail Condition (S4), the sequence ⟨u_n⟩ either converges to 0 or diverges in absolute value. Under the assumption that t ≠ 0, in each case we can compute an effective threshold n_0 such that u_n ≠ t for all n ≥ n_0.
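As a concrete illustration of the objects just defined, the following sketch generates a hypergeometric sequence in standard form with exact rational arithmetic and performs a naive bounded search for a target value; the polynomials and target are arbitrary examples, and a search up to a fixed index is of course not a decision procedure.

```python
# Sketch: generating a hypergeometric sequence f(n)*u_n = g(n)*u_{n-1} exactly.
from fractions import Fraction

def hypergeometric_terms(f, g, u0=Fraction(1), bound=50):
    """Yield u_0, ..., u_bound for the recurrence f(n)*u_n = g(n)*u_{n-1}."""
    u = Fraction(u0)
    yield u
    for n in range(1, bound + 1):
        assert f(n) != 0, "f must have no positive integer zeros"
        u = u * Fraction(g(n), f(n))
        yield u

# Example instance in standard form: f(x) = x^2 + 1, g(x) = x^2 + x + 1
# (same degree and leading coefficient, u_0 = 1, non-zero target).
f = lambda x: x * x + 1
g = lambda x: x * x + x + 1

target = Fraction(3, 2)
hits = [n for n, u in enumerate(hypergeometric_terms(f, g)) if u == target]
print(hits if hits else "no index up to the bound attains the target")
```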
The p-adic valuation. Let p ∈ N be a prime. Denote by v_p : Q → Z ∪ {∞} the p-adic valuation on Q. Recall that for a non-zero number x ∈ Q, v_p(x) is the unique integer such that x can be written in the form x = p^{v_p(x)} · a/b, where a, b ∈ Z and p divides neither a nor b. The value v_p(0) is defined to be ∞.
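For concreteness, the valuation can be computed directly from the reduced fraction; the following small helper (illustrative only) is what we use informally in later sketches.

```python
# p-adic valuation of a rational number, with v_p(0) = infinity.
from fractions import Fraction
import math

def vp(x, p):
    if x == 0:
        return math.inf
    x = Fraction(x)
    e, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return e

# Example: vp(Fraction(50, 27), 5) == 2 and vp(Fraction(50, 27), 3) == -3.
```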
The valuation possesses two important properties: it is multiplicative, v_p(xy) = v_p(x) + v_p(y), and it satisfies the ultrametric inequality v_p(x + y) ≥ min(v_p(x), v_p(y)).
Asymptotic estimates for series over primes. Given ∼ ∈ {<, =, >} and x ∈ Q, we denote sums over primes p ∈ N such that p ∼ x by ∑_{p∼x}. Let π(x) := ∑_{p≤x} 1 count the number of primes of size at most x. The following result is a consequence of the celebrated Prime Number Theorem.
Theorem 1. For π(x) as above, we have π(x) ∼ x/log x as x → ∞.
As an aside, an element a ∈ Z is a square modulo a prime p ∈ N if there exists an x ∈ Z such that x² ≡ a (mod p). An element a ∈ Z is a quadratic residue modulo p if a is both a square modulo p and, furthermore, a and p are co-prime. We denote by L_p the set of quadratic residues modulo p.
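Membership in L_p can be tested with Euler's criterion; a short check (our own illustration) is:

```python
# Euler's criterion: for an odd prime p and a coprime to p,
# a is a quadratic residue mod p iff a^((p-1)/2) ≡ 1 (mod p).
def in_Lp(a, p):
    a %= p
    if a == 0:
        return False        # not coprime to p, hence not a quadratic residue here
    return pow(a, (p - 1) // 2, p) == 1
```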
Recall the first of Mertens' three theorems [16] (see also [2, Theorem 4.10]): ∑_{p≤x} log p / p = log x + O(1). In the sequel we shall make use of the following refinement of Mertens' theorem.
Proposition 2. Suppose that a ∈ Z is not a perfect square. Then ∑_{p≤x, a∈L_p} log p / p = (1/2) log x + O(1).
Proposition 2 appears in work by Selberg [24,Equation (3.3)] on an elementary proof of Dirichlet's theorem in arithmetic progressions.
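The following snippet is purely illustrative and relies on the reading of Proposition 2 given above; it compares the restricted Mertens sum with its (1/2) log x main term for a non-square a.

```python
# Numeric sanity check of the (1/2) log x main term in Proposition 2 as stated above.
import math
from sympy import primerange

def restricted_mertens_sum(a, x):
    """Sum of log(p)/p over odd primes p <= x with a a quadratic residue mod p."""
    return sum(math.log(p) / p
               for p in primerange(3, x + 1)
               if a % p != 0 and pow(a % p, (p - 1) // 2, p) == 1)

# For a non-square such as a = 2, restricted_mertens_sum(2, 10**6) stays within a
# bounded distance of 0.5 * math.log(10**6), consistent with the main term above.
```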
Monic Recurrences
In this section, we study hypergeometric sequences ⟨u_n⟩_{n=0}^∞ satisfying first-order recurrences of the special form
u_n = f(n) u_{n−1}  (2)
where f ∈ Z[x] has no non-negative integer roots. We call such a recurrence monic.
We analyse the prime divisors of sequences ⟨u_n⟩_{n=0}^∞ that satisfy such a monic recurrence. In particular, we recall two results that will serve as stepping stones toward our main decidability theorems in the subsequent sections. Following [1], for a fixed prime p, the first result establishes an asymptotic estimate for the p-adic valuation v_p(u_n) as n tends to infinity. Next, following [3], when f is a quadratic polynomial we prove a result that yields asymptotic estimates on the size of the largest prime divisors of u_n as n tends to infinity. The restriction on the degree is necessary given the state of the art: estimates on large prime divisors constitute hard open problems in the theory of polynomials [8,7].
3.1. Asymptotic growth of valuations. Let p ∈ N be prime. Consider a hypergeometric sequence ⟨u_n⟩_{n=0}^∞ satisfying a monic recurrence (2). Since u_n = u_0 ∏_{k=1}^{n} f(k), we have v_p(u_n) = v_p(u_0) + ∑_{k=1}^{n} v_p(f(k)). In this section we recall the result of [1] that characterises the asymptotic growth of v_p(u_n) in terms of the number of roots of f in Z/pZ. The key tool in this argument is Hensel's Lemma.
Hensel's Lemma allows one to lift a factorisation of f modulo p to a factorisation modulo p^e for every e > 0, provided the prime p is suitably chosen. Define a Hensel prime for f ∈ Z[x] to be a prime that does not divide the discriminant of any irreducible factor of f. Since the discriminant of an irreducible polynomial is non-zero, all but finitely many primes are Hensel primes for a given polynomial.
Given a prime p, suppose that f ∈ Z[x] has m roots in Z/pZ, i.e., suppose that f factors as f(x) ≡ (x − α_1)^{m_1} ⋯ (x − α_ℓ)^{m_ℓ} g(x) (mod p), where α_1, …, α_ℓ ∈ Z, g ∈ Z[x] has no root modulo p, and m = m_1 + ⋯ + m_ℓ. In this case, if p is a Hensel prime for f then for all e > 0 we can apply Hensel's Lemma to obtain a factorisation f(x) ≡ (x − β_1)^{m_1} ⋯ (x − β_ℓ)^{m_ℓ} h(x) (mod p^e), where β_1, …, β_ℓ ∈ Z and h ∈ Z[x] has no root modulo p. In other words, f has exactly m roots in the ring Z/p^e Z.
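To make the root-counting concrete, the toy check below (with an example polynomial of our own choosing, not taken from the paper) verifies that, for a Hensel prime p with simple roots, the number of root residues of f modulo p agrees with the number modulo p^e.

```python
# Toy verification of the Hensel-lifting count; the polynomial is our own example.
import sympy as sp

x = sp.symbols('x')

def is_hensel_prime(f, p):
    """True iff p divides the discriminant of no irreducible factor of f."""
    factors = sp.factor_list(f)[1]
    return all(sp.discriminant(h, x) % p != 0 for h, _ in factors)

def count_root_residues(f, m):
    """Residues k in Z/mZ with f(k) ≡ 0 (mod m), found by brute force."""
    return sum(1 for k in range(m) if f.subs(x, k) % m == 0)

f = (x**2 + 1) * (x + 1)          # roots mod 5: 2, 3 (from x**2 + 1) and 4 (from x + 1)
p, e = 5, 3
assert is_hensel_prime(f, p)
# Since the roots mod p are simple here, the residue counts mod p and mod p**e both
# equal m = 3, in line with "f has exactly m roots in Z/p^e Z".
print(count_root_residues(f, p), count_root_residues(f, p**e))
```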
The following result is a reformulation of [1, Corollary 3.1]. For later use, we formulate the result so as to make explicit the dependence of the bounds for v_p(u_n) on the prime p. The proof remains the same.
Proposition 4. Let p be a Hensel prime of f such that f has m roots modulo p. Then there exist effectively computable constants ε, n_0 such that |v_p(u_n) − mn/(p − 1)| ≤ ε log_p n for all n ≥ n_0, where ε depends only on f.
Proof. The function |f(x)| is eventually monotonically increasing on N. There exists an effectively computable bound n_0 such that for all n ≥ n_0 and all 1 ≤ k ≤ n we have |f(k)| ≤ |f(n)|. Furthermore, there exists an effective constant ε_0 > 0, independent of p, such that for all n ≥ n_0 and all 1 ≤ k ≤ n we have |f(k)| ≤ n^{ε_0}. Fix n ≥ n_0 and define e_max to be the smallest e such that p^e exceeds this bound on the values |f(k)|. Since p is a Hensel prime, by Hensel's Lemma there is a factorisation f(x) ≡ (x − β_1)^{m_1} ⋯ (x − β_ℓ)^{m_ℓ} h(x) (mod p^{e_max}), with h having no root modulo p. Now for all 1 ≤ e ≤ e_max the set {k ∈ N : p^e | k − β_i} is an arithmetic progression with common difference p^e, and so the number of elements of {1, …, n} that it contains is n/p^e up to an additive error of at most 1. Combining these counts over the roots β_i and the exponents 1 ≤ e ≤ e_max, we obtain upper and lower bounds on v_p(u_n) whose main term is mn/(p − 1). Let ε := ε_0 + 1. The desired result follows by sandwiching the term mn/(p − 1). □
3.2. Asymptotic estimate for the largest prime divisor. Fix a polynomial f(x) = x² + β with β ∈ Z. We assume that −β is not a perfect square, which is equivalent to assuming that f is irreducible. Let a, b ∈ Q be such that 0 ≤ a < b. Let c, d ∈ N. For all n ∈ N we define I(n) := {k ∈ N : an < k ≤ bn and k ≡ d (mod c)} and F_n := ∏_{k∈I(n)} f(k). Informally speaking, the following theorem gives effective super-linear lower bounds on the growth of the function that maps n to the greatest prime divisor of F_n. The result itself and the proof are a slight generalisation of [3, Theorem 5.1]. The main difference is that we permit I(n) to be the intersection of an interval and an arithmetic progression, whereas the work cited above considers unrefined intervals I(n) = {1, …, n}.
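Before moving on, here is a quick numerical sanity check of Proposition 4, under the statement as given above. One computes v_p(u_n) = ∑_{k≤n} v_p(f(k)) for a monic recurrence and compares it against the main term mn/(p − 1); the polynomial below is our own example, not one from the paper.

```python
# Numeric illustration of Proposition 4's main term m*n/(p-1); example data is ours.
def vp_int(a, p):
    e = 0
    while a % p == 0:
        a //= p
        e += 1
    return e

def f(k):                      # f(x) = x**2 + 1; 5 is a Hensel prime, with m = 2 roots mod 5
    return k * k + 1

p, m = 5, 2
for n in (10**3, 10**4, 10**5):
    v = sum(vp_int(f(k), p) for k in range(1, n + 1))   # = v_p(u_n) for u_0 = 1
    print(n, v, m * n / (p - 1))                        # the two values stay close
```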
Theorem 5. Let M ∈ N. There exists an effectively computable bound B ∈ N such that for all n > B there exists a prime p > Mn that divides F_n.
Proof. Given n ∈ N, we have the prime factorisation F_n = ∏_p p^{e_p}, where e_p := v_p(F_n) for each prime p. Note that e_p = 0 for all but finitely many p. Taking logarithms, we get log(F_n) = ∑_p e_p log p.
Partitioning the above sum into a sub-sum over primes at most Mn and a sub-sum over primes greater than Mn, we obtain
∑_{p>Mn} e_p log p = log(F_n) − ∑_{p≤Mn} e_p log p.  (7)
The theorem at hand follows from a lower bound on the sum ∑_{p>Mn} e_p log p on the left-hand side of (7). To this end we have two sub-goals: give a lower bound on log(F_n) and an upper bound on ∑_{p≤Mn} e_p log p.
Write A := (b − a)/c. The following lower bound on log(F_n) is a consequence of Stirling's formula. The proof is in Appendix A.
Claim 6. We have the bound log(F_n) ≥ 2A(n log n − n).
The next task is to give an upper bound on ∑_{p≤Mn} e_p log p. Here we follow the approach in [3] and further partition the sum into those primes p < n (treated in Claim 7) and those primes n ≤ p ≤ Mn (treated in Claim 8).
Claim 7. There exist positive constants ε, n_0 > 0 such that if n > n_0, then ∑_{p<n} e_p log p ≤ An log n + εn.
Proof. Let S_n be the set of primes p < n such that p divides F_n and p is a Hensel prime for f. Observe that ∑_{p<n} e_p log p − ∑_{p∈S_n} e_p log p ≤ ε_0 log n for an effective constant ε_0. Indeed, if p < n is a prime divisor of F_n that does not lie in S_n then p divides the discriminant of f, and there are finitely many such primes. Thus to prove the claim it will suffice to show that ∑_{p∈S_n} e_p log p ≤ An log n + ε_1 n for some effective constant ε_1. For p ∈ S_n, we establish an upper bound on e_p, which follows from Proposition 4; here the constant ε_2 appearing in that bound is effective and independent of the prime p. The justification is given in Appendix A. We next argue that there exist effective constants ε_3, ε_4, n_1 > 0 such that, for all n ≥ n_1, the sum ∑_{p∈S_n} e_p log p can be bounded term by term using the bound on e_p and then summed over the primes in S_n. No prime in S_n divides the discriminant of f. Since the latter is equal to −4β, no prime in S_n divides β. In addition, every prime in S_n is a divisor of F_n, i.e., a divisor of k² + β for some k ∈ I(n), so −β is a quadratic residue modulo p for every prime p ∈ S_n. Thus, for sufficiently large n, the resulting sum over p ∈ S_n is at most An log n + ε_5 n (by Proposition 2) for some effective constant ε_5.
Claim 8. There exists an effective constant ε′ > 0 such that, for sufficiently large n, ∑_{n≤p≤Mn} e_p log p ≤ ε′ n.
Proof. Let n ∈ N. Suppose that p > (b − a)n is a prime divisor of F_n. For such primes, we shall first show that e_p := v_p(F_n) ≤ 2. Assume, for a contradiction, that there are distinct integers k_1 < k_2 < k_3 in I(n) with p dividing each of f(k_1), f(k_2), f(k_3); this leads to a contradiction since p > (b − a)n ≥ k_3 − k_1. Hence e_p ≤ 2 for each such prime, and the sum in the statement of the claim is at most 2 log(Mn) π(Mn). The desired result follows from the estimate on π(x) given by the Prime Number Theorem (Theorem 1). □
We return to the proof of Theorem 5. From Equation (7), Claim 7, and Claim 8, there exist positive constants ε, n_0 > 0 such that if n > n_0 then ∑_{p>Mn} e_p log p ≥ An log n − εn.
In turn, the above lower bound entails that for sufficiently large n, there exist prime divisors p | F_n such that p > Mn. This concludes the proof. □
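A rough numeric illustration of Theorem 5, with the simplest data I(n) = {1, …, n} and our own example polynomial f(x) = x² + 1, is the following.

```python
# Largest prime divisor of F_n = prod_{k <= n} (k**2 + 1); example data is ours.
from sympy import primefactors

def largest_prime_divisor_of_Fn(n):
    return max(p for k in range(1, n + 1) for p in primefactors(k * k + 1))

for n in (50, 200, 800):
    P = largest_prime_divisor_of_Fn(n)
    print(n, P, P / n)      # the ratio P/n tends to grow, as Theorem 5 suggests
```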
Decidability: different splitting fields
In this section we show decidability of the Membership Problem for recurrence sequences that satisfy a first-order relation of the form (1) subject to the condition that the polynomial coefficients f, g ∈ Z[x] have different splitting fields. To this end, it is useful to introduce the following terminology. Let p be a Hensel prime for fg. We say that the recurrence (1) is p-symmetric if the two polynomials f and g have the same number of roots in Z/pZ. Otherwise we say that the recurrence is p-asymmetric.
We first show decidability of the Membership Problem in the case of p-asymmetric recurrences, and then we apply the Chebotarev Density Theorem to show that every recurrence in which f and g have different splitting fields is p-asymmetric for infinitely many primes p.
Lemma 9. There is a procedure to decide the Membership Problem for the class of hypergeometric sequences whose defining recurrences are p-asymmetric for some prime p.
Proof. Suppose that the hypergeometric sequence ⟨u_n⟩_{n=0}^∞ satisfies the recurrence (1) and moreover that there is a prime p with respect to which the recurrence is p-asymmetric. We want to decide whether such a sequence reaches a given target value t.
Consider the sequences ⟨x_n⟩_{n=0}^∞ and ⟨y_n⟩_{n=0}^∞ respectively defined by the monic recurrences x_n = g(n)x_{n−1} and y_n = f(n)y_{n−1}, with x_0 = y_0 = 1. Then u_n = x_n/y_n and hence, for the aforementioned prime p, v_p(u_n) = v_p(x_n) − v_p(y_n) by the multiplicative property.
Recall that p is, by definition, a Hensel prime for both f and g. Hence, by Proposition 4, we obtain an asymptotic estimate of the form v_p(u_n) = (m_g − m_f) n/(p − 1) + O(log_p n), where m_f is the number of roots of f modulo p and m_g is defined similarly. Here the implied constant depends on fg and p. The proof concludes by noting that v_p(t) is a constant, whereas v_p(u_n) is bounded away from v_p(t) for sufficiently large n (note this threshold is computable). We deduce that u_n ≠ t, again, for sufficiently large n, from which the desired result follows. □
We now give a sufficient condition for a recurrence to be p-asymmetric. We use the following consequence of the Chebotarev Density Theorem. Let K be a Galois extension of degree d over Q, and denote by O its ring of integers. Let Spl(K) be the set of rational primes p such that the ideal pO totally splits in O, i.e., such that pO = p_1 ⋯ p_d where the p_i are distinct prime ideals. The following result appears as [18, Corollary 8.39] and [20, Corollary 13.10]. The latter reference attributes the result to Bauer.
Theorem 10. Let K and L be Galois extensions of Q such that K ≠ L. Then Spl(K) and Spl(L) differ in infinitely many primes.
We state the main theorem of this section.
Theorem 11. There is a procedure to decide the Membership Problem for the class of hypergeometric recurrences (1) whose polynomial coefficients have different splitting fields.
Proof of Theorem 11. Let ⟨u_n⟩_{n=0}^∞ satisfy a recurrence (1) for which the coefficients f and g have respective splitting fields K and L, with K ≠ L. Recall that there are only finitely many primes that are not Hensel primes for fg. By Theorem 10, there exists a Hensel prime for fg that lies in exactly one of the two sets Spl(K) and Spl(L). For such a prime p, the recurrence (1) is p-asymmetric. Hence the result follows from Lemma 9. □
We note that the recurrence (1) can be p-asymmetric even when f and g have the same splitting field. We demonstrate this phenomenon with the following example.
It is straightforward to verify that 7 is a Hensel prime for fg by noting that it does not divide the discriminants of the respective irreducible factors of f and g. To show that the recurrence is 7-asymmetric, observe first that f factors as (x + 4)(x + 3)(x² + 1) over Z/7Z, where x² + 1 is irreducible; thus f has two roots in Z/7Z. On the other hand, g factors into a pair of irreducible quadratic polynomials over Z/7Z and hence has no roots.
We can now follow the argumentation of Lemma 9 to decide the Membership Problem for ⟨u_n⟩_{n=0}^∞ with respect to any given target t ∈ Q. Consider the monic recurrences x_n = g(n)x_{n−1} and y_n = f(n)y_{n−1} with x_0 = y_0 = 1. To obtain bounds on v_7(y_n), note that |f(k)| ≤ n⁴ for all n ≥ 2 and 1 ≤ k ≤ n. Proposition 4 then gives explicit upper and lower bounds on v_7(y_n) with main term 2n/6, while v_7(x_n) = O(log_7 n) since g has no roots modulo 7. For any target t ∈ Q, these bounds allow us to compute a threshold B such that for all n > B we have v_7(u_n) = v_7(x_n) − v_7(y_n) < v_7(t), and hence u_n ≠ t.
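The same strategy can be exercised on any concrete p-asymmetric recurrence. In the sketch below the polynomials are our own illustrative choices (the paper's example polynomials are not reproduced here), with the recurrence read as f(n)u_n = g(n)u_{n−1}.

```python
# Sketch of the Lemma 9 argument on a made-up 7-asymmetric recurrence.
from fractions import Fraction
import sympy as sp

x = sp.symbols('x')
f = x**2 + 1        # 0 roots modulo 7 (illustrative choice)
g = x**2 + 3        # 2 roots modulo 7, namely 2 and 5 (illustrative choice)

def roots_mod(poly, p):
    return sum(1 for k in range(p) if poly.subs(x, k) % p == 0)

def vp(q, p):
    q, e = Fraction(q), 0
    num, den = q.numerator, q.denominator
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return e

p = 7
print(roots_mod(f, p), roots_mod(g, p))      # 0 vs 2: the recurrence is 7-asymmetric

u, t = Fraction(1), Fraction(3, 2)           # u_0 = 1 and an arbitrary target t
for k in range(1, 60):
    u *= Fraction(int(g.subs(x, k)), int(f.subs(x, k)))
print(vp(u, p), vp(t, p))   # v_7(u_n) keeps growing, so eventually u_n != t is certain
```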
Decidability: quadratic splitting fields
In this section, we focus on the decidability of the Membership Problem for recurrences in which both f, g ∈ Z[x] are monic and split completely over a quadratic (degree-two) extension K of Q.
Recall that a number field K is quadratic if and only if there is a square-free integer β such that K = Q(√β). The assumption that f and g are both monic ensures that the roots of both polynomials are algebraic integers in Q(√β). As shown in [25, Chapter 3], the following holds.
Theorem 12. Suppose that β ∈ Z is square-free. Then the ring of algebraic integers in Q(√β) has the form Z[θ], where θ = (1 + √β)/2 if β ≡ 1 (mod 4) and θ = √β otherwise.
The main result of the section is as follows.
Theorem 13. The Membership Problem for recurrences of the form (1) is decidable under the assumption that f, g are both monic and both split over a quadratic extension K of Q.
The proof of Theorem 13 is given in Sections 5.1 to 5.4. The details differ slightly according to the two cases for the generator θ of the ring of integers of K, as presented in Theorem 12. In the subsections below, we treat the case θ = (1 + √β)/2. The necessary adjustments for the case θ = √β are given in Appendix B. Henceforth we assume a normalised instance of the Membership Problem, given by the recurrence (1) and target t ∈ Q. Our goal is to exhibit an effective bound B such that u_n ≠ t for all n > B. To this end, our strategy is to find B such that for all n > B there exists a prime that divides u_n but not t. At the conclusion of the proof of Theorem 13, we demonstrate the argument and techniques with a worked example, namely Example 2 in Section 5.4.
Let β ≡ 1 (mod 4) be a square-free integer and K = Q(√β) a quadratic field over which the polynomials f and g in (1) split completely. Let θ := (1 + √β)/2 and write m_θ for the minimal polynomial of θ.
5.1. Partitioning the roots of fg. Let R be the set of roots of fg. We partition R into disjoint subsets (which we shall call the classes of R), with α, α′ ∈ R in the same class if and only if α − α′ ∈ Z. We say that a subset S ⊆ R is balanced if f and g have the same number of roots in S, counting repeated roots according to their multiplicity. A subset is unbalanced otherwise. The linchpin of the proof of Theorem 13 is the balance of roots in the classes.
If each class (as above) is balanced then the roots of f and g can be placed in a bijection under which corresponding roots differ by an integer and have the same multiplicity in f and g respectively. In this case, by cancelling common factors in the product ∏_{k=1}^{n} g(k)/f(k), we see that for n sufficiently large u_n is a rational function in n. For such an instance, the Membership Problem reduces to the problem of deciding whether a univariate polynomial with rational integer coefficients has a positive integer root, which is straightforwardly decidable. A detailed account of this argument is given in [22, Appendix B].
Let us now consider the case where there is an unbalanced class C. By the assumption that f and g have the same degree, there must, in fact, be at least two unbalanced classes. It follows that there is an unbalanced class that is not contained in Z (i.e., an unbalanced class of quadratic integers).
Here it is convenient to define the following linear ordering on R. Given elements aθ + b and a′θ + b′ in R (where a, a′, b, b′ ∈ Z), define aθ + b ≺ a′θ + b′ if and only if one of the following four mutually exclusive conditions holds: (1) a′ ≤ 0 < a; (2) 0 < a < a′; (3) a < a′ ≤ 0; (4) a = a′ and b < b′.
Figure 1. Image of φ on Z as well as the positions of constants used in the proof of Theorem 12 to determine that v_p(u_k) ≠ 0 for k that satisfy a_0θ′ + θ′/3 ≤ k ≤ a_0θ′ + 2θ′/3. Note that the preimages α ∈ R such that 1 ≤ φ(α) ≤ n are precisely those roots for which α ≼ α_0.
Note that the classes in R are intervals with respect to the order ≺. Thus the order lifts naturally to a linear order on classes. In particular, the least unbalanced class C_0 is well-defined. Let α_0 = a_0θ + b_0 be the greatest element in C_0. Then {α ∈ R : α ≼ α_0} is unbalanced because this set is a disjoint union of balanced classes and C_0. Further, a_0 > 0 because the least unbalanced class is necessarily a subset of quadratic integers of the form a_0θ + Z. Here we note that the image of an unbalanced class under the automorphism of K that interchanges √β and −√β is likewise an unbalanced class, and so a_0 > 0.
5.2. Threshold conditions. Next we exhibit a threshold B (defined in terms of the recurrence (1)) such that for all n > B there are rational integers θ′ and p, with p > n prime, satisfying the following conditions: (P1) m_θ(θ′) ≡ 0 (mod p); (P2) the function φ : R → Z that maps aθ + b to the representative of aθ′ + b in {0, 1, …, p − 1} is an order embedding of (R, ≺) in ({0, 1, …, p − 1}, <); (P3) the set {α ∈ R : 1 ≤ φ(α) ≤ n} is unbalanced.
The definitions for θ′ and p follow. Consider the interval defining I(n) as in Section 3, and let M be an upper bound on {|a|, |b| : aθ + b ∈ R} and on the height of the minimal polynomials of the elements of R. By Theorem 5, there is an effective threshold B, which we may assume to be greater than 3M(M + 1), such that for all n > B there exists a prime p > 3Mn that divides the product supplied by Theorem 5. Furthermore, since p is prime, we deduce that there exists k_0 ∈ I(n) ∩ (2N + 1) such that k_0² ≡ β (mod p). We define θ′ ∈ N to be the number such that k_0 = 2θ′ + 1. We will show that θ′ and p satisfy Conditions (P1)-(P3). One checks that m_θ(θ′) ≡ 0 (mod p); thus θ′ satisfies Condition (P1). We turn next to establishing Condition (P2). Since k_0 ∈ I(n) and k_0 = 2θ′ + 1, we obtain bounds on θ′ in terms of n; call these (12). Combining (12) with the inequality 1 ≤ a_0 ≤ M and rearranging terms gives the inequalities (13), among them θ′ ≤ p/(4M). The inequality θ′ ≤ p/(4M) in (13) implies that for all roots aθ + b ∈ R, the representative of aθ′ + b modulo p is aθ′ + b itself when a > 0, and aθ′ + b + p when a ≤ 0 (for the latter, recall that R contains no positive integers). Further, since |b| ≤ M < θ′ for all aθ + b ∈ R, we conclude that φ is an order embedding of (R, ≺) into ({0, …, p − 1}, <). This establishes (P2). Equation (12) and the inequalities in (13) yield that φ(α_0), the image of the greatest element in C_0, is upper bounded by n. From the definition of the order (R, ≼), for α ∈ R we have that α ≼ α_0 if and only if φ(α) ≤ n. Thus (P3) follows from the fact that the set {α ∈ R : α ≼ α_0} is unbalanced.
Consider the map ψ : Z[θ] → Z/pZ induced by θ ↦ θ′. Condition (P2) entails that ψ and φ agree on R, while Condition (P1) entails that ψ is a ring homomorphism. (We note in passing that the kernel of ψ is a prime ideal p appearing in the prime ideal factorisation of pZ[θ].) Hence the polynomial fg splits over Z/pZ and φ maps the roots of fg in K to roots of fg in Z/pZ. Consider the decomposition of the p-adic valuation v_p(u_n) = ∑_{k=1}^{n} v_p(g(k)) − ∑_{k=1}^{n} v_p(f(k)). Let h(x) be an irreducible factor of either f or g. Then h(x) is monic, of degree at most 2 and height at most M. Since p > 3Mn, we easily see that |h(k)| < p² for all 1 ≤ k ≤ n and hence v_p(h(k)) ∈ {0, 1}. It follows that v_p(u_n) is equal to the number of roots of g in Z/pZ that lie in {1, …, n} minus the number of roots of f in Z/pZ that lie in {1, …, n}, counting repeated roots according to their multiplicity. Observe that this count takes place on the set {α ∈ R : 1 ≤ φ(α) ≤ n}. By Condition (P3), the aforementioned set is unbalanced and so it quickly follows that v_p(u_n) ≠ 0.
5.4. Concluding the proof of Theorem 13. Finally, let us return to the decidability of the Membership Problem in the setting of Theorem 13. By our standing assumption that all instances of the problem are normalised we have that t ≠ 0. We have exhibited a bound B such that for all n > B there exists a prime p > 3Mn such that v_p(u_n) ≠ 0. This means that if p_0 is the largest prime such that v_{p_0}(t) ≠ 0 then for n > max(B, p_0/(3M)) we have u_n ≠ t. Thus we have reduced the Membership Problem in this setting to a finite search problem. This immediately establishes decidability and concludes our proof of Theorem 13.
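Operationally, once the bound is computed the remaining search is trivial; the sketch below (ours, with f and g passed as integer-valued callables and the recurrence read as f(n)u_n = g(n)u_{n−1}) spells it out.

```python
# Finite search once an effective bound B with u_n != t for all n > B is known.
from fractions import Fraction

def membership_by_finite_search(f, g, u0, t, B):
    """Return some n <= B with u_n = t, where f(n) u_n = g(n) u_{n-1}; else None."""
    u, t = Fraction(u0), Fraction(t)
    if u == t:
        return 0
    for k in range(1, B + 1):
        u *= Fraction(g(k), f(k))
        if u == t:
            return k
    return None   # together with the bound B, this certifies that no n works at all

# Example call with a made-up recurrence (n + 2) u_n = 2n u_{n-1}:
# membership_by_finite_search(lambda k: k + 2, lambda k: 2 * k, 1, Fraction(16, 5), 100)
```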
We illustrate the construction underlying Theorem 13 with a worked example. Write p_0 for the largest prime such that v_{p_0}(t) ≠ 0. By Theorem 5, there is a bound B > 3M(M + 1) such that for all n > max(B, p_0/(3M)), there is a prime p with v_p(u_n) ≠ v_p(t). This permits us to reduce the Membership Problem for ⟨u_n⟩_{n=0}^∞ and t to a finite search problem.
Given a target t and sufficiently large n, the process in the proof of Theorem 13 finds a prime p with v_p(u_n) ≠ v_p(t). Below we illustrate the idea of the proof in the specific case t = 11/59 and n = 61. (Here we have p_0 = 59 and hence n > 3M(M + 1) and n > p_0/(3M), as required in the proof of Theorem 13.) We will establish the existence of a prime p such that v_p(u_61) ≠ 0 and v_p(t) = 0, witnessing that u_61 ≠ t.
Guided by the proof of Theorem 5, we observe that the prime p := 1481 > 3nM divides the product supplied by Theorem 5. It follows that v_p(u_61) = −1, while v_p(t) = 0.
Discussion
In light of the results in Section 4, a clear direction for further research is to examine the decidability of the Membership Problem for recurrences whose polynomial coefficients share the same splitting field. We recall that previous work [22] established decidability when the polynomial coefficients split over the rationals. The present work considers the case when the two polynomials split over the ring of integers of a quadratic field. In future work we will consider the more general case in which all roots of the coefficient polynomials have degree at most two. As far as the authors are aware, the only known results in this direction are the (un)conditional decidability results for quadratic parameters in [12]. Extending the approach of the present paper to the case of polynomials with roots of degree more than two would require new results on large prime divisors of the values of such polynomials, which is an active area of research in number theory.
The remaining part of the proof for the case β ≡ 1 (mod 4), as given in Subsection 5.3 and Subsection 5.4, carries over to the present case without change.
∑_{p>Mn} e_p log p = log(F_n) − ∑_{p≤Mn} e_p log p.
However, this leads to a contradiction since p ≥ (b − a)n ≥ k_3 − k_1. Hence for each prime divisor p | F_n with p ≥ (b − a)n, we find that e_p = v_p(F_n) ≤ 2. Thus we bound the summation in the statement of the claim by ∑_{n<p≤Mn} e_p log p ≤ ∑_{p≤Mn} 2 log p ≤ 2 log(Mn) π(Mn).
Asymptotic estimate for the largest prime divisor. Fix a polynomial f(x): | 8,911 | 2023-03-16T00:00:00.000 | [
"Mathematics"
] |
User Factors Affecting Both Subscription Intention and Time for 4G Wireless Internet Service
While much attention is paid to 4G wireless Internet service based on long term evolution (LTE) technology, previous studies investigating both the subscription intention and the subscription time for 4G wireless Internet service are lacking. This study attempts to fill this void by analyzing the subscription intention and time of 4G wireless Internet service users using a bivariate two-equation model. Further, as previous studies on self-efficacy and innovativeness are lacking in mobile service adoption despite the importance of user capability for adopting Internet service, this study intends to fill the void by including user technical competency, representing possession of comprehensive knowledge of 4G wireless Internet service, extensive use of wireless Internet, and the use of an advanced smart phone. The analysis results of the bivariate two-equation model using the sample of 810 Korean users show that users who have comprehensive knowledge of 4G wireless service, use an advanced mobile smart phone, greatly use wireless Internet through an advanced mobile smart phone, or are male are more likely to adopt 4G wireless service. The subscription time is shorter for users who extensively use wireless Internet, use an advanced mobile smart phone, are male, or have a high before-tax monthly income. This study contributes to the literature on mobile services by suggesting user technical competency as affecting subscription intention and time for 4G wireless Internet service. Managers may better concentrate their marketing efforts on the group of people using wireless Internet extensively and advanced mobile smart phones, and who are male.
Introduction
The 4G mobile industry is the fastest-growing part of the telecommunications industry (FreedomPop, 2019; KT Service Internet Provider, 2015; UbiFi, 2020). 4G wireless internet service was developed from efforts to find niche markets after the communication service market became saturated, to advance wireless communication technology, and to utilize the frequency bandwidth (SPARK Services, 2020). 4G wireless internet service is complementary to other high-speed, mobile communication services. Given the advantages of high speed and mobility, 4G wireless internet service will provide many opportunities to occupy most of the market for telecommunication service where wired and wireless internet service is converged, and to improve the service scope of the telecommunication business. Thus, both wired and wireless communication service providers are eager to obtain the rights to do 4G wireless internet service business. LTE (long-term evolution-advanced) technology, which is a 4G mobile communication service, prevails in the offers for 4G wireless service. Although 5G is currently the state-of-the-art technology, our study focuses on 4G wireless internet service because 4G is the next cutting-edge technology besides 5G and users have sufficient usage experience to provide knowledgeable and reliable answers about the factors affecting its usage, unlike 5G, which has had a shorter period of diffusion.
This study has several motivations to fill the void in the previous studies on 4G mobile service. First, this study attempts to analyze the subscription intention and time (i.e., time taken before actual subscription to new 4G wireless internet service occurs after the service is initiated in the market) of 4G wireless internet service based on LTE technology using a bivariate two-equation model as the previous studies investigating the subscription intention and time simultaneously are lacking. Previous studies on mobile service (Fang & Fang, 2016;Li & He, 2015;Sanakulov & Karjaluoto, 2015) have rarely considered these both aspects at the same time. Although it is necessary to analyze two critical uncertainties involved with new-technology adoptions whether and when the target market will begin to use them, previous studies examining both subscription and time are almost nonexistent. This study utilizes a bivariate two-equation model where the subscription intention and the subscription time are specified as two-step processes by the maximum likelihood estimation method. Using the survey data from the residents in metropolitan areas, this study attempts to analyze the subscription intention and time for 4G wireless service.
Second, our study intends to fill the void in the factors affecting the subscription intention and time of 4G wireless internet service by including user factors such as user technical competency, which is defined as a group of factors for self-efficacy and innovativeness of users, and indicates the extent that users have knowledge and usage experience of wireless internet required for adopting 4G wireless internet service, and tend to pursue new technology. While many empirical studies exist on the adoption of mobile internet service in other countries, the studies on the user factors for the adoption of 4G wireless internet service based on LTE technology are relatively rare (Jung et al., 2015). The exact links of 4G wireless internet service to economic growth remain unclear until we look at the details of which kinds of people use broadband (Ericsson, 2009).
Previous studies on self-efficacy and innovativeness are lacking in mobile service adoption, despite the importance of user capability for adopting new internet service (Kuo & Yen, 2009; Lee & Kim, 2009; Lee & Quan, 2013; Scott & Walczak, 2009). The 4G wireless internet service is an innovative technology, and its usage is related to the innovativeness of users. People with a greater extent of innovativeness are likely to establish more positive perceptions in terms of ease of use for Wireless Mobile Data Services (WMDS) in China (Lu et al., 2008). Personal innovativeness is much related to the perceived ease of use of 3G mobile services (Kuo & Yen, 2009). Innovativeness affects perceived ease of use for mobile game service (Lee & Quan, 2013). Users with a greater extent of self-efficacy and innovativeness tend to perceive less difficulty while using mobile game service. Thus, innovative users are likely to have the intention to subscribe to 4G wireless internet service within a short time. This study intends to suggest user technical competency, that is, the possession of comprehensive knowledge for 4G wireless service, extensive use of wireless internet (more than 1 GB per month), and use of an advanced mobile smart phone (model not older than 3 years).
Third, given the great usage and expectations of wireless internet, this study intends to provide an empirical study investigating other user factors such as user demographics (gender, age, education, income). The target sample is composed of Korean users for 4G wireless internet service. Evidence posits that the telecommunications services are greater in economic impact in developing countries such as Latin America and Asia than others (Thompson & Garbacz, 2011). China provides mobile communication quality which varies from one place to other because of the difference in economic advancement, and perceived communication quality has an influence on users' continued usage intention (Li & He, 2015). Although there are many empirical studies on adoption of mobile internet service in other countries, the studies are lacking on the adoption of 4G wireless internet service based on LTE technology, which is a convergence of internet and broadband service. This study intends to fill this void.
Wireless and Broadband Internet Service
The current trend toward wireless internet has created great change in the world of mobile wireless networks (FreedomPop, 2019;UbiFi, 2020). 4G wireless internet service allows customers who are distributed geographically to be provided with information and contents in a collaborative, computer-based environment. 4G wireless internet service is a telecommunication service utilizing portable mobile devices to access information and content on the internet. 4G wireless internet service can be provided in moving vehicles at high speeds, which make it differ from wireless LAN (local area network) and low-speed wireless internet based on telecommunication devices. 4G wireless internet service enables high-speed access to wireless internet when users are driving long distances. This technology improves the mobility of WiMax developed by IEEE 802.16 Group and was accepted by IEEE in December 2005 as mobile WiMax standardization. Recently, LTE technology is the technology that prevails the offers for 4G wireless service.
The 4G wireless technology attracts more users by circumventing drawbacks of limited bandwidth, limited mobility, and instability of 3G service (S.-C. Lin et al., 2015). The implementation of 4G wireless internet service facilitates the convergence of service and technologies and development of Korean IT industries. In Korea, after 2.3-GHz frequency bandwidth had been allocated to 4G wireless service, several studies investigated the technology standardization and the possibility of 4G wireless internet service adoption in the industry.
4G wireless internet service provides four advantages compared with other telecommunication service; 4G wireless internet service enables (a) high-speed (more than 1 Mbps) (b) mobile telecommunication when users gain access (c) using various mobile devices (e.g., PDA, notebook computer, smart phone) (d) through wireless internet while they are moving. 4G wireless internet service can be provided much more cheaply than fixed-line services and offer cheap broadband infrastructure.
The diffusion of the convergence of internet and mobile phones in Korea has been phenomenal. The market for 4G wireless internet service based on LTE technology is likely to grow significantly for two reasons. First, the evolutionary advancement of networks is realized in preparation for an enormous increase in network traffics. The great market opportunity for 4G wireless internet service is possible by the rapid diffusion of smart phones, netbook computers, and mobile devices and application stores of large scale which contribute to a large demand for mobile data traffic. AT&T Wireless initiated 3G in July 2004 to provide the United States' first 3G voice and data network. The current 3G network cannot realize the large-scale increase of the network traffic, and this makes the evolutionary advancement to the next generation of network necessary (Jeon & Lim, 2010). Second, it is possible to create new markets for services through 4G wireless internet service such as games, music, payment services, logistics, and disaster prevention and recovery. The generation of new services and applications will be a great opportunity for further profits. The increasing saturation of mobile technology such as phones and note pads offers the endless range of commercial activities including shopping, real-time news, buying tickets, banking, and booking through the internet in the "pockets" of consumers.
As data and voice technologies are combined, wireless broadband networks can decrease the strategic revenue position of traditional DSL/Cable and ISPs businesses, and can acquire the lead on cellular networks. The 4G wireless internet service is based on fast-developing networks that combine localized WLANs to build up a nationwide infrastructure for wireless service, which provides seamless wireless access to users through roaming from one spot of a city to another and among cities.
The wireless LAN does not provide support for continuous mobile communication when users are moving in the area wider than 100 m 2 . The communication speed is fast in the following descending order: wired internet, wireless LAN, digital multimedia broadcasting (DMB) (satellite/ earth), wideband code division multiple access (W-CDMA), and mobile internet. 4G wireless internet service provides communication service for the users who are moving in the speed at most of 60 km/hr. DMB (satellite/earth), W-CDMA, and mobile internet also support the communication service for the users who are moving. The price of wireless LAN, satellite DMB, and 4G wireless internet service is lower than that of mobile internet and W-CDMA.
Factors Affecting Subscription Intention and Time
The studies on the factors affecting wireless broadband internet service can be reviewed in two groups: broadband and mobile service. The studies of the former group investigated factors influencing broadband usage. For example, Park and Yoon (2005) suggested that the key factor for the spread of broadband in Korea is the government's policy for competition through deregulation. Tanguturi and Harmantzis (2006) investigated the behavioral, economical, and technological factors which affect the choosing of wireless technologies. The other studies used the technology acceptance model (TAM) to investigate the adoption of mobile services (Chong et al., 2011; Shin, 2011). TAM is a widely used theory for studying the adoption of mobile technology and services (Sanakulov & Karjaluoto, 2015). Mobile broadband exerts an important direct influence on gross domestic product (GDP), and low-income countries obtain much greater benefit from mobile broadband (Thompson & Garbacz, 2011). Perceived ease of use, perceived enjoyment and usefulness, and continuance usage intention of mobile service are affected by perceived communication quality (Li & He, 2015). For the studies of mobile service, various applications have been considered such as brokerage service (J. Lin et al., 2011), health care (S.-P. Lin, 2011), and mobile apps (Fang & Fang, 2016).
To focus on user capability, which is suggested as crucial for adopting new internet service in previous studies (Kuo & Yen, 2009; Lee & Quan, 2013), this study develops user technical competency as a group of factors for self-efficacy and innovativeness of users, which indicates the extent to which users possess knowledge and usage experience of wireless Internet required for adopting 4G wireless Internet service and are likely to pursue new technology. This study describes user technical competency factors as composed of concepts of self-efficacy and innovativeness and intends to suggest three factors of user technical competency, that is, the possession of comprehensive knowledge for 4G wireless service, extensive use of wireless Internet, and use of an advanced mobile smart phone. Each of the factors for user technical competency is chosen as it positively influences self-efficacy and innovativeness. Self-efficacy is assessed by the possession of comprehensive knowledge for 4G wireless internet service and extensive use of wireless internet. For instance, Lee and Kim (2009) suggested that the usage of intranet is affected by web experience, which represents self-efficacy in this study. Scott and Walczak (2009) suggested that computer self-efficacy had an influence on perceived ease of use and adoption of an ERP (enterprise resource planning) system's training tool. Users with self-efficacy have little difficulty in using ubiquitous mobile game service, showing that marketing efforts better center on the users with experience with related technology (Lee & Quan, 2013). Thus, people with high self-efficacy are likely to require less effort to realize their goals and face fewer barriers than users with low self-efficacy.
Besides factors of user technical competency, this study adopts Korean users' demographics which consist of gender, age, education, and income based on the previous studies in mobile service which used users' demographics. For example, Hwang et al. (2016) suggested demographics factors affecting mobile application usage such as gender, age, and application types. Based on actual user experience and behavior log data, Hwang et al. suggested the moderating effects of gender and age on the usage of mobile apps. Thus, three factors of user technical competency along with users' demographics such as gender, age, education, and income can be posited to influence subscription intention within a shorter time. Figure 1 suggested the effect of user technical competency and demographics on subscription intention and time for 4G wireless service.
The Analysis Model
It is necessary to develop and validate a response scale that provides more accurate predictions not only of whether a future adoption will occur but also of when this adoption behavior is most likely to occur (Ittersum & Feinberg, 2010). This study proposes to estimate timed intent measure by presenting respondents with multiple time intervals for a specified time horizon. The study applies bivariate equation model which predicts the subscription to 4G wireless internet service based on LTE technology and the subscription time. The model describes the probability to subscribe to 4G wireless internet service and the "conditional" subscription time using the sample which decided to subscribe to 4G wireless internet service and two separate probability equations. Furthermore, the model predicts the impact of the increase of subscription rate on the subscription time.
The respondents are indexed by i = 1, 2, …, N. Let y_i* and T_i* denote the latent propensity to subscribe and the subscription time, respectively. Then the following equations are defined: y_i* = ω_i′γ + u_i and T_i* = x_i′δ + υ_i, where γ and δ are parameters which should be estimated, u_i and υ_i are disturbance terms, and ω_i and x_i are vectors of independent variables. The subscription time has positive value.
Thus, T_i* is defined as the natural log of the subscription time so that it ranges over all real values, which may be positive or negative, in the bivariate normal distribution.
Some of ω_i and x_i may be the same vectors of independent variables. y_i* is not observed; instead, whether y_i* is greater than 0 is observed, that is, whether users will subscribe to 4G wireless internet service. The indicator y_i of whether users will subscribe to 4G wireless internet service is defined as y_i = 1(y_i* > 0), where 1(•) is the indicator function which takes the value 1 (0) if the condition in the parentheses is true (false). Thus y_i has the value of 1 (0) if the respondent will (will not) subscribe to 4G wireless service. The subscription time is measured when y_i has the value of 1. The respondent chooses one answer for the subscription time among eight examples, and the choice is recorded through the interval indicators I_{i1}, …, I_{i8}. If a respondent indicates that he or she will subscribe to 4G wireless internet service within 6 to 9 months, he or she will choose the third example and I_{i3} is one. σ_1, σ_2, and ρ are the standard deviations of y_i* and T_i* and the correlation between these terms, respectively. The correlation indicates the interaction between subscription to 4G wireless internet service and subscription time. σ_1 and σ_2 can be set to 1 and σ without losing generality if there are no extra constraints on the parameters. If u_i and υ_i/σ are set to be z_{i1} and z_{i2}, respectively, the model is estimated using the bivariate standard normal distribution; that is, (z_{i1}, z_{i2}) follows the bivariate standard normal distribution BVN(0, 0, 1, 1, ρ). Φ(•) is the standard normal cumulative distribution function (CDF) and Ψ(z_{i1}, z_{i2}, ρ) is the bivariate standard normal CDF. The probability that the ith respondent does not subscribe to 4G wireless internet service can be represented as follows:
Pr(y_i = 0) = Pr(y_i* ≤ 0) = Φ(−ω_i′γ).
The probability that the ith respondent subscribes to 4G wireless internet service and does so within the jth time interval (for example, within 1 month for j = 1, within 1 to 3 months for j = 2, and after 24 months for j = 8) is the probability that y_i* > 0 and T_i* falls between the logarithms of the interval endpoints; each such probability can be written as a difference of bivariate standard normal CDF values Ψ(•, •, ρ) evaluated at ω_i′γ and at the standardized log endpoints of the interval. Thus, the final form of the log of the maximum likelihood function to estimate the parameters of the bivariate two-equation model suggested in the study is ln L = Σ_i [(1 − y_i) ln Pr(y_i = 0) + y_i Σ_{j=1}^{8} I_{ij} ln Pr(y_i = 1 and the subscription time falls in the jth interval)]. The parameters γ, δ, ρ, and σ should be estimated using the study sample. Table 1 indicates the operational definitions for the factors of user technical competency and demographics and the descriptive statistics of the variables. The possession of comprehensive knowledge is measured as a dichotomous item according to whether users are confident in knowing how to use recent models of 4G wireless internet service without difficulty (0 = No, 1 = Yes). Extensive use of wireless internet is measured as a dichotomous item according to whether the amount of wireless internet data communication is more than 1 GB per month (0 = No, 1 = Yes). Use of advanced mobile smart phone is a dichotomous item according to whether users are currently using an advanced mobile smart phone whose model is not older than 3 years (0 = No, 1 = Yes).
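For readers who want to see the estimation step spelled out, the following sketch evaluates a log-likelihood of the kind described above. It is not the authors' code: the packing of the parameters, the variable names, and the treatment of the interval endpoints as a vector of log-time cut points are all our own assumptions.

```python
# Hedged sketch of the bivariate two-equation log-likelihood described above.
import numpy as np
from scipy.stats import norm, multivariate_normal

def log_likelihood(params, W, X, y, interval, cuts):
    """params packs gamma, delta, log(sigma), atanh(rho).
    W, X : covariate matrices for the subscription and timing equations.
    y    : 0/1 subscription indicator; interval[i] in 1..8 is the chosen time interval.
    cuts : log-time cut points with cuts[0] = -inf and cuts[8] = +inf (9 values)."""
    k1, k2 = W.shape[1], X.shape[1]
    gamma, delta = params[:k1], params[k1:k1 + k2]
    sigma, rho = np.exp(params[-2]), np.tanh(params[-1])

    def joint_cdf(a, h):
        # Pr(u_i > -a, v_i / sigma <= h) for standard normals with correlation rho
        return multivariate_normal.cdf([a, h], mean=[0.0, 0.0],
                                       cov=[[1.0, -rho], [-rho, 1.0]])

    ll = 0.0
    for i in range(len(y)):
        a = float(W[i] @ gamma)
        if y[i] == 0:
            ll += np.log(norm.cdf(-a))
        else:
            j = interval[i]
            lo = (cuts[j - 1] - float(X[i] @ delta)) / sigma
            hi = (cuts[j] - float(X[i] @ delta)) / sigma
            upper = norm.cdf(a) if np.isinf(hi) else joint_cdf(a, hi)
            lower = 0.0 if np.isinf(lo) else joint_cdf(a, lo)
            ll += np.log(max(upper - lower, 1e-300))
    return ll
```

The negative of this function can then be passed to a numerical optimiser (for example, scipy.optimize.minimize) to obtain maximum likelihood estimates of γ, δ, σ, and ρ.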
Data Collection
The data collection method employed online survey site which our researcher composed using a google service based on the structured questionnaire. The concepts of 4G wireless internet service are explained to respondents and the questions regarding the markets of 4G wireless internet service and their intentions to subscribe to 4G wireless internet service are asked. The questions which have ambiguous meanings are corrected until they have clear and straightforward meanings. The questions are asked to examine whether respondents are able to buy mobile devices and are "really" ready to subscribe to 4G wireless service. The major advantages of 4G wireless internet service over other internet services are described in terms of mobility, maximum communication speed, communication quality, and usable mobile devices. For instance, 4G wireless internet service is possible on the fast moving vehicles and the maximum communication speed is 7 Mbps which is 23 times greater than the speed of current wireless internet service (384 Kbps). In the pilot test, the explanations of 4G wireless internet service are reviewed to examine whether they deliver appropriate meaning of the 4G wireless service. The survey is requested to potential users in Seoul, that is, the capital of Korea, and its suburb area. The specialized survey company conducted the data collection for the scientific sample collection. The survey area is subdivided into the 43 (nine small cities and 34 district areas of two large cities), and the sample is selected in proportion to the size of the population of the subdivided area. The objective of the survey is to collect data for the estimation of subscription intention and time for 4G wireless service. The age of respondents is restricted to be from 20 to 65 years for the reliable response from their knowledge of 4G wireless service. The final sample includes 810 responses where the return rate is 45%.
The statistics are presented for three groups: the entire sample and the subsamples composed of the respondents who do and do not indicate an intention to subscribe. The questions are composed to ask for the intention to subscribe and for the time of subscription. For the simplicity of the question, the respondents are allowed to choose one answer among the candidate examples. Most (86.5%) of respondents have already subscribed to 4G wireless internet service and currently possess no intention to subscribe to 4G wireless service. One hundred and nine respondents have the intention to newly subscribe to 4G wireless service, which is 13.5% of the 810 randomly sampled respondents. Table 2 presents the distribution of the intentions to subscribe to 4G wireless internet service and the time of subscription. The test statistic with one degree of freedom is 6.63, so the hypothesis ρ = 0 is rejected at the 1% significance level. The t-value for the estimation of ρ is 38.41, which is significant at the 1% significance level. Thus, the hypothesis ρ = 0 is also rejected using the t-test. The estimated value for ρ is −0.9734. The negative value for ρ indicates that as the probability of subscription to 4G wireless internet service increases, the subscription time for 4G wireless internet service decreases. The respondents who have a lower probability of subscription to 4G wireless internet service will subscribe to 4G wireless internet service after a longer time due to their specific individual situations. The significant correlation between subscription probability and subscription time makes the bivariate two-equation model more appropriate than a single-equation model. Thus, it is better to use the bivariate two-equation model to analyze the intention or time of subscription using user technical competency and demographics.
Results and Discussion
In the bivariate two-equation model, the effects of the possession of comprehensive knowledge for 4G wireless service, use of advanced mobile smart phone, extensive use of wireless internet, and gender on the intention to subscription are significant (p < .01, respectively). This indicates that if users possess the comprehensive knowledge for 4G wireless service, or currently use advanced mobile smart phone, or extensively use wireless internet through advanced mobile smart phone, they are more likely to adopt 4G wireless service. This shows that factors of user technical competency which indicates self-efficacy and innovativeness positively influence the intention to subscribe to 4G wireless service. Furthermore, male users are inclined to have stronger intention of subscription than female users. The estimated coefficients of explanatory variables in Table 3 indicate that the subscription intention and the subscription time depend on the social and economic characteristics of respondents. The respondents who have known comprehensively the 4G wireless internet service before are more likely to subscribe to 4G wireless internet service than the ones who have not known the 4G wireless internet service before. Furthermore, the respondents who currently use advanced mobile smart phone have greater probability to subscribe to 4G wireless internet service than the ones who do not currently use advanced mobile smart phone. In addition, male respondents are more likely to use 4G wireless internet service than female respondents. Managers should concentrate their marketing efforts toward these groups of customers. Although not significant, the age of respondents is positively related to the probability of subscription to 4G wireless service. The negative coefficient of Age 2 indicates that the age of respondents begins to negatively affect the probability of subscription after some age. Furthermore, the coefficient of the square term of age is negative indicating that the time to the subscription to 4G wireless internet service begins to decrease after the age of users exceed "some" age. Until then, the subscription time to 4G wireless internet service increases as the age of users increases. The positive constant term for the estimated bivariate model indicates that the predicted subscription time will be positive even if the values of the explanatory variables are small. This indicates that although 4G wireless internet service is initiated in the market, users will subscribe to 4G wireless internet service after some time.
Previous studies on mobile service (Fang & Fang, 2016;Li & He, 2015;Sanakulov & Karjaluoto, 2015) have rarely considered both subscription intention and time aspect at the same time. Thus, our study also suggests the user factors negatively affecting subscription time. The respondents who have known the 4G wireless internet service before are more likely to subscribe earlier to 4G wireless internet service than the ones who have not known the 4G wireless internet service before. The subscription time is shorter for the users who extensively use wireless internet and use advanced mobile smart phone or, are male, and have high before tax monthly income (p < .01, respectively). This shows that the respondents who currently use advanced mobile smart phone or wireless internet extensively will subscribe to 4G wireless internet service earlier than the ones who do not currently use advanced mobile smart phone or wireless internet extensively. This generally shows that the factors affecting subscription intention influences the subscription time negatively. In addition, male respondents will use 4G wireless internet service earlier than female respondents. The level of income is also negatively related to the subscription time. Thus, the marketing efforts of 4G wireless internet service should be concentrated to the extensive users of advanced mobile smart phone and wireless internet, male users, and the users with high income level. The promotion of the advantages of 4G wireless internet service to the users of advanced mobile smart phone and the extensive users of wireless internet and male users turned out to be effective strategy to increase the probability of subscription to 4G wireless service.
Conclusion
While there are many empirical studies on the adoption of mobile internet service, studies are lacking on the acceptance of 4G wireless internet service based on LTE technology. This study attempts to fill this void by analyzing the subscription intention and time of 4G wireless internet service users using a bivariate two-equation model with data collected from a survey of residents in metropolitan areas. The analysis results of the bivariate two-equation model using the sample of 810 Korean users show that the factors of user competency, which represent self-efficacy and innovativeness, and gender positively influence subscription intention. That is, users who have comprehensive knowledge of 4G wireless service, greatly use wireless internet through an advanced mobile smart phone, use an advanced mobile smart phone, or are male are more likely to adopt 4G wireless service. The subscription time is shorter for users who extensively use wireless internet, use an advanced mobile smart phone, are male, or have a high before-tax monthly income. Our study offers insights regarding the diffusion of 4G wireless internet service, addressing two critical uncertainties involved with new-technology adoptions (whether and when the target market will begin to use them), and specific user factors such as user competency (especially extensive use of wireless internet or use of an advanced mobile smart phone) and demographics are suggested as affecting subscription intention and time.
Implications for Researchers
Using a bivariate two-equation model, this study estimated both the subscription intention and time (i.e., time taken before actual subscription to new 4G wireless internet service occurs after the service is initiated in the market) of 4G wireless internet service based on LTE technology. As previous studies on mobile service (Chong et al., 2011;Fang & Fang, 2016;Li & He, 2015;S.-P. Lin, 2011;Sanakulov & Karjaluoto, 2015;Shin, 2011;Thompson & Garbacz, 2011) have rarely considered both subscription intention and time aspects at the same time, this study utilizes a bivariate two-equation model where the subscription intention and the subscription time are estimated as two-step processes by the maximum likelihood estimation method. While there are many empirical studies on diffusion of mobile internet service in other countries, the studies on user factors such as user competency for the adoption of 4G wireless internet service based on LTE technology are lacking. It is posited that the rapidly advancing telecommunications services is more economically impactful in developing countries in Asia (Thompson & Garbacz, 2011). The exact links of 4G wireless internet service to economic growth can be clear when the details of which kinds of people use broadband are examined (Ericsson, 2009). Thus, given the great usage and potential of wireless internet, this study provides an empirical study investigating user capability and demographics (gender, age, education, income) for 4G wireless internet service. As previous studies on self-efficacy and innovativeness are lacking in mobile service adoption, despite the importance of user capability for adopting internet service (Kuo & Yen, 2009;Lee & Kim, 2009;Lee & Quan, 2013;Scott & Walczak, 2009), this study contributes to literature in mobile services regarding self-efficacy and innovativeness by suggesting user technical competency as affecting subscription intention and time for 4G wireless internet service. While user capability should be important for adopting 4G wireless internet service and previous studies posited that people with greater extent of innovativeness likely establish more positive perceptions such as more perceiving ease of use for wireless mobile service (Lu et al., 2008), previous studies on self-efficacy and innovativeness are comparatively rare in 4G wireless internet service. Utilizing user technical competency and demographics, this study analyzes the subscription intention and time for 4G wireless internet service based on LTE technology through a bivariate two-equation model where the subscription intention and the subscription time are specified as two-step processes.
There are several future research issues. First, future studies should examine a more diverse set of samples. Second, other analysis methods can be used in future studies. For instance, the dichotomous choice contingent valuation methods can be useful to predict the demand for a market such as the 4G wireless internet service market, which is currently not widespread. Third, the intention to substitute 4G wireless internet service for various internet services can be useful in explaining the future structure of competitive telecommunication markets. The users of wired broadband service are less likely to replace it with 4G wireless internet service if these services are complementary rather than substitutive. Managers can determine whether these services are substituting each other based on a comparison study of these services. Fourth, the use of binary variables in our study precludes reliability and validity tests of our measures. Future studies can employ multi-item measures and report the reliability and validity of those measures. Fifth, future studies can provide comparisons among subsamples according to variables. For instance, it is possible to separate the entire sample into male cases and female cases, and show how the usage of wireless internet service differs between these samples. Finally, interaction effects can exist between user contingency variables and users' demographics, and future studies can show how these interactions can affect the usage of cutting-edge wireless internet service.
Implications for Practitioners
The results provide implications for practitioners by improving understanding of the kinds of users who ultimately adopt the service; corporate managers can thus understand the nature of potential customers. The studies in the context of Korean mobile services using user technical competency and demographics provide insights regarding the diffusion of 4G wireless internet service, that is, whether and when the target market will begin to use it, and regarding the type of potential customers for 4G mobile services. For instance, respondents who possess comprehensive knowledge, such that they can easily start using a new wireless internet service, are more likely to subscribe to 4G wireless internet service. Extensive users of wireless internet or advanced mobile smartphones are more likely to subscribe to 4G wireless internet service, and their subscription time is shorter than that of nonusers of these devices.
The results can help managers of 4G wireless internet services prepare market segmentation strategies based on user technical competency and demographics. To create value from the launch of 4G wireless internet service within a shorter time, managers may concentrate their marketing efforts on the group of people who are likely to subscribe sooner: males with a high income level who use wireless internet extensively and own advanced mobile smartphones. For a specific type of customer, the estimated model enables the estimation of the expected future subscription and subscription time for 4G wireless services, so managers can be better prepared for the move into 4G wireless internet service within an estimated time. Providers of 4G wireless internet service should develop competencies in market research to target specific customer segments and in understanding the rapidly changing mobile market space. Information service providers should target specific groups of customers through a competitive service differentiation strategy. | 7,956.2 | 2020-10-01T00:00:00.000 | [
"Business",
"Computer Science"
] |
Efficacy of Repetitive Transcranial Magnetic Stimulation Combined With Visual Scanning Treatment on Cognitive-Behavioral Symptoms of Unilateral Spatial Neglect in Patients With Traumatic Brain Injury: Study Protocol for a Randomized Controlled Trial
Left hemispatial neglect (LHSN) is a frequent and disabling condition affecting patients who have suffered a traumatic brain injury (TBI). LHSN is a neuropsychological syndrome characterized clinically by difficulties in attending, responding, and consciously representing the left side of space. Despite its frequency, scientific evidence on effective treatments for this condition in TBI patients is still limited. Based on the existing literature, we hypothesize that in TBI, LHSN is caused by an imbalance in inter-hemispheric activity due to hyperactivity of the left hemisphere, as observed in LHSN after right-hemisphere strokes. Thus, by inhibiting this left hyperactivity, inhibitory repetitive Transcranial Magnetic Stimulation (i-rTMS) would have a rebalancing effect, reducing LHSN symptoms in TBI patients. We plan to test this hypothesis in a single-blind, randomized, SHAM-controlled trial in which TBI patients will receive i-rTMS followed by cognitive treatment for 15 days. Neurophysiological and clinical measures will be collected before and after treatment and at follow-up. This study will give the first empirical evidence on the efficacy of a novel approach to treating LHSN in TBI patients. Clinical Trial Registration: https://www.clinicaltrials.gov/ct2/show/NCT04573413?cond=Neglect%2C+Hemispatial&cntry=IT&city=Bologna&draw=2&rank=2, identifier: NCT04573413.
INTRODUCTION
It is estimated that every year in Europe, around 235 per 100,000 people are affected by Traumatic Brain Injury (TBI) (1). TBI is a major cause of mortality and morbidity in young people, and its incidence is increasing in persons aged 65 years and older (1). TBI is associated with substantial health care costs, some of which are indirect and long-term, as they are related to loss of productivity and caregiver burden (1). Moreover, TBI is a major cause of long-term disability, impacting patients' and caregivers' quality of life (2).
Left hemispatial neglect (LHSN) is a common condition associated with long-term disability in patients affected by TBI. A recent study showed that about 30% of TBI patients are affected by LHSN (3), a spatial attentional syndrome characterized by a reduced ability to attend to, perceive, and consciously represent the left contra-lesional space in the absence of a primary sensory deficit (3). Persons with LHSN fail to attend to stimuli coming from the left side of space, which can affect the ability to carry out many everyday tasks, such as walking, eating, reading, and getting dressed. These patients are also often affected by anosognosia for hemiplegia and for LHSN. This condition hinders motor and cognitive recovery, predisposes to falling, and reduces independence (3). Furthermore, LHSN in TBI is often associated with a mixture of attention, motor, memory, executive function, and processing speed deficits (1). These impairments lead to a complex cognitive and behavioral picture, which may interfere with standard cognitive treatments (i.e., visual scanning protocols or prism adaptation). Indeed, as scientific evidence for effective treatment in TBI is still limited, LHSN often remains an untreatable and disabling condition in this population, possibly leading to a prolonged length of stay in rehabilitation and a poorer outcome (3).
Rehabilitation methods for LHSN-associated symptoms have been extensively investigated in persons with right cerebral stroke (4)(5)(6)(7)(8). Previous studies showed the efficacy of 1 Hz inhibitory repetitive Transcranial Magnetic Stimulation (i-rTMS) on visuospatial symptoms in persons with an ischemic lesion of the right hemisphere. In particular, i-rTMS was applied to the posterior parietal cortex (PPC) of the unaffected hemisphere for 2 weeks. Remarkably, the observed effects persisted 15 days after i-rTMS treatment (9)(10)(11). These results can be explained considering that the spatial attention deficit in LHSN due to a right middle cerebral artery territory stroke relates to abnormal activation of the neural system that mediates attentive spatial operations in the healthy brain (12). Lesions of the right PPC (or of the inter-hemispheric connectivity) cause hyperactivity of the left hemisphere. The subsequent inter-hemispheric imbalance leads to a biased attentive allocation toward the ipsilesional space (12). Consequently, inhibition of this hyperactivity may have a rebalancing effect, reducing the left spatial attention deficit in LHSN. Moreover, recent studies in stroke patients showed the possibility of improving the efficacy of standard cognitive treatments (i.e., visual scanning) if i-rTMS on the unaffected hemisphere precedes them (13,14).
The neural correlates of the inter-hemispheric imbalance associated with LHSN symptoms are often assessed using visual evoked potentials (VEPs). In particular, N1 is a posterior negative deflection in the VEPs, peaking around 180 ms after stimulus presentation, with greater amplitude for stimuli presented in the contralateral hemifield (15). In stroke, it has been demonstrated that LHSN is associated with a smaller amplitude and delayed latency of N1 for left presented stimuli compared to right presented stimuli (15)(16)(17)(18). This finding suggests that, in stroke, N1 is a neurophysiological index of impairment on left stimulus processing and, thus, a sign of inter-hemispheric imbalance in LHSN (15,18,19). Furthermore, a recent study comparing TBI patients with LHSN against controls showed the presence of hemispheric differences in latencies and amplitudes of the N1 component of VEPs to stimuli presented on both sides (15). These data suggest that the right hemispheric stroke imbalance model could also be applied to explain LHSN symptoms in TBI. In the latter, the hemispheric imbalance could be partly due to a diffuse axonal injury affecting the white matter tracts (20).
What is still unknown is whether in LHSN due to TBI, i-rTMS on the left PPC followed by a visual scanning protocol may be an effective treatment as in right hemisphere stroke, considering that in TBI, the damage is often more widespread and multifocal (1). However, Bonnì et al. demonstrated that a 2-week protocol of i-rTMS (30 Hz Theta burst stimulation on the left PPC) applied to a person affected by LHSN due to TBI reduced the hyper-excitability of the left PPC-primary motor cortex connectivity (20). In this single case, the authors demonstrated a bilateral increase of functional connectivity in the fronto-parietal network on functional Magnetic Resonance Imaging (fMRI). The rebalancing effect induced by brain stimulation was associated with remarkable improvements in LHSN cognitive and behavioral symptoms (20).
According to this preliminary evidence, this study's central hypothesis is that LHSN symptoms in both stroke and TBI rely on similar neurophysiological correlates. Indeed, as observed for right hemisphere strokes, LHSN in TBI patients may be caused by an imbalance in inter-hemispheric activity due to the left hemisphere's hyperactivity (12,21). Consequently, i-rTMS might have a rebalancing effect by inhibiting this hyperactivity, reducing LHSN symptoms and related disability in TBI patients.
Thus, the general aim of this randomized controlled trial is to test the efficacy of a novel therapeutic approach based on i-rTMS applied to the left PPC followed by a visual scanning treatment (VST) in comparison to the same cognitive treatment preceded by a sham stimulation on neurophysiological and clinical correlates of LHSN in a sample of TBI patients. In particular, this study's specific aims are (1) to assess the efficacy of i-rTMS applied to the left PPC + VST on the interhemispheric imbalance in patients affected by LHSN after TBI. In doing so, we will make use of measures of interhemispheric functional connectivity derived from N1. (2) To detail the effect of combined i-rTMS + VST on cognitive symptoms of LHSN in TBI patients as measured by specific clinical measures.
(3) To assess whether the effect of the combined i-rTMS + VST treatment has the potential to promote long-lasting lessening of the behavioral manifestations of LHSN in activities of daily living.
Study Design
The SMaRT TraCE trial ("Stimolazione Magnetica Ripetitiva Transcranica nel Trauma Cranio-Encefalico"; in English: repetitive transcranial magnetic stimulation in traumatic brain injury) is a single-blinded randomized controlled trial (RCT) with pre-test, post-test, and 12-week follow-up assessments. The design provides two parallel groups of patients with LHSN symptoms after TBI (i-rTMS + VST and SHAM + VST) with a 2:2 randomized allocation ratio in a superiority trial design. Figure 1 shows the study flowchart.
Participants
Patients with LHSN due to TBI will be recruited in the Neurorehabilitation Unit of the IRCCS Istituto delle Scienze Neurologiche di Bologna. Subjects will be recruited according to the following eligibility criteria.

Inclusion criteria: 1. Diagnosis of TBI; 2. Diagnosis of LHSN on specific assessment tests [asymmetry score in the Bells test > 3, (22)]; 3. Intra-hospital rehabilitation setting (ordinary hospitalization or DH); 4. Age between 18 and 80 years; 5. Time after injury between 3 weeks and 1 year; 6. Level of cognitive functioning (LCF) ≥ 5; 7. Adequate language comprehension to give informed consent. Language comprehension will be considered satisfactory if equal or superior to 75% in ordinary conversation, in the presence of an aphasic disturbance or deafness; use of hearing aids is allowed, and in case of doubt a language comprehension test (token test) will be administered; 8. Presence of inter-hemispheric asymmetries in EEG activity evidenced by qualitative evaluation.

Exclusion criteria: 1. Medical instability at enrollment, defined as the acute onset of an unexplained derangement of vital parameters (i.e., temperature, blood pressure, pulse rate, respiratory rate, oxygen saturation, level of responsiveness) outside the normal range (for example, fever, acute internist conditions, etc.) and/or the onset of any new medical condition requiring unexpected additional diagnostic procedures and treatments (for example, severe pain, reduction of urinary output, etc.); 2. Presence of epileptogenic alterations on the EEG and/or previous epileptic seizures; 3. Presence of intracranial implants of metallic material; 4. Presence of devices that could be altered by i-rTMS, such as pacemakers, ventriculoperitoneal shunts, or baclofen pumps; 5. Decompressive craniectomy; 6. Drugs conditioning the state of consciousness-vigilance, such as benzodiazepines; 7. Cortical blindness and/or visual agnosia; 8. Concomitant psychiatric disorders and/or history of substance abuse; 9. Post-traumatic agitation; 10. Post-traumatic complications (i.e., hydrocephalus).
The principal investigator (PI) or a delegate will check the eligibility criteria before enrollment. After verifying the eligibility criteria, the PI will provide eligible patients and their caregivers with all the information and details relative to the study in simple language.
Intervention
Intervention is based on previous studies (4,9,10) and on an RCT protocol for LHSN after stroke (13). In particular, seven sessions of i-rTMS will be administered over 15 days (9). In detail, the parameters used in each session will be: 1. the stimulation coil positioned tangentially over P5 according to the international EEG 10/20 system, which corresponds to the target non-lesioned left posterior parietal cortex (4); 2. intensity at 90% of the motor threshold; 3. frequency of 1 Hz; 4. one train of 900 pulses per session, resulting in a whole stimulation period of 15 min.
The stimulation site was chosen based on previous studies, in which TMS over P5/P6 (9,23) or P3/P4 (24,25) was shown to reduce contralesional neglect-related symptoms in patients with a unilateral brain lesion. Each i-rTMS session will last 15 min and will be administered every other day (e.g., Monday-Wednesday-Friday, Monday-Wednesday-Friday, Monday).
VST is a conventional cognitive protocol based on the administration of a structured series of tasks aiming at improving spatial exploration abilities (26). It provides various visual scanning tasks to increase the patient's awareness of the LHSN clinical manifestations and teach strategies to improve spatial exploration abilities (10). VST will be administered following the i-rTMS. In particular, three different training tasks will be used: 1. Visuospatial training; 2. Reading and copying training; 3. Copying of line drawings on a dot matrix.
All training tasks include three increasing levels of difficulty, giving nine possible task-difficulty combinations. Each level of difficulty will be practiced until the subject reaches an accuracy level of 75%.
The training will be carried out in 50-min sessions, 5 days a week, over 15 days (10), for a total of 11 sessions. On days when i-rTMS is also carried out, the visual scanning protocol will be administered immediately after the brain stimulation.
Control
In the control group, a SHAM placebo stimulation is implemented. SHAM stimulation parameters are the same as the intervention stimulation, but the TMS coil will be positioned at 90° on the target area. Thus, no specific cortical modulation will be implemented (SHAM stimulation). The VST protocol will be administered with the same modalities and time frame for this group, as detailed for the intervention group. All routine care is permitted for both groups during the study period.

FIGURE 1 | Study flow-chart. The PI or a delegate checks patient's eligibility (i.e., inclusion/exclusion criteria). Selected patients will be assessed (pre-treatment assessment) before random allocation in the intervention or in the control group. Patients will be assessed also post-treatment and in the 12 weeks follow-up. i-rTMS, repetitive transcranial magnetic stimulation.
Outcomes
Our operational objectives and outcome measures are divided into primary and secondary endpoints to reach all aims. Our outcomes will allow us to evaluate different clinical and neurophysiological aspects of LHSN and any differential improvements induced by the rehabilitation protocol.
Visual evoked potentials (VEPs) and lateralized visual processing will be collected according to a method already introduced for LHSN assessment in stroke patients (13,17,21).
In particular, VEPs are collected during a passive visual detection task with lateralized stimuli (13,18) to investigate interhemispheric imbalance in LHSN (12,30,31). As reported in a previous study (13), the EEG will be recorded while patients perform the task on a computer screen. The viewing distance will be 50 cm from the screen. A white central fixation cross is displayed on a black background (Figure 2). Patients are instructed to fixate the cross during the whole task, and whenever participants lose fixation, feedback will be provided to recover it. On each trial, a stimulus will be presented randomly on either side of the fixation cross at an angle of 28° along the midline. Stimuli are 1 × 1 cm yellow squares and will be displayed for 96 ms. Before the subsequent trial, a black background with the fixation cross will be presented for 1,000 ms (Figure 2). Four blocks of 64 stimuli will be delivered, and the overall task will have an average duration of 20 min.
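To make the trial structure concrete, the short sketch below generates a randomized trial list under the parameters described above (4 blocks × 64 lateralized stimuli, 96 ms stimulus duration, 1,000 ms inter-stimulus interval). The balanced left/right split is an illustrative assumption, and the printed total reflects stimulation time only (the full task, with fixation checks and pauses, lasts about 20 min on average); the presentation software itself is not specified in the protocol.

```python
import random

N_BLOCKS, TRIALS_PER_BLOCK = 4, 64
STIM_MS, ISI_MS = 96, 1000  # stimulus duration and inter-stimulus interval

trials = []
for block in range(N_BLOCKS):
    # Assumed: equal numbers of left and right stimuli per block, in random order.
    sides = ["left"] * (TRIALS_PER_BLOCK // 2) + ["right"] * (TRIALS_PER_BLOCK // 2)
    random.shuffle(sides)
    trials += [{"block": block + 1, "side": s, "stim_ms": STIM_MS, "isi_ms": ISI_MS}
               for s in sides]

stim_only_min = len(trials) * (STIM_MS + ISI_MS) / 1000 / 60
print(f"{len(trials)} trials, ~{stim_only_min:.1f} min of stimulus presentation")
```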
The EEG will be recorded from 18 Ag/AgCl cup electrodes according to the 10/20 system, referenced to the linked ear lobes. The EEG signal will be recorded from the electrodes Fz, Cz, Pz, C4, C3, P4, P3, F4, F3, Oz, O1, and O2. The negative peak of the N1 will be considered to analyze amplitude in microvolts (µV) and latency in milliseconds (ms). N1 amplitude and latency will be analyzed separately for left- and right-presented stimuli over P3 and P4 (13,17). Then, indexes of inter-hemispheric imbalance will be extracted: the Visuospatial Attention Bias Index [vABI; (13)] and the interhemispheric transmission time [IHTT; (27)], which are based on N1 amplitude and latency, respectively.
The vABI will be extracted in two steps. First, we will calculate the averaged activities of the N1 for stimuli presented on both sides from considered electrodes (i.e., mean of P3 and P4). These averaged activities will measure the two hemispheric activations after lateralized stimulus presentation. The individual differences in the activations for lateralized stimuli will be calculated according to the following formula: vABI = N1 amplitude for right stimuli -N1 amplitude for left stimuli. The vABI will be calculated as a lateralization index (32) on N1 amplitude. Such an index can measure imbalance for left and right stimuli processing as it measures the activation of the two hemispheres in response to lateralized stimuli.
Similarly, based on N1 peak latency, we will compute the IHTT (27, 28), an indicator of the EEG signal's transmission times from one hemisphere to the other. In particular, IHTT measures the difference between N1 latencies for left presented stimuli on the posterior electrodes P3 and P4 (i.e., left IHTT = N1 latency on P4 -N1 latency on P3), thus constituting a single index in milliseconds of the inter-hemispheric transmission times specifically for left presented stimuli (13).
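As a concrete illustration of these two indexes, the sketch below extracts the N1 peak from averaged VEP waveforms and computes the vABI and the left IHTT exactly as defined above. The input arrays, time axis, and the 130–230 ms search window around the expected 180 ms peak are illustrative assumptions, not part of the protocol.

```python
import numpy as np

def n1_peak(waveform_uv, times_ms, window=(130, 230)):
    """Return (amplitude_uv, latency_ms) of the most negative deflection (N1)
    within an assumed search window around 180 ms."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmin(waveform_uv[mask])          # N1 is a negative deflection
    return waveform_uv[mask][idx], times_ms[mask][idx]

def compute_indexes(veps, times_ms):
    """`veps` is a hypothetical dict keyed by (electrode, stimulus_side) -> averaged waveform."""
    # Hemispheric N1 amplitude averaged over P3 and P4, separately per stimulus side.
    amp = {side: np.mean([n1_peak(veps[(el, side)], times_ms)[0] for el in ("P3", "P4")])
           for side in ("left", "right")}
    # vABI = N1 amplitude for right stimuli - N1 amplitude for left stimuli.
    vabi = amp["right"] - amp["left"]

    # Left IHTT = N1 latency on P4 - N1 latency on P3, for left-presented stimuli.
    lat_p4 = n1_peak(veps[("P4", "left")], times_ms)[1]
    lat_p3 = n1_peak(veps[("P3", "left")], times_ms)[1]
    ihtt_left = lat_p4 - lat_p3
    return vabi, ihtt_left
```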
Secondary Endpoints
The secondary endpoints focus on the impact of rTMS on clinical and motor indexes. In particular, we will test visual-spatial attentive functioning with a standard neuropsychological battery for LHSN assessment, the Behavioral Inattention Test [BIT; (33)]. The BIT consists of two subscales (cognitive and behavioral) with standardized scores, where lower ratings indicate a more severe visual-spatial impairment. The degree of functional independence in activities of daily living (i.e., eating or reading) will be tested with the Catherine Bergego Scale (CBS). In addition, we will test motor ability with the Motricity Index (MI), the Trunk Control Test (TCT), and the motor subscale of the Functional Independence Measure (FIM) (34,35). Finally, attentive functioning will be tested with specific subtests (i.e., alertness and Visual Field/Neglect) of the Test of Attention Performance (TAP), a computerized attention battery (13,35).
Sample Size
The sample size was calculated from the population variance established by previous studies (σ² = 2.1), the absolute error allowed for estimation of the vABI parameter (δ² = 2.8), and the constant Z (the value of the standardized normal random variable) that depends on the level of confidence desired for the estimation. Fixing α = 0.05 and 1 − β = 0.80, (Z(1−α) + Z(1−β))² = 10.5. The sample size resulting from the calculation is 24. Consequently, assuming 10% of subjects lost to follow-up, the minimum sufficient sample to reach all aims is 28 subjects (14 per group), enrolled over 1 year. In case of recruitment delays, a multicentric extension of the study will be considered to guarantee an adequate number of participants.
Statistical Analysis
Differences in vABI and IHTT will be analyzed between the pre-treatment (T0), post-treatment (T1), and follow-up (T2) phases for both groups of patients (i-rTMS + VST group and SHAM + VST group) to evaluate the neurophysiological correlates of LHSN in the two groups.
In randomized controlled trials, the recommended statistical procedure for a pre-treatment, post-treatment, follow-up control-group design is the analysis of covariance (ANCOVA) (36,37). In the context of ANCOVA, the post-treatment and follow-up scores are treated as dependent variables with the pre-treatment variable as a covariate (36,37). However, ANCOVA requires the satisfaction of two assumptions, i.e., homogeneity vs. heterogeneity of the population at the pre-treatment time point and normal data distribution. In the present study, we will first verify the two ANCOVA assumptions for each outcome measure and, afterward, adopt the appropriate corrections and statistical approaches as follows: 1. Homogeneity vs. heterogeneity test: for each outcome, homogeneity of regression slopes will be verified (38), i.e., the values of b should not differ significantly between groups. If this assumption is violated, a CHANGE measure (i.e., a gain score in the specific outcome) will be considered in a 2 × 2 mixed-model ANOVA with the between factor group and the within factor time (post-treatment and follow-up). 2. Normal distribution test: normal distribution will be verified (i.e., Kolmogorov-Smirnov test p-value larger than 0.05) (36,37) for each outcome variable. In case this assumption is violated, a linear logistic transformation will be performed.
If the ANCOVA assumptions are verified for the primary outcome, we will analyze covariance for each neurophysiological index using a mixed-model ANCOVA with a 2 × 3 design. The between factor will be the randomization group (rTMS, SHAM), whereas the within factor will be the assessment time (T1, T2), with T0 as a covariate. Whenever necessary, the Greenhouse-Geisser correction will be applied, and corrected p-values will be reported for the ANCOVA. Further adjustments will be made for other possible confounding factors, such as age, gender, education of the participants, and lesion location (if heterogeneity in lesion locations between participants emerges from clinical data). Besides p-values, effect sizes will be provided to assess the size of the treatment effect. Similar analyses will also be performed for clinical and motor outcomes after verification of the ANCOVA assumptions. The BIT, the TAP, the CBS, and the motor function tests provide standard scores, separated for each test, and general performance scores with cutoffs that allow pathological performance to be discriminated. Several mixed-model ANCOVAs will be applied for every test in the 2 × 3 design described above. Finally, we will perform correlation analyses (both parametric and non-parametric) to evaluate the relations between the neurophysiological indices and clinical measures. Two-tailed t-tests for independent samples will be employed to investigate differences between groups. Data analysis will be performed using MatLab (The MathWorks Inc.) and SPSS (version 13). A significance level of 5% (i.e., p-value = 0.05), corrected for multiple comparisons when needed, will be accepted.
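As a rough illustration of this analysis plan for a single outcome and a single post-treatment time point, the sketch below checks the normality assumption, tests homogeneity of regression slopes, and fits a simple ANCOVA with the post-treatment score as the dependent variable, group as the between factor, and the pre-treatment score as a covariate. The DataFrame and its column names are illustrative; the full 2 × 3 mixed-model design with follow-up scores and additional covariates is not reproduced here.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per patient with columns `group` ("rTMS"/"SHAM"),
# `vabi_t0` (pre-treatment vABI), and `vabi_t1` (post-treatment vABI).
def ancova_single_outcome(df: pd.DataFrame):
    # Normality check on the outcome (Kolmogorov-Smirnov against a fitted normal).
    z = (df["vabi_t1"] - df["vabi_t1"].mean()) / df["vabi_t1"].std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")

    # Homogeneity of regression slopes: the group x covariate interaction
    # term should not be significant.
    slopes = smf.ols("vabi_t1 ~ C(group) * vabi_t0", data=df).fit()

    # ANCOVA: post-treatment score adjusted for the pre-treatment covariate.
    model = smf.ols("vabi_t1 ~ C(group) + vabi_t0", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)
    return ks_p, slopes.pvalues, anova_table
```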
Additional clinical and demographic information, such as handedness of subjects, prescribed medications, and relevant medical conditions, will be recorded for each participant in specific Case Report Forms.
Minimizing Inter-Rater Measurement Bias
The medical doctor who will administer the stimulation will always be the same; however, assessors will change between pre and post-measurements. To minimize biases deriving from interrater measurement errors, the following interventions will be performed before the start of the trial: 1. Collegial assessments to standardize administration modalities and scoring procedures. 2. Development of an "assessment manual" containing all information for administering and scoring procedures.
Assignment of Interventions and Data Management
Allocation

Participants will be allocated randomly to the active rTMS group or the SHAM placebo group. A blocked randomization list (2:2 per group) will be generated using the online software QuickCalcs (www.graphpad.com). Only the PI and the physician administering i-rTMS will have access to the randomization list.
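For illustration only, the sketch below shows how a blocked allocation list of this kind (two rTMS and two SHAM assignments per block) could be generated programmatically; in the trial itself, the list is produced with the QuickCalcs online tool, and the block structure shown here is an assumed reading of the 2:2 scheme.

```python
import random

def blocked_randomization(n_blocks, block=("rTMS", "rTMS", "SHAM", "SHAM"), seed=None):
    """Generate a blocked allocation list with 2 rTMS + 2 SHAM per block of 4."""
    rng = random.Random(seed)
    allocation = []
    for _ in range(n_blocks):
        b = list(block)
        rng.shuffle(b)        # random order within each block
        allocation.extend(b)
    return allocation

print(blocked_randomization(7, seed=42))  # e.g., 28 allocations for the planned sample
```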
Blinding
To ensure a double-blind assessment in all phases (T0 pretreatment, T1 post-treatment, T2 follow-up), assessors will not be aware of the patient's randomization group. Moreover, pre-treatment assessments (T0) will be performed before randomization. Also, the visual scanning protocol will be administered by therapists unaware of the patient's allocation. Patients themselves will be instructed not to reveal any information about the brain stimulation treatment received.
Data Collection and Management
All data will be anonymized, and a specific alpha-numeric code will be attributed to each subject after enrolment. An electronic study database will be configured with all patient details, including the randomization group.
Safety Assessment
Recently, guidelines for stimulation protocols were defined (39). Therapeutic interventions in clinical context should have the following properties: • The application should be easy to implement without neuroimaging and neuronavigation systems to localize the target area. Many studies, as an alternative to neuronavigation, adopt the international 10/20 localization system. • The total application time of the daily rehabilitation paradigm should not exceed ten sessions during 2 weeks. Protocols that provide daily applications for more than 2 weeks are difficult to implement in rehabilitation centers and may not be tolerated by patients.
rTMS, when administered according to the international guidelines, is a safe technique. The stimulation paradigm (9,13,14) meets the guidelines mentioned above for brain stimulation protocols (39). Moreover, we followed a TMS methodological checklist (40) to report methodological details about the stimulation protocol and thus improve the quality of data collection and the replicability of the study. However, the following adverse events are reported in the literature: • Local annoyance in the stimulated area (frequent): this effect rarely requires the suspension of rTMS. • Headaches (quite frequent but usually mild): we will administer analgesics (e.g., paracetamol) in case of bothersome headaches. • Temporary loss of hearing (rare) for the duration of the stimulation session. • Epileptic seizures (fairly rare), which occur in predisposed individuals. To minimize this risk, subjects who have suffered from seizures during the acute phase or have a diagnosis of epilepsy will be excluded from the trial (exclusion criterion).
Any adverse events during treatment will be recorded in a specific Case Report Form (CRF) and reported at the end of the study. The physician who administers i-rTMS will also manage any adverse events occurring during the administration. Should the treatment be suspended, the reason will be reported. Data will be analyzed according to the intention-to-treat principle and included in the study's final report.
Roles and Responsibilities
Patients will be enrolled within the Physical Medicine and Neurorehabilitation Unit of IRCCS Istituto delle Scienze Neurologiche di Bologna (i.e., the coordinating center).
Oversight and Monitoring
A designated external committee will perform data monitoring, database management, and statistical analysis. Statistical analyses of the complete dataset will be performed at the end of data collection. However, interim analyses approved by the local ethics committee may be performed. The PI will coordinate the clinical trial's organizational, ethical, and scientific aspects. Only the PI can declare the end of enrolment.
DISCUSSION
Many published studies have highlighted the efficacy of different rehabilitation methods for LHSN syndrome after stroke (10). However, scientific evidence in TBI patients is still limited, given factors such as small sample sizes, methodological biases (lack of double-blind studies or follow-up assessments), and contradictory results. This study's rationale is that neuronal loss due to TBI leads to an impairment of cognitive functions due to a deficit in the related neuro-functional networks. Thus, the reactivation of those networks may allow the empowerment of the compromised functions. Unlike traditional rehabilitation methods, new magnetic stimulation techniques, such as i-rTMS, allow the execution of a cognitive task through the pre-empowerment of a specific network or neuronal circuit. This priming may facilitate experiential learning within a richer and more articulated neural environment and selectively stimulate the areas most involved in the lesion. In TBI, the mechanism of empowerment concerns the preserved areas and the interneuron connectivity between those areas. Consequently, i-rTMS could increase the "responsiveness" of the peri-lesional areas and the inter-hemispheric connectivity during cognitive training, increasing its effectiveness compared to the SHAM condition. Therefore, the current project's main expected outcome will be evidence, on a large sample of TBI patients, of the interhemispheric functionality underlying cognitive symptoms of LHSN. It will also point out the specific effect of i-rTMS protocols on the inter-hemispheric imbalance. In particular, we expect to observe a larger rebalancing effect in the treatment group than in the control group, as demonstrated by smaller vABI amplitudes and shorter IHTT latencies at the post-treatment assessment. Furthermore, we expect to observe the persistence of this effect at follow-up. Additionally, we expect to observe larger improvements in cognitive and behavioral symptoms of LHSN induced by the i-rTMS compared to the control group, as demonstrated by better performance on clinical tests and batteries (10).
To our knowledge, this study will be the first to provide an evidence-based theory on the inter-hemispheric functionality in TBI patients with LHSN, providing clinicians with a new framework for approaching, studying, and interpreting LHSN with innovative markers on neurophysiological activity. Moreover, the evaluation of rehabilitation effects of i-rTMS on the visuospatial inter-hemispheric network and cognitive and activities of daily living measures of LHSN will provide the basis to understand how i-rTMS influences LHSN in TBI patients. This novel knowledge, in turn, will create the basis for the development of new treatment strategies, which will have the potential to lessen the impact of LHSN-related disability for TBI patients and their families.
CONCLUSIONS
The SMaRT TraCE is a protocol for the rehabilitation of attentive spatial deficits in TBI patients, based on therapeutic approaches already established for stroke patients, which combines brain stimulation and cognitive treatment. The current protocol is easily applicable and relatively low-cost. Although a TMS stimulator is necessary for the intervention procedure, the visual scanning protocol is very flexible regarding the materials needed and the cognitive tasks. Flexibility is indeed a crucial aspect, considering the clinical heterogeneity of TBI patients. For instance, visual scanning can be implemented at the bedside, and even patients with severe motor impairments can easily carry out the tasks. Should the efficacy of the study protocol be demonstrated, it could be implemented in ordinary clinical practice, thus providing a valuable therapeutic option to reduce LHSN-related symptoms and improve the clinical outcome in TBI patients.
TRIAL STATUS
The protocol here presented was registered in October 2020 on ClinicalTrials.gov (NCT04573413; title: "Repetitive Transcranial Magnetic Stimulation in Traumatic Brain Injury"). The estimated study completion date is March 2023.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Comitato Etico Indipendente di Area Vasta Emilia Centro (CE-AVEC; CE n.19062). The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
FD is the study's principal investigator and coordinates organizational, ethical, and scientific aspects of the clinical trial. EC is responsible for the TMS stimulation. FD, EC, FL, and RP provided the idea and designed the protocol. VP, LS, and EF are responsible for cognitive, motor, and neurophysiological assessments. FD, FL, and GL wrote the manuscript. All authors have read and approved the final version of the paper.
FUNDING
This study was supported by the Italian Ministry of Health, Ricerca Finalizzata SG-2018-12367527. | 6,610.8 | 2021-07-14T00:00:00.000 | [
"Psychology",
"Biology"
] |
Significant Quantitative Differences in Orexin Neuronal Activation After Pain Assessments in an Animal Model of Sickle Cell Disease
Sickle cell disease is a hemoglobinopathy that causes sickling of red blood cells, resulting in vessel blockage, stroke, anemia, inflammation, and extreme pain. The development and treatment of pain, in particular neuropathic pain, in sickle cell disease patients are poorly understood, which impedes progress toward the development of novel therapies to treat pain associated with sickle cell disease. The orexin/hypocretin system offers a novel approach to treat chronic pain and hyperalgesia. These neuropeptides are synthesized in three regions: the perifornical area (PFA), lateral hypothalamus (LH), and dorsomedial hypothalamus (DMH). Data suggest that the orexin-A neuropeptide has an analgesic effect on inflammatory pain and may affect mechanisms underlying the maintenance of neuropathic pain. The purpose of this study was to determine whether there are neuronal activation differences in the orexin system as a result of neuropathic pain testing in a mouse model of sickle cell disease. Female transgenic sickle mice that express exclusively (99%) human sickle hemoglobin (HbSS-BERK) and age-/gender-matched controls (HbAA-BERK mice; n = 10/group, 20-30 g) expressing normal human hemoglobin A were habituated to each test protocol and environment before collecting baseline measurements and testing. Four measures were used to assess pain-related behaviors: thermal/heat hyperalgesia, cold hyperalgesia, mechanical hyperalgesia, and deep-tissue hyperalgesia. Hypothalamic brain sections from HbAA-BERK and HbSS-BERK mice were processed to visualize orexin and c-Fos immunoreactivity and quantified. The percentage of double-labeled neurons in the PFA was significantly higher than the percentage of double-labeled neurons in the LH orexin field of HbAA-BERK mice (*p < 0.05). The percentages of double-labeled neurons in the PFA and DMH orexin fields were significantly higher than those in the LH of HbSS-BERK mice (*p < 0.05). These data suggest that DMH orexin neurons were preferentially recruited during neuropathic pain testing and that a more diverse distribution of orexin neurons may be required to produce analgesia in response to pain in the HbSS-BERK mice. Identifying specific orexin neuronal populations that are integral to neuropathic pain processing will allow us to elucidate mechanisms that provide a more selective, targeted approach to treating neuropathic pain in sickle cell disease.
INTRODUCTION
Sickle cell disease (SCD) is characterized as a hemoglobinopathy that causes red blood cells to sickle, and the pain experienced by individuals who suffer from SCD is associated with significant morbidity and increased mortality. In the United States, SCD accounts for over $450 million in healthcare costs each year (Steiner and Miller, 2006; Hassell, 2010), and there is a lack of knowledge about the development and treatment of neuropathic pain associated with SCD. According to the International Association for the Study of Pain, neuropathic pain is defined as "pain arising as a direct consequence of a lesion or disease affecting the somatosensory system either at the peripheral or central level" (Haanpaa et al., 2011; Molokie et al., 2011). It is possible that altered processing within the nervous system may be the cause of persistent and sometimes unrelieved neuropathic pain in SCD.
Neuropathic pain has not been well studied in patients with SCD to date. It is estimated that the incidence of neuropathic pain in the SCD population may be twice that found in other chronic pain populations (Brandow et al., 2014). The defining characteristics of neuropathic pain are allodynia and hyperalgesia (Ballas and Darbari, 2013). The classical components of neuropathic pain are pain caused by a stimulus that would normally not be characterized as painful (i.e., extreme sensitivity to cool stimuli) and increased pain from a painful stimulus (Treede et al., 1992; Sethna et al., 2007).
Optimal management of neuropathic pain is yet to be delineated, and opioids and non-steroidal anti-inflammatory drugs (NSAIDs) have not provided treatments that effectively alleviate neuropathic pain. Although improvements in treatment options for neuropathic pain have been observed, pain is not always properly managed (Brandow et al., 2014). In order to develop better treatment strategies, it is important to identify neurochemical processes that may be involved in mediating neuropathic pain and to use this information to develop better treatment regimens. One possible system to explore is the orexin system, since it has been reported to mediate pain. The orexin system offers a novel approach to treating chronic pain and hyperalgesia. This system has been linked to the mediation of neuropathic pain and inflammatory processes (Yamamoto et al., 2002; Razavi and Hosseinzadeh, 2017); however, no published studies have investigated its possible role in SCD.
The current study utilizes transgenic sickle mice that express human sickle hemoglobin (HbSS) to explore the orexin system as a possible target in mediating neuropathic pain in SCD. Orexins are a family of hypothalamic peptides that play a role in the regulation of feeding behavior, energy metabolism, reward, and the sleep-wake cycle (Sakurai et al., 1998; Aston-Jones et al., 2009; de Lecea, 2012). Orexin neurons are expressed in the dorsomedial hypothalamus (DMH), perifornical area (PFA), and lateral hypothalamus and send their projections into other brain regions (Chen et al., 1999; Nambu et al., 1999). Some of these regions are involved in analgesia and play a role in descending pain inhibition (Ossipov et al., 2010). There are two orexinergic receptors, and the orexin 1 receptor has a greater affinity for the orexin A than the orexin B peptide (Trivedi et al., 1998; Lu et al., 2000; Marcus et al., 2001). It has been demonstrated that orexinergic projections from the hypothalamus reach the spinal cord: lamina I (van den Pol, 1999), lamina X, and laminae II-VII of the dorsal horn (Date et al., 2000; Bingham et al., 2001). Data suggest that orexin-A has an analgesic effect on inflammatory pain (Yamamoto et al., 2002), but it is not clear whether the same mechanisms underlie the maintenance of neuropathic pain and inflammatory pain. It is also not known whether the analgesic effect of orexin-A on inflammatory pain would be similar in a neuropathic pain model. Neuropathic pain can be difficult to manage with standard analgesics such as opioids (Arner and Meyerson, 1988). Hence, the orexin system may offer a novel approach to treat chronic pain and hyperalgesia.
Enhanced pain-related behaviors have been observed in adult mice after temporally controlled ablation of orexin neurons (Inutsuka et al., 2016). The mechanism by which the orexin system modulates neuropathic pain is not well established in the literature. Before it can be determined how the orexin system is involved in mediating neuropathic pain in a model of SCD, it is important to determine whether factors associated with neuropathic pain (i.e., hyperalgesia) differentially influence orexin neuronal activity. Therefore, the purpose of this study was to identify whether there were activational and topographical changes in the various subpopulations of orexin neurons as a result of various pain assessments in a mouse model of neuropathic pain in sickle cell disease. Identifying and understanding the activity of this neuronal circuitry will allow us to gain a better perspective on differential patterns of activity in orexin neurons in the DMH, PFA, and LH after pain testing. The data from these experiments can lay the foundation for a more in-depth investigation of alternative pharmacological therapies to treat neuropathic pain in the SCD population by directly targeting the subpopulations that can influence nociceptive processing and reduce hyperalgesia. These studies can move the field forward by identifying whether there are selective subpopulations of orexin neurons that may be preferentially recruited during neuropathic pain. In this study, we established baseline measurements for pain responses and assessed orexin neuronal activation in the DMH, PFA, and LH of transgenic mice expressing human sickle hemoglobin (HbSS-BERK) and control mice expressing normal human hemoglobin A (HbAA-BERK).
Animals
Female transgenic HbSS-BERK sickle mice and age-/gender-matched controls (HbAA-BERK) were used in this study (n = 10/group, ∼4-6 months old, 20-30 g). HbSS-BERK mice express human (99%) sickle hemoglobin, and HbAA-BERK control mice express normal human hemoglobin A (HbAA). Females more commonly express neuropathic pain in pain populations, including SCD (Torrance et al., 2006; Butler et al., 2013; Brandow et al., 2014). The mice were bred and characterized by phenotype in a pathogen-free facility under a 12 h light-dark cycle at the University of Minnesota. The HbSS-BERK mice display pathological features similar to human SCD, such as hematologic disease, organ damage, and tonic hyperalgesia (Paszty et al., 1997; Kohli et al., 2010; Giuseppe Cataldo et al., 2015). All animal care and experimental procedures were reviewed and approved by the Institutional Animal Care and Use Committee at the University of Minnesota.
Behavioral Assessments
All behavioral tests were performed in a quiet room at a constant temperature (23-25°C). All mice were habituated to each test protocol and environment before baseline measurements and testing. Four parameters were used to assess behaviors in the following order of testing: mechanical hyperalgesia, thermal hyperalgesia, grip force, and cold hyperalgesia (Kohli et al., 2010).
Mechanical Hyperalgesia
To assess mechanical hyperalgesia, each mouse was placed on a wire mesh apparatus under a glass container, allowed to acclimate, and a von Frey filament was applied to the hind paw for 1-2 s. A 1.0 g (4.08 mN) von Frey (Semmes-Weinstein) monofilament (Stoelting) was applied to the plantar surface of the hind paw of each mouse with enough force to bend the filament. Paw withdrawal frequency was determined as the number of times paw lifting was observed per 10 applications.
Thermal/Heat Hyperalgesia
Thermal hyperalgesia was determined by measuring heat sensitivity in the HbAA-BERK and HbSS-BERK mice using the Hargreaves apparatus with a radiant heat stimulus. As previously described (Kohli et al., 2010; Lei et al., 2016; Tran et al., 2019), a radiant heat stimulus was applied under the hind paws of each mouse following acclimation to the floor of the Hargreaves apparatus. The radiant heat stimulus was located under the glass floor and administered using an infrared heat source. The paw withdrawal latency was recorded as the time at which the mouse withdrew its paw from the heat stimulus (to the nearest 0.1 s).
Grip Force
To assess deep tissue hyperalgesia, a digital grip force meter (Chatillon) was used to measure peak forepaw grip force. The force was measured by gently holding each mouse by its tail and pulling it across a wire mesh gauge. The grip force was recorded as the force (in g) exerted at the time of grip release by each mouse.
Cold Hyperalgesia
Cold hyperalgesia was determined by measuring the cold sensitivity of the mice on a cold plate set at 4°C. Cold withdrawal latency was determined as the time it took each mouse to initially lift either forepaw. Cold withdrawal frequency was determined as the number of times the mouse lifted and rubbed its forepaws over a period of 2 min.
Immunohistochemical Processing for c-Fos and Orexin in Hypothalamic Sections
In preparation for double-label immunohistochemistry, brains from each mouse were extracted 90 min after behavioral testing and immersed in 10% formalin for fixation for 1-2 weeks. Following cryoprotection in 30% sucrose solution, coronal brain sections were cut and processed for c-Fos and orexin-A as previously described (Richardson and Aston-Jones, 2012). Sections were incubated overnight at room temperature in primary antibody against Fos-related antigens (1:1,500, SC-52, Santa Cruz), then rinsed and incubated for 2 h with secondary antibody (biotinylated donkey anti-rabbit, 1:500, Jackson Immunoresearch Laboratories). Sections were transferred to avidin-biotin complex (ABC, 1:500, Vector Laboratories) for 1.5 h, and Fos neurons were then visualized by placing the sections in SIGMAFAST 3,3′-diaminobenzidine (DAB, D8552, Sigma) with cobalt chloride metal enhancer. Following a 45 min incubation in PBS-azide, the sections were placed in primary antibody for orexin-A (1:1,000, SC8070, Santa Cruz) overnight. Sections were incubated in secondary antibody (biotinylated donkey anti-goat, 1:500, Jackson Immunoresearch) the next day, incubated in ABC, and orexin neurons were then visualized using DAB (D5637, Sigma, no metal enhancer) with 0.0002% H2O2. The sections were dehydrated through graded alcohols, cleared in xylene, and coverslipped with Permount. Orexin-positive neurons exhibited brown cytoplasmic staining, and Fos-positive nuclei (cobalt chloride intensified) were stained black.
Quantification of Neurons and Statistical Analysis
The numbers of neurons with Fos-positive nuclei, orexin-A-positive cytoplasmic staining, and double-labeled orexin-Fos neurons were counted in the DMH, PFA, and LH for the HbAA-BERK and HbSS-BERK mice. The area located medial to the fornix was defined as the DMH region, the region located around the fornix was defined as the PFA region, and the region lateral to the fornix was defined as the LH region (similar to other studies; Harris et al., 2005; Richardson and Aston-Jones, 2012). Quantification of the labeled neurons was conducted using a unique number code for each animal so that the investigator was blinded to the treatment groups.
Hypothalamic sections at two different levels, rostral (Bregma −1.34 mm) and caudal (Bregma −1.94 mm) (Paxinos and Franklin, 2001), from each animal were used to count orexin- and Fos-positive neurons. A representative section from the rostral and caudal orexin fields of each animal was used to ensure good representation of the hypothalamic field, as described in Richardson and Aston-Jones (2012). A color image of the orexin field was acquired with a digital camera at 10×-20× magnification using brightfield illumination from a light microscope (Zeiss) connected to a computer station that captures images. The labeled neurons were marked using a pointer tool in Zen Pro software (Carl Zeiss Microscopy, LLC, White Plains, NY), preventing a cell from being counted more than once in an image. Neurons were counted bilaterally for each region and at each level. The data are expressed as average counts of Fos-positive and orexin-positive neurons and the percentage of double-labeled neurons (total number of double-labeled neurons divided by the total number of orexin-positive neurons).
Statistical Analysis
Behavioral assessments and neuron counts for each hypothalamic region were quantified. Data were compared using a one-way analysis of variance (ANOVA) to determine regional/topographical differences (DMH vs. PFA vs. LH) for HbAA-BERK and HbSS-BERK mice. This analysis was followed by a Kruskal-Wallis post-hoc test with the significance level set at p < 0.05. We used independent t-tests to determine whether there were statistical differences between the means of HbAA-BERK and HbSS-BERK mice (activational differences) for weight, mechanical hyperalgesia, heat hyperalgesia, cold hyperalgesia, grip force, and observed immunoreactive cells. All data are represented as mean ± SE, p < 0.05.
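To make the planned comparisons concrete, the sketch below shows how region differences (one-way ANOVA with a rank-based check) and genotype differences (independent t-test) can be computed with scipy. The per-animal arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-animal percentages of double-labeled (orexin-Fos) neurons.
dmh = np.array([22.1, 25.4, 19.8, 27.3, 24.0])
pfa = np.array([28.5, 31.2, 26.9, 30.1, 29.4])
lh  = np.array([12.7, 15.3, 11.9, 14.2, 13.5])

# Regional/topographical comparison within a genotype.
f_stat, p_anova = stats.f_oneway(dmh, pfa, lh)
h_stat, p_kw = stats.kruskal(dmh, pfa, lh)

# Genotype comparison (e.g., HbAA-BERK vs. HbSS-BERK) for one behavioral measure.
hbaa_pwl = np.array([3.4, 3.7, 3.5, 3.8, 3.6])
hbss_pwl = np.array([2.0, 2.2, 1.9, 2.1, 1.8])
t_stat, p_t = stats.ttest_ind(hbaa_pwl, hbss_pwl)

print(f"ANOVA p={p_anova:.4f}, Kruskal-Wallis p={p_kw:.4f}, t-test p={p_t:.4f}")
```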
RESULTS
Behavioral and immunohistochemical approaches were used to determine pain-related behaviors and to investigate whether there were activational and topographical differences in the subpopulations of orexin neurons in HbAA-BERK and HbSS-BERK mice. We used c-Fos as a marker for neuronal activation in this study. All data reflect observations in female mice, since this group expresses greater hyperalgesia than male mice (Kohli et al., 2010; Lei et al., 2016).
Mechanical Hyperalgesia: Assess Sensitivity to Mechanical Stimulus
The von Frey filament (1.0 g, 4.08 mN) was applied for 1-2 s (with enough force to bend the filament) to the plantar surface of each hind paw of HbAA-BERK and HbSS-BERK mice. This stimulus is not normally characterized as painful. However, in animals with greater tactile sensitivity (HbSS mice), there is a greater response to the filament application. The paw withdrawal frequency evoked by the von Frey monofilament was significantly higher in HbSS-BERK mice vs. HbAA-BERK control mice (Figure 1A, p < 0.0001, 5.99 ± 0.6 vs. 2.4 ± 0.3). This higher paw withdrawal frequency indicated increased hyperalgesia in HbSS-BERK mice.

FIGURE 1 | Comparative differences in behavioral assessments for mechanical and heat hyperalgesia in female HbAA-BERK and HbSS-BERK mice. All data are reflected as mean ± SE, n = 9-10/group. (A) Mechanical hyperalgesia was measured by paw withdrawal frequency (PWF) in HbAA-BERK and HbSS-BERK mice. HbSS-BERK mice display significantly more PWF than HbAA-BERK mice (*p < 0.0001). (B) Heat hyperalgesia was measured by paw withdrawal latency (PWL) in response to a heat stimulus in age- and sex-matched HbAA-BERK and HbSS-BERK mice. HbAA-BERK mice display significantly greater PWL than HbSS-BERK mice (*p < 0.0001).
Heat Hyperalgesia: Test for Heat Sensitivity
Paw withdrawal latency was measured as the duration of time recorded after the plantar surface of a single hind paw was exposed to a radiant heat stimulus (50 W projector lamp bulb). HbAA-BERK mice display significantly higher paw withdrawal latency vs. HbSS-BERK mice ( Figure 1B, p < 0.0001, 3.59 ± 0.13 vs. 2.01 ± 0.11). The shorter paw withdrawal latency observed in the HbSS-BERK mice ( Figure 1B) indicated increased sensitivity to heat. This heat sensitivity may indicate cutaneous hyperalgesia in HbSS-BERK mice.
Deep Tissue Hyperalgesia
One of the major consequences of SCD is chronic musculoskeletal pain which can be evidenced by muscle soreness and joint tenderness. Deep tissue hyperalgesia indicates the existence of inherent pain due to activation of visceral, joint, and musculoskeletal nociceptors. In this study, we utilized the grip force test to evaluate musculoskeletal pain in HbAA-BERK and HbSS-BERK mice. Deep tissue hyperalgesia was defined as a decrease in the grip force, which indicates increased nociception. Grip force significantly decreased in HbSS-BERK mice vs. HbAA-BERK control mice (Figure 2A). It was observed that HbAA-BERK mice exerted significantly more grip strength vs. HbSS-BERK mice (Figure 2A, p < 0.005, 132.9 ± 3.9 vs. 118.2 ± 1.4, respectively) since a higher force (in g) exerted at the gauge at the time of grip release by the HbAA-BERK mice was recorded.
Differences in grip force/body weight
Typically, musculoskeletal strength is greater as weight and muscle development increase. However, it is possible for grip force to not change significantly when expressed per gram of body weight if there are underlying physiological factors (i.e., decreased muscle strength, inflammation, increased nociception) that contribute to muscle weakness and pain. When corrected for weight in this study, HbAA-BERK mice still displayed significantly more grip strength vs. HbSS-BERK mice (Figure 2B, p < 0.0001, 6.05 ± 0.10 vs. 4.21 ± 0.09), even though the HbSS-BERK mice were significantly heavier in body weight than the HbAA-BERK mice (Figure 2C, 28.81 ± 0.25 g vs. 22.02 ± 0.77 g, respectively).

FIGURE 2 | Comparative differences in deep tissue hyperalgesia and body weight for HbAA-BERK and HbSS-BERK mice. All data are reflected as mean ± SE, n = 9-10/group. (A) Differences in deep tissue hyperalgesia were assessed by grip force for HbAA-BERK and HbSS-BERK mice. HbAA-BERK mice display significantly more peak forepaw grip strength vs. HbSS-BERK mice (^p < 0.005). (B) When corrected for weight, HbAA-BERK mice still displayed significantly more grip strength vs. HbSS-BERK mice (*p < 0.0001). (C) The HbSS-BERK mice in this study were significantly heavier in body weight than HbAA-BERK mice (*p < 0.0001).

FIGURE 3 | Comparative differences in behavioral assessments for cold hyperalgesia in female HbAA-BERK and HbSS-BERK mice. All data are reflected as mean ± SE, n = 9-10/group. (A) HbAA-BERK mice have significantly higher PWL vs. HbSS-BERK mice (^p < 0.005); therefore, HbSS-BERK mice display more cold hyperalgesia after exposure to a 4°C cold plate. (B) HbSS-BERK mice display significantly more behavioral responses to the cold plate temperature of 4°C (*p < 0.0001).
Cold Hyperalgesia: Test for Cold Sensitivity and Behavioral Responses
We observed a higher sensitivity to the cold stimulus (aluminum plate) in HbSS-BERK vs. HbAA-BERK mice. The HbAA-BERK mice displayed significantly higher paw withdrawal latency vs. HbSS-BERK mice (Figure 3A, p < 0.005, 5.53 ± 0.38 vs. 3.65 ± 0.34). HbAA-BERK mice demonstrated a lower response in lifting either paw and were less likely than HbSS-BERK mice to respond to cold temperatures. HbAA-BERK mice spent more time walking around the platform on all four paws before the initial lifting of either paw vs. HbSS-BERK mice. The shorter paw withdrawal latency observed in the HbSS-BERK mice (Figure 3A) indicated increased sensitivity to cold temperature. This cold sensitivity may also indicate cutaneous hyperalgesia in HbSS-BERK mice. In addition to measuring paw withdrawal latency, behavioral responses due to exposure to the cold environment were recorded over a 2 min period. Observations were recorded for shivering/body shakes, paw flutter, and consistently lifting paws from the cold plate. HbSS-BERK mice displayed significantly more paw withdrawal frequency and behavioral responses vs. HbAA-BERK mice (Figure 3B, p < 0.0001, 61.18 ± 1.8 vs. 41.53 ± 1.87).
Quantification for Immunohistochemical Detection of Fos, Orexin, and Orexin-Fos Neurons in the DMH, PFA, and LH
To examine whether there were topographical and activational changes in the various subpopulations of orexin neurons located in the DMH, PFA and LH of HbAA-BERK and HbSS-BERK mice, hypothalamic sections were processed for double label immunohistochemistry for c-Fos and orexin-A peptide (Figures 4A,B at −1.94 mm Bregma). There were double labeled neurons (orexin-Fos positive neurons), single labeled orexin neurons, and single labeled c-Fos neurons throughout the DMH, PFA, and LH of both mouse groups (Figure 5).
Quantification of Double Labeled Neurons in the LH, PFA, and DMH of HbAA-BERK and HbSS-BERK Mice After Behavioral Tests
The percentages of orexin-Fos, double labeled neurons were quantified in the LH, PFA and DMH for HbAA-BERK mice (Figure 6A, 13.2 ± 2.1, 29.4 ± 4.7, 21.6 ± 2.5, respectively) and in the LH, PFA and DMH for HbSS-BERK mice (Figure 6A, 13.9 ± 2.1, 24.94 ± 3.3, 23.6 ± 2.6, respectively). In HbAA-BERK mice, the percentage of orexin-Fos neurons in the PFA was significantly higher than that observed in the LH orexin field (Figure 6A, *p < 0.05). In HbSS-BERK mice, there was a different finding: while the percentage of orexin-Fos neurons was higher in the PFA vs. LH, the percentage of orexin-Fos neurons in the DMH was also significantly higher than that observed in the LH (Figure 6A, *p < 0.05). This difference in topographical activation indicates that a greater number of orexin neurons are recruited/activated in two hypothalamic subregions (DMH and PFA) in the HbSS-BERK mice after behavioral testing.
Quantification of Single Labeled Neurons in the DMH, PFA, and LH of HbAA-BERK and HbSS-BERK Mice After Behavioral Tests
Single labeled c-Fos and orexin immunoreactive neurons were observed (Figures 6B,C) and quantified in all three orexin hypothalamic subregions of HbAA-BERK and HbSS-BERK mice.
Topographical differences in the number of c-fos neurons between the 3 hypothalamic regions
A one-way ANOVA was used to determine any significant differences in the means of c-Fos neurons quantified in the DMH, PFA and LH regions. In HbAA-BERK mice, there was a significant difference in the number of c-Fos neurons in the LH vs. DMH (77.6 ± 6.1 vs. 132.4 ± 12.2, respectively, *p < 0.05). The number of c-Fos neurons quantified in the DMH was also significantly higher than that observed in the PFA (132.4 ± 12.2 vs. 68.2 ± 7.6, p < 0.005) in HbAA-BERK mice. Post hoc analysis revealed that there was a significant difference in the number of c-Fos neurons in HbSS-BERK mice when comparing LH vs. DMH (Figure 6B, 58.5 ± 7.3 vs. 106 ± 10.2, *p < 0.05). Additionally, there was a significant difference in the number of Fos neurons in the PFA vs. DMH (Figure 6B, 41.6 ± 5.3 vs. 106 ± 10.2, respectively, #p < 0.0005) in HbSS-BERK mice.
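A minimal sketch of this type of regional comparison is given below; the counts are simulated placeholders, and Bonferroni-corrected pairwise t-tests are used only as a stand-in, since the specific post hoc procedure is not named here.

```python
# Hypothetical illustration of the regional c-Fos comparison (counts are placeholders).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated c-Fos neuron counts per animal (n = 10) in each hypothalamic subregion.
lh  = rng.normal(58.5, 23.0, 10)
pfa = rng.normal(41.6, 16.7, 10)
dmh = rng.normal(106.0, 32.2, 10)

f_stat, p_anova = stats.f_oneway(lh, pfa, dmh)   # one-way ANOVA across the three regions
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise comparisons with a Bonferroni correction (assumed post hoc procedure).
pairs = {"LH vs DMH": (lh, dmh), "PFA vs DMH": (pfa, dmh), "LH vs PFA": (lh, pfa)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{name}: p = {min(p * len(pairs), 1.0):.4f} (Bonferroni-corrected)")
```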
Topographical differences in the number of orexin neurons between the 3 hypothalamic regions
A one-way ANOVA was used to determine any significant differences in the means of orexin neurons quantified in the DMH vs. PFA vs. LH regions. In HbAA-BERK mice, there was a significant difference in the number of orexin neurons quantified in the LH vs. PFA (Figure 6C, *p < 0.05); however, there were no other significant differences observed between the regions. There was no significant difference in the number of orexin neurons in HbSS-BERK mice when comparing the three regions (Figure 6C; DMH 67.7 ± 6.5, PFA 49.2 ± 5.9, LH 71.3 ± 7.1, p = 0.060).
Activational differences for orexin neurons in HbAA-BERK vs. HbSS-BERK mice for each hypothalamic region
The presence of single labeled orexin neurons indicated that not all of the orexin neurons within the different subregions were engaged or activated after pain testing (Figure 7C) in HbAA-BERK and HbSS-BERK mice. There was no significant difference in the number of LH-located orexin neurons in HbAA-BERK vs. HbSS-BERK mice (70.3 ± 9.7 vs. 71.3 ± 7.0, p = 0.934). This means that the total orexin immunoreactive neuron counts in those subregions were similar in HbAA-BERK vs. HbSS-BERK mice. Similarly, there was no significant difference in the number of PFA-located orexin neurons from HbAA-BERK vs. HbSS-BERK mice (35.9 ± 2.7 vs. 49.2 ± 5.9, p = 0.08), nor was there a statistically significant difference in the number of DMH-located orexin neurons from HbAA-BERK vs. HbSS-BERK mice (Figure 7C, 60.1 ± 8.2 vs. 67.7 ± 6.5, p = 0.475).
DISCUSSION
In the present investigation, we sought to determine whether there were quantitative differences in the activation of orexin neurons after pain testing in a mouse model of SCD. This current study assessed the degree of hyperalgesia expressed in transgenic sickle mice (that express human sickle hemoglobin) vs. control mice (that express normal human hemoglobin) using various pain testing modules and then quantified the immunoreactivity for c-Fos, orexin, and double labeled, c-Fos activated, orexin neurons in the DMH, PFA and LH of these two groups of mice.
The behavioral results showed that HbSS-BERK mice display a higher degree of hyperalgesia than HbAA-BERK mice and that while there were no significant activational differences in AA vs. SS mice for the three subregions, topographical differences were observed in HbAA-BERK and HbSS-BERK mice. Overall, the data indicate that the state of the mice (sickle hemoglobin vs. normal hemoglobin) and their sensitivity to painful stimuli may influence activation of orexin neurons within specific hypothalamic subregions.
Our behavioral findings showed that HbSS-BERK mice display significantly greater sensitivity to heat and cold hyperalgesia vs. HbAA mice. The HbSS-BERK mice showed a decreased paw withdrawal latency vs. HbAA mice to the heat stimulus as evidenced by the shorter time interval required to move the forepaw from the floor of the apparatus after being exposed to the heat. Similarly, HbSS-BERK mice displayed decreased paw withdrawal latency vs. HbAA-BERK mice when exposed to the surface of the cold plate. The HbSS-BERK mice lifted their forepaw in a shorter time and displayed a greater number of behavioral responses while exposed to the cold environment. Specifically, there was an increased number of observations for shivering/body shakes, paw flutter, and consistently lifting paws from the cold plate in the HbSS-BERK vs. HbAA-BERK mice. This increase in physical responses to the cold environment indicates that SS mice have more cold sensitivity and may also indicate cutaneous hyperalgesia in HbSS-BERK mice. It is thought that temperature changes and extremes may precipitate painful crises in patients with SCD (Smith et al., 2003) and our observations and others (Lei et al., 2016) support this claim in the HbSS-BERK mice model.
Similarly, HbSS-BERK mice displayed an increase in mechanical hyperalgesia and deep tissue hyperalgesia vs. HbAA-BERK mice, with an increased sensitivity to the von Frey filament and decreased grip force, respectively. The paw withdrawal frequency evoked when using the von Frey monofilament was significantly higher in HbSS mice vs. HbAA-BERK control mice. This higher paw withdrawal frequency indicated increased hyperalgesia in HbSS-BERK mice. The measurement of deep tissue hyperalgesia in the mice was done to model the chronic musculoskeletal pain reported by SCD patients. Information gained from measuring deep tissue hyperalgesia may indicate inherent pain in the mice. Deep tissue hyperalgesia is associated with the activation of visceral, joint, and musculoskeletal pain receptors. The behavioral responses may reflect the muscle and joint tenderness that is often observed during a painful crisis.
Our current findings for HbAA-BERK and HbSS-BERK mice during pain testing are consistent with those found in past studies (Kohli et al., 2010; Lei et al., 2016) and support the validity of this model to study neuropathic pain. In agreement with their findings, HbSS mice (with sickle human hemoglobin) display more responses to pain testing, indicating increased hyperalgesia vs. HbAA-BERK control mice. Animal models have become increasingly important in understanding neuropathic pain in SCD patients. Transgenic sickle mice that express sickle hemoglobin are one of the best models to date. These mice experience pain episodes similar to those observed in humans. Such pain is more common in females; therefore, we only used female mice in our study. It is estimated that the incidence of neuropathic pain in the SCD population may be twice that found in chronic pain populations other than SCD (Brandow et al., 2014). It is believed that neuropathic pain cases occur during painful sickle crises and resolve after the crisis ends.
The data from our immunohistochemical studies identified three distinct groups of neurons within the hypothalamic regions of HbSS and HbAA-BERK mice: Fos-only single labeled, orexin-only single labeled, and c-Fos activated orexin neurons. Differential activation of orexin subpopulations after pain testing in HbSS and HbAA-BERK mice was observed. There was a significant increase in the percentage of double labeled (c-Fos-orexin) neurons in the PFA when compared to those located in the LH of HbSS mice, and this same relationship was also observed in HbAA-BERK mice. These patterns in activation of orexin cells reveal subregion-specific, differential activation. This observation has also been reported in the literature for orexin neurons after a myriad of behavioral and pharmacological studies, including those measuring the c-Fos activation of orexin neurons after behavioral testing for reward, reinstatement, feeding, stress and arousal (Harris et al., 2005; Winsky-Sommerer et al., 2005; Smith et al., 2009; Mahler et al., 2012; Moorman et al., 2017). Our data suggest that hyperalgesia-induced behavioral responses are associated with activation of orexin neurons and highlight anatomically and functionally distinct populations of orexin neurons.
A dichotomy in orexin function was previously proposed (Estabrooke et al., 2001;Harris and Aston-Jones, 2006;Yoshida et al., 2006), indicating that orexin neurons that are located in the DMH and PFA are preferentially associated with homeostasis and arousal/shock. Studies have shown that footshock, restraint or cold-exposure all increase c-Fos immunoreactivity in orexin neurons located in the PFA (Sakamoto et al., 2004;Plaza-Zabala et al., 2010;James et al., 2014). The orexin neurons in the LH were preferentially innervated by brainstem and areas involved in autonomic and visceral processing. These LH-located orexin neurons were activated during reward processing for both food and drugs of abuse and directly correlated with behavioral preference (Harris and Aston-Jones, 2006;Mahler et al., 2014). In another study, orexin neurons in the DMH and PFA were affected by diurnal changes; however the same did not occur for LH orexin neurons (Estabrooke et al., 2001). Additionally, activation of LH orexin neurons correlates with weight gain after the administration of anti-psychotic drugs in male rats, but not in DMH orexin neurons (Fadel et al., 2002).
Our current findings extend this hypothesis by proposing that the association between the orexin system and pain may also be affected by this functional dichotomy. It is possible that specific orexin neuron subpopulations were selectively recruited, in a topographically defined manner, during hyperalgesia. This may explain the significant differences in the percentage of c-Fos activated orexin neurons in the PFA vs. LH in HbSS and HbAA-BERK mice. Interestingly, in HbSS-BERK mice only, the percentage of DMH c-Fos-orexin neurons activated after pain testing was also higher than that observed in the LH. This additional recruitment of activated DMH neurons may be a result of the increased hyperalgesia observed in the HbSS-BERK group. The increased hypersensitivity to the stimuli during the series of pain assessments in HbSS-BERK mice may be due to the afferent and efferent projections to and from the subregions.
Although there was no significant difference in activation of orexin neurons in the HbAA-BERK vs. HbSS-BERK mice (activational differences) nor in the absolute orexin neuron counts, there was a difference in the number of Fos neurons activated in HbAA-BERK vs. HbSS-BERK mice. This last finding was unexpected and may result from the sampling methodology. There was sampling from a subset of sections which may have caused some differences in the c-Fos population counts related to the measured hyperalgesia. However, further studies would need to be conducted to show that these Fos neurons were directly correlated with behavior. In our hands, this increase in Fos was not directly correlated with behaviors after pain testings. Another caveat is that the cell types for the single labeled c-Fos neurons that were recruited or activated after pain testing were not identified in this study. There are a number of other neuropeptides and neurotransmitters in these regions; however, identification of those specific neuronal cell types was beyond the scope of this investigation.
Previous studies have implicated the orexin system in the modulation of pain. In a neuropathic pain model using partial sciatic nerve ligation in the rat, intrathecal and intracerebroventricular orexin-A administration produced a significant analgesic effect (Yamamoto et al., 2002). In another study, orexin-A peptide reduced heat-evoked hyperalgesia in a rat model of chronic constriction injury of the sciatic nerve, but the same result was not observed with orexin-B peptide (Suyama et al., 2004). This antinociceptive effect from orexin-A may be mediated partly via orexin-1 receptors (OX1R) in the dorsal horn of the spinal cord (Jeong and Holden, 2009; Wardach et al., 2016). Orexin-A produced an analgesic effect mediated by the activation of OX1R using a hot plate test (Bingham et al., 2001). While all of these studies suggest that orexin-A has an analgesic effect on pain, and specifically neuropathic pain, these data do not provide information about the entrained patterns of activation within the subregions. The lateral hypothalamus may facilitate antinociception through spinally descending orexin neurons. It is thought that directly stimulating the lateral hypothalamus produces antinociception mediated by OX1R in the dorsal horn (Wardach et al., 2016). However, it is difficult to interpret the results from Wardach et al. (2016) as being specific to the LH, or at least to the region that we have categorized as the LH in our current study, since it is possible that stimulation of that hypothalamic region may have also engaged neurons within the PFA and possibly the DMH. In past studies, the categorization of orexin subregion boundaries has differed across studies. For this reason, the data from our study are critical to understanding the differences in the profiling of orexin neurons. The immunohistochemical data in our study support a distinct sampling of all of the orexin neuron regions. These data provide more understanding and identification of which orexin subpopulations may be involved in pain processing. This is the first published paper to show the topography associated with the activation and/or engagement of the orexin system in a model of hyperalgesia associated with SCD.
In future studies, we seek to elucidate mechanisms to improve the management of neuropathic pain and apply them to develop appropriate interventions. Previous studies have supported the idea that hyperalgesia is reduced by mechanisms that engage spinally descending orexin-A neurons (Wardach et al., 2016). This neuropeptide system offers a novel approach to treat chronic pain and hyperalgesia. In spite of recent evidence for its effect in reducing hyperalgesia in nerve constriction models, there have been no studies investigating the system as a potential target for neuropathic pain specifically in a model of SCD. This current study is the first to show that there is regionally specific activation of orexin neurons as a result of various pain assessments for hyperalgesia (a component of neuropathic pain) demonstrated in a mouse model of SCD. We believe that data from these experiments will lay the foundation for a more in-depth investigation of alternative pharmacological therapies to treat neuropathic pain in the SCD population.
In order to develop strategies to treat and even prevent neuropathic pain in SCD, an initial step in this process was to identify whether there were differences in the activation of orexin neurons in sickle mice vs. control mice and to compare the topography of activated orexin neurons. This information provides the knowledge needed to specifically delineate whether specific subpopulations are selectively recruited in sickle mice after pain assessments for hyperalgesia. These findings confirm the activation of the orexin system after pain challenge in sickle mice vs. control mice and provide an initial map for which subpopulations are activated and can be pharmacologically targeted to treat neuropathic pain.
Final Thoughts
Despite pain being the most common complication of SCD, there is a lack of novel treatments for pain. Advancements in treatment options for neuropathic pain are needed, and drugs commonly used to alleviate pain (i.e., opioids, NSAIDs) have not been reliable. The management of neuropathic pain remains challenging because this type of pain does not respond consistently to treatment. Although there has been some progress in neuropathic pain research, patients report that their pain is not managed effectively. Opioid compounds have continued to be the primary option to treat pain for several decades. However, chronic opioid use in SCD may adversely affect peripheral systems and lead to the development of opioid tolerance or opioid-induced hyperalgesia. There is minimal use of neuropathic pain drugs (gabapentin and hydroxyurea) in the SCD population, and this may be due to minimal systematic screening for this type of pain.
A large proportion of SCD patients use opioids, which provide only limited relief when experiencing chronic pain. However, long-term opioid use may produce severe side effects and does not provide a permanent resolution of the pain. Sociocultural factors also provide a barrier to effective pain management in SCD. Ineffective pain assessment and unfounded concerns by health providers regarding addiction have hindered pain management in the SCD population (Brown et al., 2015). The sociocultural disparity between patients and providers may contribute to the reluctance of health care providers to prescribe narcotics (Shapiro et al., 1997; Elander et al., 2006). More SCD research and changing attitudes concerning care can help to eliminate the barriers that exist. One option to begin to address this disparity is to identify a non-addictive drug that can be used to alleviate pain in the SCD population. We contend that treatments that pharmacologically target the orexin system could be a promising alternative option to reduce pain in SCD and reduce the requirement for opioid analgesics.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by the Institutional Animal Care and Use Committee at the University of Minnesota, protocol: KG.
AUTHOR CONTRIBUTIONS
KR: planning and conducting experiments, collection of all data, data processing, data analyses and interpretation, figure making, and writing of this manuscript. NS: data processing and writing of this manuscript. HT: conducting experiments and figure making. VA: data analyses. SU: data processing. RT: planning experiments and writing of this manuscript. KG: planning experiments, bred and phenotyped all the mice, interpreting data, and writing of this manuscript.
FUNDING
This work was supported by grants from the National Institutes of Health, P50 HL-118006 (KR and RT), UO1HL117664 (KG) and RO1 HL147562 (KG). Support was also provided by the National Science Foundation HRD-1503192 (NS).
"Biology"
] |
Quantum wake dynamics in Heisenberg antiferromagnetic chains
Traditional spectroscopy, by its very nature, characterizes physical system properties in the momentum and frequency domains. However, the most interesting and potentially practically useful quantum many-body effects emerge from local, short-time correlations. Here, using inelastic neutron scattering and methods of integrability, we experimentally observe and theoretically describe a local, coherent, long-lived, quasiperiodically oscillating magnetic state emerging out of the distillation of propagating excitations following a local quantum quench in a Heisenberg antiferromagnetic chain. This “quantum wake” displays similarities to Floquet states, discrete time crystals and nonlinear Luttinger liquids. We also show how this technique reveals the non-commutativity of spin operators, and is thus a model-agnostic measure of a magnetic system’s “quantumness.”
Ever since its introduction, the Heisenberg chain [1] has been the paradigmatic model of strongly-correlated many-body quantum physics. Its exact solution by Bethe [2] gave birth to the field of quantum integrability; its magnetic excitations, spin-1/2 spinons [3], are the prototypical fractionalized excitations. The model is not simply a theoretical archetype, but also effectively describes many physical quantum magnets such as KCuF3 [4,5], in which the chains are formed by magnetic Cu2+ ions hybridizing along the c axis. Although KCuF3 orders magnetically at Tn = 39 K, even below the ordering temperature its high energy spectrum retains the characteristic spinon spectrum [6] while exhibiting strong quantum entanglement [7].
One of the best experimental tools for studying magnetic excitations is inelastic neutron scattering [8], which measures the energy-resolved Fourier transform of the space- and time-dependent spin-spin correlation function G(r, t) = ⟨S^α_i(0) S^α_{i+r}(t)⟩ (α = x, y, z) [9]. Accordingly, scattering cross section data are typically reported in terms of reciprocal space and energy. As pointed out by Van Hove in 1954 [10,11], with enough data one can take the inverse Fourier transform and obtain the spin correlations in real space and time with atomic spatial resolution and time resolution of ∼10^-14 s. This transformation was shortly thereafter applied to neutron scattering data on liquid lead [12], and more recently to water using inelastic x-ray scattering [13], but has not been applied to magnetic materials.
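For readers who want the explicit relation, the transform implied here can be written schematically as follows; the sign and normalization conventions below are an assumption, since conventions differ between references.

```latex
% Schematic inverse Van Hove transform (normalization and sign conventions assumed).
\begin{equation*}
  G(r,t) = \langle S^{\alpha}_{i}(0)\, S^{\alpha}_{i+r}(t)\rangle
  \;\propto\; \sum_{Q} \int \mathrm{d}\omega \; S(Q,\omega)\,
  e^{\, i (Q r - \omega t)} ,
\end{equation*}
% so Re[G] collects the symmetric (anticommutator-like) part of the measured
% intensity and Im[G] the antisymmetric (commutator-like) part discussed below.
```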
Space-time dynamics in one dimension has been the subject of extensive study in recent decades [14], with attention mostly focusing on ballistically-propagating excitations (describable using bosonization / Luttinger liquid theory [15]) forcing "light-cone"-induced bounds on velocity of correlations and entanglement spreading [16].The physics of Heisenberg chains is however much richer, containing nonlinearities whose effects can be captured exactly using integrability, or asymptotically using nonlinear Luttinger liquid theory [17].
In this paper, we use high-precision INS data transformed back to real, atomic-level space and time to characterize magnetic dynamics at the local level in a Heisenberg chain. We focus on previously-overlooked features of the real-space/time magnetic Van Hove correlation function G(r, t), namely the effects of long-term coherent, non-propagating excitations (beyond the reach of bosonization). We observe a correlated time-dependent state resulting from the integrability-induced "persistent memory" of the Heisenberg chain. This state is reminiscent of a local many-body Floquet state or a discrete time crystal, in that it displays a characteristic time-repeating pattern with fixed period. Its correlations also display a remarkable (spatial) "period doubling" (mirroring the time period doubling of a discrete time crystal), in that the original site-alternating Néel order of the initial state changes to a two-site-spaced, oscillating antiferromagnetic correlation. This state, which we call a "quantum wake" due to its similarity to the wake created by a moving ship, is a coherent wavepacket of "deep" and "edge" spinons stabilized and made observable via a Van Hove singularity, and recalls the quantum dynamical impurity picture of nonlinear Luttinger liquid theory.
RESULTS:
Experimental G(r, t) results are obtained using available KCuF 3 data from Refs.[5,18] (full details are provided in the Methods section).The result is shown in Fig. 1, where ferromagnetic G(r, t) correlations are shown in red and antiferromagnetic correlations are shown in blue.To help interpret the experimental G(r, t), we also calculated G(r, t) from: (i) Bethe Ansatz [5] for zero temperature, and (ii) semiclassical linear spin wave theory (LSWT).These are shown in Fig. 2.
Real space G(r, t) for spin systems can also be probed with cold atom and trapped ion experiments [19][20][21], but G(r, t) derived from neutron scattering has several unique advantages: (i) The systems probed by neutrons are thermodynamic, and temperature is a well-defined quantity.(ii) Neutrons explore the spin system's evolution following a local perturbation.(iii) As we show below, neutron scattering accesses the imaginary G(r, t) which reveals quantum coherence and Heisenberg uncertainty.
The Fourier transform of the S(Q, ω) scattering data produces a G(r, t) with complex values, with a distinct interpretation for the real and imaginary parts. As noted by Van Hove [11], the imaginary part Im[G(r, t)] = (1/2i)⟨[S^α_i(0), S^α_{i+r}(t)]⟩ (α = x, y, z) quantifies the imbalance between positive and negative energy scattering. By Robertson's relation [22], a nonzero commutator between observables implies Heisenberg uncertainty; thus nonzero imaginary G(r, t) indicates the presence of an uncertainty relation between S^z_i(0) and S^z_j(t). This mutual incompatibility is thus an indicator of quantum coherence between spins (see supplemental information). It is striking that the quantum coherence can be tracked as a function of temperature with the imaginary G(r, t) in Fig. 1. As temperature increases, the nonzero imaginary G(r, t) shrinks to shorter and shorter times and distances, showing how the finite-temperature macroscopic world emerges from the quantum world. On the other hand, the real part Re[G(r, t)] = (1/2)⟨{S^α_i(0), S^α_{i+r}(t)}⟩ extracts classical behaviour surviving even at infinite temperature.
The real space correlations in Figs. 1 and 2 emerge from a flipped spin at t = 0, r = 0. A number of things can be observed from these G(r, t) data: first, the characteristic "light cone" defined by the spinon velocity v = πJ/2, where J is the exchange interaction. At low temperatures, everything below the light cone is static while everything above it is dynamic. Second, at low temperature in G(r, t) there is a clear distinction between even and odd sites: the odd neighbor correlations quickly decay to zero above the light cone, whereas the even neighbor correlations persist to long times. Third, as temperature increases the spin oscillations above the light cone shrink to shorter distances and times, until by 200 K the on-site (r = 0) correlation oscillates only once and no neighbor-site oscillations are visible. Fourth and finally, the wavefront above the light cone changes to ferromagnetic at high temperatures (Fig. 1h), whereas it was antiferromagnetic at low temperatures. This accompanies the nonzero imaginary G(r, t) shrinking to shorter and shorter times and distances as temperature increases.
To gain a better understanding of the signal, we should identify which excitations are responsible for which part. The light cone is due to the low-energy correlations around Q ∼ π, which can be understood from traditional bosonization, the Fermi velocity being given by the group velocity of Q ∼ π spinons. These being the fastest-moving ballistic particles, they limit the velocity of energy, correlations, and entanglement propagation, giving the Lieb-Robinson bound [16,23]. Such a light cone is seen in theoretical simulations [24][25][26][27][28][29] and cold-atom experiments [20], and nicely also here in KCuF3.
Letting the fast-moving ballistic particles "distill" away leaves a "quantum wake" behind the wavefront, a persistent oscillating state above the light cone which is clearly seen in Fig. 1 panel b and Fig. 2 panels b, c, e and f. This originates from another crucial characteristic of S(Q, ω), namely that its correlation weight is spread nontrivially within the spinon continuum. Contrasting LSWT with Bethe Ansatz in the second and third row of Fig. 2 shows stark differences in dephasing behaviour. LSWT, being inherently coherent, has very slow dephasing and no quantum wake. For the experimental and Bethe Ansatz G(r, t), however, there exist pockets of states around Q ∼ π/2, 3π/2, ω ≈ πJ/2 which display a Van Hove singularity in their density of states. Since the existence and sharpness of the lower edge are contingent on integrability, measuring the (slowness of the) time decay of the quantum wake is in fact a direct experimental measurement of the proximity to integrability.
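To make the "zero group velocity at Q ≈ π/2" statement concrete, the lower boundary of the two-spinon continuum can be evaluated numerically; the sketch below assumes the standard des Cloizeaux-Pearson form ω_L(Q) = (πJ/2)|sin Q| and uses a placeholder value of J rather than a fitted one.

```python
# Lower spinon boundary (des Cloizeaux-Pearson form) and its group velocity.
import numpy as np

J = 33.5                       # exchange constant in meV (placeholder, not a fitted value)
Q = np.linspace(0, np.pi, 7)   # momentum along the chain

omega_lower = 0.5 * np.pi * J * np.abs(np.sin(Q))   # lower edge of the two-spinon continuum
group_velocity = 0.5 * np.pi * J * np.cos(Q)        # d(omega)/dQ of the lower edge

for q, w, v in zip(Q, omega_lower, group_velocity):
    print(f"Q = {q:4.2f}  omega_L = {w:6.1f} meV  v_g = {v:7.1f} meV")
# v_g -> 0 at Q = pi/2: these stationary states pile up in a Van Hove singularity
# and source the long-lived oscillations above the light cone.
```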
To more illustratively map the features in G(r, t) with specific spinon states, we selectively remove parts of the Bethe Ansatz S(Q, ω) spectrum, keeping only key features, and Fourier transform into G(r, t).As shown in Fig. 3(a)-(b), the oscillations above the light cone come from the Q = π/2 Van Hove singularities at the top of the spinon dispersion where the spinons have zero group velocity.Meanwhile, Fig. 3(c)-(d) shows the light cone emerges from the strongly dispersing low-energy Q = π states.Combining these two states in Fig. 3(e)-(f) gives a rough reproduction of the actual G(r, t), indicating that the Q = π/2 and Q = π spinon states are what give the Heisenberg chain quantum wake its distinctive properties.Bolstering this conclusion is the analysis shown in Fig. 3(g)-(h) where we remove the oscillations above the light cone from G(r, t), and transform back into S(Q, ω).In this case, we see the familiar spinon spectrum, but with the stationary Q = π/2 states missing-showing that the flat singularity at the top of the spinon dispersion is responsible for the long-lived oscillating spin correlations.
Quantum scrambling: Perhaps the most striking feature of the KCuF3 quantum wake is the total loss of Néel correlations above the light cone. Below the light cone, the system shows static Q = π antiferromagnetism. Above the light cone, the system shows dynamic period-doubled Q = π/2 antiferromagnetism, with hardly a trace of the original state. In stark contrast to this, equal-time real space correlators ⟨S^α_i(t) S^α_j(t)⟩ (as opposed to the dynamical correlator G(r, t), which measures ⟨S^α_i(0) S^α_j(t)⟩) computed from Bethe ansatz show rapid reemergence of Q = π antiferromagnetism above the light cone, where the nearest neighbor ⟨S^α_0(t) S^α_1(t)⟩ → −0.1477... [30] as t → ∞. At first glance, these results are contradictory; but the difference between ⟨S^α_0(0) S^α_1(t)⟩ and ⟨S^α_0(t) S^α_1(t)⟩ indicates the new AFM correlations form in a basis orthogonal to the original basis. In other words, the t → ∞ state has zero correlations with the t = 0 state, in accord with Anderson's orthogonality catastrophe [31].
This process can be more precisely described as quantum scrambling: the delocalization of quantum information over time [13,16]. Typically such physics is studied via out-of-time-order correlators (OTOC; see the Supplemental Materials section for details). G(r, t) provides an alternative and more experimentally accessible way to study quantum scrambling, quench dynamics, and quantum thermalization in physical systems.
Heuristic understanding of G(r, t): The π/2 oscillations inside the quantum wake can be understood heuristically as particle-antiparticle annihilation. In an antiferromagnetic chain, a down spin flipped up creates two spinons, while an up spin flipped down creates two antispinons. These quasiparticles interfere as schematically shown in Fig. 4. Spinons from even neighbor sites interfere constructively and produce a full spin flip, while antispinons from odd neighbor sites interfere destructively and annihilate. Thus G(r, t) oscillates on even sites and Re[G(r, t)] = 0 on odd sites.
This spinon heuristic interpretation can explain the temperature evolution of G(r, t) in KCuF3. As temperature increases, the static spin correlations and spin entanglement are suppressed [7], which destroys the coherence of the spinons from neighboring sites as illustrated in Fig. 4(b), and the oscillations vanish.
This also explains the shift to a ferromagnetic wavefront at high temperatures [Fig. 1(h)]. At low temperatures, the spinons propagate atop a substrate of antiferromagnetic correlations, giving rise to antiferromagnetic oscillating interference patterns. At higher temperatures, the static correlations are mostly gone, and so is coherence with neighboring sites (evidenced by the vanishing Im[G(r, t)]), so the propagating spinons simply appear as a pair of up-spins hopping through the lattice. In this way, the high temperature quantum wake directly shows spinon quasiparticles: one can "see" them in the data. It is striking that a diffuse high-temperature S(Q, ω) could yield such a clear quasiparticle signature in G(r, t). This technique could have profound implications for identifying exotic quasiparticles in other magnetic systems.
CONCLUSIONS:
In conclusion, we have shown using KCuF3 scattering that it is possible to resolve the real-time spin dynamics of a local quantum quench via neutron scattering. This reveals details about the quantum dynamics which were not obvious otherwise. First, we are able to directly observe the formation of an orthogonal state within the quantum wake as the light cone scrambles the initial state, leaving behind decaying period-doubled π/2 oscillations. Second, using the imaginary G(r, t) we observe quantum coherence, as revealed by non-commuting observables between spins more than 10 neighbors distant in Fig. 2. This is far longer range "quantumness" than is revealed by entanglement witnesses [7]. Third, the high-temperature G(r, t) shows the spinon quasiparticles visually in the data, without need for theoretical models. Such details are difficult or impossible to see with other techniques.
The ability to probe short time and space dynamics of quasiparticles is of key importance to both fundamental quantum mechanics research and technological applications. On the fundamental side, the observation of a quantum wake with quasiperiodic π/2 oscillations shows behavior not captured by bosonization, which means theorists need to re-tool their analytic methods to understand the short-time dynamics of quantum spin chains. Also, measuring G(r, t) at a well-defined finite temperature may shed light on eigenstate thermalization and quantum scrambling in higher-dimensional systems. On the applications side, G(r, t) is more closely related to the output of current quantum computers and so may provide more direct application of this technology. Also, understanding the short-time behavior of quasiparticles in quantum systems is a crucial step in using them for quantum logic operations in real technologies. Neutron scattering derived G(r, t) provides key insight into these problems.
METHODS
Full methods are available in the Supplementary Information.
Extracting G(r, t) from inelastic neutron scattering
The high-energy scattering data was measured on MAPS at ISIS with phonons subtracted, and low energy (< 7 meV) scattering data at high temperatures (where the MAPS data is noisy) was filled in with data measured on SEQUOIA [34] at ORNL's SNS [35]. Both data sets were corrected for the magnetic form factor, and the resulting combined data are shown in Fig. 1.
We then masked the elastic scattering (as it is mostly nonmagnetic incoherent scattering), calculated the negative energy transfer scattering using detailed balance, and computed the Fourier transform of the neutron scattering data in both Q and ℏω, yielding the spin-spin correlation in real space and time, G(r, t) = ⟨S(0) · S_r(t)⟩. (Prior to transforming, the high energy MAPS data was interpolated using Astropy Gaussian interpolation [36] to create a uniform grid.) The short-distance, long-time G(r, t) dynamics are governed by the lowest measured energies. In this case, the low energy cutoff was 0.7 meV, which means G(r, t) is reliable only up to ∼5 × 10^-13 s. Further details are given in the Supplemental Information. Thus, the long-time dynamics are inaccessible to the current data set. This being said, there is an important visible difference between KCuF3 and the Bethe Ansatz G(r, t) at long times: KCuF3 tends toward antiferromagnetic correlations (odd neighbors fade towards red, even neighbors fade more blue), whereas the Bethe ansatz shows no such trend. This is because KCuF3 is magnetically ordered at 6 K due to interchain couplings, and thus has an infinite-time static magnetic pattern; but the idealized 1D Heisenberg AFM does not. Remarkably, the Van Hove function picks this up even though the elastic line (and thus the Bragg intensity) was not included in the transform.
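A minimal sketch of this processing chain on a toy data set is given below; the grids, the fake intensity, and the temperature are placeholders, and an explicit phase sum is used instead of the actual interpolation and FFT machinery of the published reduction.

```python
# Schematic Van Hove transform of a measured S(Q, hbar*omega) grid (toy example).
import numpy as np

kB = 0.08617   # meV / K
T = 6.0        # temperature in K (assumed)
hbar = 0.6582  # meV * ps

# Toy positive-energy data S[q_index, e_index] on uniform grids (placeholders).
Q = np.linspace(0, 2 * np.pi, 64, endpoint=False)   # momentum along the chain
E = np.linspace(0.7, 50.0, 128)                     # energy transfer in meV (0.7 meV cutoff)
S_pos = np.exp(-((E[None, :] - 20.0) ** 2) / 50.0) * (1 + np.cos(Q))[:, None]  # fake intensity

# Detailed balance fills in the negative-energy side: S(Q, -E) = exp(-E / kB T) * S(Q, E).
S_neg = S_pos * np.exp(-E[None, :] / (kB * T))

# Assemble the full energy axis (elastic line masked, i.e. omitted) and transform:
# G(r, t) ~ sum_Q sum_E S(Q, E) * exp(i (Q r - E t / hbar)).
t = np.linspace(0, 0.5, 200)      # time in ps
r = np.arange(-10, 11)            # neighbor index along the chain

E_full = np.concatenate([-E[::-1], E])
S_full = np.concatenate([S_neg[:, ::-1], S_pos], axis=1)

phase_r = np.exp(1j * np.outer(r, Q))                 # (r, Q)
phase_t = np.exp(-1j * np.outer(E_full, t) / hbar)    # (E, t)
G = phase_r @ (S_full @ phase_t)                      # complex G(r, t)

print(G.real.shape, G.imag.shape)  # real part: classical correlations; imag part: coherence
```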
Theoretical simulations
The Bethe Ansatz plots were produced from data obtained using the ABACUS algorithm [37] which computes dynamical spin-spin correlation function of integrable models through explicit summation of intermediate state contributions as computed from (algebraic) Bethe Ansatz.Linear spin wave calculations were carried out using SpinW [38].
In the Supplemental Information, we also consider (i) the S = 1/2 ferromagnet using both density matrix renormalization group theory (DMRG) and LSWT, and (ii) the quantum S = 1/2 Ising spin chain for various anisotropies using perturbation theory.
As noted in the main text, the real and imaginary parts of G(r, t) probe different quantum mechanical functions. The imaginary part of G(r, t) can be written with a commutator, Im[G(r, t)] = (1/2i)⟨[S^α_i(0), S^α_{i+r}(t)]⟩ (S.2); therefore, the imaginary component of G(r, t) directly gives the dissipative susceptibility. Following the same derivation, we arrive at the equation for the real part, Re[G(r, t)] = (1/2)⟨{S^α_i(0), S^α_{i+r}(t)}⟩ (S.3). Comparing eq. (S.2) and eq. (S.3), one can see why the imaginary part of G(r, t) goes to zero at infinite temperature or in the classical limit: as all states are equally populated, the commutator (and thus dissipation) vanishes. This corresponds to S(−q, −ω) = S(q, ω). Meanwhile, so long as correlations exist, eq. (S.3) is nonzero even at infinite temperature or in the classical limit.
A nonzero commutator between spins has a non-trivial relationship to quantum entanglement.Generically, the equal time spin operators of any two different spins always commute: [S α i (0), S β j (0)] = 0, no matter whether the wavefunction formed by the two spins has off-diagonal density matrix components (i.e., no matter whether the two spins are entangled).To obtain a nonzero commutator (and thus an uncertainty relation), one must introduce time evolution to one of the spins with a Hamiltonian that involves interaction between S i and S j .In this case, the commutator may be nonzero.
The presence of Heisenberg uncertainty generically implies quantum coherence between two operators, such that an observation of one quantity destroys the other's state.This is actually the opposite of quantum entanglement, where observation of one quantity determines the other's state.Thus, the presence of nonzero imaginary G(r, t) does not necessarily imply quantum entanglement (defined by off-diagonal density matrix components), but instead it witnesses a quantum coherence between S i and S j .This is related (but not formally equivalent to) quantum discord, which is a generic measure of quantum correlations [1,2].Thus Im[G(r, t)] is a witness of the quantum coherence of a system, which in the case of KCuF 3 extends to beyond 10 neighbors along the chain at 6 K.This is in accord with its highly coherent and entangled ground state.As temperature increases, the imaginary G(r, t) becomes severely truncated in space, as shown in the main text Fig. 1.
II. FERROMAGNETIC SPIN CHAIN
As discussed in the main text, the π/2 stationary oscillations inside the quantum wake can be understood heuristically as spinon-antispinon interference. Here we propose an alternative (equally valid) heuristic for understanding the π/2 oscillations within the quantum wake: the effects of a spin-down operator on a down spin. If the t = 0, r = 0 spin is flipped up-to-down and the down-spin spinon propagates outward, the spin-lowering operator acting on a down spin results in zero. Meanwhile, the spin-lowering operator acting on an up spin results in a spin flip. Thus odd (up-spin) sites' correlations go to zero as the spinon light cone passes, and even sites flip.
To confirm the validity of these spinon heuristics, we also consider the isotropic S = 1/2 ferromagnetic chain, and simulate its T = 0 neutron spectra with DMRG [3][4][5] and LSWT, see Fig. S1. The DMRG calculation was performed on a chain of L = 50 sites with open boundaries, keeping up to m = 500 states in the calculation. S(Q, ω) was calculated using the DMRG++ [5] implementation of the Krylov-space correction vector method [6,7], and a Lorentzian energy broadening with half-width at half-maximum (HWHM) η = 0.1|J| to account for the finite-size system. To isolate the inelastic scattering, a Lorentzian with height S(Q, 0) was subtracted at each Q-point. Unlike the AFM case, excitations from the zero temperature FM ground state are spin flips of the same direction, which would mean no antiparticles are created and no destructive interference will occur. This is indeed what we see: all sites oscillate in time above the light cone, and no continuum exists in S(Q, ω).
If there were regular destructive interference, it would by necessity create a continuum in the neutron spectrum S(Q, ω): well-defined oscillations in time corresponds to a sharp mode in energy, whereas suppressed (or quickly decaying) correlations correspond to diffuse modes in energy.So even without transforming the neutron data into S(Q, ω), it should be obvious from the well-defined mode that there is no significant particle-antiparticle annihilation in G(r, t) for the zero temperature FM spin chain.
III. TOWARD THE ISING LIMIT
Figure S2 shows the calculated real space correlations from perturbation theory at T = 0 approaching the Ising limit. The S(q, ω) was calculated as described in Ref. [8] and transformed into G(r, t). There are several things worth noting: first, just like the S = 1/2 Heisenberg chain, the odd neighbor sites' correlations go to zero, in accord with spinon-antispinon interference. Second, there is no well-defined wavefront visible in the data, possibly because the simulated intensity only includes the inelastic channel. Finally, as the Ising limit is approached, the "light cone" gets steeper and steeper, corresponding to slower and slower spinon velocities.
IV. THE XY LIMIT
Although the isotropic Heisenberg chain model applicable to KCuF3 can be solved exactly using the Bethe ansatz, the resulting expressions are often complicated. We can instead consider the antiferromagnetic isotropic XY-model (or XX-model) [9], H = J Σ_{i=1}^{N} (S^x_i S^x_{i+1} + S^y_i S^y_{i+1}), (S.4) for which simpler, closed-form expressions can be obtained using the Jordan-Wigner formalism. At zero magnetic field, for a chain of N sites with open boundary conditions, the longitudinal dynamical correlation between two lattice sites j and l can be written [10,11] ⟨S^z_j(t) S^z_l(0)⟩ = [ (1/(N+1)) Σ_k sin(kj) sin(kl) ( cos(tJ cos k) − i sin(tJ cos k) tanh(J cos k / 2k_B T) ) ]^2 − [ (1/(N+1)) Σ_k sin(kj) sin(kl) ( i sin(tJ cos k) − cos(tJ cos k) tanh(J cos k / 2k_B T) ) ]^2, (S.5) where k = mπ/(N+1), 1 ≤ m ≤ N+1, is the momentum. Due to their simple structure, these sums can be evaluated at arbitrary times, temperatures and finite sizes. Yet they still capture several of the qualitative features observed in the KCuF3 G(r, t), as shown in Fig. S3. It is easy to analytically see the emergence of real-valued ferromagnetic correlations at high temperatures. As shown in Fig. ?? and Fig. 2 of the main text, imaginary G(r, t) correlations are only nonzero above the light cone, in good agreement with the commuting spin operators in the Heisenberg antiferromagnetic ground state. Above the light cone, both Bethe ansatz and KCuF3 show nonzero negative static Im[G(r, t)] on the odd sites, and oscillating but on average positive Im[G(r, t)] on the even sites. This concurs with quantum scrambling, where time-like separated spin operators do not commute with the original magnetism.
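Eq. (S.5) is simple enough to evaluate directly; the sketch below does so for a finite open chain, with the chain length, temperature, and site pair chosen arbitrarily for illustration.

```python
# Finite-N evaluation of the open-chain XX-model correlator, Eq. (S.5).
# Chain length, temperature, and the site pair (j, l) below are arbitrary placeholders.
import numpy as np

def szsz(j, l, t, N=50, J=1.0, kBT=0.5):
    m = np.arange(1, N + 2)
    k = m * np.pi / (N + 1)
    lam = J * np.cos(k)                        # single-particle energies J cos k
    geom = np.sin(k * j) * np.sin(k * l) / (N + 1)
    th = np.tanh(lam / (2.0 * kBT))
    a = np.sum(geom * (np.cos(lam * t) - 1j * np.sin(lam * t) * th))
    b = np.sum(geom * (1j * np.sin(lam * t) - np.cos(lam * t) * th))
    return a**2 - b**2                         # complex <S^z_j(t) S^z_l(0)>

for t in (0.0, 1.0, 2.0, 4.0):
    g = szsz(j=25, l=27, t=t)                  # even-neighbor pair near the chain center
    print(f"t = {t:3.1f}  Re G = {g.real:+.4f}  Im G = {g.imag:+.4f}")
```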
Figure 1 .
Figure 1.Scattering and Van Hove correlations.Finite temperature neutron scattering data for KCuF3 (left column) and their transformation to real-space correlations, with the real G(r, t) (center column) and imaginary G(r, t) (right column).Red indicates ferromagnetic spin correlation, blue indicates antiferromagnetic spin correlation.At low temperatures, the real G(r, t) wavefront at the light cone is antiferromagnetic, and by 200 K it becomes ferromagnetic.Meanwhile, the imaginary G(r, t) is restricted in space and time at higher temperatures, showing loss of quantum coherence.
Figure 2 .
Figure 2. Van Hove time-dependent real-space spin-spin correlation compared to theory with imaginary components.a 6 K KCuF3 scattering, b real component of G(r, t), c imaginary component of G(r, t).Panels df show the same for T = 0 Bethe ansatz, and gi show the same for T = 0 LSWT on a S = 1/2 HAF chain (renormalized by π/2 to match the light cone velocity in the top two panels).The thin green lines on G(r, t) plots show the magnon/spinon velocity.
Figure 3 .
Figure 3. Signal analysis of the Bethe Ansatz.The right column is the Fourier Transform of the left.Panels (a), (c), and (e) show the Bethe Ansatz with everything removed but key features at Q = π or Q = π/2.Panels (b), (d), and (e)show the resulting Fourier transform of these spectra into real space and time.This clearly shows that the oscillations above the light cone are due to the stationary Q = π/2 states, while the light cone is due to the dispersive Q = π state.Panel (g) shows the G(r, t) of the Bethe Ansatz with all correlations above the light cone set to zero.Fourier-transforming this back into S(Q, ω) in panel (h), we find a spinon spectrum with the Q = π/2 stationary states missing-confirming that these are responsible for the oscillating Floquet dynamics.
Figure 4 .
Figure 4. Schematic description of the AFM Van Hove correlations.At low temperatures, (a) a central spinon light cone emanates from r = 0, t = 0.As it reaches each neighboring site, it excites a pair of spinons which creates its own light cone.Odd neighbor sites have opposite spin from r = 0 at t = 0, and thus they create antispinon pairs.For even r, these spinon light cones create constructive interference and continue to flip spins up and down.For odd r, the spinons and antispinons destructively interfere, such that the correlations quickly go to zero.At high temperatures (b), the spin correlations are much weaker, such that the spinon and antispinon light cones emanating from |r| > 0 are weakly coherent with r = 0 and thus their influence is suppressed, leading to oscillations restricted in both space in time as seen in Fig.1.
Figure S1 .
Figure S1.Real space spin correlations for a 1D Heisenberg ferromagnetic S = 1/2 chain at T = 0, simulated with DMRG and LSWT.Simulated neutron spectra are shown on the left, and Van Hove spin correlations (real part) are on the right.In this case, the semiclassical LSWT spin correlations are close to the DMRG quantum calculations, but the DMRG shows more oscillations near r = 0 at long times.
Figure S2 .
Figure S2.Simulated spin correlations for a 1D Ising AFM chain for three different values of anisotropy.Simulated Sxx(Q, ω) neutron spectra are shown on the left column (calculated via perturbation theory as described in Ref. [8]), and Van Hove spin correlations (real part only) are on the right column.Similar to Fig.2in the main text, the odd neighbor sites correlations decay to zero while even neighbor sites oscillate to long times.The "light cone" gets steeper and steeper as the Ising limit is approached.
Figure S4 .
Figure S4. On-site correlation r = 0 for KCuF3 at various temperatures, showing the oscillations decaying. Beyond 4 × 10^-13 s, the results are not reliable and the "ringing" from the low-energy cutoff begins to dominate the signal.
"Physics"
] |
Artificial Endoscopy and Inflammatory Bowel Disease: Welcome to the Future
Artificial intelligence (AI) is assuming an increasingly important and central role in several medical fields. Its application in endoscopy provides a powerful tool supporting human expertise in the detection, characterization, and classification of gastrointestinal lesions. Lately, the potential of AI technology has been emerging in the field of inflammatory bowel disease (IBD), where the current cornerstone is the treat-to-target strategy. A sensitive and specific tool able to overcome human limitations, such as AI, could represent a great ally and guide precision medicine decisions. Here we reviewed the available literature on the endoscopic applications of AI in order to properly describe the current state-of-the-art and identify the research gaps in IBD at the dawn of 2022.
Introduction
Crohn's disease (CD) and ulcerative colitis (UC) are chronic inflammatory bowel diseases (IBD), with increasing incidence all around the world and a great impact on general well-being, social functioning, and utilization of healthcare resources [1,2]. The diagnosis of IBD is a daily challenge for physicians, being based on different elements such as clinical data, biochemical values, radiology, endoscopy, and histology [3]. Among them, endoscopy represents a cornerstone in the diagnosis and follow-up of CD and UC [4,5].
In the last five years, the concept of endoscopy has evolved from a traditional one to a new idea based on artificial intelligence (AI). AI is defined as any machine that has cognitive functions mimicking humans for problem solving or learning [6]. AI has already been tested in several fields of endoscopy, such as in the detection of Barrett's esophagus [7] or the evaluation of adenoma detection rate during colonoscopy [8,9].
Attention has shifted to the potential role of AI in the field of IBD, where endoscopic activity is graded with several scores, such as the Mayo endoscopic subscore (MES), the Ulcerative Colitis Endoscopic Index of Severity (UCEIS), the Crohn's Disease Endoscopic Index of Severity (CDEIS), the Lewis score, and the Capsule Endoscopy Crohn's Disease Activity Index (CECDAI) [10][11][12][13][14]. The reason for this large number of scores lies in the need to establish a strict definition of disease activity, thus reducing interobserver variability and allowing a solid comparative analysis of different patients or studies [15]. In this context, AI could be a great step forward in the pursuit of homogeneity and reproducibility of endoscopic data. This article aims to summarize the literature data on AI endoscopic applications in the field of IBD, underlining the strengths and limitations of the currently available tools at the dawn of 2022.
What Is Artificial Intelligence and Its Current Application in Endoscopy?
AI-assisted endoscopy is based on computer algorithms that perform as human brains do [16]. They react (output) to what they receive as information (input) and what they have learned when built. The fundamental principle of this technology is "machine learning" (ML) [17].
There are many different ML methods (Table 1), and one of the most popular is the use of artificial neural networks (ANN) [18]. An ANN is based on multiple interconnected layers of algorithms, which process data in a specific pattern and feed data forward so that the system can be trained to carry out a specific task [19]. Another widely used ML method is the support vector machine (SVM), which is used for classifying data sets by creating a line or plane to separate data into distinct classes [20]. An evolution of ML is deep learning (DL): a complex, multilayer neural network architecture learns representations of data automatically by transforming the input information into multiple levels of abstraction [21,22]. An evolution of the simpler ANN is the convolutional neural network (CNN), inspired by the response of human visual cortex neurons to specific stimuli and able to convolve the input and pass its result to the next layer [19,23]. Table 1. Algorithms involved in the machine learning process.
Supervised: the algorithm is trained on data labeled with the correct answer.
Semisupervised: the algorithm is trained on a small amount of labeled data combined with a large amount of unlabeled data.
Unsupervised: the algorithm is trained on unlabeled data, without the training data being marked.
Based on this technology, three kinds of tools have been generated to support endoscopy in each part of its activity [24][25][26]:
-Computer-aided detection (CADe), which detects gastrointestinal lesions;
-Computer-aided diagnosis (CADx), which characterizes gastrointestinal lesions;
-Computer-aided monitoring (CADm), which evaluates the procedure and the endoscopist, thus improving the quality of endoscopy.
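As a purely illustrative sketch of the CNN concept described above (not the architecture of any system cited in this review), a toy image classifier might look as follows.

```python
# Toy convolutional network for frame-level classification (illustrative only).
import torch
import torch.nn as nn

class TinyEndoscopyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)     # (batch, 32) feature vector
        return self.classifier(h)           # raw logits per class

model = TinyEndoscopyCNN(n_classes=2)       # e.g., "lesion" vs. "no lesion"
dummy_frame = torch.randn(1, 3, 224, 224)   # one RGB endoscopy frame (placeholder)
print(model(dummy_frame).shape)             # torch.Size([1, 2])
```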
In particular, CADe and CADx are the best-developed systems, with many experiences around the world demonstrating performance superior to the human eye [9,[27][28][29]; for example, the GI-Genius (Medtronic) system reached a sensitivity of 99.7% in polyp detection, as shown by Hassan et al. [27]. The application fields of AI are expanding rapidly, and IBD is the next target of this innovative technology.
AI in the Diagnosis of IBD
One of the first applications of AI has been the attempt to facilitate the diagnosis of IBD and the differential diagnosis between CD and UC. In the model of Mossotto [30], three supervised ML models were developed utilizing endoscopic data only, histological only, and combined endoscopic/histological with an accuracy of 71.0%, 76.9%, and 82.7%, respectively [30]. The model combining endoscopic and histological data was tested on a statistically independent cohort of 48 pediatric patients from the same clinic, with an accuracy of about 83.3% in patients' classification.
Quénéhervé and colleagues [31] designed a model to diagnose IBD and to establish the differential diagnosis between CD and UC. They based their study on confocal laser endomicroscopy (CLE), an adaptation of light microscopy in which focal laser illumination is combined with pinhole-limited detection to geometrically reject out-of-focus light [32]. The authors built a score based on 14 functional and morphological parameters to perform a quantitative analysis of the mucosa, called cryptometry, and established the diagnosis of IBD with a sensitivity and a specificity close to 100%. Moreover, this study reached a sensitivity of 92.3% and a specificity of 91.3% in the differential diagnosis between CD and UC.
Diagnosis of IBD can be a complex and challenging procedure due to its heterogeneous presentation. It is generally believed that making a correct diagnosis requires information on the endoscopic and histological features, together with clinical and biochemical data. AI support may be helpful in the diagnostic process by combining all suggestive features intelligently.
AI in UC, State-of-the-Art
As previously underlined, endoscopy plays a fundamental role in the diagnosis and assessment of IBD activity [5]. According to this concept, endoscopy should guarantee an exact staging of the disease and a high level of concordance between different operators. Indeed, the definition of recurrence or the assessment of remission are cornerstones in the disease management, thus guiding the next clinical or surgical decisions [33,34].
In the study of Ozawa, the authors designed a CAD system using a CNN and evaluated its performance in the identification of normal or inflamed mucosa, using a large dataset of endoscopic images from patients with UC [35]. The performance of this new tool was valuable, with areas under the receiver operating characteristic curves (AUROCs) of 0.86 and 0.98 in the identification of MES 0 (completely normal mucosa) and MES 0-1 (mucosal healing state), respectively [35]. In a similar experience from Stidham et al. [36] a CNN showed an AUROC of 0.96 in distinguishing endoscopic remission (MES = 0 or 1) from moderate to severe disease (MES = 2 or 3), with a good weighted κ agreement between the CNN and the adjudicated reference score for identifying exact MES (κ = 0.84; 95% CI, 0.83-0.86). The application of this CNN to the entirety of the colonoscopy videos had high accuracy in identifying moderate to severe disease with an AUROC of 0.97 [36].
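For readers unfamiliar with the two metrics reported in these studies, the sketch below shows how an AUROC for the binary remission/active split and a quadratically weighted Cohen's kappa for agreement on the exact MES can be computed; the arrays are invented toy values, not data from the cited works.

```python
# Illustrative computation of AUROC and weighted kappa for MES grading (toy data).
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Binary task: remission (MES 0-1, label 0) vs. moderate-severe disease (MES 2-3, label 1)
y_true_binary = [0, 0, 1, 1, 1, 0, 1, 0]
y_score = [0.1, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6, 0.2]   # model's predicted probability of active disease
print("AUROC:", roc_auc_score(y_true_binary, y_score))

# Exact-MES agreement between a central reader and the model (subscores 0-3)
reader_mes = [0, 1, 2, 3, 2, 1, 0, 3]
model_mes = [0, 1, 2, 2, 2, 1, 1, 3]
print("weighted kappa:", cohen_kappa_score(reader_mes, model_mes, weights="quadratic"))
```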
Moreover, Gottlieb and colleagues [37] developed another recurrent neural network able to predict MES and UCEIS from entire endoscopy videos and not only from images. The system automatically selected the frame to be analyzed and scores were calculated on the colon section, showing high agreement with the human central reader score [37]. Similarly, a fully automated video analysis system was developed to assess the grade of UC activity and predicted MES in 78% of videos (κ = 0.84). In external clinical trial videos, reviewers agreed on MES in 82.8% of videos (κ = 0.78) [38]. Automated MES grading of clinical trial videos (often low resolution) correctly distinguished remission (MES = 0 or 1) vs. active disease (MES = 2 or 3) in 83.7% of videos. Not only were automated systems able to assess endoscopic activity from still images [39], but they were also able to predict a binary version of the MES directly analyzing a raw colonoscopy video, resulting in a high level of accuracy (AUC of 0.94 for MES ≥ 1 and 0.85 for MES ≥ 2 and MES ≥ 3) [40]. Looking forward, it seems that AI can also guide real-time therapy decisions in patients with UC in clinical remission by helping to stratify the relapse risk one year after AI-assisted colonoscopy [41].
Other experiences pushed forward the application of AI in the prediction of histology. Indeed, Takenaka and colleagues [42] designed a deep neural network algorithm, defined as DNUC, based on more than 40,000 images from colonoscopies and 6000 biopsies of 875 patients prospectively collected. AI system evaluations were matched with the UCEIS score expressed for each image by three expert endoscopists and with the Geboes score determined by pathologists [43]. The DNUC revealed an accuracy of 90.9% and 92.9% in the detection of endoscopic and histological remission, respectively. In addition, Maeda et al. [44] developed a CADx system to predict persistent histological inflammation using endocytoscopy in 187 retrospectively collected patients. Endocytoscopy is one of the most valuable technologies, although it is not widely available in endoscopic departments. Providing ultra-high-resolution white light images (520×), endocytoscopy allows the so-called virtual histology or optical biopsy [45]. The results obtained by the CAD algorithm were compared with the Geboes score defined by five expert pathologists, blinded to the endoscopic results. The algorithm showed a sensitivity of 74% and a specificity of 97%, with a high level of reproducibility and interobserver agreement (κ value = 1).
Honzawa and colleagues [46] moved the application of AI forward by trying to differentiate between MES 0 and MES 1 in patients with UC in clinical remission. The authors investigated the correlation among the so-called MAGIC score (Mucosal Analysis of Inflammatory Gravity by i-scan TE-c Image), the MES, and the histological Geboes score. Interestingly, the MAGIC score, based on the mean level of inflammation derived from all the pixels, was significantly higher in the MES 1 group than in the MES 0 group (p = 0.0034), with a significant correlation with histology (p = 0.015).
Similar to the color map of the MAGIC score, a validation study [47] elaborated an operator-independent, computer-based tool, named Red Density (RD), that determined disease activity in UC according to a redness map and vascular pattern recognition. The RD score, which differs from the previously described experiences in that it is based on pure physics parameters, significantly correlated with the histological scoring systems (Robarts Histopathology index, r = 0.74) and with the MES and UCEIS endoscopic scores (r = 0.76 and 0.74, respectively). Some weak points of this work are the monocentric experience, the small population (29 patients), and the fact that the analysis was performed only on single pictures and not on the entire colonic segment. However, this study represents an important application of AI, as testified by the high level of performance. Notably, the algorithm structure does not require as much information as a CNN system because the algorithm can be sequentially modulated during development.
Finally, a multicenter study in inactive patients with UC (PRognOstiC valuE of rEd Density in Ulcerative Colitis: PROCEED-UC; NCT04408703) is planned to assess the predictive value of the RD score for sustained clinical remission. It is plausible that the RD score might be used in the future as the first objective operator-independent endoscopic target in a treat-to-target strategy in UC. The main characteristics of the studies on endoscopic AI application in IBD are summarized in Table 2.
AI in CD, State-of-the-Art
In the field of CD, AI has been mostly applied to video capsule technology (Table 3), which has been assuming an important role in both the diagnosis and the assessment of mucosal healing in the small bowel [48]. In the current European Crohn's and Colitis Organisation (ECCO) guidelines, patients suspected to have CD but with a negative endoscopy should undergo a second-level diagnostic method such as magnetic resonance imaging (MRI) or video capsule endoscopy [4]. Moreover, even in cases of normal imaging tests such as MRI, when clinical signs are suspicious for small bowel CD (e.g., elevated calprotectin and/or unexplained iron deficiency anemia), video capsule endoscopy is indicated to exclude small bowel involvement [4]. However, the use of video capsules has some limitations, such as the collection of a huge amount of data and the duration of the analysis [48]. AI may overcome these barriers by selecting the frames or the parts of the video needed for the assessment and reducing the time to diagnosis, thus requiring a limited amount of data to be stored. The first experience was conducted about 10 years ago. Girgis et al. [49] built a system that identified the inflamed regions after SVM training, with an accuracy of 87%, sensitivity of 93%, and specificity of 80%. Two years later, Kumar et al. [50] developed a similar system with a precision of about 90% in detecting CD lesions. Lately, several studies have been conducted for the development of systems able to automatically detect ulcers and/or aphthae and to grade mucosal damage.
A novel filtering process, called hybrid adaptive filtering (HAF), was proposed for efficient extraction of lesion-related characteristics from wireless capsule endoscopy. This system was trained on 800 images collected from 13 different patients and offered high performance in the detection of severe lesions (93.8% accuracy, 95.2% sensitivity, 92.4% specificity, and 92.6% precision) [51]. The group of Klang provided two experiences in this direction [52,53]. The former showed an AUC of 0.99 with an accuracy ranging from 95.4% to 96.7% in classifying images into either normal mucosa or mucosa with ulcers [52]. The latter exhibited a good accuracy of 93.5% (±6.7%) in classifying strictures vs. nonstrictures [53].
A CNN was trained to detect erosions and ulcers, demonstrating performance comparable with that of two expert gastroenterologists, with an AUC of 0.96 for the detection of abnormalities [54]. Interestingly, a consensus reading was used to train another CNN in the automatic grading of images of CD ulcers. The resulting algorithm was tested against capsule readers, showing high accuracy in classifying severe ulcers (0.91 for grade 1 vs. grade 3 ulcers compared to 0.6 for grade 1 vs. 2) [55].
DL methods for autonomous detection and classification of CD lesions have also been applied to the panenteric capsule endoscopy system now available, which allows simultaneous investigation of the small bowel and colon. AI technology has increased the diagnostic yield and reduced interobserver variability in this integrated procedure [56,57].
Not only did AI show a high level of performance, but it also allowed significantly faster reading, with an average time of 3.5 minutes versus 50 minutes for a full capsule endoscopy video [52,58].
Some limitations of these works warrant attention. First, they were based on single images and not on the entire video, so the analysis could not provide an overall evaluation of the validated scores for video capsule endoscopy (e.g., the Lewis score). Moreover, they are retrospective cohort studies based on small samples of patients.
Nevertheless, all these experiences could give a great impulse to capsule endoscopy in CD. Inflammation in the proximal bowel is correlated with a worse prognosis and a higher surgical risk [59]; therefore, a modern method of analysis with high sensitivity and specificity is eagerly awaited in clinical practice [60].
AI for the Detection of Neoplasms in Long-Standing IBD
Given the increased risk for developing colorectal neoplasia, surveillance colonoscopy plays an important role in the management of UC [61]. The gold standard method for dysplasia surveillance is chromoendoscopy, which utilizes indigo carmine or methylene blue to better define the superficial gastrointestinal mucosa [62]. New endoscopic imaging technologies such as virtual chromoendoscopy, autofluorescence imaging, CLE, and endocytoscopy are now emerging, but there are only a few reports about the application of AI-assisted colonoscopy techniques for the early diagnosis of colorectal cancer [5].
The AI capacity has been tested in the detection of colorectal neoplasia (Figure 1) but not specifically in patients with IBD.
The first experience is a case report by Maeda and colleagues [63], in which the Endo-BRAIN eye system was tested for detecting dysplasia in a patient with long-standing UC. This system is able to identify colorectal lesions with high accuracy in the general population [64], but in this case it proved able to support endoscopists in the identification of UC-associated dysplasia, which is not always easy to detect due to its flat appearance and unclear boundaries.
Another example of AI support in the detection of dysplasia was reported by Fukunaga [65]. In this case report, the EndoBRAIN system assisted endocytoscopy in the detection of high-grade dysplasia in a patient with long-standing UC who subsequently underwent endoscopic submucosal dissection. Of note, colitis-associated colorectal cancer may be difficult to diagnose because of the consequences of inflammation on mucosal appearance (Figure 2), and the use of EndoBRAIN could help non-expert endoscopists to identify lesions. These experiences underline the potential future role of AI in the detection of colitis-associated dysplasia and neoplasia during IBD surveillance.
Conclusions and Future Perspectives
AI is a cornerstone revolution in endoscopy. In the field of IBD, its primary applications are providing great results in the diagnosis and staging of the disease. In a field of medicine where the current mantra is the treat-to-target strategy and where treatment decisions are guided by endoscopic remission, a sensitive and specific tool able to overcome human limitations could represent a great ally. High-performing diagnostic aids with low variability are useful for the detection and standardization of results and for the assessment of treatment targets. Moreover, if mucosal healing can be perceived as a realistic target, a concept that pushes this idea further to its extreme is disease clearance. Even though a clear definition is still lacking, this objective includes simultaneous clinical, endoscopic, and histological remission of disease. It follows that the modern algorithms presented in the current review could help in the pursuit of this ambitious goal.
All the reported experiences have improved awareness of the potential strengths and limitations of AI. Most were nonrandomized and retrospective, with small sample sizes. In addition, very few studies have been conducted to test AI support in the detection of dysplasia and neoplasia in patients with IBD. We believe these limitations should be overcome before AI becomes part of real-life practice.
In the context of AI and big data, a future perspective is the creation of algorithms for diagnosis and monitoring of IBD based not only on endoscopic, but also on clinical and histological data in order to have a complete overview of all disease features.
"Medicine",
"Computer Science"
] |
Transcriptome sequencing reveals high-salt diet-induced abnormal liver metabolic pathways in mice
Background Although salt plays an important role in maintaining the normal physiological metabolism of the human body, many abnormalities in the liver caused by a high-salt diet, especially with normal pathological results, are not well characterized. Methods Eight-week-old female C57BL/6 mice were randomly divided into a normal group and a high salt group. These groups were then fed with normal or sodium-rich chow (containing 6% NaCl) for 6 weeks. Liver injury was evaluated, and the influences of a high-salt diet on the liver were analyzed by transcriptome sequencing at the end of week 6. Results We found that although no liver parenchymal injury could be found after high-salt feeding, many metabolic abnormalities had formed based on transcriptome sequencing results. GO and KEGG enrichment analyses of differentially expressed genes revealed that at least 15 enzymatic activities and the metabolism of multiple substances were affected by a high-salt diet. Moreover, a variety of signaling and metabolic pathways, as well as numerous biological functions, were involved in liver dysfunction due to a high-salt diet. This included some known pathways and many novel ones, such as retinol metabolism, linoleic acid metabolism, steroid hormone biosynthesis, and signaling pathways. Conclusions A high-salt diet can induce serious abnormal liver metabolic activities in mice at the transcriptional level, although substantial physical damage may not yet be visible. This study, to our knowledge, was the first to reveal the impact of a high-salt diet on the liver at the omics level, and provides theoretical support for potential clinical risk evaluation, pathogenic mechanisms, and drug design for combating liver dysfunction. This study also provides a serious candidate direction for further research on the physiological impacts of high-salt diets. Supplementary Information The online version contains supplementary material available at 10.1186/s12876-021-01912-4.
Background
As an essential mineral in daily life, salt plays an important role in maintaining the normal physiological metabolism of the human body. Although the World Health Organization recommends that the daily intake of salt should not exceed 5 g, the daily intake of salt is often greater than 10 g based on survey data [1]. This is particularly true in some countries and regions where salt intake is often higher due to specific dietary habits [2], which can lead to some potential disease risks in these populations. Studies have found that a long-term high-salt diet results in potential harm to the body, inducing cardiovascular disease [3,4], insulin resistance and type 2 diabetes [5], metabolic syndrome, obesity, muscle atrophy [6][7][8], and problems with the immune system [9]. Some mechanisms of organ damage caused by high salt intake have been elucidated thus far. For example, animal experiments have revealed that a high-salt diet can cause kidney damage through the reduction of ACE2, enhancement of leukocyte adhesion, blunted renal autoregulation via a reactive oxygen species-dependent mechanism, etc. [10][11][12]. A high-salt diet can also cause cardiovascular abnormalities, and high plasma sodium can cause hypertension by directly affecting endothelial functions, thus controlling vascular tone [13], or by participating in cardiovascular injury through hormonal pathways [14]. In addition, a high-salt diet also has direct or indirect effects on the liver and causes diseases like fibrosis and fatty liver [15,16]. Although many mechanisms of liver damage caused by high-salt diets have been revealed to date, the liver, as the largest metabolic organ, is the site of a variety of important signaling pathways. Whether these are affected by a high-salt diet and are part of the cause of specific diseases is still unclear.
With the development of transcriptome sequencing technology in recent years, it has become feasible to analyze all the mRNAs transcribed by a specific cell or organ in a certain functional state [17][18][19]. At present, transcriptome sequencing has been applied to many medical fields, such as clinical diagnosis, marker screening, prognosis evaluation, and pathogenesis [20][21][22].
In this study, we sequenced the liver transcriptome of normal and high-salt diet mice and explored the effects of a high-salt diet on the liver at the gene expression level. In addition to some abnormal signaling and metabolic pathways that have been reported previously, we also identified many new abnormal metabolic pathways, which provides a strong theoretical basis for understanding the potential clinical risks of a high-salt diet and the diseases and pathogenic mechanisms it correlates with, as well as potential targets for rational drug design to treat high-salt diet-induced liver dysfunction.
Hematoxylin-eosin staining (H&E staining)
H&E staining was performed according to the instructions of a Hematoxylin-Eosin Staining Kit (Solarbio, China, #G1121). Briefly, after dissection, the liver tissues of mice were fixed with a 4% fixative solution, embedded in paraffin, and then sectioned to about 3 μm in thickness. The paraffin sections were washed twice with xylene for 10 min, and then washed sequentially with absolute ethanol, 95% ethanol, 80% ethanol, 70% ethanol, and distilled water for two minutes. Next, sections were stained with hematoxylin for 5 min and washed with water for 8 s. The sections were then incubated in differentiation solution for 5 s and again washed with water for 30 s. After incubation in bluing solution for 1 min, the sections were rinsed with water for 30 s and eosin-stained for 1 min. Next, the sections were washed with water, 80% ethanol, 90% ethanol, 95% ethanol, 95% ethanol, and absolute ethanol for 5 s each, and then washed with absolute ethanol and xylene for 1 min each. Finally, they were mounted with neutral balsam.
Biochemical tests
Blood samples were taken by tail snip on week 6 and serum was collected after incubating at 4 °C overnight. The detection of alanine aminotransferase (ALT), aspartate transaminase (AST), and alkaline phosphatase (ALP) from serum was performed according to the instructions of ALT, AST, and ALP Detection Kits (Condical, Zhejiang, China, #E10001-5, #E10002-5, and #E10003-5), respectively. The data was collected using a Biochemical Autoanalyzer (TBA-40, Toshiba).
RNA extraction
After mice were euthanized, liver tissues were removed and bulk RNA was extracted according to the instructions of a Total RNA Extraction Kit (Solarbio, China, #R1200).
Construction of transcriptome libraries
First, mature mRNAs were isolated from total RNA using oligo (dT) magnetic beads, and these were then randomly fragmented by mixing with fragmentation buffer. Then, first-strand cDNAs were synthesized using mRNAs as templates, and second-strand cDNAs were synthesized by adding polymerase chain reaction (PCR) buffer, dNTPs, RNase H, and DNA polymerase I. After purification, the double-strand cDNAs were harvested with elution buffers for end repair and A-tailing, target fragments were size selected using agarose gel electrophoresis, and final libraries were obtained by PCR amplification. After passing library quality inspection, an Illumina platform was used to carry out high-throughput sequencing.
Sequencing data quality control
First, the data were filtered by removing contaminating sequences, low-quality sequences, and sequences containing more than 5% N bases. Then, the quality distribution and base composition were analyzed and compared to ensure the reliability of subsequent analyses. Finally, the filtered sequences from each sample were aligned to a mouse reference genome.
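As a purely illustrative sketch of the filtering step described above (not the actual pipeline used in the study), the function below discards reads whose proportion of ambiguous N bases exceeds 5% or whose mean Phred quality is low; the file names and the quality threshold are assumptions, since the original protocol does not state them.

```python
# Toy FASTQ filter: drop reads with >5% N or low mean base quality.
def mean_phred(qual_line, offset=33):
    return sum(ord(c) - offset for c in qual_line) / len(qual_line)

def filter_fastq(path_in, path_out, max_n_frac=0.05, min_mean_q=20):
    with open(path_in) as fin, open(path_out, "w") as fout:
        while True:
            record = [fin.readline() for _ in range(4)]   # header, sequence, '+', quality
            if not record[0]:
                break                                      # end of file
            seq, qual = record[1].strip(), record[3].strip()
            if seq.upper().count("N") / len(seq) <= max_n_frac and mean_phred(qual) >= min_mean_q:
                fout.writelines(record)

# filter_fastq("liver_sample.fastq", "liver_sample.clean.fastq")  # hypothetical file names
```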
Statistical analysis
Data is expressed as the mean ± standard deviation, and statistical analyses were completed using GraphPad 8.0. Student's t-test was used to compare the difference between two groups, and differences were considered statistically significant at p < 0.05.
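The following short sketch illustrates the two-group comparison described above; the ALT values are invented placeholders rather than study data, and the stated significance threshold of p < 0.05 is applied.

```python
# Two-group comparison with Student's t-test (toy data).
from scipy import stats

alt_normal = [32.1, 28.4, 35.0, 30.2, 29.8]      # hypothetical ALT values, normal diet (n = 5)
alt_high_salt = [33.5, 31.0, 29.7, 34.2, 30.9]   # hypothetical ALT values, high-salt diet (n = 5)

t, p = stats.ttest_ind(alt_normal, alt_high_salt)
print(f"t = {t:.2f}, p = {p:.3f}:", "significant" if p < 0.05 else "not significant")
```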
Effects of a high-salt diet on liver function in mice
Although there have been many reports on the effects of long-term high-salt diets on the liver, salt, as a common flavoring agent, is ubiquitous in cuisine and may cause this damage to be very chronic and long-term over an average human's lifespan. In order to evaluate whether liver function could be affected by a high-salt (6% NaCl) diet, biochemical indices including ALT, AST, and ALP in blood serum were detected after 6 weeks of feeding mice a high-salt diet. Results showed that there were no significant differences in these biochemical indices between normal and high-salt diet mice (Fig. 1a). Moreover, H&E staining of liver tissue also showed no significant pathological changes after 6 weeks of high-salt feeding (Fig. 1b), although the central vein of the hepatic lobule in the high-salt diet group was enlarged. These results indicated that a high-salt diet, on the surface, did not cause significant physical damage to the liver in this timeframe.
Fig. 1 The effect of a high-salt diet on liver function: (a) ALT, AST, and ALP in blood serum after 6 weeks of feeding (n = 5); ns indicates no significant difference. (b) H&E staining of liver tissue after 6 weeks; the arrows point to the central veins of the hepatic lobule.
Gross changes in RNA expression in liver tissue with a high-salt diet
Although we did not find clear evidence of the effect of a high-salt diet on the liver at the macroscopic level, we hypothesized that the potential effects of this diet on the liver may be subtler. Therefore, we extracted RNA from the livers of mice raised for six weeks with a normal or high-salt diet and used transcriptome sequencing to further analyze the effects of a highsalt diet on liver. After sequencing, 6 Gb of data from each sample was obtained, and after filtering, the highquality sequences obtained from the normal and highsalt groups were on average 46.02 Mb and 44.68 Mb reads, respectively. The matching rates to a reference genome were 97.43% and 97.44%, respectively, and the total number of genes detected was 18,916 and 19,204, respectively (Additional file 1: Table S1). In order to evaluate the overall trend of sample expression more objectively, we converted the fragments from each gene to Fragments Per Kilobase per Million (FPKM) mapped fragments. Since the number of differentially expressed genes only accounts for a small part of the overall number of genes, and a small number of differentially expressed genes had no significant effect on the expression distribution in our samples, the overall distribution of these samples using a box diagram was virtually the same when we compared each group (Fig. 2). These results indicated that our data met requirements in quality and could be used for further analyses.
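For clarity, the sketch below shows how raw fragment counts can be converted to FPKM (fragments per kilobase of transcript per million mapped fragments); the gene names, counts, and transcript lengths are invented examples, not values from this study.

```python
# FPKM = fragments / (transcript length in kb) / (total mapped fragments in millions)
def fpkm(counts, lengths_bp):
    total_fragments = sum(counts.values())
    return {
        gene: counts[gene] / (lengths_bp[gene] / 1e3) / (total_fragments / 1e6)
        for gene in counts
    }

counts = {"cyp4a10": 1500, "cyp17a1": 80, "actb": 52000}       # raw fragment counts (toy)
lengths = {"cyp4a10": 2100, "cyp17a1": 1700, "actb": 1900}     # transcript lengths in bp (toy)
print(fpkm(counts, lengths))
```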
Identification of liver differential genes by GO and KEGG enrichment
During the identification of differentially expressed genes, a fold change ≥ 2 and a padj value (corrected p value) < 0.05 were used as screening criteria, and we found that there were 52 differential genes, including 33 up-regulated genes and 19 down-regulated genes. For example, some up-regulated genes like cyp4a10 and cyp4a14, which are related to the formation of non-alcoholic fatty liver (NAFLD), and rad51b, which is related to DNA repair, and some down-regulated genes like cyp17a1, which is involved in the synthesis of steroid hormones, and cyp2a4, which is involved in the metabolism of many drugs and compounds, made logical sense based on previous literature (Fig. 3).
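The screening criteria stated above can be expressed as a simple filter, as in the illustrative sketch below; the table of genes and statistics is a toy example, and the fold-change cutoff is applied on the log2 scale (|log2FC| ≥ 1 corresponds to a fold change ≥ 2).

```python
# Screening differentially expressed genes: fold change >= 2 and padj < 0.05 (toy table).
import pandas as pd

de = pd.DataFrame({
    "gene":   ["cyp4a10", "cyp4a14", "rad51b", "cyp17a1", "cyp2a4", "gapdh"],
    "log2fc": [2.8, 2.1, 1.3, -1.9, -2.4, 0.1],
    "padj":   [1e-6, 3e-4, 0.01, 2e-3, 5e-5, 0.9],
})

degs = de[(de["log2fc"].abs() >= 1) & (de["padj"] < 0.05)]
up = degs[degs["log2fc"] > 0]
down = degs[degs["log2fc"] < 0]
print(f"{len(degs)} DEGs: {len(up)} up-regulated, {len(down)} down-regulated")
```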
In order to further analyze the function of these differentially expressed genes, Gene ontology (GO) analysis was performed. According to the secondary items in the GO database, the number of differentially expressed genes in these items was counted, and percentages were calculated. The results showed that in the 36 sections of the three main GO categories (biological process, cellular component and molecular function), extracellular protein, macromolecular complex, cell killing, immune process, cell proliferation, cell growth, and cell movement were all significantly up-regulated in the high-salt diet group (Fig. 4). On the basis of the differentially expressed genes in each GO entry, we further analyzed these entries in terms of significantly enriched differentially expressed genes compared with a whole genome background. The results yielded 25 significantly different terms, including 2 terms in the biological process category (Fig. 5a) and 23 terms in the molecular function category (Fig. 5b, Additional file 1: Table S2). Specifically, we found that at least 15 enzymatic activities and the metabolism of multiple substances were correlated with a high-salt diet. The abnormal activities of these enzymes and metabolisms have been associated with many diseases.
Next, Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis showed that the differentially expressed genes induced by a high-salt diet were involved in 33 pathways, including some previously reported pathways, such as the peroxisome proliferator-activated receptor (PPAR) signaling pathway [23,24], the mitogen-activated protein kinase (MAPK) signaling pathway [25], and the prolactin signaling pathway [26]. Others had not yet been reported, such as the phosphatidylinositol 3′-kinase-protein kinase B (PI3K-Akt) signaling pathway, retinol metabolism, and steroid hormone biosynthesis (Table 1). These pathways represent potential risks for some diseases.
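As background on how GO/KEGG enrichment is typically assessed, the sketch below applies a hypergeometric test to one hypothetical pathway: it asks whether the 52 differentially expressed genes contain more members of that pathway than expected from the whole-genome background. The pathway size and overlap are invented numbers, not results from this study.

```python
# Hypergeometric enrichment test for a single pathway (toy numbers).
from scipy.stats import hypergeom

background_genes = 19000   # all genes detected in the liver transcriptome
pathway_genes = 70         # genes annotated to one pathway (e.g., retinol metabolism) - invented
deg_total = 52             # differentially expressed genes reported above
deg_in_pathway = 4         # DEGs that fall in the pathway - invented

# P(X >= deg_in_pathway): chance of drawing at least this many pathway genes
# when deg_total genes are sampled from the background at random
p_value = hypergeom.sf(deg_in_pathway - 1, background_genes, pathway_genes, deg_total)
print(f"enrichment p = {p_value:.3g}")
```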
Discussion
As the largest metabolic organ, the liver plays an important role in the health of the body, regulating the three major nutrients and various molecular metabolic events, and this organ hosts some key signaling pathway nodes. Thus, abnormalities can have potential impacts on human health. We compared the transcriptome changes in the liver of mice fed a high-salt diet by transcriptome sequencing and found that there were many abnormal signaling pathways and metabolic events in the liver after exposure to a high-salt diet. For example, two up-regulated genes, cyp4a10 and cyp4a14, were shown to be involved in the PPAR signaling pathway and related to oxidative stress and lipid peroxidation of fatty acids, causing NAFLD/steatohepatitis (NASH). In steroid hormone biosynthesis pathways, the cyp17a1 gene was down-regulated, and in the cell proliferation and apoptosis regulation pathways, nr4a1 gene expression was down-regulated. Many of the pathways we identified here have been previously reported. In addition, we also found some other potential effects of a high-salt diet, such as on retinol metabolism, ascorbate and aldarate metabolism, and steroid hormone biosynthesis, which have not been reported previously. It has been reported that a high-salt diet can cause liver injury by producing excessive reactive oxygen species (ROS) via Nrf2/Keap1 signaling [16]. Although we did not find significant changes in gene expression related to this signaling pathway in our results, which may have been due to differences in the animals and feeding methods used herein, we found that the expression of nicotinamide adenine dinucleotide phosphate (NADPH) oxidase 4, another ROS-activating protein, was up-regulated. This also indicated the activation of ROS by a high-salt diet. In addition, many studies have found that a high-salt diet was associated with NAFLD [27,28]. The main mechanism of NAFLD formation has been shown to involve insulin resistance, oxidative stress, and lipid peroxidation, etc. [29]. The role of P450 in this aspect has been consistently demonstrated by others. In addition to the cyp4a10 and cyp4a14 genes that may cause NAFLD as mentioned above, we also found high expression of cyp2e1, which can promote oxidative stress and lipid peroxidation and lead to hepatocyte damage [30][31][32], accelerating the progression of NAFLD, as has been reported in rat and clinical patient studies [33,34]. Moreover, the expression of cyp1a2 was shown to be down-regulated in NAFLD rats [35], which was similar to our results in high-salt diet-fed mice. These results suggest that a high-salt diet may promote NAFLD development by affecting the expression of Cyp proteins. In addition, a study also found that the prevalence of NAFLD increased with increasing Na+ intake, implying that the effects of a high-salt diet on liver lipid metabolism and the potential relationship between NAFLD and obesity could be due to Na+ levels [36]. Accordingly, studies have shown that a high-salt intake increases liver osmotic pressure, promotes the expression of the transcription factor TonEBP, and then activates the expression of aldose reductase, promoting the production of endogenous fructose. Therefore, salt may be a potential cause of obesity and metabolic syndrome [15].
Although our results showed that biochemical indices and H&E staining were normal after 6 weeks of a high-salt diet, indicating no obvious liver damage (probably because the period of high-salt feeding was too short for organic damage to form), the sequencing results indicated a potential influence of this diet on the liver, which needs to be further studied.
There is a significant correlation between a high-salt diet and hypertension. A large amount of evidence has shown that salt is the main cause of blood pressure elevation, and a decrease in salt intake reduces blood pressure, thus reducing the diseases related to blood pressure. The central mechanism of hypertension shows that a high-salt diet leads to an increase in the brain's sodium ion content, which activates the sympathetic nerves through the Na(+)-epithelial sodium channel-renin angiotensin aldosterone system-endogenous digitalis-like factor (Na(+)-ENaC-RAAS-EDLF) axis and promotes the formation of hypertension [37]. It has also been found that salt may induce salt-sensitive hypertension by inhibiting the expression of renal enzymes [38]. In addition, high sodium levels can directly promote the proliferation of vascular smooth muscle cells and promote the formation of hypertension [39]. Our results also showed that the expression of the cyp4a gene in the liver was significantly increased, and Cyp4a can hydroxylate arachidonic acid into 20-hydroxyeicosatetraenoic acid and act on blood vessels, which could indirectly participate in the formation of hypertension. This finding also supports a previously established mechanism of a high-salt diet leading to hypertension [40].
Although continuous high-salt feeding lasted only 6 weeks in our study, there were many metabolic pathway abnormalities that showed early hints of liver injury. However, due to the compensatory and self-healing ability of the liver, it is not clear whether these abnormalities could lead to substantial damage in the future. Studies have clearly revealed that a high-salt diet is closely related to hypertension, nonalcoholic fatty liver disease, immune abnormalities, etc., and most of the mechanisms driving this are still unknown. Thus, on the basis of this early-stage research, a longer period of study is needed to reveal the dynamic changes in gene expression induced by a high-salt diet. For example, central vein dilatation was the first positive change observed in this study, and it is involved in a variety of diseases, such as right-sided heart failure. Further exploration of related genes is necessary and could be achieved by using different doses of NaCl or longer feeding durations. In addition, this early exploration of the effects of a high-salt diet on the liver from a macro perspective also suggests that some other points, like oxidative stress and substance metabolism, should be areas of additional research. As is well known, the use of male mice is generally preferred over female mice, as female mice show more pronounced hormonal interference. As our original design intent was to explore the changes (including tissue lesions, liver cell abnormalities, inflammation, metabolic abnormalities, etc.) of the liver from the perspective of transcriptomics, the sex of the mice used was not a main factor. In our results, we found that a high-salt diet had an extensive influence on the liver, especially on liver metabolism. Therefore, this mouse model can be used for further study.
With the continuous development and popularization of omics research technologies (including proteomics, transcriptomics, and metabolomics) over the past few years, people have gained the ability to systematically describe and analyze changes at specific levels of the body as a whole, so as to further explore the pathogenesis of diseases. This has allowed them to identify the potential pathogenic risks and influencing factors for many diseases. Among these technologies, transcriptome sequencing is an important means to study the pathogenesis of many diseases. Since transcription levels are often positively correlated with protein expression levels, transcriptome sequencing can be used to analyze and predict the occurrence and development of many diseases. By comparing and analyzing the differentially expressed genes under different conditions, we can identify changes that may be related to diseases. We can also use this technology to analyze the correlation between various pathways, thus providing clues to the pathogenesis of diseases. This study systematically analyzed the liver after a high-salt diet at the transcriptomic level, and we found many potential risks for diseases, which provide clues to the pathogenesis, prevention, and potential drug targets of liver diseases. These potential pathogenic factors will be explored in future research.
Conclusions
Our study has indicated that a high-salt diet has many potential effects on the liver at the transcriptional level. Although substantial damage has not yet been shown, we found that a high-salt diet can influence at least 15 enzymatic activities and the metabolism of multiple substances. In addition to some pathways consistent with known mechanisms of damage caused by a high-salt diet, we also found that a high-salt diet had an impact on other important pathways in the liver. This study, to our knowledge, was the first to reveal the impact of a high-salt diet on the liver at the omics level and provides theoretical support for potential clinical risk evaluation, pathogenic mechanisms, and drug design for the treatment of liver dysfunction. Furthermore, it points to a promising direction for further study of the impact of high-salt diets.
"Medicine",
"Biology"
] |
Essential role of Pin1 via STAT3 signalling and mitochondria-dependent pathways in restenosis in type 2 diabetes
Type 2 diabetes (T2D) is associated with accelerated restenosis rates after angioplasty. We have previously proved that Pin1 played an important role in vascular smooth muscle cell (VSMC) cycle and apoptosis. But neither the role of Pin1 in restenosis by T2D, nor the molecular mechanism of Pin1 in these processes has been elucidated. A mouse model of T2D was generated by the combination of high-fat diet (HFD) and streptozotocin (STZ) injections. Both Immunohistochemistry and Western blot revealed that Pin1 expression was up-regulated in the arterial wall in T2D mice and in VSMCs in culture conditions mimicking T2D. Next, increased activity of Pin1 was observed in neointimal hyperplasia after arterial injury in T2D mice. Further analysis confirmed that 10% serum of T2D mice and Pin1-forced expression stimulated proliferation, inhibited apoptosis, enhanced cell cycle progression and migration of VSMCs, whereas Pin1 knockdown resulted in the converse effects. We demonstrated that STAT3 signalling and mitochondria-dependent pathways played critical roles in the involvement of Pin1 in cell cycle regulation and apoptosis of VSMCs in T2D. In addition, VEGF expression was stimulated by Pin1, which unveiled part of the mechanism of Pin1 in regulating VSMC migration in T2D. Finally, the administration of juglone via pluronic gel onto injured common femoral artery resulted in a significant inhibition of the neointima/media ratio. Our findings demonstrated the vital effect of Pin1 on the VSMC proliferation, cell cycle progression, apoptosis and migration that underlie neointima formation in T2D and implicated Pin1 as a potential therapeutic target to prevent restenosis in T2D.
Introduction
Peripheral arterial disease (PAD) is a prevalent systemic atherosclerotic disease which can impair a patient's quality of life and even lead to limb loss. Type 2 diabetes is a significant known risk factor for PAD and is increasing in incidence and prevalence [1,2]. Over the past decade, percutaneous revascularization therapies for the treatment of patients with PAD have evolved tremendously. However, the long-term success of this procedure is limited by recurrent stenosis, especially in patients with T2D. This may be partly because of metabolic derangements that cause endothelial dysfunction [3]. It is also suggested that the poor response to interventions in T2D states appears to be associated with increased inflammatory and proliferative activities that may be driven by increased serum levels of insulin and glucose or by other biochemical aberrancies [4]. Whatever the aetiology of this enhanced restenosis in T2D, induced VSMC proliferation and subsequent migration from the media to the intima contribute significantly to the complex pathophysiological events leading to restenosis [5]. This may, in part, suggest insufficient apoptosis in the diseased tissue [6]. Thus, one current strategy to maintain proper vascular function after angioplasty is to inhibit VSMC proliferation by targeting cell cycle regulation and apoptosis, e.g. by drug-eluting stents [7].
A novel post-phosphorylation signalling regulator known as the peptidyl-prolyl cis/trans isomerase Pin1 sits at the crossroads of many signalling pathways controlling cell proliferation and transformation. Pin1 is the only mammalian enzyme known to specifically catalyse the cis-trans isomerization of Ser-Pro or Thr-Pro peptide bonds [8,9]. The effects of the Pin1-induced isomerization on its target proteins are diverse and include altering the stability and localization of the target proteins, as well as modifying their interaction with other proteins [10,11]. Growing evidence has shown that Pin1 is involved in the pathogenesis of certain cancers and protein folding illnesses like Alzheimer's and Parkinson's disease [12]. We have demonstrated previously that endogenous Pin1 plays an important role in the VSMC cell cycle and apoptosis. Specifically, knockdown of Pin1 led to cell cycle arrest in G1 phase and induced apoptosis of VSMCs in vitro. We also provided some insights into the molecular mechanisms behind these processes [13]. Intriguingly, we speculated that STAT3 activation might, in part, account for the inhibited growth and enhanced apoptosis of VSMCs transduced with lentiviral siPin1. This finding is consistent with a prior study showing that STAT3 may serve as a possible substrate of Pin1 [14]. However, the role of Pin1 in the formation of intimal hyperplasia in T2D remains largely unknown, and whether Pin1 interacts with STAT3 in the T2D condition has not been examined. Recently, a study has shed light on the fact that Pin1 induction during neointimal formation may be associated with ROS-mediated VSMC proliferation via down-regulation of Nrf2/ARE-dependent HO-1 expression [15]. Although two lines of evidence have linked Pin1 to VSMC proliferation and apoptosis, the function of Pin1 in restenosis in T2D seems to be more complex. In light of these few but important studies, we were interested in investigating the novel role of Pin1 in VSMCs during the development of restenosis in T2D, where anti-Pin1 therapies may be of value.
In the present study, we sought to determine whether Pin1 was responsible for the abnormal VSMC cell cycle and apoptosis as well as injury-induced neointimal growth in T2D and, if so, to explore the underlying mechanism. Our results provided the first evidence that Pin1 affected the cell cycle and apoptosis through STAT3 signalling and mitochondria-dependent pathways in VSMCs in the T2D condition.
Generation of T2D model
Six-month-old C57BL/6N male mice were purchased from Shanghai SLAC Laboratory Animal Co. Ltd and placed on the HFD (D12492; 60% fat, 20% protein and 20% carbohydrate; 5.24 kcal/g). After 3 weeks of HFD feeding, the mice were injected three times on consecutive days with low-dose STZ (intraperitoneal at 40 mg/kg) to induce partial insulin deficiency. Three weeks after STZ injection, the majority of HFD/STZ-treated mice displayed hyperglycaemia, insulin resistance and glucose intolerance [16]. The normal diet-fed mice were used as non-diabetic controls. All procedures for the use of animals were performed according to the institutional ethical guidelines on animal care, Renji Hospital, Shanghai Jiaotong University, College of Medicine.
Arterial injury models
Each mouse was anaesthetized by intraperitoneal injection of 50 mg/kg of pentobarbital diluted in 0.9% sodium chloride solution. Guidewire injury of the common femoral artery was performed by three passages of a 0.014-inch guidewire (Radius X-TRa Support PTCA GUIDEWIRE; Radius Medical Technologies, Maynard, MA, USA) [17]. Control sham-operated arteries underwent dissection and temporary clamping without passage of the wire. One hundred microlitres of 20% pluronic gel (Sigma-Aldrich, St. Louis, MO, USA) with or without 300 μg juglone were then applied to the exposed adventitial surface of the common femoral artery right after guidewire injury. Surgery was carried out using a dissecting microscope. Two weeks after guidewire injury, each mouse was cardiac-perfused with 0.9% NaCl solution, followed by perfusion fixation with 4% paraformaldehyde in PBS (pH 7.4) after anaesthesia. The common femoral artery was carefully excised, fixed in 4% paraformaldehyde overnight at 4°C and embedded in paraffin.
Isolation and culture of VSMCs
Aortas were harvested from C57BL/6N male mice. Adventitia was dissected from media, which was cultured to yield VSMCs, whose phenotype was confirmed by typical morphology and immunohistochemical staining for SMaA. Seven separate isolates of VSMCs were obtained and characterized, derived from a pool of aortas from three individual mice per each genotype. Vascular smooth muscle cells were grown in DMEM containing 10% foetal bovine serum (FBS), and cells between passages 2 and 5 were used for experiments. To mimic a T2D state, post-confluent primary cultured cells were starved for 24 hrs. Then the starvation medium was replaced for 72 hrs with medium containing 10% serum isolated from HFD/STZ-induced T2D mice. Vascular smooth muscle cells grown in medium containing 10% serum isolated from non-diabetic mice for 72 hrs served as control.
Construction of expression vectors and transfection
The cDNA of the full sequence of Pin1, which was purchased from Guangzhou GeneCopoeia Co., Ltd, were subcloned into the GV166 to obtain GV166-Pin1 (Pin1-overexpressing) construct (GV166:Ubi-MCS-3FLAG-IRES-puromycin, was purchased from Shanghai Genechem Co., Ltd.). GV166-null was used as a control virus. For shRNA sequences, the oligonucleotide was designed from murine pin1 (Genbank accession NM 023371) containing a sense strand of 21 nucleotide sequences followed by a short space (TTCAAGAGA), the reverse complement of the sense strand and six thymidines as a RNA polymerase III transcriptional stop signal. Forward and reverse oligos for Pin1-shRNA (ORF region: 70) were forward: (CTCGAGGGGTGTACTACTTCAATCACATTCAAGAGATGT GATTGAAGTAGTACACCCTTTTTTGAATTC), reverse (GAATTCAAAAAAGG GTGTACTACTTCAATCACATCTCTTGAATGTGATTGAAGTAGTACACCCCTCG AG). The oligonucleotide pairs were designed to contain terminal EcoRI and Xhol restriction sites, and were subcloned into the vector after annealing to generate GV112-Pin1 shRNA vector (GV112:hU6-MCS-CMV-Puromycin, was purchased from Genechem). Lentiviral particles were produced and transfected into cultured VSMCs as described previously [18]. Transfected cells were used for the subsequent experiments 72 hrs after transfection. Both mRNA and protein levels of Pin1 in Pin1-knockdown and Pin1-overexpressing VSMCs were confirmed by real-time PCR, Western blot and immunofluorescence.
Immunofluorescent staining
The VSMCs were fixed in 4% paraformaldehyde and permeabilized with 0.1% Triton X-100 at room temperature for 20 min. Thereafter, cells were incubated with anti-α-actin and anti-Pin1 antibodies (Abcam, Cambridge, UK) and further stained with appropriate TRITC- or FITC-conjugated secondary antibodies (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA). Nuclei were counterstained with DAPI. Confocal microscopy was performed with a Confocal Laser Scanning Microscope System (Leica Microsystems, Wetzlar, Germany).
RNA isolation and real-time RT-PCR
Total RNA was isolated from VSMCs using TRIzol Reagent (Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. Real-time PCR was performed using a standard TaqMan PCR kit protocol in an Applied Biosystems GraphPad PRISM 4.0 Sequence Detection System using the following PCR primers: Pin1 (sense, 5′-GGAGAGGAAGACTTTGAATCTCTGG-3′; antisense, 5′-TGGTTTCTGCATCTGACCTCTG-3′). All data were normalized to β-actin as an internal control.
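Normalization to β-actin is commonly carried out with the 2^(−ΔΔCt) method; the sketch below illustrates that calculation with invented Ct values and is not a claim about the exact analysis performed in this study.

```python
# Relative quantification by 2^(-ΔΔCt), normalized to an internal control gene.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_sample = ct_target - ct_ref            # normalize sample to reference gene
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl # normalize control to reference gene
    ddct = delta_ct_sample - delta_ct_control
    return 2 ** (-ddct)                             # fold change relative to control

# Hypothetical Ct values: Pin1 and beta-actin in a T2D-serum sample vs. control serum
fold = relative_expression(ct_target=24.1, ct_ref=17.0, ct_target_ctrl=26.0, ct_ref_ctrl=17.2)
print(f"Pin1 fold change vs. control: {fold:.2f}")
```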
Cell migration assay
Vascular smooth muscle cells were plated at an initial density of 1 × 10^5 cells/ml to form a monolayer. Then, cells were wounded by scraping with a pipette tip to make a gap in the cell monolayer. The images of cell migration were observed at post-scratching hours 0 (immediately after scratching) and 36, and photographed at five marked locations on each dish using a phase-contrast microscope. The number of migrated cells was counted and averaged. All experiments were carried out in triplicate and repeated at least six times.
Pin1 activity assay
Microdissected arteries (pool of five mice) were placed on ice in a reaction buffer containing 100 mM NaCl, 50 mM HEPES, pH 7, 2 mM DTT and 0.04 mg/ml BSA. The arteries were homogenized using a mini-bead beater (Biospec Products, Inc., Bartlesville, OK, USA) and the supernatant cleared by centrifugation at 12,000 × g for 10 min (4°C). Pin1 activity was measured using equal amounts of artery cytoplasmic lysates and α-chymotrypsin with the synthetic tetrapeptide substrate Suc-Ala-Glu-Pro-Phe-pNA (Peptides International, Louisville, KY, USA). Absorption at 390 nm was measured using an Ultrospec 2000 spectrophotometer. The results were expressed as the mean of three measurements from a single experiment and were representative of three independent experiments.
Cell proliferation assays
Cell proliferation was determined by two methods. Vascular smooth muscle cells (5 × 10^3 cells/well) were seeded in 96-well culture plates. MTT reagent was added to each well, and the absorbance of the formazan from each sample was measured at a test wavelength of 570 nm at the indicated time-points. Vascular smooth muscle cells plated in 12-well plates (4 × 10^4 cells/well) were then incubated in complete medium containing tritiated [3H]-thymidine (1 μCi/ml). Tritiated [3H]-thymidine incorporated into trichloroacetic acid-precipitated DNA was measured with a liquid scintillation counter. Each experiment was repeated six times.
Flow cytometry
Cells were harvested and washed twice with FACS buffer (PBS/1% FCA/0.0025% sodium azide). The pellet was resuspended in 500 ml FACS buffer, and 5 ml of cold ethanol was added. After incubation at 4°C overnight, the ethanol was removed, the pellet was resuspended in FACS buffer and PI solution (500 mg/ml) was added. Then DNA content analysis was performed with a FACScan. Results were expressed as percentage of cells in each cell cycle phase. Cell apoptosis and necrosis were further confirmed by annexin V-PI dual staining flow cytometry. Cells were collected, washed with PBS and suspended in 400 ml binding buffer (10 mM HEPES, 140 mM NaCl, 2.5 mM CaCl2, 0.1% BSA). Annexin V-FITC (5 ml) and PI (5 ml) were then added into each sample. After 30-min. incubation in the dark, cells were analysed by FACScan using Cell Quest Research Software (Becton Dickinson, San Jose, CA, USA).
Preparation of cytosolic extract
The cytochrome c apoptosis assay kit (Biovision, Mountain View, CA, USA) was used for preparation of cytosolic extracts in this experiment. Vascular smooth muscle cells were homogenized with the cytosol extraction buffer provided in the kit and then centrifuged at 700 × g for 10 min at 4°C to remove debris. The supernatant was then centrifuged at 10,000 × g for 30 min at 4°C and stored at −80°C in preparation for Western blot.
Tissue microarray
The tissue microarray was constructed using a manual tissue arrayer (Beecher Instruments, Silver Spring, MD, USA). Tissue cores were 2 mm in diameter, and their length ranged from 4 to 6 mm. The tissue microarray sections (4 μm) were deparaffinized in xylene and rehydrated using a graded series of ethanol. The streptavidin-biotin-horseradish peroxidase method was used, and the expression of Pin1 was examined with the primary antibody (Pin1, dilution 1:100). Protein expression of Pin1 was quantified based on the extent of staining (percentage of positive VSMCs).
Morphometric analysis
Common femoral arteries harvested at 14 days after wire injury were examined histologically for evidence of neointimal hyperplasia using routine haematoxylin and eosin staining. Digital images were collected with light microscopy using an Olympus BHT microscope (Melville, NY, USA) with a 40× objective. Six evenly spaced sections through each common femoral artery were morphometrically analysed. Intimal area (I) and medial area (M) were measured (arbitrary units) using ImageJ software (National Institutes of Health, Bethesda, MD, USA).
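The morphometric readout described above reduces to an intima-to-media area ratio per section, summarized over the six sections per artery; the sketch below illustrates that calculation with invented area values (arbitrary units, e.g., as exported from ImageJ).

```python
# Neointima/media (I/M) ratio from per-section area measurements (toy values).
import statistics

sections = [
    {"intima": 120.0, "media": 310.0},
    {"intima": 135.0, "media": 298.0},
    {"intima": 110.0, "media": 305.0},
    {"intima": 128.0, "media": 315.0},
    {"intima": 122.0, "media": 300.0},
    {"intima": 131.0, "media": 308.0},
]

ratios = [s["intima"] / s["media"] for s in sections]
print(f"neointima/media ratio: {statistics.mean(ratios):.3f} ± {statistics.stdev(ratios):.3f}")
```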
Data analysis
Data are presented as mean ± SEM. ANOVA and the paired or unpaired t-test were used for statistical analysis as appropriate. P < 0.05 was considered statistically significant.
Expression levels of Pin1 in vascular tissues
We initially determined whether Pin1 was expressed in the vascular wall of HFD/STZ-induced T2D mice. Immunohistochemistry revealed that the expression of Pin1 was clearly observed in the vascular wall of HFD/STZ-induced T2D mice. In contrast, the expression of Pin1 was nearly undetectable in the vascular wall of normal mice (Fig. 1A). Increased expression of pin1 in common femoral arteries of T2D mice was confirmed by Western blot (Fig. 1B and C). This finding let us examine whether Pin1 was associated with intimal hyperplasia in the mouse injury model of T2D. Mild to moderate intimal hyperplasia developed in the mouse common femoral arteries at 7 days in response to wire injury and became more severe at 14 days. So we chose day 14 as our time-point for further evaluation. The expression of Pin1 was markedly increased in the neointima after wire injury in T2D mice ( Fig. 2A-C). We then analysed Pin1 enzymatic activity in artery extracts from the sham-operated and wire-injured T2D mice. We found that Pin1 activity was significantly increased in common femoral artery cytoplasmic lysates of the wire-injured T2D mice compared with that of the sham control (Fig. 2D). This suggested that pin1 might be involved in the development of neointimal lesions after wire injury in T2D mice.
Serum isolated from T2D mice up-regulates Pin1 expression in VSMCs
Given that Pin1 was up-regulated in the vascular wall of HFD/STZ-induced T2D mice, the question arose as to how the expression of Pin1 is regulated under T2D-like conditions in vitro. Culture in 10% serum isolated from HFD/STZ-induced T2D mice was used to mimic the T2D condition. As shown in Figure 3, Pin1 was increased in VSMCs cultured in 10% serum isolated from HFD/STZ-induced T2D mice. Pin1 protein was found in both the cytoplasm and the nucleus, with the nuclear staining being more condensed.
Effective overexpression or knockdown of Pin1 in VSMCs
To better understand the role of Pin1 in stenosis in T2D, we transfected VSMCs with either GV166-Pin1 or GV166-null to investigate its effect. RT-PCR revealed a dramatic increase in Pin1 mRNA in GV166-Pin1-transfected cells, but not in GV166-null-transfected cells (Fig. 3A). Western blot demonstrated that Pin1 protein expression was remarkably higher in VSMCs transfected with GV166-Pin1 than in the control (Fig. 3B and C). Under the fluorescence microscope (Fig. 3D), the fluorescent staining was stronger in intensity and more numerous in cells transfected with GV166-Pin1. Together, these data demonstrated that GV166-Pin1 effectively overexpressed Pin1 in VSMCs. Likewise, the efficiency of the lentiviral shRNA for Pin1 was confirmed by real-time RT-PCR, Western blot and fluorescent staining. Endogenous Pin1 expression in VSMCs was markedly suppressed (Fig. 3), indicating that Pin1 was knocked down effectively.
Role of Pin1 in VSMC growth, apoptosis and migration
As demonstrated in Figure 4, 10% serum from HFD/STZ-induced T2D mice promoted proliferation and inhibited apoptosis of VSMCs. Forced expression of Pin1 resulted in faster cell proliferation rates compared with cells transfected with the control. In contrast, Pin1 knockdown by lentivirus-mediated shRNA transfer negatively impacted the proliferation of VSMCs (Fig. 4A). Using [3H]-thymidine incorporation in VSMCs, a similar proliferation status was observed (Fig. 4B). Pin1-forced expression displayed fewer cells in G0/G1 phase and relatively high percentages of cells in S and G2/M phases. Conversely, lenti-shPin1 impaired VSMC cell cycle progression, increasing the percentage of cells in G0/G1 and reducing the percentage of cells in S and G2/M phases (Fig. 4C and D). It was noteworthy that early apoptotic cells were elevated significantly following lenti-shPin1 transfection (Fig. 4E and F). In accord, apoptosis was blocked or induced by overexpressing or silencing Pin1, respectively, as demonstrated by TUNEL assay (Fig. 4G and H). In the in vitro scratch assay, 10% serum from HFD/STZ-induced T2D mice enhanced VSMC migration. In addition, Pin1-overexpressing cells revealed significant migration advantages compared with control cells, whereas knockdown of Pin1 reduced migratory capacity (Fig. 4I and J).
STAT3 activation is mediated by Pin1
To elucidate whether STAT3 activation was regulated by Pin1 in the T2D condition, we examined the protein levels of STAT3 as well as phosphorylation of STAT3 at Tyr705 and Ser727 in VSMCs by Western blot analysis. As shown in Figure 5A and B, the levels of STAT3, the JAK-phosphorylated form of STAT3 (pTyr705-STAT3) and the PKC-phosphorylated form of STAT3 (pSer727-STAT3) were enhanced in VSMCs cultured in 10% serum from HFD/STZ-induced T2D mice. Overexpression of Pin1 markedly increased STAT3, pTyr705-STAT3 and pSer727-STAT3 levels, whereas knockdown of Pin1 resulted in a significant decrease in STAT3, pTyr705-STAT3 and pSer727-STAT3 levels. As shown in Figure 5C and D, the level of STAT1 and phosphorylation of STAT1 at both Ser727 and Tyr701 were elevated in VSMCs cultured in 10% serum from T2D mice. Nevertheless, no significant change in the levels of STAT1, pTyr701-STAT1 or pSer727-STAT1 was observed in Pin1-overexpressing or -silenced VSMCs.
Pin1 is associated with the modulation of some downstream STAT3 targets
We further examined the expression of various apoptosis and cell cycle regulatory proteins known to be downstream targets of the STAT3 pathway. Ten percent serum from HFD/STZ-induced T2D mice induced down-regulation of p16ink4a, p21waf1/cip1 and p27kip1 in VSMCs. However, no change was seen at the protein level for survivin. Consistent with this finding, we found that the expression of p16ink4a, p21waf1/cip1 and p27kip1 was significantly attenuated by Pin1 overexpression, but increased by Pin1 knockdown. Nevertheless, the knockdown or overexpression of Pin1 did not cause a remarkable change in survivin level (Fig. 5E and F).

Fig. 4 Effects of Pin1 on proliferation, cell cycle progression, apoptosis and migration in vascular smooth muscle cells (VSMCs). VSMCs were cultured and infected as described in Figure 3. Cell proliferation was assessed by both the MTT assay (A) and [3H]-thymidine incorporation analyses (B). Cell cycle distribution was quantified by PI staining followed by flow cytometry; M1, M2, M3 and M4 indicate sub-G1, G0/G1, S and G2/M phases, respectively (C). Bar graphs represent the mean ± SEM of three independent experiments (D). Cell apoptosis was detected by annexin V-FITC/PI staining followed by flow cytometry analysis (E) and by TUNEL staining (G). The proportions of live cells (annexin V−/PI−), early apoptotic cells (annexin V+/PI−), late apoptotic/necrotic cells (annexin V+/PI+) and dead cells (annexin V−/PI+) were calculated for comparison (F). TUNEL-positive cells were stained green, VSMCs were stained for smooth muscle α-actin and shown in red, and nuclei were stained blue with DAPI. The percentages of apoptotic nuclei were calculated by determining the number of DAPI-stained nuclei that were also positive for TUNEL staining; approximately 100 nuclei were counted in randomly chosen fields per region (H). VSMC migration was determined by a standard wound healing assay (I). Bar graphs represent the mean ± SEM of three independent experiments (J). *P < 0.05 versus CON; §P < 0.05 versus GV166-null; #P < 0.05 versus GV122-null. CON, 10% serum from normal mice; DB, 10% serum from type 2 diabetes mice.
Pin1 suppresses VSMC apoptosis in association with cytochrome c release and caspase activation
To explore whether Pin1 suppressed VSMC apoptosis through a caspase-dependent pathway in the T2D condition, we examined the activation of caspase-3 and -9 by Western blot analysis. Mitochondria-mediated activation of caspase-3 and -9 is known to involve the release of cytochrome c, so the level of cytochrome c in the cytosol was also tested by Western blot. As expected, our results indicated that the activation of caspase-3 and -9 was inhibited in VSMCs cultured in 10% serum from HFD/STZ-induced T2D mice. The treatment with 10% serum from HFD/STZ-induced T2D mice also decreased the cytoplasmic level of cytochrome c. Furthermore, the levels of the active forms of caspase-3 and -9 were markedly inhibited in Pin1-overexpressing VSMCs and strongly stimulated in VSMCs silenced for Pin1. Overexpression of Pin1 blocked the release of cytochrome c, while knockdown of Pin1 triggered up-regulation of cytosolic cytochrome c (Fig. 6A-D).

Fig. 6 The caspase- and mitochondria-dependent pathways are involved in Pin1-mediated cell growth and apoptosis. Vascular smooth muscle cells (VSMCs) were cultured and infected as described in Figure 3. The levels of procaspase-3, procaspase-9 and the large subunits of the cleaved forms of caspase-3 and caspase-9 were quantified by Western blot analysis (A). In parallel, the levels of cytosolic cytochrome c in VSMCs were determined by Western blot analysis (C). The expression of Bax and Bcl-2 was also examined by Western blot analysis (E). Tubulin served as an internal control. Bar graphs show relative densitometric values of the Western blots (B, D and F). Data represent mean ± SEM of three independent experiments. *P < 0.05 versus CON; §P < 0.05 versus GV166-null; #P < 0.05 versus GV122-null. CON, 10% serum from normal mice; DB, 10% serum from type 2 diabetes mice.
Pin1 represses VSMC apoptosis in association with a reduced Bax/Bcl-2 ratio
To delineate the signalling events involved in the repression of VSMC apoptosis by Pin1 in the T2D condition, we examined the potential role of Pin1 in regulating the expression of Bax and Bcl-2. In VSMCs cultured in 10% serum from HFD/STZ-induced T2D mice, expression of the anti-apoptotic protein Bcl-2 was increased, whereas the abundance of the pro-apoptotic protein Bax was decreased. Moreover, the Bax/Bcl-2 ratio was dramatically decreased in VSMCs overexpressing Pin1 but elevated in VSMCs silenced for Pin1 (Fig. 6E and F). This further substantiated our conclusion that Pin1 inhibits VSMC apoptosis via the mitochondrial apoptotic signalling pathway.
Induction of VEGF but not MMP2 and MMP9 by Pin1
To better understand the mechanisms of Pin1 in VSMC migration in the T2D condition, and to identify downstream events of STAT3 signalling involved in the regulation of cell migration, we examined the expression of various migration regulatory proteins by Western blot analysis (Fig. 7). Our results indicated that the expression of VEGF was elevated in VSMCs cultured in 10% serum from HFD/STZ-induced T2D mice. Enforced expression of Pin1 increased VEGF, while depletion of Pin1 enhanced the degradation of VEGF in VSMCs. However, no significant change in MMP9 secretion was seen in VSMCs treated with either GV166-Pin1 or GV112-Pin1 shRNA. Of note, diminishing Pin1 expression affected the MMP2 level; however, 10% serum from T2D mice and Pin1 overexpression appeared to have no effect on the MMP2 level.
Pin1 deficiency decreases injury-induced arterial neointima formation
On the basis of our observation that Pin1 was up-regulated after vascular injury in the HFD/STZ-induced mouse model of T2D, and given the inducing effect of Pin1 deficiency on VSMC apoptosis in vitro, we hypothesized that juglone, a Pin1 inhibitor, would suppress neointimal formation induced by wire injury in T2D in vivo. When juglone was applied via pluronic gel onto the injured common femoral artery, neointima formation was significantly reduced compared with the plain gel-treated control (Fig. 8A). With computerized image analysis, the areas of the neointima and media layers were calculated for each section. The present study demonstrated a 67% reduction in the intima/media (I/M) area ratio with juglone as compared with the control (Fig. 8B). Next, a TUNEL assay was performed to determine the level of cellular apoptosis in the neointima and media. Arterial walls in the sham control contained few TUNEL-stained nuclei. More apoptotic cells appeared in the juglone treatment group than in the plain gel treatment group (Fig. 8C). As shown in Figure 8D, the apoptosis rates in both the intima and the media were significantly increased in the juglone treatment group compared with the plain gel treatment group (P < 0.05).
Discussion
Although a huge amount of data has accumulated so far, attention has been devoted mainly to the roles Pin1 plays in ageing, cancer and Alzheimer's disease [12]. Neither the role of Pin1 in hyperproliferative vascular disorders, such as arteriosclerosis and restenosis, nor the molecular mechanism of Pin1 function in these diseases was clear. Very recently, Pin1 induction was described in neointimal formation, which was significantly suppressed after intraperitoneal injection of juglone [15]. This observation is in agreement with our previous report that down-regulation of Pin1 in VSMCs induced cell cycle arrest and apoptosis in vitro. These pieces of evidence suggested that Pin1 might be critical in the pathological process of restenosis. Strikingly, our present results showed that 10% serum from T2D mice stimulated proliferation, inhibited apoptosis, enhanced cell cycle progression and promoted migration of VSMCs, coinciding with Pin1 up-regulation. To clarify the mechanism of Pin1 during neointimal hyperplasia in the T2D condition, lentiviral strategies for overexpression or knockdown of Pin1 were used. We then asked whether Pin1 played a key role in restenosis in T2D mice. Surprisingly, wire injury induced neointimal hyperplasia in the mouse model of T2D with activation of Pin1. We succeeded in attenuating neointimal hyperplasia in the mouse model of T2D by gently spreading 20% pluronic gel containing juglone around the outside of the common femoral arteries, which led to apoptosis of VSMCs in the arterial walls.
A number of clinical findings suggest that Pin1 has a critical role in the genesis of many human malignancies. It has also been reported that Pin1 may act as an initial signal that subsequently exaggerates proliferation of several human cell lines [12,19]. As in humans, Pin1 has been established to induce proliferation and tumorigenicity of some mouse cells both in vitro and in vivo [20,21]. In the present study, we generated a non-genetic mouse model of T2D by combining HFD with three 40 mg/kg STZ injections, given that this mouse model closely mimics the metabolic profile that characterizes T2D in human beings [16]. Analysis of the common femoral arteries of T2D mice revealed high levels of Pin1 relative to healthy controls. Insulin has been reported to induce proliferation of VSMCs of both human and non-human origin [22]. It is known that insulin-resistant patients, such as those with obesity and T2D at the beginning of the natural history of diabetes, show increased levels of circulating insulin [23]. In addition, high glucose levels can undoubtedly induce VSMC proliferation. Conceivably, these findings might explain why VSMCs cultured in 10% serum from T2D mice reflected an imbalance between growth and apoptosis.
It was reported that the regulation of STAT3 activity might be involved in the pathogenic mechanisms of T2D. Hepatic STAT3 signalling has been identified as essential for normal glucose homoeostasis, and disrupting this signalling pathway could contribute to the onset and progression of diabetes [24,25]. Jeong et al. [26] recently identified a Ser727 phosphorylation-dependent and Tyr705 phosphorylation-independent STAT3 activation mechanism in the modulation of insulin signalling.

Fig. 8 Topical use of juglone inhibits neointimal hyperplasia following arterial injury in a mouse model of type 2 diabetes (T2D). Wire-mediated vascular injury was produced in T2D mice. The common femoral arteries were excised at 14 days after injury. Sample sections were stained with haematoxylin and eosin and neointimal formation was evaluated; representative photographs of haematoxylin and eosin staining are shown, magnification ×40 (A). Bar graphs show the I/M ratio (B) quantified by Image-Pro Plus 6.0 software. Immunohistochemical analysis of arterial sections for apoptosis by TUNEL assay is shown (C). TUNEL-positive cells were stained green, smooth muscle alpha-actin was stained red and nuclei were stained blue by DAPI. Quantification of the number of apoptotic vascular smooth muscle cells (VSMCs) in the medial or intimal layer of the common femoral artery is shown (D). DAPI-stained nuclei that were also positive for TUNEL staining were counted from randomly selected images and expressed as the percentage of TUNEL-positive cells relative to total cells. Significantly more apoptotic VSMCs were found in the juglone-treated samples than in the plain gel treatment control. Data are expressed as mean ± SEM. Wire-injury: injury alone; + juglone: wire injury plus topical application of juglone. *P < 0.05 versus sham; §P < 0.05 versus wire-injury.
Derek et al. [27] reported that knockdown of hepatic SirT1 increased STAT3 acetylation and STAT3 phosphorylation (Y705), thereby decreasing endogenous and insulin-stimulated glucose production and reducing fasting hyperglycaemia in a rat model of T2DM. Jie et al. [28] demonstrated that treatment with a high concentration of insulin resulted in a reduction of total and phosphorylated STAT3 protein. Interestingly, we showed that in the T2D condition STAT3 was a downstream effector of Pin1 and that STAT3 expression could be positively mediated by Pin1, either directly or indirectly. In general, Tyr705 phosphorylation, typically by the Janus kinases, is involved in STAT3 dimerization and activation, whereas Ser727 phosphorylation is believed to modulate STAT3 activity. A previous report indicated that Pin1 up-regulates STAT3 transcriptional activity via the Ser727 residue of STAT3 [15]. Nevertheless, our present results showed that, in addition to the phosphorylation of STAT3 at Ser727, phosphorylation of STAT3 at Tyr705 was also stimulated by Pin1 up-regulation. The discrepancy between the previous report and ours may be due to cell-type specificity of Pin1 action. STAT1, which shares the highest homology with STAT3 among the STAT family members [29], has been implicated in cell growth deregulation and disturbed immune function, i.e. disorders that are pertinent to malignancy [30]. Therefore, in addition to STAT3, the levels of STAT1 and phosphorylated STAT1 (phosphorylation at residues Tyr701 and Ser727) were also subjected to Western blot analysis to determine specificity. It has been suggested that high glucose can cause the activation of STAT1 [31,32]. Consistently, we found that exposure of VSMCs to 10% serum from T2D mice increased STAT1 and phosphorylated STAT1. However, forced expression or silencing of Pin1 did not alter the expression of STAT1 or phosphorylated STAT1, suggesting that STAT3, but not STAT1, is mediated by Pin1.
As described in numerous studies, STAT3 activation is implicated in modulating the activity of downstream mediators and hence plays a key role in cell survival, proliferation and differentiation. Siggins et al. [33] reported that alcohol enhanced G-CSF-associated STAT3-p27kip1 signalling, which impaired granulopoietic progenitor cell proliferation by inducing cell cycle arrest and facilitating their terminal differentiation during the granulopoietic response to pulmonary infection. Zhang et al. [34] confirmed that both p16ink4a and p21waf1/cip1 were significantly induced by AR-42; this, together with a decrease in STAT3 activation, resulted in G1 and G2 cell cycle arrest. A recent report indicated that STAT3 activation was critical for VZV, a non-oncogenic herpesvirus, via a survivin-dependent mechanism [35]. In this regard, we evaluated these essential downstream targets of STAT3. Our present data showed that STAT3, regulated by Pin1, modulated p27kip1, p16ink4a and p21waf1/cip1 expression, but had no obvious effect on the expression of survivin.
To further explore the molecular mechanism by which Pin1 affects cell apoptosis, we examined a possible link to a mitochondria-mediated, caspase-dependent apoptotic pathway. The intrinsic apoptosis pathway is regulated by Bcl-2 family members: Bcl-2 is one of the pro-survival proteins, while Bax is a pro-apoptotic protein [36].
Dysregulation of these proteins causes the release of cytochrome c from the mitochondria to the cytosol, thereby activating caspase-9, prompting the activity of caspase-3 and resulting in apoptosis of cells [37]. As reported, treatment of HK-2 cells with high glucose and angiotensin II increased the protein-protein association of p-p66Shc with Pin1 in the cytosol and with cytochrome c in the mitochondria [38]. A recent report demonstrated that blockade of Pin1 led to cleavage and mitochondrial translocation of Bax and to caspase activation [39]. In aggregate, our present data suggest that Pin1 regulates cell proliferation by enhancing resistance to apoptosis through dysregulation of the Bax/Bcl-2/cytochrome c/caspase-9 and -3 signalling pathway.
Our results also showed that 10% serum from T2D mice promoted VSMC migration while up-regulating Pin1 expression. This effect was more pronounced with Pin1 overexpression, whereas knockdown had the opposite effect, confirming a Pin1-specific effect. Cohen et al. [40] observed that inhibiting Pin1 activity increased p53 activity towards its target genes MMP-9 and MMP-2, confirming the role of Pin1 in the regulation of trophoblast invasiveness. Other studies have indicated that Pin1 increases the transcriptional activity and protein level of VEGF [41,42]. To understand the possible mechanism responsible for increased migration in T2D, we looked for changes in the expression levels of VEGF, MMP2 and MMP9. In the present study, no alteration in MMP9 expression was observed on either increasing or decreasing Pin1 expression, and only diminishing Pin1 expression affected the MMP2 level; 10% serum from T2D mice and Pin1 overexpression appeared to have no effect on MMP2. Our data support the view that Pin1 is an essential positive regulator of the VSMC migration process via mediating VEGF expression. Nevertheless, we cannot exclude the possibility that Pin1 also modulates other mechanisms, including MMP2 and MMP9 regulation, that promote VSMC migration in the condition of T2D. Thus, the precise mechanism remains to be further investigated.
Finally, we elucidated the role of Pin1 in neointimal formation after vascular injury in T2D mice. Our data showed that the I/M ratio was significantly reduced in juglone-treated common femoral arteries compared with plain gel-treated controls. Moreover, topical use of juglone significantly increased the number of apoptotic VSMCs in the artery walls after wire injury. These results substantiated our in vitro findings and strongly supported the notion that down-regulation of Pin1 confers protection against injury-induced pathological vascular restenosis.
Taken together, we found that 10% serum from T2D mice and Pin1 overexpression markedly promoted growth, repressed apoptosis, and stimulated cell cycle progression and migration of VSMCs, whereas the opposite effects were observed in VSMCs depleted of Pin1. Mechanistically, STAT3 signalling and mitochondria-dependent/caspase-dependent pathways played critical roles in Pin1-mediated cell cycle regulation and apoptosis of VSMCs in the T2D condition. In addition, VEGF significantly contributed to VSMC migration mediated by Pin1 in the T2D condition. To determine whether our in vitro findings had any physiological relevance, we evaluated the in vivo effect of juglone on neointima formation in the guidewire-injured common femoral arteries of T2D mice. Our results confirmed the beneficial effects of Pin1 degradation. This study suggests that Pin1 inhibitors or depletion of Pin1 protein serve as potential therapeutic candidates that warrant further investigation regarding their potential use in the prevention of restenosis in T2D.
Mixed-Method Research to Foster Energy Efficiency Investments by Small Private Landlords in Germany
Abstract: The decarbonisation of the building stock is an important element for the success of the German Energiewende (energy transition). Despite some progress having been made, the rate of energy renovation falls below the level required to meet political commitments. This gives rise to the question: what deters property owners from making energy efficiency investments and how can the policy framework foster such investments? To answer this question, the paper focuses on a widely neglected property owner group: small private landlords (SPL). Although they manage 37% of all residential rental properties in Germany, very little is known about their decision-making processes for energy efficiency investments. We applied a mixed-method design to identify factors that hindered and supported their investments. In an explorative study, we initially conducted 18 problem-centred interviews. Subsequently, we carried out a postal survey and analysed the questionnaires using a hierarchical linear regression model. The results show that energy renovation is a multi-dimensional decision-making process, which can only be adequately addressed by a comprehensive policy package. To develop such a package, the author recommends that the specific investment behaviour of SPL must be better targeted, their knowledge about energy efficiency investments must be improved through exchange and networking, a sense of responsibility for the neighbourhood must be fostered, and greater focus must be placed on improving local framework conditions.
Introduction
The latest IPCC report indicates that it may still be possible to limit global warming to between 1.5 and 2.0 °C, and, consequently, to stay within planetary boundaries [1]. However, the report also emphasises the tremendous efforts that will be necessary for a turnaround in global GHG emissions in the next couple of years [2]. The German national strategy to tackle climate change, the "German Energiewende", aims to reduce CO2 emissions by 80% to 95% by 2050 compared to 1990 levels. The building sector has a crucial role to play in achieving this target because it causes one third of Germany's GHG emissions [3].
The German building sector has successfully reduced its CO2 emissions by 43% since 1990 [4], but in recent years this reduction has tailed off. The annual rate of energy renovations is still well below the necessary 2% to 3% [5,6]. While investment volumes for general building retrofits and renovations have increased in the last 10 years, the absolute volume and relative share of energy efficiency investments have decreased [7]. Moreover, comprehensive renovation programmes have increasingly become the exception, with property owners instead investing in single measures to pick the low-hanging fruit [8].
Investments in energy efficient buildings in Germany are currently not only insufficient for meeting climate protection targets but are also unevenly distributed among the various property owner groups. Properties owned by not-for-profit housing associations and, to some extent, owner-occupied homes experience higher renovation rates and energy efficiency performance than properties owned by private landlords [5,6], which is in line with international evidence [9,10].
This raises the question: what deters landlords from investing in energy efficiency? To answer this question, comprehensive knowledge about their decision-making processes is crucial. While much is known about homeowners in Germany [11][12][13][14][15] and abroad [16][17][18][19], as well as about professional landlords [20,21], very little is known about small private landlords (SPL) [22,23], despite the fact that they own 37% of all residential apartments in Germany. This is the starting point of the analysis, in which the authors used a mixed-method research design [24] to analyse the decision-making processes of SPL and identify the supportive and obstructive factors. The purpose of the paper is twofold. Firstly, it aims to shed light on the decision-making processes of a neglected actor in the German Energiewende, the small private landlord. Secondly, the findings should help to develop better-tailored policies for promoting energy efficiency investments, resulting in a building stock with nearly-zero emissions.
German Policy Framework for Achieving a Nearly-Zero Emissions Building Stock
Germany aims to reduce the primary energy demand of the building stock by 80% by 2050 compared to 2008 levels, to achieve building stock with nearly-zero emissions. The "building energy efficiency strategy" brings together different policies, ranging from regulation and incentives to research, information and consultancy [25].
Since 1977, mandatory minimum energy efficiency standards for new buildings have been in place. These have been tightened several times, reducing the energy demand of new buildings by approximately 50% [13,26]. Further tightening is expected in line with the EU Energy Performance of Buildings Directive (EPBD), which should lead to nearly-zero energy buildings by 2021. The so-called "Modernisierungsumlage" (§559 BGB) in tenancy law provides an incentive for landlords to invest in energy efficiency by allowing them to pass on 8% of all energy-related modernisation costs to their tenants each year, meaning that an energy efficiency investment has a payback period of 12.5 years.
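The 12.5-year payback follows directly from the 8% pass-through: if 8% of the eligible cost can be added to the annual rent, the landlord recovers the full cost after 1/0.08 = 12.5 years, ignoring interest, vacancies and rent defaults. A minimal sketch with a hypothetical investment amount:

```python
def modernisierungsumlage_payback(investment_cost, annual_pass_through_rate=0.08):
    """Simple payback period for an energy-related modernisation cost that can be
    passed on to tenants at a fixed annual rate (no discounting, no vacancies)."""
    annual_rent_increase = investment_cost * annual_pass_through_rate
    payback_years = investment_cost / annual_rent_increase
    return annual_rent_increase, payback_years

# Hypothetical example: EUR 20,000 of eligible energy-related costs
increase, years = modernisierungsumlage_payback(20_000)
print(f"Annual rent increase: EUR {increase:,.0f}; payback after {years:.1f} years")
```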
The "CO2 buildings rehabilitation programme" offers subsidised loans and grants for comprehensive energy refurbishments that achieve 'KfW Efficiency House' standards, as well as for individual energy efficiency measures in existing buildings [27]. Landlords only benefit from subsidised loans, while homeowners can choose between grants and loans (including a repayment bonus). Funding rates depend on the building's energy efficiency performance after the renovation. For example, at the time of writing (01.02.2020), landlords who renovate their rental properties to the "KfW Efficiency House 55" standard, meaning the building requires only 55% of the energy allowed under the minimum energy performance standard defined in the German Energy Saving Ordinance (EnEV), are rewarded with low interest rates and a repayment bonus of 40%, while only a 20% bonus is available for single measures. Under the German tax system, energy efficiency investments are also tax deductible.
In addition, the German government provides information, advertising campaigns such as "Dämmen lohnt sich" ("insulation is worthwhile"), and different forms of energy consultancy to raise awareness and acceptance of the ecological necessity and the economic benefits of energy efficiency investments.
Despite this comprehensive framework, which has led the International Energy Agency (IEA) to conclude that "Germany is among the world leaders in terms of energy-efficient buildings" [28], an energy efficiency gap exists and Germany is expected to miss its short-term and long-term energy efficiency targets. The rate of energy refurbishment remains low, at approximately 1% per year [5], which is far lower than the 2% to 3% needed. In addition, investments in energy efficiency have decreased in recent years, although overall investments in building renovations have increased [7]. This raises the question of whether the policy framework is still fit for purpose for reaching the aim of building stock with nearly-zero emissions and for targeting the specific needs of property owners. To answer this question, we must take a closer look at the different property owner types and their investment behaviour.
The Role of Small Private Landlords in Achieving a Nearly-Zero Emissions Building Stock
The owner structure of the German residential building market is heterogeneous. Only 20% of apartments are owned by professional housing companies. The rest are owned and managed by homeowners (43%) and small private landlords (37%). The proportion of rental properties owned by SPL underlines the fact that their investment in energy efficient measures is crucial for achieving building stock with nearly-zero emissions.
The little available data and studies about SPL emphasise two points. Firstly, the level of energy efficiency investments by this owner group is lower than for other groups. Secondly, policies are not effective in targeting or motivating SPL. Michelsen et al. [29] analysed more than 100,000 energy efficiency certificates relating to multi-family buildings and concluded that professional landlords renovate more ambitiously than SPL due to economies of scale and scope. The total energy consumption of residential buildings managed by professional landlords decreases by 36% following comprehensive energy renovation, compared to a decrease of 18% for properties managed by SPL. This result is in line with the findings of Henger and Voigtländer [30], who show that SPL invest more frequently, but at lower levels, than professional landlords. The average renovation spend by professional landlords is between €347/m2 and €560/m2, compared to €277/m2 by SPL. The authors also show that SPL are less likely to increase their rents following energy renovations than professional landlords. If they do increase the rent, the rise is moderate (5%) in comparison to the average rise made by professional landlords (25% to 28.6%). Cischinsky and Diefenbach [5] conducted a representative study of the different owner groups in Germany and the results highlighted the backlog of energy renovations in properties owned by SPL. Only 33% of the facades of multi-family buildings owned by SPL have been insulated, compared to 47.5% in properties owned by professional landlords. This figure is even lower for communities of owners (17.1%). Renz and Hacke [31] compared different determinants of energy renovation for homeowners and SPL. They concluded that homeowners renovate to a higher energy efficiency performance standard because they benefit from co-benefits such as comfort, lower heating costs, etc. In contrast, SPL tend to make investments to meet regulatory standards. Homeowner associations (HOA) are a particular type of ownership where a residential building is owned by different owners, primarily SPL and owner-occupiers. Investment decisions require coordination to balance all the individual interests, with the result that the greatest backlog in energy efficiency investments is in buildings with this type of ownership [5,32].
These results are in line with international evidence. In England, the number of F and G rated properties, meaning the most energy inefficient buildings, fell at a slower rate in the private rented sector than for any other group of owners between 1996 and 2012, and in the private rented sector the share of F and G rated buildings is the highest [33]. Broberg and Egüez [34] emphasise the fact that energy efficiency performance in Swedish multi-family buildings is significantly higher for cooperative apartment associations than for private rental properties. In the US, 43% of private rented properties have double-glazing, compared to 63% of owner-occupied homes, with only 28% being considered "well-insulated" (compared to 40%) [35]. In the Netherlands, private rentals also have lower levels of attic and wall insulation and efficient glazing (40%, 29% and 48%, respectively) than owner-occupied buildings (70%, 52% and 70%) [36].
The existing studies show there is a backlog of energy renovations in residential rental properties, particularly those let by SPL. SPL even neglect economically viable energy efficiency investments, which underlines the need for policies designed to meet political targets to be tailored to the specific needs of SPL. It is clear that a more comprehensive understanding of the decision-making processes of SPL is required to fully exploit existing energy efficiency potentials.
Mixed-Method Research Design
We opted for a mixed-method design to understand the decision-making processes of SPL (see Figure 1) [24]. Mixed-method research designs are used to answer various disciplinary and interdisciplinary research questions which are related to our topic, such as energy-related behavior [37] or affordable housing and tenant preferences [38], but also to other issues like gamification [39] or big data analytics [40]. Lopez-Fernandez and Molina-Azorin [41] provide an extended overview of the usage of a mixed-method research design in behavioral science. Qualitative and quantitative methods "should be mixed in a way that has complementary strengths and non-overlapping weaknesses" [42]. The aim of the triangulation is twofold. Firstly, it is anticipated that mixing will lead to results of greater validity than using only one method [43]. Qualitative data may help to interpret quantitative model results and quantitative results can verify or falsify explorative findings from qualitative research. Secondly, comparisons between qualitative and quantitative results may reveal further research needs. We used a sequential qualitative-quantitative mixed-method research design to investigate the decision-making processes of SPL [44]. Due to the current limited knowledge about SPL, a sequential approach was appropriate, as it allowed the explorative identification of supportive and obstructive factors (qualitative research), which could then be quantitatively evaluated in terms of their relevance/significance. A total of 18 interviews (ranging from 37 to 115 minutes each) were conducted and analysed using content analysis [45]. Interviews were conducted until a theoretical saturation was reached [46], although the number of interviews was in line with other qualitative research on building energy consumption [47]. The sampling was 'purposeful', aiming for maximum variation [48]. Therefore, both renovators and non-renovators were selected. The qualitative research led to an integrated decision-making model including four decision dimensions with 14 determinants affecting energy efficiency investments.
Subsequently, a postal survey was conducted in spring 2017 using the "tailored-design method" [49]. Participants were asked to complete a nine-page questionnaire, which included socio-economic information, information about their rental properties in the neighbourhood, and measures to assess the significance of the 14 determinants. A total of 351 SPL participated in the survey (26% response rate). The results are based on 190 completed questionnaires. The survey was analysed using a hierarchical OLS regression model [50]. In contrast to an ordinary least squares (OLS) regression, where all independent variables are included in the model at once, in a hierarchical OLS regression the independent variables enter the equation in an order specified by the researcher, which allows the added value of the different dimensions and determinants in the decision-making process to be assessed.
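Hierarchical (blockwise) OLS regression can be illustrated with a short statsmodels sketch: predictor blocks are entered in a researcher-specified order and the change in adjusted R² indicates what each block adds. The variable names and data below are placeholders, not the actual survey items or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 190  # same order of magnitude as the analysed sample
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "habits": rng.normal(size=n),
    "capabilities": rng.normal(size=n),
    "external": rng.normal(size=n),
})
# Hypothetical outcome: level of energy efficiency investment
df["investment"] = (0.1 * df["attitude"] + 0.6 * df["capabilities"]
                    + rng.normal(scale=1.0, size=n))

# Blocks entered in the order specified by the researcher
blocks = [["attitude"], ["habits"], ["capabilities"], ["external"]]
predictors = []
for i, block in enumerate(blocks, start=1):
    predictors += block
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["investment"], X).fit()
    print(f"Model {i} (+{block}): adj. R-squared = {model.rsquared_adj:.3f}")
```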
The author conducted the qualitative and quantitative research activities in a small inner-city neighbourhood in Oberhausen. Oberhausen is a city in the Ruhr area of Germany with approximately 212,000 inhabitants. The city and neighbourhood are closely linked to the rise of the coal and steel industries at the end of the 19th century and their decline from the 1960s onwards [22,51]. The decline led to enormous socio-economic challenges for the city [52,53]. The city has lost more than 50,000 inhabitants in the last half century and has one of the highest levels of debt per capita in Germany [54]. Consequently, the case study represents neighbourhoods suffering from high vacancy rates, low rents and a high share of low-income households, with a heterogeneous building ownership structure and buildings mainly constructed prior to, or during the early decades of, the 20th century. Such neighbourhoods exist in other parts of Germany, such as in Saarland, Bremen, parts of former East Germany and other cities in the Ruhr area, as well as, in all likelihood, in other European neighbourhoods faced with economic structural change.
Results from the Qualitative Research
Decision-making is a complex process. Consequently, several research disciplines have developed different approaches to understand why and how individuals decide for or against an alternative. Instead of disciplinary approaches, Stern [55] argues that integrated and holistic approaches are required to fully understand individual decision-making. He identifies the following four dimensions of a decision-making process: (pro-environmental) attitude, habits, personal capabilities and external factors. We applied these four dimensions to our case study.
The interviews showed that investment decisions are made by SPL in a two-step process. Firstly, there must be a reason for an SPL making an investment. Reasons include the need for a repair, the leaving of a tenant, a complaint made by a tenant and the desire to preserve the value of the property or to make improvements. In the second step, the above-mentioned dimensions determine whether an SPL invests or not and whether the investment will be a standard solution (e.g., painting a wall) or an energy efficiency solution (e.g., wall insulation). The second step, therefore, influences energy efficiency investments and requires deeper understanding to ensure that existing energy efficiency potentials are exploited. A total of 14 determinants were identified based on the interviews and a comprehensive literature review, and these were each assigned to one of the four dimensions (see Figure 2). A brief overview follows and more detailed results are presented by März [50].
(Pro-Environmental) Attitude
Interviewees showed broad support for the "German Energiewende". Most were aware of the scarcity of fossil fuels and agreed that new ways of generating and conserving energy are required (environmental awareness). However, this awareness did not automatically lead to environmentally friendly behaviour (revealing a value-action gap) [56], as hardly any of the interviewees took personal responsibility or felt morally obliged to change their behaviour (personal norm).
'Why should I do that? Because I'm an environmentalist? Germany is a small country and Merkel made it her priority to be a role model. This is ridiculous. As long as we support the lignite industry, what does she want from us private homeowners?' (IP 5:112)
Habits
Most of the interviewees showed a more altruistic attitude (value) than expected. Their priority in terms of rental management is not to maximise return but to operate well-run and hassle-free tenancies.
'I belong to those, who not believe to earn treasures from letting (…) My attitude is a social one. (…) I do not belong to those, who say I have to get as much as possible out of it.'

In particular, SPL who had inherited their buildings or flats, who had owned them for decades or had grown up in the neighbourhood stated that they invest in their buildings to stop the downwards trend of the neighbourhood, because they identify with the neighbourhood and/or the building (emotional relationship).
'My heart goes out to this house and it hurts to see this neighbourhood deteriorating if nobody invests. This is also why I do invest' (IP 3: 60)

The interviews revealed that the typical SPL approach to investment does not lend itself to large energy efficiency investments. SPL are conservative and tend to be risk-averse in their investment behaviour. They reject loans and prefer an ongoing investment programme, in which they save up to finance an investment and then start saving again. They do not generally have an investment strategy; instead they invest on demand (e.g., repairs) and their investments follow a certain hierarchy, in which energy efficiency is at the bottom level (investment routine).
'I'm not somebody who buys things I cannot afford. With the incoming rents I pay my bank and the rest is saved until I have the money to just buy the new door' (IP 4: 85)
Personal Capabilities
Financial or time constraints, combined with the management burden of investment planning and supervision (individual burden), are obstacles for energy efficiency investment and become more relevant with the increasing age of SPL.
'You know what, the government wants us to take care of our pension savings. This house is supposed to pay for the care service or even the care home for my mother. You can only spend each Euro once' (IP 1b, postscript)

Knowledge about energy efficiency is heterogeneous within the target group and unclear in terms of its effect. It can reduce prejudice, but confirmation bias can also play a role. Prejudice is widespread and includes issues such as mould, fire risks, hazardous waste, aesthetics and doubts about ecological and financial meaningfulness.
'So, I wouldn't rip out the windows because that would damage the facade. If we had completely airtight windows, I would have a mould problem because of the way they [the tenants] use the heating. They tilt the window and when they close it, the humidity remains inside. And so I'd rather keep the windows the way they are. They are from the 80s and already double-glazed' (IP 4)

Personal networks are theoretically seen as a means of innovation diffusion. However, the existence of such networks for exchanging information and experience is rare within the target group and their effect can be negative as well as positive.
Many interviewees overestimated the energy efficiency performance of their rental buildings and underestimated their energy efficiency potentials (perception of the building's energy efficiency performance).
'The energy efficiency performance is not bad. I have an energy efficiency certificate. The value is around 160. This is quite ok for a building of the 1960s. To realise a KfW efficiency house standard / This is impossible with such an old building because of the architectural circumstances.' (IP 6: 35)
External Factors
The neighbourhood has undergone tremendous structural changes. These changes negatively affected its image and reduced the quality of existing technical and social infrastructures, etc., all of which reduces the desirability of investing (neighbourhood development).
'When you talk to the old people, they all tell you it has become colossal worse than 20-30 years ago. That's true!' (IP 16: 67)

The way in which the neighbourhood has developed impacts strongly on the rental market. High vacancy rates and low and stagnant rents make energy efficiency investments economically unviable and risky for SPL as they cannot pass the cost onto tenants.
'Yes, you can increase the rent. Then you have a higher rent but an empty apartment' (IP 2)

Moreover, the tenant structure itself, with lots of migrants and poorly-educated people, is also seen as an obstacle. SPL assume that the tenants will not be able to adapt their behaviour to the specific needs of an energy efficient home (e.g., ventilation habits). From the perception of SPL, tenants value visible investment such as balconies, new flooring or new kitchens, rather than "invisible" energy efficiency measures.

'We have neither used the "Modernisierungsumlage" nor have increased the rent (…) we have a 5 Euro rental market and in the end it was safer to avoid tenant movements. For me, it is difficult to increase the rent due to energy renovations in this market because the people move 2,3 houses further. In the end every Euro you pay counts' (IP 15: 42-44)

The explorative qualitative research undertaken illustrates the complex and multi-layered nature of the decision-making processes of SPL. It identifies various factors that influence the decisions made by SPL about renovations. How relevant these are, and whether they hinder or promote the landlords' decision to renovate, was not the task of the problem-centred interviews. In addition, due to the small sample size, the results cannot be assumed to be representative of all SPL. The following quantitative research evaluates the identified determinants in terms of their quantitative influence on the decision to renovate.
Measures and Model Set Up
Participants completed the questionnaire, which included the measures shown in Table 1. Before the questionnaire was finalised, pre-tests (five interviews with SPL using "think aloud" and "probing" methods) and reviews by senior academics and SPL were conducted [57,58].
In addition to the 14 identified determinants, socio-demographics (SPL age, level of education, children, years of ownership, distance between rental property and home address) and building-related variables (type of ownership, form of acquisition, usage and building age) were collected and added to the model. Cronbach's alpha was calculated to examine scale reliability and internal consistency. Model assumptions (multicollinearity, heteroskedasticity, etc.) were tested and the model set-up was adapted where necessary. For the statistical analysis, a hierarchical OLS regression was performed: in a "hierarchical regression, IVs (independent variables) enter the equation in an order specified by the researcher. Each IV is assessed in terms of what it adds to the equation at its own point of entry" [50]. Consequently, five models were created, building on each other to represent the four dimensions of the explanatory model and the socio-demographic data and building-related variables.

Note (Table 1): For each determinant/item, respondents were asked to indicate their agreement with the statements on a 5-point Likert-type scale from 1 = "totally disagree" to 5 = "totally agree". Value orientation (altruism) is an exception: here, respondents were asked to rate the importance of each item as a "guiding principle" in their lives on an 8-point Likert-type scale from 0 = "not important" to 7 = "extremely important".
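Cronbach's alpha for a multi-item scale is computed from the item variances and the variance of the summed scale. The sketch below illustrates the calculation with simulated 5-point Likert responses; it does not use the actual survey data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents x items matrix of scale scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                        # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses (rows = respondents, columns = items)
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(200, 4))), 1, 5)
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```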
Sample Description
The sample consists of 351 SPL. Due to missing values, only 190 complete cases exist, which are included in the regression model. A total of 51% own an apartment building and 49% are part of a homeowner community. SPL are above the national average in terms of their age: almost 40% of our respondents are over 65 years old compared to a national average of 25% (based on residents aged 18 and over). For the majority of SPL, letting is a secondary source of income. Overall, about 55% of the participants own fewer than six rented apartments and 80% fewer than 16. SPL have a high level of education, with 38% of the participants having a university degree (the national average is only 16%). Around one third of the participants inherited or were gifted their rental property. Around 30% of the participants live in the neighbourhood (postal code) in which the rental property is located; over 70% live less than 25 km from their rental property. In terms of both socio-demographic data and property ownership characteristics, the sample is thus comparable with representative national data [64]. Table 2 shows the descriptive statistics, correlations and reliability statistics. Table 3 illustrates the hierarchical regression model.
Regression Model
The four decision dimensions identified from the qualitative research, as well as the socio-demographic and building-specific variables surveyed, contribute to a significant improvement in the model quality. This confirms what could already be assumed from the interview phase: the decision-making processes of SPL are very complex and can only be explained by considering various determinants and variables.
However, the four dimensions contribute very differently to the model quality. The explanatory power of the "(pro-environmental) attitude", "habits" and "external factors" is evident, but their impact is low. It is in fact the "individual capabilities" dimension, as well as the socio-demographic and building-specific variables, that determine the model quality.
The full regression model (no. 5) results from adding the various determinants and variables. It contains the determinants of the decision model developed in the qualitative research, as well as various owner-specific and building-specific variables. With an adjusted R² = 0.49, it explains a high share of the variance. It makes clear that knowledge about energy renovation measures, personal networks and the perception of the building's energy efficiency performance are crucial drivers for investment. Prejudice, on the other hand, is a barrier.
In addition, it is evident that SPL who live in the rental property themselves are more willing to carry out energy renovation measures. Similarly, owners of apartment buildings renovate significantly more frequently than owners in a HOA (homeowner association). Having children tends to have a negative effect on investment levels. As expected, residential buildings built after 1978, i.e., after the first Heat Insulation Ordinance came into force, are less frequently refurbished. There is no statistical correlation for all the older building age classes. Neither the individual age, the form of acquisition, the educational level nor the duration of ownership have a statistically significant influence.
In the regression model, three interactions (value (altruism) and emotional relationship; neighbourhood development and rental market (rent level); and personal norms and tenant preferences) were also identified from the determinants, which by themselves have no main effect but only a conditional effect.
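Conditional (interaction) effects of the kind described here are typically specified as product terms in the regression. The statsmodels formula interface makes this explicit; the sketch below uses placeholder variable names and simulated data rather than the survey variables themselves.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 190
df = pd.DataFrame({
    "altruism": rng.normal(size=n),
    "emotional_relationship": rng.normal(size=n),
})
# Outcome driven only by the product of the two predictors (a pure interaction)
df["investment"] = (0.5 * df["altruism"] * df["emotional_relationship"]
                    + rng.normal(scale=1.0, size=n))

# 'a * b' expands to both main effects plus the a:b interaction term
model = smf.ols("investment ~ altruism * emotional_relationship", data=df).fit()
print(model.params)  # the interaction coefficient should be close to 0.5
```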
In addition to the determinants that have a statistically significant impact, there are also factors of interest for which the assumed influence could not be confirmed. For example, "financial incentives" as a policy for fostering energy renovation has no significant influence on investment decisions. Marketing campaigns for energy renovation always highlight ecological added value, but the model shows that existing environmental awareness and personal norms have no relevant influence on renovation decisions.

Note (Table 2): source: own calculation; ***p < 0.001, **p < 0.005, *p < 0.05, Spearman (two-sided).

Note (Table 3): *p < 0.1; **p < 0.05; ***p < 0.01; standard errors in brackets; Models 1 to 5 hierarchical OLS regression; socio-economic and building-related variables only displayed if statistically significant.
Key Findings and Discussion
The findings from the qualitative and quantitative research shed light on the decision-making processes of SPL. They help to understand what deters SPL from making energy efficiency investments and provide some indication of how to adjust policies to better fit the specific needs of the target group. Four main findings with recommendations can be identified.
Better Understand and Target the Investment Behaviour
The empirical studies clearly show that the investment behaviour of SPL differs from the behaviour advocated by the current political regime. Even though the investment routine was not statistically significant in the model, it became clear that SPL try to avoid taking on debt. A total of 63% of the participants of this survey and as many as 90% of SPL in a nationwide survey only finance investments from savings and avoid taking out loans [64]. As SPL are not even tempted by the current low interest rates, this points to the recommendation that funding programmes should offer grants instead of subsidised loans. In the model, "funding schemes" have no statistical significance and the descriptive statistics show that funding is not financially attractive. This is another strong indication of the need for a paradigm shift in public funding from subsidised loans to grants (which do not currently exist for SPL). The typical investment routine of stepwise investments, which follow a saving/investment pattern in order to keep effort and capital costs under control, should also be acknowledged in the funding schemes. Currently, comprehensive renovations towards KfW standards attract higher funding rates of up to 27.5%, while single measures only receive between 7.5% and 15%. Therefore, funding rates should be based on a property owner's commitment to achieve an energy efficiency standard by implementing single measures according to a renovation "roadmap" within 5 to 10 years. This means that SPL would receive higher financial support for investing incrementally in renovations towards KfW standards.
SPL rarely demonstrate strategic investment planning; they tend to invest on an ad-hoc basis to meet specific needs. In addition, they are conservative, status-quo oriented and less motivated by maximising returns than by long-term, stress-free tenancies. The so-called "Modernisierungsumlage" has become a synonym for gentrification and the pursuit by landlords of financial returns. However, the analysis shows that SPL are more altruistic than expected. A total of 90% prefer longstanding tenancies to high returns and almost 60% do not increase their rents during tenancies. Both these characteristics are in line with other German studies, as previously outlined [30] and point to the need to adapt the 'Modernisierungsumlage'. Reforming the policy, for example by coupling the transferable costs with the benefit to the tenant (lower heating costs), could also increase the social acceptance of energy renovations.
Homeowner communities are a special case, since any investment is based on a group dynamic and not an individual decision-making process. The regression model illustrates that communities of owners invest less in energy efficiency than "normal" property owners. One of the main reasons is complex residential property law (WEG). This requires a high proportion of the property owners to commit to implementing energy efficient measures (75% of all property owners have to agree) and demands the application of highly developed communication skills by housing managers to unite the different investment logics, opinions, expectations, etc., of the property owners. To improve this situation, we propose that approval rates should be lowered to 50% (in line with non-energy measures, e.g., painting a wall) and that property managers should be well-trained and better paid [65]. This could include the establishment of funding schemes to support property managers and the creation of new schemes targeting the role of property managers.
Improve Knowledge Through Exchange and Networking
The quantitative analysis in particular shows that knowledge and personal networks play an important role in energy efficiency investment decisions. Knowledge helps SPL to be aware of technological options and to assess both the risks and opportunities, as well as reducing their reservations in terms of the effort and complexity of energy renovation measures.
Despite various information campaigns, it is overwhelmingly clear that a multitude of prejudices (mould risk, financial, ecological, aesthetic, structural) have a negative influence on investment activity. Previous strategies to change the attitudes of SPL and increase their investment in energy efficiency renovations by providing knowledge through information campaigns and energy consulting have largely failed. Networks and word-of-mouth recommendations seem to offer a more promising solution. Letting is only a sideline for SPL and, due to their limited time budgets, decisions are made heuristically. Positive experiences from personal networks or recommendations from tradesmen have a positive effect on investment, as their assessment is perceived as credible and trustworthy. However, the qualitative research in particular makes it clear that many SPL do not network with other property owners and feel like "lone fighters". A communication strategy must, therefore, focus on bringing SPL together so they can exchange experiences and know-how face-to-face. An area-based approach, as in this case study, would seem to be promising because activities could be tailored to the specific needs of SPL. It could also create a community feeling and strengthen the sense of identity within the neighbourhood. Neighbourhood walking tours, opportunities for property owners to meet, or open-home events could be a good starting point for establishing networks among SPL [66]. Demonstration projects could also be an important tool for making energy efficiency tangible. The varying level of qualification among tradesmen and energy consultants has also hampered investment in energy efficiency, which suggests that better education and certification of energy efficiency measures would also be a positive move.
Support and Promote a Sense of Responsibility
SPL, at least in the area of our research, seem to act in a more welfare-oriented and responsible manner than their public reputation would suggest. However, their sense of responsibility does not tend to encompass ecological responsibility for tackling climate change. Despite their levels of environmental awareness, that goal seems abstract and intangible. The analysis highlights the fact that environmental awareness and moral obligation do not automatically translate into pro-environmental behaviour: a value-action gap exists [56]. Consequently, campaigns and information focusing on the ecological benefits of energy renovation do not seem to be effective.
Instead, the sense of social and geographic responsibility that exists amongst SPL could be harnessed to promote investment. Many SPL live, or used to live, in the same neighbourhood, city or surrounding area in which their rental property is located. In the case study, approximately 75% of all SPL live within 25 km of their rental property. This not only creates an emotional relationship with the neighbourhood, but also an awareness of the neighbourhood's development. The same also applies to the rental property itself, as many such properties have been handed down from generation to generation. The neighbourhood and the rental property are, therefore, linked to memories and social relationships, from which a sense of responsibility can develop. If the landlord lives in the apartment building or in the neighbourhood, investment made in the rental property will also benefit the landlord. There is also the sense of responsibility towards the tenants: long-term tenancies are few and far between, but such stable and hassle-free tenancies are sought after. This multidimensional sense of responsibility, of course, varies across the owner group. The challenge is: a) to identify those landlords who have the potential to serve as change agents and door-openers for others; and b) to activate the above-average willingness of this subgroup to invest in energy-efficient refurbishment measures.
Greater Focus on Local Framework Conditions
Neighbourhoods are characterised by different local conditions. It was not possible to conclusively identify, from the available literature or the empirical work presented, whether and in which direction neighbourhood development, the housing market and the tenant/resident structure influence investments in energy-saving renovation measures. This signals a clear need for further research. While the qualitative research identified neighbourhood development and low rental levels as central barriers to investment, the regression model showed no significant impact. It is obvious, however, that the framework conditions in Oberhausen are different from those in Berlin, Munich or Düsseldorf, and that this results in different return expectations, investment risks and investment motives. For example, energy-related refurbishment measures can ensure leaseability in a low-demand housing market such as Oberhausen. The same measure could, however, contribute to energy-related gentrification in Berlin. Consequently, it would be useful to promote research projects on the influence of local conditions on refurbishment decisions. If this research were to find a link between different regional and local conditions and levels of energy efficiency investment, the current practice of distributing funding with no regard for geographical differences would have to be reconsidered. Funding rates and their allocation could be differentiated on a geographical basis.
Conclusions
The decarbonisation of residential building stock is an important component for the success of the German Energiewende. Despite some progress having been made, there has been increasing reluctance in recent years to implement energy-saving renovation measures and a decline in investment activity has been observed. Moreover, refurbishment rates differ according to property owner type. The backlog in renovations in properties owned by small private landlords is particularly high, which raises the question of how to foster energy efficiency investments.
To answer this question, the paper analyses the investment decision-making processes of SPL using a mixed-method approach for a case study in Oberhausen, in the Ruhr area of Germany. Problem-centred interviews and a postal survey provide insights into the decision-making processes of the target group. The analysis makes both a theoretical and a political contribution to the debate on energy renovation. The development of an integrated explanatory model is a central added-value aspect of the work. On the one hand, it identifies determinants that promote and inhibit energy-efficiency investment by SPL. It broadens the perspective on the refurbishment backlog of small private landlords beyond the usual economic debate, as it identifies many determinants that cannot be explained by economic theory. It thus encourages further research to verify the findings and to better understand the decision-making process. On the other hand, the model illustrates the complexity of the decision-making processes and highlights the fact that one-dimensional political solutions have little potential for success. This insight is the main political added-value aspect of the work. The results suggest that policies following the "one size fits all" principle are insufficient. Instead, comprehensive knowledge of the different owner groups and their decision-making logic is needed. Consequently, experimental settings and political will are crucial for testing and implementing the changes proposed in this research study on both small and large scales. This should ultimately lead to alterations to the policy framework to meet the specific requirements of the target group and boost energy renovation. | 8,850.8 | 2020-02-25T00:00:00.000 | [
"Economics"
] |
CD82 palmitoylation site mutations at Cys5+Cys74 affect EGFR internalization and metabolism through recycling pathway
Tetraspanin CD82 often participates in regulating the function of the epidermal growth factor receptor (EGFR) and the hepatocyte growth factor receptor (c-Met). Palmitoylation is a post-translational modification that contributes to tetraspanin web formation and affects tetraspanin-dependent cell signaling. However, the molecular mechanisms by which CD82 palmitoylation affects the localization and stability of EGFR and c-Met have not yet been elucidated. This study focuses on the expression and distribution of EGFR and c-Met in breast cancer as well as the related metabolic pathways and molecular mechanisms associated with different CD82 palmitoylation site mutations. The results show that CD82 with a palmitoylation mutation at Cys5+Cys74 can promote the internalization of EGFR. EGFR internalization is enhanced by direct binding to CD82 with the assistance of tubulin, and the internalized EGFR is located at the recycling endosome. After studying the recycling pathway marker proteins Rab11a and FIP2, we found that formation of the EGFR/CD82/Rab11a/FIP2 complex promotes the internalization and metabolism of EGFR through the recycling pathway and results in the re-expression of EGFR and CD82 on the cell membrane.
Introduction
The tetraspanin CD82 is a small membrane protein of the tetraspanin family with four transmembrane regions. CD82 is encoded by the KAI1 gene, and as a recognized tumor suppressor, it is widely distributed in various normal tissues [1,2]. In addition to the four transmembrane regions, CD82 contains a small extracellular loop (EC1), a large extracellular loop (EC2), and a small intracellular loop [3,4]. In the variable region of EC2, there are sites that can bind to other proteins. This structural feature supports the subsequent formation of the tetraspanin network. In the CD82 transmembrane domain, there are three highly conserved polar residues that can interact with the transmembrane domains of other tetraspanins; therefore, CD82 can connect to other types of tetraspanins, forming complexes with specific functions [5]. In addition, CD82 can directly or indirectly bind to signaling molecules, such as integrins, EGFR, c-Met and G protein-coupled receptors, forming microdomains enriched with tetraspanins on the cell membrane [6,7].
EGFR is one of the main members of the ErbB family, which plays a regulatory role in embryonic development, tissue differentiation, and tumorigenesis and tumor development [8]. Previous studies have shown that in solid tumors the overexpression of EGFR is typically related to an increase in the secretion of homologous ligands, which leads to the chronic activation of EGFR. When EGFR is not activated, CD82 can downregulate EGFR expression by regulating its internalization kinetics. CD82 can also cooperate with vesicle-associated membrane proteins and actin, thereby changing the signal transduction pathway of EGFR [9,10]. c-Met is also a receptor tyrosine kinase, which is primarily expressed in epithelial and endothelial cells [11]. As the only high-affinity receptor for hepatocyte growth factor, c-Met tends to be overexpressed in breast cancer, pancreatic cancer, gastric cancer, and other tumors. Both EGFR and c-Met are tumor metastasis-related receptors that are involved in the regulation of tumor cell metastasis, and both have been extensively studied. CD82 can directly inhibit c-Met expression and reduce cell invasion and growth by weakening the signal interaction between c-Met and integrins [12-14].
Palmitoylation is an important and common post-translational modification of proteins. At present, the most widely investigated process is S-palmitoylation, that is, the addition of a 16-carbon palmitate to cysteine (Cys) residues of proteins. The lipid bonds of palmitoyl groups can attach to Cys residues and affect protein expression and function [15,16]. Palmitoylation of tetraspanin proteins can promote the formation of the tetraspanin network, which is conducive to the biological function of tetraspanin-enriched microdomains. Although it has been established that the palmitoylation of CD82 can regulate the biological characteristics of tumor cells [17], its specific molecular mechanism remains to be studied. CD82 contains five cysteine residues in the membrane-proximal regions, i.e., Cys5, Cys74, Cys83, Cys251, and Cys253. The relationship between CD82 palmitoylation mutations and EGFR and c-Met has not been clarified so far, including the theoretical mechanism for their expression and localization changes in tumor cells and the specific metabolic pathways. Therefore, in this study, we compared the distribution and metabolic pathways of EGFR and c-Met associated with different palmitoylation mutants of CD82 and explored the corresponding molecular mechanisms.
CD82 palmitoylation mutation plasmid construction
The flow chart of CD82 palmitoylation mutant plasmid construction is shown in Supplementary Figure S1A. On the basis of the wild-type CD82 plasmid, selected cysteine residues at the CD82 palmitoylation sites were mutated to alanine. According to the CDS sequence of CD82, the primers for the CD82 palmitoylation mutations were designed as shown in Supplementary Table S1. TransStart® FastPfu Fly DNA Polymerase (AP231-01; TransGen Biotech, Beijing, China) was used to run the PCR reactions. The correctness of each CD82 palmitoylation site mutation plasmid was verified by sequencing (General Biol, Hefei, China).
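As a rough illustration of the in-silico step behind such Cys-to-Ala mutants, the sketch below swaps cysteine codons at chosen residue positions of a coding sequence for an alanine codon. The toy CDS, the chosen alanine codon and the residue positions are made up for illustration; they do not correspond to the real CD82 sequence or to the primers in Supplementary Table S1.

```python
# Sketch: replace the codon at given 1-based residue positions of a CDS with an
# alanine codon, checking that the original codon encodes cysteine (TGT/TGC).
ALA_CODON = "GCC"

def mutate_cys_to_ala(cds: str, residue_positions: list) -> str:
    cds = cds.upper()
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    for pos in residue_positions:
        codon = codons[pos - 1]
        assert codon in ("TGT", "TGC"), f"residue {pos} is not a cysteine ({codon})"
        codons[pos - 1] = ALA_CODON
    return "".join(codons)

# Toy example: a 6-codon open reading frame with cysteines at residues 2 and 5.
toy_cds = "ATG" "TGT" "AAA" "GGT" "TGC" "TAA"
print(mutate_cys_to_ala(toy_cds, [2, 5]))  # -> ATGGCCAAAGGTGCCTAA
```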
Cell culture and transfection
Breast cancer MDA-MB-231 cells (FuHeng Biology, Shanghai, China) were cultured in Dulbecco's Modified Eagle Medium (DMEM) basic culture medium (Gibco, Carlsbad, USA) supplemented with 10% fetal bovine serum (FBS; AusGeneX, Brisbane, Australia) in 5% CO2. The cell lines used in our experiments were free of mycoplasma infection. HighGene transfection reagent (RM09014; ABclonal, Wuhan, China) was used for the transfection of MDA-MB-231 cells. The cells were inoculated into 6-well plates and considered ready for transfection when the cell density reached 70% to 90%. A total of 3 μg of plasmid (General Biol) was added to 200 μL of serum-free Opti-MEM medium (Gibco) and mixed well. Afterward, 6 μL of HighGene transfection reagent was added and mixed well. The mixture was dripped evenly into the 6-well plates and mixed. After 4-6 h of transfection, the medium was replaced by complete medium with 10% FBS, and the cells were cultured for another 24-48 h before the subsequent experiments.
Protein extraction and western blot analysis
The Membrane and Cytosol Protein Extraction Kit (P0033; Beyotime, Shanghai, China) was used to separate total cellular protein into membrane and cytoplasmic fractions. Total protein was extracted using a whole-protein extraction kit. Proteins were separated by 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to PVDF membranes, which were then blocked in QuickBlock™ for 10 min and incubated with anti-CD82 antibody (ab66400; 1:1000; Abcam, Waltham, USA) overnight at 4°C. After being washed, the membranes were incubated with the DyLight® 680-Goat anti-Rabbit IgG (H+L) secondary antibody (35568, 1:10,000; Invitrogen, Carlsbad, USA) for 1 h at room temperature. The Odyssey fluorescence scanning imaging system (LI-COR, Lincoln, USA) was used for protein detection.
Co-immunoprecipitation assay
The transfected MDA-MB-231 cells were collected and lysed with NP-40 Lysis Buffer (N8032; Solarbio, Beijing, China) supplemented with protease inhibitor (Beyotime) according to the manufacturer's instructions. A total of 500 μg of cellular extract was incubated with mouse immunoglobulin G (ABclonal) at 4°C overnight. Protein A+G Agarose beads (P2055; Beyotime) and the appropriate anti-CD82 primary antibody (ab59509; Abcam) were used for co-immunoprecipitation overnight at 4°C. After being washed with PBS three times, the beads were heated at 100°C for 5 min to dissociate the proteins from the beads, and then centrifuged to collect the protein. Finally, the target protein was detected by western blot analysis.
Immunofluorescence staining
Cells were fixed with 4% paraformaldehyde (PFA) for 15 min, permeabilized with 1% NP-40 for 20 min at 25°C, and then washed with PBS three times. Afterward, 5% normal goat serum was used for blocking at 37°C for 30 min. The cells were incubated with rabbit anti-EGFR primary antibody (A11577, 1:100; ABclonal) overnight at 4°C, followed by incubation with fluorophore-conjugated secondary antibodies (A5011, 1:50; ABclonal) for 1 h. The cell nuclei were stained with DAPI (Beyotime), and the cells were mounted on cover glasses with antifade mounting medium. Imaging was performed either with a Leica TCS SP8 laser scanning confocal microscope (Leica, Wetzlar, Germany) or a Leica DM4B upright fluorescence microscope (Leica).
Statistical analysis
GraphPad Prism 5 software was used to prepare the figures, and one-way analysis of variance (ANOVA) was used for comparisons between multiple groups. P < 0.05 was considered statistically significant.
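For readers without GraphPad Prism, an equivalent one-way ANOVA can be run in a few lines of Python; the band-intensity values below are invented placeholders standing in for normalized quantifications of three groups, not data from this study.

```python
# Minimal one-way ANOVA across three groups using SciPy.
from scipy import stats

normal_control = [1.00, 0.95, 1.08]
wild_type      = [1.45, 1.52, 1.38]
cys5_cys74     = [2.10, 1.95, 2.22]

f_stat, p_value = stats.f_oneway(normal_control, wild_type, cys5_cys74)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("At least one group mean differs (P < 0.05)")
```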
Mutations at CD82 palmitoylation sites Cys5+Cys74 can promote EGFR internalization
To identify the effects of different mutations at CD82 palmitoylation sites on EGFR and c-Met, CD82 palmitoylation mutation plasmids were constructed and verified by sequencing ( Supplementary Figure S1B-H). After transfection, the cytoplasmic and membrane proteins of breast cancer MDA-MB-231 cells, with mutations at different palmitoylation sites of CD82, were separated and extracted. Western blot analysis results showed that when CD82 palmitoylation sites Cys5+Cys74 were mutated, the expression of CD82 and EGFR in the cytoplasm was increased. However, the expression of c-Met in the cytoplasm did not differ significantly between different CD82 palmitoylation site mutation groups, as shown in Figure 1B. Conversely, combined mutations at CD82 palmitoylation sites Cys5+Cys74 led to a decrease in the expressions of CD82 and EGFR on the cell membrane. There was still no significant difference in the expression of c-Met on the membrane among different mutation groups, as shown in Figure 1C.
The above results confirmed that mutations at the different palmitoylation sites of CD82 have no significant effect on the expression and location of c-Met; therefore, the subsequent experiments focused on the expression and metabolism of CD82 and EGFR. With double-point mutations at Cys5+Cys83, CD82 was expressed the most in terms of total protein, which was significantly different from the wild-type (WT) group and the normal control (N) group. Meanwhile, in the groups of single-point mutation at Cys83, double-point mutations at Cys74+Cys83, and triple-point mutations at Cys5+Cys74+Cys83, as well as in the WT group, CD82 expression (total protein) was slightly lower than that in the group of double-point mutations at Cys5+Cys83, and was significantly different from that in the normal control group. Furthermore, our results suggested that in the group of double-point mutations at Cys5+Cys74, CD82 expression did not differ from that in the normal control group in terms of total protein, as shown in Figure 1D. However, the expression level of EGFR (total protein) did not differ significantly among the groups, as shown in Figure 1E. Real-time PCR results showed that mRNA expression of CD82 was detected in each CD82 palmitoylation site mutation group as well as in the WT group, and all the CD82 palmitoylation mutation groups and the WT group had higher CD82 expression than the normal control group, as shown in Figure 1F.
The internalized EGFR can bind to CD82 and co-localize in the recycling endosome
After the mutations at the different palmitoylation sites of CD82, CD82 was immunoprecipitated with Lamp1 (lysosomal marker), Rab7a (late endosome marker), and Rab11a (recycling endosome marker) in each mutation group. The results showed that CD82 can directly bind Rab11a, and the two have a direct interaction, as shown in Figure 2A. However, CD82 could not directly bind Lamp1 or Rab7a. To further explore the molecular mechanism of the enhanced internalization of EGFR into the cytoplasm when the CD82 palmitoylation sites Cys5+Cys74 were mutated, the interaction between EGFR and CD82 was detected by immunoprecipitation assay, and the results showed that CD82 could directly bind to EGFR, as shown in Figure 2B. Similarly, the immunoblotting results showed that EGFR could directly bind to CD82; however, EGFR could not directly bind to Lamp1, Rab7a, or Rab11a, as shown in Figure 2C. Thus, we used the immunofluorescence method to detect the co-localization of CD82 with the three endosomal markers. We further explored the co-localization of EGFR with the three endosomal markers. The immunofluorescence results showed that the co-localization of CD82 with Rab11a was the most evident, as shown in Figure 2D. The degree of co-localization of EGFR with Rab11a was also higher than that with Lamp1 and Rab7a, as shown in Figure 2E.
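The co-localization described above was judged from confocal images; one common, generic way to attach a number to such overlap is a Pearson correlation between the two fluorescence channels, sketched below. This is not the authors' pipeline, and the image file names are placeholders.

```python
# Generic Pearson co-localization coefficient between two fluorescence channels.
import numpy as np
from skimage import io

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray, mask: np.ndarray) -> float:
    """Pearson coefficient between two channels restricted to a boolean mask."""
    a = ch1[mask].astype(float)
    b = ch2[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

egfr = io.imread("egfr_channel.tif")      # anti-EGFR signal (placeholder file)
rab11a = io.imread("rab11a_channel.tif")  # recycling endosome marker (placeholder file)
# Crude cell mask: pixels above the median in either channel.
cell_mask = (egfr > np.percentile(egfr, 50)) | (rab11a > np.percentile(rab11a, 50))

print("Pearson r (EGFR vs Rab11a):", pearson_colocalization(egfr, rab11a, cell_mask))
```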
Mutations in CD82 palmitoylation sites Cys5+Cys74 can promote EGFR metabolism in the recycling endosome pathway
Cycloheximide (CHX) is an inhibitor of protein translation and thereby blocks new protein synthesis. Monensin (Mon) is a protein transport inhibitor that suspends transport from endosomes to the cell membrane and also inhibits recycling endosomes. In addition, chloroquine (CQ) is a lysosomal inhibitor, and MG132 is a proteasome inhibitor. Western blot analysis revealed that after treatment with 1 mM CHX for 8 h, EGFR expression was decreased, and when 10 μM Mon was added, EGFR expression was restored and increased (Figure 3A). The cells were treated with the same concentrations of drugs, and immunofluorescence experiments were performed. When the cells were treated with 10 μM Mon, the average fluorescence intensity of EGFR was significantly increased, as shown in Figure 3B. Taken together, these results indicate that EGFR is metabolized through the recycling endosome pathway.
In this experiment, nocodazole was selected to inhibit the assembly of microtubules. When the CD82 palmitoylation sites Cys5+Cys74 were mutated, immunofluorescence results clearly showed that, compared with the control group, the nocodazole-treated (66 μM, 4 min) group had decreased expression of EGFR in the cytoplasm and increased expression of EGFR on the cell membrane, as shown in Figure 3C. These results suggest that the internalization process of EGFR requires the action of tubulin.
FIP2 is a member of the Rab11 interaction family. In the double-point mutation group at Cys5+Cys74, the WT group and the normal control group, the interaction between FIP2 and Rab11a was detected by immunoprecipitation assay. Western blot analysis showed that FIP2 could directly bind to Rab11a, and that FIP2 could also directly bind to CD82. In the input control group, CD82, FIP2 and Rab11a were all expressed with mutations in the CD82 palmitoylation sites Cys5+Cys74, as shown in Figure 3D.
Discussion
Tetraspanin CD82 can regulate the occurrence, development, and metastasis of most tumors, thereby inhibiting tumor metastasis [18-20]. After palmitoylation modification, tetraspanin proteins located in the cytoplasm can anchor on the cell membrane, promote the formation of the tetraspanin web, support signal transport processes and stably maintain normal physiological functions [21-24]. CD82 can weaken the EGF/EGFR induction signal and inhibit tumor metastasis, but the mechanism is still unclear [25-28]. The effect of CD82 palmitoylation site mutations on the expression, location, and metabolism of EGFR and c-Met in breast cancer cells has remained unclear. In this study, by constructing mutants with different CD82 palmitoylation site mutations, we explored the influence of these mutations on the expression, distribution, and metabolism of EGFR and c-Met.
Figure 2. The interaction and co-localization of CD82 and EGFR. (A) After the mutations at different palmitoylation sites of CD82, the immunoblotting results showed the physical interaction relationship between CD82 and Lamp1, Rab7a and Rab11a. (B) CD82 could directly interact with EGFR in the CD82 palmitoylation Cys5+Cys74 combined mutation group, WT group and normal control group, and protein expression was also found in the input group. (C) For EGFR in the CD82 palmitoylation Cys5+Cys74 combined mutation group, WT group and normal control group, the IP results showed that EGFR can directly bind to CD82, but not to Lamp1, Rab7a or Rab11a. In the input group, the CD82 palmitoylation Cys5+Cys74 combined mutation group, WT group and normal control group all had protein expression. (D,E) When the CD82 palmitoylation sites Cys5+Cys74 were mutated, co-localization of CD82 and EGFR, respectively, with Lamp1, Rab11a and Rab7a. Scale bar = 25 μm.
When the Cys5+Cys74 in CD82 were mutated, the internalization ability of EGFR was strengthened. However, more EGFR could not be stably expressed on the cell membrane, and it was transferred from the cell membrane to the cytoplasm. We further observed that, in terms of total protein, there was no significant difference in EGFR expression among the different CD82 palmitoylation site mutation groups. We confirmed that CD82 palmitoylation mutations at different sites do not change the total expression of EGFR, but change the distribution of EGFR between the cytoplasm and the cytomembrane. CD82 palmitoylation site mutations did not affect the expression or localization of c-Met, indicating that CD82 regulation of c-Met in tumor cells requires the participation of other post-translational modifications or the assistance of other signaling molecules, which needs to be studied in the future. For CD82 itself, palmitoylation site mutations at Cys5+Cys74 also change the distribution of CD82 between the cytoplasm and the cytomembrane.
Therefore, the following hypotheses can be put forward. The internalized EGFR and CD82: (1) may be decomposed through certain metabolic pathways, such as the lysosome pathway; (2) may circulate through the recycling endosome pathway; or (3) may be processed through the late endosome pathway (Figure 4). Lamp1, Rab11a, and Rab7a are lysosome, recycling endosome, and late endosome markers, respectively [29]. With palmitoylation site mutations at Cys5+Cys74, most of the CD82 and EGFR with enhanced internalization ability are located in the recycling endosome. This implies that most CD82 and EGFR are metabolized through the recycling endosome pathway, and a small part is metabolized through the lysosome pathway.
Studies have demonstrated that CD82 can directly bind to EGFR and inhibit EGF-induced cell migration, and the process of weakening of the EGFR signal may be related to endocytosis [30]. In this study, when CD82 palmitoylation sites Cys5+Cys74 were mutated, the enhancement of EGFR internalization was also related to the direct binding of EGFR with CD82, and EGFR and CD82 were transferred into the cytoplasm in the form of direct binding. Endosomes can be transported along microtubules with the assistance of dynein [31]. In this study, to further verify the specific molecular mechanism of EGFR internalization into the cytoplasm after mutations at the CD82 palmitoylation sites Cys5+Cys74 and metabolization through the recycling endosome pathway, the tubulin inhibitor nocodazole was used to inhibit the aggregation of tubulin. When nocodazole was added, most EGFR was located on the cell membrane, and the internalization ability was weakened. Therefore, it can be inferred that after CD82 palmitoylation site mutations at Cys5+Cys74, EGFR is internalized through the binding with CD82, and this process also requires the assistance of tubulin.
To further verify that the CD82 palmitoylation site mutations at Cys5+Cys74 cause EGFR to be internalized and then metabolized through the recycling endosome pathway, monensin was selected as an inhibitor of the recycling endosome pathway. Simultaneously, cycloheximide was selected to inhibit EGFR production, and chloroquine was compared with MG132 as a control group [29,32]. When monensin was added, the reduction in EGFR expression caused by cycloheximide was largely reversed. This suggests that when the recycling endosome pathway is inhibited, EGFR metabolism is blocked, which in turn confirms that EGFR is metabolized through the recycling endosome pathway. Rab11a can be used as a marker protein of the recycling endosome [33]. FIP2 is one of the subfamily members of Rab11-FIPs and plays an important regulatory role in the recycling of molecules to the cell surface [34-37]. Rab11a can recruit myosin Vb and cytoplasmic dynein through its effectors, FIP2 and FIP3 [38]. With CD82 palmitoylation site mutations at Cys5+Cys74, FIP2 could directly bind to Rab11a and CD82. Therefore, it can be inferred that CD82, Rab11a, and FIP2 form a complex to assist the recovery of EGFR to the cell membrane. At the same time, CD82 is also re-expressed on the cell membrane, as shown in Figure 4.
[Figure 4 caption] When the CD82 palmitoylation sites Cys5+Cys74 are mutated, the internalization ability of EGFR is enhanced. After internalization, most EGFR and CD82 enter the recycling endosome in a direct-binding manner; this process requires the assistance of tubulin. CD82 can directly bind with Rab11a and FIP2 to form an EGFR/CD82/Rab11a/FIP2 complex, which is metabolized by the recycling endosome pathway to re-express EGFR and CD82 on the cell membrane surface. The remaining small part of the EGFR/CD82 complex is degraded in the lysosome, and this metabolic process does not go through the late endosome pathway.
Nevertheless, in this study we only explored the metabolic pathways of EGFR after mutations at the CD82 palmitoylation sites Cys5+Cys74. The metabolic pathways in other mutants and the mechanism of CD82 internalization enhancement are still unclear. Whether the EGFR and CD82 located in the lysosome are completely degraded remains to be verified.
In summary, when the Cys5+Cys74 palmitoylation sites of CD82 are mutated, EGFR is transferred from the cell membrane to the cytoplasm in a CD82-mediated manner and localizes to the recycling endosome with the assistance of tubulin-based transport. Furthermore, by forming an EGFR/CD82/Rab11a/FIP2 complex, EGFR and CD82 are transported and recovered to the cell membrane for re-expression. Studies on the metabolic pathways and mechanisms of CD82 and tumor metastasis-related factors in breast cancer cells will help further the understanding of the mechanisms of breast cancer formation and metastasis and provide ideas for more precise targeted therapy.
Supplementary Data
Supplementary data is available at Acta Biochimica et Biophysica Sinica online. | 4,834.2 | 2022-02-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Wavelet-based multifractal analysis of dynamic infrared thermograms to assist in early breast cancer diagnosis
Breast cancer is the most common type of cancer among women and despite recent advances in the medical field, there are still some inherent limitations in the currently used screening techniques. The radiological interpretation of screening X-ray mammograms often leads to over-diagnosis and, as a consequence, to unnecessary traumatic and painful biopsies. Here we propose a computer-aided multifractal analysis of dynamic infrared (IR) imaging as an efficient method for identifying women with risk of breast cancer. Using a wavelet-based multi-scale method to analyze the temporal fluctuations of breast skin temperature collected from a panel of patients with diagnosed breast cancer and some female volunteers with healthy breasts, we show that the multifractal complexity of temperature fluctuations observed in healthy breasts is lost in mammary glands with malignant tumor. Besides potential clinical impact, these results open new perspectives in the investigation of physiological changes that may precede anatomical alterations in breast cancer development.
INTRODUCTION
It is widely recognized that early diagnosis is the key to breast cancer survival. X-ray mammography (Nass et al., 2000; Bronzino, 2006), the gold standard for breast cancer screening, has a rather high false-positive rate and is not always effective in detecting cancer in young women, who generally have dense breast tissue (Jorgensen and Gotzsche, 2009; Tamini et al., 2009), and this despite the increasing use of computer-aided detection/diagnosis methods (Fenton et al., 2011). Biopsy is indeed the only conclusive diagnostic test for breast cancer, but the number of unnecessary biopsies is still too high (Vinitha Sree et al., 2009). Since the original observation by Lawson (1956) that the skin temperature over a malignant tumor is higher than in its neighbourhood, possibly resulting from an abnormal increase of metabolic activity and vascular circulation in the tissues beneath (Yahara et al., 2003; Bronzino, 2006), IR thermography has been considered as a promising non-invasive screening method for breast cancer (Ng, 2009). However, the suitability of static IR imaging for routine screening has been severely questioned (Head and Elliott, 2002; Bronzino, 2006), because of insufficient sensitivity for the detection of deep lesions and limited knowledge of the relationship between surface temperature distributions and thermal diseases. Renewed interest in dynamic IR imaging (Etehadtavakol and Ng, 2013) comes from the rapid development of new digital IR thermography cameras with higher temperature resolution (0.08 °C or better) and faster frame rates (70 Hz) (Joro et al., 2008a), combined with increasing knowledge of tumor angiogenesis, including nitric oxide production by the cancer tissue causing local disturbances in vasomotor (autonomic nervous control of the smooth muscles forcing blood through capillaries) and cardiogenic phenomena as compared to normal tissues (Thomsen and Miles, 1998; Anbar et al., 2001).
The basis for the diagnostic application of dynamic IR imaging is the detection of intensity variations in the temperature rhythms generated at the cardiogenic (1-1.5 Hz) and vasomotor (0.1-0.2 Hz) frequencies (Button et al., 2004; Joro et al., 2008b). In this study we show that, beyond intensity differences in these rhythms between normal and tumor breast tissues, the complexity of the temperature fluctuations about these physiological perfusion oscillations is qualitatively different. Using a wavelet-based multi-scale analysis of temperature fluctuations (Muzy et al., 1991, 1994; Arneodo et al., 1995), we propose to characterize the multifractal properties of these temperature time-series as an effective discriminating method for early screening procedures to identify women with high risk of breast cancer.
THE WAVELET TRANSFORM
The wavelet transform (WT) is a mathematical microscope (Muzy et al., 1991, 1994; Arneodo et al., 1995) that is well suited for the analysis of complex non-stationary time-series such as those found in physiological systems (Ivanov et al., 1999; Goldberger et al., 2002), thanks to its ability to filter out low-frequency trends in the analyzed signal (Materials and Methods). The WT is a space (or, in our study, time)-scale analysis which consists in expanding signals in terms of wavelets constructed from a single function, the "analyzing wavelet" ψ, by means of translations and dilations. The WT of a real-valued function f is defined as (Mallat, 1998):
W[f](t0, a) = (1/a) ∫ f(t) ψ((t − t0)/a) dt,   (1)
where t0 is the time parameter and a (> 0) the scale parameter. By choosing a wavelet ψ whose first n + 1 moments vanish (Supplementary Figure 1), one makes the WT microscope blind to order-n polynomial behavior, a prerequisite for multifractal fluctuation analysis (Muzy et al., 1991, 1994; Arneodo et al., 1995). Indeed, this mathematical microscope can be seen as a singularity scanner. By increasing magnification (decreasing the scale parameter a → 0+) around a given point t, finer and finer details of f can be revealed and quantified via the estimate of the so-called Hölder exponent h(t) (Muzy et al., 1991, 1994; Arneodo et al., 1995).
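A minimal numerical reading of Equation (1), assuming a Gaussian-derivative ("Mexican hat") analyzing wavelet and a toy 50 Hz signal; the sampling step, test signal and chosen scale are illustrative only.

```python
# Numerical sketch of Equation (1): W(t0, a) = (1/a) * integral f(t) psi((t - t0)/a) dt.
import numpy as np

def mexican_hat(u: np.ndarray) -> np.ndarray:
    # Proportional to the 2nd derivative of a Gaussian: first two moments vanish,
    # so constant and linear trends are filtered out.
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def wavelet_coefficient(f: np.ndarray, t: np.ndarray, t0: float, a: float) -> float:
    """Evaluate W(t0, a) by a simple Riemann sum over the sampled signal."""
    dt = t[1] - t[0]
    psi = mexican_hat((t - t0) / a)
    return float(np.sum(f * psi) * dt / a)

t = np.arange(0, 600, 0.02)                    # 10 min sampled at 50 Hz
f = np.cumsum(np.random.randn(t.size)) * 0.01  # toy temperature-like fluctuation
print(wavelet_coefficient(f, t, t0=300.0, a=1.0))
```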
THE WAVELET TRANSFORM MODULUS MAXIMA METHOD
The WT modulus maxima (WTMM) method was originally developed to generalize box-counting techniques (Arneodo et al., 1987) and to remedy the limitations of the structure-function method for the multifractal analysis of one-dimensional (1D) velocity signals in fully developed turbulence (Muzy et al., 1991, 1994; Arneodo et al., 1995). It has proved very efficient to estimate scaling exponents and multifractal spectra (Muzy et al., 1994; Delour et al., 2001; Audit et al., 2002). This method has been generalized in 2D for the multifractal analysis of rough surfaces (Arneodo et al., 2003) and then for the analysis of 3D scalar and vector fields (Kestener and Arneodo, 2004; Arneodo et al., 2008). It has been successfully applied in various areas of fundamental research (Arneodo et al., 1998a, 2003, 2008, 2011; Khalil et al., 2006; Roland et al., 2009; Roux et al., 2009). In the context of the present study, the 1D WTMM method has proved very efficient at discriminating between healthy and sick heart beat dynamics (Ivanov et al., 1999, 2001; Goldberger et al., 2002), whereas the 2D WTMM method can be used to detect microcalcifications and has great potential to assist in cancer diagnosis from digitized mammograms (Kestener et al., 2001; Arneodo et al., 2003). The WTMM method (Muzy et al., 1991, 1994; Arneodo et al., 1995) consists in computing the WT skeleton defined, at each fixed scale a, by the set L(a) of local maxima of the WT modulus |W(t, a)|. These WTMM are disposed on curves connected across scales, called maxima lines l (Supplementary Figure 2), along which the WTMM behave as a^h(t), where h(t) is the Hölder exponent (Muzy et al., 1991, 1994; Arneodo et al., 1995) characterizing the singularity of f at time t. The multifractal formalism amounts to characterizing the relative contributions of each Hölder exponent value via the estimate of the D(h) singularity spectrum, defined as the fractal dimension of the set of points t where h(t) = h. This spectrum can be obtained by investigating the scaling behavior of partition functions defined in terms of wavelet coefficients:
Z(q, a) = Σ_{l ∈ L(a)} |W(t_l(a), a)|^q ∼ a^τ(q),   (2)
where q ∈ R and t_l(a) is the position of the modulus maximum belonging to the maxima line l at scale a. Then from the scaling function τ(q), D(h) is obtained by a Legendre transform (Muzy et al., 1991, 1994; Arneodo et al., 1995):
D(h) = min_q [qh − τ(q)].   (3)
As originally pointed out in Muzy et al. (1994) and Arneodo et al. (1995), one can avoid some practical difficulties that occur when directly performing the Legendre transform of τ(q) by computing the following expectation values:
h(q, a) = Σ_{l ∈ L(a)} ln|W(t_l(a), a)| W̃(q, l, a)  and  D(q, a) = Σ_{l ∈ L(a)} W̃(q, l, a) ln W̃(q, l, a),   (4)
where W̃(q, l, a) = |W(t_l(a), a)|^q / Z(q, a) is the equivalent of a Boltzmann weight in the analogy that links the multifractal formalism to thermodynamics (Arneodo et al., 1995). Then from the slopes of h(q, a) and D(q, a) vs ln a, one gets h(q) and D(q) and therefore the D(h) singularity spectrum as a curve parametrized by q.
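The sketch below condenses Equations (2)-(4) into code. For brevity it sums over the local maxima of |W(·, a)| at each scale rather than chaining full maxima lines across scales, which is a simplification of the WTMM method described above; the analyzing wavelet is passed in as a function (for instance the Gaussian-derivative wavelet sketched earlier), and the coarse sub-sampling of positions is purely for speed.

```python
# Simplified partition-function analysis inspired by the WTMM method.
import numpy as np
from scipy.signal import argrelextrema

def wt_modulus(f, t, a, psi):
    """|W(t0, a)| sampled on a coarse grid of positions t0 (every 8th sample)."""
    dt = t[1] - t[0]
    w = np.array([np.sum(f * psi((t - t0) / a)) * dt / a for t0 in t[::8]])
    return np.abs(w)

def wtmm_spectra(f, t, scales, qs, psi):
    """Estimate tau(q), h(q), D(q) from the slopes of Eqs. (2) and (4) vs ln a."""
    logZ = np.full((len(qs), len(scales)), np.nan)
    h_qa = np.full_like(logZ, np.nan)
    D_qa = np.full_like(logZ, np.nan)
    for j, a in enumerate(scales):
        m = wt_modulus(f, t, a, psi)
        idx = argrelextrema(m, np.greater)[0]            # modulus maxima at scale a
        if idx.size == 0:
            continue
        w = m[idx]
        for i, q in enumerate(qs):
            Z = np.sum(w ** q)                           # Eq. (2)
            boltz = (w ** q) / Z                         # Boltzmann weights
            logZ[i, j] = np.log(Z)
            h_qa[i, j] = np.sum(boltz * np.log(w))       # Eq. (4), h(q, a)
            D_qa[i, j] = np.sum(boltz * np.log(boltz))   # Eq. (4), D(q, a)
    ln_a = np.log(np.asarray(scales, dtype=float))
    tau = {q: np.polyfit(ln_a, logZ[i], 1)[0] for i, q in enumerate(qs)}
    h = {q: np.polyfit(ln_a, h_qa[i], 1)[0] for i, q in enumerate(qs)}
    D = {q: np.polyfit(ln_a, D_qa[i], 1)[0] for i, q in enumerate(qs)}
    return tau, h, D
```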
MONOFRACTAL vs. MULTIFRACTAL FUNCTIONS
Homogeneous monofractal functions, i.e., functions with singularities of a unique Hölder exponent H, are characterized by a linear τ(q) curve of slope H. Monofractal scaling implies that the shape of the probability distribution function (pdf) of the rescaled wavelet coefficients W(·, a)/a^H does not depend on a, as formally expressed by the self-similarity relationship:
ρ_a(W) = a^(−H) ρ(W/a^H),   (5)
where ρ_a denotes the pdf of the WT coefficients at scale a and ρ(w) is a "universal" pdf. A non-linear τ(q) is the signature of non-homogeneous multifractal functions, meaning that the Hölder exponent h(t) is a fluctuating quantity (Muzy et al., 1991, 1994; Arneodo et al., 1995) that depends on t.
STUDY DESIGN AND POPULATION
Subjects were recruited for the present study from the Perm Regional Oncological Dispensary using procedures approved by the Local Ethics Committee (Gileva et al., 2012). They all gave informed consent to participate in this study via the recording of IR thermograms of both mammary glands, the cancerous one and the opposite undiagnosed one with no visible signs of pathology. Our database includes 33 females aged between 37 and 83 (average 57 years) who all went through surgery to remove the histologically confirmed malignant tumor (invasive ductal and/or lobular breast cancer) a few weeks after the thermograms were recorded. The tumors were found at depths from 1 cm down to 12 cm, with sizes varying from 1.2 cm up to 6.5 cm (Table 1). As a control, we also investigated 14 women with intact mammary glands, aged between 23 and 79 (average 49.6 years). This extensive study was preceded and encouraged by a preliminary study with only 6 patients and 3 volunteers (Gerasimova et al., 2013).
IR THERMOGRAPHY IMAGING
Both breasts of healthy volunteers and of patients with breast cancer were imaged with an InSb photovoltaic (PV) detector camera (Joro et al., 2008a). Imaging was performed with the patient in a sitting position with arms down to avoid too much discomfort during imaging. Frontal images were taken at a distance of ∼1 m from the patient at an environmental room temperature of 20-22 °C. The image frame rate was set to 50 Hz. The image data were collected at 14-bit depth on the computer connected to the PV camera. Each image set comprised 30,000 image frames of 256 × 320 pixels acquired during the 10 min immobile imaging phase. To eliminate low-frequency patient movements, skin surface markers were successfully used as reference points for motion correction in the analysis.
DATA SAMPLING
Pixel-based and windowed regional power spectra and wavelet-based multifractal analyses of normal and cancerous breasts were tested to define the best procedure to minimize the effect of camera noise and to ensure statistical convergence in the multifractal spectra estimation. We grouped single-pixel temperature time-series (Figures 1A-C) into 8 × 8 pixel squares spanning 10 × 10 mm² and covering the entire breast (see Figures 4A-C). The results reported correspond to power spectra, partition functions, singularity spectra and WT pdfs averaged over the 64 temperature time-series in these 8 × 8 subareas.
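A schematic version of this grouping step, assuming the thermogram has been loaded as a (frames, height, width) array; the file name, Welch parameters and use of SciPy are illustrative choices, not the authors' exact processing chain.

```python
# Group per-pixel temperature series into 8x8 squares and average a per-pixel
# spectral estimate over the 64 series of each square.
import numpy as np
from scipy.signal import welch

frames = np.load("thermogram.npy")        # placeholder file, shape (n_frames, 256, 320)
n_frames, H, W = frames.shape
block = 8
fs = 50.0                                  # frame rate (Hz)

avg_psd = {}
for by in range(0, H - block + 1, block):
    for bx in range(0, W - block + 1, block):
        cube = frames[:, by:by + block, bx:bx + block].reshape(n_frames, -1)
        freqs, psd = welch(cube, fs=fs, nperseg=4096, axis=0)
        avg_psd[(by, bx)] = psd.mean(axis=1)   # average over the 64 pixel series
```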
SOFTWARE AND DOCUMENTATION
The numerical procedure to perform the WTMM analysis of 1D signals can be downloaded at http://perso.ens-lyon.fr/benjamin.audit/LastWave. LastWave is an open-source software written in C. We recommend interested users to read the LastWave C-Application Programming Interface documentation and to contact the corresponding author to be directed to the part of the code of most relevance to them.
SPECTRAL ANALYSIS REVEALS SCALE-INVARIANCE PROPERTIES IN SKIN TEMPERATURE DYNAMICS IN BOTH CANCEROUS AND HEALTHY BREASTS
We analyzed individual 1-pixel temperature time-series taken from 8 × 8 pixel squares covering the patients' entire breasts. As expected, these time-series generally fluctuate at a higher temperature when recorded in the tumor region of a malignant breast (Figure 1A) than in a symmetrically positioned square on the opposite breast (Figure 1B) or on a healthy breast (Figure 1C). When averaging the corresponding power spectra over the 64 pixels of the considered squares, we observed for the two breasts of patient 20 (age 56) and the healthy breast of volunteer 14 (age 60) a rather convincing 1/f^β power-law scaling over a range of frequencies that extends from the characteristic human respiratory frequency (≈0.3 Hz) up to the cross-over frequency (≈4 Hz) toward (instrumental) white noise (Figure 1D). As confirmed by the WTMM method (Figure 1E), the exponent β = τ(q = 2) = 0.62 ± 0.19 found in the malignant breast is smaller than in the opposite breast of patient 20, β = 1.32 ± 0.11, and in the healthy breast of volunteer 14, β = 1.22 ± 0.11. This difference looks quite significant and very promising in a discriminatory perspective. Unfortunately, the histograms of β values obtained for all 8 × 8 pixel squares covering the 33 cancerous breasts, the 32 opposite breasts (patient 6 had a mastectomy of the right breast) and the 28 volunteer healthy breasts are quite similar (Supplementary Figure 3), with mean values β = 1.09 ± 0.01 (cancer), 1.14 ± 0.01 (opposite) and 1.14 ± 0.01 (healthy). Indeed these histograms extend over a rather wide range of β values: 0.5 ≲ β ≲ 1.9.
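The β exponents quoted here come from log-log fits of the averaged spectra over the scaling range; a minimal version of such a fit, with the 0.3-4 Hz band hard-coded as in the text, could look as follows.

```python
# Fit PSD ~ 1/f^beta over a restricted frequency band; freqs/psd are assumed to
# come from a spectral estimate such as the one sketched previously.
import numpy as np

def fit_beta(freqs: np.ndarray, psd: np.ndarray, fmin: float = 0.3, fmax: float = 4.0) -> float:
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return -slope   # log PSD = -beta * log f + const
```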
WTMM ANALYSIS DISCRIMINATES BETWEEN MONOFRACTAL (TUMOR AREA) AND MULTIFRACTAL (HEALTHY AREA) TEMPERATURE TEMPORAL FLUCTUATIONS
When applying the WTMM method to the cumulative of these temperature time-series, we confirmed that the partition functions Z(q, a) (Equation (2)) display nice scaling properties for q = −1 to 5, over a range of time-scales that we strictly limited to (0.43, 2.30 s) for linear regression fit estimates in a logarithmic representation (Figure 2A). The τ(q) so obtained are well approximated by quadratic spectra (Figure 1E). For the malignant breast of patient 20, τ(q) is nearly linear, as quantified by a very small value of the intermittency coefficient c2 = (4.4 ± 0.6) · 10^-3. This signature of monofractality is confirmed when respectively plotting h(q, a) and D(q, a) (Equation (4)) vs log2 a: the corresponding slopes h(q) and D(q) barely depend on q, meaning that the D(h) singularity spectrum nearly reduces to a single point D(h = c1 = 0.81) = 1 (Figure 1F). This monofractal diagnosis is confirmed when comparing the WT pdfs obtained at different time-scales (Figure 3A); according to Equation (5), they all collapse on a single curve when using the exponent H = c1 (Figure 3A′). In contrast, the τ(q) spectrum obtained for the opposite breast of patient 20 is definitely non-linear, with a no longer negligible (one order of magnitude larger) quadratic term c2 = 0.080 ± 0.001 (Figure 1E), the hallmark of multifractal scaling. As shown in Figures 2B,C, the slopes h(q) and D(q) of h(q, a) and D(q, a) vs log2 a now depend on q. From the estimates of h(q) and D(q), we get the single-humped D(h) spectrum shown in Figure 1F, which is well approximated by a quadratic spectrum with parameters c0 = 0.99 ± 0.05, c1 = 1.23 ± 0.01 and c2 = 0.080 ± 0.001. Because there is no longer a unique scaling exponent c1, the self-similarity relationship (Equation (5)) is not verified, meaning that the shape of the WT coefficient pdf now evolves across scales (Figure 3B), with fatter tails appearing at small scales (Figure 3B′). Interestingly, the τ(q) (Figure 1E) and D(h) (Figure 1F) spectra obtained for the healthy breast of volunteer 14 are quite similar quadratic spectra with parameter values c0 = 0.99 ± 0.03, c1 = 1.17 ± 0.01 and c2 = 0.069 ± 0.002. Again this multifractal diagnosis is strengthened by the observation that the WT coefficient pdf has a shape that evolves across time-scales (Figures 3C,C′).
[Figure 2 caption, continued] The τ(q) spectra shown in Figure 1E were estimated by a linear regression fit of the data in (A) over the range 2.8 ≤ log2 a ≤ 5.2. The D(h) spectra in Figure 1F were obtained by linear regression fits in (B,C) over the same range of time-scales. The analyzing wavelet is ψ(2) (Supplementary Figure 1).
To check the statistical relevance of our multifractal spectra estimates, we generated so-called surrogate series (Theiler et al., 1992; Schreiber and Schmitz, 1996), which preserve the linear (spectral) properties of the original series while destroying any non-linear structure. When estimating τ(q) (Supplementary Figure 4E) and D(h) (Supplementary Figure 4F), we now find for the three breasts a τ(q) spectrum that is quite linear and a D(h) singularity spectrum that almost reduces to a single point D(h) = c1 with a very small width c2 ≤ 0.01. This monofractal diagnosis is confirmed when reproducing this analysis for 100 surrogate series; the histograms of intermittency coefficients c2 obtained for the three breasts are very similar and mainly confined to very small values, with means c2 = 0.012 (cancer), 0.005 (opposite) and 0.006 (healthy) that are all much smaller than the threshold c2 = 0.03 we will further use to discriminate between monofractal (c2 ≤ 0.03) and multifractal (c2 > 0.03) cumulative temperature time-series (Supplementary Figure 5). These results indicate that the cumulative temperature time-series of healthy breasts are not generated by an underlying linear Gaussian process, but have an inherently non-linear structure that is apparently lost in the presence of a malignant tumor.
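A standard way to build such surrogates is Fourier phase randomization, which preserves the power spectrum while destroying non-linear structure; the sketch below is a generic implementation, not necessarily the exact surrogate scheme used in the study.

```python
# Phase-randomized surrogate of a 1D series (Theiler-type).
import numpy as np

def phase_randomized_surrogate(x: np.ndarray, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.size)
    phases[0] = 0.0                      # keep the (zeroed) mean component real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    surrogate = np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)
    return surrogate + x.mean()
```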
MULTIFRACTAL-BASED SEGMENTATION OF BREAST THERMOGRAMS INTO PHYSIOLOGICALLY ALTERED (RISKY) AND NORMAL REGIONS
When extending our wavelet-based multifractal analysis of cumulative temperature time-series to the entire set of 8 × 8 pixel squares that cover the right breast with malignant tumor of patient 20 (Figures 4A,D), her opposite left breast (Figures 4B,E) and the healthy right breast of volunteer 14 (Figures 4C,F), we confirmed, except in a minority of squares, the existence of scaling. In the cancerous breast, a large proportion of squares (49.7%) display monofractal temperature fluctuations with small intermittency coefficient values (c2 < 0.03 in Figure 4D), whereas only a few such squares are found in the opposite breast (7.7% in Figure 4E) and in the volunteer 14 healthy breast (11% in Figure 4F). Both these healthy breasts have a large majority of squares where multifractal scaling is observed (c2 ≥ 0.03), namely 89.4% for the former (Figure 4B) and 65% for the latter (Figure 4C). Note that 43.1% of the squares in the cancerous breast also display multifractal temperature fluctuations as observed for healthy breasts (Figure 4A). These squares indeed cover regions of the breast that are far from the tumor area (left upper quadrant), which is mostly covered by monofractal squares. These results are indeed quite representative of the outcome of the statistical analysis of our entire data set.
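Once a c2 value has been fitted for every 8 × 8 square, the segmentation reported above reduces to a threshold and a percentage count, as in the sketch below (the c2 map file is a placeholder; NaN marks squares without acceptable scaling).

```python
# Classify each square from its intermittency coefficient using the c2 = 0.03 threshold.
import numpy as np

c2_map = np.load("c2_map_breast.npy")        # placeholder: one c2 value per 8x8 square
valid = ~np.isnan(c2_map)
monofractal = valid & (c2_map < 0.03)
multifractal = valid & (c2_map >= 0.03)

print(f"monofractal squares:  {100 * monofractal.sum() / valid.sum():.1f}%")
print(f"multifractal squares: {100 * multifractal.sum() / valid.sum():.1f}%")
```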
A common way to suspect cancer by IR thermography is to look for some dissymmetry between the two breasts of a patient (Etehadtavakol and Ng, 2013). When comparing the percentages of monofractal squares on both the cancer and opposite breasts of the 33 patients (except patient 6), we found that 25 (/32) have more monofractal squares on the cancerous breast (Table 2). Indeed, we found 18 (/33) malignant breasts that have a percentage of monofractal squares greater than the mean value 26.8 ± 3.5 (Figure 5D). Among the other 15 cancerous breasts, 7 have a smaller percentage of monofractal squares that are nonetheless well localized on the tumor region (Figure 4A and Supplementary Figure 6). The remaining 8 cancerous breasts correspond to false negatives, for which not only is the percentage of monofractal squares small but their location is far from the tumor region (Supplementary Figure 7). Among these false negatives, 4 correspond to rather deep tumors in fatty breasts, which can explain why they do not manifest in a qualitative change in temperature dynamics at the skin surface, in patients 12 (size 1.8 cm, depth 12 cm) (Supplementary Figure 8), 16 (3.4 cm, 7 cm), 18 (3.49 cm, 6 cm) and 28 (3.49 cm, 8 cm). When investigating the 32 opposite breasts, 5 of them have a large percentage of monofractal squares (Figure 5D, Table 2 and Supplementary Figure 9). These large percentages, similar to those obtained in malignant breasts, are a probable indication of some physiological changes in the opposite breast that may announce the possible extension of cancer to the second breast. As a control, we reproduced this comparative analysis on the two breasts of the 14 healthy volunteers (Figure 4 and Supplementary Figures 4-7).
CONCLUSIONS
Over the course of a lifetime, 1 in 8 women will be diagnosed with breast cancer. There are no well-established ways to avoid breast cancer (as opposed to lung cancer, for example) and, in the context of breast cancer screening, abnormalities should be detected at an early stage to improve prognosis. Criticism of the use of screening mammography due to over-diagnosis led some researchers to show that one in three breast cancers identified by mammography would not cause symptoms in a patient's lifetime (Jorgensen and Gotzsche, 2009). Therefore, alternative and accurate screening technologies must be developed. The functional and technical background of dynamic IR imaging has the potential for early detection of breast cancer and treatment response evaluation if optimal diagnostic algorithms are developed. We have shown that the wavelet-based multifractal analysis of dynamic IR thermograms is able to discriminate between cancerous breasts, with monofractal (cumulative) temperature temporal fluctuations characterized by a unique singularity exponent (h = c1), and healthy breasts, with multifractal temperature fluctuations requiring a wide range of singularity exponents as quantified by the intermittency coefficient c2 > 0. This is strikingly analogous to the results of a similar wavelet-based analysis of human heart beat dynamics (Ivanov et al., 1999, 2001; Goldberger et al., 2002), where the multifractal character and non-linear properties of the healthy heart rate were shown to be lost in a pathological condition, congestive heart failure. Indeed, this distinction was intrinsically beyond the capability of spectral (Fourier) analysis, which only gives access to the power-spectrum exponent β = τ(q = 2) = −c0 + 2c1 − 2c2, and not to the full τ(q) spectrum required for multifractal diagnosis (Muzy et al., 1991, 1994; Arneodo et al., 1995, 2008). Furthermore, the fact that c1 (∼1) is about one order of magnitude larger than the intermittency coefficient c2 (≲0.1) explains why, very much like c1 (Figure 5A), the spectral exponent β ∼ −c0 + 2c1 ∼ 1 ≫ c2 (Supplementary Figure 3) has no discriminatory power.
The interdisciplinary effort revealing specific fractal characteristics for healthy and cancerous breast tissues definitely challenges current knowledge of the physical, physiological and clinical fundamentals of oncogenesis. Fundamentally, our results indicate that skin temperature fluctuations of healthy breasts are more complex (multifractal) than previously suspected. They definitely raise new challenging questions for ongoing efforts to develop physiological 3D breast models that account for the skin surface temperature distribution in the presence (or absence) of an internal tumor (Ng and Sudharsan, 2004; Xu et al., 2008; Lin et al., 2009). The observed drastic simplification from multifractal to monofractal skin temperature dynamics may result from some increase in blood flow and cellular activity associated with the presence of a tumor (Thomsen and Miles, 1998; Anbar et al., 2001; Button et al., 2004; Joro et al., 2008b). More likely, it can be the signature of some architectural change in the cellular microenvironment of the breast tumor (Bissell and Hines, 2011) that may deeply affect heat transfer and related thermomechanics in breast tissue (Xu et al., 2008; Quail and Joyce, 2013). Identifying the regulation mechanisms that underlie this loss of multifractal temperature dynamics will be an important step toward understanding breast cancer development, tumor growth and progression. Dynamic IR thermography is a non-invasive and objective screening method that is inexpensive, quick and painless for the patient. Future use of wavelet-based multifractal processing of dynamic IR thermography could help identify women with high risk of breast cancer prior to more traumatic and painful examinations such as mammography and biopsy. It could also prove to be a valuable and reliable adjunct tool for early detection of tumors in locations other than the mammary glands.
GRANT SUPPORT
This work was supported by INSERM and ITMO Cancer under contract PC201201-084862 "Physiques, mathématiques ou sciences de l'ingénieur appliqués au Cancer", by the Perm Regional Government (Russia) under the contract "Multiscale approaches in mechanobiology for early cancer diagnosis", and by the Maine Cancer Foundation. | 5,572 | 2014-05-08T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Language Detoxification with Attribute-Discriminative Latent Space
Transformer-based Language Models (LMs) have achieved impressive results on natural language understanding tasks, but they can also generate toxic text such as insults, threats, and profanity, limiting their real-world applications. To overcome this issue, a few text generation approaches aim to detoxify toxic texts using additional LMs or perturbations. However, previous methods require excessive memory, computations, and time which are serious bottlenecks in their real-world application. To address such limitations, we propose an effective yet efficient method for language detoxification using an attribute-discriminative latent space. Specifically, we project the latent space of an original Transformer LM onto a discriminative latent space that well-separates texts by their attributes using a projection block and an attribute discriminator. This allows the LM to control the text generation to be non-toxic with minimal memory and computation overhead. We validate our model, Attribute-Discriminative Language Model (ADLM) on detoxified language and dialogue generation tasks, on which our method significantly outperforms baselines both in performance and efficiency.
Introduction
Pre-training language models (LMs) on large-scale web text corpora (i.e., Common Crawl and OpenWebTextCorpus (Gokaslan and Cohen, 2019)) has significantly improved their language generation performance (Radford et al., 2019; Yang et al., 2019; Dai et al., 2019; Shoeybi et al., 2019; Li et al., 2020; Brown et al., 2020) by allowing them to learn meaningful relations between words. However, since the models are trained on massive web-crawled text data that is not exhaustively filtered, they are prone to generating unexpected and undesired texts (Sheng et al., 2019; Wallace et al., 2019) which are often also inappropriate (see Table 1). Specifically, LMs trained on unfiltered texts can randomly generate racial slurs and sexually explicit and violent expressions, which are highly toxic (Groenwold et al., 2020; Luccioni and Viviano, 2021; Xu et al., 2021; Dale et al., 2021a). This is one of the main obstacles to deploying pretrained LMs in real-world applications (e.g., conversational agents). Furthermore, as demonstrated in Gehman et al. (2020); Baheti et al. (2021); Dale et al. (2021b), LMs are prone to generating toxic language even from non-toxic prompts or contexts. One simple and straightforward approach to tackle this problem is to eliminate toxic and biased texts by detecting them in the training dataset (Zhou et al., 2021; Zampieri et al., 2019). However, as the size of LMs increases, the training corpora have also expanded enormously (Brown et al., 2020; Du et al., 2021). Thoroughly removing or filtering out all toxic words or sentences from such a large-scale corpus and retraining the LM from scratch could be costly and impractical (Bender et al., 2021).

* Equal contribution; ordering determined by coin toss. Warning: this paper contains content that may be offensive.

Figure 1: Max Toxicity. Comparison of the toxicity of texts generated by previous language detoxification methods and ours, against the number of model parameters and the inference time per 100 generated texts on a single GPU. Toxicity is calculated on random-10K prompts from RealToxicityPrompts (Gehman et al., 2020). Our model achieves the best language detoxification performance while being time- and memory-efficient.

Figure 2: Both non-toxic and toxic sentences are used as input. We tag the attribute information to each latent vector. The discriminative projector (projection block) then projects onto a new latent space in which toxic and non-toxic texts are separable through the discriminator. To make the latent space attribute-discriminative, the discriminator learns to predict the attribute of each latent vector. To preserve the relationships of the learned word embeddings and control fluency, ADLM regularizes the projector with EWC. The resulting attribute-discriminative latent space is visualized on the right side.
To overcome such challenges, previous works have proposed to control pre-trained LMs by utilizing attribute-labeled datasets (e.g., toxic and nontoxic). They modify the decoding process either by adversarially perturbing the LM with a toxicity discriminator (Dathathri et al., 2020) or using additional finetuned LMs on targeted attribute data to suppress toxic logits and amplify non-toxic logits of the base LMs (Krause et al., 2020;Liu et al., 2021a). However, existing methods for language detoxification are impractical because of their high inefficiency. The perturbation-based method (Dathathri et al., 2020) slows down the inference time of the original GPT-2 (Radford et al., 2019) by 40 times due to the high cost of gradient computation. While the methods of Krause et al. (2020) and Liu et al. (2021a) are as fast as GPT-2, both additionally require auxiliary LMs to shift the logits toward those of non-toxic texts, which is memory inefficient.
In this paper, we propose a novel and effective language detoxification method that utilizes a single LM and is also time- and memory-efficient. To prevent toxic language generation from the original GPT-2 latent space, we found that, without additional LMs to control the logits, simply projecting the original latent space onto a controllable discriminative latent space could steer the LM to generate non-toxic language. Specifically, we use a projection block and an attribute discriminator to project the samples onto a latent space that is well-separated by the target attribute. We refer to this model as an Attribute-Discriminative LM (ADLM) (Figure 2). To the best of our knowledge, this is the first work on language detoxification that performs controlled text generation in the latent space and that does not require excessive computations at inference time or additional LMs.
To verify the effectiveness and efficiency of the proposed ADLM, we validate our method on two language detoxification tasks: detoxified language and dialogue generation. With 10K random prompts from the RealToxicityPrompts dataset (Gehman et al., 2020), we conduct a generic language modeling experiment for detoxification. The experimental results demonstrate that our ADLM generates non-toxic continuations for the given prompts, regardless of whether they are toxic or non-toxic, outperforming all compared baselines with high efficiency. On the language detoxification task for dialogue generation (Baheti et al., 2021; Sun et al., 2022), our ADLM generates safer responses than baselines on the ToxiChat and DiaSafety datasets. Lastly, to further show the general applicability of our method to any attribute-controlled text generation task, we validate ADLM on a sentiment-controlled text generation task (Socher et al., 2013), on which our model also achieves impressive performance (Appendix D). Moreover, we also verify the quality of the generated sentences from our model via a human study, which further confirms that it generates fluent and non-toxic sentences. In summary, our contributions are as follows:
• We propose a novel LM for language detoxification, with a projected attribute-discriminative latent space learned by training a discriminator to classify texts by their attributes.
• We introduce a time- and memory-efficient language detoxification method using our attribute-discriminative language model (ADLM), which does not require excessive computational or memory overhead at inference time (Figure 1).
• Our method largely outperforms existing methods on both generic language detoxification and real-world dialogue detoxification tasks.

Table 1: Example prompts and continuations.
Prompt: "A former CNN anchor who survived a harrowing robbery attempt said on Monday that her right to bear"
GroundTruth: arms saved her life, and anyone who believes otherwise can "shut the fuck up."
GPT-2: arms in 1995 had become the new norm. "I was against it," she said. "It was foolish. It was not anything I could find. It was a throwback to '90s America, which was essentially insane."
ADLM: arms acquire new defense concepts.
Prompt: "Every Monday we publish the most unbelievable stories"
GroundTruth: of climbing stupidity submitted by our readers.
ADLM: from the Archives. New interest in emerging electronic technologies is fueled by an interest in educating the public.
Related Work
Pre-trained language models (LMs) (Radford et al., 2019; Shoeybi et al., 2019; Gao et al., 2020; Brown et al., 2020; Du et al., 2021) mostly concentrate on human-like text generation, focusing on the structure of the generated texts rather than on their content, and are often not controllable. To design LMs that can generate texts with desired properties, additional modifications are necessary (Yu et al., 2017; Hu et al., 2017; Ziegler et al., 2019; Lee et al., 2021). Story generation (Fan et al., 2018; Guan et al., 2020), attribute (e.g., sentiment, topic, or emotion) controlled generation (Yang and Klein, 2021; Khalifa et al., 2021; Chan et al., 2021; Liu et al., 2021b), and summarization (Chu and Liu, 2019) are active topics of research on controlled text generation. While the literature on controlled text generation is vast, in this paper we mainly focus on methods for language detoxification, as it has been a critical problem in deploying LMs to real-world applications (Gehman et al., 2020). The simplest methods to tackle language detoxification either pre-train LMs on datasets that contain only the desired attributes, as done by Domain-Adaptive Pretraining (DAPT) (Gururangan et al., 2020), or conditionally prepend a prefix ahead of each text, as done by Conditional Transformer Language (CTRL) (Keskar et al., 2019) and Attribute conditioning (ATCON) (Gehman et al., 2020). Since these approaches utilize only a single attribute token at the front, controlling the sequences does not work well, and when these models are exposed to toxic texts in the pretraining phase, controlled language generation becomes even more difficult. Another approach to the language detoxification problem is to train auxiliary LMs that guide the base LM in the decoding phase. Generative Discriminator (GeDi) (Krause et al., 2020) employs an ATCON model as the discriminator, and Decoding-time Experts (DExperts) (Liu et al., 2021a) uses expert and anti-expert LMs, each of which is a DAPT model trained only on the toxic or non-toxic subset of the dataset. However, such auxiliary-LM approaches are highly memory-inefficient. On the other hand, Plug-and-Play Language Model (PPLM) (Dathathri et al., 2020) uses a single LM and utilizes an attribute discriminator to generate gradient perturbations toward the given attributes; however, during inference it takes considerably longer because it samples each word through multiple backward passes. In contrast, our method requires a single LM and does not suffer from the memory and computational inefficiency of the existing methods while obtaining better performance.
Method
We now describe a novel language detoxification method using our Attribute-Discriminative Language Model (ADLM), which can efficiently perform controlled text generation for a given attribute using a projected discriminative-latent vector. In Section 3.1, we first briefly describe the base LM architecture, general language modeling, previous detoxified language modeling and dialogue generation modeling. Then, in Section 3.2, we describe our model architecture, training objective, and sampling method.
Background
Language models. A Language Model (LM) predicts the next words for a given text sequence by learning the joint probability distribution over words in given texts (Bengio et al., 2003;Mikolov et al., 2010). An LM can be trained either in an autoregressive or autoencoder manner to learn the distributed representations of words. The autoregressive approaches (Radford et al., 2019;Keskar et al., 2019;Dai et al., 2019;Kitaev et al., 2020;Yang et al., 2019) learn to predict the next word given the sequence of previously generated words, whereas autoencoder approaches (Devlin et al., 2019;Lan et al., 2020;Liu et al., 2019;Sanh et al., 2019;Clark et al., 2020) learn to anticipate the missing or masked words utilizing bidirectional contexts.
In this paper, we use an autoregressive LM, GPT-2 (Radford et al., 2019), as our base model. GPT-2 is composed of a Transformer and a head layer. The Transformer (Vaswani et al., 2017) consists of multiple blocks, each composed of a position-wise feed-forward network, multi-head self-attention, and layer normalization. The Transformer encodes the contextual embedding of the given input sequence x_{1:t-1}, where i:j denotes the i-th through j-th tokens of the sequence. The head layer is a linear layer that predicts the logits (o_t) of the possible next token x_t based on the hidden states h_{1:t-1} = [h_1, h_2, ..., h_{t-1}] ∈ R^{(t-1)×d}, which are the outputs of the Transformer layers. Formally, we can define the LM succinctly as follows:

h_{1:t-1} = Transformer(x_{1:t-1}; θ_T),    o_t = Head(h_{1:t-1}; θ_H),    (1)

where o_t ∈ R^{|V|}, |V| is the vocabulary size, and θ_T and θ_H are the Transformer's and the head layer's parameters, respectively.
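As a concrete illustration of this decomposition, the minimal sketch below (not the authors' code) uses the HuggingFace GPT-2 implementation to obtain the Transformer hidden states h_{1:t-1} and the next-token logits o_t for a prompt; the prompt string is only an example.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Every Monday we publish"                              # illustrative prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids    # x_{1:t-1}

with torch.no_grad():
    out = model(input_ids, output_hidden_states=True)

hidden = out.hidden_states[-1]         # h_{1:t-1}: Transformer outputs, shape (1, t-1, d)
next_token_logits = out.logits[:, -1]  # o_t: head-layer logits over the vocabulary, shape (1, |V|)
print(hidden.shape, next_token_logits.shape)
```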
General language model. In generic language modeling, the initially given input sequence is called the prompt x_{1:m-1} = (x_1, ..., x_{m-1}) and the text sequence generated after it is called the continuation x_{m:n} = (x_m, ..., x_n). The goal of language modeling is then to generate a coherent continuation x_{m:n} to the preceding prompt x_{1:m-1}:

p(x_{m:n} | x_{1:m-1}) = ∏_{i=m}^{n} P(x_i | x_{1:i-1}),    (2)

where P is the softmax function that computes the probability of the next token from the input x_{1:i-1}. The model learns the distribution of the next token x_i conditioned on the previously generated tokens, using the chain rule of probability as in Equation 2.
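As an illustration of the chain-rule factorization in Eq. (2), the hedged sketch below scores a continuation given a prompt under GPT-2 by summing per-token log-probabilities; the prompt and continuation strings are arbitrary examples.

```python
import torch
import torch.nn.functional as F
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt_ids = tokenizer("The weather today is", return_tensors="pt").input_ids  # x_{1:m-1}
cont_ids = tokenizer(" sunny and warm.", return_tensors="pt").input_ids        # x_{m:n}
input_ids = torch.cat([prompt_ids, cont_ids], dim=1)

with torch.no_grad():
    logits = model(input_ids).logits                       # shape (1, n, |V|)

# log P(x_i | x_{1:i-1}) at every position, keeping only the continuation positions
log_probs = F.log_softmax(logits[:, :-1], dim=-1)
token_lp = log_probs.gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
continuation_lp = token_lp[:, prompt_ids.size(1) - 1:].sum()   # sum over i = m..n
print(float(continuation_lp))
```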
Detoxified language model. Detoxified language modeling can be considered a controlled attribute text generation task, but one that must always generate non-toxic sequences, even from toxic prompts. This task, referred to as language detoxification, is a challenging problem that requires strong attribute control while preserving the fluency of the LM. For language detoxification, the objective is to learn to generate texts toward the desired attribute a (i.e., non-toxic): x_{m:n} = (x_m, x_{m+1}, ..., x_n), where x_{m:n} denotes the continuation that corresponds to the desired attribute a. The objective is to learn the distribution of the sequence x_{m:n} conditioned on a in an autoregressive manner.
Dialogue generation model. In dialogue generation, the input sequence is called the context and the generated sequence is called the response. The dialogue generation model learns to generate context-related, human-like responses. Since dialogue generation models interact with users, language detoxification is an essential task for their real-world application. Similar to the detoxified language model, the dialogue generation model learns the distribution of the response sequence x_{m:n} conditioned on the attribute a and the context sequence x_{1:m-1}, with an LM.
ADLM: Attribute-Discriminative Language Model
Previously, language detoxification was applied only at decoding time, using additional LMs or by perturbing the LM, which are further trained on each attribute dataset to guide the logits of the pre-trained large base LM. However, these approaches are computation- and memory-inefficient, and we therefore propose a novel single-LM approach for language detoxification that uses a latent space to control the attributes of the generated texts. Specifically, we learn a projected latent embedding space in which texts are well discriminated by their attributes, and use it to control the attribute of generated text sequences. We discuss ADLM's architecture, objective, and sampling method in the following paragraphs.
Model architecture. Our model consists of a single LM, a projection block, and an attribute discriminator (Figure 3a). The projection block, ProjB, learns to project the original latent space onto a discriminative latent space that embeds the attribute information. The attribute is embedded into this discriminative latent space through a single embedding layer, AttEmb, followed by the projection block, as follows:

z_a = AttEmb(a; θ_a),    h̃_{1:t-1} = ProjB(h_{1:t-1}, z_a; θ_B),    (3)

where θ_a and θ_B are the parameters of each component, and h̃_{1:t-1} are the contextual embeddings projected together with the attribute embedding z_a.

Figure 3: Overview of ADLM. We design ADLM by introducing a projection block on top of a frozen LM and a discriminator for learning an attribute-discriminative latent space. Then, during inference, ADLM generates two types of logits and suppresses the toxic logits while amplifying the non-toxic logits.
To learn a discriminative latent space h̃_{1:t-1} in which the contextualized word embeddings are well separated by their attributes, we use an attribute discriminator (Disc):

y = Disc(h̃_{1:t-1}; θ_D),    (4)

where y ∈ R^{|A|} is the output logit predicting the attribute a, |A| is the cardinality of the attribute set, and θ_D are the parameters of the discriminator. The module performs average pooling of h̃_{1:t-1} to condense the overall representation and then passes the averaged vector through an affine transformation to determine the corresponding attribute a. The discriminator classifies h̃_{1:t-1}, which renders the newly constructed latent space attribute-discriminative (see Figure 2).
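A minimal sketch of how these components could be wired together on top of a frozen GPT-2 is shown below. This is not the released ADLM implementation: using a single Transformer encoder layer for ProjB, adding (rather than otherwise combining) the attribute embedding, and the weight-tied head are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Config

class ADLMSketch(nn.Module):
    def __init__(self, num_attributes=2):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")
        for p in self.backbone.parameters():            # the original LM stays frozen
            p.requires_grad = False
        cfg = GPT2Config.from_pretrained("gpt2")
        d = cfg.n_embd
        self.att_emb = nn.Embedding(num_attributes, d)             # AttEmb
        self.proj_block = nn.TransformerEncoderLayer(               # ProjB (one block, assumed)
            d_model=d, nhead=cfg.n_head, batch_first=True)
        self.head = nn.Linear(d, cfg.vocab_size, bias=False)        # LM head
        self.head.weight = self.backbone.wte.weight                 # tied with word embeddings
        self.disc = nn.Linear(d, num_attributes)                    # Disc

    def forward(self, input_ids, attribute):
        h = self.backbone(input_ids).last_hidden_state              # h_{1:t-1}
        z_a = self.att_emb(attribute).unsqueeze(1)                   # z_a, broadcast over positions
        h_tilde = self.proj_block(h + z_a)                           # projected latent space
        logits = self.head(h_tilde)                                  # next-token logits
        attr_logits = self.disc(h_tilde.mean(dim=1))                 # average pooling -> attribute logit y
        return logits, attr_logits
```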
Training objective. We jointly train the components of ADLM in an end-to-end manner. Let us denote the dataset D = {X, A}, where x ∈ X is a training text sequence and a ∈ A is its corresponding attribute label, and the set of model parameters is θ = {θ_a, θ_B, θ_D}.
Our training objective consists of three terms. The first is the autoregressive LM loss for conditional language modeling, which learns to reconstruct the given input text x^i conditioned on the preceding tokens x^i_{<t} and the attribute a^i:

L_LM = − Σ_i Σ_{t=1}^{T_i} log p_θ(x^i_t | x^i_{<t}, a^i),    (5)

where T_i is the total length of the i-th input x^i. The second objective directly enforces the projected embeddings to be attribute-discriminative:

L_Disc = − Σ_i log p_{θ_D}(a^i | x^i).    (6)

Lastly, we also propose a regularizer for the projected latent space that preserves the relationships between the word embeddings of the original latent space, to alleviate the potential negative impact of strong detoxification on fluency. To this end, we apply Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), a regularization often used in continual learning that uses the Fisher information matrix to put higher regularization weights on the updates of more important parameters:

L_EWC = Σ_{j=1}^{|θ_B|} F_j (θ_{B,j} − θ*_{B,j})²,    (7)

where j indexes the j-th of the |θ_B| parameters of ProjB, θ*_B are the parameters of ProjB trained without the discriminator, F is the Fisher information matrix that puts more weight on parameters that were useful for θ*_B, and λ in Eq. (8) is a scale controlling how strongly θ_B is kept close to θ*_B. Our final combined objective minimizes the sum of the two cross-entropy loss terms and the EWC regularizer:

L = L_LM + L_Disc + λ L_EWC.    (8)

Minimizing the total loss L allows our ADLM to control the attributes of the generated texts in the latent space.
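The combined objective could be organized as in the sketch below; the fisher and theta_star inputs (the Fisher information and the ProjB parameters trained without the discriminator) are assumed to be precomputed dictionaries keyed by parameter name, and the function signature is illustrative rather than the authors' API.

```python
import torch.nn.functional as F

def adlm_loss(model, input_ids, attribute, fisher, theta_star, lam=0.1):
    logits, attr_logits = model(input_ids, attribute)

    # (5) conditional LM loss: predict x_t from x_<t and the attribute a
    lm_loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1))

    # (6) discriminator loss: make the projected latent space attribute-separable
    disc_loss = F.cross_entropy(attr_logits, attribute)

    # (7) EWC regularizer on the projection block parameters
    ewc = 0.0
    for name, p in model.proj_block.named_parameters():
        ewc = ewc + (fisher[name] * (p - theta_star[name]) ** 2).sum()

    # (8) total objective
    return lm_loss + disc_loss + lam * ewc
```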
Sampling. Our model constrains the logits used for text generation toward the vocabulary of the desired attribute. We can obtain the different attribute logits from the attribute-discriminative latent space of ADLM, which uses much less memory during inference than previous methods.
Our model computes both types of logits, o_t and ¬o_t, for text generation based on the desired (non-toxic; a) and undesired (toxic; ¬a) attributes, as shown in Figure 3b. Each logit is computed as follows:

o_t = Head(ProjB(h_{1:t-1}, z_a; θ_B); θ_H),    (9)
¬o_t = Head(ProjB(h_{1:t-1}, z_{¬a}; θ_B); θ_H).    (10)

The non-toxic logits (o_t) assign high probability to non-toxic tokens, and the toxic logits (¬o_t) assign high probability to toxic tokens. From this difference in probability, tokens that have greater probability under the toxic logits than under the non-toxic logits can be presumed to be toxic tokens, which could lead to the generation of toxic texts. Therefore, at every token generation step we compute the difference between the logits, Δo_t = o_t − ¬o_t, and suppress the tokens that have higher probability under the toxic logits, scaled by a constant suppression factor α, to obtain the final decoding logits ō_t.
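One possible instantiation of this decoding step, reusing the sketch model above, is given below; the specific suppression rule (penalizing only tokens that the toxic attribute prefers) is an assumption, since the paper's exact formula is not reproduced in this text.

```python
import torch

@torch.no_grad()
def suppressed_next_logits(model, input_ids, nontoxic_id=0, toxic_id=1, alpha=4.0):
    a = torch.tensor([nontoxic_id])            # desired attribute a
    not_a = torch.tensor([toxic_id])           # undesired attribute ¬a
    o, _ = model(input_ids, a)                 # Eq. (9): non-toxic logits
    neg_o, _ = model(input_ids, not_a)         # Eq. (10): toxic logits
    o_t, neg_o_t = o[:, -1], neg_o[:, -1]
    delta = o_t - neg_o_t                      # Δo_t = o_t - ¬o_t
    # suppress tokens the toxic attribute prefers (Δo_t < 0), scaled by alpha
    return o_t + alpha * torch.clamp(delta, max=0.0)
```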
Experimental Results
To validate our ADLM, we conduct a language generation task on RealToxicityPrompts (Gehman et al., 2020) and dialogue generation tasks on the ToxiChat (Baheti et al., 2021) and DiaSafety (Sun et al., 2022) datasets.
Detoxification for Language Generation
Baselines. We compare against the following baselines for generic language detoxification tasks, using GPT-2 as the base language model. All compared models, including ours, are trained on Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge dataset and evaluated on random 10K prompts from RealToxicityPrompts. The details of the hyperparameters used for each model are provided in Appendix B.2.
• Attribute conditioning (ATCON; Gehman et al. (2020)): This baseline learns the distribution of the generated texts conditioned on task-specific control codes (e.g., toxic or non-toxic) prepended to the texts.

Automatic Evaluation. To validate our language detoxification method, we evaluate the toxicity of the texts it generates, as well as its efficiency. Moreover, we examine the diversity of the generated texts. To automatically measure toxicity, we utilize the Perspective API, which returns toxicity scores for given texts. To measure diversity, we calculate the mean number of distinct n-grams (Li et al., 2016) normalized by the total text length.
The results in Table 2 show that ADLM largely outperforms baselines in the language detoxification performance. Compared to GeDi, ADLM can lower the toxicity of the generated texts to 0.28 with a significantly smaller number of parameters (1/7) and ×2 faster inference time. Moreover, our model is able to generate more diverse texts compared to those generated by baselines.
Ablation study. We examine the effect of each component of our ADLM, i.e., architectural design, dataset design, and training modules, in Table 3. We observe that balancing the toxic and non-toxic data is the most important factor in constructing a well-discriminated latent space. Moreover, when we utilize a discriminator, our model is able to discriminate the texts more effectively along with the attribute embedding tokens, which supports our hypothesis that obtaining a well-discriminated projected latent space is the key factor for successful detoxification.
Analysis of toxicity types. We further examine which types of toxic texts are most strongly suppressed by our model compared to GPT-2. As shown in Figure 4, our model suppresses the toxicity level of the generated texts across all types compared to the baselines. Notably, ADLM successfully suppresses toxicity of the threat type, which DExperts fails to detoxify. Threat is one of the frequent types of toxic sentences that GPT-2 generates with the highest probability (0.624). This explains why DExperts is vulnerable to threats: since DExperts ultimately employs the original latent space of GPT-2, it cannot significantly change its language generation behavior. On the other hand, our ADLM modifies the original latent space into an attribute-discriminative one and thus can effectively suppress them. Another notable point is that all models, including ADLM, cannot handle flirtation well. However, by checking the generated examples, we found that the Perspective API assigns high flirtation scores to sentences in which words such as women, her, she, and like appear, which results in misclassification of sentences that do not contain any flirting context, since these are commonly used words.
Detoxification for Dialogue Generation
Baselines. For the detoxified dialogue generation task, we use DialoGPT (Zhang et al., 2019) as the baseline language model. We compare against DialoGPT, DAPT, and ATCON.

Automatic Evaluation. To validate dialogue detoxification performance, we evaluate responses by the percentage of bad words and by offensiveness, using classifiers that predict the degree of toxicity and the types of toxic sentences (Baheti et al., 2021; Sun et al., 2022). Further, we also test the stance of the responses, which indicates whether they agree with the context or not. Table 4 shows that our model suppresses toxic responses better than the baselines. We further examine our method on another toxic dialogue dataset, DiaSafety. As shown in Figure 5, our method generates safer responses across different categories of toxic dialogue. The results on both datasets show that our method achieves consistent language detoxification performance on dialogue generation tasks for diverse categories of toxic language, effectively suppressing the toxicity of the generated responses even when the model is exposed to toxic data, which is essential for real-world dialogue applications.
Perplexity of Detoxified Texts
To examine the quality of the generated texts, perplexity (PPL) is frequently used as an automatic measure of fluency. However, since strong detoxification methods may generate texts that largely disagree with those in the test dataset (i.e., generating non-toxic continuations for toxic prompts), higher PPL is somewhat inevitable. As shown in Table 5, our model generates around twice as many non-toxic continuations from toxic prompts, with as much as 46.75% reduced toxicity compared to baselines, but yields 109.05% higher PPL compared to that of DExperts. However, the increased PPL mostly results from generating incoherent text sequences in order to avoid toxic language generation for toxic prompts, and it does not necessarily imply that the quality of the generated texts is degraded. This is clearly shown by the results of the human study (Figure 6) in the next subsection, in which the participants ranked the fluency of the language generated by our method higher and its toxicity lower.
Human Evaluation of Generated Texts
Although we demonstrate the effectiveness of our method with automatic evaluation, in language generation human judgment is the most important measurement. Thus, we performed a human evaluation of texts generated using our method, comparing them to those generated by the best-performing baselines, DExperts and GeDi (Figure 6). We evaluate the toxicity of the generated texts and their quality, e.g., grammatical correctness, topic coherence, and overall fluency, by recruiting 45 participants on Mechanical Turk. The experimental details are provided in Appendix B.3. The results show that our model is considered to have the best detoxification performance by human judgment as well (lower is better), with p < 0.05 in a paired t-test. Notably, our model is evaluated to have better fluency than the baselines (higher is better). The texts generated by our model are evaluated to be more grammatically correct and fluent than those generated by GeDi and DExperts, with p-values of less than 0.05 in paired t-tests. As for coherency, there was no difference among the compared models, with p > 0.05. These results reconfirm that our model generates fluent and effectively detoxified texts.
Conclusion
In this paper, we proposed a novel and effective attribute-controllable language model, ADLM, for efficient language detoxification. Our ADLM learns an attribute-discriminative latent space with a projection Transformer layer on top of the original pretrained LM and an attribute discriminator that differentiates texts by their attributes. Our method is shown to be effective for detoxifying texts in both language and dialogue generation tasks, outperforming all baselines in automatic and human evaluation, without requiring the large computational and memory overhead of existing methods that use multiple LMs or additional computations.

Figure 6: Results of human evaluation. Bars represent average scores on each qualitative criterion used for language detoxification. ADLM has the lowest toxicity while having comparable fluency to DExperts and GeDi.
Limitations
Recent Transformer-based language models are prone to generating toxic texts such as insults, threats, and profanities. Therefore, ensuring safety in language generation is a crucial task that is necessary for their deployment in real-world applications. We achieve this goal with an efficient solution that does not require multiple LMs or further pretraining on a large refined corpus, which is computationally expensive. However, even with our techniques, the language model is not guaranteed to be completely safe and may generate toxic language, albeit at a significantly lower rate. Furthermore, when toxic prompts are provided, the model may generate incoherent sequences to avoid toxic generation, which leads to reduced fluency compared to that of the original language model. Yet, this is a general limitation of detoxified language modeling, which is inevitable as no method can change the given prompts.
A Terminology

Here, we describe in more detail the terminology used in the manuscript.
Attribute. The characteristic of the sentence in terms of toxicity. Toxic and non-toxic are types of attributes in the toxicity task.
Latent space. We denote the hidden space between the Transformer and the head layer of the language model as the latent space.
Toxicity. The score of how harmful or unpleasant the provided text is, ranging from 0 to 1.0. A sentence with a score larger than 0.5 is considered toxic; a sentence with a score smaller than 0.5 is considered non-toxic.
Type of toxicity. The Perspective API detects toxic sentences with eight different types: profanity, sexually explicit, identity attack, flirtation, threat, insult, severe toxicity, and toxicity. The results reported in the main manuscript are based on the toxicity score.
Toxicity probability. The probability of generating toxic sentences (score ≥ 0.5) among the 25 generations from a single prompt. For example, if five of the 25 generations have a score larger than 0.5, the toxicity probability is 5/25 = 0.2.
Expectation of max toxicity. Expected Max Toxicity (Exp. Max Toxicity) is the mean of the maximum toxicity over 25 generations, i.e., the maximum toxicity score among the 25 generations for each prompt, averaged over the evaluation set.
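A small sketch of how these two statistics could be computed from a (prompts × 25) matrix of Perspective API toxicity scores is given below; the array layout and function name are illustrative.

```python
import numpy as np

def toxicity_metrics(scores):
    scores = np.asarray(scores)                          # shape: (num_prompts, 25)
    # Expectation of max toxicity: per-prompt maximum, averaged over the evaluation set
    exp_max_toxicity = scores.max(axis=1).mean()
    # Toxicity probability: per-prompt fraction of the 25 generations scoring >= 0.5
    # (e.g. 5 toxic generations out of 25 gives 0.2), averaged over prompts
    toxicity_probability = (scores >= 0.5).mean(axis=1).mean()
    return exp_max_toxicity, toxicity_probability

print(toxicity_metrics(np.random.rand(10, 25)))          # toy scores for illustration
```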
Fluency. Fluency measures how fluent the continuation is. Automatic evaluation of fluency is calculated with GPT-2 XL: fluency is measured as the perplexity of the output generated by the targeted models under GPT-2 XL.
Diversity. Diversity measures how diverse the words generated by the models are. Automatic evaluation of diversity is computed by counting the unique n-grams normalized by the total text length. Dist-1, Dist-2, and Dist-3 stand for the values for 1-grams, 2-grams, and 3-grams, respectively.
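The Dist-n computation described here can be sketched as follows, assuming the generated continuations are already tokenized.

```python
def dist_n(token_lists, n):
    """Number of unique n-grams normalized by the total number of generated tokens."""
    ngrams, total = set(), 0
    for tokens in token_lists:
        total += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total, 1)

generations = [["the", "cat", "sat"], ["the", "dog", "ran", "fast"]]   # toy example
print(dist_n(generations, 1), dist_n(generations, 2), dist_n(generations, 3))
```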
B.1 Dataset
Toxicity dataset. For the training set, we use the dataset from the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. The dataset is annotated by humans. Comments that more than 50% of annotators labeled as toxic form the toxic class, and comments that no annotator labeled as toxic form the non-toxic class. The toxic and non-toxic classes consist of 160K and 1.4M comments, respectively. Since we need to control our hidden states, we duplicate the toxic comments up to the size of the non-toxic comments to balance the two classes and form a stable representation.
For the evaluation set, we use several subsets of the RealToxicityPrompts (Gehman et al., 2020) dataset. The 100K set contains all evaluation prompts from RealToxicityPrompts. The random 10K prompts are random samples of 5K toxic and 5K non-toxic prompts from the RealToxicityPrompts dataset (Liu et al., 2021a). We sample 25 continuations per prompt with nucleus sampling at top-p 0.9. The temperature is set to 1 and the maximum continuation length is set to 20.
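A hedged sketch of this generation setup with the HuggingFace generate API is shown below; the plain GPT-2 checkpoint and the prompt are placeholders, and the released evaluation scripts may differ.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "Every Monday we publish"                        # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,              # nucleus sampling
        top_p=0.9,
        temperature=1.0,
        max_new_tokens=20,           # max continuation length of 20
        num_return_sequences=25,     # 25 continuations per prompt
        pad_token_id=tokenizer.eos_token_id,
    )
continuations = tokenizer.batch_decode(
    outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```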
Toxicity dataset for dialogue generation. We train our model on the Reddit conversation dataset from Baheti et al. (2021). Each conversation consists of a title, a post, and a response, with offensive and stance labels indicating whether it is a toxic or conforming comment.
B.2 Baseline
DAPT. For the language detoxification task, DAPT is further trained on a non-toxic corpus, OpenWebText (Gokaslan and Cohen, 2019). The results of DAPT (small) are from Gehman et al. (2020).
ATCON. ATCON is a model that learns the distribution of the generated text by conditioning on control codes that are specific to each task. For the language detoxification task, the text is prepended with the control codes toxic and nontoxic. The results of ATCON are evaluated on the 10K RealToxicityPrompts subset (Gehman et al., 2020).
PPLM. PPLM consists of a classifier that backpropagates gradients to the LM multiple times to generate texts with the desired attributes. Because of the high computational cost of this model, 10 sentences are sampled per prompt. For the language detoxification task, the PPLM results are those reported by Gehman et al. (2020) on the random 10K prompts from RealToxicityPrompts. The model is based on GPT-2 medium.
GeDi. GeDi guides the generation of each token by determining the attribute probability of the given text, which is obtained via Bayes' rule by normalizing over two attribute-conditional distributions of next tokens. To this end, it uses two LMs: a base LM and a discriminator LM. The discriminator LM is trained as ATCON, which learns the attribute-conditional distributions, and the base LM focuses on generation with the guidance of the discriminator LM. For the language detoxification task, the results of GeDi are evaluated on random 10K prompts from RealToxicityPrompts. We utilized the model provided by Krause et al. (2020), which is based on GPT-2 medium.
DExperts. Built on the concept of an expert and an anti-expert, DExperts uses three LMs: a base LM, an expert, and an anti-expert. The expert and the anti-expert are each trained on a specific subset of the dataset: the non-toxic and toxic subsets in the language detoxification task, and the positive and negative subsets in the sentiment-controlled task. DExperts uses the logits of both the expert and the anti-expert to suppress and amplify the logit values of the base LM so that it samples the desired vocabulary. For the language detoxification task, the results of DExperts are evaluated on random 10K prompts from RealToxicityPrompts. We reproduced DExperts with small experts based on GPT-2 small, for which the detoxification performance was the best among the GPT-2 sizes.
B.3 Human evaluation
We conduct a human evaluation, as shown in Figure 7, with 45 participants. We compare against DExperts and GeDi, the two best-performing baselines in the automatic evaluation. We first randomly choose 20 prompts from the random-10K subset. Then, we randomly select one of the 25 generated continuations for each prompt and show the texts generated by our model, DExperts, and GeDi in random order.
For language detoxification, the 45 participants therefore evaluated 60 continuations on i) toxicity, ii) grammatical fluency, iii) topic coherency, and iv) overall fluency. For each question, the participants gave a score from 1 to 5 indicating how toxic or how fluent the provided continuation was. For the results, we average the scores over all 20 sequences for each question.
We provided the standard of the score for each question. For toxicity, scores 1, 3, and 5 mean not toxic at all, feel toxic, and very toxic (contains toxic words), respectively. For grammatical correctness, score 1, 2, 3, 4, and 5 stands for grammatically poor, weak, understandable, minor mistake, and good. For topic coherency, scores 1, 3, and 5 are a totally different topic, similar topic but not fluent, and good coherency, respectively. For fluency, the score 1, 2, 3, 4, and 5 are does not make any sense, weak, limited, understandable, and good.
As shown in Figure 6, our model scores 2.24, 3.60, 3.00, and 3.39 for toxicity, grammatical correctness, coherency, and fluency, respectively. In sum, our model generates texts that are rated as less than "feels toxic", grammatically correct with a few minor mistakes, on a similar topic to the prompt though not always fluent, and of moderate fluency.
C.1 Modeling Details
We use GPT-2 from HuggingFace Transformers version 4.2.0 (Wolf et al., 2020), implemented in the PyTorch framework. For RealToxicityPrompts (Gehman et al., 2020), our ADLM is trained with a block size of 128, a batch size of 32 per GPU, a learning rate of 5e-5, and 3 epochs. The same setting is used for sentiment-controlled text generation. Since the sizes of the training datasets differ in the dialogue generation tasks, the hyperparameters are determined empirically. For ToxiChat (Baheti et al., 2021), our ADLM and the baselines are trained with a batch size of 32 per GPU, a learning rate of 2e-5, and three epochs. For DiaSafety (Sun et al., 2022), our ADLM and the baselines are trained with a batch size of eight per GPU, a learning rate of 2e-5, and five epochs. The block sizes for both dialogue datasets are not truncated unless they exceed 512. For all datasets, we set λ to 0.1 for the EWC loss and use the AdamW optimizer with epsilon 1e-8 and a linear scheduler. Training is performed on a single NVIDIA RTX 2080 Ti or Quadro RTX 8000.
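A sketch of this optimization setup is given below; model, train_loader, adlm_loss, fisher, and theta_star are assumed to be defined elsewhere (for example, as in the earlier sketches), and the loop is illustrative rather than the authors' training script.

```python
import torch
from transformers import get_linear_schedule_with_warmup

trainable = [p for p in model.parameters() if p.requires_grad]      # frozen backbone excluded
optimizer = torch.optim.AdamW(trainable, lr=5e-5, eps=1e-8)
num_training_steps = len(train_loader) * 3                           # 3 epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)

for epoch in range(3):
    for batch in train_loader:
        loss = adlm_loss(model, batch["input_ids"], batch["attribute"],
                         fisher, theta_star, lam=0.1)                 # EWC weight lambda = 0.1
        loss.backward()
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```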
C.2 Generation
For RealToxicityPrompts (Gehman et al., 2020) and sentiment-controlled text generation, we use the same generation settings for all baselines and our models, except for PPLM (Dathathri et al., 2020). We perform a total of 25 generations for each prompt, and the maximum length of generated sentences is 20. For PPLM (Dathathri et al., 2020), we generate 10 continuations per prompt due to its computational cost. For our generation, we set α to 4.0 for the language detoxification task. For dialogue generation, the setup is different. For ToxiChat (Baheti et al., 2021), the models generate until the end-of-sequence token appears or until the maximum sequence length of 500 is reached, and α is set to 1.5. Lastly, for DiaSafety (Sun et al., 2022), the maximum generation length is set to 128 and α is set to 1.5. All generations use nucleus sampling with top-p 0.9 and temperature scaling of 1.0 for the softmax.

Table 6: Performance of sentiment-controlled generation. The task here is to generate a positive continuation from negative prompts (Neg → Pos) and a negative continuation from positive prompts (Pos → Neg).
Model | Neg → Pos (↑) | Pos → Neg (↑)

D.1 Sentiment-controlled text generation

Dataset. For the training set, we use SST-5 (Socher et al., 2013). Each review in the dataset is rated on a scale from 1 to 5 (very negative to very positive). Reviews with ratings of 4 to 5 are assigned as positive reviews and reviews with ratings of 1 to 2 as negative reviews. For the evaluation set, there are 2.5K prompts for each sentiment, provided by Liu et al. (2021a) and obtained from OWTC (Gokaslan and Cohen, 2019).
Baselines. For sentiment-controlled text generation, the positive and negative DAPT (Gururangan et al., 2020) models were independently trained on each subset of the SST-5 dataset. Similar to ATCON, CTRL (Keskar et al., 2019), which uses "Reviews Rating: 5.0" and "Reviews Rating: 1.0" as control codes, is used. The results of DAPT, CTRL, GeDi, PPLM, and DExperts on the sentiment-controlled text generation task are the values reported by Liu et al. (2021a).
Automatic Evaluation. To demonstrate that our method is generally applicable to any controllable text generation task, we further validate our model on the sentiment-controlled text generation problem. To this end, we consider the problem of generating continuations with the opposite sentiment to the given prompts (e.g., positive continuations for negative prompts). For automatic evaluation, to validate whether the generated text matches the target sentiment, we use HuggingFace's sentiment analysis classifier (Wolf et al., 2020).
The results in Table 6 show that our model achieves impressive performance on controlled text generation as well. This suggests that our method is applicable to any attribute-controlled text generation tasks.
D.2 Ablation experiment
To evaluate fluency, we measure the mean perplexity of the continuations according to the GPT-2 XL model. We conduct ablation experiments on the suppression scale α used at decoding time and on the EWC weight λ in Eq. (8). As shown in Figure 8, as α decreases and λ increases, toxicity increases while perplexity decreases.
Toxicity control and fluency are thus in a trade-off relationship, and we can improve one at the expense of the other by controlling the values of α and λ.
D.3 Generation examples
Tables 7 and 8 show examples generated by our model for the language detoxification task. Tables 9 and 10 show examples generated by our model for the dialogue detoxification task on the ToxiChat dataset.
| 8,975.4 | 2022-10-19T00:00:00.000 | ["Computer Science"] |
Palladium Catalyzed Domino Sonogashira Coupling of 2-Chloro-3-(Chloromethyl)Quinolines with Terminal Acetylenes Followed by Dimerization
Abstract A domino Sonogashira coupling of 2-chloro-3-(chloromethyl)quinolines with terminal acetylenes followed by dimerization is described. This palladium-catalyzed reaction gave novel dimeric quinolinium salts in good to high yields. Based on empirical evidence, a plausible mechanism is proposed. The resulting quinolinium salts are amenable to further synthetic elaboration, such as reactions with phenoxide and thiophenoxide to yield the corresponding ether and thioether.
Choosing a versatile starting material can provide access to the synthesis of various useful molecules. During the past two decades, 2-chloroquinoline-3-carboxaldehydes have attracted increasing attention from synthetic chemists as starting materials for constructing diverse quinoline-based molecules. 14 As part of our continuing research on quinoline chemistry, 15 herein we report a palladium-catalyzed Sonogashira reaction followed by dimerization of 2-chloro-3-(chloromethyl)quinolines 1.
Results and discussion
We prepared 2-chloro-3-(chloromethyl)quinolines 1 as starting materials from acetanilides bearing different substituents, as outlined in Scheme 1. 16 A series of experiments was performed with 2-chloro-3-(chloromethyl)-8-methylquinoline 1a and phenylacetylene 2a as the model reaction. Pleasingly, this reaction in the presence of PdCl2, PPh3, and TEA in toluene gave 3a instead of the expected simple Sonogashira coupling product 3'a (Scheme 2).
The proposed mechanism for the reaction is shown in Scheme 4. The mechanism starts from the in-situ generation of the Pd(0) complex with PPh3, followed by oxidative addition of the Ar-Cl bond of the quinoline heterocycle to form I. Addition of the terminal acetylene to intermediate I, assisted by Et3N, generates complex II, which, by reductive elimination, leads to compound III. Finally, dimerization of III via nucleophilic substitution of the nitrogen of one molecule at the Csp3-Cl of another forms the salt 3 (Scheme 4).
To investigate the mechanism described above, the reactions in Scheme 5 were performed. Treatment of 2-chloro-3-(chloromethyl)quinoline with Et3N in refluxing CH3CN did not yield the product even after 24 h (Scheme 5). This may be due to the electron-withdrawing Cl at the 2-position of the quinoline, which reduces the nucleophilicity of the nitrogen toward substitution. In addition, 3-(chloromethyl)-2-(phenylethynyl)quinoline (B), which bears an alkyne as an electron-releasing group, tended to dimerize to 4d in the presence of Et3N. Notably, increasing the temperature to reflux converted B to an unidentified polymer.
Conclusions
In summary, because of the importance of the quinoline core and the ability of 2-chloro-3-(chloromethyl)quinolines to be elaborated into more complex compounds, the starting materials 1 were subjected to reaction with terminal alkynes under Sonogashira conditions. Surprisingly, in addition to the Sonogashira coupling, the corresponding adducts dimerized in situ to afford the novel and attractive molecules 3. Interestingly, the product 3b reacted efficiently with phenoxide and thiophenoxide to yield the corresponding ether and thioether, respectively.
Funding
We are thankful to Alzahra University and the Iran National Science Foundation (INSF) for financial support.
| 674.8 | 2019-11-27T00:00:00.000 | ["Chemistry"] |
Study on Ablation Characteristics of Femtosecond Laser Nanoscale Processing for Aluminum Nitride and Lead Zirconate Titanate Ceramics
This paper analytically investigates ultrashort pulsed laser nanoscale processing of aluminum nitride (AlN) and lead zirconate titanate (PZT) ceramics. The processing characteristics of an ultrashort pulsed laser differ from those of a long-pulsed laser due to its ultrahigh intensity, ultrahigh power, and ultrashort duration, and ultrasmall-scale processing of materials can be achieved with an ultrashort pulsed laser. This study proposes a model to analyze ultrashort pulsed laser nanoscale processing of AlN and PZT ceramics. The effects of optical penetration absorption and thermal diffusion on temperature are also discussed. The results reveal that the variation of ablation rate with laser fluence predicted by this work agrees with the available measured data for ultrashort pulsed laser processing of AlN and PZT. For femtosecond lasers, optical absorption and thermal diffusion govern the ablated depth per pulse at low and high laser fluences, respectively. The thermal diffusion length is small relative to the optical penetration depth for a femtosecond laser, so optical penetration absorption governs the temperature in the workpiece. On the other hand, for a picosecond laser, the thermal diffusion length is large compared to the optical penetration depth, and thermal diffusion determines the temperature in the workpiece.
Introduction
A femtosecond laser was employed to pattern aluminum nitride (AlN). The AlN film was patterned precisely, with only small changes to the material structure of the film surface, by femtosecond laser processing [1]. AlN thin films are usually employed as buffer layers that accommodate the strain between sapphire and AlN. This misfit strain makes the buffer layer an important structure for GaN-based devices [2], and the buffer layer is also important for the growth of epitaxial gallium nitride (GaN) layers on sapphire substrates, which has promoted the development of GaN electronic and optoelectronic devices [3]. AlN thin films are also considered important substrate materials, since the lattice constants and thermal expansion coefficients of AlN and the GaN or AlGaN epitaxial layers are similar [4]. AlN is a wide-band-gap semiconductor material. Its applications include radio frequency filters [5], ultraviolet (UV) solid-state light sources [6], acoustic resonators [5], and photodetectors [7].
Lead zirconate titanate (PZT) thin films embedded in micro-electro-mechanical systems (MEMS) can enhance efficiency and reduce the size of MEMS devices. PZT thin films can be formed by laser deposition [8] and can work as resonators [9] and sensors [10].
The properties of PZT thin films are significantly related to their crystallization and microstructure [11]. PZT structures in MEMS require very high precision, speed, and good controllability.
PZT is not easy to machine on the micrometer scale with traditional methods due to its high hardness and brittleness. Therefore, an ultrafast laser is suitable for processing PZT with this precision [12].
A nanosecond ultraviolet (UV) laser has been used to pattern electrodes on thick, free-standing graphite oxide (GO) films [13]. Direct laser writing (DLW) is a suitable technique for three-dimensional (3D) micro- and even nanostructuring [13]. Femtosecond laser processing of optical materials is a good technology for the high-quality micro- and nanofabrication of functional devices [14]. Femtosecond laser processing of nano-crystalline CVD diamond coatings has also been studied [15]. This paper proposes an analytical model to study the femtosecond laser processing of AlN and PZT ceramics. The depth per pulse versus laser fluence of AlN and PZT ablated by a femtosecond laser is predicted and compared with the measured data. The ablation characteristics of femtosecond laser processing of AlN and PZT are analyzed. The effects of optical penetration absorption and thermal diffusion on temperature are also discussed.
Analysis
A model is developed for an ultrashort pulsed laser processing of PZT and AlN.
Unlike metals, which are full of free electrons and require a two-temperature model because the electron and lattice temperatures differ during the laser pulse, PZT and AlN ceramics can be described by a thermal transport model based on phonons as carriers. The model in polar coordinates can be written as Eq. (1). The factor of 3 in Eq. (1) is taken to ensure that 90 percent of the laser energy is included within the energy-distribution radius. The pulse duration of a femtosecond laser is on the order of 10^-15 s, which is shorter than the relaxation time of thermal diffusion, on the order of 10^-12 s in AlN or PZT ceramics. Hence, before the relaxation time of thermal diffusion is reached during the laser pulse, heat conduction in the r-, θ-, and z-directions is assumed to be negligible because the heat cannot diffuse in time. On the other hand, after the relaxation time of thermal diffusion is reached, the main heat diffusion is in the z-direction, because the workpiece employed in this study is about 10^-3 m thick in the z-direction and of effectively infinite size in the r-direction. Eq. (1) can therefore be reduced to Eq. (2). Laser absorption in the material is recognized to be volume absorption; when the optical penetration depth is very small, surface absorption is obtained. It is assumed that heat on the surface of the workpiece is transported into the ambient by convection. The ambient temperature and the vaporization temperature are set at the bottom surface of the workpiece and at the solid-vapor interface, respectively.
The location of material removal is assumed to be at the solid-vapor interface (ablation interface). The balance of thermal energy at the solid-vapor interface is given by Eq. (3), and the nondimensional parameters are defined in Eq. (4). The nondimensional form of Eq. (2) can then be written as Eq. (5). The symbols D/L and δ/L in Eq. (5) represent the nondimensional thermal diffusion length and optical penetration depth, respectively. When the nondimensional thermal diffusion length is small enough relative to the nondimensional optical penetration depth, the thermal diffusion term can be neglected; the temperature in Eq. (5) is then governed by the direct optical penetration absorption of the incident laser pulse. On the other hand, when the nondimensional optical penetration depth is small enough relative to the nondimensional thermal diffusion length, optical penetration absorption occurs only on the workpiece surface, and the temperature inside the workpiece is not directly affected by the optical absorption on the surface. For a femtosecond pulsed laser, the laser pulse duration is on the order of 10^-15 s, whereas the relaxation time of thermal diffusion, cL²/k, is on the order of 10^-12 s for these materials. The ratio of the ultrashort laser pulse duration to the thermal diffusion time is far smaller than one. Therefore, the thermal diffusion term on the right-hand side of Eq. (5) can be neglected because the other nondimensional terms are of order one.
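As a rough numerical illustration of this comparison, the sketch below evaluates a thermal diffusion length sqrt(alpha·t_p) against an optical penetration depth for femtosecond and picosecond pulse durations; the diffusivity and penetration-depth values, and the use of the pulse duration as the characteristic diffusion time, are order-of-magnitude assumptions for illustration only, not data from this paper.

```python
import math

alpha = 1e-5     # assumed thermal diffusivity k/(rho*c) in m^2/s (placeholder value)
delta = 1e-7     # assumed optical penetration depth in m (placeholder value, 100 nm)

for name, t_p in [("femtosecond pulse (1e-15 s)", 1e-15),
                  ("picosecond pulse (1e-12 s)", 1e-12)]:
    diff_len = math.sqrt(alpha * t_p)          # thermal diffusion length during one pulse
    print(f"{name}: diffusion length = {diff_len:.2e} m, "
          f"ratio to penetration depth = {diff_len / delta:.2e}")
```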
The nondimensional initial condition, the boundary conditions, and the nondimensional relation of energy balance at the solid-vapor interface follow from the definitions above. The method of Laplace transform is employed to obtain the solution of Eq. (5). Denoting the Laplace transform of the temperature, Eq. (6) in the Laplace domain yields Eq. (10); in a similar way, the nondimensional initial and boundary conditions are written in the Laplace domain. The general solution of Eq. (10) with the initial condition in the Laplace domain is then obtained. Since the temperature must remain finite, the boundary condition on the surface of the workpiece determines the constant c1, giving the temperature solution of Eq. (13). The first and second terms on the right-hand side of Eq. (13) are related, respectively, to the effects of thermal diffusion and of direct optical absorption on the temperature in the workpiece.
The nondimensional temperature in the time domain is obtained by taking the inverse Laplace transform of Eq. (13), giving Eq. (15). Combining Eq. (15) and the energy balance equation at the solid-vapor interface, one can obtain the relation at the solid-vapor interface. The variation of the ablated depth per pulse with laser fluence is plotted in Fig. 4 for the ultrafast laser ablation of AlN. The difference between Fig. 3 and Fig. 4 is the scale of the horizontal axis: the horizontal axes in Fig. 3 and Fig. 4 are scaled linearly and logarithmically in laser fluence, respectively. In Fig. 3, the increase of ablation rate with laser fluence at high fluence is slightly slower than at low fluence. A possible reason is that thermal ablation by the residual laser energy occurs at high laser fluences after the directly incident ultrafast laser pulse performs optical penetration ablation over the optical absorption length. On the linear horizontal axis of laser fluence, the solid line predicted by this work agrees with the triangle symbols measured in the published paper [16], and the ablation rate still increases with increasing laser fluence, although the increase at high fluence is slightly slower than at low fluence, unlike in Fig. 3.
Results and Discussion
The relation of the ablation rate of PZT to laser fluence is plotted in Fig. 5 for the prediction of this work and the measurement of the published paper [17]. The solid line and triangle symbols, respectively, stand for the ablated depth per pulse versus laser fluence predicted by this work and measured by Di Maio et al [17].
Conclusions
An analytical study of ultrashort pulsed laser processing of AlN and PZT is conducted in this paper. The model proposes that the material is removed at the solid-vapor interface. The variation of ablation rate with laser fluence predicted by this work agrees with the available measured data for ultrashort pulsed laser processing of AlN and PZT. The ablation rate increases with increasing laser fluence, and the increase is faster at low laser fluences than at high laser fluences. For pulse durations on the order of femtoseconds, the thermal diffusion length is small relative to the optical penetration depth; therefore the thermal diffusion terms are negligible and only the optical penetration term determines the temperature. On the other hand, for pulse durations on the order of picoseconds, the thermal diffusion length is large compared to the optical penetration depth, and optical absorption occurs almost only on the surface of the workpiece.
Therefore, the temperature in the material is determined only by the thermal diffusion of heat from the workpiece surface, which absorbs the directly incident laser energy.
| 2,507.4 | 2021-09-15T00:00:00.000 | ["Physics", "Materials Science"] |
Pharmacological rescue of mitochondrial and neuronal defects in SPG7 hereditary spastic paraplegia patient neurons using high throughput assays
SPG7 is the most common form of autosomal recessive hereditary spastic paraplegia (HSP). There is a lack of HSP-SPG7 human neuronal models with which to understand the disease mechanism and identify new drug treatments. We generated a human neuronal model of HSP-SPG7 using induced pluripotent stem (iPS) cell technology. We first generated iPS cells from three HSP-SPG7 patients carrying different disease-causing variants and three healthy controls. The iPS cells were differentiated to form neural progenitor cells (NPCs) and then from NPCs to mature cortical neurons. Mitochondrial and neuronal defects were measured using a high-throughput imaging and analysis-based assay in live cells. Our results show that, compared to control NPCs, patient NPCs had aberrant mitochondrial morphology with increased mitochondrial size and reduced membrane potential. Patient NPCs developed into mature cortical neurons with amplified mitochondrial morphological and functional defects, along with defects in neuron morphology − reduced neurite complexity and length, reduced synaptic gene and protein expression and activity, reduced viability, and increased axonal degeneration. Treatment of patient neurons with Bz-423, a mitochondrial permeability pore regulator, restored the mitochondrial and neurite morphological defects and the mitochondrial membrane potential back to control neuron levels, and rescued the low viability and increased degeneration of patient neurons. This study establishes a direct link between mitochondrial and neuronal defects in HSP-SPG7 patient neurons. We present a strategy for testing mitochondria-targeting drugs to rescue neuronal defects in HSP-SPG7 patient neurons.
Introduction
Hereditary spastic paraplegia (HSP) is an inherited, progressive neurodegenerative disease, causing spasticity in the lower limbs as a consequence of corticospinal tract degeneration. HSP-SPG7 is the most common form of autosomal recessive HSP (Lange et al., 2022; Méreaux et al., 2022). SPG7-encoded paraplegin is involved in multiple mitochondrial processes including mitochondrial protein quality surveillance (Casari et al., 1998), mitochondrial biogenesis (Nolden et al., 2005), and regulation of the mitochondrial permeability transition pore (Sambri et al., 2020). The Spg7 knock-out mouse model mimics the clinical features of HSP-SPG7 patients (Ferreirinha et al., 2004). Paraplegin-deficient mice showed slow progressive motor impairment with difficulty in maintaining balance on the rotarod, associated with distal axonopathy of spinal axons. Mitochondrial morphological abnormalities, i.e., swollen mitochondria (at 4.5 months), were the first pathological sign observed in the spinal cord axons of paraplegin-deficient mice, several months before any evidence of axonal swelling (at 8 months) or degeneration (at 15 months) was detected (Ferreirinha et al., 2004). The mitochondrial phenotype correlated with the onset of motor impairment on the rotarod apparatus at 4 months of age, suggesting that it is the primary cause of axonal dysfunction and that the gait impairment of paraplegin-deficient mice is not directly the result of the loss of axons. Further, intramuscular delivery of paraplegin cDNA via adeno-associated viral vectors rescued mitochondrial morphological abnormalities and ameliorated the rotarod performance of paraplegin-deficient mice (Pirozzi et al., 2005), thus identifying mitochondria as a new therapeutic target to develop treatments for HSP-SPG7.
In another study, primary neuronal cultures from paraplegin-deficient mice had impaired opening of the mitochondrial permeability transition pore, causing dysregulated synaptic activity and impaired synaptic vesicle dynamics leading to ineffective synaptic transmission (Sambri et al., 2020). Pharmacological treatment with Benzodiazepine (Bz)-423 at low nanomolar doses regulated the mitochondrial permeability transition pore opening, normalised synaptic transmission, and rescued motor impairment of the paraplegin-deficient mice. Unfortunately, Bz-423 has off-target effects. It is anti-proliferative and cytotoxic at higher concentrations and is an immunomodulator (Sundberg et al., 2006), making it less desirable to be considered as a therapeutic treatment for SPG7 patients.
Not much is known about HSP-SPG7 disease phenotypes in human cortical neurons. The advent of induced pluripotent stem (iPS) cell technology has opened up the possibility of a scalable source of human cells to produce disease-relevant models to understand the disease mechanism and identify disease-associated phenotypes that can be used as cellular biomarkers for drug discovery. iPS cells can be readily produced from each patient's peripheral blood mononuclear cells (PBMCs) or skin fibroblasts, and reliably differentiated into different cells of the central nervous system, including the cortical neurons that degenerate in HSP patients (Wali et al., 2020). Patient-derived iPSCs have been generated for multiple forms of HSP including SPG4 (Wali et al., 2020), SPG11 (Pérez-Brangulí et al., 2014), SPG15 (Denton et al., 2018) and SPG48 (Denton et al., 2018). These cell models have been able to recapitulate disease-associated phenotypes including reduced axonal transport, increased axonal swellings, and degeneration. Here we evaluate the disease-associated phenotypes of HSP-SPG7 using iPS cell technology. We first generated iPS cells from three HSP-SPG7 patients carrying different pathogenic variants and three healthy controls. iPS cells were differentiated to form neural progenitor cells (NPCs) and then from NPCs to mature cortical neurons.
HSP-SPG7 patient NPCs showed defects in mitochondrial morphology and function. The mitochondria in patient NPCs were larger in size and had reduced membrane potential compared to control NPCs. These patient NPCs progressed to form mature neurons with amplified mitochondrial defects, shorter and less complex neurites, downregulated expression of genes related to synaptic function, reduced synaptic activity, reduced viability and increased neurite degeneration. To assess the role of mitochondria in the neuronal phenotypes observed in the patient mature neurons, we treated patient neurons with Bz-423, a regulator of the mitochondrial permeability transition pore, as they differentiated from NPCs to mature neurons. Bz-423 treatment restored normal mitochondrial function and rescued neuronal phenotypes, i.e., short and less complex neurites, viability and neurite degeneration in mature neurons, suggesting that mitochondrial dysfunction leads to disease-associated phenotypes in HSP-SPG7 and that mitochondria are a potential therapeutic target. We evaluate mitochondrial and neuronal phenotypes in live neurons using relatively inexpensive live cell dyes (compared to antibodies) and automated high throughput imaging and analysis. This assay enables future drug screening applications to identify a potential drug treatment candidate for HSP-SPG7.
Participants
Hereditary spastic paraplegia patients in this study were reviewed and examined by Professor Carolyn Sue and Dr. Kishore Kumar, movement disorder specialists. All patients had a confirmed diagnosis of HSP-SPG7 on genetic testing. Age, gender, and gene mutation details of the study participants are presented in Table 1. Clinical details of Patient 1 were published previously (identified as Patient 3 in that publication).
Differentiation of iPS cells to neural progenitors
iPS cells were differentiated into cortical neural progenitor cells (NPCs) using the dual SMAD induction and FGF2 expansion protocol for 25 days (Gantner et al., 2021). First, a 24-well cell culture plate was coated with 15 μg/mL of Poly-L-ornithine solution (Catalog no: P4957, Sigma) at room temperature for 2 h and then with 10 μg/mL of mouse Laminin (Catalog no: L2020, Sigma) at room temperature for another 2 h. Following plate coating, iPS cells were seeded at a density of 7.125 × 10^5 cells per well. On Day 0, i.e., the day of iPS cell seeding, the cells were cultured in the cortical neuron base medium with 10 μM Rock inhibitor Y-27632 (Catalog no: 72304, StemCell technologies). From Days 1-10, the cells were cultured in cortical neuron base medium with 100 nM of LDN193189 (2HCl) (Catalog no: 72147, StemCell technologies) and 10 μM of SB431542 (Catalog no: 72232, StemCell technologies). From Days 11-19, the cells were cultured in cortical neuron base medium with 20 ng/mL of FGF2. From Days 20-25, the cells were cultured in cortical neuron base medium only. During this 25-day culture period, the cells were passaged two times, i.e., on Day 11 and Day 20. While passaging, cortical neuron base medium with 10 μM Rock inhibitor Y-27632 was used; the Rock inhibitor was withdrawn after 24 h. On Day 25, the cells were ready for further differentiation into mature cortical neurons, or could be differentiated at a later time. In the latter case, the cells were frozen with CryoStor solution (Catalog no: 07930, StemCell technologies) and stored in a cryo tank for long term storage.
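For readers who want to track a similar timeline programmatically, a minimal sketch of the 25-day media schedule described above is shown below as a lookup table; the dictionary keys and the helper function are illustrative bookkeeping, not part of the published protocol.

```python
# Minimal sketch of the 25-day NPC differentiation schedule described above.
# The day ranges and media mirror the text; the helper name is illustrative.

NPC_DIFFERENTIATION_SCHEDULE = {
    (0, 0): "cortical neuron base medium + 10 uM Y-27632 (Rock inhibitor)",
    (1, 10): "base medium + 100 nM LDN193189 + 10 uM SB431542 (dual SMAD inhibition)",
    (11, 19): "base medium + 20 ng/mL FGF2 (expansion)",
    (20, 25): "base medium only",
}
PASSAGE_DAYS = (11, 20)  # Y-27632 added at passaging, withdrawn after 24 h


def medium_for_day(day: int) -> str:
    """Return the culture medium used on a given day of the protocol."""
    for (start, end), medium in NPC_DIFFERENTIATION_SCHEDULE.items():
        if start <= day <= end:
            return medium
    raise ValueError(f"Day {day} is outside the 25-day protocol")


if __name__ == "__main__":
    for d in (0, 5, 15, 22):
        print(d, medium_for_day(d))
```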
Differentiation of neural progenitors to mature cortical neurons
Neural progenitors were differentiated into mature cortical neurons in 96-well plates (for imaging-based experiments) or 24-well plates (for RNA-Seq and protein extraction) using a modified version of our differentiation protocol (Wali et al., 2023). On Day 0, as described in the "Differentiation of iPS cells to neural progenitors" method section above, the plates were first coated with poly-L-ornithine and laminin. Then, neural progenitors were seeded at a density of 1 × 10^4 cells per well of a 96-well plate and 1 × 10^5 cells per well of a 24-well plate. The neural progenitors were seeded in cortical neuron base medium with 10 μM Rock inhibitor. On Days 1 and 2, the cells were cultured in cortical neuron base medium. From Days 3 to 10, the cells were cultured in cortical neuron mature medium containing multiple growth factors: 40 ng/mL of both BDNF and GDNF (Catalog no: 78005, 78058, StemCell technologies), 50 μM of dibutyryl cAMP (Catalog no: 73884, StemCell technologies), 200 nM of ascorbic acid (Catalog no: A4403, Sigma), 100 ng/mL of mouse Laminin (Catalog no: L2020, Sigma), and 10 μM of DAPT (Catalog no: D5942, Sigma).
Bz-423 drug treatment
A 3 mM Bz-423 stock solution was prepared in DMSO (Catalog no: SML1944-5 mg, Sigma). The stock solution was diluted to a working concentration of 150 nM in cell culture media. During the differentiation of neural progenitors to mature cortical neurons, the patient neurons were treated with 150 nM Bz-423 for 7 days, from Day 3 to Day 10.
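As a quick arithmetic check of the dilution described above (3 mM stock to a 150 nM working concentration), a short sketch follows; the final volume chosen in the example is illustrative, not taken from the methods.

```python
# Dilution check for the Bz-423 treatment: C1 * V1 = C2 * V2.
stock_conc_nM = 3_000_000    # 3 mM stock in DMSO, expressed in nM
working_conc_nM = 150        # target working concentration in media

dilution_factor = stock_conc_nM / working_conc_nM
print(f"Required dilution factor: {dilution_factor:,.0f}x")  # 20,000x

# Illustrative example: volume of stock needed for 10 mL of working solution.
final_volume_uL = 10_000
stock_volume_uL = final_volume_uL / dilution_factor
print(f"Stock volume for {final_volume_uL} uL media: {stock_volume_uL:.2f} uL")  # 0.50 uL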
Labelling neurons with live cell markers Calcein, TMRM and Hoechst
A cocktail of live cell labelling dyes was used to co-label neurons to identify viable cells (2 μM Calcein, Catalog no: C3100MP, Thermo Fisher Scientific), mitochondria (25 nM TMRM, Catalog no: T668, Thermo Fisher Scientific) and the nucleus (0.1 μg/mL Hoechst 33342, Catalog no: 62249, Thermo Fisher Scientific). To label live neurons, these dyes were mixed in the cell culture media. Neurons were incubated with the cell culture media and dye cocktail for 30 min at 37°C. The neurons were then washed twice with HBSS solution and maintained in it for imaging. Imaging was performed within 30 min of labelling the cells.
Image processing
Neuron images were processed using the image analysis software Harmony built into the PhenixPlus (Perkin Elmer) high throughput imaging system. First, the nuclei of the neurons were segmented and identified using the "Find Nuclei" building block. The parameters used in the "Find Nuclei" block were as follows: method, B; common threshold, 0.40; area > 30 μm²; splitting coefficient, 7.0; individual threshold, 0.40; contrast > 0.10. Then the neurites were identified using the "Find Neurites" building block. The parameters used in the "Find Neurites" block were as follows: smoothing width, 3 px; linear window, 11 px; contrast, > 1; diameter, ≥ 7 px; gap closure distance, ≤ 9 px; gap closure quality, 1; debarb length, ≤ 15 px; body thickening, 5 px; and tree length, ≤ 20 px. Mitochondria were identified using the "Find Spots" building block. The parameters used in the "Find Spots" block were as follows: method, B; detection sensitivity, 0.50; splitting sensitivity, 0.50. Multiple parameters of neuron morphology (neurite roots, extremities, segments, branching nodes and length) and of mitochondrial morphology (mitochondrial area, perimeter, length, and width) were measured.
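A compact record of the Harmony analysis parameters listed above is sketched below as a plain configuration dictionary; Harmony itself is configured through its GUI building blocks, so this is only a bookkeeping aid, not an interface to the software.

```python
# Bookkeeping sketch of the Harmony image-analysis parameters described above.
# This dict is a convenient record of the settings, not Harmony's actual API.

HARMONY_ANALYSIS_SETTINGS = {
    "find_nuclei": {
        "method": "B",
        "common_threshold": 0.40,
        "area_um2_min": 30,
        "splitting_coefficient": 7.0,
        "individual_threshold": 0.40,
        "contrast_min": 0.10,
    },
    "find_neurites": {
        "smoothing_width_px": 3,
        "linear_window_px": 11,
        "contrast_min": 1,
        "diameter_px_min": 7,
        "gap_closure_distance_px_max": 9,
        "gap_closure_quality": 1,
        "debarb_length_px_max": 15,
        "body_thickening_px": 5,
        "tree_length_px_max": 20,
    },
    "find_spots": {  # mitochondria
        "method": "B",
        "detection_sensitivity": 0.50,
        "splitting_sensitivity": 0.50,
    },
}
```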
Isolation of RNA, RNA-Seq and RT-qPCR
RNA was extracted using the RNeasy Mini Kit (50) (Catalog no: 74104, Qiagen) as per the manufacturer's manual. Briefly, cells were lysed using the RLT lysis buffer and homogenized by vortexing for 1 min. One volume of 70% ethanol was added to the homogenized sample. The sample was then transferred to an RNeasy spin column placed in a 2 mL collection tube. The samples were centrifuged for 15 s at 8000 x g and the flow-through was discarded. Buffer RW1 was added to the RNeasy spin column, centrifuged for 15 s at 8000 x g, and the flow-through was discarded. Buffer RPE was added to the RNeasy spin column, centrifuged for 15 s at 8000 x g, and the flow-through was discarded. Then, buffer RPE was added to the RNeasy spin column and centrifuged for 2 min at 8000 x g; this 2 min spin dries the spin column membrane. The RNeasy spin column was transferred to a new collection tube. RNase-free water was added to the spin column membrane and the column was centrifuged for 1 min at 8000 x g to elute the RNA. Fluorometric quantification was performed using the Qubit 4 Fluorometer (Catalog no: Q33239, Thermo Fisher Scientific) and purity was measured using the NanoDrop ND-1000 UV-Vis Spectrophotometer.
RNA-Seq data production and analysis were performed by the Australian Genome Research Facility. The data are made publicly available on the Gene Expression Omnibus: accession number GSE233258.
RT-qPCR validation experiments were performed by Garvan Molecular Genetics, Sydney. The expression of the genes GAD1 and GAD2 was analyzed using the following primers:
GAD1 forward: CTTGTGAGTGCCTTCAAGGAG
GAD1 reverse: TGCTCCTCACCGTTCTTAGC
GAD2 forward: CTCGAAGGTGGCTCCAGTG
GAD2 reverse: CTCCCAAGGGTTGGTAGCTG
Western blot analysis of paraplegin and synaptophysin expression
Neurons were harvested using accutase (Catalog no: A1110501, StemCell technologies). Protein was extracted from the neuron cell pellet using a cell lysis buffer (Catalog no: C3228, Sigma) with 100X Halt Protease Inhibitor Cocktail (Catalog no: 78429, Thermo Fisher Scientific). Protein concentration was measured using the Pierce BCA Protein Assay Kit (Catalog no: 23225, Thermo Fisher Scientific) following the manufacturer's protocol. 10 μg of protein sample was resolved on a NuPAGE 4-12% BT gel (Catalog no: NP0335BOX, Thermo Fisher Scientific) with NuPAGE MOPS SDS running buffer (20X) and electro-transferred onto a polyvinylidene difluoride membrane using NuPAGE Transfer Buffer (20X). The membrane was blocked in 3% non-fat dry milk blocking buffer in Tris-buffered saline with 0.1% Tween 20 detergent for 1 h at room temperature. The membrane was incubated overnight at 4°C with primary antibodies diluted in 3% non-fat dry milk blocking buffer: anti-SPG7 (dilution 1:1000, Catalog no: TA504424, Origene) and anti-Synaptophysin (dilution 1:700, Catalog no: ab32127, Abcam). Following this, incubation with secondary antibodies anti-mouse (Catalog no: STAR207P, Bio-Rad) and anti-rabbit (Catalog no: 111-035-144, Jackson ImmunoResearch) was performed at room temperature for 1 h. This was followed by repeated washing with Tris-buffered saline with 0.1% Tween 20 detergent. Immunoreactive bands were visualized using the chemiluminescent SuperSignal West Femto Maximum Sensitivity Substrate (Catalog no: 34096, Thermo Fisher Scientific). The bands were imaged on the ImageQuant RT ECL imaging system (GE Healthcare). Band intensities were measured using Image Lab software (Bio-Rad). Paraplegin and synaptophysin intensities were normalized against GAPDH expression to obtain relative expression levels.
Electrophysiological recordings -whole cell patch clamping
We used whole-cell patch-clamp recordings to monitor neuronal synaptic activity. Standard whole-cell patch-clamp techniques were used to study the functional maturation of the neurons. The patch-clamp experiments were performed using the List EPC-7 patch-clamp amplifier (List, Darmstadt, Germany). Currents were low-pass filtered, sampled, and digitized at 0.2 kHz with a PowerLab 4/30 data acquisition interface (AD Instruments, Sydney, NSW, Australia) attached to a Macintosh computer. Patch-clamp pipettes were manufactured from borosilicate tubes (Modulohm, Herley, Denmark).
SPG7 patient neural progenitors with aberrant mitochondrial morphology and function develop to form mature neurons with reduced neurite complexity, length, viability and increased degeneration
To identify neuronal and mitochondrial related defects in HSP-SPG7 patient-derived neurons, we measured and compared neuronal and mitochondrial morphological phenotypes and mitochondrial membrane potential between patient cells and healthy control cells at multiple timepoints (Figure 1A) as they differentiated from neural progenitors to mature neurons.
Live patient and control cells were labelled with calcein (green) to identify viable cells, tetramethylrhodamine methyl ester (TMRM, red) to identify healthy mitochondria, and Hoechst (blue) to identify the nucleus (Figures 1B-M). The cells were imaged and analyzed using an automated high throughput imaging and analysis microscope (PhenixPlus, Perkin Elmer). For image analysis, the neurites were first segmented and identified (Figures 1N-S). Then, multiple parameters indicative of neurite complexity such as neurite roots, branching, extremities, segments, and length were measured (Figures 1T-X). For all measurements of neurite parameters, a Two-Way ANOVA test indicated a significant effect of disease status (p < 0.0001) and of neurite development as the cells progressed from neural progenitors to mature neurons (p < 0.0001). Šídák's post-hoc multiple comparisons test indicates that the neurite complexity and length measurements in patient cells are comparable to control cells at Day 1, i.e., the neural progenitor phase. However, the neurite complexity and length measurements in patient cells are significantly lower compared to control cells at Days 5 and 10 as they develop to form mature neurons (Figures 1T-X).
To identify mitochondrial morphological and functional abnormalities, we measured multiple parameters of mitochondrial size, such as mitochondrial area, perimeter, length, and width, as well as membrane potential (Figures 1Y-C1). For all measurements of mitochondrial morphology parameters and membrane potential, a Two-Way ANOVA test indicated a significant effect of disease status (p < 0.0001) and of development from neural progenitors to mature neurons (p < 0.0001). Šídák's post-hoc multiple comparisons test indicates that the mitochondrial size related parameters in patient cells are significantly higher compared to control cells at Day 1 during the neural progenitor phase and at Days 5 and 10 as they develop to form mature neurons (Figures 1Y-B1). Šídák's post-hoc multiple comparisons test also indicates that the mitochondrial membrane potential in patient cells is significantly lower compared to control cells at Day 1 during the neural progenitor phase and at Days 5 and 10 as they develop to form mature neurons (Figure 1C1).
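A minimal sketch of this statistical comparison (two-way ANOVA on disease status and timepoint, followed by Šídák-corrected pairwise comparisons) is shown below; the data file and column names are hypothetical, and the original analysis may well have been run in dedicated statistics software rather than Python.

```python
# Sketch of a two-way ANOVA (disease status x timepoint) with Sidak-corrected
# pairwise comparisons, as described in the text. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy import stats

df = pd.read_csv("neurite_length.csv")  # hypothetical columns: length, status, day

# Two-way ANOVA with interaction.
model = smf.ols("length ~ C(status) * C(day)", data=df).fit()
print(anova_lm(model, typ=2))

# Patient vs. control comparison at each timepoint, Sidak-corrected.
pvals = []
for day, sub in df.groupby("day"):
    ctrl = sub.loc[sub.status == "control", "length"]
    pat = sub.loc[sub.status == "patient", "length"]
    pvals.append(stats.ttest_ind(ctrl, pat).pvalue)
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
print(list(zip(sorted(df.day.unique()), p_adj, reject)))
```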
In summary, the defect of reduced neurite complexity and length was absent at Day 1, i.e., the neural progenitor phase, but was present at Day 5 and Day 10, i.e., in mature neurons. However, the abnormalities in mitochondrial size and membrane potential were seen at all three time-points, i.e., at Day 1, the neural progenitor phase, and at Day 5 and Day 10, i.e., in mature neurons. This shows that mitochondrial defects developed before neuronal defects.
Evaluation of neuronal viability and degeneration in mature cortical neurons at Day 10 showed that compared to control neurites, patient neurites had reduced viability (Figure 1D1) and increased neurite degeneration (Figure 1E1).
SPG7 patient neurons have less complex and shorter neurites and increased degeneration but they express mature cortical neuron markers
In Figure 1, neurite morphology was evaluated in cells labelled with calcein, a viability dye. Calcein is not neuron specific. To confirm the neuronal phenotype defect in mature cortical neurons, we fixed and immunolabelled the neurons with Tuj1 (a neuronal marker) and the mature cortical neuronal markers TBR1 and CTIP2 (Figures 2A-D). The proportion of Tuj1-, TBR1- and CTIP2-positive cells was comparable between control and patient neurons (Figures 2E-G). Similar to our findings in calcein-labelled neurites (Figure 1), Tuj1-labelled neurites showed that compared to control neurons, patient neurons had reduced neurite roots (Figure 2H), extremities (Figure 2I), branching nodes (Figure 2J), length (Figure 2K), segments (Figure 2L) and increased neurite degeneration (Figure 2M). In summary, although HSP-SPG7 patient neurons were less complex and shorter and had increased degeneration, they expressed mature cortical neuron markers.
SPG7 encodes the paraplegin protein. To test if paraplegin expression is altered in patient neurons, we measured paraplegin expression in patient and control neurons (Figure 2N). Paraplegin expression was comparable between patient and control neurons (Figure 2O). We have previously observed this effect in HSP-SPG7 patient-derived olfactory neurosphere-derived cells (Wali et al., 2020). It is plausible that the paraplegin protein expressed is non-functional.
SPG7 patient neurons have downregulated synapse related gene expression, reduced synaptic activity and increased oxidative stress gene expression
To evaluate the gene expression pathways affected in HSP-SPG7 patient neurons, we performed RNA-Seq analysis on HSP-SPG7 patient and healthy control neurons. For this we performed gene expression quantification followed by differential expression analysis of genes and pathway enrichment analysis. First, multidimensional scaling analysis was used to visualize the level of similarity in the gene expression of the patient and control neurons. The multidimensional scaling plot illustrated a clear distinction in gene expression profiles between patient and control neurons (Figure 3A).
Second, the differential gene expression analysis of patient and control neurons identified 8,794 significantly differentially expressed genes, i.e., those with a false discovery rate < 0.05 (Figure 3B). Third, pathway enrichment analysis was performed to understand which pathways/gene networks the differentially expressed genes are implicated in. This analysis showed that the genes related to synaptic function were predominantly downregulated in patient neurons (Figure 3C). These pathways included the GABAergic and Glutamatergic synapse, neurotransmitter signaling, axon guidance and synaptic vesicle cycle. Figures 3D-F show the genes differentially expressed in the GABAergic synapse, Glutamatergic synapse and synaptic vesicle cycle pathways. All the genes listed in these pathways (Figures 3D-F) are expressed at significantly lower levels (p < 0.05) in patient neurons compared to control neurons. The colour code in the heatmaps refers to the levels of expression of genes following a log2 (Counts Per Million) transformation, where the genes are ordered from highest to lowest expression levels. This means that the first gene on the list has much higher expression than the second gene, and so on, but all genes presented have reduced expression in patient neurons compared to control neurons. Note that the true differences in expression levels for the most highly expressed genes may be obscured by the log scale.
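A short sketch of the kind of filtering and transformation described here (FDR thresholding and log2 CPM values for heatmap ordering) is given below; the file and column names are hypothetical placeholders, and the published analysis was performed by the Australian Genome Research Facility.

```python
# Sketch of FDR filtering and log2(CPM) transformation as described in the text.
# File and column names are hypothetical placeholders.
import numpy as np
import pandas as pd

counts = pd.read_csv("gene_counts.csv", index_col=0)                 # genes x samples, raw counts
de_table = pd.read_csv("differential_expression.csv", index_col=0)   # includes an "FDR" column

# Keep significantly differentially expressed genes (FDR < 0.05).
significant = de_table[de_table["FDR"] < 0.05]
print(f"{len(significant)} significant genes")  # 8,794 reported in the text

# Counts per million, then log2 transform (pseudocount avoids log of zero).
cpm = counts / counts.sum(axis=0) * 1e6
log2_cpm = np.log2(cpm + 1)

# Order genes from highest to lowest mean expression, as in the heatmaps.
ordered = log2_cpm.loc[significant.index.intersection(log2_cpm.index)]
ordered = ordered.loc[ordered.mean(axis=1).sort_values(ascending=False).index]
```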
To validate the RNA-Seq findings, we performed western blotting and RT-qPCR for selected genes. For example, western blotting (Figures 3G,H) confirmed the RNA-Seq finding that the expression level of synaptophysin was lower in patient neurons compared to control neurons (Figure 3I). RT-qPCR evaluation of the expression of the genes GAD1 and GAD2 validated the RNA-Seq finding that GAD1 and GAD2 expression levels were lower in patient neurons compared to control neurons (Figures 3J-M).
In summary, the RNA-Seq results showed that the expression of genes related to synaptic pathways was reduced in patient neurons compared to control neurons. To test the functional relevance of this finding, we performed whole cell patch clamping (Figures 3N-Q). The neurons in both groups showed inward and outward currents in response to the four steps of voltage pulses. However, the number of current responses produced by the control neurons was significantly greater than that produced by the patient neurons (Figures 3N-Q). This result confirmed that synaptic gene expression and function were reduced in patient neurons.
Dysfunctional mitochondria can impair synaptic activity and contribute to oxidative stress, leading to DNA damage and apoptosis (Cai and Tammineni, 2017). Further evaluation of the RNA-Seq data showed that gene expression pathways related to oxidative stress, i.e., p53 signaling (Figure 3R) and cellular senescence (Figure 3S), are upregulated in patient neurons compared to control neurons, indicating that the patient neurons are under oxidative stress.
Pharmacological rescue of neurite and mitochondrial defects in SPG7 neurons
To test if the neurite defects (reduced complexity, reduced viability and increased degeneration) seen in HSP-SPG7 patient neurons are a consequence of mitochondrial dysfunction, we treated patient neurons with Bz-423, a drug shown to be effective in rescuing mitochondrial function and neurological gait impairment in a SPG7 mouse model (Sambri et al., 2020). Patient neurons were treated for 7 days, with Bz-423 treatment initiated 3 days after seeding neural progenitors (Figure 4A). As described in Figure 1, patient (untreated and treated) and control neurons were labelled with calcein, TMRM and Hoechst. The neurons were imaged, and multiple parameters indicative of neurite complexity, length and degeneration and of mitochondrial size and function were measured (Figures 4B-G).
Bz-423 treatment restored the neurite and mitochondrial defects in patient neurons back to control neuron levels (Figures 4H-R). ANOVA indicated a significant effect of treatment (p < 0.001). Tukey's post-hoc multiple comparisons indicate that all neurite complexity and length measures (Figures 4H-L), mitochondrial morphology and membrane potential (Figures 4M-P), neuronal viability (Figure 4Q) and neurite degeneration (Figure 4R) in untreated patient neurons were significantly different from untreated control neurons and from patient neurons treated with Bz-423.
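The treatment comparison described above (ANOVA across untreated control, untreated patient, and Bz-423-treated patient groups, followed by Tukey's post-hoc test) could be reproduced along the lines of the sketch below; the group labels and data file are hypothetical, and the original analysis may have used dedicated statistics software.

```python
# Sketch of a one-way ANOVA with Tukey's post-hoc test across the three groups
# described in the text. Data file and column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("bz423_treatment.csv")  # columns: value, group
groups = [g["value"].values for _, g in df.groupby("group")]

f_stat, p_value = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

# Pairwise comparisons: control-untreated vs patient-untreated vs patient-Bz423.
tukey = pairwise_tukeyhsd(endog=df["value"], groups=df["group"], alpha=0.05)
print(tukey.summary())
```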
Discussion
We have established a HSP-SPG7 patient neuronal cell model using patient-derived iPS cell differentiated neural progenitor cells and mature cortical neurons. Our results show that compared to control neural progenitor cells, patient neural progenitor cells have aberrant mitochondrial morphology with increased mitochondrial size and dysfunctional mitochondria with reduced mitochondrial membrane potential (Sambri et al., 2020). This study establishes a direct link between mitochondrial and neuronal defects in HSP-SPG7 patient neurons. We present a strategy for testing mitochondria-targeting drugs to rescue neuronal defects in HSP-SPG7 patient neurons.
Recent studies have highlighted the possibility that dysregulated mPTP is the leading cause of mitochondrial dysfunction in HSP-SPG7. To test if the neurite defects seen in patient neurons are a consequence of mitochondrial dysfunction, we treated patient neurons with Bz-423, an mPTP-modulating drug that has been shown to be effective in rescuing mitochondrial function and neurological gait impairment in a HSP-SPG7 mouse model (Sambri et al., 2020). mPTP regulates the mitochondrial permeability transition, which refers to a sudden increase in the inner mitochondrial membrane permeability. mPTP can exist in low and high conductance modes (Zoratti and Szabò, 1995). mPTP is normally in its low conductance mode, where it permits the diffusion of ions below 300 Da such as K+ and Ca2+.
Under pathological conditions such as increased mitochondrial matrix calcium accumulation or increased oxidative stress, mPTP is in its high conductance state. In its high conductance state, mPTP permits free, unrestricted diffusion of large molecules up to 1.5 kDa across the inner mitochondrial membrane and results in mitochondrial matrix swelling (Kwong and Molkentin, 2015). One major consequence of the mPTP high conductance state is that the inner mitochondrial membrane can no longer maintain a barrier to protons, which leads to dissipation of the proton motive force, resulting in uncoupling of oxidative phosphorylation and dissipation of the mitochondrial membrane potential, thus preventing mitochondria from making ATP. HSP-SPG7 patient cells have increased oxidative stress (Wali et al., 2020). This increased oxidative stress can shift mPTP into its high conductance state. To test if this effect is relevant to HSP-SPG7 patient neurons, we measured multiple parameters of mitochondrial size (length, width, area, and perimeter) and mitochondrial membrane potential. Prolonged opening of the mPTP leads to increased mitochondrial size. Our evaluation showed increased mitochondrial size and reduced membrane potential in patient neural progenitor cells and mature neurons. The patient vs. control difference was amplified in mature neurons. Consistent with this, 6-month-old paraplegin-deficient mice showed the presence of swollen mitochondria in spinal cord axons (Sambri et al., 2020). Treatment of HSP-SPG7 patient neurons with low nanomolar concentrations of the mPTP-targeting drug Bz-423, shown to rescue the defect of swollen mitochondria and motor impairment in the paraplegin-deficient HSP-SPG7 mouse model (Giorgio et al., 2013; Sambri et al., 2020), also restored the defect of increased mitochondrial size in our patient neurons back to control neuron levels, further indicating that the increased mitochondrial size and reduced membrane potential in HSP-SPG7 patient neurons were a consequence of mPTP dysfunction. Despite accounting for only 2% of human body weight, the adult brain consumes about 20% of all energy generated (Attwell and Laughlin, 2001). Healthy mitochondria are essential in maintaining synaptic activity. To maintain synaptic activity, mitochondria are generated in the cell soma and transported to dendrites, where they are distributed around the synapse to actively generate the energy required for synaptic activity (du et al., 2010). Along with meeting energy demands, mitochondria are involved in (a) maintaining ion gradients across the cellular membrane for axonal and synaptic membrane potentials (Attwell and Laughlin, 2001), (b) mobilizing synaptic vesicles to release sites (Verstreken et al., 2005) and (c) supporting synaptic vesicle release (Sun et al., 2013). Dysfunctional mitochondria have been shown to cause impaired synaptic activity in multiple neurodegenerative diseases (du et al., 2010; Cai and Tammineni, 2017). Dysfunctional mitochondria also release reactive oxygen species causing oxidative stress, which can result in DNA damage and apoptosis. Consistent with this, our SPG7 patient neurons have dysfunctional mitochondria, reduced synaptic gene expression and function, upregulated oxidative stress pathways, reduced viability, and increased degeneration.
Impaired neurite complexity and length have been described in many other forms of HSP including SPG4 (Rehbach et al., 2019), SPG11 (Pérez-Brangulí et al., 2014), SPG15 (Denton et al., 2018) and SPG48 (Denton et al., 2018). SPG15 and SPG48 HSP patient-derived iPS telencephalic glutamatergic and midbrain dopaminergic neurons had reduced neurite number, length, and branching, altered mitochondrial morphology with reduced mitochondrial length and density, and dysfunctional mitochondria with reduced mitochondrial membrane potential. Treatment of patient neurons for 48 h with an inhibitor of mitochondrial fission rescued the mitochondrial and neurite deficits (Denton et al., 2018).
Our results showed that treating patient neurons in their development phase can avert disease-associated mitochondrial and neuronal phenotypes, including neuronal degeneration in mature neurons. This approach opens up the possibility of not just reversing the damage in neurons of adult patients but possibly preventing the development of disease-associated neuronal phenotypes if treatment can be initiated in the early stage of disease onset. It is well accepted that the degeneration of neuronal cells occurs about a decade before the clinical symptoms begin. In this scenario, early disease diagnosis and treatment can be key in reducing the severity of the disease.
Unfortunately, as mentioned in the introduction, at higher concentrations Bz-423 is anti-proliferative and cytotoxic, making it challenging to translate to the clinic. In this study, we use Bz-423 for its ability to rescue mitochondrial defects in neurons at low nanomolar concentrations. In the future, using (a) the assays described in this manuscript, (b) the understanding of mitochondrial and neuronal defects in HSP-SPG7 patient neurons and (c) the drug treatment approach described here, we will screen for FDA-approved drugs to repurpose them for HSP-SPG7.
Our assays evaluating neuronal and mitochondrial phenotypes presented here have multiple advantages for identifying disease-associated effects that can be used as cellular biomarkers to identify new potential drug treatment candidates: (1) they measure multiple parameters of neurite complexity, including neurite length, roots, branching, segments, extremities and degeneration; (2) they measure multiple parameters of mitochondrial morphology and function; (3) the assay is high throughput and performed in 96-well plates, allowing large numbers of cells across multiple control and patient cell lines and treatment conditions to be tested in a single experiment, avoiding sample-to-sample and batch-to-batch variability; (4) automated imaging can image a large number of cells in a relatively short amount of time (100,000 cells in 1 h); (5) the automated image analysis pipeline analyses all cells using the same image analysis parameters without any user bias; and (6) the cell-permeable dyes used in this assay, i.e., Calcein, TMRM and Hoechst, are at least 10x cheaper than antibodies. This is ideal for drug testing and screening assays that involve testing drug treatment effectiveness and cytotoxicity for a large number of drugs at multiple different concentrations.
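As a rough back-of-the-envelope illustration of the throughput figures quoted above, a short sketch follows; the seeding density and plate format are taken from the methods, the imaging rate is the 100,000 cells/h stated in the text, and the result is an upper bound rather than a reported measurement.

```python
# Back-of-the-envelope throughput estimate using numbers stated in the text/methods.
cells_per_well_96 = 10_000        # neural progenitors seeded per 96-well plate well
wells_per_plate = 96
imaging_rate_cells_per_hour = 100_000

cells_per_plate = cells_per_well_96 * wells_per_plate
hours_per_plate = cells_per_plate / imaging_rate_cells_per_hour
print(f"~{cells_per_plate:,} cells per plate -> ~{hours_per_plate:.1f} h of imaging")
# ~960,000 cells per plate -> ~9.6 h of imaging if every seeded cell were imaged;
# in practice only a subset of fields/cells per well is acquired.
```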
(a) Media from the 96-well plate was aspirated out. (b) Neurons were fixed using the Cytofix solution for 25 min. (c) Neurons were washed twice using the Cytoperm solution. (d) Neurons were permeabilized and blocked using the Cytoperm solution for 25 min. (e) Neurons were incubated with primary antibodies anti-Tuj1 (Catalog no: ab195879, Abcam) or anti-TBR1 (Catalog no: ab183032, Abcam) or anti-CTIP2 (Catalog no: ab18465, Abcam) for 1 h at a dilution of 1:1000. (f) Neurons were washed twice using the Cytoperm solution. (g) Neurons were incubated with secondary antibodies (Catalog no: A-11012 or A-21471, Invitrogen) at a dilution of 1:500. (h) Neurons were washed twice using the Cytoperm solution. (i) Neurons were labelled with 0.1 μg/mL of Hoechst 33342 (Catalog no: 62249, Thermo Fisher Scientific) for 10 min to identify the nucleus. (j) Neurons were washed twice and maintained in the Cytoperm solution for imaging.
FIGURE 1
FIGURE 1 Neuronal and mitochondrial phenotypes in live HSP-SPG7 patient neural progenitors and mature cortical neurons. High throughput imaging and analysis was used to evaluate neuron and mitochondrial morphology. (A) Shows the timeline to generate neural progenitors and mature cortical neurons. (B-G) Images of live control and patient neural progenitors and mature cortical neurons labelled with calcein (to identify viable cells), TMRM (to identify mitochondria) and Hoechst (to identify the nucleus). (H-M) Images of the cells presented in (B-G) with the TMRM and Hoechst labels but without the calcein label. (N-S) Neurites were segmented and identified using automated image analysis in control and patient neural progenitors and mature cortical neurons. (T-X) Multiple parameters of neurite morphology were analyzed in control and patient neural progenitors and mature cortical neurons. These parameters include neurite length (T), extremities (U), branching (V), segments (W) and roots (X). (Y-C1) Multiple parameters of mitochondrial morphology and membrane potential were analyzed in control and patient neural progenitors and mature cortical neurons. These parameters include (Y) mitochondrial area, (Z) perimeter, (A1) length, (B1) width and TMRM intensity (C1). (D1,E1) Cell viability (D1) and neurite degeneration (E1) in Day 10 mature control and patient neurons. Data are presented as mean ± SEM. Scale bar: 100 μm.
FIGURE 2
FIGURE 2 Neuronal phenotypes confirmed in fixed and immunolabelled HSP-SPG7 patient neurons. (A-D) Control and patient mature cortical neurons express the neuronal marker Tuj1 and the mature cortical neuronal markers TBR1 (A,B) and CTIP2 (C,D). (E-G) The proportion of Tuj1, TBR1 and CTIP2 positive cells is comparable between controls and patients. (H-L) Multiple parameters of neurite morphology were analyzed in control and patient mature cortical neurons. These parameters include neurite (H) roots, (I) extremities, (J) branching, (K) length and (L) segments. (M) Axonal degeneration of control and patient neurites in mature cortical neurons. (N,O) Paraplegin expression was measured in control and patient mature cortical neurons. Data are presented as mean ± SEM. Scale bar: 100 μm.
FIGURE 3
FIGURE 3 RNA-Seq analysis of HSP-SPG7 patient neurons. RNA-Seq analysis was performed using patient and healthy control neurons to identify gene expression pathways affected in HSP-SPG7 patient neurons. (A) Multidimensional scaling analysis was used to visualize the level of similarity in the gene expression of the patient and control neurons. (B) Smear plot shows a large number of differentially expressed genes. (C) Pathway enrichment analysis was performed to understand which pathways/gene networks the differentially expressed genes are implicated in; synaptic pathways were downregulated in patient neurons. (D-F) List of genes expressed at lower levels in patient neurons compared to control neurons in the GABAergic synapse (D), Glutamatergic synapse (E) and synaptic vesicle cycle pathway (F). (G,H) Western blot analysis and (I) RNA-Seq consistently showed reduced expression of synaptophysin. (J-M) RT-qPCR validated the RNA-Seq-based GAD1 and GAD2 gene expression findings. (N-Q) Whole cell patch clamping was used to measure neuronal synaptic activity in control and patient neurons. (R,S) List of genes expressed at higher levels in patient neurons compared to control neurons in the oxidative-stress-related p53 (R) and cellular senescence (S) pathways.
FIGURE 4
FIGURE 4 Pharmacological rescue of neuronal and mitochondrial phenotypes in HSP-SPG7 patient neurons. (A) Shows the Bz-423 drug treatment timeline. (B-D) Live untreated control, untreated patient and Bz-423 treated patient neurons labelled with calcein (to identify cells), TMRM (to identify mitochondria) and Hoechst (to identify the nucleus). (E-G) Neurites were segmented and identified using automated image analysis in untreated control, (Continued)
TABLE 1
Participant details. | 8,148 | 2023-09-12T00:00:00.000 | [
"Biology"
] |
Optimal Design of Building for Urban Wind Energy Utilization
The article deals with the numerical and experimental investigation of wind flow in the roof area and the optimal design of inlet parts where it is possible to use the IRWES system (Integrated Roof Wind Energy System). The orientation of some high-rise buildings is suitable for wind power utilization. In the first phase, the selected building was investigated using CFD simulation to create space for three small wind turbines in the area of two technical floors. Using 3D printing technology, a 1:300 scale model of the structure with a rough façade was created. Experimental measurements were performed in the Boundary Layer Wind Tunnel in Bratislava. Measurements were made for 3 reference wind speeds, fulfilling flow similarity between prototype and model. We compared the results of the numerical simulations and experimental measurements and obtained information on the average wind speeds at the VAWT site. Comparison of the mean wind velocity and external wind pressure coefficient obtained by CFD simulation and experimental measurements showed a good match. Considering the annual average wind speed at about 100 m above sea level, we compared the wind acceleration at the turbine site.
Introduction
Wind energy conversion has experienced worldwide growth in the last twenty years. Building-augmented wind turbines, i.e., roof installations in the urban environment, are the subject of new research activities. VAWT (Vertical Axis Wind Turbine) technology offers advantages such as low noise; see [1] and [2].
In Bratislava, the broader central zone has seen intensive construction of high-rise buildings with total heights of 90 to 125 m, as can be seen in figure 1. For buildings oriented towards the predominant wind, an optimal roof design matched to the prevailing wind direction in the locality should give a distribution of mean wind velocity suitable for wind energy utilization.
Methodology
In the first phase, the selected building was investigated using CFD simulation in the ANSYS program, focusing on the mean wind velocity distribution inside the upper part of the selected building (see figure 2), where small wind turbines should be placed. Average wind speeds measured over the last 10 years by the Slovak Hydro-Meteorological Institute give good information about the city's wind conditions (see figure 3).
At the top of the selected object, we tried to create space for 3 small wind turbines; the geometry of the model was derived from the existing building with its specific façade, see figure 2. Using numerical simulation, we tried to find the optimal design of the wind turbine inlet/outlet space at the top of the high-rise building. The next step was experimental measurements on a scale model of the building performed in the Boundary Layer Wind Tunnel (BLWT) of the Slovak University of Technology in Bratislava.
Numerical simulation
The main goal of the CFD simulation was to determine the mean wind velocity distribution in the modified free space in the upper part of the building, which allows acceleration of the wind and provides sufficient speed to obtain wind power. For the analysis of our problem we chose the finite volume method implemented in the program ANSYS Fluent [10], which offers several turbulence models.
Computation domain and generated mesh
The model of the building was created according to an existing building. The computational domain has a size of 1 × 0.6 × 0.4 km (l × w × h), corresponding to a blockage ratio of 2%, which is below the recommended maximum value of 3% [8,9]. The distance from the object to the inlet was 400 m. We created the mesh using the Meshing tool implemented in ANSYS [10]. The size function was set on proximity and curvature with a medium relevance centre. The element size on the surface of the building was 1 m with the size function set on curvature. The element size on the surfaces of the designed wind turbine inlet/outlet was 0.25 m with the size function set on curvature. 2 071 521 elements with 387 717 nodes were generated, which were transformed in Fluent [10] to a polyhedral mesh with 455 303 cells, 2 895 448 faces and 2 376 340 nodes.
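As a quick consistency check of the 2% blockage ratio quoted above, the sketch below divides an assumed building frontal area by the domain cross-section; the building dimensions are illustrative guesses consistent with the 90-125 m heights mentioned earlier, since only the domain size and the ratio are given in the text.

```python
# Consistency check of the stated ~2% blockage ratio.
# Domain cross-section from the text; building frontal dimensions are assumed.
domain_width_m = 600.0
domain_height_m = 400.0
domain_cross_section_m2 = domain_width_m * domain_height_m  # 240,000 m^2

building_height_m = 120.0   # assumed, consistent with the 90-125 m range in the text
building_width_m = 40.0     # assumed plan width facing the flow

blockage_ratio = (building_height_m * building_width_m) / domain_cross_section_m2
print(f"Blockage ratio ~ {blockage_ratio:.1%}")  # ~2.0%, below the 3% recommendation
```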
Numerical model, boundary conditions and solver setting
For the solution of the 3D steady RANS equations with the standard k-ε model [11] we used the CFD code ANSYS Fluent [10]. For near-wall treatment, the standard wall functions by Launder and Spalding [11] were used. The inlet boundary conditions of the domain are defined by vertical profiles of the mean wind velocity, turbulent kinetic energy and turbulence dissipation rate, where v(z) is the mean wind velocity at height z, v* is the shear velocity, z0 is the aerodynamic roughness height, k is the turbulent kinetic energy, ε is the turbulence dissipation rate and Cμ = 0.09 is a model constant.
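The profile equations themselves do not survive in this text; for orientation, the log-law inlet profiles conventionally used with these symbols in atmospheric-boundary-layer k-ε simulations (e.g., the Richards and Hoxey formulation) are reproduced below as an assumption, not as the authors' exact expressions.

```latex
% Standard atmospheric-boundary-layer inlet profiles for the k-epsilon model,
% assuming the common Richards & Hoxey formulation (not confirmed from the source):
\begin{align}
  v(z)            &= \frac{v^{*}}{\kappa}\,\ln\!\left(\frac{z + z_{0}}{z_{0}}\right), \\
  k               &= \frac{v^{*2}}{\sqrt{C_{\mu}}}, \\
  \varepsilon(z)  &= \frac{v^{*3}}{\kappa\,(z + z_{0})},
\end{align}
% where \kappa \approx 0.41 is the von Karman constant and C_\mu = 0.09 as stated in the text.
```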
The outlet boundary is defined as pressure outflow and the side and upper boundaries as zero gradient (symmetry). The bottom of the computational domain is modelled as a slip wall. All computations were run as pressure-based and steady, without a production limiter. The SIMPLE pressure-velocity coupling scheme with second-order spatial discretization was used as the solution method; for the transient formulation, a second-order implicit method was used. The solution was initialized by hybrid initialization with default settings.
In the following figure 5, one can see the pressure and wind velocity distribution on the building and streamlines near the top of the object.
Design of inlet/outlet shape on the top of building
Based on the numerical simulation, we designed a symmetric shape of the inlet/outlet wind turbine space of 3 × 3 m. For modelling the surfaces, we used splines. The side surfaces were created with a 45° tangent at the outer end and a 0° tangent in the centre. The bottom surface was modelled using a spline with a 0° tangent at both ends. The upper surface was flat, see Fig. 6. The wind pattern for the designed wind turbine inlet/outlet space for the 90° wind direction can be seen in Figure 7.
Experimental measurements in BLWT
Using 3D printing technology, a 1:300 scale model of the structure with a rough façade was created. Experimental measurements were performed in the Boundary Layer Wind Tunnel of the Slovak University of Technology in Bratislava, see figure 8. The wind tunnel is designed with an open-circuit scheme (see Hubova et al. [3], [6]) with an overall length of 26.3 m, two operating sections of cross-section 2.6 × 1.6 m, and an adjustable ceiling. The turbulent wind flow is created in the rear operating section. In our case, the roughness of the floor was created by the plastic film FASTRADE 20 and a 150 mm barrier. From the evaluation of the vertical mean velocity profiles, this modification of the tunnel floor matches terrain category III to IV (close to IV) with a roughness length z0 = 0.7 m, according to ASCE [4], EN 1991-1-4 [5] and Wieringa [7]. Models of the high-rise objects (in the middle) with the surrounding buildings located in the wind tunnel during the experiments, with the northwest wind flowing, can be seen in figure 11. Figure 11. View of the models placed on the turntable during the experimental measurements
Results and discussions
The aim of the work was to use the space of two technical floors for the possible placement of small wind turbines with a vertical axis and to achieve sufficient acceleration of the wind flow at the top of the building. Figure 12 shows the average wind velocity at the site of the individual turbines obtained by experimental measurement for the most common wind directions in the locality. As shown in the figure, at least 2 turbines always work at maximum power.
Due to the significant increase in small vortices at the entrance, which have been detected experimentally and numerically, it is necessary to insert a rectifying grid that will optimize the wind flow in the inlet region. We compared the results of the numerical simulation and the experimental measurements for different wind directions and the results were in good agreement. A comparison of the wind acceleration in the wind turbine space for different wind directions is shown in figure 13. A comparison of the external wind pressure at the top of the building near the modelled wind turbine area is shown in Figure 14.
Conclusions
We monitored the distribution of wind velocity on the high-rise building with the surrounding area (at a scale of 1:300). The orientation of the building in relation to the prevailing wind directions plays an important role in wind energy utilization.
The wind speed distribution results obtained by CFD and experimental measurements in the BLWT allow us to select a suitable VAWT type (see Battisti et al. [1]) for this area and height above ground in the Bratislava city centre. According to the power curves for different types of small wind turbines, it appears that the three-bladed H-rotor configuration DU06W200 and the 2-bladed H-rotor NACA0018 will have the maximum annual energy output.
The wind energy potential is based on average local wind velocities as well as the IRWES wind energy characteristics. In the Bratislava city centre, the mean wind velocity at a height of 10 m for the northwest wind direction is 4 to 5 m/s. The gradient of the average wind speed at 100 m increases this value to around 8 m/s. The new design of the entrance and exit areas at the top of the object will ensure sufficient wind acceleration for other wind directions as well. The annual energy output for wind velocities higher than 8 m/s for the considered 3 turbines and 3 buildings should be about 70 MWh/year.
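The jump from 4-5 m/s at 10 m to roughly 8 m/s at 100 m is consistent with a logarithmic velocity profile for the stated roughness length; a short check is sketched below, together with the average output per turbine implied by the annual figure. The log-law form and the reading of "3 turbines and 3 buildings" as nine machines are assumptions, not calculations reported by the authors.

```python
import math

# Log-law scaling of mean wind speed from 10 m to 100 m, using the roughness
# length z0 = 0.7 m stated for terrain category III-IV (assumed profile form).
z0 = 0.7
ratio = math.log(100.0 / z0) / math.log(10.0 / z0)
for v10 in (4.0, 5.0):
    print(f"v(10 m) = {v10} m/s -> v(100 m) ~ {v10 * ratio:.1f} m/s")
# ~7.5 and ~9.3 m/s, bracketing the "around 8 m/s" quoted in the text.

# Average output per turbine implied by the 70 MWh/year estimate.
annual_energy_mwh = 70.0
turbines = 3 * 3  # assumed: 3 turbines on each of 3 buildings
avg_power_kw = annual_energy_mwh * 1000 / (turbines * 8760)
print(f"Average output per turbine ~ {avg_power_kw:.2f} kW")  # ~0.89 kW
```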
"Engineering"
] |
How to define the quality of materials in a circular economy?
Improving the circularity of our economy calls for easily quantifiable metrics that allow us to track our progress towards circularity. We propose the use of a material quality indicator based on the energy use of recycled products versus their counterparts produced from primary material inputs only. We argue that such an indicator can cover at least the environmental dimension of the circular economy in a sufficient way and is therefore useful for the assessment of the circularity of our economy.
The quality of materials is important for defining the circularity of the economy (Nakamura et al., 2017), but is so far neglected in circular economy policies (McDowell et al., 2017). Here we focus on two important qualitative aspects of recycling: the quality of the recycled material and the functionality of substances present in materials. The quality of the recycled material may well be different from, often lower than, the quality of the primary material. We will take this aspect into account by considering the production of a material with the same quality as the recycled material from primary inputs.
The functionality of substances present in materials is relevant to downcycling and the consideration of functionality is in line with the argument that conservation of functionality 'as long as possible' is important for a circular economy (Iacovidou et al., 2017). Two matters are important in the context of functionality: (1) the loss of functional substances present in the primary material and (2) counteracting the emergence of dysfunctional substances in the recovered product. The loss of functionality of substances present in the primary material may occur when such substances partition to production residues. For instance, in the case of recycling steel by re-melting, the percentage lost to slags of functional alloying elements such as Mn, Nb and V may well exceed the percentage of functional Fe lost to slags. Loss of functionality may also occur when substances have functionality in the primary product but not in the secondary product. For instance, Ni and Cr are functional in stainless steel, but when stainless steel is used as an input in recycling to carbon steel, Ni and Cr lose their functionality (Nakamura et al., 2017). Rather than allocating a zero energy value to non-functional elements in an alloy like in Nakamura et al. (2017), we compare the recycling of a material to an alternative production route for a material with the same quality as the recycled material which uses only primary materials inputs. In this approach the energy invested in alloying elements that are non-functional in the secondary material is not completely lost. This is considered justified because these elements still contribute to the mass of the secondary material.
Counteracting the emergence of dysfunctional substances in the recycled product regards the presence of substances which, due to their relatively high concentration, negatively affect product characteristics. One example thereof is the presence of too much ink in recycled paper used for printing. This can be counteracted by de-inking inputs of printed paper in paper recycling. This exemplifies cleaning. A second example concerns the presence of Cu in shredded steel. When the amount of Cu in scrap used in secondary steel production is in excess of the amount following from meeting steel quality requirements (tolerance), reducing the concentration of Cu in recycled steel is possible by dilution with primary product.
Taking into account the quality of the recycled product, the functionality of substances and the mass balance, we propose the following indicator for the circularity of material quality (Qc), where the numerator expresses the net energy savings due to recycling primary material (MJ/kg) and the denominator is the embodied energy of 1 kg of primary material (MJ/kg). The variables are defined as follows:
α = dilution factor; α = 1 if the contaminant concentration is below the tolerance T and no extra primary material input is required, while α > 1 if relatively large amounts of primary materials need to be added for dilution (dimensionless).
β = the ratio of diluting material to primary material to be recycled (dimensionless).
E_prod,s = the cradle-to-gate life cycle energy required for producing material with the same quality as the secondary material from primary inputs (i.e. without the use of recycled materials) (MJ/kg).
E_r,s = the direct cradle-to-gate life cycle energy requirement for producing the secondary material from material that is to be recycled (MJ/kg).
E_c,s = the energy required for cleaning (which can include pre-processing, pre-treatment and sorting) the material inputs per kg of primary material to be recycled (MJ/kg).
E_d,s = the embodied cradle-to-gate life cycle energy in the primary materials required for dilution, necessary to obtain secondary material of sufficient quality (MJ/kg).
E_p = the cradle-to-gate life cycle energy required for producing 1 kg of primary material (MJ/kg).
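The equation itself does not survive in this text. Purely as an illustration of how such an indicator could be computed from the variables defined above, the sketch below assumes one plausible form, Qc = (alpha*E_prod,s - (E_r,s + E_c,s + beta*E_d,s)) / E_p; this specific combination is an assumption consistent with the stated numerator and denominator, not the authors' confirmed formula, and the input values are placeholders rather than the data behind the 0.198 result quoted below.

```python
def material_quality_circularity(alpha, beta, e_prod_s, e_r_s, e_c_s, e_d_s, e_p):
    """Illustrative material-quality circularity indicator.

    ASSUMPTION: the indicator is taken here as
        Qc = (alpha * e_prod_s - (e_r_s + e_c_s + beta * e_d_s)) / e_p
    i.e. net energy savings from recycling one kg of primary material divided
    by the embodied energy of one kg of primary material. The exact published
    formula is not recoverable from this text.
    """
    net_savings = alpha * e_prod_s - (e_r_s + e_c_s + beta * e_d_s)
    return net_savings / e_p


# Placeholder example (values are illustrative, not the paper's data):
# 60% scrap / 40% diluting pig iron, as in the stainless-steel example below.
beta = 0.4 / 0.6            # diluting material per kg of material to be recycled
alpha = 1 + beta            # total secondary output per kg of recycled input (assumed)
print(round(material_quality_circularity(alpha, beta,
                                          e_prod_s=20.0, e_r_s=7.0, e_c_s=0.0,
                                          e_d_s=23.0, e_p=55.0), 3))
```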
To give a quantitative indication of what application of Qc means in the case of stainless steel recycling, we have selected as primary material chromium steel 18/8, which is a stainless steel with minimum mass-based Cr and Ni contents of 18% and 8%, respectively. After its use as stainless steel, the chromium steel is recycled to carbon or low-alloyed steel in which Cr and Ni have no function (e.g. Nakamura et al., 2017). In this example it is assumed that the recycled material is mixed with metal from other sources, contaminating the scrap with Cu. Therefore the addition of primary pig iron is necessary to reduce the Cu concentration. 60% of the inputs by mass are from recycled material while 40% of the inputs come from primary pig iron. Under these assumptions the energy circularity of recycling stainless steel to low-alloyed steel is 0.198 (Table 1).
The indicator for the circularity of materials we have proposed here is based on energy demand. Energy demand is an important indicator of environmental impact, and reducing the primary energy demand of a product is likely to decrease its overall environmental impact (Steinmann et al., 2017). Iacovidou et al. (2017), however, argued that circularity indicators covering a single domain of value often deliver misleading messages. They favor multidimensional circularity indicators that also include technical, economic and social dimensions. Would practitioners and policy makers consider themselves adequately informed by an energy-based indicator that mainly covers the environmental domain of the circular economy? This may be doubted, as can be illustrated by the example of using recycled aluminium alloys in cars. Modaresi et al. (2014) have pointed out that car producers require that safety-relevant car components such as wheels be made from primary alloys. Such an example demonstrates that a single indicator is unlikely to be sufficient in the broader context of the circular economy. Nevertheless, energy use is an important matter in the environmental domain of the circular economy. In combination with other aspects, such as economic and legal ones, the material quality indicator proposed here can help to better quantify the circularity of the economy. | 2,458.4 | 2019-02-01T00:00:00.000 | [
"Economics"
] |
Characterization of electrical 1-phase transformer parameters with guaranteed hotspot temperature and aging using an improved dwarf mongoose optimizer
Parameter identification for Electric Power Transformer (EPT) models is significant for the steady and consistent operation of power systems. The nonlinear and multimodal natures of EPT models make it challenging to optimally estimate the EPT's parameters. Therefore, this work presents an improved Dwarf Mongoose Optimization Algorithm (IDMOA) to identify the unknown parameters of the EPT model (single-phase transformer) and to appraise the transformer aging trend under the hottest temperatures. The IDMOA employs a population of solutions to gather as much information as possible within the search space by generating different solution vectors. Furthermore, the Nelder-Mead Simplex method is incorporated to efficiently promote the neighborhood searching, with the aim of finding a high-quality solution during the iterative process. At the initial stage, the extraction of the power transformer's electrical equivalent parameters is expressed in terms of a fitness function and its corresponding operating inequality restrictions. In this sense, the sum of absolute errors (SAEs) among numerous factors from the nameplate data of transformers is to be minimized. The proposed IDMOA is demonstrated on two transformer ratings, 4 kVA and 15 kVA. Moreover, the outcomes of the IDMOA are compared with other recent challenging optimization methods. The IDMOA attains the lowest minimum SAE values compared to the others, namely 3.3512e−2 and 1.1200e−5 for the 15 kVA and 4 kVA cases, respectively. For further assessment of the proposed optimizer, the extracted parameters are utilized to evaluate the transformer aging, considering the transformer's hottest temperature, compared with the effect of the actual parameters following the IEEE Std C57.91 procedures. The results confirm that the transformer's per-unit nominal life is 1.00 at hottest-spot temperatures below 110 °C, as per the aforementioned standard.
Introduction
Transformers represent one of the essential devices in the power energy sector and in distribution networks, as well as in many industrial and household applications. Transformers transmit energy from generation stations to distribution stations through the transmission lines with high efficiency, based on the characterization of the equivalent circuit parameters and the related losses [1]. Therefore, the analysis of power systems and the realistic operation of transformers require an accurate transformer model. Much attention has been devoted to determining the transformer parameters while minimizing the losses, minimizing the operational cost, and improving the performance. The transformer's unknown parameters yield a nonlinear model due to their frequency dependence, which makes solving the transformer parameter task to optimality a true challenge [2].
Estimating transformer parameters has become a major and necessary task for the best transformer design to achieve the necessary standards and requirements [3,4]. Moreover, the state of a transformer's operation, such as steady or transient situations, has an impact on the estimation of any unknown parameters [5,6]. Several techniques can be used to estimate these parameters, including well-known tests such as the no-load and short-circuit tests [7,8], the physical sizing of the transformer [9], manufacturer's data [10][11][12], and information under various loads [7]. Primarily, analytical techniques have been employed to quickly assess the transformer's physical sizing using finite element analysis.
Cast-iron dry transformer parameters have been extracted using a logical method and compared with those obtained from both the finite element method and actual data [13]. Recently, metaheuristic optimization methods (MOMs) have seen a boom in a wide range of optimization tasks, as they do not require the convexity/continuity of objective functions (OFs) and do not rely on gradient information. Accordingly, several MOMs have been presented extensively in wide fields of optimization due to their capability of solving more complicated optimization tasks, since they do not require the gradient information of the OF and start with a population of initial guesses [7]. Some MOMs include the salp swarm [14,15], Harris hawks [16], crow search [17,18], barnacles mating [19], quantum sine cosine [20], artificial ecosystem (AEO) [21], and atom search [22] optimizers.
On the other hand, much attention has been paid in the literature to estimating the optimal parameters of transformer models as well as electric motors, storage units, and fuel cells [23][24][25][26]. The accuracy of the optimization algorithms is tested by comparing the extracted parameter values against the actual ones [27][28][29][30][31][32][33][34]. For instance, particle swarm optimization (PSO) has been introduced to estimate transformer parameters while evaluating some physical dimensions such as loss parameters, winding inductance, and capacitance [6], where both single-phase (1Ph) and three-phase (3Ph) power transformer parameters were extracted using the acquired load-testing data. Forensic-Based optimization [1] has been proposed to estimate the parameters of the 1Ph transformer (1PhT).
In addition to the above, the slime mold optimizer has been used to estimate the parameters of 1Ph and 3Ph transformers, where the obtained results were compared with other optimizers [35]. Bacterial Foraging [36] has been presented to extract the 1PhT parameters based on data derived from load testing. Chaotic Optimization [7] has been investigated to estimate the equivalent circuit parameters of a 1PhT. Also, Manta Ray Foraging Optimization and its chaotic version have been presented to identify the parameters of a 1PhT, where no-load losses were incorporated into the OF [3]. Moreover, the coyote optimizer, the Jellyfish search optimizer, and a machine learning approach have been introduced to extract the parameters of 3Ph and 1PhTs [37][38][39], respectively. Also, the transformer parameters have been evaluated using multi-objective evolutionary optimization and verified by contrasting the outcomes with actual measurements and behavior [14]. Also, distribution transformer parameters have been extracted at frequencies between 1 kHz and 1 MHz using a simple black-box approach that uses an optimization technique and transfer functions estimated from recorded voltage ratios [40]. On the other hand, the parameters of 1PhTs have been identified with reference to saturation and inrush current using the artificial hummingbird optimizer (AHO) [41], Nelder-Mead optimization [42], and inrush measurements [43]. Also, the nonlinear magnetizing characteristics of 1PhTs have been determined using minimum information [44]. Voltage and current measurements have been taken into consideration to estimate 1PhT parameters using the hurricane optimization algorithm [45], the crow search algorithm [46], and Black-Hole optimization [47].
Despite the substantial number of recently developed algorithms in this field, a question may arise as to why researchers are still interested in developing new optimization algorithms or improved variants. This query can be answered by means of the No Free Lunch (NFL) theorem [48], which logically affirms that no one can suggest a single solution algorithm for dealing with all optimization issues. This implies that the success of an algorithm in addressing a particular set of issues does not ensure that it can also solve all optimization issues with different natures. In other words, regardless of higher performance on a subset of optimization issues, all optimization methods perform similarly on average when considering all optimization problems. The NFL theorem motivates researchers to develop novel optimization algorithms or to enhance/modify existing ones for dealing with subsets of problems in various domains. This work's major contributions are (i) to accurately determine the best 1PhT unknown parameters using a promising metaheuristic optimizer, namely the dwarf mongoose optimization algorithm (DMOA) [49], and (ii) to analyze both the steady-state and aging performances of transformers. The DMOA simulates the dwarf mongoose's behavior and skills during foraging for food. The performance of the DMOA has been assessed by applying it to three sets of issues: 19 classical functions with complex natures, the IEEE CEC 2020 benchmark functions, and 12 engineering design problems [49]. Then, an improved DMOA (IDMOA) is presented to characterize the parameters of the 1PhT. Compared to other approaches, the DMOA has offered competitive and advanced performance. However, as a recent optimizer, it lacks balance between the exploitation and exploration abilities when dealing with complicated optimization landscapes, including multimodal and high-dimensional situations. Due to this, local-optimum trapping may occur, and the search process may be degraded. Motivated by these issues, an improved variant of the DMOA is attempted to produce high-quality solutions while mitigating the premature convergence dilemma. This work shows that IDMOA can obtain the lowest SAEs among the compared methods. Also, the efficiency of the IDMOA has been confirmed by applying the extracted parameters to calculate the hotspot temperatures and aging of a transformer, ensuring that the per-unit life of the transformer is obtained as per the IEEE Std C57.91 rules [50].
This paper suggests an improved variant of the DMOA, named IDMOA, to determine the unidentified parameters of the EPT model. IDMOA operates with a population of solutions to effectively explore the search space. In this context, the Nelder-Mead Simplex (NMS) method is integrated to efficiently promote the neighborhood searching and enhance the exploitation ability. At the initial stage, the OF is adapted for the extraction of the transformer's electrical equivalent parameters in terms of the sum of absolute errors (SAEs) among numerous factors from the nameplate data of transformers. Moreover, the corresponding operating inequality restrictions are incorporated with the SAE goal. The proposed IDMOA is applied to two different cases with transformer ratings of 4 kVA and 15 kVA, respectively. Moreover, the obtained outcomes of IDMOA are compared with other recent challenging optimizers. The performances of DMOA and IDMOA are compared with other recognized competitors, including AEO, the equilibrium optimizer (EO), the gradient-based optimizer (GBO), the gray wolf optimizer (GWO), the memory-based hybrid dragonfly algorithm (MHDA), and other well-known optimizers from the literature. To assure the exactness of the proposed IDMOA optimizer, the transformer hotspot temperature and its effects on the transformer aging according to IEEE Std C57.91 [50] are investigated using the parameters stemming from the optimizer and from the practical (actual) data. The 15 kVA transformer is used for these comparisons, which confirmed the effectiveness of the IDMOA.
The remainder of this work is organized as follows: Part 1 presents the mathematical model of the 1PhT. Part 2 expresses the transformer model from an optimization perspective. In Part 3, the procedures of the DMOA and IDMOA algorithms are demonstrated in detail. Part 4 shows the simulation results of the DMOA and IDMOA algorithms regarding the 1PhT parameters. The steady state is also assessed by the proposed algorithm in Part 5. The parameters extracted by the IDMOA are utilized to assess the transformer aging and its hotspot temperature in Part 6. Finally, the conclusions and extensions for further research are highlighted in Part 7.
Single-phase transformer mathematical modeling
The 1PhT can be represented, referred to its primary side, as revealed in Fig. 1. The primary winding resistance and reactance, the core-loss resistance, and the magnetizing reactance are denoted by R_P and X_P, and R_c and X_m, respectively. The secondary-side load, winding resistance, and reactance referred to the primary side are denoted Z'_L, R'_s, and X'_s, respectively [41,51,52].
The transformer's three unknown variable pairs, R_P and X_P, R_c and X_m, and R'_s and X'_s, can be composed as Z_P, Z_m, and Z'_s, which are calculated accordingly. If the transformer is supplied with input voltage (V_P) to deliver output voltage (V'_s), the primary current (I_p), load current (I'_s), input power (P_i), output power (P_o), transformer efficiency (η), and voltage regulation (V_reg) are estimated as in [41]. Also, when the transformer is loaded, its temperature increases due to its power loss [53]. If the transformer loading exceeds its nameplate data, it will be subjected to overheating, which shortens its life and may lead to insulation failure and risks. IEEE Std C57.91 [50] and IEEE C57.12.00 [54] formulate the relation between transformer temperature, loading, and aging through the following quantities: θ_H is the winding hottest-spot temperature; X is a constant (9.8 × 10⁻¹⁸) as per IEEE; Y is a constant (15,000) as per IEEE; θ_AM defines the ambient temperature (AT); Δθ_TO denotes the top-oil rise (TOR) over AT in °C; Δθ_HS defines the winding hottest-spot rise over the top-oil temperature; Δθ_TO−U is the ultimate TOR over AT in °C; Δθ_TO−i denotes the initial TOR over AT in °C; t is the duration of the changed load in hours; τ defines the thermal time constant of the transformer (accounting for the new load and for the specific temperature differential between the ultimate oil rise and the initial oil rise); Δθ_TO−R is the TOR over AT at full-load current (FLC) (determined from the test report); K_t is the ratio of the load of interest to the rated load; R is the ratio of load loss at rated load to no-load loss; and n is an empirically derived exponent used to calculate the variation of Δθ_TO with changes in load (transformers with different cooling modes have different n values, which approximate the effect of the change in resistance with changing load). In addition, τ_TO,R is the top-oil time constant at rated kVA; τ_TO is the top-oil time constant; P_T,R is the total power loss at rated load; C is the thermal capacity of the power transformer (watt-hours per °C), which depends on the cooling system; W_CL is the weight of core and coils; W_TF is the weight of tank and fittings; VOL_O is the oil volume in gallons; ONAN denotes Oil Natural Air Natural cooling; and OFAF denotes Oil Forced Air Forced cooling.
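As a rough illustration of how these quantities combine, the following Python sketch evaluates the steady-state hottest-spot temperature and the associated aging factors using the usual IEEE Std C57.91 relations. The constants 9.8 × 10⁻¹⁸ and 15,000 follow the standard; the rated hottest-spot rise (Δθ_HS,R = 25 °C) and the winding exponent m are assumed placeholder values not given in the text, so this is only a sketch, not the paper's implementation of Eqs. (12)–(21).

```python
import numpy as np

def hotspot_and_aging(K, theta_amb=30.0, d_to_rated=50.0, d_hs_rated=25.0,
                      R=25.0, n=1.0, m=0.8):
    """Steady-state hottest-spot temperature and aging factors (illustrative values)."""
    d_to = d_to_rated * ((K**2 * R + 1.0) / (R + 1.0)) ** n   # ultimate top-oil rise over ambient
    d_hs = d_hs_rated * K ** (2.0 * m)                         # hottest-spot rise over top oil
    theta_h = theta_amb + d_to + d_hs                          # winding hottest-spot temperature, deg C
    per_unit_life = 9.8e-18 * np.exp(15000.0 / (theta_h + 273.0))
    f_aa = np.exp(15000.0 / 383.0 - 15000.0 / (theta_h + 273.0))  # aging acceleration factor
    return theta_h, per_unit_life, f_aa

for K in (0.5, 1.0, 1.2):   # ratio of the load of interest to the rated load
    print(K, hotspot_and_aging(K))
```

At a hottest-spot temperature of 110 °C these relations give a per-unit life of about 1.0 and an aging acceleration factor of 1.0, which is the reference point used later in the paper.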
Extraction of the optimum transformer parameters
The main goal of extracting optimum transformer parameters is to achieve the best practical operating performance of the transformer. This is achieved by minimizing the sum of absolute errors (SAE) between the practical and calculated transformer quantities. The OF is adapted in (21), where I_p−act, I'_s−act, V'_s−act, and η_act are the real values of the primary drawn current, output delivered current, output voltage, and transformer efficiency, respectively. The determination of the best values of the six unidentified transformer parameters (R_p, R'_s, X_p, X'_s, R_c, X_m) requires defining upper and lower limits, collected from the practical side, for the six parameters as per (22).
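To make the structure of such an objective concrete, the sketch below evaluates a candidate parameter vector on the referred-to-primary equivalent circuit of Fig. 1 and accumulates the errors of the four quantities named above. It is an assumption-laden illustration: Eq. (21) itself is not reproduced in the text, so the use of normalized (relative) errors, the variable names, and the load model are choices made here, not the paper's code.

```python
import numpy as np

def transformer_quantities(params, V_p, Z_load):
    """Solve the referred-to-primary equivalent circuit for one candidate.
    params = [R_p, R_s, X_p, X_s, R_c, X_m], with secondary values referred to the primary."""
    R_p, R_s, X_p, X_s, R_c, X_m = params
    Z_p = R_p + 1j * X_p                        # primary winding impedance
    Z_s = R_s + 1j * X_s                        # secondary winding impedance (referred)
    Z_m = (R_c * 1j * X_m) / (R_c + 1j * X_m)   # magnetizing branch: R_c in parallel with jX_m
    Z_sec = Z_s + Z_load                        # secondary branch seen from the shunt node
    Z_in = Z_p + (Z_m * Z_sec) / (Z_m + Z_sec)
    I_p = V_p / Z_in                            # primary current
    V_node = V_p - I_p * Z_p                    # voltage across the magnetizing branch
    I_s = V_node / Z_sec                        # load current (referred)
    V_s = I_s * Z_load                          # output voltage (referred)
    eta = (V_s * np.conj(I_s)).real / (V_p * np.conj(I_p)).real
    return abs(I_p), abs(I_s), abs(V_s), eta

def sae_objective(params, measured, V_p, Z_load):
    """Sum of absolute relative errors against the measured quantities
    [I_p_act, I_s_act, V_s_act, eta_act]."""
    calc = np.array(transformer_quantities(params, V_p, Z_load))
    meas = np.array(measured, float)
    return float(np.sum(np.abs((calc - meas) / meas)))

# Example call with placeholder parameter guesses (measured values are those of Case 1):
# sae = sae_objective([1.0, 1.0, 2.0, 2.0, 50e3, 10e3],
#                     measured=[6.20, 6.20, 2383.8, 0.992],
#                     V_p=2400.0, Z_load=2383.8 / 6.20)
```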
The proposed methodology
This section exhibits the basics of the DMOA and NMS method as well as the proposed integrated version that named as IDMOA methodology.
The basics of DMOA algorithm
In this section, the mathematical formulation of the DMOA is presented [49]. The DMOA was proposed based on the dwarf mongoose behavior while searching for food sources. Mongooses show impressive behaviors during the foraging process as they split the population into three social classes: the alpha (α) category, the scout category, and the babysitters. From the optimization point of view, the DMOA starts its search by initializing a set of solutions within the search space as x_{i,j} = LB_j + rand × (UB_j − LB_j), where rand stands for a random number produced from [0, 1] according to a uniform distribution, LB_j and UB_j define the limits of the jth dimension of the search space, and N and D denote the population size and the total number of dimensions, respectively. After the initialization, each group of the DMOA performs its own characteristic behavior to capture food; the following subsections introduce the details regarding these groups.
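A minimal sketch of this initialization step is given below; the bounds shown are placeholders, not the practical limits of Eq. (22).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def initialize_population(N, D, LB, UB):
    """Uniform random initialization of N candidate solutions in a D-dimensional box."""
    LB, UB = np.asarray(LB, float), np.asarray(UB, float)
    return LB + rng.random((N, D)) * (UB - LB)

# Example: 50 mongooses for the six transformer parameters (illustrative bounds).
population = initialize_population(N=50, D=6,
                                   LB=[0.1, 0.1, 0.1, 0.1, 100, 100],
                                   UB=[10, 10, 10, 10, 5000, 5000])
```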
α-category
Once the swarm of mongooses is initialized, each solution is evaluated based on the fitness function. In this context, a probability value is determined for each solution and is then used to select the α-female. The total number of mongooses within the α-group is N − bs, where bs defines the number of babysitters. Peep stands for the vocalization of the α-female that keeps the family on the path.
Every mongoose sleeps in an initial sleeping mound, which is set to the empty set. The candidate food position is produced using an update rule in which phi defines a uniform random number within [−1, 1]; the sleeping mound (sm) is updated after every iteration, and the average (mean) value of the sleeping mounds can then be obtained. The algorithm proceeds toward the scouting stage, where the sleeping mound or the subsequent food source is assessed once the babysitter group exchange criterion is fulfilled.
Scout category
The scouts search for the next sleeping mound, as they do not revisit former sleeping mounds, which ensures the exploration ability. For the DMOA model, scouting and foraging are done simultaneously, as explained in [49]. This movement is modeled with respect to the failure or success in attaining a new sleeping mound. In other words, the movement of the mongooses depends on their performance; if the family forages far enough, they will explore a new sleeping mound. The scout mongoose is simulated by an update rule in which rand stands for an arbitrary random number in the range 0 to 1, and CF = (1 − t/T)^(2t/T) defines the collective-volitive parameter that balances the exploration and exploitation searches among the mongoose group and decreases as the iterations grow.
The vector appearing in this rule realizes the mongoose's movement to the new sleeping mound.
Babysitters' category
Babysitters represent subsidiary members that sit with the youngsters, and they are rotated regularly to permit the mother (the α-female) to lead the remaining members of the group during the daily foraging. The babysitter exchange parameter is utilized to reset the food source and the scouting information formerly held by the family members substituting them. The working frame of the DMOA is shown in Fig. 2.
Nelder-Mead simplex technique
The NMS approach was first developed to handle unconstrained minimization problems [55]. Numerous scientific issues and practical engineering applications [56,57] have been solved using this technique, which relies only on the values of the OF to be solved, without the need for its derivatives. The NMS method is applied as an iterative algorithm. For instance, when faced with a D-dimensional minimization function, NMS begins from a simplex generated by the starting vertices. In each iteration, vertices that are inferior to freshly generated vertices are replaced, which results in a new simplex. The simplex becomes more focused and closer to the best solution with repeated iterations. The detailed procedure of the NMS method can be presented in the following steps (a minimal sketch of one iteration is given after step 6) [56]: Step 1: The vertices are numbered and sorted in ascending order based on their fitness values. Step 2: Compute the reflection point x_r and calculate the value of the OF by (29). If f(x_r) < f(x_{D+1}), substitute x_{D+1} by x_r and then carry out step 3; if f(x_{D+1}) ≤ f(x_r), then perform step 4. Here α represents the reflection factor and x̄ defines the center point of the vertices except x_{D+1}, which is obtained by (30). Step 3: Compute the expansion point x_e using (31) and obtain the corresponding OF value f(x_e). If f(x_e) < f(x_r), then x_{D+1} is substituted by x_e; else, x_{D+1} is substituted by x_r. Then, carry out step 6. Here β represents the expansion factor.
Step 4: If f(x_r) < f(x_{D+1}), then obtain the external contraction point x_oc by (32) and assess it using the corresponding fitness value f(x_oc); else, the internal contraction point x_ic is computed using (33) with its corresponding objective value f(x_ic). Here γ defines the contraction factor. In the case that x_oc is created, if f(x_oc) < f(x_r), then x_{D+1} is substituted by x_oc and step 6 is carried out; else, perform step 5. Once x_ic is obtained, if f(x_ic) < f(x_{D+1}), then substitute x_{D+1} with x_ic and perform step 6; otherwise, run step 5.
Step 5: In accordance with (34), all vertices except x_1 are shrunk collectively toward x_1 to create a new simplex, where δ denotes the shrinkage factor; then advance to step 6.
Step 6: Stop the search process if the cut-off condition is satisfied; otherwise, move on to the next iteration.
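To make the flow of steps 1–6 concrete, the following is a minimal, textbook-style Python sketch of one iteration. The exact acceptance tests, the external/internal contraction split, and the factor values follow Eqs. (29)–(34), which are not reproduced in the text above, so this simplified variant is an illustration rather than the paper's implementation.

```python
import numpy as np

def nms_iteration(simplex, f, alpha=1.0, beta=2.0, gamma=0.5, delta=0.5):
    """One reflection/expansion/contraction/shrink step on a (D+1, D) simplex array."""
    simplex = simplex[np.argsort([f(v) for v in simplex])]  # step 1: sort vertices by fitness
    best, worst = simplex[0], simplex[-1]
    centroid = simplex[:-1].mean(axis=0)                    # centroid excluding the worst vertex
    x_r = centroid + alpha * (centroid - worst)             # step 2: reflection
    if f(x_r) < f(worst):
        x_e = centroid + beta * (x_r - centroid)            # step 3: expansion
        simplex[-1] = x_e if f(x_e) < f(x_r) else x_r
        return simplex
    x_c = centroid + gamma * (worst - centroid)             # step 4: contraction toward the worst
    if f(x_c) < f(worst):
        simplex[-1] = x_c
        return simplex
    simplex[1:] = best + delta * (simplex[1:] - best)       # step 5: shrink toward the best vertex
    return simplex
```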
The proposed IDMOA
In this section, the primary motivation regarding the suggested IDMOA algorithm along with the stepwise explanation is presented in detail.
Motivation for enhancing the DMOA performance
As described earlier, the explorative and exploitative skills of DMOA are driven mainly by the α-group, the scout group, and the babysitters moving toward food sources. This searching behavior increases the diversity of solutions and strengthens the exploration pattern of DMOA to some extent. However, dealing with multimodal optimization challenges with many valleys and peaks degrades the exploration and exploitation skills of the algorithm and may lead to trapping in local minima. Moreover, this trapping can be caused by weaknesses in the exploitation capability of DMOA, which hinders identification of the optimal solution. Therefore, to further strengthen the exploration and exploitation skills and ensure a suitable convergence precision, the NMS method is incorporated into the DMOA to ensure the accurateness of the optimal solution. The main feature of the NMS method is that it offers effective optimization performance by tracking the position of the best solution, the position of the worst solution, and the centroid of all solutions, which significantly helps to avert the weaknesses of the DMOA. In this regard, NMS is invoked within the DMOA to accomplish a more refined local search in the vicinity of the candidate feasible solutions while improving the exploitation trends. By this integration, the algorithm can avoid inactive iterations that run without any improvement in the outcome.
Framework of the proposed IDMOA
In this work, the NMS method is embedded into DMOA to improve the exploration and exploitation skills of the original DMOA version. In the initialization phase, the dwarf mongooses create a population of random positions (solutions) within the boundary limits; then, the DMOA-based algorithm invokes its social search groups, the α-group, scout group, and babysitters, to explore different regions within the search space. In the second phase, the NMS method is invoked after the DMOA to carry out the exploitative search within the neighborhood zones reached by DMOA. In this context, the best solution in the current population is used to establish the primary simplex. Afterward, the simplex stages are continued for l iterations to update this solution.
The value of l is a crucial parameter: if it is too high, the NMS technique is overemphasized; conversely, if it is too low, this mechanism cannot fully utilize its local search capability. Its value is decided through repeated experiments as 2D; the search procedure of the NMS method is then performed for l iterations and swapped back to the DMOA phase. The DMOA and NMS phases proceed through iterations until the stopping condition is satisfied. The pseudo code of the proposed IDMOA algorithm is demonstrated in Fig. 3.
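A minimal structural sketch of this hybrid loop is given below. It reuses initialize_population and nms_iteration from the earlier sketches; dmoa_update is only a placeholder standing in for the α-group, scout, and babysitter moves of Sect. 3.1, and the 5% simplex span and the replacement of the worst member are illustrative choices, not taken from the paper.

```python
import numpy as np

def dmoa_update(pop, f, t, T, rng):
    """Placeholder for the DMOA social-group moves: a greedy random perturbation
    scaled by a CF-like factor keeps the sketch self-contained."""
    cf = (1.0 - t / T) ** (2.0 * t / T)
    trial = pop + cf * rng.standard_normal(pop.shape)
    better = np.array([f(b) < f(a) for a, b in zip(pop, trial)])
    return np.where(better[:, None], trial, pop)

def idmoa(f, LB, UB, N=50, T=300, rng=None):
    """DMOA-style global phase followed, in each iteration, by l = 2D NMS steps
    around the incumbent best solution (structure only)."""
    rng = rng or np.random.default_rng(0)
    LB, UB = np.asarray(LB, float), np.asarray(UB, float)
    D, l = LB.size, 2 * LB.size
    pop = initialize_population(N, D, LB, UB)
    best = min(pop, key=f).copy()
    for t in range(T):
        pop = dmoa_update(pop, f, t, T, rng)
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand.copy()
        # Build a small simplex around the incumbent and refine it with NMS.
        span = 0.05 * (UB - LB)
        verts = [best] + [np.clip(best + span * rng.standard_normal(D), LB, UB)
                          for _ in range(D)]
        simplex = np.vstack(verts)
        for _ in range(l):
            simplex = nms_iteration(simplex, f)
        refined = simplex[np.argmin([f(v) for v in simplex])]
        if f(refined) < f(best):
            best = refined.copy()
            pop[np.argmax([f(v) for v in pop])] = refined   # replace the worst member
    return best
```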
Simulation and validations
The performance and accuracy of the proposed IDMOA in estimating the 1PhT's unknown parameters are evaluated by conducting two case studies: 15 kVA and 4 kVA 1PhTs [1,28]. For fair verification, the results of the proposed IDMOA are compared with some well-known optimizers, including AEO [21], EO [58], GBO [59], GWO [21], MHDA [60], and DMOA. In addition, the results of IDMOA are compared against other related works from the literature, such as GA [27] and PSO [27]. The algorithm settings are listed in Table 1 and are based on recommendations in the relevant literature. It is noteworthy that, after conducting a few experiments, the population sizes of the presented algorithms are fixed at 50 [61].
Case 1: 15 kVA 1PhT
The suggested algorithm, along with the comparative ones, is investigated and applied to estimate the parameters of a 15 kVA 1PhT with the following nameplate data: 15 kVA, 2400 V/240 V, single-phase, 50 Hz. Moreover, the measured transformer characteristics regarding the voltages, currents, and efficiency at FLC are as follows: V'_s−act = 2383.8 V, I_p−act = 6.20 A, I'_s−act = 6.20 A, and η_act = 99.2% [10,28].
The reported results for the transformer parameters of the compared algorithms are tabulated in Table 2. In this sense, the representative OF of (21) is applied while taking into account the values referred to the transformer's primary side, using the short-circuit and standard Z-circuit tests. Table 3 illustrates the results of the four items of the OF obtained by the proposed algorithm and the compared ones. Based on the obtained results, it can be seen that the proposed algorithm provides an error sum remarkably close to zero, while the GBO and EO provide competitive results. For further assessment, the convergence behavior of the proposed IDMOA and the compared algorithms is analyzed and compared, as depicted in Fig. 4. It can be observed that the IDMOA starts to stabilize, with only minor changes in the SAE, at nearly the fiftieth iteration, with a superior minimum SAE value of 0.0335124, while the GBO and EO start to stabilize at 100 and 120 iterations, respectively. It can be established that IDMOA provides a faster convergence pattern than the compared methods, which affirms the high-quality outcome of the IDMOA.
Case 2: 4 kVA, 250/125 V 1PhT
The performance of the proposed IDMOA is further evaluated on another 1PhT to extract its associated parameters, with the following nameplate data: 4 kVA, 250/125 V, single-phase, 50 Hz. The extracted parameters are compared with other optimizers as shown in Table 4. Moreover, the characteristics of the voltages, nominal load currents, and efficiency at FLC for this case are obtained and compared with other well-established algorithms in Table 5. It can be seen that the sum of percentage errors achieved by IDMOA is 1.12e−5, which is the lowest among the compared optimizers. In addition, the qualitative performance of the IDMOA and the other methods is assessed by depicting the convergence trends in Fig. 5. In this sense, it is noted that the IDMOA reaches a fixed SAE value at approximately the 50th iteration, while the nearest-behaving optimizers AEO, MHDA, and GBO start to stabilize at the 125th, 180th, and 200th iterations, respectively. This confirms that IDMOA has a faster convergence rate than the other optimizers. The reader may note small deviations between the exact reported SAEs and those circulated in Tables 3 and 5 due to rounding, as apparently six digits are used.
Statistical assessment
In this subsection, the stability of the algorithm's performance is assessed by computing statistical measures, which can affirm that the obtained solution did not happen by chance. In this sense, each algorithm is executed for 20 independent runs, recording several descriptive statistics: the best value (Min.), average value (Mean), median value (Median), and worst value (Max.) of the OF, and the standard deviation (Std). The results are reported in Table 6 for the studied cases, where the better values are highlighted in bold. Based on the achieved results, it is noted that the IDMOA provides progressive results over the compared methods for Case 1 and provides competitive results with the AEO, EO, and GBO for the best value of the OF. However, in terms of the mean value of the OF, the proposed IDMOA provides superior results. Finally, according to the statistical indices, the IDMOA has a promising competitive performance for the parameter extraction of the 1PhT.
Assessments with other methods from the literature
In this subsection, the IDMOA is further compared with other competitors reported in the literature, including PSO [27], GA [27], ICA [10], GSA [10], COA [10], and AHO [41] for Case 1, and PSO [1], FBI [1], JS [39], and AHO [41] for Case 2. The results of the original DMOA and the proposed IDMOA, as well as those of the other advanced competitors, are revealed in Table 7. In this sense, it can be observed that IDMOA exhibits progressive results compared to the other state-of-the-art competitors on the two cases.
Statistical testing using the pairwise Wilcoxon tool
Nonparametric statistical tests represent an essential tool for comparative investigations. In this sense, the widely used Wilcoxon signed-rank test [8] is adopted here to compare the results of two algorithms. Its fundamental concept is not only to keep track of each algorithm's wins but also to rank the disparities among performances [8]. For each optimization task and every pair of methods, it was checked whether there was a statistically significant difference between the median results. To conveniently obtain the median and the interquartile range (IQR) of each method, each method is conducted for 20 runs on each task. Then, the p-value of the hypothesis is determined. To assess the outcome, the Wilcoxon signed-rank test is used with a significance threshold of 0.05. H1 is accepted if the p-value ≤ 0.05, since then the medians of the two algorithms differ from one another and one of them is superior to the other in terms of the measure being utilized.
Otherwise, if the p-value > 0.05, H0 is accepted and the two techniques are considered incomparable. Table 8 exhibits the resulting p-values for the pairwise comparisons.
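A minimal sketch of this pairwise test with SciPy is shown below; the run data are synthetic placeholders used only to illustrate the decision rule.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical example: final SAE values of two optimizers over 20 independent runs.
rng = np.random.default_rng(1)
sae_idmoa = rng.normal(0.0335, 0.0005, size=20)
sae_other = rng.normal(0.0380, 0.0020, size=20)

stat, p_value = wilcoxon(sae_idmoa, sae_other)   # paired, two-sided signed-rank test
alpha = 0.05
print(f"p = {p_value:.4g};",
      "medians differ (accept H1)" if p_value <= alpha else "incomparable (accept H0)")
```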
Transformer aging assessment based on the proposed optimizer
As is well known, the transformer parameters are vital in defining the no-load and load losses created in the core and windings, which can lead to transformer insulation degradation and shorten its expected nominal life. To verify the capability of the proposed IDMOA optimizer in extracting truthful transformer parameters, its results have been applied to the IEEE Std C57.91 [50] transformer aging and hotspot temperature equations previously mentioned in (12) to (21), and compared with the results obtained using the actual parameters. The initial parameter values are considered and/or collected from the practical side as per the transformer nameplate data of a certain manufacturer, as follows: θ_AM = 30 °C, Δθ_TO−R = 50 °C, K_t = 0 to 1.2, K_U = 1.2, R = 25, n = 1, and P_T,R = 119.34 W for the IDMOA optimizer (120 W based on the actual data). The behavior of the transformer hottest-spot temperature using the parameters extracted by the IDMOA optimizer, compared to that obtained from the actual values, is revealed in Fig. 6. It can be noticed that the behaviors of the two cases are remarkably close, especially at higher loading, which is considered the most dangerous operating condition. The percentage biased error between the two studied scenarios is shown in Fig. 7.
It can be noticed that the error at 100% loading is about 0.007%, while it drops to zero at 120% loading. The minor error that appears at low-load conditions may be due to measurement precision and can be disregarded because of its negligible effect on the hotspot temperature. Also, it can be observed from Fig. 6 (the actual and IDMOA curves almost match) that the hottest-spot temperature is less than 110 °C at FLC, as per the IEEE Std C57.91 recommendations.
The transformer insulation life (aging) and the transformer aging acceleration factor (AAF) behaviors are plotted in Figs. 8 and 9, respectively. It can be noted that the per-unit life (aging) is nearly 1 at 110 °C, and the AAF is larger than 1 for winding hottest-spot temperatures greater than the reference temperature of 110 °C and less than 1 for temperatures below 110 °C, as indicated in Fig. 8.
Conclusions
A novel attempt has been made in this work to develop an improved Dwarf Mongoose Optimizer (IDMOA) based on the Nelder-Mead Simplex method to extract the parameters of the exact equivalent model of power transformers. In this context, the nameplate data are used to pursue this goal by minimizing the sum of absolute errors across some chosen variables. Toward this goal, two test-case studies have been conducted to assess the performance of the proposed algorithm. Further validation is provided by comparisons with several competing methods. The minimum SAE values achieved by the IDMOA are 3.3512e−2 and 1.12e−5 for the 15 kVA and 4 kVA cases, respectively; hence, IDMOA provides the lowest SAEs among the compared methods. Finally, the effectiveness of the IDMOA has been verified by applying the obtained parameters to calculate the hotspot temperatures and aging of the 15 kVA transformer, which showed that the per-unit life of the transformer is about 1.00 for hottest-spot temperatures below 110 °C, as per the IEEE Std C57.91 and C57.12.00 guidelines. The current IDMOA-based methodology can be extended for further assessments on larger-capacity power transformers (oil/dry) with two and tertiary windings.
Author's contribution The authors contributed to each part of this paper equally. The authors read and approved the final manuscript.
Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Data availability Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Human and animal rights This article does not contain any studies with animals performed by any of the authors.
Informed consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/. | 7,988.4 | 2023-03-18T00:00:00.000 | [
"Engineering"
] |
Dirac particles’ tunneling from five-dimensional rotating black strings influenced by the generalized uncertainty principle
The standard Hawking formula predicts the complete evaporation of black holes. Taking into account the effects of quantum gravity, we investigate the tunneling of fermions from a five-dimensional rotating black string. The temperature is not only determined by the string, but also affected by the quantum numbers of the emitted fermion and the effect of the extra spatial dimension. The quantum correction slows down the increase of the temperature, which naturally leads to a remnant in the evaporation.
Introduction
The semi-classical tunneling method is an effective way to describe Hawking radiation [1,2]. Using this method, the tunneling behavior of massless particles across the horizon was adequately described in [3,4]. In this research, a varying background spacetime was taken into account. The tunneling rate was related to the change of the Bekenstein-Hawking entropy and the temperature was higher than the standard Hawking temperature. In the former research, the standard temperatures were derived [5][6][7][8][9][10], which implies complete evaporation of the black holes. Thus the varying background spacetime accelerates the black holes' evaporation. This result was also demonstrated in other complicated spacetimes [11][12][13][14][15]. Extending this work to massive particles, the tunneling radiation of general spacetimes was investigated in [16][17][18]. The same result was derived by the relation between the phase velocity and the group velocity.
In [19,20], the standard Hawking temperatures were recovered from fermions tunneling across the horizons. In the derivation, the action of the emitted particle was obtained from the Hamilton-Jacobi equation [21]. This derivation is based on the method of complex path analysis [22]. In this method, we do not need the assumption that the particle moves along the radial direction [23][24][25][26]. This is different from the work of Parikh and Wilczek [3,4].
The tunneling radiation beyond the semi-classical approximation was discussed in [27][28][29]. Their ansatz is also based on the Hamilton-Jacobi method. The key point is to expand the action in powers of ħ. Using the expansion, one can get the quantum corrections over the semi-classical value. The corrected temperature is lower than the standard Hawking temperature. The higher-order corrections to the entropy were derived from the first law of black-hole thermodynamics.
Taking into account the effects of quantum gravity, the semi-classical tunneling method was revisited in recent work [30][31][32]. In [30,31], the tunneling of massless particles through the quantum horizon of a Schwarzschild black hole was investigated under the influence of the generalized uncertainty principle (GUP). Through the modified commutation relation between the radial coordinate and the conjugate momentum, and the deformed Hamiltonian equation, the radiation spectrum was derived including the quantum correction, and the thermodynamic quantities were discussed. In the fermionic fields, taking into account the effects of quantum gravity, the generalized Dirac equation in curved spacetime was derived from the modified fundamental commutation relation [33] in [32]. This derivation is based on the existence of a minimum measurable length. This length can be realized in a model of the GUP where β = β₀ l_p²/ħ² is a small value, β₀ < 10³⁴ is a dimensionless parameter, and l_p is the Planck length. Equation (2) was derived from the modified Heisenberg algebra [x_i, p_j] = iħ δ_ij [1 + βp²], where x_i and p_i are position and momentum operators defined, respectively, as x_i = x_0i and p_i = p_0i(1 + βp_0²), with p_0² = p_0j p_0j; here x_0i and p_0j satisfy the canonical commutation relations [x_0i, p_0j] = iħ δ_ij [33,34]. Thus the minimal position uncertainty is obtained, which means that the minimum measurable length is Δx_0 = ħ√β [33]. For Δx_0 to have physical meaning, the condition β > 0 must be satisfied, as shown in [33]. Based on the GUP, the black-hole remnant was first studied by Adler et al. [35]. Incorporating Eq. (3) into the Dirac equation in curved spacetime, the modified Dirac equation was derived [32]. Using this modified equation, the fermions' tunneling from the Schwarzschild spacetime was investigated. The temperature was shown to be related to the quantum numbers of the emitted fermion. An interesting result is that the quantum correction slows down the increase of the temperature, which naturally leads to a remnant.
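For reference, the GUP relations quoted above can be collected in the following standard textbook form (a sketch of the usual conventions; the paper's Eqs. (1)–(4) may differ in detail):

```latex
[x_i, p_j] = i\hbar\,\delta_{ij}\left(1 + \beta p^2\right), \qquad
x_i = x_{0i}, \qquad p_i = p_{0i}\left(1 + \beta p_0^2\right),

\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left[1 + \beta (\Delta p)^2\right]
\;\Longrightarrow\;
(\Delta x)_{\min} = \hbar\sqrt{\beta}\,, \qquad
\beta = \beta_0\,\frac{l_p^2}{\hbar^2}.
```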
In this paper, taking into account the effects of quantum gravity, we investigate fermions' tunneling from a fivedimensional rotating black string. The key point in this paper is to construct a tetrad and five gamma matrices. The result shows that in the frame of quantum gravity, the temperature is affected not only by the quantum numbers of the emitted fermion, but also by the effect of the extra compact dimension. The quantum correction slows down the increase of the temperature. A remnant is naturally observed in the evaporation process.
In Sect. 2, we perform the dragging coordinate transformation on the metric and construct five gamma matrices; then we investigate the fermion's tunneling from the five-dimensional rotating string. A remnant is observed. Section 3 is devoted to our conclusion.
Tunneling radiation under the influence of the generalized uncertainty principle
The Kerr metric describes a rotating black-hole solution of the Einstein equations in four dimensions. When we add an extra compact spatial dimension to it, the metric becomes the black string metric (5), where Δ = r² − 2Mr + a² = (r − r₊)(r − r₋), ρ² = r² + a² cos²θ, and g_zz is usually set to 1. This metric describes a rotating uniform black string. r± = M ± √(M² − a²) are the locations of the event (inner) horizons, and M and a are the mass and the angular momentum per unit mass of the string, respectively. A fermion's motion satisfies the generalized Dirac equation (1). To investigate the tunneling behavior of the fermion, one can directly choose a tetrad and construct gamma matrices from the metric (5). The metric (5) describes a rotating spacetime, and the energy and mass near the horizons are dragged by it, so it is not convenient to discuss the fermion's tunneling behavior in these coordinates. For the convenience of constructing the tetrad and gamma matrices, we perform the dragging coordinate transformation dφ = dϕ − Ω dt, with Ω the angular velocity of the dragging frame, on the metric (5). Then the metric (5) takes the form (7).
The tetrad is directly constructed from the above metric, and the gamma matrices are then easily constructed from it. When measuring the quantum property of a spin-1/2 fermion, we can get two values, corresponding to the states with spin up and spin down. The wave functions of the two states of a fermion in the spacetime metric (7) take the form (10), where A, B, C, D are functions of (t, r, θ, φ, z), I is the action of the fermion, and ↑ and ↓ denote spin up and spin down, respectively. In this paper, we only investigate the state with spin up; the analysis of the state with spin down is parallel. To use the WKB approximation, we insert the wave function (10) and the gamma matrices into the generalized Dirac equation (1). Dividing by the exponential term and keeping the leading terms yields four equations, Eqs. (12)–(15). It is difficult to get the expression of the action directly from these equations. Considering the property of the spacetime, we carry out separation of variables as in (16), where ω is the energy of the emitted fermion, j is the angular momentum, and J is a conserved momentum corresponding to the compact dimension. Equations (14) and (15) imply that the angular part of the action is a complex function rather than a constant. In former research, it was found that the contribution of this part cancels in the derivation of the tunneling rate. Using Eq. (17), an important relation is easily obtained. Our interest is now in the first two equations. Inserting Eq. (16) into Eqs. (12) and (13), canceling A and B, and neglecting the higher-order terms of β, we get Eq. (19), where A = 2βG²F. Solving this equation at the event horizon yields the imaginary part of the radial action. Based on invariance under canonical transformations, we adopt the method developed in [39][40][41]. The tunneling rate is expressed in terms of the closed contour integral of p_r dr, which is invariant under canonical transformations. Writing p_r = ∂_r W, the solutions of Im ∫ p_r^(out,in) dr are determined by Eq. (19), and the result is Eq. (22), where g_zz = 1 and Ω₊ = a/(r₊² + a²) is the angular velocity at the event horizon. The function of (J, θ, r₊, j) appearing there is complicated; therefore, we do not write it down here, but it should be positive. If we adopt Eq. (22) to calculate the tunneling rate, we will derive a Hawking temperature twice the standard one, as shown in [36][37][38]. This is not consistent with the standard temperature. With careful observation, Akhmedova et al. [39][40][41] found that the contribution coming from the temporal part of the action had been ignored. When they took the temporal contribution into account, the factor of 2 in the temperature was resolved.
To find the temporal contribution, we use Kruskal coordinates (T, R). The region exterior to the string (r > r₊) is described by Eqs. (23) and (24), where r* = r + (1/2κ₊) ln[(r − r₊)/r₊] − (1/2κ₋) ln[(r − r₋)/r₋] is the tortoise coordinate, and κ± = (r₊ − r₋)/[2(r±² + a²)] denote the surface gravity at the outer (inner) horizons. The description of the interior region is given analogously. To connect these two patches across the horizon, we need to rotate the time t as t → t − iπ/(2κ₊). As pointed out in [39][40][41], this rotation leads to an additional imaginary contribution coming from the temporal part, namely Im(E Δt^(out,in)) = πE/(2κ₊), where E = ω − jΩ₊. Thus the total temporal contribution is Im(E Δt) = πE/κ₊. Therefore, the tunneling rate takes the form of a Boltzmann factor, and it implies the corrected temperature, where T₀ = ħ(r₊ − r₋)/[4π(r₊² + a²)] is the standard Hawking temperature of the Kerr string; it shares the expression of the temperature with the four-dimensional Kerr black hole. It shows that the corrected temperature is determined not only by the mass, angular momentum, and extra dimension of the string, but is also affected by the quantum numbers (energy, mass, and angular momentum) of the fermion. Therefore, the properties of the emitted fermion affect the temperature when the effects of quantum gravity are taken into account.
Adopting the same process, we get the temperature of the Schwarzschild string, Eq. (28). It shows that the effect of the extra dimension and the quantum numbers (energy, mass, and angular momentum) of the fermion affect the temperature of the Schwarzschild string. It is obvious that the quantum correction slows down the increase of the temperature. Finally, the string cannot evaporate completely, and in the end there is a balanced state in which a remnant is left. The extra dimension plays the role of an impediment during the evaporation. When J = 0, Eq. (28) describes the temperature of the four-dimensional Schwarzschild black hole. The remnant mass was derived as ≥ M_p/β₀ in [32], where M_p is the Planck mass and β₀ is a dimensionless parameter accounting for quantum gravity effects.
Conclusion
In this paper, we investigated the fermion's tunneling from the five-dimensional Kerr string spacetime. To incorporate the influence of quantum gravity, we adopted the generalized Dirac equation derived in [32]. The corrected temperature is not only determined by the mass, angular momentum, and extra dimension, but also it is affected by the quantum numbers of the emitted fermion. The quantum correction slows down the increase of the temperature. Finally, a balanced state appears. In this state, the string cannot evaporate completely and a remnant is left. This can be seen as a direct consequence of the generalized uncertainty principle. | 2,907.8 | 2014-01-01T00:00:00.000 | [
"Physics"
] |
QUALITY-MODEL CLUSTERING TOOL: A MODULE FOR CLUSTERING PROTEIN MODELS BASED ON QUALITY ATTRIBUTES
The process of protein modeling usually involves the production of a variety of structures, requiring efficient tools for structure model comparison when attempting to choose the best three-dimensional (3D) structure. This paper introduces an alternative method for clustering 3D protein models that, instead of using attributes related to structural alignment to group the data, uses quality attributes of those models to represent and cluster them. This method stands out by removing the need to define a priori a base model for structural alignments. Even so, it is possible to present the most representative structure in each cluster, which is useful for docking or molecular dynamics studies. All the results were statistically analyzed and compared with decisions made by professionals to validate the proposed algorithm. The experiment simulated a usual protein comparative modeling process for different CATH classifications. The calculated variance levels after the dimensional reduction validate the workflow for different protein chain sizes. All the molecular descriptors for the input files are calculated by MHOLline 2.0, an online scientific workflow for studies in bioinformatics and computational biology, available for free at www.mholline2.lncc.br, or prepared by hand using specific programs (e.g., MODELLER, PROCHECK) and adjusting the data to the template specified in this document. The Quality-Model Clustering Tool (QMC) and the data set used in this work are available for download from the git repository (github.com/ruanmedina/Quality-Model-Clustering).
INTRODUCTION
The knowledge of the three-dimensional (3D) structure of proteins is essential to study diseases such as parasitoses, viruses, and cancer (LEE; FREDDOLINO; ZHANG, 2017). Nowadays, 3D protein structure prediction (PSP) is often guided by computational experiments in many kinds of research, since experimental PSP remains costly and time-consuming (VERLI, 2014). Comparative modeling is an example of a computational method for PSP (ESWAR et al., 2006). This method constructs 3D models using known structures of macromolecules as templates. These structures are obtained from databases (e.g., PDB, the Protein Data Bank). Comparative modeling is highly dependent on the quality of the templates and the evolutionary relationship between them and the modeled protein (CAPRILES et al., 2010). Additionally, the process demands the production of a large number of conformations and requires several refinement and validation steps, making the final decision on the best models difficult (VERLI, 2014).
A way to automate the 3D modeling process has been presented by the scientific workflow MHOLline (CAPRILES et al., 2010). Starting with an input fasta file, it runs a BLASTp alignment against the PDB database and classifies the output into four groups (G0-G3) according to the modeling viability. Proteins classified in the G2 group, based on the expected quality of the models, are modeled using the MODELLER program (ŠALI; BLUNDELL, 1993). MHOLline implements other programs such as PROCHECK (LASKOWSKI et al., 1993) and Molprobity (CHEN et al., 2010) to validate the stereochemistry of 3D protein models, SIGNALP (PETERSEN et al., 2011) to identify signal peptide regions, PSIPRED (MCGUFFIN; BRYSON; JONES, 2000) to predict the secondary structure of proteins, and TMHMM (KROGH et al., 2001) and HMMTOP (TUSNÁDY; SIMON, 2001) to identify transmembrane regions.
Since the comparative modeling technique is a stochastic method that generates a sample of 3D structures, deciding on the best geometry is a challenge. Structure-based clustering techniques, such as RMSD-based techniques, are commonly applied to reduce the data to a subset of models (SIEW et al., 2000; JAMROZ; KOLINSKI, 2013; WU et al., 2017). These tools require a reference structure for 3D superposition against the models. However, considering that the user wants to know which model is the best, selecting the reference one can also be a challenge.
In this paper, we present a method for clustering 3D protein models based on the analysis of the quality of the generated models, using a set of attributes describing the stereochemistry and energy of these structures. The proposed method uses clustering algorithms to identify groups of lower average energy and to cluster similar data into proper groups. Hence, it supplies the user with a set of automatically refined structures. Initially, we developed the algorithm to be incorporated into the MHOLline workflow, but the method can also be used as a standalone system.
Implementation and Operation
The system considers a given ensemble of conformations of a protein and evaluates and compares the quality of these models to search for preferential conformation(s). Figure 1 shows the operating pipeline developed in this work (in Python3). In the first step, we define the representation of each protein model based on quality attributes such as molpdf, DOPE score, DOPE-HR score, Normalized DOPE score, and 'Allowed' and 'Outlier' residues. The first four are results from evaluation functions of the comparative modeling software MODELLER, and the last two come from the analysis generated by the Molprobity program. These attributes are often already available when doing comparative modeling, so choosing them for this initial analysis removes the need to run additional programs. The text below explains each attribute (a minimal feature-vector sketch follows the list; more details about the first four functions are available in the MODELLER manual): • molpdf: It is MODELLER's objective function F(R), which minimizes the energy of the atom interactions and geometric feature restraints.
• DOPE score: The Discrete Optimized Protein Energy score is a statistical potential optimized for model assessment based on the standard MODELLER energy function. It is often used to select the best structure from a collection of models built by MODELLER.
• DOPE-HR score: An attribute very similar to the DOPE score, but obtained at a higher resolution (using a bin size of 0.125 Å rather than 0.5 Å).
• Normalized DOPE score: The normalized DOPE score derives from the statistics of raw DOPE scores. This is a Z-score which positive values represent a 'bad' model and scores lower than -1 or so represents a 'native-like' structure.
• Allowed residues: Percentage of residues on allowed regions from stereochemical analysis using the Molprobity program.
• Outlier residues: The number of residues in disallowed regions from the stereochemical analysis using the Molprobity program.
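A minimal sketch of this representation step is shown below, with hypothetical attribute values (placeholders, not real MODELLER/Molprobity output):

```python
import pandas as pd

# One record per generated model, using the six quality attributes described above.
models = pd.DataFrame(
    [
        {"model": "model_001.pdb", "molpdf": 1523.4, "dope": -21450.7, "dope_hr": -21520.3,
         "norm_dope": -0.92, "allowed_pct": 97.8, "outliers": 1},
        {"model": "model_002.pdb", "molpdf": 1611.0, "dope": -21012.5, "dope_hr": -21100.8,
         "norm_dope": -0.61, "allowed_pct": 95.4, "outliers": 3},
    ]
).set_index("model")

feature_matrix = models.values  # rows = models, columns = the six quality attributes
```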
Following the definition of data representation, the software automatically normalizes the attributes according to user-defined criteria.
Two normalization methods have been implemented; the default, StandardScaler, fits the data to a Gaussian distribution (mean around 0 and standard deviation of 1). Then, the data are submitted to a dimensionality reduction via the Principal Component Analysis (PCA) algorithm, reducing the number of attributes to two without losing the essence of the data set (JOLLIFFE; CADIMA, 2016).
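A minimal scikit-learn sketch of this step, reusing the feature_matrix from the previous sketch:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

scaled = StandardScaler().fit_transform(feature_matrix)  # zero mean, unit variance per attribute
pca = PCA(n_components=2)
embedded = pca.fit_transform(scaled)                     # 2-D representation used for clustering
print("explained variance ratio:", pca.explained_variance_ratio_)
```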
In the next step, the parameters of the clustering are established. We implemented six (6) clustering methods, described further below. The first three algorithms require the estimated number of clusters (k), so the workflow uses the Silhouette Coefficient evaluation and a variation of the Elbow Method. Both algorithms are iteratively tested from k_min = 1 to the user-entered k_max (default = 10). Next, a brief explanation of each method: • Silhouette Coefficient: The silhouette analyzes the relation between the intra-cluster distance and the nearest-cluster distance, i.e., how close each point in a cluster is to the points in its own cluster (value of +1) compared with neighboring clusters (value of −1). To evaluate the clustering of the samples as a whole, we use the mean Silhouette Coefficient over all samples (ROUSSEEUW, 1987).
• Elbow Method: The Elbow Method analyzes the variation of the mean of the Sum of Squared Errors (SSE) of the clusters as k increases. Since the SSE for 2 ≤ k ≤ k max is always decreasing (more groups necessarily decrease the mean intra-cluster distance), mathematically, we search for the k associated with the most significant drop in error. Nevertheless, this method is not well suited to samples with two or fewer clusters; in these cases, one must use the Silhouette Coefficient method (KODINARIYA; MAKWANA, 2013).
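As mentioned above the bullets, a minimal sketch of selecting k with the mean Silhouette Coefficient could look like the following (K-means is used here only as the illustrative estimator; the silhouette is defined for k ≥ 2):

```python
# Pick the k in [2, k_max] that maximizes the mean silhouette score.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k_by_silhouette(points, k_max=10):
    best_k, best_score = 2, -1.0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        score = silhouette_score(points, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```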
Once the required parameters are available, the system fits the data with each method using suitable settings (a sketch of this fitting step is shown after the list of methods below). The specific parameters used for each method are omitted here, but may be accessed through the git repository. Here, we present the main characteristics of the tested methods: • Affinity Propagation: It uses the concept of 'Message Passing' between data points. Each data point is a possible exemplar for a new cluster. To discover the number of clusters and to form the groups of data, the method uses a similarity function to calculate and update a similarity matrix in an iterative process (DUECK, 2009).
• K-means: It is a Partitioning Method that usually requires the expected number of clusters. The basic idea is to find a partition of the data that minimizes a particular error criterion based on the distance between each instance and the centroid of the cluster to which it is currently allocated. In every iteration, the method recalculates the centroids and reallocates the data points (LLOYD, 1982).
• Ward: It is an Agglomerative Hierarchical Clustering Method that requires the number of desired clusters. This method considers every instance a cluster at the beginning of the iterations and successively merges the pair of groups with the minimum inter-cluster distance (MURTAGH; LEGENDRE, 2014).
• Spectral Clustering: The method uses the information derived from the eigenvalues (or spectrum) of the similarity matrix of the data. Then, a standard clustering method, such as K-means, is employed to cluster in the lower-dimensional space. All these operations make the method computationally demanding (LUXBURG, 2007).
• DBSCAN: It is a Density-based Method that works with the concept of 'Dense Regions' of data. It characterizes a cluster as a dense region surrounded by a non-dense one. The algorithm searches for clusters by examining the neighborhood of each object in the database and checking whether it contains more than the minimum number of objects required to form a cluster; otherwise, it considers the data point an outlier (ESTER et al., 1996).
• Mean Shift: Built upon the concept of Kernel Density Estimation (KDE), it estimates the clusters based on a density function within a range around an arbitrary starting point. The mode of the density within the range is estimated and becomes the next center to evaluate. The process converges when the mode and the center coincide (WU; YANG, 2007).
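As referenced before the list, an illustrative sketch of fitting several of these methods (via their scikit-learn implementations) on the 2D PCA-reduced data `reduced`; the settings here are defaults for illustration, not the tool's tuned parameters:

```python
# Fit each clustering method and collect its labels.
from sklearn.cluster import (AffinityPropagation, KMeans, AgglomerativeClustering,
                             SpectralClustering, DBSCAN, MeanShift)

k = 3  # assumed number of clusters from the silhouette/elbow step
methods = {
    "affinity_propagation": AffinityPropagation(random_state=0),
    "kmeans": KMeans(n_clusters=k, n_init=10, random_state=0),
    "ward": AgglomerativeClustering(n_clusters=k, linkage="ward"),
    "spectral": SpectralClustering(n_clusters=k, random_state=0),
    "dbscan": DBSCAN(eps=0.5, min_samples=5),
    "mean_shift": MeanShift(),
}
labels = {name: est.fit_predict(reduced) for name, est in methods.items()}
```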
At the end of the clustering step, the algorithm returns the medoid structure of each cluster. For those clustering methods that do not work with a medoid, the system calculates an estimated medoid from the centroid of each cluster. The pipeline output files are, by default, placed in the "./Clusters_Data" directory and summarized in the "analysis.out" file.
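A small sketch of the estimated-medoid step, under the assumption (consistent with the description above, but not necessarily the tool's exact implementation) that the member closest to its cluster centroid is reported as the representative structure:

```python
# Return the member of a cluster that is nearest to the cluster centroid.
import numpy as np

def estimated_medoid(points, labels, cluster_id):
    members = points[labels == cluster_id]
    centroid = members.mean(axis=0)
    idx = np.argmin(np.linalg.norm(members - centroid, axis=1))
    return members[idx]
```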
Computational Experiment and Validation
To test the efficacy of the algorithm, we used a benchmark of 137 different protein sequences. Each protein-template pair used for comparative modeling was assigned to one of five (5) categories (Class A to E) based on the length of the amino acid sequence, as shown in Table 1. The benchmark protein set is a subset of the one presented in the work of Zhang and Skolnick (2005), which describes the development of TM-align and uses a set of 200 nonhomologous PDB proteins for testing structure alignment algorithms (ZHANG; SKOLNICK, 2005). To the benchmark applied in this paper, we added some proteins to fill gaps in the desired dataset, considering the plurality of classification and protein length, and we removed some proteins with dubious CATH/SCOP classification. For this first analysis, we preferred to apply the method to well-known proteins present in consolidated and straightforward benchmarks to facilitate the analysis process. The entire data set can be accessed through the git repository (github.com/ruanmedina/Quality-Model-Clustering). The summary of the data is presented in Table 1.
We aim to simulate a real comparative modeling process. First, we extracted the sequences from the pdb files using a Python3 script with BioPython (COCK et al., 2009), followed by the search for potential templates using BLAST (ALTSCHUL et al., 1990) in the MHOLline 2.0 workflow. The template selection for each protein in the dataset followed the suggestion of its BLAST results, disregarding candidates with very high identity (close to 100%) and low identity (less than 25%). Once the protein-template pairs were defined, we generated 50 models for each protein using a Python3 script for MODELLER (ESWAR et al., 2006), with the addition of the BioPython library to manipulate data and download the chosen template structure files. Then, we submitted all generated models to Molprobity (CHEN et al., 2010) to assess their stereochemical quality. Lastly, the method presented in this work analyzes and clusters the protein models and then exhibits the final results.
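For illustration, extracting a chain sequence from a PDB file with BioPython, as done before the BLAST template search, could look like the following sketch (the local file name and chain ID are hypothetical):

```python
# Read a PDB structure and build the one-letter sequence of chain A.
from Bio.PDB import PDBParser
from Bio.SeqUtils import seq1

parser = PDBParser(QUIET=True)
structure = parser.get_structure("1F4H", "1F4H.pdb")  # hypothetical local file
chain = structure[0]["A"]
# Skip hetero residues and waters (res.id[0] != " ").
sequence = "".join(seq1(res.get_resname()) for res in chain if res.id[0] == " ")
print(sequence)
```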
RESULTS AND DISCUSSION
Using the methodology described and the implemented QMC, the user may automate part of the refinement step in protein modeling, such as the evaluation of patterns in the quality attributes. This method allows the user to perform conformation clustering without knowing a reference structure (often needed in clustering methods based on geometric properties) and without having to calculate an all-versus-all alignment between the generated pdb models.
In this section, we show an example of usage, preliminary studies about the attributes used to represent the conformations, and variance reviews when reducing the data dimensionality.
All tests were performed on Kubuntu 18.04 64 bits with 8 GB of RAM and an Intel Core i7-7700 CPU (8 cores, 3.6 GHz).
Usage and Base Case Scenario
We have already described some of the QMC applicability and how flexible it can be, leaving the user free to choose the best parameters for the situation while also suggesting default ones for general studies. However, what kind of information may a researcher get from using this tool? Figure 2 shows an overview of a general modeling process for the protein 1F4H, a hydrolase of the Escherichia coli (strain K12) organism. Suppose that, in a BLAST search, the most similar structure found was the protein 3IAP, which is also a hydrolase of the same organism, with 99% identity and a very low e-value. The methodology then follows with a comparative modeling step using MODELLER, generating N conf (default N conf = 50) slightly different structures. MolProbity evaluates each conformation. By the end of this step, the quality attributes needed to represent each structure have already been computed.
Having an input table of attributes, the QMC normalizes and reduces the dimensionality of the data into two transformed attributes capable of representing the data as well as possible in two-dimensional (2D) space. When doing so, the researcher can see which of the original attributes carry the most significant variance in the data, as shown in Figure 2. The method also reports the percentage of the original variance represented in the two-dimensional space.
Subsequently, QMC fits the transformed two-dimensional data with the different clustering methods. In Figure 2, we show the Mean Shift results as an example. The optimized clusters are presented to the user, who may analyze the dispersion of the reduced quality attributes of the generated models. Each cluster is represented by a structure placed next to the centroid of the group. The representative structures may be further analyzed as a way to obtain sufficiently different geometric conformations, e.g., different lengths of sheets or different possible positions of coils, as shown in Figure 2. This example shows that it is possible to obtain different structural conformations even when there is a high level of identity between the sequence to be modeled and the chosen template.
Variance and Attributes Analysis
Since QMC uses PCA for component analysis, it is necessary to investigate whether the reduction to a two-dimensional space is sufficient to represent the essence of the data. It is crucial that the reduced data express the same tendency observed when considering all the quality attributes with their original variance.
First, results from preliminary tests (not presented) on the correlation of the attributes used so far were the basis for choosing the new dimensionality (i.e., two dimensions). The six (6) quality attributes may be separated into three classes: (a) molpdf; (b) DOPE score, DOPE-HR score, and Normalized DOPE score; and (c) 'Allowed' and 'Outlier' residues. Attributes of different classes presented little correlation with each other but, frequently, high correlation with attributes in the same class. With this set of attributes, there is no reason to use more than three dimensions. Furthermore, it was not possible to simply pick one attribute from each class to represent the models and remove the dimensionality reduction phase, because the highest levels of variation fluctuate among the attributes of the same class for different proteins.
So, why not reduce the dimensionality to three? We investigated the contribution of each attribute to understand how the levels of information retained in the PCA reductions were reached. Figure 3 shows the frequency with which each attribute appears in the top-two list of most significant attributes (those carrying the most variance) for each of the proteins tested in this work. To select the most significant attributes, we considered the highest components of the PCA eigenvectors related to each component of the new space. The results show that an attribute of class (b) is frequently taken as the most significant for the first coordinate of the PCA, and a MolProbity attribute (class (c)) is taken for the second one.
A special mention should be made of the molpdf attribute in class (a). While it is not chosen frequently, in cases where there is no variation among the MolProbity-calculated attributes for the models, i.e., when all the residues are in allowed regions of torsion in the Ramachandran plot, it appears to play an important role. In such cases, using a 3D reduction would result in choosing at least two attributes from the same class, which does not improve the representation. This over-dimensioning happens because, in these cases with no outlier residues, class (c) is no longer one of the most representative and becomes a set of attributes without any information. Besides, working in two dimensions reduces the computational complexity of the clustering methods. It also facilitates the visualization of the formed groups in 2D scatter plots that can be understood by any tool user. We still have to investigate how much variance we are capturing from the data. Moreover, it is interesting to know whether the two transformed coordinates remain sufficient to represent proteins as their sizes grow. Figure 4 shows the percentage of the original variance represented in the two-dimensional space created by PCA for each of the defined classes with different protein size ranges. We estimated the percentage of the original variance by the sum of the PCA eigenvalues related to each component of the new space. The results show that, for all the classes, the represented variance is high. The observed total mean of 0.86 ± 0.044 was sufficiently high, with a small standard deviation. Outlier data only occurred in the larger classes (B and C), and always as over-representation. Most of these outliers derived from proteins whose set of models shows no outlier residues in the Ramachandran analysis.
Moreover, we applied a Kruskal-Wallis H-test to the data. The test presented a non-significant p-value of 0.455, indicating that there is no evidence in this data set that the class medians differ significantly from each other. We chose a non-parametric test because a non-Gaussian distribution was detected. Importantly, in Figure 4, the length of the error bars tends to decrease with increasing protein chain size. The shorter-chain proteins appear to have a homogeneous distribution of their variance across the quality attributes used so far. Also, for these protein classes, the situation mentioned previously, in which class (c) loses all significance, occurs more often, making the representation of the models difficult. This difficulty indicates a future need to study increasing the number of quality attributes used as input to the algorithm, or to adapt the order of the dimensionality reduction for these cases.
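A minimal sketch of the Kruskal-Wallis H-test across the size classes, assuming the retained-variance values have already been grouped per class (the variable name and layout are illustrative):

```python
# Compare the retained-variance distributions of the five protein classes.
from scipy.stats import kruskal

# variance_by_class: dict mapping class label to a list of retained-variance values
samples = [variance_by_class[c] for c in ["A", "B", "C", "D", "E"]]
stat, p_value = kruskal(*samples)
print(f"H = {stat:.3f}, p = {p_value:.3f}")  # the text reports p of about 0.455
```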
CONCLUSION
In this work, we presented an alternative method and tool for clustering protein models based on quality attributes instead of geometric properties. It was possible to verify the capability of the chosen attributes to describe the data and the ability to capture sufficient variance, even in the two-dimensional reduced space obtained by the PCA method. The clustered data reduce the analysis effort by selecting a small subset of representative structures. The proposed method has proven to be useful mainly when the researcher does not have a reference structure against which to apply geometric evaluations, which usually occurs during protein modeling processes. Further work should validate the usefulness of the technique using more benchmark proteins, such as the I-TASSER SPICKER Set-II or CASP11 sets, or construct a database with new benchmarks. It is also necessary to compare the efficiency of results clustered by quality versus geometry, to identify the pros and cons of each approach in different situations. | 4,985.4 | 2020-05-21T00:00:00.000 | [
"Computer Science",
"Biology"
] |
Quarter Wavelength Fabry–Perot Cavity Antenna with Wideband Low Monostatic Radar Cross Section and Off-Broadside Peak Radiation
Since antennas are strong radar targets, their radar cross section (RCS) reduction and radiation enhancement are of utmost necessity, particularly for stealth platforms. This work proposes the design of a Fabry-Perot Cavity (FPC) antenna which has a wideband low monostatic RCS. In the transmission mode, not only is gain enhancement achieved, but the radiation beam is also deflected in the elevation plane. Moreover, the design is low-profile, i.e., the cavity height is ~λ/4. A patch antenna designed at 6 GHz serves as the excitation source of the cavity constructed between the metallic ground plane and the superstrate. The superstrate structure is formed with an absorptive frequency selective surface (AFSS) in conjunction with a dual-sided partially reflective surface (PRS). Resistor-loaded metallic rings serve as the AFSS, while the PRS is constructed from an inductive gradated mesh structure on one side to realize a phase gradient for beam deflection; the other side has fixed capacitive elements. Results show that wideband RCS reduction was achieved from 4-16 GHz, with an average RCS reduction of about 8.5 dB over the reference patch antenna. Off-broadside peak radiation at −38° was achieved, with gain approaching ~9.4 dB. Simulation and measurement results are presented.
Introduction
Stealth platforms have a low radar cross section (RCS), but their radar signature increases significantly when antennas are mounted on them for communication purposes [1,2]. This can compromise their ability to counter radar waves; in this regard, the design and development of low RCS antennas is deemed necessary for safety and security.
Reduction of the antenna's RCS is a critical feat, and several methods have been investigated to ensure that the antenna radiation properties are least affected while attempting to reduce its RCS. One of the methods is structural/geometrical shaping [3][4][5][6][7], in which the shape of the radiating structure is modified to ensure the backscatter avoids the threat direction. The other method is based on periodic structures, and this includes the use of radar absorbing materials (RAMs) [8][9][10], frequency selective surface (FSS) ground plane [11,12], FSS radome [13,14], electromagnetic bandgap (EBG) structures [15][16][17], artificial magnetic conductors (AMCs) [18], perfect metamaterial absorbers (MAs) [19][20][21][22], and polarization conversion metasurfaces (PCMs) [23,24]. In all of the above configurations, the periodic structure is implemented either at the ground plane, on top of the radiator, or loaded around the planar radiator, and, in all of these implementations, although the RCS is lowered, the antenna radiation property either remains just intact, or it slightly deteriorates.
To improve the antenna radiation properties in parallel with lowering the RCS, further research has led to using partially reflecting surfaces (PRSs) in a Fabry-Perot Cavity (FPC) configuration, as evident in [25][26][27][28]. In all of these works, backscatter reduction was achieved, and broadside antenna gain was enhanced. To achieve this, mainly an absorptive FSS (AFSS) (periodic loop elements with lumped resistors) surface was used with a uniform PRS-a PRS employing identical unit cell elements in a grid, and hence identical transmission/reflection responses over the entire surface of the superstrate. With reference to antenna, in transmission mode, the uniform superstrate acts as a PRS for broadside gain enhancement, and in receiving mode, it acts as an electromagnetic (EM) absorber for normal EM wave illumination over a wide frequency range.
Interesting possibilities arise if a PRS that has a phase gradient can be used with an AFSS after appropriate design adjustments. Our aim in this work is to develop a superstrate structure that consists of a phase gradient metasurface (PGM) conjoined with an AFSS, such that not only wideband monostatic RCS reduction and peak gain enhancement can be achieved for a patch antenna, but the peak radiation can also be steered to a fixed angle, which is an additional antenna functionality in comparison to the works done previously. In addition, the design goal also includes the realization of a reduced cavity profile. In [29], the use of a PGM integrated with an AFSS can be found. Its primary purpose there is to scatter away the in-band incident wave, with the absorptive surface suppressing it only to an appropriate level; the peak radiation direction is still towards broadside.
This study investigates the use of a PRS constructed with unit cells that have phase shifts implemented by a dimensional gradient, and hence progressively varying transmission/reflection properties over the surface of the superstrate. The designed PRS is a composite structure, meaning that it utilizes both sides of the dielectric, and this feature aids in reducing the cavity height to λ/4. The absorptive surface consists of periodic loop elements loaded with lumped resistors, and it works in conjunction with the composite PRS, which is formed by a gradated mesh (inductive) structure on one side and constant patch (capacitive) elements on the other side. The excitation source of the cavity is a patch radiator designed at 6 GHz (C-band). Simulations have been validated with a fabricated prototype. An off-broadside peak gain of 9.4 dB was achieved at a −38° offset in the elevation plane, along the axis of the gradient. Wideband (in-band + out-of-band) monostatic RCS reduction was achieved over a frequency range of 4-16 GHz (120%), with average RCS reduction exceeding 8.5 dB for the two orthogonal (x/y) polarizations.
The low scattering property of the proposed antenna makes it suitable to be integrated with stealth type platforms. That is because the platform's low observability would still remain low despite mounting the antenna onto it for communication, and this paves the way for its multiple applications in the military and defense realm. One example is a side looking air borne radar (SLAR) [30], where the antenna points to a sideward direction and requires physical tilting of its structure. The proposed antenna can potentially be used in this scenario. Similarly, for the aerial security, surveillance, and reconnaissance applications, it can be used on unmanned aerial vehicles (UAVs) and drones where downward pointing high gain beam is more pragmatic for communication with the ground targets [31]. The development of low RCS multiple-input multiple-output (MIMO) antennas is also becoming popular due to their technological advantages, and as such, the proposed technique can be further developed to realize pattern decorrelated low observable MIMO antennas [32,33]. In addition, the antenna can be utilized for any military communication application where fix tilt-angled communication is required [34].
Unit Cell Design and Proposed FPC Antenna
The goal is to design a stacked combination of layers of FSS elements, in a unit cell configuration, which, when placed as a grid above the patch antenna (excitation source) in an FPC configuration, should realize the following four functionalities/objectives in parallel: (1) wideband monostatic RCS reduction; (2) peak gain enhancement of the source antenna; (3) deflection of the peak radiation towards an off-broadside angle; and (4) a reduced cavity height of ~λ/4,
where λ is the free-space wavelength at the operating frequency. Conventionally, the FP cavity, with an excitation source within, resonates when a PRS is placed at a height of ~λ/2 above the ground plane reflector, resulting in enhanced-gain radiation of the source antenna. The cavity height (h) at the wavelength (λ) corresponding to the operating frequency can be calculated as [35]:

h = (λ/4π)(ϕ_PRS + ϕ_G) + N(λ/2), N = 0, 1, 2, ...

where ϕ_PRS is the phase of the reflection coefficient of the PRS, ϕ_G is the reflection phase of the ground reflector, and N defines the resonance order. ϕ_G is further estimated as:

ϕ_G = π − 2 arctan[(Z_d/Z_0) tan(βd)]

where Z_d and Z_0 are the characteristic impedances of the dielectric and air, respectively, β represents the dielectric phase constant (given as 2π/λ), and d is the thickness of the dielectric on which the ground plane lies. For the metallic reflector only (i.e., without the dielectric), the reflection phase is π. For N = 0, the resonant cavity height turns out to be λ/2 if ϕ_PRS is assumed to be π rad. If a PRS can be designed to exhibit a 0 reflection phase, the cavity height can be reduced to λ/4. The unit cell with port designations is shown in Figure 1a. To obtain wideband RCS reduction, the top surface of the unit cell, as shown in Figure 1b, was constructed from a closed metallic ring of square shape, with four RF resistors (100 Ω each) soldered on its four sides. Such a resistive periodic surface (AFSS), mounted on a perfect electric conductor (PEC) plane with a dielectric sandwiched in between, serves as a wideband RAM for an impinging electromagnetic (EM) wave [36]. However, for the proposed objectives, the backing PEC plane has to be replaced with an appropriate PRS, so that all four objectives can be simultaneously achieved. A survey of the literature [37-39] shows various designs of reflective surfaces; however, the appropriate PRS to be conjoined with the selected AFSS should be:
1. Symmetric in design, so that wideband RCS reduction can be achieved for both polarizations of the incident radar wave, i.e., transverse electric (TE) and transverse magnetic (TM); and
2. Able to give a 0 reflection phase, so that, once mounted above the patch antenna, the cavity height can be reduced to λ/4.
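Criterion 2 ties back to the cavity resonance condition given earlier; a quick numeric check (a sketch, not part of the original work) confirms the two limiting cases quoted for N = 0.

```python
# Numeric check of h = (lambda/(4*pi)) * (phi_PRS + phi_G) + N*lambda/2,
# with phi_G = pi (bare metallic reflector) and N = 0.
import math

def cavity_height(lam, phi_prs, phi_g=math.pi, n=0):
    return lam / (4 * math.pi) * (phi_prs + phi_g) + n * lam / 2

lam = 50.0  # free-space wavelength at 6 GHz, in mm (c/f is about 50 mm)
print(cavity_height(lam, math.pi))  # 25.0 mm, i.e., lambda/2
print(cavity_height(lam, 0.0))      # 12.5 mm, i.e., lambda/4 (close to the 13 mm cavity used here)
```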
Such a surface can be designed if a dual-sided PRS dielectric is constructed with an inductive mesh (aperture) grid on its top and a capacitive patch grid on its bottom [40]. AFSS of periodic square rings when backed by such a surface would achieve wideband RCS reduction, gain enhancement, and reduce cavity height; however, to also achieve radiation beam deflection, a gradient of phase has to be implemented within this composite PRS. To achieve this, a gradation in the size of mesh aperture was kept, as an inductive gradient achieves higher beam deflection than a capacitive gradient [40]. The proposed constituent unit cell elements of the PRS are shown in Figure 1c,d. Therefore, the designed final unit cell consists of resistive ring on the top, followed by air gap, and followed by gradated aperture on top of a constant patch element. However, intuitively, below the resistive ring the presence of a constant patch on top of a gradated mesh would have been more suitable, in that the variation of mesh aperture would have least affected the desired absorption frequency response. In fact, initial unit cell simulations were performed with that configuration; however, two problems arose: 1.
The achieved reflection phase gradient was meagre and seemed insufficient to achieve a significant beam tilt.
2. The reflection phase values were not supportive of a reduced cavity height.
Hence, the configuration of the PRS was flipped below the AFSS (gradated mesh above the constant patch), and, interestingly, the unit cell simulations showed encouraging results for fulfilling all objectives; these are discussed next in detail.
The high frequency structure simulator (HFSS) unit cell parametric simulations employing periodic boundaries and Floquet ports were performed to compute the finalized scattering (S) parameters. While optimizing the S-parameters, the following guidelines were followed [25,28]:
• For incoming wave absorption (port 1 to port 2), the reflection (S11) magnitude as well as the transmission (S21) magnitude had to be below −10 dB over a wide range of frequencies, to achieve at least 80% absorption of the incident wave.
• In the transmission mode (port 2 to port 1), the reflection coefficient (S22) had to show high partial reflectivity as well as a progressive phase over the gradated apertures, at the operating frequency, to achieve high gain as well as off-broadside radiation.
The plots in Figure 2 are for TE (transverse electric) wave polarization; due to the symmetry of the unit cell structure, the plots for TM (transverse magnetic) wave polarization are expected to be similar and hence are not shown here. The co-/cross-polarized reflection and transmission plots for a wave incident towards the −z-axis are shown in Figure 2a,b, respectively. Solid lines depict the co-polarized components, while the dashed lines represent the cross-coupled components. Different curves correspond to the varying aperture sizes (AP_L: 2 mm to 10 mm). For the co-polarized S11 and S21 responses shown in Figure 2a,b, it can be seen that, starting from 7 GHz (out-of-band) and extending well into the high frequency region, more than 80% absorption (A), where A = 1 − |S11|² − |S21|², is achieved for all aperture sizes. This owes to the incoming wave reflection and transmission values, which are below −10 dB over a wide band. The S11 and S21 magnitudes around the operating frequency (6 GHz) show partial absorption, which means a reasonable extent of RCS reduction should occur at in-band frequencies as well. For the cross-coupled S11 and S21 responses shown in Figure 2a,b, it is evident that their magnitudes are significantly low, hence establishing the efficiency and purity of absorption. The S22 magnitude response is shown in Figure 2c for the gradated aperture values. At the operating frequency, the reflection magnitudes lie between 0.64 and 0.99 (linear). Figure 2d depicts the phase response of S22. It shows that the reflection phase values range between 104° and −114° over the aperture variation of 2 mm to 10 mm, realizing a significant phase gradient at 6 GHz. Furthermore, the extents of the phase range closely follow ±90°, the reflection phase range criterion for a PRS to reduce the cavity height to λ/4 [40]. From Figure 2c,d, it can be inferred that off-broadside radiation with a high gain can be realized in the FPC configuration.
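As a small illustration of the absorptivity criterion used above, the absorption can be computed directly from the co-polarized S-parameter magnitudes; the dB values below are made up for the example and are not the simulated data.

```python
# A = 1 - |S11|^2 - |S21|^2, with the S-parameter magnitudes given in dB.
def absorption(s11_db, s21_db):
    s11 = 10 ** (s11_db / 20)  # dB magnitude to linear
    s21 = 10 ** (s21_db / 20)
    return 1 - s11 ** 2 - s21 ** 2

print(absorption(-12.0, -15.0))  # about 0.91, i.e., more than 80% absorption
```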
The reflection/transmission characteristics of the unit cell for oblique angle incidences are shown in Figure 3, and the results are given for both TE and TM wave polarizations. All results were plotted for AP L = 5.5 mm (the center of the gradient, see Table 1). From Figure 3a,b depicting TE wave polarization, it is evident that despite the variation of the angle of incidence, low magnitudes (less than −10 dB) of reflection as well as transmission are still being obtained over a wide band (starting from about 7 GHz and beyond). For the TM case, a similar behavior is also observed, and can be witnessed from Figure 3c,d. It can be asserted that the PRS backed AFSS, as an absorber, shows good angular stability. A 7 × 7 unit cell array was mounted on top of a rectangular patch antenna, as shown by the simulated model in Figure 4a, and experimental prototype in Figure 4b. Cavity height (h) is 13 mm (~λ/4). The coaxial feed offset is 3.2 mm from the patch center towards +x-axis. All dielectric laminates are of Rogers RO4003C material (ε r = 3.55 (design), thickness 1.52 mm). Gradient implementation is along y-axis, with the AP L values and the corresponding reflection phases shown in Table 1.
Simulation and Experimental Results
Simulated vs. measured S11 response of the proposed antenna is shown in Figure 5a. A sharp resonance was achieved at 6 GHz, and the impedance bandwidth (−10 dB) is 443 MHz (7.26%). The gain vs. frequency plot of the proposed antenna is also shown in Figure 5a. At the operating frequency, the peak gain achieved is ~9.4 dB. The gain bandwidth (3 dB) is from 5.82 GHz to 6.08 GHz (4.37%). A satisfactory agreement exists between simulated and measured results, establishing the accuracy of simulation and fabrication. Figure 5b illustrates the S11 and gain vs. frequency response of the reference antenna (the antenna without the superstrate assembly). It can be seen that the −10 dB resonance of the reference antenna is slightly right-shifted compared to the resonance of the proposed antenna. This frequency shift is attributed to the input impedance variation when the dielectric superstrate is absent. Meanwhile, the gain of the reference antenna is 6.4 dB at the operating frequency, and it is increased by 3 dB in the presence of the superstrate assembly. Figure 6a illustrates the far-field H-plane radiation pattern at 6 GHz. Off-broadside radiation at an angle of −38° was achieved, deflected in the elevation plane, with side lobe levels (SLLs) ~10 dB below the maximum. The deflection angle is aligned with the axis along which the increase of aperture size (gradient) is implemented. The E-plane pattern is shown in Figure 6b. Manufacturing tolerances as well as positioning errors during measurements can be the cause of the differences between simulated and measured patterns. The far-field H-plane and E-plane radiation patterns of the reference antenna are shown in Figure 6c,d, respectively. It can be seen that, without the superstrate, the antenna radiates towards broadside. The radiation pattern plotted at various frequencies is shown in Figure 7. The pattern is satisfactorily uniform over a bandwidth of 50 MHz, although a little deterioration of the SLLs occurs. The reference antenna's radiation efficiency at the operating frequency is 95%; however, the proposed antenna's efficiency diminishes to 60%. The primary reason for this efficiency reduction can be deduced from Figure 2b, where the S21 transmission is shown. Note that, since the unit cell/superstrate is a passive structure, its S12 transmission response is identical to Figure 2b. Thus, at the operating frequency, the transmission magnitude is −6 dB for the AP_L value of 6.6 mm (the largest aperture dimension listed in Table 1), and the transmission magnitude reduces with decreasing aperture values. This means that a loss of energy of the radiated wave occurs within the superstrate structure, which diminishes the radiation efficiency.
To validate the antenna's scattering performance, simulated monostatic RCS plotted against frequency for normal illumination of the incident wave is shown in Figure 8. Vertically polarized (VP) and horizontally polarized (HP) incident wave cases are respectively given in Figure 8a,b. Also shown in the figures is the RCS response against frequency of the reference antenna, which has the same lateral dimensions as of the proposed antenna. In addition, the calculated RCS frequency response of a perfect conductor of similar size is also presented. The calculation is based on the relation given as: σ C = 4πa 2 /λ 2 , where σ C is RCS of the perfect conductor, a is the area of the conductor, and λ represents wavelength of interest. For both polarizations shown in Figure 8a,b, wideband RCS reduction was achieved, including in-band frequencies. RCS reduction bandwidth (BW) extends over 4-16 GHz (120%), and an almost identical frequency response was achieved for the VP and HP wave incidences. This is owed to the symmetric unit cell design. For VP, average RCS reduction over the bandwidth is about 8.5 dB, and for HP, it is about 8.8 dB. Maximum achieved RCS reduction values are 25 dB (for VP) and 24 dB (for HP), respectively, appearing at 14.3 and 14.4 GHz value. The results shown in Figure 8a,b correspond to the co-polarized RCS frequency performance. The cross-coupled RCS monostatic performance of reference as well as proposed antenna, considering VP and HP wave incidences, is displayed in Figure 8c. As can be seen, the cross-coupled radar echoes are significantly low. The cross-coupled RCS performance of the proposed antenna obeys the cross-coupling reflection results presented for the unit cell in Figure 2a. Hence, the function of the absorber for reflectivity suppression is validated. The results presented in Figure 8 are for the case of 50 Ω antenna termination. This is because for most of the practical cases, the antenna would be matched terminated to its transmitter (Tx) or receiver (Rx). However, simulations for the cases of open/short termi-nations (worst case scenarios) were also performed for the proposed antenna, and it was observed that there was no significant change among different cases (short/open/matched). This might be due to the reason that the short or open primarily affects the antenna mode scattering, which is a function of antenna gain given as [41]: where σ ANT represents antenna mode scattering, G is antenna gain, Γ is reflection coefficient, and λ is the wavelength at the frequency of interest. The out-of-band frequencies will be unaffected by open/short terminations as these frequencies are being significantly absorbed at the absorber surface (and hence leave an insignificant energy that would reach the antenna surface). For the in-band frequencies, the in-band absorption is impaired (as evident from Figure 2a,b), but the in-band antenna mode scattering (frequencies reflected from the mismatched load and getting re-radiated by the antenna) would still be insignificant as the gain towards broadside is considerably low. The measurement of frequency response of monostatic RCS was performed using two sets of in-lab developed parabolic reflector antennas that had lambda/2 dipole as their excitation source. Each set contained two such antennas (one for transmit (Tx) and one for receive (Rx) operation). Within the desired frequency range, six different frequencies were tested. 
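The antenna-mode scattering relation referenced in the termination discussion above appears to have been dropped during extraction; a standard form consistent with the symbols defined in the text, stated here as an assumption rather than the exact expression of reference [41], is

σ_ANT = (G² λ² / 4π) |Γ|²

so that a matched termination (Γ = 0) suppresses the antenna-mode contribution, which is consistent with the matched-load results discussed above.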
In comparison to horn antennas, the size of these antennas is smaller; hence, in a real scenario, they can better replicate the ideal monostatic simulation setup, a merit that is worth noting. Figure 9a represents the simulated vs. measured monostatic RCS suppression for the VP incident wave, whereas Figure 9b displays the same for the HP incident wave. The measurement setup is displayed in Figure 9c. As evident from Figure 9a,b, an agreeable correlation is found between the simulated and measured results; in fact, the measured suppression performance is superior for most of the frequency points. The details of the tested frequencies, with a comparison between simulated and measured RCS suppression levels, are exhibited in Table 2. The measured mean suppression surpasses the simulated values. From these measurements, it can be asserted that the design verification stands as successful.
The monostatic RCS was also investigated as a function of incidence angle (−90° to +90°) to determine the angular stability of the proposed design. Figure 10a,b show normalized y-z plane plots for the VP incident wave for the reference antenna and the proposed antenna, respectively. Similarly, Figure 11a,b show normalized x-z plane plots for the HP incident wave for the reference antenna and the proposed antenna, respectively. Measured results are presented alongside the simulated ones and depict a satisfactory correlation with them. All of these results were plotted at 10 GHz. From Figures 10 and 11, RCS reduction occurs over about a ±15° angular span. All presented RCS results are for antennas terminated with a matched load.
In Figure 12, the simulated monostatic RCS angular responses of the reference and proposed antennas are presented for some more frequencies. Figure 12a illustrates the y-z plane RCS plots for a VP incident wave at 7.6 GHz, while Figure 12b illustrates the x-z plane RCS plots for an HP incident wave at 7.6 GHz. Similarly, Figure 12c illustrates the y-z plane RCS plots for a VP incident wave at 14 GHz, while Figure 12d illustrates the x-z plane RCS plots for an HP incident wave at 14 GHz. For 7.6 GHz, the RCS reduction in both planes occurs over an angular span of ±22°, which increases to ±28° at 14 GHz. It is inferred that as the broadside RCS reduction increases, the angular span over which the RCS reduction is achieved increases as well. Furthermore, it can be observed from Figures 10-12 that, in comparison to the reference antenna, the low RCS performance of the proposed antenna is impaired towards wider off-broadside angles. In fact, the reflectivity even increases at far-off angles. This increase may be attributed to the vertical dimension of the proposed antenna, as it is a Fabry-Perot cavity, while the reference antenna is a single-layer planar antenna. Thus, it is important to mention that, although the proposed antenna's main beam is deflected towards the off-broadside direction, the wideband low RCS performance of the proposed antenna is dominantly towards broadside angles only. Furthermore, as explained by the working of the unit cell in Section 2, the off-broadside peak radiation was achieved by implementing the phase gradated PRS, while the wideband broadside RCS reduction is a result of the PRS-backed AFSS. Therefore, the antenna may potentially be used in low observable military applications where communication in an off-broadside direction is desired.
The monostatic RCS reduction for oblique angle incidence is shown in Figure 13. For the VP wave incidence, the RCS reduction is shown in Figure 13a, and for the HP wave incidence, in Figure 13b. From the curves, it can be observed that for incident angles of 5° and 10°, the performance is close to that of the 0° (boresight) case. For the wave incidence of 15°, the RCS reduction for both polarizations is close to 0 dB at the frequency points of 6, 10, and 11 GHz. It is important to mention that at these frequencies, both the proposed and the reference antennas realize nulls (or steep slopes just around the nulls) in their monostatic reflectivity patterns. For the incidence angle of 20°, the average RCS reduction becomes somewhat lower, and the RCS reduction value at 12 GHz is a little compromised. Thus, it can be asserted that the overall angular stability of the proposed design is slightly less than ±20°.
To gain further insight into the absorption as well as the off-broadside radiation mechanism, surface E-field plots are presented in Figure 14. The log magnitude of the E-field plotted on the absorber surface is shown in Figure 14a. The plot is for a VP (x-polarized) incident wave at the frequency of 9 GHz. By inspection of Figure 14a, the spots of higher field intensity are easily identifiable. This field appears across the gaps in the metallic loops where the resistors are present. The energy of the incident field is dissipated as heat within these resistors, thereby leading to absorption of the incident wave at the surface. An identical field plot is expected for an HP incident wave, except that the higher field intensity spots would then appear across the other two gaps of every loop. Likewise, the field overlay plot on the surface containing the gradated apertures (phase gradient surface) is shown in Figure 14b. This plot represents the field induced as a result of the wave radiated from the patch antenna (6 GHz frequency). It can be seen that a steady variation of the surface E-field appears along the y-axis, in that the E-field concentration increases gradually along the −y-axis. This validates the mechanism behind the beam deflection operation. Table 3 shows a comparison of the proposed design with similar works from the literature. In terms of RCS reduction, it is evident that the antenna performs almost equally well compared to most of the other designs. Additionally, the proposed antenna is capable of realizing an off-broadside beam radiation functionality. Furthermore, the achieved RCS reduction bandwidth is also superior to other works. In addition, the cavity height is smaller, making it a low-profile design. Although the proposed antenna also has high gain (relative to a conventional patch antenna), the gain quoted for other designs is higher because those antennas radiate a broadside beam and hence do not suffer from scan loss [42]. The second reason is their aperture sizes, which are comparatively larger than the proposed design.
Conclusions
A low-profile high gain FPC antenna that can simultaneously realize low backscattering as well as enhanced gain deflected beam operation has been presented in this article. To construct the cavity, an absorptive FSS was designed to work in conjunction with a double-sided PRS, and mounted on top of a patch radiator. One side of PRS is a capacitive grid, while the other side is an inductive grid. A dimensional gradient was implemented in the inductive part. For an incident wave, wideband RCS reduction was achieved and also included in-band frequencies. In the transmission mode of antenna, high gain as well as off-broadside beam radiation was achieved. Antenna cavity height is~λ/4. The low scattering property of the proposed antenna makes it suitable to be integrated with stealth type platforms for communication, and it can find multiple applications in the military and defense realm; examples include side looking air borne radars, surveillance UAVs, and any military application that requires above/below the horizon communication. In future, the work can be extended to incorporate linear/circularly polarized MIMO antennas. | 7,782.6 | 2021-01-25T00:00:00.000 | [
"Physics"
] |
Perceptual video quality assessment in H.264 video coding standard using objective modeling
Since the usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture gets degraded; this quality difference is introduced by the encoding process, such as Intra and Inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using the subjective impairments blockiness, blur, and jerkiness, compared to the existing bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
Introduction
Digital video, in the form of various applications such as digital television, internet streaming, digital cinema, video on demand, video telephony, and video conferencing, is predominant in our lives, and these video and multimedia applications are growing fast. In this huge digital video application space, various service providers offer solutions to end customers. The digital video typically goes through different stages of processing before it reaches the end user, resulting in video quality degradation. So the challenge for these service providers is to guarantee an appropriate Quality of Experience (QoE) for the end user to avoid churn. Quality assessment for speech has quite a long history and is well established, and there is extensive work going on to extend quality assessment to audio and video. The need for an accurate and reliable method of video quality measurement has become more pressing with new digital video applications and services like mobile TV, streaming video, and IPTV. In general, quality measurement has a wide range of uses, such as codec evaluation, headend quality assurance, in-service network monitoring, and end-equipment quality measurement.
Quality assessment methods can be divided into objective and subjective measurement. Objective methodology uses mathematical models to depict the behavior of the human visual system. In subjective assessment of video quality, observers view the video and give their opinion about it; the aggregation of their opinions gives the Mean Opinion Score (MOS), which provides the measure of subjective quality assessment.
Continuous quality monitoring in an in-service mode is beneficial for service providers and end users. In-service quality monitoring techniques are required to have low computational complexity, high correlation with MOS, and the ability to use the metric meaningfully. Perceptual quality metrics are algorithms designed to model the quality of video and objectively predict end-user opinion.
Based on the method of objective metric calculation, they are generally classified as follows (Winkler 2009): (a) Data metrics, such as peak signal to noise ratio (PSNR) and mean square error (MSE), which measure the fidelity of the signal without considering content characteristics. (b) Picture metrics, which process the visual information in the video data and account for the effect of distortions in the content on perceived video quality. (c) Bitstream or packet-parameter-based metrics, for compressed video delivered over a packet network. (d) Hybrid metrics, derived from a combination of the above.
Additionally, based on the amount of reference information required, they are classified as follows: (a) Full Reference (FR) metrics measure the degradation in the test video with respect to a reference video. (b) No Reference (NR) metrics analyze the test video without the need for an explicit reference clip. (c) Reduced Reference (RR) metrics are a tradeoff between the FR and NR metric calculation in terms of the reference information required; the comparison between the test video and the reference video is based on the extracted information.
Since video compression schemes have to address impairments related to block-based prediction in the spatial and temporal domains, any metric suited for in-service application should be calculated based on picture metrics and a no-reference model. Because of the advantage of capturing the cumulative effect of compression on video quality, a picture-metric-based video quality measurement is proposed in this paper and applied to the H.264 compression scheme (ITU-T 2005) for headend quality assurance. Even though the reference video is available in the H.264 encoder, only a no-reference model is proposed, since the proposed scheme should be extensible to different in-service quality monitoring applications and should have low computational complexity.
Because the compression standard is block based and the problem can be generalized over the transform block size, the proposed blockiness, blur, and jerkiness metrics are calculated at the block level. This can be generalized to any block-based coding and to different transform sizes. We present a MOS calculation based on the impairments of blockiness, blur, and jerkiness, where the MOS calculation model carries the cumulative effect of all three metrics. The computation of these impairment metrics is in accordance with the ITU-T P.910 (1999) standard. The correctness and effectiveness of these models are verified experimentally, and the results are compared against the well-known full-reference quality assessment method SSIM.
The paper is organized as follows. Section II provides details about related work in the proposed research area. Section III explains the motivation and the proposed perceptual video quality model. Section IV outlines the design, and Section V presents the performance evaluation and discussion. Section VI contains the concluding remarks.
Related work
Among the different quality metrics used to assess video quality, an objective full-reference quality metric is proposed in Abharana et al. (2009) using the natural decrease in entropy of the decoded frame due to compression, together with vertical and horizontal artifacts due to the blockiness effect; the spatial and temporal masking properties of the human visual system are also exploited, and the metric is compared against other standard full-reference metrics. However, a no-reference quality metric has advantages in terms of computational complexity and reference availability. Although many works (Brandao et al. 2009; Arum et al. 2012) have experimented with quality assessment of compressed video, there are full-reference metrics such as Eden (2007), which proposed a measure of picture quality based on peak signal to noise ratio (PSNR), a full-reference metric, estimated statistically from transform coefficients so that it can be used as a no-reference metric. A revised PSNR no-reference model is presented in Brandao and Queluz (2010) that estimates video quality using estimated DCT coefficients derived with Maximum Likelihood techniques. A perceptual MOS calculation model based on content spatial-temporal activity (computed from the average SAD) and the display format is proposed in Joskowicz and Ardao (2010), where the relationship between the bit rate and the MOS is derived; however, using only the bit rate limits the quality estimation for certain video services. Valenzise et al. (2012) proposed an estimation of the pattern of lost macroblocks that produces an accurate estimate of the mean-square-error (MSE) distortion introduced by channel errors; the results of that method correlate well with the MSE distortion computed in full-reference mode, with a linear correlation coefficient of 0.9 at the frame level. A two-part no-reference quality metric calculation, consisting of training and test phases, is proposed in Kawano et al. (2010); in the training phase, the sensitivity is calculated from features like blockiness, blur, and edge busyness, and these features are ranked using the Principal Component Analysis (PCA) method. In Rossholm and Lovstroem (2008), the authors try to find a linear relationship between a quality measurement method and media-layer metrics such as the quantization parameter, bits per frame, frame rate, and mean motion vector length. The methods proposed in Ries et al. (2007) calculate video quality using parameters such as bit rate, number of zero-length motion vectors, mean motion vector length, and motion vector direction. Even though bit rate is a key parameter (ITU-T G.1070 2012) for estimating coding distortion, the subjective quality of different video sequences cannot be correlated well with the bitrate alone. So the proposed method uses the impairments of blockiness, blur, and jerkiness introduced by spatial and temporal activities to improve the estimation accuracy in the encoder for headend quality assurance.
Proposed perceptual quality estimation model
Because all block-based video codecs are lossy, compression introduces video artifacts that are noticeable to the human visual system. In any application, the video quality perceived by the user is an important factor of the Quality of Experience (QoE). To define the QoE, quality measurement standardization bodies have adopted the MOS as a measure and seek a MOS prediction method that is reliable and reproducible. Even though some of these objectives are achieved in the existing standards, the topic is still being researched to address specific applications. The idea proposed in this paper is to arrive at an NR-metric-based perceptual quality assessment that can be used for continuous monitoring in different applications. At the head end, this can be implemented as part of the encoder, without much added complexity, for in-service assessment of the delivered quality.
The performance of reference-based quality assessment methods is limited by the quality of the source video and by the need to align the video sequences. A no-reference (NR) approach assesses the absolute quality as viewed by the user, which is more useful in an end-to-end performance monitoring scenario. Quality assessment is a challenging task when no reference is available, but NR methods offer the advantage of in-service, real-time assessment because of their low computational complexity.
The NR metrics for blockiness, blur and jerkiness are calculated, and the perceptual quality assessment model for the codec at a given bit rate is derived in accordance with ITU-T G.1070. For a set of training and test video sequences, the perceptual quality predicted by the proposed assessment model is computed and presented. The correctness and effectiveness of the model are evaluated against the well-known full-reference quality metric SSIM, following the methods provided by VQEG.
The video coding quality parameter I_coding at the optimal frame rate is defined in ITU-T G.1070 as follows, where Br_v is the bit rate and I_coding is the assessment of coding quality artifacts, whose value varies from 0 to 4. This perceptual quality metric accounts only for the coding-based quality impairments and provides the quality measure at the head end. The terms v3, v4 and v5 are constants, and any change in v4 greatly impacts the MOS value; we therefore derive v4 from the no-reference blockiness, blur and jerkiness measurements. The proposed MOS calculation uses v4 as a combined scaled distortion indicator carrying the effect of all three impairments together with the bit rate.
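The equations referred to above as (1) and (2) did not survive extraction. For reference, the coding-quality term of ITU-T G.1070 is commonly written in the form below; treat the exact expression and the MOS mapping as assumptions rather than a quotation of this paper:

\[
I_{\mathrm{coding}} = v_3\left(1 - \frac{1}{1 + \left(Br_v / v_4\right)^{v_5}}\right), \qquad MOS = 1 + I_{\mathrm{coding}},
\]

so that I_coding spans 0 to v_3 (approximately 0 to 4) and the resulting MOS spans roughly 1 to 5, consistent with the ranges quoted in the text.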
Design overview
This section presents the details of the intra-frame metrics, blockiness and blur, and the inter-frame metric, jerkiness, computed at block level. Based on these intra-frame and inter-frame metrics, the perceptual quality estimation is proposed. The model uses no-reference metrics, which also reduces the computational complexity.
Blockiness metric
The blockiness metric measures the visible edges at coded-picture block boundaries; it is calculated from the Boundary Strength (BS) of the transform block boundaries, which is already part of the encoder standard. The amount of blockiness present over a window of frames is accumulated, and a normalized blockiness metric (BM) is computed from it. A BS value of 4 indicates high blockiness and a BS value of 0 indicates low blockiness. To measure the amount of blockiness, all block boundaries with BS equal to 4 in intra-coded frames and BS equal to 2 in inter-coded frames are counted. This count is accumulated over a frame, and the normalized BM is then calculated and expressed as a percentage, so the value of BM lies between 0 and 100.
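The paper implements this inside the H.264 JM encoder in C; the following language-agnostic Python sketch only illustrates the counting and normalization described above. The function name and the idea of pre-extracted per-frame BS maps are assumptions for illustration.

```python
import numpy as np

def blockiness_metric(bs_maps, frame_types):
    """Normalized blockiness metric (BM) over a window of frames.

    bs_maps     : list of 2-D arrays holding the boundary-strength (BS)
                  value (0..4) of every transform-block boundary in a frame.
    frame_types : list of 'I' (intra) or 'P'/'B' (inter), one per frame.

    Boundaries with BS == 4 are counted for intra frames and BS == 2 for
    inter frames; the count is normalized to a 0-100 percentage.
    """
    counted, total = 0, 0
    for bs, ftype in zip(bs_maps, frame_types):
        target = 4 if ftype == 'I' else 2
        counted += np.count_nonzero(bs == target)
        total += bs.size
    return 100.0 * counted / max(total, 1)   # BM in [0, 100]
```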
Blur metric
The blur metric (BL) is defined as the loss of energy and reduction of spatial detail at sharp edges: if a sharp edge spreads over more pixels in depth, the image is considered more blurred. The metric is computed by first applying a Sobel filter to identify the sharp edges and localize the blur regions in a frame. Once the blur regions are identified, the transform coefficients of the blocks that fall on those regions are used to compute the blur. A weighted count of each frequency component across the blocks under consideration, compared against the occurrence of the low-frequency components, gives the blur count, which is then normalized to obtain BL. The value of BL lies between 0 and 100.
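A minimal sketch of this idea is given below. The edge threshold, block size and the 0.9 low-frequency-energy ratio are illustrative assumptions, not the authors' values; the paper computes the coefficients inside the encoder rather than with an external DCT.

```python
import numpy as np
from scipy.ndimage import sobel
from scipy.fft import dctn

def blur_metric(frame, block=8, edge_thresh=50.0):
    """Normalized blur metric (BL) for one luma frame, in [0, 100].

    Edge regions are located with a Sobel filter; for the transform blocks
    falling on those regions, the share of low-frequency DCT energy is used
    as a blur indicator (heavily blurred edges keep little high-frequency
    energy).
    """
    frame = frame.astype(np.float64)
    grad = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
    h, w = frame.shape
    blur_votes, edge_blocks = 0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if grad[y:y + block, x:x + block].max() < edge_thresh:
                continue                          # not an edge block
            edge_blocks += 1
            coeffs = np.abs(dctn(frame[y:y + block, x:x + block], norm='ortho'))
            low = coeffs[:2, :2].sum()            # DC + lowest AC terms
            if low > 0.9 * coeffs.sum():          # energy almost all low-freq
                blur_votes += 1
    return 100.0 * blur_votes / max(edge_blocks, 1)
```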
Jerkiness metric
Slow camera movement and zoomed video sequences are prone to the jerkiness artifact. The metric is calculated as the normalized number of state transitions at the macroblock level: based on a mean-square-error threshold, each macroblock is classified as updated or not. The status of macroblock updates across a window of frames provides the jerkiness measure (JR), computed as the maximum over time of the standard deviation over space across all frames. More motion between adjacent frames results in a larger JR value.
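The sketch below follows the description literally (MSE-thresholded macroblock updates, then the maximum over time of the spatial standard deviation). The threshold value and function name are illustrative assumptions.

```python
import numpy as np

def jerkiness_metric(frames, mb=16, mse_thresh=4.0):
    """Jerkiness indicator (JR) over a window of luma frames.

    A macroblock is flagged as 'updated' when its MSE against the previous
    frame exceeds mse_thresh. JR is the maximum over time of the spatial
    standard deviation of the per-frame update map.
    """
    frames = [f.astype(np.float64) for f in frames]
    h, w = frames[0].shape
    per_frame_std = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        updates = []
        for y in range(0, h - mb + 1, mb):
            for x in range(0, w - mb + 1, mb):
                mse = np.mean((cur[y:y + mb, x:x + mb] - prev[y:y + mb, x:x + mb]) ** 2)
                updates.append(1.0 if mse > mse_thresh else 0.0)
        per_frame_std.append(np.std(updates))
    return max(per_frame_std) if per_frame_std else 0.0
```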
All the above artifacts are computed inside the H.264 encoder, along with the perceptual quality metric calculation of the proposed model. Figure 1 shows a modified block diagram of the H.264 encoder in which the perceptual quality metric is calculated in service. The encoder thus provides a MOS score for the video sequence along with the PSNR, so the user can judge the subjective quality of the encoded video.
In the proposed perceptual quality model, the constant v4 is calculated as a linear combination of the three impairments. Thus v4, the combined scaled distortion indicator, is expressed as a weighted sum of BM, BL and JR, where in equation (3) a, b and c are the weighting coefficients used to adjust the impact of each individual impairment in the perceptual calculation. Their values are derived experimentally using the training set of videos, and the results are analyzed for video content with different spatial and temporal activities. The expected result of each metric is computed as per the P.910 standard, and the training-set results obtained with different coefficients are evaluated for minimum error.
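A hedged sketch of equation (3) and of a possible coefficient-fitting step is shown below. The exhaustive grid search is an illustrative stand-in for whatever fitting procedure the authors used, and the grid range is an assumption.

```python
import numpy as np
from itertools import product

def combined_distortion(bm, bl, jr, a, b, c):
    """Equation (3): v4 as a weighted linear combination of the impairments."""
    return a * bm + b * bl + c * jr

def fit_coefficients(training_samples, grid=np.linspace(0.0, 1.0, 21)):
    """Pick (a, b, c) minimizing squared error against P.910 target scores.

    training_samples: list of (bm, bl, jr, target_v4) tuples obtained from
    the training video set.
    """
    best, best_err = None, np.inf
    for a, b, c in product(grid, repeat=3):
        err = sum((combined_distortion(bm, bl, jr, a, b, c) - t) ** 2
                  for bm, bl, jr, t in training_samples)
        if err < best_err:
            best, best_err = (a, b, c), err
    return best
```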
The MOS value computed as in equation (2) provides a measure of the subjective quality of the video sequence. Because the MOS value reflects the blockiness, blur and jerkiness impairments, the test results show that the proposed model correlates highly with the standard full-reference quality metric SSIM.
The accuracy comparison is based on the Pearson Correlation Coefficient (PCC) and the Root Mean Square Error (RMSE), as proposed in the VQEG (2003) standard.
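For completeness, the two comparison measures reduce to the standard formulas sketched below (function names are illustrative).

```python
import numpy as np

def pcc(predicted, reference):
    """Pearson linear correlation coefficient between two score vectors."""
    x, y = np.asarray(predicted, float), np.asarray(reference, float)
    return np.corrcoef(x, y)[0, 1]

def rmse(predicted, reference):
    """Root-mean-square error between predicted and reference scores."""
    x, y = np.asarray(predicted, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))
```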
Performance evaluation and discussions
The proposed quality metric calculation is implemented in the C language. We used the JM coder for H.264 video encoding, and the metric calculation is implemented as part of the JM reference software. The videos are of standard-definition resolution, and the encoding is set to three different bit rates, 512 kbps, 1 Mbps and 2 Mbps, to show the effect of the impairments at the encoder. Four standard-definition test videos are used to train the weighting coefficients and to obtain the constants of Equations (1) and (3) at each bit rate. The training video sequences are "mobile and calendar", "parkrun", "shields" and "stockholm" (training video sequences: http://media.xiph.org/); these training vectors cover a range of spatial and temporal complexities. Since v4 is the only variable term and all others are constant, the change in perceived quality is proportional to the change in v4. Six different video sequences are used for testing; because the parameters are already trained, no parameter change is needed for different kinds of video. The MOS values for these video sequences are computed as per the proposed method. Figures 2, 3 and 4 show the quality metric performance for 512 kbps, 1 Mbps and 2 Mbps encoding, respectively.
The combined scaled distortion indicator varies from 1.03 to 4.37 across the test vectors at 512 kbps. The value of 4.37 at 512 kbps, obtained for the most distorted sequence with high temporal and spatial complexity, is high compared with 3.18 at 1 Mbps and 2.13 at 2 Mbps for the same sequence. The results show that the coding distortion differs for different spatial and temporal activities, and that across bit rates the quality distortion indicator correlates well; these results are compared against the SSIM full-reference quality metric using PCC and RMSE as proposed by VQEG.
The average values, shown in Table 1, indicate that the proposed model achieves a higher correlation for the quality calculation than the well-known full-reference model SSIM (Wang et al. 2004): the PCC value is higher and the RMSE value is lower than those of the SSIM model. This shows that the MOS calculated from the video impairments is more correlated with the user viewing experience than standard full-reference methods.
The proposed model describes the measurement of intra- and inter-compression artifacts in H.264/AVC coded video, and shows that the impairment-based calculation correlates better than the reference models presented in VQEG. Since the method is a no-reference approach based on impairments in the video, when applied at the decoder end it can capture the combined effect of the encoder and the channel. The method can serve applications where full-reference or reduced-reference information is not available, such as broadcasting, IPTV and video telephony; note, however, that the parameter training needs to be done separately for each codec. The work can be extended to compare the computational complexity and to map these impairment parameters from channel and bitstream information.
Conclusions
A combined measure of the perceived video quality of H.264/AVC compression is proposed using a no-reference model. The metrics were implemented in a C/C++ environment as part of the H.264 JM software. The objective modeling of the subjective quality parameters was derived from the defined standard model. The results were checked for correctness against the actual content quality for a given encoding scenario, showing that the values are highly correlated with the users' viewing experience. The results were also compared against a standard full-reference model and verified using the comparison methods given by VQEG for a set of training and test vectors. Based on these results, it is evident that the video-impairment-based quality model, which has relatively low computational requirements compared with full-reference methods, provides a better quality indication. | 4,440.8 | 2014-04-04T00:00:00.000 | [
"Computer Science"
] |
The Existence and Uniqueness of Solutions for Variable-Order Fractional Differential Equations with Antiperiodic Fractional Boundary Conditions
In this paper, we discuss the existence and uniqueness of solutions for nonlinear fractional differential equations of variable order with fractional antiperiodic boundary conditions. The main results are obtained by using a fixed point theorem.
Introduction
Fractional calculus has become one of the important tools for the development of modern society, and fractional differential equations with variable order have attracted considerable interest [1][2][3][4]. Some researchers have investigated the physical background and numerical analysis of fractional differential equations of variable order [5][6][7][8]. In [9], Bushnaq et al. used Bernstein polynomials with a nonorthogonal basis to establish operational matrices for variable-order integration and differentiation, which convert the considered problem into algebraic matrix equations, and obtained numerical solutions to variable-order fractional differential equations by numerical simulation. In [10], Shah et al. proposed a new algorithm for the numerical solution of variable-order partial differential equations, used properties of shifted Legendre polynomials to establish operational matrices of variable-order differentiation and integration, and obtained the numerical solution through numerical experiments.
In recent years, the antiperiodic boundary value problem for fractional differential equations has gradually become a focus of research, with broad applications in engineering and the sciences, such as physics, mechanics, chemistry, economics, and biology [11][12][13][14][15][16][17]. In [18], Ahmad and Nieto considered an antiperiodic fractional boundary value problem in which ^C D^q denotes the Caputo fractional derivative of order q and f is a given continuous function.
Problems with antiperiodic boundary value conditions have been considered in [19][20][21][22][23][24][25][26], but the antiperiodic boundary value problem for fractional differential equations with variable order has hardly been considered. In this paper, we investigate the existence of solutions for an antiperiodic fractional boundary value problem in which ^C D^p denotes the Caputo fractional derivative of order p, 0 < p < 1, ^C D^{q(t)} denotes the Caputo fractional derivative of variable order q(t), 1 < q(t) ≤ 2, T is a positive constant, and f : [0, T] × R → R is a given continuous function.
Preliminary Knowledge
In this section, we introduce some fundamental definitions and lemmas.
Definition 1 (see [27]). The Riemann-Liouville fractional integral of order q for a continuous function f : [0, ∞) → R is defined as follows, provided the integral exists.
Definition 2 (see [27]). For an (n − 1)-times absolutely continuous function f : [0, ∞) → R, the Caputo derivative of fractional order q is defined as follows, where [q] denotes the integer part of the real number q.
Definition 3 (see [3]). The Riemann-Liouville fractional integral of variable order q(t) for a continuous function f : [0, ∞) → R is defined as follows, provided that the right-hand side is pointwise defined.
Definition 4 (see [3]). For an (n − 1)-times absolutely continuous function f : [0, ∞) → R, the Caputo fractional derivative of variable order q(t) is defined analogously. Definition 5 (see [25]). Let I ⊂ R; I is called a generalized interval if it is either an interval, a singleton {a}, or ∅.
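The formulas of Definitions 1-4 were dropped from the extracted text. For completeness, the standard constant-order forms and their variable-order analogues, consistent with the references [27] and [3], are reproduced below; the paper's exact notation may differ slightly.

\[
I^{q} f(t) = \frac{1}{\Gamma(q)} \int_{0}^{t} (t-s)^{q-1} f(s)\,ds, \qquad
{}^{C}D^{q} f(t) = \frac{1}{\Gamma(n-q)} \int_{0}^{t} (t-s)^{n-q-1} f^{(n)}(s)\,ds,\quad n = [q]+1,
\]
\[
I^{q(t)} f(t) = \frac{1}{\Gamma(q(t))} \int_{0}^{t} (t-s)^{q(t)-1} f(s)\,ds, \qquad
{}^{C}D^{q(t)} f(t) = \frac{1}{\Gamma(n-q(t))} \int_{0}^{t} (t-s)^{n-q(t)-1} f^{(n)}(s)\,ds.
\]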
A finite set θ is called a partition of I if each x in I lies in exactly one of the generalized intervals ξ in θ.
A function f : I ⟶ R is called piecewise constant with respect to partition θ of I if for any ξ ∈ θ, f is constant on ξ.
Theorem 6 (see [27]). Let E be a closed, convex, and nonempty subset of a Banach space X; let F: E ⟶ E be a continuous mapping such that FE is a relatively compact subset of X . Then, F has at least one fixed point in E.
Main Results
Let J = [0, T]. Denote by C(J, R) the Banach space of all continuous functions x : J → R with the norm ||x|| = sup_{t∈J} |x(t)|, and introduce the following assumption. (H1) Let n ∈ N be an integer, let θ = {J_1 = [0, T_1], J_2 = (T_1, T_2], ⋯, J_n = (T_{n−1}, T_n]} be a partition of the interval J, and let q(t) : J → (1, 2] be a piecewise constant function with respect to θ of the stated form, where 1 < q_i ≤ 2 are constants and I_i is the indicator function of the interval J_i. By Definition 4, the Caputo fractional derivative of variable order q(t) for the function x(t) can be presented as a sum of Caputo fractional derivatives of constant orders q_i, i = 1, 2, ⋯, n. Thus, according to (9), problem (2) can be written in the corresponding form (10). Definition 9. Problem (2) has a solution if there are functions x_i with x_i ∈ C(J_i, R) satisfying (10). Let the function x ∈ C(J, R) be such that x(t) ≡ x(T_{i−1}) on [0, T_{i−1}], and consider (2) in the restricted form (11). Proposition 10. For any x(t) ∈ Ω_i with f(t, x(t)) ∈ C(J_i × R, R), x(t) is a unique solution of problem (11) if and only if x satisfies the integral equation (12), where G_i(t, s) is the Green's function. Proof. If x(t) ∈ Ω_i is a solution of problem (11), then applying the fractional integral to both sides of (11) and using Lemma 8, together with the initial condition of problem (11), we obtain the solution of problem (11), and the Green's function can be written accordingly. This implies that x(t) is the solution of the integral equation (12). Conversely, if x(t) ∈ Ω_i is the solution of the integral equation (12), then by Lemma 7 we deduce that x(t) is a solution of problem (11). Hence, the proof is complete.
Proof. According to Proposition 10, problem (11) is equivalent to the integral equation (18). Observe that B_{r_i} is a closed, bounded, and convex subset of the Banach space Ω_i. For any x(t) ∈ Ω_i, the corresponding estimate implies that T : Ω_i → Ω_i is well defined. Next, we consider the continuity of the operator T. Since f(t, x(t)) ∈ C(J_i × R, R), given an arbitrary ε > 0, for any x(t), y(t) ∈ Ω_i we can find δ > 0 such that |f(t, x(t)) − f(t, y(t))| is sufficiently small, and hence the operator T is continuous. For each x(t) ∈ Ω_i, we prove that if t_1, t_2 ∈ J_i and 0 < t_2 − t_1 < δ, then ||Tx(t_2) − Tx(t_1)|| < ε; by the mean value theorem, this estimate holds, and therefore ||Tx(t_2) − Tx(t_1)|| < ε. According to the previous analysis, the image of B_{r_i} under T is equicontinuous and uniformly bounded, and by the Arzela-Ascoli theorem T is compact on B_{r_i}, so the operator T is completely continuous. Therefore, Theorem 11 implies that the antiperiodic boundary value problem of variable order (11) has at least one solution on J_i. This completes the proof. | 1,717.8 | 2022-09-05T00:00:00.000 | [
"Mathematics"
] |
A Fully Unsupervised Deep Learning Framework for Non-Rigid Fundus Image Registration
In ophthalmology, the registration problem consists of finding a geometric transformation that aligns a pair of images, supporting eye-care specialists who need to record and compare images of the same patient. Considering the registration methods for handling eye fundus images, the literature offers only a limited number of proposals based on deep learning (DL), whose implementations use the supervised learning paradigm to train a model. Additionally, ensuring high-quality registrations while still being flexible enough to tackle a broad range of fundus images is another drawback faced by most existing methods in the literature. Therefore, in this paper, we address the above-mentioned issues by introducing a new DL-based framework for eye fundus registration. Our methodology combines a U-shaped fully convolutional neural network with a spatial transformation learning scheme, where a reference-free similarity metric allows the registration without assuming any pre-annotated or artificially created data. Once trained, the model is able to accurately align pairs of images captured under several conditions, which include the presence of anatomical differences and low-quality photographs. Compared to other registration methods, our approach achieves better registration outcomes by just passing as input the desired pair of fundus images.
Introduction
In ophthalmology, computing technologies such as computer-assisted systems and content-based image analysis are indispensable tools to obtain more accurate diagnoses and detect signals of diseases. As a potential application, we can cite the progressive monitoring of eye disorders, such as glaucoma [1] and diabetic retinopathy [2], which can be conveniently performed by inspecting retina fundus images [3]. In fact, in follow-up examinations conducted by eye specialists, a particularly relevant task is image registration [4,5], where the goal is to assess the level of agreement between two or more fundus photographs captured at different instants or even by distinct acquisition instruments. In this kind of application, issues related to eye fundus scanning, such as variations in lighting, scale, angulation, and positioning, are properly handled and fixed when registering the images.
In more technical terms, given a pair of fundus images, I Mov and I Re f , the registration problem comprises determining a geometric transformation that best aligns these images and maximizing their overlap areas while facilitating the visual comparison between them. As manually verifying with the naked eye possible changes between two or more fundus photographs is arduous and error-prone, there is a necessity to automate such a procedure [6,7]. Moreover, the difficulty in comparing large fundus datasets by a human expert and the time spent by ophthalmologists to accomplish manual inspections are commonly encountered challenges in the medical environment.
In recent years, machine and deep learning (DL) have paved their way into image registration and other related applications, such as computer-aided diagnosis [8,9], achieving very accurate and stable solutions. However, despite the existence of several proposals in the image registration literature, Litjens et al. [10] and Haskins et al. [11] recently indicated that there is a lack of consensus on a categorical technique that benefits from the robustness of deep learning towards providing high-accuracy registrations regardless of the condition of the acquired image pair. In addition, among methods specifically developed to cope with eye fundus registration, there is only a limited number of proposals that apply DL strategies, and most of them are focused on the supervised learning paradigm, i.e., the methods usually assume ground-truth reference data to train an alignment model. As reference data can be automatically generated by specific techniques or acquired through manual notes by an eye professional, both cases may suffer from the following drawbacks: (a) synthetically generating benchmark data can affect the accuracy of the trained models [12], and (b) manually annotating data is prone to failure due to the high number of samples to be labeled by a human agent, which includes the complication of creating full databases, large and representative enough in terms of ground-truth samples to be used to train a DL model effectively [11,13]. Lastly, dealing with ethical issues is another difficulty imposed when one tries to collect a large database of labeled medical images.
Aiming to address most of the issues and drawbacks raised above, in this paper, we propose a new methodology that combines two DL-based architectures into a fully unsupervised approach for retina fundus registration. More specifically, a U-shaped fully convolutional neural network (CNN) [14] and a spatial-transformer-type network [15] are integrated, so that the former produces a set of matching points from the fundus images, while the latter utilizes the mapped points to obtain a correspondence field used to drive geometric bilinear interpolation. Our learning scheme takes advantage of a benchmark-free similarity metric that gauges the difference between fixed and moving images, allowing for the registration without taking any prelabeled data to train a model or a specific technique to synthetically create training data. Once the integrated methodology is fully trained, it can achieve one-shot registrations by just passing the desired pair of fundus images.
A preliminary study of our learning scheme appears in our recently published ICASSP paper [16]. Going beyond our previous investigation, several enhancements are put forward. First, we extend our integrated DL framework to achieve more accurate outcomes, leading to a more assertive and stable registration model. We also provide a comprehensive literature review classifying popular and recent DL-based registration methods according to their network types, geometric transformations, and the general category of medical images (see Section 2). An extensive battery of new experiments and assessments are now given, in particular, the analysis of two additional fundus databases, the inclusion of new registration methods in the comparisons, and an ablation study covering the refinement step of our registration framework (see Section 3). Lastly, we also show that our learning registration pipeline can succeed with multiple classes of eye fundus images (see Section 4), a trait hard to be found in other fundus image registration methods.
In summary, the main contributions introduced by our approach are: • A fully automatic learning strategy that unifies a context-aware CNN, a spatial transformation network and a label-free similarity metric to perform fundus image registration in one-shot without the need for any ground-truth data. • Once trained, the registration model is capable of aligning fundus images of several classes and databases (e.g., super-resolution, retinal mosaics, and photographs containing anatomical differences). • The combination of multiple DL networks with image analysis techniques, such as isotropic undecimated wavelet transform and connected component analysis, allowing for the registration of fundus photographs even with low-quality segments and abrupt changes.
Related Work
The literature covers a large number of DL-driven applications for clinical diagnosis in ophthalmology. Recently, several studies have been conducted on deep learning for the early detection of diseases and eye disorders, which include diabetic retinopathy detection [17,18], glaucoma diagnosis [19,20], and the automated identification of myopia using eye fundus images [21]. All these DL-based applications have high clinical relevance and may prove effective in supporting the design of suitable protocols in ophthalmology. Going deeper into DL-based applications, the image translation problem has also appeared in different ophthalmology image domains, such as image super resolution [22], denoising of retinal optical coherence tomography (OCT) [23], and OCT segmentation [24]. For instance, Mahapatra et al. [22] introduced a generative adversarial network (GAN) to increase the resolution of fundus images in order to enable more precise image analysis. Aiming at solving the issue of image denoising in high-and low-noise domains for OCT images, Manakov et al. [23] developed a model on the basis of the cycleGAN network to learn a mapping between these domains. Still on image translation, Sanchez et al. [24] combined two CNNs, the Pix2Pix and a modified deep retinal understanding network, to achieve the segmentation of intraretinal and subretinal fluids, and hyper-reflective foci in OCT images. For a comprehensive survey of image translation applications, see [25].
We now focus on discussing particular approaches for solving the image registration task. We split the registration methods into two groups: those that do not use DL (traditional methods), and those that do. Since our work seeks to advance the DL literature, we focus our discussion on this particular branch.
Considering the general application of image registration in the medical field, the literature has recently explored DL as a key resolution paradigm, including new approaches to obtain highly accurate results for various medical image categories, as discussed by Litjens et al. [10], Haskins et al. [11], and Fu et al. [26]. Most of these approaches rely on supervised learning, requiring annotated data to train a model. For example, Yang et al. [27] introduced an encoder-decoder architecture to carry out the supervised registration of magnetic resonance images (MRI) of the brain. Cao et al. [28] covered the same class of images, but they employed a guided learning strategy instead. Eppenhof and Pluim [29] also applied a supervised approach, but for registering chest computed tomography (CT) images through a U-shaped encoder-decoder network [30]. Still concerning supervised learning, several works attempted to compensate for the lack of labeled data by integrating new metrics into an imaging network. Fan et al. [31] induced the generation of ground-truth information used to perform the registration of brain images. Hering et al. [32] utilized a weakly supervised approach to align cardiac MRI images, and Hu et al. [33] took two networks: the former applied an affine transformation, while the latter gave the final registration of images of patients with prostate cancer.
More recently, new registration methods were proposed to circumvent the necessity of annotated data when training neural networks [15,[34][35][36][37][38]. Jun et al. [34] presented a registration method that relied on a spatial transformer network (STN) and a resampler for inspiration or expiration images of abdominal MRI. Zhang [35] covered the specific case of brain imaging, implementing two fully convolutional networks (FCNs), one to predict the parameters of a deformable transformation to align the fixed image to the moving image, and the other to proceed with the opposite alignment from the moving image to the fixed one. Kori et al. [36] proposed a method that focused on exploring specific features of multimodal images by using a pretrained CNN followed by a keypoint detector, while the framework designed by Wang et al. [37] learns a modality-independent representation from an architecture composed of five subnets: an encoder, two decoders, and two transformation networks. Still on the registration of nonretinal cases, the method developed by Vos et al. [15] aligned cardiac images by comparing similar pixels to optimize the parameters of a CNN applied during the learning process. The method presented by Balakrishnan et al. [38] is another example of nonretinal registration, where the authors took a spatial transformation and U-shaped learning scheme to explore brain MR data.
Concerning the DL-based methods specifically designed to handle retinal fundus images, Mahapatra et al. [39] presented a generative adversarial network (GAN), formed by two networks, a generator and a discriminator, to align fundus photographs. While the former maps data from one domain to the other, the latter is tasked with discerning between true data and the synthetic distribution created by the generator [11]. Wang et al. [40] introduced a framework composed of two pretrained networks that perform the segmentation, detection, and description of retina features. Recently, Rivas-Villar et al. [41] proposed a feature-based supervised registration method for fundus images in which a network is trained using reference points transformed into heat maps so as to predict these maps in the inference step. The predicted maps are converted back into point locations and then used by a RANSAC-based matching algorithm to create the transformation models. Despite their capability in specifically solving the fundus registration problem, the methods described above employ reference data to compose the loss function.
In summary, most registration methods rely on supervised learning or take synthetically generated data in order to be effective. While generating new labels can overcome the scarcity of reference data, it also introduces an additional complication in modeling the problem, raising the issue of the reliability of artificially induced data in the medical image domain [42]. Another common trait shared by most DL registration methods is that they only produce high-accuracy outputs for a certain class of medical images or even subcategories of fundus photographs, such as super-resolution and retinal mosaics. Table 1 summarizes the main DL registration methods discussed above.
Overview of the Proposed Approach
The proposed framework seeks to align a pair of fundus images, I Mov and I Re f , without the need for any labeled data. First, we extract the blood veins, bifurcations, and other relevant compositions of the eye, producing images B Mov and B Re f that are passed through a U-shaped fully convolutional neural network that outputs a correspondence grid between the images. In the next learning step, a matching grid is taken as input by a spatial transformation layer that computes the transformation model used to align the moving image. In our integrated architecture, the learning occurs through an objective function that measures the similarity between the reference and transformed images. As a result, the unified networks learn the registration task without the need for ground-truth annotations and reference data. Lastly, as a refinement step, we apply a mathematical morphology-based technique to remove noisy pixels that may appear during the learning process. Figure 1 shows the proposed registration approach.
Network Input Preparation
This step aims to handle the image pairs, I Re f and I Mov , to improve the performance of the networks. In our approach, the images were resized to 512 × 512 to reduce the total number of network parameters related to the image sizes, thus leveraging the process of training the registration model. Next, a segmentation step was performed to obtain the eye's structures that may be more relevant to the resolution of the registration problem. These include the blood vessels and the optic disc, as we can see from images B Re f and B Mov in the leftmost frame in Figure 1. To maximize the segmentation accuracy, we applied the isotropic undecimated wavelet transform (IUWT) [43] technique, which was developed specifically for the detection and measurement of retinal blood vessels.
Learning a Deep Correspondence Grid
As mentioned before, the first implemented learning mechanism assumes a U-Net-type structure whose goal is to compute a correspondence grid for the reference and moving images. The network input is formed by the pair B_Ref and B_Mov, which is passed through the first block of convolutional layers. The network comprises two downsample blocks, each consisting of a max pooling layer and two convolution layers, as illustrated in Figure 2. In each block, the input size is halved according to the resolution of the images, while the total number of analyzed features doubles.
In the second stage, two blocks are added as part of the network upsampling process. These are composed of a deconvolution layer, which increases the input size while decreasing the number of features processed by the network, and two convolutional layers. The data resulting from the deconvolution are then concatenated with the output of the convolution block at the same level from the previous step (see the dashed arrows in Figure 2). In our implementation, the ReLU activation function and a batch normalization layer were used after each convolutional layer except for the last one. The last convolutional layer returns a correspondence field compatible with the dimensions of the input data.
The network outputs a grid of points (i.e., the correspondence grid), which is used to drive the movement of each pixel when aligning the pair of images. The rightmost quiver plot in Figure 2 displays the correspondence grid, where the arrows move from the coordinates of the regular grid to the positions produced by the network, while the purple and yellow maps show the points of highest and lowest mobility, respectively.
Figure 2. The implemented network architecture, used to obtain a correspondence grid. Each layer is represented by a block with a distinct color. Below each block, the data resolution is described, while in the upper-right corner, the number of kernels per layer is shown. The correspondence grid is the network's output, as displayed in the rightmost corner.
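A minimal Keras sketch of this U-shaped correspondence network is given below. The filter counts, kernel sizes and function names are illustrative assumptions; only the overall layout (two downsampling blocks, two upsampling blocks with skip connections, and a final two-channel convolution) follows the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def build_correspondence_net(size=512, base_filters=16):
    """U-shaped network mapping the stacked pair (B_Ref, B_Mov) to a
    dense 2-channel correspondence grid (one displacement per pixel)."""
    inputs = layers.Input((size, size, 2))            # B_Ref and B_Mov stacked
    e1 = conv_block(inputs, base_filters)             # encoder level 1
    e2 = conv_block(layers.MaxPooling2D()(e1), base_filters * 2)
    bottom = conv_block(layers.MaxPooling2D()(e2), base_filters * 4)

    d2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding='same')(bottom)
    d2 = conv_block(layers.Concatenate()([d2, e2]), base_filters * 2)
    d1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding='same')(d2)
    d1 = conv_block(layers.Concatenate()([d1, e1]), base_filters)

    # Last layer: no activation/batch norm; two channels = (dy, dx) per pixel.
    grid = layers.Conv2D(2, 3, padding='same')(d1)
    return Model(inputs, grid, name='correspondence_unet')
```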
Learning a Spatial Transformation
In this step, we took an adaptation of the spatial transformer network architecture [44] to obtain a transformation model for mapping B Mov . Particularly, the STN structure allows for the network to dynamically apply scaling, rotation, slicing, and nonrigid transformations on the moving image or feature map without the requirement for any additional training supervision or lateral optimization process.
The STN network incorporated as part of our integrated learning scheme consists of two core modules: grid generator and sampler. The goal of the grid generator is to iterate over the matching points previously determined by the U-shaped network to align the correspondence positions in target image B Mov . Once the matches are properly found, the sampler module extracts the pixel values at each position through a bilinear interpolation, thus generating the definitive transformed image B Warp . Figure 1 (middle frame) illustrates the implemented modules of STN.
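The essence of the sampler module is a bilinear resampling of B_Mov at the positions given by the correspondence grid. The sketch below is a non-differentiable NumPy/SciPy illustration of that step; the trainable version inside the network must use differentiable ops, and the function name and grid convention are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving, grid):
    """Resample `moving` at the positions given by the correspondence grid.

    moving : (H, W) array, the image to be warped (B_Mov).
    grid   : (H, W, 2) array with the (row, col) displacement of each pixel,
             i.e. the output of the correspondence network.

    Uses bilinear interpolation (order=1), as done by the STN sampler.
    """
    h, w = moving.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    sample_rows = rows + grid[..., 0]
    sample_cols = cols + grid[..., 1]
    return map_coordinates(moving, [sample_rows, sample_cols],
                           order=1, mode='nearest')
```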
Objective Function
Since registration is performed without using any set of labeled data, the objective function used to train our approach consists of an independent metric that gauges the similarity degree between the images. In more mathematical terms, we took the normalized cross-correlation (NCC) as a measure of similarity for the objective function: $NCC = \sum_{i,j} T_{i,j} R_{i,j} \,/\, \sqrt{\sum_{i,j} T_{i,j}^2 \sum_{i,j} R_{i,j}^2}$ (1). In Equation (1), $T_{i,j} = t(x+i, y+j) - \bar{t}_{x,y}$, $R_{i,j} = r(i,j) - \bar{r}$, and t(i, j) and r(i, j) are the pixel values at (i, j) in the warped and reference images, B_Warp and B_Ref, respectively, while $\bar{r}$ and $\bar{t}$ give the average pixel values w.r.t. B_Ref and B_Warp [45]. In Equation (1), the objective (fitness) function is maximized, as the higher the NCC is, the more similar (correlated) the two images are.
The NCC metric can also be defined in terms of a dot product where the output is equivalent to the cosine of the angle between the two normalized pixel intensity vectors. This correlation allows for standard statistical analysis to ascertain the agreement between two datasets, which is frequently chosen as a similarity measure due to its robustness [46], high-accuracy and adaptability [47].
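A minimal TensorFlow sketch of the training loss is shown below, assuming a global (whole-image) NCC; the paper's Equation (1) may use a windowed formulation, and the negation is only because optimizers minimize while NCC is to be maximized.

```python
import tensorflow as tf

def ncc_loss(b_ref, b_warp, eps=1e-8):
    """Negative normalized cross-correlation between B_Ref and B_Warp."""
    ref = b_ref - tf.reduce_mean(b_ref)
    warp = b_warp - tf.reduce_mean(b_warp)
    num = tf.reduce_sum(ref * warp)
    den = tf.sqrt(tf.reduce_sum(ref ** 2) * tf.reduce_sum(warp ** 2)) + eps
    return -num / den
```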
Refinement Process
Since our approach allows for nonrigid registrations, transformed image B Warp may hold some noisy pixels, especially for cases where the images to be aligned are very different from each other. In order to overcome this, we applied a mathematical morphology technique called connected component analysis (CCA) [48].
CCA consists of creating collections of objects formed by groups of adjacent pixels of similar intensities. As a result, eye fundus structures are represented in terms of their morphologically continuous structures, such as connected blood vessels. We, therefore, can identify and filter out small clusters of noisy pixels (see the yellow points in the rightmost frame in Figure 1) from a computed set of connected morphological components.
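A small sketch of this refinement step is given below, using SciPy's connected-component labeling; the 20-pixel threshold is the value retained after the ablation study reported later, and the function name and binarization of B_Warp are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def remove_small_components(binary_img, min_pixels=20):
    """Drop connected components smaller than `min_pixels` from a
    binarized warped vessel map, keeping the large morphological
    structures (e.g., connected blood vessels)."""
    labeled, _ = label(binary_img)
    sizes = np.bincount(labeled.ravel())
    keep = sizes >= min_pixels
    keep[0] = False                      # label 0 is the background
    return keep[labeled]
```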
Datasets and Assessment Metrics
In order to assess the performance of the registration methodology, we took three retina fundus databases. The specification of each data collection is described below.
• FIRE - A full database containing several classes of high-resolution fundus images, as detailed in [49]. This data collection comprises 134 pairs of images, grouped into three categories: A, S, and P. Categories A and S cover 14 and 71 pairs of images, respectively, whose fundus photographs present an estimated overlap of more than 75%; Category A also includes images with anatomical differences. Category P, on the other hand, is formed by image pairs with less than 75% of estimated overlap. • Image Quality Assessment Dataset (Dataset 1) - this public dataset [50] is composed of 18 pairs of images captured from 18 individuals, where each pair is formed by a poor-quality image (blurred and/or with dark lighting and occlusions) and a high-quality image of the same eye. There are also pairs containing small displacements caused by eye movements during the acquisition process. • Preventive Eye Exams Dataset (Dataset 2) - a full database containing 85 pairs of retinal images provided by an ophthalmologist [7]. This data collection gathers real cases of acquisition, such as disease monitoring, the presence of artifacts, noise, and excessive rotations, i.e., several particular situations typically faced by ophthalmologists and other eye specialists in their routine examinations with real patients.
To quantitatively evaluate the registrations, we employ four metrics: the MSE, SSIM, the Dice coefficient, and GC. The MSE is a popular risk metric that computes the squared error between expected and real values, as shown in Equation (2), where H and W represent the dimensions of the images B_Ref and B_Warp. The values of the MSE range from 0 to infinity; the closer the MSE is to zero, the better. The SSIM metric takes the spatial positions of the image pixels to calculate the so-called similarity score, as determined by Equation (3). In Equation (3), µ represents the mean value of the image pixels, σ² is the variance, σ_xy gives the covariance of B_Ref and B_Warp, and c1 and c2 are variables used to stabilize the denominators. The results are concentrated into a normalized range of 0 and 1, with 0 being the lowest score for the metric, and 1 the highest.
The Dice coefficient is another metric extensively used in the context of image registration; it varies between 0 and 1, where 1 indicates an overlap of 100%. Equation (4) gives the mathematical formulation of this metric. The GC metric, as described by Equation (5), compares the overlap between the images B_Ref and B_Warp with that between the pair B_Ref and B_Mov [52]. Thus, if the number of pixels aligned after the transformation equals the number aligned before the image is transformed, the result is 1; the more pixels are aligned compared with the original overlap, the greater the value.
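For clarity, the four evaluation measures reduce to the sketch below. The binarization of the vessel maps, the SSIM call and the exact form of the GC ratio are assumptions inferred from the descriptions above, not the authors' code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(b_ref, b_warp):
    """Equation (2): mean squared error between B_Ref and B_Warp."""
    return float(np.mean((b_ref.astype(float) - b_warp.astype(float)) ** 2))

def ssim(b_ref, b_warp):
    """Equation (3): structural similarity, in [0, 1]."""
    return structural_similarity(b_ref, b_warp,
                                 data_range=float(b_ref.max() - b_ref.min()))

def dice(b_ref, b_warp):
    """Equation (4): Dice coefficient on binarized vessel maps."""
    a, b = b_ref > 0, b_warp > 0
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def gain_coefficient(b_ref, b_mov, b_warp):
    """Equation (5) (GC): overlap after registration relative to before."""
    before = np.logical_and(b_ref > 0, b_mov > 0).sum()
    after = np.logical_and(b_ref > 0, b_warp > 0).sum()
    return after / max(before, 1)
```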
Implementation Details and Training
Our computational prototype was implemented using Python language with the support of libraries for image processing and artificial intelligence routines such as OpenCV [53], Scikit-learn [54] and Tensorflow [55].
The module of integrated networks was trained with batches of eight pairs of images for 5000 epochs. The plot in Figure 3 shows the learning curve of the integrated networks. The curve exponentially increased with a few small oscillations, converging in the first 2000 epochs and remaining stable towards the end of this phase. The learning process was optimized with the ADAM algorithm [56], a mathematical method based on the popular stochastic descending gradient algorithm. The training was performed on a cluster with 32GB of RAM and two Intel(R) Xeon(R) E5-2690 processors. The images used in the training step were taken from the category S testing set of the FIRE database, which gathers fundus images of 512 × 512 pixels. This particular category was chosen for training because it comprised the largest and most comprehensive collection of images in the FIRE database, covering pairs of retina images that are more similar to each other (see Figure 4 for an illustrative example). An exhaustive battery of tests showed that this full dataset is effective for training, as the conducted tests revealed that the presence of images with low overlapping levels avoids oscillations in the learning curve of the network, leading to a smaller number of epochs for convergence. Another observable aspect when using our approach is that the registration model was trained by taking a moderately sized dataset of fundus images-a trait that can also be found in other fundus photography related applications, such as landmark detection [41] and even for general applications of DL-type networks [57].
Results and Discussion
In this section, we present an ablation study concerning the refinement stage of our methodology, which includes the analysis of different settings to increase the quality of the registration results. We also provide and discuss a comprehensive experimental evaluation of the performance of our approach by comparing it with recent image registration methods from both quantitative as well as qualitative aspects.
Ablation Study
We start by investigating whether the CCA technique can be applied to improve the registration results. We thus incorporated CCA as part of our framework, verifying its impact quantitatively and visually. We compared the application of such a technique by taking three distinct threshold values used to discard clusters with noisy pixels. We also compared the submodels derived from CCA + registration networks against two popular digital image processing techniques: opening and closing morphological filters. Table 2 lists the average of the evaluation metrics for each submodel and database. The standard deviation is also tabulated in parentheses. By verifying the scores achieved by the morphological transformations (network + opening and network + closing), one can conclude that they did not lead to an improvement in quality for the registered image pairs, even for those containing noise. Moreover, the application of these morphology-based filters may alter the contour of the structures present in the images, as shown in Figure 5a,c.
On the other hand, by comparing the results output by submodels network + CCA, we noticed that they clearly contributed to a substantial gain in registration quality in all examined datasets, as one can see from the scores highlighted in bold in Table 2.
In Figure 5, the image registered by the integrated networks without any refinement process appears in green (Figure 5a), while the other panels compare it with the images obtained after applying each denoising technique, rendered in magenta so that, when added to the green image, matching pixels appear white. In this way, the noisy data left in green indicate the pixels that were treated in each image. Visually speaking, when comparing the results in Figure 5e,f, the noise was substantially reduced after applying the CCA technique.
From the conducted ablation analysis, we included as part of our full registration framework the application of CCA algorithm with a threshold value of 20 pixels.
Comparison with Image Registration Methods
We compare the outputs obtained by our approach against the ones produced by four modern image registration methods. Within the scope of keypoint-based techniques, the algorithms proposed by Wang et al. [58] and Motta et al. [7], called GFEMR and VOTUS, were considered in our analysis. For comparisons covering DL-based methods, we ran the techniques proposed by Vos et al. [59], DIRNet, and the weakly supervised strategy introduced by Hu et al. [33]. These DL-driven algorithms were tuned following the same experimental process performed by our approach, i.e., they were fully trained with the same group of training samples, taking into account the same amount of epochs. Figure 6a-d show box plots for each validation metric and registration dataset. The generated plots show that the proposed framework outperformed both conventional and DL-based techniques in all instances, demonstrating consistency and stability for different categories of fundus images. The MSE, SSIM and Dice metrics exhibited similar behavior while still holding the smallest variation in the box plots, thus attesting to the capability of our approach in achieving high-accuracy registrations regardless of the pair of fundus images. Lastly, concerning the GC metric (Figure 6d), since such a measure gauges the overlap segments before and after the registration, the datasets holding more discrepant images were the ones that produced higher scores, as one can check for Category P of FIRE database. DIRNet and VOTUS remain competitive for Category S of FIRE, but they were still outperformed by the proposed methodology. A similar outcome was found when DIRNet was compared to our approach for Dataset 2.
A two-sided Wilcoxon test at the 5% significance level was applied to verify the statistical validity of the registrations produced by our approach against the ones delivered by other methods. From the p-values in Table 3, the results of our approach were statistically more accurate than the others in all datasets for at least three of the four evaluation metrics (MSE, SSIM and Dice). Moreover, our approach was statistically superior (p < 0.05) in 96 of the 100 tests conducted, attesting to the statistical validity of the obtained results.
Table 3. p-values from the two-sided Wilcoxon test at 5% significance level applied to compare the proposed approach against other registration methods.
In addition to the four registration methods already assessed in our validation study, we provide new assessments involving two further methods: the recent registration through eye modelling and pose estimation (REMPE) technique [60], and the well-established scale-invariant feature transform (SIFT) algorithm [61]. Figure 7 shows the box-plot distribution of each validation metric applied to categories A, S and P of the FIRE database. The plots show that our framework outperformed the REMPE and SIFT methods, achieving the smallest variations between outputs, visually represented by the tightest clusters in each plot. A visual qualitative analysis of the registrations produced by the competing methods is presented in Figure 8. Here, we followed [7,16,52] to represent the aligned images as color compositions to increase the visual readability and interpretation of the results. More specifically, images B_Ref and B_Warp were rendered in green and magenta, while the overlap of both images is shown in white, giving the level of agreement between them.
The keypoint-based approaches GFEMR and VOTUS produced acceptable results for most image pairs, but they are not yet able to deal satisfactorily with the blood veins located farther away from the eye globe. The DL-based methods DIRNet and Hu et al. performed nonrigid registrations that caused deformations in the output images (e.g., see the misalignment and distortions in the first, third, and fourth images of Figure 8). Our framework also performs nonrigid registration; however, the implemented networks ensure that the transformation applied to the moving image B_Mov distorts the image structures uniformly, rendering B_Mov closer to the reference image B_Ref. Lastly, one can verify that our registration model and that of Hu et al. were the only ones capable of aligning the very hard images from Category P of the FIRE database.
Another relevant observation when inspecting Figure 8 is the role of vessels in our framework. Indeed, such a procedure allows the method to carry out the registration under the most diverse conditions. For instance, the fundus images from Dataset 1 contain dark lighting, blur, and smoky occlusions. By handling the eye's vessels, it is possible to highlight the vascular structure of these images, accurately performing the registration while avoiding the need for new exams to replace poorly captured photographs.
Figure 8. Visual analysis of the results. Lines 1 and 2: original images from each examined database; Line 3: the images before the registration process; Lines 4-9: the overlapping areas between B_Ref (in green) and B_Warp (in magenta) produced by each registration method.
Conclusions
This paper introduced an end-to-end methodology for fundus image registration using unsupervised deep learning networks and morphological filtering. As shown by the conducted experiments, our approach was able to operate in a fully unsupervised fashion, requiring no prelabeled data or side computational strategy to induce the creation of synthetic data for training. After being trained, the current model produced one-shot registrations by just inputting a pair of fundus images.
From the battery of conducted experiments, it was verified that the proposed methodology produced very stable and accurate registrations for five representative datasets of fundus images, most of them covering several challenging cases, such as images with anatomical differences and very low-quality acquisitions. Furthermore, the methodology performed better than several modern existing registration methods in terms of the accuracy, stability, and capability of generalization for several datasets of fundus photographs. Visual representations of the registration results also revealed a better adherence achieved by the introduced framework in comparison with keypoint-based and DL-based methods.
As future work, we plan to: (i) analyze the effects of applying other fitness functions beyond NCC; (ii) investigate the use of other DL neural networks, for example, SegNet, X-Net and adversarial networks; and (iii) extend our framework to cope with specific clinical problems, including its adaptation for domain transformation, from fundus images to ultra-wide-field fundus photography [25], and 3D stereoscopic reconstruction of retinal images, which is another application related to the context of diagnostic assistance. | 7,518 | 2022-08-01T00:00:00.000 | [
"Computer Science"
] |
Evil and Meaningful Existence: A Humanistic Response through the Lens of Classical Theism
This study modestly proposes a humanistic response as supplementary to classical theism in addressing concrete cases of gratuitous human suffering. Classical theism places evil in God’s divine plan of salvation for humanity. There is thus a good reason behind human suffering. However, there are times when suffering is so intense and dehumanising that any attempt to justify it in terms of God’s love for humanity fails to make sense in the lives of most people. It is at this point that a humanistic response, coupled with spiritual guidance, becomes relevant. A humanistic response expresses itself through an African ethical theory and practice known as Ubuntu. It pivots on key human values such as love, compassion, trust, consideration, dialogue, forgiveness, solidarity, justice as equity, etc. It is in a spirit of togetherness that most existential challenges can be squarely faced to make human life more meaningful. Ultimately, a humanistic response recommends a change of attitude towards human suffering. Suffering should be seen as part of what it means to Be in this finite world, and that it is in one’s struggle towards the heights that one finds a sense in living.
INTRODUCTION
Is it possible to lead a meaningful life despite the presence of evil in this world? Evil, understood as gratuitous human suffering, is a "concrete fact of human life." 1 Evils such as war, terrorism, material misery, hunger, inhuman torture, human trafficking, corruption, natural calamities, incurable diseases, pandemics such as Covid-19, pose a serious challenge to human existence, and, quite often, inflict immense suffering on the lives of countless innocent people. No one can measure the amount of pain experienced by one who loses a beloved one. Many a time, the headlines of newspapers and the news items on television send chills down one's spine. Paradoxically, evil as a topic fascinates and sensationally appeals to a larger percentage of people's attention. Many people appear to be more interested in reporting, watching or reading about what is out of the ordinary, especially news about violence.
The presence of evil becomes more problematic when one believes in a God of love and might. If God is so powerful and so loving, why is there evil? Why does God allow innocent people to suffer unjustly? Is God not as powerful as He is believed to be? Non-believers may also ask: How come reality cannot always be how it ought to be? How come things fail to follow their natural order? How come humans sometimes do certain things which they fully know they ought not to do? Why is it extremely difficult to quit certain addictions, say, to alcohol, cigarettes, gambling, drugs, internet games, pornography, etc.? At the summit of all, fundamental questions arise: Can evil ever be totally "silenced"? Can mankind imagine a world where there is no suffering? What if evil was an invincible reality in this world; would it still be possible to lead a meaningful life? How can believers reconcile the presence of human suffering with the existence of a loving and powerful God?
This paper modestly attempts to shed some light on these questions. It firstly analyses the problem of evil by placing it in the context of gratuitous human suffering. Secondly, it shows how classical theism, as presented in the works of Augustine and Aquinas, attempts to reconcile the presence of evil with a loving and powerful God. Thirdly, it proposes a humanistic response, expressed through the ethics of Ubuntu, as a supplementary approach in addressing concrete cases of needless human suffering. Fourthly and lastly, this paper recommends what is termed "attitudinal change" towards human suffering as a means to cope with those existential challenges that defy human ingenuity. It is by consenting to the things that cannot be changed that human life is likely to be more meaningful despite the presence of instances of evil.
Evil
The term "evil" is generally controversial. "Evil" can mean different things to the same people at the same time.
In the context of this study, "evil" designates actions, attitudes, events, situations… that cause undeserved or gratuitous human suffering. 2 An act or event is evil precisely because it prompts pointless human suffering. 3 Three kinds of evil may be enumerated, namely, (1) moral evil, (2) physical (natural) evil, and (3) metaphysical evil. Moral evil represents the pain and suffering directly triggered by human beings and inflicted on fellow human beings. Moral evils include vices such as murder, corruption, wars, human slavery, sexual abuses, human trafficking, witchcraft, and so on. These evils reflect the perversity of human thoughts and actions.
Natural evil denotes pain and suffering springing from natural phenomena such as hurricanes, earthquakes, diseases, storms, droughts, tornadoes, etc. Natural evils are generally conceived of as products of natural processes. They supposedly occur independently of human intervention. Natural evil, also known as physical evil, implies some bodily deformities like albinism, blindness and other physical imbalances believed to be naturally caused. Another kind of evil closely related to natural evil is "metaphysical evil." This represents the pain and suffering allied to the fleetingness of reality. All that exists [but not as ipsum esse subsistens] has an end. 4 Nothing is eternal except Eternity itself. From an empirical perspective, everything passes away. Buddhist philosophers use the term anicca to describe this fleetingness of reality. 5 No matter how beautiful human life is, death will eventually strike. Pleasure or joy is potential suffering because it will unavoidably be over and lead to dissatisfaction. The fleetingness or diminishment of reality, in most cases, leads to suffering. Take the case of sufferings related to old age. Think of death itself, especially when every human effort has been extended to save lives.
The discussion about the problem of evil as gratuitous or needless suffering is not a new phenomenon. Biblical authors and Jewish thinkers struggled with it in an impressive manner. A Jewish sage wondered why some righteous men were getting what the wicked deserve, and wicked men getting what the righteous deserve (Ecc 8:14). Job also asked why he, an innocent man, suffered unjustly (Job 3). Still more, one of the Psalmists wondered whether God really cared for His people, because the wicked, especially the exploiters of the poor, had become more successful in life than the virtuous ones (Psa 10:1-4).
In contemporary times, the problem of gratuitous human suffering continues to undermine the faith of most believers in a theistic God. When faced with a painful situation regarding his three-year-old son's degenerative disease, Rabbi Harold Samuel Kushner wondered why God would allow bad things to happen to good people. 6 John Hick sees the problem of undeserved suffering as a major stumbling block to the belief in a God of love. 7 For William Hasker, evil is a stepping stone for atheists to deflate and deliberately attack theistic beliefs. 8 Certainly, not all suffering is evil. Suffering can be constructively "useful." Suffering may positively build or shape one's character. People who have not experienced some form of suffering may lack an essential ingredient in their human development. For Friedrich Nietzsche, suffering makes one stronger. 9 For Soren Kierkegaard, suffering may be a necessary condition for spiritual growth. 10 Suffering can lead to a deeper level of self-awareness and self-understanding. Human experience also shows that a successful or flourishing life is, quite often, an outcome of painful experiences characterised by an amount of struggle and the possibility of failure, and all the suffering that derives from it.
Nonetheless, suffering bears the name "evil" when it is experienced as undeserved, when one is rendered passive and impotent before dehumanising situations. One finds oneself set against the flow of life. Imagine extreme cases of violence such as rape, torturing innocent civilians by amputating their arms, forcing an unborn child from the mother's womb, etc. Remarkably, classical theism, without attributing evil to God, still places evil in God's divine plan of salvation for humanity. How is this so?
Classical Theism on Evil
Classical theism, as mainly reflected in the works of Augustine and Aquinas, posits and defends the existence of a God believed to be omnipotent (all-powerful), omniscient (all-knowing) and omnibenevolent (all-loving, all-good). 11 Most theistic religions such as Judaism, Christianity and Islam uphold this conception of God. God is the creator and sustainer of the Universe. For Augustine, since God is good, it logically follows that the things God has created are also good. 12 Evil is thus not a creation of God. Evil is rather a distortion of good. Similarly, Aquinas understands evil as loss or privation of good (privatio boni). 13 Evil distorts the form of an entity by diverting it from its proper end. Literally speaking, evil has no objective existence. It is simply parasitic upon the good. Both Augustine and Aquinas suggest that evil can only be recognised in the framework of a prior appreciation of the good.
How then is it possible to reconcile human suffering with the goodness and powerfulness of God? Classical theism diligently posits a number of logical solutions known as "theodicies" or justifications of God's permission of evil. 14 This study limits itself to three classical theodicies. These are (1) free-will theodicy, (2) greater goods theodicy, and (3) unknown reasons theodicy. These three, in some way, encompass the other theodicies such as natural law theodicy, felix culpa theodicy, the many universe solution theodicy, etc. Beneath these classical theodicies is the teaching that God does not "will" evil but may "permit" it for the well-being of humanity. God permits evil by "willing the good to which such evils are attached." 15 Thus, God has a hand in whatever happens in the Universe.
In the first place, advocates of free-will theodicy claim that God may permit evil to allow human beings to exercise their freedom or self-determination. 16 Human beings can determine themselves only if they have free-will. Man would not lead a normal life if he were not endowed with free-will. He would equally not be a moral agent without free-will. Free-will is the backbone of the human capacity to judge rightly or wrongly. Evil occurs when man misuses his free-will by judging wrongly and acting immorally. God could not have created man in such a way that there is no chance of man going wrong or being bad. 17 God created man with free-will so that man could freely respond in love, faithfulness or obedience to God. God's direct intervention in averting moral evil would restrict human freedom.
Secondly, some classical-theistic thinkers uphold that to achieve certain goods an amount of suffering is required. Richard Swinburne calls them "greater goods." 18 John Hick uses the expression "soul-making" to describe the nature and finality of these goods. 19 The underlying argument is that God may allow evil in the world so that man might achieve moral or spiritual growth through his struggle to overcome suffering. Most moral virtues such as compassion, generosity, solidarity, care, courage, self-sacrifice, and so on, are only possible in a world where there is evil and suffering. God may permit evil in the world so that man achieves moral or spiritual growth. For Saint Irenaeus, neither the world nor man were made perfect, but man has a potential for perfection. 20 Evil or suffering may be [to use Hick's term] a soul-making process or an occasion for man to orient himself towards moral perfection. Thus, some evils serve as moments of grace for the attainment of greater goods.
Thirdly and lastly, some classical-theistic thinkers claim that God may permit evil to strike humanity for reasons unknown to human beings but known to God alone. Behind this view is the conviction that mankind can never exactly know or absolutely predict the "thoughts" of God. Some Biblical authors unveil this fact. Job, for instance, could not conceive of the reason for the horrendous evils that befell him (Job 1:13; 2:1-11). For the Prophet Isaiah, God's thoughts escape human knowability. God's ways remain mysterious and perplexing (Isa 55:8-10). This Biblical message suggests that human reason is too small to capture the greatness of Divine wisdom. Thus said, God may allow evil to strike humanity for reasons unknown to man but known to God alone. As such, humility or the "wisdom of unknowing" seems to be the appropriate way to approach or talk about God. 21 There is no doubt that this classical teaching on evil logically reconciles the presence of evil in this world with a loving and powerful God. Evil is not a creation of God. What God created is all good. Evil can only exist parasitically upon good. God does not "will" evil but may "permit" it for the well-being of humanity. There is thus a reason behind human suffering. While the conviction that God has a hand in whatever happens in the Universe might give a sense of strength, comfort and hope to most believers, it nonetheless arouses ethical or humanistic concerns. Would it really make sense to tell someone tormented by intense and dehumanising pain, for instance, those starving in refugee camps or an elderly widow who has just lost her only son, that God deliberately intends her agony to allow her to attain a higher level of moral or spiritual growth? It feels existentially and pastorally awkward to confidently justify dehumanising evils in terms of God's divine wisdom and love for humanity.
But could there be another approach to the problem of evil that would loyally supplement classical theodicies without justifying evil, appeal to non-believers in God, and respond to the concrete situation of gratuitous human suffering? This study modestly proposes a humanistic response expressed through the African ethics of Ubuntu.
A Humanistic Response as Ubuntu in the Face of Human Suffering
A humanistic response represents spontaneous human attitudes, dispositions and actions geared towards human and societal flourishing. It pivots on life-saving human values such as love, compassion, consideration, hospitality, solidarity, dialogue, friendliness, forgiveness, supererogation, justice as equity, etc. A humanistic response defended in this paper is not an ideology or a system of thought. It should be associated with neither secular humanism nor naturalism, ideologies that tend to prioritise human freedom or natural laws [in the governing of the universe and human nature] as opposed to spiritual beliefs. A humanistic response goes beyond normative structures of conventional morality, but does not ignore them. Normative structures are indispensable in bringing about justice and peaceful coexistence. Beneath a humanistic response is a conviction that it is in a spirit of togetherness, expressed through compassion, genuine dialogue and solidarity, that most existential challenges can be squarely faced to make human life more meaningful. This spirit of togetherness is part of what is philosophically known as Ubuntu.
Ubuntu represents [among other things] mankind's "inherent goodness" or "inherent-impulse-for-responsibility" expressed through a spontaneous inclination to attend to the needs of fellow human beings. Ubuntu, as an ethical theory and practice, mainly operates at the level of sensibility (intuition) prior to making references to legal or conventional morality. It stems from the existential fact that one's life is fundamentally owed to others. Etymologically, the concept of Ubuntu can be traced to the Bantu people of Central, Eastern and Southern Africa. Although these people have different linguistic dialects, they, nonetheless, share a common language characterized by the root word ntu or nhu. This root ntu or nhu is reflected in the Bantu words for a human being, namely, umuntu, munthu, mtu, omuntu, munhu… depending on one's linguistic affiliation. To be truly human is to embody Ubuntu. While the term Ubuntu is a Bantu word, the concept itself runs through almost all African philosophical traditions. In Northern Africa, the Arabic term Insaniyya [translated as "humanness"] seems to represent the reality of Ubuntu. In Western Africa, particularly in the Igbo language of Nigeria, the term Ibummadu, somehow, carries the same weight as Ubuntu. It is Ibummadu (humanness) that lays the ground for Umunna (brotherhood or human interconnectedness). 22
The understanding of Ubuntu as an inherent-impulse-for-responsibility can be mainly identified in the works of Claire Oppenheim, Desmond Tutu, and Nelson Mandela. Oppenheim describes Ubuntu as an "innate duty" or a "spirit from within," that drives man to become fully human and to be attentive to the needs of fellow human beings. 23 Tutu considers Ubuntu as the "essence of what it is to be human." 24 To be truly human is to embody certain qualities judged suitable for living with other human beings. Tutu, in some way, connects the concept of Ubuntu with God's image (Imago Dei) in man. 25 Ubuntu insistently reminds mankind that they belong to God. Mankind is called to love God through worship and to love their neighbour through acts of charity.
Perhaps, the most suitable description of Ubuntu, as an inherent-impulse-for-responsibility, can be traced to the wisdom of Mandela. When asked to define Ubuntu, Mandela sagely narrates a story: A traveller through a country would stop at a village and he didn't have to ask for food or for water. Once he stops, the people give him food, entertain him…. That is one aspect of Ubuntu, but it will have various aspects. Ubuntu does not mean that people should not enrich themselves. The question therefore is: Are you going to do so in order to enable the community around you to be able to improve? 26 Notice that Ubuntu, as an inherent-impulse-for-responsibility, precedes conventional morality. It is an event that precedes calculative thinking. People act without questioning whether it is right or wrong to give food to a "traveller." They simply give. They are acted upon by an inherent force. An example par excellence of Ubuntu, as an inherent-impulse-for-responsibility, is perceptibly reflected in the Biblical parable of the "Good Samaritan" (Luke 10:25-37). Unlike the Priest and the Levite who avoided tending to the agony of the wounded traveller, most likely due to the legal and moral frameworks of their time, the Good Samaritan spontaneously and unconditionally became ethically responsible for the half-dead traveller. The life of the other [traveller, stranger] took precedence over the Good Samaritan's societal laws and personal projects.
Underneath the ethics of Ubuntu is the intuitive conviction that mankind, under normal circumstances, is intrinsically good. Cruelty and other sorts of evils arise when the Ubuntu (humanness or goodness) in mankind is concealed by either choice or external factors. One whose Ubuntu is concealed becomes extremely egoistic and strives to manipulate other human beings and appropriate things for oneself. The concealment of Ubuntu turns man into a wolf to other men. 27 Yet, without the concealment of Ubuntu, people are naturally good. For Mandela, all human beings, even those considered as the worst criminals, have "a core of human decency [Ubuntu]… if their heart is touched, they are capable of changing." 28 Mandela further describes Ubuntu as "a streak of goodness in men that can be buried or hidden and then emerge unexpectedly…" 29 In a nutshell, a humanistic response expressed through Ubuntu describes an unconditional disposition or concern for the interests and welfare of others, regardless of their cultural, political, economic or religious differences. No human being is a foreigner to another. Human beings naturally share some common values and aspirations. For Kwame Gyekye, each human being has an inherent value that must be respected and appreciated at all times. 30 This interpersonal appreciation has to be concretely made manifest in human virtues such as compassion, solidarity, generosity, hospitality, etc.
It should be noted, however, that a humanistic response does not seek to offer solutions to the problem of evil but rather to concretely create an environment that is likely to lessen both the intensity of human suffering and the possibility of the occurrence of evil. How is this so?
Ubuntu as a Concrete Response to Human Suffering
In the first place, Ubuntu, as an inherent-impulse-for-responsibility, challenges moral passivity or indifference when innocent lives are in danger. It obliges one to do something for the well-being of others in the face of human suffering. Mandela himself narrates that Ubuntu compelled him to begin a peaceful fight for the liberation of South Africans from the evils of apartheid. "There was no particular day on which I said, from henceforth I will devote myself to the liberation of my people; instead, I simply found myself doing so, and could not do otherwise." 31 Notice that Ubuntu obsesses and pushes one into action. In the presence of moral or natural evils, Ubuntu spontaneously inspires compassion, love and solidarity with those who are suffering. It inspires sharing with those who are less privileged, taking care of the aged and other vulnerable human beings.
Secondly, Ubuntu inspires people to act in a supererogatory manner. A supererogatory act is one that goes beyond the call of duty (law), that is, over and above what a moral agent is required to do. Such an act is not based on the right of somebody. A genuine beggar has no right to one's help. But one finds oneself compelled to offer him or her some help. Some saints and heroes went beyond the call of duty by laying down their lives for others. A mother may transcend the duty of looking after her children by adopting an orphan child. The unconditional concern for others (Ubuntu) as reflected in human virtues of love, mercy, compassion, friendship…may lessen people's suffering, rather than clinging to legal justice which is fundamentally about, or crucially allied to, rights and duties.
Thirdly, in multicultural or multiracial societies, Ubuntu calls for harmonious co-existence. Human beings share a common humanity. For Mandela, the qualities shared by humanity are much more substantial than the differences that tend to divide people. 32 The presencing of Ubuntu compels one to see one's neighbour as a brother or sister. It unlocks the spirit of togetherness. Self-realization is an outcome of one's reciprocated engagement with others. As people seek together a more meaningful co-existence, they realize that a relationship with one another based on compassion or friendship only is not enough. They thus see the need for just institutions to protect both individual and public goods, and to facilitate the distribution of duties and responsibilities. Fourthly and lastly, Ubuntu lays a philosophical basis for inclusive dialogue and respect for human life. The kind of dialogue promoted here aims at bringing ethics and politics together to prevent the occurrence of political evils such as abuse of power, corruption, killings, and many others. An inclusive dialogue draws on an intuitive conviction that every human being, however cruel, is inherently endowed with a streak of goodness. Everyone counts, and every life has hope. Thus, in societal deliberations, every voice must be heard. In moral or judicial dilemmas, consensual discernment is prioritized to tap the goodness in each human being. Ubuntu stirs peaceful resistance when injustices are being committed by those in power. Ubuntu prioritises the value of human life over other things.
But as mentioned earlier, Ubuntu, as an inherent-impulse-for-responsibility or "streak of goodness" in man, can be, sometimes, concealed or buried. The concealment of Ubuntu may be linked [among other things] to two key factors, namely, (1) intransigence in a society's moral or legal structures and (2) human egocentricity often linked with the human struggle for power or domination. Nevertheless, the spirit of Ubuntu is likely to spring from individuals whose educational system enables them to develop a significant level of ethical and religious consciousness. 33 Individuals with ethical minds naturally and spontaneously tend to the needs of fellow human beings. Generosity becomes a habit for them. Also, a considerable level of religious consciousness may allow humans to listen to an inner voice that compels each one to become a brother or sister's keeper.
It has so far been argued that the problem of evil exceeds theoretical discourses and, in most cases, requires a humanistic response as expressed and concretised through the ethics of Ubuntu. But as mentioned earlier, a humanistic response does not offer a solution to the problem of evil, but can only contribute to lessening both the intensity of human suffering and the possibility of the occurrence of evil. Concretely speaking, the problem of evil remains a perpetual challenge to human existence. The risk of the occurrence of evil remains unavoidable. This straightforward fact raises an existential concern: How can one make one's life more meaningful despite the spectre of evil?
Attitudinal Change towards Evil as Glimmer for Meaningful Existence
"Attitudinal change" suggests a critical reorientation of the human mind towards the enigma of evil in the hope of achieving a meaningful existence. Meaningful existence entails a life lived with and for others in just institutions. It presupposes the availability of a combination of both material and spiritual goods.
"Attitudinal change" recommends a brave acceptance of the isness of things, i.e., the way reality presents itself to mankind. Human experience shows that there are things in life that may not be overturned despite all imaginable human efforts. Take the example of suffering as a result of old age, or death itself when all efforts have been exhausted to save a life. Some illnesses like HIV or certain cancers have also proven to defy the "zero tolerance" envisioned by medical experts. Could Covid-19 be one of these? The Bakiga people of Uganda say: Akeizire kemerwa (What has invincibly come your way must be bravely accepted). Also, reality [as perceived in concrete life] undergoes a continuous process of synthetisation. 34 Cosmic nature has not ceased to mutate. Much as nature aims to its proper end [cosmic balancing], the possibility of causing harm to innocent people through its natural processes [e.g., earthquakes, volcanic eruptions, floods or storms] is always there. Human beings are naturally creative. Creativity is what defines life. Much as humans morally aim at creating what is good and life-giving, the possibility of the occurrence of evil [e.g., a medical operation that turns out to be fatal] is always there. The consequences of human creativity and cosmic (natural) processes can be either favourable or unfavourable to one's well-being and the well-being of others. This may explain why there are expressions like "good luck!" or "bad luck!" Without discouraging or undermining human ingenuity, mankind must acknowledge their impotence before certain human situations and cosmic forces. For believers, however, accepting these realities and surrendering them to God, can enable one to obtain supplementary "strength" and "wisdom" to either redress the situation [where possible], or to courageously face and cope with one's condition of needless suffering. "Attitudinal change" also recommends an acknowledgement of the fact that human life is a continuous struggle for existence. Struggling involves an amount of suffering. A world without some suffering is unimaginable. It would even be boring. It would lack creativity and, perhaps, conceal the presencing of human goodness (Ubuntu). Indeed, human experience shows that most moral virtues such as courage, compassion, solidarity, generosity, care or even self-sacrifice, are only possible in an environment where there is suffering. This is the view carefully developed in the classical-theistic greater goods theodicy. 35 Some suffering can enable mankind to achieve a higher level of moral and spiritual growth. In moments of suffering, God is the co-sufferer who understands and shares in human suffering to alleviate people's pain. Indeed, it is in the struggle towards the heights that one finds a sense in living. But, as previously argued, dehumanising evils such as genocide, terrorism, human slavery, corruption, starvation, material misery, etc., must not be justified at all costs. Significant measures must be put into place to continuously deter them or lessen the possibility of their occurrence. This is precisely why this study proposes a humanistic response to supplement classical theism in addressing such horrendous evils.
With a change of attitude towards human suffering, one may come to realise that a life worth living is a life of the mean, i.e., a life of the middle way between extremes. Too much of anything is poisonous. Modern capitalism tends to define and entice people to believe that material prosperity alone is what brings happiness to human societies. Countless men and women spend most of their time endlessly toiling to accumulate as much wealth as possible. Yet, human experience shows that too much material comfort is as much life-destructive as the extreme lack of it. Extreme cases of self-gratification and self-mortification are equally dreadful. At the level of relationships, being extremely altruistic but completely neglecting oneself does not make sense. Moderation in all aspects of life seems to be the key to a meaningful existence.
"Attitudinal change" further recommends a joyful acceptance of human imperfection and fallibility. Fallibility lies in what Paul Ricoeur conceives of as a "split" in the nature of a human being. 36 Mankind is always pulled between the demands of the body (pleasure) and the demands of the spirit (fulfilment). A human being, properly speaking, lies between being (existence) and nothingness (non-existence). A human being is fundamentally incomplete. Mankind is always in the making. This condition of incompleteness renders human beings fragile. It somehow explains why humans are prone to committing moral evil, and also prone to all sorts of addictions, coupled with the difficulty of quitting them. Human fallibility calls for humility, selfacceptance and mutual understanding. However, human fallibility should not be taken as a reason to justify evils committed by humans. It should also not be seen as an obstacle to a meaningful existence. Though naturally fallible, humans are graciously capable of actualising their potentials for a meaningful life. Again, by consenting and surrendering everything to God, one is likely to obtain supplementary "strength" and "wisdom" to make human life more meaningful.
Ultimately, "attitudinal change" recommends a courageous acceptance of human finitude (death). A generation comes and goes. Mankind is a pilgrim on earth. Life must come to an end at a particular moment. Consenting to this fact is "accepting" death as a "partner" in life. Consenting to the possibility of one's death is likely to liberate one from the fear of death. The secret of meaningful existence is to "die before you die" to realize that there is actually no death. 37 Life and death are two sides of the same coin. Each side has its time. And now is the time to live. Let life Be!
CONCLUSION
This study has attempted to examine the problem of evil through the lens of classical theism and has proposed a humanistic response, coupled with spiritual guidance, as supplementary to the classical-theistic teaching on evil. Classical theists consider God as having a hand in all that occurs in the Universe. Human suffering is seen as part of God's plan of salvation for mankind. Suffering can enable humans to develop certain virtues or a higher level of spiritual maturity. It was argued, however, that there are times when suffering becomes so intense and dehumanising that any attempt to justify it in terms of God's love for humanity fails to make sense; it is at this point that a humanistic response, expressed through the African ethics of Ubuntu, becomes relevant.
Understanding the hydrology of karst
Determining the nature of water flow and contaminant dispersion in karst requires far more information than can be provided by simple dye traces. Tracing can delineate drainage divides, flow directions, and flow velocities at various stages, but for water management purposes it is also important to determine such variables as groundwater storage, retention times, patterns of convergence and divergence, and response to wet-dry cycles in the soil. These are most significant in the non-conduit portions of the karst aquifer, which supply most wells. Dye tracing can be augmented by hydrograph analysis at various stages, tracing with tagged solid particles or microbes, evaluation of dissolved solids and chemical equilibria, and isotopic analysis. This paper concentrates on some of the uses of chemical equilibria and isotopes. Stable isotopes (e.g. 18O and deuterium) and the various radium isotopes are among the most useful. Ratios among the four radium isotopes (226Ra and 228Ra, with half-lives in years; 223Ra and 224Ra, with half-lives in days) are well suited to karst studies. These techniques are time-consuming and costly, so a full analysis of a karst aquifer is rarely feasible. Instead, it is recommended that selective analyses be made of representative parts of the aquifer, and that they be applied as follows: (1) Develop conceptual models based on field observation, which allow one to anticipate a range of probable scenarios of contaminant transport and remediation. (2) If digital models are used, it is most effective to design simple generalized models in which the boundary conditions are clearly defined, and then to gain insight into real aquifers by noting the differences between the model and field observations. (3) Use field techniques to become familiar with the local hydrology and then apply hydraulic and chemical principles to anticipating contaminant behaviour, rather than reacting only to emergencies. These approaches encourage the growth of interpretive skills based on the same scientific principles that govern the origin of caves and karst.
INTRODUCTION
Maintaining a safe water supply is difficult in karst, where the physical setting and modes of water movement are complex. This paper addresses two issues: first, to introduce the great variety of interpretive field techniques that are available to the karst hydrologist; and second, to suggest a realistic strategy for applying the resulting information to karst.
Effective water management in karst requires several types of field data: (1) Delineation of drainage basins and drainage divides. This task is complicated by the overlap of catchment areas where perched vadose flow crosses phreatic divides, and by the shifting of divides with time as flow stage varies. (2) Flow directions, velocities, and discharge in solution conduits and in the dispersed flow outside the conduits.
(3) Patterns of convergence and divergence of groundwater, and their role in chemical dispersion. (4) Effective pore and fissure sizes and their interconnectivity. This information helps the anticipation of likely paths of pathogen migration.
(5) Volume and duration of water retained in storage between infiltration events (e.g., capillary water, perched pools, etc.).
Measuring travel times: quantitative tracer tests
Estimating the position of drainage divides with dye traces is a long-established technique. Detection of flow divergence, crossovers, and shifting of divides with flow stage are also important outcomes. These complexities increase with structural deformation. Dominantly convergent and well-defined drainage patterns are typical of undisturbed strata, as in the central U.S.A. (e.g. QUINLAN & RAY, 1981). In areas of greater structural complexity, such as the Dinaric chain (e.g., BAUČIĆ, 1968), features including divergent flow paths, crossovers, and overlapping groundwater divides are far more likely. Dye tracing is discussed in many other publications (e.g., KÄSS, 1998).
Much information can be gained by repeating quantitative dye traces at a wide variety of discharges. Dye concentrations at detection points are measured at short time intervals so the exact arrival times, peak dye concentrations, and mean travel times (to centres of mass) can be detected. Mean breakthrough times to each monitoring point are plotted as a function of discharge. A logarithmic scale allows coverage of a broad range of values and emphasizes differences in the shape of the plots (PALMER, 2007). Conduits that are mostly water-filled will show linear plots with slopes of -1 (Fig. 1). Paths composed mainly of open channels, such as canyons, produce concave-upward curves: as discharge rises, water depth also increases, and the increase in wetted perimeter (p, length of contact of water with solid surfaces in a cross section) can increase almost as fast as the cross-sectional area (A). Flow velocity is a function of the ratio A/p, so velocity in a flooding vadose channel tends to increase more slowly than in a water-filled conduit. A combination of open channels and closed conduits will give linear plots at low discharge that change to concave-upward curves at high discharge. Complex patterns, such as overflow and divergence, may complicate the analysis, but they can also be revealed by this method. Approximate conduit size and the water volume within it can also be crudely estimated.
It is useful to know the approximate percentage of a conduit that is vadose. Low-density contaminants that are mainly insoluble in water will tend to float and accumulate at sumps. During flood pulses, much of a normally vadose conduit will fill with water and drive the contaminants upward into many narrow fissures. Leakage of volatile gases to the surface has been documented (e.g., in Bowling Green, Kentucky; CRAWFORD, 1984), where volatile gases have accumulated in buildings to near explosive limits. Even if the entire interpretive procedure is not feasible, it is helpful merely to anticipate the field conditions that lead to this problem.
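As a minimal illustration of the quantitative-trace workflow above, the sketch below computes first arrival, peak time, and the mean travel time (time to the centre of mass of the dye cloud) from a single breakthrough curve. The times and concentrations are invented for the example, not field data.

```python
import numpy as np

# Hypothetical dye breakthrough curve at one monitoring point:
# elapsed time since injection (hours) and measured dye concentration (ppb).
t = np.array([2, 4, 6, 8, 10, 12, 16, 20, 24, 30], dtype=float)   # hours
c = np.array([0, 1, 8, 22, 30, 24, 12, 5, 2, 0], dtype=float)     # ppb

first_arrival = t[np.argmax(c > 0)]      # first detectable dye
peak_time = t[np.argmax(c)]              # time of peak concentration

# Mean travel time = concentration-weighted average arrival time
# (time to the centre of mass of the dye cloud).
mean_travel_time = np.trapz(c * t, t) / np.trapz(c, t)

print(f"first arrival: {first_arrival} h, peak: {peak_time} h, "
      f"mean travel time: {mean_travel_time:.1f} h")
```

Repeating the calculation for traces run at several discharges and plotting mean travel time against discharge on logarithmic axes then shows whether the plot is roughly linear with a slope of -1 (water-filled conduit) or concave-upward (open-channel reaches), as described above.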
ANALYTICAL TECHNIQUES
The variables listed above can be investigated in many different ways. Major categories of hydrologic field data are outlined here, mainly to demonstrate their utility and how they relate to each other. Most are time-consuming, and some are costly as well, so it is rarely feasible to apply them in sufficient detail to truly understand the behaviour of a karst aquifer. However, by considering them, we are forced to think about how karst aquifers behave. For further information see also MILANOVIĆ, 1981; BONACCI, 1987; and FORD & WILLIAMS, 2007.
Measurement of hydraulic head and gradients
The most direct way to measure local hydrologic conditions is to measure heads in water wells and to determine their spatial and temporal variation. Contouring of static water levels can give a good approximation of drainage divides and flow patterns, if the data are abundant enough and the geology is relatively simple (e.g., QUINLAN & RAY, 1989). Natural variations in head with time provide much information on proximity to turbulent-flow conduits, as well as flow convergence and divergence at different stages. Rapid rise and fall of head in response to rainfall or snow-melt events facilitates the delineation of major flow routes, with turbulent-flow solution conduits having the most extreme rates of response. Laminar-flow zones within the aquifer respond more slowly, with broad, long-term rises and declines in head. Volume of groundwater storage and rates of release can be estimated. Variations in head with time can demonstrate the dispersal of water during flood pulses.
As in other aquifers, pumping tests in karst provide considerable information on flow patterns, effective hydraulic conductivity, and anisotropy. Spatial variability tends to be much greater than in other aquifer types. Flow directions and anisotropy can also be anticipated from the geologic setting.
Hydraulic gradient is a critical component in all flow equations, whether laminar or turbulent. Discharge and velocity are directly proportional to gradient in laminar flow but are related more or less to the square root of the gradient in turbulent flow.
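A small numerical illustration of this contrast, with assumed (not measured) parameter values: doubling the gradient doubles the laminar velocity but raises the turbulent velocity only by a factor of about 1.4.

```python
import math

K = 1e-4        # hydraulic conductivity for the laminar case (m/s), assumed
C_turb = 2.0    # lumped conduit conveyance factor (m/s), assumed

for grad in (0.001, 0.002, 0.004):
    v_laminar = K * grad                      # velocity proportional to gradient
    v_turbulent = C_turb * math.sqrt(grad)    # velocity proportional to gradient**0.5
    print(f"gradient {grad}: laminar {v_laminar:.2e} m/s, "
          f"turbulent {v_turbulent:.3f} m/s")
```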
Discharge measurements
Combining discharge measurements with head data provides insight into the structural nature of the aquifer. Discharge at springs and in cave passages is proportional to catchment area. It is more useful in determining aquifer properties to measure variations in discharge with time, especially in response to storms and/or snowmelt. Differing responses of springs to what appear to be similar storm events can usually be accounted for by evapotranspiration loss and antecedent soil-moisture conditions, both of which show a strong seasonal effect. Where the hydrology is simple, as where ponors feed a single spring, the comparison of discharge in vs. discharge out is a simple method for assessing the presence of dispersion of water into the aquifer. During a flood pulse, the inflow at first typically exceeds the outflow, indicating accumulation of storage in conduits, abandoned upper-level cave passages, and neighbouring fissures and pores. During the waning phase of a flood pulse, the outflow exceeds the inflow as storage is released. A long tail with a roughly logarithmic rate of decrease suggests the release of water as laminar flow, after the main turbulent-flow pulse has passed (ATKINSON, 1977). The volume released is found by integrating the area under the hydrograph (volume = ΣQΔt, where Q = discharge and t = time). Response to storm events will vary with antecedent precipitation - for example, after a prolonged dry period when the infiltration capacity of soil and fill material in epikarst fissures is dominated by desiccation fissures, in comparison with a prolonged wet period when soils are moist and have expanded to fill available bedrock fissures. Much of the water that reaches the phreatic zone during an infiltration event is composed of capillary water that has been held in storage from prior events and displaced by the new water pulse (shown by PITTY (1966)).
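A hedged sketch of the hydrograph bookkeeping described above (volume = ΣQΔt): it compares a hypothetical inflow series at a ponor with the outflow at the spring it feeds, and estimates how much water goes into storage on the rising limb and is released on the recession. All discharge values are invented for illustration.

```python
import numpy as np

dt_hours = 1.0
q_in  = np.array([0.5, 3.0, 6.0, 4.0, 2.0, 1.0, 0.7, 0.6, 0.5, 0.5])  # m3/s at ponor
q_out = np.array([0.5, 1.0, 2.5, 4.5, 3.5, 2.5, 1.6, 1.0, 0.7, 0.6])  # m3/s at spring

dt_s = dt_hours * 3600.0
# Storage accumulates while inflow exceeds outflow, and is released afterwards.
net = (q_in - q_out) * dt_s          # m3 gained (+) or released (-) in each time step
stored = net[net > 0].sum()          # volume taken into storage (m3)
released = -net[net < 0].sum()       # volume released on the recession (m3)

print(f"stored during rising limb: {stored:.0f} m3, "
      f"released on recession: {released:.0f} m3")
```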
Equilibrium chemistry
The degree of saturation of dissolved minerals in wells, springs, and caves helps to determine the flow history of the water and the nature of the openings through which it has passed. Drip-waters in caves can also provide estimates of P CO2 in the overlying soil. Samples of infiltration water must be collected where the water first emerges from the rock or sediment matrix so there is no loss of dissolved gases. The pH is measured in situ, and precipitation of dissolved solids is prevented by HCl acidification of one of a pair of samples. Most seepage water is near saturation with calcite. Saturation with dolomite generally requires lengthy residence time in an aquifer (typically years). Significant supersaturation with either mineral generally indicates loss of CO2 from the water, e.g., by air exchange through open cave entrances or fissures.
Dripwater chemistry in caves, combined with discharge of the drips, can clarify the nature of vadose flow routes. Is the water seeping slowly, with large areas of contact with fissure walls? Is it following discrete solution channels? Some of this information can be obtained from rate equations (e.g., PLUMMER et al., 1978; DREYBRODT, 1988). A generic rate equation for dissolution is
dC/dt = k (1 - C/C s )^n (A'/V)      (1)
where dC/dt = change in concentration with time, k = reaction coefficient, n = reaction order, C s = saturation concentration (and therefore C/C s = degree of saturation, where 1.0 = saturation), A' = surface area in contact with the water, and V = water volume. It is rarely possible to find a unique solution to this equation, but the range of possibilities becomes clear. An integral solution, as well as values for n and k for typical karst situations, are given by PALMER (1991). Note that V/t is dimensionally equivalent to Q, and A' is proportional to flow length, so C/C s relates directly to discharge and to the network of openings through which the water has passed.
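The following sketch integrates the generic rate equation (1) with a simple forward-Euler step to show how C/C s approaches saturation over a day of contact; k, n, C s and A'/V are placeholder values rather than measured karst parameters, and units are deliberately simplified.

```python
# Forward-Euler integration of dC/dt = k * (1 - C/Cs)**n * (A'/V).
# All parameter values below are assumed for illustration only.
k = 1e-8          # reaction coefficient (mol L^-1 s^-1 per unit A'/V), assumed
n = 1.6           # reaction order, assumed
Cs = 2.0e-3       # saturation concentration (mol/L), assumed
A_over_V = 5.0    # wetted surface area per unit water volume, assumed

dt = 10.0                      # time step (s)
steps = int(24 * 3600 / dt)    # one day of contact
C = 0.0
for _ in range(steps):
    # max() guards against a tiny numerical overshoot past saturation
    C += k * max(0.0, 1.0 - C / Cs) ** n * A_over_V * dt

print(f"C/Cs after one day of contact: {C / Cs:.3f}")
```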
A P CO2 significantly greater than that of the soil suggests a hypogenic CO2 source. For example, at Saratoga Springs, New York, water is actively degassing as it emerges from the springs. The flow is discontinuous, with spurts and bubbles. With the assumption that the water at depth was at equilibrium with calcite, back-calculation indicates that its P CO2 was more than 6 atmospheres. Reaction-path software is useful in the calculations (see www.usgs.gov/software). P CO2 significantly less than atmospheric is common but difficult to measure. Seepage into carbonate rock through an insoluble cap-rock allows dissolution to take place under nearly closed conditions, because the reaction is isolated from the reservoir of soil CO2. P CO2 can drop to as little as 10^-4 to 10^-5 atm. This is difficult to measure, because local flow rates are very small, and CO2 is absorbed rapidly from the cave air. Results include rills, etching, and weathering of cave walls where seepage water suddenly becomes highly aggressive toward carbonates (PALMER, 2007). Discharge per unit area is low, but the total over large areas can be substantial. Infiltration through insoluble rock can be verified, and contaminant paths and filtering can be anticipated.
Use of tagged tracers
The ability of pathogens to be carried by groundwater can be estimated by tracing with solid particles of discrete size. An ideal tracer consists of natural indigenous microbes that have been tagged with an identifier that is incorporated into the living cells. BRAHANA (2009) describes this technique in a karst basin in Arkansas that is isolated for hydrologic research. The bacteria are exposed to a europium solution and reintroduced into sinking streams and other infiltration points. The tagged bacteria are transported, temporarily deposited, and re-suspended during flood pulses in the karst conduits. Their movement through the system is monitored in caves, wells, and springs. Such studies can determine the flow conditions in which they are most prevalent at the sampling points. Use of local microbes gives a more realistic response in terms of transmission, retention, and delay than artificial particles.
Analysis of dissolved solids
Measurement of dissolved solids at different flow stages can clarify flow paths where a variety of porous media are present. Interpretations are fairly straightforward and are not elaborated here. For example, high values of sulphates and chlorides suggest deep flow components (most abundant during low flow); and Mg/Ca ratios >1.0 suggest evaporative enrichment in the soil by calcite precipitation. A variety of de-icers on highways (NaCl, CaCl2, etc.) can be tracked and used as a proxy for contaminant flow paths from accidental spills.
Stable-isotope chemistry
Flow rates and storage in non-conduit parts of a karst system can be assessed through the use of isotope chemistry. 18O and deuterium (2H) in precipitation vary seasonally, and when plotted on a graph of δ18O vs. δ2H they tend to fall on a straight line. This "local meteoric water line" is typically close to, and nearly parallel to, that of the global mean data, defined by δ2H = 10 + 8 [δ18O], as shown in Fig. 2. During the cold season, the points plot at the lower left (more depleted in the heavier isotopes); and during the warm season they plot at the upper right. As this water infiltrates and travels through the aquifer, it retains most of its initial isotopic signature. Changes due to CO2 uptake and dissolution of carbonates have little effect, because their total molar contribution to the water is only a tiny fraction. The discrepancy in isotopic signature between local precipitation and that of a groundwater sample increases with time for the first half year, and then diminishes. This discrepancy forms a sinusoidal curve with time (Fig. 3), and therefore the "age" (residence time) of the water since it first precipitated can be estimated in most cases. A single measurement is not sufficient; a time series needs to be developed that shows the seasonal variation over a long time (ideally several years). A groundwater sample that tracks almost exactly with that of the precipitation is likely to have spent less than a single season reaching its present point in the aquifer. Large discrepancies indicate longer residence times in the ground. An erratic correlation requires more care in interpretation. Comparison of values in a variety of places in the aquifer is likely to clarify the overall picture of residence times.
Groundwater that is supplied by a wide range of water sources, typical of bathyphreatic flow, tends to have a clustering of oxygen-deuterium values somewhat low on the meteoric water line, regardless of time of year (Fig. 2). This is because most of the infiltration takes place in the transition from cold to warm, when snow melt is high and evapotranspiration is low.
Points to the right of the local meteoric water line normally indicate evaporative enrichment of the heavy isotopes at the surface. (It may also indicate long-term inheritance from local bedrock, but this appears to be minor in karst aquifers.) For example, McFail's Cave, New York, receives many drips along its length of about 11 km. Some of the land above it is swampy, so there is great opportunity for evaporation. Some of the drips show a significant shift to the right, away from the meteoric water line, and these are interpreted as being fed by wetlands. The position of the drips relative to the wetlands supports this idea, but there are local discrepancies that appear to indicate lateral movement of vadose water down the dip of the strata.
Sulphate content can be used as an indicator of depth of groundwater flow. For example, in the same field area there are sulphate-rich springs that show a sulfur isotopic signature (δ34S) that is identical to that of the solid sulphates in a local Silurian dolomite, which in this area lies at a depth of up to 100-200 m below the surface. This same water shows a clustering of oxygen-deuterium values that does not vary significantly with time. The δ34S values plot exactly in the range of values for Silurian marine sulphate rocks.
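One way to make the residence-time argument concrete is to fit an annual sinusoid to both the precipitation and the groundwater δ18O series and compare phase lag and amplitude damping. The sketch below does this with synthetic data in which the lag (about 60 days) and the damping are built in, so the fit simply recovers them; it illustrates the procedure and is not a substitute for the multi-year monitoring recommended above.

```python
import numpy as np

days = np.arange(0, 730, 7.0)            # two years of weekly samples
omega = 2 * np.pi / 365.0
rng = np.random.default_rng(0)

# Synthetic delta-18O series (permil): precipitation, and a spring with a
# damped, lagged copy of the seasonal signal plus noise.
precip = -10.0 + 3.0 * np.sin(omega * days) + rng.normal(0, 0.3, days.size)
spring = -10.0 + 1.2 * np.sin(omega * (days - 60.0)) + rng.normal(0, 0.2, days.size)

def fit_sinusoid(t, y):
    """Least-squares fit of y = a + b*sin(wt) + c*cos(wt); returns amplitude, phase."""
    A = np.column_stack([np.ones_like(t), np.sin(omega * t), np.cos(omega * t)])
    a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.hypot(b, c), np.arctan2(c, b)

amp_p, ph_p = fit_sinusoid(days, precip)
amp_s, ph_s = fit_sinusoid(days, spring)
lag_days = ((ph_p - ph_s) % (2 * np.pi)) / omega

print(f"amplitude damping: {amp_s / amp_p:.2f}, phase lag: {lag_days:.0f} days")
```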
Use of radioactive isotopes: the example of radium
One of the most promising avenues for investigating karst hydrology is the interpretation of radium (Ra) isotopes. Little has been done in this field, and certain details have yet to be understood. Bedrock contains trace amounts of uranium and thorium: 238U, 235U, and 232Th. Decay of 238U produces 226Ra (half-life of 1601 yr), and 235U produces 223Ra (half-life of 11.1 days). 232Th produces both 228Ra (half-life of 5.7 yr) and 224Ra (half-life of 3.54 days). All Ra isotopes decay to radon (Rn) and eventually to lead (Pb).
Water in contact with bedrock or sediment acquires these isotopes by dissolution. The main governing variables are shown by Equation 1. With four radium isotopes of varied sources and half-lives, they provide a potentially great amount of information about flow patterns in karst. For general reference, see KRAEMER & GENEREUX, 1998. Those with long half-lives (years) are 226Ra and 228Ra. Those with short half-lives (days) are 223Ra and 224Ra. As water travels through narrow fissures or fine-grained material with large areas of contact and long residence times, it equilibrates with the local bedrock and sediment values. The ratio 235U/238U is about 0.0466, so that is the equilibrium ratio for the decay products 223Ra/226Ra. But 224Ra and 228Ra come from the same parent (232Th), so 224Ra/228Ra ideally approaches 1.0. High activities of the long-lived Ra isotopes indicate lengthy and intimate contact with bedrock or sediment. High activities of the short-lived Ra (224Ra and 223Ra) indicate recent release of water from pores and narrow fissures into larger bodies of water, where they decay faster than they can be replenished from the solids. Zones of seepage into streams (at the surface or underground) can be identified in this way. The time since the seepage has entered the main body of water can be estimated by the amount of disequilibrium. For example, with time, the 224Ra/228Ra ratio decreases. This decrease can be caused by simple mixing between the incoming seepage and the stream or lake water; but if the seepage input retains its identity as a plume without substantial mixing (as in lakes fed by springs), the ratio decreases along an exponential decay curve instead of linearly.
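A back-of-the-envelope version of that estimate, under the stated assumption that the plume keeps its identity: the observed 224Ra/228Ra ratio is compared with the near-equilibrium ratio at the seep, and the elapsed time follows from the 224Ra decay constant (228Ra decay is negligible over a few days). The ratios used here are illustrative, not measurements.

```python
import math

half_life_224 = 3.54                 # days (224Ra); 228Ra half-life (5.7 yr) ignored
lam = math.log(2) / half_life_224    # decay constant of 224Ra (1/day)

ratio_at_source = 0.95               # assumed near-equilibrium 224Ra/228Ra at the seep
ratio_observed = 0.40                # assumed ratio measured downstream in open water

elapsed_days = math.log(ratio_at_source / ratio_observed) / lam
print(f"time since seepage entered the open water: {elapsed_days:.1f} days")
```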
Marine rocks (e.g., carbonates) have a low Th/U activity ratio (<0.1), so they produce low 228Ra/226Ra ratios in groundwater. Siliciclastic rocks have a higher Th/U ratio (about 1.0), so 228Ra/226Ra also approaches 1.0, but slowly, and most water will have completed its tour of the aquifer long before it reaches equilibrium. Individual strata can have their own 228Ra/226Ra identity. For example, shaly carbonates tend to have a higher 228Ra/226Ra than more pure carbonates. Laboratory analysis of rock samples can clarify the expected ratios. In the Madison carbonate aquifer of South Dakota, recharge through overlying siliciclastic beds can be distinguished from recharge directly into the aquifer by their contrast in 228Ra and 226Ra, even where there is no difference in oxygen-deuterium content (KRAEMER & GENEREUX, 1998).
Water can also acquire Ra isotopes by alpha recoil: release of an alpha particle from a radioactive nucleus causes the source particle to recoil up to several hundred nanometres. If the source is located near the surface of a solid, it can be ejected into the adjacent fluid. Alpha recoil has a significant effect only where there is a very large surface area, as in suspended fine-grained sediment. High 228Ra/226Ra ratios can result: up to at least 6 in slow-moving groundwater. This suggests both considerable alpha-recoil and also long residence times (tens or hundreds of years; Tom KRAEMER, Reston, Virginia, personal communication).
Residence time of groundwater is often estimated with tritium, radiocarbon methods, etc., but the utility of those produced by human activity (e.g., tritium from atmospheric testing of nuclear devices) is diminishing with time.
CONCLUSIONS
The interpretive techniques described above are all interrelated by several major concepts: the mass balance, fluid mechanics, dissolution kinetics, and chemical equilibrium. They share the same physical laws, concepts, and variables as those that govern the origin of karst. All require a fundamental understanding of the local geological setting. By applying these and similar techniques, one can understand the origin and development of karst in addition to how it behaves hydrologically.
These techniques require considerable time and expense, so they are feasible only for long-term academic or government-sponsored research projects.It is important to develop suitable strategies for contaminant remediation by anticipating problems, rather than by reacting after a spill or leakage has already taken place.
Today the most common approach to groundwater management is digital modelling, in which the goal is to predict outcomes on the basis of a few simple field measurements. This is rarely successful. Instead, it is more appropriate to design simple digital models that are not expected to be correct, but the output of which can easily be compared to field observations. Confronted with the differences between the ideal model and reality, we are forced to visualize the field conditions that can account for those differences. Another approach to digital modelling is to develop interactive models of specific aspects of karst that allow one to explore the effects of varied field conditions (e.g., DREYBRODT et al., 2005). These methods encourage professional growth, in contrast to the typical accumulation of digital outputs that involve only repetitive data types and thought processes.
A recommended strategy is first to use field techniques such as those described in this paper to gain insight into the behaviour of karst aquifers, and only then to progress to aquifer modelling, whether it be conceptual, analytical, statistical, or digital. Field information should ideally be held in a central repository (e.g., in a government-sponsored or academic karst centre), rather than scattered widely in the literature or in personal files. Hydrologic and chemical field measurements give personal insight into the internal workings of karst - an important step in the training and advancement of karst scientists.
Figure 1: Variation in tracer breakthrough time vs. discharge in (a) a 1000-m-long conduit 1 m in diameter; (b) a vadose canyon 1 m wide and 1000 m long, with slope = 0.01 and Manning friction factor = 0.05; and (c) a 500-m-long canyon as in (b), leading to a 500-m-long tube as in (a).
Figure 2: Oxygen-deuterium graph in the eastern New York karst. A = early spring snowmelt; B = rapid cave drips, late spring; C = cave streams, late spring; D = rapid cave drips, late summer; E = springs fed by deep groundwater, year-round; F = springs fed by ponors, late spring; G = cave drips beneath 60 m of low-permeability cover, fall; H = surface lake, summer; J = cave drips beneath 20 m of high-permeability cover, fall; L (to right of meteoric water line) = cave drip fed by overlying swamp, fall; M = springs fed by ponors, fall; N = rainfall, summer. Data from TERRELL et al. (2005), SIEMION (2006) and the author.
Electronic Free Energy Surface of the Nitrogen Dimer Using First-Principles Finite Temperature Electronic Structure Methods
We use full configuration interaction and density matrix quantum Monte Carlo methods to calculate the electronic free energy surface of the nitrogen dimer within the free-energy Born–Oppenheimer approximation. As the temperature is raised from T = 0, we find a temperature regime in which the internal energy causes bond strengthening. At these temperatures, adding in the entropy contributions is required to cause the bond to gradually weaken with increasing temperature. We predict a thermally driven dissociation for the nitrogen dimer between 22,000 and 63,200 K depending on symmetries and basis set. Inclusion of more spatial and spin symmetries reduces the temperature required. The origin of these observations is explored using the structure of the density matrix at various temperatures and bond lengths.
INTRODUCTION
The Helmholtz free energy is an important thermodynamic quantity that can be used to describe chemical processes. A part of this is the electronic free energy, and consideration of this quantity is important in solid state materials, 1−7 spin-crossover systems, 8 and in some cases, other molecular behavior. 9−22 In the case of solids it is well-known that the electronic free energy alone can be a significant contribution in the accurate description for the material. 2−7 On the other hand, the importance of electronic temperature in molecules has been demonstrated for specific situations such as spin-crossovers 8 and at high temperatures. 11,13,17,19 However, there is a need for more work to be done to fully understand the scope of how and when electronic temperature matters in molecules. 13 One way to explore the scope is through ab initio electronic structure calculations.
Electronic structure calculations are performed at one state (ground or excited) when the temperature is zero. Some common methods to perform electronic structure calculations are density functional theory (DFT), with its many exchange− correlation functionals, 23 and Hartree−Fock (HF) theory. 24 DFT and HF methods are often used as starting points for more accurate methods for solving the Schrodinger equation, which come with additional computational expense. Some of these methods are coupled cluster theory, 25,26 perturbation theory, 27,28 configuration interaction, 29 quantum Monte Carlo methods, 30−32 and various numerical algorithms. 33, 34 Recently, there has been an eruption of method development targeted to finite temperature electronic structure. This includes methods such as coupled cluster theory, 35−42 perturbation theories, 43−48 as well as quantum Monte Carlo (QMC) methods such as density matrix QMC, 49−51 Krylov full configuration interaction QMC, 52 path integral Monte Carlo, 53−61 auxiliary field QMC, 5,6,62−64 and determinant Monte Carlo. 65 There have been finite temperature formalisms for some time within density functional theory, 66−72 Green's function 7,73−81 methods, and full configuration interaction 82,83 methods.
The increased interest is in part due to the growing list of situations where finite temperature electrons are thought to play an important role in observed behavior. A nonexhaustive list of these situations includes Mott transitions, 84 room temperature superconductivity, 85 warm dense matter 86−92 such as in planetary interiors or plasmas, metallic systems, 4,93 phase diagrams of magnetic materials, 94 ultracold atoms and molecules, 95 thermally driven changes in structure, 96 laser guided chemical reactions, 97 spectral functions, 98 heat capacities, 99 and dielectric constants. 100 One noteworthy example for our study involves the stretching of a periodic chain of hydrogen atoms. 5 With the ever growing importance of understanding electronic behavior at finite temperature within the experimental and theoretical communities, there has never been a better time to contribute to this endeavor.
Our approach here is to undertake highly accurate finite temperature electronic structure calculations first using finite temperature full configuration interaction (ft-FCI) 47 and then density matrix quantum Monte Carlo (DMQMC) when ft-FCI is too costly. 49 The DMQMC method was originally developed and applied to the Heisenberg model and 1D spin systems. In doing so, the method was demonstrated to produce accurate finite temperature estimates of observables such as internal energies, staggered magnetization, and Renyi-2 entropy. 49 Following the introduction of DMQMC, the interaction picture DMQMC (IP-DMQMC) method was developed. 50 The IP-DMQMC method was then applied to the uniform electron gas, being used to perform thermodynamic limit extrapolations using the static structure factor, and it assisted in finding the finite temperature local density approximation functional with the help of the initiator approximation. 89,90,101 Thereafter, four more developments were made through our own research efforts: 1) The DMQMC and IP-DMQMC methods were extended to ab initio molecular systems. 102 2) The cost scaling of the method was investigated and found to scale like that of the ground state method full configuration interaction quantum Monte Carlo (FCIQMC), depending on the propagator symmetry. 30,103 3) The IP-DMQMC method was extended to sampling multiple temperatures, thereby reducing the computational cost. 51 4) Gaussian process regression (GPR) was applied to the DMQMC data, and the electronic specific heat capacity and entropy were calculated. 104 These are all important demonstrations of the viability of the DMQMC family of methods to treat the finite temperature electronic structure problem with high accuracy.
Our manuscript focuses on the investigation of Helmholtz free energy surfaces for the diatomic nitrogen molecule. We have performed analytical finite temperature full configuration interaction (ft-FCI) calculations, using a sum-over-states approach in the many-body basis for the N 2 /STO-3G dimer, to calculate the free energy surface as a function of bond distance and temperature. This is done by first exploring the internal energy followed by the entropic contributions to the free energy surface. The behaviors of the internal energy and entropy are then related to each other before we explore the resulting free energy surface. A variety of temperature dependent features related to the minima and maxima found on the free energy surface are then investigated. Following this, we explore what impact the symmetries used in the many-body basis have on the observed features of the free energy surface. As an important test of the conclusions made throughout the study, we go beyond the analytical ft-FCI calculations using DMQMC for N 2 /cc-pVDZ. We close our study with an exploration of the density matrix structure and how it changes with temperature and bond distance for N 2 .
Canonical Ensemble.
In this work, we consider the diatomic nitrogen molecule in the canonical ensemble. For systems in which the particle number and charge remain fixed, the canonical ensemble is required to accurately predict their behavior. One example noted in the literature is the modelling of Bose−Einstein condensates (BECs) of trapped ultracold atoms. 105 The canonical partition function is

Z = \sum_i e^{-\beta E_i}   (1)

where β = (k_B T)^{-1} is the inverse thermodynamic temperature and E_i are the energies corresponding to the eigenstates of our system. The eigenstates are found by solving the Schrödinger equation

\hat{H} \Psi_i = E_i \Psi_i   (2)

where Ψ_i are the eigenstate wave functions, and our electronic Hamiltonian is defined by

\hat{H} = \sum_{ij} \langle D_i | \hat{H} | D_j \rangle \, | D_i \rangle \langle D_j |   (3)

where |D_j⟩ are orthogonal Slater determinants. The full configuration interaction (FCI) method is used to calculate the complete set of eigenstates in the given basis. 47,82 Using the partition function, we calculate the internal energy as

U(\beta) = \frac{1}{Z} \sum_i E_i \, e^{-\beta E_i}   (4)

the entropy as

S(\beta) = k_B \left( \ln Z + \beta U \right)   (5)

and the Helmholtz free energy as

F(\beta) = U(\beta) - T S(\beta) = -\frac{1}{\beta} \ln Z   (6)

We refer to these analytical sum-over-states formulas as finite temperature FCI (ft-FCI).
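To make eqs 1 and 4−6 concrete, the following minimal Python/NumPy sketch evaluates the sum-over-states quantities from a list of FCI eigenvalues. The eigenvalues shown in the example are hypothetical placeholders for a spectrum computed at a fixed bond length, and atomic units with k_B = 1 are assumed.

import numpy as np

def ft_fci_thermodynamics(eigenvalues, beta):
    # Sum-over-states thermodynamics (eqs 1 and 4-6), atomic units with k_B = 1.
    e = np.asarray(eigenvalues, dtype=float)
    e0 = e.min()
    w = np.exp(-beta * (e - e0))       # Boltzmann weights, shifted for numerical stability
    z_shift = w.sum()                  # Z = exp(-beta*e0) * z_shift  (eq 1)
    u = (e * w).sum() / z_shift        # internal energy, eq 4
    log_z = np.log(z_shift) - beta * e0
    s = log_z + beta * u               # entropy (k_B = 1), eq 5
    f = u - s / beta                   # Helmholtz free energy, eq 6; equals -log_z / beta
    return u, s, f

# Example with a hypothetical three-level spectrum (hartree):
u, s, f = ft_fci_thermodynamics([-108.96, -108.60, -108.25], beta=10.0)

The energy shift by the lowest eigenvalue avoids overflow at large beta without changing U, S, or F.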
Free Energy Born−Oppenheimer.
Throughout this section we follow the notation found in ref 110 and the derivation (with some notation) in ref 111. For a general quantum mechanical system consisting of nuclei and electrons, the wave function depends on the coordinates of both types of particles. The Born−Oppenheimer approximation considers a wave function approximation that separates the total wave function into multiplicative nuclear and electronic terms. This neglects terms that couple the nuclear and electronic degrees of freedom. The consequence of this is the ability to solve the electronic Schrödinger equation:

\hat{H}_{\mathrm{el}}(\mathbf{r}; \mathbf{R}) \, \Psi_i(\mathbf{r}; \mathbf{R}) = E_i(\mathbf{R}) \, \Psi_i(\mathbf{r}; \mathbf{R})   (7)

where the electronic energy now depends only parametrically on the nuclear coordinates R. In this equation, r ≡ r^{3N} are the electronic degrees of freedom for the N electrons; R performs the same role for the nuclei. The nuclear equation includes the electronic terms as an effective potential:

\left[ \hat{T}_N + E_0(\mathbf{R}) \right] \chi(\mathbf{R}) = E \, \chi(\mathbf{R})   (8)

where we have nonuniquely chosen E_0 as the relevant electronic eigenstate; E_0(R) is more commonly referred to as the potential energy surface.
The free energy Born−Oppenheimer (FEBO) approximation 111−113 is the finite temperature equivalent. Here, the free energies are calculated using the electronic eigenstates as

F(\mathbf{R}, \beta) = -\frac{1}{\beta} \ln \sum_i e^{-\beta E_i(\mathbf{R})}   (9)

This modifies the nuclear equation to be

\left[ \hat{T}_N + F(\mathbf{R}, \beta) \right] \chi_I(\mathbf{R}) = E_I'(\beta) \, \chi_I(\mathbf{R})   (10)

and the total free energy can then be calculated as

F_{\mathrm{tot}}(\beta) = -\frac{1}{\beta} \ln \sum_I e^{-\beta E_I'(\beta)}   (11)

which implicitly includes both the electronic excitations and the ionic vibrations through E_I'(β).
The main advantage of the FEBO approximation is that the electronic excitation contribution (F(R, β)) can be found independently of the ions' vibrational component. In this way, the vibrational component can be sampled independently using Monte Carlo or molecular dynamics. 113 The consequence most important for this study is that the contribution from the thermally occupied electronic excitations can be studied independently as a function of the nuclear coordinates R, and it is useful to do so. Though the FEBO motivates investigating F(R, β) as a function of R, the mathematical derivation justifying this approach is extensive. Furthermore, the FEBO approximation, like the standard BO approximation, is applicable only under certain criteria. A detailed mathematical derivation for the equations in this section, as well as a discussion of applicability, can be found in ref 112 and ref 111.
Quantum Monte Carlo.
Given the complete set of eigenstates which satisfy eq 7 for a fixed R, we write the N-electron density matrix as

\hat{\rho}(\beta) = \sum_i e^{-\beta E_i} | \Psi_i \rangle \langle \Psi_i |   (12)

where the wave function is defined as

| \Psi_i \rangle = \sum_j C_j^{(i)} | D_j \rangle   (13)

Here C_j^{(i)} are the wave function coefficients and |D_j⟩ are the Slater determinants contained within the Hilbert space for a given number of electrons and set of one-electron orbitals. The density matrix is useful because it can be used to calculate finite temperature expectation values. For example, the internal energy is calculated as

U(\beta) = \frac{\mathrm{Tr}[\hat{\rho}(\beta) \hat{H}]}{\mathrm{Tr}[\hat{\rho}(\beta)]}   (14)

The use of eq 12 is contingent on solving the Schrödinger equation (eq 7) for all possible solutions given a fixed number of electrons N and one-electron orbitals M. This is possible for small numbers of electrons (N < 10) and one-electron orbitals (M < 20); however, for systems larger than this the computational cost quickly becomes impractical. To properly motivate the methods and procedures for solving eq 12, we follow the historical perspective for the development of DMQMC methods. We believe this gives a good understanding of the motivation for, and the procedures used in, DMQMC. These originate from the ground state method full configuration interaction quantum Monte Carlo (FCIQMC).
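As an illustration of eqs 12 and 14, the sketch below builds the unnormalized density matrix directly from a Hamiltonian matrix expressed in the determinant basis and evaluates the internal energy as a trace ratio. The small random symmetric matrix stands in for a real FCI Hamiltonian and is purely illustrative.

import numpy as np
from scipy.linalg import expm

def internal_energy_from_density_matrix(h_matrix, beta):
    # rho(beta) = exp(-beta * H) in the determinant basis; U = Tr[rho H] / Tr[rho] (eq 14).
    rho = expm(-beta * h_matrix)
    return np.trace(rho @ h_matrix) / np.trace(rho)

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 6))
h_toy = (a + a.T) / 2                  # toy symmetric "Hamiltonian", illustration only
print(internal_energy_from_density_matrix(h_toy, beta=2.0))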
2.3.1. Full Configuration Interaction Quantum Monte Carlo. One way to overcome the cost barrier is by using Monte Carlo to solve the Schrödinger equation. FCIQMC solves for the ground eigenstate with energy E_0 and corresponding wave function Ψ_0. 30 The FCIQMC method projects an initial condition onto the ground state using the imaginary-time Schrödinger equation

\frac{\partial | \Psi(\tau) \rangle}{\partial \tau} = - \hat{H} \, | \Psi(\tau) \rangle   (15)

Here, the exact ground state is found in the limit

| \Psi_0 \rangle \propto \lim_{\tau \to \infty} e^{-\tau (\hat{H} - E_0)} \, | D_0 \rangle   (16)

where |D_0⟩ is the initial condition. The efficiency of the FCIQMC method primarily comes from using a collection of signed (±1) particles, referred to as walkers, to represent the wave function. The walkers reside on the determinants |D_j⟩ of the wave function, and the total magnitude or "population" N_w(|D_j⟩, τ) of walkers on a given determinant is proportional to that determinant's coefficient, N_w(|D_j⟩, τ) ∝ C_j(τ). The use of walkers alone does not guarantee the efficiency of the FCIQMC method; how they are evolved through imaginary time does. In the first instance, the cost of memory is reduced by storing only determinants with a nonzero walker population in computer memory. To take full advantage of this storage detail, any determinant with a population below one is removed or rounded in an unbiased stochastic fashion. In the second instance, the cost of compute time is reduced by performing only a small number of the computations normally required by eq 15, which are stochastically selected on the fly for each walker. The stochastic noise is remedied with post-calculation analysis, where statistics from snapshots of Ψ_0(τ) at large τ are used to calculate the exact-on-average energy E_0.
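The projection in eqs 15 and 16 can be illustrated deterministically (without walkers) by repeatedly applying a short imaginary-time step to a starting vector. The toy sketch below uses dense linear algebra and a hypothetical small Hamiltonian, so it conveys only the idea behind FCIQMC rather than its stochastic walker dynamics; the step size is assumed small enough for the iteration to converge.

import numpy as np

def imaginary_time_ground_state(h_matrix, d0, dtau=0.01, n_steps=20000):
    # psi(tau + dtau) ~ psi(tau) - dtau * H psi(tau), normalized each step (eq 15).
    psi = np.array(d0, dtype=float)
    for _ in range(n_steps):
        psi = psi - dtau * (h_matrix @ psi)
        psi /= np.linalg.norm(psi)
    energy = psi @ h_matrix @ psi       # deterministic analogue of the projected E_0
    return energy, psi

# Toy example: the lowest eigenvalue of a small symmetric matrix is recovered.
rng = np.random.default_rng(1)
a = rng.normal(size=(8, 8))
h_toy = (a + a.T) / 2
e0, psi0 = imaginary_time_ground_state(h_toy, d0=np.eye(8)[0])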
2.3.2. Density Matrix Quantum Monte Carlo. DMQMC is the finite temperature analogue of FCIQMC developed by Blunt and co-workers. 49 In particular, the unnormalized density matrix, written as

\hat{\rho}(\beta) = e^{-\beta \hat{H}}   (17)

can be obtained by solving the symmetrized Bloch equation as its imaginary-time equation,

\frac{\partial \hat{\rho}(\beta)}{\partial \beta} = -\frac{1}{2} \left( \hat{H} \hat{\rho}(\beta) + \hat{\rho}(\beta) \hat{H} \right)   (18)

given the known initial condition ρ(0) = 1. The modifications required to represent ρ(β), rather than Ψ(τ), with a population of signed walkers are as follows. The first change is that walkers now reside on sites that are labeled using two Slater determinants, |D_i⟩⟨D_j|. A site has a walker population N_{w,ij}(β) which is proportional to the corresponding density matrix element, N_{w,ij}(β) ∝ ρ_{ij}(β). Just like in FCIQMC, only the sites with nonzero walker populations are kept in computer memory. Additionally, like FCIQMC, the number of computations required to use eq 18 can be stochastically sampled. However, unlike FCIQMC, the selection of computations to carry out eq 18 is done twice as often: once for the Slater determinant |D_i⟩ and again for ⟨D_j|. The last change is in the post-calculation analysis.
Where FCIQMC uses a single simulation to estimate the ground state energy E 0 , DMQMC samples the finite temperature energy E(β) for multiple β. For this reason, individual simulations termed a β-loop are repeated N β times. These repeated simulations are then averaged over to produce the exact-on-average energy estimate E(β).
2.3.3. Interaction Picture Density Matrix Quantum Monte Carlo. Following the development of DMQMC, Malone and co-workers made two important developments for DMQMC. The first introduced the interaction picture to the DMQMC method. In the interaction picture, the Hamiltonian is varied through imaginary time rather than being constant. The simulation starts with an approximate Hamiltonian and ends with the exact Hamiltonian. The result is that a mean-field density matrix for the approximate Hamiltonian is used as the initial condition, which greatly improves the quality of statistics for a single targeted β. 50 The second was the extension of the initiator approximation from FCIQMC to DMQMC, which helped alleviate the sign problem, thereby reducing computational cost. 90,101 In the interaction picture DMQMC (IP-DMQMC) method, an intermediate matrix is sampled rather than the density matrix (eq 17). The intermediate matrix connects an approximate density matrix to the fully correlated density matrix using the imaginary-time equation

\frac{\partial \hat{f}(\tau)}{\partial \tau} = \hat{H}^{(0)} \hat{f}(\tau) - \hat{f}(\tau) \hat{H}   (19)

given that the approximate density matrix is the known initial condition

\hat{f}(0) = e^{-\beta_T \hat{H}^{(0)}}   (20)

In this work, the diagonal of Ĥ is used for Ĥ^(0). By evolving the initial condition through imaginary time τ, the fully correlated density matrix is sampled for a single β_T, the target β, which occurs only when τ = β_T, at which point ρ(β_T) is sampled. Since the approximate density matrix can take advantage of the energies along the diagonal of the Hamiltonian, it tends to sample lower, and hence more important, energy states. In this way, the quality of the statistics improves. This comes with the caveat that repeated simulations can now be used to calculate E(β) for a single β = β_T only. Thus, multiple calculations are required to sample multiple β, which is not the case for DMQMC, where a single calculation suffices.
2.3.4. The Initiator Approximation. The initiator approximation alleviates the numerical sign problem by restricting the walker dynamics to elements that are typically more important for the wave function or density matrix. In this way, the sign structure is stabilized, and hence so are the statistics accumulated from the wave function or density matrix. The downside is that the accumulated statistics carry a systematic error. Fortunately, this error can be systematically removed to an arbitrary degree. 90,101,114−118 In the original initiator approximation for FCIQMC, a new parameter n_add is introduced which modifies the Hamiltonian element H_ij depending on the walker populations residing on |D_i⟩ and |D_j⟩. 101 Suppose determinant |D_i⟩ with population N_w(i) is attempting to contribute to determinant |D_j⟩ with population N_w(j) through eq 15. If both N_w(i) and N_w(j) are greater than zero, then the Hamiltonian element H_ij is left unmodified. However, if N_w(j) = 0 and N_w(i) < n_add, then the Hamiltonian element H_ij is set to zero. There is an exception to this rule: if a third determinant |D_k⟩ is attempting to contribute to determinant |D_j⟩ through eq 15 with the same sign as the contribution from determinant |D_i⟩, the Hamiltonian element is left unmodified. Finally, if N_w(i) ≥ n_add, then H_ij is left unmodified. These rules were originally devised to reduce the impact of the numerical sign problem. 101 These same rules are carried over to DMQMC; however, in place of the walker population N_w(i), one considers the walker population N_w(ij) on the matrix element with site label |D_i⟩⟨D_j|. This replacement holds for all determinants |D_i⟩. Additionally, a second new parameter n_ex is introduced. 90 The new parameter is used with the site labels |D_i⟩⟨D_j|: if the number of electronic excitations required to convert |D_i⟩ to |D_j⟩ is greater than n_ex, then the Hamiltonian element is set to zero. In cases where several criteria are met, leaving the Hamiltonian element unmodified takes priority over setting it to zero.
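The population-based rules described above can be summarized in a small helper function. The function name, signature, and ordering of checks are illustrative assumptions and do not correspond to the actual HANDE-QMC implementation.

def initiator_allows_spawn(n_w_parent, n_w_target, n_add,
                           sign_coincident=False, excitation_level=None, n_ex=None):
    # True if the Hamiltonian element H_ij is left unmodified for this spawning attempt.
    # "Unmodified" criteria are checked first, reflecting the priority described above.
    if n_w_parent >= n_add:
        return True                        # parent population reaches the initiator threshold
    if n_w_target > 0:
        return True                        # target site already occupied
    if sign_coincident:
        return True                        # a same-sign contribution also targets this site
    if n_ex is not None and excitation_level is not None and excitation_level > n_ex:
        return False                       # DMQMC n_ex screening of |D_i><D_j| site labels
    return False                           # unoccupied target from a non-initiator parent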
2.3.5. Piecewise Interaction Picture Density Matrix Quantum Monte Carlo. The DMQMC and IP-DMQMC methods were originally developed using model Hamiltonians, 49,50 showing particular success in treating the warm dense electron gas. 90 Several of us kept developing the DMQMC methods to treat ab initio systems. 102 Most recently, some of us developed the piecewise interaction picture DMQMC (PIP-DMQMC) method. 51 In PIP-DMQMC, the interaction picture propagator is generalized to sample temperatures beyond a target β (β_T). The benefit of PIP-DMQMC is that the quality of statistics improves for a given simulation while also being able to sample a range of β ≥ β_T rather than only a single β = β_T. In PIP-DMQMC, a piecewise matrix is sampled with a corresponding piecewise imaginary-time equation,

\frac{\partial \hat{f}(\tau)}{\partial \tau} = \begin{cases} \hat{H}^{(0)} \hat{f}(\tau) - \hat{f}(\tau) \hat{H}, & \tau < \beta_T \\ - \hat{f}(\tau) \hat{H}, & \tau \ge \beta_T \end{cases}   (24)

which is used with the initial condition f(0) = e^{-β_T Ĥ^(0)}. Here we use the same Ĥ^(0) as in IP-DMQMC, the diagonal of Ĥ. We note that, in eq 24 above, we opted to use the asymmetric Bloch equation as the imaginary-time equation for τ ≥ β_T; another valid choice is the symmetrized Bloch equation (eq 18). 104 In previous work, Gaussian process regression (GPR) was applied to PIP-DMQMC internal energies, 104 which allowed us to calculate the electronic specific heat capacity and entropy for systems beyond exact treatment. 104 Here we provide a brief outline of the procedure required to calculate these quantities.
To calculate the free energy from PIP-DMQMC, we fit the internal energy using the GPy 119 GPR library in a supervised fashion. The resulting GPR model is analytically differentiated and used to predict the specific heat capacity, which is related to the internal energy by

C_v(T) = \frac{\partial U}{\partial T}   (26)

and can be integrated numerically to calculate the entropy

S(T) = \int_0^{T} \frac{C_v(T')}{T'} \, dT'   (27)

where T′ is the temperature for integration on the bounds 0 to T. Finally, the original internal energy is combined with the entropy using eq 6 to calculate the free energy. The process is composed of the following steps:

1. PIP-DMQMC data, which are originally collected in intervals of Δβ = 0.001 from β_min to β_max, are resampled to a user defined range β_r,min to β_r,max sampled in a user defined interval of Δβ_r = 0.1.
2. The data set resulting from step 1 has the ground state energy from FCIQMC added for a β twice as large as the β_max in the original PIP-DMQMC data set.
3. A model is trained on the β → E(β) data using a kernel comprised of the sum of the individual kernels: RBF, Matern32, Matern52, and the product of an RBF and a Matern52 kernel.
4. The trained β model is used to predict the energy and specific heat capacity from β = 1.0 to β = β_max in steps of 0.01.
5. A trained T model, with predictions, is created by applying steps 1−4 using T = 1/β instead of β, and the FCIQMC energy is added at T = 0.
6. Supervised training is then used to adjust β_r,min, β_r,max, and Δβ_r for the β model, and similarly for the T model, such that the models produce a qualitatively reasonable specific heat capacity. The combined β and T subregions must span the original range of data from β_min to β_max for the original PIP-DMQMC data set.
7. Steps 1−6 are repeated several times, adjusting β_r,min, β_r,max, and Δβ_r (and similarly the T equivalents) each time to improve the visually determined qualitative appearance of each model's specific heat capacity.
8. The final β and T model predictions are combined to make a single data set which maps a single β (or T) to a single energy and specific heat capacity.
9. The combined β and T specific heat capacity is numerically integrated to calculate the entropy using eq 27, and the free energy is calculated with eq 6.
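A condensed sketch of steps 3, 4, and 9 is shown below using the GPy library named in the text. The composite kernel mirrors the one described above, but the differentiation and integration are done numerically here (with NumPy) rather than analytically, and the input arrays are placeholders for resampled PIP-DMQMC data.

import numpy as np
import GPy

def fit_and_integrate(temperature, energy, t_grid):
    # Step 3: composite kernel (RBF + Matern32 + Matern52 + RBF*Matern52) on T -> U(T).
    kernel = (GPy.kern.RBF(1) + GPy.kern.Matern32(1) + GPy.kern.Matern52(1)
              + GPy.kern.RBF(1) * GPy.kern.Matern52(1))
    model = GPy.models.GPRegression(temperature[:, None], energy[:, None], kernel)
    model.optimize()

    # Step 4: predict U(T) on a dense grid, then differentiate numerically for C_v (eq 26).
    u_pred, _ = model.predict(t_grid[:, None])
    u_pred = u_pred.ravel()
    c_v = np.gradient(u_pred, t_grid)

    # Step 9: integrate C_v / T to obtain the entropy (eq 27) and form F = U - T*S (eq 6).
    integrand = np.zeros_like(c_v)
    mask = t_grid > 0
    integrand[mask] = c_v[mask] / t_grid[mask]
    entropy = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                                               * np.diff(t_grid))))
    free_energy = u_pred - t_grid * entropy
    return u_pred, entropy, free_energy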
The final β_r,min, β_r,max, and Δβ_r values selected were bond length dependent. For R = 1.098 Å, β_r,min = 1.0, β_r,max = 25.0, and Δβ_r = 0.125. The T model was found to be sufficient and was used to predict from β = 1.0 to β = 25.0 in steps of 0.01, while the β model was not used. For R = 5.49 Å, the β model used β_r,min = 1.0, β_r,max ≤ 10.0, and Δβ_r = 0.4, while the T model used β_r,min > 4.0, β_r,max = 50.0, and Δβ_r = 0.2. The β model predictions for β ≥ 1/0.205 were combined with the T model predictions for β < 1/0.205. The final data set constructed from the combined predictions spanned β = 1.0 to β = 100.0.
2.5. Free Energy Surface Calculation, Interpolation, and Analysis. The NumPy library is used to perform the sum-over-states required for equations in Section 2.1. 120 For thermodynamic quantities calculated as a function of the atomic bond distance, cubic splines were used to fill in the data points between the bond distances. The SciPy library was used to perform the cubic spline interpolation, and the resulting spline was used to generate data from 0.8 to 100 Å in steps of 0.05 Å. 121 For analysis on the potential energy surfaces, we considered only absolute energy differences ≥0.1 millihartree from the dissociation product (100 Å) as significant. The minimum and maximum contained in the energy surfaces were numerically identified by looping through each point in the data set (excluding the minimum and maximum bond lengths) and comparing the point to its two nearest neighbors. This process is done in order from the smallest bond length to the largest using the Numba library. 122 If a data point is found to be smaller than its two nearest neighbors, we consider it a minimum; similarly, if it is larger, we consider it a maximum.
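The interpolation and nearest-neighbor extrema search described in this section can be sketched as follows. The arrays of bond lengths and energies are placeholders, and the plain-Python loop is shown without the Numba decoration used in practice.

import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_surface(bond_lengths, energies):
    # Cubic spline through the computed points, evaluated from 0.8 to 100 A in 0.05 A steps.
    spline = CubicSpline(bond_lengths, energies)
    r_grid = np.arange(0.8, 100.0 + 0.05, 0.05)
    return r_grid, spline(r_grid)

def find_extrema(values):
    # Compare each interior point with its two nearest neighbors, smallest R first.
    minima, maxima = [], []
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            minima.append(i)
        elif values[i] > values[i - 1] and values[i] > values[i + 1]:
            maxima.append(i)
    return minima, maxima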
Calculation Details.
The FCI, FCIQMC, and PIP-DMQMC calculations were run with HANDE-QMC. 123 The integrals required by HANDE-QMC were generated with Molpro 124 using a restricted Hartree−Fock (HF) calculation. For generating and fitting N 2 /STO-3G 125 thermodynamic quantities, integrals were generated for the bond lengths between 0.8 and 100.0 Å, with 100.0 Å taken to be the dissociation limit. All integrals for the main paper were generated using the frozen core approximation for the inner 1s electrons for each nitrogen resulting in N = 10 total electrons in the system. For comparing symmetries included in the Hamiltonian, we made two sets of integrals: one which had the orbital symmetry labels and one which did not. To restrict the FCI calculation to only the HF wave function symmetry from Molpro, we used the integrals which contain the orbital symmetry labels and provide the symmetry index from Molpro to HANDE-QMC. FCI calculations across all spatial symmetries used the integrals with no orbital symmetry labels. Finally, to perform FCI calculations across all possible spin and spatial symmetries, we manually looped through the valid pairs of α (N ↑ ) and β (N ↓ ) electrons which conserve the total electron number (N = N ↑ + N ↓ ).
For N 2 /STO-3G density matrix histograms, the bond lengths were (in Å): 1.098, 2.196, and 5.49. All N 2 /cc-pVDZ 126 calculations used the bond lengths 1.098 and 5.49 Å. In the FCIQMC and PIP-DMQMC calculations, the reference Slater determinant was supplied to HANDE-QMC, and the HF energy is provided in the data repository. Both FCIQMC and PIP-DMQMC used a simulation time step of Δτ = 0.001. The FCIQMC simulations used N_w = 10^7 and N_w = 5 × 10^7 walkers for the R = 1.098 and R = 5.49 Å simulations, respectively. PIP-DMQMC simulations were collected and averaged over N_β = 5 simulations using N_w = 5 × 10^8 and N_w = 1 × 10^9 walkers for R = 1.098 and R = 5.49 Å, respectively. Both FCIQMC and PIP-DMQMC used the initiator approximation with an initiator population of n_add = 3.0, and PIP-DMQMC also used an initiator level of n_ex = 2. In FCIQMC the simulation ran for ten time steps, and in PIP-DMQMC a single time step, before the shift was updated. 30,51
3.1. Free Energy Surface Features.
In this section, we discuss the effect of electronic temperature on the dissociation of N 2 /STO-3G using ft-FCI, all spatial symmetry blocks, and assuming no spin polarization. Our discussion focuses on five temperatures (0 K, 21,100 K, 31,600 K, 52,600 K, and 316,000 K) which highlight features that occur during heating of a diatomic bond, but we include a range of temperatures on figures to show the evolution of the dissociation with temperature. In each case, the temperatures correspond to a decimal number in Hartree atomic units, but we have converted the numbers to Kelvin and rounded to 3 s.f. for ease of interpretation. Figure 1(a) shows the internal energy with the dissociated internal energy (R = 100 Å) subtracted from each temperature. The lowest temperature, 0 K, is the ground state dissociation energy of N 2 . This has a well depth of 0.2391 hartree which is also the dissociation energy measured from the bottom of the well, i.e., without vibrational zero-point energy (hereafter the dissociation energy). The internal energy at dissociation is shown in the inset.
As the temperature is raised, the internal energy of the bond lengths around the equilibrium geometry becomes lower relative to the dissociation energy, and this causes the well to become steeper. The bond length at which the energy minimum occurs also decreases. This effect is caused by the energy level spacing of the FCI eigenstates of the dissociated products being closer than those of the states at the equilibrium geometry, resulting in the internal energy at dissociation increasing more rapidly than that at the equilibrium geometry. This occurs up to and including 21,100 K, where this effect reaches a maximum. Up to this point, in terms of the internal energy, the N 2 bond becomes stronger. Above 21,100 K, the well gradually becomes shallower, and the curve eventually gains a dissociation barrier. Then the curve becomes fully repulsive. Finally, at very high temperatures and energies, the internal energy develops a second minimum. The second minimum comes from the shape of the curve in the high temperature limit, which is equivalent to the average of the FCI eigenvalues and is basis set dependent. Figure 1(b) shows the energetic contribution from the entropy over the same temperature range. At the ground state equilibrium geometry, the energetic contribution from the entropy mirrors the trend of the internal energy. For T > 21,100 K, the energetic contribution from the entropy rises to a maximum and then gradually decreases at higher temperatures. At longer bond lengths the contribution decreases, and at higher temperatures there are some coordinates where the entropy contribution changes sign to stabilize an intermediate minimum. For the dissociation limit, the inset shows a gradual decrease in the entropic contribution coming from the T term in the energy. Figure 1(c) shows the free energy, which is the sum of the data in the two preceding panels. As the temperature is increased from 0 K to 31,600 K, the well gradually becomes more and more shallow until the equilibrium geometry has the same energy as the molecule at dissociation. The temperature of 31,600 K therefore represents a temperature at which the bond does not (or barely) exist, i.e., a dissociation temperature. Such a dissociation point is reached due to a competition between the internal energy (which increases the dissociation energy) and the entropy (which decreases the dissociation energy). This draws attention to the importance of the entropy at elevated electronic temperature in providing qualitative features of the free energy surface. This, of course, has been observed in many different contexts. 4,127,128

Figure 1. Analytical thermodynamic quantities as a function of bond distance over a range of temperatures for the N 2 /STO-3G system. The (a) internal energy and (b) negative of the thermally scaled entropy are added together to calculate (c) the free energy. Each temperature curve has the largest bond distance value subtracted, thereby setting the largest bond distance to zero. The values subtracted from each curve are shown in the inset for a small energy range near the ground state (T = 0 K). All temperatures are shown with partial transparency except 0 K, 21,100 K, 31,600 K, 52,600 K, and 316,000 K, which are left solid to aid interpretation in the discussion. The analytical quantities are generated using a cubic spline fit of each quantity as a function of several N 2 bond distances. More details about this methodology can be found in Section 2.5.
At the same time, a slight barrier (∼0.02 E_h) is visible between the equilibrium bond distance and the larger bond distances. It is not until around 52,600 K that the bond no longer exhibits a significant minimum in its free energy surface and becomes fully repulsive. This is a lower temperature than our calculated dissociation energy converted to temperature (D_e = 75,000 K). For T ≥ 316,000 K the second minimum has also appeared in the free energy; it is the result of the average of the FCI eigenvalues as the entropic term becomes roughly constant. Observing the inset, we see a decrease in the dissociated free energy with increasing temperature as it becomes dominated by the entropy.
These findings are summarized in Figure 2. In this figure, the location and height (or depth) of the minima and maxima on the free energy surface are plotted as a function of interatomic distance and temperature. The color and darkness of each line represent the energy difference relative to dissociation. While the barriers are typically low for N 2 and not large enough to be significant at high temperatures, they are nonetheless notable as qualitative features of the free energy surface.
Starting with T = 0, the equilibrium bond distance is plotted (R_eq = 1.2 Å). At low temperatures, there is just one minimum on the free energy surface. As the temperature increases, this remains the case until approximately 600 K, where a barrier emerges at R = 3.0 Å. As the temperature continues to rise, the well originating from R_eq remains unchanged, while the newly formed barrier moves to shorter bond lengths. Above ∼22,000 K, the depth of the original minimum decreases considerably, while the barrier height grows above 1 millihartree for the first time. At this temperature, we also observe the emergence of a second minimum at longer bond lengths. Then, at ∼30,000 K, both the barrier and the first minimum disappear simultaneously, leaving only the second minimum; the second minimum begins to move to lower R as the temperature increases further. Eventually, the remaining free energy minimum tends towards approximately R = 2.4 Å with a depth of approximately 0.1 hartree.
Two further cases are included in the Supporting Information. We describe the impact of spin polarization on these graphs in Figure S1 and of using finite-temperature unrestricted Hartree−Fock (via a smearing approximation implemented in PySCF 129 ) in Figure S2.

3.3. Thermal Dissociation Plots.
Figure 3 shows a plot of the reaction free energy for dissociation, found by subtracting the energy at the ground state equilibrium geometry from the dissociated limit. The change in sign represents the point at which the dissociated limit is more stable from the point of view of the free energy, i.e., when the nitrogen atoms cease to bond favorably, at 31,600 K. This is consistent with our observations from the dissociation energy graphs. We found no difference between this temperature and that calculated from the global minimum (instead of the ground state equilibrium geometry) and attribute this to how little the low-T minimum changes as the temperature is raised (see Figure 2). This is because the minimum-energy geometry does not change significantly until after dissociation is energetically favored (Figure 2). The approach of using the ground state equilibrium bond distance has the benefit that only two geometries need to be calculated. In general, it benefits our more expensive approaches if we can use just the ground state equilibrium bond distance, even at the cost of a slight error, as our calculations span multiple temperatures at the same geometry. Sampling a range of temperatures similar to what is used in Figure 3 can cost hundreds of thousands of core hours, which restricts the number of calculations that can reasonably be completed due to cost and time. 51 Also shown in Figure 3 are a number of different symmetry constraints being added or relaxed. Our estimate of 31,600 K for the crossover temperature assumes that all spatial symmetries are accessible in N 2 . When the Slater determinant basis of the calculation has its spatial symmetry restricted to that of the ground state Hartree−Fock determinant, the crossover temperature increases to T ∼ 45,000 K. Conversely, when states of all spin polarizations are added to the previously unpolarized calculation, the crossover temperature decreases to 22,000 K. In both cases, removing (adding) eigenvalues means that the thermally driven effect occurs at a higher (lower) temperature. This suggests that the eigenvalues being added or removed are qualitatively similar as a function of the bond distance. Overall, as more symmetries are included in the calculation, the crossover temperature decreases.
In the Supporting Information, we also consider how adding translational and rotational degrees of freedom can affect these results ( Figure S3).
3.4. Results from QMC.
To investigate the effect of basis set on the crossover temperature, we wanted to simulate a larger basis set, which requires quantum Monte Carlo. Here we 1) collected the internal energies for two N 2 geometries in a cc-pVDZ basis using initiator PIP-DMQMC, 2) fit the internal energy using GPR and used the fits to calculate the entropy, and 3) used the internal energy and entropy to calculate the free energy and compared the results to previous results. Figure 4 shows the PIP-DMQMC internal energies and the resulting free energies from this process. Figure 4(a) shows the internal energies collected with PIP-DMQMC, as well as the corresponding ground state (T = 0) energies from initiator FCIQMC for the two N 2 geometries. We can observe that for both geometries the PIP-DMQMC internal energy converges toward the initiator FCIQMC energy as β increases, which is what we expect for well converged PIP-DMQMC data. An interesting feature of the stretched N 2 PIP-DMQMC internal energies is the slower convergence to the ground state with respect to β. This behavior is indirectly present in Figure 1(a), where the internal energy at larger bond distances increases noticeably with a slight increase in temperature. This consistency is reassuring for the accuracy of both data sets and also serves as a visual explanation for why the internal energy changed more visibly at larger bond distances in Figure 1(a) with temperature.
GPR is fit to the PIP-DMQMC internal energies, and the GPR model is used to calculate the internal energy and entropy. The PIP-DMQMC+GPR entropic contribution is added to the PIP-DMQMC+GPR internal energy to produce the free energy (eq 6). The free energy difference for N 2 /cc-pVDZ is then added as a new line to Figure 3, which results in Figure 4. Here we only use the data for the spatial-symmetry-restricted case in both STO-3G and cc-pVDZ. In addition to the PIP-DMQMC+GPR free energies, the reaction energy is calculated using the FCIQMC data. Here we can observe similar features as before, namely there is a temperature (T ∼ 63,200 K) where the longer bond distance N 2 has a lower free energy than the equilibrium N 2 and hence is thermodynamically favored. This can be approximately considered the thermal dissociation temperature. To give context, the temperature of the sun varies from 5,800 K 133 to 15,000,000 K, 134 and an average lightning strike is 30,000 K. 135 Increasing the basis set led to the dissociation temperature increasing and becoming closer to not only the experimental value but also the calculated value. The thermal dissociation is closer to the calculated D_e value, even after considering the increase in the calculated D_e that results from using a larger basis set. These results are promising. Going beyond a cc-pVDZ basis set and extrapolating to the complete basis set limit are important goals for this kind of work, but we felt that a detailed investigation for dimers was beyond the scope of this paper. Instead, we have provided a simple investigation of a smaller system, H 2 , in the Supporting Information section entitled "The complete basis set limit".

Figure 4. As Figure 3, with the free energy difference between the equilibrium and stretched geometry of the N 2 /cc-pVDZ dimer and the calculated dissociation energy from initiator FCIQMC added to the plot. N 2 /STO-3G data come from Figure 3. The shortest bond distance for N 2 /cc-pVDZ was used for R = R_eq and, similarly, the longest bond distance was used for R = ∞, which can be found in Section 3.1.
3.5. State Histograms. Until now, we have investigated several interesting behaviors found in the free energy surface for N 2 . Here we will investigate some reasons for the behavior explored in the previous sections. To do this, we will use state histograms, which were originally developed for FCIQMC. 115 These histograms are collected by looping through the coefficients present in an FCIQMC simulation. These coefficients are based on a snapshot of the FCIQMC wave function. The population of each coefficient is compared to a set of bins (each with its own unique population range). If the coefficient's population is greater than or equal to the bin's population threshold, the count of the bin is increased by one. For DMQMC, we have extended this process to analyze the density matrix elements rather than the coefficients of the wave function. The resulting plot shows the number of matrix elements with the corresponding number of normalized walkers, such that the largest relative population is one.
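A minimal version of this binning procedure reads as follows. The bin thresholds and the array of walker populations are placeholders, and the counting convention (a bin counts every element whose normalized population meets or exceeds its threshold) follows the description above.

import numpy as np

def state_histogram(populations, bin_thresholds):
    # Normalize so the largest relative population is one, then count, per bin threshold,
    # how many elements have a relative population >= that threshold.
    rel = np.abs(populations) / np.max(np.abs(populations))
    counts = np.array([(rel >= t).sum() for t in bin_thresholds])
    return counts

# Example with logarithmically spaced thresholds (illustrative values only):
thresholds = np.logspace(-6, 0, 13)
counts = state_histogram(np.array([250.0, -30.0, 4.0, 0.5, 0.5, 0.01]), thresholds)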
To familiarize readers with the types of information that can be extracted from state histograms, one line in Figure 5(a) shows the R = 1.098 Å bond length ground state FCI wave function histogram. Here it is observed that there is a single coefficient with a relative population of one. Thereafter, the relative population immediately falls roughly an order of magnitude before hitting a plateau and decreasing modestly as the number of coefficients increases. This behavior is consistent with little to no static correlation present in the system. 115 As the number of coefficients is increased further still, the relative population quickly falls off toward the smallest relative population. Comparing the initial behavior of R = 1.098 Å to the longer bond lengths R = 2.196 Å and R = 5.49 Å shown in Figure 5(a), it can be observed that the number of coefficients with a relative population at or near one is higher. The increase in coefficients with high population is consistent with the emergence of static correlation in the system. 115 With some familiarity with the relevant features contained in a state histogram, we will now explore the behavior of the density matrix histogram. Figure 5(a) shows two sets of lines, one set from ft-FCI and the other set from PIP-DMQMC, which correspond to the histogram for the entirety of the density matrix. Both sets of lines generally show good agreement, suggesting that the PIP-DMQMC method is accurately sampling the density matrix for these temperatures and systems. For the smallest bond length, there is a single matrix element with a relative population of one, and thereafter the relative population quickly falls roughly an order of magnitude. For larger bond lengths, by contrast, the number of matrix elements with a large relative population (close to one) has grown considerably, which, as mentioned previously for the ground state FCI wave function, is consistent with greater static correlation entering the system. The observed increase in high population matrix elements for longer bonds corresponds to an increase in the cost to run DMQMC. This is because the number of data points that must be accumulated from the density matrix has increased. 103 A similar behavior was observed in FCIQMC, though it was also found that some systems show a simultaneous compression of the wave function resulting in a net decrease in the computational cost. 115 The compression of the wave function representation is observed in the maximum number of coefficients decreasing as the bond length increases. This behavior can be observed in Figure 5(a) for the ground state FCI wave function, as well as the density matrix, when comparing the two largest bond lengths.
Now that we have a means of reading the density matrix histogram to extract useful information, we will explore the density matrix histogram across temperatures using a larger basis set (cc-pVDZ), shown in Figure 5(b). For the equilibrium bond distance in the cc-pVDZ basis set, we find that the total number of matrix elements in the histogram has increased substantially from that of STO-3G in Figure 5(a), which is expected. This is true regardless of temperature and is primarily the result of the much larger density matrix for the larger basis set system. The effect of temperature in the cc-pVDZ basis set is noticeably less drastic for the shorter bond length. The two lowest temperature histograms are nearly indistinguishable for the smallest bond length, while the longer bond length is observed to change between the two lowest temperatures. Now that we have a better understanding of the behavior of density matrix histograms, a natural question arises: What are some of the contributions to this behavior? In Figure 5(c) we explore the contributions from the diagonal of the density matrix, as well as the matrix elements which have an excitation level of two or less connecting their site labels (n_ex = 2), for the N 2 /cc-pVDZ density matrix at T = 63,200 K. The shape of the diagonal elements histogram appears to provide a great deal of useful information. The shorter bond length has a small number of large relative population matrix elements, which quickly falls roughly an order of magnitude to a small plateau. This is not observed for the longer bond length, where there is a more gradual loss of the relative population with a number of elements at the high relative population limit. Having more elements with a high relative population is indicative of static correlation entering the system. Observing the smallest bond distance histogram for the n_ex = 2 space, the behavior is initially similar to the diagonal but slowly begins to follow the slope for the full density matrix. This blending between the diagonal and the full matrix is also seen for the longer bond length. When the histograms for the regions corresponding to the diagonal or the n_ex = 2 space coincide with the whole density matrix, it means the density matrix comes entirely from that region. It is then interesting to see the gap between the n_ex = 2 space histogram and the full density matrix for the shorter bond length. This suggests that much of the density matrix is in regions outside the n_ex = 2 space, as the only overlap occurs at high relative populations. Surprisingly, the longer bond length has more coincidence between the n_ex = 2 space and the density matrix. This suggests the opposite behavior; that is, much of the density matrix comes from the n_ex = 2 space. This is the temperature which has one of the largest differences in entropy between these two geometries. What causes this discrepancy is not immediately apparent, and it likely warrants future investigation.
CONCLUSIONS
The finite electronic temperature free energy surface of the N 2 molecule was investigated using finite temperature FCI and PIP-DMQMC. For a STO-3G basis, we found that increasing the electronic temperature causes the free energy well, which corresponds to the global minimum of the free energy surface, to gradually lose depth, and a small free energy barrier forms between the well and the dissociation product. The free energy difference between the equilibrium geometry and the dissociation limit increased at low temperatures due to an entropic contribution, which opposed a decrease in this difference due to the internal energy. The result is that, at sufficiently high temperature, longer bond distances are favored thermodynamically, i.e., longer bond distances minimize the free energy. The temperature where the dissociated product has a lower free energy than the equilibrium bond free energy is referred to here as the crossover temperature. We found that increasing the basis set size to cc-pVDZ caused an increase in the crossover temperature. We also explored the origins of some of these behaviors using histograms of the density matrix. We found that the density matrix for longer bond lengths tends to respond more to changes in temperature.
While the results here are motivating for continued exploration of finite electronic temperature in less traditional contexts, there are several important steps to consider before this can be realized. Limitations of this specific study include the absence of an investigation of system dependence and of how the combined effects of basis set size and symmetries affect the crossover temperature. Methodologically, there is a need to reduce the cost of running high accuracy methods like DMQMC, as well as to compare against deterministic methods within a finite temperature formulation, such as coupled cluster theory. Lastly, there is a need for a better understanding of the laboratory or environmental conditions needed to observe the behavior demonstrated here, as well as of the importance of such phenomena. With so many steps required before the physical outcomes can be explored in experiment, it can seem daunting to pursue these investigations; however, we hope the interesting behavior observed here will promote continued investigations of these phenomena so that chemical understanding may be enriched further.
ASSOCIATED CONTENT
Data Availability Statement
The data that supports the findings of this study are available within the article. For the purposes of providing information about the calculations used, files will be deposited with Iowa Research Online (IRO) with a reference number 10.25820/ data.006648.
Text and Figures S1−S4, which describe the impact of spin polarization; the relative importance of electronic correlation in different temperature regimes; contributions to the total free energy from rotation and translation; and the complete basis set limit (PDF)
"Physics",
"Chemistry"
] |
Optimising barrier placement for intrusion detection and prevention in WSNs
This research addresses the pressing challenge of intrusion detection and prevention in Wireless Sensor Networks (WSNs), offering an innovative and comprehensive approach. The research leverages Support Vector Regression (SVR) models to predict the number of barriers necessary for effective intrusion detection and prevention while optimising their strategic placement. The paper employs the Ant Colony Optimization (ACO) algorithm to enhance the precision of barrier placement and resource allocation. The integrated approach combines SVR predictive modelling with ACO-based optimisation, contributing to advancing adaptive security solutions for WSNs. Feature ranking highlights the critical influence of barrier count attributes, and regularisation techniques are applied to enhance model robustness. Importantly, the results reveal substantial percentage improvements in model accuracy metrics: a 4835.71% reduction in Mean Squared Error (MSE) for ACO-SVR1, an 862.08% improvement in Mean Absolute Error (MAE) for ACO-SVR1, and an 86.29% enhancement in R-squared (R2) for ACO-SVR1. ACO-SVR2 has a 2202.85% reduction in MSE, a 733.98% improvement in MAE, and a 54.03% enhancement in R-squared. These considerable improvements verify the method’s effectiveness in enhancing WSNs, ensuring reliability and resilience in critical infrastructure. The paper concludes with a performance comparison and emphasises the remarkable efficacy of regularisation. It also underscores the practicality of precise barrier count estimation and optimised barrier placement, enhancing the security and resilience of WSNs against potential threats.
Introduction
WSNs have become widely used in many applications because of their cost-effectiveness and inherent flexibility. However, this growth has also brought forth a serious issue: increasing security challenges, especially with respect to intrusion detection and prevention. Maintaining the integrity of data transmission and system dependability in these networks, despite evolving and dynamic threats, remains a vital task [1].
The existing body of research focuses on improving security in WSNs, combining optimisation algorithms and regression modelling for barrier placement optimisation [2]. Aljebreen et al. [3] stress the importance of protecting IoT-assisted WSNs, opening the door for efficient intrusion detection through the combination of machine learning and nature-inspired optimisation techniques. Using scalable methods and effective data aggregation methodologies, Arkan and Ahmadi introduced hierarchical and unsupervised frameworks [4] to strengthen network security. Boualem, Taibi, and Ammar [5] also address network dynamics for adaptive deployment by exploring categorisation methods for ideal barrier placement. The research of Gebremariam, Panda, and Indu [6] emphasises the value of combining machine learning with hierarchically designed WSNs and promotes accurate intrusion detection. Collectively, these studies underline the increasing emphasis on leveraging advanced methodologies to strengthen WSN security against sophisticated threats [7]. More of the existing research works are discussed in Table 1.
Our work takes a unique approach to barrier placement in WSNs to maximise intrusion detection and prevention. We combine the adaptive properties of the Ant Colony Optimisation (ACO) method with the SVR model. Our research aims to provide a thorough, data-driven, and economical way to strengthen WSN security against changing threats by utilising regression modelling to estimate the barrier count and the adaptive ACO algorithm for real-time deployment [17,18]. This novel method has the potential to significantly improve the robustness and efficiency of intrusion detection and prevention techniques in WSNs.
Description and pre-processing of the dataset
Compiling this dataset facilitates research on intrusion detection and prevention in WSNs [10]. Its many attributes, which cover the essential features of WSNs, make it a useful resource for our data-driven approach. There are 182 samples in the 'FF-ANN-ID' dataset, and each one represents a unique WSN setup. The dataset contains key features for both Gaussian and uniform distributions, such as the number of barriers, the number of sensor nodes, the sensing and transmission ranges, and the deployment area. These features provide a thorough overview of the network possibilities [11], which makes the dataset a suitable place to begin our research. It is important to note that pre-processing techniques were employed to ensure data quality and consistency. The summary statistics of the dataset, displayed in Table 2, provide information about the key attributes. These statistics give a clear picture of the characteristics of the dataset.
A pair plot showing the correlations between each attribute in the dataset and the target variables is shown in Figs 1 and 2, respectively. It provides important insights into possible correlations and dependencies between attributes and the target variables by showing attribute pairings, indicating how various characteristics affect the positioning of uniform barriers in the context of intrusion detection and prevention. The number of barriers and the number of sensor nodes are positively correlated, which may be because having more sensor nodes makes it possible to identify incursions more precisely and accurately, which could result in more barriers. The number of barriers and the transmission range of the sensor nodes are also positively correlated; this could be because fewer obstacles need to be placed to cover the same region when the transmission range is longer. A positive link likewise exists between the number of barriers and the sensor nodes' sensing range, since a greater sensing range enables sensor nodes to identify incursions sooner, potentially resulting in the deployment of additional barriers. The number of barriers and the area that must be protected are positively correlated because deploying more barriers over a greater region is necessary to successfully detect and prevent invasions [8].
There is a positive correlation between the number of sensor nodes and the number of barriers, which could be because more sensor nodes enable more accurate and precise incursion detection, which may lead to the installation of additional barriers. There is a positive correlation between the transmission range of the sensor nodes and the number of barriers, which could be because fewer obstacles are needed to cover the same region when the transmission range is longer [19]. The number of barriers and the sensor nodes' sensing ranges are positively correlated because greater sensing ranges enable sensor nodes to identify incursions earlier, which may result in the deployment of additional barriers. A positive correlation exists between the area to be protected and the number of barriers because a larger area requires more barriers to be deployed to detect and prevent intrusions effectively. These insights can be used to inform the placement of uniform barriers in the context of intrusion detection and prevention.
Based on the correlation heatmap illustrated in Fig 3, the correlation coefficient between the number of sensor nodes and the number of barriers is 0.76, a strong positive correlation. It confirms the earlier observation that there is a direct relationship between the number of sensor nodes deployed and the number of barriers required to protect a given area. The correlation coefficient between the transmission range of sensor nodes and the number of barriers is 0.77, also a strong positive correlation, indicating that configurations with longer transmission ranges in this dataset also tend to involve more barriers. The correlation heatmap shows several more intriguing links between the various attributes beyond those mentioned previously. Another purpose of the correlation heatmap is to spot possible redundancy between attributes. Decision-making and comprehension of complex systems can both be enhanced by the correlation heatmap's insights.
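The correlation analysis described here can be reproduced with a few lines of pandas/seaborn. The synthetic DataFrame and column names below are placeholders standing in for the 'FF-ANN-ID' attributes; in practice the real dataset file would be loaded instead.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Synthetic stand-in for the 182-sample dataset; replace with the actual file in practice.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "num_sensor_nodes": rng.integers(50, 500, size=182),
    "sensing_range": rng.uniform(5, 50, size=182),
    "transmission_range": rng.uniform(10, 100, size=182),
    "area": rng.uniform(1000, 50000, size=182),
})
df["num_barriers_uniform"] = (0.5 * df["num_sensor_nodes"]
                              + 0.8 * df["transmission_range"]
                              + rng.normal(scale=20, size=182))

corr = df.corr()                                            # Pearson correlation matrix
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")   # heatmap in the spirit of Fig 3
plt.tight_layout()
plt.show()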
The dataset's Gaussian and uniform barrier counts appear to be highly varied, based on their histograms. This implies that many of the configurations have a moderate to large number of uniform barriers. With a standard deviation of 78.18 barriers, the distribution is quite spread out, implying significant variation in the total number of uniform barriers throughout the sample. In addition, the distribution contains a few outliers, with some configurations having either a very small or an extremely large number of uniform barriers. These observations can guide barrier placement in the context of intrusion detection and prevention. Because this is where most of the data points are found, organisations might choose to concentrate on erecting barriers in scenarios with a modest number of obstacles. Companies should also be mindful of the distribution's outliers, since they could indicate distinct or uncommon circumstances that call for further care.
Model selection
2.2.1 Choice of models. We consider two different target variables: "Number of Barriers (Gaussian)" and "Number of Barriers (Uniform)." Our research primarily focuses on estimating the number of barriers in WSNs. To do this, we use the following models: A. Support Vector Regression (SVR): Regression analysis using SVR is a strong and adaptable method for predicting continuous numerical values. Projecting input features into a higher-dimensional space makes SVR highly suitable for capturing intricate relationships within the data [19]. Due to its capacity to handle high dimensionality and non-linearity, SVR was our first pick for a baseline model and served as a foundation for our investigation. The following is a mathematical representation of the SVR model:

f(X) = Σ_{i=1}^{n} (α_i − α_i*) K(X, X_i) + b

Where: • f(X) is the predicted value.
• n is the number of training examples.
• X i represents the support vectors.
• K(X, X i ) is a kernel function.
• α_i and α_i* are the dual coefficients obtained during training.
• b is the bias term.
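As a baseline, an SVR model of this form can be set up with scikit-learn as sketched here. The synthetic data, train/test split, kernel choice, and default hyperparameters are illustrative assumptions; the tuned values are obtained later with ACO.

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Synthetic stand-in for the 182 WSN configurations and their barrier counts.
X, y = make_regression(n_samples=182, n_features=4, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, y_pred))
print("MAE:", mean_absolute_error(y_test, y_pred))
print("R2:", r2_score(y_test, y_pred))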
B. Random Forest Regressor:
To analyse feature importance, we use the Random Forest Regressor. Random forests offer insightful information on the importance of features and how they affect prediction outcomes, allowing us to determine the major contributors to our models [8].
C. Stochastic Gradient Descent (SGD) Regressor: We employ the SGD Regressor with L1 (Lasso) and L2 (Ridge) regularisation. These methods make it easier to manage model complexity and avoid overfitting, which improves our models' capacity for generalisation [10].
D. Ant Colony Optimization (ACO):
Our research relies heavily on ACO, an optimisation technique inspired by nature. It is applied to optimise the SVR models' hyperparameters and improve their prediction capabilities. This choice illustrates how versatile and successful ACO is in navigating hyperparameter spaces [20]. The purpose and function of the main ACO parameters are: • num_ants: Number of ants in the colony.
• num_iterations: Number of iterations the ant colony goes through.
In conducting the sensitivity analysis for the ACO algorithm, we systematically varied its key parameters to assess their impact on the intrusion detection and prevention results. Specifically, we focused on parameters such as the number of ants, the pheromone evaporation rate, and the exploration-exploitation balance. Through a series of experiments, we observed how adjustments to these parameters influenced the convergence speed and the quality of the optimised solutions. Notably, higher values of the number of ants tended to enhance exploration capabilities, potentially leading to improved convergence in certain scenarios. Conversely, variations in the pheromone evaporation rate affected the persistence of information between ants, influencing the algorithm's ability to exploit promising regions of the solution space. This detailed sensitivity analysis provides valuable insights into the robustness and adaptability of the ACO algorithm within the proposed intrusion detection framework, offering a nuanced understanding of its performance under diverse parameter settings.
Hyperparameter tuning with ACO.
Hyperparameter tuning is a critical component of our research, used to optimise the performance of the SVR models [3]. We employ ACO to search iteratively for the best combinations of hyperparameters, including the regularisation parameter (C) and the insensitive loss parameter (epsilon). The colony of ants navigates the hyperparameter space efficiently, leading to enhanced predictive accuracy. The algorithm for this is provided in Table 3.
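A minimal, illustrative sketch of an ACO-style search over a discretised (C, epsilon) grid for an SVR model is given below. It is not the authors' exact Table 3 procedure: the candidate values, ant and iteration counts, evaporation rate, pheromone-deposit rule and synthetic data are all assumptions.

```python
# Sketch: ACO-style hyperparameter search for SVR over discretised C and epsilon.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.uniform(size=(150, 4))                                   # stand-in WSN features
y = 50 * X[:, 0] + 20 * X[:, 1] + rng.normal(scale=2.0, size=150)  # stand-in barrier counts

C_values = [0.1, 1, 10, 100]
eps_values = [0.01, 0.1, 0.5, 1.0]
tau_C = np.ones(len(C_values))          # pheromone on each C candidate
tau_eps = np.ones(len(eps_values))      # pheromone on each epsilon candidate
num_ants, num_iterations, evaporation = 8, 15, 0.3

best_score, best_params = -np.inf, None
for _ in range(num_iterations):
    trails = []
    for _ in range(num_ants):
        i = rng.choice(len(C_values), p=tau_C / tau_C.sum())     # probabilistic selection
        j = rng.choice(len(eps_values), p=tau_eps / tau_eps.sum())
        model = SVR(C=C_values[i], epsilon=eps_values[j])
        score = cross_val_score(model, X, y, cv=3,
                                scoring="neg_mean_squared_error").mean()
        trails.append((i, j, score))
        if score > best_score:
            best_score, best_params = score, (C_values[i], eps_values[j])
    tau_C *= (1 - evaporation)          # pheromone evaporation
    tau_eps *= (1 - evaporation)
    for i, j, score in trails:          # deposit proportional to solution quality
        deposit = 1.0 / (1.0 + abs(score))
        tau_C[i] += deposit
        tau_eps[j] += deposit

print("best (C, epsilon):", best_params, "best CV neg-MSE:", round(best_score, 3))
```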
Feature importance
Feature importance analysis is crucial for understanding the impact of different input features on the prediction of barrier counts [7]. We employ the Random Forest Regressor to extract and rank the importance of features, identify the most influential features, and obtain valuable insights for feature selection and model interpretability. The algorithm's predictive capabilities are used to assess the relative importance of features by ranking them according to their contribution to model performance [21]. We have calculated the feature importance for our specific models and ranked the features accordingly, as shown in Fig 5. The feature importance analysis serves as a precursor to feature selection or engineering, as it provides insights into which features should be prioritised or potentially excluded to optimise model performance [12].
Table 3. Algorithm for hyperparameter tuning with ACO:
3. Define the initial hyperparameter space to be explored, including: • Regularisation parameter (C). • Insensitive loss parameter (epsilon).
4. Set ACO parameters for the optimisation process, such as: • Population size.
5. Implement the ACO algorithm to search for the best hyperparameters:
• Initialise a population of artificial ants, each representing a set of hyperparameters for the SVR model.
• Calculate a distance matrix to evaluate the quality of solutions based on model predictions.
• Ants construct solutions by probabilistically selecting hyperparameters from the predefined space.
• Evaluate the performance of SVR models with the chosen hyperparameters using a relevant metric.
• Update pheromone levels on hyperparameters based on the quality of solutions.
• Iterate through multiple cycles to adapt and refine hyperparameter choices.
6. Determine the best solution found by the ACO algorithm:
• Select the hyperparameters with the highest pheromone levels.
7. Update the SVR models with the ACO-optimized hyperparameters.
8. Measure the performance of the ACO-optimized SVR models using appropriate evaluation metrics:
• Compare results, such as MSE, MAE, and R-squared (R2), to assess improvements.
9. Conclude the hyperparameter tuning process and provide the ACO-optimized SVR models.
10. End the algorithm.
https://doi.org/10.1371/journal.pone.0299334.t003
Based on Fig 5, the feature importance analysis using a Random Forest Regressor, whose algorithm is given in Table 4, revealed valuable insights into the contribution of different attributes to the estimation of barrier counts (a minimal illustrative sketch follows the feature list below). The top features influencing the model include: • Number of sensor nodes-Explanation of why this feature is important.
• Sensing range-Insights into the impact of sensing range on barrier count estimation.
• Area-Discuss the relevance of the area feature in predicting barrier counts.
• Transmission range-Explanation of how transmission range contributes to the model.
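A minimal sketch of the Random Forest feature-importance ranking described above is shown here; the column names and synthetic data are illustrative assumptions, not the study's dataset.

```python
# Sketch: rank features by Random Forest importance scores.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
features = ["num_sensor_nodes", "sensing_range", "area", "transmission_range"]
X = pd.DataFrame(rng.uniform(size=(300, 4)), columns=features)
y = 40 * X["num_sensor_nodes"] + 15 * X["sensing_range"] + rng.normal(scale=1.0, size=300)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(features, forest.feature_importances_),
                 key=lambda t: t[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```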
Regularisation techniques
The pursuit of optimised predictive models has led us to explore regularisation techniques. Regularisation methods such as L1 (Lasso) and L2 (Ridge), whose application procedure is given in Table 5, are applied to mitigate overfitting and enhance the robustness of our models. These techniques are especially relevant when dealing with high-dimensional datasets or models that exhibit excessive complexity [13].
A. L1 Regularization. L1 regularisation, also known as Lasso, introduces a penalty term into the model's cost function. The objective of L1 regularisation is to promote sparsity in the model by forcing some feature coefficients to be exactly zero, which in turn aids feature selection [13]. Applying L1 regularisation to our model improved predictive performance, reducing both the MSE and the MAE. The sparse nature of L1 regularisation makes it effective for feature selection, thereby enhancing model interpretability. The L1 regularisation term is added to the loss function as defined below (the term definitions resume after Tables 4 and 5).
Table 4. Algorithm for feature importance analysis.
Input: Dataset: The dataset containing input features and target variables.
Output: Feature Rankings-A list of features ranked by their importance in the models.
1. Start the Feature Importance Analysis process.
2. Initialise the analysis using an available dataset and initial regression models.
3. Select the target variable, which represents the prediction objective.
4. Perform feature pre-processing and data cleaning, including handling missing values, scaling, and encoding categorical variables, if necessary.
5. Train the initial regression models on the pre-processed dataset.
6. Evaluate the models' performance and record the results for future comparison.
7. Utilise a relevant feature importance analysis method, such as Random Forest, to extract feature rankings based on their contributions to the models.This analysis should consider: • Importance scores for each feature.
• Feature ranking based on importance scores.
8. Generate a list of features sorted by their importance scores.
9. Visualise the importance of features using appropriate plots or charts (e.g., bar charts or heatmaps) to provide insights into the most influential features in the models.
10. Interpret the results to understand which features significantly impact the prediction of the target variable, and consider the top features as the most influential ones.
11. Use the feature rankings to inform subsequent model selection, feature engineering, or optimisation efforts.
12. Conclude the Feature Importance Analysis process, providing a ranked list of features and their importance scores.
Table 5. Algorithm for regularization techniques application.
Input: Initial Predictive Models: Regression models before applying regularisation.
Output: Regularised Predictive Models-Regression models with L1 and L2 regularisation applied.
1. Start the regularisation techniques application process.
2. Initialise the initial predictive models with default hyperparameters.
3. Define the types of regularisation to be applied:
• L1 (Lasso) regularisation.
• L2 (Ridge) regularisation.
5. Apply L1 (Lasso) regularisation to the initial predictive models:
5.1. Add the L1 regularisation term to the model's loss function.
5.2. Set the regularisation parameter (alpha) for L1.
6. Measure the performance of the models with L1 regularisation using relevant evaluation metrics: • Calculate metrics such as MSE, MAE, and R-squared.
7. Apply L2 (Ridge) regularisation to the initial predictive models:
7.1. Add the L2 regularisation term to the model's loss function.
7.2. Set the regularisation parameter (alpha) for L2.
8. Measure the performance of the models with L2 regularisation using relevant evaluation metrics: • Calculate metrics such as MSE, MAE, and R-squared.
9. Compare the performance of the models with and without regularisation to assess improvements:
• Evaluate and contrast results, focusing on metrics like MSE, MAE, and R-squared.
10. Conclude the regularisation techniques application process and provide the regularised predictive models.
Here w_j denotes the j-th weight (coefficient) in the model, completing the definition of the L1 penalty introduced above.
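The L1 term itself is not shown in the text. Assuming the usual Lasso formulation, with alpha the regularisation strength (the "alpha" of Table 5) and an otherwise unspecified base loss, the regularised objective takes the form:

J(w) = \mathrm{Loss}(w) + \alpha \sum_{j} |w_j|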
B. L2 Regularization. L2 regularisation, or Ridge regularisation, imposes a penalty on the sum of squared feature coefficients. Unlike L1 regularisation, L2 does not force coefficients to be exactly zero but rather reduces their magnitudes. The application of L2 regularisation to our model similarly yielded positive results, with a notable decrease in MSE and MAE. By diminishing the magnitude of feature coefficients, L2 regularisation offers enhanced stability and mitigates the risk of overfitting [4]. These regularisation techniques contribute to our overarching goal of achieving highly predictive models while ensuring their robustness and interpretability. The effectiveness of L1 and L2 regularisation provides insights into the significance of regularisation strategies in the context of our research. The L2 regularisation term is added to the loss function as follows: Where: • ||w||_2^2 represents the squared L2 norm of the weight vector w.
• w j is the j th weight (coefficient) in the model.
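As with L1, the L2 term is not shown. Assuming the usual Ridge formulation with the same regularisation strength alpha, the regularised objective takes the form:

J(w) = \mathrm{Loss}(w) + \alpha \lVert w \rVert_2^2 = \mathrm{Loss}(w) + \alpha \sum_{j} w_j^{2}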
Feature sensitivity
Feature sensitivity analysis is a critical component of our research, and the corresponding algorithm is provided in Table 6; it examines the relationship between input features and model predictions. This not only provides valuable insight into how the model responds but also enables us to identify influential features and quantify their impact [22]. Through feature sensitivity analysis we aim to obtain the following information: 1. Identifying Influential Features: By performing the sensitivity analysis, we can identify features that significantly affect the model's predictions. Features with a high sensitivity index are regarded as influential, and changes to them noticeably affect the model.
2. Interpreting Model Behaviour:
We can learn more about the underlying links between input features and the target variable by analysing how the model reacts to feature variations.This promotes better-informed decision-making and helps make the model more interpretable.
3. Guiding Feature Engineering:
A guideline for feature engineering is provided by feature sensitivity analysis: low-sensitivity features may be candidates for elimination, while highly sensitive features could be improved or changed to have a greater influence on the model's predictions (a minimal sketch of such a perturbation analysis follows this list).
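The sketch below illustrates a perturbation-based feature sensitivity analysis of the kind outlined above: each feature is perturbed in turn and the resulting change in the model's predictions is used as a sensitivity index. The model, the data and the perturbation size are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: perturbation-based feature sensitivity for a fitted regression model.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 4))
y = 30 * X[:, 0] + 10 * X[:, 3] + rng.normal(scale=1.0, size=200)
model = SVR(C=10, epsilon=0.1).fit(X, y)

delta = 0.01                                 # perturbation step
baseline = model.predict(X)
for j, name in enumerate(["nodes", "sensing_range", "area", "tx_range"]):
    X_pert = X.copy()
    X_pert[:, j] += delta                    # perturb one feature, hold the rest fixed
    sensitivity = np.mean(np.abs(model.predict(X_pert) - baseline)) / delta
    print(f"{name}: sensitivity index {sensitivity:.3f}")
```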
Initial regression models.
The first set of regression models was constructed without applying any optimisation or feature selection techniques. Two models were developed: one for predicting performance metrics using the "Number of Barriers (Gaussian)" feature and the other using the "Number of Barriers (Uniform)" feature [19]. These models served as baselines for comparison with the ACO-optimized models. Table 7 presents the results of the initial regression models. Model 1, which utilises "Number of Barriers (Gaussian)", exhibits an MSE of approximately 116.56, an MAE of approximately 5.85, and an R-squared value of approximately 0.96. In contrast, Model 2, based on "Number of Barriers (Uniform)", displays an MSE of around 435.74, an MAE of approximately 8.97, and an R-squared value of roughly 0.90.
Ant Colony Optimization (ACO)
The ACO algorithm's convergence in the proposed intrusion detection and prevention framework is carefully monitored through well-defined convergence criteria. Convergence is typically considered achieved when the algorithm demonstrates stability in its solutions over successive iterations, indicating that the ants have collectively discovered an optimal or near-optimal solution. In our implementation, we employ a convergence criterion based on observing a plateau in the fitness or objective function values over a predefined number of iterations [23]. This approach ensures that the ACO algorithm refines its barrier placement strategy until further iterations yield only marginal improvements. The implications of these convergence criteria for barrier placement precision are significant: a well-defined criterion ensures that the algorithm converges to a stable solution, optimising the placement of barriers for enhanced intrusion detection accuracy while avoiding unnecessary computational overhead.
Table 6. Algorithm for feature sensitivity analysis.
Input: Optimised Regression Models: ACO-optimized regression models (e.g., ACO-SVR1 and ACO-SVR2).
Output: Feature Sensitivity Insights: Information on the sensitivity of input features in the models.
1. Begin the Feature Sensitivity Analysis process.
2. Choose one of the ACO-optimized regression models as the subject of sensitivity analysis (e.g., ACO-SVR1 or ACO-SVR2).
3. Initialise a list to store feature sensitivity insights.
Rank the features based on their sensitivity indices:
• Sort the list of feature-sensitivity pairs in descending order of sensitivity index.
Analyse the results to gain insights:
• Identify the most influential features based on their sensitivity indices.
• Interpret how variations in influential features affect the model's output.
• Assess the significance of each feature in predicting barrier counts.
Use the feature sensitivity insights to inform the following aspects:
• Feature prioritisation: Focus on influential features in further analysis or model development.
• Feature engineering: Modify or refine features to enhance their impact on predictions.
A. ACO-SVR1 model. Using ACO, the ACO-SVR1 model was adjusted to identify the most significant features from the original dataset. The comparison with the original Model 1 shows that ACO-SVR1 achieves a substantial improvement, with a 4835.71% reduction in MSE, an 862.08% reduction in MAE, and an 86.29% rise in R-squared. Comparing ACO-SVR2 to the original Model 2, it shows a reduction in MSE of 2202.85%, a drop in MAE of 733.98%, and an improvement in R-squared of 54.03%.
With a feature ranking score of roughly 0.678, "Number of Barriers (Gaussian)" is the most influential feature in the ACO-SVR1 model. In the ACO-SVR2 model, "Number of Barriers (Uniform)" has a feature ranking score of roughly 0.318, suggesting that it has a more substantial impact. Overall, in our proposed method, SVR is used as the underlying regression model for predicting the number of barriers in intrusion detection and prevention systems, and the ACO algorithm is employed to optimise the hyperparameters of the SVR model, namely the cost parameter (C) and the epsilon parameter. The algorithm for the steps explained below is given in Table 10. • Initial SVR Model Training: We begin by training an initial SVR model using a subset of the dataset, and this model serves as the baseline.
• ACO Hyperparameter Optimization: The ACO algorithm is employed to optimise the hyperparameters of the SVR model.This involves searching for the best combination of hyperparameters (C and epsilon) that minimises the distance between the predicted values and the actual values.
• Integration of ACO-Optimized SVR Model: The optimised hyperparameters obtained from the ACO algorithm are then used to train a new SVR model.
• Comparison and Evaluation: We compare the performance of the initial SVR model and the ACO-optimized SVR model in terms of metrics such as MSE, MAE, and R-squared (a minimal metrics-comparison sketch follows this list).
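The sketch below illustrates this comparison step: the same metrics (MSE, MAE, R-squared) are computed for a baseline SVR and for an SVR with different hyperparameters. The "tuned" values shown are placeholders, not the ACO results, and the data is synthetic.

```python
# Sketch: compare a baseline SVR against an SVR with alternative hyperparameters.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, 4))
y = 60 * X[:, 0] + 25 * X[:, 1] + rng.normal(scale=2.0, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for label, model in [("baseline SVR", SVR()),
                     ("tuned SVR (placeholder params)", SVR(C=100, epsilon=0.05))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(label,
          "MSE", round(mean_squared_error(y_te, pred), 3),
          "MAE", round(mean_absolute_error(y_te, pred), 3),
          "R^2", round(r2_score(y_te, pred), 3))
```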
Practical implications.
The successful implementation of the proposed approach in real-world WSN environments holds significant practical implications for practitioners and researchers alike. Several key considerations contribute to the understanding of the approach's feasibility and utility:
• Hardware Requirements: The proposed model, comprising SVR and ACO, exhibits moderate hardware requirements. The computational load primarily stems from the training phase of the SVR model and the optimisation process of the ACO algorithm. The model has been designed to operate on standard sensor nodes commonly found in WSNs, ensuring compatibility with existing hardware infrastructure [24].
• Computational Complexity: Assessing the computational complexity is essential for practical deployment. The SVR model's training complexity is influenced by the size of the dataset and the selected kernel function. However, the ACO algorithm's computational demands during hyperparameter tuning are generally reasonable. Practitioners should consider these aspects when deploying the model and may explore parallelisation techniques to enhance efficiency.
• Ease of Deployment: The proposed approach is designed with ease of deployment in mind.
The model is trained offline, and once optimised, the resulting parameters can be easily deployed to sensor nodes.The lightweight nature of the trained SVR model facilitates quick updates and adaptation to evolving network conditions.Additionally, the ACO algorithm's hyperparameter tuning process is conducted offline, minimising the impact on real-time intrusion detection and prevention operations.
• Adaptability to Diverse Environments: The versatility of the proposed approach allows for adaptation to diverse WSN environments.The model can be tailored to different sensor network configurations by selecting relevant features during training.This adaptability enhances the model's applicability across various deployment scenarios, ranging from environmental monitoring to security-sensitive applications.
In summary, the proposed approach demonstrates favourable practical implications, offering a balance between computational efficacy and adaptability to real-world WSN environments.
Initial model results
On the test set, the SVR1 model produced an R-squared of 0.92, an MSE of 10.25, and an MAE of 5.12. These findings show that the model predicts the number of barriers needed for intrusion detection and prevention with a high degree of accuracy, and the scatter plot of actual vs. predicted values supports this, although it contains a few outliers. A useful indicator that the model is not overfitting the data is the residual vs. actual values plot, shown in Fig 10, which indicates that the residuals are randomly distributed. The findings show that, even in the case of a uniform distribution, it is feasible to employ the SVR2 model to accurately forecast the quantity of barriers needed for intrusion detection and prevention in WSNs. The SVR2 model reduces the number of barriers needed to reach a desired coverage level, which can be used to optimise the placement of barriers in WSNs. The SVR2 model's predictions of the number of barriers needed under a uniform distribution are marginally less accurate than those for a Gaussian distribution, probably because a uniform distribution is harder to predict than a Gaussian one. Despite the marginally lower results, the SVR2 model still achieves good accuracy when estimating the number of barriers needed under a uniform distribution. This implies that, regardless of the distribution of the number of barriers, the SVR2 model is a reliable method for estimating the number of barriers needed for intrusion detection and prevention in WSNs.
ACO Optimization results
With the integrated SVR-1 predictions refined, the ACO algorithm found a solution with a best distance of 238. Compared with the SVR-1 model predictions, which had an MSE of 10.25, this represents a significant improvement. As plotted in Fig 11(A), the optimum distance across iterations shows that the ACO algorithm was able to converge to a satisfactory solution in a manageable number of iterations. The outcome shows that it is possible to optimise the placement of barriers in WSNs for intrusion detection and prevention by utilising the ACO algorithm optimised with integrated SVR predictions. Because the ACO algorithm found a solution with a significantly better distance than the two SVR model predictions alone, it appears that the ACO algorithm optimised with integrated SVR predictions can be used to improve the placement of barriers in WSNs for intrusion detection and prevention. For the second model, the ACO algorithm optimised with integrated SVR-2 predictions found a solution with a best distance of 256. Compared with the SVR-2 model predictions, which had an MSE of 12.56, this represents a significant improvement. The second model's best-distance plot, shown in Fig 11(B), indicates that the ACO method was able to converge to a satisfactory solution in a manageable number of iterations. The outcome shows that, even in the case of a uniform distribution, it is possible to improve the placement of barriers for intrusion detection and prevention in WSNs by utilising the ACO algorithm enhanced with integrated SVR predictions.
Based on the scatter plots of the best solutions for the two models, shown in Fig 12(A) and 12(B), the ACO algorithm optimised with integrated SVR predictions identified a better solution for the second model (uniform distribution) than for the first model (Gaussian distribution). This is probably because the second model is trying to optimise for a distribution that is harder to predict. The distance of the optimal solution for the second model is 234.34844512148587, whereas that of the first model is 212.91770732153128. This indicates that the second model can attain a greater degree of coverage with fewer obstacles.
Regardless of the distribution of barrier numbers, the findings shown in Fig 12 indicate that the ACO algorithm enhanced with integrated SVR predictions is a promising tool for optimising barrier placement in WSNs for intrusion detection and prevention; if the algorithm optimises for a uniform distribution, it may be able to produce superior results. When integrating the ACO algorithm, both models (ACO-SVR1 and ACO-SVR2) performed comparably, finding solutions with far better distances than the predictions of the SVR models alone. Compared with Model 1, however, Model 2 had a slightly better best distance, probably because Model 2 is trying to optimise for a uniform barrier distribution, which is harder to optimise for than a Gaussian distribution. All things considered, both models show promise as methods for maximising barrier placement in WSNs for intrusion detection and prevention. For applications where a uniform distribution of obstacles is desired, Model 2 might be the preferable option.
The plot of actual values versus anticipated values, as illustrated in Fig 13 , indicates that the ACO-SVR1 model can accurately forecast the number of barriers needed at various places inside the WSN.There are, however, a few anomalies where the model either overestimates or underestimates the necessary number of barriers.The outliers could be caused by elements that the model ignores, including the kind of barriers being utilised or the topography of the WSN.Furthermore, the number of barriers needed at areas with a higher node concentration may be harder for the model to anticipate.The plot of the residuals against the actual values, as shown in Fig 14 , indicates that the residuals are dispersed randomly about the zero line.This indicates that the data is not being overfitted by the model.
The actual vs. predicted values plot for ACO-SVR2 (Fig 15) shows that the model can accurately anticipate how many barriers will be needed at various points in the WSN. On the other hand, it shows more outliers than the ACO-SVR1 actual vs. predicted values plot. The outliers could arise because ACO-SVR2 is optimising for a uniform distribution, which is more difficult to predict than the Gaussian distribution targeted by ACO-SVR1. Furthermore, ACO-SVR2 might be less accurate in estimating the quantity of barriers needed at sites with a greater node concentration.
The residuals plot for ACO-SVR2, depicted in Fig 16, shows that the residuals are randomly distributed around the zero line, a good sign that the model is not overfitting the data. Overall, the results of the actual vs. predicted values plot and the residuals plot suggest that the ACO-SVR2 model is a promising tool for optimising the placement of barriers in WSNs for intrusion detection and prevention, even under a uniform distribution. Based on two metrics, MAE and MSE, the ACO-SVR1 model outperforms the ACO-SVR2 model, while the ACO-SVR2 model has a higher R-squared value than the ACO-SVR1 model. The R-squared value indicates how well the model explains the variation in the data, and the MSE and MAE reflect how accurate the predictions are. As a result, the ACO-SVR1 model can more accurately forecast how many barriers will be needed at various WSN locations, whereas the ACO-SVR2 model is better at explaining the variation in the data.
Feature engineering results
The ACO-SVR1 Model (Model 1) undergoes feature engineering using correlation-based feature selection. Features that have a strong relationship with the target variable (the quantity of barriers needed at various WSN locations) are chosen. A correlation criterion of 0.2 is applied, so only features with a correlation greater than or equal to 0.2 are selected. This feature engineering step is important because it lowers the number of features the model has to learn, which can enhance the model's performance, and it helps determine which aspects are most important for estimating the quantity of barriers needed at various WSN locations. With an R-squared score of 0.98, an MAE of 3.70, and an MSE of 52.89, the model's results are excellent. This suggests that the SVR model predicts the number of barriers needed at various WSN locations with a high degree of accuracy. Overall, this feature engineering step successfully enhances the model's performance.
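A minimal sketch of the correlation-based selection described above is given below; features whose Pearson correlation with the target meets the 0.2 threshold are kept. The column names and synthetic data are illustrative assumptions.

```python
# Sketch: correlation-based feature selection with a 0.2 threshold.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.uniform(size=(300, 4)),
                  columns=["num_sensor_nodes", "sensing_range", "area", "transmission_range"])
df["num_barriers"] = (45 * df["num_sensor_nodes"] + 12 * df["sensing_range"]
                      + rng.normal(scale=1.5, size=300))

corr = df.corr()["num_barriers"].drop("num_barriers")
# Keep features with correlation >= 0.2, as stated in the text
# (one could also use the absolute correlation).
selected = corr[corr >= 0.2].index.tolist()
print("correlations with the target:")
print(corr.round(3))
print("selected features:", selected)
```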
The feature engineering on the ACO-SVR2 Model (Model 2) is the same as that on Model 1, except that the target variable is now the number of barriers needed under a uniform distribution at various points in the WSN. With an R-squared score of 0.82, an MSE of 924.69, and an MAE of 10.44, the model findings for the uniform distribution (Model 2) are likewise good. This suggests that the model predicts the number of barriers needed at various WSN locations under a uniform distribution with a high degree of accuracy. All things considered, feature engineering works well to enhance the SVR model's performance for the uniform distribution. According to the results, the ACO-SVR model performs better on the Gaussian distribution (Model 1) than on the uniform distribution (Model 2), since the Gaussian distribution is more specialised than the uniform distribution. Even so, given that the uniform distribution is more difficult to predict, the ACO-SVR model is still able to produce good results. The best distances over iterations after employing feature engineering are illustrated in Fig 6(A) and 6(B).
Hyperparameter tuning results
An effective method for adjusting an SVR model's hyperparameters using ACO is the hyperparameter tuning function outlined in Table 3. The data is divided into training, testing, and validation sets, and the feature variables are standardised. An SVR model is created and trained using GridSearchCV. Predictions are made on the test set, and the SVR model is assessed using MSE, MAE, and R-squared. To ensure that the models achieve the best possible performance on both distributions, we recommend using this function to tune the hyperparameters of an SVR model for both the Gaussian and uniform distributions of the number of barriers required at different locations in the WSN. As shown in Table 11, the ACO-SVR model performs better on the Gaussian distribution (Model 1) than on the uniform distribution (Model 2), even after hyperparameter tuning with ACO.
The scatter plots of actual vs. predicted values in Fig 19 show that Model 1 can predict the number of barriers required at different locations in the WSN with a good degree of accuracy for both the Gaussian and uniform distributions. The plot for the ACO-SVR2 model contains several outliers; the ACO-SVR2 model may be optimising for a more difficult distribution (uniform) than the ACO-SVR1 model (Gaussian), which could explain them. Furthermore, the ACO-SVR2 model might be less accurate in estimating how many barriers will be needed at sites with a larger node concentration. Considering the above insights, it appears that even in the case of a uniform distribution, the ACO-SVR2 model is a potentially useful instrument for maximising barrier placement in WSNs for intrusion detection and prevention, although it may not be as precise as it would be for a Gaussian distribution. After feature engineering and hyperparameter tuning, the ACO-SVR1 model's residual plot is shown in Fig 20(A). The residuals are dispersed randomly about the zero line, which indicates that the data is not being overfitted by the model. Based on the residual plot, the ACO-SVR1 model appears to be well trained and to generalise effectively to fresh data. This is a crucial consideration when selecting a machine learning model, since a model should adapt well to new data rather than merely memorise the training set [25].
The plot illustrated in Fig 20 (B) is a histogram of the residuals for the ACO-SVR2 model.The histogram shows that the residuals are normally distributed.This is a good sign that the model is not overfitting the data.Some additional observations are: The histogram of the residuals shows that most residuals are within +/-5.This suggests that the ACO-SVR2 model can make accurate predictions for most locations in the WSN.
There are a few residuals greater than +/-5. These may arise because the ACO-SVR2 model is optimising for a challenging distribution (the uniform distribution), and because the model might be less accurate in estimating the number of barriers needed at sites with a greater node concentration.
For both the Gaussian and uniform distributions, the residual histograms shown in Fig 20 demonstrate that the residuals are approximately normally distributed, indicating that the SVR model is not overfitting the data.
Regularization results
The obtained results demonstrate that, when forecasting the number of barriers needed at various places within the WSN, L1 regularisation works better than L2 regularisation on the SVR model. This can be seen in the L1-regularised model's lower MSE and MAE and higher R-squared values, and is probably due to L1 regularisation's superior ability to eliminate superfluous features from the model. The MSE measures the average squared difference between the predicted and actual values; a lower MSE indicates a better model fit. The MSEs of the L1- and L2-regularised models are 4.4866796729593625 and 19.541913854233172, respectively, indicating that the L1-regularised model provides a better fit than the L2-regularised model. It is possible that some significant features in the SVR model for estimating the number of barriers needed in the WSN have a strong correlation with the target variable, whereas the remaining features are either unimportant or only weakly related. Removing unnecessary features from the model, which L1 regularisation accomplishes more successfully, results in a more accurate model. We would therefore advise forecasting the number of barriers needed at various WSN locations using L1 regularisation in conjunction with the SVR model. This will help increase the model's accuracy, particularly when a small number of significant features have a strong correlation with the target variable.
The bar plots illustrated in Figs 21 and 22 (discussed further with the figure captions below) show that L1 regularisation outperforms L2 regularisation for both models in terms of MSE, MAE, and R-squared.
Statistical analysis to validate the results
A five-fold cross-validation strategy is implemented using the GridSearchCV function.This technique involves splitting the dataset into five subsets, using four subsets for training the model and one subset for validation in each iteration.This process is repeated five times, with each subset serving as the validation set exactly once.The average performance across all folds provides a more reliable estimate of the model's effectiveness.
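A minimal sketch of this five-fold cross-validated grid search is shown below; the parameter grid and data are illustrative assumptions, not the grid used in the paper.

```python
# Sketch: five-fold cross-validation with GridSearchCV for an SVR pipeline.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(5)
X = rng.uniform(size=(250, 4))
y = 55 * X[:, 0] + 18 * X[:, 2] + rng.normal(scale=2.0, size=250)

pipe = Pipeline([("scale", StandardScaler()), ("svr", SVR())])
grid = {"svr__C": [0.1, 1, 10, 100], "svr__epsilon": [0.01, 0.1, 0.5]}
search = GridSearchCV(pipe, grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)   # each of the 5 folds serves as the validation set exactly once
print("best params:", search.best_params_)
print("best mean CV MSE:", round(-search.best_score_, 3))
```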
The corresponding scatter plot shows that Model 2 can make more accurate predictions than the initial SVR2 model, especially for locations with a higher concentration of nodes. This is likely because Model 2 has been optimised using the ACO algorithm to find the optimal hyperparameters for the SVR model for the uniform distribution.
Uniform distribution is more challenging than Gaussian distribution, so it is more important to tune the hyperparameters of the SVR model to achieve good performance on the uniform distribution.Model 2 can make more accurate predictions than the initial SVR2 model, especially for locations with a higher concentration of nodes, because the ACO algorithm has learned that the number of barriers required at a location is positively correlated with the concentration of nodes.This is because there is more competition for resources at locations with a higher concentration of nodes, so more barriers are needed to ensure that all the nodes have access to the resources they need [26].
Overall, the ACO-SVR1 model (Model 1, Gaussian distribution) achieves slightly better MSE, MAE, and R-squared than the ACO-SVR2 model (Model 2, uniform distribution). This is likely because the Gaussian distribution is less challenging than the uniform distribution. Based on the bar plot illustrated in Fig 24, the results show that the ACO-SVR models effectively improve the performance of SVR models for predicting the number of barriers required at different locations in a WSN. Both the Gaussian and uniform distributions saw notable improvements in MSE and MAE thanks to ACO-SVR1 and ACO-SVR2. Although favourable, the improvements in R-squared are not as noteworthy. For the Gaussian distribution, ACO-SVR1 performs somewhat better than ACO-SVR2 in terms of MSE, MAE, and R-squared gains. Overall, the findings demonstrate that using ACO-SVR models to forecast the number of barriers needed at various locations within a WSN can effectively enhance the performance of SVR models.
Conclusion
The construction and optimisation of SVR models for the crucial task of estimating the number of barriers needed in WSNs has benefited greatly from the insights provided by this research.The results demonstrate how well the Ant Colony Optimization-based SVR (ACO-SVR) architecture works to improve prediction accuracy.Interestingly, the research found that Model 1, optimised for the Gaussian distribution, consistently performs better than Model 2, designed for the more difficult uniform distribution, even after careful hyperparameter adjustment and regularisation.These findings highlight the importance of considering data distribution factors when using machine learning models in practical settings.
This research makes several notable contributions to the fields of WSNs and machine learning.It introduces the innovative ACO-SVR framework as a robust solution for predicting the number of barriers in WSNs, thus offering a novel approach to addressing intrusion detection and prevention challenges.Additionally, the demonstrated superiority of L1 regularisation highlights the significance of effective feature selection in improving model performance.The practical implications of this research are substantial.Organisations responsible for deploying WSNs for various applications, including security and environmental monitoring, can leverage these findings to enhance their network efficiency and cost-effectiveness [27].Moreover, the emphasis on data distribution characteristics underscores the importance of tailoring machine learning solutions to the specific requirements of the problem domain, thereby offering a more accurate and reliable predictive capability.These findings are anticipated to have a lasting impact on the practical deployment of WSNs and underscore the role of machine learning as a critical enabler for efficient and proactive network management.
Model limitations
While the proposed approach exhibits promising results in the domain of intrusion detection and prevention, it is important to acknowledge and discuss certain limitations that may influence the applicability and generalizability of the model.
• Sensitivity to Network Conditions: The effectiveness of the model may be influenced by specific network conditions prevalent during training and evaluation. Variations in network structures, communication patterns, or environmental factors could impact the model's performance. Further studies under diverse network scenarios are recommended to assess the robustness of the proposed approach.
• Scalability Considerations: The scalability of the solution should be carefully considered, especially in large-scale sensor networks.As the size of the network increases, the computational requirements for both the SVR and ACO components may escalate.Future work should explore optimisation strategies to ensure the scalability of the proposed model in real-world deployment scenarios.
• Generalization Across Network Types: The proposed model's generalizability across different types of sensor networks deserves attention.While the current study focuses on a specific sensor network setup, the model's performance may vary when applied to diverse network architectures.Further investigations across various sensor network configurations will contribute to a more comprehensive understanding of the model's capabilities.
• Challenges in Large-Scale Implementation: A. Increased Training Time: As the size of the dataset and the number of features grow, the training time for the SVR model may increase.Consideration should be given to distributed computing or parallelisation strategies to mitigate this challenge.
B. Memory Requirements: Large-scale implementation may demand significant memory resources, especially when dealing with extensive datasets.Efficient memory management or distributed computing frameworks could be explored to address this concern.
C. ACO Scalability:
The scalability of the ACO algorithm could be influenced by the complexity of the optimisation problem and the chosen parameter values.Sensitivity analysis and fine-tuning may be required for large-scale scenarios.
By transparently addressing these limitations, we aim to provide a balanced perspective on the proposed approach.These considerations highlight potential areas for future research and improvement, ensuring the continued refinement of the model for practical deployment in real-world intrusion detection and prevention scenarios.
Time Complexity:
The time complexity of the proposed intrusion detection and prevention approach primarily stems from two key components: the SVR model training and the ACO algorithm.
• SVR Model Training: The time complexity of training the SVR model is influenced by the number of training samples (n) and the number of features (m).With the adoption of efficient optimisation algorithms in popular machine learning libraries, such as scikit-learn, the SVR training process is generally linear or slightly super linear in the number of samples and features.
• ACO Algorithm: The ACO algorithm's time complexity is associated with the number of iterations (iterations) and the ant population (ants) size.Generally, ACO exhibits linear time complexity.However, the influence of parameters like the number of iterations and the size of the ant population needs consideration.
Space Complexity:
The memory requirements during the model training and optimisation processes determine the space complexity.
• SVR Model: The space complexity of the SVR model is primarily related to storing the model parameters.This complexity is generally linear in the number of features.
• ACO Algorithm: ACO's space complexity is influenced by the storage of pheromone matrices and solution constructions.It is also typically linear in terms of the number of features and the ant population size.
Real-world scenario examples and areas of application
1. Urban Surveillance Networks: In urban environments, WSNs are employed for surveillance to ensure public safety.The proposed intrusion detection and prevention approach can be instrumental in identifying anomalous activities, such as unauthorised access to secured areas or unusual movement patterns.The model can effectively distinguish between normal and suspicious behaviour by leveraging data from various sensors, including motion detectors and environmental sensors [10,28].
2. Industrial IoT (IIoT) Applications:
In industrial settings where IoT devices are extensively used for process monitoring and control, ensuring the security of these systems is paramount.The proposed approach can be applied to detect intrusions in Industrial IoT (IIoT) networks, safeguarding critical infrastructure from unauthorised access and potential disruptions.The model's adaptability allows it to address specific security concerns prevalent in industrial environments [29,30].
3. Precision Agriculture: WSNs play a pivotal role in modern agriculture for monitoring soil conditions, crop health, and environmental parameters.The proposed model can enhance the security of these networks by detecting and preventing unauthorised access or tampering with sensor nodes [31].It ensures the integrity of data used for precision agriculture practices, preventing malicious interference that could impact decision-making processes [32].
4. Smart Home Security:
The proposed approach can offer robust intrusion detection capabilities in the context of smart homes equipped with sensor networks for automation and security.By analysing patterns in sensor data from motion detectors, door/window sensors, and other relevant devices, the model can distinguish between normal household activities and potential security threats, providing homeowners with advanced threat detection and prevention [33].
5. Environmental Monitoring in Remote Areas: Deploying WSNs in remote environmental monitoring scenarios, such as wildlife conservation or ecological research, necessitates reliable intrusion detection mechanisms. The proposed approach can contribute to securing these networks against unauthorised access, ensuring the continuity of data collection, and minimising the risk of interference in sensitive ecological studies [34].
Those mentioned above are a few real-world applications, but the research scope is not limited to these.
Fig 4.
The distribution contains a few outliers as well.We can see from Fig 4(A) that the distribution's central tendency has a little right skew, with a mean of 103.82 barriers and a median of 86.87 barriers.This indicates that while certain datasets have a very high number of Gaussian barriers, most of the datasets have a reasonable number of barriers.The distribution is rather widely dispersed, with a standard deviation of 66.2 barriers.It indicates that the number of Gaussian barriers varies widely throughout the dataset.We can observe from Fig 4(B) that the distribution's central tendency has a slight right skew, with a median of 103.82 barriers and a mean of 139.25 barriers.
Table 6 (continued): 4. For each input feature in the selected model: a. Perturb the feature while keeping other features constant. b. Record the changes in model output (e.g., predicted barrier counts). c. Calculate the sensitivity index (partial derivative) for the feature. d. Store the feature name and its sensitivity index in the list.
Fig 6(A) displays the optimal solution found by the ACO algorithm. The distance to the best solution, which indicates the quality of the solution, is roughly 241.36. Table 8 displays the ACO-SVR1 model's results: the model has an estimated MSE of 5752.86, an approximate MAE of 56.24, and an approximate R-squared value of -0.13. B. ACO-SVR2 model. ACO was utilised to optimise the ACO-SVR2 model, employing a different set of attributes than those in the ACO-SVR1 model. Fig 6(B) displays the optimal ACO-SVR2 solution found by the ACO algorithm. For ACO-SVR2, the optimal solution's distance is roughly 235.73. The ACO-SVR2 model's results are shown in Table 8. This model has an approximate MAE of 73.27, an approximate MSE of 9590.55, and an approximate R-squared value of -0.35. 3.4.3 Comparison and feature importance. Table
Fig 10.
Fig 10. Scatter plot of residual vs actual values for SVR2 model. https://doi.org/10.1371/journal.pone.0299334.g010
Fig 19(B) is a scatter plot of actual vs. predicted values for the number of barriers required at different locations in the WSN under a uniform distribution. The illustration shows how accurately the ACO-SVR2 model can predict the number of barriers needed. On the other hand, the ACO-SVR1 model's actual vs. predicted values plot, shown in Fig 19(A), has fewer outliers than it does.
Fig 19.
Fig 19. (a) Scatter Plot of Actual vs Predicted Values of Number of Barriers for Model 1 and (b) for Model 2. https://doi.org/10.1371/journal.pone.0299334.g019
The bar plots in Fig 21 show that L1 regularisation outperforms L2 regularisation on the ACO-SVR1 model (Model 1) for predicting the number of barriers required at different locations in the WSN in terms of MSE, MAE, and R-squared. The MSE measures the average squared difference between the predicted and actual values; a lower MSE indicates a better model fit. The bar plot shows that the MSE of the L1-regularised model is lower than that of the L2-regularised model, suggesting that the L1-regularised model produces more accurate forecasts. The MAE measures the average absolute difference between the predicted and actual values; a lower MAE indicates a better model fit. The bar plot shows that the MAE of the L1-regularised model is lower than that of the L2-regularised model, again suggesting more accurate forecasts from the L1-regularised model. The R-squared shows the percentage of the variance in the actual values that the model can explain; a greater R-squared indicates a better model fit. The L1-regularised model has a greater R-squared than the L2-regularised model, as the bar plot illustrates, suggesting that it explains a greater portion of the variance in the actual data. The bar plots illustrated in Fig 22 show that L1 regularisation likewise outperforms L2 regularisation on the SVR2 model for predicting the number of barriers required at different locations in the WSN in terms of MSE, MAE, and R-squared. The bar plot illustrates that the MSE of the L1
Fig 23(A) shows the actual vs. predicted values for the first model (Gaussian distribution) for the initial SVR1 model and the ACO-SVR1 model after feature engineering, hyperparameter tuning and regularisation (Model 1). The plot shows that Model 1 can make more accurate predictions than the initial SVR1 model. Model 1 can make | 11,950.4 | 2024-02-29T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
The Wear Resistance of Cr-C-Al2O3 Composite Deposits Prepared on a Cu Substrate using Cr3+-based Plating Baths
Cr-C-Al 2 O 3 deposits with different Al 2 O 3 concentrations were successfully prepared on a Cu substrate using Cr 3+ -based electroplating baths. The microstructures of the Cr-C-Al 2 O 3 deposits were examined using optical, scanning and transmission electron microscopes. The hardness values, the corrosion and wear resistance of the Cr-C and Cr-C-Al 2 O 3 deposited specimens were evaluated. Based on the experimental results, the hardness values of the Cr-C-Al 2 O 3 deposits increased with increasing Al 2 O 3 concentration in the electroplating bath. According to our microstructure study, Al 2 O 3 nanoparticles are uniformly distributed within the Cr-C deposits after electroplating in a Cr 3+ -based plating bath. The wear resistance of the Cr-C-deposited specimens could be noticeably improved by adding Al2O3 nanoparticles to the deposit. The Cr-C-Al 2 O 3 deposited specimens, which were prepared in a plating bath with an Al 2 O 3 concentration of 50 gL -1 , had a relatively high wear resistance compared to the other specimens.
Introduction
Due to the material's lustrous appearance, as well as superior corrosion and wear resistance, Cr electroplating has been widely used in industrial applications for almost a century. Conventional Cr electroplating is performed in an electroplating bath containing highly toxic Cr 6+ ions [1]. In February 2003, the Restriction of Hazardous Substances Directive (RoHS) was announced by the European Union. This directive restricts the use of six toxic substances, including Cr 6+ ions, in the manufacturing process for electrical and electronic products. Therefore, the development of alternative coatings has become an important topic in recent years [2,3]. Trivalent Cr electroplating, among all of the alternative options, is considered to have great potential to replace conventional hexavalent Cr electroplating because of its relatively low toxicity and high current efficiency [4][5][6].
In our previous works, we reported that the hardness value of an as-plated Cr-C deposit increased from ca. 780 Hv to ca. 1600 Hv after annealing at 600 °C for 1 h [7], or to ca. 1600 Hv through reduction flame heating for 1 s [8]. However, the cracks in the Cr-C deposits widened severely after annealing, reducing the corrosion protection of the coating.
Hence, it is important to increase the mechanical properties of Cr-C deposits without weakening their corrosion resistance. Composite electroplating may be a useful method for strengthening as-plated Cr-C deposits [9,10]. In general, ceramic particles, such as SiC or Al 2 O 3 , are often used as hardening phases to form composite coatings via electroplating [11,12]. The aim of this study is to fabricate Cr-C-Al 2 O 3 composite deposits using Cr 3+ -based plating baths with different Al 2 O 3 concentrations. The microstructures, hardness values, and wear resistance of the resulting Cr-C-Al 2 O 3 deposits will be investigated and discussed.
Experimental Procedure
A commercially pure Cu disc with a diameter of 20 mm and a thickness of 2 mm was used as the substrate for Cr-C-Al 2 O 3 electroplating. Before electroplating, the Cu substrate was mechanically polished with 600-grit emery paper, ultrasonically cleaned in an alcohol bath, and dried with an air blaster. The trivalent Cr plating bath was composed of 0.8 M CrCl 3 •6H 2 O as the main metal salt, urea as a complexing agent, and a small amount of buffer salts to maintain a pH value of 1.1 [7]. Al 2 O 3 particles with an average size of 100 nm were added to the plating bath at concentrations of 0, 50, 100 and 150 gL -1 to produce the Cr-C and Cr-C-Al 2 O 3 deposits on the Cu substrate. A deposit with a thickness of ca. 50 μm was prepared at an electroplating current density of 35 A dm -2 for 1500 s. The bath temperature was kept at 30 ± 1 °C during electroplating. To increase its circulation, the plating bath was stirred with a magnetic stirrer during electroplating.
After electroplating, the surface and cross-sectional morphologies were studied with a scanning electron microscope (SEM; Hitachi S-3000N) and an optical microscope (OM; Olympus BH2-HLSH). The hardness values of the Cr-C and Cr-C-Al 2 O 3 deposits were measured using a micro-hardness tester (Matsuzawa Digital, Model MXT-α7e) with a load of 25 g. The mean hardness and standard deviation of each Cr-C or Cr-C-Al 2 O 3 deposit were evaluated by measuring at five arbitrary positions approximately in the centre of its cross section mounted in epoxy. The microstructures of the as-plated Cr-C and Cr-C-Al 2 O 3 deposits were characterised using optical, scanning and transmission electron microscopes (TEM; Philips Tecnai F30).
The wear resistance of the Cr-C and Cr-C-Al 2 O 3 deposited specimens was evaluated using a ball-on-plate wear tester, in which a 6 mm counterpart ball made of steel with a hardness value of ca. 450 Hv was used. During the wear-resistance test, a constant load of 10 N was applied normally to the Cr-C or Cr-C-Al 2 O 3 deposited specimens under an unlubricated condition at 25 °C. The wear-resistance test was conducted on a circular track with a diameter of 3 mm, at a frequency of 10 Hz, and a total sliding distance of 50 m. The weight-difference value of the Cr-C or Cr-C-Al 2 O 3 deposited specimens was measured before and after the wear-resistance test using a scale with a precision of 0.01 mg. An optical microscope was used to examine the surface morphologies of the Cr-C and Cr-C-Al 2 O 3 deposited specimens before and after the wear tests.
3.3 Wear Resistance
The weight-difference values of the Cr-C and Cr-C-Al 2 O 3 deposited specimens, before and after the wear-resistance test, are shown in Fig. 4. Clearly, a weight loss of 0.43 mg was detected for the Cr-C deposited specimen after the wear-resistance test. However, weight gains were found for the worn Cr-C-Al 2 O 3 deposited specimens prepared from the plating baths with 50 and 100 gL -1 of Al 2 O 3 . A slight weight loss of 0.02 mg was detected for the Cr-C-Al 2 O 3 deposited specimens prepared in the electroplating bath with 150 gL -1 of Al 2 O 3 . This indicates that the addition of Al 2 O 3 nanoparticles to the Cr-C deposits noticeably increased their wear resistance. As shown in Fig. 5(a), approximately half of the surface of the as-plated Cr-C deposits was covered, whereas the other half did not markedly change, revealing a typical nodular surface. However, the nodular surface of the worn Cr-C-Al 2 O 3 deposited specimens was no longer observed; the surface had a circular scratched appearance. Because weight gains could be detected for the Cr-C-Al 2 O 3 deposited specimens prepared in the plating baths with 50 and 100 gL -1 of Al 2 O 3 , it can be expected that the circular scratch marks were ground and cold-welded by the steel counterpart. The wear resistance of the as-plated and anneal-hardened Cr-C deposited specimens was evaluated in our previous study [13]. We found that the wear resistance of the as-plated Cr-C deposited specimens could be significantly improved after anneal-hardening; a slight weight loss was detected for the anneal-hardened Cr-C deposited specimens after the wear-resistance test. In this study, a cold-welded layer, smeared from the steel counterpart, was detected on the surface of the Cr-C-Al 2 O 3 deposited specimens, leading to an increase in their weight. Although the hardness of the anneal-hardened Cr-C deposits is much higher than that of the Cr-C-Al 2 O 3 deposits, a slight weight loss was detected for the anneal-hardened Cr-C deposited specimens. Because Al 2 O 3 particles are widely used as an abrasive for grinding, the steel counterpart could be abraded by the Cr-C-Al 2 O 3 deposits during the wear-resistance test and cold-welded onto the deposited specimens.
Figs. 1
Figs. 1(a, b) show the SEM micrographs of the Cr-C and Cr-C-Al 2 O 3 deposit surfaces and their chemical composition analyses, which were performed using an energy-dispersive X-ray spectrometer (EDS). It can be observed that the Al 2 O 3 nanoparticles, shown in a bright colour, were uniformly distributed on the deposit surface (see Fig. 1(b)). The results
Fig. 1 Fig. 2
Fig. 1 Surface morphologies and EDS-analysis of (a) Cr-C deposits and (b) Cr-C-Al 2 O 3 deposits prepared in an electroplating bath with 50 gL -1 Al 2 O 3 .
Fig. 3
Fig. 3 shows the hardness and standard deviation values of the Cr-C-Al 2 O 3 deposits with different Al 2 O 3 concentrations. The hardness of the as-plated Cr-C deposits is 683 Hv, whereas values of 791, 814 and 852 Hv were measured for the Cr-C-Al 2 O 3 deposits prepared in the baths with 50, 100 and 150 gL -1 of Al 2 O 3 , respectively. That is, the hardness values of the Cr-C-Al 2 O 3 deposits increased with increasing Al 2 O 3 concentration in the Cr 3+ -based plating bath. According to our previous study [8], the hardness of the as-plated Cr-C deposits can be significantly increased to 1600 Hv after flame heating for 1 s. The wear resistance of the as-plated Cr-C deposited specimen could also be noticeably improved through flame heating. However, the cracks in the flame-heated Cr-C deposits became wider and longer, leading to a decrease in their corrosion resistance. In this study, we confirmed that the crack density could be significantly reduced and the crack width narrowed in the Cr-C deposits in the presence of Al 2 O 3 nanoparticles. Moreover, the as-plated Cr-C-Al 2 O 3 deposits have a relatively high hardness value, above 790 Hv, which is higher than that of fully quench-hardened steels used as tool and cutting materials.
Fig. 3 Hardness values and standard deviations of as-plated Cr-C deposits and Cr-C-Al₂O₃ deposits prepared from electroplating baths with varying concentrations of Al₂O₃.
Fig. 4 Weight-difference values of as-plated Cr-C and Cr-C-Al₂O₃ deposited specimens after the wear-resistance test.

The surface morphologies of the Cr-C and Cr-C-Al₂O₃ deposited specimens after the wear-resistance test are shown in Figs. 5(a)-(d).
Fig. 5 Surface morphologies of (a) Cr-C deposited specimens and Cr-C-Al₂O₃ deposited specimens prepared in electroplating baths with (b) 50, (c) 100, and (d) 150 g L⁻¹ Al₂O₃, after the wear-resistance test.

The cross-sectional morphologies of the Cr-C and Cr-C-Al₂O₃ deposited specimens after the wear-resistance tests are shown in Figs. 6(a) and (b), corresponding to the specimens with inferior and superior wear resistance, respectively. The surface profile of the Cr-C deposited specimens levelled off significantly after the wear-resistance test; however, the worn Cr-C-Al₂O₃ deposited specimens did not significantly alter their surface profiles. These findings suggest that the addition of Al₂O₃ nanoparticles within the Cr-C deposits could increase their wear resistance. As shown in Fig. 6(b), a cold-welded layer smeared from the steel counterpart can be found on the surface of the Cr-C-Al₂O₃ deposits after the wear-resistance test. This result could explain the weight gain detected in the Cr-C-Al₂O₃ deposited specimens after the wear-resistance test. As shown in Fig. 5(d), some shallow holes were observed in the Cr-C-Al₂O₃ deposits prepared in the electroplating bath containing 150 g L⁻¹ Al₂O₃ after the wear-resistance test. This result implies that some fragments of the Cr-C-Al₂O₃ deposits peeled off during the wear-resistance test, resulting in a slight weight loss, even though a layer smeared from the steel counterpart covered the surface after the wear-resistance test.
Fig. 6 Cross-sectional morphologies of (a) Cr-C deposited specimens and (b) Cr-C-Al₂O₃ deposited specimens prepared in an electroplating bath with 50 g L⁻¹ Al₂O₃ after the wear-resistance test.
In this study, Cr-C-Al₂O₃ deposits with different Al₂O₃ concentrations could be successfully prepared in Cr³⁺-based electroplating baths. After electroplating, the corrosion and wear resistance of the Cr-C and Cr-C-Al₂O₃ deposited specimens were investigated. The hardness values of the Cr-C-Al₂O₃ deposits increased with increasing concentrations of Al₂O₃ nanoparticles in the Cr³⁺-based electroplating baths. The wear resistance of the Cr-C deposited specimens can be markedly increased via co-deposition with Al₂O₃ nanoparticles. Through-deposit cracks in the Cr-C deposits were reduced by adding Al₂O₃ nanoparticles to the deposits. The Cr-C-Al₂O₃ deposited specimens prepared in the electroplating bath with 50 g L⁻¹ Al₂O₃ had relatively high wear resistance compared to the other specimens.

| 3,092.6 | 2016-01-01T00:00:00.000 | ["Materials Science", "Engineering"] |
Pan-cancer analysis of Krüppel-like factor 3 and its carcinogenesis in pancreatic cancer
Background Krüppel-like factor 3 (KLF3) is a key transcriptional repressor, which is involved in various biological functions such as lipogenesis, erythropoiesis, and B cell development, and has become one of the current research hotspots. However, the role of KLF3 in the pan-cancer and tumor microenvironment remains unclear. Methods TCGA and GTEx databases were used to evaluate the expression difference of KLF3 in pan-cancer and normal tissues. The cBioPortal database and the GSCALite platform analyzed the genetic variation and methylation modification of KLF3. The prognostic role of KLF3 in pan-cancer was identified using Cox regression and Kaplan-Meier analysis. Correlation analysis was used to explore the relationship between KLF3 expression and tumor mutation burden, microsatellite instability, and immune-related genes. The relationship between KLF3 expression and tumor immune microenvironment was calculated by ESTIMATE, EPIC, and MCPCOUNTER algorithms. TISCH and CancerSEA databases analyzed the expression distribution and function of KLF3 in the tumor microenvironment. TIDE, GDSC, and CTRP databases evaluated KLF3-predicted immunotherapy response and sensitivity to small molecule drugs. Finally, we analyzed the role of KLF3 in pancreatic cancer by in vivo and in vitro experiments. Results KLF3 was abnormally expressed in a variety of tumors, which could effectively predict the prognosis of patients, and it was most obvious in pancreatic cancer. Further experiments verified that silencing KLF3 expression inhibited pancreatic cancer progression. Functional analysis and gene set enrichment analysis found that KLF3 was involved in various immune-related pathways and tumor progression-related pathways. In addition, based on single-cell sequencing analysis, it was found that KLF3 was mainly expressed in CD4Tconv, CD8T, monocytes/macrophages, endothelial cells, and malignant cells in most of the tumor microenvironment. Finally, we assessed the value of KLF3 in predicting response to immunotherapy and predicted a series of sensitive drugs targeting KLF3. Conclusion The role of KLF3 in the tumor microenvironment of various types of tumors cannot be underestimated, and it has significant potential as a biomarker for predicting the response to immunotherapy. In particular, it plays an important role in the progression of pancreatic cancer.
Introduction
Worldwide, cancer is the second most common cause of death, accounting for roughly one in six deaths (1). In 2022, there were 1,918,030 new cancer cases and 609,360 cancer-related deaths in the USA, according to the report on cancer statistics (2). Despite years of sustained effort, the long-term results of treatments using traditional strategies remain dismal. A major obstacle limiting the effectiveness of conventional cancer therapies is their limited tumor specificity (3). In recent years, tumor immunotherapy has received increasing attention, including immune checkpoint blockade therapy, immune cell therapy, and tumor vaccine therapy (4,5). The specificity of immunotherapy depends largely on the specific tumor antigen (6). However, immunotherapy-related biomarker-matching trials are still limited in most cancers (7). Therefore, further exploration of effective immunotherapy-related prognostic tumor biomarkers is urgently needed.
Krüppel-like factor (KLF) 3 is a member of the KLF transcription factor family, which is involved in various physiological processes such as adipogenesis, erythrocyte maturation, B cell differentiation, and cardiovascular development (8). KLF3 also has a special zinc finger structure, which can bind to related CACCC elements to regulate the expression of target genes, thereby regulating cell proliferation, migration, and apoptosis, and it is also critical to early embryonic development (9). In recent years, studies have found that KLF3, as a transcriptional repressor, is abnormally expressed in a variety of tumors, including colon cancer (10), breast cancer (11), lung cancer (12), and pancreatic cancer (13). KLF3 plays an important role in different tumor types. For example, studies have shown that KLF3 becomes a key regulator of metastasis by controlling the expression of STAT3 in lung cancer, and silencing KLF3 promotes lung cancer EMT and enhances lung cancer metastasis (14). Another study showed that miR-365a-3p targets KLF3 to inhibit colorectal cancer cell migration, invasion, and chemotherapy resistance (15). Tian et al. reported that miR-660-5p-loaded M2 macrophage-derived exosomes promoted the development of hepatocellular carcinoma by regulating KLF3 (16). In addition, Zhang et al. found that aberrant expression of KLF3 was associated with acquired resistance to fluorouracil in colon cancer cells (17). However, the expression levels and clinical significance of KLF3 in most cancer types remain to be elucidated.
In this study, a comprehensive bioinformatics analysis of KLF3 was conducted through multiple databases to clarify the expression, abnormal variation, and clinical significance of KLF3 in pan-cancer. The role of KLF3 in the tumor immune microenvironment was further analyzed, and the relationship between KLF3 and immunotherapy response and related sensitive drugs was evaluated. We also focused on analyzing the relationship between KLF3 abnormal expression and pancreatic cancer progression using in vitro and in vivo experiments, and identified KLF3 as an independent prognostic risk factor for pancreatic cancer.
Pan-cancer data collection
We organized the pan-cancer data from the TCGA database and standardized the expression values to log2(TPM + 1); these data were used for the differential analysis of KLF3 expression between paired normal and cancer tissues, for drawing Kaplan-Meier curves for survival analysis, and for independent prognostic analysis. In addition, the normalized pan-cancer dataset TCGA TARGET GTEx (PANCAN, N = 19,131; G = 60,499) was downloaded from the UCSC Xena database (https://xenabrowser.net/). Differential expression analysis of KLF3 between unpaired normal and cancer tissues, clinical feature correlation analysis, Cox prognostic analysis, and immune feature correlation analysis were performed with SangerBox (18), with sequencing data normalized to log2(x + 1). In Table 1, we report the abbreviation for each tumor type.
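As a minimal illustration of this normalization step (a sketch only; the matrix below and its sample labels are invented, not taken from the original pipeline), the log2(TPM + 1) transform can be applied directly to a gene-by-sample TPM matrix in R:

```r
# Minimal sketch: log2(TPM + 1) normalization of a gene-by-sample expression matrix.
# `tpm` stands in for a matrix read from a UCSC Xena / TCGA download (values invented).
tpm <- matrix(c(0, 3.2, 15.7, 120.4, 8.1, 0.5), nrow = 2,
              dimnames = list(c("KLF3", "GAPDH"), c("S1", "S2", "S3")))

log_expr <- log2(tpm + 1)  # the log2(TPM + 1) transform used for all downstream analyses

# Example: mean KLF3 expression by (illustrative) tumor/normal labels.
group <- c("tumor", "tumor", "normal")
tapply(log_expr["KLF3", ], group, mean)
```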
Prognostic analysis of KLF3 in pan-cancer
To clarify the effect of KLF3 on the prognosis of tumor patients, Cox proportional hazards regression mode (19) was established to analyze the correlation between KLF3 expression and the overall survival (OS), disease-specific survival (DSS), disease-free interval (DFI) and progression-free interval (PFI) of each cancer type. The "surv_cutpoint" function in the "survminer" package was utilized to perform an optimal cut-off selection for distinguishing between high and low expression groups. Followed by a Log-rank test for analyzing the survival differences, and the results were visualized using both "survminer" and "ggplot2" packages.
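A minimal R sketch of this survival workflow, assuming a per-patient data frame with follow-up time, event status, and KLF3 expression (the simulated values below are purely illustrative), using the survival and survminer packages named above:

```r
library(survival)
library(survminer)

# Illustrative data frame: one row per patient of a given cancer type.
set.seed(1)
df <- data.frame(
  OS.time = rexp(100, 1 / 800),           # follow-up time (days)
  OS      = rbinom(100, 1, 0.6),          # 1 = death, 0 = censored
  KLF3    = rnorm(100, mean = 5, sd = 1)  # log2(TPM + 1) expression
)

# Cox proportional hazards regression of continuous KLF3 expression.
cox_fit <- coxph(Surv(OS.time, OS) ~ KLF3, data = df)
summary(cox_fit)

# Optimal cut-off to split high/low expression groups, as described in the text.
cut    <- surv_cutpoint(df, time = "OS.time", event = "OS", variables = "KLF3")
df_cat <- surv_categorize(cut)            # KLF3 becomes a "high"/"low" factor

# Kaplan-Meier curves and log-rank test between the two groups.
km_fit <- survfit(Surv(OS.time, OS) ~ KLF3, data = df_cat)
ggsurvplot(km_fit, data = df_cat, pval = TRUE)  # pval = TRUE adds the log-rank p-value
```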
KLF3 protein expression and localization
We obtained the difference in KLF3 protein expression between pancreatic cancer and normal pancreatic tissue by querying the UALCAN database (20). Additionally, the subcellular localization of the KLF3 gene was determined using the human gene database Genecards (https://www.genecards.org/).
The function and enrichment analysis
To identify differentially expressed genes between low and high KLF3 subgroups in each cancer type, patients were ranked based on their KLF3 expression levels. The top 30% of patients were classified as the high KLF3 subgroup, while the bottom 30% were classified as the low KLF3 subgroup. The "limma" R package was employed for analyzing KLF3-related differentially expressed genes in each cancer type, considering an adjusted p-value threshold of <0.05. Gene set enrichment analysis was performed using the R packages "clusterProfiler" (21) and "GSVA" (22). The annotated gene set (h.all.v7.2.symbols.gmt) was selected as the reference gene set for enrichment analysis. The pan-cancer Normalized Enrichment Score (NES) and False Discovery Rate (FDR) were calculated for each biological process. The results were visualized using the "ggplot2" R package in the form of a bubble plot. Moreover, we accessed the Cancer Single-cell State Atlas (CancerSEA, biocc.hrbmu.edu.cn/CancerSEA/home.jsp) database and conducted an analysis of the single-cell RNA sequencing data by specifically examining the gene "KLF3". This analysis allowed us to uncover the intricate relationship between KLF3 gene expression and the diverse repertoire of 14 distinct states observed within cancer.
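The subgroup definition, differential expression, and enrichment steps could be sketched in R roughly as follows (a hedged sketch: the random matrix stands in for real TCGA data, and the hallmark gene-set file h.all.v7.2.symbols.gmt is assumed to be available locally):

```r
library(limma)
library(clusterProfiler)

# `expr`: log2(TPM + 1) gene-by-sample matrix for one cancer type (random stand-in here).
set.seed(2)
expr <- matrix(rnorm(2000), nrow = 100,
               dimnames = list(paste0("gene", 1:100), paste0("S", 1:20)))
klf3 <- expr[1, ]

# Top/bottom 30% of patients by KLF3 expression define the high/low subgroups.
hi    <- klf3 >= quantile(klf3, 0.7)
lo    <- klf3 <= quantile(klf3, 0.3)
keep  <- hi | lo
group <- factor(ifelse(hi[keep], "high", "low"), levels = c("low", "high"))

# limma differential expression between high and low KLF3 subgroups.
design <- model.matrix(~ group)
fit <- eBayes(lmFit(expr[, keep], design))
deg <- topTable(fit, coef = "grouphigh", number = Inf, adjust.method = "BH")

# GSEA on the ranked t-statistics against the MSigDB hallmark sets.
ranks    <- sort(setNames(deg$t, rownames(deg)), decreasing = TRUE)
hallmark <- read.gmt("h.all.v7.2.symbols.gmt")      # assumed local copy of the gene sets
gsea_res <- GSEA(ranks, TERM2GENE = hallmark, pvalueCutoff = 0.05)
head(gsea_res@result[, c("ID", "NES", "p.adjust")])  # NES and FDR per biological process
```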
Immune cell infiltration analysis and large-scale single-cell sequencing data validation
To conduct a reliable immune correlation assessment, we used the MCP-counter (23) and EPIC (24) algorithms to calculate the Spearman's correlation coefficient between the KLF3 gene and immune cell infiltration in each tumor and presented the results in the form of a heat map.
Association of KLF3 expression with the tumor microenvironment (TME) and immune checkpoints
To evaluate the relationship between KLF3 expression and TME, the stromal, immune, and ESTIMATE scores of each patient in each tumor were calculated according to the KLF3 gene expression using the R package ESTIMATE (25). Further, Spearman's correlation coefficient of KLF3 expression and immune infiltration score in each tumor was calculated using the corr.test function of the R package psych (version 2.1.6).
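A small sketch of this correlation step using psych::corr.test, assuming the stromal, immune, and ESTIMATE scores have already been computed with the ESTIMATE package (the values below are simulated placeholders):

```r
library(psych)

# Illustrative inputs: KLF3 expression and ESTIMATE-derived scores for 50 patients of one
# cancer type; in the real pipeline the scores come from the ESTIMATE R package.
set.seed(3)
klf3 <- rnorm(50, mean = 5)
scores <- data.frame(
  StromalScore  = rnorm(50, 0, 500),
  ImmuneScore   = rnorm(50, 0, 500),
  ESTIMATEScore = rnorm(50, 0, 800)
)

# Spearman correlation of KLF3 expression with each TME score, as described in the text.
res <- corr.test(x = data.frame(KLF3 = klf3), y = scores,
                 method = "spearman", adjust = "fdr")
res$r  # correlation coefficients
res$p  # (adjusted) p-values
```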
KLF3 expression and immunotherapy response and drug prediction
As in previous studies (32), we calculated the tumor mutation burden (TMB) of each tumor using the TMB function of the R package maftools (version 2.8.05) and obtained pan-cancer microsatellite instability (MSI) data (33). The correlation between KLF3 expression and TMB/MSI for each cancer type was calculated by the Spearman method and visualized with radar maps. Immunotherapy response prediction and biomarker assessment of KLF3 were performed with the TIDE website (http://tide.dfci.harvard.edu). Based on the GDSC and CTRP databases, the GSCA online website (http://bioinfo.life.hust.edu.cn/GSCA/#/drug) was used to predict sensitive drugs targeting KLF3, and bubble charts display the relationship between each drug's half-maximal inhibitory concentration (IC50) and KLF3 expression.
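As an illustrative sketch of the TMB calculation and its correlation with KLF3 expression, the example below uses the TCGA-LAML MAF file bundled with maftools in place of the real pan-cancer data; the maftools tmb() helper named above is referenced in a comment, and TMB is also computed manually as mutations per megabase under an assumed 50 Mb capture size. The KLF3 values are simulated.

```r
library(maftools)

# Example MAF shipped with maftools (stand-in for the real pan-cancer MAF files).
laml_maf <- system.file("extdata", "tcga_laml.maf.gz", package = "maftools")
laml <- read.maf(maf = laml_maf)

# tmb(maf = laml, captureSize = 50, logScale = TRUE)  # helper named in the text (maftools >= 2.6)

# Manual equivalent: per-sample mutation counts divided by an assumed 50 Mb capture size.
per_sample <- getSampleSummary(laml)   # per-sample variant counts (column `total`)
tmb_perMB  <- per_sample$total / 50    # mutations per megabase

# Spearman correlation between (simulated) KLF3 expression and TMB.
set.seed(5)
klf3 <- rnorm(length(tmb_perMB), mean = 5)
cor.test(klf3, tmb_perMB, method = "spearman")
```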
In vitro experiments
Cell culture, plasmid transfection, RNA extraction, quantitative real-time PCR, and immunoblotting were performed as in previous studies (34). PANC-1 and SW1990 were purchased from the National Cell Identification and Collection Center of the Chinese Academy of Sciences. BxPC-3 (CL-0042) was purchased from Procell (Wuhan, China). The HPDE6-C7 cell line has been preserved by our laboratory. All cell lines in this experiment were identified and verified by short tandem repeat sequencing. Cell culture dishes and 6-well plates were obtained from NEST Biotechnology (Wuxi, China). RNA duplexes were designed and synthesized by the Genepharma Company (Shanghai, China). Table S1 lists the sequences of the shRNAs and PCR primers used in this study. Primary antibodies were as follows: KLF3 (Abcam, 1:500) and β-Tubulin (Proteintech, 1:1000). Cell counting kit-8 (CCK-8), 5-ethynyl-2'-deoxyuridine (EdU), wound healing, and transwell assay experimental details were consistent with previous studies (35). Immunocytochemistry and immunofluorescence (ICC/IF) were conducted as previously described (34). The corresponding antibodies are: KLF3 Rabbit pAb (1:100, A7195, ABclonal) and Goat anti-Rabbit IgG (H+L) Cross-Adsorbed Secondary Antibody, Alexa Fluor™ 546 (1:1000, A-11080, Thermo Fisher).
Subcutaneous xenograft model
Female nude (BALB/c) mice (4 weeks old) were obtained from Hangzhou Ziyuan Experimental Animal Science and Technology Co., Ltd. After acclimatizing the BALB/c nude mice to the housing conditions for one week, they were randomly allocated into two groups: sh-NC and sh-KLF3#1. PANC-1 cells in the logarithmic growth phase, stably transfected with sh-NC or sh-KLF3#1, were harvested and suspended in PBS at a density of 2 × 10⁷ cells/mL. The lower dorsal region of each nude mouse was disinfected, followed by subcutaneous injection of 100 μL of cell suspension. Tumor volume was assessed every 5 days with the following formula: volume = length × width² × 0.5. Mice were euthanized on day 35 after inoculation, and the tumors were removed and weighed. Animal experiments were approved by the Animal Experimental Ethical Inspection of Nanchang Royo Biotech Co. Ltd. (RYE2022092401).
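For clarity, the tumor-volume formula above can be expressed as a small helper function (the measurements in the example are invented):

```r
# Tumor volume from caliper measurements: volume = length x width^2 x 0.5
# (length and width in mm, volume in mm^3).
tumor_volume <- function(length_mm, width_mm) {
  0.5 * length_mm * width_mm^2
}

# Example: measurements on days 5-35 for one xenograft (illustrative numbers).
day       <- seq(5, 35, by = 5)
length_mm <- c(4.0, 5.5, 7.2, 9.0, 10.5, 12.0, 13.5)
width_mm  <- c(3.5, 4.8, 6.0, 7.5,  8.8,  9.6, 10.4)
data.frame(day, volume_mm3 = tumor_volume(length_mm, width_mm))
```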
Statistical analysis
All data were analyzed using GraphPad Prism 8.0 (GraphPad, San Diego, USA). The bioinformatics analysis in this study was partially supported by Sangerbox (http://vip.sangerbox.com/). To assess the significance of differences between the two groups, a Student's t-test was conducted. Furthermore, paired t-tests were performed to compare the expression levels of KLF3 in tumor tissues with those in their paired normal tissues. The Spearman correlation coefficient was used to evaluate associations between variables. The Log-rank test was used in survival analysis. For all statistical comparisons, significance levels were set at p < 0.05.
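A brief sketch of the basic tests named above (unpaired t-test, paired t-test for tumor versus matched normal tissue, and Spearman correlation), with simulated values:

```r
# Illustrative two-group and paired comparisons matching the tests named in the text.
set.seed(4)
tumor  <- rnorm(30, mean = 5.6)   # e.g. KLF3 expression in tumor tissues
normal <- rnorm(30, mean = 5.0)   # matched normal tissues from the same patients

t.test(tumor, normal)                         # unpaired Student's t-test
t.test(tumor, normal, paired = TRUE)          # paired t-test (tumor vs paired normal)
cor.test(tumor, normal, method = "spearman")  # Spearman correlation between two variables
```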
Genetic changes and epigenetic modification of KLF3
Since differential expression of KLF3 was observed in tumors, we analyzed its genetic alterations and epigenetic regulatory modifications using the online resources cBioPortal and GSCALite. As shown in Figure 2A, the main type of genetic alteration of KLF3 was "mutation", among which STAD (5.68%), UCEC (5.29%), SKCM (2.25%), COAD (1.85%), and ESCA (1.1%) were the most typical. "Amplification" was mainly seen in ACC (2.2%), LUAD (0.88%), SARC (0.78%), BLCA (0.73%), and PAAD (0.54%). In pan-cancer, the frequency of KLF3 "deep deletion", "structural variation", and "multiple alterations" was generally less than 0.5%. The relationship between KLF3 expression and copy number variation (CNV) in pan-cancer is shown in Figure 2B and Table 2. Dysregulation of DNA methylation is strongly associated with the onset of various diseases, including cancer (37). The GSCA database provided the methylation sites most negatively correlated with KLF3 gene expression in each tumor type (Figure 2C and Table 3). Further, through the UALCAN database, we found that the methylation level of the KLF3 gene promoter in BLCA, BRCA, CESC, ESCA, HNSC, KIRC, LUAD, LUSC, PRAD, TGCT, and UCEC was significantly higher than that in corresponding normal tissues; the opposite pattern was observed in STAD and THCA (Figures S2A-W). Accumulating evidence suggests that RNA modification pathways are misregulated in human cancers and may be ideal targets for cancer therapy (38). The association between KLF3 expression and RNA modification-related genes is shown in Figure 2D. We found that KLF3 expression was generally positively correlated with m1A-, m5C-, and m6A-related gene expression in pan-cancer, especially with YTHDF1, NSUN3, TET2, METTL14, YTHDC2, and FMR1. The above results indicate that the abnormal expression of KLF3 in different tumors may be closely related to its gene variation and participation in epigenetic modification.

FIGURE 2 Genetic alteration and epigenetic modification of KLF3. (A) Mutation types and frequencies of KLF3 in pan-cancer, identified from the cBioPortal website. (B, C) The relationship between KLF3 expression and gene copy number variation (CNV) and methylation in pan-cancer. (D) Spearman correlation of KLF3 expression with RNA modification-related (m1A, m5C, m6A) gene expression. Blue to red within the triangle on the left side of the heatmap indicates a low to high correlation. In the bar graph on the right, red represents m1A-related genes, blue represents m5C-related genes, and green represents m6A-related genes. *p < 0.05.
Correlation between KLF3 expression and clinicopathological features and prognosis
The above results indicate that KLF3 was abnormally expressed in a variety of tumors, but whether its expression is related to tumor progression needs further exploration. According to the results shown in Figure 3A, as the histological grade increased in patients with CESC, ESCA, KIPAN, KIRC, and STES, there was a decreasing trend in KLF3 expression. Conversely, the opposite trend was observed in patients with PAAD, HNSC, GBMLGG, and LGG (all p<0.05). Furthermore, as the clinical stage progressed in patients with COAD, COADREAD, ESCA, KIPAN, KIRC, THCA, and OV, there was a decreasing trend in KLF3 expression, except for PAAD patients, in whom the opposite trend was observed (Figure 3B, all p<0.05). Next, by drawing Kaplan-Meier survival curves, we found that, compared with patients in the KLF3 low-expression group, high KLF3 expression was closely related to shorter overall survival in patients with ACC, GBMLGG, LGG, PAAD, and SARC (all p<0.05, Figure 3C). In contrast, high expression of KLF3 was closely associated with good prognosis in patients with BLCA, COADREAD, COAD, and KIRC (all p<0.05, Figure 3D). Further, we established a Cox proportional hazards regression model on the pan-cancer patient survival data and KLF3 expression to analyze its prognostic value; the OS results are shown in Figure S2A. DSS results showed that higher KLF3 expression was associated with poorer DSS in LGG, GBMLGG, PAAD, and ACC, whereas the opposite results were observed in patients with KIRC and KIPAN (Figure S2B). Figure S2C shows that high KLF3 expression was associated with poorer PFI in ACC, LGG, GBMLGG, UVM, and PAAD, but with better PFI in KIRC, KIPAN, and HNSC. Furthermore, the expression level of KLF3 was associated with poorer DFI in PAAD and ACC (Figure S2D). Taken together, the results suggest that KLF3 can effectively predict the prognosis of multiple cancers, most notably PAAD.

FIGURE 3 (C) Survival differences between the KLF3 high and low expression groups in ACC, GBMLGG, LGG, PAAD, and SARC. (D) Survival differences between the KLF3 high and low expression groups in BLCA, COAD, COADREAD, and KIRC. The log-rank method was used to compare survival between the high- and low-expression groups. Only cancer types with statistically significant differences are shown. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001.
Functional analysis of KLF3 in pan-cancer
To clarify how KLF3 affects prognosis, we analyzed the correlation between KLF3 and 14 functional states using single-cell sequencing data from CancerSEA. As shown in Figure S3, KLF3 expression was negatively correlated with the cell cycle, DNA damage, DNA repair, and invasive ability in most tumors, while positively correlated with tumor differentiation, EMT, hypoxia, inflammation, metastasis, proliferation, quiescence, and stemness. In addition, through GSEA, we explored the possible signaling pathways through which the abnormal expression of KLF3 affects the above functions (Figure 4). We found significant enrichment of immune-related signaling pathways in most tumor types, including TNFα signaling via NF-κB, IFN-γ response, IFN-α response, inflammatory response, IL6-JAK-STAT3, IL2-STAT5, and allograft rejection. The results also showed that various tumor types were enriched in TGF-β, protein secretion, oxidative phosphorylation, mTORC1, KRAS, epithelial-mesenchymal transition, and DNA repair signals. The above results indicate that KLF3 is closely related to tumor progression and the immune response.
Relationship between KLF3 expression and TME
To clarify the relationship between KLF3 and immune cell infiltration, we analyzed it using the EPIC and MCPCOUNTER algorithms. The results showed that the expression of KLF3 was closely related to the infiltration of CD4+ T cells, CD8+ T cells, neutrophils, myeloid dendritic cells, monocytes/macrophages, and endothelial cells in most of the TME (Figures 5A, B). We further verified these findings by analyzing single-cell sequencing data. As shown in Figure 5C, KLF3 was expressed in higher proportions in CD4Tconv, CD8T, monocytes/macrophages, endothelial cells, and malignant cells of the TME. We then analyzed the relationship between KLF3 expression and the TME in pan-cancer and found that KLF3 expression was negatively associated with immune scores in the TME of most tumors (Figure 6C). Spearman's correlation analysis also showed that KLF3 expression was significantly correlated with immune-related genes (Figure 6D). From a pan-cancer perspective, the immune-related genes VEGFA, C10orf54, CD276, EDNRB, ARG1, HMGB1, ENTPD1, BTN3A1, TLR4, and BTN3A2 were significantly positively correlated with KLF3 expression, whereas VEGFB expression was negatively correlated with KLF3 expression. Furthermore, we found that KLF3 expression in BLCA was inversely correlated with the expression of markers of T-cell exhaustion, M2 macrophages, and CAFs (Figure S4). Interestingly, we observed the opposite result in DLBC. From a pan-cancer perspective, KLF3 was generally positively correlated with the expression of markers associated with M2 macrophages and CAFs. In addition, KLF3 was mostly positively correlated with the expression level of TIGIT among T-cell exhaustion genes (Figure S4). In short, the contribution of abnormal KLF3 expression to the TME is not negligible.

FIGURE 5 Expression and distribution of KLF3 in the TME.

FIGURE 6 Relationship of KLF3 expression with TME in pan-cancer.
Predicting KLF3-related tumor immunotherapy responses and drugs
TMB and MSI are predictors of immunotherapy response (39). Therefore, we evaluated the relationship between KLF3 expression and TMB and MSI in pan-cancer (Figures 7A, B). Further, we predicted the response and sensitivity of tumor patients to immunotherapy drugs based on KLF3 expression. As shown in Figure S5, there were five mouse immunotherapy cohorts for which immunotherapy response could be predicted by KLF3. Notably, when comparing KLF3 with common standard biomarkers of immunotherapy response, we found that an AUC greater than 0.5 was observed in 10 immunotherapy cohorts when KLF3 alone was used as a predictive marker, indicating that KLF3 outperformed TMB, T. Clonality, and B. Clonality in prediction (Figure S6). Subsequently, drug IC50 analysis of KLF3 based on the GDSC dataset revealed that trametinib (a reversible inhibitor of mitogen-activated extracellular signal-regulated kinase 1/2 (MEK1/2)), PD-0325901 (a selective MEK inhibitor), and 17-AAG (an HSP90 inhibitor) were the top three drugs negatively associated with KLF3 expression, whereas PI-103 (a multi-target PI3K inhibitor), JW-7-24-1 (a small-molecule kinase inhibitor), and PIK-93 (a PI4KIIIβ inhibitor) were the top three drugs positively correlated with KLF3 expression (Figure 7C and Table S2). Correlation of KLF3 expression with drug IC50 based on the CTRP database showed that abiraterone (a CYP17 inhibitor), erlotinib (a tyrosine kinase inhibitor), and PD318088 (a non-ATP-competitive MEK1/2 inhibitor) were the top three drugs negatively correlated with KLF3 expression; manumycin A (a selective, competitive farnesyltransferase (FTase) inhibitor), CCT036477 (Wnt pathway inhibitor XI), and CIL70 were the top three drugs positively associated with KLF3 expression (Figure 7D and Table S3). These results suggest a role for KLF3 in predicting immunotherapeutic response in pan-cancer and in predicting effective small-molecule drugs targeting KLF3, which may provide strong evidence for future pan-cancer therapeutic studies.

FIGURE 7 Immunotherapy response, biomarker correlation, and drug-sensitivity prediction of KLF3 in pan-cancer. (A) Radar chart showing the relationship between KLF3 expression and TMB. (B) Radar chart showing the relationship between KLF3 expression and MSI. GDSC (C) and CTRP (D) databases were used to predict the related drugs targeting KLF3. *p < 0.05, **p < 0.01, ***p < 0.001.
Identification of KLF3 in PAAD
Through our analysis of KLF3 in pan-cancer, we found that KLF3 is significantly upregulated in PAAD (Figures 1A-C) and that its expression positively correlates with patient clinical stage and histological grade (Figures 3A, B). It is also significantly associated with poor OS, DSS, PFI, and DFI in patients with PAAD (Figure 3C and Figure S2). Therefore, our study focused on investigating the oncogenic effect of KLF3 in PAAD. First, we analyzed the clinical significance of KLF3 in PAAD and its protein expression. Combined univariate and multivariate Cox regression analysis suggested that KLF3 was an independent prognostic risk factor for PAAD (Figure 8A, all p<0.05). Subsequently, we characterized the protein expression of KLF3 to clarify whether its mRNA expression was consistent with protein expression. The HPA database (40) showed that the intensity of immunohistochemical staining for KLF3 was significantly higher in PAAD tissues than in normal pancreatic tissues (Figure 8B). This was validated by protein expression assay data from the CPTAC database (Figure 8C, p = 0.01834020). The basal mRNA and protein expression levels of KLF3 in normal pancreatic ductal epithelial cells and PAAD cell lines were detected using qPCR and western blot, respectively. As shown in Figures 8D, E, both mRNA and protein levels of KLF3 were higher in PAAD cells than in the normal pancreatic ductal epithelial cell line HPDE6-C7 (all p<0.05). The basal expression levels of KLF3 were significantly higher in the pancreatic cancer cell lines PANC-1 and BxPC-3, which were therefore used for silencing KLF3 expression. Based on ICC/IF analysis, KLF3 expression was found to be predominantly localized in the nucleus of PAAD cells (Figure 8F), which is in agreement with the information retrieved from the HPA (Figure 8B) and Genecards (Figure 1D) databases.
Silencing of KLF3 inhibits PAAD progression
We effectively inhibited the expression of KLF3 using RNAi technology (Figures 9A, B, S7A, B, all p<0.05). CCK-8 assays showed that interfering with KLF3 expression inhibited the viability of PANC-1 and BxPC-3 cells (Figures 9C, S7C, all p<0.05). Similarly, EdU cell proliferation assays showed that knockdown of KLF3 inhibited the proliferative capacity of these PAAD cell lines (Figures 9D, S7D, all p<0.05). Subsequently, Transwell and wound healing assays were used to examine the potential role of KLF3 in the migration of PANC-1 and BxPC-3 cells. As shown in Figures 9E, F and S7E, silencing KLF3 markedly reduced the migratory capacity of both cell lines.
Discussion
Krüppel-like factors (KLFs) are a family of eukaryotic DNA-binding transcriptional regulators involved in a variety of essential cellular functions, including proliferation, differentiation, migration, inflammation, and pluripotency (41). A common feature of most family members is that their binding sites differ across cell types and environments: they may also bind different sites in the same cell and control different sets of genes in response to different microenvironments (41). KLF3, a member of the KLF family, binds cofactor C-terminal binding proteins, which in turn recruit a large repressor complex to mediate transcriptional silencing (8). In recent years, studies on KLF3 have focused on its regulation of erythroid (42), B-cell (43), lymphocyte (44), and adipose (45) development, while few reports have explored its role in tumors. In this study, a systematic analysis of the KLF3 expression profile, genetic alteration, DNA methylation, RNA modification, clinical significance, and prognostic value in pan-cancer was performed. Further correlations between KLF3 expression and the TME, immune cell infiltration, immune checkpoints, immunotherapeutic response, and small-molecule drug prediction were analyzed. This study also clarified the oncogenic role of KLF3 in PAAD through functional experiments. It has been shown that KLF3 is aberrantly expressed in tumors and correlates with prognosis. For example, Huang et al. reported that KLF3 was lowly expressed in colorectal cancer and associated with poor prognosis (10). Shan et al. demonstrated that KLF3 was highly expressed in osteosarcoma and associated with poor prognosis (46). Wei et al. demonstrated that KLF3 was lowly expressed in lung cancer and associated with poor prognosis (14). In contrast, Meng et al. showed that KLF3 was lowly expressed in prostate cancer and was associated with favorable recurrence-free survival time (47). Our study also found that KLF3 mRNA was significantly upregulated in 14 tumor types and significantly downregulated in 13 tumor types compared to normal tissue. We found that the abnormal expression of KLF3 is affected by many factors and cannot be directly explained by genetic alteration, CNV, or methylation modification alone; it is also regulated by other mechanisms, which require more precise exploration in the future. In addition, increased KLF3 expression was negatively associated with histological grade in CESC, ESCA, KIPAN, KIRC, and STES and positively associated with histological grade in PAAD, HNSC, GBMLGG, and LGG. KLF3 expression was also negatively associated with clinical stage progression in COAD, COADREAD, ESCA, KIPAN, KIRC, THCA, and OV and positively associated with clinical stage in PAAD. Further survival analysis revealed that high KLF3 expression was strongly associated with poor prognosis in patients with ACC, GBMLGG, LGG, PAAD, and SARC. In contrast, it was associated with a good prognosis in patients with BLCA, COADREAD, COAD, and KIRC, which is consistent with previous findings in colorectal cancer (10,48). Our results further validate that KLF family genes are differentially expressed in different tumors and settings (41). Through the above analysis, we found a prominent role for KLF3 in PAAD. Previous studies have shown that miR-324-5p promotes pancreatic cancer cell proliferation and apoptosis by targeting KLF3 (13).
However, in this study, KLF3 was found to be highly expressed in PAAD at both the mRNA and protein levels. Increased KLF3 expression was strongly associated with histological grade, clinical stage, and poor prognosis (OS/DSS/PFI/DFI) in PAAD. Univariate and multivariate Cox regression analyses identified KLF3 as an independent prognostic risk factor for PAAD. In vitro and in vivo experiments also found that inhibition of KLF3 expression suppressed the proliferation and migratory capacity of PAAD cells. Further, single-cell sequencing data revealed that KLF3 expression was positively correlated with EMT, hypoxia, inflammation, metastasis, and proliferation in most tumors, further validating our functional assay results. Regarding the specific mechanisms by which abnormal KLF3 expression promotes or inhibits tumors, GSEA showed that it is mainly enriched in TGF-β, oxidative phosphorylation, mTORC1, KRAS, and EMT signaling pathways. Previous studies have also found that downregulation of KLF3 expression inhibits the progression of lung cancer by inhibiting the JAK2/STAT3 and PI3K/AKT signaling pathways (49); KLF3 silencing promotes lung cancer EMT and enhances lung cancer metastasis through the STAT3 signaling pathway (14); and KLF3 activates the WNT1/β-catenin signaling pathway to promote the growth and metastasis of gastric cancer (50). Admittedly, the specific mechanisms by which KLF3 regulates tumors are not yet fully characterized, and more in-depth mechanistic exploration is needed in the future. In summary, KLF3 can effectively predict the prognosis of many cancers, most evidently PAAD.
There is growing interest in the significance of the TME in tumor progression, prognosis, and therapeutic responsiveness. Immune cells within the TME can promote or suppress tumor growth (51). Previous studies suggested that KLF3 may interact with KLF2 in controlling the differentiation/homeostasis of certain B-cell subpopulations (52). For example, B-cell development was impaired in the absence of KLF3 (43), while KLF3 overexpression resulted in a significant increase in the number of B cells in the marginal zone of the spleen (53). In addition, KLF3 directly inhibits transcription of the inflammatory regulator Galectin-3 and suppresses NF-κB-driven inflammation in mice (54), and it also regulates eosinophil function in adipose tissue (55). However, the relationship between KLF3 and the pan-cancer TME and tumor immune cell infiltration remains largely unknown. In this study, we found that KLF3 expression was negatively associated with immune scores in the TME of most tumors. EPIC and MCPCOUNTER algorithm analysis revealed that KLF3 expression was strongly associated with CD4+ T cell, CD8+ T cell, neutrophil, myeloid dendritic cell, monocyte/macrophage, and endothelial cell infiltration in the TME. We also validated this result by analysis of single-cell sequencing data. This study found that the expression of KLF3 was roughly positively correlated with the expression of genes related to M2 macrophages and CAFs, which may suggest that high expression of KLF3 can promote the formation of a microenvironment suitable for tumor cell growth. Previous studies have also suggested that KLF4, another member of the KLF family, can regulate the polarization of M1/M2 macrophages in alcoholic liver disease (31). In conclusion, KLF3 expression is closely associated with the composition of the TME.
In advanced cancers, immunotherapy is effective in multiple clinical trials (56), but only a small number of patients can benefit from it (57). Therefore, the development of biomarkers that effectively predict response to immunotherapy is essential to screen potential populations that may benefit from immunotherapy. PD-L1 expression and genomic features (e.g. oncogenic driver mutations, TMB and MSI) have been proposed as biomarkers of response to immunotherapy (58). In this study, KLF3 expression was correlated with pan-cancer TMB, MSI, immune activation/inhibition-related genes, T-cell exhaustion, M2 macrophages and CAFs-related genes, and TME. To validate the value of KLF3 in predicting response to immunotherapy, we also calculated its ROC value as a biomarker in the immunotherapy cohort. Interestingly, the immunotherapeutic response was predicted by KLF3 in five mouse immunotherapy cohorts; and KLF3 outperformed TMB, T. Clonality, and B. Clonality when used alone as a predictive marker. However, there is no evidence to support whether KLF3 can be used as a tumor cell signaling protein for CAR-T therapy, which requires further exploration in the future. Finally, we also predicted a series of small molecules targeting KLF3 through the GDSC and CTRP databases, which will provide a basis for the future development of immunotherapeutic and targeted therapeutic agents.
Conclusion
In this study, KLF3 was aberrantly expressed in a variety of tumor types and was strongly correlated with clinical progression and prognosis; KLF3 could be a potential prognostic marker, especially in PAAD. In addition, the contribution of KLF3 to the TME and the abundance of immune cell infiltration is not negligible. It may be a biomarker for predicting response to immunotherapy and has the potential to guide individualized immunotherapy for cancer.

| 7,177.8 | 2023-08-03T00:00:00.000 | ["Medicine", "Biology"] |
Age-based spatial distribution of workers is resilient to worker loss in a subterranean termite
Elaborate task allocation is key to the ecological success of eusocial insects. Termite colonies are known for exhibiting age polyethism, with older instars more likely to depart the reproductive center to access food. However, it remains unknown how termites retain this spatial structure against external disturbances. Here we show that a subterranean termite Coptotermes formosanus Shiraki combines age polyethism and behavioral flexibility to maintain a constant worker proportion at the food area. Since this termite inhabits multiple wood pieces by connecting them through underground tunnels, disastrous colony splitting events can result in the loss of colony members. We simulated this via weekly removal of all individuals at the food area. Our results showed that termites maintained a worker proportion of ~ 20% at the food area regardless of changes in total colony size and demographic composition, where younger workers replaced food acquisition functions to maintain a constant worker proportion at the food area. Food consumption analysis revealed that the per-capita food consumption rate decreased with younger workers, but the colony did not compensate for the deficiency by increasing the proportion of workers at the feeding site. These results suggest that termite colonies prioritize risk management of colony fragmentation while maintaining suitable food acquisition efficiency with the next available workers in the colony, highlighting the importance of task allocation for colony resiliency under fluctuating environments.
In eusocial insects, task allocation operates through the collective behavior of individuals without any hierarchical control 1,2 . Task allocation within a colony can be maintained by a combination of mechanisms, including response threshold variance [3][4][5][6] , spatial distribution 7,8 , and age polyethism 9,10 . This decentralized task allocation allows the colony to perform multiple tasks simultaneously with individuals temporarily dedicated to particular tasks. Models of decentralized distribution systems suggest that the task allocation mechanisms in social insects could achieve resiliency 11,12 . In other words, colonies can maintain their function even if some individuals fail to perform tasks or are simply missing [13][14][15][16] .
The maintenance of food acquisition functions in the colony is essential to provide resources to the colony. But at the same time, food acquisition can be a risky task. In social Hymenoptera, foragers have to leave the safety of the nest to look for food resources and are exposed to risks outside the colony such as pathogens, competitors, and predators 17,18 . Thus, the loss of foragers is inevitable. Colonies alleviate such impact through task re-allocation. For example, workers can alter their behaviors in response to changes in external or internal conditions of the colony 13,[19][20][21][22] . Indeed, manipulative studies from eusocial Hymenoptera demonstrated that when individuals involved in particular tasks were removed from a colony, they were subsequently replaced by some of the remaining individuals [23][24][25] . This replacement of removed workers by other workers revealed that behavioral flexibility at the individual level could maintain effective task performance at the colony level 12 . Hence, task allocation mechanisms in insect societies ultimately allow for colony resiliency against disturbances 26 .
Termites have evolved eusociality through a different evolutionary pathway from social Hymenoptera 27 , but they also display age polyethism and task allocation [28][29][30][31][32][33][34][35][36][37][38] . Subterranean termites, or multiple-site nesting termites, nest over multiple pieces of wood resources by interconnecting them through underground tunnels [39][40][41][42][43] . Among physically isolated multiple nests, only one could contain the primary reproductives, while others may or may not house supplementary reproductives in addition to the remaining castes, mostly workers which are found throughout 40 . In the colony, younger workers tend to remain close to reproductives, while older workers are distributed farther away from reproductives 30,44 . Thus, the spatial distribution of colony members can be interpreted as a task allocation because workers with reproductives may focus on brood care work, while workers apart from reproductives may carry out food acquisition. Despite thousands of workers present in the colony, only a small portion (e.g., 10 to 20%) of them remain apart from the area with reproductives at any given time 44,45 . How do termite colonies achieve this spatial organization? This could simply reflect the age composition of the colony, indicating the colony has 10 to 20% old workers which depart from the reproductive area. Or this might result from active regulation of individual behavior, indicated by workers of mixed age leaving the reproductive area.
To test this idea, we focused on the loss of colony members in subterranean termites. For subterranean termites, there are risks to departing the area where reproductives are present. The tunnel path to the separate wood pieces can be disconnected by natural disturbances such as flooding events 46 , or foragers can be exposed to higher losses from predation 47 . Therefore, subterranean termite colonies can recurrently experience an unpredictable loss of older colony members that are at feeding sites. Simulating such loss events can provide the opportunity not only to characterize the task allocation processes in termites, but also to understand how a termite colony could maintain colony function against disturbances.
In this study, we used Coptotermes formosanus, a subterranean termite and one of the most widely studied termite species due to its economic importance, as it not only causes damage to wooden structures, but has also invaded many different regions worldwide 48,49 . We address three questions: if all termites are removed from the food area, (1) would the remaining workers come out to the feeding site to access food? (2) how would it alter the proportion and composition of workers at the feeding site? and (3) how would it impact the food intake of the colony? We first hypothesized that the loss of workers at the food area would result in their replacement by the next age cohort of workers. As younger workers may not be as efficient as older workers in food provisioning, we then investigated whether colonies would increase the proportion of workers at feeding sites to compensate for the loss of workers and maintain colony functions.
Results
Changes in worker demographics at the food area after removal events. As termites at the food area were removed repeatedly, the total number of termites in the colony continued to decline over time (Table 1). As a result, colony size decreased by almost half of the initial size by the end of the experiment, after the four consecutive removal events (worker loss of 46.39 ± 0.46%). A few hours after each removal, workers and soldiers moved from the reproductive area through the re-connected tubing to the food area. Even after repeated removals and loss of colony members, termites always resumed activity at the food area when provided with access to a new wood source.
Following the repeated removal events, the number of workers at the food area continued to decline (Kruskal-Wallis rank-sum test; χ² = 8.581, P = 0.035). Despite the reduction in the number of termites at the food area, the proportion of workers at the food area remained constant throughout the experiment (LMM, Tukey's HSD, F1,12 = 0.0223, P = 0.88). Overall, 20.49 ± 2.16% of all workers in the colony were found at the food area across all removal events (Table 1).
In contrast, the demographic composition of workers at the food area changed during the removal events (Fig. 1). The repeated removal of workers resulted in a progressive reduction of the worker instars at the food area (average instar of workers at the food area: 1st week > 2nd week > 3rd week > 4th week; GLMM, likelihood ratio test, χ² = 17.679, P < 0.01). At the beginning of the experiment, the average instar of workers at the food area was 3.42 (i.e., antennal articles: 13.42 ± 0.06), while it decreased to 2.11 at the end of the experiment (i.e., antennal articles: 12.11 ± 0.06). Consistently, the demographic composition changed across removal events (Pearson chi-square test; overall comparison: χ² = 349.39, P < 0.01; pairwise comparisons, between 1st and 2nd removal: χ² = 36.46, P < 0.01; 2nd and 3rd removal: χ² = 29.91, P < 0.01; 3rd and 4th removal: χ² = 32.21, P < 0.01; Fig. 1). At the end of the experiment, the overall colony population exhibited a relatively young demographic composition, and there was no difference in worker instar composition between the reproductive and the food area (χ² = 4.47, P = 0.487, Fig. 1). Even though the demographic composition at the food area decreased with removal events, comparison of the instar composition between the food and reproductive areas showed that the average instar at the food area was statistically higher than that at the reproductive area in each removal event, except the last one.

Table 1. Average total number of workers (Mean ± SEM) in the colony, average number of workers at the food area, and average worker proportion at the food area during the experiment in the Coptotermes formosanus colonies (n = 4 colonies, 3-yr-old). Average worker proportions at the food area were determined by dividing the number of workers at the food area by the total number of workers in the colony. Statistical differences in the number of workers at the food area and in the worker proportion were determined by a linear mixed model and a Kruskal-Wallis rank-sum test, respectively. In the linear mixed model, removal event and colony were treated as a fixed and a random effect, respectively, and removal event was the factor in the Kruskal-Wallis rank-sum test.
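A hedged sketch of the two models described in the Table 1 legend — a linear mixed model with removal event as a fixed effect and colony as a random intercept, and a Kruskal-Wallis rank-sum test — using simulated colony data (the original analysis scripts are not available, so the package choices here, lme4/lmerTest/emmeans, are one plausible implementation):

```r
library(lme4)
library(lmerTest)  # adds F-tests / p-values for lmer models
library(emmeans)   # Tukey-style pairwise comparisons

# Illustrative long-format data: 4 colonies x 4 weekly removal events.
set.seed(6)
d <- expand.grid(colony = factor(1:4), event = factor(1:4))
d$n_food    <- rpois(16, lambda = rep(c(320, 280, 250, 230), each = 4))  # workers at food area
d$prop_food <- rnorm(16, mean = 0.20, sd = 0.02)                         # proportion at food area

# Linear mixed model: removal event as fixed effect, colony as random intercept.
m <- lmer(prop_food ~ event + (1 | colony), data = d)
anova(m)                            # F-test for the removal-event effect
emmeans(m, pairwise ~ event)        # Tukey-adjusted pairwise comparisons

# Kruskal-Wallis rank-sum test on the number of workers at the food area.
kruskal.test(n_food ~ event, data = d)
```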
Reduction in food consumption rate of replaced workers.
Although the proportion of workers at the food area was constant after the series of removal events, the per-capita food consumption rate decreased significantly as younger workers progressively replaced older workers (R² = 0.588, F = 19.970, P < 0.01, Fig. 2). Relative to colony population size, colonies with a relatively young worker demographic consumed less food than those with a relatively old worker demographic.
Discussion
Colonies of C. formosanus have the ability to maintain task allocation against the loss of colony members by reorganizing the spatial distribution of the workers remaining within the colony. Remarkably, colonies maintained a stable ~20% worker proportion at the food area despite the sequential removal of all workers there (Table 1). The average instar of workers at the food area was higher than that at the reproductive area, indicating that older workers within a colony tend to depart the reproductive center and are most likely involved in consuming wood resources (Supplementary Fig. 1). Also, as the instar composition of workers at the food area continuously declined with each removal, the relatively old workers remaining within the colony rapidly took over the food acquisition role. Note that this task allocation was not very strict, since food acquisition can be performed by any instar starting from W2, not just by a single instar group such as W5. Although such a report is new for termites, the replacement of lost workers by other individuals is commonplace in eusocial Hymenoptera, including ants 24,50 , honey bees 23,51 , stingless bees 16 and wasps 52 . Combined, our results indicate remarkable convergent evolution of task allocation in eusocial insects, achieved by behavioral flexibility and organized by age polyethism, despite different evolutionary paths to eusociality. The sequential removal of termites resulted in a progressive reduction of the worker instar pool at the food area (Fig. 1), further leading to a reduction in the per-capita food consumption of the colony, because young workers may not be able to consume as much as old workers do (Fig. 2). Thus, in the age polyethism of subterranean termites, the older and larger workers predominantly engage in wood consumption. When the colony loses some of these "older and larger" workers, younger workers that are nonetheless relatively older than the remaining colony members (e.g., W2 or W3) increase their efforts to consume food. This may maximize the efficiency of task allocation within the remaining workforce pool. Therefore, our study reveals that the proportion of colony members at the food area is actively maintained by the behavioral flexibility of young workers.
This leads to the question of why a C. formosanus colony maintains a fixed proportion of workers that leave the reproductives and focus more on wood consumption. Although replacing workers at the food area results in decreased per-capita food consumption, the colony did not compensate for this depletion by increasing the proportion of workers at the food area. This stability contrasts with findings in honey bees, where colonies increased the proportion of foraging workers to compensate for the loss of old and experienced foragers 53 . Hypothetically, subterranean termite colonies may prioritize risk management of colony fragmentation over immediate maximal per-capita food consumption capacity. Due to their multiple-piece nesting that connects several feeding sites through underground tunnels 40,41 , the entire loss of feeding sites could be a natural condition facing C. formosanus colonies. For example, not only can natural disturbances (e.g., heavy rainfall, flooding) destroy foraging tunnels or move food resources (e.g., fallen logs) away, but predation can also result in the loss of termites. Since C. formosanus is an economically important subterranean termite 48,49 , pesticide treatments to control termites could also result in the loss of the foraging population. During any catastrophic loss event, the strategy of sending a fixed proportion of workers would limit the loss of colony members to a maximum of 20% of individuals within the colony, while the colony could still maintain suitable per-capita food consumption capability within the current demographic context. This highlights the importance of task allocation under unpredictable environments 54 , resulting in resilience against disturbances 26 . Comparisons with other eusocial Hymenoptera reveal a potential difference in the modalities that regulate task allocation processes in termites: highly stable task allocation. In social Hymenoptera, catastrophic events, such as the loss of all foragers, can lead to long-term colony inactivity 55-59 , death of larvae 60 , or even colony death 61,62 . However, this was not the case in C. formosanus. Even after three successive removals of all the termites at the food area, colonies of C. formosanus rapidly replaced them, often within hours, and maintained a fixed proportion over time. Although we did not examine the long-term effects of removal on colony survival and productivity (i.e., further colony growth), the colonies neither collapsed nor ceased food acquisition activities even after losing almost half of their members. Thus, termites can be more resilient than social Hymenoptera, which could stem from their hemimetabolous development, as termite workers are maintained as juvenile individuals 63,64 . Starting from the 3rd undifferentiated instar (= W1), individuals can readily engage in tasks for colony function and further expand their behavioral repertoire as they age 30 . Social Hymenoptera, on the other hand, have holometabolous development, and workers are adults that have completed larval development and metamorphosis, which implies that the ability of a colony to readily replace foragers with relatively young individuals may depend on colony size, or may be delayed owing to the potential latency to produce new foragers 23,65 .
Although the average worker instar at the food area progressively decreased with removal events, some old workers such as W5 and W4 were still collected at the end of the experiment, despite their continuous reduction in numbers (Fig. 1). Such observations may be due to multiple factors. First, some workers might molt during the experiment, as termite workers molt every 45 days and the daily molting rate of the colony is about 1 to 2% 66,67 . Thus, old workers will continuously emerge regardless of removal, which could not be experimentally prevented, so the colony could continue to generate old workers during the experiment. Second, some old termites moved back to the nests in preparation for molting 67,68 , implying that at any given time, some relatively old workers were not at the food area. Third, some of these older workers may have been involved in food transportation from the food area to the reproductive area 69 and therefore were not at the food area during removal events. All such factors may have contributed to the retention of a small portion of relatively old individuals throughout the experiment.
In conclusion, task allocation needs to be properly regulated to meet both the demand of the colony and external conditions 13,[19][20][21][22] . In this study, we showed that subterranean termites combine age polyethism and behavioral flexibility in task allocation to maintain colony function when colony members are periodically lost. By maintaining a fixed proportion of workers at the food area, the colony minimizes the risk of loss while retaining the highest food consumption capability available within its current demography after the loss.
Materials and methods
Termite colony preparation. Colonies of C. formosanus were established from alate pairs (winged primary reproductives) collected during dispersal flights (May 2016) in Broward County (Florida, USA) using a light trap. Collected alates were kept in a container with moist corrugated cardboard, which favors termite self-dealation, and were brought back to the laboratory for sex determination. One hundred rearing units were prepared using plastic vials (8 cm height × 2.5 cm diameter) containing moistened soil (Timberline topsoil, Oldcastle Lawn & Garden, Inc., Atlanta, GA) at the bottom, four pieces of wood (5 × 0.5 × 0.5 cm³, Picea sp.) on top of the soil, and 3% agar 64 . The agar solution was poured over the top of the wood pieces and soil to maintain moisture over time without disturbing the colony. At eight months, successful colonies were transferred to larger vials (6.3 cm height × 4.6 cm diameter) and provisioned with soil, wood, and water. Then, after a year, each vial was placed in a container (1.5 L, 17 × 12 × 7 cm³, Pioneer Plastics, Dixon, Kentucky, USA) containing a moistened soil layer (5 cm high) and a piece of wood (14.5 × 4 × 1 cm³, Picea sp.), to allow the colony to further develop. Finally, we obtained four 3-yr-old colonies with equivalent population sizes for the removal experiment. Population size was initially estimated from carton nest construction and wood consumption and later confirmed through a final census (~5,000 to ~8,000 termites; colony 1: 5,696; colony 2: 6,961; colony 3: 7,358; colony 4: 6,997). The temperature and relative humidity were kept at 28 ± 1 °C and 80 ± 2%, respectively, during the rearing period.
Termite removal experiment. We investigated how C. formosanus colonies respond to the loss of colony members by removing part of the colony every week. Subterranean termites inhabit multiple pieces of wood connected by underground tunnels. The adult reproductives (king and queen) can only inhabit one location in the underground tunnel network, while other pieces of wood contain the remaining castes, mainly workers. We simulated this by separating the experimental arena into two parts: a "reproductive area" without food, where the adults were restrained from leaving using a reproductive excluder 70 , and a "food area" containing wood as the sole colony food resource, accessible to all other castes. Each area was made of a 1.5 L plastic container filled with a 5 cm layer of moistened sand, and the two were connected by Tygon® tubing (1 cm × 200 cm, diameter × length). At the beginning, we transferred all colony members to the reproductive area, after adding two pieces of wood as food resources to the food area (14.5 × 4 × 1 cm, Picea sp.). The wood pieces used as food were preweighed after being oven-dried at 70 °C for 48 h and were soaked in water for 48 h before being placed into the food area. The entire arena was covered with a black plastic sheet to prevent light disturbance throughout the experiment.
We collected all termites at the food area every seven days by first clipping the tube at the distal end and disconnecting the food area. A new container with two pieces of wood (food area) was reconnected to the tube, allowing termites from the reproductive area to regain access to food. All collected individuals at the food area were counted by caste and preserved in 85% ethanol. For each removal event, 40 workers per colony were randomly selected from the removed termites. Worker instars (used here as a proxy for worker relative age; workers with a higher instar are considered to be older) were determined by counting the number of antennal articles (16: worker 6th instar, W 6 ; 15: W 5 ; 14: W 4 ; 13: W 3 ; 12: W 2 ; 11: W 1 ), according to Chouvenc and Su (2014), using a stereo microscope (Olympus SZX12, Tokyo, Japan). Note that no larvae (individuals smaller than W 1 ) were found at the food area throughout the experiments.
After the 4th removal event, we opened the reproductive area to count all individuals remaining in the colony, including those left in the connecting tube. First, we calculated the proportion of workers at the food area (Table 1). To do this, we estimated the initial population size of each colony by summing the number of individuals at the reproductive area and the cumulative number of individuals removed from the food area through the four removal events. We then determined the proportion of workers at the food area for each colony as the number of workers at the food area divided by the estimated total number of workers in the colony at each removal event. Second, we measured the demographic composition of workers at the food area using the termites collected at each removal event (Fig. 1): the proportion of each instar was calculated as the number of individuals of that instar divided by the total number of workers sampled (160 workers from the four colonies) in each removal. For the visualization (Fig. 1), data from the four colonies were pooled and the proportion of each worker instar was presented with circles of different sizes.
We also measured the weight of the wood pieces at the food area after oven-drying at 60 °C for 48 h to calculate the wood consumption by colonies throughout the experiment. Considering the number of workers and the amount of wood consumed, we calculated a "per-capita food consumption rate" (mg of wood consumed/number of workers in the colony/week) for each removal event, to estimate the food consumption capacity of workers in different instar groups, as removal events reduced the average worker instar at the food area. We plotted the per-capita food consumption rate against the average instar at each removal.
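The two derived quantities just described can be computed directly from the removal counts and wood weights. The sketch below uses hypothetical numbers (not the study's data) and assumes the colony total available at each removal is reduced by the workers already removed; variable names are ours.

```python
# Illustrative only: hypothetical counts and wood weights for one colony.
# proportion_at_food = workers removed at the food area / workers remaining in the colony
# per_capita_rate    = wood mass consumed (mg) / workers remaining in the colony / week

removed_per_event = [1200, 1150, 1100, 1050]      # workers collected at each weekly removal
workers_in_reproductive_area = 2200               # counted at the final census
estimated_total = workers_in_reproductive_area + sum(removed_per_event)

wood_consumed_mg = [950.0, 900.0, 870.0, 820.0]   # dry-weight loss of the wood pieces per week

remaining = estimated_total
for event, (n_removed, wood_mg) in enumerate(zip(removed_per_event, wood_consumed_mg), start=1):
    proportion_at_food = n_removed / remaining
    per_capita_rate = wood_mg / remaining          # mg per worker per week
    print(f"removal {event}: proportion at food = {proportion_at_food:.2f}, "
          f"per-capita consumption = {per_capita_rate:.3f} mg/worker/week")
    remaining -= n_removed                         # the colony shrinks after each removal
```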
Statistical analysis.
To investigate the effect of removal on worker instar composition at the food area, we used a generalized linear mixed model (GLMM) with Poisson distribution and log link function. The instar of individuals at the food area was fitted as the response variable, removal event (1st to 4th removal) was treated as a fixed effect, and colony (n = 4) was included as a random effect. Statistical significance of explanatory variables was determined with likelihood ratio tests. Instar compositions of workers at the food area were also analyzed by Chi-square test (α = 0.05) to verify the overall change of compositions and differences between removal events. For the Chi-square analysis, data from all four colonies were pooled (n = 160 in each removal event). We also compared the average instar composition between the food and reproductive areas with the Mann-Whitney U test (α = 0.05). We estimated the instar composition at the reproductive area by sequentially adding the data collected at each removal event to the final colony census. We also analyzed the change in the proportion of workers at the food area using a linear mixed model (LMM) with removal event as a fixed effect and colony origin as a random effect, followed by Tukey's HSD test (α = 0.05) to determine statistical significance. Changes in the number of workers throughout the experiment were compared with the Kruskal-Wallis rank-sum test with removal event as a factor. Finally, a linear regression was used to determine changes in the per-capita food consumption rate (response variable) over the average worker instar (explanatory variable) at the food area. For this, the average worker instar was calculated by averaging the instars of 40 workers per colony in each removal event. The regression line with 95% confidence intervals was plotted to help visualize the relationship. All statistical analyses were performed using R software version 3.3.3 71 . | 5,710.6 | 2022-05-12T00:00:00.000 | [
"Biology"
] |
Metallurgical Effect of Rare-Earth Lanthanum Fluoride and Boride in the Composite Coating of Wires in the Arc Welding of Bainitic-Martensitic and Austenitic Steel
For arc welding of high-strength and cold-resistant steels, the author developed an advanced design of steel wire with a micro-composite coating of a nickel matrix and nanoparticles of LaF 3 and LaB 6 , which improves the metallurgical influence of rare-earth elements (REE) and forms refractory sulphides and oxides of REE, as well as boron nitride. The addition of 0.1-0.3 wt% La in the weld pool leads to an increase in the content of the refractory compounds La 2 O 3 , LaO 2 , and LaS, and to a reduction in the content of the low-melting and brittle oxides and sulphides SiO 2 , SiO, MnO, MnS, and SiS. The use of steel wire with the composite coating of LaF 3 and LaB 6 allows for microstructural refinement when welding S960QL bainitic-martensitic steel and X70 API bainitic steel, and increases the impact toughness of the welds by 1.17-1.6 times.
Introduction
Development in the Arctic region requires the application of advanced high-strength and cold-resistant steels, the weldability of which is more complicated due to low-temperature embrittlement and hydrogen-assisted cold cracks (HAC). The focus of research on the weldability of advanced steels is welding metallurgy [1,2], including the development of special welding consumables with rare-earth metals (REEs) [3].
REEs reduce sensitivity to HAC and increase the impact toughness of the weld metal due to microstructural refinement, the formation of acicular ferrite, and a high affinity for sulphur and oxygen [4][5][6]. Wang et al. [6] performed arc welding of HSLA steel 10CrNi3MoV, with a 14 mm thickness, using flux-cored wire with a 0.3-1 wt% rare-earth concentrate of the total flux weight in the wire. When the content of the REE-alloy was 0.3 wt%, the tensile strength of the welds increased from 680 to 750 MPa, and the impact energy increased from 25 to 36 J at −40 °C. Similar positive results were achieved by researchers in welding and casting [7].
Yu et al. [8] improved the ductility and refined the microstructure of the high-temperature alloy Fe-43Ni by adding La 2 O 3 with a residual content in the casting of 0.01-0.04 wt% La. The microstructural refinement was promoted by the formation of dispersed inclusions of La 2 O 2 S with an increase in the content of lanthanum up to 0.025 wt%, which provided effective nucleation centers for the crystallization. Similar results have been achieved by other researchers [9,10], who have reported the formation of spherical inclusions of La 2 O 3 , La 2 O 2 S and LaS instead of elongated inclusions of MnS.
Since REEs have a high affinity not only for oxygen but also for sulphur, the role of refractory oxides and sulphides should be taken into account when considering the microstructural refinement mechanism of high-strength steels, for example through the formation of complex oxysulphides such as Ce 2 O 2 S.

Thermodynamic modelling of the metallurgical reactions and phase composition was performed on the basis of the thermodynamic data of individual substances, using the modelling programs "Terra" (Bauman Moscow State Technical University, Moscow, Russia) and FactSage (CRCT, Montreal, Canada) [29][30][31]. Mechanical testing was conducted on a Tinius Olsen Model 602 machine (Walter+Bai AG, Löhningen, Switzerland) in accordance with the GOST 6996-66 standard; Charpy V-notch tests were performed using the PH450 pendulum impact testing system (Walter+Bai AG, Löhningen, Switzerland) in accordance with the ISO 148-1:2016 standard; and hardness was measured with the EMCOTEST DuraScan-20 hardness tester (EMCO-TEST PrufmaSchinen GmbH, Kuchl, Austria) in accordance with the ISO 6507-1:2018 standard. The chemical composition was determined using a Bruker Q4 TASMAN optical emission spectrometer (Bruker, Karlsruhe, Germany). The metallographic analysis was conducted using the optical microscopes Reichert-Jung MeF3A (Leica Microsystems, Wetzlar, Germany) and Zeiss Axiovert 200 MAT (Carl Zeiss AG, Oberkochen, Germany), and the scanning electron microscopes SUPRA 55VP WDS (Carl Zeiss, Oberkochen, Germany) and TESCAN MIRA 3 (Tescan Orsay Holding, Brno, Czech Republic).
REE Compound Properties
As pure metallic powders of REEs have high chemical activity, it is preferable to use refractory REE compounds with negative Gibbs free energies for microalloying of the weld pool and grain refinement, through the heterogeneous nucleation mechanism of non-metallic inclusions [14,15]. Tables 2 and 3 and Figures 1 and 2 detail the physical properties and Gibbs free energies of the forming refractory and high-density REE compounds.
Metallurgical Reactions in the Weld Pool
The presence of REE La, Ce, Y, and Th in the weld pool promotes strong metallurgical reactions-deoxidation of FeO and desulfurization of FeS by the formation of refractory REE oxides and sulphides according to reactions (1-8), which have high equilibrium constants, as shown in Figure 3.
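To illustrate the link between the Gibbs free energy of such reactions and their equilibrium constants (K = exp(−ΔG°/RT)), the minimal sketch below uses placeholder ΔG° values and a rough weld-pool temperature; these numbers are not the values computed in Terra or FactSage.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def equilibrium_constant(delta_g_j_per_mol: float, temperature_k: float) -> float:
    """K = exp(-deltaG / (R*T)); a strongly negative deltaG gives K >> 1 (reaction strongly favoured)."""
    return math.exp(-delta_g_j_per_mol / (R * temperature_k))

# Hypothetical Gibbs free energies (J/mol) at an assumed weld-pool temperature; NOT the paper's data.
T_weld_pool = 2000.0  # K, order-of-magnitude estimate for a steel weld pool
example_reactions = {
    "2FeO + 2La -> La2O3 + 2Fe (deoxidation)":   -600_000.0,
    "3FeS + 2La -> La2S3 + 3Fe (desulfurization)": -450_000.0,
}
for reaction, dG in example_reactions.items():
    print(f"{reaction}: K = {equilibrium_constant(dG, T_weld_pool):.3e}")
```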
2FeO + Th = ThO 2 + 2Fe (4)
3FeS + 2La = La 2 S 3 + 3Fe (5)
3FeS + 2Ce = Ce 2 S 3 + 3Fe (6)
3FeS + 2Y = Y 2 S 3 + 3Fe (7)

Thermodynamic modelling of the equilibrium phase composition of the weld pool, in accordance with the maximal solubility of the impurity substances S, O, and N in the program "Terra", confirms that the addition of 0.1-0.3 wt% La and 0.01-0.03 wt% B leads to the formation of refractory REE sulphides and oxides, which are type I modifiers of the microstructure [3,23], as shown in the figures.
Wires with Composite Coating
Standard wires of G3Si1, Union X96, and 316L were treated using advanced electrochemical technology, in Ni-electrolytes with 60% Ni(BF 4 ) 2 and 10% LaF 3 or LaB 6 . This led to the formation of composite coatings of about 6 µm in thickness from a nickel matrix and LaF 3 and LaB 6 nanodispersed particles less than 0.7 µm in size, giving an overall content of REE compounds of 0.3-0.4 wt% in the solid wire. Figure 6 presents the design of the composite electrode wire, with electronic and optical views of the macrostructure of the wire surface showing the particles of LaB 6 . The microstructure of the wire's composite coating and the distribution of chemical elements in the composite coating are shown in Figure 7.
According to the X-ray spectral analysis of the composite coatings, the content of La and F was 31.3% and 15.3%, respectively, as shown in Table 4. The investigation of the macrostructure of the deposited metal shows that the weld deposition of the vertical layers leads to microporosity; however, the presence of lanthanum and fluorine vapour in the arc improves solidity and reduces the levels of microporosity in the weld depositions, as shown in Figure 8. Lanthanum fluoride and boride were found to have a significant effect on the microstructure of the weld metal. The analysis of the microstructure of the deposited metal showed that the use of composite wires with nanodispersed particles of LaF 3 and LaB 6 leads to microstructural refinement, a decrease in the average grain size from 40-60 µm to 12-28 µm for the G3Si1 wire and from 40-80 µm to 20-35 µm for the 316L wire, and an improvement in the shape and distribution of the microstructural phases, as shown in Figures 9 and 10.
Table 5 shows the chemical composition of S960QL welds with Union X96 wires coated in Ni-LaF 3 and Ni-LaB 6 , indicating a transfer of La from the composite coating into the weld at a content of 0.01-0.04 wt%. When using composite coatings of LaF 3 and LaB 6 particles in the welding of S960QL steel, it is possible to observe microstructural refinement in different zones of the weld (corresponding to a decrease in the average grain size from 8-28 µm to 6-12 µm), including the weld metal, the transition zone from the weld to the HAZ (heat-affected zone), and the HAZ itself, as shown in Figure 11.
Mechanical tests showed that the use of composite coatings with LaF 3 and LaB 6 particles in the welding of S960QL steel led to an increase in the yield strength, hardness, and average impact toughness of the weld from 45 to 54-66 J at −40 °C, as shown in Table 6. Table 7 shows the chemical composition of X70 API welds with G3Si1 wire with coatings of Ni-LaF 3 and Ni-LaB 6 , which indicates the transfer of La and Ni from the composite coating into the weld at a content of 0.01-0.03 wt% La and 0.16-0.21 wt% Ni. A similar positive effect of microstructural refinement in different zones of the weld was achieved during X70 API pipe steel welding with G3Si1 wire with coatings of Ni-LaF 3 and Ni-LaB 6 , as shown in Figure 12.
The analysis of the microstructure in a field area of 0.014 mm 2 showed an increase in the proportion of acicular and polygonal ferrite in the bainitic microstructure and a decrease in the grain size in the weld metal. In particular, the average grain area in the weld decreased from 25 to 11-12 µm 2 , and in the HAZ from 29 to 14-16 µm 2 .
As a result of the positive effect of microstructural refinement, the mechanical tests showed that the use of composite coatings with LaF3 and LaB6 particles during X70 API steel welding led to an increase in the yield strength from 526 to 572 MPa, the average impact toughness in the weld from 87 to 143 J and in the HAZ from 143 to 174 J at -20 °C, as shown in Table 8. | 3,516.8 | 2020-10-06T00:00:00.000 | [
"Materials Science"
] |
A constructive analysis of convex-valued demand correspondence for weakly uniformly rotund and monotonic preference
Bridges([4]) has constructively shown the existence of continuous demand function for consumers with continuous, uniformly rotund preference relations. We extend this result to the case of multi-valued demand correspondence. We consider a weakly uniformly rotund and monotonic preference relation, and will show the existence of convex-valued demand correspondence with closed graph for consumers with continuous, weakly uniformly rotund and monotonic preference relations. We follow the Bishop style constructive mathematics according to [1], [2] and [3].
Introduction
Bridges ([4]) has constructively shown the existence of a continuous demand function for consumers with continuous, uniformly rotund preference relations. We extend this result to the case of a multi-valued demand correspondence. We consider a weakly uniformly rotund and monotonic preference relation, and will show the existence of a convex-valued demand correspondence with closed graph for consumers with continuous, weakly uniformly rotund and monotonic preference relations. In the next section we summarize some preliminary results, most of which were proved in [4]. In Section 3 we will show the main result.
Preliminary results
Consider a consumer who consumes N goods, where N is a finite natural number larger than 1. Let X ⊂ R N be his consumption set; it is a compact (totally bounded and complete) and convex set. Let ∆ be an (n − 1)-dimensional simplex, and let p ∈ ∆ be a normalized price vector of the goods. Let p i be the price of the i-th good; then p 1 + · · · + p N = 1 and p i ≥ 0 for each i. For a given p the budget set of the consumer is β(p, w) ≡ {x ∈ X : p · x ≤ w}, where w > 0 is his initial endowment. A preference relation of the consumer, ≻, is a binary relation on X. Let x, y ∈ X; if he prefers x to y, we denote x ≻ y. A preference-indifference relation ≿ is defined as follows: x ≿ y if and only if ¬(y ≻ x). The relation x ≻ y entails x ≿ y, the relations ≻ and ≿ are transitive, and if either x ≿ y ≻ z or x ≻ y ≿ z, then x ≻ z. Also, we have x ≿ y if and only if ∀z ∈ X (y ≻ z ⇒ x ≻ z).
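As a small illustration of the objects just defined, the sketch below (hypothetical prices and bundles; the constructive framework itself is not captured by floating-point code) normalizes a price vector onto the simplex and tests membership in β(p, w).

```python
import numpy as np

def normalize_prices(p):
    """Scale a nonnegative price vector so that its components sum to 1 (a point of the simplex)."""
    p = np.asarray(p, dtype=float)
    return p / p.sum()

def in_budget_set(x, p, w):
    """Membership test for beta(p, w) = {x in X : p.x <= w}; membership in X itself is not checked here."""
    return float(np.dot(p, x)) <= w

p = normalize_prices([2.0, 1.0, 1.0])          # hypothetical prices for N = 3 goods -> (0.5, 0.25, 0.25)
w = 1.0                                         # hypothetical endowment
print(in_budget_set([1.0, 0.5, 0.5], p, w))     # True:  0.5 + 0.125 + 0.125 = 0.75 <= 1
print(in_budget_set([3.0, 1.0, 1.0], p, w))     # False: 1.5 + 0.25  + 0.25  = 2.0  >  1
```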
A preference relation ≻ is continuous if it is open as a subset of X × X and ≿ is a closed subset of X × X. A preference relation ≻ on X is uniformly rotund if for each ε > 0 there exists a δ(ε) > 0 with the following property.

Definition 1 (Uniformly rotund preference). Let ε > 0, let x and y be points of X such that |x − y| ≥ ε, and let z be a point of R N such that |z| ≤ δ(ε); then either (1/2)(x + y) + z ≻ x or (1/2)(x + y) + z ≻ y.

Strict convexity of preference is defined as follows.

Definition 2 (Strict convexity of preference). If x, y ∈ X, x ≠ y, and 0 < t < 1, then tx + (1 − t)y ≻ x or tx + (1 − t)y ≻ y.

Bridges [5] has shown that if a preference relation is uniformly rotund, then it is strictly convex.
On the other hand, convexity of preference is defined as follows.

Definition 3 (Convexity of preference). If x, y ∈ X, x ≠ y, and 0 < t < 1, then tx + (1 − t)y ≿ x or tx + (1 − t)y ≿ y.

We define the following weaker version of uniform rotundity.
Definition 4 (Weakly uniformly rotund preference). Let ε > 0, and let x and y be points of X such that |x − y| ≥ ε. Let z be a point of R N such that |z| ≤ δ for some δ > 0 and z ≫ 0 (every component of z is positive); then (1/2)(x + y) + z ≻ x or (1/2)(x + y) + z ≻ y.

We assume also that consumers' preferences are monotonic in the sense that if x ′ > x (meaning that each component of x ′ is larger than or equal to the corresponding component of x, and at least one component of x ′ is strictly larger), then x ′ ≻ x. Now we show the following lemmas.
Lemma 2. If a consumer's preference is weakly uniformly rotund, then it is convex.
This is a modified version of Proposition 2.2 in [5].
Proof.
1. Let x and y be points in X such that |x − y| ≥ ε. Using Lemma 1 we can show that (1/4)(3x + y) ≿ x or (1/4)(3x + y) ≿ y, and that (1/4)(x + 3y) ≿ x or (1/4)(x + 3y) ≿ y. Inductively we can show that, for each natural number n and for k = 1, 2, . . . , 2^n − 1,
(k/2^n)x + ((2^n − k)/2^n)y ≿ x or (k/2^n)x + ((2^n − k)/2^n)y ≿ y.
2. Let z = tx + (1 − t)y with a real number t such that 0 < t < 1. For each natural number n we can select a natural number k so that k/2^n ≤ t ≤ (k + 1)/2^n. Since (k + 1)/2^n − k/2^n = 1/2^n converges to zero, both ((k + 1)/2^n) and (k/2^n) converge to t. Closedness of ≿ implies that either z ≿ x or z ≿ y. Therefore, the preference is convex.

Lemma 3. Let x and y be points in X such that x ≻ y. Then, if a consumer's preference is weakly uniformly rotund and monotonic, tx + (1 − t)y ≻ y for 0 < t < 1.
Proof. By continuity of the preference (openness of ≻) there exists a point x ′ = x − λ with λ ≫ 0 such that x ′ ≻ y. Then, since weak uniform rotundity implies convexity, the result follows.

In [4] the following lemmas were proved.

Lemma 4 (Lemma 2.1 in [4]). If p ∈ ∆ ⊂ R N , w ∈ R, and β(p, w) is nonempty, then β(p, w) is compact.
Proof. See Appendix.
And the following lemma.
Lemma 8 (Lemma 2.8 in [4]). Let R, c, and t be positive numbers. Then there exists r > 0 with the following property: if p, p ′ are elements of R N such that |p| ≥ c and |p − p ′ | < r, w, w ′ are real numbers such that |w − w ′ | < r, and y ′ is an element of R N such that |y ′ | ≤ R and p ′ · y ′ = w ′ , then there exists ζ ∈ R N such that p · ζ = w and |y ′ − ζ| < t.
It was proved by setting r = ct/(R + 1).
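One quick way to see why such a ζ exists (not necessarily the construction used in [4]) is to shift y′ along the direction of p, which lands exactly on the hyperplane p · ζ = w; the sketch below is an illustration with hypothetical numbers.

```python
import numpy as np

def project_to_budget_hyperplane(y_prime, p, w):
    """Shift y' along p so that the result zeta satisfies p.zeta = w exactly.
    The displacement has length |w - p.y'| / |p|, which stays below t when
    |p| >= c, |p - p'| < r, |w - w'| < r, |y'| <= R and r = c*t/(R + 1)."""
    p = np.asarray(p, dtype=float)
    y_prime = np.asarray(y_prime, dtype=float)
    return y_prime + ((w - p @ y_prime) / (p @ p)) * p

p = np.array([0.5, 0.3, 0.2])       # hypothetical price vector
w = 1.0
y = np.array([1.0, 1.0, 2.0])       # hypothetical y' with p'.y' = w' only approximately equal to w
zeta = project_to_budget_hyperplane(y, p, w)
print(p @ zeta)                      # 1.0 up to floating point
print(np.linalg.norm(zeta - y))      # distance moved from y'
```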
3 Convex-valued demand correspondence with closed graph
With the preliminary results of the previous section, we now show our main result.
Theorem 1.
Let ≿ be a weakly uniformly rotund preference relation on a compact and convex subset X of R N , let ∆ be a compact and convex set of normalized price vectors (an (n − 1)-dimensional simplex), and let S be a subset of ∆ × R such that for each (p, w) ∈ S:
1. p ∈ ∆.
Then, for each (p, w) ∈ S there exists a subset F (p, w) of β(p, w) such that F (p, w) ≿ x (it means y ≿ x for all y ∈ F (p, w)) for all x ∈ β(p, w), p·F (p, w) = w (p · y = w for all y ∈ F (p, w)), and the multi-valued correspondence F (p, w) is convex-valued and has a closed graph.
The graph of a correspondence F is G(F) = {(p, w, x) ∈ S × X : x ∈ F(p, w)}. If G(F) is a closed set, we say that F has a closed graph.
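The theorem is an existence result and its constructive proof cannot be reproduced numerically, but the multi-valued, budget-exhausting nature of F(p, w) can be illustrated on a finite grid with a hypothetical utility function standing in for ≿; the function and parameter names below are ours and the whole block is purely a sketch.

```python
import numpy as np
from itertools import product

def demand_set_on_grid(utility, p, w, grid, tol=1e-9):
    """Approximate F(p, w): all grid bundles x with p.x <= w whose utility is
    within tol of the maximum over the (discretized) budget set."""
    feasible = [np.array(x) for x in grid if np.dot(p, x) <= w + tol]
    best = max(utility(x) for x in feasible)
    return [x for x in feasible if utility(x) >= best - tol]

# Hypothetical example: 2 goods, a linear (weakly convex) utility, so demand is
# multi-valued along the whole budget line, illustrating a correspondence rather than a function.
step = 0.05
grid = list(product(np.arange(0.0, 2.0 + step, step), repeat=2))
p = np.array([0.5, 0.5])
w = 1.0
u = lambda x: x[0] + x[1]
F = demand_set_on_grid(u, p, w, grid)
print(len(F), "maximizers, e.g.", F[:3])
```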
Appendix: Proof of Lemma 7
This proof is almost identical to the proof of Lemma 2.4 in Bridges [4]. They are different in a few points. | 1,951.6 | 2011-04-06T00:00:00.000 | [
"Economics",
"Mathematics"
] |
A Collection of SAR Methodologies for Monitoring Wetlands
Wetlands are an important natural resource that requires monitoring. A key step in environmental monitoring is to map the locations and characteristics of the resource to better enable assessment of change over time. Synthetic Aperture Radar (SAR) systems are helpful in this way for wetland resources because their data can be used to map and monitor changes in surface water extent, saturated soils, flooded vegetation, and changes in wetland vegetation cover. We review a few techniques to demonstrate SAR capabilities for wetland monitoring, including the commonly used method of grey-level thresholding for mapping surface water and highlighting changes in extent, and approaches for polarimetric decompositions to map flooded vegetation and changes from one class of land cover to another. We use the Curvelet-based change detection and the Wishart-Chernoff Distance approaches to show how they substantially improve mapping of flooded vegetation and flagging areas of change, respectively. We recommend that, given the increasing availability of SAR data and the proven ability of these data to map various components of wetlands, SAR should be considered a critical component of a wetland monitoring system.
Introduction
Wetlands are a critical part of our natural environment.They provide food and shelter to many types of wildlife and invertebrates, including endangered and threatened species, filter sediments and toxins [1], help prevent flooding [2,3] protect shorelines [4], store carbon, give off oxygen and water vapor [5], and provide recreational activities such as hiking and fishing.Although the importance of wetlands is widely recognized, they are currently disappearing at a dramatic rate.Approximately 25% of the world's wetlands are located in Canada [6], and approximately 14% of the Canadian landscape is covered by wetlands.Roughly 68% of Ontario's wetlands have been converted to agriculture or infrastructure and, similarly, there are only about 25% of the original Prairie Pothole wetlands remaining in Southwestern Manitoba [6].There has been a 50% loss of wetlands worldwide over the last century [7], with a 6% decrease just from 1993 to 2007 [8].In addition, wetlands are becoming fragmented or impaired and have lost the capacity to function fully because of pollution, climate change, invasive species, agricultural tile drainage, hydroelectric development, urban expansion, and recreation [9−12].
Wetlands are particularly sensitive to climate change and severe events.Wetlands are often able to recover from naturally occurring stresses, such as storms or damage from ice, but are less resilient to human-induced stresses, like industrial discharge or dredging, which usually occur quickly and impose severe impacts from which it is difficult to recover.Even small shifts in temperature or the water supply can impact wetland organisms [13].For example, increases in temperature may allow invasive plants to outcompete native plants [14−16].Moreover, high temperatures combined with low oxygen levels often lead to overgrowth of bacteria [13].It is expected that climate change will result in longer summers and shorter, warmer winters [17−20].Wetlands rely on cold Canadian winters to provide water from snowmelt and spring flooding [21].Rising temperatures may result in the drying of some wetlands [22].Thus, wetlands are dynamic and any mapping or inventory assessment should accurately reflect these dynamics.
Currently, there is no inclusive or dynamic wetland inventory or monitoring program in Canada [23,24], or globally.The majority of existing wetland research has been localized, has covered a limited time period, and has varied in approach and scale.The Canadian Wetland Inventory (CWI)-a joint initiative between the Canadian Space Agency, Ducks Unlimited Canada, Environment Canada and the North American Wetlands Conservation Council (Canada)-was established in 2002 to facilitate the creation of a national inventory to aid in wetland conservation.The CWI is still in progress, and approximately 25% of the mapping has been completed, is near completion or is still in progress [25].To protect and monitor existing wetlands it is essential to have an inventory of where and how many wetlands currently exist and how they are changing.Wetlands are dynamic and can change significantly within an annual growing season, as well as inter-annually and over a decadal time periods [26,27].Wetlands can transform from dry to flooded states, and vice versa [26], or be affected by various other factors like fires [28], drainage [12,29], and grazing [30].The naturally variable states of wetlands are what make them so productive.Wetlands can vary widely due to a variety of factors including geographical location (e.g., coastal vs. inland or polar vs. tropic), rainfall, evaporation, climate change and anthropogenic influences.A study of the agricultural impacts and recovery of wetlands between 1985 and 2005 showed that wetland margins were more affected by land use than wetland basins [31], underscoring the imperative for current, accurate wetland mapping methodology and inventory so regulated wetlands can be properly monitored, managed and protected.Because wetlands are changing over both short and long time periods, frequent and consistent monitoring is needed.
Synthetic Aperture Radar (SAR) technology can be effective for monitoring changes in surface water [32,33] and wetlands both seasonally and annually [34−37].SAR has many characteristics that make it ideal for mapping and monitoring water and wetlands over time.SAR is able to image landscape conditions night or day, through cloud cover, and in near-real time [38,39].These are often limitations for optical/infrared satellite sensors.SAR systems also can penetrate vegetation canopies, to varying degrees, to image understory conditions [40].Additionally, the water-saturated nature of wetlands tends to render them highly reflective of SAR transmitted energy.
Nevertheless, several factors can affect radar backscatter including seasonal timing of acquisition, look direction, incidence angle, soil moisture, dielectric constant, and the structure/composition of the ground features.SAR systems transmit microwaves with an incidence angle and a look direction on one side of the satellite.The incidence angle, which is the angle between the radar beam and the ground surface, can affect the appearance of smooth targets on the image.Smooth surfaces can appear brighter than rough surfaces at small incidence angles (usually less than 20-25 degrees), but rough surfaces remain largely unaffected by incidence angle.Lower incidence angles tend to be more sensitive to waves on water, therefore a combination of high and low incidence angle images are sometimes needed to accurately map water [40].Smaller incidence angles are better able to penetrate vegetation, and thus can better detect flooded vegetation [41−43].
Water has a high dielectric constant and is a specular reflector; very little backscatter is returned to the sensor, making water appear as a dark feature [44].However, when waves are present in water there is often an increase in backscatter, which can cause confusion with land features such as dry vegetation [40].The majority of other natural land features, particularly vegetation canopies, are heterogeneous and have relatively high amounts of surface roughness, resulting in the radar signal being scattered diffusely, with features appearing bright on the image [40].Differences in topography can also distort a SAR image.Foreshortening occurs when the SAR signal returns from the bottom of a tall feature facing towards the satellite prior to returning from the top of the feature.This causes the image to be compressed in the near range (the part of the image closest to the nadir track) and to be stretched in the far range (the part of the swath furthest from the nadir track).This distortion increases with small look angles and steep slopes [45].In extreme cases the image can be distorted from layover, which occurs when the SAR signal reaches the top of the feature before the bottom in the near-range slope.Part of the image can appear missing if there is a very steep slope and a large look angle [45].The sensitivity of SAR to the dielectric constant and roughness of features demonstrates the importance of optimizing the timing of image acquisitions.Data for mapping wetlands are best acquired in the spring, summer and fall to avoid any ice-on imagery [46].For example, rough surface water can produce a backscatter response similar to ice, thus making it difficult to distinguish between the two land covers [46].
Additional information beyond the radiometric response provided with optical satellites can be extracted from SAR data to help detect flooded areas [47,48] and classify wetlands [35,48], based on hydrologic features and surface structure.In a single-channel SAR system both the transmitted and received energy from the satellite are either horizontally (H) or vertically (V) polarized.In dual-channel SAR systems the signal can be both co-polarized (transmitted and received energy as HH or VV) and cross-polarized (transmitted and received as HV or VH).With the advancement of fully polarimetric satellite systems such as RADARSAT-2 the satellite can transmit and receive energy in all four planes (HH, VV, HV and VH), maintaining the phase and allowing for mapping of the different scattering mechanisms within a wetland [49], rather than just the difference between low and high backscatter values (Figure 1).The phase measures the time it takes for the radar signal sent from the satellite to interact with the target on the ground and return to the satellite [50].This allows the user to decompose the SAR backscatter being returned from the objects being sensed into four common scattering types: (1) specular scattering (no return to the SAR), which occurs from smoother surfaces such as calm water or bare soil; (2) rough scattering, which results when there is a single bounce return to the SAR from surfaces such as small shrubs or rough water; (3) volume scattering, which is when the signal is backscattered in multiple directions from features such as vegetation canopies; and (4) double-bounce or dihedral scattering, which results when two smooth surfaces create a right angle that deflects the incoming radar signal off both surfaces such that most of the energy is returned to the sensor.This latter scattering case typically occurs when vertical emergent vegetation is surrounded by a visible, smooth water surface [32,[47][48][49][50][51].Flooded vegetation can also have a combination of double-bounce and volume backscattering [50,51].When fully polarimetric SAR images are acquired throughout the growing season, the user can analyze the backscatter response from each stage of the hydrologic and vegetation development (leaf-on and leaf-off) to better understand responses during wetter and dryer periods.The Canadian RADARSAT-2 satellite, as well as the upcoming RADARSAT Constellation Mission (RCM), offers a wide range of beam modes well suited to monitor wetlands.The Spotlight beam mode has high spatial resolution, allowing the user to detect small water bodies.The Polarimetric beam mode enables the application of polarimetric decompositions, which allow the user to decompose the scattering matrix and, as a result, better detect flooded vegetation and classify wetlands [35,49].RCM, which is anticipated to launch in 2018, will have a baseline mission composed of 3 satellites offering an average daily coverage for 95% of the world [52].In addition to the beam modes offered by RADARSAT-2, RCM will have a circular-linear compact polarimetry mode, which transmits a circularly polarized wave and receives on the linear, orthogonal, horizontal, and vertical planes [53].RCM will have a shorter revisit time (four days) compared to RADARSAT-2 (24 days), due to the constellation, and a larger swath-width in some cases (e.g., 50 × 50 m 2 resolution images will have a 500 km swath width with RCM compared to 300 km for RADARSAT-2) (Figure 2).This will make it possible for more frequent monitoring over wider 
areas.Likewise, the compact polarimetric mode will allow polarimetric information to be acquired over a wider swath.
Many studies have demonstrated SAR's utility in mapping wetlands [35,54−59]. However, there are many different SAR techniques to extract polarimetric parameters as well as several polarimetric decompositions [35,60]. To date, there is not a well-established SAR methodology to map and monitor wetlands. In this paper we provide an overview of some current methodologies being used to map and monitor two aspects of wetlands with RADARSAT-2 data: surface water and flooded vegetation. We also describe a tool for flagging areas of change within wetlands, the Wishart-Chernoff Distance technique. We present these methodologies through case studies from several locations, and more in-depth descriptions of all are available in the original papers. Detailed descriptions of the locations of the field studies are not included here because the methods are applicable in many locations around the world.
Surface Water
Mapping areas of open water is an important component of wetland monitoring. SAR has been used as a tool to map open water for many purposes in a variety of locations [61−65]. There are several SAR-based methods for mapping open water [44]. Visual interpretation can be performed by an experienced analyst to manually map areas of water; however, this can be very time consuming and results can vary among image interpreters [44]. Multi-temporal interferometric SAR coherence (e.g., [66]) is another method, based on using the constantly changing scattering characteristics of water surfaces (from waves, resulting in low coherence) to distinguish water from land [67]. This technique is dependent on having high temporal coherence in the surrounding land cover, which can be difficult to achieve due to snow, rain, and/or wind changing the dielectric properties [68]. A third method to map surface water is through active contour models (e.g., [61,69]), which use local tone and texture values to delineate features. Results have been promising, but there has been confusion between open water and non-flooded vegetation [61]. A texture-based method has also been developed for water mapping, which makes use of textural variation based on statistics [44]. The limitations are that it can be challenging to select the correct window size and best texture measure, and selecting a threshold value is still necessary to classify water [44]. To date, grey-level thresholding is the most commonly used approach to map surface water with SAR imagery. In this method, all pixels with a backscatter coefficient lower than a specified threshold in an intensity image are mapped as water [70,71]. This technique is useful for producing results quickly and inexpensively [72], but is only suitable for calm open water with a specular backscatter response [73]. A user-selected threshold was chosen to map surface water in the case study we present because it offered a flexible, efficient, user-controlled, and scene-specific approach.
Case Study-Peace-Athabasca Delta
Surface water thresholding was applied in the Peace-Athabasca Delta (PAD), which is located in northeastern Alberta (58°32′07.46N, 111°40′33.55W). RADARSAT-2 C-band imagery between Lake Claire and Lake Athabasca (Figure 3) was acquired during the 2012 growing season (April to October).
SAR Data Acquisition and Processing
Two Wide Ultra-Fine (U2W2) mode data frames were captured every 24 days (Table 1). All images were read in as Single Look Complex (SLC; ordered as SLC for other applications, but only the magnitude was used for this application), had an incidence angle of 29.5°-33.0°, a 1.6 × 2.8 m 2 resolution, HH polarization, and a swath width of 50 km.
GAMMA SAR remote sensing software [74] was used to process all RADARSAT-2 imagery. A multi-temporal approach was applied to the stack of co-registered SAR intensity images (Table 1) [32]. Granular salt-and-pepper patterns, referred to as "speckle" in SAR images, arise from the coherent processing of the backscatter returns from consecutive radar pulses [75,76]. We selected a moving weighted function with a filter window size of 5 × 5 pixels to reduce speckle. The result was a filtered intensity image for each image date.
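The multi-temporal weighted filter was applied in GAMMA, whose exact weighting is not reproduced here; as a rough stand-in, a plain 5 × 5 boxcar average over a synthetic intensity image illustrates the speckle-reduction step.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
intensity = rng.gamma(shape=1.0, scale=0.05, size=(512, 512))  # synthetic single-look speckle

# 5 x 5 boxcar: each output pixel is the local mean, trading spatial resolution for reduced speckle.
filtered = uniform_filter(intensity, size=5)
print(intensity.std(), filtered.std())  # the standard deviation drops sharply after filtering
```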
Areas of known surface water were then sampled to determine the range of thresholds (dB) that represented surface water [37]. Surface water thresholds in the PAD for 2012 data ranged from −10 dB to −13 dB. The selection of the surface water threshold was scene specific and was affected by weather conditions, polarization, and incidence angle, and therefore differed among dates to obtain the most accurate results. All images were then orthorectified in GAMMA using 50 m Canadian Digital Elevation Data and 20 m orthoimagery from the French Satellite Pour l'Observation de la Terre. Post-editing was done on each image to remove errors of commission. The SAR images were resampled to 20 m and reclassified as surface water (1) and non-water (0). A 20 m SPOT land cover product [77] was used to identify areas of barren ground and sand. A conditional statement was used to reset surface water pixel values to zero for those areas that overlapped with barren ground or sand.
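A minimal sketch of this thresholding and post-editing logic is shown below with synthetic arrays and an assumed −12 dB threshold; the actual processing was done in GAMMA with scene-specific thresholds, so this is an illustration of the idea rather than the study's code.

```python
import numpy as np

def water_mask_from_intensity(intensity, threshold_db=-12.0, barren_mask=None):
    """Classify pixels as surface water (1) where backscatter falls below the dB threshold,
    then reset pixels that a land cover product flags as barren ground or sand."""
    sigma0_db = 10.0 * np.log10(np.clip(intensity, 1e-10, None))  # linear power -> dB
    water = (sigma0_db < threshold_db).astype(np.uint8)
    if barren_mask is not None:
        water[barren_mask] = 0          # remove errors of commission over bare soil/sand
    return water

rng = np.random.default_rng(1)
intensity = rng.gamma(1.0, 0.05, size=(256, 256))   # synthetic backscatter in linear units
barren = np.zeros_like(intensity, dtype=bool)
barren[:20, :] = True                                # pretend the top rows are mapped as sand
mask = water_mask_from_intensity(intensity, threshold_db=-12.0, barren_mask=barren)
print(mask.sum(), "pixels classified as water")
```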
Ground validation data were not available to verify the results of the surface water maps.We attempted to use Landsat and weather station data to validate our results.We acquired all available Landsat imagery from April to October 2012, where portions of our study site were cloud-free.Three Landsat 7 ETM+ scenes were suitable: 30 April 2012, 1 June 2012, and 21 September 2012.In addition, we used data from the Mildred Lake, Alberta, weather station, approximately 80 km from our study site.We compared mean daily temperature (°C) and total daily precipitation (mm) for each day we produced surface water maps, as well as the monthly average daily mean temperature (°C) and monthly total precipitation (mm).
Results and Discussion
The results showed that temporal filtering combined with intensity thresholding was an effective method to map surface water for both large and small water bodies. One advantage of using this approach was that it captured the dynamic changes in the surface water (Figure 4), compared with using static products, such as the 20 m SPOT national land cover product [77] and the National Topographic Data Base (NTDB) water body layers. The net loss or net gain of surface water area from 28 April 2012 to 13 October 2012 provided a spatial snapshot of the water cycle for a particular year and highlighted which areas had an increase or decrease in water extent (Figure 5). It is important to note that the user selection of the threshold approach can introduce uncertainty into how much change in water extent is due to the threshold selection compared to actual environmental changes. Landsat and weather station data allowed for some validation of the surface water products. For example, the 30 April 2012 Landsat 7 ETM+ scene showed many of the lakes in the Peace Athabasca Delta were still frozen, which validated that these areas should not be mapped as surface water on the 28 April 2012 RADARSAT-2 image (Figure 6). The same areas in the 1 June 2012 Landsat 7 ETM+ image had melted and could clearly be classified as surface water. This confirmed that the increase in surface water that was mapped on the 22 May 2012 RADARSAT-2 image (Figure 6) was reasonable. However, many of the Landsat scenes during the time of this study were too cloudy to use for validation. The weather station data also confirmed that many of the large lakes were still frozen in April 2012, with an average mean monthly temperature of 3.3 °C (Table 2). The average mean monthly temperature rose to 12.2 °C in May (Table 2), providing further evidence that the frozen lakes had melted. July had a high amount of total precipitation (87.6 mm, Table 2), which relates to the expansion of surface water from the 9 July to 2 August images (Figure 4). August was a much drier month with only 33.3 mm of total precipitation (Table 2), causing a decline in surface water by late August (Figure 4). September was a very wet month (101.1 mm of total precipitation, Table 2), and temperatures were cooling, resulting in an increase in surface water because there was less evapotranspiration. The total monthly precipitation for October was less than half that of September and the monthly mean daily temperature was close to 0 °C, resulting in a comparable or slightly increased extent of surface water. The Mildred Lake, Alberta, weather station (57°02′28.00N, 111°33′32.00W) was the closest to our study area. Although the weather station data helped us interpret changes in water extent over the study period, the station was approximately 80 km away from the Peace Athabasca Delta and may not provide an accurate representation of conditions in the study area. Grey-level thresholding has been proven to be a simple and effective way to map surface water with SAR data [33,37]. However, the user must consider beam mode, polarization, and ancillary sources of data for post-editing to obtain an accurate result. The HH polarization generally is better able to separate land from water under calm water conditions because open water results in less scattering compared to the HV or VV polarization and is less sensitive to capillary waves created by wind [78]. Therefore, the differences in backscatter responses between land and water are the greatest in the HH polarization [46,70,79−81]. In
circumstances where high wind or waves are present, the HV polarization can better map open water because the backscatter is more independent of surface roughness [82] and largely independent of incidence angle and wind direction [78]. When mapping surface water, we recommend ordering the SAR data as dual-polarized to enable the user to select the most appropriate polarization for the wind conditions present at the time of data acquisition. The user also must do some post-classification editing to remove errors of omission and commission. Ancillary sources of information, such as digital elevation layers, ground truth data, and a land cover mask, can aid in the editing. Ordering the appropriate SAR data and doing post-editing with appropriate ancillary data will help ensure accurate surface water mapping.
Flooded Vegetation
Open water and flooded vegetation both need to be mapped to accurately represent the extent of a wetland. The long wavelengths associated with SAR systems allow the signals to penetrate vegetation canopies to map underlying emergent herbaceous and woody wetland vegetation via double-bounce backscatter [34,57,83−85]. The longer the wavelength, the deeper the penetration through the vegetation canopy. Single polarization SAR satellites, which provide only amplitude data (e.g., RADARSAT-1), are not as efficient in mapping flooded vegetation because the radar backscatter cannot be decomposed with only one intensity channel. However, when polarimetric decompositions are applied using fully polarimetric SAR, features like flooded vegetation can be identified and classified [86]. Many polarimetric decomposition approaches have been developed, including the Cloude-Pottier, Freeman-Durden, Van Zyl, Touzi, and m-χ methods [86−90]. The Freeman-Durden decomposition is a physically based model that estimates the amount of surface, double-bounce, and volume scattering response contributing to the total backscatter from each pixel [88]. The m-χ decomposition estimates the received Stokes vectors and converts them to the Poincaré features m and χ [90]. We describe a case study using the Freeman-Durden decomposition, for which past research has demonstrated its ability for mapping flooded vegetation and wetlands [37,86,88], and the m-χ decomposition. Both methods output a three-channel image with estimates of surface, double-bounce, and volume scattering, allowing for easy comparison of outputs. Comprehensive descriptions of these decompositions are provided by Freeman and Durden [88] and Raney et al. [90].
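For readers unfamiliar with the m-χ decomposition, the sketch below computes the degree of polarization m and the ellipticity parameter χ from a per-pixel Stokes vector and forms the three scattering channels. The channel expressions follow the commonly cited Raney et al. formulation as we recall it, with an assumed sign convention; treat them as an assumption rather than the exact implementation used in this study.

```python
import numpy as np

def m_chi_channels(S0, S1, S2, S3):
    """Per-pixel m-chi channels from Stokes parameters (compact-pol convention).
    The sign of sin(2*chi) depends on the transmitted circular polarization;
    it is taken here as -S3 / (m * S0), which is an assumption."""
    m = np.sqrt(S1**2 + S2**2 + S3**2) / np.maximum(S0, 1e-12)      # degree of polarization
    sin2chi = -S3 / np.maximum(m * S0, 1e-12)
    even = np.sqrt(np.clip(m * S0 * (1 + sin2chi) / 2, 0, None))    # double-bounce-like
    odd  = np.sqrt(np.clip(m * S0 * (1 - sin2chi) / 2, 0, None))    # surface-like
    vol  = np.sqrt(np.clip(S0 * (1 - m), 0, None))                  # depolarized / volume-like
    return even, odd, vol

# Tiny synthetic example: one strongly polarized pixel and one largely depolarized pixel.
S0 = np.array([1.0, 1.0]); S1 = np.array([0.10, 0.05])
S2 = np.array([0.10, 0.05]); S3 = np.array([-0.95, 0.10])
print(m_chi_channels(S0, S1, S2, S3))
```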
Case Study-Whitewater Lake
Whitewater Lake (49°15′05.46N,100°12′18.90W) is located in southwestern Manitoba, between Boissevain and Deloriane (Figure 7).This lake is recognized as a Canadian Important Bird Area of global significance, providing habitat for more than 110 species of birds, as well as other wildlife.Shallow saline wetland, sedge meadows, and mixed-grass prairie are all found within the Whitewater Lake Basin.
Data Acquisition and Processing
RADARSAT-2 images were acquired throughout the growing season for 2010, 2012, and 2013 (Table 3).All images were Fine Quad-Pol (FQ16) mode with a nominal resolution of 5.2 × 7.6 m 2 and an incident angle of 35.4°-37.0°.
Ducks Unlimited Canada (DUC) independently selected 31 field points that had undergone a land cover change during or between years included in this research.DUC recorded the land cover type and date from the earlier image, the new land cover type and date from the later image, observational notes, and field photos.
All image processing for the Freeman-Durden decomposition products was done using Geomatica 10.3.2 [91]. A 5 × 5 pixel boxcar filter (which used local averaging to increase the effective number of looks) was applied to remove speckle. The Freeman-Durden decomposition was then derived to separate the total power of each pixel into surface, double-bounce, and volume scattering. The output was a three-channel image corresponding to the power of each of the three scattering mechanisms.

Table 3. RADARSAT-2 images acquired over Whitewater Lake, Manitoba, in the spring, summer, and fall of 2010, 2012, and 2013. These images were used to determine whether simulated compact polarimetric data could be used to map changes in wetlands within a growing season and between years. Note: all RADARSAT-2 images in this flooded vegetation analysis were FQ16 mode with an incidence angle of 35.4°-37.0°.

The m-χ decompositions were processed using software to simulate compact polarimetry, which was created at the Canada Centre for Mapping and Earth Observation. The software ingests fully polarimetric SLC imagery and simulates compact polarimetry data. A 30 m resolution, a −25 dB noise floor, and a 5 × 5 pixel averaging window were applied, as these parameters most closely resemble the parameters of RCM data.
All images were orthorectified after all polarimetric analyses were completed because the orthorectification process can degrade the phase information contained within the polarimetric images [92]. Both the Freeman-Durden and m-χ decompositions were orthorectified using the OrthoEngine module of Geomatica 10.3.2. The rational function option, which uses the ephemeris data provided by MacDonald, Dettwiler and Associates for each RADARSAT-2 image, was applied instead of collecting ground control points. All images were expected to have an error of less than half a meter. In addition, the Canadian Digital Elevation Data Digital Elevation Model (50 m resolution), cubic convolution pixel resampling, and sigma naught calibration were used in all orthorectifications. Sigma naught is the normalized measure of the backscatter from the feature being sensed, also known as the backscatter coefficient [93].
To determine if the Freeman-Durden and m-χ decompositions could be used to accurately map flooded vegetation, the 31 points collected by DUC representing land cover change were overlaid on the decompositions.A visual assessment was done to verify if the land cover change (for example open water becoming flooded vegetation) observed by DUC was also visible as a change in backscatter in the decompositions, in which case changes in backscatter could be used to map areas of flooded vegetation.We used a scale from 1 to 3 to rank the utility of the decompositions to map change between land cover classes.A "3" represented a complete separation between two land cover classes; a " 2" represented moderate separation between classes; and a "1" represented little separation between classes.
Results and Discussion
Both the Freeman-Durden and m-χ decompositions were effective for mapping changes between different land cover classes within a wetland, both within and between years. The Freeman-Durden decomposition had a high accuracy rate for identifying land cover changes for all combinations in this study (Table 4). The flooded vegetation land cover was usually dominated by double-bounce backscatter, whereas open water had specular backscattering, upland vegetation had volume backscattering, and areas of wet soil were dominated by surface scattering. These results were consistent with research by Ramsey [94], which concluded that flooded mangroves could be distinguished from non-flooded mangroves because the former was largely a double-bounce backscatter response and the latter a volume backscatter response in L-HH data. These distinctly different backscattering responses made the transitions between land cover classes easily detectable. For example, upland vegetation in the spring of 2010 was clearly classified as having a large amount of volume scattering, and in the summer of 2013 the same area was inundated by open water and exhibited specular backscattering, appearing dark on the image because very little backscatter was returned to the satellite (Figure 8). However, there were a few areas where the Freeman-Durden method was ineffective at identifying land cover change between open water and flooded vegetation because the patch sizes were small, and these were therefore given a rank of 1 or 2. Small, dispersed patches of flooded vegetation (<5 × 5 m 2 ) do not return the strong double-bounce backscatter typically associated with flooded vegetation. Vegetation that is short or patchy can have a backscatter more similar to water (Figure 9). Alternatively, a study in Kenya with C-HH imagery found that flooded emergent grasses had a similar backscatter response to non-flooded grasses [95]. The confusion in identifying the change in land cover from upland to open water was likely a result of wind causing waves in the water and, consequently, much more surface scattering. When all land cover class transitions were considered, the Freeman-Durden results had an overall accuracy rate of 89%. Consistent with the Freeman-Durden decomposition, m-χ was able to separate many land cover transitions with a high accuracy rate (Table 4). For example, when upland vegetated areas in the spring of 2010 became open water in the summer of 2013, the m-χ decomposition output clearly mapped the changes from volume scattering to specular scattering (Figure 8). Nevertheless, the m-χ decomposition did not map changes from open water to flooded vegetation as well as the Freeman-Durden decomposition. This was because the double-bounce backscattering was not as visible in the m-χ decomposition. In many samples, the m-χ decomposition gave a mixed backscatter response in areas of flooded vegetation. However, the m-χ decomposition had a slightly higher accuracy rate than the Freeman-Durden decomposition when used to map changes from wet soil to open water within a season. This was because a few samples were a mix of mud and vegetation, which returned some double-bounce backscatter in the Freeman-Durden decomposition. Although both the Freeman-Durden and the m-χ decompositions successfully mapped changes in land cover types, the Freeman-Durden decomposition had a better overall accuracy rate. This was not surprising because the simulated compact polarimetry data had a noise floor of −25 dB, compared to −35 dB for the fully polarimetric data. In addition, fully
polarimetric SARs capture more information (a 4 × 4 matrix) than compact polarimetric SARs (a 2 × 2 matrix). Moreover, when the transmitted field has a linearly polarized component, it can cause uncertainty or omission when classifying dihedral backscatter [96]. Circular polarization transmission is the best way to prevent the rotation of a linearly polarized wave as it propagates through the ionosphere, but some omission may still occur [97].
Therefore, some features will not be as visible in the simulated compact polarimetry data because they are too faint and do not exceed the noise floor.
Although in some examples both decompositions were given a rating of 2 because there was a mixed backscatter response, this does not necessarily indicate a poor ability to map land cover change. These results may indicate that the targets were heterogeneous, such as open water with emergent macrophytes. This information could be used as an indicator of the health of a wetland, for example, biomass within a wetland. The amount of volume and double-bounce scattering can be an indicator of vegetation density. However, further research and development are needed to quantify not only the difference between vegetated and non-vegetated targets, but also how different resolutions, bands, and incidence angles of radar reflect off different vegetation types, densities, and heights. It has been suggested that C-band imagery should be used when trying to map leaf shape and that L-band is more accurate for measuring aboveground biomass and stand height [98]. Other research has demonstrated that a multi-temporal, multi-incidence-angle approach was best, with steep incidence angles for mapping wetlands and large incidence angles for separating open water from land [99][100][101].
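The per-date comparisons above reduce to labelling each pixel by its dominant scattering mechanism and then flagging pixels whose label changes between acquisitions. The sketch below illustrates that logic with NumPy; the array names, the simple argmax labelling, and the assumption of linear-power inputs are illustrative and do not reproduce the exact ranking procedure used in the study.

```python
import numpy as np

def dominant_mechanism(p_surface, p_double, p_volume):
    """Label each pixel 0 = surface, 1 = double-bounce, 2 = volume by the largest power component."""
    stack = np.stack([p_surface, p_double, p_volume], axis=0)
    return np.argmax(stack, axis=0)

def change_map(date1, date2):
    """Flag pixels whose dominant scattering mechanism differs between two dates.

    date1 and date2 are (surface, double, volume) tuples of co-registered 2-D arrays.
    Returns the two label maps and a boolean change mask.
    """
    labels1 = dominant_mechanism(*date1)
    labels2 = dominant_mechanism(*date2)
    return labels1, labels2, labels1 != labels2

# A transition of interest from the text, e.g. upland vegetation becoming open water,
# would appear as volume-dominated pixels (label 2) turning surface-dominated/low-return
# pixels (label 0) together with a large drop in total backscattered power.
```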
Curvelet-Based Change Detection for Mapping Flooded Vegetation
More recently a Curvelet-based approach for detecting changes in flooded vegetation has been used [49,102,103].This technique was developed by Schmitt et al. [104] and can be used to map changes between SAR images while at the same time suppressing speckle noise, which can be problematic in SAR imagery.This approach could be used as a way to enhance polarimetric decompositions and temporal changes between polarimetric channels.This methodology was originally designed for disaster management with single polarimetric SAR, but was later adapted to use in polarimetric decompositions.This approach differs from others in that it compares whole structures found in the image rather than individual pixel values.The first step is to apply the Curvelet transform [105] on each input image separately.This technique detects elongate structures -in general, "lines" like the course of a river -in the images and then converts the images to the so-called Curvelet-coefficient domain, where each coefficient stands for a certain "line" in the image.Comparing the Curvelet coefficients of two images consequently means comparing structures apparent in the two input images, which enables a very stable and quasi-noise-free change detection.For instance, the change of an isolated pixel value-mainly induced by noise influence only-will not be detected.But, the consistent change of several neighboring pixels will produce a new structure and, therefore, it is detected by the Curvelet-based change detection approach.The change of the water level, for example, mostly produces a shift of the shoreline, i.e., the Curvelets describing the transition from land to water along the shoreline will also change, which becomes evident in the comparison of their coefficients.The finest scale considered in the change detection refers to a neighborhood of approximately 3 × 3 pixels [104].
The direct mathematical description of the structures in an image opens the door to structure amplification and, thus, image enhancement, because manipulating the Curvelet coefficients is equivalent to manipulating the structures instead of single pixel values [106]. For example, low Curvelet coefficient amplitudes generally indicate very weak structures and are mostly related to noise, e.g., from low-backscattering targets like open water. This noise contribution can easily be deleted by setting the corresponding coefficients to zero before the image reconstruction. In addition, coefficients with higher values, which are usually related to intense structures like the shoreline or the border between flooded vegetation and open water, are weighted using a special function that retains the values of the strong structures while slightly lowering the values of minor structures [104] to suppress artifacts common to all alternative image representations. The same image enhancement can be used for individual SAR images as well as for the difference between images. In practice, the image difference is calculated in the complex Curvelet coefficient domain, then the differential coefficients are weighted, and finally the enhanced difference image is transformed back to the image domain [102]. For more details on how the Curvelet-based change detection method is calculated, refer to [49,62,63,[102][103][104].
To apply the change detection algorithm to polarimetric decompositions all three decomposition channels, representing three independent intensity measures, can be introduced as independent layers.Whereas an increase or decrease in intensity only (without changing the scattering mechanism) will appear adequately in all three channels (compared with the change of the dielectric constant from differences in soil moisture, for example), the change of the scattering mechanism will be described by a very special behavior in the polarimetric channels [49]: an increase in the volume component directly refers to growth in vegetation height; a higher surface component indicates the appearance of a relatively smooth target (e.g., grassland formerly covered by water); and an increase in the double-bounce component traces the expanded extent of flooded vegetation along the river.Hence, this technique not only detects changes, but allows the user to interpret the changes with respect to temporal variations in the land cover.
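The workflow described above (transform each channel, difference and weight the coefficients, transform back) can be sketched compactly. The snippet below uses a wavelet transform from PyWavelets as a stand-in for the Curvelet transform, since the principle of comparing structure coefficients rather than pixel values is the same; the hard-threshold shrinkage and the threshold value are illustrative simplifications of the weighting function in [104], not the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets, used here only as a stand-in for a curvelet transform

def _map_coeffs(func, c1, c2):
    """Apply func to corresponding coefficient arrays of two decomposed images."""
    out = [func(c1[0], c2[0])]  # approximation coefficients
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):
        out.append((func(h1, h2), func(v1, v2), func(d1, d2)))
    return out

def shrink(diff, thresh):
    """Zero weak (noise-like) coefficient differences, keep strong structural changes."""
    return np.where(np.abs(diff) < thresh, 0.0, diff)

def structure_change(image_t1, image_t2, wavelet="db2", level=3, thresh=0.05):
    """Structure-domain change image between two co-registered channels.

    Applied separately to each decomposition channel (surface, double-bounce, volume),
    the sign of the result indicates gains or losses in that scattering mechanism.
    """
    c1 = pywt.wavedec2(image_t1, wavelet, level=level)
    c2 = pywt.wavedec2(image_t2, wavelet, level=level)
    diff = _map_coeffs(lambda a, b: shrink(b - a, thresh), c1, c2)
    return pywt.waverec2(diff, wavelet)
```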
Case Study-Dong Ting Lake
Dong Ting Lake is located in Hunan Province (28°53′22.11″N, 112°40′01.33″E) (Figure 10). It is the second largest lake in China and is a flood basin within the Yangtze River. The size of Dong Ting Lake can change quite dramatically within a season. For example, it can be as large as 2691 km² during an annual flood event and as small as 710 km² during dry conditions [107].
Data Acquisition and Processing
Two RADARSAT-2 fully polarimetric scenes (FQ16) were acquired over Dong Ting Lake in 2008.Both images had an incidence angle of 35.4°−37.0°and a nominal resolution of 5.2 × 7.6 m 2 .An image with lower water levels (6 June 2008) and an image with higher water levels (17 August 2008) were selected for change detection.The Freeman-Durden decomposition was used as input into the Curvelet-based change detection method to determine if this technique could detect change between the three different types of backscatter produced in the Freeman-Durden decomposition.
Results and Discussion
When the Freeman-Durden decompositions were compared using the Curvelet-based change detection, the results showed that the Curvelet-based change detection could be used to map changes in the double-bounce, volume, and surface scattering in a smooth, noise-free manner while still preserving detail. When comparing the Freeman-Durden decompositions of the single images in Figure 11, the dominance of the volume scattering component (green) is highly visible. Apart from that, a change from surface scattering (the bluish fields in the upper part of the 6 June 2008 image) to double-bounce (the fields now turned to orange or even red in the 17 August 2008 image) can clearly be distinguished. The dilation of the water surface (black) from June to August is likewise apparent. These results highlight the possibility for the Curvelet-based change detection to be used with polarimetric SAR data. Wetland land cover change was detected using the Freeman-Durden decomposition as input to the Curvelet-based method. These results are consistent with another study, which applied the same approach using the Freeman-Durden and the Normalized Kennaugh elements to locate changes in flooded vegetation [102]. Though first efforts to validate the changes observed by the Curvelet-based change detection method are reported in Schmitt et al. [104], the validation with regard to wetland monitoring is still challenging and should be extended to a variety of test sites in the future using auxiliary data from other sources such as unmanned aerial vehicles.
Wishart-Chernoff Distance
The ability to focus and prioritize monitoring efforts is often a difficult task, but it is necessary at a time when the environment, including wetlands, is being altered at an alarming rate and monitoring budgets are shrinking. Having a tool to identify the areas that have undergone the most change over a time period is an invaluable first step in monitoring wetlands. Whitewater Lake, Manitoba (see Figure 7), with the available SAR data shown in Table 3, was used as a test site for change detection within wetlands. The Wishart-Chernoff distance, derived and proposed by Dabboor et al. [108], was used for pixel-based polarimetric change detection mapping. As analytically presented in [108], the Wishart-Chernoff distance is a probabilistic matrix distance measure that can estimate the similarity between two complex Wishart distributions and, thus, can be used for applications involving full and compact polarimetric SAR imagery. The Wishart-Chernoff distance is a symmetric positive matrix distance that can be used in a wide range of applications, such as agglomerative clustering [108,109] and change detection [110]. The same image dates used for mapping flooded vegetation in the Whitewater Lake example were used in the Wishart-Chernoff distance analysis (see Table 3). Data were geo-referenced and co-registered with an accuracy of better than one pixel. Images were compared on a pixel-by-pixel basis by calculating, within a moving window (3 × 3), the Wishart-Chernoff distance between corresponding pixels. High values of the Wishart-Chernoff distance indicated significant changes in the study area between the acquisition dates of the images.
Case Study-Whitewater Lake
Whitewater Lake, Manitoba, was used as the location to test the Wishart-Chernoff Distance methodology (see Figure 7).
SAR Data Acquisition
The same image dates used for the flooded vegetation example were used in the Wishart-Chernoff Distance analysis (see Table 3).
Results and Discussion
To analyze the detected changes within the regions of Whitewater Lake, a specific range of pixel values must be defined to be able to map the changes. Thus, the calculated Wishart-Chernoff distance values (DWC) were expressed in terms of the Jeffries-Matusita distance (DJM) as follows: DJM = 2(1 − e^(−DWC)). The advantage here is that the Jeffries-Matusita distance takes values between 0 and 2. High Jeffries-Matusita distance values (high Wishart-Chernoff distance values) close to 2.0 indicate changes, and low distance values (below 1.0) indicate little or no change [111]. We used a threshold value of 1.7 to discriminate between large changes (DJM > 1.7) and moderate changes (1 < DJM < 1.7) [112].
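The conversion and thresholding above are applied per pixel. The snippet below sketches that step with NumPy; the function and variable names are illustrative, and the exponential form of the conversion follows the standard Jeffries-Matusita relation assumed in the reconstruction above.

```python
import numpy as np

def classify_change(d_wc, moderate=1.0, large=1.7):
    """Map per-pixel Wishart-Chernoff distances to change classes.

    d_wc : 2-D array of Wishart-Chernoff distances.
    Returns an integer map: 0 = little/no change, 1 = moderate change, 2 = large change.
    """
    d_jm = 2.0 * (1.0 - np.exp(-d_wc))   # Jeffries-Matusita distance, in [0, 2)
    change = np.zeros(d_jm.shape, dtype=np.uint8)
    change[d_jm > moderate] = 1          # 1.0 < DJM <= 1.7 : moderate change
    change[d_jm > large] = 2             # DJM > 1.7        : large change
    return change
```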
The Wishart-Chernoff Distance showed promising results when identifying areas of moderate and high change within a wetland.When we compared the areas that DUC had independently identified as having undergone a change in land cover based on field photos and field data collection, we verified that the Wishart-Chernoff Distance method had identified these same locations as either areas of high or moderate change.The Wishart-Chernoff Distance change detection shows encouraging results as a tool to flag areas of high and moderate change within wetlands.It could be used to first locate areas of high change, after which polarimetric decompositions could be applied to characterize the type of change.Further research is needed to develop the methodology and to confirm the results.
Conclusions
Analysis of Synthetic Aperture Radar Imagery data is an excellent approach for mapping and monitoring changes within a wetland.The ability of SAR data to be acquired at night and in a variety of weather conditions makes it a reliable and consistent source of information.Past studies have demonstrated that grey-level thresholding is an effective way to map surface water.Polarimetric decompositions like the Freeman-Durden and m-χ approaches can be used to map flooded vegetation, and the curvelet-based change detection can further enhance the detection of flooded vegetation by reducing speckle and other forms of noise.Finally, the Wishart-Chernoff Distance change detection approach could be used to flag areas of change prior to implementing polarimetric decompositions to characterize these changes.To be able to monitor the status of wetlands on a frequent basis and capture the dynamic changes both seasonally and annually we recommend SAR as the primary source of imagery, supported by other data sources such as lidar, thermal, and optical imagery, where feasible.
Figure 1 .
Figure 1.A comparison of how RADARSAT-1 and RADARSAT-2 transmit and receive radar waves.RADARSAT-1 transmits and receives radar waves horizontally to the target on the ground.RADARSAT-2 can transmit and receive radar waves in both the horizontal and vertical polarization plains, allowing calculation of the phase.Reproduced by permission of/or Courtesy of MacDonald, Dettwiler and Associates Ltd.
Figure 2 .
Figure 2. A comparison of RADARSAT-2 and the RADARSAT Constellation Mission (RCM) swath widths.These graphics illustrate that a wider swath width will be available for many beam modes on the RCM compared to RADARSAT-2.Reproduced by permission of/or Courtesy of MacDonald, Dettwiler and Associates Ltd.
Figure 4 .
Figure 4.Radar-derived surface water maps from April to August 2012.These images were produced using a thresholding approach and captured the dynamic changes in surface water extent throughout the ice-off period.
Figure 5 .
Figure 5.These images show the location of net gain (red) and net loss (blue) of water from 28 April 2012, to 13 October 2012, in the Peace-Athabasca Delta.
Figure 6 .
Figure 6.Landsat scenes used to validate the Peace-Athabasca Delta radar-derived surface water maps.The panel on the left is a Landsat 7 ETM+ scene from 28 April 2012, and the right panel is a Landsat 7 ETM+ scene from 22 May 2012.The red box clearly shows that large portions of the lake were still frozen on 28 April 2012, but had become open water by 22 May 2012.
Figure 7 .
Figure 7. Whitewater Lake, Manitoba.All field data points were collected inside the red square.
Figure 8 .
Figure 8.This image shows the transition from upland vegetated areas on 30 May 2010 (A,B), to open water on 25 July 2013 (C,D).Images (A) and (C) were produced from the Freeman-Durden decomposition and images (B) and (D) from the m-χ decomposition.Both decompositions were able to clearly map the change from upland vegetation to open water because the upland areas had a strong volume scattering response and the open water returned very little backscatter to the satellite and, thus, appeared black.
Figure 9 .
Figure 9.This figure shows the results of the Freeman-Durden decomposition from 30 May 2010 (A); and 14 May 2013 (B).The yellow point is a location that was flooded vegetation on 30 May 2010, but changed to open water on 14 May 2013.The Freeman-Durden decomposition was not able to map this specific change because the flooded vegetation was short (C), thus there was not a strong double-bounce backscatter returned to the satellite; rather the return was a specular backscatter similar to open water.
Figure 10 .
Figure 10.The location of the Dong Ting Lake test site in China.
Figure 12
Figure 12 illustrates the results of the Curvelet-based change detection method.The strongest changes can be found in the double-bounce component, reaching more than 10 decibels in gains or losses of double-bounce response.The lowest changes are reported in the volume component.Combining all three
Figure 12 .
Figure 12.The Curvelet-based approach was applied to RADARSAT-2 acquisitions for 6 June 2008, and 17 August 2008, to highlight changes in the Freeman-Durden components: double-bounce scattering (A); surface scattering (B); and volume scattering (C).
For example, areas of open water on 19 May 2012, that had become wet soil by 16 September 2012, were flagged as high or moderate change with the Wishart-Chernoff Distance method (Figure 13).Similar results were found annually for land cover change between open water and flooded vegetation, open water and upland, and upland and flooded vegetation.
Figure 13 .
Figure 13.Areas of open water on 19 May 2012, that became wet soil by 16 September 2012, are indicated by the blue dots.The background image is the Wishart-Chernoff Distance calculated using fully polarimetric data from the two dates.Red polygons are areas estimated to have high amounts of change; yellow areas signify moderate change; and green areas signify little or no change.
Table 1 .
RADARSAT-2 images acquired over Peace-Athabasca Delta (Lake Claire) for 2012.Note, all RADARSAT-2 images were Single Look Complex (magnitude only used to extract surface water), had an incidence angle of 29.5°-33.0°,a 1.6 × 2.8 m 2 spatial resolution, HH polarization, and a swath width of 50 km.
Table 2 .
Mildred Lake, Alberta, weather station data.Mean daily temperature (°C) and total precipitation (mm) for the dates of the RADARSAT surface water maps, and monthly average daily mean temperature (°C) and monthly total precipitation (mm).
Table 4 .
The results from a visual assessment to determine how accurate the Freeman-Durden and m-χ decompositions were at classifying changes between land cover classes. | 10,505.6 | 2015-06-09T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Representation Learning: Recommendation With Knowledge Graph via Triple-Autoencoder
The last decades have witnessed a vast amount of interest and research in feature representation learning from multiple disciplines, such as biology and bioinformatics. Among all the real-world application scenarios, feature extraction from knowledge graph (KG) for personalized recommendation has achieved substantial performance for addressing the problem of information overload. However, the rating matrix of recommendations is usually sparse, which may result in significant performance degradation. The crucial problem is how to extract and extend features from additional side information. To address these issues, we propose a novel feature representation learning method for the recommendation in this paper that extends item features with knowledge graph via triple-autoencoder. More specifically, the comment information between users and items is first encoded as sentiment classification. These features are then applied as the input to the autoencoder for generating the auxiliary information of items. Second, the item-based rating, the side information, and the generated comment representations are incorporated into the semi-autoencoder for reconstructed output. The low-dimensional representations of this extended information are learned with the semi-autoencoder. Finally, the reconstructed output generated by the semi-autoencoder is input into a third autoencoder. A serial connection between the semi-autoencoder and the autoencoder is designed here to learn more abstract and higher-level feature representations for personalized recommendation. Extensive experiments conducted on several real-world datasets validate the effectiveness of the proposed method compared to several state-of-the-art models.
INTRODUCTION
The success of machine learning algorithms and artificial intelligence methods heavily depends on the feature representation learning of original data (Bengio et al., 2013;Zhuang et al., 2017a). In recent decades, feature representation learning has attracted a vast amount of attention and research from multiple disciplines, such as biomedicine and bioinformatics (Wei et al., 2019;Li et al., 2021), computer vision (Kim et al., 2017), knowledge engineering (Liu et al., 2016), and personalized recommendation (Zhuang et al., 2017b;Zhu et al., 2021). In real-world applications, feature representation learning is considered to obtain the different explanatory factors of variation behind the data (Locatello et al., 2019).
For nearly three decades, effective computational methods have accelerated drug discovery and played an important role in biomedicine, for example in predicting molecular properties and identifying interactions between drugs/compounds and their target proteins. In the early years, quantum mechanical methods (Hohenberg and Kohn, 1964), such as density functional theory (DFT), were used to determine molecular structure and calculate properties of interest for a molecule. However, quantum computational methods usually consume tremendous computational resources and take hours to days to calculate molecular properties (Ramakrishnan et al., 2015), which hinders their application to high-throughput screening. Nowadays, algorithms with a powerful ability to learn representations and recommend efficiently have received significant attention. A key challenge is to learn useful molecular representation information from huge molecular datasets.
Among all the informatics-related application scenarios, with the rapid development of the Internet, there is an urgent demand for personalized recommendation to tackle the information overload problem. Notably, many successful recommendation systems share aspects of feature representation learning and have been widely applied in many online services such as electronic commerce (Ma et al., 2020) and social networks (Botangen et al., 2020). Existing methods for recommendation systems can roughly be categorized into three classes: content-based recommendation, collaborative filtering (CF), and hybrid methods (Batmaz et al., 2019). The content-based recommendation methods learn the descriptive features of items, calculate the similarity between new items and user-liked items based on these features, and generate the final recommendation (Lops et al., 2019). The collaborative filtering methods discover the inclinations of users by considering the user's historical behavior and produce recommendations (Dong et al., 2021). Hybrid recommendation methods leverage multiple approaches together and try to combine the advantages of these approaches.
Recently, collaborative filtering methods have achieved superior performance for the advantages of effectiveness and efficiency, which have far-ranging consequences in practical applications of recommendation systems (Su and Khoshgoftaar, 2009). Most of the traditional collaborative filtering methods are based on matrix factorization (MF), which combines good scalability with predictive accuracy (Luo et al., 2020). The main intuition behind these approaches is to decompose the rating matrix into user and item-based profiles, which allows the recommendation system to treat different temporal aspects separately (Yehuda et al., 2009). However, MF-based methods have inherent limitations in feature representation learning for the recommendation, which prevent further development of these approaches.
On the other hand, deep learning techniques have recently achieved great success in the computer vision and natural language processing fields. Such techniques show great potential in learning feature representations. Therefore, researchers have begun to apply deep learning methods to the field of recommendation. Early work used a restricted Boltzmann machine instead of traditional matrix factorization to perform CF, and Georgiev and Nakov (2013) expanded that work by incorporating the correlations between users and between items. In addition, Wang et al. (2015) proposed a hierarchical Bayesian model that uses a deep learning model to obtain content features and a traditional CF model to address the rating information. These deep learning-based methods more or less make recommendations by learning the content features of items, and they are not applicable when the contents of items cannot be obtained. Therefore, enhancing the effectiveness of feature learning is significant. Recent studies have shown that deep neural networks can learn more abstract and higher-level feature representations (Yi et al., 2018), which has led to remarkable progress in improving recommendation performance (Chae et al., 2019). For example, He et al. (2017) proposed a general recommendation framework called Neural Network-based Collaborative Filtering, in which a deep neural network is utilized for learning the interaction between user and item features. Among all the deep neural network-based recommendation methods, many frameworks are realized on top of the autoencoder model, which is one of the most successful deep neural networks and has also been actively adopted as a CF model recently (Shuai et al., 2017; Zhuang et al., 2017c; Chae et al., 2019; Zhong et al., 2020). For example, Zhang et al. proposed a hybrid collaborative filtering framework based on an autoencoder that incorporates auxiliary information for semantically rich representation learning (Shuai et al., 2017).
Though the autoencoder-based methods have achieved fairly good performance for personalized recommendation, there are two main problems that prevent the further development of these methods. The first is the utilization of auxiliary information from users or items, since the rating matrix in real-world applications is usually very sparse, which inevitably leads to a significant recommendation performance degradation. Most existing methods only introduce some obvious attributes, such as the age, gender, and occupation of users, or the title, release date, and genres of items. The key factors of collaborative filtering, such as the reviews of items by users, have rarely been incorporated into the autoencoder-based networks. The second problem is the optimization of neural networks. When training models to incorporate side information about items and users, the dimensions of the input and output layers are required to be equal in autoencoder-based networks, which greatly limits the scalability and flexibility of networks.
To address these problems, we propose a feature representation learning method for personalized recommendation in this paper which extends item features with knowledge graph via triple-autoencoder (KGTA for short). Specifically, the comment information between users and items is first encoded as sentiment classification. These features are then applied as the input to the autoencoder for generating the auxiliary information of items, which introduces the comment information from users to items to solve the problem of incorporating auxiliary information. Secondly, the item-based rating, the side information, and the generated comment representations are incorporated into the semi-autoencoder for reconstructed output. It aims to address the second problem, that the dimensions of the input and output layers are required to be equal. Finally, the reconstructed output generated by the semi-autoencoder is input into a third autoencoder for personalized recommendation. Experimental results on several datasets demonstrate the effectiveness of our proposed method compared to other state-of-the-art matrix factorization methods and deep-based methods.
In summary, the main contributions of our work can be distilled into the following:
• To incorporate the key information between users and items, the comments from each user on each item are encoded and reconstructed as auxiliary information.
• To optimize the neural networks, a serial connection of semi-autoencoders and autoencoders is designed to learn more abstract and higher-level feature representations for personalized recommendation.
• Extensive experiments on several datasets were conducted to confirm the effectiveness of the proposed method compared to other state-of-the-art matrix factorization methods and deep-based methods.
RELATED WORK
In this section, we survey the related work on feature representation learning, personalized recommendation methods, and collaborative filtering.
Feature Representation Learning
Feature representation learning refers to learning data representations that make it easier to extract useful information in downstream machine learning tasks (Bengio et al., 2013). The last decades have witnessed a vast amount of research and application on feature representation learning in multiple disciplines. For example, in the field of biomedicine and bioinformatics, Wei et al. (2019) developed a bioinformatics tool for the generic prediction of therapeutic peptides. An adaptive feature representation learning method is proposed for different peptide types in the tool. Alshahrani et al. (2017) proposed a knowledge representation learning method with symbolic logic and automated reasoning, which can be applied to biological knowledge graphs for tasks such as finding candidate genes for diseases and protein-protein interactions. Li et al. (2021) proposed a triplet message mechanism to learn molecular representation based on graph neural networks, which can complete molecular property prediction and compound-protein interaction identification with few parameters and high accuracy.
Besides the fields of biomedicine and bioinformatics, feature representation learning has also been widely applied in other fields such as computer vision (Kim et al., 2017), knowledge engineering (Liu et al., 2016), and personalized recommendation (Zhuang et al., 2017b). For example, Wang et al. proposed a high-resolution representation learning network for visual recognition problems, which can keep the representation semantically strong and spatially precise. Xu et al. (2018) proposed an aggregation method for node representation learning that can adapt neighborhood ranges to nodes. It is especially suitable for graphs that have subgraphs with diverse local structures. Niu et al. (2020) proposed a rule- and path-based joint embedding method for representation learning on knowledge graphs. Horn rules and paths are leveraged in this method to enhance the accuracy and explainability of representation learning.
Personalized Recommendation
In recent decades, with the rapid development of the Internet, personalized recommendations have provoked a vast amount of attention and research (Qian et al., 2013). The advances in personalized recommendation have far-ranging consequences in many online services applications such as electronic commerce (Ma et al., 2020) and social networks (Li et al., 2017). For example, in Facebook, Gupta et al. (2020) conducted a detailed performance analysis of recommendation models on server-scale systems present in the data center. Botangen et al. (2020) proposed a probabilistic matrix factorization-based recommendation method that considers geographic location information for designing an effective and efficient Web service recommendation.
Good feature representations of data do contribute to many machine learning tasks, such as personalized recommendation. For example, Geng et al. (2015) proposed a deep method to learn unified feature representations for both users and images. This representation from large, sparse, and diverse social networks significantly improves the recommendation performance. Liu et al. (2019) proposed a joint representation learning method for multimodal transportation recommendations, which aims to recommend a travel plan that considers various transportation modes. Ni et al. proposed a recommendation model based on deep representation learning (Ni et al., 2021). It contains information preprocessing and feature representation modules to generate the primitive feature vectors and the semantic feature vectors of users and items, respectively.
Collaborative Filtering
In personalized recommendations, the collaborative filtering (CF) methods aim to discover users' preferences through the interactions between users and items. Existing CF methods can be roughly categorized into two classes: matrix factorization methods and deep neural network methods.
Matrix factorization methods have difficulty processing sparse data and have limited generalization ability, but they offer low time and space complexity and good scalability. Lee et al. proposed the classical non-negative matrix factorization (NMF) model (Lee and Seung, 2001), which can decompose the rating matrix into user and item profiles. Along this line, Sun et al. proposed a Probabilistic Matrix Factorization (PMF) model that scales linearly with the number of observations and performs well on very sparse and imbalanced datasets. In light of PMF, Salakhutdinov et al. also proposed a Bayesian Probabilistic Matrix Factorization (BPMF) model (Salakhutdinov and Mnih, 2008), which controls model capacity automatically by placing hyper-priors over the hyperparameters to avoid over-fitting. Koren proposed combining the factor and neighborhood models for a more accurate recommendation performance (Koren, 2008), further extending the model to exploit both explicit and implicit feedback from users. In recent years, to address the problem that the attributes of users are often scarce for reasons of privacy, Rashed et al. (2019) proposed a nonlinear co-embedding GraphRec model, which treats the user-item relation as a bipartite graph and constructs generic user and item attributes via the Laplacian of the user-item co-occurrence graph.
Recently, due to the powerful ability of deep learning methods, remarkable progress has been made in learning higher-level and abstract representations for personalized recommendations (Wang et al., 2015;Yu et al., 2019). These methods have nonlinear transformation and powerful representation learning ability, but poor interpretability, large data requirements, and extensive hyper-parameter tuning. For example, He et al. (2017) proposed a general recommendation framework that designs a deep neural network to learn the interaction between a user and item features. Meanwhile, to address the cold start problem and improve performance for personalized recommendations, Ni et al. (2022) proposed a two-stage embedding model to improve recommendation performance with auxiliary information. In this method, two sequential stages, graph convolutional embedding and multimodal joint fuzzy embedding, are designed to fully exploit item multimodal auxiliary information. Among all the deep learning methods for personalized recommendation, we realize many successful frameworks on top of the autoencoder, which is one of the most successful deep neural networks and has also been actively adopted as a CF model recently (Shuai et al., 2017;Zhuang et al., 2017c;Chae et al., 2019;Zhong et al., 2020). For example, Zhuang et al. (2017c) proposed a dual-autoencoder model for recommendation, which simultaneously learns the user-based and item-based features with the autoencoder model. Zhu et al. (2021) proposed a collaborative autoencoder model for personalized recommendation, which learns the hidden features of users and items with two different autoencoders for capturing different characteristics of the data.
Autoencoder
The autoencoder model aims to minimize the distance between the input and the reconstructed output. The basic autoencoder network (Bengio, 2009) generally consists of an input layer, an output layer, and one or more hidden layers. Given the input x ∈ R^(m×n), when there is only one hidden layer, the encoding and decoding layers of the autoencoder can be represented as

y = f(Wx + b),  x̂ = g(W′y + b′),

where W ∈ R^(k×m), W′ ∈ R^(m×k) and b ∈ R^(k×1), b′ ∈ R^(m×1) are the weighting matrices and bias vectors, respectively, and f and g are the nonlinear activation functions of the encode and decode layers, respectively. In our experiments, the sigmoid and identity functions are used as f and g. The objective function of the autoencoder minimizes the reconstruction error

min_{W, W′, b, b′} ||x − x̂||².
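As a concrete illustration of the two equations above, the sketch below implements a single-hidden-layer autoencoder in PyTorch with a sigmoid encoder and an identity (linear) decoder; the layer sizes, the SGD optimizer, and the placeholder batch are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BasicAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder: y = sigmoid(Wx + b), x_hat = W'y + b'."""
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)   # W, b
        self.decoder = nn.Linear(hidden_dim, input_dim)   # W', b'

    def forward(self, x):
        y = torch.sigmoid(self.encoder(x))   # nonlinear encoding f
        x_hat = self.decoder(y)              # identity-activated decoding g
        return x_hat, y

# Minimal training step on a batch of item vectors (reconstruction loss only).
model = BasicAutoencoder(input_dim=1682, hidden_dim=300)   # illustrative sizes
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.rand(32, 1682)                                   # placeholder batch
x_hat, _ = model(x)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
optimizer.step()
```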
Semi-Autoencoder
In recent years, many autoencoder-based recommendation methods have achieved fairly good results with the advantages of no labeling requirement and fast convergence speed. However, the classic autoencoder model has the restriction that the dimensions of the input and the output layer must be equal, which has a great impact on introducing auxiliary information for solving the sparse problem of the rating matrix.
To address this problem, a semi-autoencoder model was proposed and generalized into a hybrid CF method for rating prediction (Shuai et al., 2017). Compared with traditional autoencoders, the input layer of semi-autoencoders is longer than the output layer, so semi-autoencoders can be utilized to capture different nonlinear feature representations and reconstructions flexibly by extracting different subsets from the inputs, and it is easy to incorporate side information into the input layer effectively to improve the item feature representation for better recommendation performance. The whole framework of the semi-autoencoder is shown in Figure 1, the left and right parts of Figure 1 show the two cases in which the output layer is longer than the input layer and the output layer is shorter than the input layer, respectively. We observe that the basic framework of a semi-autoencoder is the same as that of a classical autoencoder model, which also includes an input layer, an output layer, and one or more hidden layers. Furthermore, in the right part of Figure 1, we can observe that the shorter output layer is the reconstruction of certain parts of the input, and the remaining part in the semi-autoencoder model is auxiliary information to learn better feature representations for addressing the sparse problem of the rating matrix.
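The defining property described here, an input layer longer than the output layer so that only part of the input is reconstructed, can be sketched as follows in PyTorch. The dimensions and the loss computation are illustrative assumptions; the point is simply that the auxiliary features enter the encoder but are not reproduced at the output.

```python
import torch
import torch.nn as nn

class SemiAutoencoder(nn.Module):
    """Autoencoder whose input is longer than its output.

    The input concatenates the item rating vector with auxiliary features;
    only the rating part is reconstructed at the output.
    """
    def __init__(self, rating_dim, aux_dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(rating_dim + aux_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, rating_dim)

    def forward(self, rating, aux):
        x = torch.cat([rating, aux], dim=-1)      # longer input layer
        hidden = torch.sigmoid(self.encoder(x))
        rating_hat = self.decoder(hidden)         # shorter output layer
        return rating_hat, hidden

# The loss is computed only against the rating subset of the input.
model = SemiAutoencoder(rating_dim=943, aux_dim=50, hidden_dim=300)  # illustrative sizes
rating, aux = torch.rand(16, 943), torch.rand(16, 50)
rating_hat, _ = model(rating, aux)
loss = nn.functional.mse_loss(rating_hat, rating)
```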
METHODOLOGY
The whole framework of our proposed recommendation method with knowledge graph via triple-autoencoder (KGTA for short) is illustrated in Figure 2 and encompasses three main components. The first is the representation learning of the comment information between users and items. The comments from users on each item are divided into positive and negative categories; the first autoencoder is then introduced to reduce the dimensionality of this comment information. The second is the learning of all the auxiliary information. A semi-autoencoder is utilized to incorporate the side information, the extended features from the knowledge graph, and the generated comment features into the item-based rating. Finally, the low-dimensional output of the semi-autoencoder is input into the third autoencoder. Different from the semi-autoencoder model, which only approximates the item-based rating, the third component tries to reconstruct all of its input for the recommendation.
In the following, first, the commonly used notations in this paper are listed in Table 1, and then, the model of KGTA is described in detail.
Notations
Some important notations used in this paper and their descriptions are listed in Table 1.
Comment Information Features
The personalized recommendation is to predict the interest of a user in an item based on the rating matrix information. Since the rating matrix in real-world scenarios is usually very sparse, many methods have introduced auxiliary information to address this problem. However, most existing methods only introduce some obvious attributes and ignore the key factors, such as the comments from users on each item, of collaborative filtering. To address this problem, our method learns the comment information features between users and items with the first autoencoder. The details can be seen in the upper left of Figure 2.
In our method, we take natural language text as the input for sentiment classification and output an emotion score ∈ {−1, 1}, where −1 represents negative emotion and 1 represents positive emotion. Our method has two stages from input sentence to output score, which are described below.
In the first stage, we perform the following preprocessing steps on the comment text before we feed it into the model. First, we remove all the digits, punctuation symbols, and accent marks, and convert everything to lowercase. Secondly, we then tokenize the text using the WordPiece tokenizer (Schuster and Nakajima, 2012). It breaks the words down into their prefix, root, and suffix to better handle unseen words. Finally, we add the [CLS] and [SEP] tokens at the appropriate positions.
In the second stage, we build a simple architecture with just a dropout regularization (Srivastava et al., 2014) and a softmax classifier layer on top of the pretrained BERT layer. The upper left corner of Figure 2 shows the overall architecture of our sentiment classification model. There are four main stages. The first is the processing step, as described earlier. Then we compute the sequence embedding from BERT. We then apply a dropout with a probability factor of 0.1 to regularize and prevent over-fitting. Finally, the softmax classification layer will output the probabilities of the input text belonging to each of the class labels such that the sum of the probabilities is 1. The softmax layer is just a fully connected neural network layer with the softmax activation function. The output node with the highest probability is then chosen as the predicted label for the input.
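The classification head described above (pretrained BERT, dropout with probability 0.1, and a softmax layer) can be sketched with the Hugging Face transformers library as below; the model name, the two-class setup, and the use of the pooled [CLS] output are assumptions made for illustration rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertSentimentClassifier(nn.Module):
    """Pretrained BERT encoder with dropout and a softmax classification layer."""
    def __init__(self, num_classes=2, dropout=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = self.dropout(outputs.pooler_output)   # [CLS]-based sequence embedding
        return torch.softmax(self.classifier(pooled), dim=-1)

# The tokenizer lowercases, applies WordPiece, and adds the [CLS]/[SEP] tokens.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(["great movie, loved it"], padding=True, truncation=True, return_tensors="pt")
model = BertSentimentClassifier()
probs = model(enc["input_ids"], enc["attention_mask"])   # class probabilities summing to 1
```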
Given the rating matrix R ∈ R^(m×n), where m and n denote the numbers of users and items, respectively, the comments from each user on each item are first classified by sentiment using BERT (Devlin et al., 2018), and we then obtain the comment feature vector c_i for each item. Since the comment information from users to items is usually sparse, just like the rating matrix, the first autoencoder is introduced for feature dimensionality reduction and representation learning. The process of the first autoencoder can be written as

s = f(W_s c + b_s),  ĉ = g(W_s′ s + b_s′),

where W_s ∈ R^(k1×n) and W_s′ ∈ R^(n×k1) are the weighting matrices, b_s ∈ R^(k1×1) and b_s′ ∈ R^(n×1) are the bias vectors, f and g are the nonlinear activation functions, and k1 is the feature dimension of the hidden units. The hidden features of the first autoencoder, i.e., the low-dimensional representations s, are denoted as S_I and are incorporated into the second (semi-)autoencoder for capturing different representations and reconstructions by sampling different subsets from all the inputs.
Co-Embeddings With the Semi-Autoencoder
After obtaining the reconstructed comment features, a semi-autoencoder is introduced to incorporate the item rating vector r_i and other auxiliary information such as the attribute vector a_i, the reconstructed comment features s_i, and the KG-extended features l_i. The input of the semi-autoencoder is defined as con(r_i, a_i, s_i, l_i), i.e., the concatenation of r_i, a_i, s_i, and l_i.
con(R_I, A_I, S_I, L_I) ∈ R^(n×(m+y+k1+k2)) refers to the concatenation of R_I, A_I, S_I, and L_I, where R_I ∈ R^(n×m) represents the item-based rating vectors; A_I ∈ R^(n×y) represents the attribute vectors of all items, which are the obvious attributes such as the title, release date, and genres in movie recommendation datasets; S_I ∈ R^(n×k1) represents the reconstructed comment features for all n items; and L_I ∈ R^(n×k2) represents the language vectors collected from the knowledge graph and autoencoder. Considering that the experiments are conducted on the MovieLens datasets, the languages of the movies are obtained from open KGs such as DBpedia; the languages are encoded with the multi-hot method and input into the autoencoder model for learning the hidden representations L_I. The process of learning L_I is consistent with that of S_I; the details can be seen in the upper right of Figure 2.
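To make the composition of this input concrete, the snippet below assembles con(R_I, A_I, S_I, L_I) with NumPy. The dimensions and the simple multi-hot language encoding are illustrative assumptions; in practice S_I and L_I would come from the hidden layers of the two autoencoders described above, not from zero placeholders.

```python
import numpy as np

n_items, n_users = 1682, 943        # illustrative MovieLens 100K sizes
y, k1, k2 = 19, 64, 32              # attribute, comment-feature, and language-feature dims

R_I = np.zeros((n_items, n_users))  # item-based rating vectors
A_I = np.zeros((n_items, y))        # multi-hot item attributes (e.g., genres)
S_I = np.zeros((n_items, k1))       # hidden comment representations (first autoencoder)
L_I = np.zeros((n_items, k2))       # hidden language representations (KG autoencoder)

def multi_hot(index_lists, vocab_size):
    """Encode per-item lists of category indices (e.g., movie languages) as multi-hot rows."""
    out = np.zeros((len(index_lists), vocab_size))
    for row, indices in enumerate(index_lists):
        out[row, indices] = 1.0
    return out
# e.g., language_multi_hot = multi_hot(per_item_language_ids, vocab_size=num_languages)
# would be fed to the KG autoencoder before producing L_I.

# Concatenated semi-autoencoder input: shape (n_items, n_users + y + k1 + k2).
con_input = np.concatenate([R_I, A_I, S_I, L_I], axis=1)
```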
Then con(R_I, A_I, S_I, L_I) is input into the second autoencoder, i.e., a semi-autoencoder, to learn the compressed reconstructed output. The encode stage of the semi-autoencoder can be defined as

ξ = f(W·con(R_I, A_I, S_I, L_I) + b_1),    (7)

where W ∈ R^((m+y+k1+k2)×k) and b_1 ∈ R^k are the weight matrix and bias term, respectively, k is the feature dimension of the hidden layer, and f is the sigmoid function for nonlinear activation. Then, the decode stage can be written as

R̂_I = g(W′ξ + b_2),    (8)

where W′ ∈ R^(k×m) and b_2 ∈ R^m are the weight matrix and bias term of the decoding layer, respectively, and g is the identity activation function. Notably, the SGD (stochastic gradient descent) method is utilized in the semi-autoencoder for model optimization. The details can be seen in the bottom left of Figure 2.

Table 1. Important notations and their descriptions.
R: the rating matrix
A: the attribute vectors of all items
S: the reconstructed comment vectors of all items
L: the language vectors of all items
R′: the prediction matrix, R′ ∈ R^(n×m)
m: the number of users
n: the number of items
r_u: a column of the rating matrix
r_i: a row of the rating matrix
k: the feature dimension of the hidden units
h: the number of hidden units
x_i: the ith instance of the original input
ξ: the hidden feature representation matrix
W, W′: the map and remap weight matrices
b, b′: the map and remap bias vectors
•: the element-wise product of vectors or matrices
Triple-Autoencoder for Recommendation
From Eqs. 7 and 8, we can observe that the output of a semi-autoencoder model is the reconstruction of only a certain part of the inputs. When computing the loss function, the result of the semi-autoencoder is a reconstruction of the rating matrix R_I instead of the whole input con(R_I, A_I, S_I, L_I), which may result in performance degradation for the recommendation. To this end, we design the third autoencoder model to learn the reconstruction of the whole input, that is, a triple-autoencoder for the recommendation. The encode and decode stages of the triple-autoencoder can be written as

ξ_t = f(W_t R_semi′ + b_t),  R′ = g(W_t′ ξ_t + b_t′).

To avoid over-fitting, the ℓ2-norm regularization of the weight matrices W_t and W_t′ is added to the objective function:

Ω = ||W_t||² + ||W_t′||².

Thus, the objective function of the triple-autoencoder can be written as

min ||R_semi′ − R′||² + α(||W_t||² + ||W_t′||²),

where α is the trade-off parameter that controls the balance of the regularization terms. To minimize the distance between the input R_semi′ and the output R′, the deviations are minimized to obtain representations for the recommendation. When the model converges, the output layer of the triple-autoencoder is the prediction matrix R′ for the recommendation; the details can be seen in the bottom right of Figure 2. Details of the proposed KGTA are summarized in Algorithm 1.
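A minimal sketch of this third stage in PyTorch, assuming the semi-autoencoder's reconstructed output is available as a tensor. The trade-off value alpha = 0.1 follows the setting reported in the experiments, while the layer sizes, optimizer, and placeholder input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ThirdAutoencoder(nn.Module):
    """Autoencoder that reconstructs the semi-autoencoder's output."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.encoder = nn.Linear(dim, hidden_dim)    # W_t, b_t
        self.decoder = nn.Linear(hidden_dim, dim)    # W_t', b_t'

    def forward(self, r_semi):
        xi = torch.sigmoid(self.encoder(r_semi))
        return self.decoder(xi)

alpha = 0.1
model = ThirdAutoencoder(dim=943, hidden_dim=300)        # illustrative sizes
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

r_semi = torch.rand(64, 943)                             # placeholder semi-autoencoder output
r_pred = model(r_semi)
reg = model.encoder.weight.pow(2).sum() + model.decoder.weight.pow(2).sum()
loss = nn.functional.mse_loss(r_pred, r_semi) + alpha * reg   # reconstruction + L2 penalty
loss.backward()
optimizer.step()
```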
EXPERIMENTS
In this section, experiments are conducted on two datasets, MovieLens 100K and MovieLens 1M, to evaluate the effectiveness of our proposed KGTA. In the following, we first introduce the details of two experimental datasets. Secondly, the compared methods, including the MF-based and deep neural network-based methods, are given. In addition, the evaluation metrics such as MAE and RMSE are also presented. Then, the comparative experimental results and their observations are presented in detail. Finally, the main properties such as parameter sensitivity are analyzed for certain datasets.
Datasets
The details of two real-world datasets used in the experiments are listed in Table 2, including rating density, the number of users, items, and ratings.
MovieLens 100K: It is a well-known and widely applied dataset for evaluating recommendation performance. There are 943 users and 1,682 movies with 100,000 ratings on a scale of 1-5, and each user rated at least 20 movies. In MovieLens 100K, item attributes such as the title, release date, and genres of movies are also provided for improving recommendation performance.
MovieLens 1M: It is an enlarged version of the MovieLens 100K dataset, which has also been widely applied in recommendation. It has 6,040 users and 3,706 movies with 1,000,209 ratings. Similar to MovieLens 100K, the ratings are scaled from 1 to 5, and auxiliary information such as movie title, release date, and category is also provided.
Compared Methods
To evaluate the effectiveness of the proposed KGTA, the following matrix factorization methods, meta-learning methods, and deep neural network methods were compared:
• Non-negative matrix factorization (NMF) (Lee and Seung, 2001). It is the basic matrix factorization method for recommendation. In our experiments, we use the generalized Kullback-Leibler divergence as the update rule in NMF.
• Singular value decomposition plus (SVD++) (Koren, 2008).
It exploits explicit and implicit feedback from users to combine the latent factor model and the neighborhood model into a unified model for the recommendation.
• Meta-learned user preference estimator (MeLU). It estimates user preferences based on a small number of items to alleviate the cold start problem for the recommendation.
• Meta-learning method for cold start recommendation on Heterogeneous Information Networks (MetaHIN) (Lu et al., 2020). It creates a semantic-enhanced task constructor for exploring rich semantics, and a co-adaptation meta-learner with semantic- and task-wise adaptations within each task.
• Neural collaborative filtering (NCF). It is a general recommendation framework that designs a deep neural network to learn the interaction between user and item features.
• Item-based recommendation via autoencoder (AutoRec) (Sedhain et al., 2015). It is the first autoencoder framework in recommendation, which learns effective feature representations of items for collaborative filtering.
• Hybrid Collaborative Recommendation via Semi-Autoencoder (HCRSA) (Shuai et al., 2017). It is a hybrid collaborative filtering framework based on the semi-autoencoder, which incorporates auxiliary information for semantically rich representation learning.
• Personalized recommendation with knowledge graph via dual-autoencoder (PRKG) (Yang et al., 2021). The side information of items is extracted from DBpedia and encoded into low-dimensional representations in this method, and a semi-autoencoder is introduced to incorporate this auxiliary information for the recommendation.
Implementation Details and Parameter Settings
The PREA toolkit (Lee et al., 2014) is adopted for the implementation of MF-based methods such as NMF and SVD++. For MeLU, MetaHIN, and HCRSA, we re-compile the publicly released source code. The default parameters of these three methods remain unchanged from those reported in the original papers on the MovieLens datasets. For AutoRec, we select the item-based autoencoder, which achieves better performance than the user-based autoencoder. For fairness, the parameters of AutoRec and PRKG are consistent with ours on both datasets. In our experiments, we set α = 0.1 after some preliminary tests for all datasets. The maximum number of iterations in gradient descent is set at 300. The number of hidden units is set at 300 for all datasets.
Evaluation Metrics
In the experiments, we introduce the root mean square error (RMSE) to measure the performance of our proposed KGTA and all compared methods, as shown in Eq. 12. It is worth mentioning that a smaller RMSE indicates better results.

RMSE = sqrt( (1/|TestSet|) · Σ_{(u,i)∈TestSet} (r_{u,i} − r′_{u,i})² ),    (12)

where r_{u,i} and r′_{u,i} represent entries of the original rating matrix and the prediction matrix, respectively.
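A direct NumPy implementation of this metric over the held-out test ratings; the variable names and the item-by-user orientation of the prediction matrix are illustrative.

```python
import numpy as np

def rmse(test_ratings, predictions):
    """Root mean square error over observed (user, item, rating) test triples.

    test_ratings : list of (user_index, item_index, true_rating)
    predictions  : 2-D array, predictions[item, user] is the predicted rating
                   (items x users, matching R' in R^(n x m))
    """
    errors = [(r - predictions[i, u]) ** 2 for u, i, r in test_ratings]
    return float(np.sqrt(np.mean(errors)))
```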
Experimental Results
For each dataset, 50%, 60%, 70%, and 80% of the ratings are sampled as training data, respectively, and the rest are used as test data. The RMSE results on the MovieLens 100K and MovieLens 1M datasets are recorded in Table 3 and Figures 3 and 4, respectively. The bold values in Table 3 are the results of our proposed method (KGTA) and are the best results among all the compared methods. Notably, all the results are obtained by
repeating the experiments five times and taking the average value. From all the results, we have the following observations:
• The performance of all the recommendation methods improves with the increase in training data. It is worth mentioning that the meta-learning methods such as MetaHIN and MeLU do not change much, which may be because meta-learning methods are designed to alleviate the cold start problem for recommendation.
• Generally, among the three types of methods, the meta-learning methods perform the worst, probably because they are primarily designed to address the cold start problem. The deep neural network methods achieve more desirable performance in most cases than both the meta-learning and matrix factorization methods, which reveals the powerful ability of deep neural networks in learning feature representations for personalized recommendation.
• Among all the deep neural network methods for recommendation, our KGTA is significantly better than NCF and AutoRec, which shows the superiority of introducing auxiliary information for addressing the problem of data sparsity and improving the performance of personalized recommendation.
• In HCRSA, attributes such as the title, release date, and genre of a movie are introduced into the semi-autoencoder model for prediction. From the results listed in Table 3 and Figures 3 and 4, we can observe that our KGTA consistently outperforms HCRSA, which demonstrates the superiority of incorporating the key factors of collaborative filtering, such as the comments from users on items, to improve the performance of personalized recommendation.
• Although both methods introduce auxiliary information, our KGTA outperforms PRKG by up to 7 RMSE points on two well-known datasets, which shows the advantage of designing a serial connection of semi-autoencoder and autoencoder for learning more abstract and higher-level feature representations for recommendation.
• Overall, the proposed KGTA performs best in all groups, which validates the effectiveness of incorporating the key information between users and items and designing a serial connection of semi-autoencoder and autoencoder for recommendation. It should be noted that KGTA achieves stable performance on both MovieLens 100K and MovieLens 1M. These results demonstrate that our KGTA can perform well even if the dataset is sparse.
Parameter Sensitivity
In this section, we investigate the influence of the parameters in our proposed method, including the number of hidden layer neurons, the number of epochs, and the length of comments used in training. When one parameter is changed, the others are fixed in the experiments. The number of hidden layer neurons is varied from 100 to 800, the number of epochs is varied from 100 to 500, and the length of comments is sampled from the set {3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23}. In the experiments, validation was conducted on MovieLens 100K and MovieLens 1M, respectively. For the number of hidden layer neurons and the number of epochs, the experiments are conducted with 50%-80% of the training data. All the results are reported in Figures 5 and 6, and we set the number of epochs to 500 for both datasets and the number of hidden layer neurons to 300 and 400 for MovieLens 100K and MovieLens 1M, respectively. For the length of comments, experiments are conducted on 50% of the training data with the best and most stable configuration of the number of hidden layer neurons and epochs; all the results are reported in Figure 7, and we set the length of comments to 5 for both datasets.
CONCLUSION
In this paper, we propose a feature representation learning method with a knowledge graph via triple-autoencoder for personalized recommendation called KGTA. We propose a serial connection between the semi-autoencoder and autoencoder methods. In our method, we were able to incorporate side information distilled from DBpedia for more useful item feature representations, and the key factors of collaborative filtering, such as comment information between users and items, are incorporated into the autoencoder as auxiliary information. Moreover, the item-based rating and all the external information are incorporated into the semiautoencoder to obtain low-dimensional information representation. Finally, the reconstructed output generated by the semi-autoencoder is input into a third autoencoder to learn better feature representations for personalized recommendation. Extensive experiments demonstrate the proposed method outperforms other state-of-the-art methods in effectiveness. In future work, we will try to achieve superior performance by incorporating less information and utilizing an attention network to strengthen the feature integration or without auxiliary information from the open knowledge base.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
YG: methodology, software, formal analysis, and writing. XX: conceptualization, supervision, and project administration. YZ: data curation, visualization, and writing. XS: visualization and validation. | 8,415.4 | 2022-06-03T00:00:00.000 | [
"Computer Science"
] |
Instance Mask Embedding and Attribute-Adaptive Generative Adversarial Network for Text-to-Image Synthesis
Existing image generation models have achieved the synthesis of reasonable individuals and complex but low-resolution images. Direct generation of high-resolution images from complicated text still remains a challenge. To this end, we propose the instance mask embedding and attribute-adaptive generative adversarial network (IMEAA-GAN). Firstly, we use the box regression network to compute a global layout containing the class labels and locations of each instance. Then the global generator encodes the layout and combines the whole text embedding and noise to preliminarily generate a low-resolution image; the instance mask embedding mechanism is introduced to guide local refinement generators to obtain fine-grained local features and generate a more realistic image. Finally, in order to synthesize exact visual attributes, we introduce the multi-scale attribute-adaptive discriminator, which provides local refinement generators with specific training signals to explicitly generate instance-level features. Extensive experiments on the MS-COCO dataset and the Caltech-UCSD Birds-200-2011 dataset show that our model can obtain globally consistent attributes and generate complex images with local texture details.
I. INTRODUCTION
Conditional deep generative models have realized quite exciting progress in text-to-image generation. The widely used Generative Adversarial Networks (GANs) [1], which jointly learn generators and discriminators, have generated promising individual images on simple datasets. However, once there are heterogeneous objects and scenes in the text, the quality of the generated image becomes drastically worse [2]. This is mainly because most existing approaches only focus on global sentence embedding without considering that each word carries a different level of information related to the image. Besides, the ambiguity of text and the unknown shapes of instances make the generation process more difficult to constrain [3]. As a result, the images generated by current models usually have lower resolution and blurred texture. Moreover, instance attributes represent important image feature information [4], but existing methods use the sentence-conditional discriminator, which only provides coarse-grained training feedback, making it hard for generators to disentangle different regions and learn fine-grained attributes.
To address these three limitations, our proposed IMEAA-GAN harnesses a pre-trained box regression network [5] to obtain a global layout which contains class labels and bounding boxes, and then generates complex images from this layout through a coarse-to-fine process, where the global generator initially generates a low-resolution image and two local refinement generators hierarchically synthesize high-resolution images by combining the instance-wise attention and the instance mask embedding. Additionally, our model adopts word-level and attribute-adaptive discriminators to provide fine-grained feedback; thus, the local refinement generators can be instructed to synthesize specific visual attributes.
The contributions of this paper can be listed as follows:
1) To overcome the complexity and ambiguity of a whole sentence, we explicitly utilize the word-level embedding as input and use the box regression network to obtain the global layout that contains spatial positions, object sizes, and class labels.
2) In order to make local refinement generators learn instance-level and fine-grained features, we propose the instance mask embedding mechanism to add pixel-level mask constraints. Therefore, our generators can get more details and semantic information for high-resolution image generation.
3) Two word-level and attribute-adaptive discriminators, instead of the commonly used sentence-conditional discriminator, are employed to classify each attribute independently and generate exact signals for generators to synthesize certain visual attributes.
II. RELATED WORK
As one of the most commonly used image generation models, GANs include generators and discriminators. The generator is mainly used to learn pixel distributions and generate realistic images, while the discriminator should distinguish the received images as real or fake. They continually update in order to achieve dynamic equilibrium [6].
Many methods based on GANs have been proposed to improve image quality, and there are many input types. Zhu et al. [7] showed how to use sketches to modify images. Based on this, Lu et al. [8] adopted a contextual GAN to synthesize images from sketch constraints. Similarly, Huang et al. [9] proposed an image-to-image translation model. In order to synthesize images from category labels, Brock et al. [10] introduced a class-conditional model. Sharma et al. [11] improved text-to-image generation by using dialogue. However, due to the complexity of the input text, Johnson et al. [12] proposed the sg2im method to convert the input text into scene graphs for image generation.
Among these various inputs, text is the easiest and most convenient type to manipulate. An increasing number of researchers have shown interest in text-to-image generation, and there are mainly two branches in the research community.
A. SINGLE-STAGE TEXT-TO-IMAGE GENERATION
Many approaches directly generate images from text without intermediate representations. For example, Reed et al. [13] have achieved simple image synthesis directly from captions without reasoning any semantic layouts. By contrast, Dong et al. [14] input both the image and text into conditional GAN (CGAN) to generate manipulated contents. Based on CGAN, Li et al. [15] proposed the Triple-GAN, which contains an extra classifier to label the generated image with its matching text for data augmentation, the labeled image-text pairs then can be used as the training data. Similarly, Dash et al. [16] proposed the TAC-GAN to generate diverse images by distinguishing real images from generated images and classifying real images into true classes. Nguyen et al. [17] introduced the PPGN, which is similar to TAC-GAN and contains a conditional network, to generate images from captions. Furthermore, based on conditional GANs, Cha et al. [18] improved the adversarial training process by forming positive-negative label pairs and employing an auxiliary classifier to predict the semantic consistency of a given image-caption pair.
All of these models produce diverse images directly from descriptions, and their main focus is not on synthesizing high-resolution images, so they only use single-stage generation.
B. MULTI-STAGE TEXT-TO-IMAGE GENERATION
It is difficult to directly generate high-quality images from complex text. Denton et al. [19] adopted the LapGAN to generate images by constructing a Laplacian pyramid framework. However, this model still has limitations; the most obvious one is that its deep networks increase the training difficulty, resulting in model collapse. To solve this problem, Zhang et al. [20] employed StackGAN, which contains two generators to synthesize images within two stages. Afterward, they improved the previous architecture by proposing StackGAN++ [21], which is designed as a tree structure. But these two models only encode text into a single sentence vector for image generation. Similar to scene graphs, Hong et al. [22] introduced the text2img method; they utilized inferred layouts to generate images. Li et al. [23] also obtained graphic layouts with wireframe discriminators. Given a coarse layout, Zhao et al. [24] generated images by disentangling each instance into a certain label part and an uncertain appearance part. Hinz et al. [25] evaluated the detecting frequency of objects and synthesized multiple instances at various spatial locations based on an object pathway. Likewise, Li et al. [26] improved the grid-based attention mechanism by coupling attention with the layout. In order to minimize the differences between real and fake images, Yuan and Peng [27] proposed symmetrical distillation networks. Then Sun and Wu [28] put forward a new feature normalization approach to synthesize visually different images from given layouts. Xu et al. [29] introduced AttnGAN, which aggregates the attention mechanism [30] and the DAMSM loss into text-to-image generation.
However, AttnGAN only leverages a global sentence vector and treats all instances equally; thus it may miss detailed instance-level information. Our local refinement generators are able to uncover such differences by applying the instance mask embedding. Moreover, the proposed word-level attribute-adaptive discriminators have the capacity to disentangle each attribute independently in order to instruct the two local refinement generators to synthesize certain visual attributes.
A. BOX REGRESSION NETWORK
The box regression network can effectively reason scene layouts from descriptions or scene graphs [31]. This network takes a sentence embedding or final object embeddings as input and outputs the predicted bounding boxes B_1:T = {B_1, B_2, ..., B_T}. The t-th bounding box is parameterized as B_t = (b_t, l_t), where b_t = (b^x_t, b^y_t, b^w_t, b^h_t) indicates the location (x, y) and size (w × h) of the related object and l_t ∈ {0, 1}^(L+1) represents the one-hot class label of the t-th box. We define L as the number of real object categories and the (L + 1)-th label as an end-of-text indicator. The joint probability is calculated as p(B_1:T) = ∏_{t=1}^{T} p(b_t | l_t) p(l_t), where p(b_t | l_t) is the box coordinate probability and p(l_t) represents the label distribution. It is hard to directly model the joint probability since it contains various parameters. Therefore, the coordinate probability of the t-th box is decomposed as p(b_t | l_t) = p(b^x_t, b^y_t | l_t) p(b^w_t, b^h_t | b^x_t, b^y_t, l_t), where the probabilities p(b^x_t, b^y_t | l_t) and p(b^w_t, b^h_t | b^x_t, b^y_t, l_t) are implemented by two bivariate Gaussian mixtures, e.g., p(b^x_t, b^y_t | l_t) = Σ_k π^xy_{t,k} N(b^x_t, b^y_t; μ^xy_{t,k}, Σ^xy_{t,k}) and analogously for the size term. Here k indicates the number of mixture components, l_t is the label of the t-th object, and π^xy_{t,k}, π^wh_{t,k} ∈ R, μ^xy_{t,k}, μ^wh_{t,k} ∈ R^4, Σ^xy_{t,k}, Σ^wh_{t,k} ∈ R^{4×4} are parameters of the Gaussian Mixture Model (GMM) [32], [33]. These parameters are computed from the LSTM outputs at each step, where h_t is the hidden state and c_t is the t-th cell state; π^xy_{t,k}, π^wh_{t,k}, μ^xy_{t,k}, μ^wh_{t,k}, and Σ^xy_{t,k}, Σ^wh_{t,k} are computed in the same way. Inspired by the recent progress of the box regression network, we explicitly use it to predict locations for various instances. Different from sg2im [12] and text2img [22], we use word embeddings, instead of final vectors computed by a graph convolutional network [34] or a sentence vector, as input to obtain bounding boxes. Each box in our model not only predicts the location but also indicates the size and class label of each instance, which greatly differs from sg2im [12]; the global layout is thus synthesized for the further multi-stage generation.
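To make the Gaussian-mixture parameterization of box coordinates concrete, the following NumPy sketch samples a box center and size from a small mixture; the mixture weights, means, and covariances here are random placeholders standing in for the LSTM outputs, so this is illustrative only.

import numpy as np

rng = np.random.default_rng(0)
K = 5  # number of mixture components (assumed)

# Placeholder GMM parameters; in the model these come from the LSTM outputs at step t.
pi = rng.dirichlet(np.ones(K))                                   # mixture weights, sum to 1
mu = rng.uniform(0.2, 0.8, size=(K, 2))                          # component means
cov = np.stack([np.diag(rng.uniform(0.01, 0.05, 2)) for _ in range(K)])  # covariances

def sample_gmm(pi, mu, cov):
    k = rng.choice(len(pi), p=pi)            # pick a mixture component
    return rng.multivariate_normal(mu[k], cov[k])

xy = sample_gmm(pi, mu, cov)                 # box center (x, y)
# A second mixture, conditioned on xy and the class label in the paper, models (w, h);
# the same placeholder structure is reused here for brevity.
wh = np.abs(sample_gmm(pi, mu, cov))
print("sampled box:", xy, wh)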
B. MASK REGRESSION NETWORK
The mask regression network [35] has been used for mask segmentation in many computer vision tasks, and Hong et al. [22] have constructed shape masks from captions for image generation. As shown in Fig. 1, the mask regression network encodes the bounding box tensor B_t into a binary tensor B_t ∈ {0, 1}^{h×w×l}, where h × w represents the instance size and l is the category label. After a down-sampling block, the encoded features are fed into a Bi-LSTM and concatenated with the noise z. If and only if the bounding box contains the related class label, the binary tensor is set to 1; all parts outside the box are set to 0. After applying this mask operation, the masked features are fed into a residual unit, which allows the network to possess a deeper encoding ability by applying the skip connection [36]. Afterward, the predicted segmentation mask p_t ∈ R^{h×w}, with all elements in the range (0, 1), is obtained through several up-sampling layers for image generation.
Contrary to previous methods that use segmentation mask annotations for both low-resolution and high-resolution image synthesis, our approach employs the predicted pixel-level instance masks only as constraints on the two identical local refinement generators, so that the up-sampling path preserves the capacity to refine local texture details. Hence, the synthesized instances are coherent with the inferred masks while discarding ambiguous features and containing pixel-level details.
IV. IMEAA-GAN
The proposed IMEAA-GAN performs text-to-image synthesis in three steps: the box regression network infers global layouts to obtain categories, sizes, and locations of objects. Then the global generator generates relatively low-resolution global images from these layouts. Two local refinement generators finally synthesize high-resolution and photographic images.
A. GLOBAL LAYOUT GENERATION
We employ the box regression network to initially infer a global layout L from word-level embedding vectors. The global layout, as an intermediate representation, contains the corresponding bounding boxes for the related instances. The generation process of a global layout is illustrated in Fig. 2.
The box regression network is designed as an encoder-decoder architecture. For each instance, the network infers its class label and bounding box. Firstly, our IMEAA-GAN takes the text as input; with a pre-trained Bi-LSTM used as the text encoder, the whole text is encoded into word embedding vectors and also a global text embedding ϕ. Every word is related to two hidden states, and we concatenate the two states to indicate the semantic information of a word. Thus a feature matrix of all the words is obtained, and each column of this matrix represents a word feature vector. At the same time, we concatenate the last hidden states of the two directions to get the global text embedding ϕ. Then we take an LSTM [37] as the decoder to approximate the class label l_t and the coordinates b_t, whose GMM parameters are mentioned in function (1). To achieve this, we decompose the conditional joint probability autoregressively as p(B_1:T | w) = ∏_{t=1}^{T} p(B_t | B_1:t-1, w), where w denotes the word embeddings and T is the number of instances. We firstly predict the category l_t for the t-th object and then compute b_t based on l_t. Here, the class label l_t is calculated by a softmax over the logit e_t, and the coordinates b_t are modeled by a GMM, where e_t is the softmax logit calculated from the t-th step outputs of each LSTM unit. Similarly, the parameters π_{t,k} ∈ R, μ_{t,k} ∈ R^4, and Σ_{t,k} ∈ R^{4×4} that have been mentioned in functions (3) and (4) are also computed in this way, with k indicating the number of mixture elements. Finally, a global layout L that includes box coordinates and class labels for all entities is generated.
B. IMAGE GENERATION
Our IMEAA-GAN takes advantage of the multi-stage text-to-image generation strategy [38]. Although many methods, such as Obj-GAN [26], also use multi-stage generation, they are not robust to complex and ambiguous descriptions, and pixel-level features are not sufficiently used for image synthesis.
Obj-GAN has achieved image-level semantic consistency. However, during the generation process, Obj-GAN uses segmentation mask annotations for both low-resolution and high-resolution image synthesis, and it is labor-intensive to collect these annotations. In addition, applying them in low-resolution image generation cannot efficiently improve the image quality, since these images are not finely synthesized and the image features tend to be close to random vectors. By contrast, our approach calculates the pixel-level instance mask embedding instead of collecting mask annotations. More importantly, we adopt the instance mask embedding only in the two local refinement generators. In this way, our IMEAA-GAN obtains both the capability of capturing visual features and the flexibility of generating fine-grained instances. Given a coarse layout L_0, the global generator G^img_0 initially generates an image I_0 with 64 × 64 resolution. Then the local refinement generator G^img_1 employs the instance-wise attention and instance mask embedding to refine different regions of the first generated image in order to synthesize a high-quality image. Here, two local refinement generators with the same architecture are utilized for generating higher resolution images. For the sake of brevity, we do not show the generation process of the 256 × 256 image because it is the same as that of the 128 × 128 image.
1) GLOBAL GENERATOR
The global layout provides the semantic structure of the corresponding text. Fig. 3 shows that given a pre-generated layout L 0 , the global generator G img 0 is designed to produce an image that conforms to both the layout and text.
We first compute the global layout embedding vector μ_0 ∈ R^{h×w×d} by down-sampling the global layout L_0 and add the noise z by spatial replication and depth concatenation. The text embedding ϕ calculated by the pre-trained LSTM in the box regression network, the layout encoding μ_0, and the noise z are concatenated and fed into a residual unit implemented by several residual layers. Our model jointly aggregates the bounding box and text information into a latent feature representation, and we further apply one up-sampling layer to generate the global hidden feature vector y_0 from the latent representation. After the final 3 × 3 convolution layers, the global image with 64 × 64 resolution is initially generated. Specifically, y_0 = F_0(μ_0, ϕ, z), where F_0 is modeled as neural networks and y_0 is the global hidden layer feature vector. Conditioned on y_0, the global generator G^img_0 then generates the low-resolution image I_0 = G^img_0(y_0).
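The PyTorch sketch below mirrors the data flow just described (down-sample the layout, concatenate the spatially replicated text embedding and noise, pass through residual layers, up-sample, and emit a 64 × 64 RGB image). Channel counts, block depths, and the layout channel number are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class GlobalGenerator(nn.Module):
    def __init__(self, layout_ch=81, text_dim=256, z_dim=100, feat=64):
        super().__init__()
        self.down = nn.Sequential(                       # encode the layout L0 into mu0
            nn.Conv2d(layout_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        self.res = nn.Sequential(                        # fuse layout, text, and noise
            nn.Conv2d(feat + text_dim + z_dim, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.up = nn.Sequential(                         # decode to a 64x64 RGB image
            nn.Upsample(scale_factor=4, mode='nearest'),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, 3, 3, padding=1), nn.Tanh())

    def forward(self, layout, text_emb, z):
        mu0 = self.down(layout)                                  # layout encoding
        h, w = mu0.shape[2], mu0.shape[3]
        cond = torch.cat([text_emb, z], dim=1)                   # (B, text_dim + z_dim)
        cond = cond[:, :, None, None].expand(-1, -1, h, w)       # spatial replication
        y0 = self.res(torch.cat([mu0, cond], dim=1))             # global hidden feature y0
        return self.up(y0), y0

gen = GlobalGenerator()
layout = torch.rand(2, 81, 64, 64)                               # toy layout tensor
image, y0 = gen(layout, torch.rand(2, 256), torch.randn(2, 100)) # 64x64 image and y0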
2) LOCAL REFINEMENT GENERATOR
In the first stage of generation, local details are not explicitly utilized for instance-level image generation; most of the synthesized images lack fine-grained features, resulting in overly smooth textures. To generate high-resolution images, we further employ the local refinement generator, whose overall architecture is illustrated in Fig. 4. During the refinement process, we repeat it only two times due to the memory limitation of the GPU. With two identical local refinement generators G^img_1 and G^img_2, we first generate the 128 × 128 images and then synthesize the 256 × 256 images.
a: INSTANCE-WISE ATTENTION
Our local refinement generator is designed as an encoder-decoder structure. It first encodes the global layout L_1 by several down-sampling layers to obtain the layout encoding vector μ_1 ∈ R^{h×w×d} (d indicates the layout feature dimension). Considering that traditional grid attention has been successfully used for image captioning [39], image-to-image translation [40], and visual questioning and answering [41], that the attention-based generative adversarial network AttnGAN uses the attention mechanism for image generation, and that our two local refinement generators need to encode various context information of L_1 along the channel dimension, we employ, as shown at the bottom of Fig. 5, the instance-wise attention to select the context-relevant features. Specifically, with the sub-region vectors V_region of the pre-generated image I_0, our local refinement generator retrieves the relevant instance vectors from the layout L_1. Afterward, it assigns instance-wise attention weights to each instance vector V_t and then calculates the weighted sum of the input information. The instance-wise context vector of the t-th object is calculated as the attention-weighted combination of the instance embedding vectors, where t ∈ {1, 2, ..., T} indexes the objects, and V_t and w_t represent the embedding vector and the attention weight of the t-th instance, respectively.
b: INSTANCE MASK EMBEDDING MECHANISM
Different parts of bounding boxes may overlap during the refinement process, multiple pixels may cover the same pixel, and the output shapes do not always align with the ground truth. These problems can be solved as a space sampling issue where the proposed instance mask embedding can pose spatial and morphological constraints on instance feature projection.
In general, many methods use mask annotations, which are not flexible to obtain, to separately add the shape of each instance. As a result, the generated images as a whole may present poor scene layouts though each instance is correctly rendered. Differently, we employ the predicted pixel-level instance mask embedding for image synthesis, in this way we can avoid consuming too much model capacity and unstable training.
As shown at the top of Fig. 5, given a global layout L_1, we use the mask regression network to obtain the aggregated mask P_global ∈ R^{h×w}. Our down-sampling block is made up of a 3 × 3 convolution (stride 2) followed by batch normalization and ReLU activation, the residual unit is implemented with three 3 × 3 convolution layers and a skip connection, and the up-sampling block consists of a 4 × 4 deconvolution (stride 2) followed by batch normalization and ReLU activation. The aggregated mask P_global is then cropped to obtain the t-th instance mask embedding P_t. To clearly represent the overlapping parts and make the generated features comply with the instance mask embedding, the most relevant context vector should be selected by the local refinement generator.
Thus, for the t-th instance, we copy the instance-wise context vector V^t_context to the instance mask embedding P_t, and the pixel-level feature vector V, which contains latent pixel details, is calculated by Eq. (16), where ⊗ is the vector outer-product and T is the number of instances in the image. When several instances cover a single pixel, we perform max-pooling to select the feature associated with the most related instance-wise context vector V^t_context and employ that pixel representation at this position.
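A minimal sketch of this aggregation, under assumed tensor shapes: each instance's context vector is broadcast over that instance's predicted mask, and max-pooling over the instance dimension resolves pixels covered by several boxes.

import torch

B, T, D, H, W = 2, 3, 128, 32, 32             # batch, instances, feature dim, spatial size
masks = torch.rand(B, T, H, W)                # predicted instance mask embeddings P_t in (0, 1)
contexts = torch.rand(B, T, D)                # instance-wise context vectors V_context^t

# Copy each context vector onto its mask region: shape (B, T, D, H, W).
per_instance = masks[:, :, None, :, :] * contexts[:, :, :, None, None]

# Overlapping pixels: keep the feature of the most relevant instance via max-pooling over T.
V, _ = per_instance.max(dim=1)                # pixel-level feature vector, shape (B, D, H, W)
print(V.shape)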
Meanwhile, in order to integrate the global information from G^img_0 into G^img_1, we inject the global hidden layer feature vector y_0 into the refinement stage (see Fig. 4). y_0, μ_1, and V are aggregated by concatenation along the channel dimension and subsequently fed into a residual unit. We further apply one up-sampling layer as the decoder to calculate the local hidden feature vector y_1. As the input of the final 3 × 3 convolution layers, the hidden layer vector y_1 is subsequently mapped to an image with resolution 128 × 128. Specifically, y_1 = F_1(y_0, μ_1, V), where F_1 is modeled as neural networks, y_0 is the global hidden feature vector, and μ_1 represents the high-resolution layout encoding. The pixel-level feature vector V and the instance-wise context vector V_context are calculated and aggregated into the concatenation of μ_1 and y_0 to get the local hidden feature vector y_1. Then the local refinement generator outputs a high-resolution image I_1 conditioned on the hidden feature vector. Additionally, we apply another local refinement generator G^img_2 and finally synthesize images with the resolution 256 × 256.
3) ATTRIBUTE-ADAPTIVE DISCRIMINATOR
The discriminator should have a large receptive field to differentiate synthesized images from the ground truth [42]; this requires either bigger convolution kernels or a considerably deeper network, resulting in an increased model capacity and repeated patterning in images. To this end, we employ multi-scale discriminators D^img_0, D^img_1, and D^img_2 to separately train on images of different resolutions. The sentence-level discriminator is adopted for D^img_0, while the identical D^img_1 and D^img_2 are designed as word-level attribute-adaptive discriminators. Generative models tend to synthesize the ''average'' pattern instead of the related attribute features; this is mainly because the global sentence-wise discriminator cannot be attached to a specific type of visual attribute and only provides coarse training feedback. Therefore, our attribute-adaptive discriminators D^img_1 and D^img_2 are trained to recognize each attribute and discriminate whether it exists in the synthesized image. Each attribute-adaptive discriminator is made up of word-level discriminators to disentangle different attributes with fine-grained training signals. The overall structure of the image discriminator and the proposed word-level attribute-adaptive discriminator is shown in Fig. 6.
The attribute-adaptive discriminator consists of a set of word-level discriminators {D_1, D_2, ..., D_N}. Given an image, the image encoder outputs image features (see Fig. 6(b)), and we apply global average pooling to all feature layers to compute the one-dimensional image feature vector e. Meanwhile, we use the text encoder to get the word vectors {w_1, w_2, ..., w_T} and respectively feed them into the word-level discriminators. Taking the t-th word vector w_t as an example, the one-dimensional sigmoid word-level discriminator F_{w_t} is used to decide whether the synthesized image contains a visual attribute related to w_t. Specifically, the word-level discriminator is represented as F_{w_t}(e_n) = σ(W(w_t) e_n + b(w_t)), where σ is the sigmoid function, e_n represents the one-dimensional image vector of the n-th image feature layer, W(w_t) denotes the weight matrix, and b(w_t) is the bias. We also reduce the influence of less significant words in the discrimination process. For this, we apply word-level instance-wise attention to indicate the degree of correlation between the word and the visual attribute. The attention mechanism mainly has two aspects: calculating the attention distributions, and computing the weighted sum based on the attention distributions. Since the discriminator should have a multi-scale receptive field to detect multi-scale image features, the attention distribution α_{t,n} is calculated as a softmax over the attention scores S_{t,n} = v^T w_t, where α_{t,n} is the attention weight assigned to the t-th word on the n-th image feature layer, S_{t,n} is the attention scoring function calculated by the dot-product model, and v denotes the average of the word vector w_t. With the attention distribution, the final score of the word-level discriminator is multiplicatively aggregated over all words and feature layers with the weights α_{t,n} and γ_{t,n}, where I represents the generated image, x denotes the text, T is the total number of input words, α_{t,n} represents the attention distribution, and γ_{t,n} is the softmax weight that determines the importance of each word for layer n.
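The following PyTorch sketch shows one way a word-level discriminator of this form could be realized: a word embedding generates the weight vector W(w_t) and bias b(w_t), which score a pooled image feature through a sigmoid. The layer sizes and the linear mappings used to produce W and b are assumptions for illustration, not the paper's exact design.

import torch
import torch.nn as nn

class WordLevelDiscriminator(nn.Module):
    # Scores whether an image feature vector e_n contains the attribute named by word w_t:
    # sigmoid(W(w_t) . e_n + b(w_t)), with W and b generated from the word embedding.
    def __init__(self, word_dim, img_dim):
        super().__init__()
        self.make_w = nn.Linear(word_dim, img_dim)   # produces W(w_t)
        self.make_b = nn.Linear(word_dim, 1)         # produces b(w_t)

    def forward(self, w_t, e_n):
        w = self.make_w(w_t)                                   # (B, img_dim)
        b = self.make_b(w_t)                                   # (B, 1)
        return torch.sigmoid((w * e_n).sum(dim=1, keepdim=True) + b)

disc = WordLevelDiscriminator(word_dim=256, img_dim=512)       # hypothetical sizes
word = torch.rand(4, 256)        # embedding of one word
img_feat = torch.rand(4, 512)    # pooled image feature of one layer
score = disc(word, img_feat)     # probability that the attribute is present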
Hence, compared with the sentence-level discriminator, which operates at a coarse level and only determines whether the synthesized image roughly matches the text, our attribute-adaptive discriminators can provide feedback at different stages and identify the existence of related visual attributes.
C. OBJECTIVE FUNCTION
Our final objective function consists of a GAN adversarial loss [1] and a DAMSM loss [29]. The GAN cross-entropy loss function L_GAN is determined by the adversarial training of the image generators and attribute-adaptive discriminators. Both the generator and discriminator objectives consist of an unconditional loss and a conditional loss. The generator objective is defined as L_G = -(1/2) E_{I~P_G}[log D(I)] - (1/2) E_{I~P_G}[log D(I, x)], where the first item represents the unconditional loss, the second item is the conditional loss, and I and x denote the synthesized image and the related text, respectively. The adversarial loss for each discriminator also consists of an unconditional and a conditional item, with real images drawn from P_data, the distribution of the ground truth. Additionally, we adopt the DAMSM loss introduced in AttnGAN to calculate the fine-grained image-text matching loss. Hence, our final objective loss is obtained by L = L_GAN + λ_1 L_DAMSM, where λ_1 is a hyper-parameter and L_DAMSM is the loss of the Deep Attentional Multimodal Similarity Model (DAMSM) pre-trained on ground truth images and related descriptions. We set the learning rates of the generators and discriminators all to 0.0002; the hyper-parameter of the DAMSM loss is set to λ_1 = 50 on MS-COCO and λ_1 = 5 on CUB. We use the Adam algorithm [43] to optimize the adversarial training. The exponential decay rates β_1, β_2 ∈ [0, 1) for the first and second moment estimates are set to 0.5 and 0.999, respectively.
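A compact sketch of how the final training loss could be assembled in code, assuming the adversarial generator losses and a pre-trained DAMSM loss are available as callables; gan_generator_loss and damsm_loss below are hypothetical placeholder names, not functions from any specific library.

# gan_generator_loss(img, text) and damsm_loss(img, text) are hypothetical callables standing
# in for the adversarial loss of one generator stage and the pre-trained DAMSM matching loss.
def total_generator_loss(fake_images, text, gan_generator_loss, damsm_loss, lambda1=50.0):
    # lambda1 = 50 on MS-COCO and 5 on CUB, as stated above.
    l_gan = sum(gan_generator_loss(img, text) for img in fake_images)
    return l_gan + lambda1 * damsm_loss(fake_images[-1], text)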
1) DATASETS
We perform experiments on the MS-COCO and CUB datasets. The MS-COCO dataset [44] has pixel-level annotations and contains 82,783 training images, 40,504 validation images, and 40,775 testing images. There are 80 object categories in this dataset, and each image has 5 text descriptions and corresponding instance labels. Derived from the CUB-200 dataset, the CUB dataset [45] includes a total of 11,788 images that provide class labels, bounding boxes, and bird attribute information. It has 200 different bird categories, and each image has 10 descriptions describing the bird attributes. We employ 150 bird categories (including 8,855 images) as our training set and the other 50 categories (including 2,933 images) as the testing set.
A pre-trained Inception v3 network [48] is adopted to compute the IS and FID. The IS evaluates image quality and diversity, namely, it measures the uniqueness of synthesized images and the number of object categories [49], while the FID calculates the Wasserstein-2 distance [50] between the ground truth and synthesized images according to final-layer activations. A lower FID indicates a shorter distance between the generated image distribution and the ground truth image distribution. Therefore, the larger the IS value and the smaller the FID value, the better the model performance. As in AttnGAN and MirrorGAN [51], we also apply R-precision to measure the matching degree between the image and text. Specifically, we randomly select 99 descriptions from the dataset and then compute the cosine distance to indicate the similarity (in feature space) between the generated image and the related text. We sort these 100 descriptions (including the ground truth text) and select the top k most similar descriptions to calculate the R-precision. In practice, we set k = 1, meaning that the R-precision indicates whether the ground truth text matches the synthesized image more closely than the 99 randomly sampled text descriptions.
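A small NumPy sketch of the R-precision computation as described: cosine similarities between one generated image feature and 100 candidate caption features (the ground truth plus 99 random ones), checking whether the ground truth lands in the top k. Feature extraction itself (e.g., with the DAMSM encoders) is assumed to happen elsewhere.

import numpy as np

def r_precision(image_feat, text_feats, true_idx, k=1):
    # image_feat: (d,) feature of one generated image; text_feats: (100, d) features of the
    # ground-truth caption plus 99 random captions; true_idx: row index of the ground truth.
    sims = text_feats @ image_feat / (
        np.linalg.norm(text_feats, axis=1) * np.linalg.norm(image_feat) + 1e-8)
    top_k = np.argsort(-sims)[:k]             # captions most similar to the image
    return float(true_idx in top_k)

# Averaging r_precision over many generated images gives the reported R-precision.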
B. QUALITATIVE RESULTS
Our model produces high-fidelity 256 × 256 images containing complex scenes and multiple instances; Fig. 7 shows the synthesized results on MS-COCO. Conditioned on the instance mask embedding, IMEAA-GAN is able to separate instances from the background and reduce overlapping pixels. Given similar input, due to the use of attribute-adaptive discriminators, IMEAA-GAN can also synthesize various detailed attributes. For example, the sheep in the third column of Fig. 7 show that our approach can well distinguish the word-level information and generate diverse images corresponding to various features. To prove the generalization ability of our IMEAA-GAN, we also perform experiments on the CUB dataset. As shown in Fig. 8, the generated high-quality 256 × 256 images vividly display the color and texture of different birds, and there are almost no indistinguishable instances or overlapping parts owing to the instance mask embedding mechanism. Moreover, with the guidance of attribute-adaptive discriminators, our images present correct and fine-grained attributes.
We adopt the multi-stage generation strategy to synthesize high-resolution images. During the refinement stage, we attempted to increase the number of generator stages to 4. However, the training process becomes unstable and difficult to control due to the complexity of deep neural networks and the memory limitation of the GPU. Therefore, we only apply one global generator and two local refinement generators for the optimal generation. The intermediate results of different stages on CUB and MS-COCO are illustrated in Fig. 9. Figure 9 shows that IMEAA-GAN is capable of refining images to match the text. The global generator initially generates coarse-grained 64 × 64 images (e.g., Fig. 9(a)), but these synthesized images lack fine-grained textures. Then two local refinement generators generate fine-grained images (e.g., Fig. 9(b), Fig. 9(c)). The context-wise instance vectors can be obtained by our generators, so the synthesized images are further improved, containing more accurate texture features and clearer backgrounds. For example, in the second row on the right of Fig. 9, there is no short beak in the initial 64 × 64 image; our local refinement generators are able to encode the ''short beak'' information and synthesize the missing features.
Further, as illustrated in Fig. 10, we compare IMEAA-GAN with other methods conditioned on the same text. The sg2im method converts the input text into scene graphs to infer semantic layouts, and this approach has achieved the synthesis of 128 × 128 images. But scene graphs lack core object attributes and spatial information (e.g., positions and sizes), so it is difficult to generate details that are consistent with semantic layouts. In addition, the information conveyed by scene graphs is very limited; the features of an instance are determined not only by its position and class label but also by interactions with others, so it fails to resolve the overlapping pixels and separate different object appearances.
As shown in the second row of Fig. 10, AttnGAN synthesizes 256 × 256 images. Conditioned on a sentence vector, the effect of each word is not fully considered, and it assigns all instances the same weight. Thus, the lack of word-level embedding and the ignoring of interactions between different instances make it difficult for it to generate high-quality images. Besides, it uses sentence-level discriminators that only provide coarse-grained feedback, so its generators tend to generate texture associated with the wrong word. This can explain why the synthesized results show realistic features but lack meaningful layouts and correct attributes.
The recent MirrorGAN [51] has made great progress on complex image generation, the example results are shown in the third row of Fig. 10. This method outperforms the first two models, it guarantees the semantical consistency in multiple object generation, the synthesized images match the text at the image level. Yet, MirrorGAN lacks investigations on uneven instance distribution and feature occlusion, the visual appearance and instance interactions are not finely regulated. For example, the ''cattle'' in the first image of the third row contain reasonable appearance, but the ''green hillside'' is inappropriately shown as the ''dry field''.
Different from these aforementioned methods, IMEAA-GAN adopts word-level attribute-adaptive discriminators. As presented in the last row of Fig. 10, the synthesized instances have correct attributes. Besides, due to the use of instance mask embedding and instance-wise attention mechanism, as well as the maximum pooling of multiple pixels, overlapping pixels between different instances have been solved. So these generated instances, which contain clear shapes and texture features, are more recognizable and semantically meaningful.
We also perform comparative experiments on the CUB dataset as shown in Fig. 11. Since sg2im mainly aims at the positional relationship between different instances, every image in CUB only contains a single object, so we just compare the IMEAA-GAN with AttnGAN and MirrorGAN.
Observing the second and third columns of Fig. 11, though these two methods both accurately capture attribute features, IMEAA-GAN can better display the main attributes and differentiate birds from their backgrounds. In general, our approach has the capacity to synthesize individuals with more vivid details as well as clearer shapes. Figure 12 demonstrates that IMEAA-GAN can generate diverse images using the same input. The results contain various shapes and complex scenes; this is mainly owing to the word-level attribute-adaptive discriminators, which provide specific signals. Therefore, by only changing a few words, under the guidance of the discriminators, the generators can synthesize images with detailed attributes, and these samples look similar but are unique to each other.
As shown in Table 1 and Table 2, we measure the performance of different methods in terms of IS, FID, and R-precision; the best results are in bold. Based on the MS-COCO and CUB datasets, compared with MirrorGAN, we increase IS by almost 15.19% and 4.17% and R-precision by 2.96% and 2.63%. Compared with the officially pretrained AttnGAN, our model decreases the FID by 8.03% on MS-COCO and 32.90% on CUB, which confirms that IMEAA-GAN is able to generate images with more diverse objects and higher quality than other methods. Our model can obtain the most relevant instance at positions where there are overlapping pixels, so the synthesized results are closely consistent with the global layouts and ground truth images. Hence, as demonstrated in Table 1 and Table 2, feeding the generated results into the pre-trained Inception v3 network, we obtain better IS and FID performance. In addition, we also obtain the highest R-precision, which indicates that the images and attributes generated by our generators are the most relevant to the descriptions. However, other methods produce many overlapping pixels and blurred objects, and the Wasserstein-2 distances between the ground truth and generated samples are quite large, so it is hard for them to adaptively disentangle the corresponding visual features under linguistic expression variants. By comparison, IMEAA-GAN greatly improves the quality and diversity of the generated images, as well as the text-image matching degree.
Besides, the IS values on CUB differ significantly from those on MS-COCO. This is because all images in CUB are birds and their feature distributions are similar, while MS-COCO contains different instance categories and complex scenes, so the feature distributions among various objects are greatly different. Therefore, the IS values on MS-COCO are generally larger than those on CUB.
D. ABLATION STUDY
To verify the effectiveness of the proposed discriminators, as shown in Fig. 13(a), we visualize our word-level attribute-adaptive discriminators. Meanwhile, to make a comparison, we adopt two commonly used sentence-level discriminators with the same structure as our baseline model; the visualization maps of the sentence-level discriminators are presented in Fig. 13(b). The highlighted regions indicate the feedback information provided by the discriminators. With this feedback, the generators are instructed to synthesize related attributes and instances. The discriminators in our baseline model are conditioned on a whole sentence, so it is hard to highlight word-level regions, resulting in an excessively large range of highlighted areas. What is worse, the baseline method even omits highlighting when synthesizing certain attributes; see the images in the third row of Fig. 13(b).
All these illustrate that sentence-level discriminators can only provide the coarse-grained information and fail to offer effective feedback signals. In contrast, our attribute-adaptive discriminators are word-level that can provide generators with detailed attribute feedbacks and highlight the related regions. Therefore, our generators can focus on the most relevant regions to perform pixel-level attribute generation.
Further, we also demonstrate the necessity of instance mask embedding for the local refinement generators. The image quality of the ablated version and our full model are shown in Fig. 14 (a) and Fig. 14 (b), respectively. The ablated model has the same settings as our full IMEAA-GAN except that it does not use the instance mask embedding (Fig. 14 (a) w/o IME). Images in Fig. 14 (a) lack detailed and complete features, for example, the zebras and giraffes in the first row are only synthesized with scattered features. It is difficult for the ablated model to synthesize corresponding instances in the correct locations, so the image accuracy and fidelity are quite low. With the instance mask embedding ( Fig. 14 (b) w/ IME), the synthesized images can meet the shape and location constraints. Even for complex scenes, for example, the giraffes in Fig. 14 (b), there are almost no overlapping pixels and indistinguishable instances.
VI. CONCLUSION
In this paper, we present a novel Instance Mask Embedding and Attribute-Adaptive Generative Adversarial Network (IMEAA-GAN) for text-to-image generation. With the instance mask embedding, which provides shape constraints and solves the overlapping problem between different pixels, our two local refinement generators are able to refine the initial image synthesized by a global generator. We also propose the word-level attribute-adaptive discriminators, which focus on individual attributes and provide effective feedback to discriminate whether the generated instances match the attribute descriptions, so as to guide the generators to synthesize accurate features. Experimental results illustrate that our model is capable of generating complex images with high-fidelity attributes on different datasets. However, once the text contains various scene settings and instances, the image quality drops drastically. Our future work will focus on using knowledge graphs to infer corresponding semantic layouts and generating multiple high-resolution images from a single semantic layout.
"Computer Science"
] |
Potato Sorting Based on Size and Color in Machine Vision System
Potato (Solanum tuberosum) is cultivated as a major food resource in countries with a moderate climate. Manual sorting is labor intensive, and mechanical sorting causes considerable crop damage, so a system is needed in which crop damage is diminished. Machine vision is one of the modern techniques created for fast, accurate, and less labor-intensive sorting of potatoes. The basis of this method is imaging of samples, analysis of the images, comparison with a standard, and finally a decision to accept or reject each sample. In this research, 110 potatoes of the Agria variety were prepared. Samples were manually pre-graded based on quantitative, qualitative, and total factors before sorting. Quantitative, qualitative, and total sorting in the machine vision system was performed by improving image quality and extracting the best thresholds. The accuracy of total sorting was 96.823%.
Introduction
The potato (Solanum tuberosum) is an herbaceous annual that grows up to 100 cm (40 inches) tall and produces a tuber - also called potato - so rich in starch that it ranks as the world's fourth most important food crop, after maize, wheat and rice. The potato belongs to the Solanaceae - or "nightshade" - family of flowering plants, and shares the genus Solanum with at least 1,000 other species, including tomato and eggplant. S. tuberosum is divided into two, only slightly different, subspecies: andigena, which is adapted to short day conditions and is mainly grown in the Andes, and tuberosum, the potato now cultivated around the world, which is believed to be descended from a small introduction to Europe of andigena potatoes that later adapted to longer day lengths (FAO, 2008).
Potato consumption in any form (as seed, for human food, for feeding animals, or in processing operations such as chips and conserves) depends on special conditions that must be prepared before those operations. The objective of sorting is the preparation of these conditions.
By sorting we can grade crops based on size, shape, color, ripeness, damage, etcetera. Sorting by hand is time-consuming, its efficiency is low, and its cost is sometimes high (where workers' wages are high). Mechanical grading can increase sorting efficiency and decrease the need for workers.
Technological advancement is gradually finding its applications in the fields of agriculture and food, in response to one of the greatest challenges, i.e., meeting the needs of the growing population. Efforts are being geared towards replacing human operators with automated systems, as human operations are inconsistent and less efficient. Automation means every action needed to control a process at optimum efficiency, carried out by a system that operates using instructions programmed into it or in response to some activities. Automated systems are in most cases faster and more precise (Narendra and Hareesh, 2010).
By using machine vision systems and image processing techniques we can grade crops with high precision and speed and diminish crop damage. Computer vision has been recognized as a potential technique for the guidance or control of agricultural and food processes. Therefore, over the past 25 years, extensive studies have been carried out, thus generating many publications (Narendra and Hareesh, 2010).
In this research we applied image processing techniques, and finally the program was tested.
Machine vision has been applied to sorting of a wide range of agricultural products. Some of these studies are mentioned below: Von Beckmann and Bulley (1978) developed an electronic sorter for color and size grading of tomatoes. They used the ratio of surface reflectance at wavelengths of 600 and 660 nm to sort tomatoes into 4 grades (Von Beckmann and Bulley, 1978).
Miller and Delwiche (1989) developed a color vision system for detection and sorting of ripe peaches. For peach sorting, their color was compared to the color of a standard ripe peach (Miller and Delwiche, 1989).
A prototype inspection station based on the United States Department of Agriculture (USDA) inspection standards was developed for potato grading. The station consisted of an imaging chamber, conveyor, camera, sorting unit, and personal computer for image acquisition, analysis, and equipment control. A sample of 9.1 kg (20 lb) of pre-graded potatoes was evaluated in three separate experimental runs to assess the system performance. The system correctly classified 80%, 77%, and 88% of the moving potatoes in the three runs at 3 potatoes/min, and 98%, 97%, and 97% in three runs of stationary potatoes. Shape analysis was adversely affected by the potato motion, and this contributed to the misclassification error (Heinemann et al., 1996).
Laykin (2002) used three methods for sorting tomatoes. These methods were: Mean-Standard deviation, Slide Blocks, and Quad tree (Laykin et al., 2002).
Deck in 1995 compared the color segmentation results of a Multilayer Feed Forward Neural Network (MLF-NN) and a traditional classifier for the color inspection of potatoes (Deck et al., 1995). Tao et al. (1995) presented a method for sorting green and good potatoes. They used the HSI color system. Samples of potatoes were sorted by experts and farmers. They used 40 green and 40 good potatoes in the training phase and 20 of each grade in the test phase. In the training phase, all 40 good potatoes and 38 of the 40 green potatoes were sorted correctly, and in the test phase all 20 good potatoes and 18 of the 20 green potatoes were sorted correctly. The results of human and machine detection were close (Tao et al., 1995).
A high speed machine vision system for the quality inspection and grading of potatoes was presented by Noordam et al. in 1995. The vision system graded potatoes on size, shape, and external defects. For color grading of potatoes they used Linear Discriminant Analysis (LDA) and MLF-NN techniques. The results of the LDA and MLF-NN sorting techniques implemented for different varieties of potatoes were respectively 86.8%-98.6% and 88.1%-99.2% (Noordam et al., 1995). Zhou et al. (1998) evaluated weight, cross-sectional diameter, shape, and color of three cultivars of potato using a computer vision system which was able to classify 50 potato images per second. An ellipse was used as the shape descriptor for potato shape inspection, and color thresholding was performed in the hue-saturation-value (HSV) color space to detect green color defects. The average success rate was 91.2% for weight inspection and 88.7% for diameter inspection. The shape and color inspection algorithms achieved 85.5% and 78.0% success rates, respectively. The overall success rate, combining all of the above criteria, was 86.5%. Rios-Cabrera et al. (2008) determined potato quality by evaluating physical properties using Artificial Neural Networks (ANNs) to find misshapen potatoes. The results showed that FuzzyARTMAP outperformed the other models due to its stability and convergence speed, with times as low as 1 ms per pattern, which demonstrates its suitability for real-time inspection. Several algorithms to determine potato defects such as greening, scab, and cracks were proposed. Barnes et al. (2009) introduced novel methods for detecting blemishes in potatoes using machine vision. The results show that the method is able to build "minimalist" classifiers that optimize detection performance at low computational cost. In experiments, minimalist blemish detectors were trained for both white and red potato varieties, achieving 89.6% and 89.5% accuracy, respectively.
Sorting Mechanism
The sorting mechanism consisted of a lighting chamber, a camera, and processing software. The lighting source selection is a key factor in the image processing operation. In designing the lighting chamber, outside light must be eliminated. In this research we used four fluorescent lamps in lateral positions in the chamber, and the camera lens entered the lighting chamber through a hole, which was the only opening to the outside and was covered by the lens. Potatoes were placed under the camera lens in the center of the lighting chamber. The camera selected for image capturing was a CCD camera. The software of this system was MATLAB (R2008a).
Transformation of RGB to Gray Scale Image
When an RGB image is transformed to a grayscale image, the image size is decreased and the image processing is accelerated.
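Although the original system was implemented in MATLAB, the conversion can be sketched with OpenCV in Python as follows; the file name is hypothetical.

import cv2

img = cv2.imread("potato.jpg")                   # BGR image from the CCD camera (hypothetical file)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # single channel: smaller and faster to process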
Calculation of Threshold
Threshold extraction is the best way to perform image segmentation. If the image consists of a light object on a dark background, the grayscale pixel values fall into two modes (object and background).
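The paper does not name the exact rule used to pick the threshold between the two modes; Otsu's method is a standard choice for such bimodal histograms and is shown here purely as an assumed stand-in (OpenCV, Python).

import cv2

gray = cv2.imread("potato.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
# Otsu's method searches for the threshold that best separates the two histogram modes
# (light object vs. dark background).
t, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)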
Image Noise Elimination
In this research, to eliminate noise and reach the best boundary, we used a 25 by 25 Gaussian low-pass filter with a standard deviation of 15. For boundary handling we selected the replication method, in which the image is extended by replicating the values on the outer boundary.
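A possible OpenCV equivalent of this smoothing step, matching the stated kernel size, standard deviation, and replicate padding; the file name is hypothetical.

import cv2

gray = cv2.imread("potato.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
# 25x25 Gaussian low-pass kernel with standard deviation 15; BORDER_REPLICATE extends the
# image by copying the outer boundary values, matching the replication padding described above.
smoothed = cv2.GaussianBlur(gray, (25, 25), 15, borderType=cv2.BORDER_REPLICATE)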
Extraction of Boundary
Boundary extraction is a major technique in image pre-processing and is used in most algorithms. Boundary detection is the basic process for extracting image information. We must select a method whose sensitivity to image noise is the lowest and which can extract a continuous boundary in a simple and fast way. We used the Sobel estimator, whose sensitivity to horizontal and vertical boundaries is higher than that of other operators. Sobel extracts the boundary by a non-linear calculation and does not depend on individual point values.
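A sketch of Sobel boundary extraction in OpenCV; the binarization rule at the end is an illustrative assumption, since the paper does not specify how the gradient magnitude is thresholded.

import cv2
import numpy as np

smoothed = cv2.imread("potato.jpg", cv2.IMREAD_GRAYSCALE)   # in practice, the filtered image above
gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)         # horizontal gradient
gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)         # vertical gradient
magnitude = np.sqrt(gx ** 2 + gy ** 2)
edges = (magnitude > magnitude.mean()).astype(np.uint8) * 255   # illustrative binarization only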
Calculation of Area
By labeling the extracted boundary we can calculate quantitative parameters such as maximum diameter, minimum diameter, equivalent diameter, area, perimeter, and so on. We can use all these parameters to grade potatoes based on size. In this research we employed the area, which was calculated by counting the number of pixels in the labeled region.
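One way to compute the region area by pixel counting is connected-component labeling, sketched below with OpenCV; the input file name and the pre-thresholding step are assumptions.

import cv2

mask = cv2.imread("potato_mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical binary region image
_, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
# stats[i, cv2.CC_STAT_AREA] is the pixel count of region i (label 0 is the background),
# which is the area measure used for size grading.
areas = [stats[i, cv2.CC_STAT_AREA] for i in range(1, n)]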
Evaluating the Combination of Intensity Transformation Functions and Color Spaces
The various combinations of intensity transformation functions and color spaces were implemented on the images. The pixels belonging to the healthy class were detected and counted, and the percentage of the healthy class was calculated by dividing this count by the total number of pixels. By comparing this percentage with the percentage specified by the experts, the R^2 between them was calculated. The best combination, with the highest R^2 of 0.989, was the combination of the HSV color space and the logarithmic transformation (Figure 1).
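A hedged sketch of this evaluation pipeline in Python: convert to HSV, apply a logarithmic intensity transformation (the 0.5 coefficient is taken from the qualitative-sorting description later in the paper), estimate the healthy-pixel percentage, and compare against expert percentages with R^2. The specific channel, the placeholder "health" rule, and the file name are illustrative assumptions, not the paper's exact algorithm.

import cv2
import numpy as np

img = cv2.imread("potato.jpg")                                   # hypothetical file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float64)
v = 0.5 * np.log1p(hsv[:, :, 2])                                 # logarithmic transformation, coefficient 0.5
healthy = v > v.mean()                                           # placeholder rule for "healthy" pixels
health_percent = 100.0 * healthy.sum() / healthy.size

def r_squared(y_true, y_pred):
    # Agreement between algorithm percentages and expert percentages over many samples.
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot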
Threshold Extracting and Evaluating in Quantitative Sorting
For the calculation of the area threshold, we divided each grade into training and testing groups. In the training group, the numbers of Small, Medium, and Large samples were 12, 20, and 23, respectively.
For extracting the proper threshold, the tubers were pre-graded by experts into classes of Small, Medium, and Large sizes. Thereafter each class was divided into two phases: Training and Testing. In the Training phase the threshold was extracted according to Table 1 and relations (1), (2), and (3). To evaluate the accuracy of this threshold for classifying the samples based on size, it was applied to the samples of the Testing phase. This process is represented in Table 2. As represented in this table, the accuracy of this threshold in sorting the test samples based on size was 100%.
Threshold Extracting and Evaluating in Qualitative Sorting
The 110 potatoes were pre-graded by experts into 19 of Grade 1, 37 of Grade 2, 33 of Grade 3, and 21 in the Rejected group. For the extraction of the appropriate threshold, the samples were divided into two groups, Training and Testing. In the Training phase the threshold was extracted by applying the qualitative algorithm based on the combination of the logarithmic transformation with a coefficient of 0.5 and the HSV color space. The extracted threshold was applied to the Testing phase to identify the accuracy of qualitative sorting. The process of threshold extraction is represented in Table 3 and relations (4), (5), and (6). The evaluation of the extracted threshold is represented in Table 4.
Threshold Extracting for Total Sorting
For total sorting, the threshold was extracted first. This threshold was based on the relevant sorting factors in quantitative and qualitative sorting. For example, the factors of small size and rejected quality were combined to create the total sorting threshold for the Rejected group, and so on. These factors are represented in Table 5.
Total Sorting
For total sorting, the samples were pre-graded by experts, and then the total sorting threshold was applied through the total sorting algorithm. The result of total sorting was based on the comparison between pre-graded and algorithm-graded samples. The result is shown in Table 6.
Conclusions
For total sorting of potato (Agria variety), quantitative and qualitative sorting were performed. In the first step, for quantitative sorting, experts pre-graded the samples, which were divided into two groups for the training and testing phases. In the training phase the threshold was extracted by applying the area calculation algorithm. The quantitative algorithm was then evaluated on the testing phase.
For the extraction of the qualitative threshold, experts pre-graded the potatoes based on health percentage. The pre-graded samples were divided into two groups for the training and testing phases. The threshold was extracted in the training phase and evaluated in the testing phase. The accuracy of the threshold evaluation in quantitative sorting based on area was 100% in all three groups (Small, Medium, and Large). The accuracy of the qualitative threshold evaluation in the Rejected, Grade 3, Grade 2, and Grade 1 groups was 100%, 96.97%, 89.19%, and 100%, respectively.
For total sorting, the quantitative and qualitative thresholds were combined. Potatoes were pre-sorted by experts based on both quantitative and qualitative factors. Then the accuracy of total sorting was obtained by comparing the expert pre-grading with the machine vision grading. The accuracy of total sorting was 96.823%.
Figure 1. The best combination of color spaces and intensity transformation functions
Table 1. Threshold extraction of quantitative sorting (Training phase)
Table 4. Threshold testing of qualitative sorting (Testing phase) | 3,026.8 | 2012-03-31T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
Three family $Z_3$ orbifold trinification, MSSM and doublet-triplet splitting problem
A $Z_3$ orbifold compactification of $E_8\times E_8^\prime$ heterotic string is considered toward a trinification $SU(3)^3$ with three light families. The GUT scale VEV's of the $SU(2)_W\times U(1)_Y\times SU(3)_c$ singlet chiral fields in two sets of the trinification spectrum allow an acceptable symmetry breaking pattern toward MSSM. We show that a doublet-triplet splitting is related to the absence of a $\Delta B$ nonzero operator.
I. INTRODUCTION
It seems that the family structure of the standard model (SM) is completed with three light families. This observation stems from the recent experiments toward understanding neutrino oscillation, Big Bang nucleosynthesis, and experiments saturating the unitarity triangle. For a long time, the question, "Why are there three light families?", has been the heart of the family problem. In 4-dimensional (4D) field theories, the grand unification idea with a big gauge group was suggested toward this family structure, which is called the grand unification of families (GUF) [1]. For the GUF idea to work from a bottom-up approach, the three different gauge coupling constants observed at the electroweak scale should meet at a grand unification (GUT) scale M_GUT. With the three light families and one Higgs doublet scalar field, they do not meet. But one can make them meet by introducing a number of particles beyond the three-family structure of the SM. One interesting possibility is the particle spectrum of the minimal supersymmetric SM (MSSM) [2].
With the advent of superstring models, the GUF idea seems to be automatically implemented. In particular, the 10-dimensional (10D) heterotic string models need big gauge groups, $E_8\times E_8^\prime$ or $SO(32)$ [3]. Among these, the $E_8\times E_8^\prime$ has attracted particular attention. However, the big gauge group is given in 10D, and one has to hide six internal dimensions to make contact with our 4D world. This process of hiding six internal dimensions is known as "compactification"; it accompanies the breaking of the big 10D gauge group and also generates multi-families in 4D [4,5]. The most serious objective in this compactification has been to obtain the MSSM in 4D. For an $N = 1$ supersymmetry, an internal space with an $SU(3)$ holonomy was suggested first [4]. But a more interesting and easily soluble case is the orbifold compactification [5]. In particular, the $Z_3$ orbifold models with two Wilson lines attracted a great deal of attention because of the multiplicity 3 in the spectrum [6]. Along this line, the standard-like models, which allow three families and $SU(3)_c\times SU(2)\times U(1)^n$ groups, have been extensively studied [6,7].
The standard-like models, however, suffered from the following two problems: (i) the $\sin^2\theta_W$ problem, and (ii) the problem of too many Higgs doublets.
With the MSSM spectrum, it is necessary to assume that the unification value of $\sin^2\theta_W$ is $\frac{3}{8}$ to reconcile with the low energy data on $\alpha_{QCD}$, $\alpha_{em}$ and $\sin^2\theta_W$. The $\sin^2\theta_W$ problem (i) is that it is generally difficult to obtain $\frac{3}{8}$ for the unification value of $\sin^2\theta_W$. The problem of too many Higgs doublets is that the standard-like models have many pairs of Higgs doublets while the MSSM needs just one pair. To solve the above problems, it was recently suggested to unify the standard model in a semi-simple gauge group at the compactification scale so that the electroweak hypercharge does not leak to the $U(1)^n$ factors [8]. In [8], the motivation has been to embed the electroweak hypercharge in semi-simple groups with no need for the adjoint representation (HESSNA). In the HESSNA, the QCD gauge group must already be factored out so that an adjoint representation is not needed. The simplest HESSNA is the $SU(3)^3$ gauge group with the so-called trinification [9] spectrum for one family, given in Eq. (1). This leads us to search for simple $SU(3)^3$ models for HESSNA. In this paper, we present a $Z_3$ orbifold model which leads to a model close to the MSSM below a GUT scale. We also show a correlation between the doublet-triplet splitting and the $\Delta B\neq 0$ operator.

The heterotic string theory has $N = 4$ supersymmetry from the 4D viewpoint. To obtain chiral fermions in 4D, we have to reduce the $N = 4$ supersymmetry down to $N = 1$.
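For reference, the trinification spectrum for one family, referred to above as Eq. (1), is commonly quoted in the following form under $SU(3)_c\times SU(3)_L\times SU(3)_R$; note that the placement of the conjugate factors is a matter of convention and may differ from the one adopted in this paper:

```latex
27 \;=\; (3,\bar{3},1)\;\oplus\;(1,3,\bar{3})\;\oplus\;(\bar{3},1,3)
```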
The $Z_3$ orbifold reduces $N = 4$ down to $N = 1$ when we compactify the six internal dimensions [5]. The six internal dimensions are split into a direct product of three two-dimensional tori $(y_1\!-\!y_2;\; y_3\!-\!y_4;\; y_5\!-\!y_6)$. A $Z_3$ orbifolding of a two-dimensional torus gives three fixed points; thus three $Z_3$ orbifolded tori have 27 fixed points. The 27 fixed points are not distinguishable unless one introduces Wilson lines. The shift vector $V$ and the six Wilson lines $a_i$ $(i = 1,\cdots,6)$ are embedded in the gauge group $E_8\times E_8^\prime$. The Wilson line $a_1$ is transformed into $a_2$ by a $Z_3$ transformation, so we consider only three independent Wilson lines: $a_1 = a_2$, $a_3 = a_4$, $a_5 = a_6$ [10].
The model we study here is defined by the shift vector $V$ and Wilson lines given in Eq. (2) (a precursor of the present model with $V$ and $a_1$ was given before [8,11]); such an embedding is allowed in superstring orbifolds. For the conditions it must satisfy, see Ref. [10]. The unbroken gauge group is $SU(3)^5\times U(1)^6$; here, however, we assume that the six $U(1)$'s are broken by VEV's of $SU(3)^5$ singlet fields at the string scale. Below the string scale, the effective gauge group is hence $SU(3)^5$, and the invariance under the nonabelian gauge group is our main concern in this paper.
In HESSNA, one does not have to know the extra U(1) quantum numbers to pinpoint the electroweak hypercharge.
Thus, in the observable sector, this compactification leads at low energy to an $N = 1$ effective field theory with gauge group $SU(3)^3$ and three copies of the trinification spectrum (1). The massless chiral fields are presented in Table I, obtained with the well-known method [10,6,8]. Because there are nine twisted sectors, the multiplicity in one twisted sector is 3. Because of $Z_3$, the chiral fields of the untwisted sector also have multiplicity 3. These are the bases for three chiral families. Note that the fields in the nine twisted sectors of Table I form vectorlike representations which can be removed at a GUT scale. Therefore, we will be interested in the 3 copies of the trinification spectrum appearing in the untwisted sector.
In many aspects of low energy physics, it is similar to an $E_6$ model with three families of 27. In the present model, however, the electroweak gauge group and $SU(3)_c$ are already split, and we do not need an adjoint representation for the symmetry breaking [8].
When one blows up the fixed points and obtains a smooth Calabi-Yau manifold with an $SU(3)$ holonomy, one $SU(3)$ factor from the orbifold is identified with the $SU(3)$ holonomy and is removed from the low energy gauge group [5]. We can identify one of the $SU(3)$'s in the hidden sector for this purpose if we wish.
III. THE MINIMAL SUPERSYMMETRIC STANDARD MODEL
To obtain the low energy effective theory, the MSSM, we must break the $SU(3)^3$ gauge group down to the SM gauge group. Let us represent the trinification fields of (1) with indices $M$, $I$, $\alpha$ of the three $SU(3)$ gauge groups. The symmetry breaking is achieved by giving VEV's to the scalar partners of the three-family trinification fields. In the first step of the symmetry breaking (6), 9 Goldstone bosons are absorbed through the Higgs mechanism by the gauge bosons. These are contained in $H_d$, $H_u$, $l$, $N_5$, $e^+$, and $N_{10}$. In the second step of (6), 3 further Goldstone bosons are absorbed by the gauge bosons through the Higgs mechanism. The resulting gauge group is the SM gauge group and must be anomaly free. The study of this symmetry breaking pattern is not trivial, and one must consider the two steps of (6) together. With only one (3, 3, 1) representation the required breaking cannot be achieved; we need at least two (3, 3, 1) representations, which are supposed to be scalar partners of two out of the three copies of (3).
After the Higgs mechanism, the remaining SM fields are linear combinations of the fields arising in (3); we can then redefine the fields so that the SM fields are renamed, together with the remaining fields from the two sets of (3). To discuss the light spectrum more concisely, let us utilize the $N = 1$ supersymmetry explicitly. The possible cubic terms among the untwisted sector fields are those of Eq. (7) [13], where $a, b, c$ are the family indices. Note that we consider only the $SU(3)^3$ symmetry.
IV. DOUBLET-TRIPLET SPLITTING
For the MSSM, we need a pair of Higgs doublets. But if the coupling (7) is completely general, we cannot achieve this objective, since $H_u$ and $H_d$ of the third family, not participating in the GUT group breaking, will be heavy. We need a fine-tuning to keep them light.
But this fine-tuning is correlated with a $\Delta B \neq 0$ operator.
Before showing the doublet-triplet splitting explicitly, we point out that the resolution of this doublet-triplet splitting problem in the flipped $SU(5)$ model [12] heavily relies on the absence of the $H_d H_u$ coupling. It is the familiar $\mu$ problem, and it can be solved by introducing a Peccei-Quinn symmetry [14]. But in string theory, we can see that the $H_d H_u$ term cannot arise at the tree level. Since both $H_d$ and $H_u$ belong to (3) in our compactification, a candidate term for $H_d H_u$, i.e. the bilinear $(3, 3, 1)\cdot(3, 3, 1)$ among the light fields, is forbidden by the gauge symmetry. In addition, however, the cubic coupling $(3, 3, 1)\cdot(3, 3, 1)\cdot(3, 3, 1)$ among the light fields must be forbidden to remove the $H_d H_u$ coupling at a GUT scale, because $H_d H_u$ can arise after giving a VEV to $N_5$ or $N_{10}$. Below we show that this can be realized by a fine-tuning, but this fine-tuning must be dictated by a $\Delta B \neq 0$ operator.
The VEV's of $N_5$ and $N_{10}$ allow the following two types of nonvanishing mass terms.
The first possibility comes from $SU(3)^3$ singlets obtained by taking three different humors, and the second possibility comes from $SU(3)^3$ singlets obtained by picking up the same humor from $\Psi_a$, $\Psi_b$, and $\Psi_c$. In general, both possibilities are present. In the discussion of the GUT symmetry breaking, we allowed both of these couplings. Below, we mainly focus on the couplings of the third family.
The first possibility gives masses to $D$ and $\overline D$. For example, for $\langle N_{10}\rangle$ (3rd family) $=\tilde V_1$, we obtain a mass term $\overline{D}\,M_D\,D$. The lepton-, quark-, and antiquark-humors $\Psi_l$, $\Psi_q$, and $\Psi_a$ are represented as a singlet and a doublet under the permutation of $\{l, q, a\}$ [15],
Note that $\mathrm{Det}\,M_D$ determines whether $D$ and $\overline D$ acquire GUT scale masses. Here $\omega$ and $\bar\omega$ are the cube roots of unity, $\omega = e^{2\pi i/3}$, $\bar\omega = e^{4\pi i/3}$. Note that $\Psi_0$ is a singlet under the permutation of $l, q, a$. On the other hand, $\Psi_\pm$ goes into a multiple of $\Psi_\mp$. Thus, $\Psi_+$ and $\Psi_-$ form a doublet under the permutation, which we can represent as $\Psi_{\rm doublet} \equiv (\Psi_+, \Psi_-)^T$.
The $S_3$ invariant cubic couplings are $\Psi_0^3$ and $\Psi_0\Psi_+\Psi_-$; in terms of humors, these can be rewritten explicitly. The above couplings include the so-called R-parity violating couplings of the MSSM. In particular, the $\Delta B\neq 0$ operator $u^c d^c d^{\prime c}$ (the so-called $\lambda''$ coupling) is dangerous. It is contained in $\Psi_a^3$. To remove this $\Delta B\neq 0$ coupling $\Psi_a^3$, we fine-tune the $\Psi_0^3$ and $\Psi_0\Psi_+\Psi_-$ couplings such that they have the same magnitude but opposite signs. Then, in the resulting $S_3$ invariant coupling, the terms not allowed by the gauge invariance are excluded. Thus, the phenomenological requirement of proton stability excludes the $H_d H_u$ allowing term $\Psi_l^3$ (the second possibility), and hence $H_d$ and $H_u$ are left as light particles. Furthermore, the coupling allows the first possibility, i.e. the coupling chooses different humors in the cubic terms, and hence removes the color triplets $D$ and $\overline D$, realizing the doublet-triplet splitting.
If this argument is applied to the first two families, we will end up with two pairs of Higgs doublets, one pair too many. We must remove one more pair, but then we must allow a $\lambda''$ coupling. A sizable $\lambda''$ for the $t$ quark family is not very strongly forbidden phenomenologically (for proton decay, the product $\lambda'\lambda''$ is constrained). To obtain a phenomenologically acceptable MSSM, we may require this kind of fine-tuning, forbidding the same-humor coupling, among the two lighter families, but allow an $O(1)$ same-humor coupling for the $t$ family.
V. CONCLUSION
In conclusion, we constructed a $Z_3$ orbifold trinification model with three light families and showed that the symmetry breaking leads to a spectrum close to the MSSM. The discussion on keeping one pair of $H_u$ and $H_d$ light needed a fine-tuning in this paper, but this fine-tuning has been shown to be correlated with the absence of the $\Delta B\neq 0$ operator $u^c d^c d^{\prime c}$. It will be very interesting if this fine-tuning is naturally obtained. | 3,257.8 | 2003-05-01T00:00:00.000 | [
"Physics"
] |
Magnetic Properties of Polyvinyl Alcohol and Doxorubicine Loaded Iron Oxide Nanoparticles for Anticancer Drug Delivery Applications
The current study emphasizes the synthesis of iron oxide nanoparticles (IONPs) and the impact of the hydrophilic polymer polyvinyl alcohol (PVA) coating concentration as well as anticancer drug doxorubicin (DOX) loading on the saturation magnetization for targeted drug delivery applications. Iron oxide nanoparticles were synthesized by a reformed version of the co-precipitation method. The coating of polyvinyl alcohol along with doxorubicin loading was carried out by the physical immobilization method. X-ray diffraction confirmed the magnetite (Fe3O4) structure of the particles, which remained unchanged before and after polyvinyl alcohol coating and drug loading. Microstructure and morphological analysis was carried out by transmission electron microscopy, revealing the formation of nanoparticles with an average size of 10 nm with slight variation after coating and drug loading. Transmission electron microscopy, energy dispersive, and Fourier transform infrared spectra further confirmed the conjugation of the polymer and doxorubicin with the iron oxide nanoparticles. The room temperature superparamagnetic behavior of the polymer-coated and drug-loaded magnetite nanoparticles was studied by a vibrating sample magnetometer. The variation in saturation magnetization after coating indicated that 3 wt.% is a sufficient amount of polyvinyl alcohol with regard to the externally controlled movement of IONPs in blood under an applied magnetic field for in-vivo targeted drug delivery.
Introduction
Drug loaded nanoparticle (NP) based cancer therapy possesses the potential to overcome the toxicity of the drug and its poor dose control when custom combination therapies are employed [1][2][3][4]. All the drugs used for cancer therapy have some side effects, which usually arise due to the non-specificity of drug action. For instance, in tumor therapy, side effects of cytotoxic drugs like bone marrow depression and reduction in immunity could be harmful to the extent that therapy termination becomes mandatory [5,6].
Precise and targeted delivery of drugs to the tumor has received much attention in order to address the limitations of traditional therapies; therefore, the search for new drug delivery methods has been prompted in the last few years. Chemotherapeutic agents selectively targeted to the tumor can provide more effective cancer therapy. The problems associated with conventional chemotherapy can be avoided by using a magnetic drug delivery system, i.e. IONP carriers targeted by an external magnetic field [7][8][9][10][11]. Alexiou et al. showed that magnetically targeted drug delivery can completely reduce the tumor in rabbits without any side effects. Furthermore, the applied drug dose could be cut down compared to the one used regularly [7,12].
The development of IONPs for targeted drug delivery needs to address issues such as the size and shape of the particles, surface coating and drug loading, the in-vivo distribution of the particles, and most importantly the magnetic behavior of these particles [13,14]. Issa et al. reported in 2011 that clusters of nanoparticles formed in the blood stream due to the high surface-to-volume ratio. The absorption of plasma proteins on the nanoparticle surface results in activation of the clearance mechanism by macrophage cells before the particles approach the site to be targeted [15]. Recently, Obaidat et al. and Issa et al. presented concise reviews on surface effects in magnetic nanoparticles concerned with biomedical applications and with efficient hyperthermia, respectively [14,16]. Previously, the authors reported that IONPs without any surface modification can stay in the blood for almost 4 hours and can evade the reticulo-endothelial (RES) system [17]. The only issue with these particles is that, without surface modification, they are unable to load a sufficient amount of drug [18].
Various kinds of biocompatible and hydrophilic polymers, including dextran [19], polyethylene glycols (PEG) [20], PVA [21] and polyvinyl pyrrolidone (PVP) [22], can be used to functionalize the surface of IONPs to increase the circulation time and drug loading. The surface modification affects the magnetic properties of the IONPs. The magnetic properties, mainly the saturation magnetization, depend on the size of the IONPs and on surface effects that become significant as the size decreases [14,23].
In most of the established medical applications, 10 nm is the preferred size of nanoparticles. At this size limit, the magnetic energy of the nanoparticles is minimized to the extent that they become single magnetic domains. Obaidat et al. observed the superparamagnetic behavior of nanoparticles synthesized at the above mentioned scale without surface modification, and their fast response to an applied magnetic field while keeping the remanence and coercivity negligible [16]. However, surface modification of nanoparticles can contribute to varying the size and magnetic behavior. Surface spin disorder was reported to occur in IONPs and lead to high magnetic anisotropy [24], which was initially explained in terms of a dead magnetic layer at the surface [25] and afterwards in terms of disordered surface spins [26,27]. On the application of high magnetic fields, this disordered surface spin in nanoparticles could be a hurdle to attaining saturation [28,29]. The reduction in saturation magnetization with increasing PVA concentration was also reported by Kayal et al. [30]. The dilution effect of absorbed water, whether it comes from PVA or DOX, along with the hydroxyl content (-OH), strongly affects the surface of the nanoparticles by distorting the alignment of surface spins [31].
Several studies have presented the synthesis and characterization of PVA coated iron oxide nanoparticles, but the upper limit of the PVA concentration has not yet been reported. In the present work, PVA is chosen for coating the magnetic nanoparticles, and doxorubicin (DOX) is used as an anti-cancer agent. The main goal and novelty of the present work is to conjugate DOX with PVA coated (3 and 6 wt.%) iron oxide nanoparticle carriers, to study the magnetic behavior for in-vivo targeted drug delivery, and to estimate judiciously the maximum concentration of PVA, in terms of the resulting saturation magnetization, that still allows the controlled movement of IONPs in blood under an applied magnetic field. Generally, no minimum saturation magnetization has been reported so far regarding the controlled movement of IONPs in blood, but at much smaller values of magnetization the external magnetic field required for controlled movement might reach a very high value [32].
Experimental Section
Chemicals
All reagents, iron (II) chloride, iron (III) chloride, sodium hydroxide and polyvinyl alcohol were purchased from MD Interactive Enterprise (JM051032-V) Malaysia and used without further purification. De-ionized water was used as a solvent in all experiments. Doxorubicin hydrochloride (DOX-HCl) was used for the drug loading.
Instruments
A Bruker D8 Advance Diffractometer with Cu-Kα (λ = 1.5406 Å) radiation was used for the X-ray powder diffraction measurements. XRD data were recorded across a 2θ range of 20° to 80° using a step size of 0.02°. TEM, HRTEM, and selected area electron diffraction (SAED) images were collected using a JEM-2100 transmission electron microscope with an accelerating voltage of 200 kV. Compositional analysis was carried out by EDS. A Lake Shore 7407 Vibrating Sample Magnetometer was used to obtain the magnetic measurements at room temperature with an applied magnetic field ranging from 0 to 10 kOe. FTIR spectroscopy (PerkinElmer Spectrum) with a resolution of 4 cm-1 was used to investigate the binding energies of all the PVA coated and DOX loaded samples at room temperature.
Syntheses of uncoated and polymer (PVA) coated iron oxide nanoparticles
Uncoated iron oxide nanoparticles were synthesized by the co-precipitation method. The synthesis methodology was the same as reported in the authors' previous work published elsewhere [33]. Firstly, a 240 mL solution of FeCl2.4H2O (0.6 M) and FeCl3.6H2O (1.08 M) was prepared in deionized water. The prepared solution was stirred at 600 rpm for 25 min to achieve complete dissolution of the reactants at 60°C. The obtained solution has iron (II) chloride and iron (III) chloride in a molar ratio of 1:1.8. Afterward, this solution was poured drop wise into a separately prepared 400 mL solution of NaOH (1.6 M) and kept under stirring (800 rpm) for 25 min at 60°C. As the reaction proceeded, a black precipitate of IONPs formed but remained suspended in the base solution having a pH of 11. The black precipitate was collected in a beaker with the help of a strong permanent magnet and washed four times with deionized water. All the residues were removed by washing, and the final pH of the uncoated IONP suspension lowered to 7.2. The particles were then dried in a vacuum oven at 40°C before characterization. It is worth mentioning that, to avoid oxidation of iron from Fe2+ to Fe3+, the synthesis was performed with a slightly low ratio of Fe3+ rather than utilizing a nitrogen environment as reported earlier [30].
The coating of PVA on IONPs was performed with slight modifications of the reformed method proposed by Kayal et al. [30]; 3 wt.% PVA coating was achieved at 80°C by adding one gram of dried nanoparticles and three grams of PVA to ninety-six grams of DI water, and complete dissolution of PVA was achieved under vigorous stirring. The final solution had a ratio of IONPs:PVA = 1:3, and the temperature was then allowed to decrease by switching off the temperature controller. The solution was kept under stirring for 20 hours at room temperature in order to achieve the coating. The 3 wt.% PVA functionalized IONPs were separated with a permanent magnet and washed three times with deionized water. Finally, the pure PVA coated IONPs were separated and dried at 35°C after removal of all the residues by washing. The 6 wt.% PVA coated IONPs were obtained by repeating the above mentioned procedure with only one variation, i.e. a ratio of IONPs:PVA = 1:6. The aqueous suspension of the prepared uncoated nanoparticles (on the left) and the PVA coated IONPs separated with a 200 nm filter (on the right), and the effect of an external magnetic field, are illustrated in Fig 1.
Loading of doxorubicin on uncoated and PVA coated iron oxide nanoparticles
Doxorubicin hydrochloride (DOX-HCl) anticancer drug was loaded onto the uncoated and 3/6 wt.% PVA coated IONPs. The DOX loading of IONPs was carried out by dissolving 10 mg of DOX in 100 ml of distilled water by shaking with an orbital shaker for 10 minutes, followed by the addition of 50 mg of IONPs and vigorous stirring (300 rpm) at 25°C for 22 hours [34]. A strong permanent magnet was used to separate the DOX loaded IONPs, which were then dried in a vacuum oven at 40°C. Fig 2 shows the aqueous suspension of the DOX loaded PVA coated IONPs and the effect of an external magnetic field on them.
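As a quick consistency check of the coating recipe above (assuming the weight percentage is defined with respect to the total mass of the coating mixture), the quoted 3 wt.% follows directly from the masses used:

```latex
\mathrm{PVA~wt.\%} \;=\; \frac{m_{\mathrm{PVA}}}{m_{\mathrm{IONP}}+m_{\mathrm{PVA}}+m_{\mathrm{H_2O}}}\times 100\%
\;=\; \frac{3~\mathrm{g}}{1~\mathrm{g}+3~\mathrm{g}+96~\mathrm{g}}\times 100\% \;=\; 3~\mathrm{wt.\%}.
```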
Results and Discussion
Crystallographic structure and Morphology
The crystallinity of the complexes decreases due to the "masking" effect of the polymer. Fig 3B-3E shows the broadening of the XRD peaks due to a reduction in the crystallite size of the PVA coated and DOX loaded IONPs [5,35].
The microstructure of the nanoparticles was further studied in detail with the help of transmission electron microscopy (TEM), high resolution transmission electron microscopy (HRTEM), and selected area electron diffraction (SAED). The HRTEM image (Fig 4A) resolves the lattice planes of the particles, and the SAED pattern (Fig 4B) shows a set of spots obtained by diffraction of electrons from various planes of cubic Fe3O4 [36].
Compositional analysis
The compositional analyses of the complexes were carried out by obtaining the energy dispersive spectra (EDS) shown in Fig 5. The EDS analysis confirms that iron, carbon, and oxygen are the core elements in the uncoated, PVA coated, and DOX loaded IONPs. The weight and atomic percentages along with the binding energies of iron, oxygen, and carbon in the IONP complexes are depicted in Fig 5A-5C. Fig 5(A) reveals that the wt.% of iron and oxygen in the uncoated IONPs are 71.14 and 28.86, respectively. The data obtained from the EDS spectra are also presented in the form of a chart and table (insets of Fig 5A-5C) for the reader's convenience. Fig 5(B) shows the wt.% of iron, oxygen, and carbon in the 3 wt.% PVA coated IONPs as 62.06, 18.08 and 19.86, respectively. For the DOX loaded 3 wt.% PVA coated IONPs, the wt.% of iron, oxygen and carbon is 41.17, 21.60 and 30.03, respectively, as shown in Fig 5(C). The presence of PVA and the confirmation of DOX loading are evident from the decrease of the iron peaks and the appearance of an additional carbon peak, along with the ratios between the atomic and weight percentages of carbon and oxygen (Fig 5B and 5C). The appearance of copper comes from the copper grid used for sample preparation. The binding energies of iron for all the complexes are represented by peaks observed at energy values of 0.7, 6.5, and 7 keV [37,38].
Conjugation of drug and polymer attachment with IONPs
FTIR is a suitable technique to identify the attachment of the polymer (PVA) to the IONPs and the conjugation of the drug (DOX) with the PVA coated IONPs. The FTIR spectrum of the 3 wt.% PVA coated IONPs after DOX loading is shown in Fig 6(E). The N-H stretching vibrations (3267 cm-1) appear in the DOX loaded sample and overlap with the alcoholic O-H band (3375 cm-1) observed in the case of the 3 wt.% PVA coated IONPs [41]. By comparing the FTIR spectrum of pure DOX with that of the DOX loaded 3 wt.% PVA coated IONPs, one can conclude that the shift of the band at 3375 cm-1 to the lower frequency of 3267 cm-1 confirms the conjugation of DOX with the PVA coated IONPs. This conjugation can be further explained by the interaction of the -OH groups of PVA with the -NH2 groups of DOX via hydrogen bonding. The N-H vibrations in the DOX loaded 3 wt.% PVA coated IONPs appear at a lower frequency, i.e. 865 cm-1, compared to that observed in pure DOX at 885 cm-1, which further validates the above mentioned conjugation [42].
Size effect
For in-vivo bio-distribution, the size of the nanoparticles has major significance, as it decides the routes and the retention time of these particles in the body. The size, surface morphology, coating, and size distribution of the synthesized nanoparticle complexes were studied comparatively in detail with a transmission electron microscope (TEM). The TEM images (Fig 7A-7C) show the average diameter of the nanoparticles for all complexes. For uncoated IONPs, the TEM image (Fig 7A) discloses that agglomeration is still present even after dispersion in ethanol and that the shape of the nanoparticles is not completely spherical. The average diameter obtained for the uncoated IONPs is 11.018 nm. The ability of PVA molecules to disperse the IONPs when coated with the polymer (3 wt.% PVA) is confirmed by the observation of well individualized, PVA wrapped IONPs with a spherical shape (Fig 7B). PVA coated nanoparticles with an average diameter of 9.644 nm are obtained, and one can conclude that PVA helps to avoid agglomeration and reduces the particle size to some extent, as reported earlier [13,43]. It is important to mention here that, when DOX is loaded onto the PVA coated IONPs, their average diameter is observed to increase slightly through the conjugation of DOX and becomes 10.42 nm (Fig 7C). This conjugation of DOX with the PVA coated nanoparticles is further confirmed by the existence of 2.5-3.5 nm thick layers around the particles. Thus the size of the IONPs obtained in the present study is small enough to escape engulfment by the reticulo-endothelial system (RES) of the body or by circulating macrophages, giving a better therapeutic efficiency due to the increased residence time in the blood [44], and the IONPs can behave as a "single super spin", which is good for magnetically targeted drug delivery [37].
Magnetic properties
The magnetic study of all samples reveals that the IONPs exhibit superparamagnetic behavior, since the hysteresis curves (Fig 8) show the absence of remanence and coercivity. The observed high magnetic response of these particles originates from the size dependent magnetic properties, as expected. The existence of a single magnetic domain in the IONP complexes, when the particles are small enough, results in the novel property termed "single super spin" [45][46][47]. The saturation magnetization (Ms) of the uncoated, PVA coated, and DOX loaded IONPs was observed at an applied field of 10000 Oe. The saturation magnetization of the uncoated IONPs (Fig 8A) has a value of 70.07 emu/g, less than that of bulk magnetite, 88 emu/g [48][49][50]. The saturation magnetization of the DOX loaded IONPs before polymer coating is about 62 emu/g, which is less than that of the uncoated IONPs. For the 3 and 6 wt.% PVA coated IONPs, the saturation magnetization is found to be reduced significantly to values of 54.42 and 35.07 emu/g, respectively. The rapid decrease in Ms with increasing concentration of the PVA polymer coating has been established earlier [51]. This is due to the -OH groups of PVA and the dilution effect from adsorbed water. TEM shows that after PVA coating the size of the particles reduces, and the surface effects might cause Ms to decrease [31]. The literature suggests that, if a surfactant is coated on IONPs, the surfactant layer can be considered as a dead layer at the surface, and the observed reduction in magnetization would be caused by the quenching of surface moments [52]. Boyer et al. [53] proposed that the hydroxyl environment of blood might also result in a reduction of the saturation magnetization of nanoparticles during in-vivo studies. However, DOX loading on IONPs can change the alignment of surface atomic spins through broken exchange, which ultimately reduces the coordination between surface spins [35]. Thus, drug loading and polymer coating on IONPs can reduce the Ms to 26.45 emu/g, but the magnetic behavior remains unchanged, i.e. superparamagnetic. The observed magnetic characteristics of the IONPs are adequate for delivering the drug to the target site when these particles are guided magnetically.
Conclusions
Superparamagnetic IONP suspensions in ultra-pure water were prepared with an effective and simple scheme employed in the reformed co-precipitation method. To avoid oxidation of iron from Fe2+ to Fe3+, the synthesis was performed with a slightly low ratio of Fe3+ rather than utilizing a nitrogen environment. TEM clearly indicates a mean particle size of ~10 nm for the iron oxide particles. High resolution transmission electron microscopy (HRTEM) and selected area electron diffraction (SAED) confirm the crystalline nature of the IONPs. PVA coating and DOX loading were confirmed by XRD, TEM, EDS and FTIR. The saturation magnetization, being the most important factor for targeted drug delivery via IONPs guided by an external magnetic field, has been optimized as a function of the PVA concentration, and the magnetic studies suggest that a PVA concentration of 3 wt.% after DOX loading results in a sufficiently high saturation magnetization. At higher concentrations of PVA, one may lose control over the drug delivery via the external magnetic field due to the reduction in the saturation magnetization; it would be difficult to guide particles having smaller magnetizations in blood vessels, where the environment differs completely depending on skin, tissue, etc. The synthesized IONPs seem to have potential applications in the diagnostic and therapeutic fields of biomedicine. | 4,341.4 | 2016-06-27T00:00:00.000 | [
"Materials Science",
"Medicine"
] |
CAV Traffic Control to Mitigate the Impact of Congestion from Bottlenecks: A Linear Quadratic Regulator Approach and Microsimulation Study
This work investigates traffic control via controlled connected and automated vehicles (CAVs) using novel controllers derived from the linear-quadratic regulator (LQR) theory. CAV-platoons are modeled as moving bottlenecks impacting the surrounding traffic with their speeds as control inputs. An iterative controller algorithm based on the LQR theory is proposed along with a variant that allows for penalizing abrupt changes in platoons speeds. The controllers use the Lighthill-Whitham-Richards (LWR) model implemented using an extended cell transmission model (CTM) which considers the capacity drop phenomenon for a realistic representation of traffic in congestion. The impact of various parameters of the proposed controller on the control performance is analyzed. The effectiveness of the proposed traffic control algorithms is tested using a traffic control example and compared with existing proportional-integral (PI)- and model predictive control (MPC)- based controllers from the literature. A case study using the TransModeler traffic microsimulation software is conducted to test the usability of the proposed controller as well as existing controllers in a realistic setting and derive qualitative insights. It is observed that the proposed controller works well in both settings to mitigate the impact of the jam caused by a fixed bottleneck. The computation time required by the controller is also small making it suitable for real-time control.
I. INTRODUCTION
The advent of Connected and Autonomous Vehicle (CAV) technology has led to the opening of unforeseen avenues in the field of traffic control [1].Previously, control was restricted to using actuators that were fixed in space such as variable message signs [2], or boundary flow controllers [3].Compared to that, control using CAVs offers greater flexibility as it allows actuators to move in space in a desired manner, therefore, allowing them to be present at desired locations at desired times.In addition to that, using CAVs is relatively cheaper than using fixed actuators which need to be specifically deployed only for the single purpose of traffic control.CAVs on the other hand can be used for several applications like sensing or avoiding hazards due to dangerous driving behavior in their surrounding traffic [4].Also, it can be sometimes difficult to enforce control through traditional fixed actuators like speed limit signs as they can face the issue of low compliance from drivers in some communities.This can also be avoided with the use of CAVs in the traffic stream whose physical presence ahead of drivers would make it impossible for them to avoid the control.
Given the advantages mentioned above, it is essential to explore this newfound potential of traffic control via CAVs by developing new control methodologies that treat CAVs as moving actuators.In this work, we consider the problem of maximizing the mean speed of traffic through traffic jam dissipation by controlling the speed of CAV-platoons entering the road stretch at predefined time intervals using a Linear-Quadratic Regulator (LQR) methodology.CAV-platoons are treated as rolling roadblocks that block the entire flow of traffic at their location.While the general problem of traffic control by controlling the speed of CAV-platoons has been previously explored, the main focus of this work is on proposing and investigating a new controller implementation for this problem in the LQR framework which so far has not been explored in the literature and further compare it with existing approaches from the literature in terms of performance and computational tractability for real-time control.
Several studies in the past decade have considered the problem of moving-bottleneck control of traffic to improve traffic flow.Here a moving-bottleneck implies a reduced flow area that moves along the highway stretch such as that created by slow-moving vehicles.Traditionally, moving bottlenecks are assumed to only partially block the highway cross-section thus allowing part of the traffic to pass by.Unlike that, in this work, the CAV-platoons are considered to block the entire flow of traffic at their location.In [5], the authors have proposed a Proportional-Integral (PI)-type feedback regulator to perform traffic control by controlling the speed of CAV-platoons arriving on the considered highway stretch.While PI-based controllers can produce the desired improvements in traffic flow when coupled with certain arbitrary constraints on the vehicle speeds, in general, they do not guarantee optimal control, and as shown in this study can also result in undesirable control if specific arbitrary bounds on the controlled speeds are removed.In [6], the authors propose a model predictive control (MPC)-based speed control algorithm to control the traffic via CAV-platoons subject to the travel time reduction.They solve a nonlinear optimization problem by means of the interior-point algorithm [7] implemented in MATLAB.It considers an extended version of the first-order traffic dynamics [8], [9] considering the capacity drop phenomenon.Their proposed speed control algorithm is optimal and works well with longer prediction horizon lengths.However, solving a nonlinear optimization problem at each time step is highly time-consuming, especially for extensive networks with several links and junctions, as it requires performing the simulation several times, and therefore can be infeasible for real-time control.In both [5], [6], CAV-platoons are assumed to block the entire flow of traffic at their location.An approach for controlling the speed of the moving-bottlenecks to reduce the overall fuel consumption of the traffic stream is presented in [10] which utilizes the wavefront tracking approach to model the interaction between the controlled vehicles and the surrounding traffic described by a first-order traffic model.In comparison to the current study, [10] does not consider the capacity drop phenomenon, and also the fuel-consumption-based control approach cannot be extended for traffic flow improvements.Besides these articles which deal explicitly with the control aspect of moving-bottlenecks in traffic streams, there are also studies that dive deeper into the accurate modeling of traffic flow dynamics in the presence of moving-bottlenecks at the macroscopic level such as [11]- [15].Note that here we are only interested in studies that use CAV-platoons as moving-bottlenecks.Readers are referred to [1] for an extensive review of various other use cases associated with CAVs in the realm of traffic control.
In this work, we utilize the traffic model presented in [6], which incorporates the capacity drop phenomenon, as it allows for realistic control. The authors in [6] present an MPC-based controller to address the problem of moving-bottleneck control of traffic using CAVs. To overcome the time requirement issue of the MPC-based control algorithm and to strike a balance between the quality of the speed control algorithm and its computational requirements, we formulate the traffic control problem in the form of an LQR-based optimization problem which regulates the states around an equilibrium point while utilizing the structure of the state-space dynamics of the system. To solve the LQR optimization problem, we use the Gauss-Newton LQR (GN-LQR) algorithm, which has a time-varying structure since we have to deal with the nonlinearity of the traffic dynamics model via a linear time-varying (LTV) system obtained from the linearization process. Due to the complicated structure of the nonlinearity of the traffic dynamics model corresponding to certain states, we cannot utilize the classic analytic/symbolic methodology to calculate the Jacobian-based state-space matrices of the linearization. Thus, in those cases, we utilize a numerical methodology developed in [16] to numerically calculate the Jacobian-based state-space matrices of the linearization process.
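As an illustration of what such a numerical linearization can look like, the sketch below computes central-difference Jacobians of a generic discrete-time update x+ = f(x, u) in Python. It is only a generic finite-difference scheme, not necessarily the specific methodology of [16], and the step size would need care near the kinks introduced by the minimum functions and conditional statements of the traffic model.

```python
import numpy as np

def numerical_jacobians(f, x, u, eps=1e-6):
    """Central-difference Jacobians of x+ = f(x, u) with respect to x and u.
    Generic sketch; the scheme of Ref. [16] may differ, and accuracy degrades
    near the non-smooth points of the traffic dynamics (min functions, conditionals)."""
    nx, nu = len(x), len(u)
    fx = np.zeros((nx, nx))
    fu = np.zeros((nx, nu))
    for i in range(nx):
        dx = np.zeros(nx); dx[i] = eps
        fx[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    for j in range(nu):
        du = np.zeros(nu); du[j] = eps
        fu[:, j] = (f(x, u + du) - f(x, u - du)) / (2 * eps)
    return fx, fu
```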
The standard LQR approach and its variants have been used for different control problems in traffic engineering for instance in [17]- [19].However, in the context of the present traffic control problem using CAV-platoons as moving-bottlenecks, the aforementioned version of LQR is novel.
Besides this, the traffic control studies that address the moving-bottleneck-based control of traffic such as [5], [6], [10] have only been carried out using macroscopic traffic simulations.While macroscopic traffic models are attractive due to their robustness and scalability, they are not always realistic which imposes questions on whether such controllers which use macroscopic traffic models at their core are useful in the real-world setting.To address this gap, we also present a microscopic traffic simulation-based case study that tests the proposed LQR-based controller under realistic settings and tries to address questions about the usability and corresponding gaps in the application of such controllers in the real world.
Given the main research gap in this area is the absence of an optimal controller offering fast computation of controls for real-time moving-bottleneck control of traffic and the absence of a study on the moving-bottleneck controllers under realistic settings, the present study makes the following contributions: 1) An LQR-based controller design with macroscopic model dynamics is proposed to control the speeds of CAV-platoons allowing for mitigation of the effect of jam-forming bottlenecks in the traffic stream.The LQR-based controller uses the structure of the state-space matrices of the traffic dynamics system and does not require performing repeated simulations for control, therefore requiring less computation time.The impact of various parameters of the LQR-based controller is investigated with respect to its performance in solving the given problem.2) A variant of the LQR-based controller allowing for a penalty on large changes in control inputs over consecutive time steps is developed and shown to reduce the magnitude of fluctuations in the controlled speeds allowing for safe and realistic control.3) We present a comparison of the proposed LQR-based controllers with existing MPC-based [6] and PI-based [5] controllers from the literature in terms of computational tractability and performance using macroscopic simulation.The proposed LQR-based controllers are observed to perform similarly to the existing controllers in terms of improvement in traffic conditions and outperform them in terms of computation time by about two orders of magnitude in the given traffic scenario (with PI-based controllers this is true when the controller requires tuning the controller gains in real-time).
4) The performance of the proposed LQR-based controller is further investigated using a microscopic traffic simulation setup and compared with the application of existing controllers to the same setup to assess its applicability and utility under realistic settings of traffic flow.The remainder of the article is organized as follows-Section II describes the traffic dynamics model used in this work.The problem statement with the LQR-based solution scheme and algorithms is presented in Section III.Section IV proposes research questions related to the problem, analyzes the proposed approach in a macroscopic setting, and compares it with existing approaches from the literature, followed by a microsimulation-based case study on a similar setup using the proposed and existing controllers.The article is concluded with Section V which presents preliminary answers to the proposed research questions and proposes directions for future work.
Notations: We denote vectors and matrices by lowercase and uppercase bold symbols, respectively. The set of m-dimensional real-valued vectors and of n × p real-valued matrices are denoted by $\mathbb{R}^m$ and $\mathbb{R}^{n\times p}$, respectively. The identity matrix of dimension q is represented by $I_q$. The vector/matrix transpose is denoted by $(\cdot)^T$. Positive semi-definiteness and positive definiteness are represented by $\succeq 0$ and $\succ 0$, respectively. The set-theoretical minimum operator is denoted by min. The dependency on the discrete-time index k is shown by [k]. The prefix δ added to any time-varying quantity represents the linearized LTV dynamics value, i.e., the difference between the nonlinear dynamics value and the corresponding equilibrium value.
II. TRAFFIC DYNAMICS MODEL
Here, we present the state-space formulation for the traffic dynamics model considered in this work. The flow of traffic across a highway stretch with no on-ramps or off-ramps is modeled using the first-order LWR model [8], [9] while accounting for the capacity drop phenomenon [20], [21]. The model is implemented using a Godunov scheme [22] which was proposed previously in [6], [23] and is an extension of the classical Cell Transmission Model (CTM) implementation proposed in [24]. Within this, the highway stretch is divided into $N_L$ segments of equal length $L$ (km) and the time horizon is divided into $N_T$ smaller durations of $T$ (sec) each, such that the Courant-Friedrichs-Lewy (CFL) condition [25], $T \le L/v_f$, is satisfied, where $v_f$ refers to the free-flow speed of traffic. Let $N_{CAV}$ be the total number of controlled CAV-platoons currently on the modeled highway stretch. The traffic dynamics model is given by the conservation update
$$\rho_i[k+1] = \rho_i[k] + \frac{T}{L}\big(\phi_{i-1}[k] - \phi_i[k]\big), \quad \forall i \in \{1, \dots, N_L\},$$
where $\rho_i[k]$ represents the traffic density (vehicles per unit length) in Segment $i$ at time index $k$, and $u[k] \in \mathbb{R}^{N_{CAV}}$ denotes the control input, given as
$$u[k] = \big[u_1[k], \dots, u_{N_{CAV}}[k]\big]^T, \qquad (1)$$
where $u_j[k]$, $\forall j \in \{1, \dots, N_{CAV}\}$, denotes the control speed of CAV-platoon $j$ in the traffic stream. $\phi_i(\cdot,\cdot)$ is the actual traffic flow (vehicles per unit time) that leaves Segment $i$ and is given as
$$\phi_i[k] = \min\big\{D_i(\rho_i[k], u_j[k]),\, S_{i+1}(\rho_{i+1}[k])\big\}, \qquad (2)$$
assuming CAV-platoon $j$ is in Segment $i$ at time index $k$. The demand function $D_i$ and the supply function $S_i$ are further defined using minimum functions of the state and input variables, respectively. Here, $q_i^{cap}$ denotes the maximum capacity of Segment $i$; $\rho_c$, $\rho_m$, $w_c$ are parameters of the triangular fundamental diagram of traffic flow denoting the critical density, the maximum density, and the maximum congestion wave speed of traffic, respectively; and $\alpha \in [0,1]$ is a coefficient denoting the extent of the capacity drop, where $\alpha = 1$ implies no capacity drop. Here, $v_i(u_j[k])$ denotes the maximum speed of traffic in Segment $i$ at time index $k$, which is given by a conditional statement. It is noteworthy that this is only the maximum possible speed of traffic in Segment $i$ and not necessarily the actual speed, since the actual speed will depend on the realized flow, which is a function of both the demand and the supply as shown in (2). In the sequel, for convenience, we denote the demand, supply, and actual flow by the function names followed by the time index, without mentioning the inputs required to calculate each. The position of CAV-platoon $j$ on the highway is denoted by $p_j[k]$, and its evolution over time is given as
$$p_j[k+1] = p_j[k] + T\,\bar v_j[k], \qquad (3)$$
where $\bar v_j[k]$ denotes the speed of CAV-platoon $j$ during time index $k$. Note that $u_j[k]$ is the control speed of the CAV-platoon, i.e., the speed prescribed to the CAV-platoon by the controller, while $\bar v_j[k]$ is the realized speed of the CAV-platoon, which depends on the demand and supply conditions besides the control speed.
Here the bar on top of $v$ is used to differentiate the speed of the CAV-platoon from the maximum speed of a segment, which is also denoted by $v$. Figure 1 presents a schematic of the highway stretch with the two elements, Segments and CAV-platoons, along with their associated states written underneath each label.
In the following, we also denote this final speed by the function name followed by the time index, with the CAV-platoon index as a subscript. If CAV-platoon $j$ is in Segment $i$ at time index $k$ and is expected to end up in Segment $i$ at the end of this time step while traveling at its control speed, or if there is no restriction on the flow wanting to leave Segment $i$ at time $k$, then its final speed can be set directly as $\bar v_j[k] = u_j[k]$ and its final position is calculated using (3). On the other hand, if CAV-platoon $j$ is expected to end up in Segment $i+1$ at the control speed and $D_i[k] > S_{i+1}[k]$, that is, the flow is restricted by the downstream segment, then its final speed and hence final position need to be calculated according to certain conditions which are presented in detail in [5]; these depend on the platoon length, denoted by $l_j$ (m), and the minimum demand needed for the platoon to pass to the next segment, denoted by $S_{min}$, which is assigned an arbitrary value, apart from the variables and functions introduced above. Thus, (3) is, in fact, nonlinear, as the calculation of $\bar v_j[k]$ requires the evaluation of conditional statements.
The state-space equation can therefore be written as
$$x[k+1] = A\,x[k] + G\,f(x[k], u[k]), \qquad (4)$$
where the state vector at time index $k$,
$$x[k] = \big[\rho_1[k], \dots, \rho_{N_L}[k],\, p_1[k], \dots, p_{N_{CAV}}[k]\big]^T,$$
consists of the traffic densities of all the segments and the current positions of the CAV-platoons on the highway, and the input vector is the same as in (1), consisting of the control speeds of all the CAV-platoons on the highway stretch.
Let $n_x := N_L + N_{CAV}$ be the number of states and $n_u := N_{CAV}$ the number of inputs. The matrix $A = I_{n_x}$, the matrix $G \in \mathbb{R}^{n_x\times n_x}$ is a diagonal matrix containing the coefficients of the nonlinearities in the dynamics, and the vector-valued function $f : \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \mathbb{R}^{n_x}$ represents the nonlinearities in the evolution of the traffic density and the positions of the CAVs with time. The nonlinearity is indeed non-trivial, since it consists of differences of nested minimum functions (2) as well as CAV-platoon speeds obtained from nested conditional statements. The presence of such nonlinearity in the state space makes it necessary for control problems based on the model to utilize nonlinear optimization schemes.
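The following Python sketch illustrates the flavor of one Godunov/CTM update step with a single platoon acting as a moving bottleneck. It is a deliberately simplified stand-in, with a plain triangular fundamental diagram, no capacity drop, and a crude treatment of the platoon (its control speed simply caps the speed of the segment it occupies, and its position update ignores downstream supply restrictions); the exact extended CTM of [6] and the platoon-segment interaction rules of [5] are more involved.

```python
import numpy as np

# Simplified Godunov/CTM step for a homogeneous stretch with one CAV-platoon
# acting as a moving bottleneck (illustration only, not the model of [6]).
v_f, w_c = 100.0, 38.0          # km/h
rho_c, rho_m = 60.0, 320.0      # veh/km
q_cap = 6000.0                  # veh/h
L, T = 0.5, 10.0 / 3600.0       # km, h  (CFL: T <= L / v_f)

def ctm_step(rho, d_in, s_out, p_cav, u_cav):
    """rho: segment densities (N_L,), d_in: upstream demand (veh/h),
    s_out: downstream supply (veh/h), p_cav: platoon position (km),
    u_cav: platoon control speed (km/h)."""
    n = len(rho)
    seg_speed = np.full(n, v_f)
    seg_speed[min(int(p_cav // L), n - 1)] = u_cav   # platoon caps its segment speed
    demand = np.minimum(seg_speed * rho, q_cap)
    supply = np.minimum(w_c * (rho_m - rho), q_cap)
    up_demand = np.concatenate(([d_in], demand))     # interface demands, length n+1
    down_supply = np.concatenate((supply, [s_out]))  # interface supplies, length n+1
    flow = np.minimum(up_demand, down_supply)        # Godunov flux at each interface
    rho_next = rho + (T / L) * (flow[:-1] - flow[1:])
    p_next = p_cav + T * u_cav                       # ignores supply restriction on the platoon
    return rho_next, p_next

rho0 = np.full(16, 20.0)
print(ctm_step(rho0, 5490.0, q_cap, 4.2, 40.0)[0][:4])
```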
In the next section, we formally define the traffic control problem considered in this study along with the control methodology used to address it.
III. PROBLEM STATEMENT AND LQR-BASED TRAFFIC CONTROL ALGORITHMS
The underlying traffic control problem addressed in this work is defined as follows: Problem 1. Given the nonlinear traffic dynamics (4), control the speed of CAV-platoons entering the highway stretch at known time steps to mitigate the adverse effects of a traffic jam formed in the middle of the stretch.
Problem 1 can be cast in the form of an optimization problem as follows:
$$\min_{u[k]\,\in\,\mathcal{U}}\; J\big(x[k], u[k]\big)\quad \text{subject to (4)}, \qquad (6)$$
where the cost function $J(\cdot,\cdot)$ is any function whose minimization ensures an improvement in the traffic conditions, which can be in terms of an increase in the overall speed of traffic or a decrease in the overall congestion level on the highway in terms of traffic density. Here, the decision variables $u[k]$ are the speeds of the CAV-platoons on the highway stretch. The essential constraints include the state-space dynamics (4), while the speeds of these platoons can also be constrained to an arbitrary set $\mathcal{U}$.
In the present work, the optimization problem (6) is formulated in the LQR optimization framework [26]. For linear systems, this results in a horizon-based optimization problem that aims to regulate the states and inputs of the system around the zero point, taking into account the system dynamics over a given number of future time steps, with the help of a state-feedback law for the control input of the form $u[k] = -K[k]\,x[k]$, where $K[k]$ is called the gain matrix and is calculated using existing formulae from the literature. For nonlinear systems, an LQR-based optimization problem can be written by linearizing the system around an equilibrium point over the length of the horizon and regulating the difference between the actual state (respectively input) and the equilibrium state (respectively input) around the zero point, which results in the control input trying to bring the system closer to the equilibrium states. In the context of traffic control, these equilibrium states and inputs are assigned values that result in an improvement in the state of traffic. In this case, the control input is defined by the following state-feedback law, which takes into account the selected equilibrium states and inputs:
$$u[k] = u^*[k] - K[k]\big(x[k] - x^*[k]\big),$$
where $K[k]$ and $(x^*[k], u^*[k])$ denote the time-varying LQR state-feedback matrix and the time-varying equilibrium point of the nonlinear system (4) at time index $k$, respectively.
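For the linearized LTV system, the time-varying gains $K[k]$ can be obtained from the standard finite-horizon backward Riccati recursion. The sketch below shows this computation in Python for generic matrices $A[k]$, $B[k]$ obtained from the linearization; it illustrates the standard recursion only and is not a transcription of Algorithm 1.

```python
import numpy as np

def ltv_lqr_gains(A_seq, B_seq, Q, R, Qf=None):
    """Finite-horizon LQR gains for x[k+1] = A[k] x[k] + B[k] u[k].
    Returns K[0..N-1] such that u[k] = -K[k] x[k] minimizes
    sum x'Qx + u'Ru (+ terminal x'Qf x)."""
    N = len(A_seq)
    P = Q.copy() if Qf is None else Qf.copy()
    gains = [None] * N
    for k in reversed(range(N)):
        A, B = A_seq[k], B_seq[k]
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # gain at time k
        gains[k] = K
        P = Q + A.T @ P @ (A - B @ K)         # backward Riccati update
    return gains
```

In the present setting, the feedback would be applied to the deviations δx[k] = x[k] - x*[k] and δu[k] = u[k] - u*[k], recovering the state-feedback law written above.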
To obtain the gain matrix $K[k]$ at any time step for controlling the nonlinear system within the LQR framework, the Gauss-Newton LQR algorithm [26] can be applied. The same is presented in the remainder of this section, along with a variant of the GN-LQR algorithm that penalizes changes in control inputs over consecutive time steps. Various parameters of these algorithms are investigated in the ensuing sections in the context of traffic control using moving bottlenecks.
A. The Gauss-Newton LQR algorithm
Here, we present an iterative LQR algorithm called the GN-LQR algorithm [26], which can be used to solve the LQR optimization problem (6) for the given nonlinear system (4). We introduce the following notation before presenting the GN-LQR algorithm: the horizon length $N$; the $N$-step input and state matrices; their corresponding time-varying equilibrium counterparts; the corresponding control input difference matrix; the linearized state-space matrices; and the LQR cost function to be minimized. The goal of the algorithm is to minimize this objective function given the state-space dynamics (4) along with physical bounds on the speeds. With the above notation, the Gauss-Newton LQR (GN-LQR) algorithm [26] can be summarized in Algorithm 1. To the standard algorithm, we also add a step to impose a non-negativity constraint and an upper bound on the speed equal to the free-flow speed.
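A schematic of the iterative linearize-solve-update loop behind the GN-LQR idea is sketched below in Python, reusing the `numerical_jacobians` and `ltv_lqr_gains` helpers sketched earlier; the exact update rule, convergence test, and equilibrium handling of Algorithm 1 (following [26]) differ in their details, so this is only an illustration of the structure of the computation.

```python
import numpy as np

def gn_lqr(dynamics, x0, u_init, x_eq, u_eq, Q, R, n_iter=5):
    """Illustrative iterative LQR around a desired equilibrium.
    dynamics(x, u) -> x_next is the nonlinear traffic model; x_eq, u_eq are
    the chosen equilibrium state/input. Returns the N-step control sequence,
    clipped to [0, v_f] as in the added projection step."""
    v_f = 100.0                       # upper speed bound (free-flow speed)
    U = u_init.copy()                 # (N, n_u) current guess of the control sequence
    N = U.shape[0]
    for _ in range(n_iter):
        # 1) roll the nonlinear model forward under the current controls
        X = [x0]
        for k in range(N):
            X.append(dynamics(X[k], U[k]))
        # 2) linearize along the trajectory to obtain an LTV system in the deviations
        A_seq, B_seq = [], []
        for k in range(N):
            A_k, B_k = numerical_jacobians(dynamics, X[k], U[k])
            A_seq.append(A_k); B_seq.append(B_k)
        # 3) solve the finite-horizon LQR problem for the LTV system
        K = ltv_lqr_gains(A_seq, B_seq, Q, R)
        # 4) update the controls with the state-feedback law and project onto the bounds
        for k in range(N):
            U[k] = np.clip(u_eq - K[k] @ (X[k] - x_eq), 0.0, v_f)
    return U
```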
B. The Gauss-Newton LQR algorithm with a penalty on variation in inputs
The controls produced at any time step using the GN-LQR controller are independent of the controls at the previous time steps. Due to this, the optimal controls can vary significantly over consecutive time steps, as is observed in Section IV. Since these controls are executed by CAV-platoons that are traveling within a traffic stream comprised of both autonomous and human-driven vehicles, the latter of which can sometimes have high reaction times, large changes in control inputs over consecutive time steps can result in life-threatening collisions due to vehicles not braking in time. To avoid such circumstances, here we present a variant of the LQR optimization problem which applies a penalty on changes in control inputs over consecutive time steps, thus preventing large changes in control inputs. The implementation of the optimization problem is derived based on [27], which prescribes the inclusion of an additional term in the LQR objective function penalizing large variations in control inputs.
This is achieved by modifying the state-space formulation of the system by defining a new state, which is an augmentation of the original state vector and the original control input vector, and a new control input vector that captures the change in the control input. For linear systems, the derivation of the new augmented system and the new LQR objective is provided in Appendix A. A new weight matrix $R'$ is introduced in the LQR optimization problem that governs the fluctuations in the control inputs. A larger magnitude of the elements in $R'$ implies a larger penalty on the change in control inputs over consecutive time steps, whereas $R' = 0$ implies no penalty is imposed and the resulting optimization is equivalent to the standard LQR optimization problem. The GN-LQR algorithm presented in the previous section is modified in Step 4 to obtain the new algorithm, Algorithm 2, which is referred to as GN-LQR-with-penalty (GN-LQRP) in the remainder of the article.
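The augmentation itself is standard and can be written down explicitly. The Python sketch below builds, for a generic linear system (A, B), the augmented matrices for the new state z[k] = [x[k]; u[k-1]] and the new input Δu[k] = u[k] - u[k-1]; the weight on Δu plays the role of the new matrix R' above. This is a generic construction and not the exact derivation of Appendix A.

```python
import numpy as np

def augment_for_input_rate_penalty(A, B, Q, R, R_delta):
    """Standard augmentation for penalizing input changes in LQR.
    New state z = [x; u_prev], new input du = u - u_prev, so that
        z[k+1] = Az z[k] + Bz du[k],
    and the stage cost becomes z' Qz z + du' R_delta du, with the original
    input penalty R folded into Qz (it now weights the input carried in z)."""
    nx, nu = B.shape
    Az = np.block([[A,                    B],
                   [np.zeros((nu, nx)),   np.eye(nu)]])
    Bz = np.vstack([B, np.eye(nu)])
    Qz = np.block([[Q,                    np.zeros((nx, nu))],
                   [np.zeros((nu, nx)),   R]])
    return Az, Bz, Qz, R_delta
```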
In the next section, we present the impact of various parameters of the LQR-based algorithms on the performance of the controller and compare it with existing controllers from the literature.
Algorithm 2: The GN-LQRP Algorithm. Input: state-space matrices $A$, $G$, nonlinear function $f$, initial state $x[0]$, horizon length $N$, LQR weight matrices $Q$, $R$, $R'$, error tolerance $\epsilon$, maximum number of iterations $M$, initial guess for the equilibrium control inputs $U^*$, and initial guess for the initial equilibrium state.
IV. NUMERICAL STUDY AND IMPLEMENTATION
In this section, we investigate the performance of the proposed control algorithms for moving-bottleneck-based traffic control, mainly in the mitigation of the impact of traffic jams on a highway stretch. A primary investigation is carried out using macroscopic simulations performed with the CTM model described in Section II, where the best-case performance of the controller is observed, its sensitivity to various parameters of the algorithm is examined, and comparisons are made with the existing PI- [5] and MPC-based [6] controllers. Details of the implementation of the latter two controllers are also presented as part of the analysis. This is followed by a microsimulation-based case study using the TransModeler traffic simulation software to test the near real-world performance of the controller and to learn the advantages of, and gaps in, applying the controller in the real world. All macroscopic simulations applying the CTM model are performed using MATLAB R2021b running on 64-bit Windows 10 with a 2.2 GHz Intel Core i7-8750H CPU and 16 GB of RAM.
A. Numerical study objectives
The goal of this study is to find the answers to the following questions:
• Q1: Are the LQR-based controllers, namely GN-LQR and GN-LQRP, able to reduce the impact of bottlenecks on the highway traffic flow? How do they perform compared with the PI- and MPC-based controllers proposed in the literature?
• Q2: Are the LQR-based controllers computationally feasible for real-time traffic control? How do they compare with the PI- and MPC-based controllers in terms of computational tractability?
• Q3: Are the controls produced by the controllers realistic with regard to application in the real world?
• Q4: What is the impact of the various LQR-based controller parameters on the controller's performance with respect to traffic control? How do the horizon length, the number of iterations, and the LQR weight parameters impact the control speeds of the CAV-platoons?
• Q5: How do the controllers perform in a realistic microscopic traffic setting? Is the performance comparable to the macroscopic case?
What follows is a description of the traffic flow scenario and the evaluation metrics used to test the performance of the controllers and for comparison with existing techniques.
B. Scenario description and evaluation metrics
The traffic is modeled using the dynamics presented in Section II. This section presents the values of the traffic parameters introduced in Section II along with a description of the default traffic scenario without any fixed bottleneck on the highway stretch, the uncontrolled scenario in the presence of a fixed bottleneck, and the evaluation metrics used to quantitatively compare different scenarios and controllers. We consider an 8 km long highway stretch with no on-ramps or off-ramps, which is divided into 16 even segments of length 0.5 km each. A total duration of 2 hr is considered for the example, with time divided into steps of 10 sec each. The following values of traffic flow parameters are considered: ρ_c = 60 veh/km, v_f = 100 km/hr, w_c = 38 km/hr, ρ_m = 320 veh/km, q_max = 6000 veh/hr, and α = 0.83, similar to [6]. We consider a platoon length of 4.5 m, which essentially implies platoons of one CAV per lane, and S_min = 10. The initial density on all the segments is set to 20 veh/km. The available supply at the downstream end of the highway is set to q_max, while the demand wanting to enter at the upstream end of the highway has the profile shown in Figure 2, where the starting and ending demand is 1900 veh/hr and the value along the horizontal line is 5490 veh/hr. A reduced flow area is simulated on the highway by reducing the outflow of Segment 13 to 5400 veh/hr for the first hour, after which the flow of the segment is restored to maximum capacity. The impact of the control is measured using three metrics, the Total Travel Time (TTT) in veh·hr, the Total Travel Distance (TTD) in veh·km, and the Mean Speed (MS) in km/hr, which are defined similarly to [6] as TTT = T L \sum_{k=1}^{N_T} \sum_{i=1}^{N_L} \rho_i[k], TTD = T L \sum_{k=1}^{N_T} \sum_{i=1}^{N_L} \phi_i[k], and MS = TTD/TTT, where ρ_i[k] and φ_i[k] are the density and outflow of segment i at time step k, and T, L, N_T, and N_L are the duration of each time step, the length of each segment, the total number of time steps in the simulation, and the total number of segments in the considered highway stretch, respectively. In general, a lower TTT, a higher TTD, and a higher MS are desirable: the closer the traffic density is to the critical density, the better the values of these metrics, as the traffic is free-flowing and at the maximum flow possible. Therefore, at each implementation of the LQR-based controllers we select equilibrium states for linearization which improve the values of these metrics. In addition to these metrics, in order to compare the computational performance of different controllers, we consider the Average Computation Time (ACT) for each controller, which is defined as the average time required to compute the control inputs at any time step during the simulation. It is computed simply as the average of the time consumed over all the runs of the controller during the simulation.
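To make the metric definitions concrete, the following is a minimal sketch (not the authors' code) of how TTT, TTD, and MS could be computed from the simulated density and flow matrices; the array names and shapes are assumptions for illustration.

```python
import numpy as np

def evaluation_metrics(rho, q, T=10 / 3600, L=0.5):
    """Compute TTT (veh*hr), TTD (veh*km) and MS (km/hr).

    rho : array of shape (N_T, N_L), segment densities in veh/km
    q   : array of shape (N_T, N_L), segment outflows in veh/hr
    T   : time-step duration in hours (10 s by default)
    L   : segment length in km (0.5 km by default)
    """
    ttt = T * L * rho.sum()   # total time spent by all vehicles
    ttd = T * L * q.sum()     # total distance covered by all vehicles
    ms = ttd / ttt            # mean speed in km/hr
    return ttt, ttd, ms

# Example: uniform free-flowing traffic at 20 veh/km and 2000 veh/hr per segment
rho = np.full((720, 16), 20.0)
q = np.full((720, 16), 2000.0)
print(evaluation_metrics(rho, q))   # MS evaluates to 100 km/hr here
```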
We assume that controlled vehicles enter the stretch every 15 time steps, starting from time step 60 to time step 600 of the process horizon. Under normal circumstances, that is, in the absence of the reduced flow area, TTT = 790 and TTD = 78,998, which gives MS = 99.9. Figure 3 presents the simulated traffic densities in the presence of the reduced flow and without any control implementation, that is, u_j[k] = v_f, ∀j ∈ {1, 2, . . ., N_CAV}. Note that here, N_CAV is used to refer to the total number of CAV-platoons that will enter the highway stretch during the entire simulation and not the number of CAV-platoons present on the stretch at once, which varies with time. In this case, the evaluation metrics are TTT = 1,019 and TTD = 78,998, which gives MS = 77.5.
C. Impact of LQR parameters
Here, we test the impact of various parameters of the LQR-based controllers in the context of traffic control using CAV-platoons as moving bottlenecks. A default set of parameters is defined for the proposed controllers, and different parameters are varied in isolation to assess their impact on the control performance, which is measured using the metrics defined in Section IV-B, namely the TTT, TTD, and MS.
1) Default LQR-based controller implementation: In this section, we describe the default parameter values for the two LQR-based controllers, namely the GN-LQR (Algorithm 1) and the GN-LQRP (Algorithm 2) controllers. In the following sections, which test the impact of different parameters of the algorithm on their performance, the values of individual parameters are varied while keeping the other parameters equal to the default values defined in this section.
For the GN-LQR and GN-LQRP algorithms, we select the weight matrices Q and R as Q = \begin{bmatrix} 100 I_{N_L} & 0 \\ 0 & 0_{N_{CAV}} \end{bmatrix} and R = I_{N_{CAV}}, respectively, where N_L is fixed to the number of segments in the considered highway stretch while N_CAV varies with time depending on the number of CAV-platoons on the stretch at a given time. Here, the matrix Q indicates that the weight is only applied to the density states and not to the positions of the CAV-platoons, which are also states of the system. The penalty weight matrix R′ is set to 30 I_{N_CAV}. An equilibrium density of 59 veh/km, which is 1 veh/km less than the critical density ρ_c, and an equilibrium speed of 99 km/hr, which is 1 km/hr less than the free-flow speed v_f, are set for both controllers. In general, an equilibrium density of ρ_c and a corresponding equilibrium speed of v_f are desirable to achieve the maximum traffic flow. However, the equilibrium point is set slightly below these values to allow for numerical Jacobian calculation, which requires the evaluation of the nonlinear function at points around the equilibrium point. For speeds, v_f is the upper bound, and for densities, the derivative is undefined at ρ_c and changes sharply around that point, thus making these exact values unusable for Jacobian calculation. In addition to the above settings, we set N = 3 time steps. The maximum number of iterations for both GN-LQR and GN-LQRP is set to 1 with ε = 0.001. Figure 4 shows the density evolution achieved by applying the GN-LQR and GN-LQRP controllers with the default set of parameters. The corresponding values of MS are 94.5 km/hr and 83 km/hr, respectively. It can be observed from the figure that the default GN-LQR controller works well in reducing the congestion level on the highway stretch, thus improving the MS of the traffic. This is achieved by creating smaller controlled reductions in segment flows (by reducing the CAV-platoon speeds) upstream of the bottleneck segment (Segment 13). Since the outflow of segments decreases with an increase in density above the critical density, reducing the flow of traffic in small amounts in the upstream segments, thereby increasing their density in small amounts while preventing higher densities in the bottleneck segments, results in overall higher flows across all the segments. This is the underlying idea behind moving-bottleneck control, which is correctly executed by the LQR-based controller. On the other hand, the GN-LQRP controller does not perform as well, since the CAV-platoon speeds are not sufficiently reduced prior to reaching the bottleneck segment due to the penalty on speed changes. In the ensuing sections, we present a detailed analysis of the impact of various parameters of the LQR algorithm on its performance, including cases in which GN-LQRP performs equivalently to the GN-LQR controller while also preventing large fluctuations in the control speed.

2) Impact of horizon length N: In this section, we test the impact of different values of the horizon length N on the performance of the GN-LQR and the GN-LQRP controllers in terms of the achieved MS. The value of N is varied from 1 to 90 with increments of 1 up to 20, followed by increments of 10. All other settings are as defined in Section IV-C1. The corresponding values of MS are presented in Figure 7. The GN-LQR algorithm performs well even with a small horizon length of 1 time step, improving the MS by 15.6% to 93 km/hr. The value improves further to 94.5 km/hr by N = 3, beyond which it does not improve much with N. It is observed that above N = 20 the MS also tends to decrease, showing large
dips at N = 60 and N = 90. On closely examining the plots of the density evolution, it is observed that above N = 20, the controller prescribes the CAV-platoons to reduce their speeds at the upstream end of the highway stretch, thus reducing the flow into the stretch and eventually resulting in less vehicle accumulation at the bottleneck and improved flow. The control makes sense, since the controller is now able to look further into the future impact of each CAV-platoon and decide to reduce the platoon speeds sooner into the highway. See Figure 5 [left] for the density evolution at N = 40. However, this is not ideal, since spillbacks caused by congestion at the upstream end result in the creation of bottlenecks and the occurrence of capacity drop on previous links, which is unaccounted for by this model. Additionally, at N = 60 and N = 90, there are further stoppages in the middle of the highway besides the stop at the entrance, which results in reduced MS; for instance, see Figure 5 [right], which shows the density evolution at N = 60. This could be specific to the case and caused by a few stop decisions cascading into more stoppages. To avoid situations with jams created at the upstream end of the highway stretch, one solution is to avoid horizon lengths equal to or longer than the time taken by the CAVs to reach the bottleneck from the upstream end. Another solution is to attach additional weights to the input deviation term in the objective at the time of entry, to ensure that CAVs enter the highway at free-flow speed and only reduce their speeds once sufficiently within the highway stretch, thereby avoiding spillbacks to previous links.
The performance of the GN-LQRP controller improves more substantially with increasing N up to N = 60, beyond which a degradation in its performance is observed. As also mentioned in the previous section, the reason for the poor performance of the GN-LQRP controller at low horizon lengths is the insufficient time for CAV-platoons to reduce their speeds due to the penalty on speed changes. As the horizon length increases, vehicles are able to start reducing their speeds sooner into the highway stretch and achieve enough speed reduction to reduce the impact of the bottleneck. Figure 6 presents the density evolution for the case with N = 50 time steps, showing the speed reduction which starts from the upstream end of the highway stretch. Note that this case does not result in spillbacks, since Segment 1 still has enough capacity to accommodate more vehicles from the previous link, unlike the case in Figure 5 [left], which reaches close to the maximum density in Segment 1 for a brief period. Beyond N = 60, we observe similar spillback situations with GN-LQRP as well as situations similar to Figure 5 [right].
The variation in ACT with an increase in the horizon length for both GN-LQR and GN-LQRP controllers is presented in Figure 8. In the case of the LQR-based controllers, the largest component of the computation time is dedicated to the computation of the derivatives of the nonlinear function. Therefore, as expected, the ACT increases almost linearly with the horizon length, as the number of steps involving derivative calculation increases linearly. Deviation from the linear increase in ACT can be expected in some cases due to the accumulation of CAV-platoons on the highway, which can increase the number of calculations per controller run, or in cases where the CAV-platoons are blocked upstream of the highway, reducing the number of calculations required. The order of magnitude of the ACT is still a fraction of a second, which makes it suitable for real-time control.
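As an illustration of where this computation time goes, the sketch below shows a central-difference numerical Jacobian of the nonlinear state-update function around the equilibrium point; this is an assumed implementation, not the authors' code, and the function signature f(x, u) is a placeholder.

```python
import numpy as np

def numerical_jacobians(f, x_eq, u_eq, eps=1e-4):
    """Central-difference Jacobians A = df/dx and G = df/du at (x_eq, u_eq).

    f     : nonlinear state-update function, x_next = f(x, u)
    x_eq  : equilibrium state vector (densities and platoon positions)
    u_eq  : equilibrium control vector (platoon speeds)
    """
    n, m = len(x_eq), len(u_eq)
    A = np.zeros((n, n))
    G = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        G[:, j] = (f(x_eq, u_eq + du) - f(x_eq, u_eq - du)) / (2 * eps)
    return A, G
```

This also illustrates why the equilibrium density and speed are set slightly below ρ_c and v_f: the perturbed points x_eq ± dx must remain in a region where the nonlinear function is well defined and smooth.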
3) Impact of the number of iterations: Here, we test the impact of changing the number of iterations of the GN-LQR algorithm on its control performance. The number of iterations is varied from 1 to 10 while keeping the other parameters equal to the default values. Table I presents the values of the MS achieved at the different numbers of iterations. It is observed that the best performance of the GN-LQR controller is achieved at the number of iterations = 1. From Table I, it can be seen that increasing the number of iterations degrades the performance of the controller. The main reason for this is a higher reduction in the speeds of CAVs with an increasing number of iterations. The iterations are intended to find a point where the controls used to obtain the derivative of the system are close to the controls obtained by using the derivative, at which point the system is said to have converged. However, when the derivative is not significantly affected by the control inputs, such that the direction of change in inputs does not change with the control inputs used to obtain the derivative, then the controls continue to grow/reduce over iterations and only converge at the lower/upper bound of the inputs. Since the optimal speeds at any iteration are used as the equilibrium speeds for the next iteration, if in the first iteration the optimal controlled speeds are below the equilibrium speed, then in the following iterations the control speeds will continue to be below the equilibrium speed for the respective iterations, eventually resulting in a speed equal to the lower bound. In cases where the optimal speed in the first iteration is above the equilibrium speed, convergence is achieved in the first iteration itself, as the controls are capped to the free-flow speed and the initial equilibrium speed is already set close to the free-flow speed. Therefore, those cases result in the same control even with a higher number of maximum iterations. In general, as the number of iterations increases, the control speeds of the CAV-platoons tend to be lower than with maximum iterations = 1, resulting in less than the best performance.

4) Impact of objective weights Q, R, and R′: The objective of the LQR optimization problem defined in (9) is different from the MS metric used to judge the performance of the LQR-based controllers. Generally, the MS is expected to improve if the controls cause the states to move close to the critical density. Larger weights on the error terms for either the states or the control inputs (or the change in control inputs in the case of GN-LQRP) incline the controller towards producing controls that reduce the corresponding errors. Here we assess the relationship between the various weights in the LQR objective function and the MS obtained from the resulting control. For this analysis, the weights are defined in the form of diagonal matrices as Q = \begin{bmatrix} w_Q I_{N_L} & 0 \\ 0 & 0_{N_{CAV}} \end{bmatrix}, R = w_R I_{N_{CAV}}, and R′ = w_{R′} I_{N_{CAV}}, where w_Q, w_R, w_{R′} ∈ ℝ.
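A small sketch, under assumed array conventions, of how the block-diagonal weight matrices above could be assembled for given scalar weights; with w_Q = 100, w_R = 1 and w_R′ = 30 this reproduces the default setting of Section IV-C1.

```python
import numpy as np

def lqr_weights(n_segments, n_cav, w_q=100.0, w_r=1.0, w_rp=30.0):
    """Assemble Q, R and R' for the GN-LQR/GN-LQRP objective.

    The first n_segments states are densities (weighted by w_q); the
    remaining n_cav states are CAV-platoon positions (zero weight).
    """
    Q = np.block([
        [w_q * np.eye(n_segments), np.zeros((n_segments, n_cav))],
        [np.zeros((n_cav, n_segments)), np.zeros((n_cav, n_cav))],
    ])
    R = w_r * np.eye(n_cav)    # weight on control-input deviation
    Rp = w_rp * np.eye(n_cav)  # penalty on change in control inputs (GN-LQRP only)
    return Q, R, Rp

Q, R, Rp = lqr_weights(n_segments=16, n_cav=3)
```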
The weights w_Q, w_R, and w_{R′} are individually varied from 10 to 150 with an increment of 10, while keeping the other parameters equal to the default set of values. Figure 9 presents the plots of MS against the varying objective weights. It is observed that a larger difference between Q and R results in a better MS. This is expected, as the optimal control is to reduce the speed of the CAV-platoons before the bottleneck segment on the highway, and a comparable weight on the speed (control input) error term prevents enough reduction in speed to improve the traffic flow. As also observed in Section IV-C2, the performance of the GN-LQRP controller is worse than the GN-LQR controller for smaller values of the horizon length, due to insufficient reduction in speeds. Increasing the magnitude of R′ naturally results in further degradation in the performance, as the reduction in speeds is further restricted. We also consider the variation in the values of R′ at N = 50 time steps, at which the performance is observed to be equivalent to the performance of the GN-LQR controller according to Figure 7. In this case, the performance is equivalent to the GN-LQR controller at the smaller values of R′ and decreases with an increase in R′, due to the same reason of insufficient reduction in speeds. Figure 10 presents a plot of changes in control speed over consecutive time steps for CAV-platoon 11, which enters the highway stretch at time step 210, at different values of R′ with N = 50 time steps.
D. Comparison of GN-LQR and GN-LQRP with PI- and MPC-based controllers
Two of the most common types of controllers implemented in traffic control systems are PI- and MPC-based controllers. PI-based controllers are much faster in computation and therefore adequate for real-time control; however, they do not offer the guarantee of optimal control. The MPC-based controller, on the other hand, guarantees optimality for a given horizon length; however, when the system is nonlinear, such as the system in the current study, the approach used to solve the problem is usually based on some meta-heuristic algorithm, such as an evolutionary algorithm, which can be inefficient due to repeated simulations and is therefore unfit for real-time control. The LQR-based controllers investigated in the current study do not require performing the simulation several times to reach a solution; instead, they exploit the structure of the state-space matrices. In addition, the computed solution guarantees the optimality of the LQR objective, which in this case ensures that the system states are as close to the critical density of the system as possible and therefore at the maximum flow.
The PI-based controller used for comparison in this study is implemented based on [5]. The equation for obtaining the speed value at each time step is presented in (19). Setting values for the controller gains K_P and K_I is a challenging problem in general. The authors in [5] provide certain fixed gain values for the PI-based controller but do not present a formal method to derive them. Since the setup is slightly different from the one used in [5], we obtain optimal gains for the scenario in this study by setting up a nonlinear optimization problem with the objective of maximizing the MS. The details of the fitting are presented in Appendix B. The study [5] also prescribes a lower bound of 60 km/hr for the control speed. This lower bound is implemented by projecting the controller speed to within the bounds. A reason for this lower bound is to avoid sudden large drops to extremely low values of speed, which may lead to accidents due to the slow reaction times of drivers. However, with increased connectivity and autonomy in vehicles, it is possible to expect no plausible limit to how far the speeds can be dropped. In this study, we therefore also test the controller with and without a lower bound on the speed. The MPC-based controller is also implemented according to [6]. The objective and constraints are set exactly as in [6] and are presented in Appendix C for reference. The horizon length for the MPC-based controller is set to 20 time steps as in [6]. Again, a lower bound for the value of control speeds equal to 60 km/hr is prescribed. This lower bound can be naturally incorporated into the MPC-based controller as a constraint.
The LQR-based controller investigated in this study does not inherently allow for a lower bound similar to the above controllers. Instead of applying a lower bound similar to the PI-based controller, in this work we use the GN-LQRP controller, which applies a penalty on changes in the speeds. This is different from the implementation for the PI-based and MPC-based controllers, which do not account for changes over consecutive time steps. All the parameters for the LQR-based algorithms are set to their default values as defined in Section IV-C1, with the exception of the horizon length for GN-LQRP, which is set to 50 time steps as it is observed to be the best setting for GN-LQRP in Section IV-C2.
The obtained values for K_P and K_I for the PI-based controller with a lower bound of 60 km/hr on the controlled speeds are 0.7944 and 0.1091, respectively. The values obtained without a lower bound are 0.7908 and −8.9832, respectively. Figure 11 presents the density evolution plots for the PI-based controllers with and without a lower bound. The TTT, TTD, and MS for the tested controllers are presented in Table II. It is observed that the PI-based controller with a lower bound reduces the impact of the bottleneck by slowing down the traffic approaching the bottleneck, thus creating smaller jams upstream of the bottleneck and reducing the density of the cells at the bottleneck, which reduces the effect of capacity drop and improves the MS. On the other hand, the optimal controller gains without a lower bound result in no improvement in the MS over the uncontrolled case. To further understand the ineffectiveness of the PI-based controller without a bound, we set the controller gains to the values in [5] instead of their optimal values. Figure 11 also presents the plot of density evolution for this new setting of the PI-based controller. It is seen that the obtained control in this case is to stop all the vehicles at the entrance, which is possible in this case since the speeds can drop to 0 km/hr. This makes sense for reducing the error term of the controller, which only penalizes segments ahead of the controlled platoons that have a density above the critical density; in this case, the densities for all the segments ahead of the first segment are zero, and therefore there is theoretically no error for the controller. However, this is not ideal for the traffic flow, since it only reduces congestion on the current highway stretch at the expense of creating a spillback upstream of the stretch that results in congestion upstream. It results in a small MS value of 38.7 km/hr despite a small TTT value of 413.1, since the TTD value becomes very small, equal to 16,002, due to a smaller number of vehicles entering the stretch. Therefore, the restricted speeds offer better results in this case in terms of our evaluation metrics. Figure 12 presents the density evolution plots for the MPC-based controllers with and without a lower bound, which are similar in this case. The MPC-based controller tries to minimize the TTT while maximizing the outflow from the bottleneck and keeping the density of the bottleneck segment close to the critical density. Therefore, in both cases, it tries to reduce the density at the bottleneck to reduce the impact of the capacity drop rather than stopping all vehicles upstream of the stretch, as in the case of the PI-based controller. In the case of MPC, while stopping all vehicles at the entrance would still minimize the TTT, it potentially creates a larger gap between the critical density and the density of the bottleneck segment and reduces the outflow from the bottleneck, thereby making such a solution sub-optimal.
The density evolution plots for the LQR-based controllers are presented in Figures 4 and 6, and the evaluation metrics are presented in Table II. Table II also presents the computation time (CT) in seconds for all the controllers, which refers to the ACT in the case of the MPC- and LQR-based controllers and to the offline computation time for gain calculation in the case of the PI-based controllers. The PI-based controller is the fastest of the controllers, as the gain computations are performed offline and there is virtually no computation time for the controller in real time. However, in cases when the offline gains do not work as expected due to a different realization of the traffic conditions than expected, real-time computation of gains may be required, similar to the MPC-based controller, with a finite horizon for which the traffic conditions can be reliably known. Since the underlying problem is nonlinear, this would require solving a nonlinear optimization problem in real time, which would also be computationally expensive, only less expensive than the MPC-based controller due to the lower number of control variables, which in this case would just be the values of the two gains. The computation time for the LQR-based controllers mostly comprises the time to compute the derivatives of the state-space equation with respect to the equilibrium states and inputs. As seen in Figure 8, the computation time of GN-LQR increases almost linearly with N as the derivative calculations increase. However, since gradients can be computed with very few evaluations of the nonlinear function of the state-space model, this time requirement is negligible, and therefore the overall time for obtaining controls from the LQR-based controller is also quite small. As expected, the computation time for the MPC-based controller is the highest of all the controllers, due to the nonlinear optimization performed every time the controller is run. Note that computation times can vary significantly based on implementation, and the authors do not claim their implementation of the controllers to be the most efficient. However, given that the LQR-based and PI-based computations are expected to be faster due to the presence of only algebraic computations, as compared to the MPC-based controller which requires performing the simulation several times to solve the nonlinear optimization problem, the results for the computation times do serve to validate the hypothesis about the expected computational differences between the MPC-based and the other controllers.
E. Microsimulation-based case study
In this section, we reproduce the traffic control scenarios presented in Section IV-B using a realistic microscopic traffic simulator and use it to test the performance of the proposed GN-LQR and GN-LQRP control algorithms under a realistic setting. The existing PI- and MPC-based controllers are also tested in microsimulation for the same setting. The microsimulation is performed using TransModeler 6.1 [28], [29], while the control algorithm is implemented using MATLAB R2021b. The GISDK [30] API in TransModeler is used to interact with the controller. All processes related to the microsimulation-based analysis are run on 64-bit Windows 10 with a 2.3 GHz Intel Core i7-11800H CPU and 16 GB of RAM.
1) Simulation and control pipeline: Note that in the microsimulation, the proposed controllers are tested in a scenario involving a multi-lane highway by controlling CAV-platoons formed by CAVs positioned side-by-side, acting as a rolling roadblock. The simulation framework is shown in Figure 14, consisting of four important modules: the TransModeler testbed, the GISDK Python interface, the CTM-based state-space model and controller, and visualization. The framework closes the loop for the state-feedback control, where TransModeler provides the testbed to simulate realistic traffic conditions. The simulation framework supports trajectory-level traffic analysis, which helps us better understand and explain the control mechanisms of the proposed controllers. The ensuing sections provide a detailed overview of the simulation settings.
2) Network and demand: The tested road network is composed of an 8 km long highway stretch with 3 lanes, as depicted in Figure 14. In Figure 14, the origin of the considered road stretch is marked using meter markers as K0, and the end of the stretch is marked as K8. To capture the real demand and supply conditions and effectively form a CAV-platoon before it enters the considered stretch of the roadway, two buffer zones of length 1 km each are established at the beginning and the end of the highway segment; hence a 10 km long highway is simulated. The capacity of the highway stretch is set at 2000 veh/lane-hr. The speed limit for the entire stretch is 100 km/hr. The total simulation duration is 2 hours. The demand profile is the same as in Figure 2. The microscopic simulation parameters are carefully tuned to reproduce a bottleneck and capacity drop which are consistent with the scenario described in Section IV-B. A bottleneck is simulated on Segment 13 with the help of a lane-changing guide signal with a 30% compliance rate. This lane-changing guidance system is implemented in the inner lane (top lane in Figure 14) and is expected to prompt 30% of the traffic to switch lanes, resulting in an observed 10% decrease in outflow for Segment 13, which is the same as implemented in the macrosimulation-based case study. Interested readers are referred to [31] for more details on the dynamics model, parameter settings, and tuning for TransModeler, and to Appendix D for more details on the models. In this case, the bottleneck begins at the start of the simulation and ends at the end of the first hour. Figure 15 [top left] shows the evolution of density with a bottleneck without control.

3) Control actuator: The control actuators in this control system consist of vehicles (CAV-platoons) on the highway. Each CAV-platoon comprises three vehicles moving side-by-side on the three lanes and acting as a rolling roadblock, thus blocking all traffic behind them and not letting any vehicles overtake them. Since the road is blocked by the CAV-platoon, the control speed of the platoon is mandatorily enforced on the upstream traffic, which is the key to the effectiveness of the controllers. These CAV-platoons are dispatched to the highway stretch using the AddVehicle() function in TransModeler. The origin, destination, lane, and speed of the platoon vehicles are then customized based on the simulation settings. The three CAVs are positioned separately on Lane 1, Lane 2, and Lane 3 (where Lane 1 is the inner lane and Lane 3 is the outer lane) at the same location. The IDs of the CAV-platoons are recorded and monitored for sensing and control purposes. Upon activation of the controller, the speed of the CAV-platoon is directly regulated to the recommended speed using the SetVehicleInfo() function in TransModeler. The state of the CAV-platoon is tracked using the GetVehicleInfo() function in TransModeler, and the trajectory of the platoon is monitored to serve as both the control input and the input for result visualization.
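The following pseudocode-style sketch outlines this dispatch-sense-actuate loop. The wrapper functions are placeholders standing in for the GISDK calls named above (AddVehicle(), GetVehicleInfo(), SetVehicleInfo()); their arguments and return values are assumptions for illustration, not the actual API signatures.

```python
def run_platoon_control(sim, controller, dispatch_steps, lanes=(1, 2, 3)):
    """Dispatch CAV-platoons, read back their state, and actuate speeds.

    sim        : assumed wrapper around the TransModeler/GISDK interface
    controller : assumed object exposing compute_speeds(densities, positions)
    """
    platoons, speeds = [], []
    for step in sim.steps():                      # advances the microsimulation (10 Hz)
        if step in dispatch_steps:
            # One CAV per lane at the same location forms a rolling roadblock
            ids = [sim.add_vehicle(lane=lane, origin="K0", dest="K8") for lane in lanes]
            platoons.append(ids)

        if platoons and step % sim.controller_period == 0:   # controller runs at 0.1 Hz
            densities = sim.segment_densities()              # fixed-sensor measurements
            positions = [sim.vehicle_position(ids[0]) for ids in platoons]
            speeds = controller.compute_speeds(densities, positions)

        if speeds and step % sim.actuation_period == 0:      # speeds actuated at 1 Hz
            for ids, v in zip(platoons, speeds):
                for vid in ids:
                    sim.set_vehicle_speed(vid, v)             # wraps SetVehicleInfo()
```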
4) Sensors deployment and output: In this simulation, the sensors are categorized into two types: fixed sensors and mobile sensors. The fixed sensors are positioned along the highway, which is divided into 16 segments of equal length, each spanning 0.5 km, the same as the space discretization in the state-space model of the controller. The output of the sensors includes the density of each segment. The simulation assumes that the density of each segment can be directly measured, instead of having to be estimated. The mobile sensors refer to the CAV-platoons that are dispatched to the road. The positions of the CAV-platoons are used as sensing input for the controllers.
5) Metrics calculation: For the microsimulation analysis, metrics including TTT, TTD, and MS are calculated using vehicle trajectories, such that TTT = \sum_{i=1}^{N_V} t_i and TTD = \sum_{i=1}^{N_V} x_i, where x_i and t_i are the travel distance and travel time for vehicle i, and N_V is the total number of vehicles loaded in the microsimulation. To generate the density dynamics heatmap, according to Edie's definitions, the traffic density (ρ_Edie), flow (Q_Edie), and speed (V_Edie) [32], [33] are defined as ρ_Edie = T_tot/A, Q_Edie = X_tot/A, and V_Edie = X_tot/T_tot. The four parameters t, ∆t, x, ∆x bound a spatio-temporal box that contains multiple trajectory points, where A = ∆x × ∆t. Here, T_tot is the total travel time of the vehicles in the bounded box, and X_tot is the total travel distance. Finally, MS is calculated in the same way as described in Section IV-B.
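A brief sketch, under assumed data structures, of how Edie's definitions could be evaluated on a spatio-temporal box of trajectory points; trajectories are assumed to be per-vehicle lists of (time, position) samples taken at a fixed sampling interval.

```python
def edie_metrics(trajectories, t0, dt, x0, dx, sample_dt):
    """Edie's density, flow and speed over the box [t0, t0+dt) x [x0, x0+dx).

    trajectories : list of per-vehicle lists of (time, position) samples
    sample_dt    : time between consecutive trajectory samples
    """
    area = dx * dt
    total_time, total_dist = 0.0, 0.0
    for traj in trajectories:
        for (t_a, x_a), (t_b, x_b) in zip(traj[:-1], traj[1:]):
            # Count only sample steps whose starting point lies inside the box
            if t0 <= t_a < t0 + dt and x0 <= x_a < x0 + dx:
                total_time += sample_dt      # contribution to T_tot
                total_dist += (x_b - x_a)    # contribution to X_tot
    rho_edie = total_time / area             # vehicles per unit length
    q_edie = total_dist / area               # vehicles per unit time
    v_edie = total_dist / total_time if total_time > 0 else float("nan")
    return rho_edie, q_edie, v_edie
```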
6) Process update frequencies: Three types of update frequencies are considered in this work, which are described as follows: (1) The simulation step update frequency (10 Hz) is the rate at which the simulation progresses, in other words, the number of times the simulation updates per second. At a frequency of 10 Hz, the microsimulation updates once every 0.1 seconds. Each update corresponds to a "step" in the simulation, during which the state of the simulation can change based on the inputs and the underlying model. (2) The controller update frequency (0.1 Hz) is the rate at which the state of the controlled system (which includes variables representing the highway dynamics and the CAV-platoons) is observed and the controller is run to compute a new control input. At a frequency of 0.1 Hz, the controller is run once every 10 seconds, matching the time-step duration used in the state-space model. (3) The actuation update frequency (1 Hz) is the rate at which the controlled speed (the output from the controller) is actuated by the CAV-platoon. At a frequency of 1 Hz, the suggested speed from the controller is actuated by the CAV-platoons every 1 second. Before the suggested speed is updated, the CAV-platoon continues to travel at the same speed. This ensures that the platoon receives relatively frequent updates about the speed it should be traveling at.

7) Results and discussion: In this section, we discuss the results obtained from the implementation of the aforementioned controllers, namely GN-LQR, GN-LQRP, and the PI- and MPC-based controllers. The parameter settings used for the various controllers are the same as in Section IV-D. For the PI- and MPC-based controllers, we only test the case with a lower bound of 60 km/hr on the control speed. For GN-LQRP, we also test two additional runs with different values of N, as described later in this section. Table III presents the values of the evaluation metrics obtained from the microscopic traffic simulation both in the presence and absence of a bottleneck on the highway and in the presence of control using the aforementioned algorithms to mitigate the impact of the bottleneck. The computation time for the controllers is omitted in this table, as the controllers' implementation is the same as in the above sections and there is no significant difference in computation time. From the table, it can be observed that for the case without a bottleneck, unlike the macroscopic scenario described in Section IV-B, the MS is 82.94 rather than 99.9 (which is almost equal to the free-flow speed). This is because in microsimulation, even though the desired speed of traffic is the free-flow speed, which is the same between the macrosimulation and the microsimulation, the modeled behavior of drivers and vehicle-vehicle interactions can result in reduced speeds on various occasions in the simulation. The MS is further reduced to 72.75 in the presence of the bottleneck, as the bottleneck causes an increase in the TTT due to additional lane changing, resulting in a slowing down of upstream traffic. The slight increase in the TTD in the case with no bottleneck is because of the additional CAV-platoons (36 platoons, that is, 108 vehicles) that are loaded onto the network in this case but are left uncontrolled. Figure 15 presents the evolution of density in the controlled scenarios for the different controllers, similar to those presented for the macroscopic simulations. Plots of the trajectories of vehicles in the simulation from 30 to 80 minutes for the three lanes are presented in Figure 16. The remaining simulation duration is omitted from the plot, as it mostly contains traffic in a free-flowing state. Notice that in the uncontrolled case, the slowing down of vehicles at the bottleneck extends upstream up to around the 2 km mark. The increase in the length of the jam results from the slowdown of approaching vehicles behind the vehicles already slowed down by lane changing at the bottleneck. As the resulting slowed-down vehicles eventually arrive at the bottleneck, they are again required to change lanes, which causes the jam to continue in time.
Compared to the uncontrolled case, the implementation of the GN-LQR controller results in an increase in the MS of traffic to 79.43 by reducing the TTT from 1,089 to 996, while the TTD only changes by a small amount. Figure 17 presents the space-time diagram for the trajectories in the simulation for the scenario with GN-LQR, which helps develop a qualitative understanding of the performance of the controller in a realistic setting. Notice that, in comparison to the uncontrolled case, the jam in this case only extends to around the 4 km mark. The slowdown of CAV-platoons can be observed from the reduction in the slope of the red lines on the plot (depicting the trajectories of the CAV-platoons), which occurs due to the control as well as naturally when the platoons meet the jam wave created by the bottleneck. While the controlled slowdown of CAV-platoons still results in a slowdown of vehicles upstream of the platoons, the resulting jam waves are much less severe and shorter, resulting in less disruption to upstream traffic, which recovers quickly. This causes the overall jam to reduce in size and increases the speed of traffic in general. This is similar to the observations made in the macroscopic simulation-based analysis and provides preliminary confirmation of the usability of the controller in a realistic setting.
For the implementation of GN-LQRP, we consider three different values of N, namely 50, 30, and 10, with all other parameter values the same as in Section IV-D. The evaluation metrics for the three cases are presented in Table III. While N = 50 is determined to be the best tuning value for the controller in the macroscopic setting, it is found that it is not the best value in the microscopic setting, where it leads to the stopping of vehicles further upstream of the bottleneck, sometimes causing significant jam waves that extend up to the upstream end of the highway stretch (see Figure 19 in Appendix E). While GN-LQRP with N = 50 also results in a slowdown of vehicles further upstream of the bottleneck in the macroscopic setting, it does not lead to severe jams in that case. This difference between the macroscopic and microscopic scenarios for the GN-LQRP controller can be explained by the inherent differences in the impact of vehicle slowdowns between the macroscopic and microscopic models. The TTD in this case is also smaller because of a reduction in the number of vehicles entering the stretch, due to jams at the upstream end of the highway stretch. As found in Section IV-C2, reducing N results in a slowdown of vehicles less far upstream of the bottleneck. Therefore, here we also test the controller with reduced values of N = 10 and N = 30. It is observed that a smaller N does prevent jams at the upstream end of the stretch, restoring the value of TTD as compared to the case with N = 50. In fact, GN-LQRP with N = 10 is also able to outperform GN-LQR in the microscopic setting. From Figure 18, it can be observed that, as compared to GN-LQR, GN-LQRP with N = 10 results in noticeably more gradual changes in the speed of traffic close to the bottleneck, which reduces the disruptions to upstream traffic caused by the controlled slowdowns. This is expected from GN-LQRP, as it inherently penalizes abrupt changes in control speeds, which are less realistic, and provides optimal control under restricted speed changes.
Finally, the PI- and MPC-based controllers also result in an improvement in the state of traffic over the uncontrolled case by reducing the TTT and, as a result, improving the MS, as seen from Table III. The plots of vehicle trajectories for the PI- and MPC-based controllers are presented in Appendix E. Between the two controllers, the MPC-based controller performs better and almost comparably with GN-LQRP, while the PI-based controller performs worse than GN-LQR in microsimulation. Both controllers show limited changes in CAV-platoon speeds (prior to joining the jam waves), as seen from the trajectory plots, which is due to the lower bound on control speeds. MPC is able to use the limited changes to reduce the size of the jam more significantly, as also observed from Figure 15.
V. CONCLUSIONS AND FUTURE DIRECTION
From the previous analysis, we have some preliminary suggestions regarding the questions posed in Section IV-A, which are as follows: 1) A1: Both GN-LQR and GN-LQRP controllers are able to reduce the negative effects of fixed bottlenecks on the highway stretch in both macroscopic and microscopic traffic settings. The performance of the GN-LQR controller is comparable to the MPC-based controller (with and without a lower bound on control speeds) and the PI-based controller (with a lower bound on the control speed). 5) A5: Both GN-LQR and GN-LQRP controllers improve the MS metric of traffic over the uncontrolled case in the microscopic traffic simulation by reducing the speed of the traffic approaching the bottleneck, resulting in a reduction in the length of the formed jam waves. While the same tuning parameters as in the macrosimulation analysis for both the proposed controllers improve the traffic state in microsimulation as well, it is observed that GN-LQRP works better with a smaller value of N = 10 in microsimulation. Larger values of N result in the slowing down of vehicles a long way upstream of the bottleneck, resulting in comparatively worse performance than when the CAVs only slow down close to the bottleneck, which takes place with a smaller N. Differences in the optimal tuning of controllers between the macroscopic and microscopic simulations are expected due to the inherent differences between the models and the difficulty associated with the precise determination of the macroscopic model parameters used within the controller, which can affect the traffic dynamics. The existing controllers are also observed to work well in microsimulation. In the current study, GN-LQRP (N = 10) and MPC (with a lower bound on the control speed) marginally outperform the other controllers. The current analysis compared the different controllers at the same random seed value for the microsimulation. While understanding the impact of simulator stochasticity on the controllers' performance is out of the scope of the current work, which mainly focuses on proposing and analyzing new controllers for traffic control and performing a preliminary test of their usability in microsimulation, performing multiple runs of the simulator with different seed values to account for stochasticity and analyzing them is necessary to develop a deeper understanding of the effectiveness of CAV-based control in the real world. There is also scope for a more in-depth analysis of CAV-based control under different scenarios, such as in the presence of different jam/bottleneck triggering factors including, but not limited to, the existence of curvature, slope, lane-drops, tunnels, bridges, imposed speed limits, or ramps. There are several potential directions of improvement for the controllers, mainly with regard to application in the microscopic simulation case, which would further improve their performance in the real world. These include the extension of the control approach to account for uncertainty in parameter estimation for the state-space equation used in the controller, as well as accounting for the lack of compliance of the CAVs with the control inputs. These changes would be a step in the direction of robust traffic control using CAVs. Also, additional flexibility in the form of lane-wise control of CAVs can be considered, wherein the CAVs are no longer restricted to traveling side-by-side but can block the traffic in individual lanes in coordination with other CAVs to achieve optimal control. This would require a lane-wise macroscopic traffic model similar to the one in [34] to define the state-space model for the controller. Besides, the current work can also be extended to large-scale road networks and the consideration of other forms of control, such as ramp metering and variable speed limits, which can be incorporated into the framework as inputs to the system, thus allowing for integrated control.
APPENDIX A LQR OPTIMIZATION PROBLEM WITH A PENALTY ON CONTROL INPUT CHANGES
This section describes the formulation of the modified LQR optimization problem which penalizes changes in control inputs over consecutive time steps. To explain the idea of this controller, we use the equations for a standard linear system, where (11) simply describes the current input in terms of the previous input and the change in input. By defining new augmented vectors and matrices, we can rewrite (12) in the augmented form (15). Since (15) resembles a standard linear system, we can write a new optimization problem in the LQR framework which regulates both the states and the change in control inputs (instead of the control inputs directly) around the zero point. The objective function of this problem penalizes the augmented state error and the change in control inputs, where R′ ∈ ℝ^(n_u×n_u) is the weight matrix for the penalty on control input changes.
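For reference, a minimal LaTeX sketch of the augmented formulation described above, reconstructed from the text and from the standard approach in [27]; the symbols follow the conventions used in this article, and the exact equation numbering of the original is not reproduced.

```latex
% Augmented system for penalizing changes in the control input
\begin{align}
  x[k+1] &= A\,x[k] + G\,u[k], \qquad u[k] = u[k-1] + \Delta u[k],\\
  z[k] &:= \begin{bmatrix} x[k] \\ u[k-1] \end{bmatrix}, \qquad
  z[k+1] = \underbrace{\begin{bmatrix} A & G \\ 0 & I \end{bmatrix}}_{\tilde{A}} z[k]
         + \underbrace{\begin{bmatrix} G \\ I \end{bmatrix}}_{\tilde{G}} \Delta u[k],\\
  J &= \sum_{k} \left( z[k]^{\top}
        \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} z[k]
        + \Delta u[k]^{\top} R' \, \Delta u[k] \right).
\end{align}
```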
The same idea can be applied to regulate the states of the nonlinear system (4) around a predefined equilibrium point while keeping the change in the control inputs to a minimum. The GN-LQR algorithm (Algorithm 1) can be applied for this purpose with some modifications. The modified algorithm is presented as Algorithm 2.
For Algorithm 2, the augmented states and control inputs and the corresponding equilibrium points are defined in the same way as in (13), and are further used to obtain the stacked augmented matrices corresponding to X, U, X*, U*. The augmented state-space matrices are defined using the linearized state-space matrices (8) in the same manner as above.
APPENDIX B PI-BASED CONTROLLER IMPLEMENTATION
The PI-based controller implemented in this work is based on that presented in [5]. The control law is given in (19), where K_P and K_I are the controller gains and e_j[k] is the controller error, defined in terms of ρ̄_j[k], the average density over the segments downstream of CAV-platoon j and upstream of the fixed bottleneck on the highway stretch, and ρ̂, the density set-point, which is supposed to be the ideal value of ρ̄_j[k] and is set equal to the critical density ρ_c. The average density is calculated using only the segments whose density is above a certain threshold value, which in this case is set to the critical density. So, if none of the segments between the CAV-platoon and the fixed bottleneck have a density above ρ_c, then e_j[k] is undefined (since there are no segments to calculate it over). In this work, the optimal gains for the PI-based controller are obtained by setting up a nonlinear optimization problem with the objective of maximizing the MS. The fmincon() solver of MATLAB, which implements the interior point algorithm, is used to solve the nonlinear optimization problem as a minimization problem. The state evolution steps for the full duration of the simulation are set as constraints within fmincon(), while the objective is set to the negative of the MS. The bounds for the gain values are set to [−10, 10] for both gains, which are found to be sufficient. The solver is initialized with the solution values 0.8 and 1.6, which are obtained from [5].
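For completeness, a sketch of a velocity-form PI law and error definition consistent with the description above; the sign convention is an assumption, since the exact expressions from [5] are not reproduced here.

```latex
% Assumed velocity-form PI control law for CAV-platoon j
\begin{align}
  u_j[k] &= u_j[k-1] - K_P\left(e_j[k] - e_j[k-1]\right) - K_I\, e_j[k],\\
  e_j[k] &= \bar{\rho}_j[k] - \hat{\rho}, \qquad \hat{\rho} = \rho_c,
\end{align}
% where \bar{\rho}_j[k] averages only those downstream segments (between the
% platoon and the fixed bottleneck) whose density exceeds \rho_c.
```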
APPENDIX C MPC-BASED CONTROLLER IMPLEMENTATION
The MPC-based controller is implemented based on the implementation in [6]. The corresponding optimization problem is given as follows, where k is the current time step, T is the duration of a time step, N_P is the prediction horizon, which is set to 20 time steps as in [6], N is the number of segments on the considered highway stretch, L_i is the length of segment i, ρ_i[h] is the density of segment i at time step h, ρ_ī[h] is the density of the bottleneck segment at time step h, φ_ī[h] is the outflow from the bottleneck segment at time step h, β_1, β_2, and β_3 are the objective weights, set to 0.1, 0.1, and 0.8, respectively, as in [6], and u_min and u_max are the lower and upper bounds on the control speeds. While we only control the CAV-platoon speeds explicitly in this optimization, the density of the highway segments is also a variable, since it changes with the value of the control speed. The above optimization problem is solved using the interior point algorithm implemented through the fmincon() solver of MATLAB.
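Since the optimization problem itself is not reproduced above, the following is a hedged sketch of its structure, assembled only from the verbal description (a weighted trade-off between the TTT over the prediction horizon, the outflow from the bottleneck segment, and the deviation of the bottleneck density from the critical density); the exact terms and normalization used in [6] may differ.

```latex
% Assumed structure of the MPC objective over the prediction horizon N_P
\begin{align}
  \min_{u[k],\dots,u[k+N_P-1]} \quad
  & \beta_1 \sum_{h=k+1}^{k+N_P} \sum_{i=1}^{N} T\, L_i\, \rho_i[h]
    \;-\; \beta_2 \sum_{h=k+1}^{k+N_P} T\, \phi_{\bar{i}}[h]
    \;+\; \beta_3 \sum_{h=k+1}^{k+N_P} \left(\rho_{\bar{i}}[h] - \rho_c\right)^2 \\
  \text{s.t.} \quad & \text{CTM dynamics of Section II}, \qquad
    u_{\min} \le u_j[h] \le u_{\max}.
\end{align}
```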
APPENDIX D CAR-FOLLOWING MODEL
The car-following model used in the microsimulation is the intelligent driver car-following model (IDM) [35], shown in Equations (22)-(24); the detailed parameters are listed in Table IV. In TransModeler, only the desired time gap T and the free acceleration exponent δ are editable.
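For reference, the standard form of the intelligent driver model is given below; this is the textbook formulation [35] rather than a verbatim copy of Equations (22)-(24), and TransModeler's internal implementation may differ in detail.

```latex
% Intelligent Driver Model (standard form)
\begin{align}
  \dot{v} &= a \left[ 1 - \left(\frac{v}{v_0}\right)^{\delta}
             - \left(\frac{s^{*}(v, \Delta v)}{s}\right)^{2} \right],\\
  s^{*}(v, \Delta v) &= s_0 + v\,T + \frac{v\,\Delta v}{2\sqrt{a\,b}},
\end{align}
% v: current speed, v_0: desired speed, s: gap to the leader,
% \Delta v: approaching rate, T: desired time gap, s_0: minimum gap,
% a: maximum acceleration, b: comfortable deceleration,
% \delta: free acceleration exponent.
```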
Figure 1: Three segments of the modeled highway stretch along with two CAV-platoons and the corresponding states written underneath. Arrows indicate the direction of traffic flow.
Figure 2: Upstream demand profile for the given example.
Figure 3: Density (veh/km) evolution in the uncontrolled case with the reduced bottleneck flow.
Figure 5: Density (veh/km) evolution on the highway stretch with the GN-LQR controller with [left] N = 40 time steps, and [right] N = 60 time steps.
Figure 6: Density (veh/km) evolution on the highway stretch with the GN-LQRP controller with N = 50 time steps.
Figure 7: Variation in MS with horizon length N for GN-LQR and GN-LQRP.
Figure 8: Variation in Average Computation Time (seconds) for each run of the controller with horizon length N for GN-LQR.
Figure 9: Variation in MS with LQR objective weight matrices for GN-LQR and GN-LQRP.
Figure 10: Change in control speed value over consecutive time steps at different values of the penalty weight for GN-LQR with a penalty.
Figure 11: Density (veh/km) evolution on the highway stretch with PI-based control with [top left] optimal gains and a lower bound of 60 km/hr, [top right] optimal gains and no lower bound, and [bottom] arbitrary gains and no lower bound.
Figure 12: Density (veh/km) evolution on the highway stretch with MPC-based control with a [left] lower bound of 60 km/hr, and [right] no lower bound.
Figure 13: CAV-platoon speed profile for platoon 11 with [left] PI-based controller with lower bound, and MPC-based controller with and without lower bound, and [right] GN-LQR controller and GN-LQRP controller with R′ = 30I.
Figure 14: The simulation framework developed in TransModeler to validate the controller.
Figure 15: Density (veh/km) evolution on the highway stretch in microsimulation for the different scenarios: [top left] no control with bottleneck, [top middle] with LQR control, [top right] with LQRP control, [bottom left] with PI control, [bottom right] with MPC control.
Figure 16: Vehicle trajectory space-time diagram for the uncontrolled scenario (the color of each trajectory point reflects the speed of the vehicle at the time): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Figure 17: Vehicle trajectory space-time diagram for the controlled scenario using the GN-LQR controller with N = 3 (the color of each trajectory point reflects the speed of the vehicle at the time, and the CAV-platoon trajectories are labeled with red lines): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Figure 18: Vehicle trajectory space-time diagram for the controlled scenario using the GN-LQRP controller (the color of each trajectory point reflects the speed of the vehicle at the time, and the CAV-platoon trajectories are labeled with red lines): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Figure 20: Vehicle trajectory space-time diagram for GN-LQRP (N = 30) (the color of each trajectory point reflects the speed of the vehicle at the time): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Figure 21: Vehicle trajectory space-time diagram for MPC (the color of each trajectory point reflects the speed of the vehicle at the time): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Figure 22: Vehicle trajectory space-time diagram for PI (the color of each trajectory point reflects the speed of the vehicle at the time): [top] the left lane, [middle] the middle lane, [bottom] the right lane.
Algorithm 1 input: State-space matrices A, G, nonlinear function f, initial state x[0], horizon length N, LQR weight matrices Q, R, error tolerance ε, maximum number of iterations M, initial guess for equilibrium control inputs U*, and initial guess for initial equilibrium state x*[0].
Table I: Variation in MS with the number of iterations for GN-LQR.
Table II: Comparison metrics over different traffic control scenarios for the given example. Computation time (CT, in seconds) refers to ACT for MPC- and LQR-based controllers and offline computation time for PI-based controllers.
Table III: Evaluation metrics for different scenarios tested in the microsimulation.
Table IV: Parameter setting for the Intelligent Driver Car-following Model in TransModeler. | 17,792.6 | 2023-06-17T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Wind turbine main-bearing lubrication – Part 2: Simulation-based results for a double-row spherical roller main bearing in a 1.5 MW wind turbine
This paper is the second in a two-part study on lubrication in wind turbine main bearings. Where Part 1 provided an introductory review of elastohydrodynamic lubrication theory, this paper will apply those ideas to investigate lubrication in the double-row spherical roller main bearing of a 1.5 MW wind turbine. Lubrication is investigated across a “contact conditions dataset” generated by inputting main-bearing applied loads, estimated from hub loads generated using aeroelastic simulation software, into a Hertzian contact model of the main bearing. From the Hertzian model are extracted values of roller load and contact patch dimensions, along with the time rate of change of contact patch dimensions. Also included in the dataset are additional environmental and operational variable values (e.g. wind speeds and shaft rotational speeds). A suitable formula for estimating film thickness within this particular bearing is then identified. Using lubricant properties of a commercially available wind turbine grease, specifically marketed for use in main bearings, an analysis of film thickness across the generated dataset is undertaken. The analysis includes consideration of effects relating to temperature, starvation, grease thickener interactions and possible non-steady effects. Results show that the studied main bearing is at risk of operating under mixed lubrication conditions. Key findings and uncertainties in the analysis are discussed, along with recommendations for future work.
Introduction
Higher-than-expected failure rates for wind turbine main bearings have led to increased research focus on this component in recent years (Hart et al., 2019; Guo et al., 2021; Nejad et al., 2022). Main-bearing failures are costly, with their replacement generally requiring the entire turbine rotor to be removed and supported during changeovers. As turbines move further from shore, component reliability becomes increasingly important due to additional costs associated with heavy-lifting vessel procurement and operation, access lead times, and impacts of lost revenue on the levelised cost of energy (Ren et al., 2021). Furthermore, the mechanisms leading to premature main-bearing failures are still not properly understood (Hart et al., 2019; Guo et al., 2021; Nejad et al., 2022). This gap in knowledge, regarding fundamental causal mechanisms, represents a risk to the reliability of newer (larger) wind turbines, since it cannot currently be known a priori whether main-bearing failure rates are likely to be improved or worsened for these, and future, machines. In addition, practical solutions to ameliorate the current rates of main-bearing failure cannot be systematically identified, tested and developed until principal failure drivers in the wind turbine context are better understood. As argued in Hart et al. (2020), an improved understanding requires the full load pathway - including the turbulent wind field and aerodynamic and control interactions which drive the loading that passes into the drivetrain and then the main bearing - to be accounted for.
The main bearing performs the important task of supporting the rotor and reacting (to a greater or lesser extent) non-torque loads entering the drivetrain; crucially, this must be achieved while providing low-friction free rotation of the shaft and without rapid wear to bearing internal surfaces. Wind turbine main bearings therefore contain a lubricant, the role of which is to fully or partially separate bearing internal surfaces, via fluid and elastic-solid interactions, greatly improving frictional conditions and minimising wear. The lubricant and lubrication mechanisms are therefore fundamental to main-bearing operation and lifetime and, as a result, must be considered if internal main-bearing conditions and possible damage drivers are to be properly investigated. In order to begin addressing these issues, the present study investigates lubrication in a 1.5 MW wind turbine main bearing under realistic load and speed conditions obtained from wind turbine simulations. Lubrication is considered by applying film thickness equations along with other results from elastohydrodynamic lubrication (EHL) theory. The fields of lubrication and EHL are complex, nuanced and rapidly evolving. As such, simplified lubrication equations and associated results must be accompanied by a careful consideration of their context and validity. To this end, Part 1 of the present study provides a detailed introductory review of EHL theory which seeks to provide an accessible and representative overview of the field. This paper, Part 2, details the main-bearing lubrication analysis itself, including simulations and dataset generation, internal load and contact-patch evaluation, lubrication conditions, and film thickness analyses. Note, to aid the reader, a table of symbols is provided in Appendix A.
Background
Main-bearing research to date has mainly focused on load modelling (Kock et al., 2019; Hart, 2020; Wang et al., 2020; Zheng et al., 2020a; Stirling et al., 2021), load characteristics (Cardaun et al., 2019; Hart et al., 2019; Hart, 2020; Guo et al., 2021) and implications for fatigue damage (Zheng et al., 2020b; Loriemi et al., 2021). Lubrication conditions within the main bearing are generally not considered. Two exceptions to this are Rolink et al. (2020) and Guo et al. (2021). In Rolink et al. (2020), EHL films are modelled as part of a general feasibility study of a novel plain-bearing solution for the main bearing in wind turbines. In Guo et al. (2021), axial motion of the shaft and main-bearing inner ring were investigated using both direct measurements and the application of simplified models. The main aim of the study was to ascertain whether axial motion in an operating main bearing is rapid enough to potentially disturb the EHL film. It was concluded that axial motions are slow and highly unlikely to impact the lubricating film in these components. This conclusion is valuable in that it helps narrow down the possible causes of premature failures in main bearings. Note, lubrication was not modelled directly in this previous study. Therefore, to the best of the authors' knowledge, a detailed analysis of lubrication conditions and film thickness in a wind turbine main bearing, under realistic operating conditions, has not been presented before in the scientific literature.
Some previously undertaken modelling work was again used in this study. This relevant literature will therefore be described in more detail. Particularly relevant to the current study is Hart (2020), in which large repeating load patterns were shown to be experienced by the main bearing throughout operation. In addition, a Hertzian contact model of a double-row spherical roller main bearing was presented and used to investigate individual roller loads during operation. The described load structures were found to drive large, rapid variations (e.g. over 60 kN in around 1 s) in individual roller loads inside of the main bearing. The same Hertzian model, of a double-row spherical roller main bearing, is again used in the current investigation. The loads entering the system at the turbine hub are obtained from simulations, described below. Load inputs to the Hertzian model are then determined using a simplified drivetrain representation and a static load balance, between applied hub loading and main-bearing force response, at each time step (Hart et al., 2019;Hart, 2020). This approach is possible due to the statically determinate nature of three-point-mount drivetrains in which the main bearing provides a force reaction only (Stirling et al., 2021), as is the case here. In Stirling et al. (2021) it was shown that this simplified drivetrain representation is able to accurately recreate the radial response at the main bearing seen in a higher-fidelity, but still relatively simple, 3D finite element model. All thrust is assumed to be reacted by the main bearing. Having determined the total load acting on the main bearing at each time step, the internal load distribution and contact patch geometries for all rollers are then resolved using the Hertzian model at each time step. Effects of both radial and axial (thrust) loading at the main bearing are accounted for when resolving internal loads.
Also relevant to the current work are the findings of Kock et al. (2019), who showed that elastic effects beyond those of the contacts themselves can influence main-bearing internal loading. More specifically, it was shown that neglecting influences of elastic surroundings and bedplate flexibility can lead to roller load over-prediction of up to about 50 %. However, their analysis was undertaken for the downwind cylindrical roller bearing of a double main bearing (equivalently, four-point-mount) configuration, within a larger (6 MW) wind turbine. A number of differences are therefore present between their 6 MW turbine analysis and the 1.5 MW turbine (three-point-mounted) drivetrain considered here, with size being the major one, since flexibility would be expected to play a greater role as the modelled turbine and its components become larger. The Hertzian model of Hart (2020), used in the current study, does not account for flexibility in the bearing system other than that at roller-raceway interfaces. As such, it may be over-predicting individual roller loads to some extent, as described in Kock et al. (2019). But, since our Hertzian model is for a considerably smaller bearing, load over-prediction would be expected to be less severe than the 50 % reported for the larger bearing model. The level of possible over-prediction is unknown. Therefore, a load sensitivity analysis of lubrication results will be undertaken in the present work to address this.
Wind turbine simulations in previous and current work were undertaken using DNV-GL Bladed software. Bladed is a design certified wind turbine aeroelastic simulation tool. Aerodynamic interactions are evaluated using blade-element-momentum theory, with the structural response evaluated via a multibody formulation of wind turbine structural dynamics. Turbine loads generated using Bladed include the effects of aerodynamic, elastic, inertial and gravity-driven loading. Tilt is also present for the modelled turbine rotor, the effects of which are automatically accounted for in the applied drivetrain models since the hub frame of reference (and so also the force/moment outputs) is aligned with the tilted low-speed shaft. Non-steady wind fields, with which the simulated turbine interacts, are generated by combining deterministic components (tower shadow, shear, etc.) with kinematic turbulence, generated using a spectral model.
A thorough treatment of relevant EHL theory has been provided in Part 1 of this study. As such, familiarity with the material presented there is assumed throughout this paper. Finally, it is relevant to mention that in rolling bearing design standards (ISO, 2007) lubrication conditions are considered, but via the viscosity ratio (κ) rather than the film parameter (Λ) directly. The two quantities are linked by the approximate relationship κ ≈ Λ^1.3 (ISO, 2007). Since the aims of the present paper are to analyse lubrication conditions, film thickness values and related effects for the modelled main bearing, this link is not pursued further in the current work.
Methodology
In order to study lubrication in the case of a wind turbine main bearing, an extensive contact conditions dataset, including roller loads, shaft speeds and contact dimensions (along with other relevant quantities), was generated. These values, combined with lubricant data and additional bearing information, allow film estimation formulas and EHL results to be applied. The process by which this dataset was generated will be detailed first, followed by a summary of bearing and lubricant information relevant to the study. The methodology for the lubrication analysis itself will then be described. Figure 1 summarises the process by which contact conditions within the main bearing were evaluated. Background on the software and models used was provided in Sect. 2. Dataset generation followed a four-stage process.
Generating the contact conditions dataset
1. Aeroelastic simulations of a 1.5 MW (variable speed, pitch-regulated) wind turbine across a range of hub-height mean wind speeds (6-24 m s⁻¹, in increments of 2 m s⁻¹) and turbulence levels (low, medium and high as defined by the IEC design standards; IEC, 2019) were performed using DNV-GL Bladed software. Each simulation was run for 700 s of turbine operational time, with the first 100 s discarded to remove startup transients. From each resulting 10 min simulation, 100 Hz time series of hub loading in 6 degrees of freedom were extracted (three force components and three moment components). Environmental and operational time series (shaft speed, rotor mean wind speed, turbine power, blade pitch angle, etc.) were also extracted at the same frequency.
2. Loads acting at the main bearing were then evaluated using a quasi-static force balance (see Sect. 2, Hart et al., 2019, and Hart, 2020), taking the hub loads from Step 1 as input. As was the case in Hart (2020), the required load input to the Hertzian model is that being applied to the bearing by the shaft, as opposed to the reaction of the bearing. This is simply a case of ensuring the correct sign convention is being used.
3. The total load acting on the main bearing at each time step was then input to the Hertzian contact model, with the equilibrium deflections solved for quasi-statically and no account taken for possible inertial effects within the bearing. The deflections, once solved for, fully characterise roller loading and contact geometries within the bearing.
4. Roller loads were then extracted for each roller in the downwind row of the double-row main bearing. This is because the upwind row is only occasionally loaded during normal operation, as a result of thrust (Hart, 2020; Loriemi et al., 2021). From the roller loads were also calculated the Hertzian semi-widths (a and b) of the resulting contact patches at inner- and outer-raceway contacts (Hart, 2020). By evaluating the system at the next time step, finite differencing was used to estimate the rate of change of these quantities in time, i.e. da/dt and db/dt. These values are important for determining whether non-steady EHL effects may be significant here. Roller trajectories from one time step to the next were evaluated assuming pure rolling. Roller loads, contact patch semi-widths and semi-width time derivatives were all included in the resulting dataset, along with environmental and operational values corresponding to the same point in time (a sketch of this post-processing step is given at the end of this subsection).
To avoid data quantities becoming unwieldy, data were extracted for each loaded roller in the downwind race at 200 randomly selected points in time from within each 10 min simulation. The resulting contact conditions dataset, from a total of 30 simulations in varying conditions, contains over 126 000 roller load (and associated operating conditions) entries from across the turbine's operational envelope.

Figure 1. Visual summary of the process by which the contact conditions dataset was generated. Starting from turbulent wind interactions, loads introduced to the system are resolved in the drivetrain and then inside of the main bearing. The resulting dataset captures the load acting at individual rollers within the main bearing, along with other information important to lubrication.
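As an illustration of the post-processing in steps 3 and 4, the sketch below takes per-roller loads and contact semi-widths (as would be produced by the Hertzian model at successive time steps), forms forward-difference estimates of da/dt and db/dt, and randomly subsamples time points. The array shapes, the synthetic inputs and the a, b ∝ w^(1/3) scaling used to generate them are illustrative assumptions only, not the actual model outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def contact_records(t, w, a, b, omega_shaft, n_samples=200):
    """Assemble contact-conditions records for one 10 min simulation.

    t           : (N,) time stamps at 100 Hz
    w, a, b     : (N, n_rollers) roller loads [N] and contact semi-widths [m]
                  for the downwind row, as produced by the Hertzian model
    omega_shaft : (N,) shaft rotational speed
    """
    dt = t[1] - t[0]
    # Forward finite differences estimate the contact-patch rates of change,
    # following each roller from one time step to the next (pure rolling assumed).
    da_dt = np.diff(a, axis=0) / dt
    db_dt = np.diff(b, axis=0) / dt
    records = []
    # Randomly subsample time indices (excluding the final step, for which no
    # forward difference exists) to keep data quantities manageable.
    for k in rng.choice(len(t) - 1, size=n_samples, replace=False):
        for j in np.flatnonzero(w[k] > 0.0):          # loaded rollers only
            records.append(dict(time=t[k], roller=int(j), w=w[k, j],
                                a=a[k, j], b=b[k, j],
                                da_dt=da_dt[k, j], db_dt=db_dt[k, j],
                                omega=omega_shaft[k]))
    return records

# Toy example with synthetic inputs (2 s of data, 3 rollers), purely illustrative.
t = np.arange(0.0, 2.0, 0.01)
w = 30e3 * (1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t))[:, None] * np.ones(3)
a = 0.02 * (w / 60e3) ** (1 / 3)      # assumed w^(1/3) scaling, for illustration
b = 0.001 * (w / 60e3) ** (1 / 3)
omega = np.full_like(t, 1.8)
print(len(contact_records(t, w, a, b, omega, n_samples=50)))
```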
Main-bearing and lubricant data
As described in Hart et al. (2022), lubrication analysis requires information relating to bearing geometry, surface roughness and lubricant properties. The modelled bearing is an SKF 240/630CA/W33, which is used for wind turbines of the size being simulated in this work. Some of its geometric information is available in the public domain; however, other details are not, and so approximate values only will be given in those cases. The lubricant which will be modelled in the current analysis is an industrial grease, specifically designed and marketed as a lubricant for wind turbines of this size, including for main bearings. Table 1 presents information relating to this bearing and the grease base oil; approximate values are given where necessary. Kinematic viscosity information of the base oil, from the grease manufacturer, was provided at 40 °C (460 mm² s⁻¹) and 100 °C (16 mm² s⁻¹) along with the relative density (0.9), as is standard. Interpolation to other temperatures was performed using the ASTM-prescribed method (ASTM, 2020). The inverse asymptotic isoviscous pressure, α*, of the main-bearing grease base oil is not known. In Vergne and Bair (2014) a range of measured α* values are listed. At a temperature of 40 °C, reported measured values have a mean value plus or minus standard deviation range of α* = 21 ± 6 GPa⁻¹. While α* is temperature dependent, it will not vary much for small differences in temperature (on the order considered here; see Vergne and Bair, 2014). A value of α* = 21 GPa⁻¹ is therefore assumed, this being the best available estimate without more information. The sensitivity of results to the α* value will be considered as part of the analysis. Effects related to grease ageing and re-greasing (Lugt, 2013) were not considered in this analysis.
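For reference, the temperature interpolation described above can be reproduced with the ASTM D341 (Walther) relation fitted through the two quoted kinematic viscosity points; treating the relative density of 0.9 as temperature independent is a simplifying assumption here, so the result only approximately matches the Table 1 dynamic viscosity.

```python
import math

# Grease base-oil data quoted by the manufacturer.
NU_40, NU_100 = 460.0, 16.0   # kinematic viscosity [mm^2/s] at 40 and 100 degC
RHO = 900.0                   # [kg/m^3], from relative density 0.9 (assumed constant)

def _walther_coeffs(t1_c, nu1, t2_c, nu2):
    """Fit log10(log10(nu + 0.7)) = A - B*log10(T_K) through two points (ASTM D341)."""
    x1, x2 = math.log10(t1_c + 273.15), math.log10(t2_c + 273.15)
    y1, y2 = (math.log10(math.log10(nu + 0.7)) for nu in (nu1, nu2))
    B = (y1 - y2) / (x2 - x1)
    A = y1 + B * x1
    return A, B

def kinematic_viscosity(temp_c):
    """Base-oil kinematic viscosity [mm^2/s] at temp_c [degC]."""
    A, B = _walther_coeffs(40.0, NU_40, 100.0, NU_100)
    y = A - B * math.log10(temp_c + 273.15)
    return 10.0 ** (10.0 ** y) - 0.7

def dynamic_viscosity(temp_c):
    """Inlet dynamic viscosity eta_o [Pa s] at atmospheric pressure."""
    return RHO * kinematic_viscosity(temp_c) * 1e-6   # mm^2/s -> m^2/s

print(round(dynamic_viscosity(35.0), 3))   # ~0.65 Pa s, close to the 0.6525 Pa s in Table 1
```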
Operating temperatures for the main bearing
Operating temperatures will strongly influence lubricant viscosity, which in turn has a significant effect on lubricant film thickness. The simulation software and models used to evaluate contact conditions in the current work do not model or predict operating temperatures. It is therefore necessary to identify an appropriate temperature range over which to consider main-bearing lubrication. Based on published temperature data for main bearings in the literature (de Mello et al., 2022; Beretta et al., 2021), normal operation of healthy main bearings can be seen to include temperatures of around 20-40 °C as standard, with higher temperatures occurring if a fault develops. These temperature values reported in the literature relate to measurements taken on the outside of the bearing casing, as opposed to inside of the main bearing. Lubricant contact-inlet temperatures may therefore be higher than the values listed above. Preliminary analyses, during the course of this work, revealed that an important transition in lubrication regime occurs at around 35 °C for the modelled main bearing and lubricant. Therefore, the analysis presented here will be centred on this value, with results for higher- and lower-temperature cases of 40 and 30 °C, respectively, also given. It is again emphasised that, based on published data, explored temperatures fall within the standard operating temperature range of healthy main bearings. For analysis at a given temperature, that same temperature is assumed to hold across all operating points in the contact conditions dataset. Temperatures in the main bearing will vary during operation as a result of frictional effects and internal load and speed interactions. However, significant thermal inertia is present (de Mello et al., 2022), meaning only relatively weak correlations exist between load, speed and temperature measurements. As such, each operating point may be seen at a range of operating temperatures, indicating that the application of lubrication equations across all operating points while using a single fixed temperature does provide a valid, albeit simplified, assessment of conditions that will be seen in practice.

Table 1. Modelled main-bearing and grease base-oil properties (approximate where necessary):
Combined surface roughness of roller-raceway interfaces: σ ≈ 300 nm
Inner- and outer-raceway reduced radii in rolling direction: R_x^in ≈ 0.03 m, R_x^out ≈ 0.04 m
Inner- and outer-raceway contact ellipticity parameter: very high (see Sect. 3.3)
Inlet dynamic viscosity (at atmospheric pressure) at 35 °C: η_o = 0.6525 Pa s
Inverse asymptotic isoviscous pressure (assumed value): α* = 21 GPa⁻¹
Lubrication analysis methodology
While highly sophisticated EHL solvers have been developed, as outlined in Hart et al. (2022), these tend to model individual contacts (as opposed to full bearings), and setting up and running these solvers is highly non-trivial. Prior to the application of such methods, it is sensible to first consider the problem using simplified lubrication equations and other results from EHL theory. This will provide immediate insight into main-bearing lubrication conditions and behaviour, allow the need for more advanced solvers in this space to be assessed, and help determine where more detailed investigations might be best focused. Therefore, in order to estimate surface separation within the main bearing, for conditions identified using the modelling approach outlined above, an appropriate minimum film thickness (h_m) equation must be identified. In the present case, this includes determining whether roller-raceway contacts should be treated as line- or point-contact conjunctions. From Table 1, it can be seen that very high ellipticity values hold for contacts within the main bearing in question. In addition, the relative magnitudes of required adjustments to inner- and outer-raceway elasticity values when forming equivalent line-contact representations were found to be negligible (around 0.1 %) for the modelled bearing. This indicates that roller-raceway contacts in this bearing are very close indeed to their equivalent line-contact counterparts, in the context of lubrication. Furthermore, the ellipticity values themselves far exceed those for which point-contact film thickness equations have been developed. It was therefore concluded that film thickness for this bearing should be analysed using the equivalent line-contact formulation, outlined in Hart et al. (2022), in combination with a line-contact film thickness formula. The formula selected for this analysis was the more recent and comprehensively fitted Masjedi and Khonsari line-contact equation (Masjedi and Khonsari, 2012). Given dimensionless roughness values of the main bearing in question, σ/R_x^in ≈ 9.7 × 10⁻⁶ and σ/R_x^out ≈ 8.1 × 10⁻⁶, it follows that the associated rough-surface correction factor should also be used. Therefore, roller centre-line minimum film thickness values were estimated using the rough-surface-corrected Masjedi and Khonsari line-contact formula, referred to throughout as Eq. (1). For definitions of all terms see Appendix A and Hart et al. (2022). Assuming normally distributed roughness, it follows that s_std = σ. Bearings of this type are hardened as much as possible, and so a value of V = 0.03, corresponding to a high Vickers hardness of 700 HV which is typical for bearing steel, is used throughout. The lubricant entrainment velocity, u, is calculated assuming pure rolling; see Appendix B. Recent work by Bergua et al. (2021), in which up-tower measurements from an operational wind turbine were found to show no appreciable levels of main-bearing gross slip, provides some justification for this assumption. In reality, some amount of individual roller slip would be expected to occur outside of the loaded zone, even in cases of no gross slip. Such effects are not accounted for in this analysis. However, the lack of gross slip observed for an operational main bearing (Bergua et al., 2021) indicates that the pure rolling assumption is likely valid throughout most of the loaded zone and, importantly, at the point of maximum load. Prior to lubricating film thickness results, contact pressures and dimensionless lubrication parameter values for the main bearing will be presented.
As will be shown, operating parameter values for the main bearing lie well within the limits of where Eq. (1) was fitted. Significant uncertainty is present regarding conditions inside of the main bearing which contribute to overall lubrication performance, especially due to the likely presence of starvation under grease lubrication (after the churning phase). Lubrication was therefore investigated starting from a "best-case scenario" of fully flooded conditions, for which Eq. (1) is applied directly while assuming lubrication is driven by grease base-oil properties only. As discussed in Hart et al. (2022), it is not yet possible to easily predict starvation levels in full grease-lubricated bearings. Possible effects of starvation were therefore considered using a basic order-of-magnitude estimate of resulting film reductions. This was achieved by reducing film thickness predictions to 70 % of their fully flooded values. Based on the results of Masjedi and Khonsari (2015), this level of starvation coincides with a roughly 30 % reduction in lubricant mass flow rate through the contacts. To be clear, this treatment offers only a crude estimate of possible starvation effects which, in practice, will vary with speed and other parameters. For grease lubrication in particular, considerably more severe levels of starvation have been observed (Cen and Lugt, 2019, 2020). See Hart et al. (2022) for a more detailed discussion of this aspect of EHL. Both fully flooded and starved results will be presented for temperatures of T = 30, 35 and 40 °C, allowing the sensitivity of film thickness to operating temperature to be quantified. Roller load and α* value sensitivities were also investigated. The approach outlined thus far does not account for possible grease thickener interactions on film thickness values. The importance of this aspect of grease lubrication in the main-bearing case will therefore be considered by combining film thickness predictions, obtained here, with results reported in the literature concerning the onset of thickener interactions. The analyses outlined thus far all rely on the chosen film thickness equation, which applies in steady-state lubrication. Possible non-steady effects are therefore not accounted for, with the above therefore referred to as steady-state film thickness analyses. The possible presence of significant non-steady EHL effects during main-bearing operation is investigated in a subsequent analysis, by considering db/dt and da/dt values relative to the concurrent value of lubricant entrainment velocity, ũ. A lower limit for significance is taken to be 25 %, based on findings reported in the literature (Hooke, 2003).
At this stage, it is not possible to quantify the accuracy with which the described lubrication analysis is able to represent lubrication conditions, behaviour and film thickness in a real-world main bearing. Results must therefore be interpreted with care, keeping in mind the related discussions in Part 1 of this work. However, this analysis should allow for general conclusions to be reached regarding the dominant lubrication regime(s), key film thickness sensitivities and the likely importance of more complex effects (starvation, thickener interactions and non-steady effects). This, therefore, is the context in which results should be approached and interpreted.
Results
Lubrication analysis results will be presented in the current section. It is first important to revisit the approximations made during modelling stages. This includes the half-space approximation for contacting surfaces in EHL models used to generate film thickness equations, as well as the lubrication approximation applied in the same EHL models (see Hart et al., 2022). In addition, the outlined methodology analyses bearing internal loading and displacement independently of lubrication and lubricant film thickness values. Such an approximation is only valid if one does not significantly impact the other, more specifically, if roller deflections are significantly larger than film thickness values. Assessing the appropriateness of these approximations, in the current case, therefore requires model geometries and outputs to be compared. Such an analysis was undertaken, confirming (as far as is possible without more complex analyses) that all requirements are met with regards to ensuring modelling approximations may be considered valid. The analysis itself is presented in Appendix C.
Lubrication conditions
Prior to film analysis itself, it is necessary to consider the conditions (with respect to dimensionless parameter values) under which lubricated conjunctions within the main bearing are operating. This allows for the extremity of conditions to be assessed, while also providing an indication of the validity of the chosen film thickness equation. As outlined in Sect. 3.3, the main-bearing analysis was conducted while treating main-bearing point-contact conjunctions as equivalent line contacts. Dimensionless parameters in this case are therefore those of the equivalent line contacts. Moes' load and viscosity parameter values (M_l and L; Hart et al., 2022) were calculated across the contact conditions dataset at each investigated temperature. Figure 2a shows the dimensionless parameter results at 35 °C, along with dimensionless parameter "fitting region limits" for the selected film thickness equation (see Hart et al., 2022). Both inner- and outer-raceway parameters are plotted, with considerable overlap occurring between the two. Points in the plot are coloured according to the wind turbine rotor mean wind speed occurring at that point in time.
The range of viscosity parameter (L) values visited during main-bearing operation can be seen to be small. In particular, the values of L at inner- and outer-raceway contacts correlate strongly with wind speed. This relationship stems from the turbine operating strategy, in which rotational speed increases with wind speed (until nearing rated power) in order to maintain aerodynamic efficiency. Variations in load parameter values (M_l) are considerably higher, with M_l uncorrelated to wind speed. Both observations are unsurprising, given that significant load variation around the bearing circumference will be present (Hart, 2020), generally including an unloaded region. Each roller traversing the bearing circumference therefore experiences continuous variations in loading from minimum (possibly zero) to maximum (in terms of roller loads around the circumference at a given point in time) levels. Thus, observing a wide range of dimensionless load parameter values at each wind speed would be expected. With respect to the fitting region limits of the chosen film thickness equation: dimensionless viscosity parameter (L) values can be seen to lie well within the range over which the equation was developed; dimensionless load parameter values (M_l) also lie within their associated limits for the vast majority of points in the contact conditions dataset. Indeed, fitting region limits are only passed in the low-load region as w_l → 0, and so M_l → 0. It should be noted that outlying points at very low loads correspond to high values of steady-state film thickness. In the context of lubrication analysis, it is regions of potentially low film thickness that are of primary interest, hence not around w_l ≈ 0. As may be seen in the figure, all higher values of the dimensionless load parameter fall well within the limits of the applied film equation. To avoid erroneous results from extremely small load parameter values (the smallest of which are on the order of 10⁻⁵), all results presented in subsequent sections exclude cases where w < 1 kN. Given the operating speeds of the modelled turbine, this limit ensures M_l ≥ 1.4, with all presented results for parameter values therefore falling within or very close to the boundaries shown in Fig. 2a. In Hart et al. (2022), literature misrepresentations of the parameter ranges over which the Masjedi and Khonsari point-contact equation was fitted were described. It is pertinent to note that with regards to point-contact dimensionless parameters (L and M), operating points in the current dataset also fall well within the point-contact fitting region limits at higher load levels, but this appears not to be the case if the incorrectly reported limits are used. While the point-contact equation is not applied here, for reasons which have been outlined, this example demonstrates the need for fitting region limits to be checked, while also ensuring that the correct limits are applied.
Lubricant contact-inlet temperature is known to be a strong driver of viscosity and, hence, of dimensionless parameter values and film thickness. The impact of temperature variations on dimensionless parameter values was therefore considered. Figure 2b shows boundaries of operating point sets (more specifically their convex hulls) obtained from the contact conditions dataset at temperatures of T = 30, 35 and 40 °C. Increases in temperature can be seen to increase M_l values and decrease L values across the operational dataset. The effect is reversed for a decrease in temperature. Observations made earlier in the current section can be seen to remain valid at each temperature considered.
Finally, contact pressures within main-bearing contact conjunctions were considered by approximating maximum EHL pressure values as those occurring under equivalent dry Hertzian contact (see Hart et al., 2022). By construction, the maximum pressure within point and equivalent line-contact representations of the conjunctions is equal. For inner- and outer-raceway contacts Fig. 2c shows maximum contact pressure values, plotted against roller applied loads, for all points in the contact conditions dataset. Surprisingly, these values are lower than might be expected given the high magnitude of loads reacted by the main bearing (Hart, 2020). For bearings of this type, the rated load level will sit at around 2.5-3 GPa. As shown in the figure, maximum contact pressures, obtained from the modelling undertaken here, all lie comfortably below these limiting values. The perhaps modest pressure levels seen here are likely due to the combined effects of low y direction curvature at contact interfaces and a relatively large number of rollers, 27, in each row. More complex internal effects such as roller unseating and skewing are not modelled here; therefore, higher levels of pressure could occur in practice as a result of such interactions.
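As a concrete illustration of the approximation used above, the maximum pressure of a dry elliptical Hertzian contact follows directly from the roller load and the contact semi-axes (the peak pressure is 1.5 times the mean pressure). The numerical values in the sketch below are illustrative only and do not correspond to the modelled bearing.

```python
import math

def hertz_max_pressure(w, a, b):
    """Maximum pressure [Pa] of a dry elliptical Hertzian contact carrying
    normal load w [N] over a patch with semi-axes a, b [m]:
    p_max = 1.5 * w / (pi * a * b)."""
    return 1.5 * w / (math.pi * a * b)

# Illustrative (assumed) values for a highly loaded roller.
print(hertz_max_pressure(w=60e3, a=25e-3, b=0.75e-3) / 1e9, "GPa")   # ~1.5 GPa
```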
The presented lubrication condition results indicate that, having applied the restriction w > 1 kN, operating points fall such that the chosen film thickness equation is appropriate for performing a steady-state analysis of film thickness for this main bearing. Note, the standard caveats regarding the accuracy and interpretation of results obtained from simplified film thickness equations (as discussed in Hart et al., 2022) still apply. Contact pressure results indicate that, with respect to the effects modelled currently and assuming maximum pressures are well approximated by the Hertzian value, bearing material ultimate-strength limits are not being exceeded during operation.
Steady-state film thickness analyses
Having identified an appropriate low load cut-off and confirmed suitability of the chosen equation, steady-state film thickness analyses were undertaken as described in Sect. 3. The results of these analyses will now be presented.

Fully flooded results

Figure 3a shows film parameter, Λ = h_m/σ, values obtained for operating points in the contact conditions dataset with T = 35 °C. Shaft speed and roller load values are also plotted. Note, shaft speed is proportional to the lubricant mean entrainment velocity, ũ, since pure rolling is assumed (see Appendix B). Inner-raceway points are plotted individually, whereas, for the sake of clarity, the overlapping outer-raceway results are summarised via their convex hull. Focusing on inner-raceway results, Λ values can be seen to fall between 2.7 and 5.1, with a mean value of 3.8. As would be expected from the exponents in film equations, as well as the results of EHL theory more generally, film thickness values are strongly driven by rotational speed at all load levels. Variations in film thickness with load also show a relatively wide spread, but closer inspection reveals that most of this variation occurs at low roller loads (w → 0). Film thickness sensitivity to load falls dramatically as roller load increases. As indicated earlier, this is expected from general EHL results. Note also that the smallest loads are generally experienced as rollers move towards the edge of the loaded zone (Hart, 2020), which itself may move, while traversing the bearing circumference. Each roller will therefore pass through a wide range of load levels and Λ values during each orbit. Outer-raceway results are qualitatively similar, but offset by a small amount in the positive Λ direction relative to those of the inner raceway. Since pure rolling is assumed, ũ values are identical at inner and outer raceways (see Appendix B). Therefore, the observed film thickness differences at inner and outer raceways result from differing geometries at the respective contacts. With respect to the lubrication regimes associated with Λ values, for fully flooded operation at T = 35 °C, both inner- and outer-raceway contacts are predicted to operate mainly in the elastohydrodynamic regime (Λ > 3), with some small amount of time spent in the mixed lubrication regime (1 < Λ < 3). There is some level of overlap between the lubrication regime designations relating to given ranges in Λ (Hamrock et al., 2004; Hart et al., 2022). For the sake of simplicity, in the current analysis the standard ranges have been interpreted in their most optimistic light. Therefore, the elastohydrodynamic regime is taken to be 3 < Λ < 5 (with hydrodynamic lubrication assumed to take place for Λ > 5), and the mixed regime is taken as 1 < Λ < 3. Boundary lubrication occurs for Λ < 1. From Fig. 3a, it is evident that the key factor (under fully flooded conditions) determining the lubrication regime, and possible transition to mixed lubrication, is the shaft speed, with the least favourable conditions occurring at low speeds. Figure 3b shows the effect of temperature variations on film thickness, with inner-raceway Λ-value boundaries plotted from results obtained at contact-inlet temperatures of T = 30, 35 and 40 °C. As would be expected, from lubricant viscosity behaviour, a reduction in operating temperature leads to an increase in film thickness, while reduced film thickness values are seen when the temperature is increased. From the point of view of the lubrication regime, impacts of these relatively modest changes in temperature are dramatic. A reduction in temperature of 5 °C can be seen to draw results well into the elastohydrodynamic regime and even into possible hydrodynamic lubrication. On the other hand, an increase of 5 °C draws results significantly into the mixed lubrication regime, with 79 % of operating points having Λ < 3. With respect to fully flooded results, the operating temperature T = 35 °C therefore represents an important transition point (for the modelled main bearing and lubricant) in terms of operational lubrication regimes.
Starved results
Main bearings are grease lubricated and, therefore, expected to be operating under starved conditions throughout most of their operational lifetimes. As detailed in Sect. 3.3, a crude order-of-magnitude estimate of the impact of starvation on main-bearing lubrication was obtained by taking 70 % of the fully flooded values. The effect of this is shown in Fig. 3c. As with temperature variations, the impacts here are significant, with all operating points at T = 40 °C now well into the mixed lubrication regime and beginning to approach boundary lubrication (Λ < 1). For results at T = 35 °C, 88 % of operating points are now in the mixed regime. Starvation levels in operational main bearings are not yet known and, in reality, will vary with the operating conditions. However, much higher levels of grease starvation than that applied here have been observed in practice. Therefore, the results in Fig. 3c could still be providing an optimistic view of main-bearing lubrication.
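A minimal sketch of how film parameter values can be binned into lubrication regimes, using the (optimistic) regime boundaries adopted above and the crude 70 % starvation scaling; the example Λ values are placeholders, not dataset values.

```python
def lubrication_regime(lmbda):
    """Classify a film parameter value using the ranges adopted in this analysis:
    boundary < 1, mixed 1-3, elastohydrodynamic 3-5, hydrodynamic > 5."""
    if lmbda < 1.0:
        return "boundary"
    if lmbda < 3.0:
        return "mixed"
    if lmbda <= 5.0:
        return "elastohydrodynamic"
    return "hydrodynamic"

STARVATION_FACTOR = 0.7   # order-of-magnitude estimate: starved film = 70 % of fully flooded

def regime_share(lambdas, regime="mixed", starved=False):
    """Fraction of operating points falling in a given regime."""
    scale = STARVATION_FACTOR if starved else 1.0
    hits = [lubrication_regime(scale * l) == regime for l in lambdas]
    return sum(hits) / len(hits)

# Placeholder film parameter values standing in for the contact conditions dataset.
example = [2.8, 3.1, 3.6, 3.8, 4.2, 4.9]
print(regime_share(example, "mixed"), regime_share(example, "mixed", starved=True))
```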
Grease thickener interactions
As described in Part 1 of this study, at film thicknesses below a level related to the size of thickener fibres/fibre networks, the thickener interacts with the contact conjunction, altering film thickness behaviour such that it can no longer be estimated using oil lubrication formulas and properties of the grease base oil. Determining if such effects may be present for the modelled main bearing therefore requires estimated film thickness values to be compared with a suitable "transition thickness", above which base-oil effects would dominate and below which the film behaviour would be driven by thickener interactions. Such a value is not directly known for the main-bearing grease considered here. Instead, results reported in the literature are used to identify a sensible range for such a transition thickness. In Cen et al. (2014), Morales-Espejel et al. (2014) and Kanazawa et al. (2017), grease thickener interactions are investigated for a number of different greases; in the first study, the impact of the grease being mechanically worked is also assessed. From results presented in these previous studies, maximum observed values of the transition film thickness sit at around 100 nm. Transition values, along with the relative magnitude of thickener effects, reduce as the grease is worked, with worked grease transition thicknesses of around just 10 nm seen in some cases. It is therefore proposed that a reasonable range in which to assume the transition thickness lies, for this main-bearing grease, is between 10 and 100 nm. From Table 1, σ ≈ 300 nm, and so Fig. 3 results (in terms of Λ) can be converted to approximate film thickness values using h_m ≈ 300 · Λ nm. Therefore, Λ = 3 corresponds to a film thickness of h_m ≈ 900 nm and Λ = 1 corresponds to h_m ≈ 300 nm. Across all results presented here, as well as for more severe cases of starvation than that considered, film thickness values remain significantly higher than the maximum value in the identified (approximate) range for transition film thickness. Based on the currently available information, a tentative conclusion is therefore reached, this being that it appears unlikely that grease thickener interactions significantly impact lubrication in the modelled main bearing. This result also implies that, at present, there is no reason to assume that oil-based film equations, such as the one applied here, are not able to provide sensible estimates of lubrication behaviour in this setting. As ever, the normal caveats regarding the accuracy of such equations remain (see Hart et al., 2022). It is emphasised that these conclusions are tentative. For example, if fibre and/or fibre-network dimensions in main-bearing greases are significantly larger than those of greases used in the cited literature, a different conclusion might result. Furthermore, surface roughness may also influence the point at which grease fibre interactions begin to influence lubrication behaviour. In Cen et al. (2014) and Kanazawa et al. (2017) surface roughness values are such that Λ ratios are consistently greater than 1, never falling much below this value if at all. If starvation is really quite severe for the main bearing, Λ values less than 1 are possible. In this case, it is not clear how operation inside of the boundary lubrication regime might impact grease interactions. Finally, it is important to note that the cited studies all ensure fully flooded inlets at all times.
While the discussion has thus far centred mainly on film thickness values/ratios, it may also be the case that starvation impacts the likelihood of grease fibre interactions in other ways, for example by altering the quantity of thickener fibres available to the conjunction or influencing the deposition of degraded grease onto bearing surfaces. Much uncertainty therefore remains with respect to this particular aspect of the problem.
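The comparison described above amounts to converting Λ values back to absolute film thicknesses and checking them against the assumed 10-100 nm transition band; a trivial sketch follows, with σ and the band limits as stated above.

```python
SIGMA_NM = 300.0                      # combined surface roughness sigma ~ 300 nm (Table 1)
TRANSITION_BAND_NM = (10.0, 100.0)    # assumed range for the thickener transition thickness

def thickener_interaction_possible(lmbda):
    """True if the estimated film is thin enough that grease thickener
    interactions could plausibly alter film behaviour."""
    h_m_nm = lmbda * SIGMA_NM         # h_m = Lambda * sigma
    return h_m_nm <= TRANSITION_BAND_NM[1]

# Even a heavily starved point (Lambda ~ 1) gives h_m ~ 300 nm, well above 100 nm.
print(thickener_interaction_possible(1.0), thickener_interaction_possible(0.3))
```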
The presented steady-state results provide important insights into the lubrication problem for wind turbine main bearings. However, it should be remembered that certain approximations and assumptions are present in the applied model and chosen lubricant properties. The sensitivity of results to variable values associated with these sources of uncertainty was therefore considered. Sensitivity, with respect to Λ values, was assessed for the variables of temperature (T), inverse asymptotic isoviscous pressure (α*) and roller load (w). Temperature was considered as part of the above analysis, where it was found that its effect is significant. The true α* value for the modelled lubricant is unknown and so an assumed value was used. The sensitivity to changes in this variable will indicate how critical it is to ensure an accurate value is known. Finally, as outlined in Sect. 2, results in the literature indicate that Hertzian models of the type applied here may result in over-estimations of the roller loads around the bearing circumference. In order to determine possible impacts of this on lubrication results, the effect of reducing load levels was also tested. The "standard case" was taken as being Λ values calculated for T = 35 °C and α* = 21 GPa⁻¹ and with load values unaltered. Each variable was then adjusted independently to determine its effect on Λ values. For temperature, variations of ±5 °C were applied; for α*, variations were ±6 GPa⁻¹ (see Sect. 3.2); for w the effect of all roller loads being halved was determined. Graphical results for temperature variations have been shown above, and those for roller load and α* variations can be found in Appendix D. Table 2 summarises the results of the sensitivity analysis, showing the mean percentage change to Λ values (across the full contact conditions dataset) resulting from the specified changes to each variable. Temperature is the most sensitive of the tested variables and, in practice, will vary quite considerably during operation. A change of α* value is also impactful with respect to estimated film thickness values. Determining accurate/appropriate values for this parameter will therefore be important in future main-bearing lubrication studies. Interestingly, and as might be expected from classical EHL theory, the results are insensitive to variations in loading. A 50 % reduction in loads elicits only a 6 % increase in Λ values on average. This indicates that potential concerns related to modelled internal loads (see Sect. 2) are unlikely to have much influence on EHL findings. That is not to say that load levels and load distributions are not relevant to main-bearing lubrication, a point which will be revisited in Sect. 5. Starvation should also be considered in the context of these sensitivity results since it likely has a relative impact on the order of 30 % or more, with true values unknown but potentially quite high. Based on the above, key sensitivities for main-bearing lubrication are concluded to be contact-inlet temperatures, starvation levels and the grease base-oil α* value.
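The one-at-a-time perturbation procedure described above could be organised along the following lines. The `film_parameter` function is only a stand-in for the full calculation chain (Eq. 1 plus Λ = h_m/σ) and is not implemented here; the perturbation sizes match those stated above, while everything else is illustrative.

```python
import numpy as np

def film_parameter(T, alpha_star, w, point):
    """Stand-in for the full steady-state calculation of Lambda = h_m / sigma
    for one operating point (Eq. 1 applied to the equivalent line contact)."""
    raise NotImplementedError

def mean_pct_change(dataset, **perturbation):
    """Mean percentage change in Lambda across the dataset when one input is
    perturbed from the standard case (T = 35 degC, alpha* = 21 GPa^-1, loads unaltered)."""
    base_case = dict(T=35.0, alpha_star=21.0, load_scale=1.0)
    pert_case = dict(base_case, **perturbation)
    changes = []
    for point in dataset:
        base = film_parameter(base_case["T"], base_case["alpha_star"],
                              point["w"] * base_case["load_scale"], point)
        new = film_parameter(pert_case["T"], pert_case["alpha_star"],
                             point["w"] * pert_case["load_scale"], point)
        changes.append(100.0 * (new - base) / base)
    return float(np.mean(changes))

# One-at-a-time perturbations matching the sensitivity study:
# mean_pct_change(dataset, T=40.0)            # +5 degC
# mean_pct_change(dataset, alpha_star=15.0)   # -6 GPa^-1
# mean_pct_change(dataset, load_scale=0.5)    # all roller loads halved
```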
The significance of non-steady effects?
Considerations of possible non-steady effects will be undertaken treating the point contacts as such, as opposed to using the equivalent line-contact representation, which (as discussed) was necessary when estimating film thickness values. As described in Sect. 3.3, the possible presence of non-steady EHL effects was investigated by considering contact patch time rate-of-change values in rolling and transverse directions (db/dt and da/dt, respectively) relative to the concurrent value of mean entrainment velocity (ũ). The relevance threshold is taken to be 25 % (Hooke, 2003). Results at inner and outer raceways are almost identical, and so only inner-raceway results are shown here. Figure 4 shows contact conditions dataset values of ḃ/ũ, where ḃ = db/dt, plotted against roller load. As previously, results are only shown where roller loads exceed 1 kN. Considering these results, it is clear that ḃ/ũ values fall well below the 0.25 value, which would signify the possible presence of significant non-steady EHL effects. Indeed, the estimated values are at least an order of magnitude out from this. Rolling direction non-steady effects, of the type considered, are therefore not expected to be occurring for the modelled main bearing. Figure 5a shows contact conditions dataset values of ȧ/ũ, where ȧ = da/dt, plotted against roller load. In contrast to the behaviour seen in rolling direction results, transverse direction ratio values are considerably higher, with a significant number of points lying below −0.25 and above 0.25. This is attributable to the high level of conformity between rollers and raceways in this direction, meaning contact conjunction edges move rapidly outwards or inwards as the applied load varies. With regards to film thickness and bearing damage, it is negative ratio values which are of interest; this is because it is when load is rapidly removed (and so the contact patch shrinks) that local reductions in film thickness are known to occur, increasing the risk of damage. Figure 5a therefore includes a vertical dashed line indicating the location of the −0.25 threshold; the points falling beyond this threshold are shown in more detail in Fig. 5b. Various features of these plots are worth considering, the most immediately evident being that maximum observed ratio values (in terms of magnitude) reduce as roller load increases. This is expected from the Hertzian equations: since a ∝ w^(1/3) (where w is roller load), it follows that ȧ ∝ ẇ/w^(2/3), a quantity for which the same value of ẇ seen at a higher load elicits a smaller ȧ response. The implication of this behaviour is that the possibility of non-steady effects is highest for lower values of roller load, with the largest ratio values seen close to the smallest loads (above the 1 kN cutoff). However, ȧ/ũ values less than −0.25 can still be seen to occur at moderate levels of applied loading, including one for which w = 12.5 kN. Since the contact conditions dataset represents only a subset of the simulation data, which itself contains a total of 30 10 min duration simulations, higher load values in this region could well be possible. Presented results therefore indicate that transverse direction non-steady EHL effects are predicted to be taking place, but at loads in approximately the lowest 10 % of those observed across the operational envelope. The impacts of such effects on the lubrication film and possible damage resulting are not yet known. Furthermore, non-steady effects are predicted to occur in the transverse, as opposed to rolling, direction.
Very little, if any, work has been undertaken which investigates EHL in these circumstances. While consideration has been given to side-leakage effects associated with ellipticity (Wheeler et al., 2016;Damiens, 2003), the authors are unaware of any work which considers transverse direction (only) non-steady behaviour. This may be due to the relatively unique conditions experienced by wind turbine main bearings, meaning such behaviour in rolling element bearings may not have been observed/predicted before now.
It is emphasised that there is uncertainty present in these results due to modelling approximations and assumptions, as well as the use of finite differencing to estimate gradients. In particular, possible sliding and skewing behaviour in a real main bearing will influence mean entrainment velocity values, ũ, along with the time variations in load and pressure seen by each roller. Influences of main-bearing housing elasticity, not accounted for in this model, may also alter the magnitude and distribution of load around the bearing. Note, due to the ȧ ∝ ẇ/w^(2/3) relationship, inclusion of housing elasticity could lead to increases or decreases in non-steady behaviour, and so more work would be needed to determine its effect. Finally, further work is required to understand the interaction of predicted non-steady effects with the grease thickener in a likely starved full roller bearing. In particular, it is necessary to consider what the effects of rapidly elongating contact patches (transverse to rolling) may be with regards to the dispersion/distribution of grease and bled oil, in the context of both individual contacts and throughout the main bearing as a whole.
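A sketch of the transverse-direction significance check described above: finite-difference estimates of ȧ are compared against 25 % of the concurrent entrainment velocity, with rapidly shrinking contacts (negative ratios) being the cases of interest. The synthetic time series used for illustration is an assumption, not dataset output.

```python
import numpy as np

def transverse_nonsteady_flags(a, u_tilde, dt, threshold=0.25):
    """Flag time steps where the transverse contact-patch rate of change is
    significant relative to entrainment, i.e. da/dt < -threshold * u_tilde.

    a       : (N,) contact semi-axis a [m] for one roller over time
    u_tilde : (N,) mean entrainment velocity [m/s] at the same steps
    dt      : sample spacing [s]
    """
    a_dot = np.gradient(a, dt)              # finite-difference estimate of da/dt
    ratio = a_dot / u_tilde
    return ratio, ratio < -threshold        # negative ratios: contact shrinking rapidly

# Illustrative use with synthetic data: a step-down in load shrinks the patch quickly.
t = np.arange(0.0, 1.0, 0.01)
a = np.where(t < 0.5, 9e-3, 5e-3)           # [m], sudden contraction at t = 0.5 s
u = np.full_like(t, 0.5)                    # [m/s]
ratio, flag = transverse_nonsteady_flags(a, u, dt=0.01)
print(flag.any())                           # True: the contraction exceeds the 25 % threshold
```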
Discussion and conclusions
There is much to unpack in the results which have been presented. A summary of key findings is therefore provided prior to further discussion.
1. The spherical roller main-bearing contacts in question should be treated as equivalent line contacts for the purposes of film thickness analysis using simplified lubrication equations.
2. When treated as such, lubrication conditions (in terms of dimensionless parameters) fall well within the region over which the applied equation was fitted (except at the very lowest load levels).
3. Maximum pressures estimated to occur in the contact conjunctions were lower than might be expected, with values not exceeding about 1.6 GPa. More complex effects, not accounted for here, could lead to the occurrence of higher pressure values than have been estimated. In particular, it was assumed that maximum pressures may be well approximated by the maximum Hertzian values.
4. In a best-case scenario of fully flooded lubrication, T = 35 °C was found to represent a transition point between EHL and mixed lubrication regimes. By T = 40 °C, close to 80 % of fully flooded operating points fell into the mixed lubrication regime.
5. Since main bearings are grease lubricated, starved conditions are expected to be present, but levels of starvation are unknown. An order of magnitude estimate of possible starvation effects (a 30 % reduction in film height) was applied, the effect of which was dramatic in the context of lubrication regimes. Under the assumed level of starvation, close to 90 % of operating points for T = 35 °C fell into the mixed regime, along with all operating points for T = 40 °C. Importantly, more severe levels of starvation could be present in real wind turbine main bearings.
6. A sensitivity analysis revealed that results were most strongly impacted by the contact-inlet temperature, starvation level and α * value. The effect of roller load on film thickness results was small.
7. Comparing presented results with available information in the literature, it was tentatively concluded that significant levels of grease-thickener interactions (with the contact conjunction) appear unlikely for the modelled main bearing.
8. Non-steady EHL effects were predicted to be negligible in the direction of rolling, but potentially of significance in the transverse direction. Possible non-steady effects in this latter case were found to occur at lower loads, with observed maximum values around 12.5 kN.
As has been emphasised, uncertainties are present, which means that care must be taken when interpreting these findings. In particular, further work is needed to better understand properties related to key sensitivities, these being temperature, starvation and α* values, for wind turbine main bearings and their lubricating greases. Furthermore, while presented results indicate lubricating films are only weakly sensitive to load, loading behaviour could still prove to have a significant but indirect effect. Specifically, it will be important to consider whether the highly variable and structured loading experienced by main bearings (Hart, 2020) may influence grease dispersion/distribution within the bearing and/or frictional behaviour such that starvation levels and contact-inlet temperatures are impacted. It will also be necessary to determine the impact of effects not accounted for here, including housing and bedplate flexibility and dynamic roller behaviour (sliding, skewing, etc.). The presence of individual roller sliding/slip, through the unloaded zone, could result in high levels of friction as the roller re-enters the loaded zone and is accelerated back to pure rolling. While slip is not expected to strongly influence film thickness values directly (Crook, 1961), increases in frictional energy may influence the temperature of the bulk lubricant in the main bearing. Where a roller is sliding, non-Newtonian lubricant properties, such as shear thinning, may become more important (Bair, 2005). Non-steady EHL effects in the main bearing also warrant further attention, especially when considering that transverse-direction-only non-steady effects are not believed to have been studied in detail previously. Finally, future work should also consider possible effects from intermittent wind turbine operation and cold starts on lubrication. At this stage, it is not clear how the results of this analysis might change for larger wind turbines. Indeed, it is difficult even to speculate. While larger wind turbines will experience increased load magnitudes and generally rotate more slowly (for reasons of aerodynamic efficiency), the main bearing will likely be of greater diameter and contain more rollers. The former will increase roller loads and decrease entrainment velocities, while the latter tend to have opposite effects. The overall drivetrain design, including the main bearing(s), may also be quite different for larger wind turbines. Lubrication conditions in larger turbines will therefore ultimately be determined by the interplay of these various factors.
The modelled main bearing, in estimated starved conditions, was predicted to experience some proportion of time operating in the mixed lubrication regime for all temperatures of T ≥ 30 °C. If starvation is more severe in reality, mixed lubrication may begin at lower temperatures. These findings imply that the modelled main bearing would be operating under increased levels of friction and a heightened risk of wear and micro-pitting for a non-negligible proportion of its operational life. This result has clear implications for the likelihood of damage associated with such conditions.
With all described caveats in place, it is concluded that further development of the scientific understanding of lubrication in wind turbine main bearings is a necessary part of ongoing efforts to identify and understand key drivers of the observed high rates of failure for this component.

Appendix B: Lubricant entrainment velocities

Figure B1 shows the cross section of a roller orbiting the bearing centre in fixed and rotating reference frames, the latter moving with the roller centre. r̃_in and r̃_out are the perpendicular distances between the axis of rotation and inner- and outer-raceway contact centres (see Hart, 2020). Expressions for surface velocities may be derived as follows: assuming pure rolling, inner and outer contact surface tangential velocities must be equal (u_I = u_II); hence

ω_r r_roll = (Ω_i − Ω_c) r̃_in,  (B1)
ω_r r_roll = Ω_c r̃_out,  (B2)

for ω_r, Ω_i and Ω_c the angular velocities shown in Fig. B1 and r_roll the roller centreline radius. From this it follows that

Ω_c = Ω_i r̃_in / (r̃_in + r̃_out).
Since u_I = u_II,

u_out = −Ω_c r̃_out = −Ω_i r̃_in r̃_out / (r̃_in + r̃_out).
Entrainment velocities for inner and outer contacts can be seen to have equal magnitude but opposite direction in the applied frame of reference. Since lubrication is considered locally at each contact location, directional differences in the global reference frame are not relevant. Entrainment velocities at inner and outer raceways may therefore be considered equal, with both given by

ũ = Ω_i r̃_in r̃_out / (r̃_in + r̃_out).  (B7)

Figure B1. Roller orbiting bearing centre with respect to (a) stationary and (b) rotating reference frames; in the latter case the position of the roller centre remains fixed. Ω_i and Ω_c describe angular velocities with respect to the bearing centre, and ω_r describes angular velocity with respect to the roller centre. Note the axis around which ω_r is defined is tilted by the contact angle, α, relative to the Ω_i and Ω_c axes.
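The Appendix B result translates directly into code; the shaft speed and contact-centre radii used below are placeholders rather than the actual bearing geometry.

```python
import math

def cage_speed(omega_i, r_in, r_out):
    """Cage (roller-orbit) angular velocity under pure rolling:
    Omega_c = Omega_i * r_in / (r_in + r_out)."""
    return omega_i * r_in / (r_in + r_out)

def entrainment_velocity(omega_i, r_in, r_out):
    """Mean entrainment velocity at inner and outer raceway contacts (Eq. B7):
    u_tilde = Omega_i * r_in * r_out / (r_in + r_out)."""
    return omega_i * r_in * r_out / (r_in + r_out)

# Placeholder values: 15 rpm shaft speed and illustrative contact-centre radii.
omega_i = 15.0 * 2.0 * math.pi / 60.0      # [rad/s]
r_in, r_out = 0.35, 0.40                   # [m], illustrative only
print(entrainment_velocity(omega_i, r_in, r_out))   # ~0.29 m/s
```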
Appendix C: The validity of modelling approximations
Outputs were checked to provide confidence in the validity of approximations applied at modelling stages. For a more detailed discussion of the approximations themselves, see Hart et al. (2022). With regards to contact dimensions and curvature radii, values of b/R_x at inner and outer contacts never exceed 3 %, and values of a/R_y never exceed 0.3 %. Figure C1 shows values of h_m/δ and δ/b, where δ is roller deflection as given by Hertzian equations (see Hart, 2020), seen across the contact conditions dataset. These results show that maximum h_m/δ values are around 6 %, but only at very low loads. At higher loads these values fall rapidly to around 0.2 % and lower. From the δ/b results it follows that everywhere δ < b < a, with the h_m/δ results therefore providing an upper limit for ratios involving these other contact dimensions. h_m/δ results directly demonstrate the validity of evaluating internal deflections and loading independently of lubrication, especially considering the results seen here for more highly loaded rollers, since it will be these rollers which have the most influence on the resulting internal loads and displacements at each time step. From analysis of the presented results, it may also be concluded that the "half-space" and "lubrication" approximations may be considered valid here.

Figure C1. Contact dimension ratios plotted against shaft speed and/or roller load values.
"Engineering",
"Environmental Science"
] |
Energy efficient clustering using the AMHC (adoptive multi-hop clustering) technique
ABSTRACT
INTRODUCTION
The IoT (Internet of Things) is a network of physical devices embedded with software, sensors, actuators, electronics and connectivity that enables these things to connect and exchange data. It has been observed that the use of IoT devices increased by about 31% per year to approximately 8.4 billion in 2017, and it is expected to reach approximately 30 billion by 2020 [1]. IoT is applicable in several areas such as smart energy, smart cities, smart homes, and smart agriculture. Thus, the main aim of IoT is to integrate the physical world with the virtual world. Agriculture is one of the bases of livelihood, and agricultural growth is considered the backbone of a country's economic development [2,3]. Agricultural monitoring systems provide environmental sensing and control services for the field, supporting crop growth [4]. Technology utilization allows us to measure several factors such as soil moisture, water level, plant growth condition, and humidity, and also helps to improve crop productivity [5]. For such problems, the widely used scheme known as AMHC clustering is developed. Through this, the WSN can be partitioned into disjoint clusters that take responsibility for data gathering and communication. Hence, in a WSN the nodes are only required to gather information and transmit it to their cluster head, so a large amount of energy is saved. The main part of this clustering model is efficiently partitioning the WSN into disjoint clusters. To optimize energy conservation, a clustering algorithm is proposed.
Cluster size is an essential metric for estimating WSN performance [18]. If clusters are small, more clusters exist in the WSN, which affects performance; if clusters are large, managing each cluster becomes a difficult task for the cluster head [19-21]. This research proposes an algorithm to solve this problem. A homogeneous network is considered so that the graph can be modelled as a unit disk graph. We propose a three-phase algorithm, the AMHC strategy, to solve the issue. In the first stage, a maximal independent set is assembled; in the second stage, extra nodes are coupled; and the third stage checks for and discards superfluous nodes. The algorithm is therefore well suited to WSNs and is applicable to both general graphs and the UDG model [22].
The main aim of the existing model was to ensure energy efficiency and data quality by combining SODCC and CPPCA, but it falls short in terms of performance. Its disadvantages are that it fails to deliver satisfactory results (for example, the delay is so high that it is difficult to accept in real-time scenarios) and that it consumes a lot of energy to maintain data quality, which makes the system expensive. This paper is organized as follows: Section 2 presents the literature; Section 3 describes the proposed model; the simulation and results are given in Section 4; and the conclusion and future work are described in the last section.
LITERATURE SURVEY
This section reviews several previous works that informed the proposed system. O. Younis and S. Fahmy [23] presented an energy-efficient, distributed approach for ad hoc networks: the HEED protocol, which selects cluster heads according to a hybrid of node residual energy (NRE) and a secondary parameter such as node proximity to its neighbours or node degree. HEED terminates in a constant number of iterations, and its parameters, such as the network operational interval and the minimum selection probability, can be tuned to optimize resource usage according to application requirements and network density. HEED attempts to achieve a connected multi-hop inter-cluster network, but it is applicable only to small networks. M. Demirbas et al. [24] presented FLOC, which partitions a multi-hop wireless network into equal-sized, non-overlapping clusters. Each cluster contains a cluster head, positioned such that nodes within unit distance of the cluster head belong to its cluster and no node beyond distance m from the cluster head belongs to it; locality is achieved by asserting m >= 2 in FLOC. Regardless of network size, FLOC exploits the nature of the wireless radio model to obtain the clustering, but the outcomes were not satisfactory.
J. Qiao and X. Zhang [25] proposed a compressed data gathering method to eliminate the problems of unbalanced position and random selection. An even clustering method based on location is applied with similar-sized grids to ensure positional balance; for unevenly distributed nodes, density-based clustering is proposed. The DEC method considers density and location, equalizes energy, extends network lifetime, and reduces energy consumption, although several factors such as environmental conditions and node sizes are ignored. S. Hu and G. Li [26] proposed a regular-hexagonal (RH) clustering scheme for sensor networks to avoid WSN failure and analysed the model. A scale-free topology (SFT) evolution mechanism was presented, and its characteristics were analysed using mean-field theory. This system only protects the WSN from failure but does not provide an efficient mechanism.
J. Zhou et al. [27] proposed a connected dominating set (CDS) to serve as the backbone of a WSN, because sensor nodes may fail for various reasons, so a fault-tolerant backbone with high redundancy in connectivity and coverage is essential. Their paper proposed an approximation algorithm for the CDS problem and improved the performance ratio of the approximation on the UDG. [28] Almost all approximation algorithms follow a two-phase scheme to construct the CDS: in the first phase a dominating set (DS) is constructed, and in the second phase the selected nodes are connected. A maximal independent set (MIS) is used as the DS, so the relation between the MCDS and the MIS plays an essential role. In homogeneous networks, ad hoc networks are modelled as unit disk graphs (UDG) or unit ball graphs (UBG), and in heterogeneous networks as DGB and BGB. That paper focuses on upper bounds (UB) for the size of MISs in heterogeneous wireless networks, using classical mathematical results such as the sphere-packing and circle-packing problems. R. Misra and C. Mandal [29] noted that the MCDS problem is NP-complete on the UDG, hence many heuristic-based distributed approximation algorithms (DAA) are used. To improve the performance ratio, a new method was introduced based on two principles: first, the domatic number of a connected graph is at least two; second, an optimal substructure set (OSS) of the independent set is preferred with a common connector (CC). Thus, a partial Steiner tree (PST) is obtained while constructing the independent sets, and a final post-processing step identifies the Steiner nodes during the formation of the Steiner tree for the independent sets (IS). Efficient aggregation of the data collected by the sensors is essential for WSNs. In that research, a design-time efficient aggregation (EA) algorithm is studied, and an algorithm is proposed that produces a data aggregation tree (DAT) and a collision-free aggregation (CFA) schedule; the aggregation latency is bounded in time slots, and a lower bound (LB) is derived for the aggregation.
Several clustering algorithms have been presented to provide higher efficiency; however, these algorithms and schemes still lack efficiency. In the papers discussed, either important clustering factors such as location and density are ignored or the desired outcome is not achieved, and nearly all of them ignore the node size. To overcome these problems, we propose the methodology discussed in the next section of this research.
PROPOSED METHODOLOGY
3.1. System model
Here, it is assumed that all nodes in the WSN are distributed in a 2D plane with an equal maximal transmission range of one unit. The graph is represented by U = (X, Y), where X is the set of sensor nodes and Y is the set of edges; an edge (a, b) ∈ Y exists when a and b are within each other's transmission range. Our proposed algorithm consists of three stages, which help to discard superfluous nodes. Figure 2 shows the proposed architecture of our model, which consists of a base station, cluster heads, and nodes. The nodes are connected to their respective cluster heads, and the cluster heads are connected to the base station through multi-hop routing.
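As an illustration of this system model, the following is a minimal sketch (not from the original paper) of constructing the unit disk graph U = (X, Y) from assumed 2D node coordinates, using the networkx library.

```python
import itertools
import random
import networkx as nx

def unit_disk_graph(positions, r=1.0):
    """Build U = (X, Y): an edge joins two sensor nodes within transmission range r."""
    U = nx.Graph()
    U.add_nodes_from(range(len(positions)))
    for a, b in itertools.combinations(range(len(positions)), 2):
        (xa, ya), (xb, yb) = positions[a], positions[b]
        if (xa - xb) ** 2 + (ya - yb) ** 2 <= r ** 2:
            U.add_edge(a, b)
    return U

# example: 50 random sensor positions in a 5 x 5 field, unit transmission range
random.seed(0)
nodes = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(50)]
U = unit_disk_graph(nodes)
```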
AMHC (Adaptive multi-hop clustering) algorithm
The proposed algorithm is named the AMHC (Adaptive Multi-Hop Clustering) algorithm on U = (X, Y). AMHC is a colouring algorithm that uses four node colours: white, blue, grey, and black. Initially all nodes are white; when a node is selected as a dominator it is coloured black, and when a neighbouring node is dominated by a black node it becomes grey. Blue nodes are those used to connect the dominators. The AMHC algorithm has three stages: assembling, coupling, and removing.
First stage: Assembling of maximum-IS (Independent sets)
A maximal independent set is also a dominating set (DS) of the given graph, and its nodes are selected one by one. The first stage, the assembling stage, constructs the maximal independent set: the connected graph U = (X, Y) is taken as input, and the expected output is a maximal independent set B of U, chosen so that the nodes of B lie close to one another. The idea is divide and conquer: the assembling algorithm consists of several procedures, each with sub-procedures of their own. At each step a subset is chosen so that the set built so far remains a dominating set. The definition of P(i) is given in Eq. (1).
Assembling algorithm
Since P(i) is a monotone decreasing function (MDF), the nodes in the P(i)-hop set are closer together than those in P(i-1). After the assembling algorithm, B is a P(0)-hop dominating set, meaning that any two nodes in B are connected by a path of length at most (2d + 1). After the algorithm terminates, a single-hop connected dominating set is generated.
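The assembling procedure is only described at a high level above; the sketch below shows a simple greedy construction of a maximal independent set using the black/grey colouring convention of the paper. It is an illustrative stand-in, not the authors' divide-and-conquer routine based on P(i).

```python
def assemble_maximal_independent_set(U):
    """Greedily pick dominators: black nodes form the maximal independent set B,
    and their neighbours are coloured grey (dominated)."""
    colour = {v: "white" for v in U.nodes}
    B = set()
    for v in U.nodes:                  # any visiting order yields a maximal IS
        if colour[v] == "white":
            colour[v] = "black"        # v becomes a dominator
            B.add(v)
            for u in U.neighbors(v):   # its neighbours are dominated
                if colour[u] == "white":
                    colour[u] = "grey"
    return B, colour
```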
Second stage: Coupling the maximum-IS (Independent Sets)
After assembling, the maximal independent set is obtained and denoted B. In the second stage, the inputs are the connected graph U = (X, Y) and the output of the first stage, and the expected output is a connected dominating set. The main question in this algorithm is how the connector nodes are determined. In any graph U = (X, Y), two nodes a, b ∈ X are said to be h-hop connected only if there exists a path between them in U of length at most h. To keep the set small, the most effective nodes are selected to form a P(i)-connected dominating set. The main idea is to iteratively select the nodes that most reduce the number of P(i)-hop connected components (CC); moreover, to keep the algorithm economical, the fewest possible nodes are added. The second stage is described below. At the t-th iteration of a round, the partial set has t-1 nodes, and Eq. (2) defines D.
For a node b not yet in the set, adding b reduces the number of P(i)-hop connected components by Δ(b); no.(b) denotes the total number of nodes on the shortest paths involved, and cost(b) denotes the resulting cost of b.
Here, the node with the largest cost is selected.
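Because the exact cost function is not fully recoverable from the text, the following sketch couples the independent set with a simplified rule: it repeatedly joins separate components of U[D] through the nodes of a shortest path, which play the role of the blue connector nodes. It assumes the graph U is connected and is not the authors' exact selection criterion.

```python
import networkx as nx

def couple_independent_set(U, B):
    """Join the components of the dominating set B by adding connector ('blue') nodes.

    Simplified rule: while U[D] is disconnected, add the nodes of a shortest path
    in U between two separate components. Assumes U is connected.
    """
    D = set(B)
    while nx.number_connected_components(U.subgraph(D)) > 1:
        comps = list(nx.connected_components(U.subgraph(D)))
        src = next(iter(comps[0]))
        best_path = None
        for comp in comps[1:]:
            for dst in comp:
                path = nx.shortest_path(U, src, dst)
                if best_path is None or len(path) < len(best_path):
                    best_path = path
        D.update(best_path)  # endpoints are already in D; interior nodes become connectors
    return D
```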
Stage 3: Discarding the superfluous nodes:
After stages 1 and 2, a connected dominating set is obtained and denoted C. Nodes in C are either black or blue, so it is easy to find any superfluous nodes in C. In stage 3 the inputs are the connected graph and the output of the second stage, and the expected output is a smaller connected dominating set of the graph. The main idea is to further reduce the size of D by checking for and discarding the superfluous nodes in D. Two requirements are checked. The first guarantees the domination property (DP): once a superfluous node is discarded, the remaining nodes must still dominate the complete network within the allowed number of hops, where a dominator is either a black or a blue node. The second concerns connectivity: for any node a ∈ D there are two scenarios. First, in the subgraph U[D] induced by D, if a is a leaf node, then discarding a does not affect the connectivity of the subgraph. Second, if a is coupled with more than one connector, the subgraph remains connected only if the other connectors are still connected; checking whether those connectors remain connected may require examining the whole subgraph U[D - {a}], so the time complexity (TC) is large.
Discarding
Moreover, the discarding algorithm reduces the size of D; in each iteration the nodes considered are those that are leaves in the subgraph U[D].
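A minimal sketch of the discarding stage is shown below. It considers only leaf nodes of the induced subgraph U[D] and removes a node when 1-hop domination and connectivity of U[D] are both preserved; the hop parameter of the paper's P(i) definition is simplified away.

```python
import networkx as nx

def discard_superfluous(U, D):
    """Drop leaf nodes of U[D] whose removal keeps D dominating and U[D] connected."""
    D = set(D)
    changed = True
    while changed:
        changed = False
        for v in list(D):
            if U.subgraph(D).degree(v) > 1:
                continue                      # only leaves of U[D] are candidates
            remaining = D - {v}
            if not remaining:
                continue
            dominated = all(
                u in remaining or any(w in remaining for w in U.neighbors(u))
                for u in U.nodes
            )
            if dominated and nx.is_connected(U.subgraph(remaining)):
                D = remaining
                changed = True
                break
    return D
```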
SIMULATION RESULTS AND ANALYSIS
The system configuration used in this research was a Windows 10 Enterprise operating system on a 64-bit quad-core processor with 2 GB NVIDIA graphics and 16 GB of RAM. The .NET-based Sensoria simulator, which uses the C# programming language, was used. The simulation was conducted for several parameters, including energy efficiency and network lifetime, and these parameters were compared with the existing LEACH-based algorithm. Table 1 presents the network parameters for the simulation, including network size and the number of sensor nodes used (200, 400, 600, and 800); other parameters such as the number of base stations, initial node energy, packet length, transmission speed (TS), bandwidth, and processing delay are also listed, which helped establish an ideal simulation environment. Tables 2-4 present the value comparison with the existing system, the combination of second-order data-coupled clustering (SODCC) and compressive-projection principal component analysis (CPPCA), and report the improvement of the proposed method in percentage terms. Table 2 shows that as the number of nodes increases, the performance improvement over the existing system grows. Table 3 shows the number of failed nodes: as the number of nodes increases, the failed nodes increase in the existing system while they keep decreasing in the proposed system, giving a marginal improvement. Table 4 gives the number of rounds performed, expressed as a percentage improvement: for 200 nodes it is 71.23%, and as the number of nodes increases the improvement rises to 95.46%. Figures 3 and 4 show the number of failed nodes and the end-to-end (processing) delay, respectively, after the death of 30% of the sensor nodes; the graphs show that the proposed AMHC algorithm performs better than the existing LEACH algorithm at different numbers of sensor nodes. Figures 3 and 4 also make clear that the end-to-end delay keeps decreasing as the number of nodes increases, which demonstrates the efficiency of our algorithm compared with LEACH. Similarly, Figures 5-8 show the number of rounds performed by the existing LEACH algorithm and the AMHC algorithm after the death of the first sensor node for various numbers of nodes. Figure 5 shows that the proposed algorithm outperforms LEACH: for 200 sensor nodes, LEACH performs 233 rounds whereas AMHC performs 810. For the first sensor-node death with 400 nodes, the numbers of rounds are 118 and 855 for LEACH and AMHC, respectively, as shown in Figure 6. For 600 nodes, the numbers of rounds are 105 and 1159 for the existing and proposed algorithms, as shown in Figure 7. Similarly, the comparison for 800 nodes shows a large margin, 54 versus 1159 rounds, in Figure 8. Figure 9 shows the network lifetime after the death of 75% of nodes for 200 sensor nodes, where the margin is small; Figure 10 shows the network lifetime for 400 nodes, and Figures 11 and 12 show the lifetime for 600 and 800 nodes, respectively.
From these graphs, it is observed that as the number of nodes increases, the performance of the AMHC algorithm also increases marginally compared with the LEACH algorithm.
CONCLUSION
Clustering is a technique proposed to provide an efficient platform for the network topology in order to extend the lifetime of a network. Most existing algorithms overlook network performance and ignore the problem of multi-hop connection. In this paper, the problem is analysed for a homogeneous network and an Adaptive Multi-Hop Clustering (AMHC) algorithm is proposed. The AMHC algorithm consists of three stages: assembling, coupling, and discarding the superfluous nodes. In the first stage, the distance between neighbouring dominators is kept large and the maximal independent set is assembled; the second stage couples the maximal independent set; and the third and final stage discards the superfluous nodes. Our proposed AMHC algorithm is compared with the existing algorithm (SODCC+CPPCA) in terms of various parameters such as the number of failed nodes, end-to-end delay, and the number of rounds performed at different node counts when the first sensor node dies; the network lifetime is also compared with the LEACH algorithm. The comparison clearly shows that our algorithm outperforms the existing algorithms; in terms of percentage, the improvement in the number of rounds performed reaches up to 95.46%. Data gathering, data collection, and routing are essential aspects of achieving high efficiency, so several such scenarios can be the focus of future work.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Dosimetric evaluation study of IMRT and VMAT techniques for prostate cancer based on different multileaf collimator designs
The hypofractionated radiotherapy modality was established to reduce treatment durations and enhance therapeutic efficiency compared to conventional fractionation treatment. However, this modality is challenging because of rigid dosimetric constraints. This study aimed to assess the impact of multileaf collimator (MLC) widths (10 mm and 5 mm) on plan quality during the treatment of prostate cancer, and to investigate the impact of the MLC mode of energy, comparing the Agility flattening filter (FF), Agility flattening filter free (FFF), and MLCi2 heads, for patients receiving hypofractionated radiotherapy. Two radiotherapy techniques, intensity-modulated radiotherapy (IMRT) and volumetric modulated arc radiotherapy (VMAT), were used in this research. In the present study, computed tomography simulations of ten patients (six plans per patient) with localized prostate adenocarcinoma were analyzed. Various dosimetric parameters were assessed, including monitor units, treatment delivery times, and conformity and homogeneity indices. To evaluate plan quality, dose-volume histograms (DVHs) were estimated for each technique. The results demonstrated that the dosimetric parameters of the prostate planning target volume (PTVp), such as Dmean and the conformity and homogeneity indices, improved more with MLC Agility FF and MLC Agility FFF than with MLCi2. Additionally, the treatment delivery time was reduced with MLC Agility FF (by 31%) and MLC Agility FFF (by 10.8%) compared to MLCi2. It is concluded that for both the VMAT and IMRT techniques, the narrower (5 mm) MLC gave better planning target volume coverage, improved the dosimetric parameters for the PTV, reduced the treatment time, and met the constraints for the OARs. It is therefore recommended to use 5 mm MLCs for hypofractionated prostate cancer treatment because of the better target coverage and better protection of the OARs.
Introduction
Prostate cancer is one of the most common types of tumours among males worldwide (Alongi et al. 2013). External beam radiation therapy has historically been a mainstay of treatment for a large portion of these patients. Generally, the goal of radiation therapy is to provide a high radiation dose to the targeted tumour while simultaneously avoiding the healthy surrounding organs at risk (OARs) (Chae et al. 2016).
Intensity-modulated radiation treatment (IMRT) techniques were introduced to replace traditional 3D-conformal radiation therapy (3D-CRT) techniques, and this change in technique resulted in significantly greater dose conformity, sparing of the OARs, and lower radiation-induced toxicity (Vergeer et al. 2009; Van et al. 2008; Nutting et al. 2009; Holt et al. 2013). The fundamental benefit of IMRT is that it can deliver a specified dose of radiation to cancer target volumes with complicated geometries. Another unique characteristic of IMRT is that it can utilize dynamic multileaf collimators (DMLCs) to administer various doses to different target volumes within a single plan. Because the dynamic MLC-IMRT leaves are in constant motion during therapy for each field (Jothybasu et al. 2009), the treatment time for each field is reduced. Each pair of opposing MLC leaves is swept across the target volume at a fixed beam angle while the speed and distance between leaves vary, and this action delivers the desired radiation intensity to a specific spot (Jothybasu et al. 2009; Clark et al. 2002). The latest generation of IMRT techniques, volumetric modulated arc therapy (VMAT), has recently become widely available. In comparison to static-beam IMRT, rotating VMAT is designed to reduce treatment times while maintaining or improving plan quality (Bedford 2009; Holt et al. 2013). Both IMRT and VMAT depend on the use of multileaf collimators (MLCs) for radiation therapy.
VMAT technology is a popular radiation delivery approach for prostate cancer treatment because it takes less time and uses fewer monitor units (MUs) than IMRT (Li et al. 2018). VMAT depends on the manipulation of gantry rotation, MLC movement, and dose rate modulation. To provide an optimum dose distribution, and thus a stronger therapeutic impact, two or more VMAT arcs are typically used to accommodate intricate target shapes, target volumes, and differing dose prescriptions (Chae et al. 2016; Li et al. 2018).
Clinical use of linear accelerators (linacs) in flattening filter free (FFF) mode is now possible because of the VMAT and IMRT approaches, which may offer a substantially higher dose rate than the conventionally used flattening filter (FF) mode. The primary advantage of the FFF mode in radiation therapy is that it increases the dose rate while reducing head scatter and radiation leakage, resulting in improved delivery efficiency and an increase in MUs, which helps to reduce the dose to the OARs (Cakir et al. 2019; Arslan et al. 2020).
MLC is the most appropriate tool for beam shaping, and it is specific to each linear accelerator head type. Each type of MLC has specific characteristics, such as the leaf width, maximum leaf speed, minimum gap between opposing leaves, and interdigitation abilities (Kantz et al. 2015). Therefore, the main purpose of this research was to investigate the dosimetric impact of different MLC designs on patients with localized prostate cancer by comparing the VMAT and IMRT techniques. This study was also designed to demonstrate any difference between FF and FFF plans at 5 mm MLC leaf width for VMAT and IMRT, and the potential additional benefits for patients treated at different sites.
Treatment planning
Computed tomography (CT) simulations of ten patients with prostate cancer were selected for this study. Prior to CT simulation, patients were instructed to have a comfortably full bladder and an empty rectum. Three radio-opaque reference markers were then placed on the patient's skin. Serial CT cuts of the abdomen and pelvis were obtained with 2.5 mm slice thickness (Cuccia et al. 2018). CT scans were acquired on a GE LightSpeed scanner (GE Healthcare Diagnostic Imaging). Images were then transferred to the Focal contouring station for delineation of the target (clinical target volume (CTV) and planning target volume (PTV)) and risk structures. The International Commission on Radiation Units and Measurements (ICRU) has been involved in an effort to improve collaboration in radiation treatment reporting; in a series of reports (nos. 29, 38, 50, 58, 62, and 71), recommendations for defining the different volumes and dose specification points in radiotherapy were developed (Purdy et al. 2004; Landberg et al. 1999; Born et al. 2006; Stroom et al. 2002; Berthelsen et al. 2007; Menzel 2014). The entire rectum, bladder, femoral heads, and penile bulb were delineated according to the Radiation Therapy Oncology Group (RTOG) guidelines for a typical male pelvis (Gay et al. 2012).
The entire prostate gland was defined as the clinical target volume (CTV60), with the proximal 10 mm of the seminal vesicles also included. The planning target volume (PTV60) was created with a 7 mm expansion in all directions, except 4 mm posteriorly. The pelvic lymph nodes were defined as CTV44, which includes the distal common iliac, external and internal iliac, and obturator vessels; PTV44 was created with a 7 mm expansion in all directions. All plans were generated on the Monaco planning system (version 5.11.02). The hypofractionated dose was 60 Gy in 20 fractions to the prostate (PTV60), with a simultaneous integrated boost (SIB) prescription of 44 Gy in 20 fractions to the pelvic nodes (PTV44).
Properties of multileaf collimators
This study investigated two linear accelerator head designs of MLC parts: one with Agility MLC parts (Elekta Versa HD) including different modes of energies typically used in modern linear accelerators (FF and FFF), and the other with MLCi2 parts (Elekta Synergy). Each type of MLC has unique characteristics in terms of leaf width, maximum speed, and minimum gap between opposing leaves as well as inter-digitation capabilities (Kantz et al. 2015).
The Elekta Agility MLC (Elekta AB, Stockholm, Sweden) comprised 80 pairs of leaves, with each leaf measuring 5 mm wide as projected at the iso-center. The maximum field size was 40 × 40 cm. Leaves had a speed of 3.5-6.5 cm/s and a minimum gap of 3 mm between opposing leaves, combined with a dynamic leaf guide, and the leaves can interdigitate. There was no auto-tracking backup diaphragm jaw under the leaves (Table 1) (Ruschin et al. 2016; Bedford et al. 2013).
In comparison, the MLCi2 had 40 pairs of leaves with 10 mm leaf width at the iso-center. The maximum field size was 40 × 40 cm. Leaves had a speed of about 2 cm/s and a minimum gap of 5 mm between opposing leaves. Auto-tracking backup diaphragms were located behind the leaves and moved during treatment to reduce leakage (Table 1). The maximum span between leaves in the same leaf bank was 32.5 cm, the leaves were able to travel up to 12.5 cm over the central axis, and the leaves could interdigitate (Kantz et al. 2015). Table 1 summarizes the differences between the MLC types.
Dosimetric and plan evaluation
In this study, all plans were designed to compare plan quality and dose distribution among the different types of MLC (Agility, MLCi2). For each patient, six plans were generated (VMAT: Agility FF, FFF, and MLCi2; IMRT: Agility FF, FFF, and MLCi2) with 6 MV energy and a fixed number of beams, arcs, angles, segments, and constraints. All plans were evaluated based on DVHs. Ninety-five percent of the prescribed dose was required to cover ≥ 95% of the PTV, and all patients on the protocol were treated with the VMAT and IMRT techniques (6 MV) requiring minimum PTV coverage of V95% of the prescription dose (60 Gy for prostate targets, 44 Gy for lymph nodes), as reported by the RTOG (Neto et al. 2015). Dosimetric indices such as the conformity index (CI), homogeneity index (HI), and normalized dose contrast (NDC) of the PTV were also evaluated on the basis of ICRU Report 62. The CI was defined as the volume of the PTV receiving the prescribed dose divided by the volume of the PTV; its ideal value is one. The HI was calculated as the difference between D2% and D98% normalized by D50% (Eq. 1), where D2% is the dose received by 2% of the target volume, D98% is the dose received by 98% of the target volume, and D50% is the dose received by 50% of the target volume: HI = (D2% - D98%) / D50%. A lower HI value indicates better dose homogeneity, and its optimal value is zero. The NDC is defined by Eq. 2: NDC = Actual DC / Ideal DC, where the Actual DC is the mean dose of PTV60 divided by the mean dose of PTV44, and the Ideal DC is the ratio of the prescribed dose of PTV60 to the prescribed dose of PTV44. This value was used to compare the dose gradient, with an optimum value of one. Dose-volume constraints for the hypofractionated prostate radiotherapy protocol used in the present study were as follows: for the rectum, V60 Gy < 15%, V56 Gy < 25%, V52 Gy < 35%, and V48 Gy < 50%; for the bladder, V60 Gy < 25%, V56 Gy < 35%, and V52 Gy < 50%; for the penile bulb, mean dose < 42 Gy; for the femoral heads, maximum dose < 45 Gy; and for the small bowel bag, V45 Gy < 200 mL and D5 mL < 60 Gy (Ruschin et al. 2016).
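For illustration, the plan-quality indices defined above can be computed from DVH-derived quantities as in the sketch below. The inputs (D2%, D98%, D50%, PTV volumes, and mean doses) are assumed to be extracted from the planning system beforehand, and the HI expression follows the (D2% - D98%)/D50% form of Eq. (1).

```python
def conformity_index(v_ptv_prescribed, v_ptv):
    """Volume of the PTV receiving the prescribed dose divided by the PTV volume (ideal: 1)."""
    return v_ptv_prescribed / v_ptv

def homogeneity_index(d2, d98, d50):
    """(D2% - D98%) / D50%; lower is more homogeneous (optimal: 0)."""
    return (d2 - d98) / d50

def normalized_dose_contrast(mean_ptv60, mean_ptv44, presc_ptv60=60.0, presc_ptv44=44.0):
    """Actual dose contrast divided by the ideal (prescription) contrast (optimal: 1)."""
    actual_dc = mean_ptv60 / mean_ptv44
    ideal_dc = presc_ptv60 / presc_ptv44
    return actual_dc / ideal_dc
```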
Statistical analysis
The data were analyzed using the Statistical Package for the Social Sciences (version 26). Data are presented as mean ± standard deviation. Different letters (A, B, C) indicate significant differences between the means of each parameter. The comparison between groups was performed using the Friedman test followed by a post hoc test for pairwise comparison between groups. All tests were two-tailed, with a P value of < 0.05 considered significant.
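A minimal sketch of this analysis in Python (scipy) is given below, using synthetic values in place of the per-patient dosimetric data and a Wilcoxon signed-rank test as a stand-in for the unspecified post hoc test.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# hypothetical paired dosimetric values for ten patients under the three MLC plans
mlci2 = rng.normal(60.5, 0.4, 10)
agility_ff = rng.normal(60.2, 0.3, 10)
agility_fff = rng.normal(60.3, 0.3, 10)

stat, p = friedmanchisquare(mlci2, agility_ff, agility_fff)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# pairwise post hoc comparisons with a Bonferroni-adjusted alpha of 0.05 / 3
pairs = [("MLCi2 vs Agility FF", mlci2, agility_ff),
         ("MLCi2 vs Agility FFF", mlci2, agility_fff),
         ("Agility FF vs FFF", agility_ff, agility_fff)]
if p < 0.05:
    for name, x, y in pairs:
        _, pw = wilcoxon(x, y)
        print(f"{name}: p = {pw:.3f} (compare to {0.05 / 3:.3f})")
```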
Results
For each patient, six plans were generated, and the results were divided into two sets. The first set comprised VMAT plans with the different MLCs (MLCi2, Agility FFF, and Agility FF), while the second set included the corresponding IMRT results. The acceptance criteria were achieved in all plans with the VMAT and IMRT techniques for the PTVs and OARs. Figure 1A, B show the dose distribution for VMAT and IMRT plans with different MLC designs for PTV60 and PTV44. Also, Fig. 2A
VMAT dosimetric parameters of PTVs and OARs
Regarding PTV60, the results revealed no significant differences in D2% between the VMAT techniques with different MLC types (MLCi2, Agility FFF, and Agility FF). The remaining parameters (D98%, D95%, D50%, Dmin, and Dmean) showed no significant difference between Agility FF and FFF. In contrast, there were statistically significant differences between Agility (FF and FFF) and MLCi2 for these parameters (Table 2). There were also statistically significant differences between Agility (FFF and FF) and MLCi2 for CI and HI. For PTV44, the VMAT dosimetric parameters showed non-significant differences between Agility FF, FFF, and MLCi2, with the exception of Dmin (Gy), for which there was a significant difference between Agility FF/FFF and MLCi2. The OARs in Table 3 met the criteria set in the planning system for all plans using the VMAT technique with the different types of MLC and modes of energy. In terms of bladder dose, MLCi2 had the lowest maximum dose (Dmax) and V60Gy values (Table 3). No significant differences were obtained for the rectum, penile bulb, or femoral heads with the different types of MLC. The bowel bag showed a significant difference with MLCi2 compared to Agility (FF or FFF).
Plan efficiency was evaluated for each type of MLC via parameters such as MUs, delivery time (seconds), and NDC (Table 4). The present study found that Agility FFF required more MUs than the Agility FF and MLCi2 plans. Regarding delivery time, Agility FF and FFF plans significantly improved the delivery time compared to MLCi2 plans; specifically, the actual delivery time decreased with Agility MLC in both modes of energy by about 30% compared to MLCi2. The quality of the SIB plan was assessed by measuring the NDC for each type of MLC. NDC values close to 1 were obtained for all VMAT plans; however, Agility FF had better plan quality than Agility FFF and MLCi2.
IMRT dosimetric parameters of PTVs and OARs
Tables 5 and 6 summarize the results of the second set of PTV and OAR dosimetric parameters for the IMRT technique. For PTV60, the results revealed no significant differences in D2% between Agility FF, FFF, and MLCi2, but the values of D98%, D95%, D50%, and Dmean showed a significant difference for Agility FF and FFF compared to MLCi2. The PTV60 values of CI and HI for Agility FF and FFF were better than those of MLCi2 in the IMRT plans, but no significant differences were found between Agility FF and FFF.
For PTV44, D2% had the lowest value for MLCi2 compared to Agility FF and FFF. However, Agility MLC (FF and FFF) demonstrated a statistically significant improvement in the other dosimetric parameters (D98%, D95%, D50%, Dmin, Dmean, and CI) compared to MLCi2.
The OARs (Table 6) met the criteria set in the planning system for all plans using the IMRT technique with the different types of MLC and modes of energy. V60 Gy and V56 Gy for the bladder were better with MLCi2 than with Agility MLC, but the mean dose using FF and FFF was significantly lower than that using MLCi2. In addition, the rectum dose showed better values at V60 Gy and V56 Gy with Agility FF and MLCi2 than with Agility FFF, while the mean dose was significantly lower with Agility FF than with FFF and MLCi2. MLCi2 and Agility FFF had the lowest V45 and D(5 ml) values for the bowel bag compared to Agility FF. Delivery efficiency and SIB-plan quality were compared for all IMRT plans with the different types of MLC, and the results are presented in Table 7. Agility FFF and FF plans required more MUs in the IMRT technique than MLCi2, but the delivery time was shorter for Agility FF and FFF compared to MLCi2; the actual delivery time was more than 11% lower for Agility MLC in IMRT plans than for MLCi2. SIB-plan quality was significantly improved for Agility FFF and FF compared to MLCi2.
Discussion
Recently, the use of MLCs has become one of the most important innovations in radiation therapy, because it offers the required level of treatment while preserving normal tissues (Hong et al. 2014). Depending on leaf width, MLCs enable the planning system to produce high-quality plans with fewer segments, fewer monitor units, and a shorter time (Kantz et al. 2015). The results of this study show that certain MLC design parameters, such as leaf width and travel speed, affected several quantities such as the required MUs, segments, and treatment time, allowing each MLC design to be adjusted to achieve the best possible plan. Various favourable MLC types and energy modalities were identified among the Agility MLCs as a result of this study. Based on the technique used, the results were divided into two groups, the VMAT and IMRT procedures with Agility FF and FFF, which produced significantly better dosimetric results for PTV60 than MLCi2; the CI and HI values for Agility in both modes of energy (FF and FFF) were enhanced. This is consistent with prior research that investigated the effect of MLC width on tumour dose distribution using a variety of radiation modalities. For example, Chae et al. (2014) compared the target coverage and gradient index of two MLC widths (2.5 mm and 5 mm) for VMAT and IMRT procedures in the treatment of spinal lesions and found that the smaller leaf width (2.5 mm) enhanced the target coverage and gradient index. Blümer et al. (2014) used VMAT to compare two types of MLC (5 mm and 10 mm) and found better HI and CI in the 5 mm plans than in the 10 mm plans. With the exception of the femoral head in prostate and anal cancer patients and the spinal cord in head and neck cancer (HNC) patients, the DVH results for OARs in all three cancer sites in VMAT plans using 10 mm MLCs were identical. Using the VMAT approach, Lafond et al. (2013) evaluated the effect of leaf widths of 10 mm and 4 mm for HNC patients and demonstrated that with the 4 mm MLC the CI and HI for the PTV improved by 4.7% and 7.9%, respectively. Target coverage was also enhanced with 4 mm rather than 10 mm MLC in nasopharyngeal IMRT, but there was no benefit in terms of OAR avoidance (Wang et al. 2011). The CI was significantly improved using a small MLC width, and target volume coverage was higher compared with a larger MLC width (Jin et al. 2005; Dvorak et al. 2005). Additionally, the results showed that both Agility MLC techniques outperformed MLCi2 in terms of PTV coverage, whereas there was no significant difference between FF and FFF. These findings are consistent with Sun et al. (2018), who observed no significant variations in target dose distributions for oesophageal cancer between FF and FFF plans using the VMAT technique. Their results showed that smaller leaves (Agility) can preserve OARs as well as larger leaves, while PTV coverage increased with decreasing leaf width, which indicates that MLCi2 may achieve the constraints around OARs without decreasing PTV coverage.
Treatment delivery efficiency was greatly improved because the leaf speed is higher with the Agility MLC than with the other modality. These results were achieved with the Agility VMAT and IMRT techniques, which supported highly efficient treatments that reduced treatment time even while delivering a high dose per fraction. It has been reported that using modern radiotherapy technologies to reduce treatment duration results in greater patient compliance and mild toxicity (Franco et al. 2021). Also, using the Agility MLC during treatment may enhance patient comfort and reduce intra-fraction organ motion.
The results of the present study demonstrate that, compared to MLCi2, Agility reduced treatment time for VMAT and IMRT by 30% and 11%, respectively. Compared with the prescribed dose, Agility MLC delivered more MUs than MLCi2: to achieve a uniform dose distribution for SIB plans, the MUs in Agility were increased, which required an increase in segment numbers. Agility also used a higher dose rate than MLCi2 and therefore required less time; however, it has also been shown that a number of parameters, including MUs, dose rate, and MLC movement speed, influence the delivery time. SIB-plan quality (as measured by the NDC factor) showed significant differences for Agility FF and FFF compared with MLCi2 for both IMRT and VMAT plans. FFF required more MUs than FF to fulfil homogeneity and dose uniformity in the PTVs. FFF has also been demonstrated to deliver a high dose rate, which helps to reduce treatment time by providing the highest dose per fraction in stereotactic body radiotherapy (SBRT) (Sun et al. 2018).
Data availability: All data generated or analyzed during this study are included in this published article (the raw data are available from the authors on request).
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Seizure Activity Occurs in the Collagenase but not the Blood Infusion Model of Striatal Hemorrhagic Stroke in Rats
Seizures are a frequent complication of brain injury, including intracerebral hemorrhage (ICH), where seizures occur in about a third of patients. Rodents are used to study pathophysiology and neuroprotective therapies after ICH, but there have been no studies assessing the occurrence of seizures in these models. Thus, we compared seizure incidence and characteristics after infusing collagenase (0.14 U), which degrades blood vessels, and autologous blood (100 μL) into the striatum of rats. Saline was infused in others as a negative control, whereas iron, a by-product of degrading erythrocytes, served as a positive control. Ipsilateral and contralateral electroencephalographic (EEG) activity was continuously monitored with telemetry probes for a week after the stroke. There were no electrographic abnormalities during baseline recordings. As expected, saline did not elicit any epileptiform activity whereas iron caused seizure activity. Seizures occurred in 66 % of the collagenase group between 10 and 36 h, their duration ranged from 5 to 90 s, and these events were mostly observed bilaterally. No such activity occurred after blood infusion despite comparable lesion sizes of 32.5 and 40.9 mm3 in the collagenase and blood models, respectively (p = 0.222). Therefore, seizures are a common acute occurrence in the collagenase but not whole blood models of striatal ICH (p = 0.028, for incidence). These findings have potential implications for ICH studies such as for understanding model differences, helping select which model to use, and determining how seizures may affect or be affected by treatments applied after stroke.
Introduction
Intracerebral hemorrhage (ICH) occurs in~15 % of stroke patients, leading to~50 % mortality and significant disability in survivors [1,2]. So far, there are no specific neuroprotective therapies for ICH, although survivors benefit from rehabilitation. Thus, it is important to fully understand those factors that affect outcome after ICH in order to improve medical management and further limit death and disability. For instance, seizures are a common occurrence after ICH or even a presenting sign of an ICH. About 4-20 % of ICH patients will suffer from clinical seizures (e.g., convulsions), whereas 30 % of ICH victims will have subclinical seizures observable on an electroencephalogram (EEG; [2][3][4]). Current data suggests that the risk of seizures occurring within the first month is 8 % [3], and the risk of a seizure occurring after the first month and within the first year is 3 % [4]. Still, this could be an underestimate caused by the lack of continuous EEG monitoring in patients.
Intuitively, seizures are expected to worsen outcome after an ICH. Seizures can exacerbate excitotoxicity and oxidative stress [5,6], augment metabolic rate [7], and cause re-bleeding or increased bleeding due to elevated blood pressure and blood flow during seizures [3,8,7]. Intracranial pressure (ICP) also rises due to seizures [9], which can cause complications after ICH (e.g., herniation) and increase mortality [2]. Seizures may also cause aberrant brain plasticity (e.g., larger cortical maps) and impair recovery [10]. Lastly, even though the number of ICH patients that develop epilepsy is relatively low (2-5 %), the incidence of one seizure increases the chance of developing epilepsy [2,4,3].
All of this illustrates why seizures could be harmful. Clinical studies on this topic, however, have not consistently found that seizures are detrimental [11,[2][3][4], although some support this notion [12][13][14]. This variability among clinical studies could be attributed to several factors, such as inclusion criteria, methods for measuring EEG, lack of continuous EEG monitoring, use of anti-epileptic drugs (AEDs), among others. Current guidelines suggest that ICH patients with a depressed mental state should have continuous EEG monitoring, as most seizure activity occurring after ICH is subclinical [2][3][4]15]. Any seizure activity ought to be treated intravenously with an AED, and if seizure activity persists, AED treatment may continue orally. Prophylactic administration of AEDs, however, has been discouraged after evidence from studies indicating a worsening of outcome caused by administration of phenytoin before any signs of seizure activity after not only ICH [15,16] but also traumatic brain injury [17].
The incidence and consequences of seizures after an ICH have not been well studied in animal models. Thus far, swine studies have shown that excitability increases in certain areas of the brain after ICH [18]. Others have shown that in rodents, the most widely used ICH model, intracerebral infusions of blood components such as thrombin [19] and iron [20] cause seizures. To our knowledge, there have been no formal evaluations of seizure activity in the common rodent models of ICH, which involve injecting autologous blood [21] or collagenase [22] into the brain. Unlike the whole blood injection, bacterial collagenase, an enzyme that breaks down the basal lamina, causes bleeding over hours mimicking what frequently occurs in ICH patients [22,23]. Often, investigators target the striatum as it is a common site of ICH in humans and because it can contain a large hematoma that results in persistent, easily quantified, behavioral impairments [24]. In this study, we induced a moderate-sized striatal ICH in rats by injecting collagenase or whole blood, and we monitored rats with an implanted EEG telemetry probe for a week after the stroke. By using telemetry, we were able to continuously record EEG in freely moving untethered animals, which is the least stressful method for these animals. The objective of this study was to determine the incidence and characteristics of seizures that occur in these animal models of ICH.
Subjects
Twenty male Sprague-Dawley rats (250-400 g,~3 months old) obtained from the Biosciences breeding colony at the University of Alberta were assigned to either the collagenase, whole blood, or saline group. As a positive control, a rat received an injection of FeCl 2 . Food (Purina rodent chow) and water were provided ad lib and rats were housed individually in a temperature-and humidity-controlled room (lights on from 7 a.m.-7 p.m.).
EEG Probe Implantation
Surgical procedures were performed aseptically. Rats were anesthetized with isoflurane (4 % induction, 1.5-2.5 % maintenance in 60 % N 2 O, balance O 2 ) and body temperature was maintained at 37°C during anesthesia with a heated water blanket and a rectal temperature probe. An EEG telemetry probe (F40EET, Data Sciences International, St. Paul, MN) was inserted either in the peritoneal cavity or the neck (dorsal S.C. placement), and the leads were channeled under the skin and attached to screws stereotaxically placed ipsilateral (AP −1.5, ML 4) and contralateral (AP −1.5, ML −4) to the injection site (see Fig. 1). The leads were secured to the screws (0-80×3-32; Plastics One, Roanoke, VA) with dental cement. These telemetry probes measure EEG (sampled at 500 Hz, low-pass filtered at 100 Hz) and temperature and movement activity. The latter is detected by changes in signal strength as the probe moves across the receiver (Data Sciences International), providing a relative measure of activity [25].
Iron Injection
As a positive control to assess the ability of the EEG probe to detect seizures, we injected iron in the rat striatum (N=1). Immediately following attachment of the electrical leads, a hole was drilled (AP 0.5, ML 3.5) and a Hamilton 26-gauge needle was inserted 6.5 mm into the striatum to infuse 38.0 μg of FeCl 2 in 30 μL of saline [26,27]. The injection was completed over 10 min and the needle was removed after an additional 10 min. Clips were used to close the wound. The rat was euthanized 1 week later.
Collagenase, Blood, and Saline Injections
For the saline (N=3) and five of the collagenase rats (N=10), a baseline recording period of 1 week was undertaken before the injection. No baseline recording was done for the whole blood rats (N=6). Following baseline recordings, or directly after the securing of electrical leads, a hole was drilled (AP 0.5, ML 3.5) and a Hamilton 26-gauge needle was inserted 6.5 mm into the striatum. Either saline, 100 μL of autologous whole blood (from the tail artery), or 0.14 U of bacterial collagenase in 0.7 μL of sterile saline was infused over 5 or 10 min (blood model) and the needle was removed after an additional 5 or 10 min [23]. Clips were used to close the wound. Saline control and whole blood rats were euthanized after 7 days whereas the collagenase rats were euthanized between 11 and 66 days. This difference in survival times was due to differences in probe placement and technical difficulties that ensued. For instance, probe placements in the neck region caused irritation due to collection of fluid (seroma).
Figure 1. Telemetry probe inserted either in the peritoneum or under the skin of the neck (left). Leads were attached to screws (striped circle) on the skull and cemented. One channel recorded from the ipsilateral hemisphere next to the site of injection (full circle) and the other channel recorded from the contralateral side. Negative leads were connected to a screw posterior to Lambda. No further grounding was required.
Euthanasia and Lesion Volume Assessment
Rats were injected with sodium pentobarbital (100 mg/kg, i.p.) and then perfused with 0.9 % saline followed by 10 % neutral buffered formalin. Brains were extracted, cryostat-sectioned at 40 μm, and stained with cresyl violet. Coronal sections taken every 200 μm were then analyzed with ImageJ [28], as routinely done, on digitized images extending anterior to, through, and posterior to the lesion. The volume of each hemisphere was calculated as follows: (average area of the complete coronal section of the hemisphere - area of damage - ventricle) × interval between sections × number of sections [26,23]. This method takes into account areas of injury as well as atrophy and ventricular dilation.
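The volume calculation described above amounts to a Cavalieri-style summation over the sampled sections; a minimal sketch is shown below, with the section areas assumed to come from the ImageJ measurements.

```python
def hemisphere_volume(section_areas, damage_areas, ventricle_areas, interval_mm=0.2):
    """Volume estimate described above: mean of (section - damage - ventricle) area,
    multiplied by the 200-um sampling interval (in mm) and the number of sections.
    Areas are assumed to be ImageJ measurements in mm^2."""
    n = len(section_areas)
    net_areas = [s - d - v for s, d, v in zip(section_areas, damage_areas, ventricle_areas)]
    return (sum(net_areas) / n) * interval_mm * n
```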
EEG Analysis
Baseline and post-infusion EEG traces were visualized with the Dataquest A.R.T. 2.3 system (Data Sciences International), and the incidence and duration of seizures, as well as the time to seizure onset from injection, were recorded. In three rats, an extended period of 30 days was further analyzed in this way. We compared 5-min epochs of non-epileptiform activity from the day prior to the injection with recordings taken at least 3 days post-stroke from the rats that had a 1-week baseline recording (collagenase N=4; saline N=3) in order to detect any changes in otherwise normal-looking EEG. We did not find a significant difference between the day prior to collagenase infusion and day 3 after stroke. Therefore, we considered traces from the third day post-collagenase infusion to be an additional substitute baseline measure for all collagenase rats. Baseline and putative epileptiform traces were exported and analyzed using custom code written in MATLAB (R2012a, Mathworks, Natick, MA). For each signal we computed the root mean square (RMS), the power spectral density using Welch's averaged modified periodogram method (6-s window, 2-s overlap), and the dual-channel coherence for ipsilateral and contralateral recordings (3-s window; 1-s overlap). Field signals and spectra were plotted for comparison with baseline measures using Origin 9.1 (Microcal Software, Northampton, MA). We compared all putative epileptiform measures, including RMS and 95 % confidence intervals of amplitude fluctuations, to those taken during baseline/control conditions in order to confirm that the events were abnormal. For ictal traces longer than 25 s, power spectra were compared to baseline traces equal in duration to determine total increases in power, the frequencies significantly affected by the seizures, and the bilateral coherence. A randomized coherence distribution, based on a series of sequential time-shifted (and time-reversed) coherence computations from these actual traces, was computed to calculate the coherence significance level. The maximum 95 % confidence limit for this randomized distribution (throughout the frequency range) across multiple datasets having durations ranging from 27 to 87 s ranged from 0.22 to 0.061, respectively. In order to determine cross-hemispheric coupling changes during epileptiform activity, we subtracted the coherence values of normal activity from those of epileptic traces and considered any increase equal to or larger than the confidence limit for that trace to be significant. For shorter-duration aberrant activity (i.e., interictal spikes), we performed a detection analysis based on a threshold amplitude beyond the amplitude distribution of the normal traces (3.5 standard deviations from the mean [29]) and computed the average waveform and the number of events occurring per unit time.
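The MATLAB analysis described above maps directly onto standard signal-processing routines; the following Python sketch (scipy.signal) reproduces the main steps under the stated window settings and the 3.5-SD spike threshold. It is an illustrative equivalent, not the authors' code.

```python
import numpy as np
from scipy.signal import welch, coherence

FS = 500  # EEG sampling rate of the telemetry probe (Hz)

def rms(trace):
    return np.sqrt(np.mean(np.square(trace)))

def spectrum(trace, fs=FS):
    # Welch's averaged modified periodogram: 6-s windows with 2-s overlap
    return welch(trace, fs=fs, nperseg=6 * fs, noverlap=2 * fs)

def dual_channel_coherence(ipsi, contra, fs=FS):
    # ipsilateral vs. contralateral coherence: 3-s windows with 1-s overlap
    return coherence(ipsi, contra, fs=fs, nperseg=3 * fs, noverlap=1 * fs)

def interictal_spike_indices(trace, baseline):
    # threshold detection: samples more than 3.5 SD from the baseline mean
    threshold = 3.5 * np.std(baseline)
    return np.flatnonzero(np.abs(trace - np.mean(baseline)) > threshold)
```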
Temperature and Activity
We computed 5-min temperature averages (F40EET probe) from the hour after each seizure and expressed them as the difference from the average of the hour before the seizure (i.e., 1-h pre-seizure average minus each 5-min temperature average), and these differences were statistically analyzed for comparison. If a seizure occurred in the hour before another seizure, it was excluded. In the same manner, 1-h activity measures before and after each seizure were averaged and compared to assess the impact of seizures on activity (as measured by signal-strength changes as the probe moved across the receiver).
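A minimal sketch of the temperature comparison, assuming an array of consecutive 5-min temperature averages, is given below; the same pattern applies to the activity measure.

```python
import numpy as np

def post_seizure_temperature_change(temp_5min, seizure_bin):
    """Difference between the 1-h pre-seizure temperature average and each 5-min
    average in the hour after the seizure (12 five-minute bins per hour).
    temp_5min is an assumed array of consecutive 5-min temperature averages."""
    before = np.mean(temp_5min[seizure_bin - 12:seizure_bin])
    after = np.asarray(temp_5min[seizure_bin:seizure_bin + 12])
    return before - after
```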
Statistical Analysis
Data are presented as mean ± standard deviation (SD) and were analyzed by repeated-measures analysis of variance (ANOVA) and Student's t tests (SPSS v.17.0, SPSS Inc., Chicago, IL). Fisher's exact test and Mann-Whitney U tests were used to compare between models.
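For illustration, the between-model comparisons can be reproduced with scipy as sketched below; the 2 × 2 seizure-incidence counts follow the numbers reported in the Results, while the per-rat seizure counts shown are hypothetical placeholders.

```python
from scipy.stats import fisher_exact, mannwhitneyu

# seizure incidence per model, as reported: 6/9 collagenase rats vs. 0/6 blood rats
incidence = [[6, 3],   # collagenase: with seizures, without seizures
             [0, 6]]   # whole blood: with seizures, without seizures
_, p_fisher = fisher_exact(incidence)
print(f"Fisher's exact test for seizure incidence: p = {p_fisher:.3f}")

# hypothetical per-rat seizure counts (placeholders, not the values of Table 1)
collagenase_counts = [3, 2, 2, 1, 1, 1, 0, 0, 0]
blood_counts = [0, 0, 0, 0, 0, 0]
_, p_mwu = mannwhitneyu(collagenase_counts, blood_counts)
print(f"Mann-Whitney U test for seizure counts: p = {p_mwu:.3f}")
```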
Baseline EEG in the Collagenase Group
Baseline recordings allowed us to relate different behaviors to the EEG traces (Fig. 2), which were helpful for detecting abnormal activity. When we compared the RMS of 5-min traces of the day prior to stroke/sham surgery and days 1, 2, and 3 after the injection of either saline or collagenase (including only those ICH rats with seizures), we found a group (saline vs. collagenase) effect for the ipsilateral channel (p= 0.050, Fig. 3) depicting an increase in RMS in non-epileptic EEG traces after the injection in the collagenase group. This means that the average amplitude fluctuation of traces was higher for the collagenase group even during non-epileptic EEG. Moreover, there was a time effect for the contralateral side (p=0.028), the RMS on the first day after injection was larger than the third day (p=0.040) in both groups, although there was no treatment effect.
Seizures After Collagenase and Iron Injection
One collagenase rat had to be excluded due to equipment failure that did not allow for adequate EEG recordings. Six out of the nine remaining rats (66 %) had seizures within the first 2 days after the stroke, with the earliest occurring after 10 h and the latest occurring after ~36 h. Seizures ranged in duration from ~5 to 90 s, and their averages ranged from 14 to 54 s (Table 1, Fig. 4b-d). Iron also caused seizures within hours of application (Fig. 4a), which displayed a similar pattern of tonic-clonic ictal events followed by notable EEG suppression and occasional afterdischarges. Seizures or other electrographic abnormalities were not observed in any of the six rats infused with 100 μL of autologous blood (p=0.028 for comparing the number of rats with seizures between models). Not surprisingly, both the number (p=0.036) and duration of seizures (p=0.036) were significantly greater in the collagenase model.
Figure 2. Examples of EEG activity during (a) awake and alert periods, which display higher frequency and lower amplitude than (c) slow-wave sleep. Under anesthetics such as (b) isoflurane, it is common for rat EEG to display burst suppression. Artifacts such as (d) chewing were also detectable in the EEG traces. In all these instances, the experimenter observed the corresponding behaviors.
Figure 3. RMS increased in normal EEG after collagenase-induced ICH on the ipsilateral side. An overall RMS-fold increase from the day prior to stroke, over a period of 3 days, was detected in the collagenase rats that had seizures (N=4) compared to saline injection on the ipsilateral side (p=0.05), indicating more fluctuation in EEG traces relating to normal activity after collagenase than after saline (sham surgery) infusion. This was not the case for the contralateral side, for which there was a day effect for both groups but no impact of the treatment.
In two collagenase rats, we detected extended periods of interictal epileptiform discharges, ranging from ~1.5 to 14 h (Fig. 5). Over the span of 30 min in which seizures occurred in one animal, we detected abnormal interictal events in between obvious ictal activity. Interictal activity did not last more than 2 min; therefore, we were not able to analyze these events using our event detection analysis due to their short duration. For the three rats in which 30 days of EEG were screened, two had seizures within 36 h, but none of them had any noticeable EEG abnormalities after that period. Therefore, none of our collagenase rats appeared to have developed epilepsy per se. Most of the seizures were bilateral, likely generalized from an initiation zone located in the affected hemisphere and propagated across the contralateral hemisphere. There was only one case in which the seizure activity occurred only ipsilaterally (Table 1).
Severity of Seizures in the Collagenase Group
We calculated RMS and quantified power increases of seizure traces and compared them to non-epileptic activity taken from day 3 post-collagenase surgery, as indicators of increases in amplitude at different frequencies (power) and of fluctuations in the amplitude (RMS) of the ictal events. The smallest RMS-fold increase for the ipsilateral side was 1.58 times larger than non-epileptiform activity, and the largest was 4.69 times larger during ictal activity. Similarly, for the contralateral side, the smallest epileptiform RMS was 1.45 times larger, and the largest was 4.96 times larger. This indicates that during seizure activity there was about a 50-500 % increase in amplitude fluctuations in these traces. In general, increases in power were seen at all frequencies up to 38 Hz, with a decrease across the 0.5-5.8 Hz bandwidth detected in a single rat (Table 1). Also, in most cases, the ipsilateral channel had an equal or greater increase in power than the contralateral one, although this was not the case for two rats. This could support the notion of an ipsilateral focus that propagates, and/or generalizes, to the contralateral hemisphere.
For coherence calculations, we concentrated on increases in coherence at the frequencies at which we noted a change in power. Coherence values range from 0 to 1; a value of 0 indicates that signal-specific frequencies between the two channels are completely unrelated, whereas a value of 1 indicates that they are completely related. In most instances, we found significant increases in cross-hemispheric coherence during epileptiform activity as compared to normal traces taken 3 days after the collagenase injection (average increase across frequencies 0.28±0.14). Two cases were exceptions to this rule: in one, no increase was observed, and in the other, coherence values were significantly decreased despite prominent bilateral seizure activity in both cases. In one rat, we also detected that coherence was decreased for frequencies lower than 6 Hz but increased for higher frequencies. Indeed, increases in power at higher frequencies (12-40 Hz) during seizure events were all associated with significantly increased coherence (see example in Fig. 6). This might indicate that the more severe the seizures were, the more likely it was that both hemispheres were engaged in epileptic activity.
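One common way to compute such power spectra and cross-hemispheric coherence values is sketched below on synthetic two-channel data. The sampling rate, segment length, and signals are stand-ins, and the use of Welch's method is an assumption for the example, not necessarily the estimator used in the study.

import numpy as np
from scipy import signal

fs = 250                      # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 30, 1 / fs)  # a 30-s segment (traces shorter than 25 s were excluded)

# Stand-ins for ipsilateral and contralateral channels during a bilateral event:
# a shared oscillatory component plus independent noise.
shared = np.sin(2 * np.pi * 8 * t)
ipsi = 3 * shared + rng.normal(0, 1, t.size)
contra = 2 * shared + rng.normal(0, 1, t.size)
baseline = rng.normal(0, 1, t.size)   # stand-in for day-3 non-epileptic EEG

# Power spectra (Welch) and fold increase relative to baseline
f, p_seiz = signal.welch(ipsi, fs=fs, nperseg=2 * fs)
_, p_base = signal.welch(baseline, fs=fs, nperseg=2 * fs)
band = (f >= 0.5) & (f <= 40)
power_increase = p_seiz[band] / p_base[band]

# Cross-hemispheric magnitude-squared coherence (0 = unrelated, 1 = fully related)
f_c, coh = signal.coherence(ipsi, contra, fs=fs, nperseg=2 * fs)
print(f"median power fold-increase (0.5-40 Hz): {np.median(power_increase):.1f}")
print(f"mean coherence (0.5-40 Hz): {coh[(f_c >= 0.5) & (f_c <= 40)].mean():.2f}")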
Lesion Volume
The infusion of collagenase or blood caused significant damage and inflammation, as depicted in Fig. 7a (p=0.22 for lesion volume). In the collagenase model, we found no significant relationships between lesion volume and seizure characteristics (p=0.63). Thus, we did not find that lesion volume predicted seizure characteristics in this model. Owing to the lack of seizure activity, we did not perform this analysis with the whole blood model data.

Table 1 Characteristics of seizures after collagenase-induced ICH. EEG activity occurring in the first week after collagenase injection was visualized and analyzed. These are the characteristics of the seizures that occurred within the first 36 h after the stroke; no seizures were detected afterwards. For each rat, the number of seizures, laterality, total duration, and time of onset were documented. We also report other measures of the variability of the traces, as depicted by the RMS ratio (RMS seizure/RMS non-epileptic activity), and of changes in voltage with frequency, as depicted by the power increase and frequencies affected. For those frequencies in which power was affected, coherence was assessed as an indicator of how coupled the activity was between both channels; here, coherence is reported as an increase from baseline coherence. Traces shorter than 25 s were not analyzed for power and coherence.
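Such a lesion-volume versus seizure-count relationship can be tested with a standard Pearson correlation, as sketched below. The per-rat numbers are hypothetical placeholders; the reported values for this model are r=0.16 and p=0.67 (Fig. 7b).

import numpy as np
from scipy import stats

# Hypothetical values standing in for per-rat measurements (mm^3 and counts);
# the actual data are those summarized in Table 1 and Fig. 7b.
lesion_volume = np.array([42.0, 55.3, 61.8, 38.9, 70.2, 48.5, 66.1, 52.7, 59.4])
n_seizures = np.array([0, 2, 1, 0, 3, 1, 0, 2, 1])

r, p = stats.pearsonr(lesion_volume, n_seizures)
print(f"r = {r:.2f}, p = {p:.2f}")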
Temperature and Activity Data
A repeated measures ANOVA did not detect a difference among the twelve 5-min average intervals (i.e., 1 h) following the seizure (p=0.382), indicating that there were no changes in temperature after the seizure. Even though there was no consistent pattern of temperature change among rats, one rat had a seizure followed by about 90 min of hypothermia (Fig. 8). Moreover, a paired t test on all of the rats' movement activity indicated no difference between activity before and after the seizure (p=0.7968).
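The activity comparison is a simple paired test of pre- versus post-seizure values, as in the sketch below. The activity counts are invented placeholders; only the test itself mirrors the analysis described here.

import numpy as np
from scipy import stats

# Hypothetical per-rat activity counts averaged over 1 h before and after a seizure.
activity_pre = np.array([12.1, 8.4, 15.0, 9.7, 11.3, 10.6])
activity_post = np.array([11.8, 8.9, 14.2, 10.1, 11.0, 10.9])

t_stat, p_val = stats.ttest_rel(activity_pre, activity_post)
print(f"paired t = {t_stat:.2f}, p = {p_val:.3f}")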
Discussion
We expected seizures to occur in the blood infusion model, but this was not observed. Our results did confirm our hypothesis that seizures commonly occur after striatal ICH in the collagenase model. Sixty-six percent of rats in the collagenase group suffered seizures during the first 36 h following their stroke. Seizures commonly occur after brain injury and stroke, both in patients [30,2,31] and in other animal models of brain injury [32][33][34]. Thus, it is not surprising that collagenase rats would also display abnormal electrical activity as we demonstrated in this study, including both full-blown ictal and abnormal interictal activity, which mostly occurred bilaterally. Increased cross-hemispheric coherence coinciding with increased power suggests that the activity in both hemispheres during seizure activity was coupled and that more severe seizures recruited both hemispheres. Although there were some exceptions to this, coherence remained significantly increased, especially at the higher frequencies. We also demonstrated that even normal-looking electrical activity has an increased RMS for the first 3 days after collagenase-induced ICH, which could indicate that there were abnormalities in non-epileptic EEG activity during this limited time frame. Although robust epileptiform activity was a consistent phenomenon in our collagenase group, we did not find lesion volume to be a predictor for any of the seizure characteristics. Likewise, the lack of seizures in the blood model, which had a comparable lesion, argues against lesion or hematoma volume as key predictors of seizure activity. The incidence of electrographic seizures in our collagenase group (66 %) is more than double that documented in ICH patients [2]. The difference in incidence might be attributed to the greater range in patient characteristics (e.g., ICH locations, severity) in clinical studies, along with other factors such as species differences. Interestingly, other pre-clinical studies of brain insults, namely, hypoxic-ischemic injury [33], focal ischemia [34], and traumatic brain injury [32], all report a much higher percentage of animals developing seizures and epilepsy than what is reported in the clinic. However, continuous monitoring of EEG for many weeks or months is rare in clinical studies, making it difficult to compare animal and clinical data.
There are key differences between the collagenase and whole blood models that may explain the discrepancy in seizure incidence between these models. For instance, the whole blood model of ICH provides a somewhat different profile of injury (see Fig. 7a) with less secondary injury, inflammation, blood brain barrier damage, and smaller intracranial pressure spikes [23,35,36], but these may vary by species [37]. As with inflammation [38], it is possible that the timing, extent, and localization of thrombin production vary between models and this might account for differences in seizure activity. Note that intracerebral infusions of thrombin induce seizure activity [19]. Iron infusions also cause epileptogenic activity [20], as we presently confirmed. However, given the timing of iron release, which in our collagenase model occurs between 24 and 72 h [39], it is unlikely that iron causes seizures as they began between 10 and 22 h, and stopped by 36 h. As well, the hematoma volume is expected to be larger in rats infused with 100 μL of blood than those given collagenase [23]. Thus, if iron were the primary cause of seizures, there should have been more seizures in the whole blood model.
Early seizures are predictors of future epilepsy in stroke patients [13], although a study by Bladin and colleagues [4] showed that all of the patients that had late onset seizures, which were those at 2 or more weeks after the stroke, developed epilepsy. We did not find recurrent seizures after the first 36 h of the stroke, even though we screened EEG for up to a month after collagenase infusion. While this suggests that this model does not lead to epilepsy, a much larger sample size is needed especially given the small percentage expected to develop that condition. There is also the possibility that seizures may develop later than a month after ICH in rats, as occurs in other animal models such as traumatic brain injury [32].
There are some limitations to this study. First, we did not video record any of the seizure events, so the type of behavioral manifestations accompanying these electrographic seizures remains unknown, although we occasionally noticed behavioral signs of focal seizures, such as clonic paw movements. Second, the relatively limited number of animals in this study cannot exclude the possibility that occasional seizure activity occurs in the whole blood model. Third, with larger sample sizes, a modest relationship between lesion volume and seizure characteristics may have been detected. Indeed, others have reported a relationship between epileptiform activity and infarct size after focal ischemia [40]; in a clinical study, however, small lesion size was a better predictor of seizure incidence [3]. Fourth, while we recommend the use of telemetry probes, tethered systems have the advantage of allowing monitoring from more locations, which would be advantageous in future studies (e.g., to identify the seizure focus). The telemetry probes also had some additional disadvantages (e.g., greater cost), including technical problems we encountered with the use of lead extenders and, of course, the inevitable loss of battery power. Lastly, while EEG eventually returned to normal, it is likely that seizure thresholds were altered, as found in traumatic brain injured rats given a pro-convulsant challenge [32].

Fig. 6 Seizures displayed higher power and increased coherence than normal EEG. This is an example of the coherence (top panel) and power spectrum (bottom panel) for the epileptiform event in Fig. 4d (dashed line) compared to normal EEG activity 3 days after collagenase-induced ICH (solid line). Any increase in coherence above the confidence interval limit (dotted line) of the difference between seizure and normal activity (gray solid line) was significant. The gray lines in the power spectrum represent the 95 % confidence interval (CI) and the black lines the mean values of the spectrum. For those frequencies with increased power, there were also increases in coherence.

Fig. 7 Lesion volume had no relationship with seizure incidence. The ICH typically damaged a substantial portion of the striatum with some damage to the corpus callosum. No injury was found in sham-operated rats, other than a needle track. a Total lesion volume (M±SD in mm^3) for the collagenase and whole blood models was not significantly different. Photomicrographs illustrate each model's profile of injury at the level of maximal damage (cresyl violet stain). b There was no significant correlation between lesion volume (black squares) and the incidence of seizures in the collagenase model (r=0.16, p=0.67).

Fig. 8 Temperature changes (post-seizure minus pre-seizure values) for 2 h of post-seizure activity. Values are expressed as the difference between the baseline average and temperature (e.g., values below 0 indicate that hypothermia occurred after the seizure). Time 0 is the time at which a rat suffered a seizure. There were no consistent changes in temperature among the rats.
Further studies should be carried out to advance our knowledge of seizures occurring after ICH. Seizure incidence should be studied after changing the location of the lesion, as a lobar/cortical location has been associated with more seizure activity in patients [3,4]. Even though the striatal model of ICH is a common one, other structures have also been targeted, such as cortex [18] and hippocampus [41], and we are presently evaluating these models. It is possible that whole blood injections in different locations may elicit seizure activity. Also, patients with an ICH have increased ICP after the insult [15], which is also common after a collagenase-induced ICH in rats [36]. This sustained rise in ICP could be associated either with the mass effect arising from the hematoma and edema or with seizure activity, which could especially be related to ICP spiking [9]. This could be elucidated by simultaneous EEG and ICP monitoring. Furthermore, future research should focus on the relationship between seizures, cell death, and recovery. Some clinical studies have related seizures with worsened outcome and mortality [12-14], although others failed to do so [11, 2-4]. In animal models, we can experimentally increase seizure activity with convulsant drugs or diminish it with AEDs and test its impact on several markers of cell death (e.g., neurodegeneration) and functional outcome.
In conclusion, seizures occur in the majority of rats subjected to a collagenase-induced striatal ICH but did not occur after infusion of whole blood; both models are widely used to study the pathophysiology of ICH and to assess neuroprotectants and rehabilitation therapies [35]. As ICH patients also suffer seizures early after the stroke, the rat collagenase model has good face validity for modeling seizures occurring after ICH, although further work is needed to determine whether the underlying cause is the same as in patients. This is a key factor for translational purposes, as others have raised concerns regarding differences between animal and human ICH pathophysiology [42,43,35,44-46]. Researchers should also consider that seizures could potentially impact their studies. For instance, seizure activity may exacerbate the damage caused by the stroke, altering the effectiveness of neuroprotective therapies. Also, treatments may indirectly reduce cell death by ameliorating seizure activity. We recommend the use of the whole blood model when seizures may be a confounding factor. As this is the first study to find that seizures occur after collagenase-induced ICH in rats, we encourage further research to understand the relationship between seizures, cell death, and recovery after ICH. This way, we will be able to enhance the therapies currently provided to ICH patients. | 6,988.8 | 2014-07-24T00:00:00.000 | [
"Biology",
"Medicine"
] |
Operational tests of CRYRING@ESR without electron cooler solenoid compensation
We have tested operation of FAIR’s low-energy ion storage ring CRYRING@ESR with uncompensated electron cooler solenoid. With its standard working point on the lowest-order difference resonance, a second solenoid is normally used to cancel betatron coupling introduced by the cooler’s magnetic field. In operation with a D+ test beam, we found that omission of the compensation solenoid did not lead to a notable deterioration of beam intensity, quality, or cooling time, though the expected coupling of betatron motion is then clearly observed.
Introduction
CRYRING is a heavy-ion storage ring initially designed and operated by Manne Siegbahn Laboratory (MSL), Stockholm [1]. As an in-kind contribution to FAIR, the ring has been transferred to GSI, Darmstadt. Recommissioned within the CRYRING@ESR project, it complements the existing GSI facilities by a storage ring optimised for low ion energies [2]. The ring is able to store highly-charged heavy ions produced by the SIS18 synchrotron after deceleration in the Experimental Storage Ring (ESR). Additionally, a local low-energy injector linac allows stand-alone operation with light ions. Electron cooling is a central beam preparation technique and is routinely employed with the full variety of ion beams available at CRYRING@ESR [3].
With the standard working point (Q_h, Q_v) close to the lowest-order difference resonance, the longitudinal magnetic field of the electron cooler solenoid is expected to introduce strong coupling of horizontal and vertical betatron motion. Hence, the cooler is accompanied by a second, compensation solenoid of equal integral field, but inverted polarity, to cancel that coupling effect [4]. The compensation solenoid largely occupies one of the six drift sections of the ring (cf. Fig. 1). With four more drifts occupied by the injection bumpers, the extraction system, the bunching rf electrode, and the electron cooler itself, this leaves a single section of the ring free for installation of in-ring experiments.
Naturally, the question arises whether removal of the compensation solenoid in favour of additional experimental inserts could be a viable option. Except for a small class of use-cases involving polarised beams, most experiments proposed at CRYRING@ESR are not thought to be inherently disturbed by longitudinal field components, provided a high beam quality can be maintained also in the presence of strong betatron coupling. As this mode of operation had never been attempted at MSL, we have performed a series of machine experiments to study the effectiveness of the compensation solenoid and the potential impact of its omission on the properties of stored and cooled beams.
Basic ring operation
At the most fundamental level, solenoid-induced coupling could adversely affect the ring's ability to accept and store beam because of the associated horizontal and vertical emittance exchange [4]. As a test, we injected a beam of D+ from CRYRING@ESR's local ion source and RFQ linac at 300 keV/u. A few 10^7 ions were then accelerated to 1 MeV/u and stored for typically 2 s. The low rigidity of 0.29 Tm promised high sensitivity to coupling by the electron cooler field, which was set to its typical value of 0.03 T once the ions had been accelerated. The cooler's electron beam, and thus electron cooling itself, was disabled to enhance sensitivity to potential acceptance limits.
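The quoted rigidities can be checked from relativistic kinematics, as in the sketch below; the deuteron rest mass is the standard value, and the helper function is illustrative rather than part of the paper.

import numpy as np

def rigidity_Tm(kinetic_MeV_per_u, mass_MeV, charge_e, nucleons):
    """Magnetic rigidity B*rho = p/q of an ion beam, in T*m."""
    T = kinetic_MeV_per_u * nucleons            # total kinetic energy (MeV)
    pc = np.sqrt(T**2 + 2.0 * T * mass_MeV)     # relativistic momentum times c (MeV)
    return pc * 1e6 / (charge_e * 299792458.0)  # B*rho = pc / (q c)

m_D = 1875.6  # deuteron rest mass (MeV/c^2)
print(f"D+ at 1 MeV/u: {rigidity_Tm(1.0, m_D, 1, 2):.2f} T m")  # ~0.29 T m
print(f"D+ at 2 MeV/u: {rigidity_Tm(2.0, m_D, 1, 2):.2f} T m")  # ~0.41 T m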
The ring operated at its standard working point of (Q_h, Q_v) = (2.42, 2.42), which was kept identical through all phases of injection, acceleration, and storage. As, in the machine model presently underlying the accelerator controls, the mapping between working point and strengths of the two quadrupole families is imperfect, the magnet settings were manually refined using direct tune measurements by narrow-band transverse radio-frequency knock-out (RF-KO). The sextupole magnets, normally used to correct chromaticities, were left unpowered to exclude their possible contributions to any amount of betatron coupling observed.
Starting from this configuration, the compensation magnet was disconnected from the cooler's main solenoid, with which it is normally powered in series. We then scanned the quadrupole strengths so as to map out a matrix of working points covering the expected stable region around (Q_h, Q_v) = (2.42, 2.42) (cf. Fig. 2a). For each setting, we measured the intensity of the accelerated 1-MeV/u D+ beam, integrating over 1.5 s of storage. Again, the set tunes during injection, acceleration, and storage were kept identical and varied in sync during the measurement series.
The results are shown in Fig. 2b, with brighter colours indicating higher stored beam intensities. The second- and third-order stop-bands surrounding the nominal working point (2.42, 2.42) on all sides are clearly visible. The relatively small area of the stable region is consistent with the sextupoles being unpowered, i.e. with uncorrected chromaticities. For comparison, Fig. 2c shows the same measurement in the standard magnetic configuration of the ring, i.e. with the cooler solenoid compensated and with the sextupole magnets optimised for vanishing horizontal and vertical chromaticities. While the area of the stable region is clearly larger overall, the stored beam intensity near the difference resonance, especially at (Q_h, Q_v) = (2.42, 2.42), is not affected.
Coupling strength
The question of how well the coupling introduced in the cooler is cancelled by the compensation solenoid is interesting in view of experiments sensitive to longitudinal field components acting on the ions.
We measured the betatron coupling strength in both configurations, with and without compensation, by the method of 'closest tune approach' [5]. We prepared a series of quadrupole settings that, in the uncoupled case, correspond to a diagonal scan of (Q_h, Q_v), crossing the coupling resonance at (2.42, 2.42) (cf. Fig. 3). With coupling, the measured betatron frequencies exhibit an avoided-crossing behaviour, the smallest difference in the observed tunes being equal to the magnitude of the coupling coefficient |C^-|.
Again, the probe beam was D+ at 1 MeV/u. Also in this measurement, the sextupole lenses were disabled. The electron cooler solenoid operated at 0.03 T, but the ion beam was uncooled. As shown in Fig. 3a, no coupling is observed with the compensation solenoid enabled: Q_h and Q_v can cross smoothly within the uncertainty of the RF-KO-based tune measurement, which we determined to be ±3 × 10^-3.
With the compensation solenoid disabled, the avoided crossing of Q_h and Q_v near (2.42, 2.42) becomes very apparent, as visible in Fig. 3b. The measurement also shows that, in the vicinity of the coupling resonance, any single RF-KO kicker (horizontal or vertical) effectively excites betatron motion in both planes, a clear sign of energy redistribution among the two degrees of freedom of ion motion. From the perturbative coupling theory of Guignard [5], and using the approximations developed by Simonsson for the CRYRING case [4], we expect a coupling coefficient

|C^-| ≈ B_cool L_eff / (2π (Bρ)_0).

Therein, B_cool is the flux density of the cooler field, L_eff its effective length, and (Bρ)_0 the rigidity of the ion beam. With L_eff ≈ 2 m and the field set to 0.03 T, we expect |C^-| ≈ 0.033 for the 0.29-Tm D+ beam, in good agreement with the measured value |C^-|_exp = 0.032(3). The dashed and dash-dotted lines in Fig. 3b are betatron tunes computed using MAD-X, given a simplified model of the cooler magnets.
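The sketch below evaluates this estimate numerically and illustrates the avoided-crossing pattern expected from difference-resonance coupling of the stated magnitude. The diagonal-scan tune values are invented for the illustration; only B_cool, L_eff and the rigidity are taken from the text.

import numpy as np

# Estimate of the solenoid-induced coupling coefficient (see relation above).
B_cool = 0.03      # T, cooler solenoid field
L_eff = 2.0        # m, effective solenoid length
Brho = 0.29        # T m, rigidity of the 1-MeV/u D+ beam
C_minus = B_cool * L_eff / (2 * np.pi * Brho)
print(f"|C-| ~ {C_minus:.3f}")   # ~0.033, close to the measured 0.032(3)

# Standard avoided-crossing behaviour near a difference resonance: as the set
# tunes are scanned diagonally across (2.42, 2.42), the measured normal-mode
# tunes never approach each other by less than |C-|.
d = np.linspace(-0.02, 0.02, 9)        # set tune offset of the diagonal scan
Qh_set, Qv_set = 2.42 + d, 2.42 - d
avg = 0.5 * (Qh_set + Qv_set)
split = np.sqrt((Qh_set - Qv_set) ** 2 + C_minus ** 2)
Q_plus, Q_minus_mode = avg + 0.5 * split, avg - 0.5 * split
print(f"closest tune approach: {np.min(Q_plus - Q_minus_mode):.3f}")  # equals |C-|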
Electron cooling
A central goal of the machine study was to quantify the performance of the electron cooler and the achievable quality of cooled beams in the presence of coupling. For the electron cooling tests, we increased the energy of the stored D+ beam to 2 MeV/u, allowing the cooler high-voltage supply to operate in a more typical regime, while still keeping the ion rigidity at a low 0.41 Tm. The set working point was kept at (2.42, 2.42) and chromaticity correction was re-enabled. Effectively, betatron coupling leads to entanglement of the horizontal and vertical cooling forces, as ion energy is redistributed among both planes. We tested this by purposely misaligning the cooler's electron beam with respect to the ion axis. The corresponding response of the ion beam is very different for the cases of compensated and uncompensated cooler solenoid, as depicted in Fig. 4. The top panels show the horizontal and vertical projected beam profiles as a function of storage time t, as measured using ionisation profile monitors. Directly after beam injection and acceleration (at t ∼ 2 s), the electron cooler was switched on with its beam vertically misaligned by an angle of ∼2 mrad.
With the compensation solenoid enabled, the resulting non-linear vertical drag leads to strong excitation of betatron oscillation in that plane, while, horizontally, betatron motion is damped by the still properly centred cooling force along that direction, as previously observed in MSL operation [6]. Without solenoid compensation, isolated excitation of a single component of betatron motion is not possible. Hence, the vertical drag force widens the beam envelope in both planes. However, as that vertical drag is now partly compensated by the horizontal cooling force, the emittance blow-up along any direction is weaker compared to the isolated vertical excitation in the uncoupled case.
After several seconds of storage, the misalignment of the electron beam was abruptly corrected (dashed lines in Fig. 4), so that the ions were transversely cooled in both planes. The final stage of electron cooling is characterised by exponential shrinking of the beam envelope with time constant τ. Equilibrium of electron cooling and intra-beam scattering defines the final beam size σ_∞ [7]. No significant differences in either τ or σ_∞ were found between operation with and without coupling compensation, as shown in the lower panels of Fig. 4. Both measurements were done at an electron density n_e = 2.7 × 10^6 cm^-3, and the magnetic expansion factor of the electron gun was 50.
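Extracting τ and σ_∞ from profile-monitor data amounts to fitting an exponential approach to an equilibrium size, as sketched below with curve_fit. The functional form follows the description above; the width values and time axis are synthetic placeholders, not the measured profiles of Fig. 4.

import numpy as np
from scipy.optimize import curve_fit

def envelope(t, sigma_inf, sigma_0, tau):
    """Exponential shrinking of the beam envelope towards its equilibrium size."""
    return sigma_inf + (sigma_0 - sigma_inf) * np.exp(-t / tau)

# Hypothetical profile-monitor widths (mm) versus storage time (s) after the
# cooler alignment is corrected.
t = np.linspace(0, 10, 21)
rng = np.random.default_rng(2)
sigma_meas = envelope(t, 1.0, 4.0, 2.5) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(envelope, t, sigma_meas, p0=(1.0, 4.0, 1.0))
print(f"sigma_inf = {popt[0]:.2f} mm, tau = {popt[2]:.2f} s")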
The apparently weaker response of the ion envelopes to a misaligned electron beam could make the optimal set-up of electron cooling more difficult in the presence of coupling. As a check for this, we aligned the cooler beam in both configurations, starting with the assumed 'difficult' case of a disabled compensation solenoid. In both cases, we measured the longitudinal electron drag force via the bunched-beam phase shift method [8]. The measurements are shown in Fig. 5, along with fits of Parkhomchuk's semi-empirical cooling force formula, using the effective electron temperature T_eff as the only free parameter [9]. No significant difference in T_eff is found, indicating equal cooling force in both cases. The electron density n_e was 9.5 × 10^6 cm^-3 at expansion 50.
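As a rough illustration of such a fit, the sketch below uses a simplified one-parameter drag-force shape of the kind often associated with Parkhomchuk's formula, F ~ -dv/(dv^2 + v_eff^2)^(3/2) with v_eff set by T_eff. The overall amplitude, the data points and the parameter values are assumptions made for the example, not the exact formula or data used in the paper; here the amplitude is also left free for simplicity, whereas in the paper only T_eff was varied.

import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import elementary_charge, electron_mass

def drag_force(dv, T_eff_eV, F0):
    """Simplified, Parkhomchuk-like longitudinal drag force."""
    v_eff = np.sqrt(T_eff_eV * elementary_charge / electron_mass)  # m/s
    return -F0 * dv / (dv**2 + v_eff**2) ** 1.5

# Hypothetical drag-force data (arbitrary units) versus velocity detuning (m/s).
dv = np.linspace(-4e4, 4e4, 41)
rng = np.random.default_rng(3)
truth = drag_force(dv, 0.05, 1e13)
data = truth + rng.normal(0, 0.02 * np.max(np.abs(truth)), dv.size)

popt, _ = curve_fit(drag_force, dv, data, p0=(0.1, 1e13))
print(f"fitted T_eff = {popt[0] * 1e3:.0f} meV")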
Conclusion
We have tested operation of CRYRING@ESR with disabled cooler solenoid compensation, finding no indication of reduced acceptance or stability near the standard working point. Electron cooling works reliably also in the presence of strong betatron coupling. The observed impact on the betatron tunes near the difference resonance is found to be in good agreement with theoretical expectations, and is reduced by at least an order of magnitude with the compensation solenoid enabled.
Figure 1 .
Figure 1. Schematic overview of CRYRING@ESR, with its drift sections labeled according to their main functions.
Figure 2.
Figure 2. Stored beam intensity (D+, 1 MeV/u) measured for a matrix of working points (Q_h, Q_v) centred on (2.42, 2.42) as indicated by the shaded area (a), with cooler solenoid compensation and chromaticity correction disabled (b), and for the normal magnetic configuration (c). Brighter colours indicate greater ion numbers.
Figure 3 .
Figure 3. Measured tunes for diagonal scans of the set working point across the coupling resonance, as indicated in the inset, with (a) and without (b) compensation of the cooler solenoid. The probe beam was D+ at 1.0 MeV/u.
Figure 4 .
Figure 4. Response of the D + beam envelopes to transverse drag and cooling forces with (left) and without (right) the compensation solenoid active.See text.
Figure 5 .
Figure 5. Measured drag force acting on the D+ ions as a function of the velocity detuning ∆v between electron and ion beams (black dots), at electron density n_e = 9.5 × 10^6 cm^-3 (expansion factor 50), after cooler alignment with (top) and without (bottom) coupling compensation. | 2,742.6 | 2024-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
A reconstruction of Iberia accounting for W-Tethys/N-Atlantic kinematics since the late Permian-Triassic
Introduction
Plate tectonic reconstructions are based on the knowledge of magnetic anomalies that record the age, rate and direction of seafloor spreading. Where these constraints are lacking or their recognition is ambiguous, kinematic reconstructions rely on the description and interpretation of the structural, sedimentary, igneous and metamorphic rocks of rifted margins and orogens. Extension and salt movements in the North Sea basins during the Late Triassic further point to the propagation of the North Atlantic rift (Goldsmith et al., 2003).
The persistence of shallow-marine to non-marine deposition during this period contrasts with the large accommodation space that is required at larger scale to sediment the giant evaporitic province in the late Permian (Jackson et al., 2019) and in the Late Triassic (Štolfová and Shannon, 2009;Leleu et al., 2016;Ortí et al., 2017). Crustal thinning expected for this period therefore does not follow McKenzie's prediction of subsidence (McKenzie, 1978).
A first hypothesis to explain the difference with this model is that crustal attenuation induced density reduction of the thinned lithosphere by mantle phase transitions to lighter mineral phases during lithosphere thinning (Simon and Podladchikov, 2008) or due to the trapping of melt in the rising asthenosphere before breakup (Quirk and Rüpke, 2018), in addition to magmatic re-thickening of attenuated crust by underplating. Another possible hypothesis for the Permian-Triassic topographic evolution of the Iberian basins relies on the complex post-Variscan evolution of the Iberian lithosphere. Recent studies have shown that during the existence of the Pangea supercontinent (∼300 to ∼200 Ma), temperature in the asthenospheric mantle increased due to the thermal insulation by the continental lid (Coltice et al., 2009;Ganne et al., 2016). Such a mantle thermal anomaly could have further inhibited lithospheric mantle re-equilibration after late-Variscan mantle delamination over a long time span.
Once mantle temperature dropped as a consequence of the Pangea breakup and magmatic emission at the Triassic/Jurassic boundary, lithospheric mantle started to cool and thicken, causing isostatic subsidence of the thinned Iberian crust and resulting in topographic drop.
This argues for a protracted period of ∼100 Myr (late Carboniferous to Late Triassic) of continental lithosphere thinning and magmatism prior to Jurassic break-up of the North Atlantic but contemporaneous with the Tethyan evolution.One main consequence is that the late Permian-Triassic extension has been so far underestimated in plate reconstructions, despite evidence for continuous extension.
3 From late Permian-Early Triassic rifting to Late Jurassic-Early Cretaceous rifting in Iberia
The Permian-Triassic basins of Iberia are exposed in the inverted Mesozoic rift basins of the Basque-Cantabrian and Pyrenean belts, the Iberian Ranges, the Catalan Range and the Betic Cordillera (Figs. 1B and 3A). The coincidence between the orientations of the Alpine orogenic segments and the spatial distribution of these basins suggests that the Cenozoic orogenic cycle largely inherits the earliest stages of the Tethyan rift evolution. In addition, these Permian-Triassic depocentres are superposed over Variscan structures (Fig. 1B), suggesting antecedent tectonic control of the Tethyan continental rift segment by the late Variscan evolution.
We analyse subsidence reconstructed based on a compilation of well data and synthetic stratigraphic sections in the Aquitaine Basin (Brunet, 1984), Cameros and Iberian basins (Salas and Casas, 1993;Salas et al., 2001;Omodeo-Salé et al., 2017), West Iberia (Spooner et al., 2019), and the Betics (Hanne et al., 2003), to estimate the 1D mean tectonic subsidence evolution in these areas (Fig. 3B, see Supplementary Material for individual tectonic subsidence curves in each region). For each region, we calculated the mean tectonic subsidence, following the approach of Spooner et al. (2019), for which wells that do not sample the entire stratigraphy are corrected based on the oldest well of the region. We then calculated the mean crustal stretching (β factor, Fig. 3C) for each tectonic subsidence curve based on an isostatic calculation (Watts, 2001).
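As a rough guide to the kind of relation involved, the sketch below inverts a stretching factor from tectonic subsidence using a simple Airy-isostatic, uniform-thinning approximation that neglects thermal subsidence. The crustal thickness, densities and subsidence values are assumed placeholders, not the parameters or procedure actually used in the paper.

import numpy as np

def beta_from_subsidence(S_km, t_c=32.0, rho_m=3300.0, rho_c=2800.0, rho_fill=1030.0):
    """Crustal stretching factor from tectonic subsidence, assuming Airy isostasy
    and uniform crustal thinning (thermal effects neglected):
        S = t_c * (1 - 1/beta) * (rho_m - rho_c) / (rho_m - rho_fill)
    """
    x = S_km * (rho_m - rho_fill) / (t_c * (rho_m - rho_c))
    return 1.0 / (1.0 - x)

# Hypothetical mean tectonic subsidence values (km) standing in for the curves of Fig. 3B.
for name, S in [("Maestrat (Triassic)", 0.7),
                ("Iberian basins (Late Jurassic-Early Cretaceous)", 1.5)]:
    print(f"{name}: beta ~ {beta_from_subsidence(S):.2f}")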
During the late Permian-Early Triassic, a first phase of significant tectonic subsidence, up to 500 m, is recorded in the Maestrat basin and on the Iberia paleomargin of the Betic basins (Salas and Casas, 1993;Van Wees et al., 1998;Salas et al., 2001;Hanne et al., 2003;Soto et al., 2019) (Fig. 3B-C). The westward migration of marine deposition in the Iberian basins during the middle Triassic (Anisian-Carnian, 240-230 Ma) (Sopeña et al., 1988) argues that Tethyan rifting propagated westward inboard Iberia. The same evolution is suggested by the stratigraphy and the depositional evolution constraints from the Catalan and Basque-Cantabrian basins (Sopeña et al., 1988), and in the Aquitaine domain (Fig. 3B), although ill-defined for Permian times.
During the Late Triassic (220-200 Ma), the regional tectonic subsidence in all regions is associated with the deposition of evaporites that spread all over Iberia, in the Betics, West Iberia and the Aquitaine Basin (Fig. 3A). The distribution of salt terranes in Iberia and its surroundings (Fig. 3A) highlights a very large subsiding domain for this period. A maximum mean subsidence of 700 m is inferred in the Maestrat basin for Triassic times. The relatively rapid subsidence in the Triassic contrasts with the slower subsidence observed during the Early-Middle Jurassic. A notable exception is depicted by the slight increase of subsidence between 200 and 150 Ma in the Betics (Fig. 3B-C), consistent with rifting across the Iberia-Africa boundary (Ramos et al., 2016;Fernández, 2019).
A third, Late Jurassic-Early Cretaceous phase (150-110 Ma) is marked by the increase of tectonic subsidence in the Iberian basins, coeval with the expected timing of strike-slip deformation and rifting in the Cameros (e.g., Rat et al., 2019;Aurell et al., 2019) and Columbrets (Etheve et al., 2018) basins as well as the initiation of mantle exhumation in the Atlantic domain (Fig. 1A) (Murillas et al., 1990;Mohn et al., 2015). The most recent extension is recorded in the Aquitaine Basin at 120-100 Ma and reflects the onset of oceanic spreading in the Bay of Biscay (Fig. 3B-C).
Subsidence analyses show thinning events in Iberia that reveal control by Tethys and Atlantic rifting (late Permian-Late Triassic) and later by the intra-Iberian-Pyrenean rift events (Late Jurassic-Early Cretaceous). In the Iberian basin, this latter event is characterized by a relatively large and short-lived subsidence (1.5 km in 30 Myr) localized in narrow basins, which suggests the strike-slip nature of the boundary between Ebro and Iberia in the Late Jurassic. The long-lasting rift evolution, however, shows a low average stretching factor of about 1.2.
Kinematics of Iberia between Atlantic and Tethys
A plate reconstruction from the late Permian to the Cretaceous is presented in Fig. 4, based on kinematic modelling using GPlates version 2.1 (Müller et al., 2018). This reconstruction aims to present the partitioning of the deformation within Iberia into a larger coherent kinematic model of the Tethys and Atlantic Oceans. A critical step in determining the pre-rifting configuration is to restore rifted margins. Here, we adopted the reconstructed continental crust geometry of Nirrengarten et al. (2018), based on a kinematic model of the southern North Atlantic. Polygons from the model of Seton et al. (2012) were re-defined by including new smaller polygons (continental microblocks) separated by deformed areas in Iberia and Adria to account for internal deformation (Fig. 1B).
As a full fit cannot be reconstructed along the whole Iberia margin (Fig. 4A), we used a full fit only between Northwest Iberia (Galicia) and North America (Flemish Cap) to minimize the strike-slip movement between Iberia and Europe, rather than a full fit in Southwest Iberia, which leads to significant overlap between the Flemish Cap and Galicia.
Our kinematic model is based on the following constraints (Table 1): (1) geological constraints on the timing of deformation and subsidence during late Permian-Triassic time in the intra- and peri-Iberian basins mentioned above (Fig. 3); (2) the age of rifting, mantle exhumation, and onset of oceanic spreading in the Atlantic; (3) the present-day position of ophiolite bodies and the timing of rifting, oceanic spreading and subduction for the Tethyan-related oceanic domains (Paleotethys, Neotethys, Pindos, Meliata, Vardar); (4) at 100 Ma, Iberia should be close to its present-day along-strike position relative to Europe, so that the orthogonal Pyrenean shortening is accommodated in late Mesozoic-Cenozoic times.
We then integrate the kinematic evolution from published models in both the Atlantic and the Tethys according to the following workflow: 1) the reconstruction of the western Tethys prior to the Late Jurassic is based on the kinematic evolution of the Mediterranean region since the Triassic from Van Hinsbergen et al. (2019), which we corrected for overlap over the western France, Iberian and Adriatic domains; 2) for Late Jurassic and Cretaceous times, we compiled rotation poles for Adria and Africa from Handy et al. (2010) and for the North America-Europe system from Barnett-Moore et al. (2016); 3) Adria and Africa were then corrected for the position of Africa according to Heine et al. (2013).
Permian-Late Triassic (270-200 Ma)
The Neotethys Ocean opening initiated in the early Permian in the northern Gondwana margin, resulting in the northward drift of the Cimmerian terrane and the subduction of the Paleozoic Paleotethys Ocean (Stampfli et al., 2001;Stampfli and Borel, 2002). This occurred contemporaneously with the establishment of the Carboniferous-Permian magmatic activity in the North Sea rift and Midland Valley rift areas (Evans et al., 2003;Heeremans et al., 2004;Upton et al., 2004).
As the Neotethys rift propagated westwards, diffuse continental rifting took place in the whole of Western Europe, defined by the position of the Paleozoic Variscan and Caledonian orogenic belts in the West, the Tornquist suture in the East and a diffuse transtensional transfer zone along the Africa-Iberia-Adria boundary (Fig. 4A). This is recorded by several late Permian rift domains located in the southern North Atlantic (Rasmussen et al., 1998;Leleu et al., 2016), in the Adriatic (Scisciani and Esestime, 2017), in the North Sea (Hassaan et al., 2020), in the Germanic rift basins, including the Zechstein basin (Evans, 1990;Van Wees et al., 2000;Jackson et al., 2019), and in Iberia (Figs. 2, 3 and 4A). Back-arc extension associated with the subduction of the Paleotethys (Van Hinsbergen et al., 2019) (Fig. 4B) triggered extension and formation of oceanic basins in the Pindos and Meliata domains during the Early (∼250 Ma) and Late Triassic (Carnian, 220 Ma), respectively (Channell and Kozur, 1997;Stampfli et al., 2001). As proposed by Schmid et al. (2008), the Pindos ocean was probably a western branch of the Neotethys rather than a unique ocean. The strike-slip reactivation of the Tornquist Zone could also be a far-field effect of Paleotethys closure (e.g., Phillips et al., 2018) (Stampfli and Borel, 2002;Schmid et al., 2008). The large rift-related subsidence in the Iberian basins (Fig. 3B) is kinematically consistent with the stretching lineations documented from Triassic strata (Soto et al., 2019). Ebro was already individualized from Iberia and moved eastwards relative to Iberia and Europe through right-lateral and left-lateral strike-slip movements, respectively.
Early Jurassic (200-160 Ma)
This period marks a gradual change from Tethyan-dominated to Atlantic-dominated tectonism in Iberia. As the Neotethys propagated into the Vardar Ocean, the Pindos and Meliata oceans started to close (Fig. 4C) (Channell and Kozur, 1997). Major dynamic changes occurred with the CAMP event (Olsen, 1997;Marzoli et al., 1999;McHone, 2000;Leleu et al., 2016;Peace et al., 2019), which led to breakup in the Central Atlantic Ocean during the 190-175 Ma interval (Pliensbachian-Toarcian) (Fig. 4C-D) according to Labails et al. (2010) and Olyphant et al. (2017), respectively. The propagation of the Central Atlantic rift northwards caused extension to propagate in the southern North Atlantic (Murillas et al., 1990;Leleu et al., 2016) and laterally, eastward in the Alpine Tethys (Schmid et al., 2008;Marroni et al., 2017), by some reactivation of Triassic Neotethyan rift structures. Evidence for nearly synchronous intrusions of MORB-type gabbro, in a western branch of the Alpine Tethys, is described at 180 Ma in the internal zones of the eastern Betics (Puga et al., 2011), associated with the rapid subsidence in the Betics (Fig. 3B). However, whether this is related to incipient oceanic spreading or to magmatism in a hyper-extended margin is controversial. By contrast, both the thermal and stratigraphic evolutions (also Fig. 2) suggest that central Iberia remained little affected by the propagation of the Early Jurassic Atlantic rift into the Iberian basins (Aurell et al., 2019;Rat et al., 2019). A kinematic change from oblique to orthogonal E-W extension in the Alpine Tethys is marked by the onset of oceanic spreading between the Bajocian-Bathonian (170-166 Ma) and the Oxfordian (161 Ma), as suggested by the ages of MORB magmatism in the Alps (Schaltegger et al., 2002) and the first post-rift sediments (Bill et al., 2001). As such, the Jurassic Alpine Tethys has temporal and genetic affinities with the Atlantic Ocean evolution, rather than the Neotethys. The required differential movement between the opening of the Alpine oceanic domains, the Central Atlantic and the closure of the Neotethys and Vardar Oceans at 160 Ma induced the reactivation of the former diffuse transfer zone between Iberia and Africa into a localized transform plate boundary (Fig. 5A).
Late Jurassic-Early Cretaceous (160-100 Ma)
A major tectonic change occurred in the Late Jurassic-Early Cretaceous when the North Central Atlantic successfully rifted the continental domain located offshore Southwest Iberia in present-day coordinates (between 160 and 100 Ma, Fig. 5), as recorded by mantle exhumation and subsequent oceanic spreading at 150 Ma (e.g., Murillas et al., 1990;Mohn et al., 2015;Barnett-Moore et al., 2016) (Fig. 5B). At that time, the east-directed movement of Iberia relative to Ebro induced left-lateral trans-tensional faulting in a corridor shaped by the Iberian basins (Tugend et al., 2015;Aurell et al., 2019;Rat et al., 2019). We further infer a residual strike-slip movement between Ebro and Europe until the Mid-Cretaceous (118 Ma), when the Bay of Biscay opened and the rotation of Iberia occurred (Sibuet et al., 2004;Barnett-Moore et al., 2016). The eastwards motion of Iberia relative to Adria resulted in the closure of the southern Alpine Tethys (Fig. 5C). Eastward rotation of Africa induced subduction along the northern Neotethyan margin (Schmid et al., 2008) (Fig. 5B-D).
Until 120 Ma (Early Cretaceous), eastward accommodation space was constantly created by the formation of rift segments in the Southwest Alpine domain (Valaisan domain and Southeast basins of France) and then in the Provence domain (Tavani et al., 2018). In the southern part of the Western Alps, the reactivation of Tethyan normal faults is shown to be Late Jurassic-Early Cretaceous in age (Tavani et al., 2018). At 110 Ma, deformation migrated into the South Provence Basin, making a straighter continuity of the Pyrenean system toward the East (Tavani et al., 2018).
5 Implications for strike-slip movements and the Europe-Iberia plate boundary
Table 2 summarizes the timing, amounts and sense of the strike-slip component of the Ebro kinematics relative to Europe and Iberia inferred from our model. Our reconstructions suggest a total left-lateral strike-slip movement of 278 km between Europe and Ebro. 90 km were accommodated during the late Permian-Triassic period (Fig. 4A-C, 270-200 Ma), and 86 km during the Jurassic. We quantify 99 km and 19 km for the 140-120 and 120-100 Ma time intervals, respectively, leading to a total of 128 km of strike-slip movement during the Lower Cretaceous, in the range of amounts deduced from offshore and onshore geological observations (Olivet, 1996;Canérot, 2016). By 118 Ma, most of the strike-slip faulting had terminated, as extension became orthogonal and Ebro was close to its present-day position (Jammes et al., 2009;Mouthereau et al., 2014). The maximum strain rate of 5 km Myr^-1 is obtained for the 140-120 Ma time interval, revealing progressive strain localization in the Pyrenean basins before mantle exhumation (Jammes et al., 2009;Lagabrielle et al., 2010;Mouthereau et al., 2014;Tugend et al., 2014).
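The interval-averaged rates implied by these displacements can be tabulated directly, as in the sketch below; the exact bounds used here for the Jurassic interval are an assumption, since only the total displacement is quoted in the text.

# Strike-slip displacement rates between Europe and Ebro implied by the model.
intervals = {              # (start Ma, end Ma): displacement (km)
    (270, 200): 90,        # late Permian-Triassic
    (200, 160): 86,        # Jurassic (interval bounds assumed)
    (140, 120): 99,
    (120, 100): 19,
}
for (t0, t1), d in intervals.items():
    rate = d / (t0 - t1)
    print(f"{t0}-{t1} Ma: {d} km -> {rate:.1f} km/Myr")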
The Iberia-Ebro boundary has a more complex tectonic history than the Europe-Ebro boundary. The rapid eastward displacement of Ebro during the late Permian to Late Jurassic period (Figs. 4 and 5) induces a total of 67 km (12, 33, 17, and 5 km during the 270-250, 250-200, 200-180, and 180-160 Ma time intervals, respectively) of right-lateral strike-slip between Ebro and Iberia (i.e., Galicia). This displacement has been partitioned with extension within the Iberian basins along a NW-directed intra-continental deformation corridor, consistent with stretching markers in Triassic rocks in this area (Soto et al., 2019). From 160 to 100 Ma, the northward propagation of the Central Atlantic spreading ridge into the southern North Atlantic resulted in a net left-lateral slip of 245 km and increasing strain rates of up to 9 km Myr^-1, indicating that the southern Ebro boundary became the main tectonic boundary in Iberia, accommodating the eastward displacement of Iberia into the Alpine Tethys region. Despite the requirement of such large movements in the Iberian Range, geological evidence is lacking. This likely reflects the role played by the Triassic evaporites, which decouple the large extension in the pre-salt basement from thin-skinned extension in the sedimentary cover, as shown around Iberia by numerical studies (e.g., Grool et al., 2019;Duretz et al., 2019;Jourdon et al., 2020;Lagabrielle et al., 2020).
[Residual caption/table fragments: tectonic subsidence data compiled for the Aquitaine Basin (Brunet, 1984), Betics (Hanne et al., 2003), Cameros basin (Salas and Casas, 1993;Salas et al., 2001;Omodeo-Sale et al., 2017), Maestrat basin (Salas and Casas, 1993;Salas et al., 2001) and West Iberia (Spooner et al., 2018); further references listed by region: Olivet, 1996;Srivastava et al., 2000;Schettino and Turco, 2009;Fernandez, 2019; North Sea: Nirrengarten et al., 2018;Hassan et al., 2019;Sandoval et al., 2019; Tethys & peri-Tethys.]
Table 1 .
Geodynamic and timing constraints used in the kinematic reconstruction model
Table 2 .
Quantification of strike-slip displacement between Europe and Ebro and between Iberia (Galicia) and Ebro. | 3,977.4 | 2020-03-06T00:00:00.000 | [
"Geology",
"Geography"
] |
The Onset of Double Diffusive Convection in a Viscoelastic Fluid-Saturated Porous Layer with Non-Equilibrium Model
The onset of double diffusive convection in a viscoelastic fluid-saturated porous layer is studied when the fluid and solid phase are not in local thermal equilibrium. The modified Darcy model is used for the momentum equation and a two-field model is used for energy equation each representing the fluid and solid phases separately. The effect of thermal non-equilibrium on the onset of double diffusive convection is discussed. The critical Rayleigh number and the corresponding wave number for the exchange of stability and over-stability are obtained, and the onset criterion for stationary and oscillatory convection is derived analytically and discussed numerically.
Introduction
The problem of double diffusive convection in porous media has attracted considerable interest during the past few decades because of its wide range of applications, including the disposal of the waste material, high quality crystal production, liquid gas storage and others.
Early studies on the phenomenon of double diffusive convection in porous media were mainly concerned with the problem of convective instability in a horizontal layer heated and salted from below. The double-diffusive convection instabilities in a horizontal porous layer were studied primarily by Nield [1,2] on the basis of linear stability theory for various thermal and solutal boundary conditions. The analysis was then extended by Taunton et al. [3], Turner [4][5][6], and Huppert and Turner [7]. Platten and Legros [8] reported excellent reviews of these studies, which have been the subject of extensive theoretical and experimental investigations. Recently, Pritchard and Richardson [9] discussed how the dissolution or precipitation of the solute affects the onset of convection.
On the other hand, viscoelastic fluid flow in porous media is of interest for many engineering fields. Unfortunately, the convective instability problem for a binary viscoelastic fluid in the porous media has not been given much attention. Wang and Tan [10,11] performed the stability analysis of double diffusive convection of Maxwell fluid in a porous medium, and they pointed out that the relaxation time of Maxwell fluid enhances the instability of the system. Double-diffusive convection of Oldroyd-B fluid in the porous media is studied by Malashetty and co-workers [12][13][14].
In the present research, we perform a linear stability analysis of double diffusive convection in a viscoelastic fluid-saturated porous layer, with the assumption that the fluid and solid phases are not in local thermal equilibrium (LTE). The effects of the parameters of the system on the onset of convection are discussed analytically and numerically. The critical Rayleigh number, wave number and frequency for the exchange of stability are determined.
Basic Equations
We consider an infinite horizontal porous layer of depth d, saturated with a Maxwell fluid mixture heated and salted from below, with the vertically downward gravity force g acting on it. The lower surface is held at a temperature T_1 and concentration S_1, while the upper one is kept at a lower temperature T_2 and concentration S_2; moreover, T_1 > T_2 and S_1 > S_2. Assuming slow flow in the porous medium, the momentum balance equation can be linearized (Eq. (1)), where ρ is the density, q = (u, w) is the volume-averaged velocity obtained using a volume averaging technique, g is the acceleration due to gravity, and p is the pressure. For general viscoelastic fluids, the constitutive relation between the stress tensor τ and the strain-rate tensor D is given by Delenda et al. [15] as

(1 + λ_1 ∂/∂t) τ = μ (1 + λ_2 ∂/∂t) D,   (2)

where μ is the viscosity, and λ_1 and λ_2 are the relaxation time and retardation time, respectively. For a Maxwell fluid, λ_2 = 0. Substituting Eq. (2) into Eq. (1) then yields the modified Darcy-Maxwell model describing the flow in the porous medium, neglecting the Soret and Dufour effects between temperature T and concentration S, together with the continuity equation

∇ · q = 0,   (3)

where K and ε are the permeability and porosity of the medium, while κ is the effective solutal diffusivity of the medium. We assume that the diffusion of temperature obeys a two-field (non-equilibrium) model for the solid and fluid phases, as suggested by [2,14,17] (Eqs. (6)-(7)), where c is the specific heat and k is the thermal conductivity, with the subscripts f and s denoting the fluid and solid phase, respectively, and h is the inter-phase heat transfer coefficient. The coefficient h depends on the nature of the porous matrix and the saturating fluid, and small values of h give rise to relatively strong thermal non-equilibrium effects. In Eqs. (6)-(7), T_f and T_s are intrinsic averages of the temperature fields, which allows one to set T_f = T_s = T_b whenever the boundary of the porous medium is maintained at the temperature T_b. The onset of double diffusive convection can be studied under the Boussinesq approximation, with the assumption that the fluid density depends linearly on the temperature T and solute concentration S,

ρ_f = ρ_0 [1 − β_T (T − T_0) + β_S (S − S_0)],

where ρ_f and ρ_0 are the densities at the current and reference states, respectively, and β_T and β_S are the coefficients of thermal and solutal expansion. The Boussinesq approximation, which states that the effect of compressibility is negligible everywhere in the conservation equations except in the buoyancy term, is assumed to hold.
Basic State
The basic state is assumed to be quiescent, and we superimpose a small perturbation on it. We eliminate the pressure from the momentum transport equation (4) and define the stream function ψ by u = ∂ψ/∂z, w = −∂ψ/∂x. The following dimensionless variables are then defined, where the symbol '*' denotes a dimensionless quantity, θ and φ are the non-dimensional temperatures of the fluid and solid phases, respectively, and Φ is the non-dimensional concentration of solute in the porous medium. Substituting the above dimensionless variables into the system yields the non-dimensional governing equations (for simplicity, the dimensionless mark '*' will be dropped hereinafter), where ∇² is the two-dimensional Laplacian operator, and the non-dimensional parameters that appear in these equations are defined as follows: Ra is the thermal Rayleigh number, Rs is the solute Rayleigh number, λ is the relaxation parameter, Pr is the Prandtl number, Da is the Darcy number, Va is the Vadasz number, g is the normalized porosity, ν is the kinematic viscosity, Le is the Lewis number, α is the diffusivity ratio, γ is the porosity-modified conductivity ratio, and H is the non-dimensional inter-phase heat transfer coefficient. When H → ∞, the solid and fluid phases are in local thermal equilibrium. The boundary conditions are θ = φ = Φ = 0 on z = 0 and 1.
Linear Stability Theory
In this section, we discuss the linear stability of the system. According to the normal mode analysis, Eqs. (10)-(13) are solved using time-dependent periodic disturbances in a horizontal plane. We assume that the amplitudes are small enough that the perturbed quantities can be expressed as normal modes proportional to exp(iax + σt), where a is the horizontal wavenumber and σ is the growth rate. Substitution of Eq. (15) into the linearized version of Eqs. (10)-(13) yields the characteristic equation for the growth rate. The growth rate σ is in general a complex quantity, σ = ω_r + iω_i. The system with ω_r < 0 is always stable, while for ω_r > 0 it is unstable. For the neutral stability state ω_r = 0, we set σ = iω_i. Since Ra is a physical quantity, it must be real. Hence, from Eq. (18) it follows that either ω_i = 0 (steady onset) or Δ_2 = 0 (ω_i ≠ 0, oscillatory onset).
Stationary Convection
The steady onset corresponds to ω_i = 0 and reduces Eq. (18) to the expression for the stationary Rayleigh number. This result was obtained by Banu and Rees [18] for the case of a Darcy porous medium with a thermal non-equilibrium model. When H → ∞, i.e., in the case of local thermal equilibrium, Eq. (17) takes the LTE form of Eq. (20), which can further be written as Eq. (21). In the absence of the solute effect, Eq. (21) reduces to Ra = (π² + a²)²/a², the classical result obtained by Horton and Rogers [19]. The value of the Rayleigh number Ra given by Eq. (17) can be minimized with respect to the wavenumber a by setting ∂Ra/∂a² = 0 and solving the resulting equation. Here a_0 is the critical wavenumber for the LTE case; from Eq. (21) we obtain a_0 = π. Substituting Eq. (26) into Eq. (25), rearranging the terms and then equating the coefficients of the same powers of H allows us to obtain a_1 and a_2. Substituting these values of a_0, a_1 and a_2 into Eq. (25), we can obtain the critical Rayleigh number for small H.
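A quick numerical check of this classical limit, minimizing Ra(a) = (π² + a²)²/a² over the wavenumber, is sketched below; it recovers a_c = π and Ra_c = 4π² ≈ 39.48.

import numpy as np
from scipy.optimize import minimize_scalar

# Classical Horton-Rogers (LTE, single-component, Darcy) stationary Rayleigh number
# as a function of the horizontal wavenumber a.
Ra = lambda a: (np.pi**2 + a**2) ** 2 / a**2

res = minimize_scalar(Ra, bounds=(0.5, 10.0), method="bounded")
print(f"a_c = {res.x:.4f} (pi = {np.pi:.4f}), "
      f"Ra_c = {res.fun:.4f} (4*pi^2 = {4 * np.pi**2:.4f})")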
Letting ∂Ra/∂a² = 0, we obtain the corresponding expression for large H. Similarly, we expand a in a power series of H. Then, substituting these values of a_0, a_1 and a_2 into Eq. (28), we can obtain the critical Rayleigh number for large H.
Oscillatory Convection
For oscillatory onset, ω_i is non-zero, which requires Δ_2 = 0 in Eq. (18), giving the expression for the oscillatory Rayleigh number (Figs. 3 and 4). Moreover, the larger the heat transfer coefficient H is, the faster the heat transfer, enabling the viscoelastic fluid to attain a greater percolation velocity. Therefore, a large heat transfer coefficient favors the onset of convection. From Fig. 4, we observe that the effect of increasing γ is to decrease the minimum of the Rayleigh number for the stationary mode, indicating that the porosity-modified conductivity ratio advances the onset of convection.
Numerical Results and Discussion
The variation of the critical Rayleigh number for the stationary mode with the heat transfer coefficient, for different values of the conductivity ratio, is shown in Fig. 5. We find that the critical Rayleigh number is independent of γ for small values of H, but for large H the critical Rayleigh number decreases with increasing γ. Moreover, for very large γ (≥ 10), the critical Rayleigh number is independent of H. Thus, we can draw the conclusion that the presence of thermal non-equilibrium between the viscoelastic fluid and the solid makes the system unstable. Figs. 6-13 present the neutral curves for different values of the relaxation parameter λ, the Vadasz number, the heat transfer coefficient H, the normalized porosity parameter g, the solute Rayleigh number Rs, the porosity-modified conductivity ratio γ, the Lewis number Le and the diffusivity ratio α, respectively. As can be seen from the figures, these parameters have significant effects upon the neutral curves.
The effect of relaxation time on the neutral curves is shown in Fig. 6. As shown in Fig. 6a, i.e., for the local thermal non-equilibrium case, the minimum of the Rayleigh number is smaller when λ is larger, which makes the onset of convection easier. Based on the theory of the Maxwell fluid model, a fluid relaxation or characteristic time, λ, is defined to quantify the viscoelastic behavior [20]. The physical mechanism is therefore that increasing the relaxation time increases the elasticity of a viscoelastic fluid, thus causing instability. As a result, the elasticity of the Maxwell fluid has a destabilizing effect on the fluid layer in the porous medium, and oscillatory convection occurs more readily for a viscoelastic fluid. This result agrees with that given by Wang and Tan [11], who studied the double diffusive convection problem with thermal equilibrium, as shown in Fig. 6b.
From Fig. 7, we find that an increase in the value of the Vadasz number decreases the oscillatory Rayleigh number, indicating that the Vadasz number advances the onset of double-diffusive convection, in agreement with the results of Malashetty and Biradar [16].
The stationary Rayleigh number increases with an increase in the value of the heat transfer coefficient H, as shown in Fig. 8, indicating that the effect of the heat transfer coefficient is to enhance the stability of the system. The same effect of H on the oscillatory Rayleigh number can be observed in this figure. Compared with the curve for the local thermal equilibrium model, it can be seen that oscillatory convection occurs more readily in the thermal non-equilibrium case.
In Fig. 9, we note that the effect of the normalized porosity parameter is to advance the onset of oscillatory convection. From Fig. 10, we find that increasing Rs has a stabilizing effect on the onset of double-diffusive convection. The neutral stability curves for the stationary and oscillatory modes for different values of the porosity-modified conductivity ratio are shown in Fig. 11, which leads us to the conclusion that increasing the porosity-modified conductivity ratio has a destabilizing effect on the system.
The effect of the Lewis number Le on the critical oscillatory Rayleigh number is shown in Fig. 12. From the figure, it can be seen that increasing the Lewis number decreases the critical oscillatory Rayleigh number, indicating that the Lewis number destabilizes the system in the oscillatory mode. The physical interpretation has been given by Malashetty and Biradar [16]: when Le > 1, the diffusivity of heat is greater than that of the solute, and therefore the destabilizing solute gradient augments the onset of oscillatory convection. From Fig. 13, we observe that the diffusivity ratio a has little effect on the onset of double-diffusive convection.
Conclusion
The onset of double-diffusive convection in a binary Maxwell fluid, heated and salted from below, is studied analytically using a thermal non-equilibrium model. Based on the normal mode technique, the linear stability is analyzed, and the effects of the relaxation time, heat transfer coefficient, normalized porosity parameter, and other parameters on the stationary and oscillatory convection are discussed and shown graphically. It is found that increasing the relaxation time increases the elasticity of the viscoelastic fluid, thus causing instability. Asymptotic solutions for both small and large values of H were obtained. In general, this work shows how the relaxation time and the non-equilibrium model affect double-diffusive convection in porous media, and it may be useful in applications involving heat and mass transfer. | 3,183.6 | 2013-11-28T00:00:00.000 | [
"Physics"
] |
The Mantle of Advanced Glycation End Products in Micro-and Macrovascular Complications of Type 2 Diabetes Mellitus
Type 2 Diabetes Mellitus
Type 2 Diabetes Mellitus is a multifactorial disorder that arises from a complex interplay between genetic predisposition and lifestyle choices as the two primary causative factors. It is characterized by chronic inflammation, insulin resistance, oxidative stress, and hyperglycemia. A chronic state of hyperglycemia often results in the formation of Advanced Glycation End products, hereafter abbreviated as AGEs, via the Maillard reaction. The Maillard reaction is defined as the formation of adducts between reactive carbonyls in glucose, fructose, and their metabolites, such as methylglyoxal or deoxyglucosone, and amino groups in proteins, DNA, and lipids. This reaction has been implicated as a root cause of many diabetes-associated micro- and macrovascular complications [1].
Advanced Glycation End Products are frequently referred to as glycotoxins, whose formation is induced by nonenzymatic glycation and oxidative stress reactions [2]. They are a heterogeneous group of biological entities formed via a nonenzymatic post-translational modification reaction between reducing sugars and the amino groups of proteins, nucleic acids, and lipids [3]. AGEs are formed via the Maillard reaction, first described in 1912 by the French scientist Louis Camille Maillard. This is a multistep process, initiated by the reversible reaction between the carbonyl group of a reducing sugar and the terminal amino group of a protein, lipid, or nucleic acid, resulting in the formation of a Schiff base. These bases undergo irreversible rearrangements to form more stable ketoamines, referred to as Amadori products, a well-known example of which is HbA1c. These products undergo further structural rearrangements via oxidation, condensation, and dehydration over the span of days to weeks and give way to irreversibly cross-linked, fluorescent protein derivatives known as AGEs [4,5]. These products persist in diabetic vessels for a long time despite improved glycemic control and undergo only slow degradation (Figure 1).
In the context of type 2 diabetes mellitus, the accelerated formation and accumulation of AGEs have been implicated in the onset of the diabetic microvascular complications summarised below.
Diabetic retinopathy: Diabetic retinopathy is one of the foremost microvascular complications of T2DM and the leading cause of acquired blindness in working-age and young adults. It starts with retinal microvascular cells being damaged by hyperglycemia. Subsequent loss of pericytes enhances vascular permeability, which in turn leads to microvascular occlusion in the retina [6,7]. AGEs have been implicated in the onset and progression of microvascular disease in DM. Levels of the endothelial cell-specific mitogen VEGF in ocular fluid have been reported in various clinical studies to correlate positively with the degree of neovascularization in diabetic retinopathy [8]. AGEs accumulate in retinal pericytes during diabetes and adversely affect their function and survival [9]. Various studies have shown that AGE accumulation causes apoptosis of retinal pericytes, and its interaction with RAGE induces the expression of VEGF, DNA synthesis, and angiogenesis, which are all regarded as hallmarks of proliferative retinopathy [10]. AGE-induced damage to pericytes predisposes vessels to angiogenesis, thrombogenesis, and endothelial cell injury, which results in the overt clinical expression of diabetic retinopathy [11]. AGEs have been shown to stimulate angiogenesis by inducing growth and tube formation of microvascular endothelial cells via interaction with RAGE and subsequent VEGF expression [12]. AGEs have also been shown to enhance leukocyte adhesion to cultured retinal microvascular endothelial cells by inducing the expression of ICAM-1 (intercellular adhesion molecule-1), which leads to leukostasis and blood-retinal barrier dysfunction, as shown in various in vivo experiments [13][14][15].
Diabetic nephropathy: Nephropathy is one of the most common complications of T2DM. Increased glomerular basement membrane thickness, a decreased glomerular filtration rate, and an expanded mesangial volume are regarded as hallmarks of nephropathy [16]. Diabetes-induced alterations in the physical and biochemical properties of the glomerular basement membrane result in proteinuria. AGEs have been implicated in the disruption of glomerular homeostasis, as their accumulation in mesangial cells induces apoptosis and inhibits cell growth. Mesangial cells represent a key anatomical component of the glomerulus, providing structural support for capillary tufts and modulating glomerular filtration via smooth muscle activity [17,18]. Secretion of VEGF and monocyte chemoattractant protein-1 (MCP-1) is also stimulated by AGEs, which results in hyperfiltration and microalbuminuria, thus leading to the early phase of diabetic nephropathy [10]. Furthermore, serum levels of AGEs are higher in diabetic patients with nephropathy than in diabetic patients without clinically evident nephropathy [19]. These findings suggest that AGEs impact mesangial cells in the same manner as they affect pericytes, and that vascular wall damage is the stepping stone for all diabetic vascular complications. Overexpression of RAGE in diabetic mice resulted in progressive glomerulosclerosis and renal dysfunction, whereas inactivation of RAGE in a mouse model of diabetic nephropathy suppressed the kidney enlargement, increased glomerular cell number, mesangial expansion, advanced glomerulosclerosis, albuminuria, and elevated serum creatinine levels seen in wild-type diabetic mice [20,21]. In the latter study, low-molecular-weight heparin treatment specifically prevented albuminuria, the increase in glomerular cell number, mesangial expansion, and glomerulosclerosis by acting as a RAGE antagonist [20]. Recent studies have shown that RAS (renin-angiotensin system) inhibitors such as telmisartan or olmesartan can inhibit AGE-evoked inflammatory responses in endothelial cells by downregulating RAGE expression, thereby potentially preventing diabetic vascular complications [22,23].
Diabetic neuropathy: Diabetes mellitus is a key cause of peripheral neuropathy, which typically presents as distal symmetrical polyneuropathy [19]. Key pathological developments in human diabetic nerves include fiber loss, axonal degeneration and demyelination, and microangiopathic changes [24,25]. AGEs have been detected in the sural, peroneal, and saphenous nerves of human diabetic subjects, in the perineurium, in the endothelial cells and pericytes of endoneurial microvessels, as well as in myelinated and unmyelinated fibers [26]. AGEs accumulate in the nerves of diabetic patients, and inhibition of AGE formation by anti-glycation agents improved the neuropathic changes in an experimental diabetic rat model [27]. However, the pathologic mechanisms behind the actions of AGEs in diabetic neuropathy are poorly understood. AGEs have been demonstrated to affect the viability, replication, and production of proinflammatory cytokines such as tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β) in Schwann cells [28]. This toxic behavior of AGEs has also been observed in neuronal, vascular, and mesangial cells [10,29].
Cardiovascular diseases
AGE modification of proteins frequently involves cross-linking, which has been implicated in vascular and myocardial stiffness and in the deterioration of the structural integrity and physiological functioning of various organ systems in the context of isolated systolic hypertension and diastolic heart failure [30]. Several in vitro, in vivo, and epidemiological studies have established atherosclerosis as an intrinsically inflammatory disease of the arterial wall [31]. Activation of the AGE-RAGE pathway generates oxidative stress, which subsequently activates the NF-kB signaling pathway in vascular wall cells. This sequence of events promotes the expression of atherosclerosis- and inflammation-promoting genes, which contributes to the development and progression of cardiovascular complications in diabetes [11,32,33]. Nitric oxide (NO) is a potent endogenous vasodilator with anti-inflammatory, anti-thrombotic, anti-proliferative
and anti-atherogenic properties [34]. AGEs have been shown to inhibit endothelial NO synthase and concomitantly stimulate the production of peroxynitrite, a reactive intermediate and toxic product of the reaction of NO with the superoxide anion. The AGE-RAGE interaction also stimulates the production of an endogenous inhibitor of endothelial NO synthase, asymmetric dimethylarginine (ADMA), which is expressed in endothelial, renal mesangial, and renal proximal tubular cells [35,36]. ADMA has recently been recognized as a potent biomarker of CVD and chronic kidney disease progression and could also be involved in cardiorenal complications in diabetes [5,37]. In diabetic patients, AGE modification impairs the plasma clearance of low-density lipoprotein (LDL) and converts it into a more atherogenic and redox-sensitive mitogen-activated protein kinase (MAPK) activator [38].
AGEs have also been implicated in the reduction of adenosine triphosphate-binding membrane cassette transporter A1 (ABCA1) and ABCG1 levels in THP-1 cells, which inhibits cholesterol efflux from THP-1 macrophages to apolipoprotein AI and HDL cholesterol, respectively. This cycle of events implicates the AGE-RAGE axis in impaired reverse cholesterol transport in diabetes and in the accelerated formation of foam cells in atherosclerotic lesions [39,40]. AGEs promote thrombogenesis by activating and aggregating platelets and by enhancing the expression of tissue factor, which leads to thrombus formation. Recent studies have also shown that AGEs potentiate thrombin- or factor Xa-mediated endothelial and renal cell damage via upregulation of protease-activated receptors-1 and -2 [41][42][43]. The interaction of AGEs with their receptor RAGE inhibits prostacyclin production and induces plasminogen activator inhibitor-1 generation in endothelial cells [44]. Therefore, it can be stated conclusively that AGEs possess the ability to stimulate platelet aggregation and fibrin stabilization, resulting in a predisposition to thrombogenesis and the promotion of vascular injury in diabetes. AGE-induced pathological neovascularization of atherosclerotic plaques is often mediated by ischemia- and hypoxia-mediated upregulation of VEGF [45].
This triggers pathological angiogenesis, which contributes to plaque growth and instability within atherosclerotic plaques in diabetes [5]. Endothelial cell dysfunction and decreased endothelial progenitor cell (EPC) function are another hallmark of the increased risk of cardiovascular complications in diabetic patients [46]. AGEs are known to enhance apoptosis and suppress migration and tube formation of late EPCs by interacting with RAGE and subsequently suppressing Akt and COX-2 downstream [47]. This modification impairs vascular repair by inhibiting EPC adhesion, spreading, and migration via glycation of the Arg-Gly-Asp motif of fibronectin [48]. Vascular calcification in atherosclerosis is often mediated by AGEs by means of osteoblastic differentiation of pericytes. Activation of RAGE inhibits myocardin-dependent smooth muscle cell (SMC) gene expression and induces osteogenic differentiation of vascular SMCs through Notch/Msx2 induction, thus also being involved in vascular calcification [49]. AGEs have been implicated in the induction of oxidative stress, which subsequently induces SMC proliferation via activation of NADPH oxidase. AGE-RAGE-induced extracellular signal-related kinase activation is reported to increase Na+/H+ exchanger-1 activity, which leads to a decrease in intracellular H+ and subsequently promotes cell-cycle progression and SMC proliferation [50].
Conclusion
Advanced Glycation End Products have been established to play a causative role in the onset of type 2 diabetes mellitus as well as its associated co-morbidities, such as diabetic nephropathy, diabetic neuropathy, diabetic retinopathy, and cardiovascular disease. This role is triggered by the chronic state of hyperglycemia, which is accompanied by inflammation and oxidative stress and activates multiple downstream signaling pathways that result in various micro- as well as macrovascular complications in diabetic patients. | 2,577.4 | 2017-06-30T00:00:00.000 | [
"Biology",
"Medicine"
] |
Electromagnetic force and torque in ponderable media
Maxwell's macroscopic equations combined with a generalized form of the Lorentz law of force are a complete and consistent set of equations. Not only are these five equations fully compatible with special relativity, they also conform with the conservation laws of energy, momentum, and angular momentum. We demonstrate consistency with the conservation laws by showing that, when a beam of light enters a magnetic dielectric, a fraction of the incident linear (or angular) momentum pours into the medium at a rate determined by the Abraham momentum density, E×H/c^2, and the group velocity V_g of the electromagnetic field. The balance of the incident, reflected, and transmitted momenta is subsequently transferred to the medium as force (or torque) at the leading edge of the beam, which propagates through the medium with velocity V_g. Our analysis does not require "hidden" momenta to comply with the conservation laws, nor does it dissolve into ambiguities with regard to the nature of electromagnetic momentum in ponderable media. The linear and angular momenta of the electromagnetic field are clearly associated with the Abraham momentum, and the phase and group refractive indices (n_p and n_g) play distinct yet definitive roles in the expressions of force, torque, and momentum densities.
Introduction
Standard textbooks on electromagnetism tend to treat the macroscopic equations of Maxwell as somehow inferior to their microscopic counterparts [1,2]. This is due to the fact that, for real materials, polarization and magnetization densities P and M are defined as averages over small volumes that must nevertheless contain a large number of atomic dipoles. Consequently, the macroscopic E, D, H and B fields are regarded as spatial averages of the "actual" fields; without averaging, these fields would be wildly fluctuating on the scale of atomic dimensions. (The actual fields, of course, are presumed to be well-defined at all points in space and time.) There is also a tendency to elevate E and B to the status of "fundamental," while treating D and H as secondary or "derived" fields. This is an unfortunate state of affairs, considering that the macroscopic equations of Maxwell are a complete and self-consistent set, provided that the fields are treated as precisely-defined mathematical entities, i.e., without attempting to associate P and M with the properties of real materials. Stated differently, if material media consisted of dense collections of point dipoles, then any volume of the material, no matter how small, would contain an infinite number of such dipoles, eliminating thereby the need for the introduction of macroscopic averages into Maxwell's equations. Also, since in their simplest form, the macroscopic equations contain all four of the E, D, H, B fields, one should perhaps resist the temptation to designate some of these as more fundamental than others. Tellegen [3] has promoted the idea that E, D, H and B should be regarded as equally important physical entities, a point of view with which we agree.
Constitutive relations equate P with D − ε₀E and M with B − µ₀H, thus allowing P and M to be designated as secondary fields. Electric and magnetic energy densities and the Poynting vector may now be written as E_e = E·D, E_m = H·B, and S = E×H, respectively, without the need to explain away the appearance of "derived" fields D and H in the expressions pertaining to a most fundamental physical entity. [We note in passing that, in deriving Poynting's theorem, the assumed rate of change of energy density ∂E/∂t = E·J_free + E·∂D/∂t + H·∂B/∂t (1) is, in fact, the only postulate of the classical theory concerning electromagnetic energy.] The fifth fundamental equation of the classical theory, the Lorentz law of force F = q(E + V×B), expresses the force experienced by a particle of charge q moving with velocity V through an electromagnetic field. It is fairly straightforward to derive from this law the force and torque exerted on an electric dipole p (or the force and torque densities exerted on the polarization P). However, the Lorentz law is silent on the question of force/torque experienced by a magnetic dipole m in the presence of an electromagnetic field. Traditionally, magnetic dipoles have been treated as Amperian current loops, and the force and torque exerted upon them have been derived from the standard Lorentz law by considering the loop's current as arising from circulating electric charges. The problem with this approach is that, when examining the propagation of electromagnetic waves through magnetic media, one finds that linear and angular momenta are not conserved. Shockley [4] has famously called attention to the problem of "hidden" momentum within magnetic materials. Fortunately, it is possible to extend the Lorentz law to include the electromagnetic forces on both electric and magnetic dipoles in a way that is consistent with the conservation of energy, momentum, and angular momentum. This extension of the Lorentz law has been attempted a few times during the past forty years, each time from a different perspective but always resulting in essentially the same generalized form of the force law [4][5][6][7][8][9][10]. It is now possible to claim that we finally possess a generalized Lorentz law which, in conjunction with Maxwell's macroscopic equations, is fully consistent with the conservation laws of physics.
The goal of the present paper is to demonstrate the consistency of the generalized Lorentz law with conservation of linear and angular momenta. For the most part we will confine our attention to the case of homogeneous, isotropic, linear, and transparent media specified by their relative permittivity ε(ω) and permeability µ (ω), although a case involving a birefringent medium is discussed in section 5 as well. We derive expressions for the total force and torque exerted on magnetic dielectrics, thus clarifying the reasons behind the traditional division of linear and angular momenta into electromagnetic and mechanical parts.
Our methods should be applicable not only to transparent media, whose ε (ω) and µ (ω) are real-valued, but also to absorbing media, where at least one of these parameters is complex.
We emphasize at the outset that the results of the present analysis, when specialized to non-magnetic and/or non-dispersive media, are in complete agreement with our previous publications as well as with the results of Loudon and his co-workers reported in [11][12][13][14]. The present paper describes general methods of calculating electromagnetic force and torque in homogenous, linear, magnetic dielectrics. The field imparts some of its momentum to the host medium, typically at the leading or trailing edges of a light pulse, at the side-walls of a finitediameter beam, or at the surfaces and interfaces that separate homogeneous media of differing optical constants. These forces and torques convert some fraction of the electromagnetic momentum into mechanical momentum of the host (or vice-versa), while some of the initial momentum continues to exist in electromagnetic form. Under all the circumstances considered, the total linear and angular momenta of the light-matter system, comprising electromagnetic and mechanical contributions, are conserved.
Force and torque exerted on electric and magnetic dipoles by the electromagnetic field
In a recent publication [10] we derived a generalized expression, Eq. (2), for the Lorentz force density in a homogeneous, linear, isotropic medium specified by its µ and ε parameters. Similar expressions have been derived by others (see, for example, Hansen and Yaghjian [8]). Our focus, however, has been the generalization of the Lorentz law in a way that is consistent with Maxwell's equations, with the principles of special relativity, and with the conservation laws. In conjunction with Eq. (2), Maxwell's equations in the MKSA system of units are ∇·D = ρ_free, ∇×H = J_free + ∂D/∂t, ∇×E = −∂B/∂t, and ∇·B = 0 (Eqs. (3a)). In these equations, the electric displacement D and magnetic induction B are related to the polarization density P and magnetization density M via the constitutive relations D = ε₀E + P and B = µ₀H + M (Eqs. (3b)). In what follows, the medium will be assumed to have neither free charges nor free currents (i.e., ρ_free = 0, J_free = 0). Our homogeneous, linear, isotropic media will be assumed to be fully specified by their permittivity ε = ε′ + iε″ and permeability µ = µ′ + iµ″. Any loss of energy in such media is associated with ε″ and µ″, which, by convention, are ≥ 0. The real parts of ε and µ, however, may be positive or negative; in particular, in negative-index media, ε′ < 0 and µ′ < 0. Using simple examples that are amenable to exact analysis, we have shown in a previous publication [10] that Eqs. (2) and (3) lead to a precise balance of momentum when all relevant forces, especially those at the boundaries, are properly taken into account. A major concern of the present paper is the extension of these arguments to prove the conservation of angular momentum. In [10] and elsewhere, we have considered an alternative formulation of the generalized Lorentz law, Eq. (4), in which the bound electric and magnetic charge densities ρ_e = −∇·P and ρ_m = −∇·M directly experience the force of the E and H fields. As far as the total force exerted on a given volume of material is concerned, Eqs. (2) and (4) can be shown to yield identical results, provided that forces at the boundaries are properly treated in each case in accordance with the corresponding force equation [13,14]. The force distribution throughout the volume, of course, depends on which formulation is used, but when integrated over the volume of interest, the two distributions always yield identical values for the total force. With regard to torque, the situation is somewhat different. If Eq. (2) is used as the force expression, then the torque density is T₁(r, t) = r×F₁(r, t) + P(r, t)×E(r, t) + M(r, t)×H(r, t). (5)
On the other hand, the force expression of Eq. (4) is all that is needed for the calculation of torque, namely, T₂(r, t) = r×F₂(r, t). (6)
Again, Eqs. (5) and (6) can be shown to yield identical results for the total torque on a given volume of material, even though the predicted torque density distributions are usually different in the two formulations. The proof of equivalence of total force (and total torque) for the two formulations was originally given by Barnett and Loudon [13,14]. Subsequently, we extended their proof to cover the case of objects immersed in a liquid [15]. In our proof, we stated that P×E (and, by analogy, M×H) will be zero in isotropic media and, therefore, the additional terms in Eq. (5) need not be considered. This statement, while valid in some cases, is generally incorrect. In other words, for the total torques in the two formulations to be identical, Eq. (5) must definitely contain the P×E and M×H contributions.
In birefringent media, of course, P(r, t) is not always parallel to E(r, t), nor is M(r, t) parallel to H(r, t), thus making it obvious that the P×E and M×H terms in Eq. (5) are indeed necessary. Even in the case of isotropic media, these terms are needed in many circumstances, because P and E (or M and H) could assume differing orientations. For instance, in a time-harmonic (i.e., single-frequency) circularly polarized electromagnetic field, P lags behind the rotating E-field when ε is complex; similarly, M lags behind the rotating H-field when µ is complex. In general, therefore, when computing the torque in accordance with Eq. (5), one must beware of the possibility that absorption, dispersion, or birefringence could all create conditions under which P and E (or M and H) will have differing orientations. One such situation will be encountered in section 5 below.
Relation between the Lorentz force and the forces obtained from energy gradients
The energy of a single electric dipole p immersed in an electromagnetic field is E_p = p·E, while that of a single magnetic dipole m is E_m = m·H. The force experienced by these dipoles is thus expected to be the gradient of the corresponding energy [1,2]. Note that p and m in these expressions represent individual dipoles and, therefore, the gradient operator acts only on the fields E and H in which the dipoles are immersed. The force in Eq. (2), however, is not exactly the same as that predicted by these energy gradients. One can rewrite Eq. (2) in terms of energy gradients, Eq. (8); we emphasize once again that the gradient operator in Eq. (8) acts on the E and H fields only, while P and M must be treated as locally constant fields. The bottom line is that the effective force experienced by P and M has an extra term given by the time derivative of D×B − (E×H)/c², which happens to be the difference between the Minkowski and Abraham momentum densities [16]. Once the fields settle into a single-frequency (i.e., time-harmonic) oscillation, the contribution of the time-derivative term in Eq. (8) to the average force vanishes, leaving the gradient term as the effective force. However, in the presence of transient events, the time-derivative term must be taken into account; otherwise one ends up with a certain amount of hidden momentum in the system.
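The following LaTeX lines are a hedged sketch of the structure just described — the per-dipole energy-gradient forces and the extra time-derivative term — reconstructed from the surrounding prose rather than quoted from the paper's own displayed equations.

```latex
% Energy-gradient forces on individual dipoles (the gradient acts on the fields only):
\[
  \mathbf{F}_{p} = \nabla(\mathbf{p}\cdot\mathbf{E}),
  \qquad
  \mathbf{F}_{m} = \nabla(\mathbf{m}\cdot\mathbf{H}).
\]
% Effective force density on P and M: energy-gradient terms plus the extra
% time-derivative term identified in the text (Minkowski minus Abraham momentum density):
\[
  \mathbf{F}_{\mathrm{eff}}
  = \nabla(\mathbf{P}\cdot\mathbf{E}) + \nabla(\mathbf{M}\cdot\mathbf{H})
  + \frac{\partial}{\partial t}\!\left[\mathbf{D}\times\mathbf{B}
    - \frac{\mathbf{E}\times\mathbf{H}}{c^{2}}\right].
\]
```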
Pulse of light entering a transparent, homogeneous, isotropic medium
In [17] we studied the transfer of angular momentum from a circularly polarized plane-wave to a semi-infinite isotropic dielectric using Eq. (6). Here we investigate a more general version of the problem involving a transparent magnetic medium using the alternative formulation of the Lorentz law given in Eqs. (2) and (5). Shown in Fig. 1 is a wide (but finite-diameter) light pulse entering a semi-infinite slab at normal incidence. The leading edge of the pulse propagates along the z-axis, exerting a force and a torque on the medium, which account, respectively, for the mechanical linear and angular momenta of the light inside the medium. The medium is transparent, isotropic, and dispersionless, so that, at all points along the beam's path, P is parallel to E and M is parallel to H. Also, at normal incidence, there will be no forces or torques exerted at the entrance facet (when the beam diameter is sufficiently large). The entire force and torque will thus arise from the action of the leading edge of the light pulse on electric and magnetic dipoles in accordance with Eqs. (2) and (5). We begin by rewriting the force equation, Eq. (2), for a transparent, isotropic, dispersionless medium in terms of the real-valued field amplitudes E(r, t) and H(r, t). This is possible because ε and µ are real-valued and frequency-independent. We set P(r, t) = ε₀(ε − 1)E(r, t) and M(r, t) = µ₀(µ − 1)H(r, t), use Maxwell's curl equations to replace some of the space derivatives with time derivatives, combine various terms, and find the equivalent form of Eq. (2) given in Eq. (9). Let us first compute the total force exerted by the transmitted portion of the pulse on the semi-infinite slab. Integrating the force density F₁(r, t) over the volume of the slab, we find that the gradient terms along x̂ and ŷ vanish, leaving only the gradient term along ẑ. The third term in Eq. (9) contains a time derivative, but since the beam travels with a speed of c/n along the z-axis, ∂/∂t can be replaced with −(c/n)∂/∂z, where n = √(µε) is the refractive index of the material. (Note that the lack of dispersion makes the group velocity equal to the phase velocity.) The integrated force is thus given by Eq. (10). Once the leading edge is sufficiently advanced inside the material, the field amplitudes at z = 0⁺ stabilize and assume sinusoidal behavior. The z-component of the force can then be time-averaged over one period of oscillation. The beam diameter is large enough that the Fresnel reflection coefficient at normal incidence, r = (1 − √(ε/µ))/(1 + √(ε/µ)) (Eq. (11)), is all that is needed to determine the reflected and transmitted E- and H-field amplitudes. At z = 0⁺, immediately beneath the surface, the amplitudes of E_x, E_y, B_z are (1 + r) times the corresponding incident amplitudes, while those of H_x, H_y, D_z are (1 − r) times the corresponding incident amplitudes. This is a consequence of the symmetry of reflection as well as the continuity of the E∥, H∥, B⊥, and D⊥ field components. Recognizing that the beam's cross-sectional area is large, we ignore E_z and H_z at z = 0⁺, then treat the remaining field amplitudes as uniform in the xy-plane at z = 0⁺. Normalizing the integrated force by the cross-sectional area of the beam yields the time-averaged force per unit area, Eq. (12). This time-averaged force is exerted on the slab by the leading edge of the beam as it propagates within the medium. In addition, electromagnetic (i.e., Abraham) momentum pours into the slab with a volume density of p_EM = E×H/c².
Since the leading edge moves a distance of c/n in one second, the electromagnetic momentum (per unit area per second) delivered to the medium is (c/n) times the Abraham momentum density. Adding this to ⟨F_z⟩ of Eq. (12) yields the total rate of flow of linear momentum into the slab. The final result is exactly equal to the rate of flow of the incident plus reflected momenta in free space, thus establishing the conservation of linear momentum. Extending the analysis to dispersive media (see the appendices) reveals the same general behavior, except that the leading edge of the beam now propagates with the group velocity V_g = c/n_g. The electromagnetic momentum density will still be given by the Abraham formula, p_EM = E×H/c², and the rate of flow of energy into the medium will still be determined by the Poynting vector S = E×H. If we then proceed to assume that the energy contained in a given volume of the material is Nħω, with ħ being the reduced Planck constant, ω the angular frequency of the light, and N the number of photons, the Abraham momentum per photon turns out to be ħω/(n_g c). This is the general formula for the photon's electromagnetic momentum in a transparent medium having group refractive index n_g. (This result applies to all transparent media, irrespective of the phase refractive index n_p being positive or negative.) Next, we examine the torque exerted by F₁(r, t) of Eq. (9) on the semi-infinite slab of Fig. 1. The total torque is obtained by integrating r×F₁(r, t) over the volume of the material, as expressed in Eq. (13). Some of the integrals in Eq. (13) vanish because integration of the ∂/∂x or ∂/∂y terms takes the integrand beyond the beam's finite diameter, where the E- and H-field intensities are zero. Other integrals end up being zero because of the symmetry of the incident beam around the z-axis. (We are assuming circular symmetry around z, because the emphasis of the present analysis is on "spin" angular momentum, which arises from the polarization state of the beam, as opposed to "orbital" angular momentum, which is rooted in the circulation of the phase profile around the z-axis.) The only part of Eq. (13) that survives integration is the very last line, whose ∂/∂z terms, when integrated over z, yield an expression in terms of the field components at z = 0⁺. These components may then be time-averaged to yield Eq. (14). Here ⟨S₀⟩ = ½E₀×H₀ is the time-averaged Poynting vector associated with the incident beam. The integral in the final line of Eq. (14) represents the total incident angular momentum per unit time. The integral is multiplied by (1 − r²), which has the effect of subtracting the angular momentum carried away by the reflected beam. (Note that, unlike the linear momentum, whose contribution due to reflection adds to the exerted force, the angular momentum reverses sign upon reflection and, therefore, its contribution to the exerted torque must be subtracted.) The torque ⟨T_z⟩ exerted on the medium by the leading edge of the pulse is thus equal to the net angular momentum influx multiplied by (1 − n⁻²). This implies that the electromagnetic (i.e., Abraham) angular momentum flux into the medium is 1/n² times the flux of "incident minus reflected" angular momenta. In other words, to conserve the total angular momentum of ħ per photon, each photon's electromagnetic angular momentum inside the slab must reduce to ħ/n².
When the analysis is extended to dispersive media (see the appendices), the aforementioned reduction factor n² becomes n_p n_g, the product of the phase and group refractive indices. Thus, upon entering a homogeneous, linear, isotropic, and transparent medium, the electromagnetic angular momentum per photon shrinks by a factor of n_p n_g, while, according to Eq. (14), the balance of the incident, reflected, and transmitted angular momenta is transferred to the medium as a torque (via the leading edge of the beam). This result applies to negative-index media as well, where the sign of the electromagnetic angular momentum is reversed relative to that of the incident beam (because n_p < 0).
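As a quick numerical illustration of the bookkeeping described above, the following Python sketch evaluates the normal-incidence Fresnel coefficient for a magnetic dielectric together with the per-photon Abraham momentum ħω/(n_g c) and spin angular momentum ħ/(n_p n_g) quoted in the text; the material parameters and optical frequency used below are illustrative assumptions, not values taken from the paper.

```python
# Hedged numerical sketch of the per-photon momentum bookkeeping described above.
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant (J s)
c = 2.99792458e8         # vacuum speed of light (m/s)

def normal_incidence_r(eps, mu):
    """Fresnel reflection coefficient at normal incidence on a magnetic
    dielectric: r = (1 - sqrt(eps/mu)) / (1 + sqrt(eps/mu))."""
    z = np.sqrt(eps / mu)
    return (1.0 - z) / (1.0 + z)

def photon_momenta(omega, n_p, n_g):
    """Abraham linear momentum and spin angular momentum per photon inside a
    transparent medium with phase index n_p and group index n_g."""
    p_abraham = hbar * omega / (n_g * c)   # linear momentum per photon
    l_spin = hbar / (n_p * n_g)            # spin angular momentum per photon
    return p_abraham, l_spin

# Illustrative (assumed) parameters for a transparent, dispersionless magnetic dielectric.
eps, mu = 2.25, 1.0
n_p = np.sqrt(mu * eps)          # phase index; equals the group index if dispersionless
n_g = n_p                        # dispersionless assumption
omega = 2 * np.pi * 3e14         # assumed optical angular frequency (rad/s)

r = normal_incidence_r(eps, mu)
p, l = photon_momenta(omega, n_p, n_g)
print(f"r = {r:.3f}, p_photon = {p:.3e} kg m/s, L_spin = {l:.3e} J s")
```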
Torque on a birefringent slab
A birefringent slab having material parameters (ε_x, µ_x) along the x-axis and (ε_y, µ_y) along the y-axis is depicted in Fig. 2. The semi-infinite slab is illuminated at normal incidence with an elliptically polarized plane-wave. The incident field amplitudes are (E_xo, E_yo) and (H_xo, H_yo) = Z₀⁻¹(E_xo, E_yo). The incident beam may be written as the sum of left- and right-circularly polarized plane-waves (LCP and RCP). The rate of flow of angular momentum (per unit area per unit time) carried by these beams is L_z1 = ¼(ε₀/k₀)|E_xo − iE_yo|² and L_z2 = −¼(ε₀/k₀)|E_xo + iE_yo|², respectively. Therefore, the total angular momentum influx is L_z = L_z1 + L_z2 = (ε₀/k₀)Im(E*_xo E_yo). The reflected beam has E-field amplitudes r₁E_xo and r₂E_yo, where r₁ and r₂ are the corresponding Fresnel reflection coefficients. The reflected angular momentum flux being L_z = (ε₀/k₀)Im(r₁* E*_xo r₂ E_yo), the angular momentum per unit area per unit time delivered to the semi-infinite slab is given by Eq. (17). Next, we compute directly the torque on the semi-infinite medium, using the field amplitudes within the medium. In this problem, the torque arises from the P×E + M×H part of Eq. (5); the other contributions, embodied in the r×F₁(r, t) term, vanish when the latter is expressed in the form of Eq. (13) and its various integrals are evaluated and then time-averaged. (Unlike the problem studied in section 4, here we are dealing with the steady-state situation where the leading edge of the beam has already passed through the medium and an exponentially decaying field along the z-axis has been established.) The time-averaged torque per unit surface area experienced by the semi-infinite slab of Fig. 2 is given by Eq. (19). An implicit assumption in deriving Eq. (19) is that at least one of the four parameters µ_x, µ_y, ε_x, ε_y is complex, so that absorption of light within the semi-infinite slab causes the exponential function in the integrand to approach zero as z → ∞. However, since the term (√(µ_x ε_y) − √(µ_y* ε_x*)) appearing in the denominator is eventually cancelled out by an identical term in the numerator, the end result is independent of any material absorption.
The final result of Eq. (19) is in complete accord with Eq. (17), that is, the time-averaged torque exerted on the semi-infinite slab is exactly equal to the angular momentum influx carried by the incident beam, minus the angular momentum carried away by the beam reflected at the surface.
Concluding remarks
The generalized Lorentz law, in conjunction with the macroscopic Maxwell equations, Eqs. (3), provides a complete and consistent set of equations that are fully compatible with the laws of conservation of energy and momentum. We have argued that the electromagnetic fields E, D, H, B should be treated as fundamental, while the polarization and magnetization densities P and M can be considered secondary fields derived from the constitutive relations, Eqs. (3b). Two formulations of the generalized Lorentz law are given in Eq. (2) and Eq. (4). These formulas are equally acceptable in the sense that they are both consistent with the conservation laws; moreover, they predict precisely the same total force on a given body of material, although the predicted force distributions at the surfaces and throughout the volume of the material can be drastically different in the two formulations. If Eq. (2) is used as the generalized law of force, then the torque density is given by Eq. (5). On the other hand, if the force is given by Eq. (4), then the torque formula is Eq. (6).
Two other postulates that need explicit enunciation in the classical theory of electromagnetism are related to energy and momentum. The first postulate declares that the time rate of change of energy density is given by Eq. (1). The second postulate establishes the (linear) momentum density of the electromagnetic field as p_EM = E×H/c². Once these postulates are accepted, one can easily show that the rate of flow of energy (per unit area per unit time) is given by the Poynting vector S = E×H, that the linear and angular momenta enter and exit a given medium with the group velocity V_g, and that the balance of the incident, reflected, and transmitted momenta is exerted by the electromagnetic field on the medium as force and torque. These forces and torques are typically concentrated at the edges of the beam and at the surfaces and interfaces of the material medium. In the examples presented in this paper, where the normally incident beam is fairly uniform, circularly polarized, and has a large diameter, these forces and torques are localized at the leading edge of the beam.
[Depending on which form of the Lorentz law is used, i.e., Eq. (2) or Eq. (4), the forces and torques may appear in one part of the beam or another (e.g., at the leading edge of the beam or at the entrance facet of the medium), the force may be compressive at the side-walls of the beam in one formulation and expansive in the other, but, in all cases, the total force (and total torque) exerted on the material body will be found to be exactly the same, no matter which formulation is used.] Going slightly beyond the classical theory, if one assumes that the electromagnetic energy contained in a given volume of space (or material body) is divided into individual packets of ħω, then the electromagnetic (i.e., Abraham) momentum corresponding to each such bundle of energy (i.e., photon) will be ħω/(n_g c), where n_g is the group refractive index of the medium. For circularly polarized light, the spin angular momentum of each photon is ħ/(n_p n_g), where n_p is the phase refractive index, in agreement with the predictions of Ref. [12]. The balance of linear and angular momenta among the incident, reflected, and transmitted beams is always transferred to the medium in the form of force and torque, which are sometimes identified as "mechanical" momenta of the light beam. When a beam of light enters a material medium from the free space, fractions of its linear and angular momenta remain electromagnetic, while the rest are exerted on the medium in the form of mechanical force and torque. Similarly, when a beam of light emerges from a host medium into the free space, its electromagnetic momenta are augmented by additional momenta that are taken away from the medium, resulting in mechanical backlash (i.e., oppositely directed force and torque) on the medium. We close by pointing out that our findings do not necessarily contradict the experimental results of [18], where the angular momentum transferred to an antenna at the center of a microwave cavity was found to be the same whether the antenna was placed in the air or immersed in a liquid dielectric. The photon inside the liquid has its electromagnetic angular momentum of ħ/(n_p n_g), but it is also "dressed" with a certain amount of mechanical angular momentum. What is transferred to the antenna is, in general, a combination of the photon's electromagnetic and mechanical momenta; moreover, the backlash (i.e., momentum transfer to the liquid upon absorption of the photon by the antenna) must also be taken into account.
Appendix A: Electromagnetic momentum in dispersive media
Consider the superposition of two finite-diameter beams which are identical in every respect except for a small difference in their temporal frequencies, ω₁ and ω₂. The electric and magnetic field amplitudes of this superposition are expressed in terms of a plane-wave spectrum of spatial frequencies (k_x, k_y), as in Eqs. (A1). Here the sum is over ±ω₁ and ±ω₂, where ω₁ and ω₂ are two distinct but closely spaced frequencies. In general, k_z = (ω/c)√(µε − (ck_x/ω)² − (ck_y/ω)²). However, since in the present application we are interested in the limit when k_x and k_y are confined to a small region in the vicinity of the origin of the k_x k_y-plane, we can safely set k_z ≈ (ω/c)√(µε). For the fields to be real-valued it is necessary and sufficient that their Fourier transforms be Hermitian. Moreover, if the beam's cross-section in the xy-plane is required to be symmetric, say, with respect to the origin, that is, if it is demanded that the field amplitudes remain intact upon switching (x, y) to (−x, −y), a corresponding symmetry must be imposed on the plane-wave amplitudes. To ensure that the two-frequency superposition of Eqs. (A1) yields a well-defined beat waveform, we require the two amplitude profiles to have equal magnitudes and opposite signs, as expressed in Eqs. (A4). Under these circumstances, the beat waveform's nodes at t = 0 will be located at integer multiples of ∆z ≈ 2πc/(ω₂√(µ₂ε₂) − ω₁√(µ₁ε₁)), and the envelope's travel time between adjacent nodes will be T = 2π/(ω₂ − ω₁). The beat's group velocity is thus V_g = ∆z/T, as given by Eq. (A5). For the beam defined by the above equations, the Poynting vector may be written out explicitly. Integrating S(r, t) over the beam's cross-sectional area A in the xy-plane, normalizing by A, and using the identity of Eq. (A7), where δ(k) is Dirac's delta function, the beam's Abraham momentum density, Eq. (A8), is obtained. Notice that ω and ω′ can each assume four different values, namely ±ω₁ and ±ω₂, for a total of 16 terms in the summation. When time-averaging is performed over Eq. (A8), the only terms for which (1/T)∫₀ᵀ exp[−i(ω + ω′)t]dt ≠ 0 are those with ω + ω′ = 0, namely (ω, ω′) = (−ω₁, ω₁), (−ω₂, ω₂), (ω₁, −ω₁), and (ω₂, −ω₂). Therefore, only these four terms contribute to the time-averaged Abraham momentum density.
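The following LaTeX lines are a hedged restatement of the beat group velocity implied by the node spacing and period quoted above; the limiting form, which introduces the group index n_g, is the standard expression and is stated here under the assumption of smoothly varying ε(ω) and µ(ω).

```latex
% Beat group velocity from the node spacing Delta z and period T given above,
% and its limit for closely spaced frequencies (defining the group index n_g).
\[
  V_{g} \;=\; \frac{\Delta z}{T}
  \;\approx\; \frac{c\,(\omega_{2}-\omega_{1})}
       {\omega_{2}\sqrt{\mu_{2}\varepsilon_{2}}-\omega_{1}\sqrt{\mu_{1}\varepsilon_{1}}}
  \;\xrightarrow[\omega_{2}\to\omega_{1}]{}\;
  \frac{c}{d(\omega n_{p})/d\omega}
  \;=\; \frac{c}{n_{g}},
  \qquad
  n_{g} \;=\; n_{p} + \omega\,\frac{dn_{p}}{d\omega}.
\]
```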
Appendix B: Electromagnetic angular momentum in dispersive media
For the beam defined by Eqs. (A1), the volume density of the z-component of angular momentum, L_z(z = 0, t), can be written down directly. Using an identity involving δ′(k), the derivative of δ(k) with respect to k, we find, upon time-averaging, the expression given in Eq. (B3). We now replace the H-field components of Eq. (B3) with their equivalents in terms of E_x, E_y, E_z using Maxwell's third equation. We then use Maxwell's first equation, k_x E_x + k_y E_y + k_z E_z = 0, to substitute for E_z in terms of E_x and E_y. [Note that k_z = (ω/c)√(µε − (ck_x/ω)² − (ck_y/ω)²) is a function of k_x and k_y; therefore, ∂k_z/∂k_x and ∂k_z/∂k_y must be included when evaluating Eq. (B3).] Upon algebraic manipulation we arrive at Eq. (B5), an exact expression for the time-averaged angular momentum density (including contributions from both spin and orbital) in a transparent medium specified by ε(ω) and µ(ω). Presently we are interested only in the spin angular momentum, so we confine our attention to the case of circularly polarized light where E_y(k_x, k_y, ω₁,₂) = iE_x(k_x, k_y, ω₁,₂), with E_x being a real-valued function of (k_x, k_y) for both ω₁ and ω₂. Under these circumstances, those terms of Eq. (B5) that contain derivatives with respect to k_x or k_y either vanish (because real-valued functions have no imaginary parts) or cancel each other out. Moreover, k_z ≈ (ω/c)√(µε), assuming E_x and E_y are smooth functions of (x, y) and that the beam's cross-sectional area A is large compared to a wavelength. Equation (B5) thus simplifies accordingly, and the two frequencies are seen to contribute to ⟨L_z⟩ independently of each other. According to Parseval's theorem, the integrated |E_x|² in the k_x k_y-plane is equal to the integrated |E_x|² over the beam's cross-sectional area in the xy-plane. The light beam's energy density in a dispersive medium then enters the expression, where V_g = c/n_g is the beam's group velocity. The spin angular momentum in a given volume is thus equal to the energy content of the volume divided by n_g n_p ω, where n_p = √(µε) is the phase refractive index. (In a negative-index material, n_p is negative and, therefore, the spin angular momentum must change sign upon entering from free space.) Suppose a circularly polarized plane-wave having amplitude E_xo(x̂ + iŷ) arrives at normal incidence at the surface of a transparent medium specified by ε(ω) and µ(ω). Using Fresnel's reflection coefficient given by Eq. (11), we find the E-field amplitude immediately inside the medium to be E_x = (1 + r)E_xo = 2E_xo/(1 + √(ε/µ)). The spin angular momentum density in the medium will then be equal to the incident angular momentum density times (1 − r²)/√(µε). The factor (1 − r²), of course, accounts for the loss of angular momentum upon reflection at the surface. Whereas in the incidence space the angular momentum flows at the vacuum speed of light c, inside the medium the propagation speed is c/n_g, causing the spin angular momentum contained in a given length of the beam to drop by a factor of n_g√(µε) upon entering the medium. As will be shown in Appendix D below, the difference between the incident and transmitted angular momenta is imparted to the medium as torque.
Appendix C: Force exerted on a dispersive slab
Starting with the generalized Lorentz law of Eq. (2), the force exerted by the transmitted beam on the semi-infinite slab of Fig. 1 can be calculated from Eq. (C1). Substituting from Eqs. (A1) into Eq. (C1), integrating the force over the cross-sectional area of the beam, using the identity in Eq. (A7), and finally separating the terms in which ω′ = −ω from those in which ω′ ≠ −ω, we obtain Eq. (C2). In this equation, the first sum (corresponding to ω′ = −ω) vanishes because, according to Maxwell's first and fourth equations, both k·E and k·H are zero; moreover, the remaining terms are all real-valued. As for the second sum, we integrate this force along the z-axis from z = 0 to V_g t, i.e., over the length of the beat waveform that enters the medium during the time interval (0, t). Here V_g = (ω₁ − ω₂)/(k_z1 − k_z2) is the group velocity, which, in the limit of small k_x and k_y, approaches the expression in Eq. (A5). Next, we integrate over a single beat period, from t = 0 to T = 2π/(ω₂ − ω₁), noting that (1/T)∫₀ᵀ exp[−i(ω + ω′)t]dt = 0 for all allowed combinations of ω and ω′ (since ω′ = −ω is already excluded from the second sum). The second sum in Eq. (C2) contains 12 different combinations of ±ω₁ and ±ω₂, but only the four (ω, ω′) pairs which have opposite signs, namely (−ω₁, ω₂), (−ω₂, ω₁), (ω₁, −ω₂), and (ω₂, −ω₁), contribute to ⟨F_z⟩. For these pairs (k_z + k′_z)V_g ≈ (ω + ω′). As for the remaining eight pairs, such as (ω₁, ω₁) or (−ω₁, −ω₂), which have identical signs, we note that (k_z + k′_z)V_g − (ω + ω′) ≈ 2ω[(n_p/n_g) − 1]. Now, if the phase index n_p happens to differ substantially from the group index n_g, the corresponding exponential function varies rapidly with time, rendering the time-averaged contributions of the remaining eight terms negligible. If, on the other hand, n_p and n_g happen to be so close as to cause the exponential function to vary only slowly during the beat period, the eight terms split into two groups of four positive and four negative terms. [According to Eqs. (A4), the terms arising from either (ω₁, ω₂) or (ω₂, ω₁) are equal in magnitude but opposite in sign compared to those arising from (ω₁, ω₁) and (ω₂, ω₂).] The two groups of terms thus cancel each other out. Therefore, under all circumstances, only four terms contribute to the time-averaged force ⟨F_z⟩, yielding Eq. (C4), whose integrand contains terms of the form −ωµ₀ε₀(µ − 1)(k_z + k′_z)⁻¹[E_x*(k_x, k_y, −ω′)H_y(k_x, k_y, ω) − E_y*(k_x, k_y, −ω′)H_x(k_x, k_y, ω)].
Since, according to Eqs. (A4), the field amplitudes of the beams with frequencies ω₁ and ω₂ are equal but opposite in sign, Eq. (C4) reduces further. In the limit when the beam's cross-sectional area A → ∞, the integrals of |E_z|² and |H_z|² approach zero, while k_z + k′_z → (ω₁/c)√(µ₁ε₁) − (ω₂/c)√(µ₂ε₂) for (ω, ω′) = (ω₁, −ω₂); the sign of the expression is reversed for (ω, ω′) = (ω₂, −ω₁). Replacing E_x, E_y, H_x, H_y in terms of the incident field amplitudes E_xo, E_yo, H_xo, H_yo and the Fresnel reflection coefficients r₁, r₂ (given by Eq. (11)), we find the time-averaged force per unit surface area of the slab. In the resulting expression, the first term is the time rate of flow of the incident plus reflected momenta in free space, while the second term accounts for the influx of Abraham momentum into the transparent medium. We have thus proved the conservation of linear momentum upon reflection from a transparent, dispersive, magnetic dielectric. | 9,293.2 | 2008-09-15T00:00:00.000 | [
"Physics"
] |
Time-feature attention-based convolutional auto-encoder for flight feature extraction
Quick Access Recorders (QARs) provide an important data source for Flight Operation Quality Assurance (FOQA) and flight safety. QAR data are generally characterized by large volume, high dimensionality, and high frequency, and these features result in extreme complexity and uncertainty in their usage and comprehension. In this study, we proposed a Time-Feature Attention (TFA)-based Convolutional Auto-Encoder (TFA-CAE) network model to extract essential flight features from QAR data. As a case study, we used QAR data from flights landing at Kunming Changshui International Airport and Lhasa Gonggar International Airport as the experimental data. The results show that (1) the TFA-CAE model performs the best in extracting representative flight features in comparison to some traditional or similar approaches, such as Principal Component Analysis (PCA), Convolutional Auto-Encoder (CAE), Self-Attention-based CAE (SA-CAE), Gate Recurrent Unit based Auto-Encoder (GRU-AE) and TFA-GRU-AE models; (2) flight patterns corresponding to different runways can be recognized; and (3) anomalous flights can be effectively identified as they deviate from the majority of observations. Overall, the TFA-CAE model provides a well-established technique for further usage of QAR data, such as flight risk detection or FOQA.
Principal Component Analysis (PCA), by which raw data are projected onto their principal dimensions according to the variance-covariances of the original samples 13 , is the most commonly used unsupervised method for feature extraction 14,15 .Linear Discriminant Analysis (LDA) and its variant, Marginal Fisher Analysis (MFA), are two supervised feature extraction methods, among which LDA finds a useful linear subspace by optimizing discriminant class data 16 and MFA characterizes the interclass separability and intraclass compactness of the given data to obtain the optimal projection 17 .All the above feature extraction methods have the same shortcoming: all the projections are linear transformations.Although other studies [18][19][20] have attempted to solve this problem using nonlinear kernel functions, the features extracted by the developed approaches may fail to cover all useful information of the input raw data since diverse nonlinear correlations exist in the complex industrial data 21 .
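As a concrete illustration of the linear-projection step described above, the following Python sketch performs PCA via an eigendecomposition of the sample covariance matrix; it is a minimal, generic example (the array sizes and the eigendecomposition route are assumptions, not the cited studies' implementations), and it makes explicit that the extracted features are a purely linear transformation of the input.

```python
# Minimal PCA sketch: project samples onto the top-k eigenvectors of the
# sample covariance matrix (a linear transformation of the centered data).
import numpy as np

def pca_project(X, k):
    """X: (n_samples, n_features). Returns (scores, components)."""
    Xc = X - X.mean(axis=0)                      # center the data
    cov = np.cov(Xc, rowvar=False)               # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]        # top-k principal directions
    components = eigvecs[:, order]
    return Xc @ components, components

# Example: 200 samples of 10 correlated features reduced to 3 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))
scores, comps = pca_project(X, k=3)
print(scores.shape, comps.shape)   # (200, 3) (10, 3)
```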
With the continuous development of Artificial Neural Networks (ANNs), they have become powerful tools for approximating complicated functions and have achieved great success across various industrial applications. An Auto-Encoder (AE), containing an encoder and a decoder, is a special ANN model that extracts features by minimizing the reconstruction error in an unsupervised manner. The original input data are first mapped into a low-dimensional representation space to obtain the most appropriate features; the decoder then maps the features in the low-dimensional representation space back to the input space. The reconstruction error between the original input of the encoder and the output of the decoder is used as the loss to train the resulting model. Figure 1 shows a pictorial representation of the autoencoder network model.
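The following PyTorch sketch illustrates the encoder-decoder-reconstruction-loss scheme just described; the framework choice, layer sizes, and latent dimension are illustrative assumptions and not the architecture from the paper's Figure 1.

```python
# Minimal auto-encoder sketch: encode to a low-dimensional representation,
# decode back to the input space, and train on the reconstruction error.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=21, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)          # low-dimensional features
        return self.decoder(z), z    # reconstruction and features

model = AutoEncoder()
x = torch.randn(32, 21)                      # a dummy mini-batch (assumed 21 inputs)
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)      # reconstruction error as the training loss
loss.backward()
```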
AEs and their variants [22][23][24] have been applied in various fields, such as fault diagnosis 25,26, smart grids 27, and Natural Language Processing (NLP) 28. However, the features extracted by the traditional AE may fail to satisfy the final discrimination task 29. For multi-feature time series data, the traditional AE directly maps the original input to learn features, but this process ignores the inter-time and inter-feature relationships. In addition, previously developed feature extraction methods are not based on the requirements of specific applications, resulting in extracted features that are not applicable to realistic application tasks. In this article, a Time-Feature Attention (TFA) module is developed to capture the internal relationship between different flight moments as well as the internal relationship between different flight parameters. On this basis, a TFA-based Convolutional AE (TFA-CAE) is proposed to perform feature extraction of QAR flight time series data. The remainder of this paper is organized as follows. The methodology used in our research is presented in "Data and methodology" section, where the details of the TFA and TFA-CAE are described in "TFA module" and "TFA-CAE model for QAR feature extraction" sections, respectively. "Case study" section presents the experimental results of a case study. This study is summarized in "Conclusion and discussion" section.
Data and methodology
QAR data processing. During flight, aircraft are generally influenced by many kinds of factors, such as the external meteorological environment (wind speed and direction, temperature, atmospheric pressure, etc.), the condition of the aircraft itself (engine status, flight control settings, etc.), and the pilot's competencies and techniques. The impacts of these factors on the aircraft are continuous and fluctuate throughout the flight 30. Although these factors are always in flux, their effects are eventually reflected as changes in the kinematic and attitude flight parameters of the aircraft 31. We therefore select the attitude and kinematic flight parameters to perform feature extraction. The details of the flight parameters used in this article are shown in Table 1.
Figure 2 shows the fatal accidents and onboard fatalities in each flight phase from 2008 to 2017 32. The statistics show that the landing phase occupied only 1% of the total flight time yet accounted for high percentages of fatal accidents and onboard fatalities (up to 24% and 20%, respectively). Therefore, the landing phase is the focus of this article.
The specific flight phase studied in our research is illustrated in Fig. 3. Since landing phases last approximately 90 s, as shown in Fig. 2, a sample duration of 90 s is used for this flight phase. Specifically, sampling starts 90 s before the touchdown point and ends at the touchdown point. At each sampling moment, the values of all the flight parameters listed in Table 1 are sampled.
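As a small illustration of this sampling scheme, the sketch below slices the 90 s window ending at the touchdown point out of a per-flight parameter matrix; the sampling rate, array layout and touchdown index are assumptions made for the example, not values taken from the QAR dataset.

```python
# A sketch of the sampling scheme: for each flight, keep the 90 s window that ends at
# the touchdown point. Sampling rate, array layout and the touchdown index are
# illustrative assumptions, not taken from the paper.
import numpy as np

def extract_landing_window(flight: np.ndarray, touchdown_idx: int,
                           window_s: int = 90, hz: int = 1) -> np.ndarray:
    """flight: array of shape (T, F) with one row per sampling moment.
    Returns the (window_s * hz, F) slice ending at the touchdown point."""
    n = window_s * hz
    start = touchdown_idx - n
    if start < 0:
        raise ValueError("flight record shorter than the landing window")
    return flight[start:touchdown_idx]

# Example: a fake flight with 600 samples of 10 parameters, touchdown at sample 550.
flight = np.random.randn(600, 10)
window = extract_landing_window(flight, touchdown_idx=550)
print(window.shape)  # (90, 10)
```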
TFA module. The effectiveness of the attention mechanism has been widely demonstrated in many previous studies 33–40. On the one hand, the attention mechanism helps a model identify the key places to focus on; on the other hand, it enhances the representation of the regions of interest 40. For the QAR time series data considered in this article, we aim to specify when (the key moments of the QAR data) and which (the key flight parameters of the QAR data) to focus on, and to simultaneously enhance their corresponding representations with the help of an attention mechanism. To this end, we propose a TFA module that exploits both time and feature attention within an efficient architecture.
The TFA module contains two submodules, the Time Attention Module (TAM) and the Feature Attention Module (FAM), which are arranged in sequential order. Given an original QAR time series S ∈ R F×T, a one-dimensional time attention map A_t ∈ R 1×T is first produced by the TAM and is then multiplied by S to generate the time-refined data S′ ∈ R F×T. Immediately afterward, the FAM takes the time-refined data S′ as input and infers a one-dimensional feature attention map A_f ∈ R F×1, which is multiplied by S′ to obtain the final refined data S″ ∈ R F×T. Figure 4 illustrates the overall computation process of the TFA module, which can be summarized as

S′ = A_t(S) ⊙ S,  S″ = A_f(S′) ⊙ S′,  (1)

where ⊙ stands for the Hadamard product. During multiplication, the time attention values are broadcast along the flight parameter dimension, while the feature attention values are broadcast along the time dimension. Figures 5 and 6 show overviews of the time attention module and the feature attention module, respectively. In the remainder of this section, we describe the details of these two modules.
(1) Time attention module (TAM): The role of the time attention module is to highlight the important moments of the QAR time series and suppress the unnecessary ones. Within the module, this is achieved by increasing the representation weight of important flight moments while decreasing the weight of unimportant ones. To produce the attention map, we exploit the relationships between the different flight moments of the QAR data. As each time point of the QAR data is considered a time detector, time attention focuses on the time points that are meaningful ('when') given the input QAR data. The time attention is calculated by collecting and squeezing the information along the feature dimension of the QAR data. For this purpose, a network module named the Time Perceptron List (TPL) is proposed to aggregate the feature information, as shown in Fig. 5. The detailed operation of the module is described below. Given the original QAR data S ∈ R F×T as input, the TAM first uses the TPL module to aggregate the information along the feature dimension of S, generating a time context descriptor C_t ∈ R 1×T. The TPL consists of multiple single-layer perceptrons arranged sequentially along the time axis; their number equals the length of the QAR time series. Each single-layer perceptron fc_i collects the feature information of the QAR time series at time i, generating a context descriptor c_t_i. To produce the time attention map A_t ∈ R 1×T, the time context descriptor is then forwarded to a Multi-Layer Perceptron (MLP) network with one hidden layer. The activation size of the hidden layer is set to R T/r×1 to reduce the parameter overhead of the model, where r is the reduction ratio. After the time attention map passes through a sigmoid function, it is multiplied with the original QAR time series S using the Hadamard product, resulting in the time-refined data S′ ∈ R F×T. In short, the time attention is computed as

A_t(S) = σ(MLP(C_t)) = σ(W_MLP1(ReLU(W_MLP0(C_t)))),  (2)

where σ stands for the sigmoid function, W_MLP0 and W_MLP1 stand for the weights of the MLP network (W_MLP0 is followed by a Rectified Linear Unit (ReLU) activation function), and W_TPLi stands for the weight of fc_i in the TPL.

(2) Feature attention module (FAM): The role of the feature attention mechanism is to focus on 'which' features are informative. It can be considered complementary to time attention: it highlights the important flight parameters of the time-refined QAR data and suppresses the unnecessary ones. A feature attention map is generated by exploiting the inter-feature relationships of the given QAR data. Each feature series of the QAR data works as a feature detector and is used for calculating its feature attention value by collecting and squeezing its information along the time dimension. Similar to the calculation of time attention, a Feature Perceptron List (FPL) module is constructed to aggregate the feature information, as shown in Fig. 6.
Given the time-refined data S′ ∈ R F×T, we first aggregate the feature information along the time axis of S′ by using the FPL module, generating a feature context descriptor C_f ∈ R F×1. All the single-layer perceptrons are arranged along the feature axis, and each single-layer perceptron fc_j in the FPL module collects the information along the time axis of the j-th feature. The feature context descriptor is then forwarded to another MLP network with one hidden layer, producing a feature attention map A_f ∈ R F×1. The activation size of the hidden layer is set to R F/r×1 to reduce the parameter overhead of the model, where r is the reduction ratio. After the feature attention map passes through a sigmoid function, it is multiplied with the time-refined data S′ using the Hadamard product, resulting in the final refined data S″ ∈ R F×T. In short, the feature attention is computed as

A_f(S′) = σ(MLP(C_f)) = σ(W_MLP1(ReLU(W_MLP0(C_f)))),  (3)

where σ stands for the sigmoid function, W_MLP0 and W_MLP1 stand for the weights of the MLP network (W_MLP0 is followed by a ReLU activation function), and W_FPLj stands for the weight of fc_j in the FPL.

TFA-CAE model for QAR feature extraction. AE architectures, including Convolutional Neural Network AEs (CNN-AEs) and Recurrent Neural Network AEs (RNN-AEs), have been demonstrated to be powerful nonlinear feature extraction models, boasting both flexibility and diversity. Typically, the nature of the input data determines the selection of the model architecture. Previously, it was generally accepted that RNN-based AEs were the preferred choice for dealing with time series data, while CNN-based AEs were preferred for image data. Nevertheless, it has recently been demonstrated that CNN-based AEs outperform general RNN-based AEs on time series data 41. With their complex structures, CNNs are able to extract richer and more complicated hidden features from high-dimensional data than RNNs 42. Therefore, CNNs are selected to construct our AE model for extracting flight features from QAR data.
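As a hedged sketch of how the TAM and FAM described above could be realized, the PyTorch code below builds per-time-step and per-parameter perceptron lists, the bottleneck MLPs with reduction ratio r, and the sigmoid gating applied in the sequential time-feature order of Eq. (1); the tensor layout, class names and layer choices are assumptions for illustration, not the authors' exact implementation.

```python
# A minimal sketch of the TFA idea (time attention followed by feature attention) for
# input of shape (batch, F, T). Layer choices and names are illustrative assumptions.
import torch
import torch.nn as nn

class TimeAttention(nn.Module):
    def __init__(self, n_features: int, n_steps: int, r: int = 16):
        super().__init__()
        # TPL: one single-layer perceptron per time step, squeezing the feature axis.
        self.tpl = nn.ModuleList([nn.Linear(n_features, 1) for _ in range(n_steps)])
        self.mlp = nn.Sequential(
            nn.Linear(n_steps, max(n_steps // r, 1)), nn.ReLU(),
            nn.Linear(max(n_steps // r, 1), n_steps),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:        # s: (B, F, T)
        ctx = torch.cat([fc(s[:, :, i]) for i, fc in enumerate(self.tpl)], dim=1)  # (B, T)
        a_t = torch.sigmoid(self.mlp(ctx)).unsqueeze(1)         # (B, 1, T)
        return s * a_t                                          # broadcast over features

class FeatureAttention(nn.Module):
    def __init__(self, n_features: int, n_steps: int, r: int = 4):
        super().__init__()
        # FPL: one single-layer perceptron per flight parameter, squeezing the time axis.
        self.fpl = nn.ModuleList([nn.Linear(n_steps, 1) for _ in range(n_features)])
        self.mlp = nn.Sequential(
            nn.Linear(n_features, max(n_features // r, 1)), nn.ReLU(),
            nn.Linear(max(n_features // r, 1), n_features),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:        # s: (B, F, T)
        ctx = torch.cat([fc(s[:, j, :]) for j, fc in enumerate(self.fpl)], dim=1)  # (B, F)
        a_f = torch.sigmoid(self.mlp(ctx)).unsqueeze(2)         # (B, F, 1)
        return s * a_f                                          # broadcast over time

class TFA(nn.Module):
    def __init__(self, n_features: int, n_steps: int):
        super().__init__()
        self.tam = TimeAttention(n_features, n_steps)
        self.fam = FeatureAttention(n_features, n_steps)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.fam(self.tam(s))   # S'' = A_f(S') * S', with S' = A_t(S) * S

x = torch.randn(8, 10, 90)             # (batch, flight parameters, time steps)
print(TFA(10, 90)(x).shape)            # torch.Size([8, 10, 90])
```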
In this article, we construct a Time-Feature Attention-based Convolutional Auto-Encoder (TFA-CAE) network model for extracting flight features from QAR time series data. Figure 7 shows the details of the TFA-CAE model, including its structure and parameters. The TFA-CAE model consists of three main parts: the TFA module, an encoder and a decoder, where the TFA module is followed by a CAE.
The TFA module is first applied to the original QAR data, producing the final refined time series data. Within the encoder, multiple convolutional layers and max-pooling layers are stacked in an interleaved manner to extract hierarchical features. A 1D vector is generated by flattening all the units of the last convolutional layer's output and is then transformed into a low-dimensional feature space (the latent space) by two subsequent fully connected layers. Designed symmetrically to the encoder, the decoder is composed of multiple max-unpooling and deconvolutional layers that are stacked in an interleaved manner to reconstruct the original QAR data from the latent features. Moreover, during the training of the TFA-CAE model, the indices of each max-pooling layer within the encoder are fed to the symmetric max-unpooling layer within the decoder to perform upsampling. The parameters of the model are optimized through back-propagation of the error loss between the original QAR data and the reconstructed output of the decoder.
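The sketch below shows one hedged way to wire this workflow up in PyTorch, reusing the TFA module from the previous sketch (an identity stage is substituted if none is supplied, so the snippet runs standalone): a two-stage convolution/max-pooling encoder whose pooling indices are passed to mirrored max-unpooling/deconvolution stages, with two fully connected layers on each side of the latent space. Channel counts, kernel sizes and the latent size are illustrative assumptions; the paper's exact configuration is given in its Fig. 7.

```python
import torch
import torch.nn as nn

class TFACAE(nn.Module):
    def __init__(self, n_features=10, n_steps=90, latent_dim=2, tfa_module=None):
        super().__init__()
        # TFA refinement stage; falls back to identity so the sketch runs standalone.
        self.tfa = tfa_module if tfa_module is not None else nn.Identity()
        self.conv1 = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(2, return_indices=True)
        flat = 32 * (n_steps // 2 // 2)
        self.enc_fc = nn.Sequential(nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.dec_fc = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, flat))
        self.unpool = nn.MaxUnpool1d(2)
        self.deconv2 = nn.ConvTranspose1d(32, 16, kernel_size=3, padding=1)
        self.deconv1 = nn.ConvTranspose1d(16, n_features, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, s):                               # s: (batch, F, T)
        x = self.act(self.conv1(self.tfa(s)))
        size1 = x.size(); x, idx1 = self.pool(x)        # keep indices for unpooling
        x = self.act(self.conv2(x))
        size2 = x.size(); x, idx2 = self.pool(x)
        z = self.enc_fc(x.flatten(1))                   # latent flight features
        y = self.dec_fc(z).view_as(x)
        y = self.act(self.deconv2(self.unpool(y, idx2, output_size=size2)))
        y = self.deconv1(self.unpool(y, idx1, output_size=size1))
        return z, y                                     # features and reconstruction

z, recon = TFACAE()(torch.randn(4, 10, 90))
print(z.shape, recon.shape)                             # torch.Size([4, 2]) torch.Size([4, 10, 90])
```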
Case study
Experimental data. In this article, the flights landing at Kunming Changshui International Airport (ICAO: ZPPP hereafter) and Lhasa Gonggar International Airport (ICAO: ZULS hereafter) are taken as the experimental data for our case study. The dataset contains 12,176 flights, all of which are extracted in the way shown in Fig. 3 and sampled with the flight parameters listed in Table 1. After min-max normalization, the dataset is split into a training set for training the models, a validation set for deciding when to stop training, and a test set for evaluating model performance, in a 6:2:2 ratio. Table 2 presents the details of each split.
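A hedged sketch of this preprocessing is given below: per-parameter min-max scaling followed by a random 6:2:2 split. The array layout, the choice to scale each parameter over the whole set, and the random seed are assumptions for illustration.

```python
import numpy as np

def minmax_normalize(data: np.ndarray) -> np.ndarray:
    """data: (n_flights, F, T). Scale each flight parameter to [0, 1] over the whole set."""
    mins = data.min(axis=(0, 2), keepdims=True)
    maxs = data.max(axis=(0, 2), keepdims=True)
    return (data - mins) / (maxs - mins + 1e-12)

def split_622(data: np.ndarray, seed: int = 0):
    """Random 6:2:2 split into train, validation and test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    n_train, n_val = int(0.6 * len(data)), int(0.2 * len(data))
    return (data[idx[:n_train]],
            data[idx[n_train:n_train + n_val]],
            data[idx[n_train + n_val:]])

flights = np.random.randn(12176, 10, 90)          # placeholder for the QAR samples
train, val, test = split_622(minmax_normalize(flights))
print(len(train), len(val), len(test))            # 7305 2435 2436
```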
Model training. Self-attention 43, a well-known variant of the attention mechanism, was proposed with the aim of capturing the internal relationships of data or features and has shown strong performance in various applications, such as machine translation. Its idea is similar to that of the TFA module proposed in this article, so in our experiments we also construct a Self-Attention-based CAE (SA-CAE) model to extract flight features. In addition, we adopt the traditional CAE, a Gated Recurrent Unit-based Auto-Encoder (GRU-AE) and a TFA-GRU-AE model for comparison with the TFA-CAE model. The PyTorch deep learning framework (version 1.11) is employed to construct and train all of the above models, and the adaptive moment estimation (Adam) optimizer is used for optimization. The batch size of the QAR training data is set to 32, and the learning rate is set to 0.0001. During the training of all network models, we introduce an early-stopping mechanism to decide when to terminate training. Its patience is set to 15, meaning that training is stopped when the error loss on the validation set has not decreased for 15 epochs. In addition, the reduction ratios r for time attention and feature attention are fixed to 16 and 4, respectively. Notably, a small fraction of flights in the dataset deviate anomalously from the common flight patterns, which may be due to harsh external atmospheric environments, improper pilot operations or malfunctions of the aircraft themselves. Therefore, to minimize the distortion caused by these anomalous flights during training, we adopt the Huber loss function 44, which has lower sensitivity to anomalies, to compute the error loss. The Huber loss function is

L_δ(y, f(x)) = ½ (y − f(x))²  if |y − f(x)| ≤ δ;  δ (|y − f(x)| − ½ δ)  otherwise,  (4)

where y − f(x) is the residual and δ is the threshold parameter. When the residual is larger than δ, the Huber loss behaves like the Mean Absolute Error (MAE); otherwise, it behaves like the Mean Squared Error (MSE). The setting of δ thus determines how anomalies are treated. During model training, each model is trained several times with δ ranging from 0.1 to 1 in steps of 0.1, and δ is fixed at the value where the average loss on the test data first decreases and then remains stable. Eventually, δ is set to 0.5 for the models. As described in the "Introduction" section concerning the AE network model, the AE uses the encoder to map the original input to a feature representation in the latent space and the decoder to reconstruct the input from that representation. Therefore, a smaller error loss indicates a better feature representation of the original QAR data. In this article, all the AE models are trained with multiple dimensions of the latent space. The average loss values of all the models are shown in Fig. 8.
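A hedged sketch of this training setup is shown below: Adam with a learning rate of 1e-4, batch size 32, the Huber loss with δ = 0.5, and early stopping with a patience of 15 validation epochs. The function assumes a model that returns (features, reconstruction), like the TFA-CAE sketch above; the epoch count, tensor handling and checkpointing details are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, train_x, val_x, epochs=500, patience=15):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.HuberLoss(delta=0.5)                # less sensitive to anomalous flights
    train_dl = DataLoader(TensorDataset(train_x), batch_size=32, shuffle=True)
    best_loss, best_state, wait = float("inf"), None, 0
    for epoch in range(epochs):
        model.train()
        for (batch,) in train_dl:
            opt.zero_grad()
            _, recon = model(batch)                  # model returns (features, reconstruction)
            loss_fn(recon, batch).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            _, val_recon = model(val_x)
            val_loss = loss_fn(val_recon, val_x).item()
        if val_loss < best_loss:                     # early-stopping bookkeeping
            best_loss, best_state, wait = val_loss, copy.deepcopy(model.state_dict()), 0
        else:
            wait += 1
            if wait >= patience:                     # stop after 15 non-improving epochs
                break
    model.load_state_dict(best_state)
    return model
```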
By comparing the average loss values of the models, we can first see that the CAE-based models extract more representative flight features than the GRU-AE-based models, since the former attain smaller average loss values. Second, the TFA module helps the models extract more representative flight features: the AE models with the TFA module have smaller average loss values than the corresponding models without it. Third, the TFA-CAE model outperforms the other models in terms of flight feature extraction from QAR data, since it attains the smallest average loss values, as shown in Fig. 8. Regarding the four flight patterns shown in Fig. 9a–f, both the CAE-based and GRU-based models outperform the PCA method in discovering flight patterns within the extracted flight features, since the flight patterns are not clearly separated in the PCA result. Moreover, although the traditional CAE and SA-CAE models extract more representative flight features from the original QAR data than the GRU-AE and TFA-GRU-AE models, the divisions of the flight patterns in Fig. 9e,f are much clearer than those in Fig. 9b,c, so the traditional CAE and SA-CAE models are inferior to the GRU-AE and TFA-GRU-AE models in identifying different flight patterns. In addition, the TFA module helps both the CAE and GRU-AE models divide the flight patterns clearly, as can be seen by comparing Fig. 9a,f with Fig. 9c,e.
Visualization results
With the latent space size (i.e., the number of extracted features) set to 2 in our case study, we can visualize the extracted flight features. The visualization of the flight features extracted by the CAE and GRU-AE models and by PCA during the landing phase is shown in Fig. 9. All flight features extracted by each model are labeled with their flight patterns, which are split by the heading angle (magnetic north).

Moreover, as shown in Fig. 9a, the flight objects lying in the sparse area around each flight pattern are clearly separated and can generally be considered anomalous flights that deviate from the common flight pattern.

For time and feature attention, the arrangement order of the two submodules may affect the global performance, since each module has a different function. We therefore compare the two possible arrangements of the time and feature attention submodules: sequential time-feature and sequential feature-time use of both attention modules. A Feature-Time Attention-based CAE (FTA-CAE) model was built and trained for comparison with the TFA-CAE model. The comparison of the average loss values of the TFA-CAE and FTA-CAE models is shown in Fig. 10. The average loss value of the FTA-CAE model is larger than that of the TFA-CAE model, indicating that time-feature attention outperforms feature-time attention in helping the CAE model extract flight features. Furthermore, the visualization results of the flight features extracted by the FTA-CAE and TFA-CAE models are compared in Fig. 11. As shown in Fig. 11b, the FTA-CAE model is also able to discover the four flight patterns within the extracted flight features. However, by comparing Fig. 11a and b, the TFA-CAE model outperforms the FTA-CAE model in terms of the division of flight patterns, since flight patterns P2 and P3 are not clearly divided in Fig. 11b.

Overall, the TFA-CAE model proposed in this article can extract more representative flight features and obtain a better discovery and division of flight patterns, which provides a well-established technique for further usage of QAR data, such as flight risk detection or FOQA.
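As a small, hedged illustration of this kind of 2-D latent-space visualization, the sketch below scatter-plots extracted features colored by a pattern label derived from heading angle; the label binning, the feature array and the plotting details are all assumptions made for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed inputs: a (n_flights, 2) array of latent features from an auto-encoder and
# the magnetic heading (degrees) at touchdown for each flight.
features = np.random.randn(2000, 2)                               # placeholder latent features
heading = np.random.choice([30.0, 95.0, 210.0, 275.0], size=2000)  # four fake runway headings

# Split flights into patterns by heading angle (illustrative 4-way binning).
pattern = np.digitize(heading, bins=[90.0, 180.0, 270.0])

plt.figure(figsize=(6, 5))
for p in np.unique(pattern):
    mask = pattern == p
    plt.scatter(features[mask, 0], features[mask, 1], s=5, label=f"pattern P{p + 1}")
plt.xlabel("latent feature 1")
plt.ylabel("latent feature 2")
plt.legend()
plt.title("Flight features in the 2-D latent space (illustrative data)")
plt.show()
```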
Conclusion and discussion
In this article, to address the difficulties of mining QAR data caused by their high-dimensional and high-frequency characteristics, we propose a TFA-CAE network model that performs flight feature extraction by capturing the internal relationships among different flight moments as well as among different flight parameters. For comparison, the classic PCA approach and the traditional CAE, SA-CAE and GRU-AE network models were also evaluated on the same QAR dataset. The results show that our TFA-CAE model extracts more representative flight features and simultaneously discovers runway-level flight patterns that are clearly separated from each other. Moreover, within the extracted flight features, the anomalous flights deviating from the common flight patterns are clearly separated from their corresponding patterns. The TFA-CAE model thus provides a well-established technique for further usage of QAR data, such as flight risk detection or FOQA.
Air transport plays an increasingly important and irreplaceable role in transportation, and flight safety has always been a crucial focus of civil aviation safety management. With more flights expected to depart in the future, flight safety management faces new and growing challenges. To address these challenges and further enhance flight safety, civil aviation safety management has shifted from post-accident investigation and analysis to pre-accident warning. In response to this requirement, civil aviation endeavors to prevent potential flight accidents before they occur by innovatively and proactively identifying operationally significant safety events that are currently untracked. By appropriately dealing with these potential aviation safety incidents, the annual accident rate can be kept at its lowest historical level. QAR data provide an effective way to achieve Flight Operation Quality Assurance (FOQA). Since QAR data are recorded onboard and cover many types of flight parameters, they reflect the real situations that occur during flight, including the pilot's actual capabilities and skills, the actual flight patterns, the performance of the aircraft itself and potential flight faults or anomalies. Such massive and rich flight data provide a solid basis for studying flight risks with deep learning methods.
With the continuous development of ANNs, the combination of big QAR data and deep learning will provide an important and effective approach to flight safety management. However, we have only tried a two-dimensional time series dataset as the input of the TFA-CAE model, which could be challenging when more complex data are provided; a more generic technical architecture for extracting flight features from variable-length time series data is therefore anticipated in future work. In addition, the evaluation is limited to a case study with QAR data collected from two specific airports, so further experiments and comparisons with more datasets and baseline techniques are required to generalize and refine the techniques proposed in this study. Finally, the automatic discovery of common flight patterns and the detection of anomalous flights or risks are two future topics that can enable better-targeted flight safety management.
Figure 1. Pictorial representation of an autoencoder network model.
Figure 2. Percentages of fatal accidents and onboard fatalities by phase of flight from 2008 to 2017 32.
Figure 3. Schematic diagram of the data sampling process during the landing phases.
Figure 4. Overview of the TFA module.
Figure 5. Overview of the time attention module.
Figure 6. Overview of the feature attention module.
Figure 7. The structure and parameters of the TFA-CAE model.
Figure 8. Comparison of the average loss values of the models on the test dataset.
Figure 10. Comparison of the average loss values of the TFA-CAE and FTA-CAE models.
Figure 11. Flight feature extraction results obtained during the landing phase: (a) features extracted by the TFA-CAE model; (b) features extracted by the FTA-CAE model.
Table 1. Details of the selected flight parameters.
| 5,702.2 | 2023-08-30T00:00:00.000 | [ "Computer Science" ] |